AI is not worth the investment if people cannot use it
AI doesn’t operate alone but in a complex context that includes both individuals and organizations – people have an impact on AI systems and are impacted by them.
Learn why it’s so crucial to understand the human factors when implementing AI, and how organizations can successfully manage change to adopt AI in high-risk environments.
This episode answers key questions such as:
Why is human-AI collaboration so important when implementing AI in high-risk contexts?
How can people and organizations prepare for AI adoption?
Transcript:
MARTINE HANNEVIK: Welcome to the Trust in Industrial AI video series, where we explore how to implement AI with speed and confidence in safety-critical industries. AI doesn't operate alone, but in a complex context that includes both individuals and organizations. People impact AI systems and are impacted by them. And today's episode is all about the people.
I'm your host, Martine, and today I'm joined by our human factors and change experts, Koen and Karen. Welcome to both of you.
KOEN VAN DE MERWE: It's great to be here.
KAREN STEINFELD: Thanks for having us.
MARTINE HANNEVIK: Good. And I think we'll start with the basics. So why is it so important to consider the human factors when implementing AI in high-risk contexts?
KOEN VAN DE MERWE: Well, human factors is about making systems work for people, and especially in high-risk industrial applications, where operators perform safety-critical tasks, I think it's important that we develop these systems so that they support humans in their work. Despite the rapid developments we see in applications in general, and which we are now starting to see in industrial settings, these systems have inherent limitations, and the world is very complex. Therefore, I think the human role is as important as ever, and we should put the effort into developing systems that support human decision making.
MARTINE HANNEVIK: And if we don't consider the human factors, then people might not trust or use the systems, right?
KOEN VAN DE MERWE: Yeah, exactly. And I think it's important to highlight that when we talk about trust, it's not about maximizing trust. When we develop these systems, and the relationship that humans have with them, humans need to learn to calibrate their trust. Given the inherent limitations of these systems, that means humans need to be able to recognize when the system works well and where its limitations are, so they know when to be cautious. It's not about maximizing trust, it's about calibrating trust, and that, I think, is an important factor in human oversight.
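To make the idea of calibrated trust concrete, here is a minimal Python sketch of how an operator-support tool might track an AI system's observed reliability per operating condition and advise caution where the system is weak. The class, condition names, and thresholds are illustrative assumptions for this series, not a DNV tool or specification.

```python
# Sketch of "calibrated trust": instead of trusting an AI system
# uniformly, track its observed reliability per operating condition
# and flag when extra human caution is warranted. The 0.95 threshold
# and condition labels are illustrative assumptions.
from collections import defaultdict

class TrustCalibrator:
    def __init__(self, caution_threshold=0.95, min_samples=20):
        self.caution_threshold = caution_threshold
        self.min_samples = min_samples
        self.outcomes = defaultdict(list)  # condition -> [True, False, ...]

    def record(self, condition, ai_was_correct):
        """Log whether the AI's output was correct under a given
        condition (e.g. 'clear_weather', 'fog', 'dense_traffic')."""
        self.outcomes[condition].append(ai_was_correct)

    def advice(self, condition):
        """Return operator guidance: normal oversight, heightened
        caution, or 'unproven' when there is too little evidence."""
        history = self.outcomes[condition]
        if len(history) < self.min_samples:
            return "unproven: verify manually"
        reliability = sum(history) / len(history)
        if reliability >= self.caution_threshold:
            return f"reliable here ({reliability:.0%}): normal oversight"
        return f"weak here ({reliability:.0%}): heightened caution"
```

In this sketch, trust is something the operator adjusts per operating condition rather than a single on/off switch, which mirrors the point about recognizing where a system works well and where its limitations lie.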
MARTINE HANNEVIK: That's an interesting and a little bit different perspective from what we've heard so far. Can you give an example of this?
KOEN VAN DE MERWE: Let's take an example from maritime, where I do most of my work. We have developments towards autonomous shipping. These systems employ image recognition to detect objects in the water, land, and other features in the wider environment, and they use algorithms for path planning and for predicting where other ships are going. There might be limitations in the data set the system is trained on, and the maritime traffic situation can be quite complex, which means that predicting where other ships are going can be quite difficult. Sometimes a situation can only be resolved if ships agree on how to solve it, rather than by strictly following the rules. For example, there is a sentence in the collision regulations that says you have to behave according to good seamanship. What does that mean? How does the system know what good seamanship is? It's those kinds of things that make us think the role of the human is still quite important, even as we move towards autonomous shipping. Sometimes an actual human being needs to intervene.
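To make the path-prediction challenge concrete, here is a minimal Python sketch of a closest-point-of-approach (CPA) calculation, the kind of geometric building block collision-avoidance logic typically rests on. It assumes both ships hold constant course and speed, which is exactly the assumption that breaks down when ships must negotiate; the function and example values are illustrative, not taken from any DNV system.

```python
# Closest point of approach (CPA) between own ship and a target ship,
# assuming both hold constant course and speed. Positions in nautical
# miles, velocities in knots, so the returned time is in hours.
import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Return (time_to_cpa, distance_at_cpa) for two constant-velocity tracks."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]  # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]  # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                      # identical velocities: range never changes
        return 0.0, math.hypot(rx, ry)
    t = -(rx * vx + ry * vy) / v2      # time at which separation is minimal
    t = max(t, 0.0)                    # CPA already passed -> use current range
    return t, math.hypot(rx + vx * t, ry + vy * t)

# Example: a crossing target 5 NM to the east, converging.
t_cpa, d_cpa = cpa((0, 0), (0, 10), (5, 0), (-8, 6))
print(f"CPA in {t_cpa:.2f} h at {d_cpa:.2f} NM")
```

In this framing, the geometry is the easy part. Deciding whether the predicted passing distance is acceptable, or whether the other ship will actually hold its course, is where human judgment and good seamanship still enter.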
MARTINE HANNEVIK: Yeah, yeah. And how can organizations work with human factors while they're developing AI systems?
KOEN VAN DE MERWE: Well, if we go back to this definition of human factors, making systems that work for humans, it means we should ensure that humans are at the center of the development: a human-centric approach rather than a technology-centric approach. This basically means that we ask a number of questions. What are the goals we are trying to achieve with this human-AI system? What are the main decisions that need to be made? What kind of information does an operator need in order to make good operational decisions regarding the use of that system? What are the tasks that will be performed? What are the challenges, and how can they be managed? Taking this human-centric approach is important, but it's quite challenging, because if we task a person to oversee a system and that system is really complex, and these AI-enabled systems are rather complex, the oversight task is complex in itself. So we need to support humans in that, and a multidisciplinary, human-centric approach is a good way of achieving this.
MARTINE HANNEVIK: But even if we take this human-centric approach, it still means a big change for people. It means changing the way they work. They need to trust the new system and interact with the new system, and at the same time, we know that a lot of people are quite skeptical of change. How do we deal with that, Karen?
KAREN STEINFELD: I think they have good reason to be skeptical. It turns out that up to 80% of AI projects fail, due to misaligned leadership, data quality issues, and poor collaboration between teams. And it gets even worse, because 62% of the population are uncomfortable with change in the first place. That makes successful AI adoption much harder. I think managers and leaders need to take into consideration that change management is just as crucial as the technical development when they look at solutions for their company. And in safety-critical industries, you also need to look at what kind of processes need to change and what skill sets need to be reinforced, and to create a culture that gives you the support to get adoption fully running.
MARTINE HANNEVIK: So what specifically can organizations do to succeed with change management?
KAREN STEINFELD: It's a million-dollar question, and I think a lot of people are scratching their heads. But we do see that organizations with effective change management are six times more likely to succeed with any project; to be honest, it's not just AI, it's any technical implementation. Leaders need to create an environment of psychological safety and trust, so that, as you say, employees can address their concerns, be open, and ask questions rather than take everything for granted. You need to have a dialogue about the risks, the benefits, and the challenges of the AI system that's in development or deployment. Leaders also need to be the liaison, the people who advocate for the change. They need to understand what they are asking of their organization: communicating the benefits, giving people time to reskill, and giving them the opportunity to train well in advance, before they start operating the systems. We often see change management added as an afterthought. We see, oh, adoption isn't going as we thought, maybe we should put some change management into it. But not doing it from the start can be quite costly, because you won't get the return on the technical investment you've made unless you have people who are willing to use the systems. If people are unwilling or unable to use the systems, your business case is going to crash.
MARTINE HANNEVIK: So from the very beginning, you actually need to think about the human-centric approach, but also about the change management side of things, to get your people on board.
KAREN STEINFELD: Exactly. The technical part is what we tend to focus on, because that's easy to control. Human factors and the psychology around people are much harder, so you need to give it more time. While you are thinking about how you want this future AI solution to be used within your critical operations, you need to give the organization time to mature with the investment you're about to make.
MARTINE HANNEVIK: Great. Any final thoughts from you, Koen?
KOEN VAN DE MERWE: Well, I think it's important to highlight that it's not about choosing one over the other. I've tried to make the case that humans play an important role, but so does technology. Ideally, where we're going is that AI systems, or systems in general, are not just tools that you use, but become a kind of teammate, where humans and AI systems are able to collaborate. I'm not sure exactly how this will play out, but that's the theory. And as we discussed earlier, just as in teams consisting only of humans, trust will play an important role in human-AI teams. If we're able to establish that, then I think the sum will be greater than its individual parts.
MARTINE HANNEVIK: Great. So think about the AI system as your teammate.
KOEN VAN DE MERWE: Exactly.
MARTINE HANNEVIK: Yeah, I think that's great. We'll end with that great advice. So, thank you both for all these insights about people. And thank you to our audience for tuning in. If you have any questions, or want to learn more about how DNV can support you with the safe application of industrial AI, please visit our website.
When it comes to trusting AI systems, it’s not about maximizing trust; it’s about calibrating it — knowing when to trust AI and when not to.
Dr Koen van de Merwe
Principal Researcher, Human Factors
DNV
About the speakers
Karen Steinfeld, People and Change Manager, DNV
Karen is a seasoned change manager with 14 years of experience in the IT industry, specializing in cultural transformations during mergers. She is passionate about culture, people and driving meaningful change within organizations. She thrives on helping businesses navigate the complexities of transformation, guiding teams from point A to B while embracing the uncertainty that often accompanies change. With a creative and engaging approach, she excels at finding innovative solutions to cultural and leadership challenges. Karen is energized by new perspectives and is committed to fostering modern, sustainable and progressive ways of working that empower both individuals and organizations to thrive.
Dr Koen van de Merwe, Principal Researcher, Human Factors, DNV
Koen has 18+ years of experience from working as an applied researcher and consultant in the aviation, oil & gas, and maritime industries. As a cognitive psychologist, he is passionate about the interplay between humans and systems in complex operations and safety-critical systems. He believes that humans uniquely contribute to system performance and resilience by drawing on a wealth of experience, knowledge and skills to solve complex problems. In DNV, he strives to develop processes and requirements that integrate and apply human factors principles in the development of autonomous solutions.
Martine Hannevik, Head of Innovation and Portfolio Management, DNV
The video series is hosted by Martine Hannevik.
Martine leads the innovation portfolio at Digital Solutions in DNV, focusing on developing future-oriented products and services in sustainability, AI and digital assurance. Her work lies at the intersection of strategy, innovation and digital transformation.
AI can enhance safety, operational efficiency, innovation, and sustainability in industries such as maritime, energy, and healthcare. However, organizations must balance risk and reward. By implementing AI responsibly, you can fully exploit its potential, even in high-risk contexts.
Combining industry domain knowledge with deep digital expertise, DNV is dedicated to supporting industries in the safe and responsible use of industrial AI.