Optimizing Human-AI Collaboration for Trustworthy and Responsible Industrial AI
No matter how sophisticated and safe an AI system is, it can realize its potential only if humans are able and willing to collaborate with it as intended.
Late one winter evening, Anna, a senior operator at an electricity grid company, was monitoring the grid from the control centre as a severe snowstorm swept across the region. High winds and heavy snow increased heating demand while threatening the stability of several transmission lines, especially in remote areas. Just as she was about to take her break, GridAssistant, the AI system monitoring the network, flagged a line segment carrying an unusually heavy load and predicted an increased likelihood of faults if left unaddressed.
Using this early insight, Anna quickly reviewed GridAssistant’s recommended action: redistributing load across neighbouring lines to minimize stress on the affected segment. Drawing on years of experience, she validated the AI’s suggestion and authorized the load redistribution within seconds, which GridAssistant implemented almost instantly. This collaborative approach kept the system stable and prevented a possible fault. As the storm intensified, Anna knew this human-AI partnership was essential for managing the unpredictable challenges of severe weather and keeping the community powered.
This fictitious scenario illustrates how humans and AI systems can work together, amplifying each other’s strengths and compensating for each other’s limitations. Human-AI collaboration refers to the cooperative interaction between humans and AI systems to achieve shared goals. In such a collaboration, decision-making authority is shared between humans and AI systems (Figure 1). While a precise 50-50 split of authority may not be feasible, setting up AI systems as collaborators generally involves granting them some decision-making authority, even if humans retain the final say.
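To make this concrete, the sketch below (in Python, with hypothetical names; this is not an actual GridAssistant interface) shows one way an approve-then-execute loop can share authority: the AI proposes an action with its rationale and confidence, and a human keeps the final say:

```python
from dataclasses import dataclass

# Hypothetical names for illustration only; not a real GridAssistant API.

@dataclass
class Recommendation:
    """A proposed action from the AI system, with its supporting evidence."""
    action: str        # e.g. "redistribute load away from segment 17"
    rationale: str     # the system's explanation of why it proposes this
    confidence: float  # the system's own confidence estimate, 0..1

def execute(action: str) -> str:
    """Stand-in for the control system carrying out an approved action."""
    return f"EXECUTED: {action}"

def handle(rec: Recommendation, operator_approves) -> str:
    """Shared-authority loop: the AI proposes, the human decides, the AI executes."""
    if operator_approves(rec):          # the human retains the final say
        return execute(rec.action)
    return f"REJECTED: {rec.action}"    # rejections can feed later review and improvement

rec = Recommendation(
    action="redistribute load away from segment 17",
    rationale="forecast fault risk under storm loading",
    confidence=0.92,
)
print(handle(rec, lambda r: True))  # the lambda stands in for the operator's judgment
```

The key design choice is that the execution path is reachable only through an explicit human approval.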
Collaboration implies a two-way street rather than an assumption that humans are always right and AI systems are not yet trustworthy. Amid widespread concern about the risks of AI systems, we often forget that humans, too, have limitations. Human judgment is inherently susceptible to both bias and noise [1]. Bias represents predictable deviations from accurate judgment, while noise encompasses the often unpredictable variability caused by factors such as fatigue, fluctuating moods, team dynamics, and even physical states like low blood sugar. These influences can lead to inconsistencies in decision-making, including overconfidence, potentially clouding critical assessments.
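The two error sources also combine in a quantifiable way: in the error decomposition popularized by Kahneman et al. [1], a judgment’s overall error (mean squared error) equals bias squared plus noise squared. The toy simulation below, with purely illustrative numbers, shows how a consistently biased judge and an unbiased but noisy judge can be equally wrong overall:

```python
import random

random.seed(42)
TRUE_VALUE = 100.0  # the value a perfectly accurate judgment would give

def mean_squared_error(bias: float, noise_sd: float, n: int = 100_000) -> float:
    """MSE of n simulated judgments: each is truth + systematic bias + random noise."""
    judgments = (TRUE_VALUE + bias + random.gauss(0.0, noise_sd) for _ in range(n))
    return sum((j - TRUE_VALUE) ** 2 for j in judgments) / n

# Overall error decomposes as bias^2 + noise^2, so very different judges
# can be equally wrong overall:
print(round(mean_squared_error(bias=5.0, noise_sd=0.0)))  # ~25: consistently biased judge
print(round(mean_squared_error(bias=0.0, noise_sd=5.0)))  # ~25: unbiased but noisy judge
print(round(mean_squared_error(bias=3.0, noise_sd=4.0)))  # ~25: a bit of both (9 + 16)
```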
This raises a crucial question: how can both AI systems and human experts be trusted to manage critical tasks together when the stakes are high? The answer lies in optimizing human-AI collaboration to foster trust and accountability, treating AI systems as team members. By combining the diverse competencies of humans and technologies, such collaboration brings a more comprehensive skill set to bear, enabling shared objectives to be achieved more effectively.
Figure 1. The work scale between humans and AI
How to optimize human-AI collaboration
Optimizing human-AI collaboration involves enhancing the strengths and capabilities of both humans and AI systems while minimizing their individual limitations. When humans and AI work together, humans can provide critical thinking, intuition, and contextual understanding, while AI offers processing power, data-driven insights, and consistency. The goal is to combine these strengths for more effective and reliable outcomes while addressing areas where each falls short on its own.
Figure 2. High-level overview of factors influencing human-AI collaboration
DNV’s comprehensive research into human-AI collaboration has pinpointed several influencing factors to focus on [2] (Figure 2).
1. Understand who the users are
Ideally, AI systems should be designed according to their target users’ characteristics and profiles:
· Are the users technology champions or laggards?
· How well do the users understand the capabilities and limitations of AI systems, particularly their real-world impacts and consequences?
· How realistic are the users’ expectations when working with AI systems?
Such considerations, together with user involvement, are indispensable in the development and design of AI systems, as they foster optimal collaboration once the systems are deployed. Relevant user feedback should also be incorporated continuously to further improve AI systems post-deployment.
2. Personalize the AI interface and features
The AI interface is where humans and AI collaborate and where trust in AI systems can be fostered. It is paramount to personalize AI design features and the user interface according to users’ characteristics, needs, and preferences. High usability and usefulness often result from a fit-for-purpose interface and feature set.
3. Explore how AI systems can be used
Users may interact with AI systems in ways developers did not initially envision. By exploring alternative uses and identifying the best and worst situations for applying AI, we can ensure appropriate use and help users gauge how much to rely on AI outputs or recommendations. Typically, the effectiveness of AI systems is higher in simple, low-risk scenarios and lower in complex or high-risk ones.
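One way to operationalize this is to gate how much weight an AI output carries by the assessed risk of the situation. The sketch below uses assumed risk categories and thresholds; it is an illustration, not a prescribed policy:

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

def reliance_mode(risk: Risk, model_confidence: float) -> str:
    """Map scenario risk and model confidence to a reliance level.
    Thresholds are illustrative; in practice they would be set per use case
    with domain experts and revisited as operational evidence accumulates."""
    if risk is Risk.HIGH:
        return "advisory-only"   # high stakes: AI output informs, humans decide and act
    if risk is Risk.LOW and model_confidence >= 0.90:
        return "auto-apply"      # simple, low-stakes, high-confidence: AI may act directly
    return "human-approve"       # everything in between: AI recommends, a human authorizes

print(reliance_mode(Risk.LOW, 0.95))   # auto-apply
print(reliance_mode(Risk.HIGH, 0.95))  # advisory-only
```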
4. Close the gap between explainability and interpretability
Explainability refers to an AI system’s ability to explain the rationale behind its decisions to users efficiently, effectively, and in a timely manner. Interpretability, on the other hand, means that users understand what is explained correctly, accurately, and quickly. Both are important for optimizing human-AI collaboration, because a gap between what is explained and what is understood can lead to ineffective collaboration or, worse, disastrous outcomes.
5. Personalize the presentation of AI output
AI developers should collaborate with users to determine what AI output information needs to be presented, in what format, when, in what order, and how. Information such as the AI’s certainty, validity, accuracy, reliability, and confidence levels can help users evaluate the output. Understanding the potential failure points and limitations of AI systems can be even more informative, helping users avoid overreliance and bias when interpreting the output.
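As an illustration, such information can be bundled into a structured payload that the interface then surfaces in the agreed order and format. The field names below are assumptions for the sketch, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class PresentedOutput:
    """The information shown alongside each recommendation, agreed with users up front."""
    recommendation: str
    confidence: float        # the model's confidence estimate, 0..1
    data_age_seconds: int    # freshness of the newest input data
    validity_scope: str      # conditions under which the model was validated
    known_limitations: list = field(default_factory=list)  # failure points to watch for

out = PresentedOutput(
    recommendation="Redistribute load away from segment 17",
    confidence=0.92,
    data_age_seconds=30,
    validity_scope="Validated for storm loading up to 110% of rated line capacity",
    known_limitations=["Sensor dropouts in remote segments degrade the forecast"],
)
print(out)
```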
6. Empower humans through fit-for-purpose infrastructure
Empowering humans in AI collaborations is not just about giving them tools but ensuring they can effectively act on AI-driven insights. For example, as highlighted in our Energy Transition Outlook – New Power Systems [3], legacy digital systems often constrain human operators, preventing real-time responses and limiting the potential of AI. This challenge reflects a broader theme: optimizing collaboration requires adjustments across the AI lifecycle, from design to deployment, tailored to specific contexts. The principle of human-centred AI emphasizes that technology and infrastructure should adapt to human needs, not the other way around. While the ideal is seamless human-AI synergy, we must also acknowledge and address practical barriers, like outdated or unfit infrastructure, that hinder progress towards trustworthy and responsible industrial AI implementation.
7. Foster user trust
Fostering user trust in AI-enabled systems is especially critical for human-AI collaboration and can be achieved by implementing mechanisms to build, sustain, and restore trust [4]. This includes, for example, ensuring robust data protection, offering responsive and solution-oriented technical support, and maintaining transparent communication with system operators and stakeholders. Designing AI systems with features that align with human workflows and decision-making processes can further enhance trust. Importantly, trust in industrial AI is dynamic and can increase over time through repeated interactions. These interactions allow operators to refine their expectations, become more comfortable with the system’s behaviours, and build confidence in its reliability and effectiveness [5], ultimately supporting more seamless integration into operations.
One size does not fit all
Relying solely on human judgment to verify AI output may not be ideal, as humans are also prone to error. Similarly, positioning humans as the ‘last line of defence’ to prevent AI failures is both risky and ethically questionable. Such an approach could inadvertently increase the chances of human error, particularly in high-stress, time-critical situations.
Instead, building trustworthy and responsible industrial AI involves designing and testing AI systems for specific use cases, user populations, and well-defined goals. This ensures that the strengths of both AI and humans are maximized, while their weaknesses are minimized. These use cases can also serve as valuable testbeds to simulate time-critical scenarios, improving both the AI’s performance and the human operator’s abilities – individually and in collaboration – prior to scaling up.
Finally, like any form of collaboration, human-AI partnerships need time to mature. Building trust in AI systems requires humans to develop accurate mental models, manage expectations, and fully understand the AI’s limitations and capabilities. Only then can humans and AI systems properly calibrate their teamwork, ensuring more effective and reliable interaction over time.
Learn more about DNV’s insights and services
Please visit our AI Insights pages for more advice on how to build trust and compliance into AI-enabled systems, and our Digital Trust pages to learn more about how your organization can manage digital risk and complexity with confidence.
Reference list
[1] Kahneman, D., Sibony, O., and Sunstein, C.R., Noise: A Flaw in Human Judgment. 2021: Hachette UK.
[2] Bach, T.A., et al., Unpacking Human-AI Interaction in Safety-Critical Industries: A Systematic Literature Review. IEEE Access, 2024.
[3] DNV, Energy Transition Outlook – New Power Systems. 2024.
[4] Bach, T.A., et al., A systematic literature review of user trust in AI-enabled systems: An HCI perspective. International Journal of Human–Computer Interaction, 2022: p. 1-16.
[5] Glomsrud, J.A. and Bach, T.A., The Ecosystem of Trust (EoT): Enabling effective deployment of autonomous systems through collaborative and trusted ecosystems. arXiv preprint arXiv:2312.00629, 2023.