Painting the AI Risk Picture

As the use of Artificial Intelligence (AI) in industrial applications becomes a reality, we need to manage the new risks that come with it. But how does the risk picture change when we introduce AI?

 

What is AI risk?

AI risks cannot be viewed in isolation. When AI is introduced into systems or processes, it aims to provide value, reduce risks, or both. Hence, it changes existing risks and introduces completely new ones. By ‘AI risk picture’, we mean the overall landscape of risks associated with the introduction and integration of AI into systems or processes. This includes changes in existing risks and the emergence of new, AI-related risks. Managing AI risks involves both adapting current risk management approaches, such as barrier management, and establishing new risk mitigation strategies.

Fundamentally, risk is about real-world consequences for stakeholders and associated uncertainty. AI is neither an inherently good nor bad technology. Rather, the risks associated with AI arise from how it is integrated and utilized within specific systems – such as an autonomous ship, a wind farm, or a medical device. AI has applications across diverse domains, each with its own set of risks and risk management traditions. Isolating the risks unique to AI can be challenging, but, generally, AI risks can be understood as the system risks that are influenced by the introduction of AI.


There are also systems that can only be realized using AI; for such systems, the AI risks are the system risks originating from AI.

 

General AI risk sources

AI is not a monolithic technology; it encompasses a variety of tools and approaches, from deep learning models used in computer vision and natural language processing, to expert systems for diagnostics and statistical models for supply and demand forecasting. Despite these differences, there are specific challenges inherent to AI that must be addressed before introducing AI into a system. AI models often rely heavily on data that could be biased. Additionally, they may be frequently updated, hard to explain, can suddenly perform poorly in new situations, or may pose security or privacy challenges [3, 4]. We refer to such underlying causes as general AI risk sources.

To effectively assess the AI risks within a specific system, we must understand how these AI risk sources evolve into consequences at the system level – that is, what the real-world consequences are that impact stakeholders, and the uncertainties associated with these outcomes. AI risk sources do not cause harm by themselves, but they can influence system risks, such as injuries or loss of life from a traffic accident, malfunction of medical equipment, or unfair distribution of public benefits or essential services (see the MIT AI Risk Repository [1] and the AI Incident Database [2]).

 

Industry-specific AI risks

The use of AI in a specific industry – industrial AI – comes with particular risks. The figure below shows an example from maritime autonomy, where AI is used to operate a ship. The AI may assist with limited parts of the operation, such as detecting and tracking other objects at sea, or carry out more comprehensive tasks, such as route planning, navigation, or controlling the ship to follow a specific route. There may also be humans in the loop who oversee the AI and can intervene (i.e., the AI does not have full agency). Poor accuracy, biased data, low robustness, and security breaches are examples of general AI risk sources that, in this example, can lead to incorrect predictions or suboptimal recommendations or decisions from the AI model. If these issues are not corrected by the human in the loop, they could result in critical consequences, like a collision. Poor explainability and transparency are additional general AI risk sources that might contribute to poor human-machine interaction, thereby reducing the effectiveness of having a human in the loop.

Similarly, we can envision a healthcare setting where the prediction concerns a patient’s condition. If the prediction is incorrect and no human expert intervenes (and no other safety barriers are in place), it could result in misdiagnosis.
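
To make the human-in-the-loop barrier concrete, the sketch below shows one common safeguard pattern in Python: predictions whose confidence falls below a threshold are escalated to a human operator rather than acted on automatically. This is a minimal illustration, not a prescribed design; the Prediction type, the decide function, and the threshold value are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical output of an AI model, e.g. an object tracker on an
# autonomous ship or a diagnostic model in a healthcare setting.
@dataclass
class Prediction:
    label: str         # e.g. "vessel", "buoy", or a diagnosis code
    confidence: float  # model's estimated probability, 0.0-1.0

CONFIDENCE_THRESHOLD = 0.85  # illustrative value; must be set per application

def decide(prediction: Prediction) -> str:
    """Act autonomously only when the model is confident enough;
    otherwise escalate to the human in the loop."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return f"ACT: proceed based on '{prediction.label}'"
    # Low confidence: the human operator reviews before any action is
    # taken, serving as a barrier against incorrect predictions.
    return f"ESCALATE: request operator review of '{prediction.label}'"

print(decide(Prediction("vessel", 0.97)))  # ACT
print(decide(Prediction("vessel", 0.41)))  # ESCALATE
```

Such a barrier is only effective if the operator can actually judge the escalated cases – which is why poor explainability and transparency, as noted above, undermine the value of having a human in the loop.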

Understanding industrial AI risk means understanding how general AI risk sources create industry-specific risks. This requires both AI and domain knowledge. In particular, it involves understanding the technical performance of AI functions, the agency of these AI functions within the system, their interaction with other parts of the system, and how this ultimately affects various stakeholders, technically, legally, and in terms of responsibility.

Why should I care about AI risks specifically?

Naturally, one should care about managing risk in general, as is common practice in high-risk industries, but there are a few reasons why AI risks deserve special attention:

  • Many traditional risk management approaches are not suitable for AI, owing to the complexity and digital nature of AI-enabled systems.
  • There is a certain level of distrust in AI within society [5], and demonstrating to stakeholders that AI risks are taken seriously is currently a key part of business.
  • Compliance with new regulations specifically targeting AI and AI risks will be needed. In the EU, the AI Act entered into force on 1 August 2024, and similar regulations are in place or in the making in other parts of the world. Examples include the Biden Executive Order [6] in the US, South Korea's AI Basic Act, and China's Interim Measures for the Administration of Generative Artificial Intelligence Services (《生成式人工智能服务管理暂行办法》) [7].

Current risk management practices will have to be adapted and expanded to cover these AI-specific needs.

First steps towards managing industrial AI risks

To effectively manage AI risks, organizations must identify the specific risks associated with their AI systems and develop strategies to mitigate them.

Our research and experience from industrial projects have led us to the following recommendations:

  • Start by identifying what you are currently doing to manage your system risks. Will this be sufficient after AI is introduced into your system, or do you need something more to handle the relevant AI risk sources? (A minimal risk register sketch follows this list.)
  • Consider your industrial AI risk from a compliance, management, operational, and product performance perspective.
  • Focus your risk management on the sources of AI risk.
  • Consider how to manage AI risks in conjunction with other risks.
  • Assess whether your organization has the necessary competencies to manage the risks.
  • Set up a strategy to manage the risks.
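
As a concrete starting point for the first recommendation above, a simple risk register can make the link between general AI risk sources and system-level consequences explicit. The sketch below is a minimal illustration in Python; the AIRiskEntry fields, the example entries, and the owners are hypothetical and would need to be adapted to the organization.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    """One row of a hypothetical AI risk register."""
    risk_source: str          # general AI risk source, e.g. "biased data"
    system_consequence: str   # what the source means at system level
    existing_barriers: list[str] = field(default_factory=list)
    new_mitigations: list[str] = field(default_factory=list)
    owner: str = "unassigned" # organizational unit responsible

# Illustrative entries based on the maritime example above.
register = [
    AIRiskEntry(
        risk_source="low robustness in object detection",
        system_consequence="missed vessel leading to collision",
        existing_barriers=["officer on watch", "radar cross-check"],
        new_mitigations=["stress-testing in degraded visibility"],
        owner="product and performance",
    ),
    AIRiskEntry(
        risk_source="poor explainability",
        system_consequence="operator cannot judge AI recommendations",
        new_mitigations=["uncertainty display in operator interface"],
        owner="operations",
    ),
]

# A register like this exposes where existing barriers are thin and
# which organizational unit owns each new mitigation.
for entry in register:
    if not entry.existing_barriers:
        print(f"No current barrier: {entry.risk_source} -> {entry.system_consequence}")
```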

Since AI risks may originate from various sources, their management will generally need to involve several parts of the organization. Below is a schematic overview of how an organization can be set up to manage AI risks.


 

We find this setup to be a useful way to operationalize the AI risk management strategy, as these organizational units manage risks of similar nature and have the domain competence needed.

Compliance risks

Regulations are important external factors that shape the risks of AI systems, including (1) applicable legal requirements, and (2) policies, guidelines, and decisions from regulators that impact the interpretation or enforcement of legal requirements in the development and use of AI systems.

Regulatory compliance is the minimum requirement organizations must satisfy to mitigate the risks of developing and deploying AI systems. Many national and regional regulations impose mandatory requirements on specific aspects of AI systems, which makes compliance an unprecedented challenge.

For example, personal data protection regulations – such as the GDPR (General Data Protection Regulation) in the EU and the PIPL (Personal Information Protection Law) in China – apply to AI systems when they process personal data. The EU AI Act, meanwhile, is the first comprehensive AI regulation to have horizontal impacts on all kinds of AI systems.

In addition, guidelines and industrial standards for responsible AI are being published across different sectors, including healthcare and automotive, to regulate the development and use of AI systems. Organizations therefore need to take action to address compliance risks arising from emerging regulations.

Management risks

Risks may also arise from insufficient management, such as a lack of strategies, policies, processes, roles, and responsibilities within organizations. Hence, organizations must ensure they have the necessary tools, processes, competence, and capabilities in place to utilize AI technologies to add value to their work. This, in turn, requires establishing, implementing, maintaining, and continually improving an AI management system within their context. The first certifiable AI management system standard, ‘ISO/IEC 42001 Information technology – Artificial intelligence – Management system’ [8], released in 2023, provides guidance for organizations to build a sufficient management framework and reduce organizational governance and management risks.

Operational risks

Risks may arise from inadequate monitoring and evaluation techniques, which are crucial for ensuring the integrity of AI systems throughout their operational lifecycle. Additionally, these risks can arise from the interactions of AI systems with other systems, as well as their interactions with humans and the environment. Effective risk management strategies must account for these complexities and ensure that AI systems are safely and reliably integrated into operational processes.

It is essential to develop robust techniques and metrics to assess the uncertainty stemming from input variation and model response. This enables tracking of AI system performance in real time or at regular intervals. Regular performance assessments can identify high-uncertainty scenarios that may lead to operational disruptions or suboptimal outcomes. Furthermore, establishing a feedback loop is crucial for the iterative refinement of AI systems. Feedback can be gathered from monitoring outputs, user reports, or external audits, allowing organizations to identify areas for improvement.
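
One way to operationalize such monitoring is to track a rolling statistic of per-prediction uncertainty and raise a flag when it exceeds an agreed limit. The sketch below is a minimal illustration; the window size, alert limit, and choice of uncertainty measure are hypothetical and must be set per system.

```python
from collections import deque

WINDOW = 200        # number of recent predictions to track (illustrative)
ALERT_LIMIT = 0.25  # rolling mean above this triggers review (illustrative)

class UncertaintyMonitor:
    """Tracks a rolling mean of per-prediction uncertainty
    (e.g. 1 - max class probability) and flags drift."""

    def __init__(self) -> None:
        self.values: deque[float] = deque(maxlen=WINDOW)

    def record(self, uncertainty: float) -> bool:
        """Record one prediction's uncertainty; return True if the
        rolling mean exceeds the alert limit."""
        self.values.append(uncertainty)
        mean = sum(self.values) / len(self.values)
        return mean > ALERT_LIMIT

monitor = UncertaintyMonitor()
for u in [0.05, 0.10, 0.40, 0.55, 0.60]:  # simulated prediction stream
    if monitor.record(u):
        # In practice: log the event, notify operators, and feed the
        # flagged cases back into the improvement loop described above.
        print(f"High-uncertainty period detected (latest={u:.2f})")
```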

Transparency is also vital in managing operational risks. Stakeholders – including users, regulators, and internal teams – must have insight into how AI systems function, the data they utilize, and their decision-making processes. This transparency helps to mitigate operational risks associated with miscommunication or misunderstandings.

Product and performance risks

Risks associated with specific AI products mainly concern the technology’s – that is, the product’s – performance. These risks are managed by qualifying the technology before it is put into operation, covering key areas such as data quality, robustness, privacy, fairness, accountability, transparency, explainability, security, and human-machine interaction. This also includes assessing how such AI risks manifest in the system in which the AI product is used.

AI providers should, from the earliest stages of product development, seek to understand how risks associated with their AI technology could manifest as system-level risks in various industrial contexts; the same focus should be maintained from day one when integrating an AI product into a system. Specifically, this includes implementing data quality governance frameworks to ensure accuracy and completeness, as poor data can lead to flawed models. AI systems should demonstrate robustness through stress-testing to withstand diverse conditions and attacks. Privacy protections are essential, especially under regulations like the GDPR; strong data encryption and anonymization are important for legal compliance and fostering user trust.
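
As an illustration of the stress-testing mentioned above, the sketch below compares a model's accuracy on clean test inputs with its accuracy under additive input noise – one simple robustness probe among many. The model interface (a scikit-learn-style predict method), the noise level, and the acceptance limit are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def accuracy(model, X: np.ndarray, y: np.ndarray) -> float:
    """Fraction of correct predictions; `model` is any object with a
    scikit-learn-style predict(X) method (a placeholder here)."""
    return float(np.mean(model.predict(X) == y))

def robustness_probe(model, X: np.ndarray, y: np.ndarray,
                     noise_scale: float = 0.1) -> float:
    """Accuracy drop under additive Gaussian input noise.
    A large drop signals low robustness to input variation."""
    X_noisy = X + rng.normal(0.0, noise_scale, size=X.shape)
    return accuracy(model, X, y) - accuracy(model, X_noisy, y)

# Usage (assuming `model`, `X_test`, `y_test` exist):
# drop = robustness_probe(model, X_test, y_test, noise_scale=0.1)
# if drop > 0.05:  # acceptance limit agreed during qualification
#     print(f"Robustness concern: accuracy drops by {drop:.1%} under noise")
```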

Additionally, organizations must address algorithmic biases to prevent unfair treatment of different demographics. Establishing clear accountability is essential to ensure stakeholders understand who is responsible for AI outputs and decisions. Explainability and transparency are vital for user acceptance, necessitating interpretability through documentation, visualizations, and user-friendly interfaces. Cybersecurity measures should also be implemented to safeguard against attacks. Finally, effective human-machine interaction improves usability and reduces risks associated with misunderstandings.

 

Striking the right balance

AI is developing fast and increasingly moving into the industrial sector. This creates new opportunities for industrial value creation but also introduces new AI risks. The AI risk picture affects several parts of the industrial organization, necessitating the involvement of all relevant units in managing these risks. Effective AI risk management must be rooted in the organization’s current risk management practices. However, these practices need to be adapted and expanded to properly manage the novel AI risks. By doing so, the industrial sector can strike the right balance between creating value with AI and managing associated risks.



[1] The MIT AI Risk Repository

[2] The Artificial Intelligence Incident Database

[3] DNV-RP-0671 Assurance of AI-enabled systems

[4] GOV.UK, Introduction to AI assurance

[5] Gillespie, N., Lockey, S., Curtis, C., Pool, J., & Akbari, A. (2023). Trust in Artificial Intelligence: A Global Study. The University of Queensland and KPMG Australia. doi: 10.14264/00d3c94

[6] Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, The White House

[7] China’s Interim Measures for the Administration of Generative Artificial Intelligence Services

[8] ISO/IEC 42001:2023, Information technology – Artificial intelligence – Management system