Managing AI risks with speed and confidence when stakes are high
Learn why it is important to carefully manage risks when you work with AI in industries such as healthcare or energy. Get practical advice on how to mitigate risks in an efficient way.
This episode answers key questions such as:
Why is it so important to think about risk when using AI in high-risk contexts such as healthcare and energy?
What are the key risks of using AI and how are they mitigated?
Transcript:
MARTINE HANNEVIK Welcome to the Trust in Industrial AI video series, where we explore how to implement AI with both speed and confidence in safety-critical industries.
In today's episode, we'll explore why it's so important to carefully manage risk when you work with AI in industries such as healthcare and energy. I'm your host, Martine, and today I'm joined by Helga and Christian. Welcome to both of you.
HELGA M. BRØGGER I'm very happy to be here.
CHRISTIAN MARKUSSEN Yeah, thanks for having us.
MARTINE HANNEVIK Great. So then we'll start with a question to you, Helga. You actually used to be a medical doctor before diving into the field of AI. From your perspective, why is it so important to think about risk when using AI in a setting like healthcare?
HELGA M. BRØGGER So the healthcare sector faces a lot of challenges right now. There's a worker shortage, there are budget cuts, and of course you have the demographic change of an increasingly ageing population, what we call the silver tsunami. Better use of health data, enhanced by AI, is a very promising solution: you can do better and faster diagnostics and you can solve a lot of logistical issues. But there is a really, really narrow margin for error in healthcare, and that necessitates really careful use of AI. And there are challenges to using AI in healthcare, such as bias and interpretability, with a lot of legacy systems in the healthcare sector. And of course, there are ethical elements to consider. Transparency and explainability of the AI models' output will be crucial to earn the trust of healthcare professionals and patients. So if you don't manage the risk properly, you might end up with a product that people don't trust, and they won't use it.
MARTINE HANNEVIK Yeah, that's true. And Christian, you come from the oil and gas industry. What's it like in oil and gas compared to healthcare?
CHRISTIAN MARKUSSEN Well, AI is expected to have a lot of potential and to be adopted widely in the years to come. On the one hand, it's expected to enhance safety and efficiency. But the flip side is that it can also contribute to accidents if it doesn't behave as one expects. So the authorities have a strong focus on responsible implementation of AI within the industry, to ensure that it does not have a negative impact on safety and the environment. So we really have to manage these risks carefully.
MARTINE HANNEVIK Can you give some examples of typical risks in the healthcare industry?
HELGA M. BRØGGER The most obvious one is that the model is trained on the wrong data set. Size, age and gender matter in healthcare: you design some products to be suitable for small children and other products for adults. It's the same with AI. If the data that an AI model is trained on doesn't look like the patient population you will use the model on, it will not give accurate predictions. That's the main risk: not having the right data.
MARTINE HANNEVIK That's interesting. Are there any similar risks in the oil and gas industry?
CHRISTIAN MARKUSSEN Yeah, similar to what Helga mentioned, we need the right data, and we need quite a bit of it. And we do have a lot of data, but most of it covers normal operating conditions, the nominal states, while we're quite often looking at the anomalies or the edge cases, which are of more interest. And there we don't have a lot of data. So the question is how we take what we have and train the AI in a smart way, so that we can operate within the limits of what the AI is trained for.
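Christian's point about operating within the limits of what the AI is trained for can be sketched as a simple training-envelope guard: record the range of each feature seen during training, and flag new inputs that fall outside it. This is a minimal illustration only, not a DNV method; the feature names and the `training_envelope`/`in_envelope` helpers are hypothetical.

```python
import numpy as np

def training_envelope(X):
    """Record the per-feature min/max range seen in the training data."""
    return X.min(axis=0), X.max(axis=0)

def in_envelope(x, lo, hi, margin=0.0):
    """Return True if a new sample lies within the range the model was trained on."""
    return bool(np.all(x >= lo - margin) and np.all(x <= hi + margin))

# Hypothetical training records: two features, e.g. pressure and flow rate.
X_train = np.array([
    [50.0, 2000.0],
    [80.0, 3500.0],
    [65.0, 2800.0],
])
lo, hi = training_envelope(X_train)

# A sample inside the envelope can be scored; one outside should be flagged
# for human review rather than silently predicted on.
sample_ok = np.array([60.0, 2500.0])
sample_odd = np.array([120.0, 2500.0])
```

In practice one would use a richer out-of-distribution check than a per-feature range, but the principle of refusing to predict outside the trained regime is the same.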
MARTINE HANNEVIK So both of you actually mentioned a lot of risks related to data, but are there any other risks that we need to consider?
CHRISTIAN MARKUSSEN Yes, we often see that AI is being developed by data scientists and AI professionals, but the domain knowledge is often not well enough represented, so you don't actually understand how the technology is being used: the scenarios, the physics, what's going on in the domain. So it's important to have a cross-discipline team involved to make sure that you capture the physics and everything else going on that you need to understand and control.
MARTINE HANNEVIK Yeah, that's a great point. So it's not enough to just understand the risk of the AI technology; you actually need to understand the industry, the context and the system it operates within, right?
CHRISTIAN MARKUSSEN That's right.
MARTINE HANNEVIK Yeah. Great. So we know that there are quite a lot of risks related to using AI, but how do we then manage these risks?
HELGA M. BRØGGER One important thing to address, again, is the data. There is a lack of available high-quality data sets to train on, and a promising solution to this is synthetic data. The DNV research group is working a lot with this, and we were recently assigned a major role in a European-funded research project called Synthia. Synthia aims at accelerating AI adoption in healthcare, and in Synthia we will use generative AI to create synthetic data that mimics patient data.
MARTINE HANNEVIK So you actually use AI then to create synthetic data that you train the AI models on?
HELGA M. BRØGGER Yes. And through that process, we will also learn more about where in the AI development life cycle it's good to use synthetic data, and for what purposes. So we'll get a better understanding of synthetic data in this field.
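As a rough illustration of the idea Helga describes, and emphatically not the Synthia project's actual methodology (which uses generative AI), one can mimic tabular patient data by fitting a simple statistical model to real records and sampling new ones from it. The toy features (age and blood pressure) and the helper names below are hypothetical.

```python
import numpy as np

def fit_gaussian_model(real):
    """Fit a multivariate Gaussian to real tabular data (rows = patients)."""
    mean = real.mean(axis=0)
    cov = np.cov(real, rowvar=False)
    return mean, cov

def sample_synthetic(mean, cov, n, seed=0):
    """Draw synthetic records that reproduce the real data's means and correlations."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n)

# Toy "patient" data: age and systolic blood pressure, with a built-in correlation.
rng = np.random.default_rng(42)
age = rng.normal(60, 10, 500)
bp = 100 + 0.5 * age + rng.normal(0, 5, 500)
real = np.column_stack([age, bp])

mean, cov = fit_gaussian_model(real)
synthetic = sample_synthetic(mean, cov, 500)
```

The synthetic records preserve the statistical structure (means, spread, age-pressure correlation) without copying any individual patient, which is the property that makes synthetic data attractive for training where real data is scarce or sensitive.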
MARTINE HANNEVIK Interesting. And is this similar within oil and gas?
CHRISTIAN MARKUSSEN Yes, and I think looking at just the AI model is insufficient. You have to look at the system where the AI is incorporated, and you need to consider not just the digital part of the system but also the physical representation of the industrial system, for instance a drilling system and all the equipment that goes into it. But not only that, you also need to consider the human factors side of things. How is the system going to communicate uncertainty to the users so that they have the right situational awareness? And how is the organization as a whole governing both the development and operation of these systems?
MARTINE HANNEVIK Yeah, I actually have a background in psychology, so the human element is particularly interesting to me. But now we've talked a lot about risks, and some of our listeners might be a little afraid or discouraged to start using AI. What would your advice to our listeners be?
HELGA M. BRØGGER You talked about domain experts, and there are a lot of domain experts in healthcare who had to train for a long time to become doctors and nurses. So there's a large group of people who have the competence to support patients; they have training in ethical dilemmas, and they're also trained in managing risks, because healthcare is a high-risk field. The healthcare sector is also really familiar with product safety legislation and thinking, and this of course applies to AI in healthcare as well. And the last piece of advice: it's important to always remember to have a human in the loop for these important decisions about patients. Patients are at the most vulnerable point in their lives, and we need to make sure that the right people with the right competence use this technology, and that they are the ones making the decisions.
MARTINE HANNEVIK Great. What about you? Any final advice, Christian?
CHRISTIAN MARKUSSEN Yeah. The oil and gas industry is similar to healthcare: it's very used to dealing with risks and uncertainties. But you have to actually build that into your AI system, so that you plan for both the risks and the uncertainty and for how you communicate them to the end user. That way you know when the system is actually working as intended and when it's hallucinating or providing incorrect information. So you have to design the system to be trustworthy, not add trustworthiness as an afterthought.
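Christian's advice about communicating uncertainty to the end user could be sketched as an advisory wrapper that always carries a model output together with its confidence, flagging low-confidence results instead of hiding them. The `Advisory` type, the `advise` helper and the threshold are hypothetical assumptions for illustration, not a DNV design.

```python
from dataclasses import dataclass

@dataclass
class Advisory:
    """A model output delivered with its confidence, never without it."""
    value: float
    confidence: float  # 0.0 to 1.0, from the model's uncertainty estimate
    trusted: bool      # whether operators should rely on this output

def advise(prediction: float, confidence: float, threshold: float = 0.8) -> Advisory:
    """Wrap a prediction for the end user; below the threshold it is
    still shown, but explicitly marked as not to be relied upon."""
    return Advisory(prediction, confidence, confidence >= threshold)

# A confident prediction is passed through as trusted; a shaky one is flagged
# so the operator keeps the right situational awareness.
confident = advise(42.0, 0.95)
shaky = advise(42.0, 0.50)
```

The design choice here is that trust is decided at design time (the threshold) rather than left to each user's guess, which is one way of building trustworthiness in from the start rather than as an afterthought.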
MARTINE HANNEVIK That's an interesting point. So you actually have to build safe AI from the very beginning; it can't come later.
CHRISTIAN MARKUSSEN That's right.And we have recommended practices that describe the methodologies and the requirements for how to do this.
MARTINE HANNEVIK Thank you. Those are a lot of great insights. I think one of my key takeaways is definitely to start thinking about managing risk right from the beginning, so building safe AI from the start is a clear takeaway, along with understanding the domain and the context. Then it's more likely that people will actually trust the system and use it, so you can reap the benefits. Great. So thank you to both of you, and thank you to our listeners for tuning in. If you have any questions or want to learn more about how DNV can support the safe application of industrial AI, please visit our website. Thank you.
Transparency and explainability of the AI models will be crucial to earn the trust of healthcare professionals and the patients.
Helga M. Brøgger
MD/Principal AI Researcher in Healthcare
DNV
About the speakers
Helga M. Brøgger, MD/Principal AI Researcher in Healthcare, DNV
Dr Helga M. Brøgger is a Principal Researcher at DNV's Healthcare Research Programme, focusing on the safe and effective adoption of AI in clinical practice. She holds a medical degree, a specialty in radiology and a Bachelor's degree in Culture Studies and Oriental Languages, along with various certifications in public health, information security and project management. Helga has a diverse range of expertise in healthcare, technology and ethics, and has contributed to shaping policies and guidelines related to AI in healthcare. She has been recognized as one of Norway’s 50 Top Tech Women in 2024.
Christian Markussen, Global Practice Lead for Technology Qualification and Digital Twins, DNV
Christian has extensive expertise across various domains, including the development and deployment of digital twins, analytics and data quality, risk management, and technology qualification. In addition to co-authoring DNV's recommended practices for technology qualification and assurance of digital twins, he recently authored a report for Havtil, the Norwegian Ocean Industry Authority, on responsible use of artificial intelligence in the petroleum sector.
Martine Hannevik, Head of Innovation and Portfolio Management, DNV
The video series is hosted by Martine Hannevik.
Martine leads the innovation portfolio at Digital Solutions in DNV, focusing on developing future-oriented products and services in sustainability, AI and digital assurance. Her work lies at the intersection of strategy, innovation and digital transformation.
AI can enhance safety, operational efficiency, innovation, and sustainability in industries such as maritime, energy, and healthcare. However, organizations must balance risk and reward. By implementing AI responsibly, you can fully exploit its potential, even in high-risk contexts.
Combining industry domain knowledge with deep digital expertise, DNV is dedicated to supporting industries with the safe and responsible use of industrial AI.