AI compliance and beyond

Learn how to navigate a complex regulatory and compliance landscape to ensure companies develop AI in a responsible manner.  

This episode answers key questions such as: 

  • How can organizations in high-risk contexts prepare for compliance in a complex regulatory environment?
  • Why is it important to go beyond compliance and focus on responsible AI from day one?
Transcript:

MARTINE HANNEVIK     Welcome to the Trust in Industrial AI video series, where we explore how to implement AI with both speed and confidence in safety-critical industries. AI is transforming industries, but with its adoption come new risks, as well as the need to navigate a complex regulatory landscape to ensure companies develop AI in a responsible manner. This is the topic we'll explore today, and to discuss it I'm joined by our AI experts, Tita and Per. Thank you both for being here today.

TITA ALISSA BACH     Thank you for inviting us.

PER MYRSETH    Thank you for having us.

MARTINE HANNEVIK     And we'll start with you, Per. Can you explain why regulatory requirements are so important for AI and especially when we're talking about critical infrastructure?

PER MYRSETH     I think the key importance relates to the fact that the safety of AI matters to the EU and to society at large, as does managing the risks related to AI. AI is a novel technology, and it has the capability of changing over time. Compared to traditional assets, which we build and which then stay the same, AI has this new capability. So there are some aspects of AI that don't fit entirely into the way we currently look at IT systems. It's very exciting to see how we can navigate this.

MARTINE HANNEVIK     And what are some of the key frameworks that companies should be aware of?

PER MYRSETH     Yeah, we have some legislation that is already in place, and the EU AI Act is one major piece of legislation, part of which has been in force since February this year. The UK also has legislation in the making, and they are putting a lot of effort into working out how to do this. In the US, Biden signed an executive order, but Trump has signed a counter executive order, so that is now cancelled; US legislation is mostly focused at the state level. But there is also legislation in China, in South Korea and in most countries. So there are studies looking into how legislation to safeguard AI is done, but also into how legislation fosters innovation, because you need to balance these two.

MARTINE HANNEVIK     Yeah, yeah, absolutely. And you mentioned that in the EU the first part is actually already in force. Can you say a bit more about that?

PER MYRSETH     Yeah, the first part that is in force relates to prohibited uses of AI, meaning you shall not use it for social scoring, predictive policing and similar use cases. But there is also another obligation, and that relates to AI literacy. Companies that build and use AI are already obliged to have in place the knowledge, called literacy in the legislation, so they can understand whether they have a high-risk system and how they should build it, how they should monitor it and make it safe and trustworthy.

MARTINE HANNEVIK     Yeah. And what does this mean then for the industries that we typically work with: energy, maritime, healthcare?

PER MYRSETH     It means that the race to be compliant has definitely started, and some of the obligations are already active. The main obligations for high-risk AI systems take effect in August 2026, so companies need to start getting ready now. We are in a preparation phase where the legislation is known and you have to prepare to be compliant, and it's not that long until August 2026. If you have an AI system that is already in use, or planned to be in use very soon, changing or updating it to make it compliant takes time, and for AI used in critical industries you may need even more time than what remains until August. So this should be a wake-up call.

MARTINE HANNEVIK     Yeah. And what should organisations do then to get ready?

PER MYRSETH     First of all, they must investigate whether they have AI systems that could be regarded as high-risk and create that overview. And if some of the systems are high-risk, then they definitely need to make certain that they know what to do to become compliant. And the workload to get compliant may be larger than you might believe; there is quite a lot that should be in place.

MARTINE HANNEVIK     But do they typically have things in place already?

PER MYRSETH     If you follow good AI practice, have governance of your AI systems in place and make a best effort, you probably have a lot of this in place already. But the legislation also focuses a lot on post-market monitoring: what happens to the system while it's in use. So you have to add more monitoring efforts, and there are some reporting requirements and such that should be in place as well. So there are definitely some issues you should look into. We are already delivering these kinds of services to customers, and we find a lot of interesting things. The first movers that have reached the wake-up point are already working on this.

MARTINE HANNEVIK     Great. But Tita, you argue that compliance alone isn't enough; you need to think about responsible AI from the beginning, right? So can you say a bit more about what responsible AI is and why it's so important?

TITA ALISSA BACH     Thank you, Per, very interesting to hear. So responsible AI is basically making sure that AI is built and used in a responsible way. It means that you protect your stakeholders' rights and interests, especially those of the affected stakeholders. So why is compliance alone not enough? Because, as you said earlier, regulations keep changing, like Biden's executive order now being revoked, and they vary from country to country. Regulations also usually struggle to keep up with technological advancement. This is where practicing responsible AI can guide society and businesses to avoid harm and scandals and actually earn the trust of their stakeholders. In the end, it is good not only for society but also for businesses that want to stay competitive.

MARTINE HANNEVIK     Yeah. And if they want to stay competitive and build in responsible AI, what are the main things they should be considering?

TITA ALISSA BACH     The first one, I would say, is to involve all of your stakeholders from day one and throughout the AI life cycle: your users, developers, investors, everybody should be on the same page. The second is to make sure that AI is actually fair and safe, and not only for the privileged few. And the third, I would say, is to build trust through transparency and accountability. Users should understand how AI makes decisions, and if AI causes harm, who is responsible for that.

MARTINE HANNEVIK     Yeah. So can you say a bit more about this AI accountability?

TITA ALISSA BACH     AI is obviously a machine; it cannot be held responsible. So there should be an organization, a system, a mechanism in place. What happens when the benefits of AI do not materialize, or when AI benefits one group but actually harms another? What do you do then? This is where regulation like the AI Act comes into play. So I'm not saying that compliance is not necessary; it's extremely important, and a driver, but responsible AI goes beyond the regulations.

MARTINE HANNEVIK     So think about compliance, but also responsible AI in general.

TITA ALISSA BACH     Absolutely.

MARTINE HANNEVIK     Any final advice to our audience?

PER MYRSETH     I'm glad you say that, and I would like to add to the point you made: there are organizations that build and use AI, and you cannot take an AI system to court. So if an error happens, you must take the organization; the organization is the legal entity that needs to be responsible for this. I'm very curious about the future. I'm very optimistic, but we need to move forward with care.

MARTINE HANNEVIK     Yeah. So I think my key takeaway is that you actually have to start acting now and think about compliance and responsible AI from day one.

PER MYRSETH     Yes, yes.

MARTINE HANNEVIK     Well, thank you both for those great insights, and thank you to our audience for tuning in. If you have any questions or want to learn more about how DNV can support you with the safe application of industrial AI, please visit our website. Thank you.

While regulations usually lag behind technological advancements and are constantly changing, practicing Responsible AI helps society and businesses prevent harm, avoid scandals, and earn stakeholder trust. Ultimately, Responsible AI benefits not only society, but also businesses by keeping them competitive.

  • Tita Alissa Bach
  • Principal Researcher, Digital Assurance
  • DNV

About the speakers

Dr. Tita Alissa Bach, Principal Researcher, Digital Assurance, DNV

Tita’s work is driven by the dynamic relationship between humans and technology, with a particular focus on artificial intelligence deployed within safety-critical industries. Her current research delves into understanding how AI influences stakeholders and society, as well as how stakeholders and society shape and interact with AI systems. She is also focused on assessing and improving cybersecurity culture, as well as optimizing human-AI collaboration.

Connect on LinkedIn


Per Myrseth, Senior Principal Researcher, Digitalization and Trust, DNV

Throughout his career, Per has been focusing on digitalization, interoperability and data-driven value creation. In recent years, his focus has been on AI and the regulatory environment, supporting clients in safety-critical industries to prepare for compliance. He is passionate about ensuring that digital assets are fit for use within acceptable risk and cost and supporting organizations to succeed with safe and responsible use of digital technologies.

Connect on LinkedIn


Martine Hannevik, Head of Innovation and Portfolio Management, DNV

The video series is hosted by Martine Hannevik.

Martine leads the innovation portfolio at Digital Solutions in DNV, focusing on developing future-oriented products and services in sustainability, AI and digital assurance. Her work lies at the intersection of strategy, innovation and digital transformation.

Connect on LinkedIn

Related services for AI regulations and compliance:

AI regulations and standards compliance

Prepare for regulatory compliance and ensure your business meets relevant industry standards with expert guidance, comprehensive assessments, and tailored compliance strategies.

AI vendor capability assessment

An independent third-party audit to demonstrate capability for developing and/or operating trustworthy artificial intelligence, machine learning (AI/ML) and data-driven solutions.

Explore the full video series

We explore how to implement AI with speed and confidence in critical infrastructure industries.

Return to full video overview

Get real value from industrial AI with DNV

AI can enhance safety, operational efficiency, innovation, and sustainability in industries such as maritime, energy, and healthcare. However, organizations must balance risk and reward. By implementing AI responsibly, you can fully exploit its potential, even in high-risk contexts. 

Combining our industry domain knowledge with deep digital expertise, DNV is dedicated to supporting industries with the safe and responsible use of industrial AI.

Receive insights, updates and invitations to webinars and events on AI and digital trust