AI Podcast series

Welcome to our mini-series on AI governance, presented by Thomas Douglas, Global ICT Industry Manager at DNV.

According to a recent ViewPoint survey by DNV, 70% of companies are just beginning their AI journey. As AI evolves rapidly, early governance is crucial.

In this mini-series, Thomas discusses the importance of understanding AI's risks and rewards, bridging the AI trust gap, and implementing effective AI governance. He highlights the role of ISO certification and provides practical steps, including an eight-step guide and training resources.

We are pleased to offer a limited promotion to register for our ISO/IEC 42001:2023 Requirements Course, an e-learning designed to give participants basic knowledge of the standard's requirements as well as the foundational knowledge required to implement it. Use promo code "ISO42KAI-24" at checkout to access the e-learning for free. Hurry, only 25 seats are available.

Read the transcript of our podcast series

Hello and welcome to Navigating AI Assurance from DNV, a globally leading independent certification body and training provider.
My name is Thomas Douglas, Global ICT Industry Manager at DNV. And over the next three episodes, I'll be guiding you through some of the key considerations for businesses starting on the AI journey and showing you how AI governance can put you on the path to safe, reliable and ethical usage of this technology.
So, as has been said many times, there really never is a dull moment with AI. It's always a topic of conversation in some way, shape or form, and everyone is somewhere on the AI journey, or at the very least strongly contemplating starting it.
We begin by understanding the risks and rewards of AI. According to a recent ViewPoint survey conducted by DNV, 58% of companies have not yet implemented AI technology, while 70% are just starting on the AI journey. So it's important to get clarity on what to look out for, especially as AI continues to change and evolve at a fast pace.
Every new technology comes with its risks, and there are always two sides to the coin: the rewards on one side and the risks on the other. It's extremely important for us, as individuals as well as organizations, to understand both sides - the limitations, what can go wrong, how we can benefit from AI, and what needs to be put in place to allow us to benefit safely from its use.
So if we start off by looking at some of the rewards of AI, top of mind is of course efficiency and productivity. We know AI can automate repetitive tasks and free up time for us to focus on more challenging or creative work, which can in turn increase productivity across various industries and everyday business. AI can also drive innovation in fields such as transportation, healthcare and finance.
To give one practical example, AI algorithms can analyse medical data to improve diagnostics and treatment plans.
And finally there's data analysis: AI can process and analyse huge amounts of data very quickly and give us real-time insights that ultimately lead to better decision making as well as strategic planning.
So let's look at some of the risks of AI. 
Well, first and foremost, lack of transparency. There tends to be a lack of transparency in AI systems, particularly in deep learning models, which can be complex to interpret.
This can affect the decision-making process as well as our understanding of the logic of these technologies. And when people aren't able to comprehend how an AI system arrives at its conclusions, it can lead to consumer distrust and a reluctance to adopt these technologies.
Privacy concerns - the use of AI in data collection and analysis can lead to quite a few privacy violations if, of course, it is not properly regulated.
Security threats - AI can be used for good, but it can also be used for bad. It can be used maliciously to create deepfakes or to automate cyber attacks.
And then, of course, there's the question of ethical use of AI. Instilling moral and ethical values in AI systems, especially those making decisions with potentially significant consequences, can present a real challenge. And there's bias and discrimination: AI systems can amplify existing biases if they are trained on biased data.
So it's all about balancing risks and rewards.
Great. So, AI is here, and we know that we can harness it for the better. But now what? As we've said, organizations want to implement AI quickly in order to innovate, streamline processes and potentially incorporate it into their products or services. And of course, there is the question of industry fervour - not wanting to fall behind industry peers and competitors. So organizations are trying to figure out what their AI strategy is - where and how they can start. And one of the big topics when confronting this enormous subject is that of governing AI.
So to maximize the benefits of AI while mitigating the risks, it really is vital to implement robust governance frameworks, ensure transparency and fairness in AI systems and, of course, promote continuous learning.
And it's within this governance topic that the recently released ISO 42001 standard for AI management systems comes in as something that can be utilized for good. It is a blueprint for designing and embedding AI governance within your organization, as it provides a nicely adaptable playbook for any organization that is implementing, utilizing or developing AI, and prompts an organization to think before starting on its AI journey.
That means thinking about policies, responsibilities and, of course, the governance of AI within your organization. So it's very important from that point of view.
What this standard is great for is that it gives you guidance and a blueprint for how to go about this subject. It asks: what are the AI use cases your organization is considering? And it really makes you think before you act. For example, as an organization, are we going down a path with AI that isn't wise from a potential impact and risk perspective? And what are the key considerations we have to keep in mind if we do decide to go down this path?
So I'd qualify this standard as really the first step and a guiding baseline: the governance of it all - the processes, policies, objectives and people necessary to run an AI program in which we can maximize the rewards and avoid the risks that come with utilizing such powerful tools.
If you'd like to learn more about what an AI management system is and how you can reap the benefits, visit www.dnv.com/what-is-aims. On this page, you will also find a link to our latest ViewPoint survey, which looks at how companies are approaching AI and where they are on their journey.
Thank you for listening to today's episode of DNV's Navigating AI Assurance on the risks and rewards of AI. Join me next time where we'll be taking a look at bridging the trust gap of AI.

Hello. Welcome back to Navigating AI Assurance from DNV, a globally leading independent certification body and training provider.
My name is Thomas Douglas, Global ICT Industry Manager at DNV.
Last episode we looked at the risks and rewards of AI for businesses starting on the AI journey. And today we're looking at how businesses can bridge the AI trust gap to overcome concerns about safety and ethics.
Yes, there is somewhat of a trust gap with AI. In essence, it links back to what we were discussing in the previous instalment about the different risks and rewards of AI. There's a lot of what you could term FUD - fear, uncertainty and doubt - as there has been with most new technologies in the past. This is in large part due to the very real risks that exist, and the fact that the speed of innovation and what is possible with AI is changing virtually every day. So the question becomes: how can we harness this?
So there are several reasons why people and organizations might lack confidence in AI being safe, reliable and ethical. One is disinformation and misinformation: AI can be used to create and spread false information, which oftentimes can be harmful and misleading. There are, of course, the safety and security concerns: these systems can be vulnerable to hacking as well as other security threats, and these can lead to some serious consequences. And then there's what we would call the black box problem: many AI systems tend to operate in ways that aren't fully transparent or understandable to us, which can make it difficult to trust the decisions that the AI is making for us.
And there are ethical concerns - quite a few ethical issues relate to AI, such as bias in decision making, privacy violations, and the potential that does exist for AI to harm individuals or society.
Ultimately, though, while AI does indeed bring new challenges and risks, these can be addressed through responsible innovation and the implementation of safety measures and guardrails. So AI really isn't something we should fear, but something we should try to understand and embrace for all the good it can bring as a tool and as a partner.
So trusting technology often involves understanding how it works and being able to assess its reliability and, of course, its safety. In order for us to trust AI, it has to be trustworthy. Key questions around transparency have to be dealt with, alongside whether it is being implemented in a responsible and ethical way. And where there is a need for public trust in a subject, standards play a pivotal role, as they promote trust by providing reliable frameworks for consistency, quality as well as safety. Many times, standards provide foundational vocabulary, industry-led standardization and documented best practices for benchmarking and evaluation, all while providing clarity and accountability.
So standards are going to play a pivotal role in enabling us to bridge the trust gap with a technology such as AI, as new technology needs standardization in order to scale.
So standards targeting AI management, such as ISO 42001, will help shape the governance and responsible deployment of AI, tackle some of the increasing societal expectations around safety and trust, and offer a foundation for regulatory compliance.
And ultimately - which is what we all want - they will accelerate our ability to harness AI's global potential in a safe and ethical way, always ensuring transparency, trust and security towards stakeholders.
There is indeed some form of pressure to show assurance in this domain.
Quite recently, in fact, as part of its Supplier Security and Privacy Assurance program, Microsoft mandated ISO 42001 for certain suppliers whose service delivery involves sensitive-use AI systems. So here is where we really see that need for trust being important.
Organizations need to be able to vet you as a supplier and understand: how are you approaching AI? Is it safe? Is it secure? Is it trustworthy? And in doing so, this will ultimately contribute to a safer and more secure ecosystem for AI.
If you would like to learn more about how ISO certification can help your business to bridge the trust gap in AI, please head to www.dnv.com/ai-management, where you'll find in-depth articles and our certification and training resources. On this page, you will also find a link to our online self-assessment tool, which you can use to get an in-depth understanding of your organizational readiness, a baseline towards the standard, and where to target improvement efforts.
Thank you for listening to today's episode of DNV's Navigating AI Assurance on Bridging the Trust Gap in AI.
Join me for our final instalment where we will look at how to implement effective AI governance.

Hello and welcome back to Navigating AI Assurance from DNV, a globally leading independent certification body and training provider.
My name is Thomas Douglas, Global ICT Industry Manager at DNV.
In the first two episodes, we looked at the risks and rewards of AI for businesses starting on their journey and how to bridge the trust gap.
In today's final episode, we look at practical steps to implement effective governance to set you up on a good path for safe, reliable and ethical usage.
So, ISO 42001. This is an international standard that specifies requirements for establishing, implementing, maintaining and continually improving an artificial intelligence management system, or 'AIMS'.
ISO 42001 is intended for use by organizations of any size involved in developing, providing, or indeed using products or services that utilize AI systems, ensuring the responsible development and use of these. It is meant for any type and size of organization and industry, from global, national and regional enterprises to nonprofits and public sector agencies.
ISO 42001 is the world's first certifiable AI management system standard and provides valuable guidance for this rapidly changing field of technology. It addresses the unique challenges that AI poses, such as ethical considerations, transparency and continuous learning.
For organizations, ISO 42001 sets out a structured way to manage risks and opportunities associated with AI, balancing innovation with governance.
Quality management systems provide a base level of accountability, repeatability and auditability, and ISO 42001 is modelled around the same principles and structures. It can therefore be integrated quite nicely with other management systems such as ISO 9001 or ISO 27001.
So the first step is ensuring top management commitment: get commitment from management early on by including all the main stakeholders in learning about the potential risks and harms of your AI systems, but equally the opportunities that using AI can offer. This is a crucial step in setting the certification process up for success, as it sets the tone for the entire organization.
Next, identifying gaps: analyse both your internal and external AI context and position, and select the AI role your organization has. For example, are you a user, producer or provider of AI? Or perhaps you have multiple roles? Conduct a gap analysis to assess the current state of processes and systems against ISO 42001 requirements, and identify areas that need to be improved.
This kind of pre-assessment is really a great way to get a snapshot of where your organization currently stands when it comes to complying with the requirements of this standard.
Then, undergoing training and building awareness: this really is a crucial step on your organization's road to ISO 42001 certification, as it builds internal knowledge and competence in the core team that will ultimately play an important part in implementing a management system compliant with the standard.
Training and workshops will equip your team with the relevant skills, tools and a clear understanding of the objectives and the roles that they will have.
For the implementation team to understand the ins and outs of the standard, training is essential to navigate your journey towards ISO 42001 and ensure that you are compliant and utilizing AI in a safe, responsible and ethical manner.
View our handy eight-step guide and training resources on www.dnv.com/ai-management and find out how you can achieve ISO 42001 certification.
As a thank you for listening, I'm pleased to offer a promotional code to the first 25 people who sign up to our e-learning course, which is designed to give participants basic knowledge of the requirements of the standard, as well as the foundational knowledge required to implement it.
Visit www.dnv.com/ai-management-course and enter promo code "ISO42KAI-24" to access our e-learning for free.
Thank you for listening to DNV's Navigating AI Assurance series, where we've guided businesses starting out on their AI journey through some of the key considerations and practical steps towards safe, reliable and ethical AI adoption and usage.
AI is an ever-evolving and fast-changing space, so be sure to keep following DNV for more advice and support - you can find us on LinkedIn and visit our website for regular insights.