Why the AI journey needs structured governance
For almost all businesses, artificial intelligence (AI) is the hot topic of the day, and most are keen to explore its claimed benefits. However, AI brings potential challenges and risks that must be managed to ensure safe, responsible, reliable and ethical development, application and use. This is where the ISO/IEC 42001 artificial intelligence management system standard comes in: a recognised pillar to guide development and deliver trusted AI solutions.
Despite having been around for some time, AI has only recently been embraced with real enthusiasm. A recent ViewPoint survey by DNV found that a significant portion of companies (58%) have not yet implemented AI technologies, and in all more than 70% could be considered to be just starting on their AI journey. The figures highlight that AI is not yet a strategic priority for the majority.
Many companies responding to the survey may be starters as far as AI is concerned, but they are not new businesses in the accepted sense of the term, nor are they strangers to the need for, or the benefits of, governance. They have already implemented a management system compliant with at least one ISO standard, such as ISO 9001 (quality), ISO 14001 (environment) or ISO/IEC 27001 (information security). However, those at the very early stages in particular have yet to see the need to apply the same approach to their AI journey.
The growing need for governance
What we could term “starting” companies are primarily focused on using AI to enhance efficiency and improve processes internally. This is a common first step due to the relatively straightforward nature of solutions such as Copilot and ChatGPT and the ease with which they can be implemented. As companies mature, however, there tends to be a shift towards external, more complex AI initiatives aimed at generating revenue and exploring new business opportunities. With progress comes an increased need for structured governance to best control risk and ensure safe, reliable and ethical development, implementation and use of AI.
Regulation around AI is growing, and companies will have to demonstrate compliance. Moreover, there is an inherent scepticism among users, creating a trust gap that must be bridged. To trust and leverage a technology, we must understand how it works and be able to assess its safety, reliability and transparency. This issue is highlighted in a March 2024 report by the AI Assurance Club titled AI Governance and Assurance Global Trends 2023–24. The report details trends and law-making developments across the globe. On the subject of trust, it states: “At the centre of the AI governance puzzle is the issue of trust. As AI systems become more embedded in our lives – in infrastructure, business processes, public services, or in consumer goods – users need to have confidence that systems will consistently operate as intended, without causing harm.”
Bridging the gap
The trust gap refers to the potential lack of confidence in, and transparency around, an organisation's AI-enabled products and/or services. For example, if a hospital uses AI-enabled diagnostics, can I as a patient trust that the diagnosis is as good as, or even better than, an assessment made by a doctor alone? Bridging the trust gap before AI systems become too complex and autonomous requires strong governance.
Building trust is a central feature of all ISO management system standards, so it should come as no surprise that the same approach can be applied to AI. The ViewPoint survey revealed that most of the companies leading the way on AI development and implementation are already considering adopting an AI management system to ensure due governance, and they are already aware of the ISO/IEC 42001 standard.
Several AI frameworks and complementary standards already exist, including the NIST AI Risk Management Framework, ISO/IEC 23894 (guidance on AI risk management), ISO/IEC 5338 (AI system life cycle processes) and ISO/IEC TS 25058. In December 2023, ISO launched the ISO/IEC 42001 AI management system standard (AIMS). It is the first certifiable AIMS standard and provides a robust approach to managing the many AI opportunities and risks across an organisation. The standard therefore helps companies take a further step in bridging the trust gap, as compliance results in a certificate issued by an independent third-party certification body such as DNV. Moreover, because it builds on the ISO High Level Structure (HLS), it can easily be integrated with other management systems.
ISO/IEC 42001 is modelled on the same high-level structure as other ISO management standards and will therefore be familiar to businesses that already have another ISO management system in place. Following the “Plan-Do-Check-Act” methodology, it specifies requirements for establishing, implementing, maintaining and continually improving an AI management system. Given that all respondents in the ViewPoint survey pointed to cybersecurity as a main threat to their AI implementation, integrating it with an ISO/IEC 27001 compliant information security management system can be hugely beneficial.
Building trust
The ISO/IEC 42001 standard will no doubt take on even greater significance as more and more countries introduce AI regulations. Regulations and policies already exist in the US and China. In Europe, the world's first comprehensive AI law, the EU AI Act, recently entered into force. The Act seeks to promote and ensure the safe, secure, trustworthy and ethical development of AI systems, and establishing corporate AI governance emerges as a key means of ensuring regulatory compliance and building trust.
Building a core, centralised function spanning multiple teams that understands AI technologies, use cases and providers is central to ensuring AI governance. Moreover, given the ever-evolving field of AI and its shifting regulatory environment, an identified governance committee and its stakeholders will need to review and update the AI governance programme on a regular basis. Herein lies another benefit of implementing an ISO/IEC 42001 compliant management system and getting it certified.
Adopting a structured approach to governance and responsible use of AI can ultimately be a competitive advantage and will help any company demonstrate to internal and external stakeholders that it has a robust approach in place to safeguard users and continually improve. A certified management system demonstrates a company's commitment and the actions it takes every day to ensure safe, reliable and ethical development, implementation and use of AI in its processes and in its product and service offerings.