Despite its many obvious benefits, artificial intelligence (AI) can contribute to harm, perpetuate biases, introduce security vulnerabilities, and raise other ethical and societal concerns. Moreover, AI-enabled systems exhibit emergent behaviours, and capturing the risks in such complex systems is challenging.
Because it is difficult to identify all the relevant and important stakeholder interests, industry also struggles to demonstrate the trustworthiness of emerging technologies to stakeholders. This is made all the more challenging by the rapid and continuous change of AI system properties.
Collaboration amongst the actors involved in AI systems can also be challenging, and establishing the confidence that warrants trust between them is essential.
Demonstrating that AI-enabled systems are trustworthy and responsibly managed
Within the EU, the use of AI will soon be regulated by the AI Act, the world’s first comprehensive AI law. The Act will establish obligations for providers and users depending on the level of risk posed by the AI.
The recommended practice (RP) helps bridge the gap between the generically written law and affected stakeholders by helping to identify the applicable requirements and providing guidance on how to collect the evidence needed to demonstrate compliance.
It is a new assurance process based on fundamental principles. It takes a holistic, claims- and evidence-based approach to assessing AI-enabled systems, capturing the emergent properties and behaviours that arise from how AI components interact with other components, humans, and the environment.
The assurance process includes mapping stakeholders and their wide-ranging concerns, along with identifying relevant standards and regulations. This helps to identify competing interests and facilitate compromise and collaboration, enabling assurance of an entire system based on its assurance modules and their interdependencies.
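To make the claims- and evidence-based idea concrete, the sketch below models a trustworthiness claim that is decomposed into sub-claims, each backed by evidence. This is purely illustrative: the names (`Claim`, `Evidence`, `is_supported`) are assumptions for the example, not terminology or structure defined in DNV-RP-0671.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A piece of evidence supporting a claim, e.g. a test or audit report."""
    description: str

@dataclass
class Claim:
    """A claim about the system, supported by evidence and sub-claims."""
    statement: str
    evidence: list = field(default_factory=list)
    sub_claims: list = field(default_factory=list)

    def is_supported(self) -> bool:
        # A claim holds if it has direct evidence and all of its
        # sub-claims are, recursively, supported in turn.
        return bool(self.evidence) and all(c.is_supported() for c in self.sub_claims)

# Hypothetical example: a top-level claim decomposed into one sub-claim
safety = Claim("The AI component behaves safely in its operating context",
               evidence=[Evidence("Hazard analysis report")])
top = Claim("The AI-enabled system is trustworthy",
            evidence=[Evidence("System-level assurance case review")],
            sub_claims=[safety])
print(top.is_supported())  # True: every claim in the tree has evidence
```

The tree structure mirrors how an assurance case can decompose a system-level claim into module-level claims with their own evidence, making gaps (claims without evidence) easy to locate.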
The benefits
AI-enabled systems can suffer from negative user perception, so establishing a framework to ensure, and demonstrate, that such systems are safe, competent, and able to perform as required is key to securing acceptance.
DNV-RP-0671 can be used by any actor to assure that an AI-enabled system is trustworthy and managed responsibly. This includes any organization that develops, sells, integrates, uses, operates, interacts with, depends on, or is affected by AI components or AI-enabled systems.
It will also be invaluable in demonstrating compliance with applicable laws and regulations.
Market potential
AI technology is rapidly gaining importance in many industries and will likely prove a major driver of economic growth. Any organization that develops AI-enabled products and systems will need to build trust with its customers.
Additionally, DNV believes that tools capable of raising assurance services for AI-enabled systems to a level acceptable to stakeholders and regulators will open up opportunities for such services, and for customers, across a wide range of industries. This recommended practice is part of a set of documents providing ‘Digital Trust’.
For more information:
AI insights page: Artificial intelligence - DNV
Direct download DNV-RP-0671: DNV-RP-0671
RPs for digital trust: Recommended Practices for Successful Digital Transformation - DNV