Artificial intelligence

Building trust and compliance into AI-enabled systems

Insights, resources, and advice on trustworthiness and compliance for AI designers, developers, and organizations deploying or using AI systems.

As AI ushers in a new era of productivity and capability, it also poses new risks that must be managed in new ways. AI relies on data, which can itself change, leading to results that may be difficult to understand. AI is also ‘socio-technical’: it is shaped by a complex and dynamic interplay of human behaviour and technical factors. DNV can help you develop the new risk approach that AI needs – both to ensure compliance with emerging regulations and to manage risks dynamically – so you can access the benefits of AI more rapidly, fully, and confidently.

Recommended practices 

Our resources at your disposal include our Recommended Practice (RP) on AI-enabled systems, which addresses quality assurance of AI-enabled systems and compliance with the upcoming EU AI Act. Other recommended practices developed by DNV cover the building blocks of AI systems – data quality, sensors, algorithms, simulation models, and digital twins – which we have developed through our extensive work on digitalization projects at asset-heavy and risk-intensive businesses worldwide. Cutting across all of these digital building blocks is cyber security, where DNV offers world-leading industrial cyber security services.

EU AI Act

The use of artificial intelligence (AI) in the European Union will be regulated by the EU AI Act, the world’s first comprehensive AI law. With a broad definition of AI, many businesses will be affected and should start preparing for compliance.

Act Now

The time to act is now! This is the clear message from DNV Digital Assurance Director Frank Børre Pedersen. While the EU AI Act is only expected to be enacted at the end of 2023 and to come into full force two years after that, organizations should already be planning for its consequences.

AI innovation aligned with societal needs requires the propagation of trust through all layers of organizations and society – like ripples of water

What are the trust gaps to fill as the integration of AI becomes prevalent across industries?

AI cyber

The possibilities, limitations, and risks of large language models

Trust in AI

An ecosystems approach to the identification of stakeholders and their trust needs when deploying autonomous technologies

Creating a secure and trustworthy digital world

Organizations that struggle to demonstrate the trustworthiness of AI to their stakeholders can close the trust gap with DNV’s new services and a set of recommended practices for the safe application of industrial AI and other digital solutions.