AI training course: Introduction to quality assurance of machine learning applications

Learn how to ensure the trustworthiness of machine learning applications throughout their lifecycle with DNV’s recommended practice for assurance of machine learning applications DNV-RP-0665.

Description 

Build your AI skills by joining this course to get an introduction to quality assurance of machine learning applications. The goal of this two-day lecture-based course is to provide product owners, managers, data scientists and ML engineers with a robust framework that they can use to ensure the trustworthiness of machine learning applications throughout the entire lifecycle.  

The course will take a deep dive into DNV’s recommended practice for assurance of machine learning applications (DNV-RP-0665) and will guide course participants on how to use the recommended practice.

The course covers topics such as:  

  • Intro to ML applications 
  • Assurance process 
  • ML complexity levels and high-risk ML applications 
  • Guidance and requirements on organizational maturity, data management, risk management and decision management 
  • Guidance and requirements on project planning, project control, information management and quality management 
  • Guidance and requirements on ML lifecycle processes 
  • Guidance and requirements on ML application features: performance, robustness, transparency, security, privacy/data protection, fairness and human oversight 
  • AI/ML-related regulations landscape

The course can be held at your premises, DNV’s premises or online, and is offered on request. Pricing depends on the number of participants and the venue. Please send us a non-binding request with your needs to receive a quote.

This course is exclusively available to groups representing an organization and is not open to individuals in a private capacity. We recommend a maximum group of 12 participants.

All course participants will receive a certificate after completing the course. 

In addition to the classroom course, you will get: 

  1. A planning session with DNV experts so we can get a better understanding of your objectives and how we can tailor the course to your needs  
  2. An assessment of competence pre- and post-course to map progress in understanding and knowledge  
  3. Online follow-up sessions approximately 12 and 24 weeks after the course with DNV experts to discuss challenges and successes after the course  

Learning objectives

Upon completing this course, you will: 

  • have a good understanding of what is meant by ML applications 
  • have an understanding of the assurance process  
  • be able to determine the complexity level of a high-risk ML application  
  • have an understanding of the complete AI/ML lifecycle 
  • know which requirements apply to your ML application 
  • have an understanding of AI/ML-related regulations 
  • have an overview of the steps required to assure the quality of your ML application and to be compliant with regulations

Target group 

The course is tailored for product owners, managers, data scientists and ML engineers. Your role is crucial in the successful development and adoption of trustworthy and responsible AI. By mastering the essentials, you’ll gain the tools needed to drive your organization’s AI success. 


Meet the course trainer

 


Abdillah Suyuthi - Head of Machine Learning Services

Dr Suyuthi leverages extensive industry experience in executing simulation model projects, creating trustworthy machine learning solutions and developing efficient methods and tools. He is passionate about data quality and the integration of large language models and ontologies to propel progress and foster sustainability. 

Learn more about Abdillah


Read our frequently asked questions about Artificial Intelligence (AI)

Artificial intelligence (AI) is a common designation of technologies where a machine performs tasks that are considered to require intelligence. This typically relates to speech recognition, computer vision, problem solving, logical inference, optimization, recommendations, etc.

AI is often divided into two main domains: rule-based AI and machine learning. Rule-based AI is where we take human insight and knowledge and codify it into rules, such that the machine can perform tasks based on these rules. This kind of AI is very structured and explainable, but less flexible, as it can only be used for tasks for which specific rules have been developed. Machine learning (ML), on the other hand, is AI created from data. The applications infer their own rules and correlations from the data. This makes for flexible models, but with larger ML models, it can be difficult to explain decisions. In many practical applications, a combination of rule-based AI and machine learning is used.

The EU AI Act is a new regulation of AI use in the European Union. 

The Act’s purpose is: 

‘To improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter of Fundamental Rights, including democracy, the rule of law and environmental protection, against the harmful effects of artificial intelligence systems (AI systems) in the Union, and to support innovation.’ 

The Act sets a common risk-based framework for using and supplying AI systems in the EU. It is binding on all EU member states and requires no additional approval at national level. However, there will be variations in how national regulatory bodies are set up, and guidelines on aligning with other member states’ regulations will be established. Learn more on the EU’s official pages.

The EU AI Act regulates all AI in Europe. To understand what is required, one must first assess the risk category of the AI.

The EU AI Act passed the EU Parliament in March 2024 and will enter into force in June 2024. There is, in general, a two-year period until compliance must be in place, but there are also earlier statutory milestones along the way. For example, six months after the Act comes into force, a ban on prohibited AI practices must be in place. Rules on general-purpose AI (GPAI) are required after 12 months. Obligations for high-risk systems must be in force within 24 months.

High-risk AI means that the supplier and deployer (user) must meet stringent regulatory requirements for use. Providers of a high-risk AI system will have to put it through a regulatory conformity assessment before offering it in the EU or otherwise putting it into service. They will also have to implement quality and risk management systems for such an AI system.

Generative AI is a type of machine learning that can create new data (numbers, text, video, etc.) from an underlying data distribution. Generative AI is therefore probabilistic in nature.

Conformance testing (or compliance testing) means testing a system to assess whether it meets given standards or specific requirements.

Fairness of an AI system is often defined to mean that the AI system does not contain bias or discrimination. This means that the AI system is created from data that are representative of the kind of distribution and algorithmic behaviour we want the AI system to have.
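One common way to make such a fairness notion concrete is a demographic parity check: comparing the rate of positive decisions across groups. The sketch below is purely illustrative — the group names, decisions and metric choice are assumptions, not part of DNV-RP-0665.

```python
# Minimal sketch of a demographic parity check (illustrative assumptions only).

def selection_rate(decisions):
    """Fraction of positive decisions (1 = e.g. 'approve')."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 = 0.625 approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 0.375 approved
}
gap = demographic_parity_gap(decisions)  # 0.25
```

A large gap does not by itself prove discrimination, but it flags a disparity that should be investigated against the intended behaviour of the system.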

Algorithm verification means to assess if an algorithm meets specific requirements. When AI is deployed into larger systems, we need to assess how the system and its components work. Algorithm testing is one way of verifying that the AI algorithm works as intended.

AI assurance means to (i) establish what requirements the AI needs to meet and (ii) verify compliance with these requirements.

The AI lifecycle covers all the phases of AI, from problem definition to data acquisition, to model development, deployment and update. The lifecycle is often iterated several times.

Model validation means ensuring that the model is solving the right problem, by comparing model outputs to independent real-world observations. Without a validated model, you cannot trust that it is solving the right problem.
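In practice, this comparison often boils down to computing an error metric between predictions and held-out real-world observations, and checking it against an acceptance criterion. The toy model, data and threshold below are illustrative assumptions:

```python
# Minimal sketch of model validation: compare model predictions against
# independent real-world observations using mean absolute error (MAE).

def predict(x):
    """Stand-in model: predicts y = 2x (illustrative only)."""
    return 2.0 * x

def mean_absolute_error(predictions, observations):
    return sum(abs(p - o) for p, o in zip(predictions, observations)) / len(observations)

# Independent observations, not used during model development.
inputs = [1.0, 2.0, 3.0, 4.0]
observed = [2.1, 3.9, 6.2, 7.8]

mae = mean_absolute_error([predict(x) for x in inputs], observed)  # 0.15
is_valid = mae < 0.5  # illustrative acceptance criterion
```

The key point is that the observations are independent of the data used to build the model; otherwise the comparison says nothing about whether the model solves the right problem.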

Black-box testing means testing of a model without access to or insight into its internal structure or working. Inputs are provided to the black-box and outputs are received.
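A black-box test can thus be expressed as a table of input/expected-output pairs run against the opaque system. The tiny classifier and test cases below are illustrative assumptions standing in for a real system under test:

```python
# Minimal sketch of black-box testing: the tester only supplies inputs and
# checks outputs, with no access to the system's internal structure.

def classify(text):
    """Opaque system under test (pretend we cannot see inside it)."""
    return "positive" if "good" in text.lower() else "negative"

test_cases = [
    ("This is a good product", "positive"),
    ("Terrible experience", "negative"),
    ("Good value for money", "positive"),
]

results = [(inp, classify(inp) == expected) for inp, expected in test_cases]
all_passed = all(ok for _, ok in results)
```

Because only inputs and outputs are observed, the same test suite works regardless of whether the system is rule-based, an ML model, or a combination of both.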
