Navigating the landscape of AI legislation
Artificial Intelligence (AI) is rapidly transforming industries and societies, bringing about unprecedented advancements and challenges. As AI systems become more integrated into how we do business and live our daily lives, the need for robust legislation to govern their development and use has become increasingly critical.
Most regulatory frameworks aim to balance the need for innovation with the protection of fundamental rights and public safety. This article explores the current state of AI legislation, its implications, and the path forward.
The Rise of AI Legislation
The European Union (EU) has recently introduced the EU AI Act, a comprehensive framework that aims to ensure AI systems are safe, transparent and respect fundamental rights. Countries at the forefront of AI technology, such as the United States and China, moved earlier to introduce legislative frameworks addressing the unique challenges posed by AI. Regardless of pace, there is a shared need to regulate the AI technology landscape in a way that both promotes innovation and builds trust.
A trusted path forward
Implementing AI legislation presents several challenges. AI development often moves faster than regulatory frameworks can adapt, and balancing innovation with regulation requires careful consideration to avoid stifling technological progress. However, AI legislation also offers significant opportunities. By establishing clear guidelines and standards, legislation can foster innovation, enhance public trust, and ensure that AI systems are developed and used responsibly. Moreover, by prioritizing safety, transparency and ethical considerations, we can harness the full potential of AI while safeguarding the interests of individuals and society.
Complemented by International Standards
Legislative efforts are complemented by international standards such as ISO/IEC 42001. This certifiable standard specifies requirements for implementing an AI management system and transcends geographical boundaries. Although not legislation per se, ISO/IEC 42001 is expected to play a significant role in promoting trustworthy AI by establishing best practices for AI governance. Designed to ensure safe, reliable and ethical AI development, implementation and use, it is referenced in legislation as a valuable means of ensuring that proper governance processes are put in place.
Key Components of AI Legislation
AI legislation typically encompasses several key components:
- Safety and Reliability: Ensuring that AI systems operate safely and reliably is paramount. Legislation often includes requirements for risk management, testing and validation to prevent harm to individuals and society.
- Transparency and Accountability: Transparency in AI decision-making processes is crucial for building trust. Legislation typically mandates that AI developers provide clear explanations of how their systems work and establish mechanisms for accountability.
- Ethical Considerations: Ethical principles, such as fairness, non-discrimination and respect for privacy, are integral to AI legislation. These principles guide the development and deployment of AI systems to ensure they align with societal values.
- Compliance and Enforcement: Effective enforcement mechanisms are essential for ensuring compliance with AI legislation. Regulatory bodies are tasked with monitoring AI systems, conducting audits and imposing penalties for non-compliance.
Europe: The EU AI Act
The European Union has taken a significant step in regulating artificial intelligence with the EU AI Act, which aims to ensure that AI systems are safe, transparent and respect fundamental rights, while also fostering innovation. Published on July 12, 2024, the Act entered into force on August 1, 2024. Its provisions will be implemented gradually, with obligations on prohibited practices applying from February 2, 2025, and high-risk AI system requirements from August 2, 2026.
The EU AI Act categorizes AI systems based on their risk levels:
- Unacceptable risk (e.g. social scoring): prohibited.
- High risk (e.g. recruitment or medical devices): permitted subject to requirements and conformity assessments.
- “Transparency” risk (e.g. chatbots or deep fakes): permitted subject to information/transparency obligations.
- Minimal or no risk: permitted with no restrictions.
Unacceptable risk AI systems, such as those that manipulate human behavior or exploit vulnerabilities, are prohibited. High-risk AI systems, which include applications in critical infrastructure, education, and employment, must undergo rigorous conformity assessments before deployment.
The EU AI Act also establishes requirements for transparency and accountability. AI developers must provide clear information about how their systems work and ensure that they can be audited, which includes maintaining detailed documentation and logs to facilitate oversight and compliance. The Act applies to any company operating in or selling AI products into the EU: companies exporting into the EU market must ensure that their AI systems meet the specified standards and undergo the necessary assessments.
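For organizations taking stock of their AI portfolio, the tiered logic above can be captured in a simple internal data model. The sketch below is purely illustrative: the names (RiskTier, AISystem, compliance_checklist) are hypothetical, and the mapping of risk tiers to obligations is heavily simplified and paraphrased from the Act's structure rather than drawn from its legal text.

```python
from enum import Enum
from dataclasses import dataclass

# Illustrative sketch only: a simplified internal model of the EU AI Act's
# risk tiers and example obligations, for triaging an AI system inventory.
# Tier names and obligations are paraphrased assumptions, not legal text.

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # e.g. social scoring -> prohibited
    HIGH = "high"                   # e.g. recruitment, medical devices
    TRANSPARENCY = "transparency"   # e.g. chatbots, deep fakes
    MINIMAL = "minimal"             # e.g. spam filters

# Example obligations per tier (simplified; the real obligations are broader).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["conformity assessment", "risk management",
                    "technical documentation", "logging and auditability"],
    RiskTier.TRANSPARENCY: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

@dataclass
class AISystem:
    name: str
    tier: RiskTier

def compliance_checklist(system: AISystem) -> list:
    """Return the example obligations to track for a given system."""
    return OBLIGATIONS[system.tier]

# Usage: a hypothetical recruitment-screening tool lands in the high-risk tier.
screening_tool = AISystem(name="cv-screening-model", tier=RiskTier.HIGH)
print(compliance_checklist(screening_tool))
```

In practice, classifying a system depends on its intended purpose and context of use, so a model like this can only be a starting point for legal and compliance review.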
AI Legislation in the United States
While adopting a slightly different approach from the EU, the United States recognized early the need for comprehensive legislation to govern the development and use of AI. This has resulted in several legislative acts at both federal and state level; over 40 state bills were introduced in 2023 alone. The legislative landscape in the US is complex but largely reflects the growing need to balance innovation with safeguards that protect individuals and society from AI risks.
Several key pieces of legislation and regulatory actions are shaping the AI landscape in the United States. The Blueprint for an AI Bill of Rights, released in October 2022, outlines principles to protect individuals from the potential harms of AI, focusing on rights such as privacy, protection from algorithmic discrimination and transparency. The wide-ranging Executive Order on AI signed by President Joe Biden in 2023 emphasizes the need for safe, secure and trustworthy AI and mandates that federal agencies adopt measures ensuring AI systems are developed and used responsibly. A number of acts aim to promote innovation in safe, reliable, ethical and transparent ways, such as the National AI Initiative Act of 2020 and the Future of AI Innovation Act. In May 2024, the groundbreaking Colorado AI Act was signed; the first of its kind in the US, it is a cross-sectoral AI governance law covering the private sector. Further bills are under development at federal and state level, such as California's proposed SB 1047 AI safety bill, which aims to further regulate the most powerful AI models.
Chinese AI legislation
China has so far not released a comprehensive law on AI but has published regulations on specific AI applications. Unlike the European Union, where the EU AI Act acts as an umbrella regulatory framework for all AI systems, China is taking a vertical approach, regulating certain AI services individually.
With the ambitious goal of becoming the global leader in AI development and applications, the government in Beijing published the “New Generation AI Development Plan” in 2017, its first systemic and strategic plan in the AI sphere. The plan spurred an explosion in industry activity and policy support for AI development.
The Chinese AI governance landscape comprises five main categories:
- Governance policies & strategies. Beijing has released a series of national-level governance principles, plans, guidance and opinions on AI technologies, which provide a foundation for AI legislation. For example, the State Council published “Opinions on strengthening the ethical governance in science and technologies”, which sets out Beijing’s position on the ethical principles for AI technologies.
- Laws. Several existing laws address aspects of developing, providing, deploying and using AI systems and have a significant impact on AI legislation. Three stand out: the “Personal Information Protection Law”, the “Data Security Law” and the “Cybersecurity Law”.
- Administrative regulations. Some administrative regulations create concrete requirements for AI algorithms used in Internet information services, such as deep synthesis content, recommendation systems and generative AI systems.
- Municipal regulations. Some Chinese cities (Shanghai and Shenzhen) have also released municipal regulations on content-agnostic AI that aim to promote the development of the AI industry. Both regulations require that risks introduced by AI be controlled effectively, with more extensive scrutiny of high-risk AI products and services than of those presenting less risk.
- National standards. To support the regulation of AI systems, the China Electronics Standardization Institute is leading the development of a series of recommended standards covering multiple aspects of AI technologies.
Impact on Businesses
The legislative landscape across the globe is comprehensive, adding a degree of complexity that businesses must manage whether they operate in a single geography or internationally. Common to most frameworks is the ambition of governmental bodies to address both the opportunities and challenges posed by AI technologies.
At their core is the intention to promote innovation and the responsible development and use of AI, ensuring that it benefits society while safeguarding against potential risks. As AI technology continues to evolve rapidly, regulators will strive to keep pace, and businesses must do the same to remain compliant.
To ensure robust governance, companies would benefit from implementing an artificial intelligence management system (AIMS) compliant with ISO/IEC 42001, coupled with management systems targeting information security (ISO/IEC 27001) or privacy (ISO/IEC 27701). An AIMS provides a structured approach to risk management, including regulatory compliance, and establishes practices for continual improvement. It also enables certification by an independent third party such as DNV, which can help ensure safe, reliable and ethical AI systems, support regulatory compliance and bridge trust gaps.