
Understanding the implications of the AI Act and AI Governance

By:
Amber Ahmed,
Christiaan Dommerholt
Artificial Intelligence (AI) is transforming industries globally, offering unprecedented opportunities for innovation, efficiency, and growth. However, alongside these opportunities come significant challenges and risks. To address these, AI governance frameworks are being developed to ensure the responsible and ethical use of AI technologies.

One of the most comprehensive regulatory efforts in this area is the AI Act proposed by the European Union. This article explores AI governance and the AI Act, focusing on their implications for companies.

Overview of the AI Act and the requirements

The European AI Act, also known as the European Artificial Intelligence Act, is designed to create a harmonized framework for AI regulation across the European Union, ensuring that AI systems are developed and deployed in a manner that is safe, transparent, and respectful of the fundamental rights of European citizens.

The Act establishes clear definitions for the various actors involved in AI, including providers, deployers, importers, distributors, and product manufacturers. This approach ensures that all parties engaged in the development, usage, importation, distribution, and manufacturing of AI systems are held accountable for their specific roles. The rules that apply to an AI system are based on its risk classification.

Risk-based classification

The AI Act takes a risk-based approach. It classifies AI systems into four risk levels: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk. An illustrative sketch of this classification follows the list below.

  • Unacceptable Risk systems are AI systems that can severely threaten the safety and rights of European citizens. These systems are considered too dangerous to use and are prohibited. Examples include social scoring and certain types of invasive surveillance that exploit the vulnerabilities of specific groups, such as children.
  • High Risk systems operate in critical areas where biases or errors can have serious consequences. These AI systems are subject to strict regulatory requirements to ensure their safety and transparency. Examples of high-risk AI include systems used in healthcare, Human Resources, and law enforcement. These systems must have a risk management system, detailed technical documentation, transparency measures, human oversight, and more.
  • Limited Risk systems require specific transparency measures but are not considered high-risk. These systems interact with users and must ensure that users are informed they are dealing with AI. Examples include chatbots and AI-generated content.
  • Minimal Risk AI systems pose little to no risk to users and society; they are therefore allowed more flexibility and are not heavily regulated.
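
To make this classification concrete in an internal AI inventory, the four risk levels and their headline obligations can be captured in a simple data structure. The sketch below is our own illustration, not part of the AI Act itself; the names RiskLevel and obligations_for are assumptions, and the obligation lists are abbreviated.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk levels defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # e.g. healthcare, HR, law enforcement
    LIMITED = "limited"            # transparency duties, e.g. chatbots
    MINIMAL = "minimal"            # largely unregulated

# Illustrative (non-exhaustive) mapping of risk level to headline obligations.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["banned from the EU market"],
    RiskLevel.HIGH: [
        "risk management system",
        "detailed technical documentation",
        "transparency measures",
        "human oversight",
    ],
    RiskLevel.LIMITED: ["inform users they are interacting with AI"],
    RiskLevel.MINIMAL: ["no specific obligations under the Act"],
}

def obligations_for(level: RiskLevel) -> list[str]:
    """Return the headline obligations recorded for a given risk level."""
    return OBLIGATIONS[level]

print(obligations_for(RiskLevel.HIGH))
```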

Timeline

  • 2 February 2025: Ban on AI systems with unacceptable risk
  • 2 May 2025: Codes of practice are to be ready
  • 2 August 2025: Governance rules and obligations for General Purpose AI (GPAI) become applicable
  • 2 August 2026: Start of application of the EU AI Act for AI systems (this includes High Risk and Limited Risk AI systems)
  • 2 August 2027: Application of the entire EU AI Act
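
For planning purposes, these milestones can also be tracked programmatically. The following sketch is our own illustration, not an official tool; it simply returns the milestones already in force on a given date.

```python
from datetime import date

# EU AI Act milestones, as listed in the timeline above.
MILESTONES = [
    (date(2025, 2, 2), "Ban on AI systems with unacceptable risk"),
    (date(2025, 5, 2), "Codes of practice are to be ready"),
    (date(2025, 8, 2), "Obligations for General Purpose AI (GPAI) apply"),
    (date(2026, 8, 2), "Application for High Risk and Limited Risk AI systems"),
    (date(2027, 8, 2), "Application of the entire EU AI Act"),
]

def milestones_in_force(today: date) -> list[str]:
    """Return the milestones that already apply on the given date."""
    return [desc for when, desc in MILESTONES if when <= today]

for milestone in milestones_in_force(date(2026, 1, 1)):
    print(milestone)
```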

Important steps for organizations

The European Union's approach presents both challenges and opportunities. Understanding and adapting to its implications is crucial for staying compliant and competitive. Furthermore, these regulations and governance practices must be integrated into the company's policies to ensure comprehensive AI governance. Here are some steps organizations can take to put in place an AI governance structure that is in line with the AI Act:

  1. Educate and train employees: Ensure that all employees, especially those involved in the development and deployment of AI systems, have a clear understanding of the AI Act's requirements and of ethical guidelines.
  2. Establish clear accountability: Define clear roles and responsibilities within the organization for AI governance to ensure that every aspect of AI development and deployment is overseen and managed effectively.
  3. Implement policies: Develop and enforce internal policies for handling sensitive data, ensuring that risk classification and ethical guidelines are embedded in those policies.
  4. Conduct continuous monitoring and evaluation: Regularly monitor AI systems to ensure they operate as intended and remain compliant with the AI Act. Perform continuous evaluations, which can also be carried out by external auditors, to identify and address emerging risks or issues promptly (a simple illustrative check follows this list).
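
As one hedged illustration of step 4, a lightweight recurring check could flag AI systems whose last evaluation is overdue. The review intervals, field names, and the needs_review helper below are assumptions made for this sketch, not requirements from the AI Act.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AISystem:
    """Minimal record for an AI system in the internal inventory."""
    name: str
    risk_level: str      # e.g. "high", "limited", "minimal"
    last_evaluated: date

# Assumed review intervals in days per risk level (illustrative only).
REVIEW_INTERVALS = {"high": 90, "limited": 180, "minimal": 365}

def needs_review(system: AISystem, today: date) -> bool:
    """Flag a system whose last evaluation is older than its review interval."""
    interval = timedelta(days=REVIEW_INTERVALS[system.risk_level])
    return today - system.last_evaluated > interval

inventory = [
    AISystem("cv-screening", "high", date(2025, 1, 15)),
    AISystem("support-chatbot", "limited", date(2025, 3, 1)),
]
overdue = [s.name for s in inventory if needs_review(s, date(2025, 6, 1))]
print(overdue)  # ['cv-screening']
```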

By taking these steps, companies can effectively integrate AI Governance into their organizational policies. This proactive approach will help mitigate risks, ensure compliance with the AI Act, and foster a culture of responsible AI use. This is crucial to harness the benefits of AI technology while also managing risks, ensuring that AI systems are deployed safely, transparently, and ethically.

Want to know more about the AI Act or AI Governance?

Please contact our specialists. They are here to help.
