The digital revolution is inextricably linked to Artificial Intelligence (AI). AI technologies have become the cornerstone of various products and services in finance, education, healthcare, and agriculture.

However, the regulatory framework for AI has lagged behind technological advancements. To address this gap, the EU is spearheading the Artificial Intelligence Act, the world’s first comprehensive set of rules governing AI.

What is the EU AI Act?

The primary goal of the EU AI Act is to balance the opportunities and risks associated with AI development and deployment. To achieve this, AI systems must be:

  • Safe: Ensuring that AI systems do not pose a threat to human safety or well-being.
  • Transparent: Users should be aware when they are interacting with AI.
  • Traceable: The development and deployment of AI systems must be traceable to ensure accountability.
  • Non-discriminatory: AI systems should not perpetuate or create biases.
  • Environmentally friendly: AI development should consider its environmental impact.
  • Human-in-the-loop: AI systems should be supervised by humans, not vice versa.

As outlined in Article 1, the AI Act aims to promote “human-centric and trustworthy AI” while safeguarding fundamental rights, democracy, and the rule of law.

The AI Act's Risk-Based Approach

The AI Act adopts a risk-based approach to regulation: the stricter the potential harm, the stricter the rules. AI systems are categorized into four risk levels:

  • Unacceptable Risk: AI systems that pose an unacceptable threat to safety, health, or fundamental rights will be banned. This includes systems that manipulate human behavior, social scoring systems, and real-time remote biometric identification in public spaces (e.g., live facial recognition).
  • High-Risk: High-risk AI systems will be subject to strict pre-market assessments and ongoing monitoring. This category includes AI used in critical infrastructure, education, and products covered by EU product safety legislation.
  • Limited Risk: AI systems with limited risk will be subject to transparency requirements, ensuring users are aware when interacting with AI. Generative AI, such as ChatGPT, falls into this category.
  • Minimal or No Risk: AI systems with minimal or no risk will face few, if any, regulatory restrictions.
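The tiered logic above can be sketched in code. The following is a minimal illustrative model, not an implementation of the Act: the use-case names, the `RiskTier` enum, and the mapping are assumptions chosen for illustration, and real classification depends on the Act's annexes and legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # pre-market assessment + ongoing monitoring
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # few, if any, restrictions

# Hypothetical mapping from example use cases to tiers (illustrative only).
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_remote_biometric_id": RiskTier.UNACCEPTABLE,
    "critical_infrastructure": RiskTier.HIGH,
    "education_admissions": RiskTier.HIGH,
    "generative_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Summarize the regulatory consequence for a given use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "conformity assessment before market entry, then monitoring",
        RiskTier.LIMITED: "transparency: disclose that users are interacting with AI",
        RiskTier.MINIMAL: "no specific obligations",
    }[tier]
```

For example, `obligations("social_scoring")` returns "prohibited", while `obligations("spam_filter")` returns "no specific obligations".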

Key Provisions and Implications

  • Generative AI: AI models capable of generating text, images, or other media content, like ChatGPT, will be required to disclose that the content is AI-generated and to prevent the generation of illegal content.
  • Biometric Data: The use of biometric data, such as facial recognition, will be strictly regulated, with real-time remote biometric identification generally prohibited in public spaces.
  • Data Governance: The Act emphasizes the importance of high-quality data for training AI systems and addresses issues related to data privacy and security.
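The disclosure requirement for generative AI can be illustrated with a short sketch. This is a hypothetical example of how a provider might label output to meet a transparency obligation; the `GeneratedContent` type and label text are assumptions, not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class GeneratedContent:
    """A piece of model output together with its provenance flag."""
    text: str
    ai_generated: bool = True

def disclose(content: GeneratedContent) -> str:
    """Prefix AI-generated output with a disclosure label before display."""
    label = "[AI-generated] " if content.ai_generated else ""
    return label + content.text
```

In practice, providers are exploring more robust mechanisms than visible labels, such as machine-readable watermarks, but the principle is the same: the recipient must be able to tell that the content came from an AI system.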

The EU AI Act represents a significant step forward in the global governance of AI. By adopting a risk-based approach and focusing on transparency, accountability, and human oversight, the EU aims to ensure that AI is developed and deployed in a responsible and ethical manner. As AI continues to evolve, it is likely that the regulatory landscape will also continue to adapt.