
The EU AI Act applies only to areas within the scope of EU law and provides exemptions, for example for systems used exclusively for military and defence purposes or solely for research.

On 8 December 2023, the European institutions reached provisional political agreement on the world's first comprehensive law on artificial intelligence: the new AI Act.

...

We also provide comments on what organizations should do to prepare.

1.) What is covered: the "AI system"

After much debate, the globally recognized definition developed by the OECD has been adopted. This should support a global consensus around the types of systems that are intended to be regulated as "artificial intelligence". Note that the inputs from which outputs are generated may be provided by machines (e.g. autonomous vehicle sensors) or humans (e.g. ChatGPT prompts). References to "content" as an output reflect the recent focus on bringing generative AI within the scope of the legislation.

2.) The risk-based approach

The parties to the trilogue confirmed a risk-based approach: the higher the risk, the stricter the rules. The AI Act establishes obligations for AI based on its potential risks and its level of impact on individuals and society as a whole. Accordingly, AI systems are divided into those of limited risk and those posing high risk. In addition, certain AI systems are prohibited outright (see item 4 below).

...

High-risk AI systems will require extensive governance activities to ensure compliance.

3.) GPAI systems and foundation models

This was a key area of the last-minute negotiations. Dedicated rules for general-purpose AI systems (GPAIs) will ensure transparency along the value chain. These rules include drawing up technical documentation, complying with EU copyright law and providing detailed summaries of the content used for training (increasing transparency).

For high-impact GPAI models that may create systemic risks, additional obligations will apply, such as rules on model evaluations, systemic risk assessment and mitigation, adversarial testing, reporting of serious incidents to the Commission, cybersecurity, and energy-efficiency reporting (until harmonized EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the law). These obligations were introduced to address concerns about the societal risks that the rapid development of these powerful tools may pose. However, debate looks likely to continue over whether the bar for "high impact" has been set too high.

4.) Banned AI systems

In the end, it was agreed to ban certain high-risk AI systems considered a clear threat to the fundamental rights of people, such as:

  • biometric categorization systems that use sensitive characteristics (e.g. political, religious and philosophical beliefs, sexual orientation, race);

  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;

  • emotion recognition in the workplace and educational institutions;

  • social scoring based on social behavior or personal characteristics;

  • AI systems that manipulate human behavior to circumvent people's free will; and

  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

Use of AI systems by law enforcement authorities for their institutional purposes will be subject to specific safeguards.

5.) Promoting innovation

The AI Act promotes "regulatory sandboxes" and "real-world testing", established by national authorities to develop and train innovative AI before placement on the market. This was seen as a key "win" for the political groups seeking to ensure a pro-innovation and supportive regulatory framework for AI to develop within the EU.

6.) A new AI Regulator?

The EU institutions agreed on establishing new administrative infrastructures including:

...

It will also be interesting to see which approach member states take to establish local AI authorities, i.e. whether they empower existing authorities (e.g. Data Protection Authorities) or opt for other solutions (e.g. a new independent authority). The debate is still open.

7.) What are the penalties?

Non-compliance with the rules will lead to fines ranging from €7.5 million or 1.5% of global turnover to €35 million or 7% of global turnover, depending on the infringement and the size of the company.

8.) When will it come into force?

The final text of the AI Act will likely be published in the Official Journal of the European Union at the beginning of 2024.

The AI Act would then become applicable two years after its entry into force. Some specific provisions (such as the prohibitions) will apply within six months, while the rules on GPAIs will apply within 12 months.

9.) How businesses can prepare now for the entry into force of the AI Act

While waiting for the AI Act to be formally adopted (and to become fully applicable), organizations using or planning to use AI systems should start addressing its impact by mapping their processes and assessing how far their AI systems already comply with the new rules. The AI Act is the first formal legislation to set out the ethical and regulatory principles to which organizations must adhere when deploying AI.

...