...

We also provide comments on what organizations should do to prepare.

1.) What is covered: the "AI system"

After much debate, the definition of an "AI system" developed by the OECD, a globally recognized standard, has been adopted. This should support a global consensus around the types of systems intended to be regulated as "Artificial Intelligence". Note that the input from which outputs are generated may be provided by machines (e.g. autonomous vehicle sensors) or by humans (e.g. ChatGPT prompts). References to "content" as an output reflect the recent focus on generative AI as within the scope of the legislation.

2.) The risk-based approach

The parties to the trialogue confirmed a risk-based approach: the higher the risk, the stricter the rules.

...

The AI Act establishes obligations for AI based on its potential risks and level of impact on individuals and society as a whole. Accordingly, AI systems are divided into systems of limited risk and those posing high risk. In addition, certain AI systems are prohibited (see item 4 below).

...

High-risk AI systems will require extensive governance activities to ensure compliance.

3.) GPAI systems and foundation models

This was a key area of the last-minute negotiations. Dedicated rules for general-purpose AI systems (GPAIs) will ensure transparency along the value chain. These rules include drawing up technical documentation, complying with EU copyright law and providing detailed summaries of the content used for training (increasing transparency).

For high-impact GPAI models that may create systemic risks, additional obligations will apply, such as rules on model evaluations, systemic risk assessment and mitigation, adversarial testing, reporting to the Commission on serious incidents, cybersecurity and energy-efficiency reporting (until harmonized EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the law). These obligations seek to address some of the concerns about the societal risks that may be posed by the speed of development of these powerful tools. However, debate looks likely to continue over whether the bar for "high impact" has been set too high.

4.) Banned AI systems

In the end, it was agreed to ban certain AI systems considered a clear threat to the fundamental rights of people, such as:

  • biometric categorization systems that use sensitive characteristics (e.g. political, religious and philosophical beliefs, sexual orientation, race);

  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;

  • emotion recognition in the workplace and educational institutions;

  • social scoring based on social behavior or personal characteristics;

  • AI systems that manipulate human behavior to circumvent people's free will; and

  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

Use of AI systems by law enforcement authorities for their institutional purposes will be subject to specific safeguards.

5.) Promoting innovation

The AI Act promotes "regulatory sandboxes" and "real-world testing", established by national authorities to develop and train innovative AI before placement on the market. This was seen as a key "win" for the political groups seeking to ensure a pro-innovation and supportive regulatory framework for AI to develop within the EU.

6.) A new AI Regulator?

The EU institutions agreed on establishing new administrative infrastructures, including:

  • An AI Office, which will sit within the Commission and will be tasked with overseeing the most advanced AI models, contributing to fostering new standards and testing practices, and enforcing the common rules in all EU member states. It seems likely this will become the equivalent of the AI Safety Institutes recently announced in the UK and the US;

  • A scientific panel of independent experts, which will advise the AI Office about GPAI models and on the emergence of high-impact GPAI models, contribute to the development of methodologies for evaluating the capabilities of foundation models, and monitor possible material safety risks related to foundation models;

  • An AI Board, comprising EU member states' representatives, which will remain a coordination platform and an advisory body to the Commission while contributing to the implementation of the AI Act (e.g. designing codes of practice); and

  • An advisory forum for stakeholders, which will be set up to provide technical expertise to the AI Board.

The above references to independent experts and advisory forums can set an example for AI governance models in the private sector, with active involvement of external stakeholders.

It will also be interesting to see which approach the member states will take to establish their local AI authorities, i.e. whether they will empower existing authorities (e.g. Data Protection Authorities) or opt for other options (e.g. a new independent authority). The debate is still open.

7.) What are the penalties?

Non-compliance with the rules will lead to fines ranging from €7.5 million or 1.5% of global turnover to €35 million or 7% of global turnover, depending on the infringement and the size of the company.

8.) When will it come into force?

The final text of the AI Act will likely be published in the Official Journal of the European Union at the beginning of 2024.

The AI Act will then become applicable two years after its entry into force. Some specific provisions will apply within six months, while the rules on GPAIs will apply within 12 months.

9.) How businesses can prepare now for the entry into force of the AI Act

While waiting for the AI Act to be formally adopted (and to become fully applicable), organizations using or planning to use AI systems should start addressing its impacts by mapping their processes and assessing the level of compliance of their AI systems with the new rules. The AI Act is the first formal legislation to begin filling in the gaps of ethical and regulatory principles to which organizations must adhere when deploying AI.

Implementing an AI governance strategy should be the starting point. A robust strategy must be aligned with business objectives and identify the areas of the business where AI will most benefit the organization's strategic goals. It will also require full alignment with initiatives aimed at managing personal and non-personal data assets, in compliance with existing legislation.

...