It’s official: The European Union’s risk-based regulation for applications of artificial intelligence came into force on Thursday, August 1, 2024.
This starts the clock on a series of staggered compliance deadlines for different types of AI developers and applications. Most provisions will be fully applicable by mid-2026. But the first deadline, which enforces bans on a small number of prohibited uses of AI in specific contexts, such as law enforcement use of remote biometrics in public places, will apply in just six months’ time.
Under the bloc’s approach, most applications of AI are considered low or no risk, so they will not be in scope of the regulation at all.
A subset of potential uses of AI are classified as high risk, such as biometrics and facial recognition, or AI used in domains like education and employment. Systems used in these areas will have to be registered in an EU database and their developers will need to ensure compliance with risk and quality management obligations.
A third “limited risk” tier applies to AI technologies such as chatbots or tools that could be used to produce deepfakes. These will have to meet some transparency requirements to ensure users are not deceived.
Another important strand of the law applies to developers of so-called general purpose AIs (GPAIs). Again, the EU has taken a risk-based approach, with most GPAI developers facing light transparency requirements. Just a subset of the most powerful models will be expected to undertake risk assessment and mitigation measures, too.
What exactly GPAI developers will need to do to comply with the AI Act is still being discussed, as Codes of Practice are yet to be drawn up. Earlier this week, the AI Office, a strategic oversight and AI-ecosystem building body, kicked off a consultation and call for participation in this rule-making process, saying it expects to finalize the Codes in April 2025.
In its own primer for the AI Act late last month, OpenAI, the maker of the GPT large language model that underpins ChatGPT, wrote that it anticipated working “closely with the EU AI Office and other relevant authorities as the new law is implemented in the coming months.” That includes putting together technical documentation and other guidance for downstream providers and deployers of its GPAI models.
“If your organization is trying to determine how to comply with the AI Act, you should first attempt to classify any AI systems in scope. Identify what GPAI and other AI systems you use, determine how they are classified, and consider what obligations flow from your use cases,” OpenAI added, offering some compliance guidance of its own to AI developers. “You should also determine whether you are a provider or deployer with respect to any AI systems in scope. These issues can be complex so you should consult with legal counsel if you have questions.”
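The triage OpenAI describes, identify your systems, classify them by risk tier, then work out which obligations follow, can be pictured as a simple lookup. The sketch below is purely illustrative: the tier names come from the article, but the example use-case labels and keyword buckets are assumptions for demonstration, not a legal classification (which requires analysis of the Act itself and counsel, as OpenAI notes).

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned uses, e.g. law enforcement remote biometrics in public
    HIGH = "high"              # e.g. biometrics, or AI in education and employment
    LIMITED = "limited"        # e.g. chatbots, deepfake tools (transparency duties)
    MINIMAL = "minimal"        # low/no risk: out of scope of the regulation

# Hypothetical buckets drawn only from the examples mentioned in this article.
PROHIBITED_USES = {"law_enforcement_remote_biometrics_public"}
HIGH_RISK_DOMAINS = {"biometrics", "facial_recognition", "education", "employment"}
LIMITED_RISK_KINDS = {"chatbot", "deepfake_generation"}

def classify(use_case: str) -> RiskTier:
    """Toy triage of a use-case label into a risk tier. Not legal advice."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_KINDS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("employment").value)        # high
print(classify("chatbot").value)           # limited
print(classify("spam_filtering").value)    # minimal
```

In practice each tier then maps to different obligations (database registration and risk/quality management for high risk, transparency notices for limited risk), which is the “obligations flow from your use cases” step in OpenAI’s guidance.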