The EU Parliament has approved a groundbreaking framework to regulate artificial intelligence (AI), becoming the first in the world to do so comprehensively. This move comes as the AI sector experiences rapid growth, yielding substantial profits but also raising concerns about issues like bias, privacy, and broader societal impacts.

The AI Act operates by categorizing AI products based on their risk levels, then adjusting scrutiny accordingly. The law’s creators emphasize its aim to make AI more “human-centric,” with MEP Dragos Tudorache highlighting that this legislation is just the beginning of a new era of governance centered around technology.

This regulation places the EU at the forefront of global efforts to manage the risks associated with AI, surpassing similar efforts in China and the US. Enza Iannopollo, a principal analyst at Forrester, believes the EU’s AI Act will establish a global standard for trustworthy AI, leaving other regions, including the UK, to catch up.

The AI Act aims to regulate AI based on its potential to harm society, applying stricter rules to higher-risk applications. It prohibits uses of AI that pose a clear threat to fundamental rights, such as certain biometric systems that categorise people by sensitive characteristics. High-risk AI systems, such as those used in healthcare and law enforcement, will face stringent requirements, while low-risk services like spam filters will be subject to lighter regulation.

The legislation also addresses risks associated with generative AI tools and chatbots, requiring developers to be transparent about the data used to train their models and to comply with EU copyright law. Although the Act must still undergo further scrutiny and translation before becoming law, businesses are already gearing up to comply with its provisions.