
The European Union approves new rules on AI

14 December 2023

After a tour de force that lasted 36 hours, in the late evening of Friday 8 December the European Union finally reached an agreement (albeit provisional) on its law on artificial intelligence, known as the Artificial Intelligence Act, effectively approving new rules on AI.

This regulation is the first legislation in the world that aims to set limits on the development and use of artificial intelligence. Indeed, a note from the European Parliament states: "this Regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk artificial intelligence, while at the same time stimulating innovation and making Europe a leader in the sector"1.

Ursula von der Leyen, President of the European Commission, welcomed the agreement: "The European Union's AI Act is a world first, a unique legal framework for the development of AI that we can trust, and for the safety and fundamental rights of people and businesses".

However, the process for the package of rules on AI does not end here: several more weeks will be needed to finalize the details before the text is submitted for final approval. Overall, the world's first AI regulation will only come into force in about two years.

Waiting for the new rules on AI

In view of the transitional period necessary for the application of requirements on high-risk artificial intelligence systems, the European Commission introduced the AI Pact on 16 November. This initiative aims to encourage the industry to voluntarily adopt these requirements before the legal deadline, focusing particular attention on generative AI systems in preparation for the European elections scheduled for June next year. 

Companies participating in the Pact will sign formal commitments, supported by ongoing or planned actions. These efforts will be published by the European Commission to increase visibility and strengthen trust in the sector. A call for expressions of interest is currently underway, with discussions expected in the first half of 2024 to explore preliminary ideas and good practices. Following the formal adoption of the Artificial Intelligence Act, leading organizations in the Pact will be asked to make their first commitments public.

What does the regulation provide?

The new rules on AI categorize the risk posed by AI into four levels: low, limited, high and unacceptable. Depending on the level of risk, systems will be subject to different obligations. Low-risk AI, for example, will not be subject to any obligation other than making explicit that the content it generates comes from an AI.

For high-risk AI, however, the impact it may have on fundamental rights will be assessed before it is placed on the market. Finally, AI deemed to pose an unacceptable level of risk will be blocked from publication.

But what is meant by unacceptable risk? An example is the indiscriminate collection of images of people's faces, whether taken from the internet, from security cameras, etc. 
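
As a rough illustration of this tiered approach, the Python sketch below maps each of the four risk levels to the obligations described above. The enum, the dictionary and the helper function are purely illustrative and not part of the Act or of any official tooling; the "limited" tier is left as a placeholder because its obligations are not detailed here.

from enum import Enum

class RiskLevel(Enum):
    """The four risk levels named in the regulation, as summarized above."""
    LOW = "low"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical mapping from risk level to the obligations mentioned in the article.
OBLIGATIONS = {
    RiskLevel.LOW: ["disclose that generated content comes from an AI"],
    RiskLevel.LIMITED: ["(not detailed in this article)"],
    RiskLevel.HIGH: ["fundamental-rights impact assessment before market placement"],
    RiskLevel.UNACCEPTABLE: ["blocked from publication"],
}

def obligations_for(level: RiskLevel) -> list[str]:
    """Return the obligations associated with a given risk level."""
    return OBLIGATIONS[level]

if __name__ == "__main__":
    for level in RiskLevel:
        print(f"{level.value}: {'; '.join(obligations_for(level))}")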

Rules on AI: facial recognition

An exception

This aspect of the discussions risked creating an impasse in the long negotiations, and numerous changes were made compared to the Commission's initial proposal. One of the crucial issues concerns the emergency procedure that will allow police to use a high-risk facial recognition system that has not completed the required evaluation.

This exception, although approved, must be in line with the specific mechanisms for the protection of fundamental rights in order to be applied. Regarding the use of real-time remote biometric identification systems in public areas, exceptions have been introduced "with judicial approval and limited to a well-defined list of crimes". 

The use of such systems in 'post-remote' mode would be permitted exclusively for the targeted search of people convicted or suspected of serious crimes. Instead, real-time use would be limited to “specific times and places” for targeted searches of victims (such as in cases of kidnapping, trafficking or sexual exploitation), to prevent “specific and current” terrorist threats, or to locate or identify people suspected of specific crimes such as terrorism, human trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in criminal organizations and environmental crimes.

Further provisions

The agreement then introduced new AI rules to deal with situations where AI is used for many different purposes (general-purpose AI) and where such general-purpose technology is then integrated into other high-risk systems. In this framework, an AI Office will be created within the European Commission, tasked with “monitoring the most sophisticated models, supporting the development of standards and testing practices, and applying common regulations across all member states”.

In parallel, a consultative forum for stakeholders, such as representatives from industry, SMEs, startups, civil society and academia, will be established to provide technical expertise to the AI Office.

To cope with the wide range of functions that AI systems can perform (such as generating video, text, images or computer code, holding natural-language conversations, or carrying out calculations) and with their rapid evolution, it has been established that “high-impact” generative AI models (trained on large sets of generalized, unlabeled data) will have to meet rigorous transparency obligations before entering the market.

These obligations include drafting technical documentation, complying with EU copyright law and providing detailed summaries of the data used for training.

Support and sanctions

As regards support for innovation in the artificial intelligence sector, the agreement provides for the creation of regulatory “sandboxes”. These test environments will provide a controlled space for the development, testing and validation of new AI systems, including in real-world conditions. The objective is to reduce the administrative burden on small businesses, protecting them from competition from large market players, through support measures and well-defined exemptions.

Finally, there are of course sanctions for those who break the new rules on AI. Individuals and organizations will be able to lodge complaints with market surveillance authorities in case of non-compliance with EU AI legislation. Companies that violate the regulation will face fines based on a percentage of their annual global turnover or a fixed amount, whichever is greater.

The fines vary based on the severity of the violation: 35 million euros or 7% of turnover for violations of prohibited applications, 15 million euros or 3% for violations of legal obligations, and 7.5 million euros or 1.5% for providing incorrect information. For small and medium-sized businesses and startups, more proportionate sanctions are envisaged.
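
As a minimal sketch of the "fixed amount or percentage of turnover, whichever is greater" rule, the Python snippet below computes the applicable fine for each tier cited above. The tier names and the helper function are hypothetical; only the amounts and percentages come from the figures reported here, and the more proportionate caps foreseen for SMEs and startups are not modeled.

# Fine tiers cited in the article: (fixed amount in EUR, share of annual global turnover).
FINE_TIERS = {
    "prohibited_application": (35_000_000, 0.07),   # EUR 35M or 7%
    "obligation_violation":   (15_000_000, 0.03),   # EUR 15M or 3%
    "incorrect_information":  (7_500_000,  0.015),  # EUR 7.5M or 1.5%
}

def max_fine(tier: str, annual_global_turnover_eur: float) -> float:
    """Return whichever is greater: the fixed amount or the turnover-based amount."""
    fixed_amount, share = FINE_TIERS[tier]
    return max(fixed_amount, share * annual_global_turnover_eur)

# Example: a company with EUR 2 billion in turnover fined for a prohibited application
# pays 7% of turnover (EUR 140 million), since that exceeds the EUR 35 million fixed amount.
print(max_fine("prohibited_application", 2_000_000_000))  # 140000000.0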