The European Union (EU) now has the first law in the world to regulate artificial intelligence (AI) and establish a global standard for its use. After the plenary session of the European Parliament approved the regulation last March, in May it was the Council - which brings together the representatives of the 27 governments - that gave, without debate and unanimously, final approval to legislation described as "historic". The rules will apply gradually until they fully come into force in 2026.

The European AI Law is now in force. As published last June in the Official Journal of the European Union, today, August 1, 2024, the first global regulation on artificial intelligence takes effect.

The European Parliament approved the Artificial Intelligence Law in March of this year, which guarantees security and respect for fundamental rights while promoting innovation.

The Regulation, agreed in negotiations with Member States in December 2023, was supported by the European Chamber with 523 votes in favor, 46 against and 49 abstentions.

Its objective is to protect fundamental rights, democracy, the rule of law and environmental sustainability against high-risk AI, while driving innovation and establishing Europe as a leader in the sector. The Regulation establishes a series of obligations for AI based on its potential risks and level of impact.

Prohibited applications

The new rules prohibit certain applications of artificial intelligence that threaten citizens' rights, such as biometric categorization systems based on sensitive characteristics and the indiscriminate scraping of facial images from the Internet or from surveillance camera footage to create facial recognition databases. Also prohibited are emotion recognition in the workplace and in schools, social scoring systems, predictive policing (when based solely on a person's profile or an assessment of their characteristics), and AI that manipulates human behavior or exploits people's vulnerabilities.

Law Enforcement Exemptions

The use of biometric identification systems by law enforcement is prohibited in principle, except in very specific and well-defined situations. "Real-time" biometric identification systems may only be used if a series of strict safeguards are met: for example, their use is limited to a specific period and place and requires prior judicial or administrative authorization. Such cases may include the targeted search for a missing person or the prevention of a terrorist attack. Using these systems after the fact is considered a high-risk use, which requires judicial authorization linked to a criminal offense.

Obligations for high-risk systems

Clear obligations are also foreseen for other high-risk AI systems (so classified because they can cause significant harm to health, security, fundamental rights, the environment, democracy and the rule of law). Examples of high-risk uses of AI include critical infrastructure, education and vocational training, employment, essential public and private services (for example, healthcare or banking), certain law enforcement systems, migration and border management, justice and democratic processes (such as influencing elections). These systems must assess and reduce risks, maintain usage records, be transparent and accurate, and have human oversight. Citizens will have the right to file complaints about AI systems and to receive explanations of decisions based on them that affect their rights.

Transparency requirements

General-purpose AI systems, and the models on which they are based, must meet certain transparency requirements, respect EU copyright law and publish detailed summaries of the content used to train their models. The most powerful models, which could pose systemic risks, will have to meet additional requirements, such as conducting model evaluations, analyzing and mitigating systemic risks, and reporting incidents.

In addition, artificial or manipulated images, audio or video content ("deepfakes") must be clearly labeled as such.

"The regulations also establish that generative artificial intelligence models, such as ChatGPT, will have to make clear whether a text, a song or a photograph has been generated through AI, and guarantee that the data used to train the systems respects copyright."

Measures to support innovation and SMEs

Controlled testing environments and real-world trials will have to be made available at the national level to SMEs and start-ups, so that they can develop and train innovative AI before bringing it to market.

At ILcoworking & Legal Services, we are always up to date and committed to best practices in the market. Furthermore, our legal partner, Acountax, can provide more information if you are interested in learning more about this law and its implications.