True to its record of regulating large technology and internet companies, embodied in instruments such as the General Data Protection Regulation (GDPR) and the Digital Services and Digital Markets Acts (DSA and DMA, respectively), the EU has taken the first steps toward regulating artificial intelligence (AI).
The European Parliament approved the first rules to regulate AI, a draft that aims to ensure that AI systems are overseen by people, safe, transparent, traceable, non-discriminatory and environmentally friendly.
On 11 May, the European Parliament's Internal Market and Civil Liberties committees approved a draft negotiating mandate on artificial intelligence, with 84 votes in favour, 7 against and 12 abstentions.
In their amendments to the European Commission's proposal, MEPs also established that the definition of AI should be technology-neutral, so that it can apply to current and future AI systems.
A European risk-based approach to AI
The rules follow a risk-based approach, establishing obligations for providers and users according to the level of risk the AI poses, and include a list of prohibitions on intrusive and discriminatory uses of AI systems:
- "Real-time" remote biometric identification systems in publicly accessible spaces.
- "Post" remote biometric identification systems.
- Biometric categorisation systems that use sensitive characteristics (such as gender, race, ethnicity, citizenship status, religion or political orientation).
- Predictive policing systems (based on profiling, location or past criminal behaviour).
- Emotion recognition systems in law enforcement, border management, the workplace and educational institutions.
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.
MEPs also expanded the classification of high-risk areas to include harm to people's health, safety, fundamental rights or the environment, as well as AI systems that influence voters in political campaigns and the recommender systems used by social media platforms.
Measures for general-purpose and high-risk AI
In addition, MEPs included obligations for providers of foundation models, who must guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law.
Generative foundation models, such as OpenAI's ChatGPT, must comply with additional transparency requirements: disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of the copyrighted data used for training.
The draft must be approved by the full Parliament and is expected to be put to a vote during its 12-15 June session, before negotiations with the Council on the final form of the law can begin.