AI LAW: ONE STEP CLOSER TO FIRST RULES FOR ARTIFICIAL INTELLIGENCE
In order to guarantee a human-centric and ethical development of artificial intelligence (AI), the European Parliament has approved new transparency and risk management rules for AI systems.
On May 11, the Internal Market Committee and the Civil Liberties Committee in Strasbourg adopted the draft negotiating mandate for the first rules on artificial intelligence with 84 votes in favour, 7 against and 12 abstentions. In their amendments to the Commission's proposal, MEPs want to ensure that AI systems are overseen by humans, safe, transparent, accountable, non-discriminatory and environmentally friendly. They also want a uniform, technology-neutral definition of AI so that the rules can apply to the AI systems of today and tomorrow.
Risk-Based Approach to AI – Prohibited AI Practices
The regulations follow a risk-based approach and set out obligations for providers and users according to the level of risk the AI can generate. AI systems posing an unacceptable risk to human safety would be strictly prohibited. This includes systems that deploy subliminal or deliberately manipulative techniques, exploit people's vulnerabilities, or are used for social scoring (classifying people based on their social behaviour, socioeconomic status or personal characteristics).
MEPs significantly amended the list to include bans on intrusive and discriminatory uses of AI systems, such as:
- real-time biometric recognition systems in public spaces;
- ex post biometric recognition systems, with the sole exception of law enforcement agencies for the purpose of prosecuting serious crimes and only with judicial approval;
- biometric categorization systems using sensitive characteristics (e.g. gender, race, ethnicity, nationality, religion, political orientation);
- predictive policing systems (based on profiling, location or past criminal behaviour);
- systems for detecting emotions in law enforcement, border protection, the workplace and educational institutions; and
- indiscriminate scraping of biometric data from social media or video surveillance footage to create facial recognition databases (a violation of human rights and the right to privacy).
High-Risk AI
MEPs have expanded the classification of high-risk areas to include harm to health, safety, fundamental rights and the environment. They also added AI systems used to influence voters in political campaigns, as well as recommender systems used by social media platforms (those with more than 45 million users under the Digital Services Act), to the list of high-risk areas.
General Purpose AI – Transparency Measures
MEPs included obligations for providers of foundation models – a new and rapidly evolving area of AI – to ensure robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. These providers would have to assess and mitigate risks, comply with design, information and environmental requirements, and register in the EU database.
Generative foundation models, like GPT, would have to meet additional transparency requirements, such as disclosing that content was generated by AI. They would also have to be designed so as to prevent the generation of illegal content, and providers would have to publish summaries of the copyrighted data used for training.
Promoting innovation and protecting citizens’ rights
To encourage AI innovation, MEPs included exceptions for research activities and AI components under open-source licenses in the regulations. The new law encourages regulatory sandboxes, or controlled environments, set up by public authorities to test AI before it is deployed.
MEPs want to strengthen citizens' right to lodge complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that significantly affect their rights. MEPs also recast the role of the EU's artificial intelligence agency, which would be tasked with monitoring the implementation of the AI rulebook.
Before negotiations with the Council on the final form of the law can begin, the draft negotiating mandate must be endorsed by the full Parliament; the vote is expected during the June 12-15 session.