European lawmakers pass AI regulation bill
June 15, 2023
Artificial intelligence has the world aflutter. The clear benefits it has already demonstrated have been accompanied by panicked calls ranging from banning it outright, akin to putting the toothpaste back in the tube, to regulating it heavily. The European Union is opting for the regulation route. The European Parliament has just passed the first comprehensive regulation of artificial intelligence, a development reported by Foreign Policy in EU Lawmakers Pass Landmark AI Regulation Bill. Australia is in the very early stages of considering regulation of artificial intelligence. It is an almost foregone conclusion that there will be some form of regulation and that it will impact on privacy law.
The European Parliament’s media release provides:
- Full ban on Artificial Intelligence (AI) for biometric surveillance, emotion recognition, predictive policing
- Generative AI systems like ChatGPT must disclose that content was AI-generated
- AI systems used to influence voters in elections considered to be high-risk
The rules aim to promote the uptake of human-centric and trustworthy AI and protect the health, safety, fundamental rights and democracy from its harmful effects.
On Wednesday, the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act with 499 votes in favour, 28 against and 93 abstentions ahead of talks with EU member states on the final shape of the law. The rules would ensure that AI developed and used in Europe is fully in line with EU rights and values including human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing.
Prohibited AI practices
The rules follow a risk-based approach and establish obligations for providers and those deploying AI systems depending on the level of risk the AI can generate. AI systems with an unacceptable level of risk to people’s safety would therefore be prohibited, such as those used for social scoring (classifying people based on their social behaviour or personal characteristics). MEPs expanded the list to include bans on intrusive and discriminatory uses of AI, such as:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorisation;
- biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- predictive policing systems (based on profiling, location or past criminal behaviour);
- emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy).
High-risk AI
MEPs ensured the classification of high-risk applications will now include AI systems that pose significant harm to people’s health, safety, fundamental rights or the environment. AI systems used to influence voters and the outcome of elections and in recommender systems used by social media platforms (with over 45 million users) were added to the high-risk list.
Obligations for general purpose AI
Providers of foundation models – a new and fast-evolving development in the field of AI – would have to assess and mitigate possible risks (to health, safety, fundamental rights, the environment, democracy and rule of law) and register their models in the EU database before their release on the EU market. Generative AI systems based on such models, like ChatGPT, would have to comply with transparency requirements (disclosing that the content was AI-generated, also helping distinguish so-called deep-fake images from real ones) and ensure safeguards against generating illegal content. Detailed summaries of the copyrighted data used for their training would also have to be made publicly available.
Supporting innovation and protecting citizens’ rights
To boost AI innovation and support SMEs, MEPs added exemptions for research activities and AI components provided under open-source licenses. The new law promotes so-called regulatory sandboxes, or real-life environments, established by public authorities to test AI before it is deployed.
Finally, MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their fundamental rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.
Quotes
After the vote, co-rapporteur Brando Benifei (S&D, Italy) said: “All eyes are on us today. While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose. We want AI’s positive potential for creativity and productivity to be harnessed but we will also fight to protect our position and counter dangers to our democracies and freedoms during the negotiations with Council”.
Co-rapporteur Dragos Tudorache (Renew, Romania) said: “The AI Act will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology, set to radically transform our societies through the massive benefits it can offer, evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law”.
Next steps
Negotiations with the Council on the final form of the law will begin later today.
The Foreign Policy article provides:
European Parliament lawmakers on Wednesday passed the landmark Artificial Intelligence Act, putting the bloc a critical step closer to formally adopting the world’s first major set of comprehensive rules regulating AI technology.
Under the AI Act, all artificial intelligence would be classified under four levels of risk, from minimal to unacceptable. Technology deemed to be an unacceptable risk—such as systems that judge people based on behavior, known as “social scoring,” as well as predictive policing tools—would be banned, and AI focused on children, other vulnerable populations, and hiring practices would face tougher scrutiny. The new regulations would also require greater privacy standards, stricter transparency laws, and steeper fines for failing to cooperate. The onus of enforcement would fall on European Union member states, with corporate violators facing fines of up to $33 million or 6 percent of the company’s annual global revenue, which could add up to billions of dollars for tech giants such as Google or Microsoft.
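For a rough sense of scale (the arithmetic and revenue figure are ours, not the article's): using Alphabet's publicly reported 2022 revenue of approximately US$283 billion as an assumed benchmark, a fine of 6 percent of annual global revenue would come to roughly US$17 billion, which is how the "billions of dollars" figure arises.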
The AI Act was first proposed in 2021, but negotiations accelerated following last year’s release of ChatGPT. Final approval of the bill is expected by the end of this year. Yet despite the EU’s monumental push to regulate artificial intelligence, the 27-nation bloc remains a background character in global AI leadership. Instead, both the United States and China are jockeying for the lead role. In October 2022, the Biden administration released its “Blueprint for an AI Bill of Rights,” which focuses on privacy standards and testing before AI systems become publicly available. Then in April, China followed suit, publishing a set of draft rules that would require chatbot-makers to adhere to state censorship laws. Trailing behind is Britain, which announced last week that London would host the world’s first summit on AI sometime this fall.
“Setting universal regulatory frameworks for technology has always been tricky, but it has gotten more challenging as that technology advances,” FP’s Rishi Iyengar reported, noting that AI has proved uniquely difficult to regulate due to how quickly the tech is progressing. As for the AI Act, this may be the EU’s one and only chance to secure significant AI guardrails. “We are not going to have another negotiation,” Gerard de Graaf, the EU’s senior digital envoy to the United States, told Iyengar. So the AI Act “has to stand the test of time.”
Simultaneously, the UK Information Commissioner has sounded a note of caution in his statement, Don’t be blind to AI risks in rush to see opportunity – ICO reviewing key businesses’ use of generative AI, which provides:
The Information Commissioner’s Office (ICO) will today call for businesses to address the privacy risks generative AI can bring before rushing to adopt the technology – with tougher checks on whether organisations are compliant with data protection laws.
New research indicates that generative AI could become a £1 trillion market within a decade, with potential to bring huge benefits to business and society.
Speaking at Politico’s Global Tech Day today, Stephen Almond, Executive Director of Regulatory Risk, will call for businesses to see those opportunities – but also to see the risks that come with them.
“Businesses are right to see the opportunity that generative AI offers, whether to create better services for customers or to cut the costs of their services. But they must not be blind to the privacy risks.
“Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won’t upset customers or regulators.”
– Stephen Almond, Executive Director of Regulatory Risk
Generative AI creates content after collecting or querying huge volumes of information from publicly accessible sources online, including people’s personal information. Laws already exist to protect people’s rights, including privacy, and apply to generative AI as an emerging technology.
In April, the ICO set out eight questions organisations developing or using generative AI that processes personal data need to be asking themselves. The regulator also committed to acting where organisations are not following the law.
Stephen Almond will today say:
“We will be checking whether businesses have tackled privacy risks before introducing generative AI – and taking action where there is risk of harm to people through poor use of their data. There can be no excuse for ignoring risks to people’s rights and freedoms before rollout.
“Businesses need to show us how they’ve addressed the risks that occur in their context – even if the underlying technology is the same. An AI-backed chat function helping customers at a cinema raises different questions compared with one for a sexual health clinic, for instance.”
Stephen Almond will be speaking at a panel discussion on Generative AI at Politico’s Global Tech Day on Thursday 15 June, as part of London Tech Week 2023.
The ICO is committed to supporting UK businesses to develop and innovate with new technologies that respect people’s privacy. Our recently updated Guidance on AI and Data Protection provides a roadmap to data protection compliance for developers and users of generative AI. Our accompanying risk toolkit helps organisations looking to identify and mitigate data protection risks.
Innovators identifying novel data protection questions can get advice from us through our Regulatory Sandbox and new Innovation Advice service. Building on this offer, we are in the process of piloting a Multi-Agency Advice Service for digital innovators needing joined up advice from multiple regulators with our partners in the Digital Regulation Cooperation Forum.
The eight questions the Commissioner wants developers and users of generative AI that processes personal data to ask themselves are:
- What is your lawful basis for processing personal data? If you are processing personal data you must identify an appropriate lawful basis, such as consent or legitimate interests.
- Are you a controller, joint controller or a processor? If you are developing generative AI using personal data, you have obligations as the data controller. If you are using or adapting models developed by others, you may be a controller, joint controller or a processor.
- Have you prepared a Data Protection Impact Assessment (DPIA)? You must assess and mitigate any data protection risks via the DPIA process before you start processing personal data. Your DPIA should be kept up to date as the processing and its impacts evolve.
- How will you ensure transparency? You must make information about the processing publicly accessible unless an exemption applies. If it does not take disproportionate effort, you must communicate this information directly to the individuals the data relates to.
- How will you mitigate security risks? In addition to personal data leakage risks, you should consider and mitigate risks of model inversion and membership inference, data poisoning, and other forms of adversarial attacks (a minimal membership-inference sketch follows this list).
- How will you limit unnecessary processing? You must collect only the data that is adequate to fulfil your stated purpose. The data should be relevant and limited to what is necessary.
- How will you comply with individual rights requests? You must be able to respond to people’s requests for access, rectification, erasure or other information rights.
- Will you use generative AI to make solely automated decisions? If so – and these have legal or similarly significant effects (e.g. major healthcare diagnoses) – individuals have further rights under Article 22 of UK GDPR.
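By way of illustration only – this sketch is not part of the ICO’s guidance – the membership-inference risk flagged in the security question above can be tested in miniature. Everything below is a hypothetical stand-in (synthetic data, an off-the-shelf classifier, a naive loss threshold); a real assessment would use an organisation’s own models and records.

```python
# Illustrative sketch only (not from the ICO guidance): a minimal
# loss-threshold membership-inference test, one of the security risks
# the ICO asks organisations to consider. All data here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for personal data: half used to train the model
# ("members"), half held out ("non-members").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, y_mem = X[:1000], y[:1000]
X_non, y_non = X[1000:], y[1000:]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_mem, y_mem)

def per_record_loss(model, X, y):
    """Per-record cross-entropy loss: unusually low loss hints that the
    record was in the training set."""
    probs = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(probs, 1e-12, None))

loss_mem = per_record_loss(model, X_mem, y_mem)
loss_non = per_record_loss(model, X_non, y_non)

# Simple attack: guess "member" whenever a record's loss falls below the
# median of all observed losses.
threshold = np.median(np.concatenate([loss_mem, loss_non]))
tpr = (loss_mem < threshold).mean()  # members correctly identified
fpr = (loss_non < threshold).mean()  # non-members wrongly identified

# An advantage (tpr - fpr) well above zero indicates the model leaks
# information about which individuals' records it was trained on.
print(f"membership-inference advantage: {tpr - fpr:.3f}")
```

Under these assumptions, a materially positive advantage on an organisation’s own data would be one concrete signal that the security-risk question above needs answering before rollout.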