European Commission releases proposed regulatory framework governing artificial intelligence

April 27, 2021

The European Commission has released its proposed regulation of Artificial Intelligence, the first ever legal framework on AI.  Given the impact of the European Union’s General Data Protection Regulation (GDPR) on worldwide data collection, use, storage and security, the Artificial Intelligence Act, if it becomes European law, will have a similarly significant impact. The proposal runs to 108 pages, with 17 pages of attachments.  Moving it from proposal to adoption will be a seriously large, laborious and slow process.

There is a useful 10-page overview titled Communication on Fostering a European approach to Artificial Intelligence.  And, this being the EU, there is also an overlong and detailed 66-page plan on AI titled Coordinated Plan on Artificial Intelligence 2021 Review.

The media release, for want of a better word, titled AI Excellence: Ensuring that AI works for people, provides:

The Commission is committed to ensuring AI works for people by fostering digital skills and promoting a human-centric approach to AI globally.

AI technologies offer the potential to advance Europe’s economic growth and competitiveness. They also offer opportunities to improve the lives of EU citizens through developments in health, farming, education, employment, energy, transport, security, and more.

AI should work for people and people should be able to trust AI technologies. So, the EU has to ensure that AI developed and put on the market in the EU is human-centric, sustainable, secure, inclusive and trustworthy. The key proposed actions focus on:

    • nurturing talent and improving AI skills
    • developing a policy framework to secure trust in AI systems
    • promoting the EU vision on sustainable and trustworthy AI in the world

Nurturing talent and improving skills

Digital skills are incredibly important as Europe moves into the Digital Decade. The EU needs professionals with specialised AI skills to remain competitive globally and should ensure a high level of computing skills in general to avoid job market polarisation.

To help achieve this, the Commission will:

    • support traineeships in digital areas, with an increased focus on AI skills. Traineeships should follow the principle of non-discrimination and gender equality as outlined in the Digital Education Programme
    • launch a call for specialised education programmes and courses in key areas, under the Digital Europe Programme
    • support networks of AI excellence centres to retain talent and develop PhD programmes and AI modules under the Horizon Europe programme
    • fund doctoral networks, postdoctoral fellowships and staff exchange projects in AI under the Marie Skłodowska-Curie actions
    • support the development of new skills under the Skills Agenda

Developing a policy framework to ensure trust in AI systems

Trust is essential to facilitate the uptake of AI. The Commission has developed key principles to guide the European approach to AI that take into account the social and environmental impact of AI technologies. They include a human-centric way of developing and using AI, the protection of EU values and fundamental rights such as non-discrimination, privacy and data protection, and the sustainable and efficient use of resources.

The Commission proposes a number of measures and legislative actions to foster trust in AI. These include:

    • a proposal for a horizontal framework for AI, focusing on safety and respect for fundamental rights specific to AI technologies
    • EU measures adapting the liability framework to the challenges of new technologies, including AI
    • revisions to existing sectoral safety legislation
    • security operation centres, powered by AI, to act as a ‘cybershield’ for the EU, able to detect signs of a cyberattack early enough and to enable proactive action

The Commission will continue to cooperate with stakeholders and organisations, including EU agencies and standard-setting organisations, to build trustworthy AI.

Promoting the EU vision on sustainable and trustworthy AI in the world

As outlined in the Digital Compass: the European way for the digital decade, Europe’s actions on the international stage are more important than ever. That includes AI, as the risks and challenges of this technology go beyond national and continental borders.

The Commission will promote its human-centric approach to AI on the global stage and will encourage the adoption of global rules and standards on AI, as well as strengthen collaboration with like-minded countries and stakeholders.

It is not that AI has escaped consideration in other jurisdictions.  The Victorian Information Commissioner has produced some weighty tomes, such as a 155-page ebook titled Closer to the Machine, a 14-page issues paper and a guideline of sorts titled Artificial Intelligence: Understanding Privacy Obligations.  The Australian Information Commissioner has confined herself to making submissions to the Australian Human Rights Commission white paper on AI in 2019. These efforts are limited and rely on existing regulation, which is wholly inadequate to the task of regulating AI, and of taming its potentially dangerous uses, without stifling its enormous potential.

The Economist has an excellent article on the EU’s proposal and its potential impact, The Brussels effect: The EU wants to become the world’s super-regulator in AI.  The New York Times has also covered the story with Europe Proposes Strict Rules for Artificial Intelligence, which provides:

The regulations would have far-reaching implications for tech firms like Amazon, Google, Facebook and Microsoft, which have poured resources into developing the technology.

The European Union unveiled strict regulations on Wednesday to govern the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.

The draft rules would set limits around the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrollment selections and the scoring of exams. It would also cover the use of artificial intelligence by law enforcement and court systems — areas considered “high risk” because they could threaten people’s safety or fundamental rights.

Some uses would be banned altogether, including live facial recognition in public spaces, though there would be several exemptions for national security and other purposes.

The 108-page policy is an attempt to regulate an emerging technology before it becomes mainstream. The rules have far-reaching implications for major technology companies that have poured resources into developing artificial intelligence, including Amazon, Google, Facebook and Microsoft, but also scores of other companies that use the software to develop medicine, underwrite insurance policies and judge credit worthiness. Governments have used versions of the technology in criminal justice and the allocation of public services like income support.

Companies that violate the new regulations, which could take several years to move through the European Union policymaking process, could face fines of up to 6 percent of global sales.

“On artificial intelligence, trust is a must, not a nice-to-have,” Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the E.U. is spearheading the development of new global norms to make sure A.I. can be trusted.”

The European Union regulations would require companies providing artificial intelligence in high-risk areas to provide regulators with proof of its safety, including risk assessments and documentation explaining how the technology is making decisions. The companies must also guarantee human oversight in how the systems are created and used.

Some applications, like chatbots that provide humanlike conversation in customer service situations, and software that creates hard-to-detect manipulated images like “deepfakes,” would have to make clear to users that what they were seeing was computer generated.

For years, the European Union has been the world’s most aggressive watchdog of the technology industry, with other nations often using its policies as blueprints. The bloc has already enacted the world’s most far-reaching data-privacy regulations, and is debating additional antitrust and content-moderation laws.

But Europe is no longer alone in pushing for tougher oversight. The largest technology companies are now facing a broader reckoning from governments around the world, each with its own political and policy motivations, to crimp the industry’s power.

In the United States, President Biden has filled his administration with industry critics. Britain is creating a tech regulator to police the industry. India is tightening oversight of social media. China has taken aim at domestic tech giants like Alibaba and Tencent.

The outcomes in the coming years could reshape how the global internet works and how new technologies are used, with people having access to different content, digital services or online freedoms based on where they are.

Artificial intelligence — in which machines are trained to perform jobs and make decisions on their own by studying huge volumes of data — is seen by technologists, business leaders and government officials as one of the world’s most transformative technologies, promising major gains in productivity.

But as the systems become more sophisticated it can be harder to understand why the software is making a decision, a problem that could get worse as computers become more powerful. Researchers have raised ethical questions about its use, suggesting that it could perpetuate existing biases in society, invade privacy or result in more jobs being automated.

Release of the draft law by the European Commission, the bloc’s executive body, drew a mixed reaction. Many industry groups expressed relief that the regulations were not more stringent, while civil society groups said they should have gone further.

“There has been a lot of discussion over the last few years about what it would mean to regulate A.I., and the fallback option to date has been to do nothing and wait and see what happens,” said Carly Kind, director of the Ada Lovelace Institute in London, which studies the ethical use of artificial intelligence. “This is the first time any country or regional bloc has tried.”

Ms. Kind said many had concerns that the policy was overly broad and left too much discretion to companies and technology developers to regulate themselves.

“If it doesn’t lay down strict red lines and guidelines and very firm boundaries about what is acceptable, it opens up a lot for interpretation,” she said.

The development of fair and ethical artificial intelligence has become one of the most contentious issues in Silicon Valley. In December, a co-leader of a team at Google studying ethical uses of the software said she had been fired for criticizing the company’s lack of diversity and the biases built into modern artificial intelligence software. Debates have raged inside Google and other companies about selling the cutting-edge software to governments for military use.

In the United States, the risks of artificial intelligence are also being considered by government authorities.

This week, the Federal Trade Commission warned against the sale of artificial intelligence systems that use racially biased algorithms, or ones that could “deny people employment, housing, credit, insurance or other benefits.”

Elsewhere, in Massachusetts and cities like Oakland, Calif.; Portland, Ore.; and San Francisco, governments have taken steps to restrict police use of facial recognition.

 
