
EU publishes ethical AI guidelines

The European Commission is launching a pilot phase to ensure its guidelines can be implemented and aims to build an international consensus for human-centric AI.

The Commission has put forward seven essentials for trustworthy AI

The European Commission is progressing its work in the area of artificial intelligence (AI) and ethics with a pilot programme to ensure its proposed guidelines for AI development and use can be implemented in practice.

The pilot builds on the work of the group of independent experts appointed last year. The Commission is inviting industry, research institutes and public authorities to test the detailed assessment list drafted by the group, which complements the guidelines.

Ensuring trustworthy AI

The Commission is facilitating and enhancing cooperation on AI across the EU to boost its competitiveness and ensure trust based on EU values. Following its AI strategy published in April 2018, the Commission set up the high-level expert group on AI, representing academia, industry and civil society.

The latest plans are one of the deliverables under the strategy, which aims to increase public and private investment to at least €20 billion annually over the next decade, make more data available, foster talent and ensure trust.


“The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies,” said Andrus Ansip, vice-president for the Digital Single Market. “Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”

The Commission is taking a three-step approach: setting out the key requirements for trustworthy AI, running a large-scale pilot phase to gather feedback from stakeholders, and working to build international consensus for human-centric AI.

Key requirements

Trustworthy AI should respect all applicable laws and regulations, as well as a series of key requirements; a specific assessment list aims to help verify the application of each of them:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all lifecycle phases of AI systems.
  • Privacy and data governance: citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: the traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental wellbeing: AI systems should be used to foster positive social change and enhance sustainability and ecological responsibility.
  • Accountability: mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

International consensus

The Commission wants to build an international consensus for human-centric AI because “technologies, data and algorithms know no borders”. To this end, the Commission will strengthen cooperation with like-minded partners such as Japan, Canada or Singapore and continue to play an active role in international discussions and initiatives including the G7 and G20. The pilot phase will also involve companies from other countries and international organisations.

By the autumn, the Commission plans to launch a set of networks of AI research excellence centres, begin setting up networks of digital innovation hubs and, together with member states and stakeholders, start discussions to develop and implement a model for data sharing and making best use of common data spaces.

Companies, public administrations and organisations can sign up to the European AI Alliance and receive a notification when the pilot starts.


Meanwhile, Google has shut down its advisory council, set up to advance the responsible development of AI, less than a fortnight after it was announced. The move follows the resignation of one member of the Advanced Technology External Advisory Council (ATEAC) and controversy over another.

A statement on its blog reads: “It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.”
