The European Commission is launching a pilot phase to ensure its guidelines can be implemented and aims to build an international consensus for human-centric AI.
The European Commission is progressing its work in the area of artificial intelligence (AI) and ethics with a pilot programme to ensure its proposed guidelines for AI development and use can be implemented in practice.
The pilot builds on the work of the group of independent experts appointed last year. The Commission is inviting industry, research institutes and public authorities to test the detailed assessment list drafted by the group, which complements the guidelines.
The Commission is facilitating and enhancing cooperation on AI across the EU to boost its competitiveness and ensure trust based on EU values. Following its AI strategy published in April 2018, the Commission set up the high-level expert group on AI, representing academia, industry and civil society.
The latest plans are one of the deliverables under the strategy, which aims to increase public and private investment to at least €20 billion annually over the next decade, make more data available, foster talent and ensure trust.
“The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies,” said Andrus Ansip, vice-president for the Digital Single Market. “Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”
The Commission is taking a three-step approach: setting out the key requirements for trustworthy AI, running a large-scale pilot phase to gather feedback from stakeholders, and working to build international consensus for human-centric AI.
Trustworthy AI should respect all applicable laws and regulations, as well as a series of key requirements; a specific assessment list aims to help verify the application of each of these requirements.
The Commission wants to build an international consensus for human-centric AI because “technologies, data and algorithms know no borders”. To this end, the Commission will strengthen cooperation with like-minded partners such as Japan, Canada or Singapore and continue to play an active role in international discussions and initiatives including the G7 and G20. The pilot phase will also involve companies from other countries and international organisations.
By the autumn, the Commission plans to launch a set of networks of AI research excellence centres, begin setting up networks of digital innovation hubs and, together with member states and stakeholders, start discussions on developing and implementing a model for data sharing and making best use of common data spaces.
Companies, public administrations and organisations can sign up to the European AI Alliance and receive a notification when the pilot starts.
Meanwhile, Google has shut down its advisory council, set up to advance the responsible development of AI, less than a fortnight after it was announced. The move follows the resignation of one member of the Advanced Technology External Advisory Council (ATEAC) and controversy over another.
A statement on its blog reads: “It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.”