The new institute will initiate research and build the evidence base on how data and AI technologies affect society as a whole and different groups within it.
The Nuffield Foundation is to establish an institute to study the social and ethical implications of the use of data, algorithms, and artificial intelligence (AI), and to ensure their benefits are shared across society.
The £5m Ada Lovelace Institute, named after the 19th-century mathematician widely regarded as one of the first computer scientists, is thought to be the first of its kind in the UK.
The institute will act as an independent voice, speaking on behalf of the public interest and society, and informing the thinking of governments, industry, public bodies and civil society organisations, in the UK and internationally.
Over the past six months, the Nuffield Foundation has convened a partnership of leading organisations to address the need for agreed ethical frameworks and codes of practice for the use of new technologies, which have developed rapidly over recent years.
The recent public debate sparked by Cambridge Analytica’s alleged use of Facebook data underlines the importance of anticipating the ethical questions raised by emerging technologies and their application, which will be a core part of the new institute’s remit.
“Technology offers great potential to improve individual and social wellbeing, for example in early diagnosis of cancer, or improving the lives of people with disabilities,” said Dame Colette Bowe, trustee of the Nuffield Foundation and chairman of the Banking Standards Board.
“However, this month we have seen the first pedestrian fatality involving a self-driving car, leading to calls for testing programmes on public roads to be suspended. And revelations about Cambridge Analytica’s alleged use of Facebook data have heightened public concern about how data is used, with serious implications for trust in digital technologies and industry.”
She continued: “These examples show that in many cases, public scrutiny of the use of data and automated technologies only occurs when something ‘goes wrong’. Valid questions are being asked about data rights, as well as about consent, public interest and what constitutes an ethical approach. The Ada Lovelace Institute will work with its partners to ensure we have these conversations before a critical incident, with the aim of developing codes of behaviour for the application of innovations in data and AI that are deserving of public trust.”
The Nuffield Foundation is an independent charitable trust that has been at the forefront of addressing the ethical questions raised by scientific advancements. In 1991, it established the Nuffield Council on Bioethics, which, it reports, has been influential in establishing ethical frameworks for policy and regulation relating to innovations in biology and medicine.
The Nuffield Foundation has committed £5m over five years to establish the Ada Lovelace Institute. The chair of the new Institute will be appointed within the next few months, with the aim of having the Institute fully established before the end of 2018.
The contributing partners to the initiative are: the Alan Turing Institute; the Royal Statistical Society; Nuffield Council on Bioethics; the Wellcome Trust; the Royal Society; the British Academy; techUK; and Omidyar Network’s Governance & Citizen Engagement Initiative.