Survey finds that the majority of people aren’t aware that technologies like machine learning and AI are used in decision-making that affects them
A citizen voice must be embedded in ethical artificial intelligence, argues a new report from the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA). This means initiating “a public dialogue” so that, when it comes to contentious uses of AI, the public’s views and, crucially, their values can help steer governance in the best interests of society. To this end, the RSA is convening a citizens’ jury to deliberate on the ethical issues of AI.
The RSA report, Artificial Intelligence: real public engagement, sets out that risks to privacy and psychological wellbeing, as well as increasing susceptibility to political manipulation and fraud, are likely to be heightened by AI if it isn’t handled carefully.
One of the applications that demonstrates the “double-edged” potential of AI is the use of automated-decision systems. The report highlights that public bodies are experimenting with technologies such as machine learning to make decisions in key areas with a major impact on society, such as planning and managing new infrastructure, rating the performance of schools and hospitals, deploying policing resources and minimising the risk of reoffending.
A survey the RSA carried out in partnership with YouGov found that the majority of people aren’t aware automated-decision systems are being used in this way.
Only 32 percent of people are aware that AI is being used for decision-making in general, and awareness drops further for specific applications: to 14 percent for the use of automated-decision systems in the workplace and to nine percent for their use in the criminal justice system.
The survey reveals that, on the whole, people aren’t supportive of the idea of using AI for decision-making, and they feel especially strongly about the use of automated-decision systems in the workplace and in the criminal justice system (60 percent of people oppose or strongly oppose their use in these areas).
“The public’s doubts about AI have yet to seriously impede the technological progress being made by companies and governments,” says the report. “Nevertheless, perceptions do matter; regardless of the benefits of AI, if people feel victimised by the technology rather than empowered by it, they may resist innovation, even if this means that they lose out on those benefits. The problem may be, in part, that people feel decisions about how technology is used in relation to them are increasingly beyond their control.”
The RSA’s Forum for Ethical AI is making the case for entering into a public dialogue with citizens about the conditions under which this technology is used. “While human rights law serves to protect people from egregious violations, we also need to engage directly with people to address the wider problems of mistrust and disempowerment that can arise when only a few are making critical decisions on behalf of many.”