AI’s ethical reckoning, ready or not

The UK government wants to lead in AI, seeing great potential for the economy and public services. With the push for progress, tensions between ethics and innovation are reaching a tipping point. Sarah Wray looks at what’s being done about it.

The UK government has estimated that AI could add £630 billion to the UK economy by 2035

Artificial intelligence (AI) is one of the most hyped groups of technologies around, but these days we hear almost as much about its potentially catastrophic consequences as we do about the benefits it could bring.

At SXSW, even Elon Musk said: “AI scares the hell out of me” – and he’s working at the cutting edge of it. Musk has been vocal about the dangers of AI, yet he recently left the board of OpenAI, the AI research organisation he co-founded. He will continue to donate and advise, but the body said the decision was taken to avoid any conflict of interest as Musk’s company, Tesla, becomes “more focused on AI”.

Now or never

This highlights some of the tensions around AI and innovation. Its use is unavoidable if we want to deliver truly smart cities, but much remains uncertain. We are at a tipping point where the ethical issues must be addressed. It’s potentially now or never.

The UK government has estimated that AI could add £630 billion to the UK economy by 2035, and it has set out its stall to become a global leader in AI. The government published a report with a series of recommendations for making this a reality, including making the Alan Turing Institute the national institute for AI and data science, and setting aside £9 million for a Centre for Data Ethics and Innovation. The centre will focus on ensuring safe, ethical and innovative uses of data-driven technologies. It has advertised for a leader, but no further specifics on its remit or activities are available.

AI for humans

AI is already in use in our public services: through the growing use of chatbots; through predictive algorithms that forecast hospital admissions or the likelihood of criminals re-offending; and through AI-based traffic light management systems which speed emergency crews through congested city streets.

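For a sense of what sits behind such predictions, here is a minimal sketch of a risk-scoring model in Python. Everything in it is invented for illustration – the features, the records and the outcome label – and it is not any council’s or vendor’s actual system.

```python
# Illustrative only: a toy readmission-risk model trained on invented data.
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [age, prior_admissions, days_since_last_visit]
X = [
    [72, 3, 20],
    [45, 0, 300],
    [81, 5, 10],
    [33, 1, 150],
    [67, 2, 45],
    [29, 0, 400],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = readmitted within 30 days, 0 = not

model = LogisticRegression().fit(X, y)

# The model outputs a probability; a human still decides what to do with it.
new_patient = [[70, 2, 30]]
print(model.predict_proba(new_patient)[0][1])  # estimated readmission risk
```

The point of the sketch is how little of such a system is “intelligence”: it is a statistical pattern fitted to historical records, and it is only as good, and as fair, as those records.
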
AI can help public sector workers make more informed decisions and free them from routine tasks.

Andrew Collinge, Assistant Director, Intelligence and Analysis, Greater London Authority (GLA), sees great potential in the area of social care. He says: “In a children’s services department, we have professional people paid and trained to take very, very good care of children, but in the vast majority of cases, they are not people who understand business intelligence.”

AI could help them glean insights by linking a wider variety of data together, he says, such as socio-economic data alongside data on caseloads, budgets and performance management.

On the customer-facing side, he adds: “I don’t think a tool exists that makes the choice or the set of choices around adult social care, for example, a simple enough thing for the general public.”

This is where AI can make a huge impact, Collinge says. “Deep down in human services.”

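To make the kind of linkage Collinge describes concrete, here is a minimal sketch in Python. The tables, ward names and figures are hypothetical, invented purely for illustration.

```python
# Illustrative only: joining two hypothetical ward-level datasets.
import pandas as pd

# Caseload data held by a children's services department
caseloads = pd.DataFrame({
    "ward": ["A", "B", "C"],
    "open_cases": [120, 45, 80],
})

# Socio-economic data for the same wards, from a different source
deprivation = pd.DataFrame({
    "ward": ["A", "B", "C"],
    "deprivation_index": [34.2, 12.1, 27.8],
})

# Linking the two gives a view that neither dataset offers on its own
merged = caseloads.merge(deprivation, on="ward")
print(merged[["open_cases", "deprivation_index"]].corr())
```

The technology here is unremarkable; the new questions arise because routinely joining such datasets changes what a council knows about the people in them.
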
What could possibly go wrong?

For all the good that AI could do, there are still huge unknowns and concerns about its impact. A report last year from PwC found that up to 30 per cent of existing UK jobs are at risk of automation from robotics and AI by the early 2030s, although it noted that in many cases the nature of jobs is likely to change rather than the jobs disappearing altogether.

Who is responsible if AI makes a mistake with potentially disastrous consequences? What about AI bias, inherited from its human creators or ‘learned’? Is AI secure? And, of course, will AI reach the singularity and take over?

We’ve already seen glimpses of the potential for a much darker side to AI. In 2016, a ProPublica report found that a computer programme widely used to predict whether a criminal will re-offend discriminated against black people. Microsoft’s AI-powered chatbot “Tay” started spouting racist and sexist tweets within a day and was swiftly pulled. There are many other examples.

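What ProPublica measured was, at its heart, a simple audit: compare the tool’s error rates across groups. A toy version of that audit, using invented records rather than the real COMPAS data, might look like this:

```python
# Illustrative only: invented predictions and outcomes for two groups.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("a", True,  False), ("a", True,  False), ("a", True,  True),
    ("a", False, False), ("b", True,  True),  ("b", False, False),
    ("b", False, False), ("b", False, True),
]

for group in ("a", "b"):
    # False positives: flagged as high risk but did not re-offend
    fp = sum(1 for g, pred, actual in records if g == group and pred and not actual)
    # Everyone in the group who did not re-offend
    negatives = sum(1 for g, _, actual in records if g == group and not actual)
    print(group, fp / negatives)  # group "a": 2/3; group "b": 0/2
```

In ProPublica’s analysis it was this figure – the false positive rate – that differed sharply between black and white defendants, which is why auditing deployed systems matters as much as designing them carefully.
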
Technology marches on

Now, we are moving towards using AI for critical applications such as healthcare and driverless cars. In government, the next stage is increased “algorithmic decision-making,” says Eddie Copeland, Director of Government Innovation at Nesta. And this is what tends to scare people most.

He says: “While some such decisions and assessments are minor in their impact, such as whether to issue a parking fine, others have potentially life-changing consequences, like whether to offer an individual council housing or give them probation. The logic that sits behind those decisions is therefore of serious consequence.”

GLA’s Collinge admits that the greater use of data and AI brings an increased weight of responsibility as a government worker. As data is increasingly being “merged with other forms of data, [it creates] a new set of issues for us. It is being reused and repurposed, often to very positive ends, but increasingly that does feel like it puts us in a new place, definitely. And it’s a place where it’s harder to keep control of it.

“If you’re using data for analysis, then you can control that. If you are increasingly taking data from citizens or households as they move around the city and then using that to generate value, then that’s a different thing entirely.”

He says we are at an “inflection point.”

Less talk, more action

The issue is not being ignored by cities and technology companies, and there is a range of collaborative efforts, including with academia, to find a safe way ahead. As well as the UK’s Centre for Data Ethics and Innovation, the Nuffield Foundation is working to establish an independent body, and the Alan Turing Institute has several initiatives in this area. Nesta is actively working with cities and government bodies in this field and has put forward 10 principles for public sector use of algorithmic decision-making, which could form the basis of a code of standards. Globally, there are groups such as OpenAI and AI Now.

Nesta’s CEO, Geoff Mulgan, has predicted that 2018 will be the year when governments get serious about regulating AI to contain the risks.

The GDPR (General Data Protection Regulation) legislation is set to kick in this May. It gives consumers more control over their data and how it is used, and technology and tools which enable that control are being developed, piloted and rolled out.

Who knows?

Despite the activity, the landscape is highly fragmented and it is unclear how or when a more cohesive strategy will emerge. It is clear, however, that cities need to engage more robustly with people to allay their fears and concerns, as well as give them better information about AI’s benefits and risks – and how the risks will be mitigated. Some cities are starting to do this.

GLA is exploring a range of possibilities, such as Data Trusts to promote proper stewardship of data by cities and to build public trust. London is also set to survey citizens on their attitudes towards data and its use.

The city’s Chief Digital Officer, Theo Blackwell, was quoted as saying at a recent conference: “I think some of the arguments around data have strayed quite far away from civic benefits. We’re being dominated by commentators who perhaps have a hypersensitivity around privacy. But if we can tell the story right and provide the right safeguards, data is a great benefit and we need to start loudly setting that out.”

Some progress is being made and the doors to discussion and debate are being opened wider, but the way ahead is still uncertain.

Collinge admits: “We don’t quite know where we’ll be in a couple of years.”