From data foundations and ethical guardrails to workforce skills and citizen trust, the challenge is no longer whether to use AI in cities – but how to embed it in ways that are practical, human-centred, and built to last.
For cities around the world, artificial intelligence (AI) is no longer a distant experiment or a speculative future. It is rapidly becoming a defining capability of modern government. Yet while the technology itself moves fast, the work of integrating it into public services is slow, complex, and deeply human. The real challenge is not whether AI can do something, but how cities build the foundations, rules, skills, and trust needed to use it responsibly and at scale.
That tension sits at the heart of today’s local government conversation. On one hand, there is genuine excitement about what AI can unlock – from more efficient operations to better services and new forms of civic engagement. On the other, there is a recognition that without the right data, governance, culture, and guardrails, the same tools can amplify bias, erode trust, or simply become another expensive pilot that never quite makes it into production.
Rochelle Haynes, managing director, Bloomberg Philanthropies’ What Works Cities, frames the moment in both practical and strategic terms. “What Works Cities is the international standard of excellence for what it means to be a good, well-managed local government. We’re helping cities build strong data and evidence-based practices and infrastructure so they can deliver quality results to their residents,” she says. For her, AI is not a side project – it sits squarely inside the broader agenda of better management, better data, and better outcomes. “What comes top of mind for me is making sure that cities are ready for the AI revolution,” Haynes adds.
“We’re helping cities build strong data and evidence-based practices and infrastructure so they can deliver quality results to their residents”
Readiness, however, is not about rushing to buy tools. It is about putting in place the less glamorous but more durable building blocks. “There is something there, and I see our role as making sure cities have a strong foundation of understanding, but also a strong foundation when it comes to data management and governance, so they can leverage the technology in a really thoughtful way and lead from the front in this conversation,” Haynes explains. That emphasis on foundations echoes across cities that are already experimenting with AI, and across those just starting to explore what it could mean for them.
Jung-Hoon Lee, Professor of Technology & Innovation Management at the Graduate School of Information, Yonsei University in Seoul, sees AI as part of a broader wave of disruption that goes well beyond any single use case or department. “I think this is different from some short life-cycle technologies we’ve experienced before. It will be a disruptive technology that changes not only industry or urban life, but also our experiences of these on a personal level,” Lee says.
That long-term, city-wide impact is precisely why early choices about governance, skills, and values matter so much.
One of the most persuasive ways to cut through abstract debate is with concrete results. Haynes points to a simple but powerful example. “In Dallas, Texas, they’re using it for their procurement process, and it saved them over a million dollars in staff time and cost,” she says. For many city leaders, this kind of outcome reframes AI from a futuristic concept into a practical tool. “Those are just practical uses of it. And so for us, it’s making sure those cities are well positioned to use technology in a thoughtful way.”
This focus on practical value also shapes how cities think about scale. Teppo Rantanen, director for competitiveness and innovation, City of Tampere, describes a shift from isolated projects to something more systemic. “Digitalisation is becoming part of everyday life in our services. As a city, these can’t just be single projects or separate services. It’s something we need to do at a very large scale, in a way that truly helps us achieve our goals – happier citizens and a more sustainable, smooth-running city,” Rantanen says.
But getting from a promising pilot to a city-wide capability is rarely linear. Rantanen emphasises the importance of learning in motion. “When we start developing, we often see different paths emerging – it can go this way, that way – and then we learn. It’s not black and white. We certainly find things that don’t work, and we need to work harder to develop what makes them possible. But the development continues, and it comes to life.”
That experimental mindset is paired with a strong bias toward delivery. “Our motto is that we are a city of doing. Instead of being a city of planning, piloting, and testing, we aim to be a city of doing. We always strive to have something tangible. There are too many projects that lead to nothing – just a fun exercise, and then that’s it. We don’t want that. We always want to do something that has continuation,” Rantanen says.
For cities that have been investing in digital systems for decades, AI does not arrive on a blank slate. It lands on top of complex, mission-critical infrastructure. Matt Mahan, mayor, City of San José, describes the tension clearly. “San José is one of the most technologically advanced cities in the world – and has been for generations. But tools that were once cutting-edge eventually become status quo – inhibiting innovation.”
In practice, that means finding ways to innovate without putting essential services at risk. “To make updates without jeopardising the entire system, we’ve done pilot deployments external to the system for a few simpler permit processes as we prepare for a larger reengineering down the road as newer tools mature,” Mahan explains. This kind of staged approach – testing at the edges, learning, then modernising the core – is becoming a common pattern for cities that cannot afford a big-bang transformation.
If legacy technology is one constraint, culture is often a bigger one. Mahan puts it bluntly. “But the biggest barrier isn’t the technology itself – it’s a culture of caution. Government tends to be highly risk-averse – and for good reason. We serve the people, we answer to the people, and we are responsible for their hard-earned tax dollars. We have a duty to be fiscally, legally and ethically responsible in all of our actions. But we also have a duty to solve our constituents’ problems. And for too long, government has been too slow and inefficient in delivering results.”
“But the biggest barrier isn’t the technology itself – it’s a culture of caution. Government tends to be highly risk-averse – and for good reason”
That tension between caution and action runs through almost every AI conversation in the public sector. Haynes argues that cities need to acknowledge it openly and build frameworks that allow responsible experimentation. “People are going to use it whether you make it explicit or not. So create a policy, be clear on it, encourage it within reason and guidelines – make sure you’re giving permission while creating a framework for when and how to use it.”
At the same time, that framework has to be grounded in skills and judgement, not just rules. “You need your staff to be able to write a really good prompt and then be excellent critical thinkers about the information they receive before putting it out into the public or interacting with a resident,” Haynes says. In other words, AI literacy is becoming as much a part of public service professionalism as data literacy or policy analysis.
Almost every serious conversation about AI in cities eventually comes back to data. Haynes is clear about its centrality. “Data management and data governance are among our core criteria, and we are focused on helping cities upskill in these areas,” she says. But she is equally clear that perfection is not the starting line. “You don’t have to be perfect, right? Because at one point the conversation was that your data had to be perfect before you could leverage AI – and data is never going to be perfect. But it is important right now to acknowledge where we have gaps and biases and ask: how do we fix them?”
That honesty about imperfections is what allows cities to move forward responsibly. It also connects to a broader push for openness. “There’s a desire right now to have more accessible, inclusive data collection, better management, and greater transparency of process – and I think that’s exciting,” Haynes adds.
Mahan describes how this plays out in practice at a national level through the GovAI Coalition. “There is a lot of fear and rhetoric surrounding AI. The decisions we make today will affect future generations – and frankly, we’re seeing a lot of fighting and indecision at the state and federal levels of government when it comes to if and how we regulate this new technology. That’s why we took action at the local level and started the GovAI Coalition.”
The aim is not just to share stories, but to build reusable infrastructure for responsible adoption. “Looking forward, we’re focused on updating our GovAI working groups to align with the most immediate needs of the coalition. For example, our Data Governance Committee recently released a new template for data-sharing agreements, which helps agencies form partnerships more efficiently while maintaining strong safeguards. We’re also collaborating on shared procurement opportunities, including a multi-agency digital twin RFP, and strengthening the trust ecosystem between vendors and government through our AI registry platform.”
If data is the fuel for AI, trust is its social licence. Haynes argues that cities need to start from first principles. “I think first, it’s leading with transparency and accountability. Those are the two words that first come to mind,” she says. That translates into very concrete commitments. “Having a transparent AI policy that your public is aware of – so residents know when, where, and how you’re using the technology, the purpose of the data, and that you hold yourself accountable to that policy.”
She also points to the importance of being honest when things do not work as expected. “That’s why I like that San José example – they rolled out the technology, recognised and owned that it wasn’t perfect, pivoted, and co-designed with their residents. That’s exactly the sort of fluency you want to create.”
Lee comes at the same issue from a more technical angle but lands in a similar place. “When AI is designed and moves into the operational stage, you need to think about regulations and ethical approaches. We don’t always know how an agent is working inside, so how do we regulate or audit it to make sure it is working properly for people?” He also warns that complexity can produce unexpected outcomes. “Algorithms generate a great deal of complexity, and that complexity can create behaviours different from what we expected.” For him, the answer is clear. “These are the two things I relate to at the design and operational stages – ensuring AI works in a more human-centred and people-driven way.”
“We came from a period where most interaction between citizens and the city was done in a traditional way, and we began thinking about new ways of engaging citizens in what we do”
Rantanen places trust squarely in the relationship between city and citizen. “We need to make sure that the way we use data about our citizens is totally transparent, so they know exactly what data we’re using and can track and see it. But the other element is trust, and that is so important for us. We have this trust built already, and we need to capitalise on it while building in a way that people still feel their data isn’t just ‘out there,’ but is helping them live better lives.”
He also describes how his city developed its approach. “We created specific AI ethical guidelines for the city and published the document. In building it, we included not only city employees but also citizens to get feedback on how AI should be used. It embeds a strong focus on data within those ethical guidelines.”
For Rantanen, participation goes beyond consultation documents. “We came from a period where most interaction between citizens and the city was done in a traditional way, and we began thinking about new ways of engaging citizens in what we do. Of course, you bring in digital tools – new ways of generating interest and involvement – co-creation initiatives, hackathons, participatory budgeting, even piloting crowdfunding where people fund part of the initiatives and the city funds the other part.”
One risk of the current AI boom is that cities simply adopt generic tools and workflows that do not reflect local needs or values. Haynes is wary of that. “You don’t want generic tools. If you customise a tool for your city, are you putting some heart and soul into it? Are you thinking about how residents will interact with it – the responses, the tone? There is still a need for the human touch.” She is also clear that efficiency should not come at the expense of identity. “I think of it as freeing up time, but not taking away the soul. It’s definitely an efficiency tool, but it shouldn’t remove that soul.”
Lee frames this in design terms. “At the design stage, we need to think about human-centred design – how AI can assist people and what the demands are.” He also describes experimenting with more collaborative models. “We have been using the living lab concept to understand how citizens perceive AI and how they can work together with it – not as a replacement for jobs, but as a collaborator.”
One of the most common failure modes in public sector innovation is the shiny object problem – a collection of disconnected projects that never quite add up to transformation. Haynes urges cities to step back before they leap. “Beyond that, for me, it’s actually about taking a step back. I love that everyone’s jumping in – it’s exciting – but what’s your vision? What’s the purpose? What is it that you’re actually trying to accomplish? It’s not technology for technology’s sake.” She links that directly to organisational structure. “What’s the comprehensive strategy? With What Works Cities, certification is designed to pull departments together. You can’t answer it in a silo for that reason.” Her advice is simple and hard to execute. “Create a vision, come out of your silos, and create a comprehensive plan.”
Rantanen describes a similar philosophy in how his city works with industry. “When we work with big companies, we first say: we’re not going to just buy things from you – we’re going to co-develop them with you. And in doing that, we need you to follow the guidelines and principles we have, and then we can start working together.” He is equally focused on what happens after the pilot. “If you think about this particular case, the key thing is that it doesn’t stop when the project stops. It should form the basis of something that comes alive. The measure is that this is not just a project that ends with great technical development – it builds something with a sound business case that goes into production one way or another.”
That ambition is paired with a willingness to aim high. “What is very natural for us is always to go for big, bold solutions – the big, audacious goals. We want to go where no one has gone before. When I came to the city, I found we couldn’t set goals unless we were sure we would reach them. I said, ‘Well, I don’t agree.’ Let’s set goals so high that even if we don’t fully reach them, they inspire everyone to work toward them.”
No AI strategy survives contact with reality without people to carry it forward. Haynes stresses that this is not about politics or personalities. “We certify the city – this is not tied to a political party. This is about how your city operates. Is it efficient? Is it effective? Does it deliver results for residents? It is not tied to any one person.” Or, as she puts it more simply, “Let’s not make it about an individual – let’s make it about how we serve our cities.” At the same time, she is realistic about change management. “You always need champions – not just at the top with leadership, but also civil servant staff who will be there throughout mayoral administrations. You need those staff bought in; they are your champions.”
Preparing that workforce is a strategic task in its own right. “If you’re thinking about economic growth in your city, how are you preparing your current and future workforce – including youth – to engage with AI technology? Because when that youth is ready to enter the workforce full time, that’s the reality they’ll be facing,” Haynes asks. “It’s an education piece, it’s a workforce development piece. That kind of holistic thinking would be excellent as the next step.”
“You always need champions – not just at the top with leadership, but also civil servant staff who will be there throughout mayoral administrations”
San José offers a concrete example of what that investment can look like. “Not every initiative is successful, but city workers need to know they won’t be penalised for good faith efforts to improve city services. We work hard to empower city employees to create and share solutions. That’s why our IT Training Academy – using workforce upskilling curricula co-developed with our public university, San José State – teaches city employees to use AI and advanced data solutions and encourages them to bring forward their best applications,” Mahan says.
The results have been tangible. “In less than a year, programme graduates have saved between 10,000 and 20,000 staff hours and $50,000 in consulting costs. Given these early successes, we’re expanding to train more than 1,000 employees – 15 per cent of our workforce – by the end of 2026. We’re proud to be one of the first major cities to roll out AI training at this scale, and hope this serves as a national model for other cities to do the same.”
Across all of these perspectives, a common thread emerges. AI is not something cities can afford to ignore, but neither is it something they can afford to adopt carelessly. It demands foundations in data and governance, a culture that balances caution with action, a commitment to transparency and trust, and a long-term investment in people and skills. As Mahan puts it, “Innovation happens whether we like it or not, and it almost always comes with trade-offs even if it makes certain products or services faster, better, cheaper. Those of us working in the public sector have a responsibility to help shape technological change so that it does more good than harm at the societal level.”
For cities at the start of their AI journey, that responsibility is both a warning and an opportunity. The warning is that there are no shortcuts – no tool can substitute for governance, skills, and trust. The opportunity is that, by getting those foundations right, cities can move beyond one-off pilots and toward a repeatable, human-centred, and genuinely transformative use of AI in the service of their residents.