Artificial Intelligence (AI) is reshaping how local governments serve their communities. From smarter waste management to predictive policing and urban planning, AI offers powerful tools to improve public services. But with these opportunities come real concerns – about privacy, fairness, and accountability. For business leaders partnering with local governments, and for public officials deploying AI, understanding how the public perceives these technologies is essential. Trust isn’t a nice-to-have – it’s the foundation of successful, ethical AI adoption.
A recent international study spanning Australia, the United States, and Spain used the Theory of Planned Behaviour to explore what drives public support for responsible AI in local government. The findings reveal a complex but insightful picture of how people think about AI – and what leaders can do to earn their trust.
Risk perception: The public’s top concern
One of the most striking insights is that perceived risk plays a dominant role in shaping public attitudes. People are deeply concerned about the potential downsides of AI, including threats to privacy, job displacement, and algorithmic bias. These fears often outweigh the perceived benefits, suggesting individuals tend to focus more on potential losses than gains. For leaders, this means that risk mitigation must be front and centre. Transparency about how data is used, how bias is being addressed, and what steps are being taken to protect jobs – such as retraining programs – can go a long way in building public confidence.
Policy awareness: A key driver of support
Equally important is policy awareness. The study found that when citizens are informed about the AI policies their governments are implementing, they’re more likely to support responsible AI practices. Awareness helps people understand not just what AI does, but how it’s governed – and that understanding reduces fear. In countries like Australia, where AI governance is more proactive and visible, this connection is especially strong. For governments and their partners, investing in clear, ongoing communication about AI policies is essential. This could include websites, newsletters, public forums, and even AI-powered chatbots that explain policies in plain language.
Positive attitudes matter – but context is crucial
Positive attitudes toward AI also play a role, though the impact varies by country. In Australia, strong governance frameworks and a cultural emphasis on ethical reasoning seem to foster more favourable views. In Spain, a more collectivist culture and top-down policy approach may dampen this effect, while the U.S. presents a more fragmented picture due to its decentralised and often polarised political landscape. This suggests that while highlighting the benefits of AI, such as improved service delivery and greater efficiency, is important, it must be done in a way that resonates with local values and governance styles.
Social influence: Surprisingly minimal
Interestingly, the study found that subjective norms – what friends, family, or coworkers think – do not significantly influence public support for AI in local government. This challenges traditional assumptions about social influence in technology adoption. When it comes to complex and unfamiliar technologies like AI, people seem to rely more on their own understanding and direct experiences than on social consensus. This means that while community engagement is still valuable, it shouldn’t be the sole strategy. Direct education and transparent communication are far more effective.
The realism effect: Awareness grounds expectations
Another interesting finding is what the researchers called the ‘realism effect’. As people become more aware of AI policies, their expectations tend to become more grounded. They start to see the limitations and complexities of AI governance, which can temper overly optimistic views. Yet, those who support responsible AI also expect governments to uphold high standards. This paradox highlights the importance of managing expectations from the outset. Leaders should be honest about what AI can and cannot do, and about the challenges involved in governing it responsibly.
Building trust through citizen-centric AI
So, how can local governments and their partners build trust and ensure responsible AI adoption? It starts with citizen-centric policies – systems that respect privacy, promote fairness, and are designed with transparency and security in mind. Communication should be tailored and multi-channel, reaching people where they are and in ways they understand. And most importantly, there should be opportunities for two-way engagement. Programs that invite public feedback, pilot projects with built-in transparency, and initiatives that empower citizens to shape AI practices can all help foster trust and legitimacy.
AI is about people first and foremost
In the end, responsible AI in local government isn’t just about technology – it’s about people. By listening to public concerns, educating citizens, and designing systems that reflect shared values, leaders can ensure that AI becomes a trusted partner in building better communities.
Find out more
If you’re interested in exploring more about how public trust in AI can be built through thoughtful policy, transparent communication, and citizen engagement, the full CFE study offers valuable insights. It includes country-specific findings, practical recommendations for leaders, and a deeper dive into the Theory of Planned Behaviour as applied to emerging technologies. Whether you’re a policymaker, a business partner, or a curious citizen, understanding the human side of AI adoption is key.
Read the full article
David, Anne, Yigitcanlar, Tan, Desouza, Kevin, Mossberger, Karen, Cheong, Pauline, Corchado Rodriguez, Juan, Beeramoole, Prithvi, & Paz, Alexander (2025). Public Perceptions of Responsible AI in Local Government: A Multi-Country Study Using the Theory of Planned Behaviour. Government Information Quarterly, 42(3), Article 102054.
Or connect with our research team to learn how you can contribute to responsible AI in your community. Email future.enterprise@qut.edu.au