How government use of AI could hurt democracy
A new study finds that government use of artificial intelligence can significantly erode public trust and citizens' sense of democratic control: more than 81% of surveyed citizens reported a loss of democratic control once they were made aware of AI risks. The research highlights a critical disconnect between governments eager to automate processes and citizens who grow increasingly resistant once they understand the potential consequences of AI decision-making in public services.
The study’s findings: Researchers at Ludwig Maximilian University of Munich surveyed approximately 1,200 UK citizens about their attitudes toward AI in government tasks like tax processing, welfare applications, and bail assessments.
- When participants learned about both AI benefits and risks—including difficulty understanding AI decisions, growing government dependence, and lack of clear paths to contest decisions—trust in government declined significantly.
- The percentage of people reporting loss of democratic control jumped from 45% to over 81% in scenarios where government became increasingly dependent on AI.
- Demand for less AI in government surged from under 20% in baseline scenarios to more than 65% when participants understood both benefits and risks.
Why this matters: “Focusing only on short-term efficiency gains and shiny technology risks triggering public backlash and contributing to a long-term decline in democratic trust and legitimacy,” says Alexander Wuttke, the study’s lead researcher.
Real-world consequences: The stakes of government AI failures can be devastating for citizens, as evidenced by existing automation disasters.
- US state efforts to automate public benefits processing have led to tens of thousands of people being wrongly accused of fraud.
- Some affected individuals were forced into bankruptcy or lost their homes as a result of these errors.
- “Government mistakes have enormous, long-reaching impacts,” says Hannah Quay-de la Vallee at the Center for Democracy & Technology, a Washington DC-based advocacy organization.
The bigger picture: While democratic governments could potentially use AI responsibly while maintaining citizen trust, success stories remain scarce compared to documented failures.
- Few examples exist of government AI deployments that have preserved public confidence.
- The research suggests that transparency about AI risks, rather than focusing solely on efficiency benefits, is crucial for maintaining democratic legitimacy.
- The study underscores the need for governments to prioritize citizen trust and democratic control over technological efficiency gains.