Can AI save democracy or destroy it? New study maps political future of machine power

CO-EDP, VisionRI | Updated: 03-04-2025 18:16 IST | Created: 03-04-2025 18:16 IST

AI’s growing integration into democratic processes could either enhance citizen participation or erode electoral integrity and civil liberties, warns a new study posted on arXiv. Titled "Artificial Intelligence and Democracy: Towards Digital Authoritarianism or a Democratic Upgrade?," the study, authored by Associate Professor Fereniki Panagopoulou, offers a comprehensive investigation into how artificial intelligence is reshaping the foundations of democracy.

The study addresses a critical question: does AI empower democratic institutions, or does it threaten to replace human political will with algorithmic manipulation? While AI does not yet vote or legislate, its use in political messaging, public decision-making, and electoral campaigning has intensified to the point that it increasingly defines how citizens interact with democratic systems. The most direct impact, Panagopoulou argues, is not AI seizing control but the subtle transformation of electoral and governance processes through data exploitation, behavioral manipulation, and algorithmic decision-making.

One of the study’s key areas of concern is misinformation. AI-generated content, especially from generative models, is rapidly replacing traditional campaign material with highly persuasive synthetic media. The 2024 U.S. presidential election served as a dramatic example, with both major parties using AI to create deepfake videos and fake endorsements. These manipulations were often indistinguishable from reality and widely circulated on social media. According to the study, such disinformation directly threatens the majoritarian foundation of democracy, allowing those with access to advanced AI tools to distort public perception and potentially influence electoral outcomes without voter awareness or consent.

Data exploitation is another key issue raised in the study. Data is described as the “new gold,” with political campaigns increasingly relying on AI to mine psychographic and behavioral profiles from social media activity and other digital traces. Campaign strategists use this data to craft tailored messages designed not merely to inform but to manipulate. Voters’ vulnerabilities, whether emotional, ideological, or psychological, can be targeted in ways that alter voting behavior without voters understanding how or why. The study notes that this shift represents a move from traditional persuasion to what could be considered automated coercion.

Manipulation through AI is further illustrated by recent cases like the Cambridge Analytica scandal and a 2024 investigation into TikTok’s role in Romania’s election, where 25,000 bot accounts were allegedly activated to support a candidate with ties to foreign actors. The concern extends beyond voter deception to what Panagopoulou calls a “privatization of elections,” where tech companies, not voters, set the terms of engagement. These corporations own the infrastructure that controls how information flows and what citizens see, effectively becoming gatekeepers of public discourse without democratic oversight.

The study then poses a counterpoint: can AI also be used to improve democracy? The answer, according to the author, is conditionally yes. The study explores how AI tools are being employed to support deliberative democracy through citizen assemblies, public consultations, and large-scale participatory experiments. Examples include the Taiwanese government’s use of AI-driven summarization tools to aggregate public opinion, as well as the LEXIMIN algorithm, which selects demographically representative citizen panels by lottery (a simplified sketch follows below). These practices allow more inclusive public engagement, potentially addressing representation gaps seen in traditional systems.
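To make the selection step concrete, here is a minimal Python sketch of quota-constrained sortition. It is not the actual LEXIMIN algorithm, which solves an optimization problem to maximize the lowest selection probability across volunteers; this toy version simply redraws random panels until demographic quotas are met. The volunteer pool, attributes, and quotas are invented for illustration.

```python
import random
from collections import Counter

def select_panel(pool, quotas, panel_size, seed=None):
    """Draw a panel that satisfies per-attribute quotas.

    pool:   list of dicts, e.g. {"id": 1, "gender": "f", "age": "18-34"}
    quotas: {attribute: {value: exact count required}}
    """
    rng = random.Random(seed)
    for _ in range(10_000):  # rejection sampling: redraw until quotas are met
        panel = rng.sample(pool, panel_size)
        counts = {attr: Counter(p[attr] for p in panel) for attr in quotas}
        if all(counts[attr][val] == need
               for attr, targets in quotas.items()
               for val, need in targets.items()):
            return panel
    raise RuntimeError("no quota-satisfying panel found; quotas may be infeasible")

# Hypothetical volunteer pool and quotas for a 4-person panel.
pool = [{"id": i,
         "gender": "f" if i % 2 else "m",
         "age": "18-34" if i < 10 else "35+"} for i in range(20)]
quotas = {"gender": {"f": 2, "m": 2}, "age": {"18-34": 2, "35+": 2}}

panel = select_panel(pool, quotas, panel_size=4, seed=42)
print([p["id"] for p in panel])
```

The difference from this sketch is what makes LEXIMIN notable: rather than accepting whatever quota-satisfying panel a random draw produces, it distributes selection chances as evenly as the quotas allow, so no volunteer is systematically frozen out.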

Moreover, AI has been used to facilitate political communication across language barriers. During the 2024 elections, politicians in countries such as India, Japan, and the United States used AI translation tools and avatars to reach multilingual audiences. In Japan, an AI avatar answered thousands of voter questions on a candidate’s behalf, helping a previously unknown independent secure fifth place in a crowded race. AI-powered chatbots and campaign assistants are also enabling new forms of real-time political engagement.

At the same time, the study warns of new challenges. AI-facilitated participation risks excluding the digitally illiterate and creating new forms of inequality. Citizens without digital skills may be unable to engage in AI-powered consultations, while algorithmic systems could be designed with biases that marginalize underrepresented groups. In parallel, the proliferation of unregulated participation, especially where identification requirements are lax, raises concerns about the legitimacy and security of expanded democratic involvement.

A critical point raised in the study is whether this technological expansion truly revitalizes representative democracy or merely creates the illusion of participation. When decisions appear crowdsourced but are guided or filtered by opaque algorithms, the foundational principles of accountability and sovereignty may be undermined. Panagopoulou emphasizes that AI must assist in consultation, not replace political responsibility or serve as a scapegoat for unpopular decisions.

In addressing the broader regulatory landscape, the study underscores the role of recent legislation such as the Digital Services Act, the AI Act, and the General Data Protection Regulation. These frameworks are viewed as essential but insufficient. Effective oversight, the study argues, must include proactive measures like watermarking AI-generated content, enhancing AI literacy, enforcing algorithmic transparency, and democratizing data access. Only through structural regulation and public accountability can AI be aligned with democratic values.
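As a rough illustration of one such proactive measure, the sketch below shows a provenance-style approach to marking AI-generated content. Production systems rely on statistical token-level watermarks or signed-metadata standards such as C2PA; this simplified Python version just attaches a keyed HMAC tag that a platform holding the same secret could verify. The key, model identifier, and payload format are all invented for illustration.

```python
import hmac, hashlib, json

SECRET_KEY = b"generator-signing-key"  # hypothetical shared secret

def tag_content(text: str, model_id: str) -> dict:
    """Bundle generated text with a verifiable provenance tag."""
    payload = {"text": text, "model": model_id}
    message = json.dumps(payload, sort_keys=True).encode()
    payload["tag"] = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return payload

def verify_content(payload: dict) -> bool:
    """Recompute the tag over the content and compare in constant time."""
    message = json.dumps({"text": payload["text"], "model": payload["model"]},
                         sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(payload.get("tag", ""), expected)

stamped = tag_content("Vote on Tuesday.", model_id="example-model-v1")
print(verify_content(stamped))            # True: tag matches content
stamped["text"] = "Vote on Wednesday."    # tampering breaks verification
print(verify_content(stamped))            # False
```

The design point is that any alteration of the text or the claimed model invalidates the tag, giving a platform a cheap authenticity check before content circulates; what the study's watermarking proposal adds is making such marks survive even when content is copied out of its original wrapper.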

Lastly, the paper calls for conditional acceptance of AI in democratic life, emphasizing that technophobia is unproductive, but blind optimism is dangerous. As the boundary between public deliberation and algorithmic governance becomes increasingly blurred, the future of democracy may depend on society’s ability to balance innovation with institutional safeguards.

FIRST PUBLISHED IN: Devdiscourse