AI and Democracy: Scholars Unpack the Intersection of Technology and Governance

By Rick Harrison
April 23, 2025

Abstract AI-created image of faces in a collage, linked by circuitry and surrounding a digitally pixelated face on a smartphone screen

When Facebook launched in 2004, it took 10 months to reach 1 million users. Twitter took two years. Spotify took five months, and Instagram took 2.5 months.

When ChatGPT launched in 2022, it reached 1 million users in five days. One year later, the large language model (LLM) generative artificial intelligence (AI) chatbot had 100 million weekly users. As of March 2025, that number had reached 500 million.

“AI algorithms are becoming more powerful and affordable,” said Shir Raviv, a postdoctoral research fellow at Columbia University and a nonresident fellow with ISPS’s Democratic Innovations program. “Millions of people now use these tools daily, reshaping how citizens access and process information, communicate with elected officials, organize politically, and participate in society. The stakes and implications of this technology for democracy are far-reaching.”

Earlier this month, Raviv organized a conference bringing together a diverse group of scholars to explore the various ways in which AI and democracy increasingly intersect: the challenges AI poses to democratic processes, effective and responsible governance frameworks to address these challenges, and the potential for AI to enhance democratic participation and representation.

Democratic Innovations aims to identify and test new ideas for improving the quality of democratic representation and governance.

“So much has changed with artificial intelligence in just the year since Shir held the first meeting of this group in 2024,” said ISPS Director Alan Gerber, Sterling Professor of Political Science. “We can see its influence everywhere. Thankfully, Shir has assembled an interdisciplinary dream team of talented scholars who can help us understand what is happening now and where we are likely heading.”

Peter John Loewen, Harold Tanner Dean of the College of Arts and Sciences and professor of government at Cornell University, highlighted the importance of understanding public beliefs about AI to anticipate political backlash. For example, he discussed how people often blame the decline in manufacturing jobs on offshoring rather than automation, despite evidence to the contrary.

Loewen presented evidence showing that survey respondents believe AI will increase profits, demand, and quality of products and services for companies that adopt the technology. But they also believe AI will decrease wages and hiring and increase inequality.

“The disruption from AI is akin to the industrial revolution,” Loewen said. “It’s not a question of if, but when and how deeply it will reshape our labor markets.”

Alexis Palmer, a Neukom Fellow affiliated with the Government Department and the Minds, Machines, and Society Group in the Computer Science Department at Dartmouth College, compared the quality of political arguments generated by LLMs to those written by humans. She found that the two are often indistinguishable, though people are more likely to prefer arguments they are told were written by humans.

“Knowing the author of an argument — whether human or AI — profoundly affects its persuasiveness, revealing our deep-seated biases towards human-generated content,” Palmer said, while noting that could change. “As AI becomes more integrated into daily life, public perception will evolve, potentially shifting from skepticism to acceptance and even reliance.”

Shir Raviv speaks at a conference table in a classroom

Ryan P. Kennedy, Timashev Chair of Data Analytics and professor of political science at The Ohio State University, shared research findings that people assign more blame for mistakes to judges who use AI advice in sentencing decisions — regardless of whether the judges agree or disagree with the advice. He and his co-authors also found that while AI can reduce the blame on individual soldiers using autonomous weapons to select targets in a war zone, the technology increases blame on the military system and policy.

“Understanding the mechanisms behind blame allocation and public trust in AI systems is essential for developing responsible and effective AI governance,” Kennedy said.

Seulki Lee-Geiller, associate research scientist with ISPS’s Democratic Innovations program, presented the results of a 3,000-participant survey experiment finding that policies developed with AI generally received more favorable public evaluations, especially when paired with civic consultation. However, she said that providing detailed information about AI use can overwhelm and confuse the public, creating a “transparency dilemma.”

“Our survey experiment reveals that incorporating citizen consultations in AI-driven policymaking can enhance public trust and satisfaction,” Lee-Geiller said. “But it must be done thoughtfully, highlighting the need for clear and accessible communication.”

Solon Barocas, a principal researcher for Microsoft Research in New York and an adjunct associate professor of information science at Cornell University, discussed the implications of using generative AI for high-stakes decisions. Historically, such decisions have been made with predictive AI — models designed to predict a specific outcome based on historical data. Users of generative AI, on the other hand, have often viewed it as a tool for creating new content, such as written pieces, computer code, and images.

But Barocas said nothing prevents people from using generative AI to answer questions more traditionally handled by predictive AI, such as decisions about hiring, loans, and college admissions. He said there is growing interest in using generative AI for decisions because it is easier to access, costs less to use, and requires little or no new data beyond the original trove of training material.

However, Barocas warned that generative AI’s lower individual barriers to adoption make it harder to set top-down institutional policies, leaving the results less visible and subject to less scrutiny. In addition, he said decision-making based on generative AI may lack many of the qualities commonly valued in decision-making based on predictive AI: it can be more informal, less consistent, and less insulated from irrelevant or even illegal factors. He cautioned that many beliefs about the appropriate way to regulate the use of AI in decision-making will need to be reconsidered as generative AI displaces predictive AI.

Raviv presented co-authored work exploring the potential polarization of public opinion on AI regulation as a key challenge for implementing effective regulation. Her findings reveal an emerging partisan divide in media coverage of AI but also show that polarization of the public debate is not inevitable. She and her colleague found that rather than directly adopting elite positions, voters rely on trusted sources to determine which information deserves attention, which in turn shapes their policy preferences without simple partisan bias.

“Effective communication about the real-world impacts of AI can bridge partisan divides and build broader coalitions for AI governance frameworks that prioritize the public interest over partisan agendas,” Raviv said.

Daniel S. Schiff, assistant professor of political science at Purdue University and co-director of the Governance and Responsible AI Lab (GRAIL), shared a multinational investigation of the impact of company ethics commitments and audits on consumer trust in AI. He and his co-authors found experimental evidence that company AI ethics practices significantly boost consumer trust in AI products and services in both the United States and Germany, though more costly strategies, such as independent audits, were not necessarily more effective than simpler ethics statements.

“But building sustainable consumer trust in AI will require ongoing work,” he said. “For the market to work successfully, it will demand better consumer literacy about AI ethics, government support of the auditing ecosystem, and affirmation of the empirical benefits to organizations which adopt responsible practices.”

Seulki Lee-Geiller speaks at a lectern in a classroom

Yamil R. Velez, assistant professor of political science at Columbia University, showcased how he and a co-author used AI-assisted surveys to explore public opinion, focusing on issues that are often excluded from the formal agenda. He found that a substantial majority of participants saw their issue priorities reflected in Congress, suggesting the government considers a broader range of issues than previously thought.

“Policies addressing urgent, extreme, or novel issues are less likely to be included in the agenda, highlighting the challenges of representing diverse public concerns,” Velez said.

Kaylyn Jackson Schiff, assistant professor of political science at Purdue University and an ISPS external faculty fellow, presented the results of experiments she conducted with colleagues as an ISPS Democratic Innovations postdoctoral associate to assess the broader impact of generative AI on political communication. She found that AI tools like ChatGPT increase the perceived ease of contacting representatives, but they might not motivate individuals to engage more frequently in political communication.

“High public support for using AI in political communication reflects the potential of these tools to enhance democratic processes,” Schiff said. “But while there is support for AI use, there are also recognized downsides — such as growing numbers of AI-generated emails in representatives’ email inboxes — and other perceived barriers to political communication, highlighting the complexity of AI as a tool for democracy.”

José Ramón Enríquez, a postdoctoral scholar at Stanford University, presented a new platform, deliberation.io, designed to facilitate effective and scalable deliberation using generative AI. His team found that participants in the experimental conditions showed more willingness to compromise, reported higher satisfaction, and felt more respected and represented.

“Generative AI can act as a guide, not an actor, to amplify agency, respect, and inclusiveness in deliberative processes, ensuring that every voice is heard and valued,” Enríquez said. “The future of democracy lies in our ability to leverage AI to facilitate meaningful dialogue, bridging gaps and fostering understanding among diverse groups.”

In between conference sessions, Raviv discussed the evolving nature of AI and its impact on society. She noted that while technologies evolve rapidly, many human cognitive biases and social behaviors remain stable. She said our challenge is to distinguish cases where AI simply amplifies existing patterns from those where it introduces new dynamics that require new concepts and frameworks.

“We should also avoid thinking about AI as a monolithic technology,” Raviv said. “Different AI tools create different challenges and opportunities, requiring targeted governance strategies to ensure they support rather than undermine democracy. And that was precisely the goal of this conference.”