Democracy in the Age of AI: Scholars Explore Risks, Opportunities, and Innovation

How is artificial intelligence shaping democracy?
“There are far more unknowns than knowns,” said Seulki Lee-Geiller, a research scientist with Yale’s Institution for Social and Policy Studies (ISPS), introducing a conference of scholars convened last month to explore ways of innovating democracy in an era of rapid technological transformation. “I want you to share your thought-provoking questions and challenge our assumptions so we can all leave with better questions to guide our understanding of this evolving change.”
The conference was sponsored by ISPS’s Democratic Innovations, a program that identifies and tests new ideas for improving the quality of democratic representation and governance.
“It can be difficult to analyze fast-moving technological developments as they are happening,” said Alan Gerber, ISPS director and Sterling Professor of Political Science. “Seulki Lee-Geiller and her distinguished guests understand these questions cut across society and do not belong to a single field of research. I am pleased that ISPS and Democratic Innovations can support efforts by Lee-Geiller and her colleagues to explore and address these emerging issues.”
Daniel Schiff, an assistant professor of political science at Purdue University and co-director of the Governance and Responsible AI Lab (GRAIL), presented new research on how human resources professionals choose AI tools and what values shape those choices.
He and his co-authors surveyed more than 1,400 professionals in the public and private sectors and found that both efficiency and ethical values mattered. HR professionals were more likely to select AI systems that were low-cost, easy to integrate, and transparent, and that included human oversight or approval. Tools that provided opportunities for consent also increased adoption preferences, while privacy protections had little effect.
At the same time, many HR professionals viewed AI as overused for high-stakes tasks like hiring and performance evaluation but underused for improving employee well-being and supporting worker voice. The research also found stark differences across sectors.
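Findings like these are typically estimated from choice-based survey data. As a purely illustrative sketch, assuming invented attribute names and simulated choices rather than the study’s actual design, a logistic regression can relate a tool’s attributes to whether a respondent selected it:

```python
# Hypothetical sketch: a logistic regression relating tool attributes to
# whether a surveyed professional selected the tool. Variable names and
# data are invented for illustration, not the study's actual design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1400
attrs = ["low_cost", "easy_integration", "transparent",
         "human_oversight", "consent_option", "privacy_protection"]
df = pd.DataFrame({a: rng.integers(0, 2, n) for a in attrs})

# Simulated choices reward cost, integration, transparency, oversight, and
# consent; privacy gets a zero weight, mirroring the reported pattern.
weights = [0.8, 0.6, 0.5, 0.7, 0.4, 0.0]
utility = -1.0 + sum(w * df[a] for w, a in zip(weights, attrs))
df["chosen"] = (rng.random(n) < 1 / (1 + np.exp(-utility))).astype(int)

model = smf.logit("chosen ~ " + " + ".join(attrs), data=df).fit()
print(model.summary())  # privacy_protection's coefficient lands near zero
```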
“The public and private sectors approach AI with very different priorities,” Schiff said. “Public sector professionals tend to weigh fairness, transparency, and human oversight more heavily, which raises the question of what policies and governance structures will encourage responsible AI adoption across organizations.”
Peter Loewen, Harold Tanner Dean of the College of Arts and Sciences and professor of government at Cornell University, presented his work on gender and attitudes toward AI.
He found that women perceive AI as riskier than men do, especially when job outcomes are uncertain. Women are more likely to associate AI with job loss, economic insecurity, and fairness concerns, he said, while men are more likely to associate AI with science fiction tropes, such as robots taking over and dystopian futures, or with misinformation, such as deepfakes and manipulation.
In addition, women with a university education were particularly skeptical of AI, suggesting that higher education does not always correlate with tech optimism.
“Women are concerned about jobs,” Loewen said. “Men are worried about science fiction ending the world.”
Loewen proposed that AI governance should account for such gendered risk perceptions to avoid exacerbating inequality.
Paolo Agnolin, a postdoctoral research associate at the Initiatives on Contemporary European Affairs (ICEA) at Princeton University, built a granular dataset across 15 Western European countries to track union membership by region and industry over time.
He found that automation shifts jobs from unionized sectors, such as manufacturing, to non-unionized sectors, such as logistics and services. This reallocation leads to a decline in union density, especially in areas with high adoption rates of robotics.
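As a toy illustration of the mechanism, assuming invented regions, industries, and numbers rather than Agnolin’s actual data, the reallocation can be seen directly in a small panel:

```python
# Toy sketch of the panel construction described above; regions,
# industries, and numbers are invented, not Agnolin's actual data.
import pandas as pd

panel = pd.DataFrame({
    "region":   ["North", "North", "North", "North"],
    "industry": ["manufacturing", "services", "manufacturing", "services"],
    "year":     [2000, 2000, 2015, 2015],
    "members":  [400, 50, 250, 80],
    "employed": [1000, 500, 700, 900],
})

# Region-year union density: as employment shifts from unionized
# manufacturing to non-unionized services, overall density falls.
totals = panel.groupby(["region", "year"], as_index=False)[["members", "employed"]].sum()
totals["union_density"] = totals["members"] / totals["employed"]
print(totals)  # density drops from 0.30 (2000) to about 0.21 (2015)
```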
“People are not working in factories as much anymore,” Agnolin said, noting the political fallout of such trends. “As unions disappear, there is more fertile ground for radical right political parties.”
Eduardo Albrecht, an associate professor and director of international relations and diplomacy at Mercy University and adjunct professor of international and public affairs at Columbia University, discussed how AI is transforming political institutions and the nature of citizenship.
As an anthropologist, Albrecht outlined the technology-driven transformations humanity has seen over the last 6,000 years. First, the written word allowed religious and state bureaucracies to develop. Then the printed word had a shattering effect on those bureaucracies, fueling the Reformation and paving the way for the Enlightenment and modern democracy.
“These gear-shift moments are important,” Albrecht said. “And what moment is happening now?”
He suggested that the flood of data, shared with governments and analyzed by powerful computers, changes citizens’ relationship with the state and themselves, as government grows more able to see into minds through search histories and other digital habits.
“Abstract surveillance has become actual surveillance, where we don’t have that private space in our minds,” he said. “In a sense we have given up on privacy.”
Albrecht envisioned a future in which data forces citizens to commingle with the state, where each person’s data double represents them, allowing the state to micromanage the population at the speed of AI.
“We cannot participate in this type of virtual institution as human beings, because we cannot operate at that speed,” he said. “This means we need to talk about some sort of new representation, like citizen AI avatars, that goes beyond traditional democratic representative forms of government.”
And even if legacy institutions were to try to stop such an evolution, Albrecht suggested, the change is likely inevitable in the long term.
“From an anthropological perspective, this is a train we cannot stop,” he said.
Johannes Himmelreich, associate professor of public administration and international affairs at Syracuse University, discussed what makes a public servant do a “good job” and how automation might undermine that dynamic.
He argued that doing a task well, as measured by efficiency, is not the same as doing a job well, as measured by ethics or good judgement. He cited the example of Stanislav Petrov, a Soviet lieutenant colonel credited with averting a nuclear war with the United States in 1983 by ignoring protocol and judging that missile launches reported by the early-warning system were a false alarm.
“If we automate a task without an eye to what makes the job done well, we might lose something really important,” Himmelreich said. “We want to automate deliberately and thoughtfully.”
Jason Jeffrey Jones, associate professor of sociology at Stony Brook University, presented his work on an automated system to track public opinion on AI in real time.
The online dashboard fields daily surveys, updates the results automatically, and publishes open data. So far, the project has shown that support for AI peaked in spring 2025 and then declined, that women consistently support AI at lower rates than men, and that Republicans overtook Democrats in support for AI early in 2025.
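As a minimal sketch of what the daily update step of such a tracker might look like, with hypothetical field names and toy responses standing in for the project’s actual survey and schema:

```python
# Hypothetical sketch of a tracker's daily update step: score one day's
# responses and append the averages to an open CSV a dashboard can read.
# Field names, scoring, and file layout are assumptions, not the
# project's actual schema.
import datetime
import os
import pandas as pd

def append_daily_average(responses: pd.DataFrame, path: str = "ai_support.csv") -> None:
    """Append today's mean AI-support score, overall and by gender."""
    by_gender = responses.groupby("gender")["support_ai"].mean()
    row = {
        "date": datetime.date.today().isoformat(),
        "mean_support": responses["support_ai"].mean(),
        "mean_support_women": by_gender.get("woman"),
        "mean_support_men": by_gender.get("man"),
    }
    pd.DataFrame([row]).to_csv(path, mode="a", header=not os.path.exists(path), index=False)

# A scheduler such as cron would call this once per day with that day's
# survey responses, keeping the published series current.
todays = pd.DataFrame({"gender": ["woman", "man", "man"], "support_ai": [2, 4, 3]})
append_daily_average(todays)
```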
Beyond tracking AI attitudes, Jones said he wants to build many more such online tools.
“Social science is just too slow,” he said. “I want to shorten feedback loops.”
The conference ended with a roundtable discussion wrestling with the tension between technological inevitability and democratic resilience. While some participants emphasized designing new institutions, others stressed the importance of preserving human judgement, civic rituals, and ethical safeguards.
“I think technology has always been essential to a well-running state,” Himmelreich said. “But we’ve lost the ability in government to build technology. I think if we had people in a democracy who could innovate tools internally, that would be much better than private industry coming in with their own ideas and motivations.”
Loewen stressed the importance of citizens knowing whether they are dealing with a human or a computer in government.
“There will be times we don’t care at all that things are automated on the back end, but there will be times when we care a lot,” he said. “I’m not sure why, but I think it would be nice to know when in the chain a decision was made by a robot or a human. We want to know what kind of world we are navigating.”