Reimagining Democracy: Yale Hosts Global Experts on AI, Governance, and the Future of Participation

Imagine, if you can, reinventing democracy from scratch.
How much power should reside in any individual leader? How much influence should corporations wield? Can randomly selected citizens accomplish more than elected representatives? Is there a role for artificial intelligence to accelerate political processes and make them fairer?
Science fiction author Eugene Fischer has an idea. He proposes that everyone get a simple ballot to vote for candidates or issues, just like today. But behind the scenes, an encrypted algorithmic system analyzes your voting history and adjusts how much weight your vote carries over time. If you regularly vote for the same outcome even when you lose, you can be rewarded for consistency: it shows you really care. If you change your mind after thinking through an issue, that matters as well: you can be rewarded for your reflection.
Fischer described his system as something like loyalty points for democracy.
“The idea is to reward people for persistence or thoughtfulness,” he said. “And give them more reason to believe their votes really matter, without making voting itself any harder.”
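Fischer's description is conceptual; he offered no implementation. Purely as an illustrative sketch of the kind of weighting rule he describes — where every function name, parameter, and value below is invented for illustration — such a system might update a voter's weight after each election like this:

```python
# Hypothetical sketch (not Fischer's actual algorithm): nudge a voter's
# weight upward for persistence (voting the same way even after losing)
# and for reflection (changing one's vote after deliberation).

def updated_weight(weight, voted_same_as_last_time, lost_last_time,
                   changed_after_deliberation,
                   consistency_bonus=0.05, reflection_bonus=0.05,
                   max_weight=2.0):
    """Return the voter's new weight after one election cycle."""
    if voted_same_as_last_time and lost_last_time:
        weight += consistency_bonus   # persistence despite losing
    if changed_after_deliberation:
        weight += reflection_bonus    # thoughtful change of mind
    return min(weight, max_weight)    # cap so no single vote dominates

# A voter starts at the baseline weight of 1.0 and keeps backing a
# losing position; their weight drifts slightly upward over time.
w = 1.0
w = updated_weight(w, voted_same_as_last_time=True,
                   lost_last_time=True,
                   changed_after_deliberation=False)
```

The cap is one way a real design might keep the reward loop from concentrating power, in keeping with Fischer's point that voting itself should not get any harder.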
Fischer spoke last month at the fourth International Workshop on Reimagining Democracy (IWORD), a conference hosted this year at Yale’s Institution for Social and Policy Studies. Political scientists, economists, sociologists, public servants, fiction writers, lawyers, journalists, futurists, computer scientists, and technologists gathered for two days to “suggest, develop, and debate ideas equal to our present and to the future, as best we can see it.”
“This is the most AI-heavy year we’ve had — not by design, but because that’s what’s in the air,” said Bruce Schneier, a faculty fellow at the Berkman Klein Center for Internet & Society at Harvard University, a best-selling author, founder of the IWORD format, and co-organizer of the conference. “I bring fiction writers because imagining new forms of governance under constraints is exactly what we need.”
Hélène Landemore, ISPS faculty fellow and Damon Wells ’58 professor of political science, hosted and helped organize the conference with Kevin Elliott, a political theorist and lecturer in the Ethics, Politics, & Economics Program. Landemore helps lead ISPS’s Democratic Innovations, a program that identifies and tests new ideas for improving the quality of democratic representation and governance.
“What I love about this workshop is that it brings people from across disciplines, methods, and life experiences who would not necessarily talk to each other otherwise,” Landemore said. “At this juncture in history, when light-speed technological acceleration meets governance paralysis and unprecedented economic concentration of power, we need bold, outside-the-box thinking. It isn’t about defending democracy as we know it — it is about building the next, more resilient version of it, with institutional design and computational tools in the service of human relations.”

Landemore’s own presentation, shared with her former student and Yale senior Gryffins Wilens-Plumley, introduced the design for a self-governing and AI-augmented citizens’ assembly at the state level on the matter of local public services, a project she is pursuing in partnership with the Connecticut Conference of Municipalities, the Connecticut Comptroller’s Office, and the University of Connecticut.
Elliott noted how many participants spoke of AI helping to improve political processes by filling in gaps in people’s knowledge and facilitating deliberation between them.
“Many speakers emphasized human-AI ensembles, using AI to improve actual deliberations of real citizens by mapping their disagreements and synthesizing contrasting perspectives into common ground,” Elliott said. “In addition to improving the efficiency of what are often otherwise quite time-intensive processes, this pairing allows for oversight of AI outputs and for all participants to be able to better identify with the products of AI-assisted democratic decision-making.”
Elliott’s conference presentation addressed the difficulty of diagnosing the death of democracy in a country: observers rely on shortcuts, and the signs often become clear only after democracy is already dead. Current metrics, such as incumbents losing elections, work only after the fact.
He called for strong political judgment over neutrality and for framing democracy as a partisan project.
“Democracy is not born from consensus — it’s a fighting creed,” Elliott said. “Historically, it was imposed by actual political struggle, often involving violence. Rebuilding it will require organizing and conflict, not just compromise.”
Nick Chedli Carter of Spring Projects and Resilient Democracy — and former director of democracy initiatives at Harvard Kennedy School’s Ash Center — described how mainstream pro-democracy academia, practice, and framing emphasize procedures, norms, tactics, and institutions but neglect belonging, justice, and deep purpose. In contrast, he said, movements and political projects that draw on narratives such as Christian nationalism and, more broadly, religion and spirituality offer certainty, identity, and transcendence.

As technology accelerates and common problems become more complex and interconnected, he called for reconnecting democracy with deeper meaning through bolder intellectual inquiry, acknowledgment of the “rational existential dread” felt by large swaths of the electorate, and more openness to spiritual, alternative, and creative points of view, not just for partisan gain but ultimately for democratic renewal.
“Politics without the sacred is administration,” Carter said. “And nobody storms the Capitol for better administration.”
Jon Alexander, a non-resident democracy fellow with Harvard Kennedy School’s Ash Center for Democratic Governance and Innovation, argued that we are living through the collapse of what he called humanity’s consumer story — the dominant narrative that shaped 20th-century institutions — in which people are independent individuals pursuing self-interest and institutions exist to deliver products and services efficiently. He called the emerging paradigm “the citizen story,” in which people are interdependent contributors and institutions facilitate participation.
Alexander said consumer framing contributes to and therefore cannot solve ongoing crises of loneliness, inequality, and ecological collapse. In the new model, the state and institutions enable collaboration, organizing and empowering citizens to move from protest to co-creation.
“We need a massive moment of institutional renewal — designed for people as citizens, not consumers,” he said.
Most of the participants grappled with how AI might serve as an opportunity for such renewal, while acknowledging the many risks.
Liz Barry, executive director of Metagov, a nonprofit research and infrastructure laboratory seeking to facilitate self-governance through technology, highlighted the need for digital deliberation systems to be equipped with decentralized IDs and personal data sovereignty to allow for the training of agents on one’s own data.
She warned against platform lock-in, where user data becomes trapped in one corporate-owned system, limiting choice. She advocated for cooperatives allowing groups of people to govern the use of their combined data. And she opposed the proliferation of algorithms that reduce citizens’ opinions to raw data for manipulation rather than fostering genuine deliberation. She said that participating in government should focus not only on outcomes but also on developing the capacity for self-rule.
“The digital scaffolding that we welcome people into has to leave them stronger as they head back into their lives — more capable of engaging with complexity and change,” Barry said. “And fortified with solidarity to advocate for what they now know they hold in common. If we’re only designing for preference extraction, we’re merely blending information smoothies for dictators to drink.”
Jon Evans, a novelist and journalist now working as a solutions architect for Meta Superintelligence Labs, predicted that with richer world models and scalable conditional forecasts, technology companies might soon develop “Moneyball”-type tools to analyze politics, optimizing candidate selection, messages, and county-level strategies. Though, he said, such a system could invite the cynical manipulation of voters.
“AI forecasting might just make the future somewhat more legible,” Evans said.
Tina Eliassi-Rad, Inaugural Joseph E. Aoun Professor of Computer Science at Northeastern University, traced the evolution of scientific paradigms to the current AI-driven model, raising concerns about measurement bias, illustrating challenges in understanding the human-AI feedback loop, critiquing the bias of large-language models (LLMs), and arguing that science should prioritize understanding mechanisms, not just forecasting outcomes.
“Even if AI turns out to be an oracle — and I don’t think it will — we need to emphasize explanation over prediction,” Eliassi-Rad said.

Michiel Bakker, an assistant professor at MIT and a senior research scientist at Google DeepMind, discussed opportunities for AI to improve Community Notes, the crowdsourced fact-checking system on X that allows users to collaboratively add contextual explanations and fact-checks to posts. He advocated for a system in which LLMs research and suggest notes while users provide oversight through the platform’s existing approval process.
“This could have a huge impact on the quality and coverage of X Community Notes, but more importantly, it is a really exciting blueprint for collective knowledge generation,” Bakker said. “Think AI-assisted Wikipedia with human approval.”
Deb Roy, a professor of media arts and sciences at MIT and director of the MIT Center for Constructive Communication (CCC), explained how humans speak with stakes — social, legal, and physical consequences. But AI speaks fluently while lacking accountability.
“It has no skin in the game,” Roy said, noting how this undermines trust, cooperation, and legitimacy — the foundations of democracy. He called for embedding accountability channels in AI systems.
“Any agent allowed persuasive output must carry enforceable accountability,” he said. “Otherwise, democracy itself is at risk.”
Judith Donath, a faculty associate at Harvard’s Berkman Klein Center, outlined the growing calls to grant legal personhood to AI systems. If AI gains free speech rights, regulations limiting chatbot outputs could be found unconstitutional, she said.
“We are meaning-making beings,” Donath said. “Our metaphors — like calling corporations ‘persons’ — shape law and reality.”
Inspired by Donath’s presentation, Ada Palmer, associate professor of early modern European history at the University of Chicago, launched into an impromptu lecture on the treatment of nonhuman beings in history and literature, from early stories in which nonhumans lacked interiority, to the moral ambiguity introduced by Frankenstein’s monster, to the Japanese robot Astro Boy garnering symbolic legal status as an oppressed minority.
“Science fiction teaches us empathy across difference,” Palmer said. “That’s why it matters in conversations about governance and technology.”
She warned against anti-labor rhetoric when opposing AI rights because “history shows that when rights discourse is weaponized against workers, democracy suffers.”
“We need to stop thinking of democracy as something we inherited and start thinking of it as something we have to build every day,” Palmer said.
Cory Doctorow, an author, journalist, and activist, shared his work with the Electronic Frontier Foundation opposing anti-circumvention laws that make it illegal to modify devices you own, even for lawful purposes. As an example, Doctorow noted that computer printers check for branded ink and that removing that check can be a felony punishable by years in prison and steep fines, rules that entrench monopolies and block privacy tools.
He called for repealing anti-circumvention laws to enable digital sovereignty and interoperability.
“Your margin is my opportunity,” Doctorow said of busting up monopoly power. “We can move fast and break billionaires’ things.”
Nathan Sanders, a data scientist at the Berkman Klein Center, offered a proposal as provocation: Congress could encode AI models into laws to restore legislative supremacy against federal court interpretations. When a case requires interpreting a novel circumstance that lawmakers could not have anticipated in legal language, the AI-encoded law could produce dynamic interpretations aligned with the lawmakers’ expressed intent.
“AI is a mechanism that Congress can use to exert more control over legislative interpretation than is possible with words alone,” Sanders said.
Josh Fairfield, William Donald Bain Family Professor of Law and director of artificial intelligence legal innovation strategy at Washington & Lee University School of Law, said the next 18 months represent a vital moment to regulate AI.
“It’s always either too early to regulate technology or too late — but somehow never the right time,” Fairfield said. “The right time is now.”
Marci Harris, executive director of the nonprofit POPVOX Foundation, warned that policymakers have difficulty understanding the accelerating pace of technology and that old methods of lawmaking are not sufficient. She urged Congress to increase its human and technical resources and focus on laws that specify outcomes and guardrails rather than attempting to micromanage implementation details.
She also emphasized the role of policymakers in engaging the public on the big questions that emerging technologies bring.
“We should use technology to expand human agency and maintain democratic control,” she said. “But that is going to require a much more tech-literate and tech-enabled legislative branch.”
Aditi Juneja, founder and executive director of Democracy 2076, argued for long-term thinking to expand possibilities.
“Over 50 years, parties, leaders, and norms change — so should our imagination,” Juneja said. “We’re at a pivot point. Nothing is predetermined.”
Schneier expressed satisfaction with this year’s workshop, along with an even greater sense of urgency about the progress he designed it to achieve.
“I started this conference four years ago because I felt there was a paucity of fresh ideas for democracy,” he said. “We get stuck thinking about the current systems and incremental ways to improve them.”
IWORD seeks to inspire a different approach, he said.
“I want us to imagine other possibilities for democracy — possibilities that may be unachievable in the near term,” he said. “Because if we don’t know where we want to go, it’s unlikely that a series of incremental steps will get us there.”