AI and Democracy: Yale Conference Sparks Collaboration and Innovation

By Rick Harrison
March 28, 2024


If the promise and risks of artificial intelligence (AI) lie in replacing mundane and difficult human tasks with superhuman access to knowledge and processing power, a Yale conference on how to regulate and govern with this evolving technology relied on decidedly low-tech human interactions.

At times over the two-day event, participants chatted in tight circles of chairs, posted sticky notes to easels, and stood in clusters along a line representing their relative support for various statements concerning the potential for AI to reshape our world.

For example, agree or disagree: “Democratic AI is incompatible with capitalism.” “AI should represent all world societies equally.” “It’s impossible to have a universal set of values governing AI systems.” “AI will replace politicians.”

No statement received unanimous support, and most prompted participants to spread across the length of the room, with those at the extreme ends and in the middle sharing their conflicting views. The enthusiastic debates, conversations, and collaborations continued nonstop over breaks and meals.


“This has been an incredibly inspiring and useful conference,” said Teddy Lee, product manager at ChatGPT creator OpenAI for a team developing processes and platforms enabling democratic inputs to steer AI. “It’s not often enough that technologists and people who are working on improving democracy are in the same room in such a constructive environment.”

Hosted by the Institution for Social and Policy Studies (ISPS), the “Governing (with) AI” conference overlapped with a conference on governing citizens’ assemblies, part of ISPS faculty fellow Hélène Landemore’s new “Governing X” series. Landemore, a professor of political science, helps lead ISPS’s Democratic Innovations program, designed to identify and test new ideas for improving the quality of democratic representation and governance. The “Governing (with) AI” conference was funded by the AI-2050 program at Schmidt Sciences.

“Hélène’s work and this conference demonstrate the value of fostering interaction between concerned and knowledgeable individuals from a variety of backgrounds,” said Alan Gerber, ISPS director and Sterling Professor of Political Science. “Every day it becomes clearer that artificial intelligence will shape our future — including how we govern ourselves. ISPS, and certainly Hélène, are committed to helping promote understanding of AI and its applications so it can be safely and thoughtfully integrated into our society and institutions.”


Allen Gunn, a facilitator with the California-based nonprofit organization Aspiration, led the conference’s activities. With a friendly, upbeat attitude, Gunn encouraged respect, listening, and self-awareness so that everyone had an opportunity to contribute to conversations, as well as the use of language accessible to people from different backgrounds and fields of expertise. To promote collegiality, name tags displayed only first names, with no titles or affiliations.

“We aren’t going to solve anything today, but we can make designs — we can make plans for plans,” Gunn said to introduce the second day’s morning session. “We can get clarity on the work we can do together moving forward.”

Landemore encouraged participants to reconvene in the next six to eight months and perhaps again for a follow-up conference in the spring of 2025. In reflecting on the week’s interactions, she expressed great satisfaction with how everyone embraced the format to best share ideas and forge collaborations.

“The hope was that if we empower people, they will come up with their own ideas and find each other,” Landemore said. “If I had tried to micromanage that into life, I don’t think we would have gotten close.”

In addition to representatives from OpenAI, conference participants included leaders from Google, AI company Anthropic, and Meta, the parent company of Facebook. Academic attendees included experts in political science, sociology, economics, communications, engineering, computer science, data science, law, and ethics. Topics included the potential for using AI to synthesize a collective will, adversarial uses of AI in democracy, mapping use cases for AI in citizens’ assemblies, insufficiencies of global AI regulation, and questions around who owns and controls AI data.


Attendees expressed appreciation for the conference’s intellectual humility and constructive curiosity, its openness and shared sincerity, and how the group’s diversity fostered creativity and provoked challenges to siloed thinking.

“I initially wasn’t sure about the interactive methodology proposed for the conference,” said John Tasioulas, a moral and legal philosopher who, as the inaugural director of the Institute for Ethics in AI at Oxford University, partners with Landemore on a three-year project exploring AI ethics funded through the Schmidt Sciences award. “But it was totally in keeping with the underlying philosophy of the project and just remarkable to see this really productive discussion. How do you get so many people to engage constructively? I thought it was a master class in achieving that.”

Tasioulas said he was particularly pleased with the receptiveness of representatives from OpenAI and Anthropic.

“They were willing to entertain ideas about how to make their decision-making process more responsive to democratic values and human rights,” he said. “Willing to interrogate some of the assumptions that perhaps they uncritically make. So, for me that was a real eye opener. It was great.”

Mahmud Farooque, associate director of the Consortium for Science, Policy and Outcomes and a clinical professor at Arizona State University’s School for the Future of Innovation in Society, also said he enjoyed the conference’s interactive structure, which he felt facilitated deeper conversations more quickly than a typical series of academic presentations and panels.

“Gatherings like this can get at some of these hard answers about these questions we’re facing in terms of democracy, security, and humanity’s survival,” Farooque said. “You know, you can get as hyperbolic as you want when you’re talking about these topics.”


Royal Hansen, a Yale College graduate from the Class of ’97 and vice president of privacy, safety, and security engineering at Google, led a conference discussion on privacy and security issues in the use of AI in democratic processes.

“I like to look at wherever there is an opportunity for dual use,” Hansen said of any new technological tool. “I assume that for every good thing that will happen, there is an equal and opposite bad thing.”

Hansen praised the conference for strengthening connections between academic researchers and tech companies, enabling nuanced solutions that draw on broad perspectives and bolstering his optimism about overcoming the challenges posed by AI.

“I think the key that I like about this group is that we are not viewing this as an inhuman problem,” Hansen said. “But in fact, it’s another form of human question that humans need to work out. As long as we approach it this way — and not as something happening to us from the outside — we should be hopeful.”

Michelle DiMartino, a senior advisor at the Behavioral Insights Team, said she hopes she can apply ideas from the conference to her company’s governance work on community forums with Meta.

“I’d like to figure out ways to use AI more effectively at different parts of the process we’ve been piloting and make it more scalable and adaptable,” she said. “I’m hoping to come back to my firm with the current state of play with AI. What are the different social purpose innovations of AI that I heard about and some of the risks that we should be considering in making communications more effective and actionable?”

Kevin Feng, a third-year Ph.D. student in the University of Washington’s Human Centered Design and Engineering Department, found it refreshing to engage with people from outside his technical background, particularly those involved in government.


“We don’t talk to these folks on a regular basis,” he said. “It’s really insightful to be able to communicate across these boundaries and see what goals we share and how our expertise and their expertise can be combined to solve these collective goals.”

Feng also found the conversations productive as an exercise in communication.

“I think it really challenged me to explain my work in a way that’s more accessible, which I think is going to be really important as the technologies that I’ll be building and deploying may be used in democratic and deliberative settings,” he said.

Oliver Hart, the Lewis P. and Linda L. Geyser University Professor at Harvard University, shared the 2016 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. He attended the conference more for its overlap with the preceding conference on citizens’ assemblies, wondering how such deliberative structures could apply to furthering shareholder democracy in corporations.

But he found some unexpected insight discussing AI.

“I normally talk to economists and lawyers,” Hart said. “I found it valuable to get other perspectives on matters that interest me. It turns out that AI could be useful in providing all sorts of information to people and also maybe aggregating and summarizing.”


Teddy Lee, the OpenAI product manager, aims to serve as a conduit within his company for questions relating to democratic practices and theories, tapping the contacts he made at Yale.

“I think probably the most valuable takeaway has been the connections I’ve made here,” he said. “As we continue to explore democratic governance, it’s nice to know we have this network of democracy experts and fellow practitioners to collaborate with and hear from.”

Mark Gorton, founder of Tower Research Capital and a leading advocate for safer streets initiatives, supports the Democratic Innovations program and took an active role in the conference.

“I’m leaving here legitimately more optimistic about the future of the world and democracy,” he said. “This is a difficult problem. This technology is advancing so rapidly. But it is comforting and exciting to know there are so many great people who are earnestly trying to make the world better.”

Landemore has great ambitions for the people she invited to New Haven. She wants to pave the way for global institutions that are more democratic and legitimate.

“Somebody’s got to do it,” she said. “And it might as well start with the brilliant and sincere people who came this week. I think they are going to go back home and start planting seeds that will grow over time.”

And as for the future of self-governance in an era of superhuman computers? Landemore is betting on human ingenuity and values to prevail.

“I believe in the power of ideas over the power of interests,” she said.

Read about the conference on governing citizens’ assemblies
