From Tech Tools to Human Values: ISPS Conference Explores the Impact of AI in Government
Artificial intelligence (AI) is not the future of government. In many ways, it is already here.
Government officials increasingly use AI and data-driven algorithms to inform critical choices, from distributing food assistance and making parole decisions to selecting targets for tax audits and planning police patrol routes.
“As AI algorithms become more powerful and impactful, so does the realization that we are facing a major change that touches on the very core of what makes us a democracy, namely the way that we make public decisions,” said Shir Raviv, a postdoctoral research fellow at Columbia University and a nonresident fellow with the Democratic Innovations program at Yale’s Institution for Social and Policy Studies (ISPS). “It raises some urgent and timely questions about how to unlock the potential value of using AI to improve government decisions and processes while maintaining democratic values and human rights.”
Raviv organized a one-day conference at ISPS last week to explore the latest research on how governments use technology to guide decision-making and what might be done to ensure it is used responsibly.
“I feel we should be careful about the path we are on and try to govern AI before it governs us,” Raviv said. “This conference is an attempt to cut through the hype surrounding AI — the extreme or polarized narratives. And to understand more carefully what is actually happening on the ground.”
The conference included presentations from Kirk Bansak of the University of California, Berkeley on refugee integration; Virginia Eubanks of the University at Albany, State University of New York on automating caregiving; and Kaylyn Jackson Schiff of Purdue University on citizen perceptions of AI in policing.
Further sessions focused on AI in the criminal justice system and introduced new frameworks for assessing its risks and benefits, with presentations by Melody Huang of Harvard University on whether AI helps humans make better decisions; Dasha Pruss of Harvard University on judicial resistance to a recidivism risk assessment instrument; and Eddie Yang of the University of California, San Diego on ethnic discrimination in AI-assisted criminal sentencing in China.
Other presentations discussed the politics involved in using AI in government. Baobao Zhang of Syracuse University shared survey evidence from machine learning and AI researchers on the ethics and governance of artificial intelligence. Daniel Schiff of Purdue University focused on the viewpoints of policymakers and legislators. And Raviv examined the public’s reaction to AI.
ISPS Director Alan Gerber, Sterling Professor of Political Science, moderated an interdisciplinary roundtable discussion on the promise and challenges of ensuring responsible AI in government. Panelists included Hélène Landemore, an ISPS faculty fellow, Democratic Innovations co-coordinator, and political scientist specializing in non-electoral forms of government representation who is co-leading a three-year project on the ethics of AI; Eliza Oak, a Yale political science Ph.D. candidate who researches innovations in technology and democracy; Savannah Thais of Columbia University, a physicist who develops responsible and trustworthy machine learning; and Suresh Venkatasubramanian of Brown University, a professor of computer science and data science who co-authored the Blueprint for an AI Bill of Rights, one of the first significant actions taken by the Biden-Harris administration to regulate AI.
“We are thrilled to have Shir as an active member of our community at ISPS,” Gerber said. “Her forward-thinking research and success in gathering such an impressive group of scholars to explore the political implications of new technologies demonstrate the guiding principles of our Democratic Innovations program.”
Democratic Innovations aims to identify and test new ideas for improving the quality of democratic representation and governance.
Eubanks discussed topics highlighted in her book, “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor,” including how states deny social benefits to people because of technical errors or an algorithm’s assessment that they fit an incorrect statistical profile.
“Though these systems promised to lower administrative barriers to programs to allow people to claim benefits from their cell phones or from the comfort of their own homes, in reality, the systems tend to work best for those people who are least vulnerable,” she said.
And because these automated decision systems reduce the need for frontline caseworkers, fewer people receive the support they are seeking, Eubanks said.
“These systems end up working really badly for folks who are particularly vulnerable,” she said. “There is less hands-on help. These are the very people who public benefits programs are supposed to be helping.”
Pruss, a fellow at the Berkman Klein Center for Internet & Society and an Embedded EthiCS postdoctoral fellow at Harvard, presented research showing that criminal court judges in Pennsylvania ignored a new tool intended to inform sentencing decisions through evidence-based risk assessment. She argued that policymakers should be wary when presented with a new instrument advertised as evidence-based.
“In evidence-based sentencing, the term ‘evidence based’ carries a lot of political authority, but that label gets used in a fairly misleading way because sentencing decisions are being grounded in past arrest or conviction data, which are inherently biased,” Pruss said. “It’s called evidence based, but there is no evidence about what actually happens in the future when the tools get implemented on the ground.”
But Pruss did not dismiss the utility of AI or algorithms in helping build a more just world. She said policymakers should frame the purpose of any technological tool in criminal justice around the human values they seek to uphold.
“What outcomes are considered important to predict?” she said. “Somebody’s risk of reconviction? Or is it more important to predict, say, which judges are going to be making the most discriminatory decisions? Should we use data to incarcerate more people who are at higher risk of committing more crimes or use evidence to allocate extra resources to people who really need it?”
In concluding her presentation, Eubanks echoed other conference participants in naming what she considered the central challenge facing a society drifting quickly into automation.
“We need to center human dignity,” she said. “We need to make the labor of love visible.”