Democracy, Data, and Representation: A Q&A with New ISPS Resident Faculty Fellow Shiro Kuriwaki

November 28, 2022


What do voters want from their elected representatives? How can we improve measurements of public opinion and electoral behavior to better understand this fundamental democratic relationship and craft policies that benefit everyone?

At least part of the answer to this and so many other questions in science is data. Lots of it. But, as Shiro Kuriwaki teaches, more is not always better.

“One big theme facing researchers today is a need to quantify uncertainty better,” said Kuriwaki, an assistant professor in the Department of Political Science and a resident faculty fellow with the Institution for Social and Policy Studies (ISPS). “Everything is an estimate, with noise and some bias. When the data becomes larger, we typically think the number becomes more precise. But there are other types of bias and error that don’t disappear even as you get more data.”

Kuriwaki has been exploring such issues through his research and teaching. He earned his Ph.D. in political science from Harvard University, where he received the Dean’s Excellence in Student Teaching Award from the Harvard Kennedy School in 2020.

Now at Yale, Kuriwaki teaches classes on Congressional policymaking and quantitative research design while helping ISPS launch Democratic Innovations, a program designed to identify and test new ideas for improving the quality of democratic representation and governance.

We recently spoke to Kuriwaki about digging through political data to reveal what is really going on and chart a better way forward.

ISPS: How would you describe your research interests?

Shiro Kuriwaki: I study American politics and elections. Basically, who wins and why.

ISPS: Sounds simple. But of course, it’s not, right?

SK: Correct. It is hard for anyone to re-analyze why they acted in a certain way — why they chose to vote for someone. Some of it may be unconscious. Some of it may be by habit or idiosyncratic information they came across. If it’s hard for each of us to explain the formula we use to decide how we vote, it’s hard for political scientists to understand how large groups of people act.

ISPS: You participate in several projects to collect and organize large datasets, such as public ballot image logs from across the country and a comprehensive list of winning and losing candidates in U.S. congressional, presidential, and gubernatorial elections. For example, your paper on slave ownership and fighting for the Confederacy draws on a recently digitized dataset covering the free population of the Confederacy, with records for 3.9 million people in the pre-war South, and applies a natural experiment based on an 1832 land lottery in Georgia. How have recent advancements in technology created unique opportunities to understand current trends and examine even long-ago historical topics?

SK: In that historical study, I am grateful for the collection and preservation of the original census data, as well as the rosters of Civil War soldiers that organizations have digitized and posted online for everyone to examine in creative ways. Almost all the research I do relies on large-scale datasets. But we should not put too much confidence in their size alone. Huge datasets can still be unrepresentative or contain errors. The impact of errors or non-representativeness of a subset can in some cases be magnified when the data gets very large.
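
The statistical point can be made concrete with a small simulation (the numbers below are invented for illustration, not drawn from Kuriwaki’s research): as a sample grows, random sampling error shrinks, but a fixed selection bias does not.

```python
# Toy illustration (numbers invented): sampling error shrinks with n,
# but a fixed selection bias does not.
import numpy as np

rng = np.random.default_rng(0)
true_mean = 0.50   # true population share holding some opinion
bias = 0.10        # the biased sample over-recruits supporters by 10 points

for n in [100, 10_000, 1_000_000]:
    srs_estimate = rng.binomial(1, true_mean, n).mean()
    biased_estimate = rng.binomial(1, true_mean + bias, n).mean()
    print(f"n={n:>9,}  SRS error={abs(srs_estimate - true_mean):.4f}  "
          f"biased error={abs(biased_estimate - true_mean):.4f}")

# The simple-random-sample error falls roughly like 1/sqrt(n);
# the biased sample's error stays near 0.10 no matter how large n gets.
```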

ISPS: This brings to mind your paper on overestimation of vaccine uptake, which showed how a survey of 250,000 respondents produced an estimate of vaccine uptake in the population no more accurate than a simple random sample of just 10 people. The public’s faith in science has suffered greatly over the course of the pandemic. What can be done to improve methodologies, such as those deployed in large surveys, to improve the reliability of results and restore public trust?

SK: I think as a community, scientists and science communicators need to be careful to convey that the large size of a dataset is not evidence of superiority on its own. Errors and bias can still remain in the collection. In our vaccine study, one takeaway message is to be aware of large datasets being precisely wrong. One way of addressing this is to improve survey logistics, which is something I explore in my work analyzing the U.S. census. Even if the data is biased in some way, you can collect other information, such as respondents’ education, and see how it compares to established benchmarks. Then we can incorporate this new information to bolster the reliability of the sample.
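
The benchmark idea Kuriwaki describes is, in spirit, a reweighting adjustment: compare the sample’s mix on a known characteristic such as education to an external benchmark and weight the estimate accordingly. A minimal sketch, with invented figures rather than anything from his census work:

```python
# Hedged sketch of reweighting to an education benchmark (post-stratification).
# All figures are invented for illustration.
import pandas as pd

# Online sample: 70% college-educated, with support rates by education group
sample = pd.DataFrame({
    "educ": ["college", "no_college"],
    "n_respondents": [7_000, 3_000],
    "support_rate": [0.62, 0.48],
})

# External benchmark (e.g., a census product): 35% of adults are college-educated
benchmark = {"college": 0.35, "no_college": 0.65}

unweighted = (sample["n_respondents"] * sample["support_rate"]).sum() / sample["n_respondents"].sum()
weighted = sum(benchmark[e] * r for e, r in zip(sample["educ"], sample["support_rate"]))

print(f"unweighted estimate: {unweighted:.3f}")  # 0.578, pulled toward college grads
print(f"benchmark-weighted:  {weighted:.3f}")    # 0.529, matches the population mix
```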

ISPS: I think for the average American, something like the 10-year census sounds simple. Collect data on every resident, either through mailed surveys or door-to-door canvassing. Tabulate, then report. But it’s not that simple. Why? What needs to be done to balance privacy and accuracy? What can we do better?

SK: The census is a huge logistical operation. These days, the Census Bureau tries to reach everyone online first from a master address file before going door to door for those it can’t reach. It’s a continuous operation coordinated with other agencies. In addition, there are privacy safeguards. The bureau swaps information and injects statistical noise into the data, which masks the identity of participants but creates a trade-off between privacy and the utility of the data. In a recent paper, my coauthors and I studied this trade-off. So it is important for researchers and policymakers, who only have access to the manipulated public data, to understand how this can affect their conclusions and policy choices.
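
One standard way to inject statistical noise of this kind is the Laplace mechanism from differential privacy. The Census Bureau’s actual disclosure-avoidance system is far more elaborate, but a toy version illustrates the privacy-utility trade-off:

```python
# Toy Laplace-mechanism sketch of the privacy-utility trade-off.
# The Census Bureau's real disclosure-avoidance system is much more complex.
import numpy as np

rng = np.random.default_rng(1)
true_count = 1_234   # hypothetical count of people in a small geography

for epsilon in [0.1, 1.0, 10.0]:                        # smaller epsilon = stronger privacy
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)   # sensitivity of a simple count is 1
    print(f"epsilon={epsilon:>4}: published count = {true_count + noise:,.1f}")

# Analysts see only the noisy counts, so the privacy parameter directly
# limits how precise downstream research and policy analysis can be.
```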

ISPS: In your paper on the accountability of congressional representatives to constituents, you and your co-author found that individuals adjust their voting preference based on agreement with an incumbent’s roll-call votes, though partisanship weighs more heavily, and that the diverse mix of issue preferences and party identification within congressional districts disguises this effect. What are the practical implications of this finding? How can representatives and campaigns use this knowledge to be more responsive?

SK: We found that a good chunk of the public, about 40%, is not sure how their member voted on a particular bill. I think the implication for campaigns is that if they can reduce that number of “don’t knows,” that would strengthen the accountability effect we find. If you learn how your member voted and find they voted the way you prefer, our finding is that you are more likely to vote for that member. Interestingly, this holds even when someone doesn’t know for certain which way their representative voted. Their responses are often guesses. Voters might assume that if their representative had voted in a surprising way, they would have heard about it.

Photo: Shiro Kuriwaki teaches in a classroom.

ISPS: How did you become interested in this type of research?

SK: I’ve been interested in public opinion surveys for the longest time, starting from when I first watched TV reporters interviewing passersby on events of the day. I was curious about how people came to those opinions, particularly on issues that are complicated. As an undergraduate student, I was advised by a social psychologist who pioneered the field of social cognition — how people make sense of other people. In graduate school, I became more interested in elites and the connection between public opinion, how policy is formed, and how representatives act. I think I gravitated toward political science because at the end of the day, the politicians are the ones who make the decisions on policies, not the economic advisors or the think tanks who advise them. Political science sets out to understand the systems where everything derives real-world significance.

ISPS: You grew up in Tokyo. How did you become interested in American politics?

SK: I first came to the United States for four years in elementary school due to my family’s work. I then did all my secondary education in Japan. As I grew older, and when I took Introduction to American Politics at Princeton, I felt that American politics was more exciting than politics back home. Different, for sure. In Japan, for example, money in politics is heavily regulated, including who can launch advertisements and when. Parties in Japan have their own problems forming stable coalitions. But they appear to have fewer extremist candidates. I now see American politics as a prominent case study of domestic politics in democracies.

ISPS: What do you think the average American voter should understand about their role in our current political system?

SK: I think having a vote is power. Even a single vote. And in the United States, voters have more opportunities to select representatives at the local and state levels. Government positions that might be filled by appointed bureaucrats in other countries are filled by elected officials in the United States.

ISPS: What do you think people might misunderstand about the interaction between policy and social research?

SK: An extraordinary amount of information that we use in our day-to-day lives is based on surveys. For example, people might know that the census is conducted every 10 years as required by the Constitution. But they might not know that the Census Bureau also produces annual surveys and that other products rely on this same data, such as the tools we use to track the unemployment rate. People might underappreciate how pervasive and important this information is. We rely on these surveys, and so we need to make sure they are as sound as possible.

ISPS: What drew you to Yale and ISPS? What are you hoping to accomplish here?

SK: I feel privileged to be able to teach and do research here. Interest in political science is exceptionally strong at Yale. I’m excited to be part of that educational experience while producing research that can improve policy. For example, my collection of ballot image logs is relevant to today’s crises in election administration and the need to protect election administrators from abuse by political actors. I think what we can do at ISPS is to help train students so they not only consume research but can detect faulty reasoning. I also hope to conduct research so that people who are not my students can evaluate false statistical claims and better understand how their government operates.