The Pitfalls of Self-Reported Attitude Change

By Matthew H. Graham, Policy Fellow
November 30, 2017

Soon after allegations of sexual impropriety broke against Alabama Senate candidate Roy Moore, media outlets widely reported that 29 percent of Alabamans had become more supportive of Moore because of the allegations. This is terrible, but not for the reason you might think.

The problem is that the statistic does not measure what it purports to measure. Sure, these are the answers people actually gave to the poll question, but people are bad at self-reporting their own attitude change.

Why distrust self-reported attitude change? One danger is expressive responding. People who support Roy Moore might wish to express their support by giving the most-supportive answer, even if it isn’t quite true. Another danger is that people interpret the question differently. For example, people could consider their support to be “stronger” because it survived a tough test, even if they are less supportive than they would otherwise have been. Finally, people might simply be bad at gauging how strongly they feel now relative to how they felt before.

A classic example of the pitfalls of self-reported attitude change comes from the famous study of death penalty attitudes by Charles G. Lord and two coauthors. After reading counter-attitudinal evidence, people reported becoming more certain of their prior attitude. But experiments that measure attitudes both before and after exposure to such evidence have failed to validate this finding. Instead, people update their attitudes in the direction of the evidence.

Before-and-after measures avoid the pitfalls of self-reporting. In panel surveys, the same respondents are asked the same questions at multiple points in time. This allows each individual’s attitude to be measured at both points and the two measurements compared directly. Lacking a panel, we can still get a good estimate of overall attitude change by comparing random samples of different people taken before and after the event.
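As a rough illustration of the repeated cross-section approach, here is a minimal sketch in Python, using entirely hypothetical poll numbers (not actual Moore polling), of how one might estimate aggregate attitude change from two independent samples taken before and after an event:

```python
import math

def diff_in_support(support_before, n_before, support_after, n_after, z=1.96):
    """Estimate aggregate attitude change from two independent poll samples.

    support_* : number of respondents expressing support in each poll
    n_*       : total respondents in each poll
    Returns the change in the share supporting, plus a normal-approximation
    95% confidence interval for the difference in proportions.
    """
    p1 = support_before / n_before
    p2 = support_after / n_after
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n_before + p2 * (1 - p2) / n_after)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical numbers, purely for illustration.
change, ci = diff_in_support(support_before=490, n_before=1000,
                             support_after=430, n_after=1000)
print(f"Estimated change in support: {change:+.1%} "
      f"(95% CI: {ci[0]:+.1%} to {ci[1]:+.1%})")
```

Note that this kind of comparison only recovers the net change in the population; it says nothing about which individuals moved in which direction.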

In Roy Moore’s case, no panel surveys were running at the time. The best we can do is compare polls taken before and after the allegations broke. Such polls show a decline in support for Moore. This doesn’t prove that no individual’s support strengthened, but the only evidence we have about attitude change in response to the allegations is that Moore lost support. The extent to which some people’s support increased is a matter of speculation.

Acknowledging the limits of our data is no fun. Why not just interpret the data we have? Beyond the obvious call for scientific integrity, survey researchers need to defend against naïve interpretations of data because of the message such interpretations send to Americans about their fellow citizens. Mass polarization is in part an affective phenomenon: Democrats and Republicans don’t feel any more positive about their own party than in the past, but they feel significantly more negative about the opposing party. Exaggerated narratives about public irrationality are likely to contribute to this dynamic, especially given the well-known phenomenon of out-group attribution bias. People tend to explain away their own group’s flaws as exceptions while seeing the other group’s flaws as reflections of who its members really are. In the context of mass polarization, exaggerated accounts of public irrationality give each party an excuse to look down on the other party while excusing its own flaws.

All researchers want their work to have an impact. Feeding the beast of mass polarization is probably not the impact many researchers have in mind. Social scientists have a responsibility to give Americans accurate information about their fellow citizens. Resisting the temptation to read too much into polls about Roy Moore is one small part of this obligation.

Matthew H. Graham is a Graduate Policy Fellow at ISPS and doctoral student in Yale’s political science department. His research focuses on information in politics, polarization, and mass preferences.