A Note on Dropping Experimental Subjects who Fail a Manipulation Check

Author(s): 
Peter M. Aronow, Jonathon Baron, and Lauren Pinson

ISPS ID: 
ISPS19-18
Full citation: 
Aronow, P., Baron, J., & Pinson, L. (2019). A Note on Dropping Experimental Subjects who Fail a Manipulation Check. Political Analysis, 27(4): 572-589. DOI:10.1017/pan.2019.5
Abstract: 
Dropping subjects based on the results of a manipulation check following treatment assignment is common practice across the social sciences, presumably to restrict estimates to a subpopulation of subjects who understand the experimental prompt. We show that this practice can lead to serious bias and argue for a focus on what is revealed without discarding subjects. Generalizing results developed in Zhang and Rubin (2003) and Lee (2009) to the case of multiple treatments, we provide sharp bounds for potential outcomes among those who would pass a manipulation check regardless of treatment assignment. These bounds may have large or infinite width, implying that this inferential target is often out of reach. As an application, we replicate Press, Sagan, and Valentino (2013) with a design that does not drop subjects who failed the manipulation check and show that the findings are likely stronger than originally reported. We conclude with suggestions for practice, namely alterations to the experimental design.
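For intuition, the sketch below illustrates the binary-treatment trimming bounds of Lee (2009) that the paper generalizes to multiple treatments: under a monotonicity assumption (treatment never causes a subject who would otherwise pass the check to fail), the treated passers are trimmed by the excess pass rate to bound mean outcomes among subjects who would pass under either assignment. This is a minimal sketch, not the authors' multiple-treatment procedure; the function name `lee_bounds` and the exact interface are illustrative assumptions.

```python
import numpy as np


def lee_bounds(y, z, s):
    """Sketch of Lee (2009)-style sharp bounds on E[Y(1) - Y(0)] among
    'always-passers' (subjects who would pass the manipulation check under
    either arm), assuming a binary treatment z, a binary pass indicator s,
    and monotone selection (pass rate is weakly higher under treatment)."""
    y, z, s = map(np.asarray, (y, z, s))
    p1 = s[z == 1].mean()          # pass rate under treatment
    p0 = s[z == 0].mean()          # pass rate under control
    trim = (p1 - p0) / p1          # share of treated passers who are not always-passers

    y1 = y[(z == 1) & (s == 1)]    # treated subjects who passed the check
    y0 = y[(z == 0) & (s == 1)]    # control subjects who passed the check

    lo_cut = np.quantile(y1, trim)       # trimming the bottom share gives the upper bound
    hi_cut = np.quantile(y1, 1 - trim)   # trimming the top share gives the lower bound
    upper_y1 = y1[y1 >= lo_cut].mean()
    lower_y1 = y1[y1 <= hi_cut].mean()

    ey0 = y0.mean()                # point-identified under monotonicity
    return lower_y1 - ey0, upper_y1 - ey0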

External ID: 
DOI:10.1017/pan.2019.5
Publication date: 
2019
Publication type: 
Article
Publication name: 
Political Analysis