Recent randomized experiments have shown that door-to-door mobilization efforts can have massive payoffs, boosting turnout by 7 to 10 percentage points among those targeted.1 But although previous studies have shown that mobilization has a large aggregate effect, they have not shown whether mobilization affects some types of voters more than others. Does door-to-door canvassing raise the probability of turnout equally for all voters, or are some types of voters more easily mobilized than others?
Briefly: The authors argue that mobilization has the strongest effects on voters who are indifferent about turning out. Efficient campaign managers should identify these fence-sitters and mobilize only them; money spent mobilizing those who are likely to turn out (or stay home) regardless of the campaign’s efforts is money wasted. Crucially, however, the authors demonstrate that these indifferent voters are not the same from one election to the next. In highly visible elections (like presidential elections), mobilization efforts should target those who rarely vote; in obscure elections (like legislative primaries), mobilization efforts should target those who regularly vote; and in mid-level elections (like Congressional or mayoral races), mobilization efforts should target those who vote occasionally.
Contribution to the Literature
This argument resolves a conflict in the literature between four different models of mobilization, which the authors summarize in their figure 1. In panel A, mobilization influences all voters equally; in panel B, mobilization influences those who would be least likely to turn out otherwise (that is, mobilization has the strongest effect on “low-propensity” voters); in panel C, mobilization has the strongest effect on high-propensity voters; and in panel D, mobilization has the strongest effect on voters with a moderate propensity to vote.
Although the authors reject panel A, their theory can yield panel B, C, or D, depending on the election. In high-salience elections, panel B is accurate, since it is low-propensity voters who are debating whether to turn out. In low-salience elections, panel C is accurate, since it is the regular voters who are debating whether to turn out. And in mid-salience elections, panel D is accurate, since it is moderate-propensity voters who are on the fence.
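The salience-to-target mapping can be sketched as a toy decision rule. This is my own illustration, not the authors' code: the numeric thresholds are invented, though the idea of proxying salience with control-group turnout comes from the paper.

```python
def target_group(salience):
    """Return which propensity group a campaign should canvass.

    salience: baseline turnout in the study's control group (0 to 1),
    used here, as in the paper, as a proxy for how visible the election is.
    The cutoffs below are arbitrary placeholders for illustration.
    """
    if salience > 0.6:       # high-salience race (e.g., presidential): panel B
        return "low-propensity"
    if salience < 0.2:       # low-salience race (e.g., a primary): panel C
        return "high-propensity"
    return "mid-propensity"  # mid-level race (e.g., congressional): panel D


# Example: a presidential-year study with 75% control-group turnout
print(target_group(0.75))  # -> "low-propensity"
```

The point of the sketch is only that the targeting advice is conditional: the same canvassing budget should be aimed at different voters depending on the expected baseline turnout.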
For empirical evidence, the authors re-evaluate the results of 11 previous experiments.2 They use turnout among each study’s control group as a proxy for salience, and they estimate each voter’s “propensity to vote” from demographic variables (mostly) and past turnout data. Sure enough, the results confirm their theory.
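A propensity score of this kind is typically a predicted probability from a turnout model. The following is a minimal sketch of that idea, not the authors' actual specification: the features, data, and bucket cutoffs are all made up for illustration, and the model is a bare-bones logistic regression fit by gradient descent.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a plain logistic regression by batch gradient descent.

    Returns (weights, intercept). No regularization; fine for a toy example.
    """
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Invented training data: features are [age / 100, voted last time (0/1)],
# label is whether the voter turned out. Real models would use many more
# demographic variables, as the paper describes.
X = [[0.2, 0], [0.3, 0], [0.4, 1], [0.5, 0],
     [0.6, 1], [0.7, 1], [0.8, 1], [0.25, 0]]
y = [0, 0, 1, 0, 1, 1, 1, 0]

w, b = fit_logistic(X, y)

def propensity(voter):
    """Predicted probability of turnout for one voter's feature vector."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, voter)) + b)

def bucket(p):
    """Crude low/mid/high split (cutoffs arbitrary) for targeting purposes."""
    return "low" if p < 1/3 else "high" if p > 2/3 else "mid"
```

An older voter with a turnout history (`[0.8, 1]`) scores higher than a young non-voter (`[0.2, 0]`), and the buckets are what a targeting rule like the one the authors propose would consume.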
The authors characterize a voter’s propensity to vote as an “enduring, individual-level trait.” I find this puzzling. It is well known, for example, that turnout rises with age (up to a point). We also know that voting can be habit-forming: a voter mobilized in one election becomes more likely to turn out in subsequent elections (Green and Shachar 2000; Gerber, Green, and Shachar 2003; Fowler 2006). That said, I doubt that this measurement choice undermines their results.
On the whole, though, a welcome contribution. Campaign consultants should read this closely. When political scientists spend NSF money on mobilization experiments, they can use a blanket strategy. But when campaigns spend hard-earned dollars on mobilization efforts, they need to know exactly which voters to target.
Update: Research published by Catalist, a Democratic group, seems to support this paper’s conclusions. Read more here.