In recent years, political scientists have run a variety of field experiments to show exactly which methods of voter mobilization are most effective. However, those experiments have focused mostly on method, not on timing. In a recent article, Costas Panagopoulos used a randomized field experiment to test whether voter mobilization drives work better when they are conducted on the eve of an election rather than a month out. The results may surprise some.
Common wisdom dictates that voter mobilization efforts should happen as close to Election Day as possible. The logic can be traced to a variety of theories, such as Zaller’s “bucket model” wherein ideas considered more recently are more likely to be at the top of a voter’s mental “bucket.” This is Panagopoulos’s “recency” hypothesis.
At the same time, messages received a month out could also have a “priming” effect. If we remind voters that a low-salience municipal election is coming up, then voters might take more notice of the campaigns and feel more prepared to vote when Election Day actually arrives. This is the “primacy” hypothesis.
Experimental context and results
Panagopoulos ran his experiment during the 2005 municipal elections in Rochester, New York. He divided 25,000 voters into four groups. The control group received no mobilization message. The remaining three groups received a nonpartisan turnout message by phone: one group three days before the election, one two weeks before, and one four weeks before.
Previous studies have already shown that telephone calls don’t do much to boost turnout, so it’s not surprising that Panagopoulos found only a small aggregate effect. Together, the three treatment groups turned out at a rate just 0.9 percentage points higher than the control, an effect within the margin of error.
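A rough back-of-the-envelope check shows why an effect that small falls inside the margin of error. The sketch below assumes the 25,000 voters were split evenly across the four groups (about 6,250 each) and assumes a baseline municipal turnout of 30%, a figure not reported here:

```python
import math

# Assumptions (not from the article): even split of 25,000 voters into
# four groups, and a 30% baseline turnout for a low-salience municipal race.
n_per_group = 25_000 // 4   # ~6,250 voters per group
p = 0.30                    # assumed baseline turnout

# Standard error of the difference between two independent proportions
se = math.sqrt(p * (1 - p) / n_per_group + p * (1 - p) / n_per_group)
margin_95 = 1.96 * se       # half-width of a 95% confidence interval

print(f"SE of difference: {se:.4f}")         # roughly 0.008 (0.8 points)
print(f"95% margin:       {margin_95:.4f}")  # roughly 0.016 (1.6 points)
print(f"0.9-point effect exceeds margin? {0.009 > margin_95}")
```

Under these assumptions the 95% margin of error is around 1.6 percentage points, nearly double the 0.9-point pooled effect, so the null result is exactly what the design's power would predict.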
Perhaps surprisingly, Panagopoulos finds no significant difference across the three treatment groups. Those contacted four weeks prior to the election showed the same small bump in turnout as those contacted just three days prior to the election.
Propensity to vote matters
The most interesting result, though, comes in a brief section near the end of the article. Based on each subject’s previous turnout history, Panagopoulos classifies each subject as a “low propensity” or “high propensity” voter. High propensity voters have a good turnout record; low propensity voters don’t. It appears that these two types of voters respond differently to the three treatments.
High propensity voters are affected most by an early turnout appeal, four weeks out. Low propensity voters are affected most by a late appeal, three days out. This makes sense to me, although I struggle to put into words exactly why it does. High propensity voters like voting, but they may not be aware of low-salience municipal elections. Making them aware of the elections early on may prompt them to make plans to vote, something they’re happy to do if they know an election is coming. But if you contact them too late, they know enough about how voting is supposed to work that they might feel uncomfortable showing up if they haven’t had a chance to learn anything about the candidates.
By contrast, low propensity voters don’t make voting a priority, so contacting them early gives them a month to find a reason not to vote. We might also suppose that this group is less interested in politics overall. If that’s true, then maybe when they do vote, they just show up planning to vote a straight party ticket. Maybe it’s less important to them to feel informed before showing up, so they’re willing to respond to a last-minute appeal.
Of course, my reasoning there is all post hoc.
With such small treatment effects and such large standard errors, Panagopoulos is unable to present a persuasive test of whether timing matters. Had he run this experiment using a different method, such as door-to-door canvassing or scary postcards, he might have produced effects large enough to make his statistical tests more compelling.
I also find the lack of a placebo problematic. When Gerber and Green (2005) tested experimentally whether phone calls can increase turnout, they found that placebos were critical. When they compared their phone call treatment group to an uncontacted control group, they found (falsely) that phone calls boosted turnout by 16 percentage points. But when they compared their phone call treatment group to a placebo group that received a call inviting them to participate in a blood drive, they found that their turnout appeal had no effect; even the placebo group turned out at a much higher rate than the uncontacted control group. They concluded that being “reachable” by phone was more important in predicting turnout than actually being reached.
I worry about the lack of a placebo in the present study, especially because the contact rate varies so much between treatment groups. Panagopoulos contacted 74% of subjects in treatment groups 1 and 2 but only 52% of subjects in treatment group 3, a huge gap.
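To see why that gap matters, consider the standard adjustment for estimating the effect of actually being reached: divide the intent-to-treat effect by the contact rate. The contact rates below are the ones reported in the study; applying the pooled 0.9-point effect to each group is purely illustrative:

```python
# Illustration only: the same intent-to-treat (ITT) effect implies a larger
# per-contact effect when fewer subjects are actually reached
# (per-contact effect = ITT / contact rate).
# Contact rates are from the study; the 0.9-point ITT effect is the pooled
# figure, applied to each group hypothetically.
itt = 0.009  # 0.9 percentage points

for group, contact_rate in [("groups 1 and 2", 0.74), ("group 3", 0.52)]:
    per_contact = itt / contact_rate
    print(f"{group}: per-contact effect = {per_contact:.4f}")
```

With a 74% contact rate the implied per-contact effect is about 1.2 points; at 52% it balloons to about 1.7 points. Differences like that can masquerade as timing effects when, as Gerber and Green’s placebo result suggests, reachability itself predicts turnout.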
In sum, I like what Panagopoulos is trying to do here, but two things limit the study’s usefulness: its reliance on phone calls as a mobilization tactic and its lack of a placebo.