Pay no attention to those 30,000 people behind the curtain…

There is no shortage of misleading research on issues related to campaign finance, as is demonstrated by the latest bit of methodological manipulation by Public Campaign, a leading voice in the fight for taxpayer-funded political campaigns.

A few days ago I came across this: All Over The Map: Small Donors Bring Diversity to Arizona’s Elections. The study purports to show that "Arizona’s qualifying contribution donors have a different profile than typical big donors giving to Arizona campaigns for those candidates who opt into the private system."

Sadly for anyone hoping to glean useful information, a careful parsing of the language and a close look at the methodology and data source reveal that all Public Campaign managed to do is cloud the issue while wasting some vast number of trees and electrons in the process of distributing this nonsense.

I read the report with a great deal of interest, since CCP has been studying the $10 donors to New Jersey’s taxpayer-funded candidates in their now thankfully-defunct pilot project. Initially, the report’s results didn’t even seem that far out of line with what a fair-minded person might suspect – somewhat more donors to "clean election" candidates came from less-wealthy zip codes, and similar findings.

Then I read the footnotes, a description of the methodology, and most important, information about the data. At this point, the entire study pretty much collapses and reveals itself to be utterly worthless.

The researchers chose to examine gubernatorial campaigns in 2002 and 2006 funded by taxpayers and compare them with those funded by private, voluntary contributions. Those funded by taxpayers were required to gather at least 4,000 contributions of $5 each from Arizona citizens in order to qualify for their millions of dollars from the government.

Gubernatorial candidates and campaigns are often tough to compare directly because there are generally few of them (each state has only one governor, after all) and each candidate has unique and specific characteristics and appeals that, to say the least, make direct comparisons highly suspect. This is why CCP usually focuses on legislative races in our studies: there are hundreds of candidates to draw data from, making it less likely that a handful of candidates with unusual characteristics will skew the data.

The bigger problem, though, is that only one candidate, Matt Salmon in 2002, opted out of the taxpayer-funded system, compared to nine who accepted taxpayer funds (this counts Janet Napolitano twice, for both her 2002 and 2006 campaigns).

In order to increase the pool of donors to privately-funded candidates beyond just those of Salmon, the researchers chose to add donors from both the Republican (Jon Kyl) and Democratic (Jim Pederson) candidates for U.S. Senate in 2006, using FEC disclosure forms as their data source.

The problem, of course, is that the FEC reports only disclose contributors who gave more than $200, meaning that all their contributors who gave $200 or less aren’t included at all in the analysis (nor were Salmon donors under $25, although this is an exclusion of somewhat less importance).

The study tells us nothing about who is actually funding the privately-funded candidates, because the study excludes everyone who supported them but was unable or unwilling to contribute more than $200, instead giving some lesser amount. In other words, those donors more likely to match the demographic profile of the "clean elections" donors were knowingly left out of the analysis of donors to privately-funded candidates.

This doesn’t stop Public Campaign from misrepresenting their own research, of course. In the Executive Summary they state that "…Clean Elections $5 donors more accurately represent the diversity of the state than the private system does," and in their conclusion they state that "…when candidates rely on small donor qualifying contributions they engage… a far more diverse group of people than do candidates who chose private financing for their races."

Public Campaign acknowledges this exclusion while attempting to dismiss it by noting in a footnote that "Ninety-one percent of the individual contributions collected by these two U.S. Senate candidates" were in contributions of greater than $200. This is true of the dollar amounts, but not the number of contributors. In fact, Kyl and Pederson raised more than $1.4 million in contributions from donors who gave less than $200.

If Kyl’s and Pederson’s fundraising was anything like that of most U.S. Senate candidates, that $1.4 million raised almost certainly represents more than 20,000 individual donors, and probably closer to 25,000 or even 30,000 or more donors (as a former fundraiser for a U.S. Senate candidate, I can attest to the fact that there is an almost unlimited supply of citizens willing to write a $10 or $25 check to a candidate they like, and both Kyl and Pederson would have tapped this nearly bottomless well). Salmon’s under-$25 donors would likely run into the thousands as well.
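The arithmetic behind that estimate is easy to check for yourself. The average-gift figures below are my own illustrative assumptions about typical small-dollar contributions, not numbers from the report:

```python
# Back-of-envelope check of the excluded-donor estimate above.
# Assumption (mine, not the report's): the average under-$200
# contribution to a U.S. Senate campaign runs somewhere between
# $45 and $70 per donor.
small_dollar_total = 1_400_000  # Kyl + Pederson contributions under $200

for avg_gift in (45, 55, 70):
    donors = small_dollar_total // avg_gift
    print(f"avg gift ${avg_gift}: ~{donors:,} excluded donors")
```

Under those assumptions the $1.4 million works out to roughly 20,000 donors at a $70 average and over 31,000 at a $45 average, which is where the 20,000-to-30,000 range comes from.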

By excluding Kyl’s and Pederson’s donors below $200 and Salmon’s donors under $25, Public Campaign is probably ignoring at least 30,000 individual contributors, or close to half of the 67,000 donors that they did include in their analysis.

Such a serious flaw renders the study worthless, other than demonstrating the obvious fact that contributors who give more than $200 to a candidate have different demographic characteristics than those who give $5. It does not, contrary to Public Campaign’s claims, demonstrate that the donor base for candidates funded with private voluntary contributions is substantially different from that of candidates funded through the government-run system.