Oct 07 2008

Polls May Be Exaggerating Party Affiliation

Published by at 1:57 pm under 2008 Elections, All General Discussions

McCain wants to change DC, Obama wants to change America!

I noted in a previous post that the polls seem to be weighted on Party ID and not policy preference, where conservatives and independents dwarf liberals by a whopping 72-23. Party ID has Dems ahead of Reps 43-36, which means that if people vote their party, Obama would lead, but if they vote their policy preferences, Obama would lose. Here's the graph again:

Reader Trent Telenko pointed me to an interesting historic analysis of Party ID trends over the last 20 years:

That's worth repeating: in the best year for Democrats in congressional races since 1974, the partisan gap in the makeup of the electorate was 3 percentage points, and every major poll overestimated the party ID gap.

We may be seeing another "Dewey For President" debacle in the making. And for all those liberals stopping by from Firedoglake, let me point out that I can (a) find the polls out of whack and (b) see that as a plus for McCain. If out-of-whack polls are showing a tie, then clearly McCain could be well ahead. I know math is very challenging for many on the left, so I suggest we just wait and see how this pans out. I have been clear in many of my posts that I worry about being Pollyannaish, but the data in front of me tells me these polls could have some serious mathematical issues.
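To see why the weighting matters, here is a quick back-of-the-envelope sketch. The within-group support numbers are purely hypothetical; the point is that the very same raw responses produce a noticeably different margin under the D+7 party-ID mix the polls appear to be carrying (43-36) than under a mix closer to the 3-point gap quoted above.

```python
# Sketch of the weighting concern: identical within-group candidate support,
# reweighted to two different party-ID splits. The 43/36/21 split is the one
# cited in this post; the D+3 split and all support numbers are hypothetical.

support_obama = {"Dem": 0.85, "Rep": 0.08, "Ind": 0.45}   # hypothetical
support_mccain = {"Dem": 0.10, "Rep": 0.88, "Ind": 0.45}  # hypothetical

party_id_d7 = {"Dem": 0.43, "Rep": 0.36, "Ind": 0.21}  # split cited above
party_id_d3 = {"Dem": 0.40, "Rep": 0.37, "Ind": 0.23}  # hypothetical D+3 mix

def topline(support, weights):
    """Weight each group's support by its assumed share of the electorate."""
    return sum(support[g] * weights[g] for g in support)

for label, weights in [("D+7 weighting", party_id_d7), ("D+3 weighting", party_id_d3)]:
    o, m = topline(support_obama, weights), topline(support_mccain, weights)
    print(f"{label}: Obama {o:.1%}, McCain {m:.1%}, margin {o - m:+.1%}")
```

With these made-up numbers the Obama margin shrinks from a bit over three points under the D+7 mix to a few tenths of a point under D+3, which is exactly the kind of swing the weighting question is about.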

Time will tell. But here are two more polls showing the race to be a statistical tie: ARG (Oct 7) and Zogby. That now makes five polls, along with Democracy Corps, CBS and Diageo/Hotline, showing a statistical tie.

McCain wants to change DC, Obama wants to change America!


One Response to “Polls May Be Exaggerating Party Affiliation”

  1. AJ,

    I have a friend who is currently a market intelligence analyst and a former pollster. I sent him a copy of a blog post about a report from the American Association of Public Opinion Research (AAPOR) at this link:

    http://www.collinsreport.net/?p=46#respond

    That blog post said in part:

    The American Association of Public Opinion Research (AAPOR) provides some answers.

    The AAPOR suggests that of each 100 calls a typical pollster makes, just 12 will yield fully completed interviews of "registered" voters. The other 88 calls fail to meet the standard for a usable interview and are discarded: hang-ups, bad numbers, answering machines and, most frequently, refusals to even start. Consequently, in order to get 500 respondents, the usual number used in state polls, they have to make 6,000 calls.

    As experience has shown us, "registered voters" are less reliable indicators of eventual electoral outcomes than "likely voters." Since just 60% of registered voters are likely voters, the number of calls balloons to 9,600. This is why, although it is the actual "gold standard" of polling, we rarely see polls of likely voters except from the largest companies.
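    As a rough check, here is a minimal sketch of that arithmetic, treating the quoted figures (a 12% completion rate, a 60% likely-voter share, and a 500-person sample) as assumptions. Straight division gives somewhat lower call counts than the 6,000 and 9,600 in the quote, which presumably build in extra slack.

    ```python
    # Rough call-volume arithmetic, using the quoted figures as assumptions:
    # ~12 usable registered-voter interviews per 100 dials, ~60% of registered
    # voters qualifying as likely voters, and a 500-respondent state sample.

    COMPLETION_RATE = 0.12     # usable interviews per dial (quoted figure)
    LIKELY_VOTER_RATE = 0.60   # likely voters as a share of registered voters
    TARGET_RESPONDENTS = 500   # typical state-poll sample size

    # Dials needed to complete 500 registered-voter interviews.
    calls_registered = TARGET_RESPONDENTS / COMPLETION_RATE

    # Dials needed if the 500 must also pass a likely-voter screen.
    calls_likely = TARGET_RESPONDENTS / (COMPLETION_RATE * LIKELY_VOTER_RATE)

    print(f"Calls for 500 registered voters: {calls_registered:,.0f}")   # ~4,167
    print(f"Calls for 500 likely voters:     {calls_likely:,.0f}")       # ~6,944
    ```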

    AAPOR acknowledges that pollsters can make polls “come out” as they want them to.

    As AAPOR reports, there is nothing random about polls. Zogby uses Internet polls that beg for abuse by agenda-motivated respondents.

    Pollsters re-call the same people and admit doing so.

    How else could they meet deadlines and budget targets?

    Here is what the market intel guy said when I forwarded that to him:

    Bits of this are dead-on, bits are exaggerated, and many of the interesting points are missed:

    Democrats have been particularly good in the last two electoral cycles at encouraging their members not to hang up on pollsters and to put in the time to give a usable interview, because it’s important (and in the present system, it is). Republicans have trouble overcoming their own membership’s distaste for the media (however earned that may be) and have simply found it cheaper and more effective to cast doubt on polling in general.

    Many polltakers are bottom-of-the-barrel barely-employables, the same population that gets shunted into maintenance at fast food operations and low-end boiler room phone sales. They're often motivated (the difference a job makes in their lives is huge, and they want to do well), but they lack skills, tact, vocabulary, and sometimes just plain comprehension. "Well-meaning idiots with erratic impulse control" pretty much covers it.

    So on open-ended interviews, the ones in which they ask a series of questions and write down answers rather than code them with check boxes, you have a lot of interviewers who don't understand the question they are asking (even if they were put through training, and many weren't), and that at least is a script they get to practice. Their actual understanding of the oral response is close to nil, and their ability to write it down, even if they understood it, is sharply limited. Generally a "back up" reads through those written records, then listens to recordings of the ones that look like they might be usable, and does the actual coding that is used, but that still means the calls are idiot-screened. Thus respondents who use a lot of buzz words and have an unambiguous preference are drastically over-sampled, because they're much more likely to make it through that screening process.

    For the low-budget media polls (independent or in-house), stratifications are badly out of date, sometimes by as much as 12 or even 15 years. Good strats are done every couple of years by marketing research companies like Claris, and updated every 3 months in between, but access to a good-quality commercial marketing strat runs about a million dollars a year, and they generally require you to sign up for a multi-year contract. So even if you get, e.g., an accurate picture of the preferences of middle-income Evangelicals in mainly-wheat counties on the Great Plains, your results are going to be multiplied by the number of them there were in 1998, which may have gone up, down, or sideways (sideways meaning the demographic may have split, fused into another, or become something else entirely).
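    A tiny sketch of the problem he is describing: each cell's measured preference gets multiplied by an assumed cell size, so a stale strat shifts the weighted topline even when the raw interviews are identical. Every number below is invented purely for illustration.

    ```python
    # Illustration (invented numbers) of how stale stratification counts skew
    # a weighted estimate: per-cell support is multiplied by assumed cell
    # shares, so outdated shares move the topline with no change in the data.

    cell_support = {"cell_1": 0.60, "cell_2": 0.35, "cell_3": 0.50}  # hypothetical

    # Cell shares from an old strat vs. the (unknown to the pollster) current one.
    shares_1998 = {"cell_1": 0.50, "cell_2": 0.30, "cell_3": 0.20}
    shares_now  = {"cell_1": 0.35, "cell_2": 0.45, "cell_3": 0.20}

    def weighted_topline(support, shares):
        """Population-weighted average of per-cell support."""
        return sum(support[c] * shares[c] for c in support)

    print(f"Topline with 1998 strat:    {weighted_topline(cell_support, shares_1998):.1%}")
    print(f"Topline with current strat: {weighted_topline(cell_support, shares_now):.1%}")
    ```

    With these made-up cells, the same interviews come out nearly four points apart depending on which set of shares you multiply by, which is exactly the kind of silent drift a stale strat produces.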

    When I first started analyzing market research, there wasn’t even a category for “hostile respondent trying to sabotage the results.” They went into “Other” along with the apparently mentally ill, pranksters, non-English-speaking, and other “Wow that was weird” calls, and the people doing the sampling were mostly college students, so they had a fair bit of ability to evaluate what the responses were. (Favorite of mine: “And what do you like best about (SOAP)?” “After I eat a big bowl of it, I feel horny all day.”) Nowadays, hostile respondents are more than common enough to have their own category, and the people doing the categorizing often don’t catch on that that’s what they have. Hostile respondents tend to cluster in groups that feel disenfranchised; I’m guessing in this election that’s pritnear everyone.

    The short story is that really good and trustworthy polls cost a great deal of money because they:

    1) Need expensive market research that costs $1 million (+) a year,

    2) Use "hostile territory" polling methods to get past the public's growing hatred of telemarketers and politics, and

    3) Use lots of really expensive per-hour human talent to ask the polling questions, often via face-to-face interviews.

    Only major campaigns, political parties, corporate giants or their trade associations, and deep pockets like George Soros can afford to do polling of this nature.

    Note as well that there is an absolute limit to the number of firms in the polling industry capable of doing this type of polling, and those firms can only run a limited number of such polls due to industry capacity limits.

    Getting enough people who are not "bottom of the barrel barely-employables" to do this work right is a matter of long-term training and experience, and that is something throwing money at the problem doesn't fix.