
Interpreting election polls

By Bo Harmon, BIPAC's senior vice president of political affairs

Editor's note: Don't forget to vote in Tuesday's primary election for various positions, including statewide executive offices (governor/lieutenant governor, attorney general, auditor, secretary of state, treasurer), Ohio General Assembly, representatives to the U.S. Congress, state Board of Education, county offices, judicial offices and political party members.


Between now and Election Day, you will be bombarded with polling data about campaigns: who is up and who is down. Pundits on TV will cite dueling polls to prove their points. How do you know which to believe and which to dismiss?

There are several things to note when examining polling data, and they are often NOT what the media suggest when deciding whether a poll is valid. The first is to make sure it is a "scientific" poll, in which the pollster selected the participants, rather than an "unscientific" survey, in which respondents self-select to take part. Unscientific surveys tend to be things like "Vote on our website for who you think should be the next president." They may be interesting, but they are not representative of public opinion and can be easily dismissed.


"Public polls" vs. "Internal polls"

Media organizations often hold up "public" polls as more reliable because they are conducted by a non-involved party and look skeptically at "internal" polls released by a campaign or party committee. In fact, in terms of accuracy, the opposite is generally true.

Public polls are those that are not paid for by a candidate or party committee. They are frequently paid for by a media organization or a public polling firm such as PPP, Rasmussen, Harris or others. These are seen as "unbiased" and thus more reliable. This perception is a fallacy.

Internal polls are polls paid for by a campaign, party committee or someone else with a direct interest in impacting the outcome of the race. When a campaign releases internal polling data, you can be sure it is because they view the results as good news. That does NOT mean that the numbers are inaccurate. It just means that their release is favorable to the campaign.

Campaigns and party organizations spend tens of thousands of dollars on polls, using experienced pollsters who run complex algorithms on the data, and they base multi-million-dollar strategic decisions about media buys, messaging and turnout on the results. Public polling firms may spend a few hundred dollars on robo-calls asking a few questions. Most media polls fall somewhere in the middle.

Public polling CAN certainly be accurate, but an unbiased source does not guarantee accuracy (as the media tend to portray); the polling sample, which we look at next, matters far more. And, let's be honest, in politics there are no completely unbiased sources anyway.

Campaigns never release the entirety of their polls, because the full results are a blueprint for decision making. But if a campaign releases a head-to-head number against its opponent or a favorable/unfavorable rating, that is data it just paid a lot of money for and is using to guide hundreds of strategic decisions, so THEY certainly believe it is accurate. You should take "internal" poll numbers you see seriously. The downside of internal polls is that campaigns only release information that is good news for them, so while you may be getting accurate information, it is HIGHLY selective.


Polling Sample

This is where polls often face the most controversy. WHO participated in the poll? Is it intended to reflect the public as a whole? ALL registered voters? Only those likely to participate in the election? For most campaign polls, the line that gets the most attention is the "ballot test," commonly framed as "If the election were held today, would you vote for Candidate A or Candidate B?"

The assumption is that the question represents what would happen if the election were in fact held today. But if you are asking people who are not even registered to participate in the election, the results are less meaningful. If you ARE hoping to reflect what the results would be if the election were held today, you need a very careful sample that accurately reflects expected participation across age, gender, race, geography, party affiliation, etc. Over- or under-sampling an important demographic obviously makes the results less accurate. That is why campaign polls are often more accurate than media or public polls: it takes a lot of money to ensure a balanced and representative sample of respondents. Media and public polls are often much looser with their samples, and that is something you should check before giving validity to reported polling results.

If you hope to answer the question "What does the public think of XYZ?" then a sample of all adults makes sense. Answering "How will XYZ affect the election?" means it makes more sense to ask only those who are registered or, better, likely to vote. It is a subtle but important difference, and one media polls rarely draw.
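To make the sampling point concrete, here is a minimal, made-up weighting sketch in Python. Real pollsters weight across many more dimensions and with far more care; the age groups, turnout shares and responses below are invented purely to show how an unrepresentative raw sample shifts the headline number.

```python
# Toy post-stratification: reweight respondents so the sample matches
# expected turnout by age group. All groups, shares and answers are invented.
expected_turnout = {"18-34": 0.20, "35-64": 0.50, "65+": 0.30}

respondents = [
    {"age": "18-34", "vote": "A"},
    {"age": "35-64", "vote": "A"},
    {"age": "35-64", "vote": "B"},
    {"age": "65+", "vote": "B"},
]

# Each group's share of the raw sample.
sample_share = {
    group: sum(r["age"] == group for r in respondents) / len(respondents)
    for group in expected_turnout
}

# A respondent's weight is their group's expected share over its sampled share.
weights = [expected_turnout[r["age"]] / sample_share[r["age"]] for r in respondents]

total = sum(weights)
support_a = sum(w for r, w in zip(respondents, weights) if r["vote"] == "A") / total
print(f"Weighted support for Candidate A: {support_a:.0%}")  # 45%, vs. 50% unweighted
```

In this toy sample, younger voters are over-represented and older voters under-represented, so the raw 50 percent for Candidate A overstates A's support; weighting to expected turnout brings it down to 45 percent.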

Part of obtaining an accurate sample is using the correct channel for reaching respondents. Traditionally, polls all went to home landline phones. With the number of people who have a landline shrinking, pollsters now use a mix of landlines, cell phones and, increasingly, online samples to ensure accuracy. While younger and minority voters are increasingly reachable only via cell phone or online, older voters, who are the most likely to turn out in elections, still predominantly use landlines.

Always take careful note of the polling sample in determining the credibility you assign it.


Question phrasing

HOW you ask a question makes a huge difference in the results you get. Google "Jimmy Kimmel Obamacare vs. Affordable Care Act" for a funny demonstration of how this works. In the sketch, dozens of people on the street indicated they supported the Affordable Care Act but opposed Obamacare. Beyond using different names for the same legislation, the phrasing of a question greatly impacts the results. "Would you vote for Joe Smith or Mary Jones?" often gets very different results than "Would you vote for the Democrat, Joe Smith, or the Republican, Mary Jones?" When party identification will appear on the ballot, which is the more accurate way to ask the question?

This says nothing of message testing, or its unethical cousin, push-polling. Push polling is not polling; it is simply distributing a negative message about a candidate in the form of a question: "Would you STILL vote for Joe Smith if you knew he hated puppies and children?" You are unlikely to see push-poll questions reported as part of a poll, but you should be very aware of HOW a question was phrased, especially if it concerns a public policy question.

Issue advocacy groups are often the worst offenders, asking deliberately slanted questions such as "Do you support the all-American, God-given right to do ABC, or do you think a heavy-handed government bureaucrat should be able to take away your rights because they are having a bad day?" They then claim that 99 percent of Americans agree with their position. Even seemingly subtle wording differences can have a sizable impact on the results. Pay close attention to question wording.


Margin of Error and Averaging

While the public-versus-internal distinction, the polling sample and the question phrasing account for most of the judgment about whether a poll accurately reflects public sentiment, a few other factors are also at play.

Sample size – you want to ensure that a poll's sample is large enough to keep the margin of error small; the larger the sample size, the lower the margin of error. A margin of error greater than 6 percent makes it hard to take a poll too seriously. You should mentally apply the margin of error to the results when you read them, in both directions. A candidate with 40 percent of the vote in a poll with a 6 percent margin of error COULD actually have anywhere between 34 and 46 percent. Most elections are decided by much slimmer margins, so watch the margin of error carefully.
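For those who want the arithmetic, the margin of error reported with most polls comes from the standard normal-approximation formula for a proportion. Here is a minimal Python sketch; the function name and the 600-person sample are illustrative, not drawn from any particular poll.

```python
import math

def margin_of_error(sample_size, share=0.5, z=1.96):
    """95 percent margin of error for a proportion (normal approximation).
    share defaults to 0.5, the worst case that pollsters usually report."""
    return z * math.sqrt(share * (1 - share) / sample_size)

moe = margin_of_error(600)                 # a typical media-poll sample size
print(f"Margin of error: +/- {moe:.1%}")   # about +/- 4.0%

# Apply it in both directions, as suggested above:
support = 0.40
print(f"Support plausibly {support - moe:.0%} to {support + moe:.0%}")  # 36% to 44%
```

Note the square root: quadrupling the sample size only halves the margin of error, which is part of why very precise polls are expensive.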

It has become popular to look at "poll averages" such as those Real Clear Politics publishes. This is a very helpful way of smoothing out some of the poll-to-poll variation that comes with the margin of error, and it gives a more accurate picture of public sentiment, but you should always be aware of when the polls in the average were conducted. A lot can change in politics in a few months, and in some races polls only come out every few months, so the average could include numbers that are quite stale.
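As an illustration of why poll dates matter in an average, here is a toy Python sketch. Real Clear Politics' actual rules are not described here; the cutoff, dates and shares below are invented.

```python
from datetime import date

# Hypothetical polls: (Candidate A's share, date the poll was conducted).
polls = [
    (0.47, date(2014, 4, 20)),
    (0.44, date(2014, 4, 28)),
    (0.49, date(2014, 2, 1)),   # months old and possibly stale
]

def recent_average(polls, today, max_age_days=45):
    """Average Candidate A's share across polls no older than max_age_days."""
    fresh = [share for share, when in polls if (today - when).days <= max_age_days]
    return sum(fresh) / len(fresh) if fresh else None

print(recent_average(polls, today=date(2014, 5, 2)))  # 0.455; February poll dropped
```

Including the stale February poll would have pulled the average up to about 46.7 percent, a reminder to check when each poll in an average was fielded.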

Talking about the latest polls is always a favorite parlor game in Washington. With the large number of competitive races this year, you will see new polling numbers on SOMETHING every day. Just be sure to look at who conducted the poll, the polling sample, and how the questions were asked before giving them too much credence. Then go spin like a pro.
