More about polls on the AP wire

Public opinion polling serves as a vital source in The Associated Press's journalism. Short of conducting a census, it is the only scientifically proven way to know what a broad group of people is thinking, whether that group is all Americans, voters, baby boomers, pet owners or parents. Polls conducted for or covered by AP have to meet certain standards. Here are some questions and answers from AP Polling Director Jennifer Agiesta on why we view polling the way we do.

How can a sample of 1,000 people represent the views of more than 315 million Americans?

There's a well-worn joke in the polling community: Don't believe random sampling works? Next time you need a blood test, tell your doctor to take it all. It is a scientifically proven fact that a randomly selected sample, even a small one, is representative of the larger population from which it's drawn, whether that population is the size of Mayberry, New York City or China. And after you've got a random sample of about 1,000 people, the level of accuracy you'd gain by surveying more people starts to be very small relative to the costs and effort required to collect those interviews. The key to an accurate result is random selection, not size.
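Those diminishing returns can be seen in the standard formula for the margin of sampling error at 95 percent confidence. This is a back-of-the-envelope sketch; the design details of any real poll would modify the numbers somewhat:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of sampling error, in percentage points,
    for a simple random sample of size n (worst case, p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n) * 100

for n in (500, 1000, 2000, 5000):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1f} points")
```

Doubling a sample from 1,000 to 2,000 interviews trims the margin only from about 3.1 points to about 2.2, which is why most media polls stop near 1,000.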

What makes a sample "randomly selected"?

Here's the technical definition: A random sample is one that's been chosen so that each person in the population being sampled has a known and non-zero probability of being selected for the sample. In plain English: Everyone should have a chance of being selected, and the researcher should know what that chance is. This is called probability sampling.

Since this is the real world, almost no survey achieves that perfect definition. Let's start with the "non-zero probability of being selected" part. The share of people who have no chance to be included in the sample is known in the survey world as non-coverage. Just about every poll you see has some rate of non-coverage; knowing what it is and what it means can help you interpret a poll's accuracy.

The most prominent example of non-coverage in recent survey research comes from the spread of cellphones. Just about a third of the American adult population lives in a household where there is no landline telephone. Until a few years ago, a pollster could make a random selection of Americans by calling randomly selected landline telephone numbers and asking for a randomly selected member of the household. About a decade ago, the pollster's non-coverage using that method would've been about 2 to 4 percent. That would include people who didn't have any home telephone plus a relatively small institutionalized population (people in large group facilities such as prisons or hospitals).

That same survey conducted in the U.S. today would have a non-coverage rate closer to 35 percent because of all the people who've given up their landlines. That's an unacceptable rate of non-coverage, and it's why we don't cover polls of people in the U.S. that don't include cellphones.

Likewise, polls conducted using an Internet panel that does not provide Internet access to people who don't have it would leave out about 1 in 5 adults who do not use the Internet.

The other requirement for a random sample is that the pollster must know each person's chance of being selected for the sample. That's so that a pollster can make sure that people who had twice the chance of being in the sample as everyone else don't end up counting for twice as much as the others in the survey results. Before cellphones, that meant people who had multiple phone lines in their home were handled differently than those with only one phone line, and people who lived with four other adults were handled differently than people who lived alone. Now, pollsters have to account for the number of adults in the household, the number of landlines and cellphones on which a person can be reached, and sometimes, how often people say they use each of those types of phones.
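In practice that adjustment starts with a base, or design, weight equal to the inverse of each respondent's selection probability. Here is a toy sketch; the probabilities are invented for illustration and are not AP's actual sampling design:

```python
# Base (design) weights: each respondent counts in inverse proportion
# to his or her chance of being selected. Probabilities are illustrative.
respondents = [
    {"id": "A", "phone_lines": 1, "adults_in_household": 1},
    {"id": "B", "phone_lines": 2, "adults_in_household": 1},  # twice as easy to reach
    {"id": "C", "phone_lines": 1, "adults_in_household": 4},  # 1-in-4 pick at home
]

for r in respondents:
    # More phone lines raise the chance of being dialed; more adults
    # sharing them lower the chance any one person is interviewed.
    p_select = r["phone_lines"] / r["adults_in_household"]
    r["weight"] = 1.0 / p_select

# B ends up counting half as much as A; C counts four times as much.
```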

Telephone polls aren't the only surveys where each person's chance of being selected can be identified. Face-to-face surveys can pin down probabilities of selection, and Internet polls that are based on a panel selected using traditional random-sampling methods also have enough information to calculate each person's chance of being chosen for the poll.

How are the samples selected for AP polls?

The Associated Press always uses probability sampling when conducting polling. More detailed information on the methodology of AP-GfK polling is available online.

Ok, that's helpful for adults, but what about voters?

Identifying voters is the biggest challenge for any pollster trying to understand the electorate. In the U.S., voting is confidential and not mandatory, turnout fluctuates from year to year, and some states don’t even keep voter records, so there’s very little to go on except for the answers people give to questions that have been proven to track voting closely.

Because of that, most public pollsters have developed a series of questions that they use to identify likely voters. At AP, we ask a series of questions, and voters who meet certain criteria are considered to be likely voters. In our final poll before the 2012 election, the likely voter model we used suggested that about 61 percent of adults would turn out. Recently released Census data show that about 57 percent of adults did vote.

Not every pollster uses the same method. Some construct a point scale based on each person's answers to those questions and consider those with higher scores to be likely voters. Others assign each person in the survey a probability of voting and use that number to weight the responses. Many campaign pollsters use voter records to target likely voters in their dialing. No method is perfect, and pollsters are constantly researching ways to improve their models.
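A point-scale model of the kind described above might look like this in outline. The questions, point values and cutoff here are invented for the example; they are not AP's actual model:

```python
def likely_voter_score(answers):
    """Toy likely-voter scale: award points for past behavior and
    stated engagement (illustrative only)."""
    score = 0
    if answers.get("voted_in_last_election"):
        score += 2
    if answers.get("knows_polling_place"):
        score += 1
    score += {"high": 2, "some": 1, "low": 0}[answers.get("interest", "low")]
    return score

CUTOFF = 3  # respondents at or above the cutoff count as likely voters

engaged = {"voted_in_last_election": True,
           "knows_polling_place": True,
           "interest": "high"}
print(likely_voter_score(engaged) >= CUTOFF)
```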

What is weighting and why should I care about it?

Weighting is a process by which pollsters adjust the results of their poll to match known parameters for certain characteristics. For example, we know that women are more likely to respond to phone polls than men. So most polls conducted by phone wind up interviewing more women than they should. To correct for that imbalance, a pollster might apply weights so that each woman interviewed counts for a little bit less than 1 response and each man interviewed counts for a little bit more than 1 response.

The math goes something like this: Pollster Jane conducts a survey, which comes back with 56 percent women and 44 percent men. The population she's trying to measure is actually 52 percent women and 48 percent men. To make her survey look like the population she's studying, Jane applies a weight of 0.929 to responses from each woman in her survey and a weight of 1.091 to the responses from each man. This way, women aren't overrepresented in her final results.
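Jane's weights are just each group's target share divided by its share of the sample. A quick sketch of the arithmetic:

```python
sample_share = {"women": 0.56, "men": 0.44}  # who actually responded
target_share = {"women": 0.52, "men": 0.48}  # the population Jane is measuring

# Weight = target share / sample share
weights = {g: target_share[g] / sample_share[g] for g in sample_share}
# weights["women"] is 0.52 / 0.56, about 0.929
# weights["men"]   is 0.48 / 0.44, about 1.091

# Applying the weights recovers the population mix:
weighted = {g: sample_share[g] * weights[g] for g in sample_share}
```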

This process works only if Jane knows that the men who did not complete her survey are not that different from the men who did. On most political topics, research has shown that non-respondents are generally not that different from demographically similar poll takers. That's not true for some other subjects, such as health surveys or measures of financial well-being.

Additionally, weighting should only be used for known parameters. The Census Bureau's statistics on demographics and geographic population distribution are accurate enough to be used to weight survey data. Exit poll data, on the other hand, are not accurate measures to use for survey weighting.

What is the margin of error and why should I care about it?

The margin of sampling error that is typically reported with a poll is frequently misunderstood. Think of the reported result as the most likely outcome if you asked the same question of everyone in the target population. Nineteen times out of 20, the result you’d get from asking everyone will be close enough to the poll’s estimate to be within the margin of error. One time out of 20, the actual number will be outside the range of the margin of error.

The reason to pay attention to the margin of error is that no one knows for sure where within that range the actual result falls, so in order to say that one poll result is different from another, the difference between the two has to be at least as large as the margin of error for each result, and preferably, twice as big as the margin of error. This means it is highly unlikely that a difference of 1 or 2 percentage points in a poll is ever meaningful. And remember, results among subgroups (e.g., men vs. women or Democrats vs. Republicans) have a higher margin of sampling error than results for everyone in the poll, because they are much smaller groups.
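The subgroup effect falls straight out of the margin-of-error formula. Here is an illustrative calculation for a 1,000-person poll in which half the respondents are women, using the standard 95 percent confidence formula:

```python
import math

def moe(n, p=0.5, z=1.96):
    # 95% margin of sampling error, in percentage points
    return z * math.sqrt(p * (1 - p) / n) * 100

full_sample = moe(1000)  # about +/- 3.1 points for everyone in the poll
women_only = moe(500)    # about +/- 4.4 points for the half-size subgroup

# A 2-point gap between two such subgroup results sits well inside
# those error ranges, so it should not be treated as a real difference.
```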

It’s also worth keeping in mind that sampling error is only one source of error in a poll. Non-coverage, non-response, poorly worded questions, questions that are asked in an order that affects how people answer later questions and the natural desire by many people to be accommodating or appear to give the “right” answer all can add further error to a poll’s results.

Does question wording or order impact a poll's results?

Absolutely. Pollsters looking for accurate results strive to write questions that are clear, balanced and measure only one thing at a time. Most of these things, however, are in the eye of the beholder, so each pollster tends to ask their question a little differently.

Take that polling standard, the "right direction" question. AP asks:

"Generally speaking, would you say things in this country are heading in the right direction or in the wrong direction?"

Another prominent public pollster pairs the right direction with the "wrong track," and asks about feelings instead of thoughts:

"Do you feel things in this country are generally going in the right direction or do you feel things have pretty seriously gotten off on the wrong track?"

And yet another asks for thoughts, pairs right direction with wrong track and drops the "seriously":

"All in all, do you think things in the nation are generally headed in the right direction, or do you feel things are off on the wrong track?"

As a result, each pollster tends to get a slightly different response to the question, even if they poll at the same time. But since their results are comparable to their own previous polls, you'll notice that most pollsters find their polls moving in the same general direction; you might even say they're all on the same track.

Can polls predict the future?

If they could, pollsters would all be millionaires. A poll is merely a snapshot of public opinion at the time it is taken. Most people aren't very good at predicting their future behavior, so we can only expect a poll's results to reflect what people feel right now.

What if I want to know more?

If you’re looking for more information about AP’s polling, all of our recent survey releases are available online, as are reports and other information about polling conducted through the AP-NORC Center for Public Affairs Research and information about our polls with GfK.

And here are a couple of useful resources on polling available online:

The American Association for Public Opinion Research has many great resources on its website, including information about the group’s Transparency Initiative, which encourages public pollsters to release enough information about their methods that consumers can make informed judgments about polling. AP is a supporter of the initiative.

The National Council on Public Polls website hosts the always useful “20 Questions A Journalist Should Ask About Poll Results.” Worth a read if you’re interested in evaluating polls.


All contents © copyright 2016 Associated Press. All rights reserved.