It’s harder than ever to conduct polls. Should we still believe what they say?
Over the course of six days in March, researchers from Franklin & Marshall College surveyed 540 Pennsylvania voters to get their opinions on everything from gun control and nuclear energy to President Donald Trump and Gov. Tom Wolf’s job performance.
Their findings were released Thursday as the latest installment of F&M’s oft-cited public opinion survey.
Polls like this one still provide reliable fodder for the press, but it’s no secret that public polling has taken some hits in recent years.
Pollsters say that changing communication habits, shrinking budgets among media companies, and a growing reluctance to participate in public opinion research have all created new challenges for their field. (The Capital-Star is one of several news organizations that are partners in the F&M poll.)
Veteran pollster Chris Borick of Muhlenberg College in Allentown said that it’s growing ever more difficult to compete for people’s attention and convince them to participate in a survey.
“I’m probably nostalgic for when I started because it was so much easier,” Borick said. “It was a landline world, where people were generally willing to participate in public opinion research, and I didn’t realize how good I had it.”
Even though polling is more difficult and expensive than it was decades ago, pollsters don’t think their work is any less important or reliable now than in the past.
“Polling still does a decent job and still is the best tool we have for understanding public opinion,” Berwood Yost, director of F&M’s Center for Opinion Research, said. “I don’t know how else we would do it.”
Many people blasted polls after the 2016 presidential election, saying that they had failed to predict Donald Trump’s victory over Hillary Clinton.
But pollsters say that polls can still predict election results with a high degree of accuracy, give or take a slim margin of error. Borick and others said final vote tallies for the 2016 election were closely in line with what most polls predicted. An industry group came to the same conclusion, finding that the 2016 polls were among the most accurate in estimating the popular vote since 1936.
The polls only seemed unreliable because Clinton lost after most of them showed her with a narrow lead, Borick said.
“That’s just the nature of things — when there’s a failure, it gets more attention,” Borick said. “There are some system failures, but on the whole the accuracy levels aren’t all that different from 20 years ago.”
Polls may not be going away anytime soon, but it’s still important to keep some things in mind as you read about them in the news.
Look at the sample size
The term “sample size” refers to the number of people who participated in a poll. Pollsters generally agree that the bigger the sample size, the better.
The reason? The more people who participate in a survey, the more likely it is that the findings will reflect the general opinion of the population.
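To see why, consider a quick simulation. The sketch below, written in Python with an invented “true” level of support rather than any F&M data, draws repeated random samples of different sizes and shows how the estimates cluster more tightly as samples grow:

```python
# A minimal sketch, not F&M's methodology: simulate how sample size
# affects the spread of poll estimates. Assume a hypothetical population
# in which exactly 59% hold some opinion, then poll it repeatedly.
import random

random.seed(1)
TRUE_SUPPORT = 0.59  # invented "true" level of support

def poll_once(n):
    """Simulate one poll of n respondents; return the observed share."""
    return sum(random.random() < TRUE_SUPPORT for _ in range(n)) / n

for n in (50, 540, 5000):
    estimates = [poll_once(n) for _ in range(1000)]
    spread = max(estimates) - min(estimates)
    print(f"n={n:5d}: 1,000 simulated polls ranged over {spread:.1%}")
```

At a sample size of 50, individual simulated polls can miss the truth by double digits; at 5,000, nearly all of them land within a couple of points.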
The sample size for F&M’s latest poll was 540 registered voters. That’s how many people actually completed the survey online or over the phone, out of the 8,000 that Yost said were invited to participate.
F&M chooses participants at random from a list of registered voters.
That response rate, 6.75 percent, is fairly typical of polls today, which have an average response rate of about 7 percent, according to F&M pollster Terry Madonna. He said that’s a far cry from decades past, when researchers could count on a response rate of at least 50 percent.
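The arithmetic behind those figures is easy to check. Here’s a quick sketch using the numbers above (and treating the 8,000 invitations as an exact count, which it may not be):

```python
# Back-of-the-envelope arithmetic using the figures in this story.
invited = 8000    # voters invited to participate, per Yost
completed = 540   # surveys actually completed

print(f"Response rate: {completed / invited:.2%}")  # 6.75%

# Invitations needed for 540 completes at today's ~7% response
# rate versus the 50% rate Madonna recalls from decades past:
for rate in (0.07, 0.50):
    print(f"At {rate:.0%}: about {completed / rate:,.0f} invitations")
```

At a 7 percent response rate, a pollster has to invite more than seven times as many people as in the 50 percent days to end up with the same number of completed surveys.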
“It’s getting tougher and tougher to reach people,” Madonna said Wednesday.
Madonna explained that when a polling center can’t reach all of the people in its sample pool, it has two choices. It can “breeze through” its list of names and invite a new participant for each one who doesn’t respond.
That may yield more participants, but it also drives up costs. And polls aren’t cheap to begin with. Between labor hours and technical costs, a poll like the one F&M released Thursday takes between $25,000 and $30,000 to execute, Yost said.
Alternatively, a polling center can wait to see how many of the people in its initial sample pool call back, and make peace with a smaller pool of survey data.
Even though pollsters agree that more participants make for more reliable data, Borick said that a small sample size doesn’t automatically make a poll less reliable than one with a large sample size.
Borick said that other factors, such as the way researchers craft questions, can undermine even a poll with a large sample size. That’s why it’s important to look at poll questions in context.
Which brings us to our next tip.
Context is key
Lots of things have changed in polling in the past two decades. But one thing that’s stayed consistent, according to Yost, is the way that pollsters craft questions.
“We’ve always tried to write questions in a way that’s clear, concrete, and understandable, using simple descriptors,” Yost said. “Policies may be polarizing, but we make [our questions] as neutral as possible.”
That means that surveys will often describe policy proposals in broad strokes, offering as many details as they can without betraying a proposal’s partisan associations or confusing respondents.
After F&M released its most recent poll on Thursday, some observers were quick to point out that some questions omitted details that could have affected the polling data.
A question about a proposed bailout for the nuclear industry, for instance, described the policy broadly but did not note the anticipated $500 million cost to consumers.
Fifty-five percent of poll respondents said they supported the proposal. It’s impossible to know how they would have been swayed had the surveyors noted the cost.
Yost said that his polling team was writing its questions in February, before the details of the nuclear bill were announced. At that time, they didn’t know that it would carry a $500 million price tag for consumers.
Even if they had, he said, it would have been hard to translate that figure in a way that didn’t immediately turn the public against the bill.
“Even if we wrote a question and knew the cost, we would have had to be very careful to represent the true cost to the individual,” Yost said. “We tried to craft a question that hits the high points of the legislation. Now people can have a conversation about that and talk about the costs.”
Even the order in which pollsters ask questions can affect polling results.
Questions at the end of a survey can poll lower than those that appear at the beginning, Borick said. Yost also said that his researchers try to start their surveys with broad questions before drilling down into more specific topics.
Borick said that anyone who wants to interpret polling data should try to read the full script that surveyors read to respondents. That will let you see how they framed their questions, and what details they decided to include or omit.
Mind the margin of error
All polls come with a disclaimer, called the margin of error, that tells you how closely pollsters think their findings reflect the reality of public opinion.
Pollsters know that myriad factors can affect their poll results: the race of respondents, their age, location, or party affiliation. They hope that a random sampling of people will be representative of the general population, but they also know they’re at the mercy of the people who respond.
If respondents are overwhelmingly white, for instance, or skew very young, pollsters know to adjust their findings accordingly with statistical methods.
How do they know how much to adjust? Borick said pollsters rely on data generated by exit polls, which tell us about the people who voted in elections. Exit polls help pollsters infer how people are likely to vote based on their gender, level of education, income, and other factors.
The most recent F&M poll adjusted for factors including age, gender, education, and party affiliation. Pollsters didn’t adjust for race, even though 90 percent of their respondents were white, compared to 80 percent of Pennsylvanians in the general population.
Yost said their respondents were in line with the population of registered voters, which tends to be older and whiter than the general population of Pennsylvania.
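For the curious, here is a simplified sketch of how that kind of adjustment, often called post-stratification weighting, works. Every number below is invented for illustration; this is not F&M’s actual weighting procedure:

```python
# A simplified sketch of post-stratification weighting with invented
# numbers; not F&M's actual procedure. Each group's weight is its
# population share divided by its sample share, so under-represented
# groups count for more and over-represented groups count for less.

sample_shares = {"under_50": 0.30, "50_and_over": 0.70}      # hypothetical
population_shares = {"under_50": 0.45, "50_and_over": 0.55}  # hypothetical

weights = {g: population_shares[g] / sample_shares[g] for g in sample_shares}
print(weights)  # under_50: 1.5, 50_and_over: ~0.79

# Hypothetical support for some policy within each group:
support = {"under_50": 0.70, "50_and_over": 0.50}

raw = sum(support[g] * sample_shares[g] for g in sample_shares)
weighted = sum(support[g] * sample_shares[g] * weights[g] for g in sample_shares)
print(f"Unweighted: {raw:.0%}  Weighted: {weighted:.0%}")  # 56% vs. 59%
```

In this invented example, younger voters are under-represented in the sample, so weighting nudges the headline number up by three points.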
But in addition to adjusting their data, pollsters also issue a margin of error. This figure defines the range within which the true value of public opinion is likely to fall. Generally, the larger the sample size, the smaller the margin of error.
The margin of error for the F&M poll was plus or minus 5.5 percentage points. That means that if 59 percent of respondents support legalizing marijuana, as the poll found, actual public support likely rests somewhere between 53.5 percent and 64.5 percent.
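Readers who want to check the math can reproduce the basic calculation. The textbook margin of error for a simple random sample is z times the square root of p(1 − p)/n. The sketch below applies it to this poll’s numbers; it yields about plus or minus 4.2 points, smaller than the 5.5 F&M reports, a gap consistent with the wider margins that weighting adjustments typically produce (our inference, not something F&M specified):

```python
# Textbook margin of error for a simple random sample; a sketch, not
# F&M's exact calculation, which also accounts for its weighting.
import math

n = 540   # completed interviews
p = 0.5   # worst-case proportion, which maximizes the margin
z = 1.96  # multiplier for a 95% confidence level

moe = z * math.sqrt(p * (1 - p) / n)
print(f"Simple-random-sample margin: +/-{moe:.1%}")  # about 4.2%

# Applying the reported +/-5.5-point margin to the 59% marijuana result:
print(f"Range: {59 - 5.5:.1f}% to {59 + 5.5:.1f}%")  # 53.5% to 64.5%
```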
The Pew Research Center and Harvard University’s Journalist’s Resource have handy guides to margin of error if you want to learn more.
READ MORE
- Poll: Pennsylvania voters strongly support nuclear energy as Legislature debates bailout
- Majority of Pennsylvania voters favor raising the minimum wage to $12 an hour, new poll shows
- Poll finds support for new gun laws rising among Pa. voters after brief decline
- Pennsylvania voters support Wolf, remain lukewarm on Trump in new poll
Our stories may be republished online or in print under Creative Commons license CC BY-NC-ND 4.0. We ask that you edit only for style or to shorten, provide proper attribution and link to our web site. Please see our republishing guidelines for use of photos and graphics.