Except that 538 hasn't tossed NBC/WSJ, while they
Two points:
The RCP average certainly takes into account the high and the low 'outliers' in compiling its 'average'. That's how we get to the 6.3 spread as of today.
By Oct at the latest you will be visiting http://fivethirtyeight.com/ right down to the day before the election. They really are the 'only game in town'.
Triumph of the Nerds: Nate Silver Wins in 50 States
http://mashable.com/2012/11/07/nate-silver-wi...CIWi5nDaql
Quote:
The Fivethirtyeight.com analyst, despite being pilloried by the pundits, outdid even his 2008 prediction. In that year, his mathematical model correctly called 49 out of 50 states, missing only Indiana (which went to Obama by 0.1%).
This year, according to all projections, Silver's model has correctly predicted 50 out of 50 states. A last-minute flip for Florida, which finally went blue in Silver's prediction on Monday night, helped him to a perfect game.
http://fivethirtyeight.blogs.nytimes.com/2012...race/?_r=0
Quote:
Which Polls Fared Best (and Worst) in the 2012 Presidential Race
By Nate Silver
November 10, 2012 8:38 pm
As Americans’ modes of communication change, the techniques that produce the most accurate polls seem to be changing as well. In last Tuesday’s presidential election, a number of polling firms that conduct their surveys online had strong results. Some telephone polls also performed well. But others, especially those that called only landlines or took other methodological shortcuts, performed poorly and showed a more Republican-leaning electorate than the one that actually turned out.
Our method of evaluating pollsters has typically involved looking at all the polls that a firm conducted over the final three weeks of the campaign, rather than its very last poll alone. The reason for this is that some polling firms may engage in “herding” toward the end of the campaign, changing their methods and assumptions such that their results are more in line with those of other polling firms.
There were roughly two dozen polling firms that issued at least five surveys in the final three weeks of the campaign, counting both state and national polls. (Multiple instances of a tracking poll are counted as separate surveys in my analysis, and only likely voter polls are used.)
For each of these polling firms, I have calculated the average error and the average statistical bias in the margin it reported between President Obama and Mitt Romney, as compared against the actual results nationally or in one state.
For instance, a polling firm that had Mr. Obama ahead by two points in Colorado — a state that Mr. Obama actually won by about five points — would have had a three-point error for that state. It also would have had a three-point statistical bias toward Republicans there.
The bias calculation measures in which direction, Republican or Democratic, a firm’s polls tended to miss. If a firm’s polls overestimated Mr. Obama’s performance in some states, and Mr. Romney’s in others, it could have little overall statistical bias, since the misses came in different directions. In contrast, the estimate of the average error in the firm’s polls measures how far off the firm’s polls were in either direction, on average.
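The error-and-bias calculation described above can be sketched in a few lines of Python. This is a minimal illustration using the article's Colorado example; the function name and sign convention are mine, not FiveThirtyEight's:

```python
def poll_error_and_bias(poll_margin, actual_margin):
    """Margins are (Obama minus Romney) in percentage points.

    Returns (error, bias): error is the absolute size of the miss;
    bias is signed, with positive meaning the poll understated
    Mr. Obama's margin (a Republican-leaning miss).
    """
    error = abs(poll_margin - actual_margin)
    bias = actual_margin - poll_margin  # positive -> Republican bias
    return error, bias

# The Colorado example from the text: a poll showing Obama +2 in a
# state he actually won by about 5 points has a 3-point error and a
# 3-point statistical bias toward Republicans.
error, bias = poll_error_and_bias(2.0, 5.0)
print(error, bias)  # 3.0 3.0
```

Averaging these two quantities over all of a firm's polls gives the figures discussed below; misses in opposite directions cancel in the bias average but not in the error average.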
Among the more prolific polling firms, the most accurate by this measure was TIPP, which conducted a national tracking poll for Investor’s Business Daily. Relative to other national polls, its results seemed Democratic-leaning at the time they were published. However, it turned out that most polling firms underestimated Mr. Obama’s performance, so those whose results had seemed Democratic-leaning were often closest to the final outcome.
Conversely, polls that were Republican-leaning relative to the consensus did especially poorly.
Among telephone-based polling firms that conducted a significant number of state-by-state surveys, the best results came from CNN, Mellman and Grove Insight. The latter two conducted most of their polls on behalf of liberal-leaning organizations. However, as I mentioned, since the polling consensus underestimated Mr. Obama’s performance somewhat, the polls that seemed to be Democratic-leaning often came closest to the mark.
Several polling firms, on the other hand, got notably poor results. For the second consecutive election — the same was true in 2010 — Rasmussen Reports polls had a statistical bias toward Republicans, overestimating Mr. Romney’s performance by about four percentage points, on average.
Polls by American Research Group and Mason-Dixon also largely missed the mark. Mason-Dixon might be given a pass since it has a decent track record over the longer term, while American Research Group has long been unreliable.
FiveThirtyEight did not use polls by the firm Pharos Research Group in its analysis, since the details of the polling firm are sketchy and since the principal of the firm, Steven Leuchtman, was unable to answer due-diligence questions when contacted by FiveThirtyEight, such as which call centers he was using to conduct the polls. The firm’s polls turned out to be inaccurate, and to have a Democratic bias.
It was one of the best-known polling firms, however, that had among the worst results. In late October, Gallup consistently showed Mr. Romney ahead by about six percentage points among likely voters, far different from the average of other surveys. Gallup’s final poll of the election, which had Mr. Romney up by one point, was slightly better, but still identified the wrong winner in the election.
Gallup has now had three poor elections in a row. In 2008, their polls overestimated Mr. Obama’s performance, while in 2010, they overestimated how well Republicans would do in the race for the United States House.
Instead, some of the most accurate firms were those that conducted their polls online.
The final poll conducted by Google Consumer Surveys had Mr. Obama ahead in the national popular vote by 2.3 percentage points – very close to his actual margin, which was 2.6 percentage points based on ballots counted through Saturday morning.
Ipsos, which conducted online polls for Reuters, came close to the actual results in most places that it surveyed, as did the Canadian online polling firm Angus Reid. Another online polling firm, YouGov, got reasonably good results.
The online polls conducted by JZ Analytics, run by the pollster John Zogby, were not used in the FiveThirtyEight forecast because we do not consider its method to be scientific, since it encourages voters to volunteer to participate in its surveys rather than sampling them at random. Its results were less accurate than those of most online polling firms, although about average compared with the broader group of surveys.
We can also extend the analysis to consider the 90 polling firms that conducted at least one likely voter poll in the final three weeks of the campaign. One should probably not read too much into the results for the individual firms that issued just one or two polls, which is not a sufficient sample size to measure reliability.
However, a look at this broader collective group of pollsters, and the techniques they use, may tell us something about which methods are most effective.
Among the nine polling firms that conducted their polls wholly or partially online, the average error in calling the election result was 2.1 percentage points. That compares with a 3.5-point error for polling firms that used live telephone interviewers, and 5.0 points for “robopolls” that conducted their surveys by automated script.
The traditional telephone polls had a slight Republican bias on the whole, while the robopolls often had a significant Republican bias.
(Even the automated polling firm Public Policy Polling, which often polls for liberal and Democratic clients, projected results that were slightly more favorable for Mr. Romney than what he actually achieved.) The online polls had little overall bias, however.
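The mode-by-mode comparison above can be illustrated with a short grouping computation. The per-poll records below are made-up toy numbers chosen so the group averages reproduce the error figures quoted in the text (2.1, 3.5, and 5.0 points); they are not the actual 2012 data:

```python
from statistics import mean

# Hypothetical per-poll records: (mode, error in points, bias in
# points, positive = Republican-leaning). Toy data only.
polls = [
    ("online", 1.8, -0.3), ("online", 2.4, 0.5),
    ("live_phone", 3.1, 1.2), ("live_phone", 3.9, 0.8),
    ("robopoll", 4.6, 3.0), ("robopoll", 5.4, 3.8),
]

def averages_by_mode(polls):
    """Group polls by mode and return mode -> (avg error, avg bias)."""
    by_mode = {}
    for mode, error, bias in polls:
        by_mode.setdefault(mode, []).append((error, bias))
    return {
        mode: (mean(e for e, _ in rows), mean(b for _, b in rows))
        for mode, rows in by_mode.items()
    }

print(averages_by_mode(polls))
```

With these toy inputs the online group averages a 2.1-point error with near-zero bias, while the robopoll group shows both the largest error and a consistent Republican-leaning bias, mirroring the pattern the article reports.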
The difference between the performance of live telephone polls and the automated polls may partly reflect the fact that many of the live telephone polls call cellphones along with landlines, while few of the automated surveys do. (Legal restrictions prohibit automated calls to cellphones under many circumstances.)
Research by polling firms and academic groups suggests that polls that fail to call cellphones may underestimate the performance of Democratic candidates.
The roughly one-third of Americans who rely exclusively on cellphones tend to be younger, more urban, worse off financially and more likely to be black or Hispanic than the broader group of voters, all characteristics that correlate with Democratic voting.
Weighting polling results by demographic characteristics may make the sample more representative, but there is increasing evidence that these weighting techniques will not remove all the bias that is introduced by missing so many voters.
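A toy post-stratification sketch shows both what demographic weighting does and why it cannot remove all of the bias: it corrects for a group being undersampled, but only using the respondents actually reached, so any difference between the reached and unreached members of that group goes uncorrected. All group names and numbers below are hypothetical:

```python
def weighted_estimate(responses, pop_shares):
    """responses: list of (group, value) pairs from the sample.
    pop_shares: group -> share of the target population.
    Reweights each group's sample mean to its population share.
    """
    by_group = {}
    for group, value in responses:
        by_group.setdefault(group, []).append(value)
    return sum(
        pop_shares[g] * (sum(vals) / len(vals))
        for g, vals in by_group.items()
    )

# Toy numbers: cell-only voters are one-third of the population but
# only one-fifth of this hypothetical sample; their support for the
# Democrat is 60% versus 45% among voters reachable by landline.
sample = [("cell_only", 0.60)] * 20 + [("reachable", 0.45)] * 80
raw = sum(v for _, v in sample) / len(sample)  # unweighted, ~0.48
weighted = weighted_estimate(sample, {"cell_only": 1 / 3, "reachable": 2 / 3})
print(raw, weighted)  # weighting raises the estimate to ~0.50
```

The correction works only if the 20 cell-only respondents reached resemble the cell-only voters who were missed; if the hardest-to-reach voters differ from the reached ones within the same demographic cell, a residual bias survives the weighting, which is the article's point.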
Some of the overall Republican bias in the polls this year may reflect the fact that Mr. Obama made gains in the closing days of the campaign, for reasons such as Hurricane Sandy, and that this occurred too late to be captured by some polls.
In the FiveThirtyEight “now-cast,” Mr. Obama went from being 1.5 percentage points ahead in the popular vote on Oct. 25 to 2.5 percentage points ahead by Election Day itself, close to his actual figure.
Nonetheless, polls conducted over the final three weeks of the campaign had a two-point Republican bias overall, probably more than can be explained by the late shift alone. In addition, likely voter polls were slightly more Republican-leaning than the actual results in many races in 2010.
In my view, there will always be an important place for high-quality telephone polls, such as those conducted by The New York Times and other major news organizations, which make an effort to reach as representative a sample of voters as possible and which place calls to cellphones.
And there may be an increasing role for online polls, which can have an easier time reaching some of the voters, especially younger Americans, that telephone polls are prone to miss. I’m not as certain about the future for automated telephone polls.
Some automated polls that used innovative strategies got reasonably good results this year. SurveyUSA, for instance, supplements its automated calls to landlines with live calls to cellphone voters in many states. Public Policy Polling uses lists of registered voters to weight its samples, which may help to correct for the failure to reach certain kinds of voters.
Rasmussen Reports uses an online panel along with the automated calls that it places. The firm’s poor results this year suggest that the technique will need to be refined. At least they have some game plan to deal with the new realities of polling.
In contrast, polls that place random calls to landlines only, or that rely upon likely voter models that were developed decades ago, may be behind the times.
Perhaps it won’t be long before Google, not Gallup, is the most trusted name in polling.