Statistics, probability, and Nate Silver

In the last few days, Nate Silver has become the third most talked-about man in politics, with pundits left and right saying he’s audaciously staked his professional reputation on an Obama win.

This is sad, and it shows how little even the more educated among us understand about the nature of statistics and probability. Nate’s electoral prognostications over the last several months have really been two separate things melded together:

First, they are predictions of the accuracy of the national polls, the tracking polls, the swing-state polls, and those pollsters’ estimates of how registered voters will translate into likely voters. Pollsters use well-worn statistical models to attach confidence intervals to their results, but by merging several polls and thereby increasing the sample size, Silver is able to shrink those confidence intervals significantly, giving a more accurate model. Silver’s ‘now-cast’ numbers are based purely on those polls, on how likely they are to be wrong by enough to swing the result in a given state, and on a Monte Carlo simulation that generates a probabilistic distribution of outcomes. He then reports what share of those outcomes leads to an Obama victory, a Romney victory, or a tie.
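To make that mechanism concrete, here is a minimal sketch of a now-cast-style Monte Carlo simulation. This is not Silver’s actual model: the swing states chosen, the polling averages, the error sizes, and the ‘safe’ electoral-vote count are all invented for illustration.

```python
import random

# Hypothetical polling averages for a few swing states:
# (Obama's share of the two-party vote, standard error, electoral votes).
# These numbers are invented for illustration, not real 2012 data.
STATE_POLLS = {
    "Ohio":     (0.52, 0.02, 18),
    "Florida":  (0.50, 0.02, 29),
    "Virginia": (0.51, 0.02, 13),
    "Colorado": (0.51, 0.02, 9),
}
SAFE_OBAMA_EV = 243  # electoral votes assumed safe, out of 538 total

def simulate_once():
    """One simulated election: perturb each state's polling average by
    its error and award its electoral votes to whoever clears 50%."""
    ev = SAFE_OBAMA_EV
    for mean, std_err, votes in STATE_POLLS.values():
        if random.gauss(mean, std_err) > 0.5:
            ev += votes
    return ev

def now_cast(trials=10_000):
    """The share of simulated elections in which Obama reaches 270."""
    return sum(simulate_once() >= 270 for _ in range(trials)) / trials

print(f"P(Obama win) ~ {now_cast():.1%}")
```

The point of the sketch is that the output is a statement about the polls and their error bars, not about any randomness in the election itself.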

The second thing Silver does (or, to be more accurate, did) is predict future effects that could change the electorate’s response between the time a poll was taken and Election Day. This involves a great deal of educated guesswork about economic factors, foreign-policy issues, natural disasters (ahem) and, more than anything, a general regression to the mean. Throwing those variable ingredients into the Monte Carlo soup churns out an outcome distribution that Silver presents as the ‘Nov. 6 forecast’. One could definitely make the case that, since there’s a level of subjectivity in weighting the different factors, bias could creep into the model at this stage. It’s extremely hard to document whether such a bias actually exists in these forecasts, but thankfully, at this point, we don’t have to.

I mentioned that Silver ‘did’ use multi-factor predictive models because, as the polling dates approached the election date, the factors that might change the feeling of the electorate in the intervening time were, naturally, given less and less weight, until today, when that weight is zero. Today’s estimate, the one getting so much press, is based entirely on polling data and confidence intervals, not on future factors. Today the ‘Nov. 6 forecast’ and the ‘now-cast’ are exactly the same. Pundits could still argue that there are other vectors of possible bias, including Silver’s weighting of polls against each other and his calculations of ‘house bias’, but those are all pretty clearly grounded in historical data, and criticisms of them are harder to credit.
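Continuing the sketch above (with the same invented numbers), the forecast version could be modeled as the now-cast plus one extra noise term: a national ‘drift’ whose scale shrinks to zero as Election Day approaches. The linear decay and the 0.005-per-day scale below are my own illustrative assumptions, not Silver’s published methodology.

```python
def forecast(days_until_election, trials=10_000):
    """Like now_cast(), but each simulated election also draws a national
    drift representing events that could move opinion between the polling
    date and Election Day. Its scale shrinks linearly to zero, so on
    Election Day the forecast and the now-cast coincide."""
    drift_scale = 0.005 * days_until_election  # invented decay rate
    wins = 0
    for _ in range(trials):
        drift = random.gauss(0, drift_scale)  # shared across all states
        ev = SAFE_OBAMA_EV
        for mean, std_err, votes in STATE_POLLS.values():
            if random.gauss(mean, std_err) + drift > 0.5:
                ev += votes
        wins += ev >= 270
    return wins / trials

print(f"30 days out:  {forecast(30):.1%}")
print(f"Election Day: {forecast(0):.1%}")  # identical to the pure now-cast
```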

It’s a shame we don’t do more to teach statistics and probability in school, because the average person tends to treat every kind of probability as the same kind of thing. Take a football game: you can build a reasonably accurate probability model of who will win based on past performance, where the variance comes from the ‘noise’ in the game itself. A single interception or a lucky play can drastically change the outcome. In this sense, the probability means that if the two teams played 100 games with the same rosters in the same state of health, each team’s tallied wins would fall roughly in line with the model’s probabilities. There is internal chaos in the game that forces a probabilistic distribution.
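A toy simulation shows that frequency interpretation at work; the skill gap and noise scale here are made-up numbers, not a real football model.

```python
import random

def play_game(skill_gap=0.3, noise=1.0):
    """One simulated game: Team A's underlying edge plus per-game chaos
    (turnovers, lucky bounces). Team A wins if the net result is positive."""
    return random.gauss(skill_gap, noise) > 0

wins = sum(play_game() for _ in range(100))
print(f"Team A won {wins} of 100 replays")
```

With these made-up parameters Team A wins roughly 60 of the 100 replays, and the count varies from run to run: the spread comes entirely from the in-game noise term.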

Predicting an election based on polls is an entirely different matter. The election will turn out one way or another. If the same people voted for President 100 times, with no external factor interfering differently across samples, the outcome would be the same every time. There is almost no internal chaos within the game of voting to force a probabilistic distribution. (Technically there are extremely minor chaotic factors within the system, such as voters who literally coin-flip on their way in or who mis-cast their votes, but those factors have no ‘lean’ toward a particular candidate and, en masse, are almost never enough to change a sample’s electoral outcome.)

In cases like this, where the event being predicted has so little internal chaos, the statistician isn’t actually predicting the probability that candidate X or Y will become President, because that outcome is already fixed. Instead, they’re predicting the accuracy of their own model. Nate Silver is predicting with 91% confidence that his model is correct in saying that Obama will win today’s election.

Don’t believe me? Let’s look at it a different way. Say two statisticians are trying to predict the same election. One has a single poll from each state to work from; the other has ten polls from each state. The first statistician, with only his single polls and the relatively wide confidence intervals they provide, can say with 56% certainty that Obama will win the election. The second statistician, with more data, more people polled, and much tighter confidence intervals, predicts with 91% certainty that Obama will win.
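Here is the mechanism in miniature, using a single national number rather than state-by-state aggregation, so the outputs illustrate the direction of the effect rather than reproducing the 56% and 91% figures. The polled share and sample sizes are invented.

```python
from math import sqrt
from statistics import NormalDist

def win_probability(share, n_respondents):
    """Under a simple normal approximation: the probability that the true
    vote share exceeds 50%, given a polled share and total sample size."""
    std_err = sqrt(share * (1 - share) / n_respondents)
    return NormalDist().cdf((share - 0.5) / std_err)

# The same observed 51% lead, with different amounts of data behind it:
print(f"one poll of 800 voters: {win_probability(0.51, 800):.0%}")
print(f"ten such polls pooled:  {win_probability(0.51, 8000):.0%}")
```

The observed lead is identical in both cases; only the standard error changes, pushing the probability from about 71% to about 96%, and with it the statistician’s confidence in his own read of a fixed outcome.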

Both of these models can be completely mathematically correct even though their numbers are vastly different because, as stated earlier, each probability expresses the statistician’s confidence in his own model, not a property of the election itself. Since the two statisticians are using different models, they naturally arrive at different probabilities. Given 100 completely different elections, the statistician with more polls to work with would be right more often.

Take a third hypothetical statistician who, amazingly, is able to poll every single voter just before they vote. That statistician has nearly absolute certainty in their polling data, with a confidence interval approaching zero, and can predict the winner with 99.9999% confidence.
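In the sketch above, that third statistician corresponds to pushing the sample size toward the full electorate; even a razor-thin lead yields near-certainty:

```python
# A near-census sample: even a 50.1% lead, with a standard error this
# small, lets the model approach certainty.  -> about 99.9996%
print(f"five million polled: {win_probability(0.501, 5_000_000):.4%}")
```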

This is a trick the sports bookie can’t pull off, because even with absolute knowledge of the opening state, the outcome of a game remains in doubt. But elections aren’t football games or horse races (no matter how much the pundits enjoy those metaphors), and long odds in a close race don’t have to be the product of audacity or bias. They can simply be the result of more polls, better science, and the absence of any need to manufacture a ‘dead heat’ to bolster ad revenues.

 