This morning’s Radio 4 Today programme included an interview with Nate Silver, the statistician and analyst renowned for predicting the most recent US election results via models based on electoral history, demographics and polling. He correctly called all 50 states in the US Presidential election. His stock is now very high and many view him as the go-to predictor in the political arena. But he is determined not to be misunderstood or considered infallible. Indeed, he very humbly forecast his own future ‘unravelling’: as numbers go up and down and failure follows success, he can see that at some point he will, undoubtedly, get things wrong, maybe even very wrong.
The interview focused quite a bit on the distinction between predicting and forecasting. Nate encouraged us not to trust anybody who is too confident in their predictions, especially long-term ones. Predicting the economy more than three to six months ahead is “nearly impossible”. Political pundits have been found to have no better than a 50-50 chance of being correct: most do no better at predicting election results than if they had simply flipped a coin.
On the whole, statisticians predict, but not in the everyday sense of the term. They deal in uncertainty and, illogical though it may at first appear, there is more value and more accuracy in a forecast of different possible futures – one that says something about the underlying conditions of uncertainty in identified trends – than in attempting to predict exactly what will happen, how and when.
Think about it: when it comes to a political poll, you are not polling everybody, just a sample of a population or group. Similarly with a survey, you will be looking at a small portion of a larger population to see what the trend might be for the whole. It is only to be expected that there will be some degree of error or uncertainty in your calculation.
Statisticians include a measure of uncertainty in their predictions – typically captured by the ‘margin of error’ of a confidence interval. The more precise an estimate or prediction, the smaller the margin of error. An opinion poll based on a sample of 1,000 people may have a margin of error of +/-3%; because the error shrinks with the square root of the sample size, reducing it to +/-1% – three times the precision – requires roughly nine times the sample, about 9,000 people.
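The arithmetic behind those figures can be sketched with the usual textbook approximation for a proportion estimated from a simple random sample: margin of error = z × √(p(1 − p)/n), with p = 0.5 as the worst case and z ≈ 1.96 for 95% confidence. The function below is illustrative only – the name and defaults are mine, not from the interview:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of size n.
    p = 0.5 gives the worst case (the widest interval)."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 people: about +/-3%.
print(round(margin_of_error(1000) * 100, 1))  # roughly 3.1

# Nine times the sample brings it down to about +/-1%.
print(round(margin_of_error(9000) * 100, 1))  # roughly 1.0
```

Note the diminishing returns: halving the margin of error always costs four times the sample, which is why pollsters rarely go far beyond a thousand or so respondents.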
Confidence intervals and their associated margins of error are ways to mark the uncertainty attached to a statistical prediction. They make it clear that there is no such thing as a certain bet – at least not until the race is over.
The world and society we live in are complex and very unpredictable. It’s what makes life exciting, and grappling with – making sense of – that unpredictability is what inspires and motivates statisticians.