
NATE SILVER: Why I 'screwed up' on Donald Trump

May 19, 2016, 03:13 IST


A few weeks ago, as Donald Trump became the presumptive Republican presidential nominee, Nate Silver admitted that his data-driven site, FiveThirtyEight, "got the Republican race wrong."

Silver's post, which was discussed here, was something of a mea culpa. But it pointed to a slew of external factors ostensibly outside Silver's control that he could not reasonably have predicted. (The short of it, according to Silver, was that three assumptions had gone wrong.)

But in a new post on FiveThirtyEight on Wednesday, Silver seemed to admit that - despite the fact that Trump's rise to the Republican nomination was unprecedented - something did, in fact, go wrong.

"We didn't just get unlucky," Silver writes. "We made a big mistake, along with a couple of marginal ones."

The mistake? Here's what Silver said (emphasis added):


The big mistake is a curious one for a website that focuses on statistics. Unlike virtually every other forecast we publish at FiveThirtyEight - including the primary and caucus projections I just mentioned - our early estimates of Trump's chances weren't based on a statistical model. Instead, they were what we sometimes called "subjective odds" - which is to say, educated guesses. In other words, we were basically acting like pundits, but attaching numbers to our estimates. And we succumbed to some of the same biases that pundits often suffer, such as not changing our minds quickly enough in the face of new evidence. Without a model as a fortification, we found ourselves rambling around the countryside like all the other pundit-barbarians, randomly setting fire to things.


Silver proceeded to break down the issue into five parts, in his words:
  1. Our early forecasts of Trump's nomination chances weren't based on a statistical model, which may have been most of the problem.
  2. Trump's nomination is just one event, and that makes it hard to judge the accuracy of a probabilistic forecast.
  3. The historical evidence clearly suggested that Trump was an underdog, but the sample size probably wasn't large enough to assign him quite so low a probability of winning.
  4. Trump's nomination is potentially a point in favor of "polls-only" as opposed to "fundamentals" models.
  5. There's a danger in hindsight bias, and in overcorrecting after an unexpected event such as Trump's nomination.

The post is long (and worth reading) and goes into depth on each of the above facets of the story.

In the first three points, Silver outlined what he suggested may have been his biggest mistake (failing to build a statistical model earlier and instead relying on what he calls "educated guesses"). He also ruminated on the difficulty of judging how large a predictive failure the misreading of the Trump phenomenon really was, and reanalyzed Trump's electability against the admittedly small set of historical precedents.

In the second of those points, Silver remained a bit defensive of the process. He discussed the notion of a model's calibration - effectively, is the model correct about as often as it thinks it should be? - and the difficulty of assessing a true predictive failure for a single event.
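The calibration idea Silver invokes can be made concrete with a small sketch: bucket a set of probabilistic forecasts by predicted probability, then compare each bucket's average prediction to how often the forecast event actually happened. A well-calibrated forecaster's "70 percent" events should occur roughly 70 percent of the time. (The forecasts and outcomes below are invented for illustration; this is not FiveThirtyEight's data or code.)

```python
# Toy illustration of forecast calibration. For each probability bin,
# compare the mean predicted probability to the observed hit rate.
# All numbers below are made up for the sake of the example.

forecasts = [0.9, 0.8, 0.85, 0.2, 0.15, 0.1, 0.7, 0.75, 0.3, 0.25]
outcomes  = [1,   1,   1,    0,   0,    1,   1,   0,    0,   0]

def calibration_table(forecasts, outcomes, n_bins=5):
    """Bucket forecasts into equal-width probability bins and report,
    per bin: (mean predicted probability, observed frequency, count)."""
    bins = {}
    for p, y in zip(forecasts, outcomes):
        b = min(int(p * n_bins), n_bins - 1)  # bin index 0..n_bins-1
        bins.setdefault(b, []).append((p, y))
    table = []
    for b in sorted(bins):
        pairs = bins[b]
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        hit_rate = sum(y for _, y in pairs) / len(pairs)
        table.append((mean_p, hit_rate, len(pairs)))
    return table

for mean_p, hit_rate, n in calibration_table(forecasts, outcomes):
    print(f"predicted {mean_p:.2f}  observed {hit_rate:.2f}  (n={n})")
```

The catch Silver points to is in the denominator: with a single event like Trump's nomination, every bin has one observation at most, so a table like this can never tell you whether a "2 percent chance" forecast was badly calibrated or merely unlucky.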


But he concluded on the side of self-critique: "Still, I think our early forecasts were overconfident ..."

In the fourth section, Silver discussed how the case of Trump could be an argument for adjusting the methodology of FiveThirtyEight's analyses. And in the fifth, he cautioned against overcorrecting too much just because so many people got Trump wrong. 


Silver wrote about how he criticized "experts" for being so sour on Herman Cain during the 2012 election cycle.

At the time, he wrote (emphasis Silver's): "Experts have a poor understanding of uncertainty. Usually, this manifests itself in the form of overconfidence: experts underestimate the likelihood that their predictions might be wrong." A month later, Cain dropped out amid accusations of sexual harassment.


When Trump came along in 2015, Silver said he "over-learned" his lesson.

"I'd turn out to be the overconfident expert, making pretty much exactly the mistakes I'd accused my critics of four years earlier," he wrote.

Looking forward, he said there is a risk that the political commentariat might make the same mistake again, thinking that the next "Trumpian" candidate has better-than-realistic chances simply because Trump succeeded.

"Still," he concluded, "it's probably helpful to have a case like Trump in our collective memories. It's a reminder that we live in an uncertain world and that both rigor and humility are needed when trying to make sense of it."

Read 'How I Acted Like A Pundit And Screwed Up On Donald Trump' at FiveThirtyEight >>
