As I discuss below, I was underconfident in many of my predictions for the winners and overrated the chances of the minor candidates. In particular, here are the actual win rates for each of my forecast probability buckets:
- All 9 candidates who I said had a 95-100% chance of winning won.
- All 6 candidates who I said had a 70-80% chance of winning won.
- 4 out of 5 (80%) of the candidates who I said had a 55-60% chance of winning won. Katherine Hobbs was the exception.
- 3 out of 7 (43%) of the candidates who I said had a 30-40% chance of winning won.
- 2 out of 13 (15%) of the candidates who I said had a 20-25% chance of winning won.
- None of the 13 candidates who I said had a 10-15% chance of winning won.
- None of the 22 candidates who I said had a 1-5% chance of winning won.
The calibration curve from this data is shown below. The line represents a perfectly calibrated forecast, where the forecasted probability equals the observed win rate.
I underestimated the probability that the leading candidates would win: the dots sit above the line for forecasted probabilities greater than roughly 40%. Conversely, I overestimated the chances that trailing candidates would win, as the dots fall below the line for probabilities under 40%. To be fair, this analysis treats the races as uncorrelated, which I never claimed and which I don't think is true. This may simply have been an unusually good year for incumbents, especially in comparison to, for example, 2010.
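For anyone who wants to reproduce this kind of plot, here is a minimal sketch of the bucketing described above. The bucket edges and the example forecasts are illustrative choices of mine, not the exact numbers behind the chart.

```python
from collections import defaultdict

def calibration_curve(forecasts, outcomes, bucket_edges):
    """Group (forecast, outcome) pairs into buckets and return, per bucket,
    the mean forecasted probability, the observed win rate, and the count."""
    buckets = defaultdict(list)
    for p, won in zip(forecasts, outcomes):
        # Assign each forecast to the first bucket whose range contains it.
        for lo, hi in bucket_edges:
            if lo <= p <= hi:
                buckets[(lo, hi)].append((p, won))
                break
    points = []
    for (lo, hi), pairs in sorted(buckets.items()):
        mean_forecast = sum(p for p, _ in pairs) / len(pairs)
        win_rate = sum(won for _, won in pairs) / len(pairs)
        points.append((mean_forecast, win_rate, len(pairs)))
    return points

# Hypothetical example: three forecasts and whether each candidate won.
edges = [(0.0, 0.2), (0.2, 0.5), (0.5, 0.8), (0.8, 1.0)]
print(calibration_curve([0.95, 0.6, 0.1], [1, 1, 0], edges))
```

Plotting the observed win rate against the mean forecast for each bucket, along with the diagonal, gives a chart like the one above.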
The chart below shows the margin of victory for forecasted winners, as a function of the forecasted win probability:
In terms of a single number, the average Brier Score of my forecasts is 0.209.
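The Brier score here is the mean squared difference between each forecasted win probability and the outcome (1 for a win, 0 for a loss). A minimal sketch with a few hypothetical forecasts, assuming the per-candidate binary form of the score is the variant behind the 0.209 figure:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecasted probabilities and 0/1 outcomes.
    0 is a perfect score; always guessing 50% would earn 0.25."""
    return sum((p - won) ** 2 for p, won in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: a favorite who won, a toss-up who lost, a long shot who lost.
print(brier_score([0.9, 0.55, 0.05], [1, 0, 0]))  # ~0.105
```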