Friday, June 8, 2018

Ontario Election Wrap-up

The 2018 Ontario General Election is over, and if your team won then congratulations to you!

Over the last month or so I've been tracking the election polls and testing out a few different ideas in order to improve a general model that I'll end up using for the upcoming Alberta election. Of course, I wasn't the only person doing this, and I was able to find at least six other sites tracking and projecting alongside me.

But who did the best? Can we learn anything specific about which models produce more reliable results?

First of all, we can look at seat projections. As far as I could tell by midday on June 7th, this was the seat projection spread across the seven of us:



| Seats | CBC | Too Close to Call | QC125 | Lispop | Teddy on Politics | Calculated Politics | Extreme Enginerding | Average | Actual |
|---|---|---|---|---|---|---|---|---|---|
| PC | 78 | 74 | 70 | 69 | 60 | 71 | 70 | 70.3 | 76 |
| NDP | 45 | 46 | 47 | 50 | 55 | 44 | 45 | 47.4 | 40 |
| LIB | 1 | 3 | 6 | 4 | 8 | 8 | 9 | 5.6 | 7 |
| GRN | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0.7 | 1 |
| OTH | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |


Ranking these by the root-sum-of-squares difference from the actual results (there's a quick sketch of that calculation after this list), we get:

  1. Calculated Politics (diff: 6.48). Their method involved seat-by-seat projections built on a regional breakdown, and that approach seems to have worked pretty well for them!
  2. Too Close to Call (diff: 7.48). They also provided seat-by-seat projections, with regional factors feeding into them. Most handily, their simulator was interactive, but plugging the correct values into it actually made their prediction slightly worse (still second place at 7.87, though).
  3. (Tie: CBC and Me) (diff: 8.12). We ended up with the same prediction for the NDP, but CBC was way under for the Liberals and I was quite a bit under for the PCs. My model didn't involve individual seat projections and instead just approximated historical trends for seat ranges based on party vote share, so that's a win for simplicity, I suppose.
  4. QC125 (diff: 9.27). Another site with seat-by-seat projections. The actual seat counts all fell well within their expected ranges, but the central projections were each off by a little. I'm not sure how they arrived at their seat-level vote projections.
  5. Average (diff: 9.48). In this case, the wisdom of the crowd didn't pan out.
  6. Lispop (diff: 12.57). In principle they used a regional swing model similar to mine, so I'm not quite sure where the difference comes from. It looks like they anticipated much stronger NDP support than actually materialized.
  7. Teddy on Politics (diff: 21.95). It seems like Teddy paid more attention to leader favorability numbers than most of the rest of us, and that appears to have tilted his seat distribution the wrong way. His was the only model to predict a minority government.
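For clarity, the "diff" above is nothing fancier than the square root of the summed squared per-party seat errors. Here's a minimal Python sketch of that calculation (the dictionaries are just my shorthand for the table above):

```python
from math import sqrt

def rss_diff(projection, actual):
    """Root of the sum of squared per-party differences."""
    return sqrt(sum((projection[party] - actual[party]) ** 2 for party in actual))

# Seat numbers taken straight from the table above.
actual = {"PC": 76, "NDP": 40, "LIB": 7, "GRN": 1, "OTH": 0}
calculated_politics = {"PC": 71, "NDP": 44, "LIB": 8, "GRN": 1, "OTH": 0}

print(round(rss_diff(calculated_politics, actual), 2))  # 6.48
```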
For most of the models, the seat projections came directly from the popular vote estimates (there's a small sketch of that kind of conversion after the rankings below). If we take a look at those estimates, we get:




| Vote share (%) | CBC | Too Close to Call | QC125 | Lispop | Teddy on Politics | Calculated Politics | Extreme Enginerding | Average | Actual |
|---|---|---|---|---|---|---|---|---|---|
| PC | 38.7 | 37.9 | 37.8 | 38 | 37.9 | 38.4 | 39.8 | 38.4 | 40.5 |
| NDP | 35.5 | 36 | 36.1 | 37 | 36.8 | 36.1 | 35.9 | 35.9 | 33.6 |
| LIB | 19.6 | 19.8 | 19.7 | 19 | 20.9 | 19.5 | 19.6 | 19.7 | 19.6 |
| GRN | 4.9 | 4.6 | 5 | ? | 4.5 | 4.6 | 5.2 | 4.8 | 4.6 |
| OTH | 1.3 | 1.7 | 1.4 | ? | 0 | 1.5 | 1.3 | 1.4 | 1.8 |

Ranking these again by the same criterion, we get:

  1. Me! (diff: 1.15) 
  2. CBC (diff: 2.69)
  3. Average (diff: 3.21). This is a better example of the group as a whole performing better than most of its individual members. That probably makes sense: these numbers mostly came from the same pool of publicly available polls, with only a small amount of interpretation for trends and recency, as opposed to the much heavier interpretation involved in the seat projections.
  4. Calculated Politics (diff: 3.29)
  5. Too Close to Call (diff: 3.56)
  6. QC125 (diff: 3.73)
  7. Lispop (diff: ~4.3). Note that Lispop didn't list a prediction for the Green Party vote total, despite projecting them to win a seat.
  8. Teddy on Politics (diff: 4.37)
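As mentioned above, most of the seat counts were derived from popular-vote numbers like the ones in this table. For anyone curious how that conversion can work, here's a bare-bones uniform swing illustration: shift every riding's previous result by the province-wide change and count the winners. This isn't any particular site's actual model (mine included), and the previous-election and riding numbers below are made up purely for illustration; only the projected provincial shares are taken from my column in the table.

```python
def uniform_swing_seats(ridings, prev_provincial, projected_provincial):
    """Shift each riding's previous vote shares by the provincial swing and tally winners.

    ridings: list of {party: previous vote share} dicts, one per riding.
    """
    swing = {p: projected_provincial[p] - prev_provincial[p] for p in projected_provincial}
    seats = {p: 0 for p in projected_provincial}
    for riding in ridings:
        shifted = {p: riding.get(p, 0.0) + swing[p] for p in swing}
        seats[max(shifted, key=shifted.get)] += 1
    return seats

# Hypothetical previous-election and riding numbers, for illustration only.
prev = {"PC": 31.0, "NDP": 24.0, "LIB": 39.0, "GRN": 5.0}
proj = {"PC": 39.8, "NDP": 35.9, "LIB": 19.6, "GRN": 5.2}
ridings = [
    {"PC": 45.0, "NDP": 20.0, "LIB": 30.0, "GRN": 5.0},
    {"PC": 30.0, "NDP": 35.0, "LIB": 30.0, "GRN": 5.0},
]
print(uniform_swing_seats(ridings, prev, proj))  # e.g. {'PC': 1, 'NDP': 1, 'LIB': 0, 'GRN': 0}
```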
Overall I'm really pleased with how I did, and I've learned a few tricks to use in upcoming elections. Next up will probably be Québec, hopefully with the same group of people, and we can see if this was a fluke for me or not!

Finally, here's my seat model with the actual results fed in as though they were one final gigantic poll at the end. Using these correct values would have made it the most accurate seat projection of them all (diff: 4.24), which is an encouraging sign that the model itself was sound!
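For anyone wondering what "one final gigantic poll" means mechanically: the actual result just gets appended to the poll list with an overwhelming weight, so the weighted poll average collapses onto it before the seat model runs. A rough sketch of that idea (the specific weighting scheme here is an assumption, not my exact model, and the two late-campaign polls are made-up numbers):

```python
def weighted_average(polls):
    """polls: list of (weight, {party: vote share}) pairs -> weighted average shares."""
    total_weight = sum(w for w, _ in polls)
    parties = polls[0][1].keys()
    return {p: sum(w * shares[p] for w, shares in polls) / total_weight for p in parties}

# Two hypothetical late-campaign polls with equal weight...
polls = [
    (1.0, {"PC": 38.0, "NDP": 36.5, "LIB": 20.0, "GRN": 4.5, "OTH": 1.0}),
    (1.0, {"PC": 39.5, "NDP": 35.0, "LIB": 19.5, "GRN": 5.0, "OTH": 1.0}),
]
# ...then the actual result added as one final, overwhelmingly weighted "poll".
polls.append((1e6, {"PC": 40.5, "NDP": 33.6, "LIB": 19.6, "GRN": 4.6, "OTH": 1.8}))

print(weighted_average(polls))  # collapses onto the actual result
```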


See you next election!