Monday, June 24, 2013

Spring Weather


I've now been doing this weather analysis of mine for 365 days, and what a year it's been. That's 365 days of exciting weather, culminating in tornado watches a couple of weeks ago and the horrible flooding around Southern Alberta going on right now.

The end of the year also marks the end of my experiment with tracking weather stations, which I previously reported for the summer, fall, and winter. The scores for spring (out of 100) were:

  • Weather Network: 72.08
  • Weather Channel: 69.56
  • Global Weather: 68.52
  • TimeandDate.com: 68.47
  • Environment Canada: 65.96
  • CTV Weather: 64.31
Just in case you didn't have the previous values memorized, this is how they've changed over the last season:


A couple of things about this are quite noteworthy. First of all, the forecasters are all much closer together than they were in the previous two seasons. Good for them? Also, TimeandDate.com and CTV both improved their scores massively, with TimeandDate.com jumping up two ranks. And for the fourth season in a row, the Weather Network and the Weather Channel hold the number 1 and number 2 spots.
As I mentioned before, one of the issues with this analysis is that CTV Weather only presents numerical POP values up to four days into the future. If we restrict all weather stations to those four days, the results change fairly drastically:

  • Weather Network: 74.47
  • CTV Weather: 71.09
  • Global Weather: 70.63
  • Weather Channel: 70.46
  • TimeandDate.com: 69.84
  • Environment Canada: 67.74
The overall score graphs for spring look like this:


As always, graphs like this highlight the fact that weather forecasting gets increasingly difficult the farther into the future you attempt to predict. For any given forecast day, the scores differ by only about 10 points between stations, and all six forecasters show a nearly identical downward trend over time.

Since it's now been a year, I can also share the overall yearly scores for each forecaster. They are:
  • Weather Network: 76.39
  • Weather Channel: 73.92
  • Global Weather: 72.00
  • Environment Canada: 71.20
  • TimeandDate.com: 70.02
  • CTV Weather: 60.99*
*CTV Weather isn't a fair comparison: its analysis only started in the fall, and summer was the best season for all the other forecasters. This probably messes up their average score a wee bit...

Aside from the difference in the number of days accounted for and the length of analysis, one of the other major differences between stations is how POP values are presented. Environment Canada, for instance, has a published policy: a calculated chance of precipitation is only listed if it falls between 30% and 70% (in increments of 10%), and 50% is never listed. This forces their published values to be one of 0, 30, 40, 60, 70, or 100. CTV has a similar policy, while the other stations publish values anywhere from 0 to 100 in increments of 10.

The problem is that every forecaster could internally calculate a 10% chance of rain, and if it did end up raining they'd all receive a fairly bad score for that prediction. Environment Canada and CTV, however, would receive a much worse one, because they would have published a value of 0%.

If we account for this, the new net yearly scores become:
  • Weather Network: 74.61 (-1.78)
  • Weather Channel: 71.94 (-1.98)
  • Environment Canada: 71.20 (No change)
  • Global Weather: 69.68 (-2.32)
  • TimeandDate.com: 68.14 (-1.87)
  • CTV: 60.99 (No change)
So yeah, changes in how values are presented definitely have an effect on the scores. This isn't quite enough to counteract the full difference between the Weather Network and everyone else, but it does tighten things up a little bit.

One last thing. When a weather station gives a POP value, what is the actual chance of any rain falling?


Ideally, the graph would be a straight line from the bottom left to the top right, but the bars are almost always higher than that line would predict.

Part of the reason for this is that the definition of precipitation I used follows how Environment Canada defines POP: the chance that measurable precipitation (0.2 mm of rain or 0.2 cm of snow) will fall on "any random point of the forecast region" during the forecast period. 0.2 mm isn't a ton of rain, really, so what if I bump the threshold up to 1 mm?
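The bars in these graphs come down to a simple tally: for each published POP value, count how often precipitation above the threshold actually fell. Here's a sketch of that calculation, assuming the data is a pair of parallel lists (forecast POP in percent, observed precipitation in mm); the function and parameter names are mine:

```python
from collections import defaultdict

def observed_frequency(pops, rainfall_mm, threshold=0.2):
    """For each forecast POP value, return the fraction of days on which
    at least `threshold` mm of precipitation actually fell.
    threshold=0.2 matches the 'measurable precipitation' definition;
    threshold=1.0 is the stricter cutoff."""
    hits = defaultdict(int)     # days with precipitation, per POP value
    totals = defaultdict(int)   # total days, per POP value
    for pop, rain in zip(pops, rainfall_mm):
        totals[pop] += 1
        if rain >= threshold:
            hits[pop] += 1
    return {pop: hits[pop] / totals[pop] for pop in totals}
```

For a perfectly calibrated forecaster, this would return roughly `pop / 100` for every key, which is the straight-line ideal described above.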


Well, that's quite the improvement. In fact, it helps the scores of all the weather stations substantially, by about 4-7 points each. What's cool is how close the bars are at low POP values, and how they tend to diverge at higher ones.

Anyway, this has been a pretty cool project for me for the last year. Hope you had fun too!
