Friday, November 30, 2012

What Democracy Looks Like

A while ago I wrote a tongue-in-cheek post about what democracy looks like amid the Occupy Protests.

It was pretty silly. I promise from here on out to only be serious. I swear.

I also posted a while ago about different electoral systems in existence, including First Past the Post, the Borda Count, and Instant Runoff Voting. The latter two rely on voters ranking the candidates, while First Past the Post depends on only one candidate being chosen per ballot. Because the lower rankings can come into play during the count, a set of first-choice results that produces a winner under First Past the Post won't necessarily produce the same winner under Borda or Instant Runoff.
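As a quick refresher, here's a minimal sketch of how the three counts work; the ballots and candidate names are hypothetical, purely for illustration:

```python
from collections import Counter

# Each ballot is a tuple of candidates, ranked best-first (hypothetical data).
ballots = ([("A", "B", "C")] * 40 +
           [("B", "C", "A")] * 35 +
           [("C", "B", "A")] * 25)

def fptp_winner(ballots):
    # Only the first preference on each ballot counts.
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda_winner(ballots):
    # With n candidates, a candidate ranked i-th (counting from 0) earns n-1-i points.
    n = len(ballots[0])
    scores = Counter()
    for b in ballots:
        for i, cand in enumerate(b):
            scores[cand] += n - 1 - i
    return scores.most_common(1)[0][0]

def irv_winner(ballots):
    # Repeatedly eliminate the candidate with the fewest first preferences,
    # promoting lower rankings, until someone holds a majority.
    remaining = set(ballots[0])
    while True:
        firsts = Counter(next(c for c in b if c in remaining) for b in ballots)
        top, votes = firsts.most_common(1)[0]
        if votes * 2 > len(ballots) or len(remaining) == 2:
            return top
        remaining.discard(min(firsts, key=firsts.get))

print(fptp_winner(ballots), borda_winner(ballots), irv_winner(ballots))  # A B B
```

With this profile, A wins First Past the Post on first preferences alone, while B wins under both of the ranked counts - the same ballots, counted three ways, produce two different winners.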

With that being said, this is what Democracy looks like:


This is a ternary plot based on a three-candidate First Past the Post system. The bottom axis shows the percentage of votes received by the reference candidate, and the other two axes show the vote shares of the other two candidates. The colour of each small triangle is the reference candidate's chance of winning at that vote distribution, where green means 100% and red means 0%. The First Past the Post plot makes intuitive sense - as long as you have more votes than anyone else, you win; otherwise you lose. This graph is pretty straightforward, so let's move on.


This plot is based on Instant Runoff Voting. What's interesting here is that the middle has opened up a little bit - a candidate could have more votes than anyone else but still lose if their two opponents finish close to each other. The outsides of the plot are still similar to the First Past the Post plot, though. Consider a race where the vote splits 49%-46%-5%: First Past the Post gives the win to the first candidate, and the first candidate would win an Instant Runoff count almost every time too, since the second candidate would need nearly all of the third candidate's transferred votes to pull ahead (which is unlikely). Towards the middle of the plot, though, the second-place candidate needs a smaller fraction of the eliminated candidate's votes to catch up, so there's a wider range of vote distributions that give more than one candidate a real chance of winning.
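To make the "chance of winning" shading concrete, here's a minimal Monte Carlo sketch. It assumes (my assumption, not necessarily the model behind the plot) that the eliminated candidate's votes split uniformly at random between the two survivors:

```python
import random

def irv_win_prob(shares, trials=100_000):
    """Estimate candidate 0's chance of winning a three-way IRV count,
    assuming the eliminated candidate's votes split uniformly at random."""
    lo = min(range(3), key=lambda i: shares[i])    # last place, eliminated first
    s1, s2 = [i for i in range(3) if i != lo]      # the two survivors
    wins = 0
    for _ in range(trials):
        frac = random.random()                     # fraction of transfers going to s1
        t1 = shares[s1] + frac * shares[lo]
        t2 = shares[s2] + (1 - frac) * shares[lo]
        wins += (s1 if t1 > t2 else s2) == 0
    return wins / trials

print(irv_win_prob((0.49, 0.46, 0.05)))  # ~0.80 under this split model
print(irv_win_prob((0.40, 0.35, 0.25)))  # ~0.60: bigger transfer pool, more uncertainty
```

Under this toy model the 49-46-5 leader holds on four times out of five, while tighter three-way races get much less certain. This gets even more noticeable in...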

The Borda Count! This is the count where each first-place vote is worth a certain number of points, each second-place vote fewer points, and so on. Here the boundary between each candidate's corner is significantly blurred - in fact, it's mathematically possible for a candidate to win the election with only 16% of the first-preference votes, provided their opponents split the remaining first preferences evenly and hand them a pile of second-place votes. Funky.
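To see how a 16% winner is possible, here's a hypothetical 100-voter profile (my own construction, not from the post) where a universally acceptable second choice beats two polarizing front-runners:

```python
from collections import Counter

# A is the first choice of only 16 voters, but everyone else's second choice.
ballots = ([("A", "B", "C")] * 16 +
           [("B", "A", "C")] * 42 +
           [("C", "A", "B")] * 42)

scores = Counter()
for ballot in ballots:
    for rank, candidate in enumerate(ballot):
        scores[candidate] += 2 - rank  # 2 points for 1st, 1 for 2nd, 0 for 3rd

print(scores)  # Counter({'A': 116, 'B': 100, 'C': 84}) -- A wins from 16% of firsts
```

The pile of second-place points is what carries A over the top, which is exactly why the Borda corners blur so much.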

Two last fun graphs:


This is the same Instant Runoff graph overlaid with the vote distribution for each race in the 2012 SU elections. Each winner is circled at the appropriate vote distribution, taken at the point in the count when only three candidates remained. In each case the winner was the candidate with more votes than any other, but both Andy Cheema and Brent Kelly sat at points where they still had more than a 10% chance of losing.

 
Last graph: this is a more visual interpretation of the 2012 Board of Governors race. After NotA gets eliminated, its vote share goes to 0%, which is shown by the arrow pointing to the second circle. In this case it looks like the NotA vote was split mostly evenly, with a bit of favoritism towards Brent Kelly. As the NotA vote was a sizeable 18%, it could potentially have swung the election towards Rebecca Taylor. Cool visualization, eh?

Wednesday, November 21, 2012

Light Rail Transit

If I’m on a train going the speed of light and walk from the back to the front, what happens?

I recently got a fun question that highlights a seeming contradiction in what we’ve been taught in physics lessons. On the one hand we’ve been told that nothing can go faster than the speed of light, but on the other hand we’ve been told that speeds in the same direction can be added up. So what’s really going on here?

Normally if you’re riding a train going 100 km/h and you walk forward along it at 5 km/h, those speeds add up, depending on the reference point you’re using. Someone sitting on the train sees you walking forward at 5 km/h and the outside world moving backwards at 100 km/h, but someone standing outside sees you moving forward at 105 km/h relative to the ground. Similarly, a jet taking off from a moving aircraft carrier gets a helpful speed boost, and a bullet fired forward from that jet picks up the jet's speed on top of its own.

The problem is that the speed of light is strange. In 1905 Albert Einstein suggested that the speed of light is independent of the speed of the light source and of the observer. That’s insane! No matter how quickly you’re moving towards or away from light, its speed will never change (but its colour will).
This also suggests that simply adding up speeds no longer works when we’re close to the speed of light. Fortunately a fun formula exists for combining speeds relativistically, and it looks something like this:

v_total = (v1 + v2) / (1 + v1*v2/c^2)

where v1 and v2 are the two speeds being combined and c is the speed of light.

What’s fancy is that this equation gives the results we'd expect for speeds substantially slower than the speed of light, but no matter how high the two speeds are, the result is still less than the speed of light (as long as neither speed individually exceeds it).
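Here's that formula as a quick code sketch (the rounded constant and the function name are mine):

```python
C_KMH = 1.079e9  # speed of light in km/h, rounded

def add_speeds(u, v, c=C_KMH):
    """Combine two same-direction speeds relativistically."""
    return (u + v) / (1 + u * v / c**2)

print(add_speeds(100, 5))        # ~105: everyday speeds behave classically
print(add_speeds(C_KMH, C_KMH))  # exactly c: even light plus light is just light
```

The u*v/c^2 term in the denominator is what keeps the sum honest: negligible at train-and-pedestrian speeds, dominant near the speed of light.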

Sadly, the question as asked can't be answered straight-up, because nothing with mass can travel at the speed of light (trust me, science has tried), so the train itself is impossible. But we can still try with something close enough.

The fastest speeds humans have ever achieved are at the Large Hadron Collider, where a proton was accelerated to 99.9999991% of the speed of light. That’s only about 10 km/h slower than the supposed universal speed limit. The fastest a human has ever run is Usain Bolt's 37 km/h. Using the equation from before, instead of topping out at 27 km/h above the speed of light, Usain Bolt only ends up going 99.9999991000001% of the speed of light. Not really a lot of progress.
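We can check the Bolt numbers with that same formula. Subtracting two nearly equal floating-point numbers loses precision, so the sketch below expands the speed gain algebraically instead (the rounded constants are mine):

```python
C_KMH = 1.079e9              # speed of light in km/h, rounded
u = 0.999999991 * C_KMH      # the LHC proton, our stand-in train
v = 37.0                     # Usain Bolt, km/h

# gain = add_speeds(u, v) - u, expanded to avoid catastrophic cancellation:
gain_kmh = v * (1 - (u / C_KMH) ** 2) / (1 + u * v / C_KMH**2)
print(gain_kmh * 1e6)        # ~0.7 mm/h: of order a millimetre per hour
```

That tiny gain is consistent with the millimetre-per-hour figure in the next paragraph.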

As a bonus answer, even though the speed of light isn’t exceeded in this example, other cool things happen to the train. A passenger on the train sees Bolt travelling at 37 km/h, but an observer watching from outside would only see him moving about a millimeter per hour faster than the train itself. A messed-up consequence of moving this fast is that, to the outside observer, the train would appear to flatten to roughly one ten-thousandth of its original length due to Lorentz contraction. And if that doesn’t boggle your mind, nothing will.
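The contraction claim can be checked directly from the Lorentz factor (a minimal sketch, using the same proton speed as before):

```python
import math

beta = 0.999999991                  # train speed as a fraction of c
gamma = 1 / math.sqrt(1 - beta**2)  # Lorentz factor

print(gamma)      # ~7,450
print(1 / gamma)  # ~0.00013: roughly one ten-thousandth of the rest length
```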

Thursday, November 1, 2012

Sneaky Statistics

Statistics are cool. Statistics are your friend. If you treat them right, they'll love you forever and never lie to you.

The problem with statistics is that sometimes they're confusing, and people very frequently think they understand them better than they do. Because of this, it's really easy sometimes for people to lie by tricking you with stats. Meanies...

Here's a cool example from Wikipedia of statistics playing tricks with you. Pretend that you're a doctor and you're trying to figure out which treatment is better for curing kidney stones. You use both for a while and this is what you get:

                Treatment A        Treatment B
Small Stones    81/87 = 93%        234/270 = 87%
Large Stones    192/263 = 73%      55/80 = 69%
Total           273/350 = 78%      289/350 = 83%

At first glance this test may seem pretty fair - both treatments were used 350 times, so we can compare them, right? And it looks like Treatment B was better than Treatment A. Maybe we should use it? Sounds good!

But wait. When we break it down into small stones versus large stones, the story changes. For small stones, Treatment A is 6 percentage points better than Treatment B, and for large stones Treatment A is 4 points better. That's crazy though - how can A be better at treating small stones and better at treating large stones, but worse at treating both together? Clearly evil forces are at work here.

Around this point it wouldn't be horrible for you to be confused about which treatment is actually better, and it turns out that this study was, in fact, not fair. Large stones had a lower cure rate overall, and Treatment A was used more than three times as often on them (263 cases versus 80). Meanwhile, the easier small stones were more often given Treatment B (270 cases versus 87). This weighting between treatments and stone sizes is so unbalanced that when everything is added up, Treatment B looks better.
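Here's a quick sketch that recomputes the table and shows the reversal directly (the numbers come straight from the table above):

```python
# (cured, treated) for each treatment, broken down by stone size
data = {
    "A": {"small": (81, 87), "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

for treatment, groups in data.items():
    for size, (cured, treated) in groups.items():
        print(f"{treatment}, {size} stones: {cured / treated:.1%}")
    cured = sum(c for c, _ in groups.values())
    treated = sum(t for _, t in groups.values())
    print(f"{treatment}, overall: {cured / treated:.1%}")
# A wins within each stone size, yet B wins once the groups are pooled.
```

The pooled rate is just a weighted average of the group rates, and Treatment A's weights are loaded towards the hard cases.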

This highlights two cool concepts in statistics. The first is Simpson's paradox, where a correlation observed in each of two separate groups reverses when the groups are combined. Obviously this offers juicy opportunities to people with an agenda - a drug company representing either Treatment A or Treatment B could make a case that its drug is better, simply based on how the numbers in the study are added up.

The second is the confounding (or lurking) variable - a variable that wasn't originally accounted for but that affects both the dependent and independent variables in the study. A good example is as follows: a statistician does a week-by-week analysis of human behavior at a beach, keeping track of both drownings and slurpee consumption. They might observe that in weeks with high slurpee consumption more people drown, and someone could then declare that drinking slurpees increases the chance of drowning.

Boy, that would suck. As a researcher, you could probably even justify it a little - perhaps drinking slurpees fills you up or makes you lethargic, increasing your chance of drowning. A more likely explanation, though, takes something new into account: the season. People just plain drink more slurpees in the summer than in the winter (unless they're me). People also swim at beaches more during the summer, increasing the number of chances to drown. In this example, the season is the lurking variable - it correlates with both of the previously-considered variables and explains the phenomenon.

Similarly, in our kidney stone example the lurking variable is the size of the stone. Doctors disproportionately used Treatment A for large stones and Treatment B for small stones - at the same time, small stones were easier to cure than large stones. By not taking into account the effect of stone size on how the treatments were assigned, we arrive at the paradox from before.

Funnily enough, Simpson's paradox occurs fairly frequently - in fact, statisticians have estimated that of 2x2 tables like the one in the kidney stone example, about 1 in 60 will show some version of the paradox.
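That 1-in-60 figure can be sanity-checked by simulation. The sketch below assumes (my reading of the estimate, not the post's) that a "random table" means the eight cell probabilities of a 2x2x2 table are drawn uniformly from the simplex:

```python
import random

def random_cells():
    # Eight non-negative cell masses, normalized: uniform on the simplex.
    w = [random.expovariate(1.0) for _ in range(8)]
    total = sum(w)
    # cells[t][g] = (successes, failures) for treatment t, subgroup g
    return [[(w[4*t + 2*g] / total, w[4*t + 2*g + 1] / total)
             for g in range(2)] for t in range(2)]

def shows_paradox(cells):
    rate = lambda t, g: cells[t][g][0] / sum(cells[t][g])
    pooled = lambda t: ((cells[t][0][0] + cells[t][1][0]) /
                        (sum(cells[t][0]) + sum(cells[t][1])))
    if rate(0, 0) > rate(1, 0) and rate(0, 1) > rate(1, 1):
        return pooled(0) < pooled(1)   # 0 wins both groups but loses pooled
    if rate(0, 0) < rate(1, 0) and rate(0, 1) < rate(1, 1):
        return pooled(0) > pooled(1)   # 1 wins both groups but loses pooled
    return False

trials = 200_000
hits = sum(shows_paradox(random_cells()) for _ in range(trials))
print(hits / trials)  # hovers around 1/60 ≈ 0.017 under this model
```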

One famous example involved a sex discrimination lawsuit at Berkeley in 1973. The admission results from the six largest departments looked something like this:

Department    Men              Women
A             512/825 = 62%    89/108 = 82%
B             353/560 = 63%    17/25 = 68%
C             120/325 = 37%    202/593 = 34%
D             138/417 = 33%    131/375 = 35%
E             53/191 = 28%     142/393 = 24%
F             16/272 = 6%      24/341 = 7%

When the total data was added up across all departments, though, the distribution was as follows:

         Applicants    Admitted
Men      8442          44%
Women    4321          35%

At first glance, it looks like a case of gender discrimination - men were admitted at a rate nearly 10 percentage points higher than women across the board, and some people who felt cheated took it to court.


Looking at those six departments in the first table, though, shows something interesting - Departments A, B, and D were the most popular with the men and the least popular with the women, and in each of them the women were more likely to be admitted than the men. On the other hand, Departments C and E were the most popular with the women, and there the women's admission rates trailed the men's slightly. Unfortunately, the departments most popular with women also had admission rates roughly half those of the departments the men chose.
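You can verify this straight from the six-department table (a quick sketch; the figures are from the table above):

```python
# Berkeley 1973, six largest departments: (admitted, applicants)
depts = {
    "A": {"men": (512, 825), "women": (89, 108)},
    "B": {"men": (353, 560), "women": (17, 25)},
    "C": {"men": (120, 325), "women": (202, 593)},
    "D": {"men": (138, 417), "women": (131, 375)},
    "E": {"men": (53, 191), "women": (142, 393)},
    "F": {"men": (16, 272), "women": (24, 341)},
}

for sex in ("men", "women"):
    admitted = sum(d[sex][0] for d in depts.values())
    applied = sum(d[sex][1] for d in depts.values())
    print(f"{sex}: {admitted}/{applied} = {admitted / applied:.0%}")
# men: 1192/2590 = 46%, women: 605/1835 = 33% -- the pooled gap shows up
# even though women match or beat the men in four of the six departments.
```

Where each group applied, not how each group was judged, is what drives the pooled numbers.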


In fact, a study of these results suggested that there was a "small but statistically significant bias in favor of women" in the admission process when all of the departments in question were examined, and the lawsuit failed. The lurking variable in this case was the character of the departments themselves - men tended to apply to studies that were more math-intensive (engineering, science, etc.), which happened to have more room to accept students.


It's really important to keep concepts like this in mind when examining statistics. For instance, one has to be extremely careful when directly comparing male and female earnings, to account for factors such as preferences in employment - it's much better to compare across identical jobs than to compare aggregate numbers. Aggregate statistics in that case are only really good for highlighting disparities in employment distribution, not in earnings. Similarly, the Berkeley sex bias case, while not showing a bias against admitting women, highlighted a lack of female participation in math-heavy programs that was more indicative of early societal pressures than of active denial.


One final word of caution regarding Simpson's paradox: because it's relatively likely to occur, it's possible to make it appear as though it's taking place when in fact it isn't. Breaking up applicants by department makes sense because each department's admissions process is hypothetically independent of the others, but one could just as easily break the applicants into groups based on eye colour, height, birthplace, beer preference, favourite hockey team, or blog readership. Chances are that in any given group of people, there's some nearly arbitrary way of splitting the data that produces such a paradox. So if you ever see crazy differences between aggregate results and group results, make sure to keep an eye out for any funny business!