Tuesday, July 17, 2012

Watch out for thinking traps

Imagine that a new test for HIV has been developed, and that the powers that be have decided to pursue universal testing with it in order to improve public health. Let's assume that it's a very accurate test - it comes back positive for 95% of people who actually have the virus, and negative for 99% of people who don't.

Pretend that you get the blood work done, and get the dreaded bad news of a positive result. Now, you've probably been safe - stayed away from dirty needles, used protection when needed, et cetera - so you immediately go into denial. "This is outrageous," you might say, "it must have been a false positive!"

Taking a cursory look at the information provided about the test, what do you think the chance is that yours is a false positive? Your first impression might be that it's only 1%, which would make a reprieve seem sadly unlikely.

Let's do the math:

In 2009, the number of people in Canada with HIV was 65,000, and the population was 34 million. That leaves 33,935,000 Canadians who don't have HIV.

The test is positive 95% of the time if someone has the disease, so 61,750 of the people with the disease will get a true positive result. Similarly, the test is positive 1% of the time for people without the disease, and would give 339,350 devastating false positives.

That means that of the roughly four hundred thousand Canadians who would get a positive result, only about sixty-two thousand actually have the disease. If you're not part of a particular risk group, your chance of being healthy despite a positive result isn't 1% - it's actually a whopping 85%.
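If you'd rather let a few lines of code do the arithmetic, here's a minimal sketch in Python - the figures are the same 2009 estimates quoted above, and the variable names are only there for illustration:

population = 34_000_000
infected = 65_000
healthy = population - infected            # 33,935,000

sensitivity = 0.95                         # positive result for 95% of infected people
false_positive_rate = 0.01                 # positive result for 1% of healthy people

true_positives = sensitivity * infected             # 61,750
false_positives = false_positive_rate * healthy     # 339,350
total_positives = true_positives + false_positives  # ~401,100

print(false_positives / total_positives)   # ~0.846, i.e. about 85%

This is really just Bayes' theorem at work: how worried you should be about a positive result depends not only on how accurate the test is, but also on how rare the disease is in the first place.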

This type of approach is critical when dealing with science and technology. A politician could perhaps argue that, with such a seemingly accurate test, everyone should be screened for the good of the public. Due to the low prevalence of the disease, though, such universal testing would do far more harm than good. Actually sitting down and working out the numbers is crucial when dealing with science and statistics, and you should always be wary of blindly accepting the first figures you're handed, or information that's been spun against you.

Now that you're primed, let's try a more famous example - the Monty Hall problem:

Congratulations! You're in the middle of the game show Let's Make a Deal, and you're standing in front of three doors. Behind one door is a car, and behind the other two are goats. You pick a door, say #1, but it isn't opened. The host, who knows what's behind all three doors, opens one of the other doors, say #2, and shows that it has a goat. The host then says to you, "Do you want to pick door #3?" Is it to your advantage to switch your choice, or does it matter?

Your first reaction may be that it doesn't matter - at this point you are faced with two doors, and you know that one has a goat and that the other has a car. Surely it must be a fifty-fifty decision, and it doesn't matter what you choose. This is, perhaps surprisingly, incorrect. In fact, you can double your chances of winning a car if you switch doors!

Because the host knows what's behind each door, if you switch you are guaranteed to be switching from a goat to a car or vice versa (never a goat to a goat, because that option has been eliminated by the host). Two thirds of the time you're going to pick a goat right off the bat, and switching would get you the car; only one third of the time would you pick the car first and regret switching. Even though at first glance it would seem your choice doesn't matter, taking a second to think things through can turn out to be quite profitable.
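If you don't quite trust the argument, you can always simulate it. Here's a minimal Monte Carlo sketch in Python of the game as described above - the function name and number of trials are arbitrary:

import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)       # door hiding the car
        choice = random.randrange(3)    # contestant's first pick
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

print(play(switch=False))   # ~0.33
print(play(switch=True))    # ~0.67

Run it a few times and the switching strategy wins roughly two thirds of the time, while staying wins roughly one third - exactly what the reasoning above predicts.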

One last example of critical thinking: in front of you are four cards, and you know that each card has a letter on one side and a number on the other. The four cards you can see have E, M, 4, and 8 written on the side facing you, and you can't see the reverse sides.

Suppose you were told that the cards always obey the following rule: if a card has an E on one side, then it always has a 4 on the other side. If you wanted to figure out whether this rule is true or false by turning over some of the cards in front of you, which ones would you turn over? Take a second and think it through. Go on.

Most people immediately choose the card with an E on the front, and rightly so - if it turned out not to have a 4 on the back, we would know the rule is false. Some people will also choose the card with a 4 on the front, but this isn't necessary - the rule only specified what happens to cards with an E, not to cards with a 4. Similarly, there's no sense in checking the card with the M.

But most people will stop there, and completely overlook checking the card with the 8 on the front. What if you checked it and there was an E on the back? Then the rule would be broken just as easily as if there was no 4 on the back of the card with the E on its front.
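For the programmatically inclined, here's a tiny Python sketch of the same reasoning - the helper function is purely illustrative, and it simply asks which visible faces could possibly falsify the rule "if E on one side, then 4 on the other":

def could_falsify(visible_face):
    # A card can only break the rule if it might pair an E with something other than a 4,
    # so flip a visible E (the hidden number might not be a 4) or a visible number
    # other than 4 (the hidden letter might be an E).
    if visible_face.isalpha():
        return visible_face == "E"
    return visible_face != "4"

for face in ["E", "M", "4", "8"]:
    print(face, could_falsify(face))
# E True, M False, 4 False, 8 True - flip the E and the 8.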


This is an example of confirmation bias: when trying to test theories we often set up tests designed to prove them, rather than tests designed to disprove them. Scientists throughout history have accumulated massive amounts of data that agreed with their hypothesis and called it confirmed, only to be embarrassed when a single simple test disproved the theory completely.

Careful analysis of probability and an awareness of cognitive biases are important not only in science and technology, but also in everyday decision-making. The next time you're shocked by something on the news, or need to make a major decision, it always helps to take a step back and really think about both what you're looking for and the information you've been given.

This post is also available at The Wanderer Online.
