Remember that quiz you all took? This one?

Let’s talk about it.

1. How long will this quiz take you?

I realize that it’s hard to estimate when you have no idea what the quiz is about, and I’m also aware that I may have skewed this by promising it would take under 10 minutes, but this question was meant to illustrate the planning fallacy (a paper about it here and more information here). People tend to underestimate how much time something will take them, even when they have past experience of going over time. This applies to a wide range of activities, from carpentry to origami, and does not apply to disinterested observers guessing how long something will take someone else.

When we were discussing it at our club meeting, one of the club officers, Mike Mei, pointed out that this might also be an example of the Dunning-Kruger effect, in which people unskilled in a given area overrate their abilities in that area, essentially because they lack the knowledge to see where they have failed. There is a corollary effect, in which skilled people underrate their abilities, because they spend time with people even more skilled than they are and have a better understanding of their own limitations.

The answers I got from SA were, in minutes: 10, 2, 10, 2, 5, 3, 2, 1 (possibly a 10), and blank. They all took between 3 and 7 minutes, so it’s possible the Dunning-Kruger effect was stronger than the planning fallacy here.

(1.b.) On the original quiz, which I gave to the SA, the first question also had: How many questions do you expect to get right? which was meant to illustrate much the same points. I took this off since not all the questions have right and wrong answers.

2. Samantha was part of the Intervarsity Christian Fellowship in college and was abstinent until marriage. She has four children and does not use birth control. Is it more likely that she is a teacher, or a Christian and a teacher?

This is a spin on the classic “Linda problem” (the feminist bank teller), a well-known thought experiment/trick question in psychology. One example in a psychology presentation can be found here. It is a demonstration of the representativeness heuristic, in which people estimate the probabilities of events from the data available to them, rather than being aware of all the data they don’t have. In this case, people focus on the information I gave, which points strongly to Samantha being a Christian. This produces a deviation from a Bayesian calculation, in some cases because we neglect the base rates of an event (this would be a prior), but in this case because of the conjunction fallacy. This fallacy occurs when we assume that a more restricted situation is more likely than a more general one. In particular, if we say that Samantha is more likely to be a Christian and a teacher, then we are claiming that the probability that Samantha is a Christian AND a teacher is greater than the probability that Samantha is a teacher of any kind, which is clearly false. The Wikipedia article has the math, but what you really need to see is this:

If A is the event that Samantha is a teacher of any kind and B is the event that she is a Christian, we see that the overlap (C, where both are true) can’t be larger than either A or B.

To my disappointment, SA answered with 2 saying teacher, 5 saying Christian and teacher, and two people rebelling against the two options I gave them to say:

“Just a Christian. Women must be oppressed and pregnant. Quote the Bible”

And

“The probabilities seem similar, though one should never take professed faith at face value.”

To be fair, these answers actually have a lot of merit. Even though it wasn’t the point of the question, it probably is much more likely that Samantha is a Christian than that she is either a teacher or both. Given that, it’s probably also true that the probabilities of her being a teacher and both being a teacher and a Christian are similar (if the probability of her being a Christian is high enough). Don’t believe me? Pick some probabilities at random and do the math! It’s just multiplication, I promise.
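If you’d rather let a machine do the multiplication, here’s a minimal sketch of that exercise in Python. The specific probabilities are made up for illustration, and independence is assumed for simplicity:

```python
# Illustrative (made-up) probabilities, assuming the two traits are independent.
p_christian = 0.9   # P(Samantha is a Christian)
p_teacher = 0.05    # P(Samantha is a teacher)

p_both = p_christian * p_teacher  # P(Christian AND teacher)

# The conjunction can never be more probable than either of its parts...
assert p_both <= p_teacher and p_both <= p_christian

# ...but when P(Christian) is high, P(both) lands close to P(teacher),
# which is just what the second quiz-taker observed.
print(p_both, p_teacher)  # 0.045 vs 0.05 -- quite similar
```

Swap in any numbers you like; the assertion will never fail, because the conjunction is always the smallest piece of the pie.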

I find the last bit particularly intriguing. Perhaps this intrepid secularite is referring to the phenomenon of belief in belief?

By the way, if you were confused about all that Bayes talk, here’s a fairly simple explanation of Bayesian probability.

3. Do you think the percentage of countries in the UN that are African countries is higher or lower than 65%/10%? What is the percentage of countries in the UN that are African countries?

If you’ve already checked out both instantiations of the quiz, you probably realized this is one of the places they deviated. This is supposed to illustrate the anchoring effect, in which our analysis of what answer is reasonable is heavily affected by the information we’re given to start with. Sometimes this is because we adjust from that number, and sometimes because our brains recall information consistent with the number we start with. This can occur in context, as in this question or a starting bid for a salary, or out of context, as in spinning a Wheel of Fortune before answering the question (this link goes to a generally great paper). Crazy, isn’t it? But it’s true. It’s also worth pointing out that, despite the claim of some SA members that science people might be less prone to the fallacy than humanities people, even those who are reminded of the anchoring effect and told to avoid it are subject to it, at least when the anchor comes externally (as in this quiz). However, with internally created anchors (if I hadn’t given the first part of the question), warnings and high Need for Cognition do lower the extent of the effect.

SA Answers:

65%: 10%, 10%, 30%, 30%, 20% - Mean: 20%

10%: 18%, 52%, 8%, 13% - Mean: 22.75%

Oddly, the SA at UofC appears to be immune to the anchoring effect. Or something.

4. (5 seconds) Guess the value of the following arithmetical expression: 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 = ? OR 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8 = ?

This illustrates the same anchoring effect. Since they’re asked to do it in 5 seconds, people tend to look at the first few numbers, multiply, and then adjust, and the adjustment falls short.

Answers from SA

Descending order: 400,000; 1,000; 1,024; 500 Average: 100,631 (obviously not particularly useful given the high variance). Without the outlier, the average is 841.33

Ascending order: 1,000; 900; 16,320 (calculated, not guessed); YAY MATH!; 4,000; Average: 5555

Again, not entirely expected, but that’s ok.
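For reference, the true value of the product is 40,320 (that is, 8 factorial), well above every guess in either group, which is itself typical of this experiment: people anchor on a partial product and under-adjust. A quick check:

```python
import math

# Multiply out 1 x 2 x ... x 8 the long way.
product = 1
for n in range(1, 9):
    product *= n

# Same thing via the standard library.
assert product == math.factorial(8) == 40320
print(product)  # 40320
```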

Source: I got both of these questions straight from here: http://lesswrong.com/lw/j7/anchoring_and_adjustment/

5. 1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies. A woman in this age group had a positive mammography in a routine screening. What is the probability that she has breast cancer?

6. 1500 out of every 10,000 men at age forty who participate in routine screening have prostate cancer. 1300 of these 1500 men will get positive screening tests. 8,000 of the men who do not have prostate cancer will also get positive screening tests. A man in this age group had a positive test in a routine screening. What is the probability that he has prostate cancer?

You probably noticed that these are the same question, one with percentages, one with raw numbers. Apologies for the typo in the second question, by the way; that’s been fixed. These questions ask for a Bayesian calculation of probability. As someone pointed out in the comments, it might seem like the test is asking for mathematical proficiency rather than rational ability. I take the criticism willingly, but nothing on this quiz requires more than basic multiplication. Knowing how to set up a Bayesian calculation may be mathematical in some sense, but I would argue it’s also simply something a rational person should know how to do. Calculating iterated probabilities of coin flips requires multiplication too, but if you think the probability of getting heads at least once in two flips is .5 + .5 = 1, the problem goes beyond arithmetic. This will become even clearer when I demonstrate the answer to this problem.

The way this works is as follows. We know that some people have cancer and some don’t, and some people get positive tests and some don’t. So we set up a table. The answers will be put in as (breast cancer problem numbers, prostate cancer problem numbers).

| | Has Cancer | Doesn’t Have Cancer |
|---|---|---|
| Positive Test | | |
| Negative Test | | |

So for prostate cancer, the numbers are all given

| | Has Cancer | Doesn’t Have Cancer |
|---|---|---|
| Positive Test | ( , 1300) | ( , 8000) |
| Negative Test | ( , 200) | ( , 500) |

For breast cancer, we have to do some calculations. For ease’s sake, let’s pick 10,000 as our total number of people (though it doesn’t matter). Of 10,000 women, 1%, or 100, have breast cancer, so our left column must add up to 100. 80% of these women will get a positive test, so 80 of them will and 20 won’t. Now we’re considering the 9,900 women who don’t have breast cancer, 9.6% of whom (about 950) will get a positive test anyway, leaving 8,950 for the final quadrant.

| | Has Cancer | Doesn’t Have Cancer |
|---|---|---|
| Positive Test | (80, 1300) | (950, 8000) |
| Negative Test | (20, 200) | (8950, 500) |

So if you get a positive test, you know you’re in the top row. If you’re a woman who got a positive mammography, you have an 80/(80+950) ≈ 7.8% chance of having cancer, and if you’re a man who got a positive test for prostate cancer, you have a 1300/(1300+8000) ≈ 14% chance of having cancer.
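The whole table-based calculation can be sketched in a few lines of Python, using the counts derived above (950 is the rounded version of 9.6% of 9,900, so hand calculations will differ very slightly):

```python
def posterior(true_positives, false_positives):
    """P(cancer | positive test): the 'has cancer' share of the positive-test row."""
    return true_positives / (true_positives + false_positives)

# Breast cancer problem (per 10,000 women): 80 true positives, ~950 false positives.
print(round(posterior(80, 950) * 100, 1))    # about 7.8 (percent)

# Prostate cancer problem (per 10,000 men): 1300 true positives, 8000 false positives.
print(round(posterior(1300, 8000) * 100, 1)) # about 14.0 (percent)
```

Notice that the arithmetic really is just one division per problem; the hard part is setting up the table.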

From SA, who only got the breast cancer problem: 8/9 = 88%, 9.8%, 70.4%, 9.5%, 90.4%, 10%, 90.4%

Not to engage in scare rationalism here, but this is a problem. It means that women who get positive mammographies may be overestimating their probability of having cancer by an order of magnitude, and therefore undergoing possibly unnecessary biopsies, tests, chemotherapy, radiation, and hospital visits, with all the fear, stress, and bills that come along with them. Not good, people, not good.

And look what just came out: NYTimes: Considering When It Might Be Best Not to Know About Cancer

7. There are four cards on a table. Every card has one side which is white or black and one side with a number on it. The Rule: Every card with a white side must have an even number on the other side. How many cards (and which ones) must you flip in order to check if all four cards follow this rule?

8. You are an employee at an all-ages party venue, and people are allowed to come in with drinks. You see a group of four guys coming in, all carrying red Solo cups. One has an ID which says he’s 19, one is drinking orange juice, one is drinking beer, one has an ID which says he’s 24. Assuming you are accurate in your assessment of the drinks and all the IDs are real, whose IDs/drinks do you check in addition to the information you already have to make sure no one is drinking illegally?

Congratulations to whoever realized that these are the same problem. In both cases you have four instantiations of an element of the problem, each with two pieces of information associated with it, only one of which you currently know (4 cards, each with a number and a color; 4 people, each with a drink and an age). You are given a rule: if x, then y. If white card, then even number on the opposing side. If drinking alcohol, then at least 21. Now, if-then statements with x and y can be written four ways.

1. If x, then y is the original statement.

2. If y, then x is the converse.

3. If not x, then not y is the inverse.

4. If not y, then not x is the contrapositive.

What you’ll notice if you’ve taken logic is that 1 and 4 are equivalent, and 2 and 3 are equivalent. So to verify a rule, we have to check both it and its equivalent contrapositive. Here, “if white, then even” must be checked (so flip the white card), and “if alcohol, then at least 21” must be checked (so check the guy with beer). Then the contrapositives: “if odd (not even), then black (not white),” so you flip the ‘9’ card, and “if under 21 (not at least 21), then no alcohol,” so you check the 19-year-old’s drink. The other ones don’t matter! Worrying about whether the even number has a black face on the other side is like insisting that if you’re 21 or older you must drink alcohol!

If you totally didn’t follow this, check out this link.

The cool thing about this question, called the Wason Selection Task, is that people are universally pretty bad at the card example and pretty good at the people example. The explanation given is that people are better at thinking about people and cheating (people possibly breaking rules) than abstract logical concepts. Maybe you agree?
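The card version of the rule check can be verified exhaustively with a short Python sketch. The four visible faces here (white, black, ‘2’, ‘9’) match the cards the SA answers refer to, though the quiz wording above doesn’t list them explicitly:

```python
# Rule: every card with a white side has an even number on the other side.
def must_flip(visible):
    """A card needs flipping iff its hidden side could falsify the rule."""
    if visible == "white":
        return True   # the rule itself: the hidden number must be even
    if isinstance(visible, int) and visible % 2 == 1:
        return True   # the contrapositive: the hidden side must not be white
    return False      # black cards and even numbers can never falsify the rule

cards = ["white", "black", 2, 9]
print([c for c in cards if must_flip(c)])  # ['white', 9]
```

Replace colors with ages and numbers with drinks and the same two-branch check picks out the beer drinker and the 19-year-old.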

SA Answers: Check the ‘2’ card, check the beer & juice; Check the ‘2’ and white cards, check the 19 year old’s drink and beer drinker’s ID; Check all the cards, check all the people except the 24 year old; Check the black card, and the one drinking orange juice and the one drinking beer; Check the ‘2’ card, check all of the people; Check the ‘2’ card, check the one drinking orange juice and the one drinking beer; Check all the cards, check the one drinking orange juice and the one drinking beer; Check the ‘2’ card, the ‘9’ card and the white card; check everyone’s drink.

Caveat: I phrased the question differently to them, and many thought that part of the job was to get the under-18s out as well as check drinking legality, so you can’t draw that much from this sample. I would like to point out, though, that on the original Wason selection task (the cards), no one got it right. Interesting...

9. A fair coin is tossed repeatedly until a tail appears, ending the game. The pot starts at 1 dollar and is doubled every time a head appears. You win whatever is in the pot after the game ends. Thus you win 1 dollar if a tail appears on the first toss, 2 dollars if a head appears on the first toss and a tail on the second, 4 dollars if a head appears on the first two tosses and a tail on the third, 8 dollars if a head appears on the first three tosses and a tail on the fourth, etc. This game can be played as many times as you wish (with a fixed fee paid every time). How much would you pay to enter this game?

I won’t lie, this one’s pretty math-y. Basically, when you’re deciding whether to take a bet, you should calculate something called expected value, that is, what do you expect to win? If you have a 50% chance of winning $2 and a 50% chance of winning nothing, then your expected value is .5*2 +.5*0 = 1, so you should be willing to pay a dollar or less (probably less, since people are loss averse).

The same thing applies here. You have a 50% chance of winning $1 (if it’s tails the first time), a 25% chance of winning $2 (heads then tails), a 12.5% chance of $4, etc. The thing is that when you add up .5×1 + .25×2 + .125×4 + … you get .5 + .5 + .5 + .5 out to infinity, which sums to infinity, which means the expected value of this bet is...infinite. Obviously there aren’t infinity dollars, and loss aversion plays a role here, but seriously, you should be willing to pay a lot of money to play this game. Crazy, huh?
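Those partial sums are easy to verify in code. Here’s a minimal sketch that truncates the game after a fixed number of possible outcomes (since the full sum diverges):

```python
# Expected value of the St. Petersburg game, truncated after n outcomes.
# Outcome k: the first tail lands on toss k, with probability (1/2)**k
# and payout 2**(k-1) dollars.
def truncated_ev(n):
    return sum((0.5 ** k) * (2 ** (k - 1)) for k in range(1, n + 1))

print(truncated_ev(4))    # 2.0  -- each outcome contributes exactly 0.5
print(truncated_ev(100))  # 50.0 -- and it grows without bound as n increases
```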

SA Answers: $1, $1, $0.50, $3, 5 euro, $0, $2, $10, $2.

10. A magazine you're interested in has three/two subscription options: Which do you choose?

Last one, promise. This one’s pretty simple. It’s all here, really: http://tomyumthinktank.blogspot.com/2008/03/economics-of-irrationality-relativism.html. The basic idea is that we see print/online selling for the same price as print alone, so we think it’s a better deal, and we’re far more likely to pick it than if that second option weren’t there to compare the third one to favorably. Think about it next time you go to a restaurant! Cool, huh?

In SA, of the group that got three options, 1/5 chose the cheaper one, and of the group that got two options, 1/4 chose the cheaper one.

(Thanks to Mike Mei for pointing me to this question)

Thanks for sticking with me through all that. This was my first venture into quiz making. I welcome criticisms in the comments. Please also tell me if you’d heard of these fallacies/biases before taking the test! If you’re interested in this stuff and want to try to become more rational, I recommend Less Wrong and this Wikipedia page. There’s a whole amazing world out there!
