Readers as old as I am may remember one of TV’s more popular game shows, Let’s Make a Deal, and its host Monty Hall. Those keen on probability and statistics may also know of a now-classic, once in dispute, problem known as the Monty Hall problem.
The final game on each show involved a single contestant given a choice — door #1, door #2, or door #3. One of them conceals the “grand” prize — often a car, which seems to be a popular choice among producers and contestants alike. The other two doors conceal lesser prizes, sometimes even a “booby prize.” The relevant fact is that the contestant wants the grand prize, not either of the others.
After the contestant chose a door, Monty then revealed what was behind one of the other doors. He didn’t reveal the contestant’s choice, nor the grand prize; he showed one of the lesser prizes. Specifically, he showed a lesser prize which the contestant did not choose. In order to do so, Monty had to know which door concealed the grand prize — otherwise, he might accidentally reveal the grand prize door before the game was over, which would be terribly anti-climactic.
Then came a crucial choice: Monty offered the contestant the opportunity to change selection. If, for instance, you had originally chosen door #1, then Monty revealed that door #2 concealed a goat (not the grand prize), he would then ask, “Do you want to keep what’s behind door #1, or pick what’s behind door #3?”
The Monty Hall problem is: should you stay with your original selection, or switch?
To most people, it seems that switching doesn’t gain any advantage. After Monty reveals one of the booby prizes, there are two doors left, so the odds the grand prize is behind your original choice are 50-50. Hence it doesn’t matter whether you switch or not.
But that’s wrong. Originally — with no information at all — the chance you picked the right door is 1/3. What Monty does afterward doesn’t change that. The chance the grand prize is behind one of the other doors is 2/3.
If you picked the right door, Monty can show you either of the others. But if not, he has to show you the other one which doesn’t conceal the grand prize. The fact that Monty showed you which of those it is doesn’t alter that 2/3 chance. So: switch. It gives you a 2/3 chance of winning, whereas you only have a 1/3 chance with your original choice.
I could give formulae, present logical constructions, even run computer simulations, and they would show the same thing: switch.
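A simulation along those lines is easy to write. Here is a minimal sketch (the function name and trial count are my own choices, not from the post); it uses the fact that when the host knowingly opens every other losing door, switching wins exactly when your first pick was wrong:

```python
import random

def play(switch, doors=3, trials=100_000):
    """Estimate the win rate for the stay/switch strategies by simulation."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(doors)   # where the grand prize is hidden
        pick = random.randrange(doors)    # the contestant's first choice
        # Monty opens every other door except one losing door, so
        # switching wins exactly when the first pick was wrong.
        wins += (pick != prize) if switch else (pick == prize)
    return wins / trials

print(play(switch=False))            # close to 1/3
print(play(switch=True))             # close to 2/3
print(play(switch=True, doors=100))  # close to 99/100
```

The `doors` parameter also covers the 100-door variant described below.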
The result seems counterintuitive. So much so, that many had a hard time believing it, including many mathematicians. But the answer, counter-intuitive though it may be, is correct. Switch.
Yet there are those who still don’t accept the solution. Perhaps it won’t surprise you that some of them are global warming deniers. So I’ll try another approach to persuade the unconvinced.
Let’s modify Let’s Make a Deal so that the final game has not just 3 doors to choose from, but 100 of them. Only one conceals the grand prize, the other 99 hide booby prizes. You pick one.
Then, Monty shows you what’s behind 98 of the doors you didn’t choose. He knows ahead of time which door conceals the grand prize, so he never uncovers the grand prize by accident. After all is said and done, there are two doors unrevealed: your original choice and one which you didn’t choose. The other 98 doors have already been shown to hide goats.
Now which would you choose?
If you like what you see, feel free to donate at Peaseblossom’s Closet.
The Monty Hall problem is very counterintuitive, and I know quite a few scientists and engineers who got it wrong when they were first exposed to it (I did). The easiest explanation is the one given here, namely to realize that the game show host knows which door the prize is behind and never opens that one. So the host’s selection adds to the information that the contestant had available when he or she initially chose a door.
Extending it to 100 doors as ATTP does here makes the point even more clear. (It obviously doesn’t work with two doors… host opens unpicked door to reveal goat… “Want to change?”)
Well, obvious except there’s an XKCD cartoon of just that, with a twist, of course. https://xkcd.com/1282/
For ‘ATTP’ read ‘Tamino’. I was reading both blogs and got mixed up.
You even had me confused for a while :-). I do like the Monty Hall problem. I once did a 5-day hike with a friend who was doing a PhD in Engineering. I spent one evening trying, unsuccessfully, to convince them that you should switch.
There is, of course, a strategy that guarantees a win every single time. It’s very simple, too. All you have to do is decide that you want a goat.
you think so?
Yep, if the game show host behaves according to the classic structure, you choose the opened door. Always a goat.
I don’t think you’re allowed to pick the opened door, so there is no way to guarantee winning a goat.
Whoops. Apologies, Grant. I was posting to ATTP at the same time and since you both use WordPress with a similar style I got mixed up.
The initial chances of you picking the door with the prize are 1/100. It’s almost guaranteed that you are wrong, which means when it comes down to your door vs the last unopened door, you have a 99% chance of winning (statistically speaking) if you switch.
When I took probability this was initially a bit difficult to wrap my head around as well, but now I just remember that my first choice had a 99/100 chance of being wrong, and if I am wrong, then the last unrevealed door must be the prize door, which happens 99% of the time.
I once presented this puzzle to a group of my friends at a party, and as expected, everyone went for the ‘doesn’t matter’ answer.
I calmly explained how Monty’s inside information had an effect on the game, but they persisted in their stance, even to the point of mocking me for being too dense to see how they had to be right.
I didn’t take the bait, but egged them on to think deeper, and not be locked in to their original thinking. This went on for about an hour, at which point I added a fourth door, and Monty opens two ‘goat’ doors. Do you change now? Nope, “still the same odds”. Hmmmm… how about a fifth, a sixth, seventh, eighth? The eyes begin to open, then the answer smacks them in the face. But then the cognitive dissonance kicks in and they try to find a way in which it is a trick, or manipulation of the facts. Nope.
Eventually acceptance is achieved by most (but not all).
Recently I revisited the puzzle with one of those present (the mocker) who did ‘get it’ at the time, and still grudgingly accepts the formal answer, but curiously still thinks that even though the odds are in favor of switching, it still doesn’t matter, because “I still could have picked the car initially, so switching could be a mistake, and that’s sorta fifty/fifty”.
I see it this way: the odds remain at ⅓ for the initial choice and ⅔ for *both* of the other 2. What is happening is that the ⅔ probability for the “other 2” changes from ⅓ + ⅓ to ⅔ + 0 (0 being the opened door, of course). Imagine that, instead of opening a door, Monty were to say, “would you like to change to *both* of the other 2?”. Of course you would take that option and switch. No inside information would be required.
That’s a really neat way of explaining it.
When I first heard this, I also thought that it makes no difference and the odds were 50/50, but it only takes a look at the simple table here to show the true odds.
I do have a suspicion, though, that anyone who thinks it’s 50/50 after opening 1 of 3 doors will also think it’s still 50/50 after opening 98 of 100!
Now, I wonder why Tamino brought this up. The odds that GW deniers are right is about 1 in a 100?
Or the blind stubbornness described by skepticmac57 in the post immediately before yours. (Given the sheer amount and consistency of the physical evidence I think it’s well under a 1% chance that deniers are right… the best they can hope for is that ECS is under 2.5 °C)
Thanks for the link. After reading the answer–which seemed terribly ‘wrong’ to me, too, though I trusted Tamino had it right–I more or less replicated the table, like this:
Choice 1   Choice 2   Success?
   1          1       No  (Stay)
   1          3       Yes (Switch)
   2          2       No  (Stay)
   2          3       Yes (Switch)
   3          3       Yes (Stay)
   3        1 or 2    No  (Switch)
So, switching wins 2/3 of the time, staying only 1/3 of the time.
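That table can also be checked by brute-force enumeration of every (prize door, first pick) pair. Here is one possible sketch (my own construction, not from the comment), which doesn't even need to fix the prize behind door 3:

```python
from itertools import product

doors = {1, 2, 3}
switch_wins = stay_wins = 0
for prize, pick in product(doors, repeat=2):
    # Monty opens a losing door the contestant didn't pick
    # (either legal choice gives the same counts; take the lowest).
    monty = min(doors - {prize, pick})
    # Switching means taking the one door neither picked nor opened.
    switched = (doors - {pick, monty}).pop()
    switch_wins += (switched == prize)
    stay_wins += (pick == prize)

print(switch_wins, stay_wins)  # 6 and 3, out of 9 equally likely cases
```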
There are several parallels with denialism, as discussed in your link, and in skepticmac’s comment.
1) Disregarding available information (whether intentionally, as in citing only RSS temperature data, or through cognitive bias, as in the MHP).
2) Stubborn defense, on the part of some, of the correctness of one’s answer, regardless of demonstration. I won’t speculate about motivations just now…
Should have said the table assumes 3 as the correct choice, though that’s not too hard an inference… :-)
Why does my brain tell me it doesn’t matter whether I switch or not?
Because your brain falls for the “obvious” answer that there are now only 2 choices and that the odds must therefore be 50/50, regardless of how many doors/choices there were initially. I hope the table at Wikipedia I linked to earlier, and Doc’s reply above, makes the true odds clear.
As Tamino and the Wikipedia article explain, probability is often counter-intuitive and you really have to map out the various possibilities to overcome this. You know that old one about how many people you need for any 2 to have the same birthday (day and month, not necessarily year)? It’s much fewer than intuition (common sense?) would tell you.
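The birthday problem mentioned above is easy to check numerically. This short sketch (the function name is mine) finds the smallest group size for which the chance of a shared birthday exceeds one half:

```python
def birthday_threshold(p=0.5, days=365):
    """Smallest group size whose shared-birthday probability exceeds p."""
    q = 1.0   # probability that all n birthdays are distinct
    n = 0
    while 1 - q <= p:
        n += 1
        q *= (days - n + 1) / days
    return n

print(birthday_threshold())   # 23 people suffice for better-than-even odds
```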
The key here is that your brain fails to see that information has been communicated. Pigeons needing food see it soon enough; people needing food probably would as well, as they would respond to the actual probabilities, not a surface analysis.
That does help, but for me, the knowledge that the one door you initially pick has only a 1/3 chance at the beginning and the combination of the two other doors a 2/3 chance of containing the car, and that the host will always be able to pick a door that does not contain the car and will always do so, is sufficient to demonstrate that you should switch. For me it is, oddly enough, the mental cartoon showing the two doors that you did not pick, grouped together.
When the host shows that one of those doors does not contain the car he is not in any way affecting the probability that the group of doors that you did not pick or the door that you did pick contains the car. He is instead simply shifting the probability of containing the car that had been evenly distributed between the doors that you did not pick to a single door by eliminating as a possibility the door that he knows does not have the car.
Oddly enough, as a matter of human psychology, I suspect that a contestant is more likely to stand by their original choice, partly because it is what they already chose.
For some it will simply be the case that they will not see any good reason for switching a decision they already made. It might be understood as vacillation, confusion or uncertainty, and they wouldn’t want to project that. For others they may have picked a door feeling that it was the lucky door, and when offered the chance to switch they will stick with what they already decided was the lucky choice.
For still others there will be the sense that the host is trying to get them to switch, and with the world being a zero-sum game (what would this be… the principle of conservation of win?), the contestant will think that they can’t win without someone else losing – so clearly the host is trying to get them to switch because the contestant will lose so that the host can win. They will sense that the host is trying to cheat them out of a well-deserved win.
Not that the contestant is likely to put any of this into so many words. For many, if they were to actually put their “reasoning” into such words they would likely see the weakness in their arguments for standing by their original choice — even if this did not give them enough insight to see that they should switch.
There was a rather creepy, dreamlike version of this on the “Ghostbusters” cartoon many years ago–“And behind Door Number One, it’s… Door Number Three!”
I’ve had fun with the problem for years, even tripping up a really good high school math teacher I know (until he grudgingly admitted I was right). But I simplify the Monty Hall problem this way:
1. Since the odds of picking the winning door (at the outset) are 1/3, there is by definition a 2/3 chance that the winning door is one of the two doors you did NOT pick.
2. By giving you a chance to switch, Monty is basically giving you the chance to pick TWO doors — the one he opened, and the remaining door — rather than just one!
3. Your odds of winning or losing do not change from #1 above, because the winning prize has not moved and Monty will always open a door with a goat.
Dennis, That is a perfectly reasonable and correct way of approaching the puzzle, but it bumps up against a deep intuition (heuristic?) that when choices are reduced to two, then it becomes 50/50.
It seems to be the default for most people, including some very educated people, including mathematicians. I know that it fooled me at first too.
There is a very interesting story about this involving the Parade Magazine column of Marilyn Vos Savant, where I first saw the puzzle. After she presented the correct answer, she was attacked and rudely vilified and scorned by some of her ‘learned’ readers:
My best and quickest way to get people on board is to do the same thing that Tamino did, which is to take it to some extreme. They usually get it pretty quickly then, but you still get some squirming from people who get themselves and their egos too deeply entrenched in their initial position.
That really is a nice parallel to AGW denialists.
Yes, see my reply to skeptictmac57 above. The ⅔ probability shared by the 2 doors changes from ⅓ + ⅓ to ⅔ + 0.
The 100 doors is a good parable of denial.
After 98 doors open and are shown to hold goats, the denier still sticks to his or her original choice, for emotional or ideological reasons, even though the last unopened door has a 99% chance of concealing the prize.
Tamino, you mention that the host must know where the major prize is to avoid anticlimax, but am I right that switching is the best strategy regardless of whether the host knows or not? Of course if he doesn’t know there is 1/3 chance he will reveal the major prize and switching becomes irrelevant, but IF a booby prize is revealed first, the best strategy will be to switch, since there is always a 1/3 chance your first pick was wrong (regardless of the host’s knowledge of the correct door), hence a 2/3 chance of winning if you switch after a booby has been revealed.
The host not knowing would decrease the overall probability of winning with a switch strategy (to 2/3 X 2/3 = 4/9), but would not affect the probability of winning conditional on a booby prize being revealed first (=2/3). Right?
[Response: I don’t think so. If Monty doesn’t know (so he might accidentally reveal the grand prize), then I think your chances are the same whether you switch or not: 50/50 if he shows a goat, and zero if he trips the grand prize.]
If you always take first door, and host always takes second, then there are three possibilities: you win, you lose, and host picks the prize. If you exclude the 3rd, there’s a 50% chance of a win.
the question of what happens when Monty doesn’t know illustrates why this situation is so hard to understand. Even though exactly the same sequence of events may occur in a trial in the “monty knows” and a trial of the “monty doesn’t know” scenarios (e.g. you choose door 1, monty reveals a goat behind door 2), the odds of winning by switching are still different (2/3 vs 1/2).
The fact that probability involves many trials is a hard concept to grasp.
On reflection (and simulation) I think I am completely wrong, and the host’s prior knowledge is crucial – now my head hurts!
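A simulation of the "ignorant Monty" variant might look something like this sketch (names and trial count are my own): Monty opens a random unpicked door, and we condition on the games where he happens to reveal a goat, discarding the spoiled ones:

```python
import random

def ignorant_monty(trials=100_000):
    """Monty opens a random unpicked door; condition on a goat being shown."""
    switch_wins = stay_wins = goats_shown = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        monty = random.choice([d for d in range(3) if d != pick])
        if monty == prize:
            continue   # Monty spoiled the game; discard this trial
        goats_shown += 1
        other = ({0, 1, 2} - {pick, monty}).pop()
        switch_wins += (other == prize)
        stay_wins += (pick == prize)
    return switch_wins / goats_shown, stay_wins / goats_shown

print(ignorant_monty())   # both fractions come out near 1/2
```

This bears out Tamino's response above: once Monty reveals a goat by luck rather than knowledge, switching and staying do equally well.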
Monty is required only to open a losing door (of the 2 the contestant did *not* pick) so that the contestant has only one remaining door to switch to (or not). As I said earlier, if Monty opened no doors but instead offered a switch to *both* of the other doors, you would improve your odds from ⅓ to ⅔ by switching. Leaving one remaining door by opening a losing one gives the same odds.
I well remember this one, and the unfair dressing down one very intelligent woman (vos Savant) got for getting it right. If you haven’t already, you should all read this:
Oops, scratch that. skeptictmac already beat me to it. I’m usually more careful in checking the comments before I post :-(
Even more obvious if there are a million doors. The host has two reasons not to open a door: you chose it and/or it’s the winning one. The chance you chose the winner remains one in a million. Had me thinking though! :)
Was this the first time this problem was exposed to the public?
I read that article on a Sunday morning. By Monday entire office towers nationwide were arguing about it. Almost everybody thought there was no change in odds: staying with the first choice was just as good as switching.
When the answer came out people were angry. She got called a lot of names.
As a kid we watched the show. The living room would always have a vocal advocate of switching and a vocal advocate of staying put. Nobody ever said switching had better odds. What I wonder is: did the original designer of the game show actually know the odds improved by switching, or did they just stumble into it?
It also seemed people who switched were just as likely to blow it. Until the Parade Magazine column, I don’t think very many people knew about the true odds.
The beauty of that column is that she challenged them to test the hypothesis–and when they did, they saw that the data support the hypothesis. The sad thing for the climate change situation is that, like some of the folks in the Parade article who did the experiment, the deniers refuse to accept the empirical data, even when they have been shown, ad nauseum, that the data don’t support their hypotheses (about the sun, natural variability, etc.).
The game design was discussed in a statistics journal in 1975, so apparently not a lot of Americans subscribed to statistics journals.
To me, the key way to look at it is well illustrated by jim’s question, i.e., when does what Monte Hall shows you constitute new information that actually changes the probability that the prize is behind the door you originally chose, and when does it not?
Clearly, in the scenario where MH knows which door the grand prize is behind and always shows you the goat, the fact that he shows you one door with the goat behind it gives you no new information.
On the other hand, if MH really doesn’t know where the prize is and just opens another door at random, this really can give you new information that changes the probability that the prize is behind the door you picked. In particular, 1/3 of the time he will show you the prize and the chance that the prize is behind the door you picked drops to zero and 2/3 of the time he will show you a goat and the chance that the prize is behind the door you picked increases to 1/2: 1/3 = (1/3)*0 + (2/3)*(1/2)
One could also imagine another scenario where MH knows where the prize is and doesn’t always show you what is behind another door, but instead only tempts you to switch by showing you a goat if you have picked the correct door. In that case, of course, you should never switch!
I think the most interesting thing about this problem is that you can’t determine the best strategy simply by being told that MH shows you a goat behind another door: You have to know what MH’s algorithm is in order to know if you should switch, hold, or it doesn’t matter.
Given that MH knows where the grand prize is and never opens that door, he most certainly has given you information. That is the crux here.
Given that MH does NOT know, 1/3 of the time the grand prize would have been revealed. This demonstrably did not happen. In this case no information is passed and the odds go to 50% for the 2 remaining doors.
Sorry…The sentence where I said “…the fact that he shows you one door with the goat behind it gives you no new information” should have said “…the fact that he shows you one door with the goat behind it gives you no new information about the door that you picked”. It does give you new information about the other door.
I agree with the rest of what you said, which is just restating what I said in your own words.
Just to re-emphasize, ambiguous phrases like “gives you no new information” should be avoided. It is ambiguous because it is unclear what sort of information one is talking about. (I tried to be clear in my post but accidently slipped up in one spot and then that caused you to misinterpret what I said.)
In my post, I emphasized new information about the probability that the door that you picked has the prize. You have emphasized new information about the ratio of probabilities for the prize to be behind the two remaining doors.
Those are two very different things.
Urgh…I keep realizing potentially-ambiguous phrases: by “remaining doors” I meant “unopened doors”, i.e., the remaining doors are the one you picked originally and the one you could switch to.
The problem with this question is that there’s so much ambiguity that it’s really hard to completely specify the problem.
(I think) The critical points are:
1. You must know before you make your first choice that Monty Hall will reveal what’s behind one of the other two doors.
2. Monty must know where the prize is.
3. Monty can only reveal a goat when he opens his door.
Without all three of these explicitly being specified (and I don’t think Tamino specifies the first one in his description) then there is a valid argument that it makes no difference to switch.
My understanding is that Monty must always offer a switch to one other door. To do this, he must open a losing door. This is known from the outset.
That’s not sufficient. One thing you need to remember is that not everybody talking about this problem has ever seen Monty Hall or knows who/what he is. My only knowledge at all of “Monty Hall” is through this question.
So you cannot assume anything if you want to properly ask this question.
After the contestant chose a door, Monty then revealed what was behind one of the other doors.
But unless the person reading this post *knew* that Monty didn’t have a choice about doing that and *knew* that Monty knew where the goats were and *knew* that Monty would reveal the goat and not the prize then they can genuinely argue that it makes no difference to switch.
The phrase as Tamino wrote it is ambiguous – it doesn’t specify if it’s the rules of the show or what happened in one particular show.
It’s interesting that outside of the US, where I’ve had this discussion, a significant proportion of the people have thought that it’s a deliberate action by the host to get the contestant to change away from the prize; otherwise Monty would have revealed the car and said, “Oh dear, bad luck.” Many of our gameshows have something similar, where the contestant has obviously picked the correct answer but the host keeps asking them if they’re really sure or would like to change.
I’m in the UK so I’ve never seen the programme. I was introduced to the problem about 10 years ago by a friend in the pub. As I remember it, he had heard it from someone else and didn’t know its origin. Shortly afterwards I discovered that origin and read about it at Wikipedia and Vos Savant’s site. I am not basing what I say purely on what Tamino has said. Have you followed any of the links posted here? You will see much discussion of the conditions under which the game is conducted.
Next time I get someone who doesn’t believe me after I have explained the correct answer I will not try to argue. I will just offer to play the game 50 (or more) times with the doubter as the presenter and me as the contestant. We can use a pea and thimble or a computer. I will pay him/her $1 per game, he will pay me $1.90 if I win. On my calculations I would have a less than 5 percent chance of losing, though the doubter should see the odds as being loaded in his/her favour.
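As a rough check of this proposed wager (assuming, as argued above, the contestant wins each game independently with probability 2/3 by switching), the chance of coming out behind over 50 games can be computed exactly from the binomial distribution:

```python
from math import comb

# Net result over 50 games: $1.90 per win, minus the $1 stake per game.
# The contestant loses money overall iff 1.9*w - 50 < 0, i.e. w <= 26 wins.
p_win = 2 / 3   # probability of winning one game with the switch strategy
p_lose_money = sum(comb(50, w) * p_win**w * (1 - p_win)**(50 - w)
                   for w in range(27))
print(p_lose_money)   # roughly 0.02, comfortably under the 5% estimate
```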
Hmm, interesting. But I echo Martin Smith’s question: why does my brain tell me that it doesn’t matter if I switch or not?
I can see the odds are in favour of switching when you look at the whole contest (from when there are 3 closed doors). However, is it not true that the contest really only has two doors? Isn’t the first choice irrelevant because, after one door is opened, it’s a new question? There are now two doors and the main prize is behind one of them. The contestant, effectively, has a new decision to make – choose one of the two doors. The fact that one of those doors was originally chosen doesn’t alter the new decision basis (two doors, one prize) does it?
The wikipedia article doesn’t seem quite right in claiming that, once a door is opened, the 2/3 chance now switches entirely to the unopened, unchosen door, because it’s now a new problem, isn’t it?
Well, you’ve just summed up the intuitive (but wrong) analysis that most of us make, and thereby answered your own question to a considerable extent.
As to the correct answer, let me try a question: however tempting it might be as a heuristic, does it really make any sense to think that the seemingly ‘new decision’ changes the 2:1 odds that your original pick was wrong?
This is exactly the point. You are saying that the odds become 50/50 (which I did when I first heard about this) whereas the Wikipedia article, Vos Savant, and numerous experiments show that this does not happen.
Would you say it’s also 50/50 in the case of the host opening 98 losing doors of 100 initial ones?
Well, yes, I would say the odds revert to 50/50, no matter how many doors there are. If you are left with a choice between two doors (which is what you are always left with), then you have a fifty/fifty chance of being right.
I’d always been told that a random event has no history of the previous random events. Like the lottery. No matter what the result was in the previous lottery, you still have the same chance in the new lottery (assuming you bought the same number of tickets). In this case, there are two lotteries. The first gives you a 1 in three chance of success. However, you don’t get to know whether you actually won or not. Now there is a second lottery, where you get a chance to pick the winning number (one of two numbers on the remaining doors). This new lottery doesn’t have any knowledge of the previous lottery. Now it’s a choice between doors 1 and 3 (say), rather than a choice among doors 1, 2 and 3. Only if one conflates the two “lotteries” does one end up with there being an apparent advantage to switching.
Perhaps what I’m not seeing is why this new decision is NOT a separate choice and why the original decision is not irrelevant when the doors are reduced to 2. In a sense, there are never more than 2 doors, when it comes down to it; the preliminaries are just theatre. No?
Oh, what a twisted web we weave…
Your error, Mike, is in thinking that the probability is associated with your guess, so a second guess has different odds. The probability in question is associated with the placement of the goats and the prize – which only happens once. Monty doesn’t move goats and prizes around between guesses.
Monty (or his helper) randomly places one prize and two goats behind three doors. Prior to placement, each door will have a 1/3 chance of receiving the prize. You guess. Monty opens a door with a goat. The prize doesn’t move, so when you are offered a new guess (switch or not), your original door still has the original 1/3 probability that the prize was placed there. The other two doors still have a 2/3 probability, but now you also know that it is NOT behind one of them (where Monty showed you the goat). Thus, all of that 2/3 chance is now associated with the door you should switch to. It doesn’t matter how many times you guess, the probabilities were cast during the single event of placing the prize and goats.
[Technically, for a single game, once the prize and goats are placed, one door has a 100% chance of having the prize, and the other two doors have a 0% chance, but we’re talking about the probability over a number of games here. On average, each door has a 1/3 chance of receiving the prize.]
Mike: There is no second lottery. You have just learned more information about the first lottery. In particular, his showing you one of the other two doors gives you information (in a probabilistic sense) about what’s behind the other door that he didn’t show you. (It doesn’t give you any new information about the door that you already picked…since he can, and by the problem statement does, always show you one of the other two doors that has a goat behind it. That is one way you know that the probability of it being behind the door you picked must still be 1/3.)
I must be missing something because this still doesn’t sound right, Joel. On the first choice, the contestant has a 1 in 3 chance, because he/she can’t possibly know anything about what is behind each door. However, after a non-winning door is revealed, he/she now has a 1 in 2 chance with the next choice he/she makes, because there are only two doors left (the other door has been taken out of the equation). The situation has indeed changed and the contestant can make a new decision. At the time he/she makes that new decision, he/she has a 50% chance of being correct. No choice now has a 1 in 3 chance, at the time the switch decision is made (because there is no history for that new choice). Only if you undo the open door and remove knowledge of what was behind it can the contestant’s chance go back to 1 in 3.
Gaaa! I must be missing something here as everyone else seems to be happy that switch is the best strategy.
I invite you to look at the table shown in the Wikipedia article and at Vos Savant’s site. Do they not show that the odds favour switching?
The world simply doesn’t work that way. The probability that the prize is behind the door that you originally picked (let’s call it Door 1) cannot change when Monte Hall does something that he always does…show you the goat behind one of the two doors that you didn’t pick.
The only way that the probability that the prize is behind Door 1 can increase is if there was an alternative possibility where it would have decreased.
So, for example, in the variation of this problem where MH opens one of the other doors without regard to what is behind it, then if he opens the door and reveals a goat, then the probability that the prize is behind Door 1 really does increase to 1/2. But that is only because 1/3 of the time he does this, he reveals the prize and the probability that the prize is behind Door 1 drops to zero. So, in this case,
1/3 = (2/3)*(1/2) + (1/3)*0
In this equation, the first 1/3 is the original probability the prize is behind Door 1. The 2/3 represents the fact that 2/3 of the time he opens another door and reveals a goat and the 1/2 represents the new probability that the prize is behind Door 1. The next 1/3 represents the fact that 1/3 of the time he opens another door and reveals the grand prize and the 0 represents the new probability that the prize is behind Door 1.
The world simply has to satisfy these sort of consistency conditions. And, since, under the rules of the real MH problem, MH knows where the prize is and always shows you a goat behind one of the other two doors, there is no way that the probability that the prize is behind Door 1 can change…because there is no “bifurcation” (you always get the same result of him showing you a goat) and 1/3 is simply not equal to 1*(1/2).
The error in your reasoning is in thinking that it is a “new question”, by which you seem to mean that it must be equally probable that the prize is behind either of the two doors. However, it is not a new question: you have gotten to this point through a process in which Monty Hall has shown you another door…and his algorithm for showing you the other door is vitally important in determining what the relative probabilities are.
As I noted in a comment above, think of another case in which Monty Hall ONLY chooses to show you another door with a goat behind it if you have chosen the correct door to begin with (i.e., he tries to entice you to switch only when you have guessed correctly). Clearly in this case, the probability of the prize being behind the two remaining doors is not equal. Rather, if Monty Hall showed you a door with the goat behind it, you know the prize is behind the door you originally picked!
For the algorithm as described in the real Monty Hall problem (i.e., Monty Hall always shows you one of the other two doors that has a goat behind it), what turns out to be true is that you learn no new information about the probability that the prize is behind the door that you originally chose. (How could you, if you know he ***always*** can and will show you one of the other two doors with a goat behind it? Otherwise you could just magically increase your odds of originally guessing correctly from 1/3 to 1/2 by having Monty Hall do something that he always does!) Hence, the probability for the door you originally chose to contain the prize remains 1/3 and the probability for the prize to be behind the other door must be 2/3.
Another way to think about it is that, if you switch, Monty Hall has essentially allowed you to pick two doors…the one that he showed you that does not contain the prize and the other one. So, it makes sense that you win 2/3 of the time if you switch.
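For anyone who wants to check the 2/3 claim directly, here is a minimal simulation sketch of the standard rules (the host knows the prize door and always opens a non-chosen goat door); trial counts are arbitrary:

```python
import random

trials = 100_000
stick_wins = 0
switch_wins = 0

for _ in range(trials):
    prize = random.randint(1, 3)
    choice = random.randint(1, 3)
    # The host opens a door that is neither your choice nor the prize.
    # (When he has two goat doors available he opens the lower-numbered
    # one here; that choice doesn't affect the stick/switch win rates.)
    opened = next(d for d in (1, 2, 3) if d != choice and d != prize)
    # Switching means taking the door that is neither chosen nor opened.
    switched = next(d for d in (1, 2, 3) if d != choice and d != opened)
    stick_wins += (choice == prize)
    switch_wins += (switched == prize)

print(stick_wins / trials)    # ~1/3
print(switch_wins / trials)   # ~2/3
```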
But isn’t it also true to say that he has essentially allowed you to pick either of:
1. The opened door and the remaining unchosen door
2. The opened door and the originally chosen door
Both of those have the same odds, so it’s still 50/50.
I don’t think (and, of course, neither do you) it’s “magically” increasing the odds to 50/50 by having Monty do what he always does. But I don’t see why it’s not a separate choice.
Consider this thought experiment. Suppose the show must go on. Whatever happens. Contestant makes original choice and Monty opens one of the other doors. Contestant has a seizure and has to be taken to hospital. Substitute contestant is now asked whether he/she wants to keep the original contestant’s choice or switch. From the substitute contestant’s point of view, it’s a 50/50 choice. That Monty is asking if he/she wants to stick or switch is irrelevant, since it wasn’t the substitute contestant’s original choice, it’s now a straight choice between two possibilities – the prize is behind one of the two remaining doors.
Another thought experiment occurs to me. Again, the show must go on. One of the doors isn’t working and can’t be fixed in time, before the contestant chooses originally. Consequently, they put the prize behind one of the other two doors (if it isn’t already) and the choice is simply between one of the two working doors. Is that different from a smoothly running show, after Monty opens one of the doors, given that the prize is definitely not behind the non-working door – similar to Monty having opened it? The only difference is that, this time, the contestant didn’t get a first choice.
No. If you stick with the door you have, you only win if the prize was originally behind the chosen door. If you change, you win if the prize was behind either of the other two doors. I explained this in detail here: https://tamino.wordpress.com/2016/01/19/monty-hall/#comment-93107
Yes you do. By the statement of the problem, MH ***always*** shows you the goat behind one of the other two doors and so, by your logic, your chance of winning with your original guess ***always*** becomes 1/2 just by virtue of MH doing something he always can and does do. That makes no sense.
As for your thought experiment about the first contestant having a seizure: It doesn’t matter. Once MH has reacted by showing everyone the goat behind one of the doors that wasn’t picked, the odds of the prize being behind the door originally picked don’t change, but the odds of the prize being behind the remaining door go up to 2/3.
You seem to have some odd ideas about probability where you have invented your own axioms and one of them seems to be that the odds of something always have to be 50:50, no matter by what means you have come to the point of having the two choices. That idea is simply nonsense.
The change to 100 doors is brilliant. I’ve been aware of this problem and understood the correct answer for many years, but I’ve never seen such an easy way to dispel the counterintuitiveness. Very clever.
I don’t get this. If the ⅔ argument doesn’t convince, why should 9/10, 99/100, or 999/1000? In all of these cases you are left with only 2 doors.
Because in the 100 door case, assuming Monty does not reveal the prize, Monty has (1) well and truly proven he knows which door the prize is behind and (2) shown you that all but one of the doors you originally did not choose do not contain the prize.
Let’s put 100 chips in a pile one of which is secretly marked. You take one and hold on to it in your hand. Monty now shows you 98 chips from the remaining pile which are not marked. Would you keep the chip in your hand or choose Monty’s remaining chip? Clearly Monty is giving you rather important information that you should switch.
It is assumed that Monty must always offer the option of a switch, or he might be trying to tempt you away from a correct initial choice and, in order to do that, he must know the location of the prize to avoid revealing it accidentally. The number of doors doesn’t alter the conclusion (that switching is the better option).
It doesn’t convince because it is logically better; it convinces (some of the time!) because it dramatizes the difference in the odds in a way that helps intuition grasp the problem.
TrueSceptic- I think that the reason it is harder for most people to grasp the answer when dealing with only 3 doors is that they haven’t yet bought into the idea that information is being made available to them by Monty’s reveal. They haven’t put it into the context in which the puzzle is stated (that he knows from the outset where the prize is, and will never reveal the prize when he opens the door). They are treating it more as though he randomly opens a door, even though that is expressly not the case.
But with 100 doors, as soon as Monty starts opening door after door, the contestant starts to see that he is actively filtering out doors, which they should have realized from the beginning. I don’t think he would have to go very far in this scenario before they would finally get it.
Interestingly, in the ‘Monty doesn’t know where the prize is’ scenario, the 100 door approach can help there too if a person is having a problem trying to see why it is not the same 1/3 probability (keeping their door) as the original puzzle. How many trials would it take for Monty to finally get down to only two doors if he kept inadvertently revealing the prize, and had to start over?
You must be right because some people *are* convinced by larger numbers of doors when they are not in the case of 3. It’s just that I don’t see why!
Using the (false) logic that the odds change to 50/50 when Monty opens 1 of 3 doors, if I watch Monty open 98 of 100 doors, why wouldn’t my faith in my original guess get stronger as he does so? As each door reveals a goat, my original guess remains as potentially correct, until we are left with just the 2 doors.
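That scaling argument is easy to check in code. A sketch, assuming the host always opens every non-chosen goat door but one: switching then wins exactly when the first pick was wrong, so the win rate approaches (n-1)/n:

```python
import random

def switch_win_rate(n, trials=100_000):
    """Fraction of games won by switching with n doors, when the host
    opens all other goat doors and leaves just one closed alternative."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n)
        choice = random.randrange(n)
        # With every other goat door opened, the single remaining door
        # holds the prize whenever the first pick was wrong.
        wins += (choice != prize)
    return wins / trials

print(switch_win_rate(3))     # ~0.667
print(switch_win_rate(100))   # ~0.99
```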
I agree that my brain falls for the obvious answer, but the question is why does my brain fall for the obvious answer in this case and not in the 2-people-with-the-same-birthday problem? It’s not the same as the 2-people-with-the-same-birthday problem, because that problem doesn’t have an obvious answer. I have to do the calculation, and once I’ve done it, I see the true answer. I am initially surprised, but my brain doesn’t keep telling me to think incorrectly about it.
In the Monty Hall problem, even after I’ve seen the calculation, my brain keeps telling me the correct answer doesn’t make sense, just as strongly as it told me the wrong answer before I saw the calculation. The key is to see that Monty Hall doesn’t use random choice to choose the door to reveal. After Monty opens a door, my second choice isn’t as random as my first choice was.
But even after knowing this, I still feel the urge to say there is no point in switching doors. Does that mean the reasoning about how to choose the door and the reasoning about how to solve the Monty Hall problem are done by different areas of the brain?
But who or what is Monty Hall in the AGW acceptance/rejection problem? I think Steven Goddard is no Monty Hall. I think he really believes that showing a graph of adjusted data next to a graph of unadjusted data proves fraud. Roy Spencer isn’t Monty Hall; they don’t even use his data anymore. They use RSS, even after Carl Mears splains it to ’em. I guess Lord Monckton is Monty Hall.
I had a similar problem coming to grips with the following:
Buses arrive randomly at a bus stop such that the average time between buses is 10 minutes. If you turn up at the bus stop, on average how long do you wait for a bus? The answer is 10 minutes. Which I found incredibly confusing.
I’ll leave you to resolve this apparent paradox.
(My son, who only attended school to age 15 because of anxiety problems, had no trouble with Monty Hall at all…)
This bus paradox is interesting. It may be tempting to imagine that, since we seldom show up at the stop exactly at the time when a bus departed, the average time we have to wait for the next bus ought to be less than the average time between buses. What is then overlooked is that there are no correlations at all between the times of arrival of the buses.
The times of arrivals of the busses are akin to the moments of individual atom disintegrations in a radioactive sample. If the average time between individual disintegrations is 10 minutes, this simply means that, after an atom disintegrates, the average time before another one disintegrates is 10 minutes. But since there are no correlations at all — all the atoms are independent — then this also means that the time before the next disintegration occurs always will be 10 minutes whatever the starting time. If a long time lapses before a disintegration occurs, that doesn’t make it any more likely that another one will happen soon.
Indeed – if 2 buses leave 10 seconds apart, the chances that you turn up between them is pretty small. If 2 buses leave 3 hours apart, the chances of you turning up between them is pretty big. If you do the maths, you end up waiting (on average) exactly the same time as the average time between buses.
Yes, that’s a good qualitative explanation that directly undercuts the intuitive basis for the wrong conclusion. Another simple positive argument that occurred to me is that it doesn’t make any difference whether you arrive at the bus stop by walking there, or were dropped off by one of the busses. The time of arrival of the bus that dropped you there also is completely random (uniform distribution of probabilities). So, since the way you get there doesn’t matter, the average time that you need to wait just is the average time between the arrivals of the busses — 10 minutes (as it would be, ex hypothesi, if you had gotten there by bus).
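The bus “paradox” also simulates nicely. A sketch, assuming bus arrivals form a Poisson process (exponential gaps with a 10-minute mean) and a passenger who turns up at a uniformly random time:

```python
import bisect
import random

MEAN_GAP = 10.0   # average minutes between buses
N_BUSES = 200_000

# Build a long schedule of bus arrival times with exponential gaps.
arrivals = []
t = 0.0
for _ in range(N_BUSES):
    t += random.expovariate(1 / MEAN_GAP)
    arrivals.append(t)

# Passengers show up at uniformly random times (staying clear of the
# end of the schedule so there is always a later bus to catch).
waits = []
for _ in range(20_000):
    s = random.uniform(0, arrivals[-1] * 0.9)
    next_bus = arrivals[bisect.bisect_right(arrivals, s)]
    waits.append(next_bus - s)

print(sum(waits) / len(waits))   # ~10 minutes, not 5
```

The memoryless property of the exponential distribution is doing the work here: however long it has been since the last bus, the expected wait for the next one is still 10 minutes.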
What about Sleeping Beauty? Two-thirds or half?
but what if i really like goats?
It is always critical to well define the objective function that defines best/winning outcome.
Ah, *you’re* the guy in the xkcd cartoon magma linked! My hat’s off to you–much lower carbon footprint.
I was shown this probability exercise on beer coasters at a pub. On the back of one coaster was written $1,000,000, the other two blank. The explanation was basically the same, except that there were no prizes behind the other two coasters. Stick or switch?
It took me an hour to finally get it, and about half an hour after it was explained to me. The lesson? Probability can be counterintuitive.
Even if I use the same formula with 100, 1000, or a billion coasters, most people still can’t see it. They think the odds magically switch to 1/2 when all but one coaster in the ‘switch’ pile is revealed. If I persevere, ramping up the number of coasters, eventually most people get it.
It’s one of the best, simplest demonstrations I know to get uninformed people thinking about probability.
Answer: switch – every time.
[I’ve tried asking the holdouts to imagine a bag of 1 million white marbles and 1 black. They pick one without looking at it and keep it concealed in their hand. Then I deliberately pull out 999,998 white marbles (cos I can see them), leaving one in the bag. What are the odds they are holding a black marble? This works quite well. Actually doing the beer coaster exercise 10 times has also done the trick. They get it during, rather than after]
Related question: for those convinced that the odds become 50/50, why wouldn’t they randomly switch (ironically, this would, on average, increase their real chances even if they don’t know it)? Not switching implies that, in fact, they somehow don’t even believe it’s 50/50: they feel the odds favour their original choice. Or is this simple inertia, or unwillingness to look indecisive, as someone said earlier?
In a sense, this is the inverse of the well-known gambler’s fallacy, whereby past outcomes are ‘felt’ to affect the odds of future outcomes. In Monty Hall, the present choice is ‘felt’ to collapse the previous 2:1 odds to 50:50.
In both cases, there seems to be what one might call an intuition of (inappropriate) causality. We don’t seem to like things to be radically random.
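The aside above about random switching is easy to verify exactly: a contestant who genuinely believed the odds were 50/50 and flipped a coin would average the stick and switch win rates. A tiny sketch using exact fractions:

```python
from fractions import Fraction as F

p_stick = F(1, 3)    # always stick: win rate 1/3
p_switch = F(2, 3)   # always switch: win rate 2/3

# A coin-flipping contestant uses each strategy half the time.
p_coin = F(1, 2) * p_stick + F(1, 2) * p_switch
print(p_coin)   # 1/2 -- better than always sticking, worse than always switching
```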
Another way to think about it is this:
Before the game you are given a choice of two strategies, “stick” or “flip”.
Then you pick your door. The odds you select the prize are 1/3; wrong 2/3.
With the “stick” strategy you stay with your original selection and win 1/3 of the time. With the “flip” strategy you flip the odds. If you selected the right one first (1/3) you lose, if you selected the wrong one first (2/3), you win.
Psychologically, to answer Martin M’s question, if the host had asked you if you wanted to “flip the odds” rather than if you wanted to “switch your selection”, then your brain might have had an easier time of it.
This seems like quite a good explanation, to begin with. Your first choice has a 33.3% chance of being the correct one, so switching gives you a 66.6% chance of being right. However, you aren’t switching to an “or” choice, where you win if either of the other doors is correct, you’re switching to a specific one of the other two. That specific one door only had a 33.3% chance of being correct, if you originally chose it. Why does that chance increase to 66.6%, just because you flipped?
I think the key point is that if Monty knows where the car is and always reveals a goat first, then you effectively are switching to an “or” choice. It’s like you get to switch to the “or” choice and then the host shows which of the 2 ‘ors’ was wrong. Doing so doesn’t change the odds that one of them was right (2/3), because he always shows the wrong one first.
If the host didn’t know where the car was, then their choice of which door to open would be random and would only provide information on that door.
I found the Bayes theorem solution on Wikipedia very helpful for understanding how the host’s prior knowledge and fixed behavior adds information.
for the same reason that the opened door had a 33.3% chance of being correct, but it always has a goat.
“However, you aren’t switching to an “or” choice, where you win if either of the other doors is correct” ..
Actually you are….and you do win if you choose the closed door
unless of course you previously had it right.
Because you have knowledge of which of the two not to pick. You’re not going to take the goat (unless you want the organic lawn mowing service and/or goat stew.)
Mike: In fact the situation is that you are in effect switching to an “or” choice. Monty Hall could just as easily have said, “Do you want to stick with Door #1, or switch and win if the prize is behind Doors #2 or 3?” That is what he has effectively done by allowing you to switch after showing you which of Doors #2 or 3 the prize is definitely not behind.
Mike: Just to make it crystal clear, let me explicitly enumerate the possibilities for you so you can see this (supposing that you picked Door #1):
* If the prize is behind Door #1, he shows you either what’s behind Door #2 or Door #3.
* If the prize is behind Door #2, he shows you what’s behind Door #3.
* If the prize is behind Door #3, he shows you what’s behind Door #2.
Hence, you win if you switch whenever the prize is behind Door #2 or Door #3. Ergo, it is exactly equivalent to allowing you to take both doors.
Correct, but clearer IMO if you draw a table.
A stats prof got me with this one once. I was sure he was wrong, but then wrote a few lines of code in my head to do a simulation and realized he was absolutely correct.
Later, when teaching applied stats (Six Sigma) in industry, I had the class run the experiment in groups of 3 (Monty, Contestant, scorekeeper) just using playing cards (two black (spade or club) goats, and one red (heart or diamond) sports car). The results always came up as expected: switching led to a win 2/3 of the time.
I then followed with a discussion of the nasty comments leveled at Vos Savant for her solution to the problem, and pointed out that none of the people who got it wrong ever bothered to collect any data. They could all have run a simple game and avoided showing off their own hubris and stupidity. For the class, the moral of the story was: DATA WINS!
What I don’t understand is why all those who were so sure-but-wrong didn’t simply draw a table of the possible outcomes, as shown at Vos Savant’s site and the Wikipedia article. That’s all it took to convince me. You don’t need to play the game (in some form) x times to see what the probabilities are.
I remember a similar problem where a car drives up with two children you cannot see in the back seat. You have to guess the sex of the one on your side. The odds are 50:50, boy or girl.
Then a girl gets out of the other side of the car. Now the odds are 2/3 boy and 1/3 girl. How can the sex of the other child affect the one by your door?
Then someone tells you that the girl who got out of the car is the older child. Now the odds are 50:50 again!!
It is so unintuitive
Actually…I think there are some subtle problems with your statements of these problems that don’t make your answers quite correct.
A better statement of the 1st would be this: Considering all two-child families that have at least one girl, what is the probability that the other child is also a girl?
A better statement of the second would be this: Of all two-child families where the older child is a girl, what is the probability that the younger child is also a girl?
This hurts my brain.
1. Girl on your side, girl on the other
2. Boy on your side, boy on the other
3. Girl on your side, boy on the other
4. Boy on your side, girl on the other
Girl gets out other side, two options are gone (2 and 3), leaving
1. Girl on your side, girl on the other
4. Boy on your side, girl on the other
50/50 boy/girl your side.
The probability changes if the side of the car isn’t specified. Then it’s 2/3.
1. Girl, girl
2. Boy, boy
3. Girl, boy
4. Boy, girl
You look away, hear the door open and close, and then see a girl sitting on the bonnet (no one else is around, no kids were in the front seat – those windows are clear)
Option 2 is gone, leaving
1. Girl, girl
3. Girl, boy
4. Boy, girl
2/3 probability the other child is a boy.
If I’m right, which I doubt, the spatial requirement (this side/that side) changes the 2/3 odds to 1/2.
I doubt I’m right because my brain tells me you could see which side the girl got out and it wouldn’t matter – the probability should be 2/3 the other child is a boy. Which makes me wonder if there is something wrong with A.
Love some help here. Good problem.
This seems like a poorly-stated version of the ‘boy-girl’ problem. First, as the table below shows, the odds after the girl in the other seat is revealed are still 50:50, because her exit rules out, not one, but two of the four possible cases:

1. girl on your side, girl on the other
2. boy on your side, boy on the other (ruled out)
3. girl on your side, boy on the other (ruled out)
4. boy on your side, girl on the other
Second, the age of the girl is irrelevant in this formulation because there is no established linkage between age and sex in the problem.
However, the problem can be stated in a way that maps on to this, I think:
So, in your version, you’d have to know that the kids were siblings or some such, and bystanders would have to give you the corresponding information. A little tricky to do in a crisp way, but I think perhaps not beyond the bounds of creativity.
I’m enjoying trying to wrap my head around location / no location making a difference to the odds. I still can’t see it in a way that doesn’t require running the options. I have a sense that if I was smarter or a bit better informed, I probably could.
That paradox is far more complex than the Monty Hall problem and depends very heavily upon exactly how the problem is stated and how received information, if any, is viewed.
Been struggling with this one. After much reflection, I’ve concluded that in this case, the obvious answer really is the correct one: the odds stay 50:50 throughout.
My reasoning rests on problem specification: the goal is to predict the sex of the child in one particular seat. That’s different from the MH problem, where you want to predict the location of the prize, and crucially from the vos Savant boy-boy problem, which specifies a particular *distribution*: 2 boys. By specifying one location, it seems to me that all linkage between the genders of the children in the back seat is cut for purposes of this problem; the only thing determining the probability is the boy-girl distribution in the relevant population.
To put it another way, it seems to me that this maps well onto the classic ‘gambler’s fallacy’ paradigm. Just as the odds of a fair coin toss aren’t altered by a preceding run, no matter how lengthy, the odds of a boy being in one particular seat aren’t altered by who is sitting next to him. (Presuming he doesn’t get to choose!)
Conversely, there are differing calculable odds for particular sequences of coin tosses, just as there are differing probabilities for particular seating configurations.
The moral: problem specification rules–and let me quickly specify that ‘rules’ is here a verb!
I may have mis-stated the problem since I heard it a very long time ago. The two children are related.
It was explained to me that the possibilities are that you can have four combinations of children, listed in order of birth:

a. boy, girl
b. boy, boy
c. girl, girl
d. girl, boy
Knowing one child is a girl eliminates choice b.
For the three remaining choices, twice the girl is already viewed, so the remaining child is a boy. Only for choice c is a girl remaining. Odds 2/3 the remaining child is a boy. If you know that the older child is a girl, then choice a is eliminated and the two remaining choices give 50:50 results.
I cannot reconcile this analysis with Doc’s analysis above which gives different odds. The issue seems to me to be the analysis of choice D above which is eliminated by Doc and left in by my analysis. I do not claim statistical skills at this grade. One analysis must be incorrect, since they contradict.
I am interested in which is the correct answer. I like Doc’s analysis better. Perhaps the result is different if the question did not specify the location (this does not really make sense to me)? Say what is the sex of the remaining sibling if they are coming out the front door of a house. Does that affect the analysis of choice d above?
Interesting. Now I can see why the age could be considered relevant, arising not from the problem statement, but from an analysis. Maybe I need to rethink this… darn.
I will say that our tables aren’t listing the same thing–mine was analysing possibilities by location, not by birth order.
In reply to Michael Sweet, it all depends HOW you know “one child is a girl”. If you see one of the children, or are told that one of the children is a girl, then the answer is 1/2.
If it’s a prerequisite that only families where “one child is a girl” are considered in the 1st place then the answer is 2/3.
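That distinction between the two conditioning schemes can be demonstrated with a short simulation sketch (random two-child families, each child a boy or girl with equal probability; sample sizes are arbitrary):

```python
import random

families = [(random.choice("BG"), random.choice("BG")) for _ in range(200_000)]

# Case 1: you see one specific child (say, the one who got out of the
# car) and it happens to be a girl. P(the other is also a girl) -> 1/2.
seen_girl = [f for f in families if f[0] == "G"]
print(sum(f[1] == "G" for f in seen_girl) / len(seen_girl))   # ~1/2

# Case 2: you only know the family has at least one girl (the
# prerequisite framing). P(both are girls) -> 1/3, so the "other"
# child is a boy 2/3 of the time.
at_least_one = [f for f in families if "G" in f]
print(sum(f == ("G", "G") for f in at_least_one) / len(at_least_one))   # ~1/3
```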
Damn, another botched HTML tag. This is getting to be a habit. Sorry.
Yeah, shame there isn’t a Preview option here.
Another of my favorite probability problems is this one: Imagine that I put 100 slips of paper, each with a unique number on it, into a hat. You know nothing about the distribution of the numbers.
I charge you $1 to play the following game and you get $4 if you win (i.e., I give you 4:1 odds): You draw the paper slips out of the hat one at a time and look at the number on the paper. With each slip of paper you draw, you can choose either to stop or to continue drawing the next slip of paper. You win if you stop on the highest number that is on any of the 100 slips of paper; otherwise, you lose.
It may seem like you are bound to lose money on this game, but in fact there is a simple strategy that you can easily show will give you at least a 1 in 4 chance of winning the game, and hence allow you to make money over the long run (that is, if you play the game many times). What is this strategy?
The next question is how should this strategy be optimized in order to maximize your odds of winning and what are the odds of winning in that case? (This should be answered for the limiting case where there are N slips of paper in the hat, rather than just 100, and you let N go to infinity. For finite N, I don’t know if there is any particularly simple solution.)
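For readers who want to spoil the puzzle, here is a simulation sketch of one natural family of strategies: let the first k slips go by, remember the best number seen among them, then stop at the first later slip that beats it. (This is the classic “secretary problem” setup; the cutoffs below are illustrative.)

```python
import math
import random

def win_rate(n, k, trials=50_000):
    """Estimate the chance of stopping on the overall maximum when we
    skip the first k of n slips, then take the first slip that beats
    the best of those k (losing if no later slip does)."""
    wins = 0
    for _ in range(trials):
        slips = [random.random() for _ in range(n)]  # unknown distribution
        best_seen = max(slips[:k])
        pick = None
        for x in slips[k:]:
            if x > best_seen:
                pick = x
                break
        wins += (pick == max(slips))
    return wins / trials

print(win_rate(100, 50))                    # ~0.35, comfortably above 1/4
print(win_rate(100, round(100 / math.e)))   # ~0.37, near the classic 1/e optimum
```

Skipping half the slips already wins at least 1/4 of the time (you win whenever the best slip is in the second half and the second-best is in the first half, which alone has probability about 1/4); the k ≈ n/e cutoff is the well-known optimum as n grows.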
Thanks, all, for trying to educate me. I think the issue is whether the first decision is relevant, or not. I get that, if at the start, Monty said, “you can choose either door 1 or both doors 2 and 3”, then choosing the two doors is the best bet. But that isn’t the actual game and I’m having trouble seeing that adding a second decision point makes it so. It seems to me that it’s subjective, whether the first decision is relevant. For example, take the hypothetical (but technically possible) case where Monty misheard the contestant, who said Door 1 but Monty heard it as Door 3. Monty reveals door 2 (for this example) and then asks for a switch or stick decision. Whether it is stick or switch depends on the observer. If the contestant switches, Monty will think he’s sticking. So would it be the best strategy from Monty’s point of view, but not from the contestant’s point of view?
So it all boils down to whether that initial decision should be brought into the calculation of which strategy is best. Remember that the initial decision wins nothing and one can think of many hypothetical situations in which the final decision is indistinguishable from there being only 2 doors to begin with. It might be called “stick or switch” but isn’t it really “door x or door y”?
Mike, you are seriously overthinking the problem.
All you need to understand is that your initial choice is a 1/3 probability, that Monty will (always) reveal a non-prize door knowing exactly where the prize is (that’s key, don’t forget that), and that results in the two doors that you did not choose having a combined probability of 2/3. That will not change, and your 1/3 probability cannot change either under these specific rules.
Once he shows that one of those 2/3 probability doors has zero probability of there being a prize, the remaining door MUST have the 2/3 probability (remember that cannot change in this scenario).
I know that it is counter intuitive, but once you see it, it is like knowing the answer to a magic trick, it becomes so obvious that you cannot imagine how you missed it.
But isn’t that all true *only* if the *whole* exercise can be taken as one problem and *only* if one regards the opened door as still in the game. Let’s see if I can tabulate it. Numbers are the doors, “b” is booby prize, “p” is main prize:
At start of contest, these are the possible configurations:

1: p, 2: b, 3: b
1: b, 2: p, 3: b
1: b, 2: b, 3: p

So each door has the same probability of being correct. Contestant chooses, say, door 1 and a losing door is revealed. That revealed door is no longer in the contest (there is no possibility of the contestant choosing that door). So we now have the following possibilities (suppose door 3 was the one revealed):

1: p, 2: b, 3: b (open)
1: b, 2: p, 3: b (open)

Note that these collapse to just two, since the opened door is no longer available for choosing:

1: p, 2: b
1: b, 2: p

(Door number 2 above just means “the other door”) So now, if the contestant sticks with 1, it has the same chance of winning as switching.
I think the difference between my view and the consensus is that the consensus thinks that the possibilities given before the first choice don’t change before the second choice and that, therefore, switching from 1 to one of the other doors gives the best chance of success. So the question is why is my representation of the possibilities, as we step through the contest, wrong?
[Gosh, I can barely believe that I’ve spent so much time “overthinking” this, given its importance to my life! :) Interesting, though.]
You can’t just ASSUME that the two possibilities that you end up with are equally probable. Without loss of generality, let’s suppose (as you did) that you pick Door #1. Now, let’s consider what happens in each case of where the prize actually is:
If the prize is behind Door #2 (which it will be a third of the time), then MH has to show you what’s behind Door #3. So, this scenario occurs 1/3 of the time.
If the prize is behind Door #3 (which it will be a third of the time), then MH has to show you what’s behind Door #2. So, this scenario occurs 1/3 of the time.
If the prize is behind Door #1 (which it will be a third of the time), then MH can show you either Door #2 or Door #3. So, the scenario where it is behind Door #1 and he shows you Door #2 occurs 1/6 of the time and the scenario where it is behind Door #1 and he shows you Door #3 occurs 1/6 of the time.
Now, you can go through these and very simply figure out in each case what happens if you stick with your original door or you switch…It is clear that you always win by switching if the prize is either behind Door #2 or Door #3. You only win by sticking if the prize is behind Door #1.
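The enumeration above can be tallied exactly, for instance with Python’s exact fractions (the doors and weights come straight from the four scenarios, given that you picked Door 1):

```python
from fractions import Fraction as F

# (prize door, door Monty opens, probability of the scenario)
scenarios = [
    (2, 3, F(1, 3)),   # prize behind Door 2: he must open Door 3
    (3, 2, F(1, 3)),   # prize behind Door 3: he must open Door 2
    (1, 2, F(1, 6)),   # prize behind Door 1: he may open Door 2...
    (1, 3, F(1, 6)),   # ...or Door 3
]

stick = sum(w for prize, opened, w in scenarios if prize == 1)
switch = sum(w for prize, opened, w in scenarios if prize != 1)
print(stick)    # 1/3
print(switch)   # 2/3
```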
Here is an almost identical way of looking at it, but stated just a little differently.
Originally each door has a probability of 1/3. You pick one. The one you pick has a probability of 1/3. There are the two that you did not pick. Their combined probability is 2/3. Now at this point you know that Monty will ALWAYS be able to pick one of the two remaining doors, showing that it does not have the grand prize. Therefore when he picks one of the two remaining doors and shows that it does not have the grand prize you actually have *no more information* about whether the door you picked has the prize. Because of this the probability that it has the grand prize has not changed. And if its probability of having the grand prize has not changed the door that neither you nor Monty picked must have the remaining probability (2/3) of being the door with the grand prize.
End of story. We don't actually have to go any further in the analysis.
We can, however, look at the remaining door and ask ourselves what information we have acquired about it when Monty picks the other door. This is simply extra credit.
Originally each door has an equal probability of 1/3. You pick one. The remaining two have a combined probability of 2/3. You know that Monty will not pick your door. You also know that no matter which door Monty picks, he will necessarily pick a door that does not have the grand prize. Thus when he picks a door that is not your door, you acquire no additional information about whether your door has the grand prize. You also know that when he picks one of the two remaining doors, the door he picks will not have the grand prize.
As such, you do have more information regarding the door neither of you picked. Before, it had a probability of only 1/3, and the combined probability of the two doors you did not pick was 2/3. When he picks one of the two remaining doors, you gain no additional information as to whether one of the two doors you did not pick has the grand prize, since from the beginning you knew that he would always be able to pick one of the two remaining doors and that the door he picked would not have the grand prize. Therefore the combined probability that one of the two doors you did not pick has the prize remains 2/3. However, the probability that the door which Monty picks has the prize is now 0. Therefore the probability that the remaining door – the one that neither of you picked – has the prize must be 2/3.
It all comes down to information. What information do you have initially, that is, when you pick a door? That your door has 1/3. That the two remaining doors have a combined 2/3. That Monty will always pick a door that does not have the grand prize.
When he actually picks a door, what additional information do you have? No more information regarding whether the door you picked has the grand prize. After all, Monty will never pick the same door you pick. No more information regarding whether one of the two doors you did not pick has the prize. Monty will never pick a door that has the grand prize.
But you do have more information regarding the door that Monty actually picks. It now has a probability of 0/3. Likewise, you have more information regarding the door that neither of you picked. When you did not pick it, that door still had a 1/3. When Monty did not pick it, the probability of the door that you did not pick and then Monty did not pick suddenly changed to 2/3. We know this in the same way and for the same reason that the probability of the door Monty picked suddenly dropped to 0/3. And we know this even before he opens the door that he picks – as soon as he makes known which door he has picked – since he will never pick a door that has the grand prize.
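The probability bookkeeping above (your door stays at 1/3, Monty's door drops to 0, the remaining door rises to 2/3) is just a Bayesian update. A minimal sketch, assuming you picked Door 1 and Monty opened Door 3:

```python
from fractions import Fraction

# Prior: each door is equally likely to hide the prize.
prior = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}

# Likelihood that Monty opens Door 3, given where the prize is.
# (He never opens your door, Door 1, and never opens the prize door.)
likelihood = {1: Fraction(1, 2),  # prize behind your door: he picks 2 or 3
              2: Fraction(1),     # prize behind Door 2: he must open Door 3
              3: Fraction(0)}     # he never opens the prize door

joint = {d: prior[d] * likelihood[d] for d in prior}
total = sum(joint.values())
posterior = {d: joint[d] / total for d in joint}

for d in (1, 2, 3):
    print(f"Door {d}: {posterior[d]}")
# Door 1: 1/3
# Door 2: 2/3
# Door 3: 0
```

The update leaves Door 1 exactly where it started, which is the "no new information about your own door" point made above.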
Yeah, I get that the other two doors had a combined probability of 2/3, so it is twice as likely to be behind one of the other two doors. And 2/3 of the time it is, as the application I linked to earlier shows. Incidentally, you can have 2 doors on that simulation and the chance comes out at 50%, as expected, or 10 doors and the chance comes out at 10% for sticking.
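The 2-door and 10-door figures mentioned above are easy to reproduce. A rough simulation sketch (not the linked application – the details here are my own), for the version where Monty opens every losing door except one:

```python
import random

def stick_win_rate(n_doors, trials=100_000, seed=1):
    """Estimate the chance of winning by sticking with the first pick.

    Monty opens all but one of the other doors (never the prize), so
    sticking wins exactly when the first pick was already right."""
    rng = random.Random(seed)
    wins = sum(rng.randrange(n_doors) == rng.randrange(n_doors)
               for _ in range(trials))
    return wins / trials

for n in (2, 3, 10):
    print(n, stick_win_rate(n))  # approaches 1/2, 1/3, 1/10
```

Sticking wins about 1/n of the time regardless of what Monty reveals afterward, which is why the 10-door case comes out near 10%.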
Now I’m trying to figure out why it’s invalid to retrospectively transfer the revealed losing door, to my original choice. That is, suppose I picked door 1 then door 2 is revealed to be a losing door. I can’t say, “oh, if I’d been able to pick ‘door 1 or door 2’ that would have given me a 2/3 chance and now door 2 is shown to have a 0 chance, door 1 must now have a 2/3 chance”. But I’m not sure why I can’t say that, except that would mean the combined chance of the two remaining doors having the prize would now be 4/3! I feel it must be related to the fact that such a choice is impossible, in practice, as one never knows the number of the revealed losing door until after the original choice is made, whilst the revealed door is always in the “other 2 doors”. I just can’t get to the “oh, of course” moment, that I finally got to with the main crux of the problem. :)
Just to confirm, I do get it now – just trying to wrap up the pieces!
Timothy, very well stated IMO.
The reason that you can’t do that stems from the statement of the problem. I.e., we know that what MH does is choose to show you a goat behind one of the two doors that you did not pick.
Suppose we proposed a different situation whereby if you chose Door #1, then MH chose (with equal probability if he had a choice) to show you a goat behind either Door #1 or Door #2. Of course, if he showed you the goat behind Door #1, you’d be shit out of luck (at least presuming you aren’t given any opportunity to switch after seeing you made the wrong choice) but suppose you are one of the lucky ones who gets shown a goat behind Door #2. In this case, your logic would be correct: The probability that the prize is behind Door #1 would be 2/3 (and the probability it is behind Door #3 only 1/3).
Note that there is no inconsistency in the fact that the probability of it being behind Door #1 has increased from 1/3 to 2/3, because you are amongst the lucky 50% who get shown a goat behind Door #2. If you were amongst the unlucky 50% who get shown a goat behind Door #1 (i.e., the door you picked), your probability of winning with that choice would drop to zero. So, mathematically,
1/3 = (1/2)*(2/3) + (1/2)*0.
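That decomposition can be checked by enumerating the variant exactly. A sketch (the door numbering follows the comment above; the code itself is illustrative, not from the thread):

```python
from fractions import Fraction

# Variant: you pick Door 1, and Monty shows a goat behind Door 1 or
# Door 2 -- choosing at random when both of those hide goats.
cases = []  # (prize door, door shown, probability of this scenario)
for prize in (1, 2, 3):
    goat_options = [d for d in (1, 2) if d != prize]
    for shown in goat_options:
        cases.append((prize, shown, Fraction(1, 3) / len(goat_options)))

# Condition on the lucky outcome: Monty shows the goat behind Door 2.
shown2 = [(prize, p) for prize, shown, p in cases if shown == 2]
p_lucky = sum(p for _, p in shown2)
p_win_stick = sum(p for prize, p in shown2 if prize == 1) / p_lucky

print(p_lucky, p_win_stick)  # 1/2 2/3
# Overall: 1/3 = (1/2)*(2/3) + (1/2)*0, as stated above.
```

In this variant (unlike the standard game), being shown the goat behind Door #2 really does raise your door's chances to 2/3, because Monty could have exposed your own door and didn't.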
It all depends on “the other two doors,” so if Monty gets that bit wrong by mistaking the selection, switching would lower the odds to 1/3. If the savvy statistician contestant is aware of the error, however, they would know to stick.
You’ve identified that a flaw in the structure ruins the example.
Isn’t it really “door x or door y”?
No, it’s “door x, or door y and z.” Door z is opened by Monty, leaving door y as the best chance of winning – twice as likely as door x, the initial choice.
If you really think this all comes down to subjectivity, I would be happy to take your money. For example, we could set up a situation where you are the Let’s Make a Deal show financial officer and I am the contestant. Then we can play out the scenario, and since you think it’s 50:50, you would surely be willing to give me odds where, if it is 50:50, you would come out ahead (but such that if it is really what we are saying…that I can win 2/3 of the time…I would come out ahead). Then we could play this out, say, 10000 times with me exercising the “switch” strategy and see who comes out ahead.
My point is that there is an objective answer to the problem as long as it has been spelled out clearly how Monty Hall behaves (i.e., that he always shows you the goat behind one of the two doors that you didn’t pick). This is rigorous mathematics, not vague philosophizing.
To anyone struggling with this, I’ll suggest doing what I did when I first encountered the question many years ago. Make a simple spreadsheet that simulates it. You can run hundreds or thousands of examples and you’ll see that without any doubt, you’re better off switching. That should get you past the question of whether or not you should switch. Once you clearly see the answer, you may find it easier to understand the reasoning. Or perhaps you’ll think of a clearer way to explain why it’s better to switch.
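In place of a spreadsheet, the same experiment takes a few lines of Python. A sketch of the simulation (the function name, seed, and trial count are my own choices):

```python
import random

def play_round(rng):
    """One round of the standard game: returns (stick_won, switch_won)."""
    doors = [1, 2, 3]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a door that is neither your pick nor the prize.
    opened = rng.choice([d for d in doors if d not in (pick, prize)])
    # Switching means taking the one door left unopened.
    switched = next(d for d in doors if d not in (pick, opened))
    return pick == prize, switched == prize

rng = random.Random(2024)
trials = 100_000
stick = switch = 0
for _ in range(trials):
    s, w = play_round(rng)
    stick += s
    switch += w

print(f"stick:  {stick / trials:.3f}")   # close to 1/3
print(f"switch: {switch / trials:.3f}")  # close to 2/3
```

Every round is won by exactly one of the two strategies, so the two rates sum to 1, and over many trials the switcher wins about twice as often.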
That’s an interesting point Mark. If we start from the position that we are right about something that turns out to be wrong, it is much harder to get to the right answer. Our minds work like lawyers defending our ‘client’ position, and find reasons to reject information that doesn’t fit our view, instead of giving it a fair hearing. Confirmation bias is pretty much wired into our brains.
Thanks to all for persevering. I finally get it! I was thinking of writing a program to simulate it but someone has already done it.
I’m trying to figure out what my mistake was and I’m having a hard time with that, too, but don’t worry about it … I’ll figure it out in time.
One more try… (Famous Last Words)
Your mistake is thinking that your second-guessing changes the probabilities – it does not. The probabilities of prize/goat placement behind the doors when you make your first guess (three doors hidden) and second guess (one door exposed) do not change. Read Timothy Chase’s Jan 23 1:34am comment above, and then think about this:
– prior to placement of the prize and goats, each door has a 1/3 chance that the prize will be placed behind it.
– the probabilities for each door only change once during the entire process. That is when the prize and goats are placed behind doors (which could be before or after your first guess – it doesn’t matter, but let’s assume it’s before you guess).
– After the prize and goats are placed, one door now has a 100% chance of hiding the prize, and the other two have a 0% chance. Conversely, one door has a 0% chance of hiding a goat, the other two doors have a 100% chance of hiding a goat. That will not change for the rest of the time.
– prior to prize/goat placement, there was originally a 1/3 chance the door you picked hid the prize. When the prize and goats were placed, that door either went from 1/3 to 100%, or from 1/3 to 0%.
– 2/3 of the time, the door you picked went from 1/3 chance to 0%.
– one of the other doors also went from 1/3 to 0%, and one of the doors did what your door did – it either went from 1/3 to 0%, or it went from 1/3 to 100% chance of having the prize. If there were some way that you could know more about those two doors, you might be able to determine one of the two doors that went from 1/3 to 0%. That would mean that the remaining door had a 2/3 chance of being the one that went from 1/3 to 100%, instead of being the second door that went from 1/3 to 0%.
– except, you don’t need to guess. Monty opens a door, and shows you one of the two doors that went from 1/3 to 0%.
– you now know that your door is still sitting at a 1/3 chance that it went from 1/3 to 100% – because all this fiddling around and guessing does not change that – so the remaining door (the one you can switch to) has a 2/3 chance of being the door that went from 1/3 to 100%.
Great explanation, Bob. Thanks.
What I *still* don’t get is how anyone can look at the simple tables at https://en.wikipedia.org/wiki/Monty_Hall_problem#Simple_solutions and http://marilynvossavant.com/game-show-problem/
and not understand the true odds. You need nothing else.