30

Suppose there are two face down cards each with a positive real number and with one twice the other. Each card has value equal to its number. You are given one of the cards (with value $x$) and after you have seen it, the dealer offers you an opportunity to swap without anyone having looked at the other card.

If you choose to swap, your expected value should be the same, as you still have a $50\%$ chance of getting the higher card and $50\%$ of getting the lower card.

However, the other card has a $50\%$ chance of being $0.5x$ and a $50\%$ chance of being $2x$. If we keep the card, our expected value is $x$, while if we swap it, then our expected value is: $$0.5(0.5x)+0.5(2x)=1.25x$$ so it seems like it is better to swap. Can anyone explain this apparent contradiction?
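A quick simulation (a sketch of the setup, not part of the original question) makes the first point concrete: when the pair of values is fixed in advance, swapping a randomly dealt card cannot change your expected value.

```python
import random

# Hypothetical simulation of the card game: a pair (a, 2a) is fixed in
# advance, you receive one card uniformly at random, then either keep
# it or swap for the other card.
def play(trials=100_000, a=1.0, swap=False):
    total = 0.0
    for _ in range(trials):
        pair = (a, 2 * a)
        i = random.randrange(2)            # which card you are dealt
        card = pair[1 - i] if swap else pair[i]
        total += card
    return total / trials

random.seed(0)
keep = play(swap=False)   # ≈ 1.5, the average of a and 2a
swap = play(swap=True)    # ≈ 1.5 as well: swapping changes nothing
```

The tension is that this symmetry argument and the $1.25x$ calculation cannot both describe the same situation.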

Adriano
  • 41,576
Casebash
  • 9,211
  • 1
    Your question is very hard to understand. However, if I'm understanding you correctly, I fail to see any paradox. If I know the value of the other card is either double or half the value of my card (and I'm trying to maximize the value of the card in my hand), of course it's beneficial for me to swap cards. – Ami Jul 21 '10 at 03:07
  • You never define what you mean by "value." After reading your question a few times, I assumed that each player is trying to maximize "value." The math that you do is not well explained or justified. I don't know what you mean by: "so using expected value." Lastly, I don't see any paradox here. From a probabilistic standpoint, if the player switches cards he may lose or he may gain, but he stands to gain more than he stands to lose; that's what your math shows. What's the problem? – Ami Jul 21 '10 at 03:23
  • @Ami: I wasn't very clear before. I think that it may be clearer now. – Casebash Jul 21 '10 at 03:24
  • 10
    This is exactly the two envelopes problem, if anyone wants to write up an explanation. – Larry Wang Jul 21 '10 at 03:49
  • 3
    An even more interesting problem (similar, but not the same) has been discussed to death on mathoverflow – BlueRaja - Danny Pflughoeft Jul 21 '10 at 04:05
  • Wow, according to wiki, "This is still an open problem among the subjectivists as no consensus has been reached yet" – Casebash Jul 22 '10 at 21:23

7 Answers

9

This paradox has always interested me. Something to think about is that there does not exist a uniform probability distribution over the positive real numbers (since they are infinite). In arriving at your paradox, it seems you are assuming that any real number is equally likely, but this cannot be the case.

  • 3
    There is a uniform distribution on the real numbers. I think you meant to say that the paradox is resolved when one realizes there is no uniform distribution on the natural numbers. However, that realization only solves one variant of the paradox. The paradox is recovered by using certain other distributions which still produce the paradox (see 'a new variant' in http://en.wikipedia.org/wiki/Two_envelopes_problem) – Ittay Weiss Sep 23 '12 at 07:10
  • 1
    @Ittay: How do you mean, "There is a uniform distribution on the real numbers"? There isn't in the usual sense of the term (see e.g. http://math.stackexchange.com/questions/14777). – joriki Jul 05 '13 at 06:47
  • 1
  • @joriki bad formulation of what I meant to say. I meant to say that the infinitude of real numbers is not what prevents uniform distributions from existing (e.g., there are uniform distributions on finite intervals of real numbers). – Ittay Weiss Jul 05 '13 at 07:17
  • @Eric I was never satisfied by the common resolution. $x$ doesn't stand for two different things in the calculation (if you think about opening up the envelopes then the idea that $x$ is two different things becomes ridiculous). I do think there's something wrong with "suppose one is twice the other" - it probably subtly assumes that you can choose a random real number, as you suggest, but I'm not sure exactly where this subtlety is ... – Zubin Mukerjee Oct 09 '14 at 03:14
8

This puzzle is known as the two envelope paradox. This paper contains a nice explanation of the two envelope paradox, and some references to further literature regarding the puzzle.

Seamus
  • 4,005
  • 1
    in the interests of full disclosure, I know one of the authors of the paper, but know very little about the subject. it might well be that there is a much better review paper on the subject – Seamus Aug 04 '10 at 17:20
  • The article in the link seems to propose an axiomatization to explain the a priori indifference between the envelopes. It does not attempt to resolve the paradox itself. The paradox is discussed quite well in http://en.wikipedia.org/wiki/Two_envelopes_problem. – Ittay Weiss Sep 23 '12 at 07:12
2

For the sake of argument, let's assume that the value of the lower envelope $V_L$ follows a uniform distribution from 0 to ∞, and call its probability function $P_L$. We define this to mean that values from equal-length ranges are equally likely. For example, for any constants $n$ and $k$: $$P_L(0<V_L<k) = P_L(n<V_L<n+k)$$

Comparing this to the values of the higher envelope $V_H$, we get the probability distribution $P_H$, which when equated to $P_L$ is as follows:

$$P_L(n<V_L<n+k)=P_H(2n<V_H<2(n+k))$$

This means that the higher envelope's values are half as likely as the lower envelope's values over the same range.

For an example, let's say we open the envelope and its value $V$ lies anywhere between 1 and 2 dollars. The probability that the other envelope contains 2 to 4 dollars is:

$$P_H(2<V_H<4) = P_L(1<V_L<2)$$

Similarly, the odds of the other envelope containing .5 to 1 are:

$$P_L(.5<V_L<1)$$

This is half as likely, since it covers half the range. Thus, the probability that we chose the lower-valued envelope for $0<V<∞$ is 2/3, not 1/2!

This may seem like another paradox, but it will all make sense in a moment.

Using this, we can now calculate the expected value of the other envelope:

$$\frac{2}{3}(2V) + \frac{1}{3}\left(\frac{1}{2}V\right) = 1.5V$$

Thus, it is even higher than the $1.25V$ mentioned in the original post. Again, this seems like a paradox. How can we get a higher expected value for the other envelope across all cases?
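Since a uniform distribution on all of $(0,∞)$ does not strictly exist, one way to sanity-check the 2/3 and $1.5V$ claims is a simulation with a proper stand-in distribution. The cutoff $M$ and the observation window below are my assumptions, not the answer's; this is a sketch, not a proof.

```python
import random

# A proper stand-in for the improper "uniform on (0, inf)": the lower
# envelope's value is uniform on (0, M) for a large, arbitrary cutoff M
# (my assumption). We condition on observing a value between $1 and $2,
# as in the example above.
def simulate(trials=500_000, M=100.0, seed=0):
    rng = random.Random(seed)
    held_lower = hits = 0
    ratio_sum = 0.0
    for _ in range(trials):
        low = rng.uniform(0, M)
        pair = (low, 2 * low)
        i = rng.randrange(2)
        v, other = pair[i], pair[1 - i]
        if 1.0 < v < 2.0:              # observed value in the window
            hits += 1
            held_lower += (v == low)   # did we hold the lower envelope?
            ratio_sum += other / v
    return held_lower / hits, ratio_sum / hits

p_lower, avg_ratio = simulate()
# p_lower ≈ 2/3 and avg_ratio ≈ 1.5, matching the two claims above
```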

To see why this is, we must account for infinity, which reveals that we have NOT counted all the cases! First, we know that the possible values in our lower envelope range from 0 to ∞. However, this means that the values in the upper envelope range from 0 to 2∞. Thus, one of the envelopes has a value greater than ∞ half the time, since:

$$P_H(0<V_H<∞)=P_H(∞<V_H<2∞)$$

Since we'll pick that envelope half the time when it is available, we get that the probability that $V>∞$ is 1/4. In these cases, we should never switch, since we hold the envelope with the bigger value.

For the other 3 out of 4 times, $0<V<∞$ will be true. We've already seen that the probability we've selected the lower valued envelope is 2/3 for this range. If we calculate the total probability that we selected the lower envelope over all values, it ends up being the expected 1/2, which resolves one of our "paradoxes":

$$\frac{2}{3}\cdot\frac{3}{4} + \frac{1}{4}\cdot 0 = \frac{1}{2}$$

Finally, to remove the other paradox, we calculate the average expected value of the other envelope over the entire range. For $∞<V<2∞$, which covers 1/4 of the cases, we know that the expected value of the other envelope is $0.5V$. This averages out to $\frac{3}{4}∞$, since $0.5V$ is uniformly distributed over $0.5∞<0.5V<∞$, whose midpoint is $\frac{3}{4}∞$. For $0<V<∞$, the expected value is $1.5V$, which also averages out to $\frac{3}{4}∞$: $V$ is uniformly distributed over $0<V<∞$ with mean $\frac{1}{2}∞$, and $1.5 \cdot \frac{1}{2}∞ = \frac{3}{4}∞$. Thus, our average expected value of the other envelope is $\frac{3}{4}∞$, which equals the average of all possible values for both envelopes: $$\frac{∞}{2} \cdot \frac{1}{2} + \frac{2∞}{2} \cdot \frac{1}{2} = \frac{3}{4}∞$$

You can apply similar logic to any probability distribution you like and it will make cents ;)

Paradox resolved!

Briguy37
  • 1,579
1

Traditionally, the paradox arises from comparing what appears to be someone’s odds when they are allowed to see what’s in their envelope (or their card) vs. their odds when they aren’t.

Without loss of generality, let x = 10 and 2x = 20. Assume the person’s envelope to be selected uniformly at random from those two options.

Without seeing what is in the envelope, the person makes a choice that is true to reality. They realize that it doesn’t matter whether or not they trade envelopes because they can’t distinguish. Half the time they get 10 and half the time they get 20 regardless of whether or not they swap. They are ok with it, and they average 15.

If someone sees what is in the envelope, they incorporate an impossible outcome. If the person sees 10 in the envelope, they think they have a 50% chance of swapping for 20 and a 50% chance of swapping for 5, but they don't. They have a 100% chance of swapping for 20. Alternatively, if they see 20 in the envelope, they think they have a 50% chance of swapping for 40 and a 50% chance of swapping for 10, but they don't. They have a 100% chance of swapping for 10. The equation that you give: $$0.5(0.5x) + 0.5(2x) = 1.25x$$ is just a red herring. It does not properly establish the one and only true average outcome of 15.

That said, I see no true paradox here. A sample space is not dependent upon someone’s frame of reference as the paradox suggests.

AplanisTophet
  • 413
  • 2
  • 12
  • 1
    Another way to assert this is to consider a three envelope set up where they contain 0.5x, x, and 2x. I give you an envelope containing x and then tell you that you can either keep your x (you know it's x) or swap for one of the other envelopes. You are then justified in swapping. The two envelope problem is not comparable in that either the 0.5x option or the 2x option does not exist!! Incorporating a non-existent possibility into your sample space is going to cause a deviation. – AplanisTophet Apr 21 '17 at 03:07
0

I find that it is clearest to use examples.

Say I write (\$5,\$10) on the two cards, and give them to you. This matches your problem, but with specific values. If you pick one, you have a 50% chance to have the lower value, and a 50% chance to have the higher. You also have a 50% chance of having \$5, and a 50% chance of having \$10. But you don't have a 50% chance of having the lower, or higher, value if it is given that you have \$10.

Now say I prepare two pairs of cards, one pair with (\$5,\$10), and one with (\$10,\$20). I pick a pair at random, and give it to you. You again have a 50% chance to have the higher or lower value, but a 25% chance of \$5, a 50% chance of \$10, and a 25% chance of \$20. You even have a 50% chance of the lower-or-higher value if it is given that you have \$10 - so in that case, your "switch" expectation works. But if you have \$5 or \$20, it doesn't.

Finally, say I prepare ten pairs of cards, nine pairs with (\$5,\$10), and one with (\$10,\$20). There is no value you could have where the probability of the lower-or-higher is 50%.

My point is that the 50%/50% split between lower and higher is only valid if you do not try to associate it with a value. If you do, even if it is the unknown value x, you have to factor in the relative probability that the pair of cards started with (x/2,x), or with (x,2x).
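The ten-pair example can be checked by exact enumeration (a sketch; the code and names are mine, but the weights are the answer's nine-to-one setup):

```python
from collections import defaultdict
from fractions import Fraction

# Exact enumeration of the ten-pair example above: nine ($5, $10) pairs
# and one ($10, $20) pair; a pair is chosen uniformly, then a card is
# chosen uniformly from the pair.
pairs = [((5, 10), Fraction(9, 10)), ((10, 20), Fraction(1, 10))]

joint = defaultdict(Fraction)          # (seen value, holding lower?) -> prob
for (low, high), weight in pairs:
    joint[(low, True)] += weight / 2   # dealt the lower card of this pair
    joint[(high, False)] += weight / 2 # dealt the higher card of this pair

def p_lower(v):
    """P(you hold the lower card | you see value v)."""
    seen = joint[(v, True)] + joint[(v, False)]
    return joint[(v, True)] / seen

# p_lower(10) is 1/10, not 1/2: once you see a value, the 50/50 split
# between lower and higher no longer applies.
```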

JeffJo
  • 461
0

The contradiction can be determined from these two statements:

You are given one of the cards (with value $x$)

the other card has a 50% chance of being $0.5x$ and a 50% chance of being $2x$.

The first statement implies that the value of $x$ is conditional on the card chosen. As a result, the two values of $x$ in the second statement are not the same. Thus, the maths that follows is incorrect because the two values of $x$ are summed as if they're equal.

That's all there is to it.

To get the correct answer, we should make $x$ fixed, rather than conditional. The most obvious is to make $x$ the value of the small card. It follows that the value of the bigger card is $2x$.

As a result of this, we also have to remember to subtract the value of the card we've already picked before swapping. If we didn't do this, we'd calculate the expected value of playing the entire game ($1.5x$ - you should definitely play the game given the opportunity).

After making these changes, we are left with:

$$E = \frac{2x - x}{2} + \frac{x - 2x}{2} = 0$$

So swapping makes no difference, as you would conclude intuitively.
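The corrected bookkeeping can be written out as a tiny check (a sketch; the function name is mine):

```python
# With x fixed as the smaller value, the pair is (x, 2x). Half the time
# we hold x and gain 2x - x by swapping; half the time we hold 2x and
# gain x - 2x. The expected gain from swapping is zero for any x.
def expected_swap_gain(x):
    return 0.5 * (2 * x - x) + 0.5 * (x - 2 * x)

# expected_swap_gain(10.0) == 0.0
```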

SamF
  • 1
  • 1
-1

I know this question was asked a very long time ago, but I would like to share my response nonetheless, because it seems to me that there is an issue with the problem statement itself, and that clarifying this issue would resolve the paradox that is posed here.

The issue is with you stating that it "seems like it would be better to swap." What do you mean by better?

Suppose that by better you mean that the goal of this game is to have the card with greater value at the end of each turn. Say you score 1 point each time this happens, and get 0 points otherwise. Then the expected value associated with the cards is 1 for the winning card and 0 for the losing card -- it has nothing to do with the specific values of the cards. In that case, the overall expected value is 0.5 * 0 + 0.5 * 1 = 0.5 for staying and 0.5 * 0 + 0.5 * 1 = 0.5 for swapping; i.e., the same value, as you'd expect.

Now suppose, as I think is implied in the original problem statement, that by better you mean that the goal of the game is to have the card with greater value at the end of each turn, and that points are awarded to each player equal to the value of their card at the end of the turn, and the player with the most points after a set number of turns wins.

In that case the expected values for staying and swapping are, in fact, as you say, and there is no paradox here. Suppose for the sake of argument that you drew the number 4 on every turn, and that there's a 50% chance that the second card is a 2 and a 50% chance that the second card is an 8. Then your expected value is 1 * 4 = 4 for staying and 0.5 * 2 + 0.5 * 8 = 5 for swapping, and 5/4 = 1.25. But this is exactly correct: over the course of, say, 2 turns, swapping twice might get you a 2 once and an 8 once (for a total of 10), while staying twice would get you a 4 twice (for a total of 8), and 10/8 = 1.25 as expected.

The reason why swapping is better is that the 50% chance of doubling your number outweighs the 50% chance of halving your number. In the case of our example, 2 is only worth 2 points less than 4, while 8 is worth 4 more points than 4, which means that you could get an 8 once every 3 turns and a 2 the other 2 turns when swapping and have the same number of points as your opponent (who would have a 4 for all 3 turns), while in actuality you should get an 8 once every 2 turns.
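The repeated-game intuition here can be simulated directly (a sketch under the example's own assumption that your card is always a 4 and the other card is a 2 or an 8 with equal probability):

```python
import random

# Repeated play under the example's assumptions: staying always scores
# a 4; swapping scores a 2 or an 8 with equal probability.
random.seed(1)
trials = 100_000
stay_total = 4 * trials
swap_total = sum(random.choice((2, 8)) for _ in range(trials))
ratio = swap_total / stay_total
# ratio ≈ 1.25, matching the 5/4 expected-value comparison above
```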

So, there's your answer (I think -- I could be wrong). In your original problem statement, you say

If you choose to swap, your expected value should be the same, as you still have a 50% chance of getting the higher card and 50% of getting the lower card.

but this is under the assumption that the only "valuable" occurrence in the game is having a card that has a higher value than the other (i.e., the first case I mentioned), whereas when you actually do your expected value calculation for swapping, you switch to the assumption that a card is just as valuable as the number it has on it (i.e., the second case I talked about). In other words, everything that you have said in your post is literally true, but given different assumptions, which is why a paradox seems to arise when there is none.