
It's said that you can't choose a random natural number. But what if you take a bijection between the natural numbers and, say, the rational numbers in the unit interval, and then choose a random rational number from that interval, and then take the corresponding natural? Why doesn't this work?

Arturo Magidin
JoeNash
  • And how do you "choose a random rational in the unit interval"? What is the probability distribution that makes each rational equally likely, and has probability 1 of picking some rational? That's really the issue with "choose a random natural number". If you want to make each natural number equally likely, then the probability that you pick any one natural number needs to be $0$, but then the probability of picking some natural number is also $0$. You have the exact same problem trying to come up with a uniform probability distribution for the rationals in the unit interval. – Arturo Magidin Oct 27 '11 at 22:05
  • You can choose a random natural number, you just can't assign all of them the same probability. The same is true of the rationals, for the same reason. – André Nicolas Oct 27 '11 at 22:09
  • @ArturoMagidin But you can choose a random real number, even though the probability of picking any one real number is 0. – JoeNash Oct 27 '11 at 22:10
  • Yes, but there are uncountably many real numbers, which prevents Arturo's argument from working in that case. (Technically: probabilities are countably additive, but not uncountably additive). – hmakholm left over Monica Oct 27 '11 at 22:12
  • @user12186: As Henning points out, the problem is that probability needs to be countably additive (if you have countably many independent events, then the probability that at least one of them occurs is the sum of the probabilities of each one of them). For the natural numbers/rationals, that means that if each of them has probability $0$ of being chosen, then the probability that any one is chosen is $0$ (because you can obtain the total probability by adding all the individual probabilities). You can't do that for real numbers. If you drop uniformity, then there's no problem for $\bf N$. – Arturo Magidin Oct 27 '11 at 22:16
  • See http://math.stackexchange.com/questions/30619/probability-of-picking-a-specific-value-from-a-countably-infinite-set. I'm unsure whether this should be closed as a duplicate of that -- the original question goes in a different direction, but the OP's comment seems to show that the underlying issue is the same. – joriki Oct 27 '11 at 22:20
  • Perhaps a better duplicate is this one. – Arturo Magidin Oct 27 '11 at 22:27
  • Thanks, countable additivity settles it. Now I'd like to know how you could modify that requirement so that choosing a random rational number would be possible, since it seems like a totally sensible thing to want to do. But that should probably be another question. – JoeNash Oct 27 '11 at 22:30
  • The Poisson distribution, the geometric distribution, the binomial distribution, etc. all choose a random natural number. The practice of using the word "random" to mean "uniformly distributed" seems to be a persistent meme among all mathematicians except probabilists and statisticians. Or maybe "all" is exaggerated, but "many" isn't. – Michael Hardy Oct 27 '11 at 22:33
  • It's completely sensible to choose a random rational number. You just can't have every rational number be equally likely. But you can get all rational numbers in $[0,1]$ with the same (fully reduced) denominator to have the same probability. That probability would need to be lower for larger denominators, though. – hmakholm left over Monica Oct 27 '11 at 22:38
  • "...so that choosing a random rational number would be possible, since it seems like a totally sensible thing to want to do." Unfortunately, many things that it is totally sensible to want to do aren't actually possible to do. Choosing a rational number uniformly at random is one such thing; finding an algorithm that determines if a Diophantine equation has a solution in integers is another. – Alon Amit Oct 27 '11 at 23:39
  • @Michael Hardy: Maybe it should be "almost all". Then us algebraists can interpret it as "all except perhaps for a finite number". – Arturo Magidin Oct 28 '11 at 03:02
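As the comments note, non-uniform distributions on $\mathbb N$ are unproblematic. A minimal sketch (not from the thread) of the geometric distribution Michael Hardy mentions: count failures before the first success of a fair coin, so $P(n) = 2^{-(n+1)}$. Every natural number has positive probability, and the probabilities sum to $1$ as countable additivity requires.

```python
import random

def random_natural(p=0.5):
    """Sample a natural number with P(n) = p * (1 - p)**n
    (a geometric distribution).  Not uniform -- small numbers
    are more likely -- but every natural has positive probability."""
    n = 0
    while random.random() >= p:  # count failures before first success
        n += 1
    return n

# Countable additivity is satisfied: the probabilities sum to 1.
total = sum(0.5 * 0.5 ** n for n in range(200))
print(total)  # ~ 1.0
```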

1 Answer


The question was thoroughly answered in comments:

You can choose a random natural number, you just can't assign all of them the same probability. The same is true of the rationals, for the same reason. -- André Nicolas

and

the problem is that probability needs to be countably additive (if you have countably many independent events, then the probability that at least one of them occurs is the sum of the probabilities of each one of them). For the natural numbers/rationals, that means that if each of them has probability $0$ of being chosen, then the probability that any one is chosen is $0$ (because you can obtain the total probability by adding all the individual probabilities). -- Arturo Magidin
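To spell the quoted argument out: if a uniform distribution gave every natural number the same probability $p$, countable additivity would force

$$P(\mathbb{N}) \;=\; \sum_{n=0}^{\infty} P(\{n\}) \;=\; \sum_{n=0}^{\infty} p \;=\; \begin{cases} 0 & \text{if } p = 0,\\ \infty & \text{if } p > 0,\end{cases}$$

neither of which equals the required $P(\mathbb{N}) = 1$.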

I'll answer the follow-up question

Now I'd like to know how you could modify that requirement [countable additivity] so that choosing a random rational number would be possible

-- the natural weakening of countable additivity is finite additivity. It is possible to put a finitely additive measure $\mu$ on $\mathbb Z$ so that the measure is shift-invariant and $\mu(\mathbb Z)=1$. See invariant mean. Every finite set gets measure zero. The set of even numbers gets measure $1/2$, formalizing the statement "half of all integers are even". So far so good. But there is no canonical (or even explicit) invariant mean $\mu$; their very existence relies on the axiom of choice. So, it's not clear what concrete and nontrivial probability statements one can get out of the existence of $\mu$.
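The finite-set and even-number statements can be illustrated with natural density, the finitely additive "size" that an invariant mean extends (a sketch under that framing, not part of the original answer): the density of a set $A \subseteq \mathbb N$ is $\lim_{N\to\infty} |A \cap \{1,\dots,N\}|/N$ when the limit exists.

```python
from fractions import Fraction

def density(pred, N=100000):
    """Proportion of {1, ..., N} satisfying pred: a finite
    approximation to natural density, which is finitely additive
    but not countably additive (so not a probability measure)."""
    return Fraction(sum(1 for n in range(1, N + 1) if pred(n)), N)

print(density(lambda n: n % 2 == 0))  # 1/2: "half of all integers are even"
print(density(lambda n: n == 42))     # 1/100000, tending to 0: finite sets have density 0
```

Countable additivity fails here: $\mathbb N$ is a countable union of singletons of density $0$, yet has density $1$.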