2

In programming languages such as MATLAB and Python, we can draw a random number $s$ from virtually any continuous distribution, such as the normal or the uniform.

But we're also taught that $p(s)=0$ for any specific value $s$ in the support of a continuous distribution.

So, my question is, how can we explain this paradox? Thank you very much!

  • 3
    The random number is not actually given with infinite precision, so what you actually get is not a "specific value" but an interval, which has nonzero probability. This is also what happens with real-world examples of continuous random variables. – bof Apr 14 '20 at 11:00
  • 2
    Can you break a stick into two parts at random? (Yes, more or less) Is the length of the shortest part a continuous random variable? – Henry Apr 14 '20 at 11:43

1 Answer

2

I think the question has two main aspects:

The first is the practical implementation. As pointed out in the comments, no program will give you a decimal of infinite precision; you have to be content with a fixed-precision floating point number. These numbers are not truly random; they are pseudo-random, produced by a particular algorithm that generates successive floating point numbers. In Python I believe the primary implementation is the Mersenne Twister algorithm (per the docs). The important things to realise here are that pseudo-random number generators are completely deterministic (given the seed value, the whole sequence follows) and that they have fixed precision.
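Both points can be seen directly with Python's standard `random` module (which, per the docs, uses the Mersenne Twister). A minimal sketch:

```python
import random

# A seeded PRNG is completely deterministic: the same seed
# reproduces the same sequence of "random" numbers.
random.seed(42)
first = [random.random() for _ in range(3)]

random.seed(42)
second = [random.random() for _ in range(3)]

assert first == second  # identical sequences from the same seed

# Each draw also has fixed precision: random.random() returns one of
# finitely many representable doubles in [0, 1), not a real number
# specified to infinite precision.
print(first)
```

So what the computer hands you is never a "specific real value" in the mathematical sense; it is one of finitely many representable numbers, each standing in for a small interval of reals.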

The other aspect to this question is the apparent paradox of seeing any particular value from a continuous distribution. The key point here is that events with $0$ probability in an infinite sample space are not necessarily impossible. There are great answers to a related question here: Zero probability and impossibility.

For example, suppose you are given a random number from the uniform distribution on the real interval $[0,1]$. Since the value $1$ is an element of the sample space, it is a possible outcome. But by the way we have defined probability, $P(X = 1) = 0$.
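A quick empirical sketch of this (using Python's `random.random()` for Uniform$[0,1)$, with $0.5$ as an arbitrary pre-specified target value): hitting the exact value essentially never happens, yet an interval around it is hit with frequency equal to its length.

```python
import random

random.seed(0)
n = 100_000
draws = [random.random() for _ in range(n)]

# Hits on one pre-specified exact value: effectively probability 0.
exact_hits = sum(1 for x in draws if x == 0.5)

# Hits on a small interval around it: probability equal to its length.
interval_hits = sum(1 for x in draws if 0.49 <= x <= 0.51)

print(exact_hits)          # almost surely 0
print(interval_hits / n)   # close to 0.02, the interval's length
```

Of course, since the floats have finite precision, $x = 0.5$ is not literally impossible here; its probability is just astronomically small, which mirrors the "zero probability yet possible" situation for the true continuous distribution.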

If you are familiar with probability density functions and limits, I think there is some intuition to be had here. We can take a partition of our interval and ask what the probability is of a value lying in any given sub-interval of that partition. As the mesh of the partition tends to $0$, the probability of any particular sub-interval also tends to $0$ (the areas under the curve tend to $0$). However, the beauty of the probability density function is that we do not assign a probability to each value but rather a probability *density*, and this limiting process does not send the density to $0$ as it does the probability itself. At least for me, this is the idea that gives the intuition for why individual events can have zero probability yet still not be impossible. I think a visual representation of this is helpful, and Grant Sanderson has a great video explaining this idea (it does a better job than I can): https://www.youtube.com/watch?v=ZA4JkHKZM50
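This limiting process can be made concrete numerically. For the standard normal distribution, the probability of an interval of width $w$ around a point $s$ shrinks to $0$ as $w \to 0$, but the ratio (probability $/$ width) converges to the density at $s$. A sketch using only the standard library (`math.erf` gives the normal CDF):

```python
import math

def norm_cdf(x):
    # CDF of the standard normal, via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def norm_pdf(x):
    # density of the standard normal
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

s = 1.0  # an arbitrary value in the support
for k in range(1, 7):
    w = 10 ** -k
    # P(s - w/2 <= X <= s + w/2): shrinks to 0 with the interval
    prob = norm_cdf(s + w / 2) - norm_cdf(s - w / 2)
    print(f"w={w:.0e}  P={prob:.3e}  P/w={prob / w:.6f}")

# The ratio P/w converges to the density at s, which stays nonzero:
print(norm_pdf(s))
```

The probability column heads to $0$, exactly as $P(X = s) = 0$ demands, while the density column settles on a fixed nonzero number.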

masiewpao
  • 2,217