If you pick a number $x$ at random from $[0,100]$, we would naturally say that the probability that $x>50$ is $1/2$, right?
This is because we assumed that "randomly" meant that the experiment was to pick a point uniformly from $[0,100]$. But since $f(r)=r^2$ is a bijection $[0,10] \rightarrow [0,100]$, we could also pick a number $r$ uniformly from $[0,10]$, set $x=r^2 \in [0,100]$, and let that be our random experiment. This time $x>50$ only for $r > \sqrt{50} \approx 7.07$, so under this model the probability is $1-\sqrt{50}/10 = 1-1/\sqrt{2} \approx 0.29$.
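For concreteness, here is a small Monte Carlo sketch (the code and sample size are my own illustration, not part of the question) that estimates $P(x>50)$ under both experiments:

```python
import random

N = 200_000  # sample size (arbitrary choice)

# Experiment 1: draw x uniformly from [0, 100].
p_uniform = sum(random.uniform(0, 100) > 50 for _ in range(N)) / N

# Experiment 2: draw r uniformly from [0, 10], then set x = r**2.
p_squared = sum(random.uniform(0, 10) ** 2 > 50 for _ in range(N)) / N

print(f"x uniform on [0,100]: P(x > 50) = {p_uniform:.3f}")  # about 0.500
print(f"x = r^2, r uniform:   P(x > 50) = {p_squared:.3f}")  # about 0.293
```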
In this case we would agree that the first way of choosing $x$ looks a lot more natural, so we would equally agree that it is a successful way of modeling the experiment "pick a random number from $[0,100]$".
There are times when we can't even agree on that! For example, in Bertrand's paradox we are asked to pick a random chord of a circle and compute the probability that it is longer than the side of the inscribed equilateral triangle. The point is that there are several (a priori) natural ways of choosing the chords (three of them are nicely described here), which, of course, produce different probabilities.
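To see the disagreement numerically, here is a sketch of the three classical constructions ("random endpoints", "random radius", "random midpoint"); the code itself is my own illustration:

```python
import math
import random

R = 1.0
L = math.sqrt(3) * R   # side length of the inscribed equilateral triangle
N = 200_000            # sample size (arbitrary choice)

def chord_random_endpoints():
    """Method 1: join two points chosen uniformly on the circle."""
    a = random.uniform(0, 2 * math.pi)
    b = random.uniform(0, 2 * math.pi)
    return 2 * R * abs(math.sin((a - b) / 2))

def chord_random_radius():
    """Method 2: pick a uniform point on a fixed radius; take the chord
    through it perpendicular to that radius."""
    d = random.uniform(0, R)   # distance of the chord from the center
    return 2 * math.sqrt(R * R - d * d)

def chord_random_midpoint():
    """Method 3: pick a uniform point in the disk as the chord's midpoint."""
    while True:                # rejection sampling from the bounding square
        x = random.uniform(-R, R)
        y = random.uniform(-R, R)
        d2 = x * x + y * y
        if d2 <= R * R:
            return 2 * math.sqrt(R * R - d2)

for name, draw in [("endpoints", chord_random_endpoints),
                   ("radius", chord_random_radius),
                   ("midpoint", chord_random_midpoint)]:
    p = sum(draw() > L for _ in range(N)) / N
    print(f"random {name}: P(chord > triangle side) = {p:.3f}")
# Prints roughly 1/3, 1/2 and 1/4 respectively.
```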
How and when can we consider something to be truly random? Does it even make sense to say that something is truly random, or is it more a matter of agreement?
Is there any convention in the mathematical community about these issues?
Could we say the common notion of randomness relates to the notion of a uniform distribution?
Are there any successful approaches to modeling randomness? (That is, approaches that let us decide whether a given distribution represents randomness in the sense of being a uniform distribution.)
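To make "decide" concrete in the statistical sense, one standard tool is a goodness-of-fit test such as Kolmogorov-Smirnov; the sketch below (my own illustration using NumPy/SciPy, not something from the question) tests two samples against the uniform distribution on $[0,1]$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

uniform_sample = rng.uniform(0, 1, 10_000)
squared_sample = rng.uniform(0, 1, 10_000) ** 2  # not uniform

for name, sample in [("uniform", uniform_sample), ("squared", squared_sample)]:
    result = stats.kstest(sample, "uniform")  # compare against Uniform(0, 1)
    print(f"{name}: KS statistic = {result.statistic:.4f}, p-value = {result.pvalue:.3g}")
```

Of course a test like this only speaks about samples, not about a single number being "random", which is where notions like Kolmogorov complexity come in.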
For example, in the comments it is said: "One can show [using Kolmogorov complexity] that a number in $[0,1]$ is random with probability 1 under the uniform distribution, so it coheres well with other notions."