
I just started learning applied probability. What do mean and variance actually signify? If we map the events of getting a head or a tail on a coin toss to $X = 0, 1$ respectively, our mean (or expected value) would be $0(\frac{1}{2}) + 1(\frac{1}{2}) = \frac{1}{2}$, following the definition $ E[X] = \sum_{i}x_ip_{X}(x_i) $.

But if instead we map the values $X = 1, 2$ to the events of getting a head or a tail, our mean would now become $1(\frac{1}{2}) + 2(\frac{1}{2}) = 1.5$.

There is nothing in between a head and a tail in this experiment. Moreover, the mean and variance change depending on how the random variable is chosen. (If the mean varies, it seems obvious that the variance also varies.) So what do mean and variance actually signify, especially if $X$ is discrete and each value of $X$ is mapped to a completely disjoint event?

Sreram
  • The mean is the average value toward which the random variable tends as the number of experiments increases. Variance, in some sense, represents how "probabilistically fast" the r.v. tends to its mean as the number of experiments increases. – Masacroso Mar 07 '16 at 10:44
  • http://math.stackexchange.com/questions/700160/intuition-behind-variance-forumla/700231#700231 – Michael Hoppe Mar 07 '16 at 14:36

1 Answer


The Wikipedia-style definitions are: the mean is the expected value and the variance measures how far the set of possible outcomes is spread out.

I myself find nothing wrong with the above; it is both intuitive and correct. Describing the outcome heads as $0$ and tails as $1$ gives $$E[X]=\frac{1}{2}$$ which can be interpreted as: the expected value is equally far from both $0$ (heads) and $1$ (tails), i.e. both events are equally likely. Translating everything along the real line to $Y\in\{1,2\}$ gives $$E[Y]=\frac{3}{2}$$ and you can draw the same conclusion.
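As a quick check (not part of the original answer), both expectations can be computed with exact rational arithmetic; the variable names below are just illustrative:

```python
from fractions import Fraction

# Fair coin: each outcome has probability 1/2.
p = Fraction(1, 2)

# Two encodings of the same experiment: heads/tails as {0, 1} or {1, 2}.
X = [0, 1]
Y = [1, 2]

E_X = sum(x * p for x in X)  # E[X] = 0*(1/2) + 1*(1/2) = 1/2
E_Y = sum(y * p for y in Y)  # E[Y] = 1*(1/2) + 2*(1/2) = 3/2
print(E_X, E_Y)  # 1/2 3/2
```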

As for the variances, note that these do not change when this translation is applied, i.e. $\mathrm{Var}(X)=\mathrm{Var}(Y)$ (or more generally $\mathrm{Var}(X)=\mathrm{Var}(X+c)$ for constant $c$.)

Namely: $$\mathrm{Var}(X)=E[X^2]-E[X]^2=\frac{1}{2}-\left(\frac{1}{2}\right)^2=\frac{1}{4}$$ and $$\mathrm{Var}(Y)=E[Y^2]-E[Y]^2=\frac{5}{2}-\left(\frac{3}{2}\right)^2=\frac{1}{4},$$ which is again intuitively correct. You simply assign values ($0$ and $1$, or $1$ and $2$) to the outcomes heads and tails; the spread of the outcomes should not depend on this assignment.
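The translation invariance $\mathrm{Var}(X)=\mathrm{Var}(X+c)$ can also be verified numerically; this is a small sketch with hypothetical helper names, not part of the original answer:

```python
from fractions import Fraction

p = Fraction(1, 2)  # fair coin

def mean(vals):
    return sum(v * p for v in vals)

def var(vals):
    # Var(V) = E[V^2] - E[V]^2
    return sum(v * v * p for v in vals) - mean(vals) ** 2

X = [0, 1]
Y = [v + 1 for v in X]  # Y = X + 1, i.e. the {1, 2} encoding

print(var(X), var(Y))  # both equal 1/4
```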

Hope this helps :)

Eric S.
  • Thanks. For the case of variance, I didn't check it. I thought that since it is a function of $E[X^2]$ and $E[X]$, it must change. But now I understand. – Sreram Mar 07 '16 at 11:16
  • 2
    @HelloWorld It is perhaps clearer when you recall variance is a function of $\big(X-\mathsf E(X)\big)$. $$\begin{align}\mathsf {Var}(X) = & \mathsf E(X^2)-\mathsf E(X)^2 \\[1ex] = & \mathsf E\Big(\big(X-\mathsf E(X)\big)^2\Big) \\[2ex] \mathsf {Var}(X+c) = & \mathsf E\Big(\big(X+c-\mathsf E(X+c)\big)^2\Big)\\[1ex] = & \mathsf E\Big(\big(X-\mathsf E(X)\big)^2\Big)\end{align}$$ – Graham Kemp Mar 07 '16 at 11:32