
Let $X$ be a random variable. I know how to calculate its variance, but I don't really understand how to interpret it. For example, if $(B_t)$ is a Brownian motion, then $B_t\sim \mathcal N(0,\sigma ^2 t)$. How can I interpret $\operatorname{Var}(B_t)=\sigma ^2 t$? I took Brownian motion as an example, but my question applies to any random variable.

Also, if $\operatorname{Var}(X)$ is the average distance of $X$ from its expectation, why is $\operatorname{Var}(X)=\mathbb E[(X-\mathbb E[X])^2]$ and not $\operatorname{Var}(X)=\mathbb E[|X-\mathbb E[X]|]$?

Dylan
  • It is a measure of spread (like the average is a measure of central tendency). –  Feb 21 '19 at 19:47
  • Your "Also..." question is an interesting one. Your alternative definition would be a perfectly reasonably way of measuring the concept of "spread," but it would lack nice properties that the ordinary definition of variance enjoys, such as (as J.G. remarked) that $\text{Var}(X+Y) = \text{Var}(X) + \text{Var}(Y)$ for uncorrelated variables. – Aaron Montgomery Feb 21 '19 at 20:00
  • @YvesDaoust: I edited my question. What do you think of the last sentence? Thank you. – Dylan Feb 21 '19 at 20:01
  • Your last comment adds to the confusion. $L^2$ is not a squared quantity. –  Feb 21 '19 at 20:17
  • @YvesDaoust: Sorry, but I don't follow you! $$\|X\|_{L^p}^p=\int_\Omega |X|^p\,d\mathbb P=:\mathbb E[|X|^p].$$ What doesn't make sense in $$\|X-\mathbb E[X]\|_{L^2}^2=\mathbb E[(X-\mathbb E[X])^2]=\operatorname{Var}(X)\ ?$$ – Surb Feb 21 '19 at 21:53

2 Answers


It's a measure of dispersion. On average, $X$ takes the value $\mathbb E[X]$, but in general this is a bad approximation; without any other information, though, we can't say more than that. The variance is the average squared distance of $X$ from $\mathbb E[X]$. In other words, even if $\mathbb E[X]$ is a bad approximation of $X$, you will still find $X$ in the interval $[\mathbb E[X]-\sqrt{\operatorname{Var}(X)},\mathbb E[X]+\sqrt{\operatorname{Var}(X)}]$ with high probability.

For the Brownian motion: even if $B_t=0$ on average, the probability of finding $B_t$ in $[-\sigma\sqrt{t},\sigma\sqrt{t}]$ is large.
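As a rough numerical check of this answer (my own sketch, not part of the original; the values of $\sigma$, $t$ and the step count below are arbitrary), one can simulate many Brownian paths, compare the empirical variance of $B_t$ with $\sigma^2 t$, and estimate how often $B_t$ lands in $[-\sigma\sqrt t,\sigma\sqrt t]$:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma, t = 2.0, 3.0                 # arbitrary illustrative parameters
n_paths, n_steps = 20_000, 500
dt = t / n_steps

# B_t is the sum of independent N(0, sigma^2 * dt) increments.
increments = rng.normal(0.0, sigma * np.sqrt(dt), size=(n_paths, n_steps))
B_t = increments.sum(axis=1)

print("theoretical Var(B_t):", sigma**2 * t)    # 12.0
print("empirical   Var(B_t):", B_t.var())       # close to 12.0
print("P(|B_t| <= sigma*sqrt(t)) approx:",
      np.mean(np.abs(B_t) <= sigma * np.sqrt(t)))  # about 0.68
```

For a centered normal variable, the probability of falling within one standard deviation of the mean is about $68\%$, which is the "high probability" the answer alludes to.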

Surb
  • Thank you, I edited my answer, could you check it please? I'm not sure I understand what you mean by "$\mathbb E[X]$ is a bad approximation, but it will be in $[\mathbb E[X]-\sqrt{\operatorname{Var}(X)},\mathbb E[X]+\sqrt{\operatorname{Var}(X)}]$ with high probability". Could you explain please? – Dylan Feb 21 '19 at 20:00
  • Try to see what happens when tossing a fair die, with $X$ the result... $\mathbb E[X]=3.5$, so $X=3.5$ never occurs. But $X$ falls within one standard deviation of the mean with probability $$\mathbb P(X\in [3.5-\sigma ,3.5+\sigma ])=\mathbb P(1.79\leq X\leq 5.21)=2/3.$$ – Surb Feb 21 '19 at 20:08
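The numbers in the last comment can be reproduced exactly with a few lines (my own check, using exact fractions rather than simulation):

```python
from fractions import Fraction

faces = [1, 2, 3, 4, 5, 6]                       # fair six-sided die
p = Fraction(1, 6)

mean = sum(p * x for x in faces)                 # 7/2 = 3.5
var = sum(p * (x - mean) ** 2 for x in faces)    # 35/12
sd = float(var) ** 0.5                           # about 1.708

# Probability that X falls within one standard deviation of the mean.
inside = [x for x in faces if abs(x - mean) <= sd]   # {2, 3, 4, 5}
prob = len(inside) * p                               # 2/3

print(mean, var, round(sd, 3), prob)
```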

For a general understanding of variance - or of its square root, standard deviation - see this question. As for Brownian motion:

One important property of variances is this: if $X,\,Y$ are uncorrelated (as happens e.g. if they're independent), $\operatorname{Var}(X+Y)=\operatorname{Var}X+\operatorname{Var}Y$. Thus when we add many small uncorrelated contributions, we expect a variance to be proportional to the number of such terms. This is why Brownian motion has a variance proportional to the time elapsed.
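Here is a small numerical sanity check of both statements (my own sketch; the particular distributions and sample sizes are arbitrary choices): variances of independent variables add, and the variance of a sum of i.i.d. increments grows linearly with the number of terms.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Two independent (hence uncorrelated) variables with different distributions.
X = rng.exponential(scale=2.0, size=n)    # Var(X) = 4
Y = rng.uniform(-1.0, 1.0, size=n)        # Var(Y) = 1/3
print((X + Y).var(), X.var() + Y.var())   # both close to 4.33

# The variance of a sum of k i.i.d. standard normal increments grows like k.
for k in (10, 100, 1000):
    S = rng.standard_normal(size=(10_000, k)).sum(axis=1)
    print(k, round(S.var(), 1))           # roughly 10, 100, 1000
```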

With regard to the question's edit, the motive for squaring is partly the geometric interpretation in terms of inner products you'll find at the above link, partly the additivity we've discussed, and partly that it's pretty much always far more mathematically tractable. Related to that last point is the fact that least-squares minimisation is an especially tractable regression technique in model fits. Why is this related? Because the error terms in such a fit should be compared to their null-hypothesis standard deviation.
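To make the tractability point a bit more concrete (my own illustration, not part of the answer): minimising the sum of squared deviations over a centre $c$ has a closed-form solution, the sample mean, while minimising the sum of absolute deviations leads to the median and is not differentiable at the data points. A brute-force check on a toy sample:

```python
import numpy as np

data = np.array([1.0, 2.0, 2.0, 3.0, 10.0])   # arbitrary toy sample
grid = np.linspace(0.0, 11.0, 1101)           # candidate centres c, step 0.01

sq_loss  = [np.sum((data - c) ** 2)  for c in grid]
abs_loss = [np.sum(np.abs(data - c)) for c in grid]

print("squared loss minimised at", round(grid[np.argmin(sq_loss)], 2),
      "-- sample mean is", data.mean())        # 3.6
print("absolute loss minimised at", round(grid[np.argmin(abs_loss)], 2),
      "-- sample median is", np.median(data))  # 2.0
```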

J.G.