
In the algebra of random variables, the symbolic rule for computing the variance of a random variable $X\in\mathbb{R}^{n\times p}$ multiplied by a coefficient vector $a\in\mathbb{R}^p$ is

$$\operatorname{Var}(X\cdot a) = a^\top \operatorname{Var}(X)\, a = a^\top \Sigma a,$$

where $\Sigma$ is the covariance matrix. What explains the vector $a$ coming out twice, i.e., what is the full derivation?

Given the above, what are the analogous rules in the algebra of random variables for skewness and for kurtosis, i.e., the derivations of the two expressions below?

$$\operatorname{Skew}(X\cdot a) = ? \hspace{3cm} \operatorname{Kurt}(X\cdot a) = ?$$

develarist

1 Answer


It helps to use the Einstein convention of summing over repeated indices. For the first problem,$$\operatorname{Var}(a_iX_i)=\operatorname{Cov}(a_iX_i,\,a_jX_j)=a_i\underbrace{\operatorname{Cov}(X_i,\,X_j)}_{\Sigma_{ij}}a_j=a\cdot\Sigma a=a^T\Sigma a,$$where the middle step uses the bilinearity of covariance and the last $=$ is a slight abuse of notation (reading the dot product as a matrix product). Since $a_iX_i$ has mean $a_i\mu_i$ with $\mu_i:=\Bbb EX_i$ and standard deviation $\sigma:=\sqrt{a\cdot\Sigma a}$, its skew is$$\Bbb E(a_iX_i-a_i\mu_i)^3/\sigma^3=(a\cdot\Sigma a)^{-3/2}a_ia_ja_k\Bbb E\big((X_i-\mu_i)(X_j-\mu_j)(X_k-\mu_k)\big).$$We can't calculate further than that without additional assumptions. The case of kurtosis is similar.
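As a quick numerical sanity check of the variance identity, here is a minimal numpy sketch on simulated data (the mean, covariance matrix, coefficient vector, and seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: n observations of a p-dimensional random vector X.
n, p = 100_000, 3
mu = np.array([1.0, -2.0, 0.5])
cov = np.array([[2.0, 0.6, 0.3],
                [0.6, 1.0, 0.2],
                [0.3, 0.2, 1.5]])
X = rng.multivariate_normal(mu, cov, size=n)
a = np.array([0.5, -1.0, 2.0])   # coefficient vector

lhs = np.var(X @ a, ddof=1)      # sample variance of the scalar a_i X_i
Sigma = np.cov(X, rowvar=False)  # p x p sample covariance matrix
rhs = a @ Sigma @ a              # a^T Sigma a

print(lhs, rhs)                  # agree up to Monte Carlo error
```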

J.G.
  • Set whatever assumptions you think a statistician would adopt by default, if that is what it takes to go further. Otherwise, at least show the final line of your derivation in matrix form, if you don't mind. – develarist Sep 09 '20 at 15:39
  • @develarist Just as $\Sigma_{ij}=\Bbb E(X_i-\mu_i)(X_j-\mu_j)$, we can introduce a rank-$3$ tensor $K_{ijk}=\Bbb E(X_i-\mu_i)(X_j-\mu_j)(X_k-\mu_k)$ so the skew is $(a\cdot\Sigma a)^{-3/2}\color{blue}{a_ia_ja_kK_{ijk}}$. Note that (i) matrices aren't enough, (ii) how you write the blue part without indices is a matter of convention, & (iii) even the best choice of a "default" assumption doesn't determine $K_{ijk}$, just as we can't assume away the covariance matrix's role in the variance problem. – J.G. Sep 09 '20 at 15:56
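To make the contraction in the last comment concrete, here is a sketch that builds the rank-$3$ and rank-$4$ central-moment tensors empirically and checks the resulting skew and kurtosis against direct sample moments. The exponential distribution, shapes, and seed are arbitrary choices (the columns are drawn from a skewed distribution so the third moments are nonzero):

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
n, p = 100_000, 3
X = rng.exponential(scale=1.0, size=(n, p))  # skewed, so third moments are nonzero
a = np.array([0.5, -1.0, 2.0])

Xc = X - X.mean(axis=0)                      # centered columns X_i - mu_i
sigma2 = a @ np.cov(X, rowvar=False) @ a     # a . Sigma a

# Rank-3 tensor K_ijk = E[(X_i - mu_i)(X_j - mu_j)(X_k - mu_k)], then the
# skew contraction (a . Sigma a)^{-3/2} a_i a_j a_k K_ijk.
K3 = np.einsum('ni,nj,nk->ijk', Xc, Xc, Xc) / n
skew_val = np.einsum('i,j,k,ijk->', a, a, a, K3) / sigma2**1.5

# Kurtosis is the same pattern one index higher: a rank-4 tensor K_ijkl and
# the contraction (a . Sigma a)^{-2} a_i a_j a_k a_l K_ijkl.
K4 = np.einsum('ni,nj,nk,nl->ijkl', Xc, Xc, Xc, Xc) / n
kurt_val = np.einsum('i,j,k,l,ijkl->', a, a, a, a, K4) / sigma2**2

# Cross-check against the direct sample moments of the scalar a_i X_i.
y = X @ a
print(skew_val, skew(y))                     # third standardized moment
print(kurt_val, kurtosis(y, fisher=False))   # fourth standardized moment
```

The printed pairs agree up to Monte Carlo error and the negligible ddof difference between `np.cov` and the biased sample moments scipy uses.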