Definition
An inner product (also known as a dot product or scalar product) can be defined on two vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ as
$$ \mathbf{x}^{\mathsf{T}}\mathbf{y} = \langle \mathbf{x},\mathbf{y}\rangle_{\mathbb{R}^n} = \langle \mathbf{y},\mathbf{x}\rangle_{\mathbb{R}^n} = \sum_{i=1}^{n} x_i \, y_i $$
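As a quick sketch (using NumPy; the vector values here are arbitrary, chosen only for illustration), the inner product can be computed either as the sum of element-wise products or with the built-in dot product:

```python
import numpy as np

# Two arbitrary vectors in R^3
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -1.0, 2.0])

# Inner product as the sum of element-wise products
manual = np.sum(x * y)  # 1*4 + 2*(-1) + 3*2 = 8.0

# Same result with NumPy's dot product; symmetry also holds
assert manual == np.dot(x, y) == np.dot(y, x)
print(manual)  # 8.0
```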
The inner product can be seen as (proportional to) the length of the projection of one vector onto another, and it is widely used as a similarity measure between two vectors.
The inner product also satisfies the following properties: it is symmetric (commutative), bilinear, and positive-definite.
The covariance of two random variables $X$ and $Y$ can be defined as
$$ \operatorname{Cov}(X, Y) = E[(X-E[X])(Y - E[Y])] $$
The covariance satisfies these same properties: it is symmetric (commutative), bilinear, and positive-definite. These properties imply that the covariance is an inner product on a vector space, more specifically on a quotient space: random variables that differ by a constant must be identified, because $\operatorname{Cov}(X, X) = 0$ only implies that $X$ is constant, not that $X = 0$.
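A small sketch of these properties, computing the covariance directly from its definition (the sample values are arbitrary, and the population mean is used, i.e. dividing by $n$):

```python
import numpy as np

# Arbitrary samples of two random variables X and Y
X = np.array([2.0, 4.0, 6.0, 8.0])
Y = np.array([1.0, 3.0, 2.0, 6.0])

def cov(a, b):
    """Population covariance: E[(a - E[a]) * (b - E[b])]."""
    return np.mean((a - a.mean()) * (b - b.mean()))

# Symmetry (commutativity): Cov(X, Y) == Cov(Y, X)
assert np.isclose(cov(X, Y), cov(Y, X))

# Positive (semi-)definiteness: Cov(X, X) = Var(X) >= 0
assert cov(X, X) >= 0
print(cov(X, Y))  # 3.5
```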
Association with the kernel trick
If you are familiar with Support Vector Machines, you are probably familiar with the kernel trick, where you implicitly compute the inner product of two vectors in a mapped space, called the feature space. Without explicitly performing the mapping, you can compute the inner product even in a possibly infinite-dimensional feature space.
To do so, you need to find a function, known as a kernel function, that computes this inner product without explicitly mapping the vectors.
For a kernel function to exist, it needs to have the following attributes:
- It needs to be symmetric
- It needs to be positive-definite
These conditions are necessary and sufficient for a function $\kappa(\mathbf{x}, \mathbf{y})$ to be an inner product in some vector space $\mathcal{H}$.
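To illustrate the trick itself (using the homogeneous degree-2 polynomial kernel as an example, which is not tied to the text above): $\kappa(\mathbf{x}, \mathbf{y}) = (\mathbf{x}^{\mathsf{T}}\mathbf{y})^2$ equals $\langle \phi(\mathbf{x}), \phi(\mathbf{y}) \rangle$ for the explicit feature map $\phi(x) = (x_1^2, \sqrt{2}\,x_1 x_2, x_2^2)$ on $\mathbb{R}^2$, so the kernel computes the feature-space inner product without ever forming $\phi$:

```python
import numpy as np

def kappa(x, y):
    """Degree-2 homogeneous polynomial kernel: (x . y)^2."""
    return np.dot(x, y) ** 2

def phi(x):
    """Explicit feature map for the degree-2 kernel on R^2."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
y = np.array([3.0, 1.0])

# Kernel trick: kappa(x, y) equals the inner product in feature space
assert np.isclose(kappa(x, y), np.dot(phi(x), phi(y)))  # both are 25.0

# The kernel is symmetric, as required
assert kappa(x, y) == kappa(y, x)
```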
Since the covariance complies with this definition, it is a kernel function and, consequently, an inner product in a vector space.
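As a sanity check of this claim (a sketch only; the sample vectors are arbitrary), we can build a Gram matrix whose entries are pairwise covariances and verify that it is symmetric and positive semi-definite, as any valid kernel's Gram matrix must be:

```python
import numpy as np

def cov_kernel(a, b):
    """Covariance used as a kernel: E[(a - E[a]) * (b - E[b])]."""
    return np.mean((a - a.mean()) * (b - b.mean()))

# Three arbitrary "random variables", each represented by its samples
samples = [np.array([1.0, 2.0, 4.0]),
           np.array([0.0, 3.0, 3.0]),
           np.array([5.0, 1.0, 0.0])]

# Gram matrix of pairwise covariances
K = np.array([[cov_kernel(a, b) for b in samples] for a in samples])

# Symmetric ...
assert np.allclose(K, K.T)

# ... and positive semi-definite (all eigenvalues >= 0, up to rounding)
assert np.all(np.linalg.eigvalsh(K) >= -1e-9)
```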