I have already seen the question "What is the proof that covariance matrices are always semi-definite?".
Note that I am self-learning this topic.
Suppose $\mathbf{Y}$ is a random vector in $\mathbb{R}^n$ with mean $\boldsymbol{\mu} = \mathbb{E}[\mathbf{Y}]$ and covariance matrix $\text{Cov}(\mathbf{Y}) = \mathbb{E}\left[\left(\mathbf{Y}-\boldsymbol{\mu}\right)\left(\mathbf{Y}-\boldsymbol{\mu}\right)^{\prime}\right]$. I would like to show that $\text{Cov}(\mathbf{Y})$ is nonnegative definite.
To show that $A = \text{Cov}(\mathbf{Y})$ is nonnegative definite, I need to show that for any $v \in \mathbb{R}^n$, $v^{\prime}Av \geq 0$, where $v^{\prime}$ denotes the transpose of $v$.
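(To make the claim concrete for myself, here is a small numerical check I put together; the particular distribution and the use of `np.cov` as a stand-in for the true covariance matrix are just my choices for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw samples of a 3-dimensional random vector Y and estimate its covariance matrix.
samples = rng.multivariate_normal(mean=[0.0, 1.0, -2.0],
                                  cov=[[2.0, 0.5, 0.0],
                                       [0.5, 1.0, 0.3],
                                       [0.0, 0.3, 1.5]],
                                  size=100_000)
A = np.cov(samples, rowvar=False)  # sample covariance matrix (estimate of Cov(Y))

# Check v'Av >= 0 for many random choices of v.
for _ in range(1000):
    v = rng.standard_normal(3)
    assert v @ A @ v >= 0
print("v'Av was nonnegative for every v tried")
```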
Here is what I have so far: $$v^{\prime}Av = v^{\prime}\mathbb{E}\left[\left(\mathbf{Y}-\boldsymbol{\mu}\right)\left(\mathbf{Y}-\boldsymbol{\mu}\right)^{\prime}\right]v = \mathbb{E}\left[v^{\prime}\left(\mathbf{Y}-\boldsymbol{\mu}\right)\left(\mathbf{Y}-\boldsymbol{\mu}\right)^{\prime}v\right]\text{,}$$ since $v \in \mathbb{R}^n$ is a vector of constants. Now, my question is: why is it that $$\mathbb{E}\left[v^{\prime}\left(\mathbf{Y}-\boldsymbol{\mu}\right)\left(\mathbf{Y}-\boldsymbol{\mu}\right)^{\prime}v\right] = \mathbb{E}\left[\left(v^{\prime}\left(\mathbf{Y}-\boldsymbol{\mu}\right)\right)^2\right]\text{?}$$ I am wondering if there is an easy way to see this without having to write out all of the matrix components. The linked question above says this follows by linearity of expectation, and I can see how linearity would allow one to "pull in" the $v^{\prime}$ and $v$, but other than that, I'm lost.
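For what it's worth, here is the quick numerical sanity check I ran to convince myself that the two expressions inside the expectations agree for each realization of $\mathbf{Y}$ (the dimension and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

v = rng.standard_normal(n)           # constant vector v
y_minus_mu = rng.standard_normal(n)  # one realization of Y - mu

# Left-hand side: v' (Y - mu)(Y - mu)' v, a 1x1 quantity.
lhs = v @ np.outer(y_minus_mu, y_minus_mu) @ v

# Right-hand side: (v'(Y - mu))^2, the square of a scalar.
rhs = (v @ y_minus_mu) ** 2

print(np.isclose(lhs, rhs))  # True for every realization I tried
```

So the identity certainly seems to hold pointwise, but I would like to understand why without expanding everything into components.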