You know that covariance of jointly distributed random variables $X$ and $Y$ is $$\mathrm{Cov}(X,Y) = \mathrm{E}[XY] - \mathrm{E}[X] \mathrm{E}[Y].$$
It is clear that we should require finiteness of $\mathrm{E}[XY]$, $\mathrm{E}[X]$ and $\mathrm{E}[Y]$. In particular, this requirement is met whenever $\mathrm{Var}(X) < \infty$ and $\mathrm{Var}(Y) < \infty$, since by the Cauchy–Schwarz inequality $|\mathrm{E}[XY]| \le \sqrt{\mathrm{E}[X^2]\,\mathrm{E}[Y^2]}$.
However, there are cases where $\mathrm{Cov}(X,Y)$ is finite even though $\mathrm{Var}(X)$ and $\mathrm{Var}(Y)$ are both infinite (see here for example). I also think $\mathrm{Cov}(X,Y)$ can be finite when only one of the variances is infinite.
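To make this concrete, here is a sketch of my own (not taken from the linked thread): let $X$ be Pareto-distributed with shape parameter $\alpha \in (1,2)$, so that $\mathrm{E}[X] < \infty$ but $\mathrm{Var}(X) = \infty$, and let $Y$ be independent of $X$. If $Y$ is bounded, then $$\mathrm{E}[XY] = \mathrm{E}[X]\,\mathrm{E}[Y] < \infty, \qquad \mathrm{Cov}(X,Y) = 0,$$ and exactly one of the variances is infinite. If instead $Y$ is an independent copy of $X$, the same computation gives $\mathrm{Cov}(X,Y) = 0$ while both variances are infinite.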
But Wikipedia requires finiteness of the variances $\mathrm{Var}(X)$ and $\mathrm{Var}(Y)$ directly in its definition of covariance. Likewise, in the definition of the cross-covariance matrix $$\mathrm{Cov}(\mathbf{X},\mathbf{Y}) = \mathrm{E}[(\mathbf{X} - \mathrm{E}[\mathbf{X}])(\mathbf{Y} - \mathrm{E}[\mathbf{Y}])^T],$$ Wikipedia again requires finiteness of the variances of all elements of the random vectors $\mathbf{X}$ and $\mathbf{Y}$.
When I first saw the covariance definition on Wikipedia, I thought requiring finiteness of $\mathrm{Var}(X)$ and $\mathrm{Var}(Y)$ was a mistake. But when I saw the same requirement again in the cross-covariance matrix definition, I started to doubt myself.
What do you think? Do we really need finite variances $\mathrm{Var}(X)$ and $\mathrm{Var}(Y)$ in the definition of the covariance $\mathrm{Cov}(X,Y)$? And do we really need finite variances of all elements of the random vectors $\mathbf{X}$ and $\mathbf{Y}$ in the definition of the cross-covariance matrix $\mathrm{Cov}(\mathbf{X},\mathbf{Y})$?