In Bra-ket notation, we denote an inner product of two vectors $|u\rangle,\,|v\rangle$ as $\langle u|v\rangle$, where the map from vectors $|v\rangle$ in the inner product space to scalars $\langle u|v\rangle$ is a linear map labelled $\langle u|$. (The set of such linear maps is a vector space called the dual space; its elements are bras $\langle u|$, whereas the original space has ket elements $|v\rangle$.)
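In finite dimensions this is concrete: a ket is a column vector, and the bra $\langle u|$ acts as the conjugate transpose of $|u\rangle$. A minimal sketch (the vectors here are illustrative, not from the text):

```python
import numpy as np

# Illustrative kets |u>, |v> in C^2.
v = np.array([1.0 + 0j, 2.0])
u = np.array([0.0 + 1j, 1.0])

# The bra <u| is the conjugate transpose of |u>; applying it
# to a ket is the inner product <u|v>.
bra_u = u.conj()
inner = bra_u @ v
print(inner)  # (2-1j)
```

Note the conjugation: it makes $\langle u|v\rangle = \overline{\langle v|u\rangle}$, as required of a complex inner product.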
If a matrix $M$ maps $|v\rangle$ to $M|v\rangle$ within the same vector space, then $\langle u|M|v\rangle$ is the inner product of $|u\rangle$ with $M|v\rangle$; moreover, if $|u\rangle,\,|v\rangle$ are elements of an orthonormal basis, this inner product is a matrix element of $M$ in that basis.
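To see the matrix-element claim numerically, take the standard (orthonormal) basis of $\mathbb{R}^2$, where $\langle i|M|j\rangle$ should pick out the $(i,j)$ entry of $M$. A small check, with a hypothetical matrix:

```python
import numpy as np

# A hypothetical 2x2 matrix M.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Standard basis kets |0> and |1>.
u = np.array([1.0, 0.0])  # |u> = |0>
v = np.array([0.0, 1.0])  # |v> = |1>

# <u|M|v> picks out the (0, 1) entry of M.
element = u.conj() @ M @ v
print(element)  # 2.0
```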
In quantum mechanics, to each observable $O$ there corresponds an operator $\widehat{O}$ such that in the pure state $|\psi\rangle$ the expectation of $O$ is $\langle\psi|\widehat{O}|\psi\rangle$. When the state is clear from context, we abbreviate this as $\langle\widehat{O}\rangle$.
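A minimal sketch of the expectation formula, using an example not from the text: take $\widehat{O}$ to be the Pauli-$Z$ operator and $|\psi\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$, for which the two eigenvalues $\pm 1$ are equally likely and the expectation is $0$:

```python
import numpy as np

# Pauli-Z as an example operator O_hat (eigenvalues +1 and -1).
O_hat = np.array([[1.0, 0.0],
                  [0.0, -1.0]])

# |psi> = (|0> + |1>)/sqrt(2): equal weight on both eigenstates.
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# <psi|O_hat|psi>: the expectation of O in state |psi>.
expectation = psi.conj() @ O_hat @ psi
print(expectation)  # 0.0
```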
Even in classical probability, we can express the covariance of two random variables as an inner product on a vector space: namely, a quotient space of equivalence classes of finite-variance random variables. I go through the details here. Then $A,\,B$ have covariance $\langle AB\rangle-\langle A\rangle\langle B\rangle$, and the second term vanishes if at least one of $A,\,B$ has mean $0$.
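An empirical version of this identity, with simulated data standing in for the random variables (the variables and seed are illustrative): centering the variables kills the $\langle A\rangle\langle B\rangle$ term, leaving covariance as a plain inner product of mean-zero variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
A = rng.normal(size=n)
B = A + rng.normal(size=n)  # correlated with A by construction

# Empirical <AB> - <A><B>.
cov = (A * B).mean() - A.mean() * B.mean()

# After centering, the second term is zero and covariance is the
# inner product <AB> alone, here an average of products.
A0, B0 = A - A.mean(), B - B.mean()
cov_centered = (A0 * B0).mean()
print(np.isclose(cov, cov_centered))  # True
```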
If the variables each have variance $1$, their covariance is also their correlation. Autocorrelation, the correlation of a sequence with a shifted copy of itself, is one special case.
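The variance-$1$ claim can be checked by standardizing simulated variables (again illustrative data, not from the text): after dividing out the standard deviations, the average of products agrees with the usual correlation coefficient, and applying the same recipe to a series and its lagged copy gives an autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(size=n)

def standardize(v):
    """Shift to mean 0 and scale to variance 1."""
    return (v - v.mean()) / v.std()

a = standardize(x)
b = standardize(x + rng.normal(size=n))

# For unit-variance variables, covariance (average of products
# of the centered variables) equals the correlation coefficient.
corr = (a * b).mean()
print(np.isclose(corr, np.corrcoef(a, b)[0, 1]))  # True

# Autocorrelation at lag 1: correlate x with a shifted copy of itself.
autocorr = (standardize(x[:-1]) * standardize(x[1:])).mean()
```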