
Let's assume I have a $2$-dimensional vector space with an inner product, and a basis in which the inner product can be represented as $$ \begin{bmatrix}x &y\end{bmatrix} \begin{bmatrix}E &F\\F &G\end{bmatrix} \begin{bmatrix}x\\y\end{bmatrix}.$$ In this basis, I express an orthonormal basis $\{e_1;e_2\}$ as $\{(e_{1x},e_{1y});(e_{2x},e_{2y})\}$.

I have the "suspicion", and I could not find any counterexample, that the inverse of the matrix above can be written as $$ \begin{bmatrix} e_{1x}^2+e_{2x}^2 & e_{1x}e_{1y}+e_{2x}e_{2y}\\ e_{1x}e_{1y}+e_{2x}e_{2y} & e_{1y}^2+e_{2y}^2 \end{bmatrix} $$ for any orthonormal basis $\{e_1;e_2\}$.

I tried to set up the equalities that follow from the orthogonality and normality of the various vectors and arrange them into an identity matrix, but I am stuck there and I can't find any way to extract an inverse of my $EFFG$ matrix. I feel there may be a straightforward way to prove this (or I am wrong upfront).

What could be a method to prove this? Also, how would this extend to $n>2$?
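As a quick numerical sanity check of the suspicion (not a proof), one can generate a random positive-definite matrix $\begin{bmatrix}E &F\\F &G\end{bmatrix}$ and a basis orthonormal with respect to it. Assuming NumPy, and using the (standard) fact that for $A = LL^T$ (Cholesky) the columns of $(L^T)^{-1}Q$ are $A$-orthonormal for any ordinary orthogonal $Q$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric positive-definite inner-product matrix [[E, F], [F, G]].
X = rng.standard_normal((2, 2))
A = X.T @ X + 2 * np.eye(2)

# Build a basis orthonormal w.r.t. A: if A = L L^T (Cholesky), the columns
# of (L^T)^{-1} Q are A-orthonormal for any ordinary orthogonal matrix Q.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
L = np.linalg.cholesky(A)
P = np.linalg.solve(L.T, Q)      # columns of P are e_1 and e_2

# Orthonormality w.r.t. the inner product A.
assert np.allclose(P.T @ A @ P, np.eye(2))

# The suspicion: the matrix of sums e_{1x}^2 + e_{2x}^2, etc. is exactly
# P P^T, and it should equal the inverse of A.
assert np.allclose(np.linalg.inv(A), P @ P.T)
```

The check passes for every seed and rotation angle tried, which is consistent with the conjecture.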

Thanks.

2 Answers


There might be a more clever way, but I've solved it with coordinates. Let's work with column vectors as a default, in $\mathbb R^n.$ Let $A$ be the matrix for the inner product in the canonical basis (in your example, $\begin{pmatrix} E & F \\ F & G \end{pmatrix}$), and let $g$ be the inner product itself. So for any $u, v \in \mathbb R^n,$ $$g(u, v) = u^TAv.$$ If you have an orthonormal basis $B = \{ e_1, \dots, e_n \}$ and the change of basis from $B$ to the canonical basis is given by the matrix $$P = \begin{pmatrix}e_{11} & \cdots & e_{n1} \\ \vdots & \ddots & \vdots \\ e_{1n} & \cdots & e_{nn} \end{pmatrix},$$ then $$g(u, v) = u_B^T P^TAPv_B,$$ where $v_B$ is the vector $v$ written in the basis $B$. In particular, $$\delta_{ij} = g(e_i, e_j) = (P^TAP)_{ij},$$ so $P^TAP = I.$ Inverting, $P^{-1}A^{-1}(P^T)^{-1} = I,$ so $$A^{-1} = PP^T.$$ Finally, $$(A^{-1})_{ij} = (PP^T)_{ij} = \sum_{k=1}^ne_{ki}e_{kj},$$ which is the generalized version of your suspicion.
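The identity $A^{-1} = PP^T$ can be checked numerically in any dimension. A minimal sketch, assuming NumPy, with a $g$-orthonormal basis manufactured from a Cholesky factor of a random symmetric positive-definite $A$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Random symmetric positive-definite matrix A for g(u, v) = u^T A v.
X = rng.standard_normal((n, n))
A = X.T @ X + n * np.eye(n)

# One way to build a change-of-basis matrix P with P^T A P = I:
# take A = L L^T (Cholesky) and set P = (L^T)^{-1} Q for orthogonal Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
L = np.linalg.cholesky(A)
P = np.linalg.solve(L.T, Q)

assert np.allclose(P.T @ A @ P, np.eye(n))     # delta_ij = (P^T A P)_ij
assert np.allclose(np.linalg.inv(A), P @ P.T)  # A^{-1} = P P^T
```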

Keplerto
  • I understand $\delta_{ij}=g(e_i,e_j)$, but I don't know why $\delta_{ij}=(P^TAP)_{ij}$.
    What if $A = \begin{bmatrix}2&0\\0&1\end{bmatrix}$ and $P = I$? Then
    $\delta_{ij}\ne(P^TAP)_{ij}$.
    – Alex May 24 '23 at 16:20
  • I was using the general equation above for $g(u, v)$, since $$(e_j)_B = \begin{pmatrix}0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix}$$ has a $1$ in position $j$, and similarly $(e_i)_B^T$ is the row vector with a $1$ in position $i$. In your example, if $A$ is that matrix then $P$ cannot be the identity matrix, because the canonical basis is not orthonormal in that case. – Keplerto May 24 '23 at 21:21

Let $e_1=\begin{bmatrix}A\\B\end{bmatrix}$ and $e_2=\begin{bmatrix}C\\D\end{bmatrix}$

By orthonormality of $e_1,e_2$

$e_1^T\begin{bmatrix}E&F\\F&G\end{bmatrix}e_2=0$

$e_2^T\begin{bmatrix}E&F\\F&G\end{bmatrix}e_1=0$

$e_1^T\begin{bmatrix}E&F\\F&G\end{bmatrix}e_1=1$

$e_2^T\begin{bmatrix}E&F\\F&G\end{bmatrix}e_2=1$

And therefore, respectively:

$ACE+BCF+AFD+BGD=0$

$CEA+DFA+CFB+DGB=0$

$AEA+AFB+BFA+BGB=1$

$CEC+CFD+DFC+DGD=1$

And since the first two equations are identical, we have:

$\begin{bmatrix}AC&AD+BC&BD\\A^2&2AB&B^2\\C^2&2CD&D^2\end{bmatrix}\begin{bmatrix}E\\F\\G\end{bmatrix}=\begin{bmatrix}0\\1\\1\end{bmatrix}$

$M\begin{bmatrix}E\\F\\G\end{bmatrix}=\begin{bmatrix}0\\1\\1\end{bmatrix}$

$\begin{bmatrix}E\\F\\G\end{bmatrix}=M^{-1}\begin{bmatrix}0\\1\\1\end{bmatrix} = \begin{bmatrix}\frac{B^2+D^2}{(BC-AD)^2}\\\frac{-AB-CD}{(BC-AD)^2}\\\frac{A^2+C^2}{(BC-AD)^2}\end{bmatrix}$

So

$E = \frac{B^2+D^2}{(BC-AD)^2}$

$F = \frac{-AB-CD}{(BC-AD)^2}$

$G = \frac{A^2+C^2}{(BC-AD)^2}$

$\begin{bmatrix}E&F\\F&G\end{bmatrix}^{-1} = \begin{bmatrix}A^2+C^2&AB+CD\\AB+CD&B^2+D^2\end{bmatrix} = \begin{bmatrix}e_{1x}^2+e_{2x}^2&e_{1x}e_{1y}+e_{2x}e_{2y}\\e_{1x}e_{1y}+e_{2x}e_{2y}&e_{1y}^2+e_{2y}^2\end{bmatrix}$
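The algebra above can be checked numerically. A sketch assuming NumPy: for random $A, B, C, D$, solving $M\begin{bmatrix}E\\F\\G\end{bmatrix}=\begin{bmatrix}0\\1\\1\end{bmatrix}$ reproduces the closed forms, and inverting $\begin{bmatrix}E&F\\F&G\end{bmatrix}$ recovers the claimed matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c, d = rng.standard_normal(4)  # components of e_1 = (a, b), e_2 = (c, d)

# The linear system from the answer: M [E, F, G]^T = [0, 1, 1]^T.
M = np.array([[a*c,  a*d + b*c, b*d],
              [a**2, 2*a*b,     b**2],
              [c**2, 2*c*d,     d**2]])
E, F, G = np.linalg.solve(M, np.array([0.0, 1.0, 1.0]))

# The closed forms derived above.
det2 = (b*c - a*d)**2
assert np.allclose([E, F, G],
                   [(b**2 + d**2)/det2, (-a*b - c*d)/det2, (a**2 + c**2)/det2])

# Inverting [[E, F], [F, G]] gives the conjectured matrix of component sums.
K = np.array([[E, F], [F, G]])
target = np.array([[a**2 + c**2, a*b + c*d],
                   [a*b + c*d,   b**2 + d**2]])
assert np.allclose(np.linalg.inv(K), target)
```

Note that the identity holds for any $a, b, c, d$ with $ad \ne bc$: the system forces $(e_1, e_2)$ to be orthonormal with respect to the resulting $\begin{bmatrix}E&F\\F&G\end{bmatrix}$, whatever the components are.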

Alex