1

For a symmetric positive-definite matrix $A=\begin{bmatrix} a & b\\ b & c\\ \end{bmatrix}$ with $a\geq c$ and eigenvalues $\lambda_1\geq \lambda_2 > 0$, can we say that the Cholesky factorization with a lower-triangular factor is the same as the QR factorization?

Actually, I want to prove that $A_k$ in the following iteration converges to $\operatorname{diag}(\lambda_1,\lambda_2)$: for $k=1,2,\dots$: $A_{k-1}=G_kG^{T}_k$; $A_k=G^{T}_k G_k$; end. I wanted to relate this to the power iteration, if that is the right approach. How should I proceed?

Elnaz
  • 629

1 Answer

1

No. The only orthogonal matrices that are also triangular are diagonal matrices with $\pm 1$ on the diagonal.

What you can do is compare the Cholesky decomposition, or the more general $LDL^T$ decomposition, with the LU decomposition. There the triangular factors differ only by a diagonal matrix.
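This relationship can be checked directly in the $2\times 2$ case; a sketch with a hypothetical SPD example (the LU factors are written out by hand, no pivoting needed since $a>0$):

```python
import numpy as np

# Hypothetical SPD example (not from the post): A = [[a, b], [b, c]]
a, b, c = 4.0, 1.0, 3.0
A = np.array([[a, b], [b, c]])

# LU factorization without pivoting for the 2x2 case:
L = np.array([[1.0, 0.0], [b / a, 1.0]])       # unit lower triangular
U = np.array([[a, b], [0.0, c - b * b / a]])   # upper triangular
assert np.allclose(L @ U, A)

# The Cholesky factor is the same L, rescaled by sqrt(diag(U)):
D = np.diag(np.sqrt(np.diag(U)))
G = L @ D
assert np.allclose(G @ G.T, A)
assert np.allclose(G, np.linalg.cholesky(A))
```

So the Cholesky and LU triangular factors indeed differ only by the diagonal matrix $D=\operatorname{diag}(U)^{1/2}$.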


Here you can directly calculate the QR decomposition, as $Q$ in the Givens rotation variant is $$ Q=\frac1{\sqrt{a^2+b^2}} \begin{bmatrix} a & -b\\ b & a\\ \end{bmatrix} $$ while in the Householder reflection variant it would be $$ Q=\frac1{\sqrt{a^2+b^2}} \begin{bmatrix} a & b\\ b & -a\\ \end{bmatrix} $$
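One can verify numerically that the Givens $Q$ above is orthogonal and yields an upper-triangular $R$, and that this $R$ differs from the transposed Cholesky factor; a sketch with a hypothetical SPD example:

```python
import numpy as np

a, b, c = 4.0, 1.0, 3.0   # hypothetical SPD example, a >= c
A = np.array([[a, b], [b, c]])
r = np.hypot(a, b)         # sqrt(a^2 + b^2)

Q = np.array([[a, -b], [b, a]]) / r   # Givens-rotation Q from the answer
R = Q.T @ A                            # resulting upper-triangular factor

assert np.allclose(Q.T @ Q, np.eye(2))   # Q is orthogonal
assert abs(R[1, 0]) < 1e-12              # R is upper triangular
assert np.allclose(Q @ R, A)             # A = Q R

G = np.linalg.cholesky(A)                # A = G G^T
print(np.allclose(R, G.T))               # QR and Cholesky factors differ
```

Since $G$ is not orthogonal, $R\neq G^{T}$ in general, which is the point of the answer.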

Lutz Lehmann
  • 126,666
  • actually I want to prove that $A_k$ in the following iteration converges to $\operatorname{diag}(\lambda_1, \lambda_2)$: for $k=1,2,\dots$: $A_{k-1} = G_kG^{T}_k$; $A_k=G_k^{T} G_k$; end. I wanted to relate this to the power iteration ...How to proceed? – Elnaz Apr 22 '14 at 21:46
  • This is the LR iteration, the historical precursor of the QR iteration. It may lead to runaway components in $A_k$, numerical underflow as well as overflow. There are approaches to stabilize this using rescaling steps with diagonal matrices, the search term would be "GR iteration". – Lutz Lehmann Apr 22 '14 at 21:50
  • Can't we show the convergence by somehow relating this to the power iteration and the Schur decomposition? – Elnaz Apr 22 '14 at 21:56
  • The best I see at the moment is $A_0^m=G_1G_2...G_m\cdot G_m^T...G_2^TG_1^T$. But yes, relating this to power and inverse power iteration is what is done in the QR case, it should also work here. – Lutz Lehmann Apr 22 '14 at 22:08
  • I'm sorry, but I'm not following $A_0^{m} = G_1G_2...G_2^{T}G_1^{T}$? Does this relate LR to power iteration? – Elnaz Apr 22 '14 at 22:15
  • I see that $A_0^{T}A_0 = A_0^{2} = G_1A_1G_1^{T}$, suggesting that $A_0^{T}A_0$ and $A_1$ are similar. But the eigenvalues of $A_0^{T}A_0$ are the squares of those of $A_0$. So how does this work? – Elnaz Apr 22 '14 at 22:21
  • If you can get it, read http://www.sciencedirect.com/science/article/pii/002437959190004G and/or http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.1570 . For me it says "guest status", whatever that means. Watkins has written several very nice survey articles, and apparently also a book, on QR and related methods. – Lutz Lehmann Apr 22 '14 at 22:24
  • Can you please extend or amend your question to include your real question? Or make a new question out of it, including insights from these comments, and link it in both directions. -- One obvious identity is $A_kG_k^T=G_k^TA_{k-1}$, so that $A_k=G_k^TA_{k-1}G_k^{-T}\;(=G_k^{-1}A_{k-1}G_k)$. – Lutz Lehmann Apr 22 '14 at 22:35
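The identities discussed in these comments can be checked numerically; a minimal sketch with a hypothetical SPD example (not from the thread):

```python
import numpy as np

A0 = np.array([[4.0, 1.0], [1.0, 3.0]])  # hypothetical SPD example

# One step: A_0 = G_1 G_1^T, then A_1 = G_1^T G_1
G1 = np.linalg.cholesky(A0)
A1 = G1.T @ G1

# Similarity identity: A_1 = G_1^{-1} A_0 G_1, so eigenvalues are preserved
assert np.allclose(A1, np.linalg.solve(G1, A0) @ G1)

# Power identity from the comments: A_0^2 = G_1 A_1 G_1^T = (G_1 G_2)(G_1 G_2)^T
G2 = np.linalg.cholesky(A1)
assert np.allclose(A0 @ A0, (G1 @ G2) @ (G1 @ G2).T)
```

The second identity extends by induction to $A_0^m = G_1\cdots G_m\, G_m^T\cdots G_1^T$, which is the link to power iteration mentioned above.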