
This is very similar to, but different from, the following question: Limit of matrix inverse: $\lim_{\lambda \to \infty} (A + \lambda I)^{-1} = \mathbf{0}$?

I would like to know how to find the limit of $(A+tJ)^{-1}$ as $t$ tends to infinity. Here $A$ is an arbitrary square matrix, and $J$ is a matrix of "all ones" the same size as $A$.

Using the fact that the inverse of a matrix is its adjugate divided by its determinant, or $$(A+tJ)^{-1} = \frac{(A+tJ)^*}{|A+tJ|},$$ I've found that both the numerator and the denominator are linear functions of $t$ (because $J$ has rank 1), i.e. $\displaystyle \frac{(A+tJ)^*}{|A+tJ|} = \frac{A^* + tP}{|A| + tq}$. As long as $q \ne 0$, the limit should be $P/q$.

For example, in the case of a 2x2 matrix $A = \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]$, I have: $$ \lim_{t \rightarrow \infty} (A+tJ)^{-1} = \lim_{t \rightarrow \infty} \frac{\left[ \begin{array}{cc} d & -b \\ -c & a \end{array} \right] + t \left[ \begin{array}{cc} 1 & -1 \\ -1 & 1 \end{array} \right]}{(ad-bc) + t(a+d-b-c)} = \frac{1}{a+d-b-c} \left[ \begin{array}{cc} 1 & -1 \\ -1 & 1 \end{array} \right] $$ (provided that $a+d-b-c \ne 0$)
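This $2\times 2$ limit is easy to check numerically; here is a small sketch using NumPy, with arbitrarily chosen entries satisfying $a+d-b-c \ne 0$:

```python
import numpy as np

# Arbitrary 2x2 example with a + d - b - c = 2 + 7 - 3 - 5 = 1 != 0
a, b, c, d = 2.0, 3.0, 5.0, 7.0
A = np.array([[a, b], [c, d]])
J = np.ones((2, 2))

t = 1e8  # a large t approximates the limit
approx = np.linalg.inv(A + t * J)

# Claimed limit: [[1, -1], [-1, 1]] / (a + d - b - c)
limit = np.array([[1.0, -1.0], [-1.0, 1.0]]) / (a + d - b - c)
print(np.allclose(approx, limit, atol=1e-6))  # → True
```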

But in general, is there a concise way to express the limit in terms of $A$?

Maigo

3 Answers


Let $j$ denote the column vector of ones with as many rows as $A$, so that $J=jj^T$.

By the Sherman-Morrison formula, $(A+tJ)^{-1}=A^{-1}-\frac{A^{-1}tjj^TA^{-1}}{1+tj^TA^{-1}j}$ $=A^{-1}-\frac{A^{-1}jj^TA^{-1}}{\frac{1}{t}+j^TA^{-1}j}$.

As $t \to \infty$, the $\frac{1}{t}$ term vanishes, so $$\lim_{t\to\infty}(A+tJ)^{-1}=A^{-1}-\frac{A^{-1}jj^TA^{-1}}{j^TA^{-1}j},$$ provided $j^TA^{-1}j \ne 0$.
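A quick numerical sketch of this Sherman-Morrison limit, using a concrete matrix chosen so that $A$ is nonsingular and $j^TA^{-1}j \ne 0$:

```python
import numpy as np

# A concrete 4x4 circulant A (diagonally dominant, so A and A + tJ behave well);
# here j^T A^{-1} j = 0.8 != 0, since A @ (0.2 * ones) = ones.
A = np.array([[4., 1., 0., 0.],
              [0., 4., 1., 0.],
              [0., 0., 4., 1.],
              [1., 0., 0., 4.]])
n = A.shape[0]
j = np.ones((n, 1))

Ainv = np.linalg.inv(A)
# Limit from Sherman-Morrison: the 1/t term in the denominator vanishes
limit = Ainv - (Ainv @ j @ j.T @ Ainv) / (j.T @ Ainv @ j)

t = 1e7
approx = np.linalg.inv(A + t * (j @ j.T))
print(np.allclose(approx, limit, atol=1e-6))  # → True
```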


This is correct if and only if $J$ is a full-rank matrix. You can prove it by contradiction.

Assume there is a vector $\mathbb x$ with bounded norm (e.g. $||\mathbb x|| \le 1$) such that $(A + tJ)^{-1}\mathbb x=\mathbb z$ is non-zero.

That also means that $(A + tJ)\mathbb z = \mathbb x$; but since $J$ is full rank and $\mathbb z$ is non-zero, $J\mathbb z$ is non-zero, so $tJ\mathbb z$ is unbounded as $t \to \infty$, contradicting the boundedness of $\mathbb x$.

On the other hand, if $J$ is not full rank, there exists a non-zero $\mathbb z$ with $J\mathbb z = \mathbb 0$. In that case $(A+tJ)\mathbb z = A\mathbb z = \mathbb x$ is independent of $t$ and bounded, so $(A+tJ)^{-1}\mathbb x = \mathbb z \ne \mathbb 0$ for every $t$; it follows that $(A + tJ)^{-1} \not\to \mathbb 0$.

Bob

Consider the slightly more general problem about the existence of $\lim_{t\to\infty}(A+tuv^T)^{-1}$, where $u$ and $v$ are nonzero vectors.

When $A$ is nonsingular, apply the Sherman-Morrison formula to obtain $$ (A+tuv^T)^{-1} =A^{-1}-\frac{t}{1+tv^TA^{-1}u}A^{-1}uv^TA^{-1}. $$ Therefore, the limit exists if and only if $v^TA^{-1}u\ne0$ (otherwise the expression reduces to $A^{-1}-tA^{-1}uv^TA^{-1}$, which is unbounded), and the limiting value is $$ \lim_{t\to\infty}(A+tuv^T)^{-1} =A^{-1}-\frac{1}{v^TA^{-1}u}A^{-1}uv^TA^{-1}.\tag{1} $$
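Formula $(1)$ can be sanity-checked numerically; here is a sketch with a concrete nonsingular $A$ and vectors $u,v$ chosen so that $v^TA^{-1}u \ne 0$:

```python
import numpy as np

# Concrete nonsingular A and vectors u, v with v^T A^{-1} u = 5/6 != 0
A = np.array([[3., 1.],
              [0., 2.]])
u = np.array([[1.], [1.]])
v = np.array([[2.], [1.]])

Ainv = np.linalg.inv(A)
s = (v.T @ Ainv @ u).item()  # v^T A^{-1} u
limit = Ainv - (Ainv @ u @ v.T @ Ainv) / s  # formula (1)

t = 1e8
approx = np.linalg.inv(A + t * (u @ v.T))
print(np.allclose(approx, limit, atol=1e-6))  # → True
```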

When $A$ is singular, in order that $A+tuv^T$ be nonsingular for large $t$, the rank of $A$ must be $n-1$ (where $A$ is $n\times n$) and $u,v$ must lie outside the column space and row space of $A$ respectively. When this is the case, $A+tuv^T$ is nonsingular for every $t\ne0$. In particular, $B=A+uv^T$ is nonsingular. Moreover $v^TB^{-1}u=1$: writing $w=B^{-1}u$, we have $Aw+(v^Tw)u=u$, i.e. $Aw=(1-v^Tw)u$; since $u$ lies outside the column space of $A$, this forces $v^Tw=1$. Therefore by $(1)$, $$ \lim_{t\to\infty}(A+tuv^T)^{-1} =\lim_{t\to\infty}(B+tuv^T)^{-1} =B^{-1}-B^{-1}uv^TB^{-1} =B^{-1}AB^{-1}. $$
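The singular case can also be checked numerically; here is a sketch with an example constructed by hand: $A$ has rank $n-1=2$, $u=e_3$ lies outside the column space of $A$, and $v$ has a component outside the row space:

```python
import numpy as np

# A has rank 2; its column and row spaces are span(e1, e2)
A = np.diag([1.0, 2.0, 0.0])
u = np.array([[0.0], [0.0], [1.0]])  # e3, outside the column space
v = np.array([[1.0], [1.0], [1.0]])  # has an e3 component, outside the row space

B = A + u @ v.T
Binv = np.linalg.inv(B)
print((v.T @ Binv @ u).item())  # v^T B^{-1} u = 1, as claimed

limit = Binv @ A @ Binv
t = 1e8
approx = np.linalg.inv(A + t * (u @ v.T))
print(np.allclose(approx, limit, atol=1e-6))  # → True
```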

user1551