6

I have a matrix $M \in \mathbb{R}^{n \times n}$ whose columns are linearly independent. Hence, $M$ is invertible.

How can this conclusion be extended to the case where $n$ is infinite?

Specifically, given $n\in\mathbb{N}$, let $X$ and $Y$ be Banach spaces, and let $x\in X$ and $y \in Y$ satisfy \begin{align} y = M x. \end{align}

What conditions do I need to conclude that $M$ is a bounded invertible linear operator?

P.S. If $n$ is finite, it seems that the conclusion holds when $M$ has full rank and bounded matrix norm. If $n$ is infinite, what arguments can I use?

lulu
  • 101
  • How do you extend your definitions to that case, and how infinite is $n$? Assuming $n=\Bbb N$, the product of two such matrices is still not always well-defined, take for instance the matrix whose entries are all $1$ and multiply it with itself. With multiplication somewhat undefined, I just feel a little uneasy about the term "invertible". – Jesko Hüttenhain May 03 '20 at 09:13
  • @JeskoHüttenhain Thanks for your comments. I have updated my question. Let $n\in\mathbb{N}$. In my problem, the matrix $M$ has full rank and bounded norm for any finite $n$. I wonder how to extend this property to the case $n\rightarrow\infty$. Could you please provide any hint on this? – lulu May 03 '20 at 15:27
  • 1
    As far as I can tell, the "infinite matrix" representation of a linear operator is not that popular, especially in non-Hilbert contexts. There are many technicalities to address, as Jesko rightfully points out. Anyway, I remember that I have seen some information on this point of view in the book "Basic operator theory" by Gohberg and Goldberg. – Giuseppe Negro May 06 '20 at 15:35

3 Answers

3

A bounded linear map $T:X\to Y$ between Banach spaces $X,Y$ is invertible (in the sense that there exists a bounded inverse) if and only if${}^1$ $T$ is bounded from below and its image is dense in $Y$. Actually, the boundedness assumption on $T$ can be dropped if "dense image" is upgraded to "surjective": a bounded-below surjection has a bounded inverse, and then $T=(T^{-1})^{-1}$ is itself bounded by the bounded inverse theorem. So we can restate the criterion:

A linear map $T:X\to Y$ between Banach spaces $X,Y$ is invertible, with $T$ and $T^{-1}$ automatically bounded, if $T$ is bounded from below and surjective.
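For reference, "bounded from below" is meant here in the standard sense: there is a constant $c>0$ such that $$ \|Tx\|_Y \;\geq\; c\,\|x\|_X \qquad\text{for all }x\in X\,. $$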

Now if $X,Y$ even have Schauder bases, so that $T$ can be identified with a countably infinite matrix $M_T$ (containing the corresponding basis expansion coefficients), then this can, at least partially, be translated into information given by $M_T$: the image of $T$ is dense if and only if the columns of $M_T$ span a dense subset of $Y$, which is the best generalization of "the columns have to be linearly independent" I can think of.
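For concreteness, the identification reads as follows (a sketch, assuming Schauder bases $(e_j)_{j\in\mathbb N}$ of $X$ and $(f_i)_{i\in\mathbb N}$ of $Y$): $$ Te_j=\sum_{i\in\mathbb N}(M_T)_{ij}\,f_i\qquad\text{for all }j\in\mathbb N\,, $$ that is, the $j$-th column of $M_T$ collects the expansion coefficients of $Te_j$ in the basis of $Y$.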

Roughly speaking, boundedness from below ensures injectivity, and density of the image handles surjectivity. Be aware that boundedness from below also guarantees boundedness of $T^{-1}$ (and, combined with surjectivity, boundedness of $T$ itself), something not present in usual linear algebra, as every linear map between finite-dimensional spaces is automatically bounded.
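The bound on the inverse is a one-line computation (assuming $T$ is bijective and bounded from below by $c>0$): for $y=Tx$ one has $$ \|T^{-1}y\|=\|x\|\;\leq\;\tfrac1c\,\|Tx\|=\tfrac1c\,\|y\|\,, $$ so $\|T^{-1}\|\leq 1/c$.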

To conclude, let's give an example which shows that boundedness and linear independence of the columns are not sufficient beyond finite dimensions: let $\ell^2(\mathbb N)$ be the Hilbert space of all square-summable sequences with standard basis $(e_n)_{n\in\mathbb N}$, $e_n=(\delta_{jn})_{j\in\mathbb N}$, and consider the right shift $T:\ell^2\to\ell^2$, $e_n\mapsto e_{n+1}$. This operator is an isometry ($\|Tx\|=\|x\|$, thus bounded: $\|T\|=1$) and the corresponding matrix in this basis is of the form $$ M_T=\begin{pmatrix} 0&0&0&0&\cdots\\1&0&0&0&\cdots\\0&1&0&0&\cdots\\0&0&1&0&\cdots\\\vdots&\vdots&\vdots&\vdots&\ddots \end{pmatrix}\,. $$ The columns are linearly independent (even orthonormal), so $T$ is injective, which is also reflected in boundedness from below (it holds because $T$ is an isometry). However, the span of the columns is not dense, as $e_1\not\in\overline{\operatorname{im}(T)}$, so surjectivity fails. While in finite dimensions the rank-nullity theorem saves you from such situations, as there "no kernel" means "full image", in infinite dimensions you are out of luck.
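A compact way to phrase this failure (standard facts about the shift, added only for completeness): the left shift $S:\ell^2\to\ell^2$ with $Se_1=0$ and $Se_{n+1}=e_n$ satisfies $$ ST=\operatorname{id}_{\ell^2}\qquad\text{but}\qquad TS\neq\operatorname{id}_{\ell^2}\quad(\text{since }TSe_1=0)\,, $$ so $T$ has a bounded left inverse but no two-sided bounded inverse.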


${}^1$: The linked math.SE question asks about the case where $X,Y$ are Hilbert spaces, but the proof at no point uses the existence of an inner product (just that $X,Y$ are complete normed spaces), so the result holds for arbitrary Banach spaces.

Frederik vom Ende
  • 4,345
1

There are multiple ways to interpret your question. I can think of two:

(1) Saying a matrix $M$ is invertible is equivalent (or by definition, depending on your definition) to saying that there exists a matrix $M'$ such that $MM'=I$ and $M'M=I$, where $I$ is the identity matrix, i.e. the diagonal matrix with only $1$'s on the diagonal. It is possible to define the multiplication of infinite-dimensional matrices analogously to the finite case (although the sums may not converge), and we can also define $I$. Therefore, we can say that an infinite-dimensional matrix $M$ is invertible if there exists $M'$ such that $MM'$ and $M'M$ are well-defined and both equal to $I$.
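For instance (a simple illustrative example): take the lower bidiagonal "difference" matrix $M$ and the lower triangular all-ones "partial sum" matrix $M'$, $$ M=\begin{pmatrix}1&0&0&\cdots\\-1&1&0&\cdots\\0&-1&1&\cdots\\\vdots&\ddots&\ddots&\ddots\end{pmatrix},\qquad M'=\begin{pmatrix}1&0&0&\cdots\\1&1&0&\cdots\\1&1&1&\cdots\\\vdots&\vdots&\vdots&\ddots\end{pmatrix}. $$ Both are lower triangular, so every entry of $MM'$ and $M'M$ is a finite sum, and a short computation gives $MM'=M'M=I$; in this sense $M$ is invertible as an infinite matrix.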

(2) If you want to stick to the idea of linearly independent vectors, there exist infinite-dimensional vector spaces, and the concept of linear independence still makes sense there. For example, $\Bbb R^{\Bbb N}$, the set of real-valued sequences, is a vector space with the zero vector $(0,0,0,\ldots)$ and term-wise sum. We can see that the vectors $a_n=(0,\ldots,0,1,1,\ldots)$, which are $0$ up to the $n^\text{th}$ term and $1$ afterwards, are linearly independent. Therefore, the infinite-dimensional matrix $M$ whose $n^\text{th}$ column is exactly $a_n$ has linearly independent columns, and so we can consider $M$ to be invertible.
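Written out (following the description above), this $M$ is the strictly lower triangular all-ones matrix $$ M=\begin{pmatrix}0&0&0&\cdots\\1&0&0&\cdots\\1&1&0&\cdots\\\vdots&\vdots&\vdots&\ddots\end{pmatrix}, $$ whose $(i,n)$ entry is $1$ precisely when $i>n$.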

I'm not sure if these concepts are equivalent, and in general infinite-dimensional matrices aren't the best way to study infinite-dimensional vector spaces, but I hope I gave you some ideas on what it might entail :)

Isaac Ren
  • 1,446
  • Thanks. Your comments help a lot. As for the example in Item (2), I wonder how you can conclude that $M$ is invertible? Is there any reference on this? What I am actually interested in is how to prove that the infinite-dimensional $M$ is a bounded invertible operator. I have edited the question. Do you have any clue on this? Thanks a lot. – lulu May 03 '20 at 15:09
  • Unfortunately, my knowledge of infinite-dimensional operators is quite limited, so I don't know of any references. Maybe this question will help, at least for invertibility? https://math.stackexchange.com/questions/2339436/show-that-the-operator-is-invertible – Isaac Ren May 03 '20 at 15:14
  • Thanks. I will check :). – lulu May 03 '20 at 15:28
0

Extending one aspect of @IsaacRen's answer:

  • If the matrix $M$ is lower triangular, and its inverse $M'$ is lower triangular as well (with $M \cdot M'=I$), then it is easier to extend this to the case of infinite size.
    For instance, the lower triangular Pascal (binomial) matrix, whether finite or infinite, is invertible this way; see the identity below. (Note: I once read that the inverse in the infinite case should be called the "reciprocal" instead, but I don't have the reference at hand; maybe it is also mentioned on Wikipedia.)
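    To make this concrete (a standard identity; rows and columns indexed from $0$): $$ P=\left(\binom{i}{j}\right)_{i,j\ge 0},\qquad P^{-1}=\left((-1)^{i-j}\binom{i}{j}\right)_{i,j\ge 0}\,, $$ and because both matrices are lower triangular, every entry of $P\cdot P^{-1}$ and $P^{-1}\cdot P$ is a finite sum, whether the matrices are finite or infinite.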

And more:

  • For the case of infinite size, the terms "row-finite" (each row has only finitely many nonzero entries) and "column-finite" (likewise for columns) have been introduced; the lower and the upper triangular matrices are special cases of these.

  • There are also interesting special cases: the matrix $S2$ of the Stirling numbers of the second kind and the matrix $S1$ of the Stirling numbers of the first kind are "reciprocals" of each other, such that $S1 \cdot S2 = S2 \cdot S1 = I$. But there are also non-triangular variants $S1_k$ possible such that still $S1_k \cdot S2 = I$.
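    Written out entrywise (using the signed Stirling numbers of the first kind $s(i,k)$, one common convention for $S1$, and the Stirling numbers of the second kind $S(k,j)$): $$ \sum_{k} s(i,k)\,S(k,j)\;=\;\sum_{k} S(i,k)\,s(k,j)\;=\;\delta_{ij}\,, $$ where the sums are finite because both matrices are lower triangular.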

  • For the analysis of whether a square infinite matrix is invertible, I have always used the $LDU$ decomposition $M = L \cdot D \cdot U$, where $L$ is lower triangular, $D$ is diagonal and $U$ is upper triangular. If all three factors are invertible, one can then check, based on the analytic description of the entries of $L^{-1},D^{-1},U^{-1}$, whether all occurring dot products in $(U^{-1} \cdot D^{-1}) \cdot L^{-1}$ are convergent or at least "summable" (in the sense of summation of divergent series); the entrywise formula is written out below.

    For instance, the infinite square matrix $$ V = \small \begin{bmatrix} 1&0&0&0&\cdots \\ 1&1&1&1&\cdots \\ 1&2&4&8&\cdots \\ 1&3&9&27&\cdots \\ \vdots \end{bmatrix} $$ can be decomposed into $LDU$ components, and all three components are invertible, but in the reversed product of the inverted components every dot product of a row with a column diverges.
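    To make the convergence issue explicit (notation as above): formally $$ \big(M^{-1}\big)_{ij}=\big(U^{-1}D^{-1}L^{-1}\big)_{ij}=\sum_{k}\big(U^{-1}D^{-1}\big)_{ik}\,\big(L^{-1}\big)_{kj}\,, $$ and since $U^{-1}D^{-1}$ is upper triangular and $L^{-1}$ is lower triangular, the sum effectively runs over all $k\ge\max(i,j)$; it is a genuinely infinite series that has to be checked for convergence (or summability) entry by entry.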

  • A related consideration: when the dot products in $U^{-1} \cdot D^{-1} \cdot L^{-1}$ become divergent, you may be interested in this answer of mine on MSE.

Curiosity:

  • As a surprising finding, I once had a harmless-looking matrix $M$ of infinite size for which the analysis in the above $LDU$ sense, using summation procedures for the row-column dot products in $(U^{-1} \cdot D^{-1}) \cdot L^{-1}$, gave all zeros, so the inverse of $M$ would have to be the ZERO matrix! (See the essay at my homepage and the proof in an MO question.) Such unexpected effects are only possible with infinite matrices...