
When $T$ is any linear operator acting on a vector space $V$, and $n$ is a natural number, $T^n$ means $T$ applied $n$ times (composition), and that is also a linear operator. That much is clear.

When $T$ is a nonzero linear operator acting on a vector space $V$, then $T^0$ is the identity operator: $T^0 = I$. But I think that should also be true (true by definition) when $T$ is the zero operator, i.e. the operator which sends all vectors to the zero vector.

Why? Because $T^0$ means that we are not applying any operator. So it makes sense to say: OK, all vectors stay unchanged when "applying" $T^0$ even when $T$ is the zero operator. I say "applying" because we're not actually applying anything.

Is that indeed so?

I am asking because this kind of disagrees with what we have for real numbers where $0^0$ is usually left undefined.
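For what it's worth, numerical linear algebra libraries do adopt this convention. A quick illustration with NumPy's `matrix_power` (not part of the question, just a check of the convention):

```python
import numpy as np

# The zero operator on R^3 as a matrix.
Z = np.zeros((3, 3))

# NumPy defines the zeroth matrix power as the identity,
# even for the zero matrix -- matching the convention T^0 = I.
print(np.linalg.matrix_power(Z, 0))
```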

EDIT:
What's the context of this question? I was reading a proof of the uniqueness of the Jordan normal form. There the expression $2d(\phi^p) - d(\phi^{p-1}) - d(\phi^{p+1})$ comes up, where $p$ is a positive integer and $d$ denotes the defect of the linear operator in brackets. The proof is very nice but convoluted, and it eventually boils down to proving uniqueness for a special linear operator which has only $0$ as a characteristic root (as an eigenvalue). So I had some doubts about what exactly happens with the expression $\phi^{p-1}$ when $p = 1$, and whether we need to put some restrictions on the linear operator $\phi$.
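To see the expression in action numerically, here is a sketch with an assumed nilpotent $\phi$ (a single Jordan block, not the operator from the proof). For $p = 1$ it uses $\phi^0 = I$, whose defect is $0$, so no extra restriction on $\phi$ is needed:

```python
import numpy as np

def defect(A):
    # Defect (nullity) of an operator: dimension of its kernel.
    return A.shape[0] - np.linalg.matrix_rank(A)

def blocks_of_size(phi, p):
    # 2 d(phi^p) - d(phi^{p-1}) - d(phi^{p+1}); for p = 1 this
    # relies on phi^0 = I, which has defect 0.
    mp = np.linalg.matrix_power
    return 2 * defect(mp(phi, p)) - defect(mp(phi, p - 1)) - defect(mp(phi, p + 1))

# Assumed example: one nilpotent Jordan block of size 3.
phi = np.diag([1.0, 1.0], k=1)

print([blocks_of_size(phi, p) for p in (1, 2, 3)])  # [0, 0, 1]
```

The expression counts the Jordan blocks of size exactly $p$, which is why only $p = 3$ yields $1$ here.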

peter.petrov
  • For operators, $T^n$ is usually meant in the sense of function composition, therefore $T^0=I$ is fine. – Sassatelli Giulio Jun 26 '22 at 17:17
  • @SassatelliGiulio Yes, but is that a general widely accepted definition? – peter.petrov Jun 26 '22 at 17:22
  • Launch Windows Calculator and enter $0^0$, then tell us if it is undefined. – Anixx Jun 26 '22 at 17:27
  • Do not use Windows calculator as an authority on anything beyond adding a small range of numbers. – JonathanZ Jun 26 '22 at 17:29
  • @JonathanZsupportsMonicaC do you claim it is erroneous or what? – Anixx Jun 26 '22 at 17:30
  • @Anixx Type +0^0 in Microsoft Excel and see what it says. – Sassatelli Giulio Jun 26 '22 at 17:33
  • @SassatelliGiulio hmmm. I have no Excel, but LibreOffice Calc gives 1. I suspect they are compatible. – Anixx Jun 26 '22 at 17:36
  • @Anixx At the moment I just so happen to have both. You should always take for granted that there isn't full compatibility between a proprietary software and its open source clones, and in this instance I speak for experience, as limited as it was apparent. – Sassatelli Giulio Jun 26 '22 at 17:44
  • @SassatelliGiulio okay, so Excel is less powerful in this case? – Anixx Jun 26 '22 at 17:46
  • @peter.petrov For operators that are defined on the whole space, it's certainly standard, all the more if you want to consider polynomials, power series and such. I think (because I'm ignorant) that it's standard for densely defined operators too, for the same reason. – Sassatelli Giulio Jun 26 '22 at 17:55
  • Whether Windows calculator is accurate or not, using it as an authority seems misguided. – Brian Tung Jun 26 '22 at 17:59
  • @BrianTung I just referenced Windows Calculator to point out it is not undefined in Windows Calculator. – Anixx Jun 26 '22 at 18:01
  • @Anixx: Ahh, I see. But things often break down if a piece of software returns undefined for something. It may shoehorn in a reasonable value to avoid doing that. It is certainly true that for this context, the zeroth power of the zero matrix could be assigned the identity matrix. But I wouldn't count on that being universal. – Brian Tung Jun 26 '22 at 18:04
  • Going back to the original question—OP: What is the context where you need to define zeroth power for the zero operator? It seems like an unusual need. – Brian Tung Jun 26 '22 at 18:07
  • @Anixx: Incidentally, in Excel, +0^0 returns #NUM!, as it does whenever it can't compute the expression. It returns the same thing when you take the square root of $-1$, for example. – Brian Tung Jun 26 '22 at 18:09
  • @peter.petrov For what it's worth, a power of $0$ turning everything into unity is an algebraic axiom. That's why $0^0$ is usually defined to be $1$. If we accept this, then at least for operators representable as finite-rank matrices we get the identity matrix as the result of exponentiation. In the example of a $2\times2$ matrix we have $\left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right)^p=\left( \begin{array}{cc} 0^p+0^{p+2} & 2\cdot 0^{p+1} \\ 2\cdot 0^{p+1} & 0^p+0^{p+2} \end{array} \right)$; inserting $p=0$ we get $\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right)$ – Anixx Jun 26 '22 at 18:11
  • @BrianTung I don't think this is unusual, I think, this is an absolutely natural need. – Anixx Jun 26 '22 at 18:14
  • @Anixx: Possibly. I'd like to hear the OP's perspective, please? – Brian Tung Jun 26 '22 at 18:15
  • @peter.petrov Even for real numbers, you need not be too concerned with $0^0$ being defined or not. There are really two conventions: one is to leave $0^0$ undefined because there is no continuous function on $[0,\infty)\times[0,\infty)$ that extends $x^y$, and then say that $x^0$ is a placeholder for the constant $1$ function whenever it works (say, in Laurent series $\sum_{k\in\Bbb Z} a_kx^k$); the other is to say that $0^0=1$ and that exponentiation is discontinuous on $[0,\infty)\times [0,\infty)$. It's more or less the same. – Sassatelli Giulio Jun 26 '22 at 18:16
  • @BrianTung I was reading a proof for the uniqueness of the Jordan Normal Form. There this expression comes up $2d(\phi^p) - d(\phi^{p-1}) - d(\phi^{p+1})$, where $p$ is a positive integer, and $d$ is the defect of the linear operator in the brackets. The proof is very nice but convoluted and eventually it boils down to proving the uniqueness just for a special linear operator which has only the number $0$ as a characteristic root (as an eigenvalue). So I had some doubts what happens exactly with $\phi^{p-1}$ when $p = 1$ and if we need to put some restrictions on the linear operator $\phi$. – peter.petrov Jun 27 '22 at 13:51
  • Btw, I don't know why my question is downvoted. What is bad about it?! – peter.petrov Jun 27 '22 at 14:01
  • @peter.petrov: I don't know, I'm afraid. I wouldn't worry too much about it, though. You've been here long enough that you find people occasionally downvoting for seemingly random reasons. Incidentally, I'd put the context you just described into your original question. – Brian Tung Jun 27 '22 at 16:02
  • @BrianTung OK, thanks, I did that. – peter.petrov Jun 27 '22 at 16:34

2 Answers

6

We define $T^0=I$ for any linear operator $T:V \to V$ so that the usual laws of exponents hold for composition. So you should take this as a convenient definition, not anything extraordinarily deep. Here's why.

For a positive integer $n$ we can define $T^n$ to mean $T \circ T \circ \cdots \circ T$ ($n$ factors), and $T^n$ is again a linear operator on $V$. You can then check, thanks to associativity of function composition, that we have $T^n \circ T^m = T^{n+m}$ for positive integers $n, m$. Doesn't that look nice and familiar?

If we extend this definition to include the possibility that $n=0$ by defining $T^0=I$, then the law $T^n \circ T^m = T^{n+m}$ now holds for integers $n,m \geq 0$. After all, $T^n \circ T^0 = T^n \circ I = T^n$, so we have $T^n \circ T^0 = T^{n+0}$ and all is well.
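The law, exponent $0$ included, can be spot-checked numerically; a sketch with an arbitrary (assumed) operator on $\mathbb{R}^3$:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))  # an arbitrary operator on R^3
mp = np.linalg.matrix_power

# Verify T^n . T^m == T^(n+m) for exponents including 0.
ok = all(
    np.allclose(mp(T, n) @ mp(T, m), mp(T, n + m))
    for n in range(4) for m in range(4)
)
print(ok)  # True
```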

Even better, if $T$ happens to be invertible we can define $T^{-n}$ for an integer $n \geq 0$ as either $(T^n)^{-1}$ or as $(T^{-1})^n$, since they are both the same, which is a great exercise. Then the law of exponents $$ T^n \circ T^m = T^{n+m} $$ holds for all integers $n, m$. Not only is this algebraically reminiscent of real number exponents in a nice way, it also makes applying polynomial relations to linear operators incredibly powerful.
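That exercise can also be checked numerically (a sketch with an assumed invertible $T$; note that NumPy's `matrix_power` accepts negative exponents for invertible matrices):

```python
import numpy as np

T = np.array([[2.0, 1.0], [0.0, 3.0]])  # assumed invertible operator on R^2
mp = np.linalg.matrix_power

# (T^n)^{-1} and (T^{-1})^n agree, so T^{-n} is unambiguous.
lhs = np.linalg.inv(mp(T, 3))
rhs = mp(np.linalg.inv(T), 3)
print(np.allclose(lhs, rhs))       # True
print(np.allclose(mp(T, -3), lhs)) # True
```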

But, note that none of this has anything to do with whether or not $T$ is the zero transformation. In that case, $T^0=I$ is still the case by definition. (Note also that this is all about composition of functions and not any sort of real number multiplication/exponentiation, so issues related to $0^0$ in the reals are actually not relevant.)

Randall
  • Just a side note on this part "... which is a great exercise" >>> this exercise seems trivial, no? If $n$ is a natural number and $A^{-1}$ exists, when we multiply $A^n \cdot (A^{-1})^n$ we get the identity matrix so... $(A^{-1})^n$ is the inverse of $A^n$ i.e. $(A^{-1})^n = (A^n)^{-1}$ – peter.petrov Jun 28 '22 at 13:17
  • Yes, correct. You are just using the uniqueness of inverses. The same thing is true for bijections on sets. – Randall Jun 29 '22 at 01:53
-5

$0^0$ is an indeterminate form. Consider the two iterated limits:

$$\lim_{x \rightarrow 0}\lim_{y \rightarrow 0}\ x^y $$

$$\lim_{y \rightarrow 0}\lim_{x \rightarrow 0}\ x^y $$

For the first limit, we get $x^0$ under the limit as $x$ goes to $0$. For all $x$ except $x=0$, $x^0=1$. Thus $\lim_{x \rightarrow 0}\ x^0 =\lim_{x \rightarrow 0}\ 1=1$. For the second limit, as $x$ goes to $0$ we get $0^y$, which is equal to $0$ for all $y$ except $y=0$. Thus we obtain $\lim_{y \rightarrow 0}\ 0^y= \lim_{y \rightarrow 0}\ 0 = 0$. Therefore there is a discontinuity at $(x,y)=(0,0)$. So we leave $0^0$ undefined and can only talk about it when we specify the direction from which we approach: from the $x$ direction, from the $y$ direction, or somewhere in between.
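The two directions can be seen numerically; note, though, that Python itself evaluates the point value `0.0 ** 0.0` as `1.0` (the algebraic convention, not a limit):

```python
# Along y = 0: x**0 is constantly 1 as x -> 0+.
print([x ** 0.0 for x in (0.1, 0.01, 0.001)])  # [1.0, 1.0, 1.0]

# Along x = 0: 0**y is constantly 0 as y -> 0+.
print([0.0 ** y for y in (0.1, 0.01, 0.001)])  # [0.0, 0.0, 0.0]

# At the point itself, Python picks the algebraic convention:
print(0.0 ** 0.0)  # 1.0
```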

Let $T = a\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}=\begin{bmatrix}a&0&0\\0&a&0\\0&0&a\end{bmatrix}$, for an arbitrary constant $a$. Let $\bf{x}$ be a vector in $\mathbb{R}^3$.

Consider two limits:

$$\lim_{a \rightarrow 0}\lim_{n \rightarrow 0}\ T^n {\bf x} $$

$$\lim_{n \rightarrow 0}\lim_{a \rightarrow 0}\ T^n {\bf x} $$

In the first limit, the inner limit as $n$ goes to $0$ gives $T^0=I$ for all $a$, but in the second limit, the inner limit as $a$ goes to $0$ gives $T^n=0$ for all $n>0$. Clearly there is a discontinuity at $(n,a)=(0,0)$.


I am leaving my original answer intact for the time being, but my answer is correct. I want to drop this link here in case anyone doubts whether or not a matrix can be exponentiated to non-integer powers. I suppose this adds one caveat to my 'proof': that the linear transform $T$ be diagonalizable. The question as stated does not specify that the transforms must be invertible, and that is the exception.
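For a diagonalizable $T = PDP^{-1}$, non-integer powers can be sketched as $T^t = P D^t P^{-1}$. A minimal illustration (an assumed helper, restricted to positive real eigenvalues so $d^t$ stays real; not the method from any linked reference):

```python
import numpy as np

def frac_power(T, t):
    # T^t = P D^t P^{-1}; assumes T is diagonalizable with
    # positive real eigenvalues so evals**t is real.
    evals, P = np.linalg.eig(T)
    return (P @ np.diag(evals ** t) @ np.linalg.inv(P)).real

T = np.diag([4.0, 9.0])
print(frac_power(T, 0.5))  # the matrix square root diag(2, 3)
print(frac_power(T, 0.0))  # the identity, since eigenvalue**0 == 1
```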

Additional reference

Gerald
  • Hm... I am not asking about any limits in my case. – peter.petrov Jun 26 '22 at 17:47
  • @Gerald, note that $n$ is integral, so I don't think taking limits applies in this case. – Doug Jun 26 '22 at 17:51
  • @Doug, non-integral $n$ are well defined. – Gerald Jun 26 '22 at 17:52
  • @peter.petrov, I wanted to specifically respond to your comment since I never did. The expression $0^0$ is generally left undefined specifically because the limits from either direction are not the same. Given this, I showed that for the diagonalizable transforms there is a similar concern where the transform $T^0$ can be undefined, since it results in the same $0^0$ operation as in the standard reals. I left out computing the eigen-basis matrix $P$ and the diagonalized matrix $D$ of the linear transform $T$. – Gerald Feb 29 '24 at 15:36
  • Further, I didn't specify that the $a$ limit was supposed to be the limit of the determinant of the matrix $D$. We can multiply the matrix $D$ by an arbitrary factor and consider the limit of the diagonalized linear transform $aD$ raised to any arbitrary power. I understand that I neglected to express these details. – Gerald Feb 29 '24 at 15:46