3

Well, I was sure that writing "$A^3$" (where $A$ is an $n\times n$ matrix) is nonsense. Sure, one could compute $(A\cdot A)\cdot A$, but that seems to involve different operators, etc. So what did my prof mean by the following statement:

Show that $A^{25}\mathbf{x} = \mathbf{0}$ has only the trivial solution. (We're also given the determinant of $A$.)

I know the proof will probably end by stating: "This means that $A^{25}$ is invertible, so $A^{25}\mathbf{x} = \mathbf{0}$ has only the trivial solution." And, well, I could state that $\det(A^{25}) = 5^{25} \neq 0$.

But then again: I really wonder what this "to the power of" operator means. Or did my prof make a mistake here?

paul23
  • 1,093
  • 2
    $A^0 := I$, $A^{n+1} := A\cdot A^n$. Once you have a multiplication, powers with non-negative integer exponents (positive if there isn't a multiplicative identity) are naturally defined. – Daniel Fischer Aug 12 '13 at 14:14
  • 4
    can I ask why you were convinced that there were different operators involved in that first expression? There's only one kind of matrix multiplication. – Robert Mastragostino Aug 12 '13 at 14:48
  • @RobertMastragostino Because an $n\times m$ matrix, when multiplied by itself, does not give an $n\times m$ matrix back. So you cannot always keep multiplying. – paul23 Aug 12 '13 at 15:14
  • 4
    @paul23 You can't even multiply an $n\times m$ matrix by itself if $n\neq m$, but $n\times n$ matrices can be multiplied, and the power operation is well-defined on square matrices. – Steven Stadnicki Aug 12 '13 at 15:36
  • 1
    @paul23 fair enough. Exponents like these are normally only defined for square matrices for that very reason. – Robert Mastragostino Aug 12 '13 at 15:39
  • @Robert: "normally only defined"?? – TonyK Aug 15 '13 at 14:22
  • 1
    @TonyK I've been burned too many times to make assumptions. I normally figure that somewhere, someone has tweaked a definition to apply to some new bizarre case in a way I've never heard of before. You could define it to insert transposes in the appropriate places to make the right-multiplications compatible, for example, though I don't know of anyone actually using such a thing. – Robert Mastragostino Aug 15 '13 at 15:31

8 Answers

10

If you want a formal definition of matrix exponentiation for non-negative integer exponents, just define $A^n = A^{n-1}\cdot A$ and $A^0 = I$. Since matrix multiplication is associative, we won't have any ambiguity there.

Edit: As Tobias Kildetoft points out below, it might be wiser to define the base case as $A^1=A$ instead of $A^0=I$, so as not to have to worry about how $\det(A^n)=\det(A)^n$ for $\det(A)=0$ would imply $0^0=1$. That isn't false, depending on how we want to define it, but it is something we might not want to worry about for the purposes of defining matrix powers.
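
A minimal NumPy sketch of this recursive definition (the helper name matrix_power and the 2x2 sample matrix are purely illustrative, not anything from the question):

```python
import numpy as np

def matrix_power(A, n):
    """A^n for a square matrix A and integer n >= 0,
    via the recursion A^0 = I, A^n = A^(n-1) @ A."""
    if n == 0:
        return np.eye(A.shape[0], dtype=A.dtype)
    return matrix_power(A, n - 1) @ A

A = np.array([[0, 3], [1, 2]])          # an arbitrary 2x2 example
print(matrix_power(A, 3))               # same result as A @ A @ A
print(np.linalg.matrix_power(A, 3))     # NumPy's built-in equivalent
```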

Devlin Mallory
  • 2,483
  • 18
  • 21
  • 2
    +1. Associativity is the key, here. After all, multiplication of matrices is just a binary operation. – Avitus Aug 12 '13 at 14:21
  • Does it exhibit the other rules of powers though? ($A^{a+b} = A^a + A^b$ and especially $(AB)^n = A^nB^n$, with A & B being square matrices of the same size) – paul23 Aug 12 '13 at 15:17
  • 1
    @paul23 The first is incorrect, and I suspect you mean $A^{a+b}=A^a A^b$, which is true (you can show this by induction, using associativity.) The second will hold if $A,B$ commute, but not in general. – Devlin Mallory Aug 12 '13 at 15:34
  • @paul23 for the first case, just take $A=$ identity matrix: you will find a contradiction. For the second you need commutativity, as remarked by Devlin Mallory. – Avitus Aug 12 '13 at 16:03
  • I would say that defining $A^0 = I$ might lead to confusion when $A$ is not invertible. – Tobias Kildetoft Aug 12 '13 at 18:42
  • @TobiasKildetoft Could you elaborate? I'm probably missing some obvious reason, but what confusion could this cause? – Devlin Mallory Aug 12 '13 at 18:46
  • 1
    Hmm, this might actually be a bit contrived, but defining it this way will make the $0$'th power behave a lot differently from the positive ones. For example, we have $\det(A^n) = \det(A)^n$ for all $n>0$. But usually, we don't want to define $0^0 = 1$ when working with real or complex numbers, as this tends to interact poorly with limits. – Tobias Kildetoft Aug 12 '13 at 18:53
  • 1
    Good point! I guess I'm okay in general with $0^0=1$, but that's true that it could cause confusion when combined with limits of functions. – Devlin Mallory Aug 12 '13 at 19:06
  • @TobiasKildetoft Zero to the zero power - Is $0^0=1$? (Personally, I like the $0^0=1$ option.) – Martin Sleziak Aug 15 '13 at 15:30
5

Just as for numbers, if you know how to multiply square matrices together, the "power operation" is then just iterated multiplication (this works because your matrix is square). Therefore, what you thought was nonsense is very much "sensical".

Also, since $\det(AB)=\det(A)\det(B)$, you have $\det(A^n)=\det(A)^n$, which is what you need to prove what you want to prove.
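
Concretely, for the question above (assuming, as the $5^{25}$ in the question suggests, that the given determinant is $\det(A)=5$):

$$\det(A^{25}) = \det(A)^{25} = 5^{25} \neq 0,$$

so $A^{25}$ is invertible and $A^{25}\mathbf{x} = \mathbf{0}$ has only the trivial solution $\mathbf{x} = \mathbf{0}$.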

M Turgeon
  • 10,419
4

$$A^3=A\cdot A\cdot A$$

$$A^{25}=\underset{25\text{ of these}}{\underbrace{A\cdot A\cdots A\cdot A}}$$

Since matrix multiplication is associative, this is completely natural and acceptable.
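
If you want to see that numerically, here is a small NumPy check (the matrix below is just a small sample matrix):

```python
import numpy as np

A = np.array([[0, 3], [1, 2]])

# Two different groupings of A*A*A give the same A^3,
# so a 25-fold product like the one above is unambiguous.
assert np.array_equal((A @ A) @ A, A @ (A @ A))

print(np.linalg.matrix_power(A, 25))  # computes the 25-fold product directly
```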

Cameron Buie
  • 102,994
2

A square matrix is the representation of an endomorphism in a given basis of the vector space, and the product of matrices is defined to be the representation of the composition of endomorphisms. So if a matrix $A$ represents the endomorphism $f$ in a basis $\mathcal B$, then $A^2$ represents $f^2:=f\circ f$.
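
As a small illustration (the map $f$ and its matrix $A$ below are a made-up example, written with respect to the standard basis of $\mathbb{R}^2$):

```python
import numpy as np

# A hypothetical endomorphism f of R^2 ...
def f(v):
    x, y = v
    return np.array([x + y, y])

# ... and its matrix A in the standard basis.
A = np.array([[1, 1], [0, 1]])

v = np.array([2, 5])
# Applying f twice agrees with multiplying by A^2, i.e. A^2 represents f∘f.
assert np.array_equal(f(f(v)), np.linalg.matrix_power(A, 2) @ v)
print(f(f(v)))  # [12  5]
```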

1

The "to the power of" operation with matrices requires square matrices for the typical interpretation (the "dot" matrix multiplication method), but it could be redefined in some way so that an $m$ by $n$ matrix could have an exponent applied to it. In many programming packages, $A$^$n$ means typical matrix multiplication, while $A$.^$n$ means element-by-element exponentiation.

abiessu
  • 8,115
0

Just an add-on: the multiplication of matrices you refer to is not commutative (in general), but it is associative. This important fact allows you to define the powers $A^n$ unambiguously for all $n>2$.

Avitus
  • 14,018
0

As the currently accepted answer has mentioned, the elementary “repeated multiplication” definition of exponentiation with integer exponents applies just as well to square matrices as it does to $\mathbb{R}$ or $\mathbb{C}$.

But matrix exponentiation can be generalized. Suppose that the matrix $A$ has the factorization $A = BDB^{-1}$, where $D$ is a diagonal matrix. Then, because of the associativity of multiplication, $A^2 = (BDB^{-1})(BDB^{-1}) = BD(B^{-1}B)DB^{-1} = BDDB^{-1} = BD^2B^{-1}$, and in general $A^n = BD^nB^{-1}$.

Raising a diagonal matrix to a power is equivalent to raising its diagonal elements to that power, which can be extended to non-integer exponents.
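
Here is a rough NumPy sketch of that idea; the symmetric sample matrix is an assumption, chosen so that it is diagonalizable with positive eigenvalues (which keeps the non-integer power real):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # symmetric, hence diagonalizable

# Eigendecomposition A = B D B^{-1}, with the eigenvectors as the columns of B.
eigvals, B = np.linalg.eig(A)
B_inv = np.linalg.inv(B)

# Integer power via the factorization: A^3 = B D^3 B^{-1}.
A_cubed = B @ np.diag(eigvals ** 3) @ B_inv
assert np.allclose(A_cubed, np.linalg.matrix_power(A, 3))

# Non-integer power, e.g. a matrix square root: A^(1/2) = B D^(1/2) B^{-1}.
A_sqrt = B @ np.diag(eigvals ** 0.5) @ B_inv
assert np.allclose(A_sqrt @ A_sqrt, A)
print(A_sqrt)
```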

Dan
  • 14,978
-4

But note that while matrix multiplication is not commutative in general, powers of the same matrix always commute: $A^{2}\cdot A^{3} = A^{3}\cdot A^{2} = A^5$. For example, $$ A = \pmatrix{0 & 3 \\ 1&2}; \quad A^2 = \pmatrix{3 & 6 \\ 2 & 7}; \quad A^3 = \pmatrix{6& 21 \\ 7 & 20} $$ $$A^2\cdot A^3 = A^3\cdot A^2 = \pmatrix{60 & 183 \\ 61 & 182} $$ So $A^5$ is unambiguous, in this case and in general.

Thomas
  • 43,555
user90078
  • 1
  • 1