Prove $$\det \left( e^A \right) = e^{\operatorname{tr}(A)}$$ for all matrices $A \in \mathbb{C}^{n \times n}$.
All the answers so far use a triangularized form at some point. If you know that every complex square matrix is triangularizable, it brings the problem back to triangular matrices. – Julien Mar 06 '13 at 16:04
6 Answers
Both sides are continuous. A standard proof goes by showing this for diagonalizable matrices, and then using their density in $M_n(\mathbb{C})$.
But actually, it suffices to triangularize $$ A=P^{-1}TP $$ with $P$ invertible and $T$ upper-triangular. This is possible as soon as the characteristic polynomial splits, which is obviously the case in $\mathbb{C}$.
Let $\lambda_1,\ldots,\lambda_n$ be the eigenvalues of $A$.
Observe that each $T^k$ is upper-triangular with $\lambda_1^k,\ldots,\lambda_n^k$ on the diagonal. It follows that $e^T$ is upper-triangular with $e^{\lambda_1},\ldots,e^{\lambda_n}$ on the diagonal. So $$ \det e^T=e^{\lambda_1}\cdots e^{\lambda_n}=e^{\lambda_1+\cdots+\lambda_n}=e^{\operatorname{tr} T}. $$
Finally, observe that $\operatorname{tr} A=\operatorname{tr} T$, and that $A^k=P^{-1}T^kP$ for all $k$, so $$P^{-1}e^TP=e^A\qquad \Rightarrow\qquad \det (e^T)=\det (P^{-1}e^TP)=\det(e^A).$$
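As a quick numerical sanity check of the identity (an editor's sketch, not part of the original answer), one can compare both sides on a random complex matrix, computing $e^A$ from its defining power series:

```python
# Sanity check of det(exp(A)) = exp(tr(A)) on a random complex matrix,
# with exp(A) computed from its defining series I + A + A^2/2! + ...
import numpy as np

def expm_series(A, terms=80):
    """Truncated Taylor series for the matrix exponential."""
    n = A.shape[0]
    result = np.eye(n, dtype=complex)
    term = np.eye(n, dtype=complex)
    for k in range(1, terms):
        term = term @ A / k          # now term == A^k / k!
        result = result + term
    return result

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

lhs = np.linalg.det(expm_series(A))
rhs = np.exp(np.trace(A))
print(abs(lhs - rhs) / abs(rhs))     # relative error; should be near machine precision
```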

@1015 Is it possible to generalize this to $\det(f(A))=f(\operatorname{tr}A)$, where $f$ is some continuously differentiable function? I.e., use $P^{-1}T^kP=A^k$ in connection with the Taylor expansion of the function $f$. – Alexander Cska Nov 18 '17 at 15:21
@AlexanderCska You can show, for any smooth path of complex square matrices $B(t)$ that passes through the identity, $$\left.\frac{d}{dt}\right|_{t=0}\det\left(B(t)\right)=\text{Tr}\left(B'(0)\right).$$ The equation essentially expresses the more general theorem that every Lie group homomorphism induces a Lie algebra homomorphism, so I doubt it holds outside of the exponential, which maps a Lie algebra into its Lie group. See Stillwell (2008) for more details. – Mo Pol Bol Sep 06 '22 at 12:15
Hint: Use that every complex matrix has a Jordan normal form and that the determinant of a triangular matrix is the product of its diagonal entries.
Use that $\exp(A)=\exp(S J S^{-1}) = S \exp(J) S^{-1}$
And that the trace doesn't change under similarity transformations.
\begin{align*} \det(\exp(A))&=\det(\exp(S J S^{-1}))\\ &=\det(S \exp(J) S^{-1})\\ &=\det(S) \det(\exp(J)) \det (S^{-1})\\ &=\det(\exp (J))\\ &=\prod_{i=1}^n \exp(j_{ii})\\ &=\exp(\sum_{i=1}^n{j_{ii}})\\ &=\exp(\text{tr}J) \end{align*}
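The steps above can be sanity-checked numerically (an editor's sketch, using an upper-triangular $J$ as a stand-in for a true Jordan form, which suffices for the determinant computation):

```python
# Numerical check: exp(S J S^{-1}) = S exp(J) S^{-1}, and for triangular J,
# det(exp(J)) is the product of exp(j_ii) over the diagonal.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
J = np.triu(rng.standard_normal((3, 3)))           # upper triangular stand-in for a Jordan form
S = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # comfortably invertible
S_inv = np.linalg.inv(S)

A = S @ J @ S_inv
left = expm(A)                    # exp(S J S^{-1})
right = S @ expm(J) @ S_inv       # S exp(J) S^{-1}

det_expJ = np.linalg.det(expm(J))
prod_diag = np.prod(np.exp(np.diag(J)))   # product of exp(j_ii)
```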

$A$ is the original matrix and $D$ is the Jordan normal form of $A$ – Dominic Michaelis Mar 06 '13 at 15:33
@John $\exp(x)$ is shorthand for $e^x$; it's easier to read on one line sometimes, rather than squinting to see the tiny superscript. $d_{ii}$ is (to my understanding) denoting the element in the $i^\text{th}$ row and $i^\text{th}$ column. In normal language, this is simply saying the elements on the diagonal. – apnorton Mar 06 '13 at 15:57
@julien I never use that it is a diagonal matrix, I just use that the determinant of a triangular matrix is the product of the diagonal entries. But indeed I said something like that. – Dominic Michaelis Mar 06 '13 at 16:02
Yes, you said diagonal, which is why I commented. And you also write $D$, which leads people to think it is diagonal. – Julien Mar 06 '13 at 16:09
@John: trace is preserved by similarity. The trace of a matrix is the coefficient of the penultimate term in the characteristic polynomial, which is also preserved by similarity. Thus, $\mathrm{tr}(A)=\mathrm{tr}(J)$. – robjohn Mar 06 '13 at 16:39
So would it be alright if I just changed it a bit, so that I started with $\det(\exp(J))$ and ended up with $\exp(\operatorname{tr}A)$? – John Mar 06 '13 at 16:42
Let $f(t)= \det(e^{tA})$. Then $f'(t)=D \det(e^{tA}) \cdot Ae^{tA}=\operatorname{tr} \left(^t \operatorname{com}(e^{tA})\,Ae^{tA} \right)$, where $^t\operatorname{com}(M)$ denotes the transpose of the comatrix of $M$, i.e. its adjugate. But $A$ and $e^{tA}$ commute, and $^t\operatorname{com}(e^{tA})\,e^{tA}=\det(e^{tA}) \operatorname{I}_n$. Therefore, $f'(t)=\operatorname{tr}(A)f(t)$ and $f(0)=1$, hence $f(t)=e^{\operatorname{tr}(A)t}$. For $t=1$, $\det(e^{A})= e^{\operatorname{tr}(A)}$.

Ah! Finally an elementary answer...+1. I think you want $com(e^{sA})^te^{sA}=\det(e^{sA})I_n$. – Julien Mar 06 '13 at 16:18
Why $D \det(e^{tA}) \cdot Ae^{tA}=\text{tr} \left(^t \text{com}(e^{tA})Ae^{tA} \right)$? – math.n00b Aug 16 '14 at 14:44
The result is known as Jacobi's formula: http://en.wikipedia.org/wiki/Jacobi's_formula – Seirios Aug 16 '14 at 15:17
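The differential equation in this answer, $f'(t)=\operatorname{tr}(A)f(t)$ for $f(t)=\det(e^{tA})$, can be checked by finite differences (an editor's sketch, assuming SciPy is available for the matrix exponential):

```python
# Finite-difference check that f(t) = det(exp(tA)) satisfies f'(t) = tr(A) f(t).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

def f(t):
    return np.linalg.det(expm(t * A))

t0, h = 0.7, 1e-6
derivative = (f(t0 + h) - f(t0 - h)) / (2 * h)   # central difference approximation of f'(t0)
expected = np.trace(A) * f(t0)                   # right-hand side of the ODE
```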
You can do it in these steps (still requires some work):
$\quad \bf (1)$ $A$ is diagonalizable
$\quad \bf (2)$ $A$ is nilpotent
$\quad \bf (3)$ $A$ is arbitrary
$\bf (1)$ This shouldn't be too hard. Start with assuming that $A = CDC^{-1}$ for $D$ a diagonal matrix.
$\bf (2)$ Use that every nilpotent matrix is similar to an upper-triangular matrix $D$ with $0$s on the diagonal. So $A = CDC^{-1}$.
$\bf (3)$ Use that every matrix can be written as a sum $A = D + N$ of a diagonalizable matrix $D$ and a nilpotent matrix $N$, where $D$ and $N$ commute. So $$ \det(e^{A}) = \det(e^De^N) =\det(e^{D})\det(e^{N}) = e^{\text{Tr}(D)}e^{\text{Tr}(N)} = e^{\text{Tr}(D) + \text{Tr}(N)} = e^{\text{Tr}(A)}. $$ We have used here that $D$ and $N$ commute, so that $e^A = e^De^N.$
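A minimal concrete check of step (3), added here as an editor's sketch: a single $2\times 2$ Jordan block, where the commuting decomposition $A = D + N$ is explicit.

```python
# Step (3) on a concrete example: A = D + N with D = 2I (diagonal) and
# N strictly upper triangular (nilpotent, N @ N = 0); D and N commute.
import numpy as np
from scipy.linalg import expm

D = 2.0 * np.eye(2)
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
A = D + N                          # a 2x2 Jordan block with eigenvalue 2

lhs = expm(A)                      # exp(D + N)
rhs = expm(D) @ (np.eye(2) + N)    # exp(D) exp(N); exp(N) = I + N exactly, since N^2 = 0
```

Here $\operatorname{tr}(A)=4$, so both $\det(e^A)$ and $e^{\operatorname{tr}(A)}$ come out to $e^4$.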

@John: Yes. But you still have to write down the details of steps (1) and (2), and there were some claims that I assumed you know. – Thomas Mar 06 '13 at 15:39
@John: What specific details? What's left is really just to write things down. For example, for step (1), try to write down a diagonal matrix $D$ and then figure out what $e^D$ is using the definition of the exponential map. – Thomas Mar 06 '13 at 15:42
You need to add that $D$ and $N$ commute. Also, since you triangularize $D$, why don't you triangularize $A$ directly (which is what I did)? – Julien Mar 06 '13 at 15:50
I still ask the question: if you triangularize $N$, why don't you triangularize $A$ directly? – Julien Mar 06 '13 at 15:58
@julien: I like the idea of step (3). I guess it seems more clear to me... – Thomas Mar 06 '13 at 16:05
More clear? It just makes it longer and more complicated. Now you have to use the fact that $e^{A+B}=e^Ae^B$ when $A$ and $B$ commute, in addition, which is the most difficult part of your argument. But hey, that's your answer so I'll leave it alone now. – Julien Mar 06 '13 at 16:07
The other answers seem to overcomplicate things, and there's no need for any discussion of diagonalization or triangularization. Let $A$ have eigenvalues $\lambda_j$ (counted with algebraic multiplicity) and eigenvectors $v_j$. Either set can be degenerate; it makes no difference. Using the Taylor series $e^A \equiv I + A + \frac{1}{2!} A^2 + \ldots$, you can quickly see that $e^A v_j = e^{\lambda_j} v_j$. Thus $e^A$ has eigenvalues $e^{\lambda_j}$ and eigenvectors $v_j$.
Now, the determinant of any matrix is the product of its eigenvalues (1), and its trace is the sum of its eigenvalues (2). Thus $$\mathrm{det} \left(e^A \right) \, \overset{\scriptscriptstyle{(1)}}{=} \, e^{\lambda_1}e^{\lambda_2}e^{\lambda_3}\ldots = e^{\lambda_1 + \lambda_2 + \lambda_3 + \, \ldots} \, \overset{\scriptscriptstyle{(2)}}{=} \, e^{\mathrm{tr}(A)},$$ which is your desired result.
If you want to also prove (1) and (2) to be completely happy, we can do that as follows [this part is a modified version of Ted Shifrin's answer at https://math.stackexchange.com/a/546167/1096883]. The characteristic polynomial of an $n \times n$ matrix $A$ is $$p(t) = \det(A-tI) = (-1)^n \big(t^n - (\text{tr} A) \,t^{n-1} + \ldots + (-1)^n \det A\big)\, ,$$ which you can see from the standard tedious way ("Laplace expansion") of calculating a determinant. But from the fundamental theorem of algebra we also have $p(t) = (-1)^n(t-\lambda_1) \ldots (t-\lambda_n)$, where the $\lambda_j$ are the eigenvalues of $A$. We then equate the coefficients of $t^{n-1}$ and $t^0$ (i.e. the constant term) between the two forms of $p(t)$, which yields $\det A = \lambda_1 \lambda_2 \ldots \lambda_n$ and $\text{tr}A = \lambda_1 + \lambda_2 + \ldots + \lambda_n$.
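These claims are easy to check numerically (an editor's sketch): the code below verifies $e^A v = e^{\lambda} v$ for one eigenpair of a random complex matrix, along with facts (1) and (2).

```python
# Check e^A v = e^lambda v for an eigenpair, plus det = product of eigenvalues
# and trace = sum of eigenvalues, on a random complex matrix.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

lam, V = np.linalg.eig(A)
v0 = V[:, 0]                        # one eigenvector, with eigenvalue lam[0]

image = expm(A) @ v0                # should equal exp(lam[0]) * v0
scaled = np.exp(lam[0]) * v0

det_A = np.linalg.det(A)            # (1): should equal the product of eigenvalues
tr_A = np.trace(A)                  # (2): should equal the sum of eigenvalues
```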

Here is another alternative, set as a series of exercises in Naive Lie Theory by John Stillwell.
Lemma 1: For any complex square matrix $B$ $$\det\left(B\right)=\displaystyle\sum_{j=1}^{n} (-1)^{j+1}b_{1j}\det\left(B_{1j}\right),$$ where $B_{1j}$ denotes the matrix obtained by removing the first row and $j^{\text{th}}$ column from $B$.
A sketch of the proof is given in an addendum. $\qquad\square$
Lemma 2: For any smooth (i.e. differentiable) path of complex square matrices $B(t)$, with $B(0)=\mathbf{1}$, $$\left.\frac{d}{dt}\right|_{t=0}\det\left(B(t)\right)=\text{Tr}\left(B'(0)\right)$$
Proof: Consider $$\left.\frac{d}{dt}\right|_{t=0}b_{1j}(t)\det\left(B_{1j}(t)\right)=b'_{1j}(0)\det\left(B_{1j}(0)\right)+b_{1j}(0)\left.\frac{d}{dt}\right|_{t=0}\det\left(B_{1j}(t)\right).$$ Since $B(0)=\mathbf{1}$ both $b_{1j}(0)$ and $\det(B_{1j}(0))$ are equal to $0$ when $j\neq 1$ and $1$ when $j=1$. Hence by Lemma 1 $$\left.\frac{d}{dt}\right|_{t=0}\det\left(B(t)\right)=b'_{11}(0)+\left.\frac{d}{dt}\right|_{t=0}\det\left(B_{11}(t)\right)$$ and, by induction on $\det\left(B_{11}(t)\right)$, $$\left.\frac{d}{dt}\right|_{t=0}\det\left(B(t)\right)=b'_{11}(0)+b'_{22}(0)+\cdots +b'_{nn}(0)=\text{Tr}\left(B'(0)\right).$$ $\square$
Lemma 3: Let $B(t)=e^{tA}$, for which $B'(0)=A$. Then the smooth function $f(t)=\det(e^{tA})$, with $f(0)=1$, satisfies $$f'(t)=f(t)\text{Tr}(A).$$
Proof: $$\begin{equation} \label{eq1} \begin{split} f'(t) & = \lim_{h\rightarrow 0}\frac{1}{h}\left[\det\left(e^{(t+h)A}\right)-\det\left(e^{tA}\right)\right] \\ & = \lim_{h\rightarrow 0}\frac{1}{h}\left[\det\left(e^{tA}e^{hA}\right)-\det\left(e^{tA}\right)\right] \\ & = \det\left(e^{tA}\right)\lim_{h\rightarrow 0}\frac{1}{h}\left[\det\left(e^{hA}\right)-1\right] \\ & =\det\left(e^{tA}\right)\left.\frac{d}{dt}\right|_{t=0}\det\left(B(t)\right) \\ & =f(t)\text{Tr}(A).\end{split}\end{equation}$$ The second and third equalities follow from the commutative and product laws of the exponential and determinant map, respectively. The fourth equality follows from Lemma 2. $\qquad\square$
Lemma 4: $$\det\left(e^A\right)=e^{\text{Tr}(A)}.$$
Proof: Let $g(t)=f(t)e^{-t\cdot\text{Tr}(A)}$. By Lemma 3, $$g'(t)=f'(t)e^{-t\cdot\text{Tr}(A)}-\text{Tr}(A)f(t)e^{-t\cdot\text{Tr}(A)}=\big(f'(t)-\text{Tr}(A)f(t)\big)e^{-t\cdot\text{Tr}(A)}=0,$$ so $g$ is constant. Since $f(0)=1$, this constant is $1$, i.e. $f(t)=e^{t\cdot\text{Tr}(A)}$, and we conclude the result by evaluating $f$ when $t=1$. $\qquad\square$
Addendum: Here is a sketch of the proof for Lemma 1. The determinant of a complex square matrix $A=(a_{ij})$ is a map $$ A\mapsto \displaystyle\sum_{\sigma\in S_n}\text{sgn}(\sigma)a_{\small{1\sigma(1)}}\cdots a_{\small{n\sigma(n)}},\tag{*}$$ which uniquely satisfies the three axioms of multilinearity, anti-symmetry, and normalisation. We claim the map $$B\mapsto \displaystyle\sum_{j=1}^{n} (-1)^{j+1}b_{1j}\det\left(B_{1j}\right)$$ satisfies these three axioms and thus is equal to the determinant. Normalisation follows almost immediately from the definition and we skip the proof. Roughly, to prove anti-symmetry, show that switching any two columns of $B$ is equivalent to multiplying each permutation $\sigma$ by a $2$-cycle. Since this changes the sign of a permutation, and the even and odd permutations divide $S_n$ evenly, the sign of every product flips and so the map is anti-symmetric. To prove multilinearity, write $\overline{b_i}=a\overline{u}+b\overline{v}$ for the $i$-th column, let $$ U=\left(\overline{b_1},\cdots,\overline{b_{i-1}},\overline{u},\overline{b_{i+1}},\cdots, \overline{b_n}\right)$$ and similarly for the matrix $V$ (with $\overline{v}$ in place of $\overline{u}$). Then for all $j\neq i$ $$ab_{1j}\det(U_{1j})+bb_{1j}\det(V_{1j})= b_{1j}\det(B_{1j}),$$ relying mostly on $(*)$ and commutativity of products in $\mathbb{C}$. The case $j=i$ is even simpler to check, and multilinearity follows.
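Lemma 2 can be sanity-checked numerically (an editor's sketch) along the simplest smooth path through the identity, $B(t)=\mathbf{1}+tC$, for which $B'(0)=C$:

```python
# Finite-difference check of Lemma 2 along the path B(t) = I + t C:
# (d/dt)|_{t=0} det(B(t)) should equal Tr(B'(0)) = Tr(C).
import numpy as np

rng = np.random.default_rng(4)
C = rng.standard_normal((5, 5))

def det_B(t):
    return np.linalg.det(np.eye(5) + t * C)

h = 1e-6
derivative_at_0 = (det_B(h) - det_B(-h)) / (2 * h)  # central difference at t = 0
trace_C = np.trace(C)
```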
