2

I know that we can define a function $f$ of a matrix $A$ as follows: we diagonalise $A$ and then apply $f$ to all eigenvalues of $A$. After that, we can perform a change of basis if we need our expression in a specific basis.
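For concreteness, here is a minimal sympy sketch of that recipe (the $2 \times 2$ matrix is just an example I made up):

```
import sympy as sp

A = sp.Matrix([[2, 1],
               [1, 2]])      # diagonalizable, eigenvalues 1 and 3
P, D = A.diagonalize()       # A = P D P^{-1}
f = sp.exp                   # the scalar function to apply
fD = sp.diag(*[f(D[i, i]) for i in range(D.rows)])
fA = sp.simplify(P * fD * P.inv())   # change of basis back
print(fA)                    # equals A.exp() here
```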

But what happens if the matrix cannot be written in diagonal form? Can I extend this definition? Do you know of any book that explains this? Over $\mathbb{C}$ all matrices can be written in diagonal form, so I am wondering if there is a way to go back to $\mathbb{R}$ after applying $f$, but I am not sure whether I am on the right track. Thanks.

user404720
  • 108
  • 9
  • It depends on what we know about $f$. If $f$ is analytic, then it suffices to apply a power series to the Jordan form. If $f$ is merely continuous, we might run into trouble trying to define a functional calculus – Ben Grossmann Mar 13 '18 at 17:50
  • 1
    Even in $\Bbb C$ not all matrices are diagonalizable. – M. Winter Mar 13 '18 at 20:44

2 Answers

3

If you begin with a real matrix $A$ and a one-variable real analytic function $f(x)$, you do get a real $f(A)$.

The Jordan part, $P^{-1}AP = J$, amounts to writing $J = D + N$, where $D$ is diagonal and $N$ is strictly upper triangular, and, because of the Jordan block structure, $DN = ND$. As a result, we can compute $f(J) = f(D+N)$ using all the usual power series conveniences, along with the fact that $N^n = 0$ for some $n$. The most common application is $e^{(D+N)t} = e^{Dt} e^{Nt} = e^{Nt} e^{Dt}$, where the entries of $e^{Nt}$ are explicit polynomials in $t$.
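Here is a minimal sympy sketch of this splitting for a single Jordan block (the $3 \times 3$ example and names are purely illustrative):

```
import sympy as sp

lam, t = sp.symbols('lambda t')
J = sp.Matrix([
    [lam, 1, 0],
    [0, lam, 1],
    [0, 0, lam]])
D = sp.diag(lam, lam, lam)   # diagonal part
N = J - D                    # strictly upper triangular, N**3 is the zero matrix

# e^{Nt} is the *finite* series I + Nt + (Nt)^2/2, so its entries
# are polynomials in t; e^{Dt} = e^{lam*t} I since D is scalar here.
expNt = sp.eye(3) + N*t + (N*t)**2 / 2
expJt = sp.exp(lam*t) * expNt

# agrees with sympy's built-in matrix exponential
assert sp.simplify((J*t).exp() - expJt) == sp.zeros(3, 3)
```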

The part that I do not see many students writing correctly is just this: turn the thing around, $PJP^{-1} = A$, and finally $P f(J) P^{-1} = f(A)$. In order to do this, it is necessary to confirm that $P P^{-1} = I$ and that both $P^{-1}AP = J$ and $P f(J) P^{-1} = f(A)$ hold. It is not enough to know the final Jordan form; you need the actual matrix $P$ and its correct inverse. Here is one from yesterday: How to find Jordan basis of a matrix

Reverse direction $PJP^{-1} = A:$

$$ \left( \begin{array}{rrrrr} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 \end{array} \right) \left( \begin{array}{rrrrr} -1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{array} \right) \left( \begin{array}{rrrrr} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & -1 & 0 & 0 \end{array} \right) = \left( \begin{array}{rrrrr} -1 & 0 & 0 & 0 & 0 \\ -1 & 1 & -2 & 0 & 1 \\ -1 & 0 & -1 & 0 & 1 \\ 0 & 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & -1 \end{array} \right) $$

Worthwhile exercise for the reader: find $e^{At}$.
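For anyone who wants to check their hand computation, here is a sympy sketch using the $P$ and $J$ above; sympy's matrix exponential is doing the $e^{Dt}e^{Nt}$ bookkeeping internally:

```
import sympy as sp

t = sp.symbols('t')
# P and J copied from the display above; sympy recovers the inverse of P.
P = sp.Matrix([
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 0, 0]])
J = sp.Matrix([
    [-1, 0, 0, 0, 0],
    [ 0, -1, 1, 0, 0],
    [ 0, 0, -1, 0, 0],
    [ 0, 0, 0, 1, 1],
    [ 0, 0, 0, 0, 1]])
A = P * J * P.inv()                 # reproduces the matrix on the right above

expAt = P * (J*t).exp() * P.inv()   # e^{At} = P e^{Jt} P^{-1}
assert sp.simplify(expAt.subs(t, 0)) == sp.eye(5)   # sanity check: e^{A*0} = I
```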

Will Jagy
  • 139,541
2

For a nilpotent operator $N : V \to V$ of index $m \in \mathbb{N}$ and an analytic function $f(z) = \sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}z^n$ we can define $$f(N) = \sum_{n=0}^{m-1} \frac{f^{(n)}(0)}{n!}N^n$$
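A small sympy sketch of this definition (the helper name `f_of_nilpotent` is mine, purely illustrative):

```
import sympy as sp

def f_of_nilpotent(f, z, N, m):
    """f(N) = sum_{n=0}^{m-1} f^{(n)}(0)/n! * N**n for N with N**m == 0."""
    out = sp.zeros(N.rows, N.cols)
    for n in range(m):
        out += sp.diff(f, z, n).subs(z, 0) / sp.factorial(n) * N**n
    return out

z = sp.symbols('z')
N = sp.Matrix([
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 0]])              # nilpotent of index m = 3

# sin(N) = N here: the series is sin(0)*I + cos(0)*N - sin(0)/2 * N**2
print(f_of_nilpotent(sp.sin(z), z, N, 3))
```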

For an arbitrary linear map $A : V \to V$ with the minimal polynomial $(x - \lambda_1)^{p_1}(x - \lambda_2)^{p_2}\cdots (x - \lambda_k)^{p_k}$, the space $V$ admits the decomposition into generalized eigenspaces:

$$V = \ker (A - \lambda_1I)^{p_1} \dot+ \ker (A - \lambda_2I)^{p_2} \dot+ \cdots \dot+ \ker (A - \lambda_kI)^{p_k}$$

Notice that, restricting $A$ to $\ker (A - \lambda_iI)^{p_i}$, we can write $A_i = A|_{\ker (A - \lambda_iI)^{p_i}}$ in the form

$$A_i = \lambda_i I_i + N_i$$

where $I_i$ is the identity on $\ker (A - \lambda_iI)^{p_i}$, and $N_i = A_i - \lambda_i I_i$, which is a nilpotent map of index $p_i$.

Therefore, keeping in mind Taylor's formula

$$f(z) = \sum_{n=0}^\infty \frac{f^{(n)}(\lambda)}{n!} (z - \lambda)^n$$

we can define $$f(A_i) = f(\lambda_i I_i + N_i) = \sum_{n=0}^{p_i-1} \frac{f^{(n)}(\lambda_i)}{n!} N_i^n$$

So finally, for $A = A_1 \dot+ A_2 \dot+ \cdots \dot+ A_k$ we define

$$f(A) = f(A_1) \dot+ f(A_2) \dot+ \cdots \dot+ f(A_k)$$

Note that all definitions are in fact valid when $f$ is a polynomial, so this is simply a direct generalization to power series.
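As an end-to-end sketch (my own illustrative code, assuming sympy), one can assemble $f(A)$ from the Taylor series at each eigenvalue, using sympy's Jordan decomposition to expose exactly the $\lambda_i I_i + N_i$ structure used above:

```
import sympy as sp

def f_of_matrix(f, z, A):
    P, J = A.jordan_form()   # A = P J P^{-1}; J = D + N block by block
    D = sp.diag(*[J[i, i] for i in range(J.rows)])
    N = J - D                # nilpotent part, commutes with D
    fJ = sp.zeros(J.rows, J.cols)
    for n in range(J.rows):  # N**J.rows == 0, so the sum is finite
        fn = sp.diag(*[sp.diff(f, z, n).subs(z, J[i, i])
                       for i in range(J.rows)])
        fJ += fn * N**n / sp.factorial(n)
    return sp.simplify(P * fJ * P.inv())

z = sp.symbols('z')
A = sp.Matrix([[2, 1],
               [0, 2]])      # not diagonalizable
print(f_of_matrix(sp.exp(z), z, A))   # matches A.exp()
```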

mechanodroid
  • 46,490