
Is the Jordan basis unique? I have a $4\times 4$ matrix; I found the eigenvectors and one generalized eigenvector. I also tried different linearly independent eigenvectors, but the matrix $P$ with $PJP^{-1}=A$ only works for a certain vector:
$$\begin{pmatrix}-\frac{1}{2}&0&-\frac{1}{2}&0\\ \frac{3}{2}&0&\frac{1}{2}&0\\ -\frac{3}{2}&0&-\frac{3}{2}&-1\\ 1&1&1&1\end{pmatrix}\begin{pmatrix}1&-1&0&-1\\ 0&2&0&1\\ -2&1&-1&1\\ 2&-1&2&0\end{pmatrix}\begin{pmatrix}1&1&0&0\\ -1&0&1&1\\ -3&-1&0&0\\ 3&0&-1&0\end{pmatrix}=\begin{pmatrix}-1&0&0&0\\ 0&1&0&-1\\ 0&0&1&1\\ 0&0&0&1\end{pmatrix}$$

  • Your third column is incorrect: it should have an explicit dependency on your fourth column, $\begin{pmatrix}1&1&-1&0\\ -1&0&1&1\\ -3&-1&1&0\\ 3&0&-1&0\end{pmatrix}$, and then you need to correct its inverse as well. – Will Jagy Mar 02 '18 at 01:20
  • I always find that I must use this particular vector, but I can't understand why. – Mar 02 '18 at 11:22

6 Answers


Any basis is a Jordan basis for the identity: in every basis whatsoever, the identity operator has matrix $I$, which is already in Jordan normal form.

user126154

It is not: the Jordan matrix is unique (up to the order of its blocks), but there are several bases which will yield that matrix.

  • If I use the three eigenvectors to create $P$, it only works with a certain fourth vector. I found it using Wolfram, but I can't understand why. – Feb 28 '18 at 22:27

We have $$ A = \left( \begin{array}{rrrr} 1 & -1 & 0 & -1 \\ 0 & 2 & 0 & 1 \\ -2 & 1 & -1 & 1 \\ 2 & -1 & 2 & 0 \end{array} \right) $$ with characteristic polynomial $(x+1)(x-1)^3$ but minimal polynomial $(x+1)(x-1)^2.$ $$ A - I = \left( \begin{array}{rrrr} 0 & -1 & 0 & -1 \\ 0 & 1 & 0 & 1 \\ -2 & 1 & -2 & 1 \\ 2 & -1 & 2 & -1 \end{array} \right) $$ We also need to know $$ (A - I)^2 = \left( \begin{array}{rrrr} -2 & 0 & -2 & 0 \\ 2 & 0 & 2 & 0 \\ 6 & 0 & 6 & 0 \\ -6 & 0 & -6 & 0 \end{array} \right) $$ We get two dimensions' worth of actual eigenvectors for eigenvalue $1;$ we can take as a basis the columns of $$ \left( \begin{array}{rr} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ 0 & -1 \end{array} \right) $$ Linear combinations of these two columns are also eigenvectors. Back to $$ (A - I)^2 = \left( \begin{array}{rrrr} -2 & 0 & -2 & 0 \\ 2 & 0 & 2 & 0 \\ 6 & 0 & 6 & 0 \\ -6 & 0 & -6 & 0 \end{array} \right) $$ We are going to take the fourth column of $P,$ which I will call $w,$ to be a generalized eigenvector; namely, $(A-I)w \neq 0$ but $(A-I)^2 w = 0.$ Once we choose $w$ as the fourth column, the third column (an eigenvector) must be $(A-I)w.$ This is indeed an eigenvector, because $0 = (A-I)^2 w = (A-I)\,(A-I)w.$
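(If you want to double-check these facts by machine, here is a minimal sympy sketch, with $A$ entered as above:)

```python
import sympy as sp

A = sp.Matrix([[ 1, -1,  0, -1],
               [ 0,  2,  0,  1],
               [-2,  1, -1,  1],
               [ 2, -1,  2,  0]])
I4 = sp.eye(4)
x = sp.symbols('x')

# characteristic polynomial factors as (x - 1)**3 * (x + 1)
print(sp.factor(A.charpoly(x).as_expr()))

# minimal polynomial is (x + 1)(x - 1)^2: this product annihilates A...
print((A + I4) * (A - I4)**2 == sp.zeros(4, 4))   # True
# ...while the smaller candidate (x + 1)(x - 1) does not
print((A + I4) * (A - I4) == sp.zeros(4, 4))      # False

# the eigenspace for eigenvalue 1 is two-dimensional
print(len((A - I4).nullspace()))                  # 2
```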

Version 1: Choosing $w = (0,1,0,0)^T$ $$ w = \left( \begin{array}{rr} 0 \\ 1 \\ 0 \\ 0 \end{array} \right) $$ $$ (A-I)w = \left( \begin{array}{rr} -1 \\ 1 \\ 1 \\ -1 \end{array} \right) $$ giving one solution to $R^{-1}A R = J$ as $$ \frac{1}{2} \left( \begin{array}{rrrr} -1 & 0 & -1 & 0 \\ 0 & 0 & -2 & -2 \\ -3 & 0 & -3 & -2 \\ 2 & 2 & 2 & 2 \end{array} \right) \left( \begin{array}{rrrr} 1 & -1 & 0 & -1 \\ 0 & 2 & 0 & 1 \\ -2 & 1 & -1 & 1 \\ 2 & -1 & 2 & 0 \end{array} \right) \left( \begin{array}{rrrr} 1 & 1 & -1 & 0 \\ -1 & 0 & 1 & 1 \\ -3 & -1 & 1 & 0 \\ 3 & 0 & -1 & 0 \end{array} \right) = \left( \begin{array}{rrrr} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right) $$

Version 2: Choosing $w = (0,1,0,1)^T$

$$ \frac{1}{4} \left( \begin{array}{rrrr} -2 & 0 & -2 & 0 \\ 2 & 2 & -2 & -2 \\ -2 & 1 & -2 & -1 \\ 2 & 2 & 2 & 2 \end{array} \right) \left( \begin{array}{rrrr} 1 & -1 & 0 & -1 \\ 0 & 2 & 0 & 1 \\ -2 & 1 & -1 & 1 \\ 2 & -1 & 2 & 0 \end{array} \right) \left( \begin{array}{rrrr} 1 & 1 & -2 & 0 \\ -1 & 0 & 2 & 1 \\ -3 & -1 & 2 & 0 \\ 3 & 0 & -2 & 1 \end{array} \right) = \left( \begin{array}{rrrr} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right) $$
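(For anyone who wants to experiment with other choices of $w$: here is a small sympy sketch of the whole recipe. The helper `build_P` and the fixed choice of the first two columns are my own shorthand, nothing standard; this particular problem is forgiving enough that the first two columns can stay the same for every admissible $w$.)

```python
import sympy as sp

A = sp.Matrix([[ 1, -1,  0, -1],
               [ 0,  2,  0,  1],
               [-2,  1, -1,  1],
               [ 2, -1,  2,  0]])
I4 = sp.eye(4)
J = sp.Matrix([[-1, 0, 0, 0],
               [ 0, 1, 0, 0],
               [ 0, 0, 1, 1],
               [ 0, 0, 0, 1]])

def build_P(w):
    """Columns: eigenvector for -1, eigenvector for 1, (A - I)w, w."""
    # w must be a genuine generalized eigenvector for eigenvalue 1
    assert (A - I4) * w != sp.zeros(4, 1)
    assert (A - I4)**2 * w == sp.zeros(4, 1)
    v_minus = sp.Matrix([1, -1, -3, 3])    # eigenvector for eigenvalue -1
    v_plus  = sp.Matrix([1,  0, -1, 0])    # eigenvector for eigenvalue  1
    return sp.Matrix.hstack(v_minus, v_plus, (A - I4) * w, w)

# Version 1, Version 2, and one more choice all give the same J
for w in [sp.Matrix([0, 1, 0, 0]),
          sp.Matrix([0, 1, 0, 1]),
          sp.Matrix([0, 0, 0, 1])]:
    P = build_P(w)
    print(P.inv() * A * P == J)            # True each time
```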

Will Jagy
  • Very enlightening, thank you. Does this work for different $w$? – Mar 02 '18 at 18:49
  • @GF It does if you follow the rules: $w$ must be a generalized eigenvector, and the third column must be $(A-I)w.$ For more complicated problems, we might need to vary the first or second column, but this particular problem is a little forgiving that way. I have just finished a version with a different choice of $w$; typing that in now. – Will Jagy Mar 02 '18 at 18:56

No, unless your matrix is diagonalisable and has four distinct eigenvalues. To see that, note that if $v$ and $w$ are generalised eigenvectors for $A$ with the same eigenvalue $\lambda$, then so is $v+w$ (as long as it is nonzero): indeed, $(A-\lambda I)^4(v+w)=(A-\lambda I)^4v+(A-\lambda I)^4w=0+0=0$.
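(Concretely, a minimal sympy check of this closure property, using the $A$ from the question and two honest eigenvectors for $\lambda=1$:)

```python
import sympy as sp

A = sp.Matrix([[ 1, -1,  0, -1],
               [ 0,  2,  0,  1],
               [-2,  1, -1,  1],
               [ 2, -1,  2,  0]])
I4 = sp.eye(4)

# v and w are eigenvectors of A for the eigenvalue 1; so is v + w
v = sp.Matrix([1, 0, -1, 0])
w = sp.Matrix([0, 1, 0, -1])
for u in (v, w, v + w):
    print((A - I4) * u == sp.zeros(4, 1))   # True, True, True
```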

tomasz

Equivalently, for a matrix $A$ in JNF, is the group $$G_A^n=\{P\in\operatorname{GL}(n,\Bbb C)\,:\, PAP^{-1}=A\}$$ the trivial one $\{I_n\}$? The answer is obviously no, because, for one thing, if $P\in G_A^n$, then $\lambda P\in G_A^n$ for all $\lambda\in\Bbb C\setminus\{0\}$. This is like saying that, if $b_1,\cdots, b_n$ is a basis (not necessarily a Jordan one), then the matrix associated to some linear transformation $f$ in that basis is the same as it is in $\frac1\lambda b_1,\cdots, \frac1\lambda b_n$.

More substantially, though, let's call $J(\lambda,n)$ the Jordan block of size $n$ and eigenvalue $\lambda$. If $n$ need not be specified, we'll call it $J_\lambda$ and, specifically, $J_0=J$. Notice that $J_\lambda=\lambda I+J$, which is an element of the commutative subalgebra $\Bbb C[J]\subseteq\Bbb C^{n\times n}$, namely the matrices of the form $a_0I+a_1J+\cdots+a_{n-1}J^{n-1}$ for some $a_0,\cdots, a_{n-1}\in\Bbb C$; this subalgebra is isomorphic to $\Bbb C[x]/(x^n)$ via the map $p(x)\mapsto p(J)$. So the whole of $\Bbb C[J]$ commutes with $J_\lambda$. The invertible elements of $\Bbb C[J]$ are exactly the matrices of the form $p(J)$ for some polynomial $p$ with $p(0)\ne 0$. If $n>1$, this accounts for many matrices which are not multiples of the identity. More precisely, the classes of these matrices up to scalar multiples are in correspondence with $\Bbb C^{n-1}$ via the (non-linear) map $$a_0I+a_1J+\cdots +a_{n-1}J^{n-1}\mapsto \frac1{a_0}\left(a_1,\cdots ,a_{n-1}\right)$$

For a generic matrix of the form $$A=\begin{pmatrix}J(\lambda_1,n_1)&&\\ &\ddots&\\ &&J(\lambda_k,n_k)\end{pmatrix}$$ we can easily find an embedding of the group $\Bbb C[J(0,n_1)]^*\times \cdots\times \Bbb C[J(0,n_k)]^*\hookrightarrow G^n_A$ by working block-by-block as before.

However, this might not exhaust the possibilities. If two of the aforementioned blocks are equal (meaning, $\lambda_i=\lambda_j$ and $n_i=n_j$), then there are also the changes of basis that switch those two blocks. Namely, for a $4\times 4$ example, $$\begin{pmatrix}0&I_2\\ I_2&0\end{pmatrix}\begin{pmatrix}J(\lambda,2)&0\\ 0&J(\lambda,2)\end{pmatrix}\begin{pmatrix}0&I_2\\ I_2&0\end{pmatrix}^{-1}=\begin{pmatrix}J(\lambda,2)&0\\ 0&J(\lambda,2)\end{pmatrix}$$
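(Both phenomena are easy to verify by machine. Here is a minimal sympy sketch, with a size-$3$ block for the first point and the $4\times 4$ block swap above for the second; the helper `jordan_block` is written out by hand just to keep the snippet self-contained.)

```python
import sympy as sp

def jordan_block(lam, n):
    # Jordan block: lam on the diagonal, 1 on the superdiagonal
    return sp.Matrix(n, n, lambda i, j: lam if i == j else (1 if j == i + 1 else 0))

lam = sp.symbols('lambda')
a0, a1, a2 = sp.symbols('a0 a1 a2')

# an element p(J) of C[J] commutes with J_lambda = lam*I + J,
# and it is invertible exactly when p(0) = a0 is nonzero
J = jordan_block(0, 3)
Jlam = jordan_block(lam, 3)
P = a0*sp.eye(3) + a1*J + a2*J**2
print((P*Jlam - Jlam*P).expand() == sp.zeros(3, 3))   # True
print(P.det())                                         # a0**3

# swapping two equal blocks J(lambda, 2) and J(lambda, 2) fixes the matrix
B = sp.diag(jordan_block(lam, 2), jordan_block(lam, 2))
S = sp.Matrix([[0, 0, 1, 0],
               [0, 0, 0, 1],
               [1, 0, 0, 0],
               [0, 1, 0, 0]])                          # the block swap; S = S^{-1}
print(S*B*S.inv() == B)                                # True
```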


Here is a general argument that a notion of basis like the Jordan basis, associated to a linear operator (a.k.a. vector space endomorphism)$~\phi$ of a vector space$~V$, can never be unique. Let me be scrupulous: I am assuming $V$ is of finite nonzero dimension, and that one is not working over the field of $2$ elements, as in that case there are exceptions; there might even be only one basis to begin with. The argument has been put forward in the answer by user228113, kind of, but could be stated a bit more explicitly.

The notion of Jordan basis depends only on $V$ as an abstract vector space and on $\phi$, not on any particular choice of basis or coordinate system on $V$. That means that given a Jordan basis $\def\B{\mathcal B}\B$ (an ordered basis of $V$) for $\phi$ and an isomorphism of vector spaces $f:V\to W$, the image $f(\B)$ (an ordered basis of $W$) will be a Jordan basis for the transported operator $f\circ\phi\circ f^{-1}$ on$~W$. This holds in particular for any automorphism $f:V\to V$. Then uniqueness of $\B$ would amount to the statement that whenever $f\circ\phi\circ f^{-1}=\phi$, in other words whenever $f$ commutes with $\phi$, this would entail $f(\B)=\B$. But $f(\B)=\B$ only holds when $f$ is the identity, so this would imply that the identity is the only invertible linear operator that commutes with $\phi$. But certainly other nonzero scalar multiples of the identity also commute with $\phi$, so this is a contradiction (this is where I use that the field has more than $2$ elements).

The same argument shows that bases of eigenvectors are (almost) never unique (not really surprising, since they are an instance of Jordan bases). Moreover, one easily quantifies the degree of non-unicity: it suffices to consider the invertible elements in the centraliser (in the ring of endomorphisms) of $\phi$. For instance, for $\phi$ diagonalisable with all eigenspaces of dimension$~1$, this centraliser is relatively small: it equals $k[\phi]$, the ($n$-dimensional vector) space of polynomials in$~\phi$, whose invertible elements, expressed in a particular basis of eigenvectors, are the invertible diagonal matrices. If one moreover allows reordering the basis, one obtains the generalised permutation matrices (those with exactly one non-zero entry in each row and in each column), which describe all passages from one basis of eigenvectors to another. This gives the non-unicity of bases of eigenvectors: permutations of the basis, and independent nonzero scalar multiplications of each basis vector.
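(A concrete illustration of that last description, as a minimal sympy sketch; the diagonalisable operator $D$ and the generalised permutation matrix $G$ are made up for the example.)

```python
import sympy as sp

# D: diagonalisable with three distinct eigenvalues, written in an eigenbasis
D = sp.diag(1, 2, 3)

# an invertible diagonal matrix lies in the centraliser of D
H = sp.diag(5, 7, -2)
print(H*D == D*H)              # True

# a generalised permutation matrix: permute the basis vectors and
# rescale each one; its columns are again eigenvectors of D
G = sp.Matrix([[0, 5, 0],
               [7, 0, 0],
               [0, 0, -2]])
print(G.inv() * D * G)         # diag(2, 1, 3): still diagonal, eigenvalues reordered
print(G*D == D*G)              # False: G maps eigenbases to eigenbases
                               # without itself commuting with D
```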