13

$$\begin{bmatrix}0&1&0&1\\1&0&1&0\\0&1&0&1\\1&0&1&0\end{bmatrix}$$ I have calculated the characteristic polynomial as $x^2(x^2-4)$, but I don't know how to find the minimal polynomial. Please help.

Parcly Taxel
  • 103,344
caleb
  • 173
  • 1
  • 1
  • 9

3 Answers

28

All the distinct roots of the characteristic polynomial are also roots of the minimal polynomial, hence the minimal polynomial has roots $0$, $2$ and $-2$.

Hence $x(x^2-4)$ divides the minimal polynomial.

Also, every root of the minimal polynomial is a root of the characteristic polynomial, and since the characteristic polynomial annihilates the matrix (Cayley–Hamilton), the minimal polynomial must divide the characteristic polynomial.

All of this implies that the minimal polynomial is either $x(x^2-4)$ or $x^2(x^2-4)$.

Now substitute the matrix into $x(x^2-4)$: if the result is the zero matrix, then $x(x^2-4)$ is the minimal polynomial; otherwise the minimal polynomial is $x^2(x^2-4)$.
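Here is a quick numerical check of that substitution (a minimal sketch assuming NumPy, with the question's matrix stored as `A`):

```python
import numpy as np

# The matrix from the question
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
I = np.eye(4)

# Evaluate x(x^2 - 4) at A, i.e. compute A(A^2 - 4I)
P = A @ (A @ A - 4 * I)

print(P)                  # the zero matrix
print(np.allclose(P, 0))  # True, so x(x^2 - 4) annihilates A and is the minimal polynomial
```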

Another way to decide the last part: the null space of the above matrix has dimension $2$, and it is the eigenspace for the eigenvalue $0$. The eigenvalues $2$ and $-2$ contribute at least one independent eigenvector each, so there is a basis of $\mathbb{R}^4$ consisting of eigenvectors of the matrix. Hence the matrix is diagonalizable, so its minimal polynomial splits into distinct linear factors; it therefore cannot be $x^2(x^2-4)$, and the answer is $x(x^2-4)$.
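To see that dimension count concretely, here is a small check (again a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

# A has rank 2 (rows 3 and 4 repeat rows 1 and 2), so its null space,
# i.e. the eigenspace for the eigenvalue 0, has dimension 4 - 2 = 2.
print(np.linalg.matrix_rank(A))  # 2
```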

Arpan1729
  • 3,414
  • 29
    Alternatives to 'hence': 'this means that', 'so', 'it follows that', 'thus', 'so we see that', 'from where', etc. – Pedro Dec 05 '18 at 15:57
  • If the dimension of the nullspace is 2, then how is it diagonalizable? – Rajesh Sri Jul 23 '20 at 12:52
  • I don't understand why you conclude that $A$ is diagonalizable as well. – Lord Vader Apr 12 '22 at 23:04
  • @RajeshSri I was puzzled at first too. The null space is the eigenspace for the eigenvalue $0$, and eigenvectors with distinct eigenvalues are linearly independent. There are 3 distinct eigenvalues $0, 2, -2$, and again the $0$-eigenspace is 2-dimensional, so it's big enough. – usr0192 Apr 30 '22 at 05:09
21

These examples they give are always way too simple. Here you can spot by inspection the kernel (which is the eigenspace for $\lambda=0$), which is a huge give-away. But I'll apply a general method instead.

Take some nonzero vector, and apply the matrix repeatedly to it, until the images become linearly dependent. I'll just take the first standard basis vector $e_1$ and call the matrix $A$, which gives $$\pmatrix{1\\0\\0\\0}\overset A\mapsto \pmatrix{0\\1\\0\\1}\overset A\mapsto \pmatrix{2\\0\\2\\0}\overset A\mapsto \pmatrix{0\\4\\0\\4} $$ with the obvious linear dependency $-4Ae_1+A^3e_1=0$. This (and the fact that this is the first linear dependency) tells you that the polynomial $P=X^3-4X$ is the smallest-degree monic polynomial satisfying $P[A](e_1)=0$. Thus $P$ divides the minimal polynomial, and the (unknown at this point) quotient of that division is the minimal polynomial of the restriction of (the linear map defined by) $A$ to the image of $P[A]$. But it turns out that $P[A]=0$ already (you were lucky), so (its image is the zero space, the mentioned quotient is $1$, and) $P$ is itself the minimal polynomial.
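The same computation can be scripted; here is a sketch (assuming NumPy) that reproduces the sequence $e_1, Ae_1, A^2e_1, A^3e_1$ and the dependency found above:

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
e1 = np.array([1, 0, 0, 0])

# Apply A repeatedly to e1
v1 = A @ e1   # [0, 1, 0, 1]
v2 = A @ v1   # [2, 0, 2, 0]
v3 = A @ v2   # [0, 4, 0, 4]

# First linear dependency: -4*A e1 + A^3 e1 = 0, so P = X^3 - 4X kills e1
print(v3 - 4 * v1)                        # [0 0 0 0]

# In fact P[A] = A^3 - 4A is already the zero matrix, so P is the minimal polynomial
print(np.allclose(A @ A @ A - 4 * A, 0))  # True
```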

As you see, one can do entirely without the characteristic polynomial.

  • Hello! This is really useful and detailed. I am working on a similar but different question, and I found that the minimal polynomial I got from taking $e_1$ and from taking $e_4$ is different even though both are nonzero. For $e_1$ I obtained the whole minimal polynomial, which is $(X+3)^2X^2$, while for $e_4$ I only obtained the part $(X+3)^2$. I found that taking $e_4$ doesn't always get me the correct minpoly. Does this have something to do with the first linear dependency that you mentioned here? Where do you think I could find more references on this? – sazhyahun Nov 30 '20 at 00:32
9

As the given matrix is symmetric, it is diagonalizable $\Rightarrow$ its minimal polynomial has distinct roots $\Rightarrow$ the minimal polynomial is $x(x-2)(x+2)$, since the distinct roots of the characteristic polynomial $x^2(x^2-4)$ are $0$, $2$ and $-2$.
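A quick numerical confirmation of this argument (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
I = np.eye(4)

# A is symmetric, hence diagonalizable with real eigenvalues
print(np.allclose(A, A.T))        # True
print(np.linalg.eigvalsh(A))      # approximately [-2, 0, 0, 2]

# The distinct eigenvalues are -2, 0, 2, so the minimal polynomial is x(x-2)(x+2)
print(np.allclose(A @ (A - 2 * I) @ (A + 2 * I), 0))  # True
```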

MattAllegro
  • 3,316
  • @ancientmathematician Yes, of course! I suppose I wrongly read characteristic polynomial or something. I'm deleting my previous misleading comment, and this one in a while. Thanks! – Jose Brox Dec 05 '18 at 20:08