5

Does there exist a real $2 \times 2$ matrix $A$ such that $A^2+A+I=0$?

$$ A\ =\ \left[\begin{array}{ c c } a & b\\ c & d \end{array}\right] $$

$$ A^{2} = \left[\begin{array}{ c c } a & b\\ c & d \end{array}\right] \left[\begin{array}{ c c } a & b\\ c & d \end{array}\right] =\ \left[\begin{array}{ c c } a^{2} +bc\ & ab+bd\\ ac+cd & bc+d^{2} \end{array}\right] $$

$$ A^{2} + A + I = \left[\begin{array}{ c c } a^{2} +bc & ab+bd\\ ac+cd & bc+d^{2} \end{array}\right] +\left[\begin{array}{ c c } a & b\\ c & d \end{array}\right] +\left[\begin{array}{ c c } 1 & 0\\ 0 & 1 \end{array}\right] = 0 $$

$$ \left[\begin{array}{ c c } a^{2} +bc\ +a+1 & ab+bd+b\\ ac+cd+c & bc+d^{2} +d+1 \end{array}\right] = 0 $$

$$ ab + bd + b = b(a+d+1) = 0 $$

$$ ac + cd + c = c(a+d+1) = 0 $$

so either $a+d+1 = 0$, or $b = c = 0$. If $a+d+1 = 0$, then $d = -(a+1)$.
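As a side note, the expansion above can be double-checked symbolically; here is a minimal sketch, assuming Python with sympy is available:

```python
# Minimal sketch (assumes sympy): expand A^2 + A + I symbolically and compare
# its entries with the component equations written above.
from sympy import symbols, Matrix, eye

a, b, c, d = symbols('a b c d', real=True)
A = Matrix([[a, b], [c, d]])

M = (A**2 + A + eye(2)).expand()
print(M)
# The entries match a^2+bc+a+1, b(a+d+1), c(a+d+1), d^2+bc+d+1.
```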

J. W. Tanner
  • 60,406
  • 4
    Hi! To avoid down-votes and close-votes, please provide us some context for this question, such as: (a) Is this homework? (b) If so, what course are you taking? (c) What specific topic are you covering at the moment? (d) What do you know that you think might be connected? (e) If you're stuck, what are you stuck on? For example, do you know what to apply, but don't know how to apply it, or do you not know what to apply? Please put these facts in your original post, not as responses to this comment, as comments may be deleted without warning. – Brian Tung Jan 31 '24 at 07:10
  • Also, please typeset your mathematics using MathJax. Doing so will help prevent your question from being poorly received. Of course, MathJax has a learning curve, but if you make an honest initial attempt, users will generally be happy to help you improve and fix whatever errors you experience. ¶ E.g., `$A = \left[\begin{array}{cc}a & b \\ c & d\end{array}\right]$` yields $A = \left[\begin{array}{cc}a & b \\ c & d\end{array}\right]$ (although you might be better off putting it on its own line). – Brian Tung Jan 31 '24 at 07:12
  • Good attempt, I can see if you've got some control sequences off. – Brian Tung Jan 31 '24 at 07:18
  • OP: Check my edits. And notice how I set off the equations on their own lines, so you can do it yourself next time. – Brian Tung Jan 31 '24 at 07:24
  • 1
    A good next step: what happens if $bc=0$? – Mike Jan 31 '24 at 08:05
  • 2
    See https://en.wikipedia.org/wiki/Companion_matrix – lhf Jan 31 '24 at 09:40
  • 1
    Here is a similar question – J. W. Tanner Jan 31 '24 at 13:28

7 Answers

5

Sure, how about

$\pmatrix {\frac {-1 + i\sqrt 3}{2} \\ & \frac {-1 - i\sqrt 3}{2}}$

Or

$\pmatrix {\cos \frac {2\pi}{3} & -\sin\frac {2\pi}{3} \\ \sin \frac {2\pi}{3} & \cos \frac {2\pi}{3}}$
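For a quick numerical sanity check, here is a minimal sketch (assuming Python with numpy is available) verifying that both matrices satisfy $A^2+A+I=0$:

```python
# Sketch (assumes numpy): verify that both example matrices satisfy A^2 + A + I = 0.
import numpy as np

I2 = np.eye(2)

# Complex diagonal matrix with the two roots of x^2 + x + 1 on the diagonal.
D = np.diag([(-1 + 1j * np.sqrt(3)) / 2, (-1 - 1j * np.sqrt(3)) / 2])

# Real rotation matrix by 2*pi/3.
t = 2 * np.pi / 3
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

for A in (D, R):
    print(np.allclose(A @ A + A + I2, 0))   # True, True
```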

user317176
  • 11,017
  • 2
    The question asks for a Real $2 \times 2$ matrix – pie Jan 31 '24 at 07:28
  • 1
    @pie The second matrix there is real, and I think the complex matrix provides some intuition. – user317176 Jan 31 '24 at 07:29
  • 5
    @user317176: I guess we should let the OP speak for themselves, but I have a sneaking suspicion that that intuition might have to be spelled out a bit more. – Brian Tung Jan 31 '24 at 08:00
5

Your conclusion that either $ b = c = 0$ or $a + d + 1 = 0 $ is correct.

If $b = c = 0 $, then $A$ is a diagonal matrix and the matrix equation reduces to the scalar quadratic equations $a^2+a+1=0$ and $d^2+d+1=0$, which have no real solutions, so this case can be ignored.

If $a + d + 1 = 0$, then we also want

$a^2 + bc + a + 1 = 0$

$d^2 + bc + d + 1 = 0$

Substituting $d = -1 - a $ into the second equation yields

$ a^2 + b c + a + 1 = 0 $

which is the first equation again; thus both equations are satisfied simultaneously when $a + d + 1 = 0 $, so we only need to solve

$ a^2 + b c + a + 1 = 0 $

which is an equation in three variables. If $ b=0 $ or $c=0 $ then there is no real solution for $a$. Otherwise, we have

$ c = - \dfrac{a^2 + a + 1}{b} $

So we can choose $a$ arbitrarily and $b \neq 0$ arbitrarily, then calculate $c$; finally, $d$ follows from $a + d + 1 = 0$.

For example, if we choose $ a = 1 , b = \dfrac{1}{2} $ then

$ c = - 6 $ and $ d = -2 $

Thus

$ A = \begin{bmatrix} 1 & \dfrac{1}{2} \\ -6 & -2 \end{bmatrix} $

It follows that

$ A^2 = \begin{bmatrix} -2 & -\dfrac{1}{2} \\ 6 & 1 \end{bmatrix}$

hence, $A^2 + A + I = 0 $
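Here is a minimal numerical check of this construction, assuming Python with numpy (the helper name `make_solution` is just for illustration):

```python
# Sketch (assumes numpy): build A from free parameters a and b != 0 using
# c = -(a^2 + a + 1)/b and d = -1 - a, then verify A^2 + A + I = 0.
import numpy as np

def make_solution(a, b):
    c = -(a**2 + a + 1) / b
    d = -1 - a
    return np.array([[a, b], [c, d]])

A = make_solution(1, 0.5)                        # the worked example: [[1, 1/2], [-6, -2]]
print(A)
print(np.allclose(A @ A + A + np.eye(2), 0))     # True

# Any other choice with b != 0 works as well:
A2 = make_solution(3.7, -2.0)
print(np.allclose(A2 @ A2 + A2 + np.eye(2), 0))  # True
```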

Hosam Hajeer
  • 21,978
3

The Cayley-Hamilton theorem can also be used to solve this. Let $A\in\mathbb{R}^{2\times 2}$; the theorem states that every square matrix satisfies its own characteristic equation. We have $$ A= \begin{bmatrix} a&b\\c&d \end{bmatrix}. $$ The characteristic polynomial of the matrix is $$ p(\lambda) = \lambda^2-(a+d)\lambda+(ad-bc). $$ Using the Cayley-Hamilton theorem, we get $$ p(A) = A^2-(a+d)A+(ad-bc)I_2 = \boldsymbol{0}_{2\times 2}. $$

To have $A^2+A+I= \boldsymbol{0}_{2\times 2}$, we need $$ \begin{align} -(a+d) &= 1, \\ ad-bc &= 1. \end{align} $$

If we choose $a=-\frac{1}{2},d=-\frac{1}{2}$, we get $$ ad-bc=1 \implies bc+1=\frac{1}{4} \implies bc = -\frac{3}{4} $$ so $c=-\frac{3}{4b}$ for any $b\neq 0$; choosing $b=1$ gives $c=-\frac{3}{4}$. The matrix is then

$$ A= \begin{bmatrix} -\frac{1}{2} & 1 \\ -\frac{3}{4} & -\frac{1}{2} \end{bmatrix}. $$
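A minimal numerical check of this choice, assuming Python with numpy:

```python
# Sketch (assumes numpy): the matrix above has trace -1 and determinant 1,
# so by Cayley-Hamilton it satisfies A^2 + A + I = 0.
import numpy as np

A = np.array([[-0.5,  1.0],
              [-0.75, -0.5]])

print(np.trace(A), np.linalg.det(A))           # -1.0  1.0 (up to rounding)
print(np.allclose(A @ A + A + np.eye(2), 0))   # True
```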

CroCo
  • 1,228
2

If you can solve any polynomial $f(z)=0$ in complex numbers, then you can solve $f(A)=0$ with $2\times 2$ matrices using the fact that we can make the replacement: $$x+iy=x\begin{pmatrix}1&0\\0&1\end{pmatrix}+y\begin{pmatrix}0&1\\-1&0\end{pmatrix}=\begin{pmatrix}x&y\\-y&x\end{pmatrix}$$

This works because there is an isomorphism that makes addition and multiplication work the same way. For instance you can readily check that: $\begin{pmatrix}0&1\\-1&0\end{pmatrix}^2 = -\begin{pmatrix}1&0\\0&1\end{pmatrix}$ just as $i^2=-1$, along with all the other regular rules.

Furthermore, you can use this to take user317176's first example to get the second one, or at least up to transposition (which corresponds to complex conjugation).
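Here is a minimal sketch of this correspondence, assuming Python with numpy (the `embed` helper is just an illustrative name):

```python
# Sketch (assumes numpy): embed x + i*y as the real 2x2 matrix [[x, y], [-y, x]]
# and check that the image of a root of z^2 + z + 1 satisfies the matrix equation.
import numpy as np

def embed(z):
    x, y = z.real, z.imag
    return np.array([[x, y], [-y, x]])

z = (-1 + 1j * np.sqrt(3)) / 2                            # a primitive 3rd root of unity
A = embed(z)

print(np.allclose(A @ A + A + np.eye(2), 0))              # True
print(np.allclose(embed(z) @ embed(z), embed(z * z)))     # multiplication is preserved
```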

Merosity
  • 2,489
  • 1
  • 8
  • 16
2

If you just want one solution, here are two options (both of which have been mentioned here already):

  • Given any polynomial $p(x)=x^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$, its companion matrix $A$ satisfies $p(A)=O$. In this particular case, $A=\begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix}$ works.
  • Note that the roots of $x^2+x+1$ are precisely the primitive 3rd roots of unity, therefore the rotation matrix by $120^{\circ}$ (or $240^{\circ}$) works, that is $A=\begin{pmatrix} \cos(2\pi/3) &-\sin(2\pi/3) \\ \sin(2\pi/3) & \cos(2\pi/3) \end{pmatrix}$.

Here is an argument to find all solutions. Note that $x^2+x+1$ has two non-real roots, so it is irreducible over $\mathbb R$; therefore it must be the minimal polynomial of $A$. On the other hand, the characteristic polynomial of $A$ (which is a multiple of the minimal polynomial) is also of degree $2$, so the characteristic polynomial must be $x^2+x+1$. In other words, $\text{Tr}(A)=-1$ and $\det(A)=1$. This establishes, without lengthy calculations, the necessary and sufficient condition $$a+d=-1,\quad ad-bc=1$$

If $b=0$ or $c=0$, then $a,d$ would be the eigenvalues of $A$, which are non-real; so $bc\not=0$, and therefore the general solution is $$\begin{pmatrix} a & b \\ \frac{a(-1-a)-1}{b} & -1-a \end{pmatrix}$$ where $a\in\mathbb R, b\in\mathbb R\setminus\{0\}$.

Another method: by the above, $A$ has two distinct (non-real) eigenvalues, so the matrix can be diagonalized over $\mathbb C$; therefore all solutions are conjugate to each other over $\mathbb C$, hence over $\mathbb R$. In other words, all solutions are simply $$B^{-1}AB$$ where $B$ is an arbitrary invertible $2\times 2$ real matrix and $A$ is any particular solution, such as the ones given earlier.

An alternative way to establish this without spectral methods: for any $v\in\mathbb R^2\setminus\{\vec 0\}$, $v$ and $Av$ are linearly independent, since $A$ has no eigenvalue in $\mathbb R$. In this basis we have $A(Av)=A^2v = -Av - v$ since $(A^2+A+I)v=0$, therefore $A$ is conjugate to $\begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix}$. This is essentially (part of) the method used to derive the theory of the companion matrix, or the rational canonical form.
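The pieces above can be spot-checked numerically; here is a minimal sketch assuming Python with numpy (the matrix $B$ below is just one concrete invertible choice):

```python
# Sketch (assumes numpy): check the companion matrix, one member of the
# parametric family, and one conjugate of the companion matrix.
import numpy as np

def residual(M):
    return np.linalg.norm(M @ M + M + np.eye(2))

C = np.array([[0.0, -1.0],
              [1.0, -1.0]])                   # companion matrix of x^2 + x + 1
print(residual(C))                            # ~0

a, b = 0.3, 2.0                               # any real a, any b != 0
A = np.array([[a, b],
              [(a * (-1 - a) - 1) / b, -1 - a]])
print(residual(A))                            # ~0

B = np.array([[2.0, 1.0],
              [1.0, 1.0]])                    # an invertible matrix
print(residual(np.linalg.inv(B) @ C @ B))     # ~0, as expected for conjugates
```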

Just a user
  • 14,899
0

We can try to apply a fixed point iteration, for example

$A_{n+1} = \sqrt{-A_n-I}$

if we have access to some matrix square root.

We start with a random guess and iterate the above calculation until some limit on the number of iterations is reached, or until the condition $$\|A_n^2+A_n+I\| < \epsilon$$ is fulfilled, for some choice of $\epsilon >0$ and matrix norm.

One example of an approximate solution which my computer gives with this approach is

$$\left[\begin{array}{rr}-0.699810135215795&-0.709937280934347\\1.11266743103098&-0.300189864932606\end{array}\right]$$
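Here is a minimal sketch of such an iteration, assuming Python with numpy and scipy (`scipy.linalg.sqrtm` as the matrix square root). Convergence depends on the starting guess and on which square root `sqrtm` returns, so the loop is capped and the final residual is reported rather than assumed to be small:

```python
# Sketch (assumes numpy and scipy): fixed point iteration A <- sqrt(-A - I),
# stopped after max_iter steps or when ||A^2 + A + I|| < eps.  Convergence is
# not guaranteed for every starting guess or square-root branch.
import numpy as np
from scipy.linalg import sqrtm

def iterate(A0, eps=1e-10, max_iter=200):
    A = A0.astype(complex)
    for _ in range(max_iter):
        if np.linalg.norm(A @ A + A + np.eye(2)) < eps:
            break
        A = sqrtm(-A - np.eye(2))
    return A

A = iterate(np.random.rand(2, 2))
print(A)
print(np.linalg.norm(A @ A + A + np.eye(2)))   # residual; small only if it converged
```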

mathreadler
  • 25,824
-2

user317176's answer gives the simple idea that rotation by $\frac{2\pi}{3}$ gives an answer of the kind you are looking for. Here's another alternative:

Notice that $x^2 + x + 1 = 0 \implies x^3 = 1$. Thus, for $x = A$, $1 = \det(I)= \det(A^3) = \det(A)^3$, so $A$ is invertible. Then choose $e_1, Ae_1$ as a basis to get a similar matrix $B = \begin{bmatrix}0 & -1\\1 & -1\end{bmatrix}$, which sends $e_1 \mapsto Ae_1$ and $Ae_1 \mapsto A^2e_1 = -(e_1 + Ae_1)$. Since $1$ is not an eigenvalue of $B$, $\textbf{0} = (B-I)^{-1}(B^3 - I) = B^2 + B + I$.

Note: in general this method will yield larger matrices, but a $2\times 2$ rotation matrix (by $2\pi/n$) still works if you want to solve $x^{n-1}+ \cdots + x + 1 = 0$.
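To make the change of basis concrete, here is a minimal numpy sketch that takes $A$ to be the rotation by $2\pi/3$ mentioned above and checks that its matrix in the basis $\{e_1, Ae_1\}$ is exactly $B$:

```python
# Sketch (assumes numpy): take A = rotation by 2*pi/3, form the basis {e1, A e1},
# and check that in that basis A is represented by B = [[0, -1], [1, -1]].
import numpy as np

t = 2 * np.pi / 3
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

e1 = np.array([1.0, 0.0])
P = np.column_stack([e1, A @ e1])              # change-of-basis matrix
B = np.linalg.inv(P) @ A @ P

print(np.round(B, 10))                         # [[0, -1], [1, -1]]
print(np.allclose(B @ B + B + np.eye(2), 0))   # True
```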