
When I apply a linear transformation to a special vector, the vector doesn't rotate: it stays in the same direction, but it can get scaled. That special vector is an eigenvector, and the scaling factor (i.e., the ratio of the vector's size after applying the linear transformation $A$ to its size before) is an eigenvalue.

But this stuff seems trivial. Why is knowing about a vector that doesn't change its direction under a linear transformation so important that it appears basically everywhere? For example, quantum physics uses eigenvectors and eigenvalues a lot.

Is there a deeper meaning to this that I don't see?

Bernard • 175,478
  • Every linear transformation can be represented as an almost-diagonal matrix (the Jordan form) whose diagonal entries are just the eigenvalues, in an appropriate basis containing the corresponding eigenvectors (over an algebraically closed field like $\Bbb C$). – Berci Sep 19 '21 at 20:05
  • It is easier to deal with a diagonal matrix than an arbitrary one. So if there is a basis that diagonalizes a matrix $A$, then it can be advantageous to find it (orthonormal bases are especially desirable). But to say a basis $\{v_1, \dots, v_n\}$ diagonalizes $A$ is to say that for each $j$, $Av_j = \lambda_j v_j$ for some $\lambda_j \in \mathbb{F}$ (see the sketch below). – Mason Sep 19 '21 at 20:26
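To make the comments above concrete, here is a minimal NumPy sketch (the $2\times 2$ matrix is a hypothetical example, not taken from the question): it checks $Av_j = \lambda_j v_j$ for each eigenvector and that the eigenvector basis diagonalizes $A$.

```python
import numpy as np

# Hypothetical 2x2 example matrix; any diagonalizable matrix would do.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues w and a matrix P whose columns
# are the corresponding eigenvectors.
w, P = np.linalg.eig(A)

# Each eigenvector only gets scaled: A v_j = lambda_j v_j.
for j in range(len(w)):
    assert np.allclose(A @ P[:, j], w[j] * P[:, j])

# In the eigenvector basis, A becomes the diagonal matrix of eigenvalues.
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))  # diagonal with entries 5 and 2 (order may vary)
```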

1 Answer


For one thing, they're just very convenient mathematically. As the comments above note, you can transform a matrix into its Jordan normal form, which for "nice" (diagonalizable) matrices is just a diagonal matrix. Diagonal matrices are very easy to handle and make some difficult things straightforward: consider, for example, the matrix exponential, which is important in, e.g., differential equations. For diagonal matrices it is trivial to compute. Furthermore, you can use eigenvalues (or their "generalization", the singular values) to find out how a transformation stretches space, which gives them geometric meaning.
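To illustrate the matrix-exponential point, here is a small sketch (assuming NumPy and SciPy; the matrix is again a hypothetical example): for a diagonalizable $A = PDP^{-1}$ we have $e^A = P e^D P^{-1}$, and $e^D$ is just the elementwise exponential of the diagonal.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical diagonalizable matrix, as in the sketch above.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

w, P = np.linalg.eig(A)

# A = P D P^{-1}  =>  exp(A) = P exp(D) P^{-1}, where exp(D) is simply
# the elementwise exponential of the diagonal entries (the eigenvalues).
expA = P @ np.diag(np.exp(w)) @ np.linalg.inv(P)

# Agrees with SciPy's general-purpose matrix exponential.
assert np.allclose(expA, expm(A))
```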

As for why they so often crop up in applications beyond this, I think there are two reasons. One is that they are sometimes directly what we're interested in: consider an amplifier or oscillator circuit, for example; it's clear that we care about its eigenvalues and how we can tune them (a sketch of this follows below). The other is that a lot of mathematics is oriented towards turning hard problems into linear-algebra problems, and once you have some endomorphism you might as well ask yourself, "What are its eigenvalues? Can I interpret them in a way that tells me something new about my problem?" Because of "the unreasonable effectiveness of mathematics", this apparently works out quite nicely. In a way, eigenvectors and eigenvalues encode the "essence" of a linear map, so it makes sense that they're useful and interesting; but I don't believe the fact that they actually come up that often has a particularly deep reason.
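As a hedged illustration of the oscillator example (the damped-oscillator model and all parameter values are an assumption for illustration, not part of the original answer): for the linear system $\dot x = Ax$, solutions behave like $e^{\lambda t}$ along eigenvectors, so the real parts of the eigenvalues tell you whether oscillations decay, are sustained, or grow, and "tuning" the circuit means moving those eigenvalues.

```python
import numpy as np

def oscillator_matrix(omega, zeta):
    """State matrix of x'' + 2*zeta*omega*x' + omega**2 * x = 0,
    written as the first-order system d/dt [x, x'] = A [x, x']."""
    return np.array([[0.0, 1.0],
                     [-omega**2, -2.0 * zeta * omega]])

# Re(lambda) < 0: oscillations decay; Re(lambda) = 0: sustained;
# Re(lambda) > 0: they grow. Im(lambda) sets the oscillation frequency.
for zeta in (0.5, 0.0, -0.1):  # damped, undamped, "amplifying"
    eigs = np.linalg.eigvals(oscillator_matrix(omega=2.0, zeta=zeta))
    print(zeta, eigs.real, eigs.imag)
```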

SV-97 • 1,101