I have two matrices $A$ and $B$ and consider $C(t)=A+tB$ with $t\in [0,1]$.
Are the eigenvalues of $C(t)$, $\lambda_i:[0,1]\rightarrow \mathbb{C}$, continuous functions of $t$?
I guess that the answer is yes, but why?
The answer is yes, and it follows from the fact that the roots of a monic polynomial vary continuously with its coefficients.
We have the following theorem, taken from *A Brief Introduction to Numerical Analysis* by Tyrtyshnikov.
Theorem: Consider a parametrized family of polynomials $$p(x,t) = x^n + a_1(t)x^{n-1} + \cdots + a_n(t),$$ where each $a_i(t)$ is a continuous function on the interval $[\alpha,\beta]$. Then there exist continuous functions $$x_1(t),\ x_2(t),\ \ldots,\ x_n(t)$$ on $[\alpha, \beta]$ such that for each $i$ we have $$p(x_i(t),t) = 0,\ \ \ t\in[\alpha,\beta].$$ $\square$
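As a sanity check (my own numerical illustration, not from Tyrtyshnikov's book), one can sample $t$ on a fine grid, compute the roots at each grid point, and match each new root to the nearest previous one; the matched branches then move only a little per step. The coefficient functions `a1`, `a2` below are arbitrary continuous choices:

```python
import numpy as np

def roots_at(t):
    # p(x, t) = x^2 + a1(t) x + a2(t); a1, a2 are arbitrary continuous choices
    a1, a2 = np.cos(t), t - 0.5
    return np.roots([1.0, a1, a2])

ts = np.linspace(0.0, 1.0, 1001)
branches = [roots_at(ts[0])]
for t in ts[1:]:
    prev, cur = branches[-1], roots_at(t)
    # Pick the pairing of new roots to old ones (identity or swap) that
    # minimizes total movement; this selects the continuous branches.
    keep = abs(cur[0] - prev[0]) + abs(cur[1] - prev[1])
    swap = abs(cur[1] - prev[0]) + abs(cur[0] - prev[1])
    branches.append(cur if keep <= swap else cur[::-1])

branches = np.array(branches)
# A small value here is consistent with the continuity the theorem asserts.
print("largest per-step jump of a matched root:", np.abs(np.diff(branches, axis=0)).max())
```

With this choice of coefficients the discriminant changes sign on $[0,1]$, so the two branches collide and then leave the real axis as a conjugate pair, yet the matched paths still move continuously.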
With $C(t)=A+tB$, each entry of the matrix is a polynomial of degree at most one in $t$, so the coefficients of the characteristic polynomial of $C(t)$ are themselves polynomials in $t$, in particular continuous on $[0,1]$; the characteristic polynomial is therefore a parametrized family of the form above. The theorem then directly implies that its roots, i.e. the eigenvalues of $C(t)$, can be chosen as continuous functions of $t$.
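Here is a sketch of the same idea applied directly to $C(t)=A+tB$ (again my own illustration, not part of the cited argument; the random $A$, $B$ and the SciPy dependency are assumptions of the demo). Matching the eigenvalues step to step with the Hungarian algorithm recovers the continuous labelling:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # assumes SciPy is available

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

ts = np.linspace(0.0, 1.0, 2001)
paths = [np.linalg.eigvals(A)]          # C(0) = A
for t in ts[1:]:
    prev = paths[-1]
    cur = np.linalg.eigvals(A + t * B)  # eigenvalues of C(t), unordered
    # Optimal matching of new eigenvalues to old ones recovers the
    # continuous labelling lambda_1(t), ..., lambda_n(t).
    cost = np.abs(prev[:, None] - cur[None, :])
    _, cols = linear_sum_assignment(cost)
    paths.append(cur[cols])

paths = np.array(paths)
print("largest per-step eigenvalue jump:", np.abs(np.diff(paths, axis=0)).max())
```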
If $t\mapsto A(t)$ is a continuous function from an interval $I$ of the real line into $\mathbb{C}^{n\times n}$, then there exist $n$ continuous functions $\lambda_i:I\to \mathbb{C}$, $i=1,\ldots,n$, such that for each $t\in I$ the spectrum of $A(t)$, counted with multiplicities, equals $\{\lambda_1(t),\ldots,\lambda_n(t)\}$. Note that for some $t_0\in I$ and $i\neq j$ we can have $\lambda_i(t_0)=\lambda_j(t_0)$: the eigenvalue curves may cross.
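(For instance, a standard example not taken from the references below: with $A(t)=\operatorname{diag}(t,\,1-t)$ one continuous choice is $\lambda_1(t)=t$, $\lambda_2(t)=1-t$, and $\lambda_1(1/2)=\lambda_2(1/2)$.)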
Proofs of this theorem can be found in:
[1] T. Kato, *A Short Introduction to Perturbation Theory for Linear Operators*, Springer-Verlag, 1982, pp. 126–127, Theorem 5.2.
[2] R. Bhatia, *Matrix Analysis*, Springer, 1997, pp. 154–155, Theorem VI.1.4 and Corollary VI.1.6.
The proof below is self-contained but restricted to a continuous family of self-adjoint matrices.
Assume the function $t\mapsto H(t)\in M_{n\times n}(\mathbb{C})$ is continuous on $[0,1]$, with each $H(t)$ self-adjoint. Since $[0,1]$ is compact, the map is uniformly continuous. With no loss of generality we may assume that the matrices are positive semidefinite: replace $H(t)$ by $H(t)+aI$ with $a\ge \displaystyle\max_{0\le u\le 1}\|H(u)\|$; this shifts every eigenvalue by the constant $a$, so continuity of the eigenvalues is unaffected. By uniform continuity, for any $\varepsilon>0$ there exists $\delta>0$ such that $$|t_1-t_2|<\delta \implies \|H(t_1)-H(t_2)\|<\varepsilon.$$

Let $\lambda_1(t)\ge \lambda_2(t)\ge \ldots \ge \lambda_n(t)$ denote the eigenvalues of $H(t)$. By the minimax principle, $$\lambda_k(t)=\min_{\dim V=k-1}\max\{\|H(t)x\|\,:\, x\perp V, \ \|x\|=1\}=\min_{\dim V=k-1}\|H(t)(I-P_V)\|,$$ where $P_V$ denotes the orthogonal projection onto $V$. For $|t_1-t_2|<\delta$ we get $$\lambda_k(t_2)=\min_{\dim V=k-1}\|H(t_2)(I-P_V)\|\le \min_{\dim V=k-1}\|H(t_1)(I-P_V)\|+\varepsilon =\lambda_k(t_1)+\varepsilon,$$ using $\|H(t_2)(I-P_V)\|\le\|H(t_1)(I-P_V)\|+\|H(t_2)-H(t_1)\|$. Exchanging $t_1$ and $t_2$ gives the converse inequality, hence $$|\lambda_k(t_1)-\lambda_k(t_2)|\le \varepsilon,$$ so each $\lambda_k$ is (uniformly) continuous on $[0,1]$.
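The final inequality is in effect the Weyl-type bound $|\lambda_k(t_1)-\lambda_k(t_2)|\le\|H(t_1)-H(t_2)\|$. Here is a quick numerical check (my own illustration, not part of the proof; the test family $H(t)=H_0+tH_1$ is an assumed choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def hermitian(M):
    return (M + M.conj().T) / 2

# An assumed test family H(t) = H0 + t*H1 of Hermitian matrices.
H0 = hermitian(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
H1 = hermitian(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
H = lambda t: H0 + t * H1

for t1, t2 in [(0.1, 0.2), (0.3, 0.7), (0.0, 1.0)]:
    lam1 = np.linalg.eigvalsh(H(t1))          # eigenvalues, sorted ascending
    lam2 = np.linalg.eigvalsh(H(t2))
    gap = np.abs(lam1 - lam2).max()           # max_k |lambda_k(t1) - lambda_k(t2)|
    bound = np.linalg.norm(H(t1) - H(t2), 2)  # spectral norm ||H(t1) - H(t2)||
    print(f"{gap:.4f} <= {bound:.4f}:", gap <= bound + 1e-12)
```

(`eigvalsh` sorts ascending while the proof orders the $\lambda_k$ descending, but the matched absolute differences are the same either way.)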