My question concerns a situation where you are looking for the determinant of a matrix that is itself composed of other matrices (in my example, all the inner blocks are square and of equal dimensions).
Say we have the matrix $A_{cl}$: $$ A_{cl}= \left[\begin{matrix} 0 & I\\ -kL_e & -kL_e \end{matrix}\right] $$ where $L_e$ is a Laplacian matrix of a graph (symmetric and positive definite in this example, because the graph is a spanning tree).
I presume the following: $L_e$ is $n \times n$, so $A_{cl}$ is $2n \times 2n$.
I see the following development, which I don't understand:
$$ \det(\lambda I-A_{cl}) = \det\!\left(\lambda^2I + (\lambda+1)kL_e\right) = 0 $$ Since $\lambda = -1$ does not satisfy this equation, it is not an eigenvalue of $A_{cl}$. The eigenvalues of $A_{cl}$ thus satisfy $$ \det\!\left(\frac{\lambda^2}{\lambda+1}I + kL_e\right) = 0 $$ Denoting the eigenvalues of $-kL_e$ by $\mu_i$, one has that, for each $i$, $$ \mu_i = \frac{\lambda^2}{\lambda+1} $$ and hence $$ \lambda_i = \frac12\left(\mu_i \pm \sqrt{\mu_i^2+4\mu_i}\right) $$
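For reference, writing out the block matrix whose determinant is taken (my own expansion, not part of the quoted text):
$$ \lambda I - A_{cl} = \left[\begin{matrix} \lambda I & -I \\ kL_e & \lambda I + kL_e \end{matrix}\right]. $$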
My beef with this development is mostly with its first sentence: $$ \det(\lambda I-A_{cl}) = \det\!\left(\lambda^2I + (\lambda+1)kL_e\right) = 0 $$ This is the determinant of a block matrix, yet they treat it as if it were an ordinary $2 \times 2$ determinant (and keep the $\det(\cdot)$ operation afterwards, which is even more confusing). If anybody could explain the mechanics behind this first step of the development, I would be very grateful.
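For what it's worth, I did check the claimed identity numerically and it does hold, so I'm asking *why* it works, not whether it's true. A quick NumPy sketch (any symmetric positive definite matrix stands in for $L_e$ here, and the test value of $\lambda$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 2.0

# Stand-in for L_e: any symmetric positive definite matrix
# (the edge Laplacian of a spanning tree is SPD).
M = rng.standard_normal((n, n))
Le = M @ M.T + n * np.eye(n)

I = np.eye(n)
Z = np.zeros((n, n))
Acl = np.block([[Z, I], [-k * Le, -k * Le]])

lam = 1.7  # arbitrary scalar test value for lambda
lhs = np.linalg.det(lam * np.eye(2 * n) - Acl)
rhs = np.linalg.det(lam**2 * I + (lam + 1) * k * Le)
print(np.isclose(lhs, rhs))  # True: the two determinants agree
```

The agreement holds for every $\lambda$ and every $L_e$ I tried, so the reduction from a $2n \times 2n$ determinant to an $n \times n$ one is apparently exact, not an approximation.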
Thank you