
Let $M$ be an $n \times n$ matrix. The $(i,j)$-submatrix of $M$ is the $(n-1) \times (n-1)$ matrix $M_{ij}$ obtained by removing the $i$-th row and the $j$-th column from $M$. The $(i,j)$-minor is $m_{ij} = \det M_{ij}$ and the $(i,j)$-cofactor is $c_{ij} = (-1)^{i + j}m_{ij}$.
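For concreteness, here is how these definitions look numerically (a small illustrative sketch in Python/numpy, using 0-based indices instead of the 1-based indices above):

```python
import numpy as np

def submatrix(M, i, j):
    """Remove row i and column j from M."""
    return np.delete(np.delete(M, i, axis=0), j, axis=1)

def minor(M, i, j):
    return np.linalg.det(submatrix(M, i, j))

def cofactor(M, i, j):
    return (-1) ** (i + j) * minor(M, i, j)

M = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])

print(submatrix(M, 0, 1))  # [[ 4.  6.] [ 7. 10.]]
print(minor(M, 0, 1))      # 4*10 - 6*7 = -2
print(cofactor(M, 0, 1))   # (-1)^(0+1) * (-2) = 2
```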

Is there a way to define these notions for linear maps in a coordinate-free way? I'm looking for a way, given a linear map $T \colon V \to V$, to define $S \colon W \to W$ where $W$ is a hyperplane of $V$, such that given a basis $e = (e_1, \dots, e_n)$ of $V$, there exists a basis $f = (f_1, \dots, f_{n-1})$ of $W$, related to $e$ in some way, for which the matrix of $S$ in $f$ is a submatrix of the matrix of $T$ in $e$.

Some of you might be wondering about the motivation behind all this: I am revisiting some ideas in linear algebra in a coordinate-free way, and I've stumbled onto Cayley-Hamilton. The original algebraic proof uses the definition of the cofactor, so I'm wondering how one would go about proving it without coordinates and without using topology (with topology there is a simple proof using the density of diagonalizable matrices, but that is too far out of context here).

1 Answer


The minors and the submatrices inherently rely on the choice of bases for the domain and the target space, so strictly speaking I would say it is impossible to completely do away with coordinates, but we can make it so that the dependence on the bases isn't apparent. However, there is a coordinate-free interpretation of the cofactor of $M$, which relies on some multilinear algebra.

Suppose that the matrix $M$ represents the linear map $T:V \to V$ with respect to the basis $\{e_1, \dots, e_n\}$. The submatrix $M_{ij}$ of $M$ then represents the linear map $$ S_{ij} = \pi_{\hat i}\circ T \circ \mathfrak{i}_{\hat j}: W \to U, $$ where $W = \text{span}\{e_1,\dots, e_{j-1},e_{j+1},\dots, e_n\}$, $U = \text{span}\{e_1,\dots, e_{i-1},e_{i+1},\dots, e_n\}$. Here $\mathfrak{i}_{\hat j}: W\hookrightarrow V$ is the inclusion map, and $\pi_{\hat i}: V \to U$ is the projection map (which discards the $e_i$-component).
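If it helps to see this concretely: with respect to the bases obtained from $\{e_1,\dots,e_n\}$ by omitting $e_j$ (for $W$) and $e_i$ (for $U$), the inclusion is the identity matrix with the $j$-th column deleted and the projection is the identity with the $i$-th row deleted, so the composition reproduces the submatrix. A small numpy check of this (0-based indices, a sketch only):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))   # matrix of T in the basis e_1, ..., e_n
i, j = 2, 1                       # row/column to delete (0-based)

incl = np.delete(np.eye(n), j, axis=1)  # inclusion W -> V, shape (n, n-1)
proj = np.delete(np.eye(n), i, axis=0)  # projection V -> U, shape (n-1, n)

S_ij = proj @ M @ incl                  # matrix of pi_i o T o incl_j
M_ij = np.delete(np.delete(M, i, axis=0), j, axis=1)  # the (i,j)-submatrix

assert np.allclose(S_ij, M_ij)
```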

The minors (or subdeterminants) of $M$ are the determinants of these $S_{ij}$, viewing $W,U$ as copies of $\Bbb R^{n-1}$. In a sense that can be made precise, they measure how $T$ stretches the $(n-1)$-dimensional surfaces in $\Bbb R^n$.

The cofactor matrix of $M$ represents the linear map $$ \text{cof}\,T = *^{-1} \circ \Big(\bigwedge^{n-1} T\Big) \circ * :V \to V, $$ where $*:V\to \bigwedge^{n-1}V$ is the Hodge star isomorphism and $\bigwedge^{n-1}T$ is the action that $T$ induces on $\bigwedge^{n-1}V$, which is, roughly speaking, the space of all $(n-1)$-dimensional surface elements in $V$. It is characterized by $$ \langle \text{cof}\,T\, v, Tu \rangle = (\det T) \langle v, u \rangle $$ for all $v,u \in V$. The fact that its entries are $\pm$ the minors reflects the fact that they scale by $\alpha^{n-1}$ when you scale $T$ by $\alpha$, which is precisely how the $(n-1)$-volume of $(n-1)$-dimensional objects scales.
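In coordinates, $\langle \operatorname{cof}T\,v, Tu\rangle = v^\top C^\top M u$ where $C$ is the cofactor matrix, so the characterization above amounts to the classical adjugate identity $C^\top M = (\det M)\,I$. A quick numerical sanity check of that identity (a numpy sketch):

```python
import numpy as np

def cofactor_matrix(M):
    """C[i, j] = (-1)^(i+j) * det(M with row i and column j removed)."""
    n = M.shape[0]
    C = np.empty_like(M)
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
C = cofactor_matrix(M)

# <cof(T) v, T u> = v^T (C^T M) u for all u, v, so the characterization
# says C^T M = det(M) * I, i.e. the adjugate identity adj(M) M = det(M) I.
assert np.allclose(C.T @ M, np.linalg.det(M) * np.eye(4))
```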

BigbearZzz
  • Doesn't use of the Hodge star operator presuppose that $V$ comes equipped with an inner product (and an orientation)? – Travis Willse Feb 28 '24 at 06:18
  • @TravisWillse Right, I should have noted that the given basis defines an oriented unit $n$-cube as well. I thought about including that information, but in the end I didn't since I wasn't sure if it'd be that useful if the OP doesn't know multilinear algebra already. Maybe it'd have been better for me to write it after all for completeness' sake. – BigbearZzz Feb 28 '24 at 06:28
  • Right, what I'm getting at is that introducing an inner product, or just as well a preferred set of (oriented, orthonormal) linear coordinates, gives us something between a bare vector space $V$ and a vector space equipped with a choice of linear coordinates, so it may or may not be "coordinate-free" in the sense o.p. is looking for. – Travis Willse Feb 28 '24 at 14:34
  • It's worth mentioning, maybe, that some of what o.p. asks, like the construction of a preferred nontrivial map $S : W \to W$ on a general subspace $W \subset V$ from a linear transformation $T : V \to V$, anyway requires an inner product or a similar structure. – Travis Willse Feb 28 '24 at 14:40