I am a physicist and very much used to the fact that any self-adjoint matrix ($H^{\dagger} =H$) in a finite-dimensional complex linear space can be uniquely specified by (a) the set of its (real) eigenvalues, and (b) the unitary matrix built from its (orthonormal) eigenvectors:
$$H = U^{\dagger} \cdot \operatorname{diag}\{ h_1, h_2, \ldots, h_n \} \cdot U$$
where $(\cdot)^{\dagger} \equiv (\cdot)^{*T}$ denotes conjugate transpose.
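(For concreteness, this spectral decomposition can be checked numerically with NumPy's `eigh`; note that `eigh` returns the eigenvectors as *columns* of a matrix $V$ with $H = V\,\operatorname{diag}\{h_i\}\,V^{\dagger}$, so $U = V^{\dagger}$ in the convention above. A minimal sketch:)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = M + M.conj().T           # H^dagger = H, i.e. Hermitian

w, V = np.linalg.eigh(H)     # w: real eigenvalues, columns of V: orthonormal eigenvectors
U = V.conj().T               # so that H = U^dagger @ diag(w) @ U

assert np.allclose(U.conj().T @ np.diag(w) @ U, H)
```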
I need a generalization of this for the classes of symmetric ($S^{T} = S$) and anti-symmetric ($A = -A^{T}$) complex matrices. The symmetric case seems easy:
$$S = U^{T}\cdot \operatorname{diag}\{ s_1, s_2, \ldots, s_n \} \cdot U$$ where the singular values $s_1, s_2, \ldots$ are non-negative reals and $U$ again is a generic unitary matrix (this is the Autonne–Takagi factorization). (Here is a more precise statement, accounting for the extra choice of signs).
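(A minimal numerical sketch of the symmetric case, built from an ordinary SVD. It assumes the singular values are distinct, in which case the conjugated right singular vectors agree with the left ones up to a diagonal phase matrix, whose square root can be absorbed into $U$; this phase-splitting trick is one standard way to obtain the factor, not the only one:)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = M + M.T                      # S^T = S, complex symmetric

# Ordinary SVD: S = V @ diag(s) @ Wh, with s non-negative
V, s, Wh = np.linalg.svd(S)
W = Wh.conj().T

# Since S = S^T, conj(W) = V @ D with D diagonal and unit-modulus
# (assuming distinct singular values):
D = V.conj().T @ W.conj()        # numerically diagonal
U = (V @ np.diag(np.sqrt(np.diag(D)))).T   # split the phases between the two factors

assert np.allclose(U.T @ np.diag(s) @ U, S)   # S = U^T diag(s) U
```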
But I have difficulty identifying the general singular value decomposition structure for an arbitrary anti-symmetric matrix. I've observed numerically that the rank of $A$ is at most $n-1$ (EDIT: when $n$ is odd; $n$ is the dimensionality of the linear space), so the generalization needs to be 'clever'.
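(The rank observation is consistent with $\det A = \det(-A^{T}) = (-1)^{n}\det A$, which forces $\det A = 0$ whenever $n$ is odd. A quick numerical check with random anti-symmetric matrices, a sketch only:)

```python
import numpy as np

rng = np.random.default_rng(2)
ranks = {}
for n in (3, 4, 5, 6):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = M - M.T                       # A^T = -A
    ranks[n] = np.linalg.matrix_rank(A)

# det(A) = (-1)^n det(A) implies det(A) = 0 for odd n, so a generic
# anti-symmetric A has rank n for even n but only n - 1 for odd n.
```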
Can you help?