
I know that for every symmetric bilinear form $f: U \times U \rightarrow \mathbb{K}$, where $\operatorname{char}\mathbb{K} \neq 2$, there exists a basis in which the matrix of $f$ is diagonal.

Could you tell me what happens if we omit the assumption that $f$ is symmetric?

Could you give me an example of a non-symmetric bilinear form $f$ which cannot be diagonalized, even if we change the basis of one of the spaces?

Hagrid
    "one of the spaces" suggest there is more than one space involved, but there isn't (unless you count $\Bbb K$ as a vector space, but choosing anything else than ${1}$ as its basis would be very confusing (and also it would have no effect on the question diagonal or not)). – Marc van Leeuwen Jun 14 '13 at 09:49

1 Answer


Note that for a bilinear form, the so-called "diagonalisation" is not diagonalisation via similarity, but diagonalisation via congruence. That is, if $A$ is the matrix for $f$ w.r.t. some basis, we look for an invertible matrix $P$ such that $P^TAP$ is equal to some diagonal matrix $D$. Yet, if $P^TAP=D$, then $A=(P^{-1})^TDP^{-1}$, and hence $A$ is necessarily symmetric: since $D^T=D$, we get $A^T=(P^{-1})^TD^TP^{-1}=(P^{-1})^TDP^{-1}=A$.

(To put it another way, if $f$ is diagonalisable, then its matrix w.r.t. some basis is diagonal and hence symmetric. Therefore $f$ is symmetric.)
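For concreteness, here is one possible example spelling out this argument (any non-symmetric matrix would do, so the specific choice below is just an illustration). Take $\mathbb{K}=\mathbb{R}$, $U=\mathbb{R}^2$ and $f(x,y)=x_1y_2$, whose matrix in the standard basis is
$$A=\pmatrix{0&1\\ 0&0}.$$
If some invertible $P$ made $P^TAP$ diagonal, then $P^TAP$ would in particular be symmetric, so $P^TAP=(P^TAP)^T=P^TA^TP$; multiplying by $(P^T)^{-1}$ on the left and $P^{-1}$ on the right gives $A=A^T$, contradicting $A\neq A^T$. So this $f$ cannot be diagonalised by any change of basis.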

user1551
  • I've tried something similar: if $A$ is the matrix for $f: U \times V \rightarrow \mathbb{K}$, and $Q, P$ are the change-of-basis matrices for $U, V$, then $A'=Q^{T}AP$, but I didn't know what to do next. Your solution is very neat. Thank you. Could you come up with an example of a non-symmetric matrix which cannot be diagonalised? – Hagrid Jun 14 '13 at 08:07
  • @Hagrid Hmm, I think my answer has shown that if a bilinear form is diagonalisable via congruence then it is symmetric. So no asymmetric matrix is diagonalisable via congruence. – user1551 Jun 14 '13 at 09:09
  • I have one more question, though. How do we find the matrix $P$? – Hagrid Jun 16 '13 at 13:43
  • @Hagrid (1) By considering a permutation matrix $P$, we may assume that the nonzero diagonal entries (if any) of $A$ precede the zero diagonal entries. So we may use Gaussian elimination (applying the same operations to rows and to columns, so that each step is a congruence) to eliminate all entries below or to the right of any nonzero diagonal entry. As a result, we may assume that $A$ is of the form $D\oplus S$, where $D$ is a diagonal matrix and $S$ is a symmetric zero-diagonal matrix. – user1551 Jun 16 '13 at 21:32
  • @Hagrid (2) So the problem reduces to the case where $A$ has a zero diagonal. Suppose $A$ is $n\times n$. If the first row (and hence the first column) of $A$ is zero, the problem reduces to the $(n-1)\times(n-1)$ case. So, WLOG, we may assume that the first row of $A$ is nonzero. Again, by applying a permutation matrix, we may assume that $a_{12}\neq 0$. Since $A$ has a zero diagonal, if we apply the congruence $A\mapsto P^TAP$ with $P=\pmatrix{1&0\\ 1&1}\oplus I$, the new $a_{11}$ equals $2a_{12}\neq 0$. So the problem reduces to case (1) again and we may proceed by recursion. – user1551 Jun 16 '13 at 21:33
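
As an illustration of the procedure in the two comments above, here is a minimal Python/NumPy sketch of congruence diagonalisation. The function name `congruence_diagonalize` is just a placeholder, floating-point reals stand in for a field of characteristic $\neq 2$, and the input is assumed symmetric (only symmetric forms can be diagonalised, as the answer shows).

```python
import numpy as np

def congruence_diagonalize(A, tol=1e-12):
    """Return (P, D) with P invertible and P.T @ A @ P equal to the diagonal D.

    A is assumed to be real symmetric; floats stand in for a field of
    characteristic != 2.
    """
    D = np.array(A, dtype=float)
    n = D.shape[0]
    P = np.eye(n)

    for k in range(n):
        if abs(D[k, k]) < tol:
            # Case (1): bring a later nonzero diagonal entry to position k
            # by a congruence with a permutation matrix.
            swap = next((j for j in range(k + 1, n) if abs(D[j, j]) > tol), None)
            if swap is not None:
                D[:, [k, swap]] = D[:, [swap, k]]
                D[[k, swap], :] = D[[swap, k], :]
                P[:, [k, swap]] = P[:, [swap, k]]
            else:
                # Case (2): zero diagonal.  Find a_{kj} != 0 and add basis
                # vector j to basis vector k; the new a_{kk} is 2*a_{kj} != 0.
                j = next((j for j in range(k + 1, n) if abs(D[k, j]) > tol), None)
                if j is None:
                    continue  # row/column k is already zero, nothing to do
                D[:, k] += D[:, j]
                D[k, :] += D[j, :]
                P[:, k] += P[:, j]
        # Clear the rest of row/column k by the congruence that subtracts
        # multiples of basis vector k from the later basis vectors.
        for i in range(k + 1, n):
            c = D[i, k] / D[k, k]
            D[i, :] -= c * D[k, :]
            D[:, i] -= c * D[:, k]
            P[:, i] -= c * P[:, k]
    return P, D


# Example: a symmetric matrix with zero diagonal (the "S" block above).
A = np.array([[0., 1., 2.],
              [1., 0., 3.],
              [2., 3., 0.]])
P, D = congruence_diagonalize(A)
print(np.round(P.T @ A @ P, 10))  # diagonal, equal to D up to rounding
```

On this input the sketch returns an invertible $P$ with $P^TAP$ diagonal; the particular diagonal entries depend on the pivoting choices made above.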