I am reading a paper on deep learning: Kawaguchi et al., "Generalization in Deep Learning."
If $ϕ$ is a matrix of dimensions $m \times n$, is it valid to assume that the rank of $ϕ$ can be greater than $m$ or $n$? Thanks a lot for the replies.
Think about one of the meanings of the rank of a matrix: it's the dimension of the range of the linear transformation that the matrix represents. The range is a subspace of the codomain, so it obviously can't have a greater dimension than the codomain, and that dimension is equal to the number of rows of the matrix. On the other hand, the range's dimension also can't be greater than the dimension of the domain. That takes a bit of proof, but the idea is that preimages of a set of linearly independent vectors are themselves linearly independent, and by definition you can't have more linearly independent vectors than the dimension of the space in which they live. The dimension of the domain is equal to the number of columns of the matrix. Putting the two bounds together, $\operatorname{rank} ϕ \le \min(m, n)$.
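As a quick numerical sanity check (a minimal sketch using numpy, not something from the paper), you can confirm that the computed rank never exceeds either dimension:

```python
import numpy as np

# Sanity check: the rank of an m x n matrix never exceeds min(m, n).
rng = np.random.default_rng(0)
for m, n in [(3, 5), (5, 3), (4, 4)]:
    phi = rng.standard_normal((m, n))  # a random m x n matrix
    r = np.linalg.matrix_rank(phi)
    assert r <= min(m, n)
    print(f"{m} x {n}: rank = {r}")
```

A generic random matrix will in fact achieve the bound, i.e. its rank equals $\min(m, n)$, but it can never exceed it.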
The rank of a matrix is defined as the order of its highest-order non-zero minor.
Therefore, the rank of a matrix is $r$ if:
1- every minor of order $(r+1)$ is $0$, and
2- there is at least one non-zero minor of order $r$ (see the worked example below).
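For instance (a small illustrative example of this definition, not taken from the paper), consider the $2 \times 3$ matrix
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{pmatrix}.$$
The largest minors that can be formed are of order $2$, and all three of them vanish, e.g. $\begin{vmatrix} 1 & 2 \\ 2 & 4 \end{vmatrix} = 1 \cdot 4 - 2 \cdot 2 = 0$, while there is a non-zero minor of order $1$ (e.g. $|1|$), so $\operatorname{rank} A = 1$. Note that no minor of order greater than $\min(2, 3) = 2$ can even be formed from $A$.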
Now assume a square matrix of order $m$ (a minor is the determinant of a square sub-matrix only). For its rank to be greater than $m$ (say $m+1$), you would need:
1- every minor of order $(m+2)$ to be $0$, and
2- at least one non-zero minor of order $(m+1)$.
Now, how are you going to create a minor of order $m+1$ from a matrix of order $m$, let alone prove that it is non-zero? (You might consider adding $0$'s to create a new row and a new column so that the order becomes $m+1$, but any such minor necessarily turns out to be $0$, as the expansion below shows.)
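To spell out why the zero-padding attempt fails (a one-line check under the construction just described): expand the padded determinant along its appended row of zeros. Every term in the Laplace expansion picks up a factor of $0$:
$$\det \begin{pmatrix} M & \mathbf{0} \\ \mathbf{0}^{\top} & 0 \end{pmatrix} = \sum_{j=1}^{m+1} (-1)^{(m+1)+j} \cdot 0 \cdot (\text{cofactor}_j) = 0.$$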
So, a matrix can't have its rank greater than its order. Using the echelon form of a matrix gives the same result (a quick illustration is below), but this way is quite simple.
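For completeness, here is the echelon-form version of the argument (a small illustrative reduction, not from the paper): row reduction produces at most one pivot per row and per column, and the number of pivots is the rank, so the rank is at most $\min(m, n)$. For example,
$$\begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{pmatrix} \xrightarrow{R_2 \to R_2 - 2R_1} \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \end{pmatrix},$$
which has a single pivot, hence rank $1 \le \min(2, 3)$.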