How do I prove that a matrix $A\in\mathbb C^{n\times n}$ is irreducible if and only if its associated graph (defined as at Graph of a matrix) is strongly connected?
Update:
Seeing as no one answered for over a week, I tried to do it by myself.
- The first thing I did was try to show that permuting rows or columns alone doesn't change the strong connectedness of the graph. I didn't manage, and actually proved the opposite. Along the way, though, I did manage to show that transposition doesn't change it. The argument is that transposition inverts all the arrows of the graph, so any path from a node $u$ to a node $v$ becomes a path from $v$ to $u$; in particular, if there is a loop through all the nodes, inverting the arrows just means going through it the other way round, so it is still there and the graph stays strongly connected. And if there is no such loop, transposing can't make one appear, as otherwise transposing back would make it disappear, which we have just shown to be impossible. This result may not be useful, but since I've done it I thought I might as well write it down.
Then I tried to think of what permuting both rows and columns does to the graph. Why that? Let's recall the notion of irreducible matrix:
A matrix $A\in\mathbb{C}^{n\times n}$ is said to be reducible if there exists a permutation matrix $\Pi$ for which $\Pi A\Pi^T$ is a block upper triangular matrix, i.e. has a block of zeroes in the bottom-left corner.
So if this operation does not alter the graph's strong connectedness, then I can work on the reduced matrix, show its graph is not strongly connected, and prove one implication. Now what do multiplications as in the definition of a reducible matrix, with $\Pi$ a matrix that swaps row $i$ with row $j$, do to the graph? Swapping the rows makes all arrows that go out of $i$ go out of $j$ and vice versa; swapping the columns does the same for the arrows entering $i$ (or $j$). So imagine we have a loop, and say it starts from a node other than $i$ and $j$. Up to the point where it first reaches, say, $i$, everything is unchanged. When the original loop reaches $i$, the new loop reaches $j$, and it goes out of $j$ to the same node the original went to from $i$, unless that node was $j$, in which case it goes to $i$. When the original loop enters $j$, the new loop enters $i$, and the same reasoning applies. So the net result is just that $i$ and $j$ swap names, and the loop is the same as before, up to that renaming. Hence this kind of operation does not alter the strong connectedness of the graph.
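The renaming argument above can be sanity-checked numerically. Here is a quick sketch (my own illustration, with an ad-hoc `strongly_connected` helper, not any standard routine) that tests strong connectedness via reachability and confirms that a simultaneous row/column permutation $\Pi A\Pi^T$ leaves it unchanged:

```python
# Sketch (my own check): strong connectedness of the graph of A
# (arrow i -> j whenever A[i, j] != 0), and a test that a simultaneous
# row/column permutation P A P^T preserves it.
import numpy as np

def strongly_connected(A):
    """True iff every node of the graph of A can reach every other node."""
    n = A.shape[0]
    R = np.eye(n, dtype=int) + (A != 0).astype(int)  # paths of length <= 1
    for _ in range(n):                               # repeated squaring closes all paths
        R = (R @ R > 0).astype(int)
    return bool(R.all())

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])              # the cycle 1 -> 2 -> 3 -> 1
P = np.eye(3, dtype=int)[[1, 0, 2]]    # permutation matrix swapping nodes 1 and 2

assert strongly_connected(A)
assert strongly_connected(P @ A @ P.T) == strongly_connected(A)
```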
Suppose $A$ is as follows: $$A=\left(\begin{array}{c|c}\Huge{A_{11}} & \Huge{A_{12}} \\\hline \Huge{0} & \Huge{A_{22}}\end{array}\right).$$ Suppose the $\Huge{0}$ block is $m\times m$ with $m\geq\frac{n}{2}$. Then the last $m$ nodes have no outgoing arrows to the first $m$ nodes; and since $n\leq 2m$, the remaining $n-m$ nodes are all among those first $m$, so the last $m$ nodes are cut off, going out, from all the other $n-m$. Now suppose there is a loop. If it starts at one of those $m$ nodes, it can never reach the other $n-m$; and if it starts at one of the $n-m$, it can reach the $m$ nodes but never get back. So maybe we have a path through all the nodes, but it can't be a loop, i.e. a closed path. Hence the graph is not strongly connected.
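To see this in action, here is a small numeric check (my own sketch, using the same kind of ad-hoc reachability helper as before): a matrix with an $m\times m$ zero block in the bottom-left corner, $m\geq\frac{n}{2}$, whose graph indeed fails to be strongly connected.

```python
# Sketch (my own check): a 4x4 block upper triangular matrix with a 2x2 zero
# block bottom-left; nodes 3 and 4 have no arrows into nodes 1 and 2, so the
# graph cannot be strongly connected.
import numpy as np

def strongly_connected(A):
    """True iff every node of the graph of A can reach every other node."""
    n = A.shape[0]
    R = np.eye(n, dtype=int) + (A != 0).astype(int)  # paths of length <= 1
    for _ in range(n):                               # repeated squaring closes all paths
        R = (R @ R > 0).astype(int)
    return bool(R.all())

A = np.array([[1, 1, 1, 1],
              [1, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])   # m = 2 = n/2: the zero block cuts nodes 3, 4 off from 1, 2

assert not strongly_connected(A)
```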
- Now the definition doesn't say anything about the size of those blocks, so the problem I still have is that if $m<\frac{n}{2}$ the argument above fails: besides the last $m$ nodes and the $m$ nodes they can't reach, there is at least one further node, and that node could be the missing link. Of course, whenever I said "can't be reached" up till now, I meant "can't be reached directly", i.e. without passing through other nodes.
- Of course, once the above is completed, I will have proved that reducibility implies non-strong-connectedness of the graph, so that strong connectedness of the graph implies irreducibility. But the converse I haven't even tried.
So the questions are: how do I finish the above at points 3-4 and how do I prove the converse? Or maybe I'm missing something in the definition, in which case what is it?
Update 2: I think I am missing something, as a $3\times3$ matrix with a 0 in the bottom-left corner and no other zeroes does have a strongly connected graph, since the only missing arrow is $3\to1$, but we have the loop $1\to3\to2\to1$. So when is a matrix reducible?
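Indeed, a brute-force search over all permutation matrices (my own sketch; the helper name `reducible` is made up) confirms that this $3\times3$ example cannot be brought to block upper triangular form at all, so a lone corner zero does not make a matrix reducible:

```python
# Sketch (my own brute-force check): no simultaneous row/column permutation of
# this 3x3 matrix produces a rectangular zero block in the bottom-left corner,
# so by the definition above the matrix is irreducible -- consistent with its
# graph being strongly connected.
from itertools import permutations
import numpy as np

A = np.array([[1, 1, 1],
              [1, 1, 1],
              [0, 1, 1]])   # the only missing arrow is 3 -> 1

def reducible(A):
    """True iff some P A P^T has an (n - m) x m zero block bottom-left, 0 < m < n."""
    n = A.shape[0]
    for perm in permutations(range(n)):
        B = A[np.ix_(perm, perm)]       # same as P A P^T for this permutation
        if any(not B[m:, :m].any() for m in range(1, n)):
            return True
    return False

assert not reducible(A)                                 # irreducible...
assert reducible(np.triu(np.ones((3, 3), dtype=int)))   # ...unlike a triangular matrix
```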
Update 3: Browsing the web, I have found some things. I first bumped into this link. Now if that is the definition of a reducible matrix, then either I misunderstand the definition of the block triangular form, or the "if and only if" there doesn't hold, since a matrix with a square block of zeroes in the bottom-left corner definitely doesn't satisfy the disjoint-set condition, but definitely does satisfy the permutation condition, with no permutation at all. Maybe the first condition is the one equivalent to non-strong-connectedness of the graph. Yes, because in that case there are $\mu$ nodes from which you can't reach the remaining $\nu$, so the graph is not strongly connected. So at least that condition implies non-strong-connectedness of the graph; the converse seems a bit trickier. Looking for that definition, I bumped into this link. Note that no matrix there has a lone corner zero (which would be top-right, as that link deals with lower triangular matrices), and all of them satisfy the disjoint-set condition of the first link. So what is the definition of a block triangular matrix? If it is that there must be square blocks whose diagonals coincide with part of the original matrix's diagonal, with only zeroes below them, then I have finished one direction: the "if and only if" in the first link is valid, so reducibility implies non-strong-connectedness of the graph. And whoops, I'm still not done, I still need the converse, so can someone finally come help me on that? And if that isn't the definition, then what the bleep is it, and how do I make this damn proof?
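For what it's worth, the claimed equivalence can be sanity-checked by brute force on small random matrices (my own sketch; it uses SciPy's `connected_components` from `scipy.sparse.csgraph` with `connection='strong'` to count strongly connected components):

```python
# Sketch (my own sanity check of the claimed equivalence): for random small 0/1
# matrices, "reducible" (by brute force over permutations) coincides with "the
# graph is not strongly connected" (via scipy's strongly connected components).
from itertools import permutations
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def reducible(A):
    """True iff some P A P^T has an (n - m) x m zero block bottom-left, 0 < m < n."""
    n = A.shape[0]
    for perm in permutations(range(n)):
        B = A[np.ix_(perm, perm)]       # same as P A P^T for this permutation
        if any(not B[m:, :m].any() for m in range(1, n)):
            return True
    return False

rng = np.random.default_rng(0)
for _ in range(200):
    A = rng.integers(0, 2, size=(4, 4))
    n_comp, _ = connected_components(csr_matrix(A), directed=True, connection='strong')
    assert reducible(A) == (n_comp > 1)   # reducible <=> graph not strongly connected
```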