Based on the answer to this question, I tried to solve an exercise that asks for the intersections of four vector subspaces. I have the subspaces $X_O=\operatorname{span}\{(1,0,0)\}$, $X_{NO}=\operatorname{span}\{(0,1,0),(0,0,1)\}$, $X_R=\operatorname{span}\{(1,1,0),(0,0,1)\}$, $X_{NR}=\operatorname{span}\{(1,-1,0)\}$, and I have to find:
- $X_1=X_R \cap X_{NO}$
- $X_2=X_R \cap X_O$
- $X_3=X_{NR} \cap X_{NO}$
- $X_4=X_{NR} \cap X_O$
I don't think (and hope) I have made any algebraic error, but the results are probably incorrect: I find that the latter three intersections are trivial (they contain only the zero vector), while $X_1=\operatorname{span}\{(0, 0, 1)\}$; yet the direct sum of the four intersections is supposed to make up the whole space $X=\mathbb{R}^3$.
Any suggestion?
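For what it's worth, here is how I automated the "classic" computation: $x \in \operatorname{col}(A) \cap \operatorname{col}(B)$ iff $x = Aa = Bb$ for some $(a,b)$ in the nullspace of $[A \mid -B]$. A sympy sketch (the `intersect` helper is my own, not from the exercise) reproduces the results above:

```python
import sympy as sp

def intersect(A, B):
    """Basis of col(A) ∩ col(B): x = A*a = B*b  <=>  (a, b) in null([A | -B])."""
    null = sp.Matrix.hstack(A, -B).nullspace()
    vecs = [A * v[:A.cols, 0] for v in null]
    return sp.Matrix.hstack(*vecs).columnspace() if vecs else []

# Bases of the four subspaces, stored as matrix columns
X_O  = sp.Matrix([[1], [0], [0]])
X_NO = sp.Matrix([[0, 0], [1, 0], [0, 1]])
X_R  = sp.Matrix([[1, 0], [1, 0], [0, 1]])
X_NR = sp.Matrix([[1], [-1], [0]])

X1 = intersect(X_R, X_NO)    # [Matrix([0, 0, 1])]
X2 = intersect(X_R, X_O)     # []  -> only the zero vector
X3 = intersect(X_NR, X_NO)   # []
X4 = intersect(X_NR, X_O)    # []
```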
Edit1: as reported in the comments, I found a possible solution consistent with the hypothesis that the direct sum of the four intersections is the whole space, but it leads to results I can't explain. I'll try to describe it here; if there is any incorrectness, feel free to point it out.
I can say that $X_3=X_{NR} \cap X_{NO}=X_{NO} \cap (X_{NR}+X_O)=X_{NO} \cap {X_1}^\perp$
(because, for the second equality, $X_{NO} \cap (X_{NR}+X_O)$ equals $X_{NO} \cap X_{NR} + X_{NO} \cap X_O$, which in turn equals $X_{NO} \cap X_{NR}$; and, for the third, because the identity $(A \cap B)^{\perp}=A^{\perp}+ B^{\perp}$ holds and here $X_{NR}=X_R^\perp$ and $X_O=X_{NO}^\perp$)
So, with $X_1=\operatorname{span}\{(0,0,1)\}$, I find $X_3=\operatorname{span}\{(0,1,0)\}$.
But if, by the same reasoning, I expand $X_{NO}$ instead of $X_{NR}$, I find $X_3=X_{NR} \cap X_{NO}=X_{NR} \cap (X_R + X_{NO})$. The term in parentheses is $X_4^{\perp}$. Any method shows that $X_4$ is trivial, so its orthogonal complement is the entire space; the intersection of $X_{NR}$ with the entire space is then simply $X_{NR}=\operatorname{span}\{(1,-1,0)\}$.
To me, it appears that the two one-vector bases found for $X_3$ are not the same: how can that be possible? Moreover, computing $X_2$ with this method also satisfies the "condition" that the sum of the four subspaces is the entire space, a condition that is not met by the "classic" computation, which finds only $X_1$ to be nontrivial. I have explained the problem as clearly as I could; if anything is unclear, I'll try to clarify it.
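The two chains of equalities can be checked mechanically. The following sympy sketch (the `intersect` and `perp` helpers are my own naming) indeed returns two different spans for $X_3$, one per route:

```python
import sympy as sp

def intersect(A, B):
    """Basis of col(A) ∩ col(B): x = A*a = B*b  <=>  (a, b) in null([A | -B])."""
    null = sp.Matrix.hstack(A, -B).nullspace()
    vecs = [A * v[:A.cols, 0] for v in null]
    return sp.Matrix.hstack(*vecs).columnspace() if vecs else []

def perp(A):
    """Orthogonal complement of col(A), i.e. the nullspace of A^T, as columns."""
    return sp.Matrix.hstack(*A.T.nullspace())

X_NO = sp.Matrix([[0, 0], [1, 0], [0, 1]])
X_R  = sp.Matrix([[1, 0], [1, 0], [0, 1]])
X_NR = sp.Matrix([[1], [-1], [0]])
X1   = sp.Matrix([0, 0, 1])                  # X_R ∩ X_NO, found earlier

# First route:  X3 = X_NO ∩ X1^⊥
route1 = intersect(X_NO, perp(X1))           # span{(0, 1, 0)}
# Second route: X3 = X_NR ∩ (X_R + X_NO); the sum is spanned by the joint columns
route2 = intersect(X_NR, sp.Matrix.hstack(X_R, X_NO))  # a multiple of (1, -1, 0)
```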
Edit2: as explained in *Systems and Control Theory: An Introduction* by A. Astolfi, the following properties hold:
- $X_1 + X_2 = X_R$
- $X_1 + X_3 = X_{NO}$
- $X_1 + X_2 + X_3 + X_4 = X = \mathbb{R}^n$
- $n_1+n_2+n_3+n_4=n$, where $n_1, n_2, n_3, n_4$ are the dimensions of $X_1, X_2, X_3, X_4$, respectively
Edit3 (hopefully the last one): I found in other online resources a different definition of the coordinate-transformation matrix $T$ needed for the Kalman decomposition (which, by the previous definition, was formed by the column vectors of the bases of $X_1, X_2, X_3, X_4$, in this order):
in this document by Perry Li of the University of Minnesota, it is stated that $T = (\,t_1\ t_2\ t_3\ t_4\,)$, where
- $t_2$ ($X_{R} \setminus X_{NO}$): $t_2$ is a basis for $X_R \cap X_{NO}$
- $t_1$ ($X_{R} \setminus X_{O}$): $t_1 \cup t_2$ is a basis for $X_R$
- $t_4$ ($X_{NR} \setminus X_{NO}$): $t_2 \cup t_4$ is a basis for $X_{NO}$
- $t_3$ ($X_{NR} \setminus X_{O}$): $t_1 \cup t_2 \cup t_3 \cup t_4$ is a basis for $\mathbb{R}^3$
while Wikipedia defines it in a similar but slightly different way (for simplicity I use the same notation $t_i$): $T = (\,t_1\ t_2\ t_3\ t_4\,)$, where
- $t_1$ is a matrix whose columns span the subspace of states that are both reachable and unobservable;
- $t_2$ is chosen so that the columns of $[t_1\ t_2]$ are a basis for the reachable subspace;
- $t_3$ is chosen so that the columns of $[t_1\ t_3]$ are a basis for the unobservable subspace;
- $t_4$ is chosen so that $[t_1\ t_2\ t_3\ t_4]$ is invertible.
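As a sanity check, the Wikipedia recipe can be carried out mechanically on the subspaces of this exercise; here is a sympy sketch (the greedy `complete` helper is my own, not from either source):

```python
import sympy as sp

def complete(base, candidates):
    """Return the candidate columns that enlarge span(base), appended greedily."""
    added = sp.Matrix(base.rows, 0, [])
    M = base
    for c in candidates:
        if sp.Matrix.hstack(M, c).rank() > M.rank():
            M = sp.Matrix.hstack(M, c)
            added = sp.Matrix.hstack(added, c)
    return added

X_NO = sp.Matrix([[0, 0], [1, 0], [0, 1]])   # unobservable subspace
X_R  = sp.Matrix([[1, 0], [1, 0], [0, 1]])   # reachable subspace

t1 = sp.Matrix([0, 0, 1])                    # spans X_R ∩ X_NO (reachable & unobservable)
t2 = complete(t1, [X_R[:, j] for j in range(X_R.cols)])     # [t1 t2] spans X_R
t3 = complete(t1, [X_NO[:, j] for j in range(X_NO.cols)])   # [t1 t3] spans X_NO
T  = sp.Matrix.hstack(t1, t2, t3)
t4 = complete(T, [sp.eye(3)[:, j] for j in range(3)])       # complete to invertible T
T  = sp.Matrix.hstack(T, t4)
```

For these particular subspaces $[t_1\ t_2\ t_3]$ already has rank 3, so $t_4$ is empty and $T$ is invertible, with $n_1+n_2+n_3+n_4 = 1+1+1+0 = 3 = n$, consistent with the dimension count from Astolfi.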
Maybe these are the correct definitions; I don't think they correspond to those in Astolfi's document. Many thanks to @amd, who patiently traced the origin of the errors in my statements.