
a.) Show that if $A=A^T$ is a symmetric matrix, then $A\mathbf{x}=\mathbf{b}$ has a solution iff $\mathbf{b}$ is orthogonal to $\ker A$.

b.) Prove that if $K$ is a positive semi-definite matrix and $\mathbf{f}\notin \operatorname{rng}K$, then the quadratic function $$p(\mathbf{x}) = \mathbf{x}^\mathrm{T}K\mathbf{x} -2\mathbf{x}^\mathrm{T}\mathbf{f} + c$$ has no minimum value.

c.) Suppose $\{\mathbf{v_1},\ \cdots,\ \mathbf{v_n}\}$ span a subspace $V \subset \mathbb{R}^m$. Prove that $\mathbf{w}$ is orthogonal to $V$ iff $\mathbf{w}\in \operatorname{coker}A$ where $A=\begin{pmatrix}\mathbf{v_1} & \mathbf{v_2} & \cdots & \mathbf{v_n}\end{pmatrix}$ is the matrix with the indicated columns.

My attempt:

a.) Since $A=A^\mathrm{T}$, a vector $\mathbf{x}\in \mathbb{R}^n$ lies in $\ker A$ iff $A\mathbf{x} = \mathbf{0}$.

By matrix multiplication, the $i^{\text{th}}$ entry of $A\mathbf{x}$ equals the dot product of the $i^{\text{th}}$ row $\mathbf{r_i}^T$ of $A$ with $\mathbf{x}$, i.e. $\mathbf{r_i}^{T}\mathbf{x} = \mathbf{r_i} \cdot \mathbf{x}$, which is $0$ iff $\mathbf{x}$ is orthogonal to $\mathbf{r_i}$.

Therefore $\mathbf{x}\in \ker A$ iff $\mathbf{x}$ is orthogonal to all the rows of $A$. Thus $A\mathbf{x} = \mathbf{b}$ has a solution iff $\mathbf{b}$ is orthogonal to $\ker A$. Is this correct?

b.) I do not know how to do this one.

c.) For this one, do I have to prove that $\mathbf{w} \in \operatorname{coker}A$ is orthogonal to the range of $A$? I am not exactly sure what they are asking.

EuYu
  • 41,421
diimension
  • 3,410
  • Part b seems a bit weird. If $\mathbf{f} \notin \operatorname{rng}(K)$ then there is no minimum. Otherwise, there is a global minimum, just that it's not unique. – EuYu Nov 18 '12 at 02:03
  • @EuYu oops, you are right. I forgot to put the not equal sign. Thanks for catching that! – diimension Nov 18 '12 at 02:10

2 Answers


For a, your argument is showing that the rowspace is the orthogonal complement of the nullspace. That is not what the question is asking (although it is related). Notice that $\mathbf{b}$ is an element in the columnspace of $A$. The columnspace, rowspace and nullspace of a symmetric matrix are related in a very specific manner. See if you can use these relations.

For part b, consider the critical points of the quadratic form. Where do they occur? Does your function have any?

For c, what can you say about the left-nullspace and the columnspace of a matrix?
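
To spell out the hint for part b (my own elaboration, not part of the original answer): for symmetric $K$ the gradient of $p$ is affine in $\mathbf{x}$, so the critical points are exactly the solutions of a linear system,
$$\nabla p(\mathbf{x}) = 2K\mathbf{x} - 2\mathbf{f} = \mathbf{0} \quad\Longleftrightarrow\quad K\mathbf{x} = \mathbf{f},$$
and this system has no solution precisely when $\mathbf{f}\notin\operatorname{rng}K$, so $p$ has no critical point, hence no minimizer.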

EuYu
  • 41,421
  • Thanks for the advice! For a.) and c.) b is an element of the columnspace so I have to show that the columnspace is orthogonal to the left-nullspace? For b.) they occur when it is minimized but how can I find that? – diimension Nov 18 '12 at 03:56
  • Well, for a symmetric matrix, the columnspace is the rowspace. – EuYu Nov 18 '12 at 03:57
  • This is where I am confused, because $A^T=A$; so then do I just have to show the rowspace is orthogonal? – diimension Nov 18 '12 at 04:00
  • $A^\mathrm{T} = A$ means that the rows and the columns of the matrix coincide. So of course the rowspace and the columnspace of the matrix coincide. Your vector $\mathbf{b}$ is a vector in the columnspace of $A$ and hence it's a vector in the rowspace of $A$. For part b, littleO's solution is perhaps more natural but taking critical points is certainly easier to think of. The gradient of a quadratic form takes on a very specific form. See this link for example. – EuYu Nov 18 '12 at 04:02
  • So I just had to clarify in my proof that "the rowspace and the columnspace of the matrix coincide where vector b is a vector in the columnspace of A and hence it's a vector in the rowspace of A" then it will be suffice ? And thanks for the link I am going to take a look at it now! – diimension Nov 18 '12 at 04:06
  • Well, you have to understand why it suffices. The full chain of reasoning is as follows: There is a solution to $A\mathbf{x} = \mathbf{b}$ if and only if $\mathbf{b}$ is in the columnspace of $A$. This happens if and only if $\mathbf{b}$ is in the rowspace of $A$ (because $A$ is symmetric). A vector is in the rowspace of a matrix if and only if it is in the orthogonal complement of the kernel (because the rowspace is the orthogonal complement of the kernel). So all in all, there is a solution if and only if $\mathbf{b}$ is in the orthogonal complement of the kernel. – EuYu Nov 18 '12 at 04:09
  • Thank you very much for clearing that up! I understand the mistakes in my reasoning now with your help. – diimension Nov 18 '12 at 04:13
  • Do you understand how part c works? – EuYu Nov 18 '12 at 04:14
  • I was just going to ask you to clear that up for me too. For c, when they say "Suppose ${\mathbf{v_1},\ \cdots,\ \mathbf{v_n}}$ span a subspace $V \subset \mathbb{R}^m$." are they saying it lies in the range of V so then I must show that coker is orthogonal to the range? – diimension Nov 18 '12 at 04:16
  • Would you like to go into chat? This seems to be getting too long. – EuYu Nov 18 '12 at 04:17

I'll assume $A$ has real entries.

For part a), \begin{align*} & u \in R(A)^{\perp} \\ \iff & \langle u, y \rangle = 0 \, \forall \, y \in R(A) \\ \iff & \langle u, Ax \rangle = 0 \, \forall \, x \\ \iff & \langle A^T u, x \rangle = 0 \, \forall \, x \\ \iff & A^T u = 0 \\ \iff & u \in N(A^T). \end{align*} This shows that $N(A^T)$ is the orthogonal complement of $R(A)$. Because $A$ is symmetric, $N(A)$ is the orthogonal complement of $R(A)$. Hence $b \in R(A)$ if and only if $b$ is orthogonal to $N(A)$.

For part b), let $v$ be the projection of $f$ onto $N(K)$. Since $K$ is symmetric, $R(K) = N(K)^{\perp}$, so $f \notin R(K)$ forces $v \neq 0$ and $v^T f = \|v\|^2 \neq 0$. Because $Kv = 0$, the function \begin{align*} g(\alpha) &= p(\alpha v) \\ &= \alpha^2 v^T K v - 2\alpha v^T f + c \\ &= -2\alpha \|v\|^2 + c \end{align*} is unbounded below.
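
As a quick numerical sanity check of this argument (my own toy example; the particular $K$, $f$, and $c$ below are arbitrary choices, not from the question):

```python
# Toy instance: K positive semi-definite, f NOT in the range of K.
K = [[1.0, 0.0],
     [0.0, 0.0]]          # PSD: eigenvalues 1 and 0
f = [0.0, 1.0]            # rng K = span{(1,0)}, so f is not in rng K
c = 3.0

def p(x):
    """p(x) = x^T K x - 2 x^T f + c."""
    Kx = [sum(K[i][j] * x[j] for j in range(2)) for i in range(2)]
    quad = sum(x[i] * Kx[i] for i in range(2))
    lin = sum(x[i] * f[i] for i in range(2))
    return quad - 2 * lin + c

# v = projection of f onto ker K = span{(0,1)}; here v = f itself.
v = [0.0, 1.0]

# Along the ray alpha*v the quadratic term vanishes (K v = 0), so
# p(alpha v) = -2*alpha*||v||^2 + c decreases without bound.
for alpha in [1.0, 10.0, 100.0]:
    print(alpha, p([alpha * v[0], alpha * v[1]]))
# prints: 1.0 1.0 / 10.0 -17.0 / 100.0 -197.0
```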

littleO
  • 51,938
  • Thanks! But I don't understand the notation you are using in part a. Do you mind if you can explain the proof by words? – diimension Nov 18 '12 at 04:04
  • I think I have a proof for a). Let $\mathbf t \in \text{ker}A$ and let $\mathbf x$ be such that $A\mathbf x=\mathbf b$.

    A linear operator $A$ is symmetric, with respect to $\langle \cdot,\cdot\rangle$, iff $ \langle A\mathbf v, \mathbf w\rangle = \langle\mathbf v,A\mathbf w\rangle\ \forall \mathbf v, \mathbf w \in V $

    If you substitute $\mathbf v=\mathbf x$ and $\mathbf w=\mathbf t$ you get $ \langle A\mathbf x, \mathbf t\rangle = \langle \mathbf x, A\mathbf t\rangle \Rightarrow \langle\mathbf b, \mathbf t\rangle = \langle\mathbf x, \mathbf 0\rangle=0 $

    So $\mathbf b$ is orthogonal to $\text{ker}A$.

    – Rnhmjoj Jan 19 '16 at 16:56