2

I have an affine space in $V_6(\mathbb{R})$: $$Y=\{(2,-2,0,1,-1,0)+a(1,-1,0,0,0,0)+b(1,0,0,1,-3,1)+c(1,-4,0,2,1,0) : a,b,c \in \mathbb{R}\}.$$

I have to find the number of:

  1. orthogonal (to Y) lines passing through the origin
  2. perpendicular (to Y) lines passing through the origin

Note that in the terminology I'm following two skew lines could be orthogonal but not perpendicular (they don't meet each other).

To find the orthogonal (to $Y$) lines I proceeded as follows: their direction vectors have to lie in the orthogonal complement of $Y$'s direction space.

$Y^ \perp=((1,-1,0,0,0,0),(1,0,0,1,-3,1),(1,-4,0,2,1,0))^\perp$.

It's the orthogonal complement of a $3$-dimensional subspace, so its dimension is $6-3=3$.

$Y^\perp$ contains an infinite number of lines, so the first answer is infinite.
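
Just as a numerical sanity check (a quick numpy sketch, not part of the argument; the names `A`, `B`, `C` are mine for the three direction vectors):

```python
import numpy as np

# direction vectors of the affine space Y
A = np.array([1, -1, 0, 0, 0, 0], dtype=float)
B = np.array([1,  0, 0, 1, -3, 1], dtype=float)
C = np.array([1, -4, 0, 2,  1, 0], dtype=float)

D = np.vstack([A, B, C])        # 3 x 6 matrix whose rows span the direction space
rank = np.linalg.matrix_rank(D)
print(rank, 6 - rank)           # 3 3: the orthogonal complement has dimension 3
```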

To find the perpendicular (to Y) lines I proceeded as follows:

$X$ is a generic line, spanned by the vector $(x_1,x_2,x_3,x_4,x_5,x_6)$: $$X=d(x_1,x_2,x_3,x_4,x_5,x_6)$$ where $d \in \mathbb{R}$.

If it's perpendicular to $Y$ then $\langle X,Y\rangle=0$. So: $$d\big(x_1(2+a+b+c)+x_2(-2-a-4c)+x_4(1+b+2c)+x_5(-1-3b+c)+x_6\,b\big)=0.$$

Clearly there is more than one solution; what I have to find is the number of pairwise non-proportional $6$-tuples $(x_1,x_2,x_3,x_4,x_5,x_6)$ that satisfy the equation.

Let's get one of these tuples: $(1,1,1,1,1,1)$ verifies the equation, but so do $(1,1,2,1,1,1)$, $(1,1,20000,1,1,1)$, and in general $(1,1,\alpha,1,1,1)$ with $\alpha \in \mathbb{R}$ (in fact, $x_3$ doesn't appear in the equation). Furthermore they are pairwise non-proportional, so I've just found infinitely many lines perpendicular to $Y$ (answer no. 2).
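
Again just as a numerical check (a numpy sketch; the equation, with $a,b,c$ still free, holds for every choice of $a,b,c$ exactly when the tuple is orthogonal to $P=(2,-2,0,1,-1,0)$ and to each of the three direction vectors):

```python
import numpy as np

P = np.array([2, -2, 0, 1, -1, 0], dtype=float)
A = np.array([1, -1, 0, 0, 0, 0], dtype=float)
B = np.array([1,  0, 0, 1, -3, 1], dtype=float)
C = np.array([1, -4, 0, 2,  1, 0], dtype=float)

for alpha in (1.0, 2.0, 20000.0):
    x = np.array([1, 1, alpha, 1, 1, 1])
    # all four dot products vanish, so the equation holds for every a, b, c
    print(alpha, [float(np.dot(x, v)) for v in (P, A, B, C)])
```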

I'm not sure if this is right, whether I have already imposed the incidence between the line and the space, whether the second answer is possible, and finally how I should calculate the distance between the space and the origin (along which perpendicular line should I measure the distance?).

I'll be very grateful if you review what I wrote!

Update. I'll use the method I described in the comments to A.P.'s answer to find the distance between $Y$ and the origin.

A generic vector of $Y$ looks like this: $$P+rA+sB+tC= \begin{pmatrix} 2+r+s+t \\ -2-r-4t \\ 0 \\ 1+s+2t \\ -1-3s+t \\ s \end{pmatrix}$$

The orthogonal complement $Y'= 0 + \{ (1,-1,0,0,0,0),(1,0,0,1,-3,1),(1,-4,0,2,1,0) \}^\perp $ passes through the origin.

Its representation by Cartesian equations is $$\begin{cases} x_1-x_2=0\\ x_1+x_4-3x_5+x_6=0\\ x_1-4x_2+2x_4+x_5=0 \end{cases}$$

(from this representation I can check again that $0 \in Y'$).

Now, if I substitute $x_1, x_2, \dots$ with the coordinates of the generic vector of $Y$, I should get the intersection between $Y$ and the orthogonal complement passing through the origin, and this point should be enough to compute the distance. The problem is: the solution of the system is $r = -132/103$, $s = -81/206$, $t = -43/206$ [See Wolfram].
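
For reference, here is the same computation done numerically (a numpy sketch; substituting the generic vector into the three Cartesian equations above is the same as solving $G\,(r,s,t)^T = -(\langle P,A\rangle, \langle P,B\rangle, \langle P,C\rangle)^T$, where $G$ is the Gram matrix of the direction vectors):

```python
import numpy as np

P = np.array([2, -2, 0, 1, -1, 0], dtype=float)
D = np.array([[1, -1, 0, 0, 0, 0],     # A
              [1,  0, 0, 1, -3, 1],    # B
              [1, -4, 0, 2,  1, 0]],   # C
             dtype=float)

G = D @ D.T                            # Gram matrix of A, B, C
r, s, t = np.linalg.solve(G, -D @ P)
print(r, s, t)   # approx. -1.2816, -0.3932, -0.2087  (= -132/103, -81/206, -43/206)
```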

This solution corresponds to the vector $(12/103,\, 12/103,\, 0,\, 39/206,\, -3/103,\, -81/206)$, whose length is $3 \sqrt{5/206}$, which is very different from the result of A.P.'s elegant method. Where's the problem?
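
Plugging these values back in confirms the vector and its length numerically (again just a numpy sketch):

```python
import numpy as np

P = np.array([2, -2, 0, 1, -1, 0], dtype=float)
A = np.array([1, -1, 0, 0, 0, 0], dtype=float)
B = np.array([1,  0, 0, 1, -3, 1], dtype=float)
C = np.array([1, -4, 0, 2,  1, 0], dtype=float)

r, s, t = -132/103, -81/206, -43/206
p = P + r*A + s*B + t*C                      # intersection point found above
print(p)                                     # approx. (0.117, 0.117, 0, 0.189, -0.029, -0.393)
print(np.linalg.norm(p), 3*np.sqrt(5/206))   # both approx. 0.4674
```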

  • 1
    Could you please provide your definitions of orthogonal and perpendicular? To me they mean the same thing... – A.P. Dec 26 '15 at 15:09
  • @A.P. the terminology used in my textbook is the same as here http://math.stackexchange.com/questions/1568937/must-perpendicular-resp-orthogonal-lines-meet – Surfer on the fall Dec 26 '15 at 15:27
  • 1
    That one seemed well-enough explained in the question to me, but I am unsure what you mean by "passing by the origin"? Did you perhaps mean "pass through the origin"? Or something different? And also, when you say "orthogonal" and "perpendicular", do you mean to $Y$? – Paul Sinclair Dec 26 '15 at 17:01
  • @PaulSinclair Yes to all of the questions, thanks for the clarification! – Surfer on the fall Dec 26 '15 at 17:05
  • 1
    The tag ([tag:algebraic-geometry]) is intended for questions in a branch of mathematics called algebraic geometry (see the tag-wiki.) The tags ([tag:algebra-precalculus]) and/or ([tag:geometry]) should be used for basic problems that involve both algebra and geometry. – A.P. Dec 28 '15 at 11:36
  • @A.P thank you! Could you please provide me with a hint about the problem? :( – Surfer on the fall Dec 28 '15 at 16:20
  • This is getting out of hand... You should post your update as a separate question, possibly phrasing it in a more general way — e.g., along the lines of "how to compute the distance of an affine space from a point" — while mentioning your specific case, and linking back to this question. I don't have time to address this right now, but probably someone else does... :P – A.P. Jan 03 '16 at 12:38
  • @A.P. I followed your suggestion.. http://math.stackexchange.com/questions/1598277/how-to-compute-the-distance-of-an-affine-space-from-the-origin Thanks again for the great help.. Now I feel that both your way and my way are correct but the different results make me so confused :/ – Surfer on the fall Jan 03 '16 at 13:05

2 Answers

1

Edit: I initially understood your question as $V$ being a one-dimensional space, with $(a,b,c)$ fixed. It turned out you meant $V=\{... \text{ with } a,b,c\in\mathbb R\}$. This changes quite a lot; however, I will not delete my answer yet, as I think it might help all the same.

When $(a,b,c)$ is fixed, $Y$ is a vector in $\mathbb R^6$ and $Y^\perp$ is a hyperplane, i.e. of dimension 5. Now, in the affine space, you want to translate this hyperplane so that it contains the origin. To do this, you can translate it along the vector $Y$. It doesn't change its dimension as a vector space, so for your first question, the orthogonal vector space is of dimension 5 (and of course it gives an infinity of solutions).

For the second question, the set of perpendicular lines is a subset of the orthogonal vector space. This subset consists of the lines of $Y^\perp$ passing through $0$ and through the span of $Y$. But the span of $Y$ intersects $Y^\perp$ at only one point. So there is only one line in $Y^\perp$ which is perpendicular to $Y$.

anderstood
  • 3,504
  • "If you meant that the affine space is a 3-dim affine space..." I'm sorry (for my imprecision) but this is exactly the point! – Surfer on the fall Dec 28 '15 at 18:11
  • I find your answer fine but could you explain why the second part of my solution proposal is wrong? – Surfer on the fall Dec 28 '15 at 18:28
  • @Surferonthefall I just edited my answer; it does not work exactly the same for your actual question because if you translate $V^\perp$ so that it includes $0$, not all lines of $V^\perp$ will go through 0. I might have a look later to your details. – anderstood Dec 28 '15 at 18:33
1

Let $X$ be an affine sub-space of dimension $d$ in $\Bbb{A}^n(\Bbb{R})$, with directing vector space $V$. Then observe that any line through the origin can be parametrised as $\{t\vec{w} : t\in\Bbb{R}\}$ for some vector $\vec{w} \in \Bbb{R}^n$. Since every line orthogonal to $X$ is directed as a vector in $V^\perp$, it follows that there is a bijection between the points of the projective space $$ \Bbb{P}(V^\perp) $$ and the lines in $\Bbb{A}^n(\Bbb{R})$ through the origin orthogonal to $X$. Finally, note that $\Bbb{P}(V^\perp)$ has dimension $$ \dim_\Bbb{R} V^\perp - 1 = \operatorname{codim}_\Bbb{R} V - 1 = n - d - 1 $$ thus the only affine spaces with a finite number of orthogonal lines through the origin are $\Bbb{A}^n(\Bbb{R})$, which has none, and the hyperplanes, which have one.
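
As a quick sanity check against familiar geometry (a special case, spelled out only as an illustration): for a plane $X$ in $\Bbb{A}^3(\Bbb{R})$ we have $n = 3$ and $d = 2$, so $$ \dim_\Bbb{R} \Bbb{P}(V^\perp) = n - d - 1 = 0, $$ i.e. $\Bbb{P}(V^\perp)$ is a single point, matching the fact that the only line through the origin orthogonal to a plane is the one directed along the plane's normal vector.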

Extra: If you've never seen a projective space, here's a nice way to think of $\Bbb{P}(V^\perp)$:

Consider the natural affine space structure $\Bbb{A}(V^\perp)$ on $V^\perp$. Then the lines through the origin of $\Bbb{A}^n(\Bbb{R})$ that are orthogonal to $V$ correspond $1:1$ to the lines through the origin in $\Bbb{A}(V^\perp)$ (make sure you understand this).

Now look at the unit $(n-d-1)$-sphere $S$ in $V^\perp$. Clearly every line through the origin intersects $S$ in two antipodal points. Conversely, every pair of antipodal points on $S$ defines a line through the origin.

Aha! But we can do better than this. Divide $S$ in half with a hyperplane $H$ through the origin, and call $T$ one half-sphere (border included). Then every line through the origin not lying in $H$ corresponds to precisely one point in the interior of $T$, while each of the other lines corresponds to a pair of antipodal points on the border of $T$.

If $\sim$ is the equivalence relation on $\partial T$ defined as $$ \vec{x} \sim \vec{y} \iff \vec{x} \text{ is antipodal to } \vec{y} $$ then the quotient topological space $T/\sim$ is a concrete realization of $\Bbb{P}(V^\perp)$, and its dimension is defined as the dimension of the interior of $T$ as a hypersurface in $\Bbb{A}(V^\perp)$.

Exercise: Draw a picture and go through this construction for $\Bbb{P}(\Bbb{R}^2)$. Then try to do the same for $\Bbb{P}(\Bbb{R}^3)$. This should give you a good idea of what's going on here, and it should help you understand why I defined the dimension in that way.

Remark 1: Another way to construct $\Bbb{P}(V^\perp)$ is as the quotient (topological) space $(\Bbb{A}(V^\perp) \setminus \{\vec{0}\})/\sim$, where $$ \vec{x} \sim \vec{y} \iff \vec{x} = \lambda \vec{y} \text{ for some } \lambda \in \Bbb{R} \setminus \{0\} $$ i.e. as the space where proportional points of $\Bbb{A}(V^\perp)$ are identified. This is usually more practical to work with, but in this case I prefer the geometric picture of the other construction. By the way, proving that these two constructions are equivalent is a nice exercise.


What about perpendicular lines, though? Those are just orthogonal lines that intersect $X$. So...

Fix a point $\vec{x} \in X$, let $\vec{v}_1,\dotsc,\vec{v}_d$ be a basis for $V$, and let $\vec{w}_{d+1},\dotsc,\vec{w}_n$ be a basis for $V^\perp$. Then a line $\ell \colon t\vec{w}$ is perpendicular to $X$ if and only if $\vec{w} = r_{d+1} \vec{w}_{d+1} + \dotsc + r_n \vec{w}_n$ and the linear system $$ \vec{x} + a_1 \vec{v}_1 + \dotsc + a_d \vec{v}_d = t \vec{w} \tag{1} \label{eq:1} $$ has a solution in $a_1,\dotsc,a_d,t$.

Clearly if $\vec{0} \in X$ every orthogonal line through the origin will also be perpendicular, but what if $\vec{0} \notin X$? Then $\eqref{eq:1}$ becomes $$ a_1 \vec{v}_1 + \dotsc + a_d \vec{v}_d + a_{d+1} \vec{w}_{d+1} + \dotsc + a_n \vec{w}_n = -\vec{x} \tag{2} \label{eq:2} $$ where $a_j = tr_j$ for $d+1 \leq j \leq n$. But this linear system has exactly one solution in $a_1,\dotsc,a_n$, because by construction $\vec{v}_1,\dotsc,\vec{v}_d,\vec{w}_{d+1},\dotsc,\vec{w}_n$ are linearly independent.
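
For concreteness, here is a small numerical sketch with the data from the question (numpy; the basis $\vec{w}_4,\vec{w}_5,\vec{w}_6$ of $V^\perp$ is produced by an SVD rather than chosen by hand), showing that the system has exactly one solution and recovering the corresponding point of $X$:

```python
import numpy as np

x = np.array([2, -2, 0, 1, -1, 0], dtype=float)          # a point of X
V = np.array([[1, -1, 0, 0, 0, 0],
              [1,  0, 0, 1, -3, 1],
              [1, -4, 0, 2,  1, 0]], dtype=float).T      # columns: v1, v2, v3

_, _, Vt = np.linalg.svd(V.T)                            # last 3 rows of Vt span V^perp
W = Vt[3:].T                                             # columns: w4, w5, w6

M = np.hstack([V, W])                                    # 6 x 6 and invertible, so the
a = np.linalg.solve(M, -x)                               # system has exactly one solution

# the component of x lying in V^perp, i.e. the point where the perpendicular
# line through the origin meets X
foot = -W @ a[3:]
print(foot, np.linalg.norm(foot))                        # norm approx. 0.4674
```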

Remark 2: You can interpret $\eqref{eq:1}$ in this way: given a point $\vec{x}$ on $X$ and a line $\ell$ through the origin, $\ell$ intersects $X$ if and only if we can go from $\vec{0}$ to $\vec{x}$ by "walking" along $\ell$ for a while and then along a straight line parallel to (actually, contained in) $X$.


Update: So, suppose that $\vec{0} \notin X$ and you want to compute the distance $\delta$ of $X$ from $\vec{0}$. First, observe that $\delta$ is the same as the length of the segment joining $\vec{0}$ and $\vec{\alpha} = X \cap \ell$, where $\ell$ is the unique line through $\vec{0}$ and perpendicular to $X$. In other words, identifying the point $\vec{\alpha}$ with the vector $\vec{\alpha}-\vec{0}$, we see that $\delta$ is just $\|\vec{\alpha}\|$.

Given the previous discussion, the naive way to compute $\delta$ is to find $\vec{\alpha}$ by solving the linear system $\eqref{eq:2}$, and then computing $\|\vec{\alpha}\|$... but we can do better — or, rather, the same but without having to compute $\vec{w}_{d+1},\dotsc,\vec{w}_n$.

Indeed, the "walk" I mentioned in remark 2 is nothing more than a decomposition of (the vector) $\vec{x}$ as a sum of two vectors: $\vec{x}_\perp$, perpendicular to $X$, and $\vec{x}_\parallel$, parallel to $X$. The uniqueness of $\ell$ means that $\vec{x}_\perp$ is the same for every $\vec{x} \in X$, so just pick one and compute: $$ \vec{x}_\perp = \vec{x} - (\vec{x} \cdot \vec{v}_1) \, \frac{\vec{v}_1}{\|\vec{v}_1\|^2} - \dotsb - (\vec{x} \cdot \vec{v}_d) \, \frac{\vec{v}_d}{\|\vec{v}_d\|^2}. $$ Important: As amd pointed out in his answer to a follow-up question, this formula is valid only if $\vec{v}_1,\dotsc,\vec{v}_d$ are pairwise orthogonal. Indeed, if instead we had, say, $\vec{v}_1 \cdot \vec{v}_2 \neq 0$, then the $\vec{v}_2$ term would also remove part of the component of $\vec{x}$ along $\vec{v}_1$, so that component would not be subtracted exactly once. This isn't a deal-breaker: we can easily extract an orthogonal — even orthonormal — basis from $\vec{v}_1,\dotsc,\vec{v}_d$, e.g. with the Gram-Schmidt process.

Where does this formula come from? We simply use the following three facts: $\vec{v}_1,\dotsc,\vec{v}_d$ form a basis for $V$; $\vec{x} = \vec{x}_\perp + \vec{x}_\parallel$; and for every $1 \leq i \leq d$ $$ \vec{x} \cdot \frac{\vec{v}_i}{\|\vec{v}_i\|} $$ is the component of $\vec{x}$ in the direction of $\vec{v}_i$.
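
To see the caveat numerically (a sketch with numpy and the data from the question; QR factorization plays the role of Gram-Schmidt here): the formula applied to the raw, non-orthogonal generators gives the wrong length, while the same formula applied to an orthonormalized basis gives $3\sqrt{5/206} \approx 0.467$:

```python
import numpy as np

x  = np.array([2, -2, 0, 1, -1, 0], dtype=float)          # a point of Y
vs = [np.array([1, -1, 0, 0, 0, 0], dtype=float),         # the given generators of V
      np.array([1,  0, 0, 1, -3, 1], dtype=float),
      np.array([1, -4, 0, 2,  1, 0], dtype=float)]

# the formula used verbatim with the non-orthogonal generators
naive = x - sum(np.dot(x, v) * v / np.dot(v, v) for v in vs)

# the same formula after orthonormalizing the generators (Gram-Schmidt via QR)
Q, _ = np.linalg.qr(np.column_stack(vs))                   # columns: orthonormal basis of V
proper = x - Q @ (Q.T @ x)                                 # x minus its projection onto V

print(np.linalg.norm(naive))     # approx. 2.345  -- not the distance
print(np.linalg.norm(proper))    # approx. 0.4674 = 3*sqrt(5/206)
```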


In your case $n = 6$ and $d = 3$, so there are indeed infinitely many orthogonal lines through the origin. On the other hand, you can check that $\vec{0} \notin Y$, so there is exactly one perpendicular line through the origin.

A.P.
  • 9,728
  • I'm not sure why you talk about a bijection between points of $\mathbb{P}(V^\perp)$ and the lines through the origin orthogonal to $X$ (I believed that the bijection was between $V^\perp$ and the lines). I thought that if three vectors are enough (and needed) to generate every vector of $V^\perp$ then three lines are enough (and needed) to generate the set of lines the exercise wants. I'd need an explanation for that "-1" that I missed in my solution proposal. – Surfer on the fall Dec 28 '15 at 18:44
  • The problem is that if $\vec{w}$ is in $V^\perp$, then every vector proportional to it is in $V^\perp$, too, but they all define the same line. The process of weeding out proportional vectors from $V^\perp$ is exactly the construction of $\Bbb{P}(V^\perp)$. In general, the projective space on a $d$-dimensional vector space has dimension $d-1$ (as a projective space, of course). The only projective space with finitely many points is the singleton, which has dimension $0$. – A.P. Dec 28 '15 at 18:54
  • ... By the way, here dimension $-1$ corresponds conventionally to the empty projective space. – A.P. Dec 28 '15 at 18:59
  • I'm in the process of understanding your answer :D In the meanwhile, could you tell me why the second part of my solution proposal is wrong? I guess that I didn't impose the condition about intersection, did I? – Surfer on the fall Dec 28 '15 at 19:10
  • 1
    Indeed. For example, you can check that there is no $a \in \Bbb{R}$ for which the line $\ell_a \colon t(1,1,a,1,1,1)$ intersects $Y$. – A.P. Dec 28 '15 at 19:18
  • I didn't fully understand the fact about the projective space and its dimension. I mean, if I have three independent vectors in $V^\perp$, the corresponding lines are independent... but if I have three independent lines, the dimension of the space of the orthogonal lines can't be $3-1$... I'm just thinking that probably the concept of dimension in projective spaces is different from the usual one, isn't it? – Surfer on the fall Dec 28 '15 at 19:27
  • Let's follow once again my idea: if $Y$ is a hyperplane, $Y^\perp$ has dimension one, hence all the vectors of $Y^\perp$ are associated to just one line; if $Y$ has codimension 2 then $Y^\perp$ has dimension 2 (it's a plane). But a plane contains an infinite number of lines, so the answer is infinite. The same applies for $Y$ of smaller dimension. I think that I said what you wrote in other words and without the concept of projective space (not included in my course). Could you confirm? – Surfer on the fall Dec 28 '15 at 19:28
  • 1
    I included a short introduction to projective spaces, which covers what I'm using here. Your last comment is spot-on: if you're just interested in distinguishing between none, one, and infinite, then that's all you need. Using projective spaces we can quantify exactly how many is "infinite", because intuitively we understand that there should be many more lines orthogonal to a space of codimension $3$ than there are for a space of codimension $2$. – A.P. Dec 28 '15 at 21:02
  • Sorry but I have another question: what if I need to calculate the distance between the hyperplane and the origin? I think I should intersect Y and $Y^\perp$ (the result should be a point) and calculate the distance between the point and the origin. Am I wrong? Thanks again for the great help! – Surfer on the fall Jan 01 '16 at 18:16
  • That's almost correct: $Y^\perp$ isn't well defined, because there is no canonical choice of base-point. Furthermore, not every choice of base-point will guarantee $\vec{0} \in Y^\perp$, so those won't give the desired distance. On the other hand, there's an arguably simpler way (see my edit). – A.P. Jan 02 '16 at 18:06
  • I'm in the process of understanding your idea. In the meanwhile, I didn't catch why my way isn't failsafe. The perpendicular line passing through Q is defined as $l = V^\perp + Q$. Intersecting $l$ and $Y$ I should get the point whose distance from Q is the answer. Where's the "bug"? Thanks again! :) – Surfer on the fall Jan 02 '16 at 20:08
  • 1. $V^\perp + Q$ is a $3$-dimensional affine space, so it cannot be a line; $\ell = \langle w \rangle + Q$ for exactly one $w \in V^\perp$. 2. You could try to define $Y^\perp = V^\perp + Q$, but different choices of $Q$ lead to different affine spaces perpendicular to $Y$; this will always intersect $Y$ in at least one point, but it may not contain the origin, in which case it cannot be used to find the distance of $Y$ from the origin.
  • – A.P. Jan 03 '16 at 00:49
  • Now I got it! But what if I choose the origin as Q? I'll be sure that $0 \in Y^\perp = V^\perp + 0$ and I can compute the distance. The idea should be good but there's a problem... could you check the updated question?

    I'm very grateful to you! :)

    – Surfer on the fall Jan 03 '16 at 11:05
  • I don't want to disturb you any more, but I'm starting to think that your method only works if $Y$ is a hyperplane... – Surfer on the fall Jan 03 '16 at 14:21
  • @A.P. The formula you have for orthogonal projection in the update requires an orthogonal basis. It looks like you used the given generators of $V$, but they’re not an orthogonal basis for it, so the formula gives an incorrect result. – amd Jan 03 '16 at 23:14