
I am curious to know why the orthogonality of two (real) functions $f(x)$, $g(x)$ is given by:

$$\int_{-L}^{L} f(x) \,g(x) \; \text{d}x = 0$$
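For instance, $f(x) = \sin x$ and $g(x) = \cos x$ are orthogonal in this sense on any symmetric interval, since their product is an odd function:

$$\int_{-L}^{L} \sin x \, \cos x \; \text{d}x = \frac{1}{2}\int_{-L}^{L} \sin 2x \; \text{d}x = 0$$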

I see a kind of similarity between this definition and the orthogonality of vectors $\vec{v}$, $\vec{w}$ $\in$ $\mathbb{R}^n$, $\,$ viz. $\vec{v} \cdot \vec{w} = v_i \, w_i = 0$. It even makes sense to me that the domain of integration should play an important role in this result. However, I'm at a loss to imagine

a) the context that would've prompted such an extension;

b) the meaning of orthogonality (i.e. is there any way of thinking of this that is as intuitive as the geometric orthogonality of the vector version, where we can intuitively understand the meaning of orthogonality for vectors in $\mathbb{R}^2$ and $\mathbb{R}^3$ and extend the concept to higher dimensions?).

Perhaps the most concise way of asking my question would be: is there an alternative way of viewing the definition of orthogonality of functions that is analogous to the geometric definition of the vector dot product (i.e. $\vec{v} \cdot \vec{w} = |\vec{v}|\, |\vec{w}| \cos\theta$)?

I looked at this question, but it doesn't really get at what I'm after.

Rax Adaam
  • math.stackexchange.com/questions/1414389/what-is-the-geometric-meaning-of-the-inner-product-of-two-functions – Zachary Selk Nov 08 '17 at 16:49
  • Maybe you will appreciate the Fourier series result that $\int f\overline{g} = \langle \hat f , \hat g\rangle$, where $\hat f$ is the sequence of Fourier coefficients of $f$. This result is known as Parseval's identity, and if the functions $f,g$ have finite Fourier expansions, then it says explicitly that the dot product of the appropriate vectors in $\mathbb C^n$ is precisely this inner product of functions. – Calvin Khor Nov 08 '17 at 16:58
  • Here is the link Zachary Selk mentioned; the answer is definitely apposite to my question, though it sort of skirts the issue by taking angles to be defined by the inner product (and therefore something for which we shouldn't expect, or look for, a geometric meaning). I'm not totally satisfied, personally, but perhaps that's that! – Rax Adaam Nov 08 '17 at 17:43
  • @CalvinKhor thank you - this is not a connection I would have made spontaneously. I will definitely look into that further. – Rax Adaam Nov 08 '17 at 17:44

1 Answer


Yes, it's no coincidence that vectors are called orthogonal if their dot product is zero, and the functions you're considering are called orthogonal if the integral of their product is zero. Both of them are examples of an inner product on a vector space.

If $V$ is a vector space over $\mathbb{R}$, an inner product on $V$ is a map $(\_,\_) \colon V \times V \to \mathbb{R}$ which is

  • symmetric: $(v,w) = (w,v)$ for all $v$ and $w$ in $V$
  • bilinear: $(c_1 v_1 + c_2 v_2,w) = c_1 (v_1,w) + c_2 (v_2,w)$ and $(v,c_1w_1 + c_2 w_2) = c_1(v,w_1) + c_2(v,w_2)$ for all $v, w, v_1, v_2, w_1, w_2$ in $V$ and $c_1,c_2$ in $\mathbb{R}$
  • positive-definite: $(v,v) \geq 0$ for all $v \in V$. Furthermore, $(v,v) = 0 \iff v=0$.

If $V$ has an inner product, we can call vectors orthogonal if their inner product is zero.
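To make the parallel concrete, here is a minimal sketch (my own illustration, assuming Python with numpy and scipy, none of which is from the original discussion) computing both inner products and checking orthogonality in each setting:

```python
import numpy as np
from scipy.integrate import quad

def dot_inner(v, w):
    """Inner product on R^n: the usual dot product."""
    return float(np.dot(v, w))

def integral_inner(f, g, L):
    """Inner product on functions: the integral of f*g over [-L, L]."""
    value, _error = quad(lambda x: f(x) * g(x), -L, L)
    return value

# Orthogonal vectors in R^2: dot product is zero.
print(dot_inner(np.array([1.0, 1.0]), np.array([1.0, -1.0])))  # 0.0

# "Orthogonal" functions on [-pi, pi]: integral inner product is zero.
print(integral_inner(np.sin, np.cos, np.pi))  # ~0.0 up to floating point
```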

In linear algebra, you often try to diagonalize a linear operator. That makes the operator very easy to study, because on each eigenvector it acts simply as multiplication by a scalar. When the vector space has an inner product and the operator is normal, the spectral theorem says it can be diagonalized with an orthogonal basis of eigenvectors, as the sketch below illustrates.
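A quick numeric sketch of that fact (my example, assuming numpy; a real symmetric matrix is normal, and `numpy.linalg.eigh` returns an orthonormal eigenbasis):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric, hence normal

eigenvalues, Q = np.linalg.eigh(A)  # columns of Q are orthonormal eigenvectors

print(np.allclose(Q.T @ Q, np.eye(2)))                 # True: basis is orthonormal
print(np.allclose(Q.T @ A @ Q, np.diag(eigenvalues)))  # True: A is diagonalized
```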

What motivates extending inner products to infinite-dimensional vector spaces is that there are important linear operators on those spaces: for instance, the derivative. With the inner product you mentioned, and assuming the functions take the same values at $\pm L$, the derivative is skew-adjoint, and in particular normal.
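To spell out the skew-adjointness (a standard integration-by-parts step, added here for completeness): if $f(-L) = f(L)$ and $g(-L) = g(L)$, the boundary term below vanishes, so

$$\int_{-L}^{L} f'(x)\,g(x)\,dx = \Big[f(x)\,g(x)\Big]_{-L}^{L} - \int_{-L}^{L} f(x)\,g'(x)\,dx = -\int_{-L}^{L} f(x)\,g'(x)\,dx$$

i.e. $(Df, g) = -(f, Dg)$, so $D^* = -D$, and hence $D^*D = -D^2 = DD^*$, which is exactly normality.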

As for a geometric motivation for this inner product, or the associated orthogonality, recall that an integral is a limit of Riemann sums over finer and finer partitions. For each $n$, set $\Delta x = \frac{2L}{n}$ and $x_i = -L + i\,\Delta x$ for $i = 0, 1, \dots, n$, so that $x_0 = -L$ and $x_n = L$. Then $$ \int_{-L}^L f(x)g(x)\,dx = \lim_{n\to\infty} \sum_{i=1}^n f(x_i)g(x_i)\,\Delta x $$ In other words, this integral inner product is a limit of finite-dimensional inner products formed by sampling $f$ and $g$ at a regular set of points.
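And a small numeric check of that limit (my sketch, assuming numpy; the choices $f = g = \sin$ and $L = \pi$ are arbitrary, with exact value $\int_{-\pi}^{\pi}\sin^2 x\,dx = \pi$):

```python
import numpy as np

L = np.pi
f, g = np.sin, np.sin  # integral of sin^2 over [-pi, pi] equals pi

for n in (10, 100, 1000, 10000):
    dx = 2 * L / n
    x = -L + dx * np.arange(1, n + 1)   # sample points x_1, ..., x_n
    riemann = np.sum(f(x) * g(x)) * dx  # dot product of samples, scaled by dx
    print(n, riemann)                   # approaches pi = 3.14159... as n grows
```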

  • Once again, an impressive and thorough exposition; thank you for your time. However, for me there is a lingering dissatisfaction in this explanation, as (perhaps on account of the sequence in which the concepts are taught) I imagined that the dot product of finite-dimensional vectors was defined first, and that the definition was then generalized to the idea of the inner product. And while the idea of extending orthogonality to finite-dimensional vectors in higher dimensions makes sense to me, there is still something about the mix of the infinite and the continuous that remains ambiguous. – Rax Adaam Nov 08 '17 at 17:50
  • Also, it might be interesting to include the remark made in the question linked above (pointed out in Zachary Selk's comment to my question), which suggests thinking of $dx$ as the relative weight of the terms (in which case $dx = 1$ returns exactly the familiar vector-components expression). – Rax Adaam Nov 08 '17 at 17:52