6

My textbook states: Let $G$ be a finite-dimensional vector space of real functions on $\mathbb R^D$. What is meant by "vector space of real functions"?

I know what a vector space is, but I don't get how real functions can form a vector space. (The only vector spaces I can see connected to a function are its domain and codomain.)

Please, if you can, provide a tangible and intuitive example along with the explanation, as I find examples extremely useful for understanding.

5 Answers

3

You should not try to "visualise" a single vector at all costs; have you ever tried this for a five-dimensional one? We can't visualise such high-dimensional vectors, but we still want to talk about concepts like parallelism, planes, or projections in such vector spaces. You can't visualise the vector $(1,2,3,4,5)$, but you may say that it is parallel to $(2,4,6,8,10)$ and that its projection onto the (non-visualisable) plane spanned by $(1,0,0,0,0)$ and $(0,1,0,0,0)$ is $(1,2,0,0,0)$. And the same transfer can be done with sets of functions.

Take, for example, $\mathbb R^J$ (where $J$ is a non-empty set), the set of real-valued functions defined on $J$. We want to consider each member of $\mathbb R^J$, that is, each function, as a single vector.

First we recall that two functions $f$ and $g$, defined on the same domain, are equal iff they are pointwise equal; that is, $f=g$ iff for all $x$ from the common domain we have $f(x)=g(x)$.

From here we may define the sum of two functions $f$ and $g$, which is a function in its own right, pointwise:

Define for $f,g\in \mathbb R^J$ their sum $f+g$ by $(f+g)(x):=f(x)+g(x)$ for all $x\in J$. Furthermore we may define for any real number $c$ the new function $c\cdot f$ by $(c\cdot f)(x):=c\cdot f(x)$.

It's easy to verify that now $\mathbb R^J$ is a real vector space. (It may be infinite-dimensional, but that doesn't matter in this case.) For example, one has to verify that $$c\cdot (f+g)=c\cdot f+c\cdot g.$$ But that's nearly trivial since by the above definitions $$\begin{align}\bigl({\bf c\cdot(f+g)}\bigr)(x)&=c\cdot\bigl((f+g)(x)\bigr)\\ &=c\cdot\bigl(f(x)+g(x)\bigr)\\ &=c\cdot f(x)+c\cdot g(x)\\ &=(c\cdot f)(x)+(c\cdot g)(x)\\ &=({\bf c\cdot f+c\cdot g})(x).\end{align} $$
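The pointwise definitions above are easy to play with on a computer. Below is a minimal Python sketch (my own illustration, not part of the answer) that represents members of $\mathbb R^J$ as callables and spot-checks the distributive law $c\cdot(f+g)=c\cdot f+c\cdot g$ at a few sample points:

```python
# A sketch: members of R^J as Python callables, with pointwise
# addition and scalar multiplication as defined in the answer.

def add(f, g):
    """Pointwise sum: (f+g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(c, f):
    """Pointwise scalar multiple: (c*f)(x) = c * f(x)."""
    return lambda x: c * f(x)

f = lambda x: x ** 2      # the function t -> t^2
g = lambda x: 3 * x + 1   # another member of R^R

# Check c*(f+g) == c*f + c*g at a few sample points.
c = 2.5
lhs = scale(c, add(f, g))
rhs = add(scale(c, f), scale(c, g))
for x in [-1.0, 0.0, 2.0, 7.5]:
    assert lhs(x) == rhs(x)
```

A check at sample points is of course evidence, not a proof; the proof is the pointwise calculation displayed above.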

To give an example, recall that for any non-zero vector $f$ of a vector space $V$ the set $g=\{c\cdot f\mid c\in \mathbb R\}$ is a straight line through the origin. Now let $J=\mathbb R$, hence $V=\mathbb R^{\mathbb R}$, and let $f$ be the well-known function defined by $f(t)=t^2$.

From this point of view the set $g=\{c\cdot f\mid c\in \mathbb R\}$ is a straight line in $V$ through the origin. Any point $p$ of $g$ is a function, defined by $p(t)=c\cdot t^2$ for some fixed $c\in\mathbb R$.

By the way, the "usual" vector space $\mathbb R^n=\mathbb R^{\{1,\dots,n\}}$ is nothing other than the set of functions $\vec v\colon\{1,\dots,n\}\to\mathbb R$; can you see this? Such a $\vec v$ is determined by the values it takes on $1,\dots,n$, that is, by $\vec v(1),\dots,\vec v(n)$; commonly one writes $v_k$ instead of $\vec v(k)$ for $1\leq k\leq n$. And the notation $$\vec v=\begin{pmatrix}v_1\\ \vdots\\ v_n\end{pmatrix}$$ is nothing but an abbreviated form of the table of values that $\vec v$ takes on $\{1,\dots, n\}$.

Now take another function $\vec w$ from $\mathbb R^n$. From the above definitions we may compute $\vec v+\vec w$, namely by $(\vec v+\vec w)(k)=\vec v(k)+\vec w(k)$. In abbreviated form this boils down to $$\vec v+\vec w=\begin{pmatrix}v_1+w_1\\ \vdots\\ v_n+w_n\end{pmatrix}.$$
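To make the identification concrete, here is a small Python sketch (my own illustration) that stores an $\mathbb R^3$ vector literally as its table of values on $\{1,2,3\}$ and adds two such "functions" pointwise:

```python
# A vector in R^3 viewed as a function on {1,2,3}, stored here
# as a dict mapping each index k to the value v(k) = v_k.

v = {1: 1.0, 2: 2.0, 3: 3.0}
w = {1: 10.0, 2: 20.0, 3: 30.0}

# Pointwise addition, exactly as for any function in R^J:
# (v+w)(k) = v(k) + w(k).
v_plus_w = {k: v[k] + w[k] for k in v}

assert v_plus_w == {1: 11.0, 2: 22.0, 3: 33.0}
```

The dict is literally the "table of values" the answer describes; the column notation $\begin{pmatrix}v_1\\ \vdots\\ v_n\end{pmatrix}$ is just another way of writing it down.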

Michael Hoppe
  • Thank you everyone for your answers. I really appreciate your willingness to help me. There is only one thing that I don't get: is a vector space of functions composed of functions (which thus cannot be visualised) or of vectors? – Tommaso Bendinelli Dec 20 '18 at 17:20
  • @TommasoBendinelli First of all, a function is a vector (and vice versa). Thus you may transfer the notion of parallelism, e.g., to functions as well: so $f$ and $3\cdot f$ are parallel functions (vectors). – Michael Hoppe Dec 20 '18 at 17:27
  • I can't visualize it. I have always thought of a function as a mapping between the domain and the codomain, while I see a vector as (forgive the lack of formalism) an "arrow" in an n-dimensional space. How can these two things be the same? – Tommaso Bendinelli Dec 20 '18 at 17:35
  • @TommasoBendinelli I was also puzzled by it when I first saw it. But in more abstract mathematics, a vector simply means an element of a vector space. Since we have seen that the set of all real-valued functions forms a vector space, a real-valued function is a vector. Conversely, an old-school vector $\begin{pmatrix} v_0 \\ v_1\end{pmatrix}$ living in the plane can be seen as a real-valued function $v: \{0, 1\} \to \mathbb{R}$ where $v(0) = v_0$ and $v(1) = v_1$. Hope it helps you to visualize. – Alex Vong Dec 20 '18 at 19:21
  • Thank you! If I say: let $A$ be a vector space of exponential functions ($e^{ax}$) with $a$ constrained to be either 1 or 2, this vector space has dimension 2, right? And if I remove the constraint on $a$, then the vector space has infinite dimension. Is that correct? – Tommaso Bendinelli Dec 20 '18 at 19:35
  • No, it won't, because the sum of two such functions isn't in that "space". Rather, take the span of the set $\{\exp(x),\exp(2x)\}$. You get another two-dimensional space if $J$ consists solely of two elements. – Michael Hoppe Dec 20 '18 at 19:51
  • @TommasoBendinelli: An arrow in space is an example of a vector, and it is the example most often used to introduce the concept of a vector. But a vector is anything that obeys the rules of vectors; arrows in space are just one thing that does. Remember back when you were a child and were taught what "animals" are; it would be very natural for a child to think that animals are lions, horses, chickens, frogs, and so on, and then you look at a worm and say "how can that be an animal; it has no legs, no eyes..." – Eric Lippert Dec 20 '18 at 20:10
  • And keep in mind that the notion of an arrow is essentially that of a map as well, namely the shift by that arrow. – Michael Hoppe Dec 20 '18 at 20:16
  • @TommasoBendinelli: once you have grasped that functions can form a vector space, do some reading on Wikipedia and see if you can understand the meaning of the dual space and the double dual space of the vector space of one-dimensional arrows. I struggled with that when I was studying linear algebra, and the day that it made sense to me was a turning point. In the 20+ years since I have forgotten most of it, but I need to pick it up again; linear algebra is surprisingly important in programming these days. :-) – Eric Lippert Dec 20 '18 at 20:18
2

$\mathbb R^D$ is the set of all functions $f:D \to \mathbb R.$ If we define an addition $f+g$ and a scalar multiplication $ \alpha f$ in this set by

$(f+g)(x)=f(x)+g(x)$ and $( \alpha f)(x)= \alpha f(x)$,

then $\mathbb R^D$ is a real vector space (of functions).

Fred
2

Take a collection of functions and see if you can demonstrate all the properties of a vector space using them.

A vector space requires:

  1. An additive identity (written $0$ in ${\mathbb R}$). The function $f\equiv 0$ fulfills this need.
  2. A scalar multiplicative identity (written $1$ in ${\mathbb R}$). $1$ works here since $1\cdot f = f$
  3. Commutativity of addition: $f+g = g+f$
  4. Associativity of addition: $f+(g+h) = (f+g)+h$
  5. Associativity of scalar multiplication: $\alpha (\beta f) = (\alpha \beta)f$
  6. Distributivity of scalars: $(\alpha + \beta)f = \alpha f + \beta f$
  7. Distributivity of scalars over vector addition: $\alpha(f+g) = \alpha f + \alpha g$
  8. An additive inverse: given $f$ there exists $g$ such that $f+g = 0$. Obviously $g=-f$ satisfies this.
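The checklist above can be spot-checked numerically. The following Python sketch (my own illustration, using the pointwise operations defined in the other answers) verifies each axiom at a handful of sample points; this is evidence, not a proof, since a proof must hold at every point:

```python
# Spot-check the eight vector-space axioms for pointwise operations.
import math

add = lambda f, g: (lambda x: f(x) + g(x))    # pointwise sum
scale = lambda a, f: (lambda x: a * f(x))     # pointwise scalar multiple
zero = lambda x: 0.0                          # the additive identity f == 0

f, g, h = math.sin, math.exp, (lambda x: x ** 3)
a, b = 2.0, -0.5

for x in [-1.0, 0.0, 0.5, 3.0]:
    assert add(f, zero)(x) == f(x)                        # 1. additive identity
    assert scale(1.0, f)(x) == f(x)                       # 2. scalar identity
    assert add(f, g)(x) == add(g, f)(x)                   # 3. commutativity
    assert math.isclose(add(f, add(g, h))(x),
                        add(add(f, g), h)(x))             # 4. associativity
    assert math.isclose(scale(a, scale(b, f))(x),
                        scale(a * b, f)(x))               # 5. scalar assoc.
    assert math.isclose(scale(a + b, f)(x),
                        add(scale(a, f), scale(b, f))(x)) # 6. distributivity
    assert math.isclose(scale(a, add(f, g))(x),
                        add(scale(a, f), scale(a, g))(x)) # 7. distributivity
    assert add(f, scale(-1.0, f))(x) == zero(x)           # 8. additive inverse
```

(`math.isclose` is used where floating-point rounding could make mathematically equal expressions differ in the last bit.)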
postmortes
  • Can we define the addition of two functions (f + g)(x) as f(x) + g(x) for any given function? – Tommaso Bendinelli Dec 20 '18 at 17:37
  • Yes, exactly so – postmortes Dec 20 '18 at 17:53
  • So, for instance, let's take the set of functions with domain the interval $(0,5)$ and codomain $(0,5)$. To this set belong, for instance, $f(x) = x$, $f(x) = 2x$, $f(x) = \exp(x)$, $f(x)=\cos(x)$ (all defined on the domain), as all these functions map the domain into the codomain. This is a vector space. – Tommaso Bendinelli Dec 20 '18 at 18:00
  • They're all part of a vector space, yes. You've selected functions that are all continuous on the domain $(0,5)$, so the full vector space could be $C((0,5))$, which is all functions continuous on that domain. All functions that can be reached using the ones you have and the axioms 1–8 in the answer are part of the vector space (so $\alpha \cdot \exp x$ is in the vector space for all real $\alpha$). – postmortes Dec 20 '18 at 19:13
1

I'll offer a point of view that gives some concrete examples.

As people have mentioned, the only thing necessary to have a "vector space" is the ability to add objects together and multiply them by scalars (subject to some special rules). We have this for functions that share a common domain and codomain. If we consider all functions with codomain $\mathbb{R}$ and fixed domain $D$, we get an infinite-dimensional vector space (unless $D$ is a finite set). This is what is usually called $\mathbb{R}^{D}$.

Now, if we want a finite dimensional vector space, what we are looking for is a subspace of $\mathbb{R}^{D}$ that can be spanned by a finite set of functions. Here span is the normal linear algebra concept, where we are allowed to take linear combinations of the functions, e.g. the span of functions $f(x)$ and $g(x)$ would look like $\{ af(x) + bg(x) \ : \ a,b \in \mathbb{R}\}$.

Some examples:

  1. If we take the set of constant functions $f(x) = c$ for $c \in \mathbb{R}$, this is a 1-dimensional vector space of functions, because any such function is just $c$ times the function $f(x) = 1$.

  2. If we take the set of polynomials of degree at most $n$, we get a vector space of dimension $n+1$; for example, the polynomials of degree at most 4 give a 5-dimensional vector space with basis $\{1,x,x^{2}, x^{3}, x^{4}\}$.

  3. If we take linear combinations of $\sin{x}$ and $\cos{x}$, we get a vector space of dimension 2 containing functions of the form $\{a\sin{x}+b\cos{x} \ : \ a,b \in \mathbb{R}\}$ (it can be shown that $\sin{x}$ and $\cos{x}$ are not scalar multiples of each other, so are linearly independent).
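For example 3, the claimed linear independence can be seen by evaluating at two points: if $a\sin x + b\cos x$ were the zero function, plugging in $x=0$ would force $b=0$, and $x=\pi/2$ would force $a=0$. A small Python sketch of this argument (my own illustration):

```python
# Illustration: sin and cos are linearly independent. If a*sin + b*cos
# were the zero function, it would vanish at x = 0 and x = pi/2; but
# those two evaluations force b = 0 and a = 0 respectively.
import math

def relation(a, b, x):
    """Evaluate a*sin(x) + b*cos(x) at the point x."""
    return a * math.sin(x) + b * math.cos(x)

# x = 0 picks out the cos-coefficient b; x = pi/2 picks out a.
assert relation(0.0, 1.0, 0.0) == 1.0          # b survives at x = 0
assert relation(1.0, 0.0, math.pi / 2) == 1.0  # a survives at x = pi/2
```

So any relation $a\sin + b\cos = 0$ must have $a=b=0$, which is exactly linear independence.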

xxxxxxxxx
  • I've deleted my answer—I think it would cause confusion. The question I was trying to answer was "How can a set of functions form a space anyway?" which I felt other answers were jumping past. – timtfj Dec 20 '18 at 21:42
0

The notion of a vector space is abstract, and it can be applied to functions (such spaces are called function spaces), which are the subject of functional analysis.

It is important to get away from the geometric representation of a vector space: geometric pictures are special cases and become impossible to visualise when we move into higher-dimensional spaces. Instead, consider our definition of a vector space:

Let $V$ be a set that is closed under vector addition and multiplication of vectors by scalars; then we call $V$ a vector space, i.e.

$$ \forall x, y \in V \quad \forall c,d \in \mathbb R $$

$$ cx+dy \in V $$

When we are talking about function spaces we are talking about mappings from a set $X$ to a vector space $V$ over a field (don't get too bogged down in this if you don't know what a field is). Note that $X$ can also be a vector space, in which case we can restrict attention to the linear mappings.

A simpler way of thinking about them is as collections of functions that share characteristics of their range and whose codomains are commonly the same.

Example time!

The function space $C(\mathbb R^n)$ consists of all functions that are continuous on $\mathbb R^n$, e.g. $f(x)=x$ belongs to $C(\mathbb R^1)$.

Hilbert spaces, which are often referred to as generalisations of Euclidean space, provide further important examples of function spaces (e.g. the square-integrable functions $L^2$).