
Whenever I refer to a book or video on how to represent a function as a vector, the source automatically assumes the function to be a polynomial $$a_0 + a_1 \alpha +a_2 \alpha^2 + \dots + a_n \alpha^n$$ and hence forms the basis easily as $$\begin{bmatrix}1 \\ \alpha \\ \alpha^2 \\ \vdots \end{bmatrix}$$ Of course I understand why they concentrate primarily on polynomials, since we approximate well-behaved functions with Taylor series. But how does one actually look at a function, for example $e^x$, as a vector of infinite dimensions, without approximating it with a Taylor series (see the first sketch below)? And, in this answer, what does he mean when he says

Wait, what basis were we on before if we're on the exponentials now? The dirac deltas. Take an inner product of some function with a dirac delta and notice how you get back that function at the action point of the dirac delta. This is sometimes called the sifting theorem, but it should be clear that if we can project a vector (via inner product) and just get back some component of that vector, that component was how much the vector had in the direction we projected it onto.

I understand that we get a component of a vector when we take its inner product with a Dirac delta function. But my doubt is: we get only one component, that's it. How can we form the entire vector with the Dirac deltas as a basis, since each delta is concentrated at a single point?
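To make the "function as an infinite-dimensional vector" idea concrete before the comments below, here is a minimal numerical sketch (my own illustration, not from the cited answer; the grid size and variable names are arbitrary choices): sampling $e^x$ on a grid turns it into an ordinary finite-dimensional vector, one component per sample point, and refining the grid suggests the infinite-dimensional limit.

```python
import math
import numpy as np

# Sample e^x at 5 grid points: the function becomes a vector in R^5,
# with one component per point. Finer grids give longer vectors.
x = np.linspace(0.0, 1.0, 5)
v = np.exp(x)
print(v)  # approximately [1.0, 1.284, 1.6487, 2.117, 2.7183]

# By contrast, the Taylor picture stores the coordinates 1/n! relative
# to the polynomial family {1, x, x^2, ...} and evaluates on the grid.
coeffs = [1.0 / math.factorial(n) for n in range(10)]
taylor = sum(c * x**n for n, c in enumerate(coeffs))
print(np.max(np.abs(taylor - v)))  # tiny: the two viewpoints agree here
```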
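And a hedged discrete analogue of the sifting property quoted above: on a sample grid the Dirac delta becomes a Kronecker delta, i.e. a standard basis vector; the inner product with it returns exactly one component, and the whole family of deltas, one per grid point, supplies every component.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
f = np.exp(x)                 # the function, viewed as a vector

k = 40                        # index of the "action point" x[k]
delta_k = np.zeros_like(f)
delta_k[k] = 1.0              # discrete delta concentrated at x[k]

component = f @ delta_k       # the inner product sifts out f(x[k])
assert np.isclose(component, f[k])
# One delta gives one component; ranging k over all indices recovers
# the entire vector, which is why the deltas act as the standard basis.
```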

  • Set $V := \{ce^x : c\in\mathbb R\}$. This is a (one-dimensional) vector space. The functions $x^n$ are not in there. If that does not answer your first question, then I am afraid I don't understand it. – amsmath Apr 18 '19 at 18:22
  • I don't get it, so we can't say that a function is a vector with infinite dimensions? – Aravindh Vasu Apr 18 '19 at 18:24
  • Where do you get the infinite dimensions from? In Fourier theory we can also write $f(x) = x^k$ on $[0,2\pi]$ as a series of cosine and sine terms: $x^k = \sum_{n=-\infty}^\infty(a_n\cos(nx) + b_n\sin(nx))$. Would you say that the function $x^k$ "is a vector with infinite dimension"? – amsmath Apr 18 '19 at 18:29
  • I'm a bit confused because, in the mentioned answer, there's the following text: "Put concisely: a function can be thought of as an infinite-dimensional vector. It has one element for every value it can take on, in an ordered array." What does this mean? – Aravindh Vasu Apr 18 '19 at 18:30
  • Now I see what is meant. Let me think... – amsmath Apr 18 '19 at 18:32
  • Ok, here it is: In fact, this is not correct. Consider the space $V$ of functions on $[0,1]$, for example. For any $x\in [0,1]$ define the function $f_x = \chi_{\{x\}}$ which is one at $x$ and zero elsewhere. Then $\{f_x : x\in [0,1]\}$ seems to be a basis of $V$, but it isn't. A basis of $V$ would be a set $B$ of functions such that each $f\in V$ can be written as a finite linear combination of the elements in $B$, which is not the case here. – amsmath Apr 18 '19 at 18:44
  • The way I thought of it: each basis vector of the function space is a separate Dirac delta function, centered at the desired point. Am I anywhere near right? – Aravindh Vasu Apr 18 '19 at 18:46
  • @amsmath Sorry, but can you simplify this? How can f(X) be a basis of $V$? – Aravindh Vasu Apr 18 '19 at 18:47
  • No, that's wrong. Especially since $\delta_x$ is not a function. It's the mapping on $V$ that maps a function $f$ to its value $f(x)$ at $x$. And what do you mean by $f(X)$? – amsmath Apr 18 '19 at 18:52
  • @amsmath okay, now I got the first part of that comment, what's $B$ again? – Aravindh Vasu Apr 18 '19 at 19:01
  • Read it again... – amsmath Apr 18 '19 at 19:04
  • Just a point of clarification: a basis does not have to be finite; it doesn't even have to be countable. Assuming the axiom of choice, every vector space has a basis, and most are uncountable. Just as a finite or countable sum is expressed with a sigma sign, an uncountable sum is expressed with an integral symbol, so in fact @amsmath the set you speak of is indeed a basis of $V$. – Matrefeytontias Apr 18 '19 at 19:06
  • I'm okay with the Infinite basis vectors, but what "are" they. I get it only in the case of polynomials – Aravindh Vasu Apr 18 '19 at 19:07
  • You can think of any family of vectors, finite, countable or uncountable, as a set of vectors and a set of indices. Both have exactly the same number of elements, as indices are merely labels for the vectors. In the case of $n$-dimensional space, the indexing set is simply the integers from $0$ to $n - 1$; in the case of polynomials, the indexing set is $\mathbb N$, whereas in the case of Fourier analysis, for example, the indexing set is the whole of $\mathbb R$ and the vector set is the set of all exponentials $\{ x \mapsto e^{\alpha x} : \alpha \in \mathbb R \}$. – Matrefeytontias Apr 18 '19 at 19:12
  • @Matrefeytontias You should be careful about spreading false statements. No, it isn't a basis of $V$. As I said: if a set $B$ is a basis of $V$, then each $v\in V$ can be written as a finite linear combination of the elements in $B$. – amsmath Apr 18 '19 at 19:13
  • https://en.wikipedia.org/wiki/Basis_(linear_algebra) Also, what you write about Fourier analysis is wrong (at least the "basis" you provide). – amsmath Apr 18 '19 at 19:15
  • I was precisely on that link, and while it is true that a "basis" is stricto sensu a free family that decomposes every element of the space into a finite sum of its elements, the notion generalises to the Hamel basis (or algebraic basis) in uncountably-infinite-dimensional spaces. The debate over whether or not it makes sense to speak of non-Hamel bases in uncountably-infinite-dimensional spaces is another story (I can't really speak to it, tbh), but since we are speaking about function spaces, I just went ahead and chose Hamel bases (the argument with Fourier analysis still holds, btw). – Matrefeytontias Apr 18 '19 at 19:19
  • On the subject of Fourier series, one will prefer an orthogonal basis over a Hamel basis - again, those are countably infinite and every function uses them all, not a finite sum of them. Even worse, Fourier transforms can be thought of as decomposing functions onto an uncountably infinite orthogonal basis, the complex exponentials - I did mess up the basis I provided; it should be $\{ x \mapsto e^{\alpha i x} : \alpha \in \mathbb R \}$ (notice the complex $i$). – Matrefeytontias Apr 18 '19 at 19:23
  • @Matrefeytontias From your text I don't see whether you admit having been wrong or not. In the latter case, how do you write $f(x)\equiv 1$ as a finite linear combination of the $f_x$ that I have defined? – amsmath Apr 18 '19 at 20:00
  • @Matrefeytontias Concerning the Fourier stuff: there is no mathematical notion of basis (Hamel, orthonormal, etc.) that makes $\{e^{i\alpha x} : \alpha\in\mathbb R\}$ a basis of $L^2(\mathbb R)$. – amsmath Apr 18 '19 at 20:10
  • @amsmath Guys, any conclusion? So how does one think of a function as a vector (in layman's terms)? – Aravindh Vasu Apr 19 '19 at 00:40
  • So, can we think about the Laplace transform as a change of basis? Please take a look at https://math.stackexchange.com/q/3190961/525644 – Aravindh Vasu Apr 19 '19 at 01:11
  • The quarrel here seems a bit pointless as my other answers were meant to be informal. Of course, care must be taken when working with infinite-dimensional vector spaces, when it comes to defining a basis and so forth. My answers were meant to start with finite-dimensional intuition, and guide you into the infinite so you can break into more precise documents (I explicitly state that in both answers). Functions as infinite dimensional vectors is intuition that the vast majority of mathematicians and scientists use, and it has been made very precise in a number of settings. – jnez71 Feb 03 '21 at 19:58
  • For precision, the infinite-dimensional function space that I was working with was the Hilbert space $L^2$. I quote: "Hilbert spaces arise naturally and frequently in mathematics and physics, typically as infinite-dimensional function spaces... Many of the applications of Hilbert spaces exploit the fact that Hilbert spaces support generalizations of simple geometric concepts like projection and change of basis from their usual finite dimensional setting." This is very well-known stuff. – jnez71 Feb 03 '21 at 20:12
  • Furthermore, this section is basically my answers. "The functions [Fourier basis] form an orthogonal basis of the Hilbert space $L^2_{[0,1]}$... The problem can also be studied from the abstract point of view: every Hilbert space has an orthonormal basis, and every element of the Hilbert space can be written in a unique way as a sum of multiples of these basis elements. The coefficients appearing on these basis elements are sometimes known abstractly as the Fourier coefficients of the element of the space. [Ref: Halmos, 1957]" (A numerical sketch of this kind of projection appears after this comment thread.) – jnez71 Feb 03 '21 at 20:14
  • More quotes: "In various applications to physical problems, a function can be decomposed into physically meaningful eigenfunctions of a differential operator (typically the Laplace operator): this forms the foundation for the spectral study of functions, in reference to the spectrum of the differential operator. [Ref: Hilbert 1953, Reed 1975]." And this section of the DFT article is identical to my answers, starting finite and taking an infinite-resolution limit. – jnez71 Feb 03 '21 at 20:25
  • Invertible integral transforms are precisely changes of basis in the sense of Fredholm and here. The standard basis (deltas) is discussed here, and if we allow generalized functions, then the infinite-dimensional extension is the Dirac deltas. I chose not to mention dual spaces in my answer, though, to avoid confusing beginners. (An FFT sketch after this thread illustrates the discrete case.) – jnez71 Feb 03 '21 at 20:59
  • The generalization from the Kronecker to the Dirac delta is readily seen in the well-known theory of Green's functions. Quote: "This can be thought of as an expansion of $f$ according to a Dirac delta function basis (projecting $f$ over $\delta(t-\tau)$) and a superposition of the solution on each projection. Such an integral equation is known as a Fredholm integral equation, the study of which constitutes Fredholm theory." In my (rather common) opinion, this is a very intuitive way to think about typical function spaces. (The last sketch below discretizes exactly this expansion.) – jnez71 Feb 03 '21 at 21:00
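To ground the Fourier statements in the thread above, here is a small numerical sketch (my own construction; the choice $f(x) = x^2$ and the truncation $N = 50$ are arbitrary): the coefficients are inner products with the trigonometric family on $[0, 2\pi]$, and the partial sums rebuild the function in the $L^2$ sense.

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 2001)
f = x**2
N = 50

a0 = np.trapz(f, x) / (2.0 * np.pi)               # mean term
recon = np.full_like(x, a0)
for n in range(1, N + 1):
    an = np.trapz(f * np.cos(n * x), x) / np.pi   # projection onto cos(nx)
    bn = np.trapz(f * np.sin(n * x), x) / np.pi   # projection onto sin(nx)
    recon += an * np.cos(n * x) + bn * np.sin(n * x)

# Agreement improves as N grows; Gibbs oscillations persist near the
# endpoints since x^2 does not match at 0 and 2*pi (convergence is in L^2).
print(np.max(np.abs(recon - f)[200:-200]))
```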
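Next, a sketch of "invertible transform as change of basis" in the discrete setting mentioned above (again my own illustration): the DFT rewrites a sampled function in the basis of complex exponentials, and the inverse DFT changes back to the sample (delta) basis with nothing lost.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 64, endpoint=False)
f = np.exp(x)                    # the function as a vector in C^64

F = np.fft.fft(f)                # coordinates in the exponential basis
f_back = np.fft.ifft(F).real     # change back to the sample (delta) basis

assert np.allclose(f, f_back)    # invertible: a genuine change of basis
```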
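Finally, a discretization of the quoted expansion $f(t) = \int \delta(t-\tau)\, f(\tau)\, d\tau$ (a sketch under my own grid conventions): the delta kernel becomes $1/\Delta\tau$ times the identity matrix, and "integrating" against it returns $f$ unchanged, with each column sifting out one component.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)
dt = t[1] - t[0]
f = np.exp(t)

K = np.eye(t.size) / dt          # discrete stand-in for delta(t - tau)
f_back = (K @ f) * dt            # quadrature of the Fredholm identity

assert np.allclose(f, f_back)    # the delta "basis" reproduces f exactly
```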
