Whenever I refer to a book or video on how to represent a function as a vector, the source automatically assumes the function is a polynomial $$a_0 + a_1 \alpha +a_2 \alpha^2 + \dots + a_n \alpha^n $$ and hence forms the basis easily as $$\begin{bmatrix}1 \\ \alpha \\ \alpha^2 \\ \vdots \end{bmatrix}$$ Of course I understand why they concentrate primarily on polynomials, since we can approximate many common functions with Taylor series. But how do I actually look at a function, for example $e^x$, as a vector of infinite dimensions, without approximating it with a Taylor series? And, in this answer, what does he mean when he says
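For concreteness, here is the Taylor-series route I am trying to avoid, applied to $e^x$ (my own working, in the monomial basis above):

$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} \quad\leftrightarrow\quad \begin{bmatrix} 1 \\ 1 \\ \tfrac{1}{2!} \\ \tfrac{1}{3!} \\ \vdots \end{bmatrix} \text{ in the basis } \{1,\, x,\, x^2,\, x^3,\, \dots\}.$$

So I can see how a function becomes a coefficient vector *once* it is expanded in powers of $x$; my question is how to view it as a vector directly.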
> Wait, what basis were we on before if we're on the exponentials now? The dirac deltas. Take an inner product of some function with a dirac delta and notice how you get back that function at the action point of the dirac delta. This is sometimes called the sifting theorem, but it should be clear that if we can project a vector (via inner product) and just get back some component of that vector, that component was how much the vector had in the direction we projected it onto.
I understand that we get one component of a vector when we take its inner product with a Dirac delta. But my doubt is: we get only one component, that's it. How can the Dirac deltas form a basis for the entire vector, when each delta blows up at only a single point?
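To make my doubt concrete, the sifting computation I am referring to is (as I understand it):

$$\langle f, \delta_a \rangle = \int_{-\infty}^{\infty} f(x)\,\delta(x-a)\,dx = f(a).$$

This hands me the single component of $f$ "in the direction" of $\delta_a$, but I don't see how the full function is supposed to be reassembled from these one-at-a-time components.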