Why does $\mathbb{R}^n$ have a "canonical" basis, if vector spaces are supposed to have none?
More generally, for every field $F$ and every integer $n$, why does the "space of $n$-tuples" $F^n$ have a "canonical" basis (as an $n$-dimensional vector space over $F$), if vector spaces are supposed to have none?
(I don't know if the same thing can be asked of modules, but modules are weird. Maybe of free modules?)
Set $n$ to be $2$.
In $\mathbb{R}^2$ you have the basis $\{(1,0), (0,1)\}$. Since $\mathbb{R}^2$ is a (2D) vector space (over $\mathbb{R}$), this basis is just as arbitrary as any other basis. But it certainly doesn't look arbitrary. (The basis $\{(\pi,e), (\sqrt{2},\sqrt[3]{216156757151})\}$ is just as much of a basis, but it seems "more arbitrary" than the other one, and no one ever uses it.)
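(For concreteness, writing those two tuples as the columns of a matrix, the usual determinant test confirms that the second pair really is a basis, just an ugly one:
$$\det\begin{bmatrix} \pi & \sqrt{2} \\ e & \sqrt[3]{216156757151} \end{bmatrix} = \pi\,\sqrt[3]{216156757151} - e\sqrt{2} \neq 0,$$
since the first product is roughly $1.9 \times 10^4$ while the second is less than $4$.)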
Sure, I know this isn't really the "canonical" basis for $\mathbb{R}^2$ (or $F^2$), but people even go so far as to call it the standard basis! Might as well call it canonical. It even features the additive and multiplicative identities of the field, and nothing else. It's as if it were the basis with "minimal complexity", or something like that, but such a notion should make no sense in a vector space.
So what's going on with the basis $\{(1,0), (0,1)\}$, and why is it "special", if bases are supposed to be the opposite of special?
(I.e. all bases are "equivalent": all are related by the action of the automorphism group $GL(n,\mathbb{R})$ on $\mathbb{R}^n$, every $n$-dimensional vector space over $\mathbb{R}$ is isomorphic to $\mathbb{R}^n$ (although the isomorphism depends on a choice of basis), etc.)
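To spell out the "$GL$ action" part in the $n = 2$ case, writing $\vec e_0 = (1,0)$, $\vec e_1 = (0,1)$ and letting $\{\vec b_0, \vec b_1\}$ be any other basis: the linear map $T$ determined by
$$T(\vec e_0) = \vec b_0, \qquad T(\vec e_1) = \vec b_1$$
is invertible precisely because $\{\vec b_0, \vec b_1\}$ is itself a basis, so $T \in GL(2,\mathbb{R})$ and the action on ordered bases is transitive. From the vector-space structure alone, no basis is distinguished from any other.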
Never mind that $\{(1,0), (0,1)\}$ isn't even a basis, because $(1,0)$ and $(0,1)$ aren't even vectors, they're tuples, so the best one can say is that $(1,0)$ is the coordinate tuple of infinitely many elements of the 2D $\mathbb{R}$-vector space $\mathbb{R}^2$, one for each basis. So the elements of the vector space $\mathbb{R}^2$ don't even look like $(1,0)$ or $(0,1)$; they look like $a_0\vec b_0 + a_1\vec b_1$, where $\vec b_0,\vec b_1$ are vectors that form a basis and $a_0,a_1$ are scalars.
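To make the "one for each basis" point concrete: relative to a basis $\{\vec b_0, \vec b_1\}$, the coordinate tuple $(1,0)$ names the vector
$$1\cdot\vec b_0 + 0\cdot\vec b_1 = \vec b_0,$$
and since every nonzero vector of a 2D space can be extended to a basis, the same tuple $(1,0)$ names every nonzero vector of $\mathbb{R}^2$ as the basis varies.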
So then people say that vectors in $\mathbb{R}^2$ are linear combinations $a_0 \vec e_0 + a_1 \vec e_1$, where $\{\vec e_0, \vec e_1\}$ is the "standard" basis, and sometimes they'll define $\vec e_0$ as the "vector" $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, but that's not even a vector, it's a $2 \times 1$ matrix, and matrices need a basis in order to be well-defined, so the whole definition is circular.
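To spell out the circularity I have in mind (using $[\,\cdot\,]_{\mathcal B}$ for the coordinate column relative to a basis $\mathcal B$, notation I'm introducing just for this remark):
$$\vec e_0 := \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad [\vec e_0]_{\{\vec e_0,\vec e_1\}} = \begin{bmatrix} 1 \\ 0 \end{bmatrix},$$
so the object and its coordinate column with respect to itself are written identically, and the notation never tells you which of the two is meant.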
Something fishy is going on with bases and vector space isomorphisms/automorphisms...
(Consider this question a continuation of this)