Let me put the terms "geometric space", "polynomial", "basis", and "dimension" into context, and it will become clear what is going on.
Geometric space & Polynomial.
The common concept here is that of a vector space. Forget for a second the geometric perspective on vectors and vector spaces and look at the notion abstractly. The ingredients for a vector space are:
- A set $V$, the "set of vectors".
- An operator $+ \colon V \times V \rightarrow V$ that lets you add up vectors.
- A scalar multiplication $\cdot \colon \mathbb{R} \times V \rightarrow V$ that lets you "scale" a vector $v \in V$ by a scalar $\lambda \in \mathbb{R}$ to $\lambda \cdot v$.
In order for the triple $(V, +, \cdot)$ to form a vector space, the operators $+$ and $\cdot$ need to satisfy some properties that allow you to do things like $(\lambda + \mu)(u + v) = \lambda u + \lambda v + \mu u + \mu v$ for $u, v \in V$ and $\lambda, \mu \in \mathbb{R}$.
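To make this concrete, here is a minimal sketch in Python of polynomials as vectors, where a polynomial is just its coefficient tuple; the class name `Poly` and its methods are my own illustrative choices, not any standard API. The final assertion checks the distributive law quoted above on one example.

```python
# A minimal sketch: polynomials as coefficient tuples (a_0, ..., a_n),
# with vector addition and scalar multiplication. "Poly" is an
# illustrative name, not a standard library class.

class Poly:
    def __init__(self, *coeffs):
        self.coeffs = tuple(coeffs)           # (a_0, a_1, ..., a_n)

    def __add__(self, other):                 # vector addition +
        return Poly(*(a + b for a, b in zip(self.coeffs, other.coeffs)))

    def __rmul__(self, scalar):               # scalar multiplication ·
        return Poly(*(scalar * a for a in self.coeffs))

    def __eq__(self, other):
        return self.coeffs == other.coeffs

u = Poly(1.0, 2.0, 0.0)   # 1 + 2x  (as abstract polynomial)
v = Poly(0.0, 0.0, 3.0)   # 3x^2
lam, mu = 2.0, 5.0

# (lambda + mu)(u + v) == lambda*u + lambda*v + mu*u + mu*v
assert (lam + mu) * (u + v) == lam * u + lam * v + mu * u + mu * v
```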
You gave two examples of vector spaces:
- $(\mathbb{R}^n, +, \cdot)$, where $+$ denotes the addition of points (as position vectors) in $\mathbb{R}^n$ and $\cdot$ denotes the scalar multiplication of a real number $\lambda$ with a point (as position vector) in $\mathbb{R}^n$.
- $(P_n, +, \cdot)$, where $P_n$ denotes the set of polynomials of degree at most $n$, $+$ denotes the addition of polynomials, and $\cdot$ denotes the scalar multiplication of a real number with a polynomial.
Actually, you can interpret "polynomial" in two different ways here: either as, say, an "abstract polynomial", i.e. simply a finite sequence of "coefficients" $(a_0, \dots, a_n)$ with no "$x$" in sight, or as a function (called a polynomial function) $\mathbb{R} \rightarrow \mathbb{R} \colon x \mapsto \sum_{i=0}^n a_i x^i$. In your setting, these two concepts are essentially the same, but maybe it helps to think of a polynomial as a function.
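Here is a small sketch of the two readings side by side, again in Python; `as_function` is a hypothetical helper name of my own. The tuple is the abstract polynomial, and the closure it returns is the polynomial function.

```python
# The abstract polynomial is just a coefficient tuple; the polynomial
# function is what you get by plugging in x. "as_function" is an
# illustrative name, not a standard API.

coeffs = (1.0, 0.0, 2.0)   # abstract polynomial (a_0, a_1, a_2), i.e. 1 + 2x^2

def as_function(coeffs):
    """Turn (a_0, ..., a_n) into the function x |-> sum_i a_i * x^i."""
    return lambda x: sum(a * x**i for i, a in enumerate(coeffs))

p = as_function(coeffs)
print(p(0.0), p(1.0), p(2.0))   # 1.0 3.0 9.0
```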
Basis & dimension.
Say you found a finite set of vectors $v_1, \dots, v_m \in V$ with the properties that (i) they "span" the whole vector space $V$, meaning every vector of $V$ arises as a linear combination $\sum_{i} \lambda_i v_i$ of these vectors, and (ii) none of the $v_i$ is a linear combination of the others (i.e., they are linearly independent). Then this set of vectors $v_1, \dots, v_m$ is called a basis of $V$. The fun thing here is that any basis of $V$ has the same number of vectors, and this number is called the dimension of $V$. Note: "a basis" but "the dimension".
A basis of $(\mathbb{R}^n, +, \cdot)$ is for instance $e_1, \dots, e_n$, with $e_i \in \mathbb{R}^n$ having all coordinates equal to zero, except the $i$-th, which is one. So $e_2 = (0, 1, 0, \dots, 0)$. The dimension of $(\mathbb{R}^n, +, \cdot)$ is therefore $n$; any basis has $n$ vectors.
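A quick sketch of this basis, assuming numpy: the rows of the identity matrix are exactly $e_1, \dots, e_n$, and every vector decomposes into them coordinate by coordinate.

```python
# The standard basis e_1, ..., e_n of R^n (here n = 4): the rows of the
# identity matrix. Requires numpy.
import numpy as np

n = 4
basis = np.eye(n)            # row i-1 is e_i
print(basis[1])              # e_2 = [0. 1. 0. 0.]

# Every x in R^n decomposes as x = sum_i x_i * e_i:
x = np.array([3.0, -1.0, 2.0, 0.5])
assert np.allclose(x, sum(x[i] * basis[i] for i in range(n)))
```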
A basis of $(P_n, +, \cdot)$ would be $f_0, \dots, f_n$ with $f_i \colon \mathbb{R} \rightarrow \mathbb{R} \colon x \mapsto x^i$. In your question you used the shorthand notation $x^i$ for $f_i$, which I would like to avoid in order to make a point (sic!) here.
A nice thing about this particular basis is that the polynomial $(a_0, \dots, a_n)$ (as abstract polynomial) resp. $x \mapsto a_0 + a_1 \cdot x + a_2 \cdot x^2 + \cdots + a_n \cdot x^n$ (as polynomial function) is easily decomposed as a linear combination of basis vectors, namely $a_0 f_0 + \cdots + a_n f_n$. Let us be precise about the notation here: When I write $a_0 + a_1 \cdot x + \cdots + a_n \cdot x^n$, the $+$ refers to addition of real numbers. When I write $a_0 f_0 + \cdots + a_n f_n$, the $+$ refers to addition of vectors, namely the ones in the vector space $(P_n, +, \cdot)$. For the fun of it one could say: The abstract polynomial $(a_0, \dots, a_n)$ has the coordinate vector $(a_0, \dots, a_n)$ with respect to the ordered basis $f_0, \dots, f_n$ of the vector space $P_n$.
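To see that the two readings of $+$ agree, here is a small sketch, with all names my own, checking pointwise that the vector-space sum $a_0 f_0 + \cdots + a_n f_n$ and the real-number formula give the same values.

```python
# Check that a_0*f_0 + ... + a_n*f_n (a sum of *functions*, i.e. of
# vectors in P_n) agrees with the usual formula (a sum of *real
# numbers*). All names here are illustrative.

a = (2.0, -1.0, 3.0)                               # (a_0, a_1, a_2)
f = [lambda x, i=i: x**i for i in range(len(a))]   # basis vectors f_i: x |-> x^i

def linear_combination(x):
    # vector-space side: scaled basis functions, summed, then evaluated
    return sum(a_i * f_i(x) for a_i, f_i in zip(a, f))

def polynomial_function(x):
    # real-number side: the usual formula a_0 + a_1*x + a_2*x^2
    return a[0] + a[1] * x + a[2] * x**2

for x in (-2.0, 0.0, 0.5, 3.0):
    assert abs(linear_combination(x) - polynomial_function(x)) < 1e-12
```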
As a side note: $P_n$ has dimension $n+1$, just as $\mathbb{R}^{n+1}$ does.
Conclusion.
In terms of linear algebra, the vectors (!) $f_1$ and $f_2$, which are your $x$ and $x^2$ in the question, are linearly independent, because there is no scalar $\lambda \in \mathbb{R}$ such that $\lambda f_1 = f_2$. (Since neither vector is the zero vector, this one condition already settles linear independence.) In other words, you cannot scale the polynomial function $x^1$ to get the polynomial function $x^2$, in your notation: evaluating at $x = 1$ would force $\lambda = 1$, while evaluating at $x = 2$ would force $\lambda = 2$. (From a calculus point of view: you cannot scale the graph of $x \mapsto x^1$ to get the graph of $x \mapsto x^2$.)
Another argument, but maybe harder to follow, would be the following. Consider the coefficient sequences (the abstract polynomials) of the two polynomials $f_1, f_2$. They are $(0, 1, 0, \dots)$ and $(0, 0, 1, 0, \dots)$. You cannot scale $(0, 1, 0, \dots)$ by scalar multiplication with $\lambda$ to get the vector $(0, 0, 1, 0, \dots)$. But I do not like this argument, because these coefficient sequences already look like the familiar vectors of $\mathbb{R}^n$.
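Both arguments can be checked mechanically; here is a sketch with sample points I picked arbitrarily, using numpy's `matrix_rank` for the coefficient version.

```python
# First argument: the lambda forced by lambda * x = x^2 differs from
# point to point, so no single lambda works. Sample points are arbitrary.
candidates = [x**2 / x for x in (1.0, 2.0, 3.0)]
print(candidates)   # [1.0, 2.0, 3.0] -- no common value, hence no such lambda

# Second argument: the coefficient vectors of f_1 and f_2 span a
# 2-dimensional space, i.e. neither is a multiple of the other.
import numpy as np
M = np.array([[0.0, 1.0, 0.0],     # f_1 = x   as (a_0, a_1, a_2)
              [0.0, 0.0, 1.0]])    # f_2 = x^2
print(np.linalg.matrix_rank(M))    # 2, so f_1 and f_2 are linearly independent
```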
Anyhow, in this sense your two polynomials "live in different dimensions", even though I would not phrase it like that, because in linear algebra the term "dimension" does not refer to particular linearly independent directions of a vector space.
By the way, when you argue that $x^2 = x \cdot x$, you do not operate on vectors in a vector space, but on the field of real numbers, which allows you to multiply field elements. In a vector space there is no multiplication $u \cdot v \in V$. (Well, not quite true, see the cross product, but this is not what you are looking for.)
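If you do multiply polynomials anyway, you are convolving coefficient sequences, and the result can leave the space, which is exactly the issue; a sketch using numpy's `convolve`:

```python
# Multiplying polynomials is convolution of coefficient sequences, and
# it can raise the degree out of the original space. Requires numpy.
import numpy as np

x_poly = np.array([0.0, 1.0])           # the polynomial x, an element of P_1
product = np.convolve(x_poly, x_poly)   # x * x
print(product)                          # [0. 0. 1.], i.e. x^2: degree 2, not in P_1
```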