I wonder if the notation
$$ P(X) = a_{0} + a_{1}X + a_{2}X^2$$
where $X$ is an indeterminate, is an abuse of notation. Is $X^2$ just $X_2$? To put it another way, what is the meaning of the power in that context?
$R[X]$ is a ring and it contains $X$, so you can multiply $X$ with itself. That's what $X^2$ means.
Given a commutative ring $R$ with unity, consider the set $R^{(\Bbb N)}$ of all sequences $(a_0,a_1,\ldots)$ in $R$ that are eventually zero. We add them coordinate-wise, and introduce multiplication as follows: if $A=(a_0,a_1,\ldots,a_n,0,\ldots)$ and $B=(b_0,b_1,\ldots,b_m,0,\ldots)$, the $n$-th term of $AB$ is $\sum_{i+j=n}a_ib_j$, that is, $AB=(a_0b_0,\ a_0b_1+a_1b_0,\ a_0b_2+a_1b_1+a_2b_0,\ \ldots)$ -- this is exactly how we want polynomials to multiply out.

If we identify $r\in R$ with $\hat r=(r,0,0,\ldots)$, we note that $\hat r A=(ra_0,ra_1,\ldots)$ by the definition above, that $\hat 0=(0,0,\ldots)$ is the additive neutral element, and that $\hat 1=(1,0,0,\ldots)$ is the multiplicative neutral element. One can check that addition and multiplication are associative, and with a little work verify the remaining requirements to show that $R^{(\Bbb N)}$ is a commutative ring with unity.

Now set $X=(0,1,0,\ldots)$. Using the definition of multiplication, we see that $X^k=(0,\ldots,0,1,0,\ldots)$, where the $1$ is in the $(k+1)$-th place. Thus we can write any $(a_0,a_1,\ldots,a_n,0,0,\ldots)$ as $a_0+a_1X+a_2X^2+\cdots+a_nX^n$, and by coordinate-wise equality, $a_0+a_1X+\cdots+a_nX^n$ and $b_0+b_1X+\cdots+b_mX^m$ are equal iff $n=m$ and $a_i=b_i$ for $i=0,\ldots,n$. Note that $R\ni r\mapsto\hat r$ is an injective homomorphism, so $R$ is embedded as a subring. Thus our ring is generated by $R\,(=\hat R)$ and $X$; we call it the polynomial ring over $R$ in one indeterminate, and denote it accordingly as $R[X]$.
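If it helps to see this construction computed, here is a minimal Python sketch of the eventually-zero-sequence construction, taking $R=\Bbb Z$; the names `add`, `mul`, and `X` are mine, chosen purely for illustration.

```python
def add(A, B):
    """Coordinate-wise addition of two coefficient sequences."""
    n = max(len(A), len(B))
    A = A + [0] * (n - len(A))
    B = B + [0] * (n - len(B))
    return [a + b for a, b in zip(A, B)]

def mul(A, B):
    """Convolution product: the n-th coefficient of A*B is sum_{i+j=n} a_i*b_j."""
    if not A or not B:
        return []
    C = [0] * (len(A) + len(B) - 1)
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            C[i + j] += a * b
    return C

X = [0, 1]                       # X = (0, 1, 0, 0, ...)
print(mul(X, X))                 # [0, 0, 1] -- i.e. X^2 = (0, 0, 1, 0, ...)
print(mul([1, 2], [3, 4]))       # (1 + 2X)(3 + 4X) = 3 + 10X + 8X^2 -> [3, 10, 8]
```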
I think there may be some confusion arising from the usual notion of variables. It is vital to note that in $R[X]$, the variable $X$ does not stand for some undetermined member of $R$, the way polynomials are treated in beginning calculus. It's (sort of) okay to think of $X$ as some undetermined member of an equally undetermined $R$-algebra, though. But really, you should just think of $X$ as a letter, and monomials $X^2$, $X^3$, etc. as expressions involving this letter. You don't care what they mean, other than that they must satisfy the multiplication rule $X^m\cdot X^n=X^{m+n}$. And then, a polynomial is a formal linear combination of these monomials, such as $\sum_{k=0}^n a_kX^k$ with $a_k\in R$.
So what is a formal linear combination? Good question. Just think of it as an expression like the sum above. The important point is that the expression uniquely determines the coefficients $a_k$. This is not a theorem; it is true by definition.
Finally, turn this set of polynomials into a ring (actually, an $R$-algebra) by defining sums and products via the expected formulae.
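To spell out what the "expected formulae" presumably are (a standard sketch, with both sums padded by zero coefficients so the indices match):
$$\sum_{k} a_kX^k+\sum_{k} b_kX^k=\sum_{k}(a_k+b_k)X^k,\qquad
\Big(\sum_i a_iX^i\Big)\Big(\sum_j b_jX^j\Big)=\sum_k\Big(\sum_{i+j=k}a_ib_j\Big)X^k.$$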
PS. There is also a category-theoretic definition which may be more satisfying, if you know the requisite theory.
It means exactly what it usually does--that is, $X\cdot X.$ Not knowing what $X$ is doesn't change that.
At some point, we may want to evaluate the polynomial by substituting some value in for our indeterminate $X,$ but until then, it's basically just a placeholder.
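To make that substitution step concrete (a standard fact, stated here as a sketch): for any commutative $R$-algebra $S$ and any $c\in S$, there is a unique $R$-algebra homomorphism $\mathrm{ev}_c\colon R[X]\to S$ with $\mathrm{ev}_c(X)=c$, namely
$$\mathrm{ev}_c\big(a_0+a_1X+\cdots+a_nX^n\big)=a_0+a_1c+\cdots+a_nc^n.$$
Until we apply such a map, $X$ remains a pure placeholder.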
It seems you may be wondering how to give rigorous meaning to the powers $x^k$ of the indeterminate in $R[x]$.
$R[x]\,$ may be constructed as the subring of linear maps on the space $R^{\Bbb N}$ of sequences $(r_0,r_1,r_2,\ldots)\,$ generated by the $\,r$-scalings $\ v \mapsto r v = (r v_0, r v_1, \ldots)$ and the shift map $\,v\mapsto xv = (0,v_0,v_1,\ldots).\,$ These form a ring with multiplication being composition of linear maps $\, (fg)v = f(gv),\,$ where $\,\ x^2(1,0,0,\ldots) = x(x(1,0,0,\ldots)) = x(0,1,0,0,\ldots) = (0,0,1,0,0,\ldots),\ $ $\,\ x^3(1,0,0,\ldots) = x(x^2(1,0,0,\ldots)) = x(0,0,1,0,0,\ldots) = (0,0,0,1,0,0,\ldots),\, \ldots\ $ so $$ (a_n x^n+\cdots + a_1 x+ a_0)(1,0,0,\ldots) = (a_0,a_1,\ldots, a_n,0,0,\ldots)\qquad$$
This implies that two polynomials are equal iff they have the same coefficients, by comparing their values on $(1,0,0,\ldots)$ as above.
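For a concrete check of this shift-map picture, here is a small Python sketch over $R=\Bbb Z$; `scale`, `shift`, and `apply_poly` are illustrative names of mine, with sequences truncated to finite lists (implicitly zero beyond their length).

```python
def scale(r, v):
    """The r-scaling v |-> r*v = (r*v_0, r*v_1, ...)."""
    return [r * vi for vi in v]

def shift(v):
    """The shift map v |-> x*v = (0, v_0, v_1, ...)."""
    return [0] + v

def apply_poly(coeffs, v):
    """Apply the linear map a_0 + a_1*x + ... + a_n*x^n to the sequence v."""
    result = [0] * (len(v) + len(coeffs) - 1)
    for a in coeffs:                    # coeffs = [a_0, a_1, ..., a_n]
        for i, wi in enumerate(scale(a, v)):
            result[i] += wi
        v = shift(v)                    # v now holds the next power of x applied to the input
    return result

# (2x^2 + 3x + 5)(1,0,0,...) = (5,3,2,0,...), matching the display above:
print(apply_poly([5, 3, 2], [1]))       # [5, 3, 2]
```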
Remark $ $ This is essentially a special case of the fact that every ring can be represented as a subring of the ring of linear maps on its underlying additive group, a ring-theoretic analog of Cayley's theorem for groups. The universal aspects of this construction are better appreciated when one studies universal algebra, e.g. see George Bergman's notes. Then one can discover this construction in a natural manner, as a solution to a universal mapping problem, using fairly general principles. In particular, look up "van der Waerden's trick" (which, if memory serves correctly, is discussed in Bergman's notes).
Very much along the lines of @BillDubuque’s response, I suggest this: Let $R$ be your given ring, and form the set $\mathrm{Map}_0(\mathbb N,R)$, namely the set of all maps from the nonnegative integers $\mathbb N$ to $R$ that are zero at all but finitely many members of $\mathbb N$. The addition is coordinatewise (pointwise, if you like), and multiplication proceeds this way: $$ (f*g)(n)=\sum^\infty_{i=0}f(i)g(n-i)\,, $$ where $f$ and $g$ are given elements of the set, taken to be zero at negative integers. This looks like an infinite sum, but in fact it is finite, because $f$ and $g$ have finite support. Now we define $X$ to be the function that takes the value $1$ at $1$ and is zero everywhere else; it’s in our set. We consider $R$ to be contained in our set by sending the element $a\in R$ to the function that’s $a$ at $0$ and zero everywhere else. So, for instance, $aX^2$ is the function that’s $a$ at $2$ and zero everywhere else. Finally, rename our set $R[X]$.
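A brief Python sketch of this variant, representing finite-support maps $\mathbb N\to R$ as dicts over $R=\Bbb Z$ (missing keys mean zero); the names `conv` and `X` are mine, not from any library.

```python
def conv(f, g):
    """(f*g)(n) = sum_{i+j=n} f(i)*g(j): convolution of finite-support maps."""
    h = {}
    for i, fi in f.items():
        for j, gj in g.items():
            h[i + j] = h.get(i + j, 0) + fi * gj
    return h

X = {1: 1}                   # the function that is 1 at 1 and zero elsewhere
a = 7
aX2 = conv({0: a}, conv(X, X))
print(aX2)                   # {2: 7}: the function that is a at 2, zero elsewhere
```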