
This is perhaps a silly question, so forgive me as I am not used to studying abstract algebra and the like.

In Lipschutz's Linear Algebra, he mentions how the roots of a polynomial over some field $K$ will depend on the nature of $K$ (previously in the book, the reader was fine thinking of any old field). Obviously this follows from certain polynomials (most prominently $f(x) = x^2+1$) having roots in $\Bbb C$ but not in $\Bbb R$. What I'm curious about is the connection between this notion of polynomials (as mappings from one set to another) and the notion of polynomials as elements of a ring, where we speak of $x = (\ldots,0,1,0)$, $x^2 = (\ldots,1,0,0)$, etc., as forming the basis of the vector space of polynomials. I understand how this basis can be used to construct any conceivable polynomial as a linear combination using scalars from the field over which the given polynomial space is taken. But what does it mean for this more abstract representation of polynomials to have a root? I can't just substitute in $x = 5 = (\ldots,0,5,0)$, whatever that would even mean.

Hoping someone can shed some light or point me to a useful resource here.

EE18
  • 1,211
  • $(\ldots,0,5,0)$ would correspond to $5x$ – J. W. Tanner Mar 30 '20 at 16:59
  • How much algebra do you know? In particular, are you familiar with the motivation for the construction of the ring $R[x]$ of formal polynomials? Key is its universal mapping property, which allows us to evaluate a polynomial equation $p(x) = q(x)$ not only at elements $x \in R$ but in any ring containing an image of $R$ that commutes with the image of $x$, i.e. in any $R$-algebra. So formal polynomials are a sort of "universal function" - they induce functions on any ring containing a (central) image of $R$. – Bill Dubuque Mar 30 '20 at 18:49
  • This allows us to prove once-and-for-all ("universally") fundamental identities like the difference-of-squares factorization, the binomial theorem, identities of determinants, resultants, etc., then specialize (evaluate) them as need be for specific cases. For this we need the evaluation map to be a ring homomorphism, i.e. $\xi_a(f+g) = \xi_a(f)+\xi_a(g)$ and $\xi_a(fg) = \xi_a(f)\xi_a(g)$. This combined with $\xi_a(x) = a$ uniquely determines $\xi_a$ and yields the usual result for evaluation of polynomial functions on $R$. Hence the definition of formal polynomial evaluation. – Bill Dubuque Mar 30 '20 at 18:50
  • It is quite deceptive at first glance just how much power this universality affords, e.g. the above linked answer gives a striking example of a simple universal proof of Sylvester's determinant identity that even some grad students and professors have difficulty grasping (until one helps them get past innate analytic bias). – Bill Dubuque Mar 30 '20 at 18:50
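To make the universal property described in these comments concrete: because an identity such as $x^2 - 1 = (x-1)(x+1)$ holds formally in $R[x]$, evaluating both sides at any element of an $R$-algebra, for instance a square matrix, must give equal results. Here is a minimal Python sketch of that specialization (the helper names are ad hoc, not from any library):

```python
def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    """Entrywise sum of two 2x2 matrices."""
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, A):
    """Scalar multiple of a 2x2 matrix."""
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]
A = [[2, 1], [0, 3]]   # an arbitrary matrix; A commutes with scalars

# Evaluate both sides of x^2 - 1 = (x - 1)(x + 1) at the matrix A;
# the constant 1 becomes the identity matrix I.
lhs = mat_add(mat_mul(A, A), mat_scale(-1, I))
rhs = mat_mul(mat_add(A, mat_scale(-1, I)), mat_add(A, I))
print(lhs == rhs)   # True: the formal identity specializes to matrices
```

The point is that nothing about matrices was used in proving the identity; the evaluation homomorphism $\xi_A$ transports it automatically.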

1 Answer


When $K$ is a field, in algebra we usually speak of the ring $K[x]$ of abstract polynomials, a purely formal construction in which each element is a sequence $f = (f_0, f_1, f_2, \ldots)$ of elements of $K$ which is eventually zero. We define addition, subtraction and scalar multiplication pointwise, and multiplication of polynomials by $fg = (f_0 g_0, f_1 g_0 + f_0 g_1, \ldots)$, or more formally $$(fg)_k = \sum_{i + j = k} f_i g_j,$$ just as you would for polynomials. Because this notation is quite inconvenient, one usually writes the "formal sum" $f = \sum_{i \geq 0} f_i x^i$, which allows quite concise notation like $f = x^{10} - x$, but it should be understood that this is just alternative notation for a formal construction with sequences.
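To see that this really is just bookkeeping with sequences, here is a minimal Python sketch (the helper names `poly_add` and `poly_mul` are ad hoc, not a library API): a polynomial is a list of coefficients with the constant term first, and multiplication is exactly the convolution sum above.

```python
def poly_add(f, g):
    """Pointwise addition of coefficient sequences."""
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def poly_mul(f, g):
    """(fg)_k = sum over i + j = k of f_i * g_j  (the convolution formula)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

# x = [0, 1], and x^2 = poly_mul(x, x) = [0, 0, 1]; no variable is involved.
x = [0, 1]
f = [0, -1] + [0] * 8 + [1]   # the sequence notation for x^10 - x
```

Note that `poly_mul(x, x)` returns `[0, 0, 1]`, the sequence the question denotes $x^2$.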

Each polynomial $f$ gives a function $e_f \colon K \to K$ by evaluation ($e$ stands for "evaluation" here): for each element $k \in K$ we define $e_f(k) = \sum_{i \geq 0} f_i k^i$. However, not every function $K \to K$ is of the form $e_f$ for some polynomial (for example, the absolute value function when $K = \mathbb{R}$ is not given by any polynomial), and unless the base field is infinite we cannot in general recover the polynomial $f$ from the function $e_f$. For example, if $K$ is a finite field with $q$ elements, then the polynomial $f = x^q - x$ is clearly nonzero as a formal polynomial, but $e_f(k) = 0$ for all $k \in K$.
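The finite-field example can be checked directly; a sketch in Python, representing $\mathbb{Z}/5\mathbb{Z}$ (so $q = 5$) by integers reduced mod 5:

```python
def evaluate(f, k):
    """e_f(k) = sum of f_i * k^i, computed by Horner's rule."""
    acc = 0
    for c in reversed(f):   # highest coefficient first
        acc = acc * k + c
    return acc

# f = x^5 - x is nonzero as a formal polynomial (constant term first)...
q = 5
f = [0, -1, 0, 0, 0, 1]

# ...yet its evaluation function is identically zero on the field:
values = [evaluate(f, k) % q for k in range(q)]
print(values)   # [0, 0, 0, 0, 0]
```

So the formal polynomial `f` and the zero polynomial induce the same function on this field, even though they are different sequences.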

Nevertheless, we say that a polynomial $f$ has a root at $k \in K$ if $e_f(k) = 0$, and this definition always does the right thing. One can prove (this is the factor theorem) that if $e_f(k) = 0$ then $(x - k)$ divides $f$, in the sense that $f = (x - k)g$ for some other polynomial $g \in K[x]$. So you could equivalently define roots of the polynomial $f$ by looking at its linear factors.
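The division $f = (x-k)g + r$ can be carried out entirely on coefficient sequences by synthetic division, and the constant remainder $r$ turns out to equal $e_f(k)$, which is why $k$ is a root exactly when $(x - k)$ divides $f$. A sketch (the helper name is ad hoc):

```python
def divide_linear(f, k):
    """Divide the coefficient sequence f (constant term first) by (x - k).
    Returns (g, r) with f = (x - k) * g + r; r is a constant and equals e_f(k)."""
    g = []
    acc = 0
    for c in reversed(f):   # synthetic division, leading coefficient first
        acc = acc * k + c
        g.append(acc)
    r = g.pop()             # final accumulator value is the remainder
    g.reverse()
    return g, r

# f = x^2 - 1 has a root at k = 1, so the remainder vanishes:
g, r = divide_linear([-1, 0, 1], 1)
print(g, r)   # [1, 1] 0  -> quotient x + 1, remainder 0
```

For a non-root, say $f = x^2 + 1$ at $k = 1$, the same routine returns remainder $2 = e_f(1)$.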

You also asked about polynomials like $f = x^2 + 1 \in \mathbb{R}[x]$. We would say that this has no roots in $\mathbb{R}$, because $e_f(k) > 0$ for all $k \in \mathbb{R}$, or (equivalently) because it has no linear factors $(x - k) \in \mathbb{R}[x]$. However, it does have roots over $\mathbb{C}$, meaning that when we view $f$ as an element of $\mathbb{C}[x]$ it factors as $f = (x - i)(x + i)$. So the existence of roots will in general depend on the field $K$.
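One can verify this factorization over $\mathbb{C}$ with the same convolution rule for coefficient sequences, using Python's built-in complex numbers:

```python
# Coefficient sequences (constant term first) over the complex numbers:
x_minus_i = [-1j, 1]   # x - i
x_plus_i  = [ 1j, 1]   # x + i

# Multiply by the convolution rule (fg)_k = sum over i + j = k of f_i * g_j:
prod = [0j] * 3
for i, a in enumerate(x_minus_i):
    for j, b in enumerate(x_plus_i):
        prod[i + j] += a * b

print(prod)   # [(1+0j), 0j, (1+0j)], i.e. the sequence for x^2 + 1
```

The very same sequence $(1, 0, 1)$ has linear factors in $\mathbb{C}[x]$ but none in $\mathbb{R}[x]$, which is the field-dependence the question asks about.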

Joppy
  • 12,875