In high school we learnt that, given a non-constant polynomial $P(X) \in \mathbb C[X]$ with roots $a_i$, $i = 1, 2, \ldots, \deg P$ (counted with multiplicity) over $\mathbb C$, there is a unique (up to scaling by a constant factor) polynomial $Q(X) \in \mathbb C[X]$ whose roots (counted with multiplicity) are $a_i^2$, $i = 1, 2, \ldots, \deg P$, and that $Q$ can be found as follows:
Formally substitute $x = \sqrt{t}$ into the polynomial equation $P(x) = 0$, separate the terms with integer exponent in $t$ from those with half-integer exponent in $t$, and move the latter terms to the other side of the equation. Squaring both sides then yields a polynomial in $t$, and this polynomial has the $a_i^2$ as its roots over $\mathbb C$.
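To make the recipe concrete: writing $P(x) = E(x^2) + x\,O(x^2)$, the "separate and square" step produces $Q(t) = E(t)^2 - t\,O(t)^2$. Here is a minimal Python sketch of that computation (the function name `square_roots_poly` is mine, not standard):

```python
def square_roots_poly(coeffs):
    """Given coeffs[k] = coefficient of x^k in P(x), return the coefficients
    of Q(t) = E(t)^2 - t*O(t)^2, where P(x) = E(x^2) + x*O(x^2).
    This is the 'substitute x = sqrt(t), separate, square' recipe."""
    E = coeffs[0::2]            # even-index coefficients: integer powers of t
    O = coeffs[1::2] or [0]     # odd-index coefficients: half-integer powers of t

    def mul(a, b):              # naive polynomial multiplication (convolution)
        r = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                r[i + j] += ai * bj
        return r

    E2 = mul(E, E)              # E(t)^2
    tO2 = [0] + mul(O, O)       # t * O(t)^2
    n = max(len(E2), len(tO2))
    E2 += [0] * (n - len(E2))
    tO2 += [0] * (n - len(tO2))
    return [e - o for e, o in zip(E2, tO2)]

# P(x) = (x - 1)(x - 2) = x^2 - 3x + 2 has roots 1, 2:
print(square_roots_poly([2, -3, 1]))   # [4, -5, 1], i.e. t^2 - 5t + 4 = (t-1)(t-4)
```

Note that feeding in $(x+1)(x-2) = x^2 - x - 2$ instead, i.e. `[-2, -1, 1]`, returns the same `[4, -5, 1]`, which is exactly the sign-invariance I am asking about below.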
I can see how this algorithm can be motivated: if $P(a) = 0$, then $P\big((\sqrt a)^2\big) = 0$. But if we wish to justify it formally, we run into the factorisation $P(x) = k(x - a_1)(x - a_2) \cdots (x - a_{\deg P})$, and when we change $a_1$ to $-a_1$, I do not see why this change leaves the output polynomial unaffected. That is, I do not see why the output polynomial is invariant under changing the sign of a root of $P$.
We could probably prove it by bashing it with the theory of symmetric polynomials and the like, but I would still be questioning why such a simple algorithm works, and why its proof should be so complicated.
(E.g. in the quadratic case, suppose $P_1(X) = (X - a)(X - b)$ and $P_2(X) = (X + a)(X - b)$. Formally substituting $x = \sqrt{t}$, we get $t + ab = (a + b)\sqrt{t}$ in the former case and $t - ab = (b - a)\sqrt{t}$ in the latter. It is only after squaring both sides that the two polynomials become the same, and for a very non-obvious reason. I do not know how to extend this observation to higher degrees.)
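For what it's worth, the quadratic coincidence can be checked mechanically; here is a small sketch with concrete numbers (the function name and the choice $a = 2$, $b = 3$ are just for illustration):

```python
def squared_roots_quadratic(a, b):
    # P(X) = (X - a)(X - b) = X^2 - (a + b)X + ab.
    # Substituting x = sqrt(t) gives t + ab = (a + b)sqrt(t);
    # squaring both sides: t^2 + (2ab - (a + b)^2) t + (ab)^2 = 0.
    s, p = a + b, a * b
    return (1, 2 * p - s * s, p * p)   # coefficients of t^2, t^1, t^0

# Flipping the sign of the root a leaves the squared equation unchanged:
print(squared_roots_quadratic(2, 3) == squared_roots_quadratic(-2, 3))  # True
```

Expanding by hand shows why the two agree here: both reduce to $t^2 - (a^2 + b^2)t + a^2b^2$, whose coefficients involve $a$ and $b$ only through their squares; what I cannot see is why this must happen in every degree.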
Thank you for your consideration.
Wilson