For example, the rules for arithmetic on complex numbers are a direct result of complex numbers being defined as pairs of real numbers. Thus, the algebraic properties of the reals also apply to the complex numbers. Is there a term for this?
-
I understand $\mathbb{C}$ has additional properties of its own, but I'm wondering how to describe the process by which one set 'inherits' properties from a subset it's built on. – Teleoplexic Apr 25 '22 at 12:14
-
If you are familiar with rings, in particular basic properties of polynomial rings and quotient rings, then you can find an answer here. That assumes you construct $\mathbb{C}$ efficiently using general ring constructions (vs. brute-force verifying all the ring laws, as some authors do). – Bill Dubuque Apr 25 '22 at 12:27
-
My original example - and I'm not sure if it's relevant to the idea I'm trying to get at - is that $\alpha+\beta = \beta + \alpha$ for all $\alpha, \beta \in \mathbb{C}$ as a direct result of the definition of a complex number as an ordered pair of reals. Thus, addition is commutative in $\mathbb{C}$ because it is commutative in $\mathbb{R}$. Is there some sort of term for this to describe how a set's algebraic structure is built on that of a subset? – Teleoplexic Apr 25 '22 at 12:32
-
In your prior comment it seems you are viewing $\Bbb C$ as an algebra over the field $\Bbb R$. It is difficult to know how to answer your question without knowing whether you are familiar with these basic algebraic structures (rings, modules, algebras, and their quotients). Please clarify that. – Bill Dubuque Apr 25 '22 at 13:04
-
I disagree with the vote to close. This question is indeed a little vague, but I think this is a situation where it's impossible to ask a much clearer version of the question without already knowing how to answer it. – Noah Schweber Apr 25 '22 at 16:01
-
I agree with @NoahSchweber that it is a worthwhile Question, despite being asked at a disadvantage of not knowing the best terminology. The additive properties of $\mathbb R$ carry over easily into the construction of $\mathbb C$, but the multiplicative properties not so easily. You might be interested in what happens when we try to repeat this construction, getting the quaternions, the octonions, the sedenions... and then things run dry, sort of. – hardmath Apr 25 '22 at 16:15
1 Answer
This is a great question! There's a lot of material here, and no snappy answer; broadly speaking, you're running into themes from universal algebra, category theory, and model theory (which is not to say that these themes are exclusive to those areas of course).
To set the stage, I want to mention three separate facts:
1. The commutativity and associativity of addition of complex numbers does indeed "come immediately" from the commutativity and associativity of addition of real numbers.
2. Meanwhile, the commutativity and associativity of multiplication of complex numbers does not follow quite so easily, and this isn't too surprising given that the definition of complex multiplication is more intricate than the definition of complex addition.
   - To be a bit more explicit: if we think of $a+bi$ as the ordered pair $(a,b)$, "easy" multiplication would be $(a,b)(c,d)=(ac,bd)$, but "complex" (hehe) multiplication is $(a,b)(c,d)=(ac-bd, ad+bc)$. (Both are spelled out in the sketch just after this list.)
3. And there are basic algebraic properties which do not in fact lift from $\mathbb{R}$ to $\mathbb{C}$ at all. For example, the rule $$\forall x,y(x^2+y^2=0\implies x=y=0)$$ is true in $\mathbb{R}^2$ but false in $\mathbb{C}$. Put another way, we can do division in the complex numbers but not in the "square real numbers."
So there are real distinctions to be made.
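To make the contrast in points 2 and 3 concrete, here is a minimal Python sketch (the function names are mine and purely illustrative): it implements both candidate multiplications on pairs and exhibits the witness $1^2+i^2=0$ in $\mathbb{C}$.

```python
# Pairs (a, b) of floats stand for a + bi; all names here are illustrative.

def mul_componentwise(p, q):
    """The "easy" multiplication: (a,b)(c,d) = (ac, bd)."""
    (a, b), (c, d) = p, q
    return (a * c, b * d)

def mul_complex(p, q):
    """Genuine complex multiplication: (a,b)(c,d) = (ac - bd, ad + bc)."""
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

def add(p, q):
    """Componentwise addition, i.e. complex addition."""
    return (p[0] + q[0], p[1] + q[1])

one = (1.0, 0.0)   # the complex number 1
i   = (0.0, 1.0)   # the complex number i

# Under genuine complex multiplication, 1^2 + i^2 = 0 even though 1, i != 0,
# so the rule "x^2 + y^2 = 0  implies  x = y = 0" fails in C.
print(add(mul_complex(one, one), mul_complex(i, i)))  # (0.0, 0.0)
```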
Products preserve equations (and more!)
Let's start by focusing on point $1$ - the tameness of $\mathbb{C}$-addition as an immediate consequence of the tameness of $\mathbb{R}$-addition. This is actually (relatively) easy to explain:
There is a very general way of combining two (or more but let's ignore that for now) "algebraic structures of the same type," such as two sets equipped with a binary operation: the direct product. In the specific case of sets equipped with a single binary operation, this is defined as follows: given "starting structures" $(A_1;*_1), (A_2;*_2)$, the product structure $$(A_1;*_1)\times (A_2;*_2)$$ is just $(A_1\times A_2; \star),$ where $\star$ is defined componentwise as $(a_1,a_2)\star (b_1,b_2)=(a_1*_1b_1, a_2*_2b_2)$.
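As a quick illustration, here is the direct product of two sets-with-a-binary-operation in a few lines of Python (a sketch of mine with illustrative names, not standard library code); applied to two copies of $(\mathbb{R};+)$ it produces exactly complex addition on pairs, which is the next point.

```python
def direct_product(op1, op2):
    """Given binary operations *_1 and *_2, return the componentwise
    operation on pairs: (a1, a2) ★ (b1, b2) = (a1 *_1 b1, a2 *_2 b2)."""
    def star(x, y):
        (a1, a2), (b1, b2) = x, y
        return (op1(a1, b1), op2(a2, b2))
    return star

# Example: (R; +) x (R; +) is exactly complex addition on pairs.
plus = lambda a, b: a + b
complex_add = direct_product(plus, plus)
print(complex_add((1.0, 2.0), (3.0, 4.0)))  # (4.0, 6.0): (1+2i) + (3+4i) = 4+6i
```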
Now why is this relevant? Well, if we look just at the additive structure in each case, we get $$(\mathbb{R};+)^2\cong(\mathbb{C};+).$$ So the way complex addition is built from real addition really is such a "one-step" process.
WARNING: I am not saying that "$\mathbb{R}^2\cong\mathbb{C}$"! That would be horribly misleading - that notation suggests that I'm referring to both addition and multiplication, and it is not true that $(\mathbb{R};+_\mathbb{R},\times_\mathbb{R})^2\cong(\mathbb{C};+_\mathbb{C},\times_\mathbb{C})$. For example, in $\mathbb{R}^2$ we have $(1,0)\times (0,1)=(0,0)$, but in $\mathbb{C}$ we can never multiply two nonzero things to get zero.
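To echo the warning in code (a two-line sketch of mine):

```python
def mul_pairs(p, q):
    # Componentwise multiplication on R^2: NOT complex multiplication.
    (a, b), (c, d) = p, q
    return (a * c, b * d)

# Two nonzero pairs multiplying to zero: R^2 has zero divisors, C does not.
print(mul_pairs((1.0, 0.0), (0.0, 1.0)))  # (0.0, 0.0)
```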
It turns out that there is a very general result that applies in such a situation:
$Th_{eq}(\mathcal{A}\times\mathcal{B})=Th_{eq}(\mathcal{A})\cap Th_{eq}(\mathcal{B})$.
(In fact this holds for products of infinitely many structures as well, but meh.) Here $\mathcal{A},\mathcal{B}$ are "algebraic structures of the same type" - e.g. sets equipped with a single binary operation, in our case - $\times$ is the direct product, and $Th_{eq}(\mathcal{X})$ is the equational theory of $\mathcal{X}$; that is, the set of all expressions of the form $$\forall x_1,...,x_n[t(x_1,...,x_n)=s(x_1,...,x_n)]$$ for terms $t,s$ in the language of $\mathcal{X}$ which are true in $\mathcal{X}$ for any assignment of values to variables. That's a bit of a mouthful, but all it means is that the equational theory contains things like commutativity (= "$\forall x_1,x_2[x_1*x_2=x_2*x_1]$") and associativity (= "$\forall x_1,x_2,x_3[x_1*(x_2*x_3)=(x_1*x_2)*x_3]$").
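Everything here is checkable by brute force for small finite structures. Here is a Python sketch (my own toy example, with $(\mathbb{Z}/2;+)$ and $(\mathbb{Z}/3;-)$ as the factors) illustrating that the product satisfies one of these equations exactly when both factors do:

```python
from itertools import product

def is_commutative(elems, op):
    return all(op(x, y) == op(y, x) for x, y in product(elems, repeat=2))

def is_associative(elems, op):
    return all(op(x, op(y, z)) == op(op(x, y), z)
               for x, y, z in product(elems, repeat=3))

# A = (Z/2; +): commutative and associative.  B = (Z/3; subtraction): neither.
A_elems, A_op = [0, 1], lambda x, y: (x + y) % 2
B_elems, B_op = [0, 1, 2], lambda x, y: (x - y) % 3

# The direct product: pairs under the componentwise operation.
P_elems = list(product(A_elems, B_elems))
P_op = lambda p, q: (A_op(p[0], q[0]), B_op(p[1], q[1]))

for name, test in [("commutative", is_commutative), ("associative", is_associative)]:
    # Each equation holds in the product iff it holds in both factors.
    print(name, test(A_elems, A_op), test(B_elems, B_op), test(P_elems, P_op))
```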
And this is just the start of the story. Here are a couple more points:
Equational truth is preserved by taking substructures and quotients as well (although we may in each case gain new equations to boot - a quotient of a nonabelian group might be abelian, for example). And in a precise sense (the HSP theorem - see e.g. here) this is the whole story.
Meanwhile, products preserve more than just equations - they also preserve implications between equations. If "$s=t\implies u=v$" is true in both $\mathcal{A}$ and $\mathcal{B}$, then it will also be true in $\mathcal{A}\times\mathcal{B}$. A relevant term here is "Horn clause" (and you may be interested in the results mentioned here).
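We can't brute-force a check over $\mathbb{R}$, so as a finite stand-in (the choice of $\mathbb{Z}/3$ is mine) here is a sketch verifying that the Horn clause $x^2+y^2=0\implies x=y=0$ holds in $\mathbb{Z}/3$ and survives passage to the product $\mathbb{Z}/3\times\mathbb{Z}/3$:

```python
from itertools import product

def horn_holds(elems, add, mul, zero):
    """Check: for all x, y, if x*x + y*y = 0 then x = y = 0."""
    return all(add(mul(x, x), mul(y, y)) != zero or (x == zero and y == zero)
               for x, y in product(elems, repeat=2))

# In Z/3 the squares are {0, 1}, so x^2 + y^2 = 0 forces x = y = 0.
Z3 = [0, 1, 2]
add3 = lambda x, y: (x + y) % 3
mul3 = lambda x, y: (x * y) % 3
print(horn_holds(Z3, add3, mul3, 0))  # True

# The clause survives in the product Z/3 x Z/3 with componentwise operations.
P = list(product(Z3, Z3))
addP = lambda p, q: (add3(p[0], q[0]), add3(p[1], q[1]))
mulP = lambda p, q: (mul3(p[0], q[0]), mul3(p[1], q[1]))
print(horn_holds(P, addP, mulP, (0, 0)))  # True
```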
Building "full" $\mathbb{C}$ from "full" $\mathbb{R}$
OK, now what happens when we fold multiplication into the mix?
Right off the bat things are more complicated. Let's compare the commutativity proofs for complex addition and complex multiplication. The proof of commutativity of addition is more-or-less just $$(a,b)+_\mathbb{C}(c,d)=(a+_\mathbb{R}c, b+_\mathbb{R}d)=(c+_\mathbb{R}a, d+_\mathbb{R}b)=(c,d)+_\mathbb{C}(a,b).$$ We literally just apply $+_\mathbb{R}$-commutativity twice in the obvious ways. The proof of commutativity of multiplication, however, is a bit more involved: it goes as $$(a,b)\times_\mathbb{C}(c,d)=((a\times_\mathbb{R}c)-_\mathbb{R}(b\times_\mathbb{R}d), (a\times_\mathbb{R}d)+_\mathbb{R}(b\times_\mathbb{R}c))=((c\times_\mathbb{R}a)-_\mathbb{R}(d\times_\mathbb{R}b), (c\times_\mathbb{R}b)+_\mathbb{R}(d\times_\mathbb{R}a))=(c,d)\times_\mathbb{C}(a,b).$$ We had to use the commutativity of addition along the way (look carefully)! And subtraction - a very non-commutative operation - made an appearance as well. Meanwhile, more complicated iterations of the basic idea of building $\mathbb{C}$ from $\mathbb{R}$ (e.g. the quaternions) do indeed fail to be multiplicatively commutative. Finally (and this was point (3) above, but let's recall it anyways since it's been a while), since $-1$ has square roots in $\mathbb{C}$ but not in $\mathbb{R}$, it's certainly not true that all basic facts about multiplication lift from $\mathbb{R}$ to $\mathbb{C}$.
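If you'd rather not track the algebra by hand, the multiplication computation can be checked symbolically; here is a small sketch using sympy (a tooling choice of mine, not part of the answer):

```python
# Symbolic check that (a,b)(c,d) = (ac - bd, ad + bc) is commutative.
from sympy import symbols, simplify

a, b, c, d = symbols('a b c d', real=True)

def mul(p, q):
    (x1, y1), (x2, y2) = p, q
    return (x1 * x2 - y1 * y2, x1 * y2 + y1 * x2)

lhs = mul((a, b), (c, d))
rhs = mul((c, d), (a, b))
print(all(simplify(l - r) == 0 for l, r in zip(lhs, rhs)))  # True
```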
So there is a genuine issue here. It turns out that we can build $(\mathbb{C};+,\times)$ from $(\mathbb{R};+,\times)$ in an equation-preserving way, but it's a bit tricky:
We start by forming the direct product of infinitely many copies of $(\mathbb{R};+,\times)$. Concretely, an element of this really big algebra is an infinite sequence $(r_i)_{i\in\mathbb{N}}$ of real numbers, and we add and multiply componentwise. (Note that $(0,0,0,...)$ is the additive identity and $(1,1,1,...)$ is the multiplicative identity.) Per above remarks, this giant structure - call it "$\mathcal{S}$" - has the same equational theory as the reals.
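One way to play with this infinite power computationally is to model a sequence $(r_i)_{i\in\mathbb{N}}$ as a function from indices to reals (a representation choice of mine, not part of the construction itself):

```python
# Elements of the infinite power R^N modeled as functions i -> r_i.

def const(r):          # the constant sequence (r, r, r, ...)
    return lambda i: r

x = lambda i: i + 1    # the distinguished sequence (1, 2, 3, ...)

def add_seq(s, t):     # componentwise addition
    return lambda i: s(i) + t(i)

def mul_seq(s, t):     # componentwise multiplication
    return lambda i: s(i) * t(i)

zero, one = const(0.0), const(1.0)   # the two identities noted above

# A typical element of the substructure X: the sequence for 2x^2 + 3.
p = add_seq(mul_seq(const(2.0), mul_seq(x, x)), const(3.0))
print([p(i) for i in range(4)])  # [5.0, 11.0, 21.0, 35.0]
```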
Now we carve out a particular "substructure" - let $\mathcal{X}$ be the set of all sequences of real numbers that you can build via (componentwise) addition and multiplication from the constant sequences and the sequence $\mathfrak{x}=(1,2,3,...)$.
With some work, you can show that this $\mathcal{X}$ is isomorphic to the polynomial ring $\mathbb{R}[X]$; the isomorphism I have in mind is the one generated by sending each constant sequence $(r,r,r,r,...)$ to the constant polynomial $r$, and sending $\mathfrak{x}$ to the monomial $X$. Now it's well-known that $\mathbb{C}$ is isomorphic to a quotient of $\mathbb{R}[X]$, namely $\mathbb{R}[X]/(X^2+1)$, so in fact we have that $\mathbb{C}$ is (isomorphic to) a quotient of a substructure of a power of $\mathbb{R}$ - so, by the above remarks, every equation true in $\mathbb{R}$ is true in $\mathbb{C}$ as well. And since $\mathbb{R}$ is (isomorphic to) a substructure of $\mathbb{C}$, the converse holds too - their equational theories coincide!
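The quotient step can be made concrete as well. Here is a minimal sketch (representation and names mine) of polynomials over $\mathbb{R}$ as coefficient lists, reduced modulo $X^2+1$; the reduced pairs multiply exactly like complex numbers:

```python
# Polynomials over R as coefficient lists [c0, c1, c2, ...].

def poly_mul(p, q):
    """Multiply two polynomials given by coefficient lists."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def mod_x2_plus_1(p):
    """Reduce modulo X^2 + 1, i.e. rewrite X^2 as -1 until the degree is < 2."""
    p = list(p) + [0.0, 0.0]
    for k in range(len(p) - 1, 1, -1):
        p[k - 2] -= p[k]   # c*X^k = c*X^(k-2) * X^2 = -c*X^(k-2)
        p[k] = 0.0
    return p[:2]           # [real part, imaginary part]

# (1 + 2X)(3 + 4X) mod X^2+1 should match (1+2i)(3+4i) = -5 + 10i.
print(mod_x2_plus_1(poly_mul([1.0, 2.0], [3.0, 4.0])))  # [-5.0, 10.0]
```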
Admittedly, we could sidestep universal algebra entirely and simply do the following. Suppose "$s(\overline{x})=t(\overline{x})$" is an equation true in $\mathbb{R}$. This means that, read in $\mathbb{C}$, $s-t$ is a multivariate polynomial vanishing on all real inputs, and we can show via elementary means that such a polynomial is identically zero - so the equation holds in $\mathbb{C}$ too. This avoids the more abstract approach above, but arguably loses some explanatory power.
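For a concrete instance of this sidestep (the particular equation is my own choice), take $(x+y)^2=x^2+2xy+y^2$, which is true in $\mathbb{R}$; expanding the difference of the two sides shows it is the zero polynomial, so the equation holds in $\mathbb{C}$ as well:

```python
from sympy import symbols, expand

x, y = symbols('x y')
s = (x + y)**2
t = x**2 + 2*x*y + y**2
print(expand(s - t) == 0)  # True: s - t is identically zero, so it also holds in C
```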
