12

Update

My original thoughts are better expressed on this mathoverflow post.

Short Version

When defining the $-$, $+$, $÷$, and $×$ operators in a functional manner, one can observe that the $(-, +)$ pair is very similar to the $(÷, ×)$ pair: the only main differences between them are their identity terms ($0$ and $1$ respectively) and the fact that the divisor cannot be equal to the identity term of the $(-, +)$ pair of operators.

My questions are the following: where can I find some prior work on this topic, and can one define a family of such operator pairs with different identity terms? Is there any theory for such objects?

Set Theory Version

While the arithmetic properties outlined below can be defined for both sets and types, referring to set theory might help clarify the question: if $(+, -)$ with 0 as identity element defines a group and $[(+, -), (×, ÷)]$ with 1 as identity element for $(×, ÷)$ defines a field, what is defined by $[(+, -), (×, ÷), (\#, @)]$ with an identity element for $(\#, @)$ other than 0 and 1?

Since $(+, -, 0)$ is used to define $\mathbb{Z}$ and $[(+, -, 0), (×, ÷, 1)]$ is used to define $\mathbb{Q}$, which $(\#, @, r)$ could be introduced so that $[(+, -, 0), (×, ÷, 1), (\#, @, r)]$ would define $\mathbb{S}$, with $\mathbb{Q} \subset \mathbb{S} \subseteq \mathbb{R}$?

Intuitively, $\#$ should be based on exponentiation, while $@$ should be based on logarithm.

Long Version

One can define the $-$, $+$, $÷$, and $×$ operators in the following fashion:

Minus:

$ \small \text{Minus Identity Term: the minus identity term is equal to 0.}\normalsize\\ i(m) = 0.\\ \quad\\ \small \text{Subtraction Identity:} \enspace \alpha - 0 = \alpha.\normalsize\\ m(\alpha, i(m)) = \alpha.\\ \quad\\ \small \text{Self Subtraction:} \enspace \alpha = \beta \Longleftrightarrow \alpha - \beta = 0.\normalsize\\ \alpha = \beta \Longleftrightarrow m(\alpha, \beta) = i(m).\\ \quad\\ \small \text{Subtraction Affine Identity:} \enspace \alpha - (\beta - \gamma) = \gamma - (\beta - \alpha).\normalsize\\ m(\alpha, m(\beta, \gamma)) = m(\gamma, m(\beta, \alpha)).\\ $
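As a sanity check (my own sketch, not from the post; the helper name `m` mirrors the functional notation above), the four subtraction axioms can be verified numerically on exact rationals:

```python
# Hypothetical sketch: verifying the subtraction axioms on exact rationals.
from fractions import Fraction as F

def m(a, b):
    """The minus operator: m(a, b) = a - b."""
    return a - b

I_M = F(0)  # minus identity term, i(m) = 0

samples = [F(-3), F(0), F(2), F(7, 3)]
for a in samples:
    assert m(a, I_M) == a          # subtraction identity
    assert m(a, a) == I_M          # self subtraction
    for b in samples:
        for c in samples:
            # subtraction affine identity: a - (b - c) = c - (b - a)
            assert m(a, m(b, c)) == m(c, m(b, a))
```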

Plus:

$ \small \text{Addition Affine Identity:} \enspace (\alpha + \beta) - \gamma = \alpha - (\gamma - \beta).\normalsize\\ m(p(\alpha, \beta), \gamma) = m(\alpha, m(\gamma, \beta)).\\ $

Divides:

$ \small \text{Divides Identity Term: the divides identity term is equal to 1.} \normalsize\\ i(d) = 1.\\ \quad\\ \small \text{Division Identity:} \enspace \frac{\alpha}{1} = \alpha.\normalsize\\ d(\alpha, i(d)) = \alpha.\\ \quad\\ \small \text{Self Division:} \enspace \alpha = \beta \Longleftrightarrow \frac{\alpha}{\beta} = 1.\normalsize\\ \alpha = \beta \Longleftrightarrow d(\alpha, \beta) = i(d).\\ \quad\\ \small \text{Division Affine Identity:} \enspace \frac{\alpha}{\frac{\beta}{\gamma}} = \frac{\gamma}{\frac{\beta}{\alpha}}.\normalsize\\ d(\alpha, d(\beta, \gamma)) = d(\gamma, d(\beta, \alpha)).\\ $

Times:

$ \small \text{Multiplication Affine Identity:} \enspace \frac{\alpha × \beta}{\gamma} = \frac{\alpha}{\frac{\gamma}{\beta}}.\normalsize\\ d(t(\alpha, \beta), \gamma) = d(\alpha, d(\gamma, \beta)).\\ $
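The same kind of numeric check (again my own sketch, not from the post) works for the divides and times axioms, provided the divisor samples exclude $0$:

```python
# Hypothetical sketch: verifying the division axioms on exact rationals.
from fractions import Fraction as F

def d(a, b):
    """The divides operator: d(a, b) = a / b, defined only for b != 0."""
    return a / b

I_D = F(1)  # divides identity term, i(d) = 1

samples = [F(-3), F(2), F(7, 3)]  # 0 is excluded: it has no reciprocal
for a in samples:
    assert d(a, I_D) == a          # division identity
    assert d(a, a) == I_D          # self division
    for b in samples:
        for c in samples:
            # division affine identity: a / (b / c) = c / (b / a)
            assert d(a, d(b, c)) == d(c, d(b, a))
            # multiplication affine identity: (a * b) / c = a / (c / b)
            assert d(a * b, c) == d(a, d(c, b))
```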

We observe that the pair of divides and times operators is defined in exactly the same way as the pair of minus and plus operators, but with a different identity term, and with the restriction that the divisor cannot be equal to the minus identity term.

The symmetry established between the operator pairs $(-, +)$ and $(÷, ×)$ allows each of the following properties to be proven for both pairs at once: a proof for one pair carries over to the other.

The following properties are established for any pair of operator functions $(f, g)$, where $(f, g)$ stands for either of the pairs $(-, +)$ and $(÷, ×)$. Furthermore, the term reverse refers to the opposite (negation) for the $(-, +)$ pair and to the inverse (reciprocal) for the $(÷, ×)$ pair.

Proofs for the $(-, +)$ pair can be found on this notebook.

Anticommutativity: $f(\alpha, \beta) = f(i(f), f(\beta, \alpha)).$

$ \alpha - \beta = -(\beta - \alpha).\\ \quad\\ \displaystyle \frac{\alpha}{\beta} = \frac{1}{\frac{\beta}{\alpha}}.\\ $

Double Reverse Identity: $\alpha = f(i(f), f(i(f), \alpha)).$

$ \alpha = -(-\alpha).\\ \quad\\ \displaystyle \alpha = \frac{1}{\frac{1}{\alpha}}.\\ $

Associative Commutativity: $f(f(\alpha, \beta), \gamma) = f(f(\alpha, \gamma), \beta).$

$ (\alpha - \beta) - \gamma = (\alpha - \gamma) - \beta.\\ \quad\\ \displaystyle \frac{\frac{\alpha}{\beta}}{\gamma} = \frac{\frac{\alpha}{\gamma}}{\beta}.\\ $

Affine Equivalence: $f(\alpha, \beta) = \gamma \Longleftrightarrow f(\alpha, \gamma) = \beta.$

$ \alpha - \beta = \gamma \Longleftrightarrow \alpha - \gamma = \beta.\\ \quad\\ \displaystyle \frac{\alpha}{\beta} = \gamma \Longleftrightarrow \frac{\alpha}{\gamma} = \beta.\\ $

Identity Element: $g(\alpha, i(f)) = \alpha.$

$ \alpha + 0 = \alpha.\\ \quad\\ \alpha × 1 = \alpha.\\ $

Dual Substitution: $g(\alpha, \beta) = f(\alpha, f(i(f), \beta)).$

$ \alpha + \beta = \alpha - (-\beta).\\ \quad\\ \alpha × \beta = \frac{\alpha}{\frac{1}{\beta}}.\\ $

Dual Equivalence: $\alpha = g(\beta, \gamma) \Longleftrightarrow \beta = f(\alpha, \gamma).$

$ \alpha = \beta + \gamma \Longleftrightarrow \beta = \alpha - \gamma.\\ \quad\\ \alpha = \beta × \gamma \Longleftrightarrow \beta = \frac{\alpha}{\gamma}.\\ $

Commutativity: $g(\alpha, \beta) = g(\beta, \alpha).$

$ \alpha + \beta = \beta + \alpha.\\ \quad\\ \alpha × \beta = \beta × \alpha.\\ $

Associativity: $g(g(\alpha, \beta), \gamma) = g(\alpha, g(\beta, \gamma)).$

$ (\alpha + \beta) + \gamma = \alpha + (\beta + \gamma).\\ \quad\\ (\alpha × \beta) × \gamma = \alpha × (\beta × \gamma).\\ $

Dual Identity: $(g(f(\alpha, \beta), \beta) = \alpha) \land (f(g(\alpha, \beta), \beta) = \alpha).$

$ ((\alpha - \beta) + \beta = \alpha) \land ((\alpha + \beta) - \beta = \alpha).\\ \quad\\ \displaystyle (\frac{\alpha}{\beta} × \beta = \alpha) \land (\frac{\alpha × \beta}{\beta} = \alpha).\\ $
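The shared structure can also be checked mechanically. The sketch below (my own code; `check_pair` is a hypothetical helper) runs the equational properties above for both pairs with a single routine:

```python
# Hypothetical sketch: one routine checks the listed properties for any
# (f, g) pair with identity term e, then is applied to (-, +) and (÷, ×).
from fractions import Fraction as F

def check_pair(f, g, e, samples):
    for a in samples:
        assert a == f(e, f(e, a))            # double reverse identity
        assert g(a, e) == a                  # identity element
        for b in samples:
            assert f(a, b) == f(e, f(b, a))  # anticommutativity
            assert g(a, b) == f(a, f(e, b))  # dual substitution
            assert g(a, b) == g(b, a)        # commutativity
            assert g(f(a, b), b) == a        # dual identity, first half
            assert f(g(a, b), b) == a        # dual identity, second half
            for c in samples:
                assert f(f(a, b), c) == f(f(a, c), b)  # associative commutativity
                assert g(g(a, b), c) == g(a, g(b, c))  # associativity

check_pair(lambda a, b: a - b, lambda a, b: a + b, F(0),
           [F(-3), F(0), F(2), F(7, 3)])
check_pair(lambda a, b: a / b, lambda a, b: a * b, F(1),
           [F(-3), F(2), F(7, 3)])  # 0 excluded for the (÷, ×) pair
```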

ismael
  • 689
  • 1
    There is a mistake in the anticommutativity part for subtraction – Slugger Jan 04 '19 at 19:15
  • 8
    Have you heard of groups? – jgon Jan 04 '19 at 19:16
  • @Slugger Sorry for the mistake and thanks a lot for the fix! – ismael Jan 04 '19 at 19:19
  • 2
    @ismael No problem! You also say that "the only main difference between them is their identity elements (0 and 1 respectively)", but keep in mind that another significant difference is that multiplicative inverses are not defined for all numbers, i.e., for zero – Slugger Jan 04 '19 at 19:21
  • @jgon Yes, thank you very much. But this does not really answer my question. Also, I am trying to study these properties without relying on the axioms required for set theory. Therefore, I prefer working with types whenever possible. – ismael Jan 04 '19 at 19:24
  • @Slugger You are absolutely right, and I mention the limitation in the post when I refer to the multiplier subdomain of the division operator. And I am still trying to figure out what that really means. Where does this limitation come from, and what are its implications? I know the limitation, but I do not yet understand its essence. – ismael Jan 04 '19 at 19:25
  • Another limitation: the identities are not identities: they are just right identities. – rschwieb Jan 04 '19 at 19:27
  • @ismael, there is an element free axiomatization of groups that allows you to define group objects in arbitrary categories. Then from the axiomatization of groups you can prove all of these properties. – jgon Jan 04 '19 at 19:27
  • @ismael We can solve the issue by for instance taking the domain of multiplication and division to be $\mathbb{R} \setminus \{0\}$, but then the comparison becomes kinda unfair, because for addition we need zero to be included in the set of elements as otherwise the additive inverse of $1$ is not defined – Slugger Jan 04 '19 at 19:28
  • I think the solutions section is very telling about how vague this question is. Before writing a solution myself, I read all the answers and thought "I can tell everyone is seeing a different question than I am, and I have no idea what they are addressing." The only reason I bothered to answer is because I think I have evidence to support the viewpoint that "nothing much is really there." – rschwieb Jan 04 '19 at 19:57
  • @rschwieb I agree with you, my original question is not super clear. Here is a shorter version: If (-, +) defines a group and [(-, +), (/, ×)] defines a field, what is [(-, +), (/, ×), (@, #)] called if @ is defined like - and / with an identity element other than 0 and 1? – ismael Jan 04 '19 at 20:09
  • 1
    The title might be even better with a minor change to "What makes the pairs of operators $(-, +)$ and $(\div , \times)$ so similar?" so they also look similar – Henry Jan 04 '19 at 22:48
  • @Henry Great observation! I updated the title and the notebook. Thank you! – ismael Jan 04 '19 at 22:53
  • Really simple example: what is the relation between multiplication and exponentiation, and division and roots? – Davislor Jan 04 '19 at 23:44
  • @Davislor You’re absolutely right. The third set of operators should be based on exponentiation and logarithm, with probably $e$ as identity term. But I can’t find a suitable set of operators yet. – ismael Jan 05 '19 at 01:09

7 Answers

10

What you're talking about is called a Field.

A Field is a set (say the rational numbers $\mathbb{Q}$, the real numbers $\mathbb{R}$, the complex numbers $\mathbb{C}$, etc.) together with two operations $(+,\times)$ such that the following axioms hold:

The operations are associative: $a + (b + c) = (a + b) + c$ and $a \cdot (b \cdot c) = (a \cdot b) \cdot c$

The operations are commutative: $a+b=b+a$ and $a\cdot b=b\cdot a$

Each of the operations has its own identity element ($0$ and $1$ respectively). Formally, there exist two different elements $0$ and $1$ such that $a + 0 = a$ and $a \cdot 1 = a$.

And each of the operations admits an "inverse" (i.e. we have $(-,/)$). That is,

For every $a$, there exists an element denoted $−a$, such that $a + (−a) = 0$. Similarly for every $a\not = 0$ there exists an element, often denoted by $a^{-1}$ or $1/a$ such that $a\cdot a^{-1}=1$.

Finally, there is one more axiom relating the additive and multiplicative structures. It is called distributivity, and it says that $a \cdot (b + c) = (a \cdot b) + (a \cdot c)$.

We have many fields; some are finite and some are infinite. In my opinion the best example of a finite field is $\mathbb{F}_p$, the field of $p$ elements with addition and multiplication modulo $p$; you can read about it and more finite fields here. The most useful infinite fields (again in my opinion) are the rational numbers, the real numbers, and the complex numbers with the usual addition and multiplication. The important part, however, is that every field satisfies all of the properties that you mentioned in your question.
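As an illustration (my own sketch, not from the answer; the field is standard, the helper names are mine), the field axioms can be verified exhaustively for $\mathbb{F}_7$:

```python
# Hypothetical sketch: exhaustively checking the field axioms for F_p, p = 7.
p = 7
add = lambda a, b: (a + b) % p
mul = lambda a, b: (a * b) % p

for a in range(p):
    assert add(a, 0) == a and mul(a, 1) == a             # identities
    assert any(add(a, b) == 0 for b in range(p))         # additive inverse
    if a != 0:
        assert any(mul(a, b) == 1 for b in range(1, p))  # multiplicative inverse
    for b in range(p):
        assert add(a, b) == add(b, a) and mul(a, b) == mul(b, a)  # commutativity
        for c in range(p):
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # distributivity
```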

Note that I removed quantifiers from the definitions to make them simpler, for the complete and correct axioms of a field please click the link in the first line.

Yanko
  • 13,758
  • Yes indeed, if you apply these properties to set theory, you get a field, but I am trying to study these properties without having to rely on the axioms of set theory. Also, I am looking for a theory that would provide an extensive analysis of families of operators defined with different values for the identity elements. Can values other than $0$ and $1$ be used? And if so, what kind of operators/functions do they introduce? And how do these functions relate to each other (in the same way that $e$ and $ln$ relate to each other)? – ismael Jan 04 '19 at 19:38
  • 1
    @ismael About your first sentence, I really don't know anything mathematical that doesn't rely on the axioms of set theory. About the second sentence, we just denote the identity elements as $0$ and $1$, I guess you can give them different names but this would be unnecessary. I don't understand your last sentence, what kind of operators or functions do you expect that they will introduce? How are $e$ and $\ln$ relevant here? – Yanko Jan 04 '19 at 19:42
  • If you take a finitist approach and use type theory instead of set theory, you do not need the axioms of set theory. This makes it harder to work with $\mathbb{R}$, but things go really smoothly with $\mathbb{Q}$. As far as identity element (or should I say identity term?) values are concerned, I am calling $0$ the initial object of my underlying coinductive type and $1$ the successor of $0$, with $successor$ being my coinductive operator. Now, I wonder what the operator defined like minus and divides would look like with $-1$ as identity term. – ismael Jan 04 '19 at 19:46
  • It is not at all clear what you are asking that isnt covered in any basic abstract algebra class. No one can give you an answer if you can not form an actual question. – fleablood Jan 04 '19 at 19:50
  • @Yanko Using the terminology of set theory for clarity sake, if you have a group with (-, +) and a field with [(-, +), (/, ×)], what do you have when you add a new pair of operators (@, #) with @ defined using an element other than 0 and 1 as identity element (say -1 for example)? Is there a name for such an algebraic structure? – ismael Jan 04 '19 at 19:50
  • @ismael $/$ isn't an operation on fields. It's a notational convenience. – jgon Jan 04 '19 at 20:12
  • @ismael I think you will just make a new operation on this field and create another field. Note that if you forget about the operations $+,\times$ on $\mathbb{C}$ then it is technically the same (in the set theoretic sense) as $\mathbb{R}$. So technically you can define on $\mathbb{C}$ both the structure of a field of complex numbers and the structure of a field of real numbers. Note that the (set theoretic) identification $\mathbb{R}\cong \mathbb{C}$ doesn't need to preserve $0,1$. – Yanko Jan 04 '19 at 21:20
  • @Yanko Well, if you keep just two pairs of operations at the same time, then yes, you just get another field. But if you keep all three pairs and make sure to define the third in relation to the first and the second, much like the second is defined in relation to the first, I tend to believe that you end up with a different (richer) structure altogether. Most importantly, I wonder if anyone has done that on $\mathbb{Q}$ and found an identity element for which a new pair of operation (@, #) would make sense. – ismael Jan 04 '19 at 21:28
  • @ismael: Your first comment (that fields have to do with set theory) is not only false but ridiculous. Just because some people claim (correctly) that modern mathematics can be done within ZFC, does not imply that it is in practice done so. Nobody except set theorists think of every mathematical object as a set. And everything that Yanko wrote in his/her answer works fine in any reasonable foundational system, not just ZFC or type theory. If you don't understand set theory, don't comment on it as if you do... And finitism has nothing to do with all this. – user21820 Jan 05 '19 at 11:23
7

Since you're interested in type theory and say that you therefore want an element free perspective, I'll give you the categorical perspective.

In category theory, we can define group objects in a category $C$ with finite products (including the terminal object, $*$) as an object $G$ with $\mu : G\times G \to G$ (a binary operator), $e: * \to G$ (a nullary operator), and $i : G\to G$ (a unary operator) satisfying the following relations, where $\Delta_G : G\to G\times G$ is the diagonal map and $\tau_G : G\to *$ is the map to the terminal object:

Associativity: $$\mu\circ (\mu\times \newcommand\id{\operatorname{id}}\id) = \mu\circ (\id\times \mu) :G\times G\times G \to G$$ Identity: $$\mu\circ (\id\times e)=\mu\circ (e \times \id)=\id : G \to G$$ Inverses: $$\mu\circ (\id\times i) \circ \Delta_G = \mu\circ (i\times \id) \circ \Delta_G = e\circ \tau_G : G\to G$$ (The identity axiom uses the canonical identifications $G\times *\cong G\cong *\times G$.)

Now this axiomatization is equivalent to the axiomatization you've given in your question, except that instead of inversion, you've given division as the primitive operation.

To get your data, we define division as $d=\mu \circ (\id \times i)$.

Conversely, given division $d: G\times G\to G$, we define $i$ by $i=d\circ (e\times \id)$.

Your axiomatization gives associativity and identity for free, plus also commutativity (so you're technically axiomatizing abelian groups).

Then your "dual identity" can be phrased $$\mu\circ (d\times \id) \circ (\id \times \Delta_G) = d\circ (\mu \times \id)\circ (\id \times \Delta_G) = \id \times \tau_G : G\times G\to G $$

Composing with $e\times \id$ we get the identity $$\mu\circ (d\times \id) \circ (\id\times \Delta_G) \circ (e\times \id) = \mu\circ (d\times \id)\circ (e\times \id\times \id)\circ \Delta_G = \mu\circ (i\times \id)\circ \Delta_G=e\circ \tau_G,$$ which is half of the inverses identity, and the other half we get is: $$d\circ (\mu\times \id) \circ (\id\times \Delta_G) \circ (e\times \id) = d\circ (\mu\times \id)\circ (e\times \id\times \id)\circ \Delta_G = d\circ \Delta_G=e\circ \tau_G,$$ so we just need to check $d = \mu\circ (\id \times i)$, and this follows from your double reverse and dual substitution identities. (We get $\alpha + (-\beta) = \alpha - (-(-\beta)) = \alpha - \beta$).

Conclusion

All of the properties you've listed follow from the fact that the operations you've chosen define abelian groups.

Thus the reason the triples of operators (don't forget the identity) are so similar is that they each define abelian groups.

Edit:

It's now a bit more clear to me what you're asking about. You also are interested in the relationship between these pairs/triples of operators, and how to possibly add another pair/triple.

In which case I feel the need to point out that fields don't come with two pairs of operations.

It's actually a bit easier to see this in the case of (commutative) rings.

For a general commutative ring $R$ define $a/b = a\cdot b^{-1}$ when $b$ is invertible.

Then the collection of all invertible elements of $R$, denoted $R^\times$ forms a group, and it has identity $1$, the usual multiplication as multiplication, and the division just defined gives the division operation.

Now $R^\times=R$, as sets only when $R=0$, the zero ring, since otherwise $0$ is never invertible. Thus the triple of operations $(1,*,/)$ is never actually a triple of operations on $R$, but rather a triple of operations on the related object $R^\times$.

In the very special case of fields, $R^\times = R\setminus\{0\}$, but for say the integers, we have $\Bbb{Z}^\times = \{1,-1\}$.
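Concretely (my own sketch, not from the answer; `units` is a hypothetical helper), $R^\times$ for $R = \mathbb{Z}/n\mathbb{Z}$ can be computed directly, and it is all of $R\setminus\{0\}$ exactly when $n$ is prime:

```python
# Hypothetical sketch: the unit group (Z/nZ)^× consists of the residues
# coprime to n; for prime n this is everything except 0.
from math import gcd

def units(n):
    return [a for a in range(n) if gcd(a, n) == 1]

assert units(7) == [1, 2, 3, 4, 5, 6]   # Z/7Z is a field
assert units(12) == [1, 5, 7, 11]       # Z/12Z is not
```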

Also there is an additional axiom relating the operations $+$ and $*$, the distributive law.

Thus it's not clear what you mean by adding another triple of operations.

The two triples of operations already discussed aren't defined on the same set/type to begin with, so it's not quite clear how you'd be adding a third.

Also even if you did construct a related type on which to define a third operation, this third operation should relate to the previous two in some way.

In mathematics, there are examples of rings with additional operations (though none that I can think of that form an abelian group), such as differential graded algebras, but the third operation always relates to the prior two in some way.

jgon
  • 28,469
  • Thank you so much for this explanation! I really need to dig into category theory (and probably homotopy type theory as well while I am at it). Also, sorry for the confusion regarding elements and terms. I have fixed it in the original post. – ismael Jan 04 '19 at 20:24
  • @ismael I've edited in an additional section in response to your edits – jgon Jan 04 '19 at 20:51
  • This is super helpful! Yes, the third operation should be defined in relation to the previous two, much like the second is defined in relation to the first. And I intentionally avoided talking about distributivity because I am still working on it, trying to figure out whether this has to be an axiom or whether it can be proven. – ismael Jan 04 '19 at 21:16
  • Another way to reformulate my original question is this: since $(+, -, 0)$ is used to define $\mathbb{Z}$ and $[(+, -, 0), (×, /, 1)]$ is used to define $\mathbb{Q}$, which triplet $(\#, @, r)$ could be introduced so that $[(+, -, 0), (×, /, 1), (\#, @, r)]$ would define $\mathbb{S}$, with $\mathbb{Q} \subset \mathbb{S} \subseteq \mathbb{R}$? – ismael Jan 04 '19 at 22:04
  • 1
    @ismael Part of my point though is that division isn't an operation on $\Bbb{Q}$, since division is only defined on $\Bbb{Q}\times\Bbb{Q}^\times$. Also it's really too broad a question to ask what additional operation could be introduced without specifying how it interacts with the other two operations. – jgon Jan 04 '19 at 22:21
  • Yes indeed, my question is very open ended. Please accept my apology for that. Nevertheless, the answers that were provided really helped me form my thoughts. And as far as division not being defined on 0 (the first operator’s identity term) is concerned, I suspect that an “interesting” third operator won’t be defined on 1 (the second operator’s identity term). Now, logarithms come to mind... – ismael Jan 04 '19 at 22:36
  • Nitpicking: "Now it is never the case that $R^\times=R$, as sets" -- well except if $R=\lbrace 0 \rbrace$, the zero ring, which however usually is excluded from the study of commutative rings (for reasons which are philosophically related to this). – Torsten Schoeneberg Jan 05 '19 at 19:17
  • @TorstenSchoenberg I saw the first ten words of your comment and I immediately knew that I forgot to exclude the zero ring. Edited. – jgon Jan 05 '19 at 19:20
3

Update: A more detailed answer is available on this mathoverflow post.

Following @Henry’s suggestion, a recursive structure of abelian groups can be constructed by using commutative hyperoperations:

$p_{n+1}(a, b) = \exp(p_n(\ln(a), \ln(b)))$

$p_0(a, b) = \ln\left(e^a + e^b\right)$

$p_1(a, b) = a + b$

$p_2(a, b) = a\cdot b = e^{\ln(a) + \ln(b)}$

$p_3(a, b) = a^{\ln(b)} = e^{\ln(a)\ln(b)}$

$p_4(a, b) = e^{e^{\ln(\ln(a))\ln(\ln(b))}}$

These functions give us the sequence of $(+, ×, ...)$ operations, while their inverse functions give us the sequence of $(-, ÷, ...)$ dual operations. The sequence of identity terms is $(0, 1, e, ...)$. With this, $T_1$ (Type Level 1) is isomorphic to a group, $T_2$ is isomorphic to a field, and successive types give you more and more complex objects.
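A small recursive sketch of $p_n$ (my own code, not from the sources cited above; valid only where the iterated logarithms are defined, i.e. for large enough positive arguments):

```python
# Hypothetical sketch: p_{n+1}(a, b) = exp(p_n(ln a, ln b)), with p_1 = +.
import math

def p(n, a, b):
    if n == 1:
        return a + b
    return math.exp(p(n - 1, math.log(a), math.log(b)))

# p_2 recovers multiplication, p_3 recovers a^ln(b), and e is the
# identity term at level 3 (up to floating-point error):
assert abs(p(2, 3.0, 5.0) - 15.0) < 1e-9
assert abs(p(3, 3.0, 5.0) - 3.0 ** math.log(5.0)) < 1e-9
assert abs(p(3, math.e, 5.0) - 5.0) < 1e-9
```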

The identity terms are:

$i_n = e \upuparrows (n - 2).$

$i_1 = 0.$

$i_2 = 1.$

$i_3 = e.$

$i_4 = e ^ e.$

$i_5 = e ^ {e ^ e}.$

While I cannot yet fathom what $T_4$ and successive types can be used for, I have to believe that $T_3$ is interesting, because it brings exponentiation to the table in a very natural manner. Therefore, stopping at the level of fields feels quite shortsighted.

Also, $T_1$ is isomorphic to $\mathbb{Z}$ and $T_2$ is isomorphic to $\mathbb{Q}$, but $T_3$ is isomorphic to a strict subset of $\mathbb{R}$. This suggests that the gap between $\mathbb{Q}$ and $\mathbb{R}$ is pretty large and should be filled incrementally with larger and larger sets. One interesting question is whether $T_n$ “converges” toward a structure that is isomorphic to $\mathbb{R}$ as $n$ increases.

ismael
  • 689
2

Here is my best attempt at answering this question; but the answer I have is likely to be disappointing.

If $(+, -)$ with 0 as identity element defines a group and $[(+, -), (×, /)]$ with 1 as identity element for $(×, /)$ defines a field, what is defined by $[(+, -), (×, /), (\#, @)]$ with an identity element for $(\#, @)$ other than 0 and 1?

As far as I know, no such thing has been studied any significant amount.

As you've noticed, any field has two corresponding groups: its additive group and its multiplicative group. These two groups have different identity elements.

I'm not aware of any kind of algebraic structure which has three corresponding groups. And nobody is going to study such things, or name them, until someone has found an interesting example of such a thing.

Tanner Swett
  • 10,624
  • 1
    That’s a totally valid answer. Indeed, I do not like it, but it’s quite possibly the best answer so far. – ismael Jan 04 '19 at 21:34
  • I guess @Henry just found one: $a \# b = a^{\log(b)} = b^{\log(a)}$ and $a \mathbin{@} b = a^{1/\log(b)}$ with $e$ as identity term. Having this provide a third group built on top of the previous two is a more elegant way of stating that exponentiation is repeated multiplication. And it makes me wonder what comes next after 0, 1, and e... – ismael Jan 05 '19 at 02:24
1

$\log$ turns multiplication and division into addition and subtraction. The precise statement is that $\log: \mathbb R^+ \to \mathbb R$ is a group isomorphism, whose inverse is $\exp$.
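A quick numeric illustration of the isomorphism (my own sketch, not from the answer):

```python
# Hypothetical sketch: log carries (R^+, ×) to (R, +), with exp as inverse.
import math

for a in (0.5, 2.0, 7.3):
    assert math.isclose(math.exp(math.log(a)), a)  # exp inverts log
    for b in (0.5, 2.0, 7.3):
        assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
```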

lhf
  • 216,483
  • 1
    Excellent point, but this is more a corollary rather than a justification. And the definition of such functions for the pair of identity elements $(0, 1)$ makes me wonder what functions could be defined with other pairs or additional operators defined with other identity elements like $-1$, $e$, or $\pi$. – ismael Jan 04 '19 at 19:28
  • 1
    The extension could then be that $a \,\#\, b = a^{\log(b)} = b^{\log(a)}$ on $\mathbb R_{\gt 1}$, so $a \,@\, b = a^{1/\log(b)}$ with an identity which is $e$ for natural logarithms, and then $a \,\#\, \exp(1/\log(a)) = e$ showing the inverse – Henry Jan 04 '19 at 23:08
  • @Henry This is it! Beautiful! Thank you so very much. – ismael Jan 05 '19 at 02:10
  • @Henry Could you make this an answer to the question so that It can be closed? – ismael Jan 05 '19 at 02:10
  • @Henry With this result, we now have a very “natural” series 0, 1, e. This begs the following question: what comes next? I mean, this is too beautiful to be a simple coincidence... – ismael Jan 05 '19 at 02:29
  • @ismael - I think you have now done this. I suspect the next identity is $e^e$ and then $e^{e^e}$ and so on – Henry Jan 05 '19 at 09:14
  • @Henry Well, all I did was to put it back into a known context. You still get all the credit for having found the next set of operators and its identity term. Thank you so much again. As you found the answer, I was still stuck looking for the commutative operator. – ismael Jan 05 '19 at 14:43
  • Using log base 2 instead of the natural log yields the sequence of identities 0, 1, 2, 4, 16, ... – user21793 Jan 05 '19 at 16:32
1

How do you define these operations? If it's the primary-school way on real numbers, it follows from the facts that (i) reals form an Abelian group under $+$, its identity element famously named $0$, and (ii) reals $\ne 0$ form an Abelian group under $\times$. (Note this guarantees many similarities follow from group theory.) This, together with $a\times (b+c)=a\times b+a\times c$ (we say $\times$ distributes over $+$), defines a field. Maths has a lot of groups, and a lot of fields; and where you have fields, you have two very similar operations.

J.G.
  • 115,835
1

I think what's going on is this:

Suppose you have any binary operation $\star$ making $X$ an abelian group. One way to express the operation is as a subset $S\subseteq (X\times X)\times X$ where $a\star b=c$ iff $((a,b),c)\in S$.

You can immediately form a new relation $S'=\{((c,a),b)\mid ((a,b),c)\in S\}$, and that describes a different binary operation. The fact that $S$ was formed from an abelian group operation allows you to say that this actually is a function.

And you can repeat this again to get $S''=\{((b,c),a)\mid ((a,b),c)\in S\}$. It isn't as obvious from its origin in $S'$ that $S''$ is a function, but we can appeal to $S$ again to prove that it is.

Repeating the trick a third time gets you back to $S$.

If you take the special case where $\star$ is addition, you'll find that $S'$ is subtraction where the thing subtracted is on the right, and $S''$ is like subtraction where the thing subtracted is on the left.

All this means, I think, is that the binary operations for some groups that we are all very familiar with can be translated to this new funky ordering, and because of the group properties contained in $S$, you will have a standard set of properties available in $S'$ (and also perhaps a slightly different set for $S''$, I didn't check).

My gut feeling is that the set of group axioms on $S'$ is equivalent in some sense to the abelian group axioms encompassed in $S$, so that we haven't really learned anything new: we've just rewritten all the addition in terms of subtraction, and all the division in terms of multiplication. It does not feel like there is anything significant in this process.
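The construction above can be made concrete (my own sketch, not from the answer, using addition on $\mathbb{Z}/5\mathbb{Z}$):

```python
# Hypothetical sketch: build S from addition on Z/5Z, permute it into
# S' and S'', and check that all three are graphs of functions.
n = 5
S   = {((a, b), (a + b) % n) for a in range(n) for b in range(n)}
Sp  = {((c, a), b) for ((a, b), c) in S}   # S':  subtraction, subtrahend on the right
Spp = {((b, c), a) for ((a, b), c) in S}   # S'': subtraction, subtrahend on the left

def is_function(rel):
    """A relation is (the graph of) a function iff no input repeats."""
    inputs = [x for (x, _) in rel]
    return len(set(inputs)) == len(inputs) == n * n

assert is_function(S) and is_function(Sp) and is_function(Spp)
```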

rschwieb
  • 153,510
  • Well, I am not so sure. First, I am not using set theory, I am using coinductive type theory, therefore some axioms of set theory are not necessary. Second, by defining subtraction before addition, I can deal with measures like temperatures that do not support addition (this is a very big deal for physicists and statisticians). Third, my real question is related to the values picked for the identity terms: what happens when these values are not $(0, 1)$? Or what happens when you add a third pair of operators with a third identity term (say -1)? Has anyone worked on this yet? – ismael Jan 04 '19 at 19:58
  • 1
    I don't understand your first and third points. I think I understand your second point, and I'm suggesting that even though that is fine, it probably amounts to the same thing as addition in the end. – rschwieb Jan 04 '19 at 20:00
  • I don’t think it does. While you can add a temperature delta to a temperature, you cannot add two temperatures. This suggests that the addition operator should not have the cartesian product of the same set as domain, but the cartesian product of a type that does not support addition with a type that does support addition (I’m not sure that talking about cartesian product for types makes perfect sense, but this is a totally different subject). This kind of hybrid domain is not allowed by groups or fields unfortunately... – ismael Jan 04 '19 at 20:02
  • @ismael I'm not sure I agree with your specific example, that you can't add temperatures, but it sounds like you're talking about the idea of an affine space. Also usually the Cartesian product of types is the type of pairs. E.g., in Haskell notation for algebraic data types, the cartesian product of a and b is Pair a b, where data Pair a b = Pair a b. (Although in Haskell, you would usually just use the built-in type (a,b)). – jgon Jan 04 '19 at 20:57
  • @jgon You’re totally on point: I am trying to define my types in a more “affine” manner, hence the names I gave to some of the properties of their operators. Also, it seems to me that by using such “affine” expressions for the axiomatic definition of the types, you end up with fewer axioms than if you don’t, which I view as a plus. For example, in the case of the multiplication, it is defined with a single axiom, from which we can prove the identity term, associativity, and commutativity. These “affine” expressions are really quite powerful in that respect. – ismael Jan 04 '19 at 21:18