
Let $A$ be a non-trivial integral domain. Define the relation $\sim$ on the set of pairs $A \times (A\setminus\{0_A\})$ as follows:

$$(a_1,b_1) \sim (a_2,b_2) \overset{\text{def}}{\Longleftrightarrow} a_1b_2=a_2b_1.$$

It turns out that $\sim$ is an equivalence relation on $A \times (A\setminus\{0_A\})$. Addition and multiplication of pairs are defined as follows.

$$(a_1,b_1)+(a_2,b_2) \overset{\text{def}}{=} (a_1b_2+a_2b_1,b_1b_2)\\(a_1,b_1)\cdot(a_2,b_2)\overset{\text{def}}{=}(a_1a_2,b_1b_2).$$
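
For concreteness, here is a minimal Python sketch of these pairs and operations over the integral domain $A=\Bbb Z$ (the names `sim`, `add`, `mul` are mine, purely illustrative):

```python
# Minimal sketch over A = Z; pairs (a, b) model fractions a/b with b != 0.

def sim(f, g):
    """Cross-multiplication equivalence: (a1,b1) ~ (a2,b2) iff a1*b2 == a2*b1."""
    (a1, b1), (a2, b2) = f, g
    return a1 * b2 == a2 * b1

def add(f, g):
    """(a1,b1) + (a2,b2) := (a1*b2 + a2*b1, b1*b2)."""
    (a1, b1), (a2, b2) = f, g
    return (a1 * b2 + a2 * b1, b1 * b2)

def mul(f, g):
    """(a1,b1) * (a2,b2) := (a1*a2, b1*b2)."""
    (a1, b1), (a2, b2) = f, g
    return (a1 * a2, b1 * b2)

assert sim((1, 2), (3, 6))            # 1/2 ~ 3/6
assert add((1, 2), (1, 3)) == (5, 6)  # 1/2 + 1/3 = 5/6
assert mul((1, 2), (2, 3)) == (2, 6)  # 1/2 * 2/3 = 2/6 ~ 1/3
```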

If one wishes to define such operations similarly on the set of equivalence classes under $\sim$, that is, on the set $(A \times (A\setminus\{0_A\}))/\!\sim$, one must prove that the operations agree with the relation $\sim$. In other words, it must be shown that these procedures give well-defined functions, not depending on the choice of representatives from the equivalence classes.

Here is how I would prove the result in the case of addition.

Let $(a,b)\sim(a_1,b_1)$ and $(c,d) \sim (c_1,d_1)$ be any pairs in $A \times (A\setminus\{0_A\})$. We need to show that $(a,b)+(c,d)$ is $\sim$-equivalent to $(a_1,b_1)+(c_1,d_1)$, that is, $(ad+bc)b_1d_1 = (a_1d_1+b_1c_1)bd.$

Hence, consider the expression $E:=(ad+bc)b_1d_1$. Using distributivity in $A$, we have $E=(ad)b_1d_1+(bc)b_1d_1$. Using commutativity (and associativity) of multiplication, $E=(ab_1)dd_1+(cd_1)bb_1$. But because $(a,b)\sim(a_1,b_1)$ and $(c,d) \sim (c_1,d_1)$, we may replace $ab_1$ with $a_1b$, and $cd_1$ with $c_1d$. Therefore, $E=(a_1b)dd_1+(c_1d)bb_1$. Again via distributivity (and commutativity, associativity), finally $E=(a_1d_1+b_1c_1)bd$. ***QED***
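
As a sanity check (a sketch, not part of the proof), the identity just proven can be machine-verified over $\Bbb Z$; equivalent pairs are sampled here by scaling a common base pair, which forces $\sim$ to hold:

```python
from itertools import product

# Verify (ad+bc)*b1*d1 == (a1*d1 + b1*c1)*b*d over Z whenever
# (a,b) ~ (a1,b1) and (c,d) ~ (c1,d1).  Equivalent pairs are sampled
# by scaling a base pair, which guarantees the cross-multiples agree.
rng = [n for n in range(-3, 4) if n != 0]
for a, b, c, d, e, k in product(rng, repeat=6):
    a1, b1 = e * a, e * b   # (a1,b1) ~ (a,b) by construction
    c1, d1 = k * c, k * d   # (c1,d1) ~ (c,d) by construction
    assert (a * d + b * c) * b1 * d1 == (a1 * d1 + b1 * c1) * b * d
print("addition respects ~ on all sampled representatives")
```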

Here is how E. B. Vinberg does it in *A Course of Algebra*, page 130.

> Define now addition and multiplication of pairs by the following rules: $$(a_1,b_1)+(a_2,b_2) = (a_1b_2+a_2b_1,b_1b_2)\\(a_1,b_1)(a_2,b_2)=(a_1a_2,b_1b_2).$$ We will prove that the equivalence relation defined above agrees with these operations. By the preceding discussion, *it suffices to show that when we multiply both entries in one of the pairs $(a_1,b_1)$ or $(a_2,b_2)$ by the same element $c$, their sum and product get replaced by equivalent pairs*. But it is clear that when we do this, both entries in the sum and the product are multiplied by $c$.

(Emphasis added by me).

Q: Why does it suffice to show only what Vinberg says?

To emphasise: "the preceding discussion" is quoted either in my previous question (in yellow quote boxes) or here in this post. The order of the book is preserved. I thought it would be a poor idea to quote the full passage again here due to its length. Of course, I am willing to do so if necessary; in that case, please leave an appropriate comment.


2 Answers


Recall that the scaling relation $\,\sim:\,$ is defined as $\, (a,b) \sim: (c,d)\iff (c,d) = (ea,eb)\,$ for some $\,e\neq 0,\,$ i.e. $\,\large \frac{a}b \sim: \frac{e\,a}{e\,b}.\,$ They have equal cross-multiples $\,eab\,$ so $\,f\sim:g\,\Rightarrow\, f\sim g.$

The Lemma in the prior question shows that every cross-multiplication equivalence $\,f_1\sim f_2\,$ can be decomposed into a pair of scaling relations, i.e. $\,f_1\sim f_2\iff f_1\sim:f:\sim f_2\ $ for some $\,f,\,$ i.e. $\,f_1,\,f_2\,$ are cross-multiplication equivalent $\iff$ they have a common scaling $\,f.\,$
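
(Aside, not part of the original argument: over $\Bbb Z$ one can check numerically that $f=(a_1b_2,\,b_1b_2)$ serves as such a common scaling; a sketch.)

```python
from itertools import product

# For f1 = (a1,b1) ~ f2 = (a2,b2), check that f = (a1*b2, b1*b2) is a
# common scaling: f equals f1 scaled by b2, and also f2 scaled by b1
# (the latter uses the cross-multiplication identity a1*b2 == a2*b1).
rng = [n for n in range(-3, 4) if n != 0]
for a, b, e1, e2 in product(rng, repeat=4):
    (a1, b1), (a2, b2) = (e1 * a, e1 * b), (e2 * a, e2 * b)  # f1 ~ f2
    f = (a1 * b2, b1 * b2)
    assert f == (b2 * a1, b2 * b1)   # f1 ~: f  with factor e = b2
    assert f == (b1 * a2, b1 * b2)   # f2 ~: f  with factor e = b1
```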

Thus it suffices to prove that addition and multiplication are compatible with the scaling relation, which follows from the scaling symmetry of the addition & multiplication formulas due to their linear form, i.e. $\, s(f_1)\sim: \color{#c00}e\,s(f_1) = s(\color{#c00}ef_1) = s(f)\,$ below, where we prove compatibility for the first argument of addition using the sum function $\,s(x) := x + g_1\,$ for $\,g_1 = (c,d).$

$\ \ \ \ \ \ \ \begin{align}f_1 + g_1\ \ \ \ \ &\sim: \ \ \ \ \ f + g_1 \\[.2em] f_1 \ \ \ \sim:\ \ \ \ f \ \ \ \ \, \smash[t]{\color{#0a0}{\overset{\rm C}\Longrightarrow}}\, \ \ \ \ \ \ \ \ s(f_1)\ \ \ \ \ \ \ & \sim:\ \ \ \ \ \ \ s(f)\\[.2em] \ {\rm i.e.}\ \ \ \ (a,b)\sim:(ea,eb)\,\Rightarrow\, (a,b)+(c,d)&\sim: (\color{#c00}ea,\color{#c00}eb)+(c,d)\ \ = \ s(\color{#c00}ef_1) \\[.2em] {\rm by}\ \ \ \ (ad\!+\!cb,\,bd) &\sim: (\color{#c00}ead\!+\!\color{#c00}ecb,\,\color{#c00}ebd)\ \ = \ \color{#c00}e\,s(f_1) \end{align}\ \ \ \ \ \qquad$
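
(In code, the red-$\color{#c00}e$ step is just the observation that the sum formula commutes with scaling; a small check over $\Bbb Z$, not part of the original answer.)

```python
from itertools import product

# Scaling symmetry of the sum formula: (ea, eb) + (c, d) equals
# e * ((a, b) + (c, d)) componentwise, i.e. s(e*f1) = e*s(f1).
rng = [n for n in range(-3, 4) if n != 0]
for a, b, c, d, e in product(rng, repeat=5):
    lhs = ((e * a) * d + c * (e * b), (e * b) * d)  # s(e*f1)
    rhs = (e * (a * d + c * b), e * (b * d))        # e * s(f1)
    assert lhs == rhs
```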

${\rm Then}\ \ f_1\sim f_2\,\Rightarrow\, s(f_1)\sim s(f_2)\,$ follows by applying $\,\smash[t]{\color{#0a0}{\overset{\rm C}\Rightarrow}}\,$ to a $\,\sim:\,$ decomposition of $\, f_1 \sim f_2\,$

$\ \ \ \ \ \ \ \ \ \, f_1\sim f_2\,\Rightarrow\begin{align}f_1\sim: f\\[.2em] f_2\sim: f\end{align}$ $\:\color{#0a0}{\overset{\rm C}\Rightarrow}\,\begin{align}s(f_1)\sim: s(f)\\[.2em] s(f_2)\sim: s(f)\end{align}$ $\,\Rightarrow\begin{align}s(f_1)\sim s(f)\\[.2em] s(f_2)\sim s(f)\end{align}$ $\,\color{#08f}\Rightarrow\! \begin{align} s(f_1)\,&\sim\, s(f_2),\,\ {\rm i.e.}\\[.2em] f_1+g_1&\sim \color{#08f}{f_2+g_1}\end{align}$

Similarly (or using symmetry and commutativity) we get $\ g_1\sim g_2\,\Rightarrow\, \color{#08f}{f_2+g_1}\sim f_2+ g_2\,$ thus

$\rm\color{#08f}{transitivity}$ of $\,\sim\,$ yields $\,\ \ f_1\sim f_2,\ g_1\sim g_2\,\Rightarrow\, f_1+g_1\sim f_2+g_2\qquad $

which means $\,\sim\,$ is compatible with addition. Multiplication compatibility follows similarly.

Remark $ $ These tedious proofs are "left to the reader" in most expositions. One can avoid them by instead using a more algebraic construction of fraction rings via quotients of polynomial rings, where we adjoin an inverse $\,x_a\,$ for each $\,a\neq 0\,$ via extension rings $\, A_j[x_a]/(ax_a-1).\,$

In this approach the proofs follow immediately from universal properties of polynomial and quotient rings. The two approaches are related by the fact that the fraction pairs correspond to normal forms in these quotient rings, where every element is equivalent to a monomial $\,a\, x_{a_1}\cdots x_{a_k}\,$ (essentially by choosing a $ $ common "denominator"), $ $ denoted by the $ $ "fraction" $\,a/(a_1\cdots a_k)\,$ or, set-theoretically, by the pair $\,(a,\,a_1\cdots a_k),\,$ analogous to Hamilton's pair-representation of complex numbers $\,(a,b),\,$ corresponding to normal forms (least-degree reps) $\,a+bx\,$ in $\,\Bbb R[x]/(x^2\!+1)\cong \Bbb C.\,$ For more on this viewpoint see here (there we consider a more general construction (localization), which inverts the elements in some specified subset $\,S\subseteq A$).
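
(A toy instance of this normal-form idea, assuming $A=\Bbb Z$ and a single inverted element $a=2$: in $\Bbb Z[x]/(2x-1)$ the relation $2x=1$ makes $x$ act as $1/2$, so evaluation at $x=1/2$ computes the fraction normal form. A sketch, not the general construction.)

```python
from fractions import Fraction

# Toy model of Z[x]/(2x - 1): the relation 2x = 1 makes x act as 1/2,
# so evaluating a polynomial at x = 1/2 yields its "fraction" normal
# form in Z[1/2].
def normal_form(coeffs):
    """coeffs[i] is the coefficient of x**i."""
    half = Fraction(1, 2)
    return sum(c * half**i for i, c in enumerate(coeffs))

print(normal_form([3, 4, 8]))  # 3 + 4x + 8x^2  ->  7
print(normal_form([0, 1]))     # x  ->  1/2, the adjoined inverse of 2
```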

Bill Dubuque
  • I have gone over your answer as well. It seems you took advantage of the theorem Thomas Andrews calls the "stronger statement" at the end of their answer (your implication C). That's great because the answers then complement each other. Your answer seems correct to me (besides the part starting with "Remark", as it is a bit over my head, and thus I cannot directly verify it). Now on to more specific questions. You say the "addition formula is multilinear". What is the formal logical statement you have in mind for this "multilinearity"? (The general definition, but exactly in the form it is used here). – Linear Christmas Jul 05 '19 at 15:50
  • @Lin Yes, that's essentially what I do. Re: multilinear, here this means that the sum formula's numerator $\,p(a,b) = ad+cb\,$ satisfies $\,p(ea,eb) = \color{#c00}e\,p(a,b)\,$ and its denominator $\,q(a,b) = bd\,$ satisfies $\,q(ea,eb) = \color{#c00}e\,q(a,b),\,$ which makes it clear why scaling $\,(a,b)\,$ by $\,e\,$ has the effect of scaling the sum by $\,\color{#c00}e\,$ (cf. the answer's red $\,\color{#c00}e\,$'s). Presumably such multilinearity is what Vinberg implicitly refers to when he wrote "But it is clear that when we do this, both entries in the sum and the product are multiplied by $e$". – Bill Dubuque Jul 05 '19 at 16:49
  • I tried to present it in a form that makes the argument clear, e.g. above you should visualize how the ">"-shaped subgraph of $\,\sim:\,$ (pointing $\,f_1, f_2\,$ at their common scaling $\,f$) is preserved by applying $\,s\,$ to it, and that it lifts to the same ">"-shaped subgraph of $\,\sim\,$ (erase the ":"s using $\,f\sim:g\,\Rightarrow\,f\sim g$), which yields the sought result by transitivity (around the ">" in $\,s(f_1)\sim s(f)\sim s(f_2)$, with $\,s(f_1),s(f_2)$ pointed at their common scaling $\,s(f)$). This allows us to "see" the proof in a single glance (can't animate here). – Bill Dubuque Jul 05 '19 at 16:49
  • @LinearChristmas If there are any points that remain unclear then let me know, esp. if you can't see the proof in a single glance (as intended). – Bill Dubuque Jul 05 '19 at 16:56
  • So it is not "multilinearity" in the most common sense of the word wrt multilinear functions. In the usual sense, it would also require $p(ea,eb) = e^2p(a,b)$ instead of scaling simply by $e$, correct? As for seeing the proof at a single glance, I am not sure what you mean by that. I think I do understand each step in your argument, if that's what you mean. And by ">"-shaped subgraph did you have in mind "<"-shaped graph? If not, then perhaps something is rendered differently, or I am misunderstanding what you said. – Linear Christmas Jul 05 '19 at 17:14
  • @Linear Which way the ">" points depends on your perspective. Yes, multilinear was an abuse of terminology. Here we need only the scaling symmetry of linear homogeneous forms. I will update that to avoid confusion. – Bill Dubuque Jul 05 '19 at 17:25
  • Re: "Which way the ">" points depends on your perspective". Then call me confused, indeed :). I had in mind something like this picture: https://i.stack.imgur.com/HCJZ9.png. – Linear Christmas Jul 05 '19 at 17:45
  • @Lin The ">"-shaped subgraph of $\,\sim:\,$ has $\,f_1,f_2\,$ on the left, connected (pointed) to their common scaling $\,f\,$ on the right. In the answer I wrote it as $\,f_1 \sim: f,\ f_2\sim: f,\,$ or $\,f_1\sim: f :\sim f_2,\,$ but you should visualize it as said subgraph to help you see the proof in a single glance. Note: the dots ":" in $\,\sim:\,$ mark which side of the relation has the scaling (view the dots as (place-)holes $\,\dfrac{\circ\, a}{\circ\, b}\,$ for the fraction scale factor $\,e$). – Bill Dubuque Jul 05 '19 at 18:27
  • @Linear Hopefully the point about scaling symmetry is clearer after the latest edit.. – Bill Dubuque Jul 05 '19 at 19:29
  • Yes, I think the part about scaling symmetry is clearer, especially if you mean linearity only wrt multiplying a pair $(a,b)$ by a scalar, pointwise speaking $e(a,b)$, and then the property of the function $s$ (there seems to be no linearity wrt addition of pairs in the sense defined here, because it leads to an extra $g_1$ term on one side). – Linear Christmas Jul 06 '19 at 13:27
  • @Linear I'm tempted to completely rewrite the answer since I don't think it is as clear as it could be (alas, I haven't had a large enough chunk of spare time - holidays here). We could use a good exposition on this since it is a common question. – Bill Dubuque Jul 06 '19 at 13:33
  • You also say that "The Lemma in the prior question shows that ... $\,f_1\sim f_2\iff f_1\sim:f:\sim f_2$". Strictly speaking, doesn't that earlier lemma only imply the direction $\Longleftarrow$, since the pair of scaling relations $f_1\sim:f:\sim f_2$ can be shown to be an equivalence relation satisfying $(3.34)$? Currently I cannot see why the lemma from the previous question implies the other way around, i.e. $\,f_1\sim f_2\Longrightarrow f_1\sim:f:\sim f_2$. (Even though it is clear how to prove it separately and why this is true.) – Linear Christmas Jul 06 '19 at 13:34
  • @Linear The converse is - of course - obvious. That is part of what's on my todo list - to sync up both answers (including notation). Maybe I'll have some time later today. Do you get an inbox notification when answers on your questions are edited? If not, I will ping you. Your feedback is much appreciated, since it helps to make the answers clearer for other readers. – Bill Dubuque Jul 06 '19 at 13:39
  • Finally, about the subgraph part. Do you mean that I should myself visualise the proof as a graph with the shape $>$, rather than that this shape is somewhere in there with your formatting? Re your two newest comments: I don't think that is necessary unless, of course, you wish to do so. Mostly the answer is clear and helpful; perhaps I am just trying to make sure I don't misinterpret anything you have said. I already appreciate the effort and patience you have given in assisting me. Re second comment: I don't think I get any notifications, but you may indeed ping me under OP, for instance. – Linear Christmas Jul 06 '19 at 13:39
  • And just to clarify to be sure: The converse is easy to confirm. But the justification I use in my head for that is simply constructing $f := (a_1b_2, b_1b_2)$, not directly using the lemma from previous question in some way. – Linear Christmas Jul 06 '19 at 13:44
  • @Linear It's not crucial to think graph-theoretically or visualize diagrams in a small problem like this, but it is useful to train your mind to do so on small problems, since it can prove crucial in larger problems. Similar ideas come into play in term-rewriting systems (e.g. we can think of scaling equivalence as a rewrite rule), see e.g. Newman's Diamond Lemma (see here or here). – Bill Dubuque Jul 06 '19 at 14:31
  • Diagrams play essential roles in many places, e.g. showing non-distributivity of (ideal) lattices, proofs in linear lattices, etc. – Bill Dubuque Jul 06 '19 at 14:31

Vinberg implicitly defines a relationship which we'll call $\sim_1:$

$(a_1,b_1)\sim_1 (a_2,b_2)$ if $\exists c\in A\setminus \{0\}$ such that $a_1c=a_2,b_1c=b_2.$

This is not an equivalence relation. ($\sim_1$ is actually a pre-order.)

Vinberg shows in the prior discussion that $\sim_1$ has the property:

Lemma 1: If $(a_1,b_1)\sim_1(a_2,b_2)$, then $(a_1,b_1)\sim (a_2,b_2)$.

and also the property:

Lemma 2: $(a_1,b_1)\sim (a_2,b_2)$ if and only if there exists $(a_3,b_3)$ such that $(a_1,b_1)\sim_1 (a_3,b_3)$ and $(a_2,b_2)\sim_1 (a_3,b_3).$

Those two properties are the key.

Now Vinberg is saying we only need to show:

Lemma 3: For $p\sim_1 p_1$ and any $q$: $$\begin{align}p+q&\sim p_1+q\text{ and }\\ q+p&\sim q+p_1\end{align}\tag{1}$$

and similarly for multiplication.

From Lemma 3 we prove the general case:

Theorem: If $p\sim p_1$ and $q\sim q_1$ then $p+q\sim p_1+q_1.$

Proof: By Lemma 2, there exist $p_2,q_2$ such that $p\sim_1 p_2, p_1\sim_1 p_2, q\sim_1 q_2, q_1\sim_1 q_2.$

Then we have $$p+q\sim p_2+q\sim p_2+q_2$$ by (1), and so $p+q\sim p_2+q_2$ by transitivity.

Likewise, we have $p_1+q_1\sim p_2+q_2.$

So we've shown: $p+q\sim p_1+q_1.$

The same works for multiplication.
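
(A machine sanity check of the Theorem over $\Bbb Z$, not part of the original answer; equivalent pairs are sampled by scaling.)

```python
from itertools import product

# Sanity check of the Theorem over Z: if p ~ p1 and q ~ q1, then
# p + q ~ p1 + q1 and p * q ~ p1 * q1.
sim = lambda f, g: f[0] * g[1] == g[0] * f[1]
add = lambda f, g: (f[0] * g[1] + g[0] * f[1], f[1] * g[1])
mul = lambda f, g: (f[0] * g[0], f[1] * g[1])

rng = [n for n in range(-3, 4) if n != 0]
for a, b, c, d, e, k in product(rng, repeat=6):
    p, p1 = (a, b), (e * a, e * b)   # p ~ p1 (sampled by scaling)
    q, q1 = (c, d), (k * c, k * d)   # q ~ q1
    assert sim(add(p, q), add(p1, q1))
    assert sim(mul(p, q), mul(p1, q1))
```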


It is easier to show the stronger statement:

For $p\sim_1 p_1$ and any $q$, $$\begin{align}p+q&\sim_1 p_1+q\text{ and }\\ q+p&\sim_1 q+p_1,\end{align}\tag{1'}$$

and then deduce Lemma 3 from (1') using Lemma 1.
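
(The stronger statement is transparent in code: scaling one summand by $c$ scales both entries of the sum by the same $c$, exactly Vinberg's observation. A sketch over $\Bbb Z$.)

```python
from itertools import product

# If p1 = c*p componentwise (p ~1 p1), then p1 + q = c*(p + q)
# componentwise, i.e. p + q ~1 p1 + q with the SAME factor c.
add = lambda f, g: (f[0] * g[1] + g[0] * f[1], f[1] * g[1])

rng = [n for n in range(-3, 4) if n != 0]
for a, b, m, d, c in product(rng, repeat=5):
    p, q = (a, b), (m, d)      # m names q's first entry (c is the factor)
    p1 = (c * a, c * b)        # p ~1 p1 with factor c
    s, s1 = add(p, q), add(p1, q)
    assert s1 == (c * s[0], c * s[1])
```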

Thomas Andrews
  • Hmm. Interesting answer. Of course, the caveat with my type of question ("What did person X have in mind?") is always that some inherent subjectivity is woven in from the beginning. Yet there are a few missing links that don't yet fall into place for me if this answer is indeed what Vinberg had in mind. The first is a smaller quibble: Vinberg first mentions and supports the commutativity of $+$ a bit further along in the text. Secondly, and more importantly, where is the multiplication with the element $c\in A\setminus\{0_A\}$ used in your argument? – Linear Christmas Jul 02 '19 at 18:49
  • To be clear, I am not saying your answer is wrong at all, it proves the result itself nicely. But could you further clarify why you feel this is what Vinberg had in mind? I need more convincing on the latter part. – Linear Christmas Jul 02 '19 at 18:49
  • Okay, I've read it more completely, and corrected my answer. @LinearChristmas – Thomas Andrews Jul 02 '19 at 19:36
  • I haven't read any further at the moment, but lemma 1 seems unclear (and may not hold depending on what is meant). It is true that $\sim$ is the smallest equivalence relation containing $\sim_1$. Is this what you had in mind? – Linear Christmas Jul 02 '19 at 19:58
  • Why is Lemma 1 false? It just says that $(a_1,b_1)\sim(ca_1,cb_1)$ for non-zero $c.$ It does not imply the other way around - that all $p\sim q$ have $p\sim_1 q.$ Lemma 2 gives the more complicated definition of $\sim$ in terms of $\sim_1.$ @LinearChristmas – Thomas Andrews Jul 02 '19 at 20:01
  • Now it seems I am at fault for incomplete reading, sorry. It's getting a bit past my bed time (adult way of putting it: sleep schedule). I will check in tomorrow and go through the full argument. – Linear Christmas Jul 02 '19 at 20:12
  • Are you certain that $\sim_1$ is a partial order? It seems that $A=\mathbb{Z}$ and the whole number pairs $(1,1), (-1,-1)$ provide a counterexample. More generally, it seems that $\sim_1$ is a partial order iff the only invertible element of $A$ is $1_A$. – Linear Christmas Jul 05 '19 at 13:19
  • I have also gone over the rest of the answer in the case of addition. The arguments do work (and I quite enjoy lemma 2). But I do have a question: would you say this approach (presumably Vinberg's too) is more elegant, more general, more natural, or in any other way preferable to the strategy I presented in the OP? – Linear Christmas Jul 05 '19 at 13:31
  • Yes, sorry, $\sim_1$ is more properly called a pre-order. @LinearChristmas – Thomas Andrews Jul 05 '19 at 15:52
  • Your approach works fine, and is more direct. It is a bit algebraically messy, but not that bad. – Thomas Andrews Jul 05 '19 at 17:51
  • Thank you for the response. It also seems that Vinberg's argument, in the end, is and has to be "algebraic", too (in the sense used in your comment). It's just that the place where this "algebraic-ness" comes out is different. In my approach, it is direct; whereas in Vinberg's approach, it is in the proofs of lemmas 1 to 3 (mostly, 2). Would you agree? I guess also that which approach is more preferable depends on what one has previously discussed; if the lemmas already arose in a different context, it's only natural to use them in the correctness proof as Vinberg has done. – Linear Christmas Jul 05 '19 at 18:16
  • All proofs will be algebraic, but breaking the argument into pieces is less messy - smaller units of argument. Vinberg is also using a technique worth learning - he doesn’t even think to make it formal because it is “intuitive” to him to break this logic up into smaller parts. @LinearChristmas As you advance, this sort of argument should feel intuitive, too. – Thomas Andrews Jul 05 '19 at 18:40