
I am currently reading through Rudin's Principles of Mathematical Analysis and I am learning about fields and their properties. Note that this is the initial chapter - I am just starting off.

I was wondering which field property enables us to multiply on both sides of an equation and still preserve equality.

There is a very clear proposition stated in the book that gives me this for inequalities: $$ \text{If} \ \ x > 0 \ \ \text{and} \ \ y < z \ \ \text{then} \ \ xy<xz. $$

However, the only proposition that seems useful for this in the case of equalities is stated as an implication, not an equivalence: $$ \text{If} \ x\not= 0 \ \ \text{and} \ \ xy=xz \ \ \text{then} \ \ y=z. $$

Any help would be much appreciated.

2 Answers


This isn't a field property, it's a property of the underlying logical framework within which we're defining fields in the first place.

Specifically, the main property is that if $a=b$ then any sentence involving $a$ is equivalent to the same sentence gotten by replacing some of the $a$s with $b$s; we also use the simpler property that "$=$" is reflexive. From this we can argue:

  • Suppose $a=b$.

  • By reflexivity we have $ma=ma$.

  • Now by the first bulletpoint we can substitute $b$ for the second $a$ in the second bulletpoint, which gives us $$ma=mb$$ as desired.
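
This two-step argument can even be checked mechanically. Here is a minimal Lean 4 sketch (note that only a multiplication operation is assumed, not a full field structure), in which `congrArg` is precisely the substitution step from the first bulletpoint:

```lean
-- From a = b, conclude m * a = m * b by substituting b for a
-- in the reflexivity fact m * a = m * a.
example {F : Type} [Mul F] (m a b : F) (h : a = b) : m * a = m * b :=
  congrArg (fun x => m * x) h
```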


That logical framework is often swept under the rug. Some people find this helpful since it means that they don't have to worry about such "basic" facts and can focus on more interesting stuff. Others find this annoying since hiding assumptions really goes against the whole point of the "axiomatic" turn that the definition of fields is part of in the first place. Personally, I lean on the side of not sweeping this sort of thing under the rug, but that reflects my own logician-y biases.

Aside from basic rules for equality, our logical rules also tell us how to manipulate statements in general. E.g. the fact that you can prove "Every $x$ has property $P$" by introducing an arbitrary $x$ and showing it has property $P$ is just the rule of universal generalization.
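For instance, universal generalization is what licenses the familiar "let $x$ be arbitrary" opening of a proof; in a Lean 4 sketch this is literally a lambda introducing the arbitrary element:

```lean
-- To prove "every natural number equals itself", introduce an
-- arbitrary x and close the goal by reflexivity of equality.
example : ∀ x : Nat, x = x := fun x => rfl
```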

However, there are some subtleties around this logical framework itself. Basically, "naive" mathematical reasoning takes place in second-order (or similar) logic, but that's truly terrible when we actually look at it. First-order logic turns out to be the right way to go, but with a twist: we study (for example) fields within the large first-order theory $\mathsf{ZFC}$, the latter of which serves as a general all-purpose framework for conducting mathematics.

Noah Schweber
  • +1 and I think one should always lean on the side of not sweeping under the rug as far as possible. – Paramanand Singh May 18 '20 at 01:40
  • Thanks so much for your thorough answer. I had a little follow up question regarding the logical framework that you mentioned and the logic in general that supports all of this from the background. Would it be wise to try and learn a bit about this, while learning this abstract algebra necessary for analysis. And if so, do you have any recommendations in terms of the resources that I could use to accomplish this? Thanks in advance. – Luka Duranovic May 18 '20 at 21:58
  • @LukaDuranovic Argh, that's a good question and I'm not sure there's a uniform answer. It all depends on the person learning it. I'd say that what it comes down to is how you feel about formal proof (= step-by-step arguments where each step follows from the previous ones via some explicit set of rules; no "natural language" allowed!). If you find this sort of thing exciting, then seeing all the details will (I think) make you much more confident in what you're doing; if on the other hand you find it boring, and you may reasonably, I think it'd be a better idea to take it for granted – Noah Schweber May 18 '20 at 22:06
  • if you can. And of course the details of your class(es) also matter (maybe you really want to learn the logic stuff but just don't have time). As to a source, I personally swear by chapters $9$ and $10$ of Boolos/Burgess/Jeffrey. Don't be fooled - the previous 8 chapters are not needed for this (although they're quite fun). It's well worth buying a copy in my opinion (although apparently it has a hardback version for about \$100 - don't buy that one, obviously!). – Noah Schweber May 18 '20 at 22:08
  • Interestingly enough, this goes right back to junior-school mathematics!  My teacher felt that rather than blindly following a rule like ‘change the side and change the sign’ and sweeping the details under the carpet, we might benefit from understanding what was actually going on (i.e. adding or subtracting the same number from both sides of an equation).  I suspect that tiny insight really helped me.  And it's exactly the same principle as in this question: if you do the same thing to both sides of an equation, you get another equation. – gidds May 18 '20 at 22:40
  • Noah, I'm going to have to disagree with you on your last paragraph. In practice, mathematicians do not work in second-order logic, but rather many-sorted first-order logic. There is no reason to think of reasoning using the field axioms as somehow being 'inside' the theory of fields, when in fact mathematicians are simply reasoning using the field axioms applied to the specific field's members and operations. If you want to call this Henkin semantics for second-order logic, that's fine, but then it's not "terrible" at all. =) – user21820 Jun 08 '20 at 04:05
  • Maybe you might think of things like topology axiomatization, which is in some sense truly second-order. But that is in isolation. In practice, mathematicians want to reason about arbitrary topologies, so they would want to quantify over topologies, which means that you need to work in either some set theory or third-order logic! In other words, it's never really second-order logic haha.. – user21820 Jun 08 '20 at 04:11
  • @LukaDuranovic: I would say that it is better to learn how to use a practical formal deductive system for first-order logic, such as a Fitch-style system to be able to grasp how mathematical reasoning can be carried out 100% rigorously. It shouldn't take more than a week's effort, but it would bring a lifetime of crystal-clear logical understanding of mathematics, in my opinion. Note that the deductive system has finitely many rules but is complete; if something is logically forced by the axioms then it can be proven. – user21820 Jun 08 '20 at 06:32
  • @LukaDuranovic: To address your inquiry it suffices to look at the =intro/elim (introduction/elimination) rules. The =intro rule allows you to deduce "$E=E$" for any object $E$. In your case, given any elements $x,y$ of a field $(F,·)$, $x·y$ is an object so $x·y = x·y$. The =elim rule basically says that if $E,F$ are objects and you have deduced "$E=F$" and "$P(E)$" where "$P(E)$" is some statement about $E$, then you can deduce "$P(F)$" (i.e. the same statement about $F$). Here $P(t)$ is "$x·y = x·t$". So, under assumption of "$y=z$", you can deduce "$P(y)$" and hence "$P(z)$". – user21820 Jun 08 '20 at 06:57
  • @user21820, do you have any reference recommendations that provide this lifetime of crystal clear logical understanding of mathematics? – Joe Jul 12 '21 at 13:21
  • @Joe: Hi! What I meant in my comment was that once you are familiar with using a deductive system for FOL, you would be able to easily translate any mathematical reasoning into FOL and hence have crystal-clear understanding of the reasoning (i.e. not based on vague intuition with a perpetual uncertainty about the correctness of an argument). If you wish to learn a deductive system, I recommend either my linked system or the one given in "Language, Proof and Logic", and you can find me in this chat-room for more details. – user21820 Jul 12 '21 at 15:40
  • Personally, I noticed this "perpetual uncertainty" in the vast majority of students who are not familiar with a deductive system; they are never 100% sure and seek teachers' verification of their proofs, because they are not even aware that there is a fixed finite set of deductive rules that is sufficient for all mathematics. – user21820 Jul 12 '21 at 15:43
  • @user21820, Thanks! I'll check out that reference. – Joe Jul 12 '21 at 18:03

Well, actually this is due to a field property (but one which is usually in the preamble of the definition and not in the list of axioms): the definition of a field states that multiplication is a map $mult$ taking two elements of the ground set to another element of it.

And one of the inherent properties of maps is that they have a single output for any given input. That means that if $a=b$, then $mult(m,a)$ and $mult(m,b)$ have the same inputs, and thus their outputs $ma$ and $mb$ are the same.
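
To see the contrast, here is a small Python sketch (the function names are mine, purely for illustration) comparing a genuine map with a "random function": equal inputs to a genuine map are guaranteed to give equal outputs, which is exactly the "multiply both sides" step.

```python
import random

def mult(m, x):
    # A genuine map: one output per input pair, so a == b
    # guarantees mult(m, a) == mult(m, b).
    return m * x

def noisy_mult(m, x):
    # Not a map in the mathematical sense: equal inputs
    # need not produce equal outputs.
    return m * x + random.choice([0, 1])

a = b = 3
print(mult(2, a) == mult(2, b))              # always True
print(noisy_mult(2, a) == noisy_mult(2, b))  # may be False on any run
```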

Dirk
  • Yes. In maths this is generally taken for granted, but in most programming languages “functions” actually can give different results when invoked twice with the same argument. The property of not doing that is called referential transparency. – leftaroundabout May 18 '20 at 09:22
  • @leftaroundabout In colloquial language they are referred to as 'pure' functions as well. – orlp May 18 '20 at 09:38
  • Thanks so much for this answer. @leftaroundabout Thanks for your comment. I know what you mean when you're talking about this phenomenon in programming languages. I guess that is why some people like Haskell so much. – Luka Duranovic May 18 '20 at 21:52
  • Is the property of being a map also needed to derive $ab=ab$? – Carsten S May 19 '20 at 06:06
  • @CarstenS No, I don't think so. That's equivalent to $f(x)=f(x)$, and as long as $f$ returns the same thing every time you plug the same thing in (it could be called a "deterministic function"), this holds. – Dirk May 20 '20 at 04:56
  • But because of (logical, see the other answer) properties of the equality relation, saying that $x=y$ implies $f(x)=f(y)$ does not use any properties of $f$ other than $f(x)=f(x)$. And if that was not the case then the notation $f(x)$ would already be fatally flawed. Therefore your answer does not make much sense to me.

    This would be different, of course, if we talked about a defined equivalence relation instead of equality.

    – Carsten S May 20 '20 at 08:52
  • Well, $f$ could be a relation instead of a function and this would change things… To put it quite sloppily: "(deterministic) function: only one output for every input", "relation: a set of outputs for some inputs", and (unusual) "random function: a different output each time invoked". – Dirk May 20 '20 at 09:23