11

Why does the Euclidean algorithm always terminate? Can we make this effective by bounding the number of steps it takes in terms of the original integers?

Bill Dubuque
  • 272,048
  • 7
    A trivial bound: the number of steps is $\leq \max(a,b)$ when you compute $\gcd(a,b)$. – Willie Wong Jul 07 '16 at 17:28
  • 1
    Can you state your current mathematical level? If you're advanced enough, you could understand The Art of Computer Programming, Vol. 2, which has a section on this. (It's beyond my current level though.) – Saikat Jul 08 '16 at 09:44
  • @Willie Actually the maximum number of steps before the algorithm terminates at $0$ is strictly less than $\max(a, b)$; it will never equal $\max(a, b)$. – Michael Munta Mar 12 '19 at 22:15

6 Answers

25

It always terminates because at each step one of the two arguments to $\gcd({}\mathbin\cdot{},{}\mathbin\cdot{})$ gets smaller, and at the next step the other one gets smaller. You can't keep getting smaller positive integers forever; that is the "well ordering" of the natural numbers. As long as neither of the two arguments is $0$ you can take it one more step, but it can't go on forever, so you have to reach a point where one of them is $0$, and then it stops.

As for bounds, a very crude and easily established upper bound on the number of steps is the sum of the two arguments. One of the arguments is reduced by at least $1$ at each step, and you can't reduce $n$ repeatedly by $1$ more than $n$ times without bringing it to $0$.

The worst case is $\gcd(m,n)$ where the ratio of $m$ to $n$ is the ratio of two consecutive Fibonacci numbers. For now I'll leave the proof of that as an exercise.
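The step count and the Fibonacci worst case can be spot-checked with a short Python sketch (the function and variable names here are mine, not part of the answer):

```python
def gcd_steps(a, b):
    """Euclidean algorithm by repeated division; returns (gcd, number of steps)."""
    steps = 0
    while b != 0:
        a, b = b, a % b   # each step strictly shrinks one entry of the pair
        steps += 1
    return a, steps

# Consecutive Fibonacci numbers force every quotient to be 1,
# so the step count grows as fast as possible relative to the input size.
fib = [1, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])
```

For example, $\gcd(F_{20},F_{19})=\gcd(6765,4181)$ takes $18$ division steps, far below the crude additive bound but as slow as it gets for inputs of that size.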

  • 4
    Sometimes people write "The function $f(\cdot)$ is$\ldots$" where I would write "The function $f$ is$\ldots$" since I don't see any good purpose in the former notation. This time I thought it served a purpose. I don't recall ever doing that before. $\qquad$ – Michael Hardy Jul 07 '16 at 17:31
  • The infinite sequence $(1,2),(2,1),(1,2),(2,1),(1,2),(2,1),\ldots$ is such that "at each step one of the two" entries "gets smaller, and at the next step the other gets smaller", so that argument doesn't quite work. –  Jul 07 '16 at 20:48
  • 3
    @RickyDemer Add "while the other stays the same". – Maja Piechotka Jul 07 '16 at 21:11
  • @Ricky Sure, but it does work under the implicit assumption that, for any $(a, b)$, we choose $a > b$. (To formalise that assumption: in any instance $a > b$, and when we take the remainder $a \bmod b = c$, we know that $c < b$. So the algorithm takes the next pair to be $\{b, c\}$; arbitrarily pick the order $(b, c)$.) – David E Jul 07 '16 at 21:12
  • I don't think this is the complete answer. The Euclidean algorithm also works for any two positive rational numbers, and the positive rational numbers are not well ordered by distance to zero. – Taemyr Jul 07 '16 at 22:33
  • 2
    @Taemyr : If one does what I suspect you mean in applying the algorithm to a pair of rational numbers, then both such numbers are integer multiples of the gcd that you get at the end, and that is also true of all of the pairs along the way. So you still get decreasing sequences of positive integers, and so the argument still applies. And at any rate, I don't think that what you say is a real reason to complain of incompleteness in the answer, since Euclid's algorithm is an algorithm for finding the gcd of a pair of integers. $\qquad$ – Michael Hardy Jul 08 '16 at 03:30
  • 1
    @RickyDemer : One must assume the reader has sufficient familiarity with Euclid's algorithm to know that neither of the two components of the pair ever gets bigger. $\qquad$ – Michael Hardy Jul 08 '16 at 03:41
  • 1
    @MichaelHardy If you don't mind, can you work out your exercise? I was planning to edit the question to ask why consecutive Fibonacci numbers are the worst case for the Euclidean algorithm, since I've heard it many times but have never succeeded in proving it. Since you've already mentioned it, you could prove it too. – Saikat Jul 08 '16 at 07:40
  • @MichaelHardy: I'm possibly just confused about the argument you're making re: rational numbers, but it sounds a bit like you're begging the question when you refer to the gcd you get "at the end" in your proof that the process terminates... – Ben Millwood Jul 08 '16 at 13:15
  • @BenMillwood : Here is what I surmise "Taemyr" had in mind: Suppose the two rational numbers are $8/3$ and $1/5$. The smaller one, $1/5$, goes into the larger one, $8/3$, $13$ times, leaving a remainder of $1/15$. Now we're working with the pair $(1/5,\,1/15)$. The smaller one goes into the larger one $3$ times, leaving a remainder of $0$. So we're done and the gcd is $1/15$. Now notice that the ratio of $8/3$ to $1/5$ simplifies to the ratio of $40$ to $3$, so we apply Euclid's algorithm to that pair and we get the same sequence of quotients, and it must terminate. $\qquad$ – Michael Hardy Jul 08 '16 at 19:55
  • Wouldn't it be easier to say that the remainders you get after each step are a strictly decreasing sequence of nonnegative integers? – bof Oct 07 '16 at 05:41
10

This is a case of "infinite descent".

An iteration of the algorithm transforms a pair $(a,b)$ with $a>b\ge0$ into another pair $(a',b')$ with $a'>b'\ge0$, and also $a'<a$ (not merely $a'\le a$). So each step unavoidably transforms the problem into another problem of the same type with strictly smaller arguments, and you reach $0$ after a finite number of steps.

For a discussion of the number of steps, see https://en.wikipedia.org/wiki/Euclidean_algorithm#Number_of_steps.
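The descent can be made concrete with a small Python check (the name is mine): the first entries of the successive pairs form a strictly decreasing sequence of nonnegative integers.

```python
def first_entries(a, b):
    """Record the first entry of each successive pair (a, b) with a > b >= 0."""
    firsts = [a]
    while b != 0:
        a, b = b, a % b   # the new pair (a', b') = (b, a mod b) satisfies a' < a
        firsts.append(a)
    return firsts
```

For instance, starting from $(1071, 462)$ the first entries are $1071, 462, 147, 21$, and the algorithm stops with gcd $21$.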

6

Yes, there is a bound; it is used in computational mathematics. If $a$ and $b$ are integers with $a\ge b\ge1$ and the Euclidean algorithm requires $n+1$ division steps, then by Lamé's theorem $n+1$ is at most five times the number of decimal digits of $b$, i.e. roughly $5\log_{10}b$. Moreover, the algorithm must terminate in a finite number of steps, because each step produces a remainder strictly smaller than its predecessor. If it did not terminate, the set of all remainders would be a set of natural numbers with no minimal element, which is absurd, since $\mathbb N$ is well ordered.
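A quick numerical spot-check of this digit bound in Python (a brute-force sketch of mine, not a proof):

```python
def step_count(a, b):
    """Number of division steps the Euclidean algorithm takes for a >= b >= 1."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

# Verify steps <= 5 * (number of decimal digits of the smaller argument)
# for all pairs with 1 <= b <= a < 200.
bound_holds = all(
    step_count(a, b) <= 5 * len(str(b))
    for a in range(1, 200)
    for b in range(1, a + 1)
)
```

Consecutive Fibonacci numbers come close to the bound: the pair $(144, 89)$ takes exactly $10 = 5\cdot2$ steps, where $89$ has $2$ digits.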

Marco Lecci
  • 1,023
4

If you take two steps of the Euclidean algorithm, you more than halve the larger number.

$$\begin{align} a,b &\to b, c \;(\equiv a \bmod b)\\[3ex] b\le a/2 &\implies c<b \le a/2 \\ b>a/2 &\implies c=a-b < a/2\\[3ex] b,c &\to c,d\;(<c) \quad\square \end{align}$$

So the process terminates in at most $2\log_2 a$ steps.
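The two-step halving claim can be brute-force checked in Python (a sketch of mine): after two steps the larger entry $c$ satisfies $2c < a$.

```python
def more_than_halved(a, b):
    """Check that two steps of (a, b) -> (b, a % b) more than halve the larger entry."""
    c = a % b                 # after one step the pair is (b, c)
    if c == 0:
        return True           # the algorithm terminated within two steps
    # after the second step the pair is (c, b % c); its larger entry is c
    return 2 * c < a

all_halved = all(more_than_halved(a, b) for a in range(2, 400) for b in range(1, a))
```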

Joffan
  • 39,627
  • I am confused, as you have shown in a single step, rather than two. – jiten Mar 29 '18 at 23:30
  • 1
    @jiten There are two steps $(a,b)\to (b,c)\to (c,d)$ in order to show that the larger of the pair of numbers is reducing by a factor of two. – Joffan Mar 29 '18 at 23:45
2

Consider the following decreasing measure of the algorithm: the quantity $a+b$ (both entries stay nonnegative). Assume $a>b$. The subtractive form of the algorithm can be defined as follows: replace $(a,b)$ with $(a-b,b)$ if $a-b>b$, and with $(b,a-b)$ if $a-b\leq b$. When one of the elements is zero, return the other element. In this procedure $a+b$ strictly decreases at every step, but it is bounded below by $1$; thus the algorithm terminates.
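A sketch of this subtractive form in Python (mine, not the answerer's); the assertion checks that $a+b$ strictly decreases at every step:

```python
def gcd_subtractive(a, b):
    """Subtraction-only Euclidean algorithm for positive integers a, b."""
    while a and b:
        before = a + b
        if a > b:
            a -= b
        else:
            b -= a
        assert a + b < before   # the measure a + b strictly decreases
    return a or b               # the remaining nonzero element is the gcd
```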

-1

If you take $\gcd(x, 0)$ by naively dividing by $0$, the process doesn't terminate, in effect because the remainder is always $r=x$.

This causes quite the show if you're using a physical adding machine

https://www.alltechbuzz.net/mechanical-calculator-dividing-by-zero/

(Link shows adding machine stuck in an endless loop)

So if the process never ends, you would conclude that one of the integers in the Euclidean algorithm must have been $0$.

  • 6
    For many standard versions of the algorithm it terminates trivially when you try to take $\gcd(x,0)$. – Steven Stadnicki Jul 07 '16 at 18:58
  • 2
    If Euclid's algorithm is applied properly, it does terminate if given that input, and when given any other input, it eventually reaches $(x,0)$ and that is where it terminates. – Michael Hardy Jul 07 '16 at 19:06
  • 3
    $\ldots,$although one should add that as originally applied by Euclid, the number $0$ will not be reached because Euclid did not have any such number. To Euclid, the numbers were $2,3,4,5,\ldots. \qquad$ – Michael Hardy Jul 07 '16 at 19:08
  • That's true; the difference being that $(x,0)$ in the normal process occurs when $r=0$, whereas if $r=x$ the process continues forever. – Phillip Hamilton Jul 07 '16 at 19:51