You need to find the greatest common divisor (GCD) or the lowest common multiple (LCM) of two numbers. The GCD of two numbers a and b is the greatest number that divides evenly into both a and b. For example, the GCD of 24 and 60 is 12.

Both of these problems, and many others, can be solved with Euclid's algorithm. We will begin by explaining how to obtain the GCD and then show that the LCM is a simple extension. To compute the GCD naively we could start from the smaller of the two numbers and work our way downwards until we find a number that divides into both of them:

for (int i = Math.min(a, b); i >= 1; i--)
    if (a % i == 0 && b % i == 0)
        return i;

Although this method is fast enough for most applications, a much faster approach is to use Euclid's algorithm. Euclid's algorithm repeatedly reduces the two numbers until a remainder of 0 is found. It is based on the principle that the GCD of two numbers does not change if the smaller number is subtracted from the larger number. For example, suppose we want to find the GCD of 2336 and 1314. We begin by expressing the larger number (2336) in terms of the smaller number (1314) plus a remainder:

2336 = 1314 x 1 + 1022

We now do the same with 1314 and 1022:

1314 = 1022 x 1 + 292

We continue this process until we reach a remainder of 0:

1022 = 292 x 3 + 146

292 = 146 x 2 + 0

The last non-zero remainder is the GCD. So the GCD of 2336 and 1314 is 146. This algorithm can be easily coded as a recursive function:

// assume that a and b are positive
public long GCD(long a, long b) {
    if (b == 0) return a;
    return GCD(b, a % b);
}

Using this algorithm we can also find the lowest common multiple (LCM) of two numbers. For example, the LCM of 6 and 9 is 18, since 18 is the smallest number that is divisible by both 6 and 9. Notice that 18 = (6*9)/3, where 3 = GCD(6,9). In general we can express LCM(a,b) as (a*b)/GCD(a,b). To prevent potential overflow we perform the division before the multiplication in the implementation:

public long LCM(long a, long b) {
    return b / GCD(a, b) * a;
}

Euclid’s algorithm can compute the GCD of large numbers efficiently. In 1844 Gabriel Lamé showed that it never requires more iterations than five times the number of digits h (in base 10) of the smaller number. Since the computational expense of each iteration has order h, the overall complexity grows as h^2.
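Lamé's bound is tight: consecutive Fibonacci numbers are the worst case for Euclid's algorithm. As a quick illustration (a hypothetical helper, not part of the original article), we can count the division steps directly:

```java
// Count how many division steps Euclid's algorithm takes.
public class GcdSteps {
    static int steps(long a, long b) {
        int count = 0;
        while (b != 0) {
            long r = a % b;   // one division step
            a = b;
            b = r;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Consecutive Fibonacci numbers F(20)=6765 and F(19)=4181: worst case
        System.out.println(steps(6765, 4181));  // prints 18
        // The example from the text above finishes much sooner
        System.out.println(steps(2336, 1314));  // prints 4
    }
}
```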

The algorithm has many theoretical and practical applications. It is an important element of the RSA algorithm. It can also be used to solve linear Diophantine equations: equations with integer coefficients of the form ax + by = c. To solve such equations we need a variation of Euclid's algorithm in which we record the quotients floor(a/b) at each step. We can then use these quotients to find integers c1 and d1 such that |b*c1 - a*d1| = 1. If c is not evenly divisible by GCD(a,b) then there are no integer solutions; otherwise there are infinitely many integer solutions, one of which is x0 = (+/-)c*d1, y0 = (+/-)c*c1. Note that the signs depend on the sign of b*c1 - a*d1, as well as the signs of a, b and c. Once we have a single solution (x0, y0), the general solution takes the form x = x0 + b/GCD(a,b)*t, y = y0 - a/GCD(a,b)*t, where t is an arbitrary integer. The following code implements this idea:

// Find solutions to a linear diophantine equation ax+by=c
// where a, b and c are integers
public void solveDiophantine(long a, long b, long c) {
    System.out.println("Attempting to solve: "+a+"x + "+b+"y = "+c);
    // special case
    if (a == 0 && b == 0 && c == 0) {
        System.out.println("x and y can take any value");
        return;
    }
    long a1 = Math.abs(a);
    long b1 = Math.abs(b);
    long c1 = 0, c2 = 1;
    long d1 = 1, d2 = 0;
    while (b1 > 0) {
        long mult = a1 / b1;
        long temp = c2; c2 = c2 * mult + c1; c1 = temp;
        temp = d2; d2 = d2 * mult + d1; d1 = temp;
        temp = b1; b1 = a1 % b1; a1 = temp;
    }
    long gcd = a1;
    if (gcd == 0 || Math.abs(c) % gcd != 0) {
        System.out.println("There are no integer solutions!");
        return;
    }
    long det = c1 * d2 - c2 * d1;
    long signa = a > 0 ? 1 : -1;
    long signb = b > 0 ? 1 : -1;
    long x = -d1 * c * det * signa / gcd;
    long y = c1 * c * det * signb / gcd;
    System.out.println("One solution is x="+x+", y="+y);
    System.out.println("A general solution is x="+x+" + "+b/gcd+"t, y="+y+" - "+a/gcd+"t");
}

For a more detailed description of the above algorithm, see the following:

http://mathworld.wolfram.com/DiophantineEquation.html

http://mathforum.org/library/drmath/view/51595.html

http://www.wikihow.com/Solve-a-Linear-Diophantine-Equation

Now let's try solving CommonMultiples (Div I 250 SRM 346). We are asked to count how many numbers between lower and upper, inclusive, are divisible by every element of a given array; in other words, how many multiples of the LCM of all the elements fall in that range:

public class CommonMultiples {
    public int countCommMult(int[] a, int lower, int upper) {
        long lcm = 1;
        for (int i = 0; i < a.length; i++)
            lcm = LCM(lcm, a[i]);
        return (int)(upper / lcm - (lower - 1) / lcm);
    }
    public long GCD(long a, long b) {
        if (b == 0) return a;
        return GCD(b, a % b);
    }
    public long LCM(long a, long b) {
        return b / GCD(a, b) * a;
    }
}
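As a quick sanity check (a hypothetical demo, not part of the original solution; the helper methods are repeated so the snippet compiles on its own):

```java
// Standalone demo of the CommonMultiples idea: count multiples of the
// array's LCM inside [lower, upper].
public class CommonMultiplesDemo {
    static long GCD(long a, long b) { return b == 0 ? a : GCD(b, a % b); }
    static long LCM(long a, long b) { return b / GCD(a, b) * a; }

    static int countCommMult(int[] a, int lower, int upper) {
        long lcm = 1;
        for (int v : a) lcm = LCM(lcm, v);
        // multiples of lcm up to upper, minus those below lower
        return (int) (upper / lcm - (lower - 1) / lcm);
    }

    public static void main(String[] args) {
        // lcm(2,3) = 6; multiples of 6 in [1,20] are 6, 12, 18
        System.out.println(countCommMult(new int[]{2, 3}, 1, 20));   // prints 3
        // lcm(4,6) = 12; multiples of 12 in [10,50] are 12, 24, 36, 48
        System.out.println(countCommMult(new int[]{4, 6}, 10, 50));  // prints 4
    }
}
```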

Here are some more practice problems to try:

SRM 358 Div I 500 BalanceScale

SRM 427 Div II 500 DesignCalendar

EDIT2: Added a comment to the gcd() that a,b>0. Thanks to syg96

EDIT3: Fixed some bugs in the diophantine code. Should be correct now.

Well, your gcd can return a negative number on negative input. That can be a bad surprise for someone =) Better to add "return abs(a);", to state a, b >= 0 as an input constraint, or at least to mention that your gcd can return a negative result.

Wikipedia is right on complexity. And the proof of O(h^2) is correct there: it assumes that each division costs O(h(l+1)), where l is the number of digits of the quotient, and sums that over all iterations...

Well, I should have written the explanation of extended Euclid.

We want to find any solution (x, y) of equation:

a*x + b*y = gcd(a, b);

Two cases:

1. Let b == 0. Then

x = sgn(a); y = 0; is a solution:

a*sgn(a) + b*0 = a*sgn(a) = abs(a) = gcd(a, 0);

Moreover, we choose positive gcd here.

2. Let b != 0. Then

Divide a/b with remainder:

int q = a / b; (quotient)

int r = a % b; (remainder)

Then it is guaranteed (even for signed a,b !) that:

a = q*b + r;

Ok, now let's call Euclid(b, r, y, x) to solve the smaller problem recursively. It returns gcd(b, r) = gcd(a, b) and fills numbers y and x such that:

b*y + r*x = d, where d = gcd(b, r) = gcd(a, b) (note that x and y are swapped)

Now use r = (a-q*b):

b*y + (a-q*b)*x = d

Now group everything around a and b:

d = b*y + (a-q*b)*x = b*y + a*x - q*b*x = a*x + b*(y-q*x)

This equation tells us that:

nx = x; ny = y - q*x; is a solution for problem a*nx + b*ny = gcd(a, b);

So the problem is solved.

In case I forget the formulas, I just write the equation for recursive call Euclid(b, a-q*b, recx, recy) on paper, group everything around a and b and get new solution.

The swapping of x and y is not necessary. It is just a hack which makes the code a bit shorter =)

So as soon as you understand this algorithm it is relatively easy to code.
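To make the recursion above concrete, here is a sketch in the article's Java (a hypothetical helper; since Java has no reference parameters, it returns {g, x, y} as an array instead of filling x and y in place):

```java
// Extended Euclid following the derivation above.
// Returns {g, x, y} with a*x + b*y = g = gcd(a, b), g >= 0.
// Works for any signed pair except (0, 0).
public class ExtEuclid {
    static long[] euclid(long a, long b) {
        if (b == 0) {
            long x = (a < 0 ? -1 : 1);     // case 1: x = sgn(a), y = 0
            return new long[]{a * x, x, 0};
        }
        long q = a / b;                     // case 2: divide with remainder
        long[] s = euclid(b, a % b);        // solves b*y + r*x = g recursively
        long x = s[2];                      // nx = x (the swap from the text)
        long y = s[1] - q * s[2];           // ny = y - q*x
        return new long[]{s[0], x, y};
    }

    public static void main(String[] args) {
        long[] r = euclid(2336, 1314);
        // 2336*4 + 1314*(-7) = 146 = gcd
        System.out.println(r[0] + " " + r[1] + " " + r[2]);  // prints 146 4 -7
    }
}
```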

About checking Diophantine code for all sorts of weird cases...=)

I set up a training contest with such a problem. The absolute values of a, b, c are up to 10^9 there. You have to output any signed 32-bit integer solution.

If you want to check yourself, I can send you tests + checker over email (stgatilov@gmail.com)


* I avoided the one-line gcd() definition for clarity purposes.

* For the complexity I used the information from Wikipedia. Perhaps Wikipedia is wrong, or I misinterpreted.

* Thanks for the large number gcd. It will be nice to include it.

* I don't understand your extended euclid's algorithm, but I am happy to include it.

* I think my solution to diophantine equations handles the signs of a and b. I am not sure that it deals with all the zero cases correctly. Please let me know if you find a bug in it. Personally, I feel that it is not necessary to include it, because it appears quite infrequently in problems.

int gcd(int a, int b) { return (b ? gcd(b, a % b) : abs(a)); }

This code works well for any signed integers except the pair (0, 0).

Second, I feel that this statement is wrong:

"Since the computational expense of each iteration has order h, the overall complexity grows as h^2."

Actually, each iteration requires division of h-digit numbers so it requires O(h^2) time instead of O(h). In this way you can only prove the overall O(h^3) complexity.

If the division is implemented in a proper way, it can be proved that the algorithm requires O(h^2) time. But the proof is rather difficult and not that obvious.

So let's change this statement just to "With proper division implementation the computational complexity for large numbers is O(h^2)" or something like that.

If we speak of large number gcd, we should include binary algorithm as it is much easier to implement for large numbers. Here is the code for ints:

int gcd(int a, int b) {
    if (a == 0) return b;
    if (b == 0) return a;
    if (a % 2 == 0 && b % 2 == 0) return gcd(a/2, b/2) * 2;
    else if (a % 2 == 0) return gcd(a/2, b);
    else if (b % 2 == 0) return gcd(a, b/2);
    else return (a >= b ? gcd(a-b, b) : gcd(a, b-a));
}

Works for nonnegative numbers only (except zero pair as always). Time is O(h^2).

I think we should make the extended Euclid algorithm clear:

int Euclid(int a, int b, int &x, int &y) {
    if (b == 0) {
        x = (a < 0 ? -1 : 1);
        y = 0;
        return a * x;
    }
    int q = a / b;
    int r = a % b;
    int d = Euclid(b, r, y, x);
    y -= q * x;
    return d;
}

...

int x, y;
g = Euclid(a, b, x, y);

This code returns g = gcd(a, b) and also finds such x and y that:

a*x + b*y = g

It works for any pair of signed numbers (except zero pair). Moreover, the solution it finds is almost minimal in absolute value:

|x| <= |b|/g

|y| <= |a|/g

This experimental fact often allows us not to be afraid of overflow.

The solution for the general diophantine equation involves a lot of cases, such as a==0, b==0, or c==0, plus the signs of a and b, etc. Do we really need to include it? The extended Euclid algorithm code is sufficient to find the reciprocal element modulo M, which is the most common usage.
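That most common usage can be sketched as follows (a hypothetical helper in the article's Java; assumes a, m > 0 and gcd(a, m) == 1, so the inverse exists):

```java
// Modular inverse of a modulo m via extended Euclid.
// Assumes a, m > 0 and gcd(a, m) == 1.
public class ModInverse {
    // returns {g, x, y} with a*x + b*y = g = gcd(a, b)
    static long[] euclid(long a, long b) {
        if (b == 0) return new long[]{a, 1, 0};
        long[] s = euclid(b, a % b);
        return new long[]{s[0], s[2], s[1] - (a / b) * s[2]};
    }

    static long inverse(long a, long m) {
        long x = euclid(a, m)[1];    // a*x + m*y = 1, so a*x = 1 (mod m)
        return ((x % m) + m) % m;    // normalize into [0, m)
    }

    public static void main(String[] args) {
        System.out.println(inverse(3, 7));   // 3*5 = 15 = 1 (mod 7): prints 5
    }
}
```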