
I am trying to determine the primality of really big numbers n with ~25000 digits, knowing that n-1 = AB(B-x), where A, B and x are known. x is not constant, meaning I want to test the primality of a ton of numbers of the same form.

The problem is that A and B are also big numbers (~8000 digits), and the only theorem I found that could help me was

Theorem 2: Suppose n-1 = FR, where F > R, gcd(F,R) = 1, and the factorization of F is known. If for every prime factor q of F there is an integer a > 1 such that

a^(n-1) = 1 (mod n), and
gcd(a^[(n-1)/q] - 1, n) = 1,

then n is prime.

which looks to me like it will take ages to run on any machine, since I would have to test every a with 1 < a < n.

The question is: is there some FAST way to determine the primality of n knowing the factors of n-1, or at least a way to guess primality with a small chance of error?

Right now I have a list with 150000 values of x, so I would like to test ~150000 numbers of the form AB(B-x) + 1 for primality.

Link Theorem page: https://primes.utm.edu/prove/prove3_1.html#Pocklington
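For concreteness, here is how I understand the theorem as a check in Python, using a single base a for all factors (the function name is my own; `qs` are the distinct prime factors of F):

```python
from math import gcd

def pocklington_check(n, qs, a):
    """Check Theorem 2 for one base a, where qs holds the distinct
    prime factors of F (here F = A*B*(B-x)).  True means n is proved
    prime; False is inconclusive for this a (or n is composite)."""
    if pow(a, n - 1, n) != 1:
        return False  # Fermat condition fails
    # The gcd condition must hold for every prime factor q of F.
    return all(gcd(pow(a, (n - 1) // q, n) - 1, n) == 1 for q in qs)
```

For example, with n = 13 and F = 4 (so qs = [2]), the base a = 2 succeeds, while a = 3 is inconclusive because 3 has small order mod 13.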

  • If $n$ IS prime then you will find a suitable $a$ pretty fast. I would suggest starting with the strong-pseudoprime test with, let's say, $20$ random bases coprime to $n$. If $n$ does not pass this test, it must be composite. If it passes the test, it is very probably prime and you can try to prove the primality with your test. – Peter Dec 01 '16 at 13:31
  • I already used a sieve up to 1000000000 to remove numbers with small factors. Also, a Miller–Rabin (the fastest?) primality test takes 3 min, so having to do it 150000 times is a no-no... – BarriaKarl Dec 01 '16 at 16:34

1 Answer


I'm not sure what your machine is, but my i5 laptop can do a Miller-Rabin test on a 25k digit number in a little over 30 seconds, using GMP code. There are simple ways to access this in Python, Perl, or Pari/GP among others if you don't want to write C. The RosettaCode page has examples in many languages. Admittedly 30 seconds vs. 3 minutes may still leave it too impractical.
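As a reference point for what such a test does, a strong-pseudoprime (Miller–Rabin) test is only a few lines of pure Python using the built-in three-argument pow. This is just a sketch, not the GMP-backed code, so it will be slower than GMP at 25k digits:

```python
import random

def miller_rabin(n, rounds=20):
    """Miller-Rabin test with random bases.
    False: n is definitely composite.  True: n is a probable prime."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n-1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

With a prime input this always returns True; a composite slips through a single round with probability at most 1/4, so 20 rounds make errors vanishingly rare.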

I would also recommend looking at PFGW, as it is much faster than GMP once past 4k or so digits. It runs only a simple Fermat test by default, but that's good enough to weed out almost all composites at this size. The remainder you can send through a good test (e.g. BPSW) with another tool.

There are improvements to Pocklington, most shown in BLS (1975). They require less factoring, but if you have enough already there isn't much point, other than noting you can use generalized Pocklington so you don't need a single 'a' value for all 'q' values. You'd want to run a compositeness test (e.g. Fermat, Miller-Rabin, Euler, Euler-Plumb, BPSW, Frobenius variant, etc.) first. If the number passed BPSW or a good Frobenius test, then it's almost certainly prime, so your goal with the theorems is merely to find a set of acceptable 'a' values. This is typically pretty fast (as Peter points out).
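A sketch of that generalized form in Python: search for a (possibly different) base for each prime factor q, rather than one base that works for all of them. The function name and the small search cutoff are illustrative choices, not from any library:

```python
from math import gcd

def witnesses_per_factor(n, qs, max_base=200):
    """Generalized Pocklington: find a base a for each prime factor q
    of F, where each q may get a different a.  Returns {q: a} on
    success, or None if n fails the Fermat condition (composite) or
    some q has no witness up to max_base (inconclusive)."""
    found = {}
    for q in qs:
        for a in range(2, max_base):
            if pow(a, n - 1, n) != 1:
                return None  # n is composite
            if gcd(pow(a, (n - 1) // q, n) - 1, n) == 1:
                found[q] = a  # a works for this q; move on
                break
        else:
            return None  # search exhausted for this q
    return found
```

For example, with n = 31 and F = 15 (qs = [3, 5]), the base 2 fails for q = 3 but works for q = 5, while 3 works for q = 3, illustrating why per-factor bases help.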

PFGW does have some ability to run an n-1 test, and has a way to provide factors. It doesn't seem to return any result other than pass/fail, however. Perl/ntheory can do these tests as well, including giving certificates, but it doesn't have a way to hand in known factors, so it probably won't work here.

DanaJ
  • I see. I am using Java's built-in .isProbablePrime(1) to test primality. I will check your suggestions, because any optimization is welcome, thanks. – BarriaKarl Dec 09 '16 at 12:56