
I'm reading Nielsen and Chuang. On page 142 they introduce the decision version of integer factoring: the factorization problem can be reduced to the following decision problem:

Given a composite integer m and l < m, does m have a non-trivial factor less than l?

For a given witness x < l this problem can be checked in polynomial time. Now I'm asking myself why it should not be possible to use the checking algorithm to construct a polynomial-time algorithm that decides the decision problem. I'm thinking of something like this:

for x < l
  if check(x, l, m)
     return true
return false
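To make the idea concrete, here is a runnable version of that loop (assuming the "check" for a witness x is simply a divisibility test; the function name is mine):

```python
def has_small_factor(m, l):
    """Decide: does the composite m have a non-trivial factor less than l?"""
    for x in range(2, l):
        if m % x == 0:  # the polynomial-time check for witness x
            return True
    return False

print(has_small_factor(15, 5))  # True: 3 divides 15
print(has_small_factor(15, 3))  # False: the smallest factor of 15 is 3
```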

This is basically just trying out all potential witnesses, of which there are at most m. So the total time needed should still be polynomial, I think, because m times the polynomial time of the checking algorithm should still be polynomial.

I know something about this reasoning must be wrong, but I'm completely stuck here.

Opinel

1 Answer


I think I found the solution myself: this is called pseudo-polynomial time. The running time is polynomial in the value of the number, but not in its length (the number of bits).
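To see the gap between value and length, a small illustration: the input size is the bit length of m, but the loop runs up to roughly m times, which is exponential in that bit length.

```python
m = 10**18 + 9           # an 18-digit number
bits = m.bit_length()    # the actual input length in bits
print(bits)              # 60

# The trial loop performs up to ~m iterations, i.e. about 2**bits steps:
print(2**bits > 10**18)  # True: exponential in the input length
```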

Opinel
    Indeed, integer division takes time polynomial in the value of the dividend and divisor, so by exhaustive division, factoring takes polynomial time in the value of the number to factor. –  Jan 19 '23 at 11:01
  • Actually, even faster than linear: Pollard's rho (no longer the fastest method), together with a fast primality tester, can factor a number around n in O(n^(1/4)). Unfortunately, for an n-bit number that is about 2^(n/4). – gnasher729 Feb 18 '23 at 00:45
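For reference, a minimal sketch of the Pollard-rho factoring method mentioned in the comment, in the standard textbook form (randomized, heuristic running time around n^(1/4) for a number near n):

```python
import math
import random

def pollard_rho(n):
    """Return a non-trivial factor of a composite n (textbook Pollard-rho sketch)."""
    if n % 2 == 0:
        return 2
    while True:
        # Random starting point and random constant for the iteration x -> x^2 + c mod n
        x = random.randrange(2, n)
        y, c, d = x, random.randrange(1, n), 1
        while d == 1:
            x = (x * x + c) % n            # tortoise: one step
            y = (y * y + c) % n            # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                         # d == n means this attempt failed; retry
            return d

print(pollard_rho(8051) in (83, 97))  # True: 8051 = 83 * 97
```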