21

Shor's algorithm is expected to let us factor integers far larger than is feasible on modern classical computers.

So far, only small integers have been factored. For example, this paper discusses factoring $15 = 5 \times 3$.

What is the current state of the art in this respect? Is there a recent paper reporting that larger numbers have been factored?
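To make the question concrete: once the quantum part of Shor's algorithm has produced the period $r$ of $a^x \bmod N$, the factors fall out classically. Here is a minimal sketch for $N = 15$, with the period found by brute force classically (for illustration only; the quantum computer's job is exactly this period-finding step):

```python
from math import gcd

def order(a, N):
    """Multiplicative order of a mod N, found by brute force here;
    finding this period is the step Shor's algorithm does quantumly."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_postprocess(a, N):
    """Classical post-processing: given the period r of a^x mod N,
    try to split N via gcd(a^(r/2) +/- 1, N)."""
    r = order(a, N)
    if r % 2 == 1:
        return None  # odd period: this base fails
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None  # trivial square root of 1 mod N: this base fails
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_postprocess(7, 15))  # period of 7 mod 15 is 4 -> (3, 5)
```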

Sanchayan Dutta
Lying Dancer

3 Answers

15

The prime factorization of 21 ($7 \times 3$) appears to be the largest achieved to date with Shor's algorithm; it was performed in 2012, as detailed in this paper. Note, however, that much larger numbers, such as 56,153 in 2014, have been factored using a minimization algorithm, as detailed here. For a convenient reference, see Table 5 of this paper:

$$
\begin{array}{c}
\textbf{Table 5:}~\text{Quantum factorization records} \\
\hline
\small{
\begin{array}{cccccc}
\text{Number} & \text{# of factors} & \begin{array}{c}\text{# of qubits} \\ \text{needed}\end{array} & \text{Algorithm} & \begin{array}{c}\text{Year} \\ \text{implemented}\end{array} & \begin{array}{c}\text{Implemented} \\ \text{without prior} \\ \text{knowledge of} \\ \text{solution}\end{array} \\
\hline
15 & 2 & 8 & \text{Shor} & 2001~\left[2\right] & \chi \\
 & 2 & 8 & \text{Shor} & 2007~\left[3\right] & \chi \\
 & 2 & 8 & \text{Shor} & 2007~\left[3\right] & \chi \\
 & 2 & 8 & \text{Shor} & 2009~\left[5\right] & \chi \\
 & 2 & 8 & \text{Shor} & 2012~\left[6\right] & \chi \\
21 & 2 & 10 & \text{Shor} & 2012~\left[7\right] & \chi \\
143 & 2 & 4 & \text{minimization} & 2012~\left[1\right] & \checkmark \\
56153 & 2 & 4 & \text{minimization} & 2012~\left[1\right] & \checkmark \\
\hline
291311 & 2 & 6 & \text{minimization} & \text{not yet} & \checkmark \\
175 & 3 & 3 & \text{minimization} & \text{not yet} & \checkmark
\end{array}}
\end{array}
$$
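To see why so few qubits suffice in the minimization approach, here is a toy sketch for $N = 143 = 11 \times 13$. Both factors are 4-bit odd numbers, so their leading and trailing bits are fixed at 1, leaving only 4 unknown bits; minimizing $(N - pq)^2$ over those bits recovers the factors. (This simplified encoding is my own illustration of the idea, not the exact equation system of reference [1].)

```python
from itertools import product

N = 143  # = 11 x 13; both factors are 4-bit odd numbers

def cost(p1, p2, q1, q2):
    # Encode p = (1 p2 p1 1)_2 and q = (1 q2 q1 1)_2: leading and
    # trailing bits are fixed, leaving 4 unknown bits -- hence the
    # "4 qubits" entry in the table above.
    p = 8 + 4 * p2 + 2 * p1 + 1
    q = 8 + 4 * q2 + 2 * q1 + 1
    return (N - p * q) ** 2

# Exhaustive minimization over the 2^4 = 16 assignments; an annealer
# would instead minimize the same cost function over 4 qubits.
best = min(product((0, 1), repeat=4), key=lambda bits: cost(*bits))
p = 8 + 4 * best[1] + 2 * best[0] + 1
q = 8 + 4 * best[3] + 2 * best[2] + 1
print(sorted([p, q]), p * q)  # [11, 13] 143
```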

auden
  • @SqueamishOssifrage: Where does it say the minimization algorithm is "limited to numbers whose factors have known relations making the search space much smaller, such as differing in only a few bit positions or differing in all but a few positions"? – user1271772 No more free time May 20 '18 at 21:35
  • @user1271772 As I understand it, the technique relies on reducing the problem to require only a tractable number of qubits by eliminating variables by known relations between the bits of the factors. Though the number of qubits to factor $N$ may scale with only $O(\log^2 N)$, none of the papers I read seemed to make any attempt to estimate the growth of time to solution as a function of the number of qubits or of $\log N$. – Squeamish Ossifrage May 22 '18 at 20:42
  • @SqueamishOssifrage: "by eliminating variables by known relations between the bits of the factors" Would you agree that Eq. 1 of https://arxiv.org/pdf/1411.6758.pdf implies that $z_{12} = 0$, *without* any "known" relation between the bits? Would you agree that you can deduce that $z_{12} = 0$ for arbitrary $p_1, p_2, q_1, q_2$? Next: The number of variables (qubits) in the table method is $\log(N)$, not $\log^2 N$. The problem can be solved on an annealer with $\log(N)$ qubits if arbitrary 4-qubit interactions are allowed. If only 2-qubit interactions are allowed, you need $\log^2 N$. – user1271772 No more free time May 22 '18 at 23:55
  • @SqueamishOssifrage: "none of the papers I read seemed to make any attempt to estimate the growth of time to solution as a function of the number of qubits". This one made an attempt: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.101.220405

    But "time to solution" is not what's important; it is the effort required. GNFS sieving is easy, but the matrix step is horribly cumbersome. Performing Shor's algorithm in a reasonably optimal way is cumbersome. The minimization algorithm is simple.

    – user1271772 No more free time May 23 '18 at 00:29
  • @SqueamishOssifrage: Finally: "Note that the minimization algorithm is limited to numbers whose factors have known relations" .. no part of the algorithm is limited to "known" relations. The algorithm does not assume anything about the factors. No relations. The bits are all unknown variables that are determined by minimization. The minimization can be done with fewer qubits for some numbers than others. The same is true for Shor's algorithm. The same is true for GNFS. In fact if the number you want to factor is even, it is rather easy to factor it. – user1271772 No more free time May 23 '18 at 00:31
  • @user1271772 Sorry, you're right, it is not limited to factors with known relations; I retract my earlier comment about those limitations. I'm well aware that time is not ‘the important metric’—certainly cryptanalysis is full of space/time tradeoffs including simply dedicating silicon die area to many cores in parallel—but time figures into cost because to power your {CPU, refrigerator, computing slave} for two hours probably costs about twice as much as it costs to power it for one hour. Not sure what you mean by ‘cumbersome’, though—that's not a cost metric I know! – Squeamish Ossifrage May 23 '18 at 03:46
  • @SqueamishOssifrage: This is actually just the word used by Paul Zimmermann and the rest of the team that wrote the report on factoring RSA-768 in 2009. They wrote that the matrix step is cumbersome. They say that sieving is much more "relaxing" in that they can just let the computers run without doing anything, whereas the matrix step involves a lot more human work. This is just what they said. Yes, power costs a lot, but D-Wave uses orders of magnitude less power than any of the supercomputers on the Top500 or Green500 (most power efficient) list. – user1271772 No more free time May 23 '18 at 03:57
6

For Shor's algorithm: the state of the art is still 15. In order to "factor" 21 in the paper Heather mentions, they had to use the fact that $21 = 7 \times 3$ to choose their base $a$. This was explained in the 2013 paper Pretending to factor numbers on a quantum computer, later published by Nature under a slightly friendlier title. The quantum computer did not factor 21; it verified that the factors 7 and 3 are indeed correct.
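The role of the base choice is easy to check classically. For $N = 21$, a sketch like the following (my own illustration, not code from the paper) enumerates which bases $a$ yield an even period $r$ with $a^{r/2} \not\equiv -1 \pmod N$, i.e. which bases make the $\gcd$ step succeed. Picking a "good" base deliberately rather than at random is where prior knowledge of the factors can sneak in:

```python
from math import gcd

N = 21

def order(a, n):
    """Multiplicative order of a mod n, by brute force."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

good, bad = [], []
for a in range(2, N):
    if gcd(a, N) != 1:
        continue  # sharing a factor with N already reveals a factor
    r = order(a, N)
    if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
        good.append(a)  # gcd(a^(r/2) +/- 1, N) splits N for this base
    else:
        bad.append(a)   # odd period, or trivial square root of 1

print("good bases:", good)  # [2, 8, 10, 11, 13, 19]
print("bad bases:", bad)    # [4, 5, 16, 17, 20]
```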

For the annealing algorithm: the state of the art is 376289. But we do not know how this will scale. A very crude upper bound on the number of qubits needed to factor RSA-230 is 5.5 billion qubits (though this can be brought down significantly by better compilers), while Shor's algorithm can do it with 381 qubits.

  • 1
    You'll notice that in the table in my answer there's a column for "implemented without prior knowledge of solution"; there's an "x" for all Shor's algorithm implementations, leading me to believe something similar is true for factoring 15. – auden May 20 '18 at 22:09
5

The size of the factored number is not a good measure of the complexity of the factoring problem, nor, correspondingly, of the power of a quantum algorithm. The relevant measure is rather the period of the function that appears in the algorithm.

This is discussed in J. Smolin, G. Smith, A. Vargo: Pretending to factor large numbers on a quantum computer, Nature 499, 163-165 (2013). In particular, the authors also give an example of a number with 20000 binary digits which can be factored with a two-qubit quantum computer, with exactly the same implementation that had been used previously to factor other numbers.

It should be noted that the "manual simplifications" the authors perform to arrive at this quantum algorithm are something that has also been done, e.g., for the original experiment factoring 15.

Norbert Schuch