
In his paper "The Argument against Quantum Computers, the Quantum Laws of Nature, and Google’s Supremacy Claims", Gil Kalai argues that quantum advantage will never be achieved. For NISQ devices in particular, he argues that for a large variety of noise models, the correlation between the ideal output distribution and the noisy one converges to $0$, meaning that the results are effectively unusable.
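
To give a rough sense of the kind of behaviour he describes (this is only a toy depolarizing-noise calculation, not Kalai's actual noise model, and the error rate below is a made-up placeholder):

```python
# Toy illustration only (hypothetical error rate, not Kalai's actual noise model):
# with independent depolarizing noise of strength p per operation, the fidelity
# between the ideal and noisy output distributions of a random circuit decays
# roughly like (1 - p)**n_ops, so it tends to 0 as circuits grow.
def approx_fidelity(p: float, n_ops: int) -> float:
    return (1.0 - p) ** n_ops

for n_ops in (100, 1_000, 10_000):
    print(f"{n_ops:>6} noisy ops at p=0.1%: fidelity ~ {approx_fidelity(1e-3, n_ops):.3g}")
```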

A common counter-argument is the threshold theorem, which states that if the noise is below an acceptable level, we can error-correct a quantum computer. Gil Kalai however argues that:

At the center of my analysis is a computational complexity argument stating that $\gamma<\delta$

where

  • $\gamma$ is the rate of noise required for quantum advantage, and
  • $\delta$ is the rate of noise that can realistically be achieved.

Thus, Gil Kalai states that the threshold theorem will never be applicable in practice: the level of noise in NISQ devices will always be higher than the aforementioned threshold.

However, last year, the Google Quantum AI team published "Suppressing quantum errors by scaling a surface code logical qubit", where they show, from my understanding, that they managed to perform error correction at threshold, meaning that the error-correction procedure does not introduce more errors than it corrects.

Is this paper enough to invalidate all of Gil Kalai's arguments? For instance, does the claim that NISQ-generated distributions can be approximated by low-degree polynomials still hold, or is it tied to the previous argument and thus rendered void?

I don't think there has been a follow-up by Gil Kalai on this paper, though I may have missed it.

Tristan Nemoz
  • 6,137
  • 2
  • 9
  • 32
  • 2
    Nice question, Tristan! :) – Gil Kalai Feb 15 '23 at 19:08
  • 1
    @GilKalai Getting a first-hand answer is definitely on my Stack Exchange checklist, don't hesitate to clarify the question or to add an answer! :D – Tristan Nemoz Feb 15 '23 at 19:57
  • 2
    Tristan, this question is better answered by others, and I am very interested by colleagues' view. (But I will keep in mind your checklist for the future. :) ) – Gil Kalai Feb 15 '23 at 21:04
  • 1
    @GilKalai That's too bad, but I understand. If you ever do a follow-up on your paper based on recent results, please feel free to post here, or to ping me! – Tristan Nemoz Feb 16 '23 at 08:03
  • I have a hard time justifying quantum computers when we can do things just as fast classically. Bhattacharya et al. (2002): "We report on an experiment on Grover's quantum search algorithm showing that classical waves can search a N-item database as efficiently as quantum mechanics can." As Arnold Neumaier pointed out, George Stokes’ description of a polarized quasi-monochromatic beam of classical light behaves exactly like a modern quantum bit. It appears that all quantum systems can be simulated by classical electromagnetic waves. However, a practical realization of this would be difficult. –  Mar 20 '23 at 17:57

3 Answers


Note: views are my own.

I think experiments of this type will refute Gil's arguments, but I would be uncomfortable claiming that yet. I like nice clear don't-even-really-need-statistics answers to questions like this, so personally I'll be waiting for one-in-a-trillion error rates on complex states before saying the physical-noise-is-conspiring position has become indefensible.

Kind of related: I did a Twitter poll intended to gauge where people thought quantum computers would break first, if they weren't possible. I don't know what proportion of the answers are from experts vs. lay people, what proportion are jokes, etc. (standard disclaimers apply), but the results at least suggest that most people think a storage experiment isn't enough to detect foundational problems.

[Image: Twitter poll results]

Craig Gidney
  • 36,389
  • 1
  • 29
  • 95
  • 1
    This is a nice answer, Craig! I'll admit that my argument is refuted much before "one-in-a-trillion error rates on complex states," is demonstrated. In fact, if you show me (physically or logically) a CNOT gate with one-in-a-trillion error rate this will convince me that my argument is invalid. If you are talking about complex quantum states on 30 qubits, I'd be quite satisfied with one-in-a-thousand error rate. – Gil Kalai Feb 15 '23 at 20:01
  • 1
    Regarding the poll, it poses a nice (but for me a little vague) question, and I have two questions about it and a two remark (in a subsequent comment). The first question is what the phrase "even with error correction" means. It seems that very stable logical qubits via error correction amounts to "qubit storage". At least I am not sure I understand what "qubit storage precisely means. The second question is what precisely the third answer "distillation (magic/EPR)" refers to? – Gil Kalai Feb 15 '23 at 20:04
  • 1
    Two remarks about Craig's poll. The first remark is that my argument states that good quantum error correcting codes cannot be built. (So it goes against the premise "even with error correction"). The second remark is that my argument does not assumes or claims that "quantum mechanics is wrong". It is perfectly OK to speculate also about such a scenario but it is not part of my argument which (as far as I can see) lies strictly within quantum mechanics. – Gil Kalai Feb 15 '23 at 20:05
  • 1
    @GilKalai You'd be convinced by a distance 7+ surface code experiencing less than 1 error per d*1000 rounds? I'll keep that in mind. In the poll, by "qubit storage" I meant the ability to store states like |0> or |+> or |00>+|11> with negligible error for long periods. Magic state distillation is in the poll because it's the most common process that allows the computer to escape the finite Clifford group and achieve universality. I agree the poll is a bit vague, it's actually hard to be concrete given the variety of views out there, and also agree it's not perfectly matched up to your claims. – Craig Gidney Feb 15 '23 at 20:22
  • 1
    @GilKalai I gave an introduction about magic states here: https://quantumcomputing.stackexchange.com/a/13642/2293 – user1271772 No more free time Feb 15 '23 at 20:26

Noise

"Does Google's error correction paper invalidate Gil Kalai's arguments?"

The only thing that will invalidate Gil Kalai's arguments is an actual experiment that demonstrates quantum advantage. Not an experiment that is described as "one more step towards quantum advantage". Nor an experiment that is described as "one more step towards quantum error correction or fault-tolerance".

"Gil Kalai states that the Threshold Theorem will never be applied in practice, that the level of noise in NISQ devices will always be higher that the aforementioned threshold."

The "N" in NISQ is "noise". If a device has so little noise that it's actually lower than some milestone threshold, I would say that we have graduated out of the NISQ era.

Number of qubits

The opening sentence of the Wikipedia article on NISQ describes NISQ computers as ones that are not yet advanced enough for fault-tolerance or large enough for quantum advantage:

"The current state of quantum computing [1] is referred to as the noisy intermediate-scale quantum (NISQ) era, [2] characterized by quantum processors containing 50-100 qubits which are not yet advanced enough for fault-tolerance or large enough to achieve quantum supremacy."

It also defines NISQ machines as having up to 100 qubits, which comes from the abstract of Preskill's paper that coined the term NISQ. He never explicitly defined NISQ, so people resorted to defining it based on the few places in the paper where he did talk about the number of qubits.

Classical supercomputers can currently simulate the full wavefunction of a 54-qubit machine, and in Figure 1 of this 2018 paper you can see that simulations were possible for a 144-qubit version of Google's random quantum circuits that were used for their "supremacy" experiment.

With 100 physical qubits and the type of circuits that Google have, it's unlikely that you can outperform classical supercomputers that were simulating 144 qubits in the same type of circuit way back in 2018. Even if you ignore the clever tensor-network algorithm that the above-linked paper used and only allow a silly brute-force $\exp{(-iHt)}$ calculation, classical supercomputers can fully simulate a 54-qubit wavefunction, so your error-correcting code would need to use fewer than 100/54 ≈ 1.85 physical qubits per logical qubit to give you more logical qubits than can be simulated classically.
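
To spell out that arithmetic (a back-of-the-envelope sketch, assuming the usual rotated-surface-code count of $2d^2-1$ physical qubits per logical qubit, which matches the 17- and 49-qubit codes mentioned in the conclusion below; the 54-qubit figure is taken from the paragraph above):

```python
# Back-of-the-envelope sketch; assumes the usual rotated surface code with
# 2*d**2 - 1 physical qubits per logical qubit (17 at d=3, 49 at d=5).
PHYSICAL_BUDGET = 100   # qubits on a NISQ-sized device
CLASSICAL_LIMIT = 54    # qubits a brute-force statevector simulation can handle (see text above)

def physical_per_logical(d: int) -> int:
    return 2 * d * d - 1

# Even the smallest code (d=3) costs 17 physical qubits per logical qubit,
# far above the 100/54 ~ 1.85 budget needed to beat brute-force simulation.
print(PHYSICAL_BUDGET / CLASSICAL_LIMIT)           # ~1.85 physical qubits allowed per logical qubit
print(physical_per_logical(3))                     # 17 physical qubits actually required at d=3
print(PHYSICAL_BUDGET // physical_per_logical(3))  # only 5 logical qubits fit in 100 physical qubits
```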

Conclusion

It is nice to see that Google was able to claim that a "distance-5" surface code involving 49 qubits "modestly outperforms" a "distance-3" surface code involving 17 qubits, meaning that larger error-correcting codes are indeed performing "modestly" better than smaller ones. However, you are going to need way, way more physical qubits than 100 to show that a distance-N code can lead to quantum advantage in a real experiment; you would need N to be much larger. That's why they said "modestly outperforms" instead of "outperforms enough for fault-tolerance".
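
For a rough sense of scale (a hedged sketch, not numbers from the paper: it assumes the textbook surface-code scaling $\varepsilon_d \approx A\,\Lambda^{-(d+1)/2}$ with made-up values of $A$ and of the suppression factor $\Lambda$), the required code distance and qubit count grow quickly as the target logical error rate shrinks:

```python
# Hedged sketch with placeholder parameters, not values from Google's paper:
# assume the logical error rate per round scales as eps_d ~ A * Lambda**(-(d+1)/2),
# where Lambda is the error-suppression factor gained per step of 2 in code distance.
A, LAMBDA = 0.03, 4.0  # hypothetical numbers, for illustration only

def distance_for(target_eps: float) -> int:
    d = 3
    while A * LAMBDA ** (-(d + 1) / 2) > target_eps:
        d += 2
    return d

for target in (1e-6, 1e-9, 1e-12):
    d = distance_for(target)
    print(f"logical error {target:g}: distance {d}, ~{2 * d * d - 1} physical qubits per logical qubit")
```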

  • 1
    Note the 2018 simulation paper is for much easier circuits than google's experiment. (The paper predates the experiment, and the planned circuits were changed as weaknesses and opportunities for strengthening were found.) The two major differences are they're simulating half as many 2 qubit gates per layer, and they're simulating CZ gates instead of ISWAP-like gates. This de-facto cuts the depth of the circuit by a factor of 4; an enormous change. State of the art fixes this: https://arxiv.org/abs/2212.04749 . That said, this type of tensor sim improving is orthogonal to Gil's claims. – Craig Gidney Feb 14 '23 at 16:48
  • Thanks @CraigGidney! The paper you linked says that the simulation that they did "is more than 10^3 harder than that performed by Sycamore." Furthermore, the 144-qubit simulation done in 2018 was mentioned anecdotally, and the following paragraph said that unless you have an error-correcting code that uses fewer than 1.85 physical qubits per logical qubit, no 100-qubit quantum computer is going to have error correction and more logical qubits than what can be perfectly simulated on a 2018 supercomputer. I don't see what is "orthogonal" to Gil's claims. NISQ machines will not have quantum advantage. – user1271772 No more free time Feb 14 '23 at 17:14
  • I say it's orthogonal to Gil's claims because, in the hypothetical world where random quantum circuits were easy to simulate classically because of some awesome algorithmic trick, this wouldn't imply quantum computers couldn't factor numbers faster than classical computers (because factoring circuits are highly non-random). Whereas Gil's claims imply no quantum factoring. Failing-to-be-faster due to classical awesomeness is very different from failing-to-be-faster due to quantum limitations. – Craig Gidney Feb 14 '23 at 17:54
  • To factor numbers on classical computers, one would not try to simulate a non-random quantum circuit, they would just run the GNFS, which would outperform any current or future NISQ machine. – user1271772 No more free time Feb 14 '23 at 18:02
  • Yes, I agree that quantum computers are nowhere near doing classically intractable factoring and that classical factoring methods look nothing like simulating quantum circuits. But it remains true that classical computers being able to efficiently simulate random quantum circuits wouldn't imply Gil was right about there being unavoidable correlated noise in quantum computers, and wouldn't imply fault tolerant quantum computers will be unable to factor large numbers. – Craig Gidney Feb 14 '23 at 18:24
  • What do you mean? "correlated noise in quantum computers" is something that even you agree is unavoidable right? The more interesting question is whether or not it is avoidable to an extent that gives advantage over classical computers. For that, the state-of-the-art classical simulations are of paramount importance, no? – user1271772 No more free time Feb 14 '23 at 19:10
  • 1
    Hi guys, I think that both claims in the 2019 Google's quantum supremacy paper as well as the claims in the new distance-5 surface code paper are in tension with my theory, but as both user1271772 and Craig said, you need considerably stronger experiments to refute my argument.

    The original claims in the 2019 paper (about classical hardness of the achieved samples) were sufficient (in my eyes) to invalidate my argument but the discovery of much better classical algorithms changed the picture.

    – Gil Kalai Feb 15 '23 at 19:58
  • 1
    I studied (mainly in 2005-2012) "correlated noise in quantum computers" and correlated noise is part of the picture regarding the behavior of noisy quantum computers. But correlated noise is not part of my current argument for why neither substantial quantum computational advantage nor good-quality quantum error correcting codes are possible. – Gil Kalai Feb 15 '23 at 20:00
  • It seems to me that this is a question of how nonlinearly error correction scales, not whether it's possible. It's a very simple argument to bound the speed of quantum evolution from basic laws of QM. See my post: https://quantumcomputing.stackexchange.com/questions/28327/thermodynamic-speed-limit-to-quantum-computing –  Mar 03 '23 at 17:20
  • @MatthewCory Thanks for pointing me to your post, which it turns out I had upvoted long ago. I don't know if those arguments are "very" simple. We still don't have a general quantum mechanical theory that obeys special relativity (the Dirac equation is a 1-electron equation and doesn't even take into account the Coulomb interaction between two electrons, and the Dirac-Coulomb-Breit equation is considered an approximation), other than field theories like QED for which calculations on larger systems are notoriously difficult or impossible. It's also hard to predict engineering complexity for QC. – user1271772 No more free time Mar 03 '23 at 17:40
  • Ask Charles Francis, on Physics Stack Exchange, about QED. Once you ditch point particles (I'm an ultrafinitist), a lot of technical problems simply disappear. There is much popular misunderstanding about lattice gauge theories, causal perturbation theory, non-geometric gravity, high-energy physics, entanglement, classical analogs and unification, but I can't mention them here. Suffice it to say that I wouldn't assume Boltzmann's law or E-T bounds are not extremely verified physics. Those laws are prior to engineering concerns. https://physics.stackexchange.com/users/255997/charles-francis –  Mar 03 '23 at 17:54
  • @MatthewCory why do I need to ask Charles Francis about QED? – user1271772 No more free time Mar 03 '23 at 20:08
  • @user1271772 See: https://arxiv.org/abs/physics/0101062 https://arxiv.org/abs/physics/0110007 –  Mar 05 '23 at 07:10
  • I don't see what the relationship is between those two papers and the rest of this discussion. – user1271772 No more free time Mar 05 '23 at 10:51
  • @MatthewCory You can't change the subject and then complain about it. You are simply exaggerating problems with unification. I provided very basic arguments against scaling quantum computers. Overcoming quantum fluctuations involves INTERNAL decoherence, not just external decoherence. Landauer's principle is very misleading. If you want to debate the meaning of decoherence, then I refer you to Freeman Dyson's "Two Experiments": https://www.youtube.com/watch?v=-HjF4yvgOlo –  Mar 10 '23 at 16:32

It seems to me that a quantum computer might once in a while achieve good error correction by chance, but this would be a random and highly unlikely occurrence. When we consider gravitational waves and similar non-shieldable forms of radiation, quantum error correction systems will, at first consideration, need their own quantum error correction systems, which will need their own quantum error correction systems, and so on. The number of required qubits will be huge, and each level of error-correction apparatus will be prone to corruption by noise.

Another consideration is the noisy nature of the fabric of space-time. Provided the smallest length and time units are the Planck length and the Planck time, the very background of space-time has inherent noise, even in regions of space-time with no classical-scale gravitational waves passing through and no classical-scale curvature.

Even the otherwise monolithic Higgs field is, in theory, noisy at the Planck length and Planck time scales. Macroscopically, the Higgs field is more or less uniform, but at the Planck scale the field vectors occasionally and randomly align in such a way as to produce a vector field within a given portion of this scalar field.

Martin Vesely
  • 13,891
  • 4
  • 28
  • 65