
I'm looking for any evidence pointing towards D-Wave's approach to quantum computation being promising to achieve any sort of computational advantage with respect to classical devices.

Note that I'm not asking about quantum annealing in general. As mentioned e.g. in this answer and in (Aharonov et al. 2014), adiabatic quantum computation can be shown to be equivalent to the gate model, which I suppose is a decent argument for the power of quantum annealing in general, although I'm not sure how far this proof applies to D-Wave-like devices. But D-Wave devices are, as far as I can tell, tailored to specific tasks and far from universal, so these general equivalence arguments with gate-based computing do not seem to apply.

I am also not asking about evidence that D-Wave devices are capable of providing any sort of computational advantage right now. There are already some posts on this topic on the site, e.g.

Obviously showing that right now D-Wave provides computational advantages would also answer this question, but here I'm setting the bar lower than that, as it is my understanding that there is no such clear (as in, undisputed within the community) evidence as of now. Rather, I'm asking for any reason to believe D-Wave's approach is going to be useful at any point in the future. In other words, that the general approach is viable, even though there might be a number of technical hurdles to overcome before reaching that point.

I am also not necessarily asking about theoretical proofs of quantum advantage for specific problems, nor necessarily for evidence of exponential advantages, or even for any sort of scaling advantage. As discussed e.g. in this post, there doesn't seem to be any such proof at the moment. Rather, I'm looking for any argument of the form "there is a chance that D-Wave devices are going to be useful at some point in the future because XXX". I leave "be useful" intentionally vague, but I'd only qualify it in that I'm looking for "usefulness" in the applicational sense. Any such device is arguably "useful" in that these are complex machines and even the sole act of building them and testing them furthers our scientific understanding of the topic, but that's not what I'm talking about here.

Finally, I would point out that arguments of the form "D-Wave's device X has been used to solve optimisation task Y and was found to be more efficient than classical algorithm Z" can be possible answers, but they are also a rather weak form of evidence, unless there is also good reason to believe there are no better classical algorithms to solve the same problem.


This is also related to Is there any real world problem where I can see quantum computing being better than classical computing?.

glS

1 Answer


Great question. Let me start with some non-quantum background that turns out to be pretty important when talking about D-Wave's quantum annealers (QA) or other solvers like VQE, QAOA, etc.

QA is designed to solve quadratic unconstrained binary optimization (QUBO) problems, but many interesting and important combinatorial problems are not QUBO. In fact, integer programming is a huge research field with an extraordinary number of industrial applications; integer programs are typically linear, with integer variables and many linear constraints. So the largest class of useful problems can't be solved directly on QA. If we still want to use QA for integer programming, we must reformulate the problem as a QUBO. In most cases, though, the converted problem becomes really hard. This hardness comes from the additional slack variables that a QUBO model needs in order to encode inequality constraints. It turns out that in the new QUBO problem, every neighbouring solution (one bit-flip away from a feasible one) is typically infeasible, so each feasible solution is separated by an ocean of infeasible solutions. This is called a rugged optimization landscape: we have to navigate a rocky energy landscape with exponentially high energy peaks. That is really hard for both QA and classical heuristics such as simulated annealing, tabu search, etc.
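To make the slack-variable issue concrete, here is a minimal pure-Python sketch (a toy problem of my own, not D-Wave's API): the constraint x0 + x1 ≤ 1 is turned into an equality x0 + x1 + s = 1 with a binary slack bit s, enforced by a quadratic penalty.

```python
from itertools import product

P = 10  # penalty weight; must dominate the objective

def energy(x0, x1, s):
    objective = -(x0 + x1)                # toy objective: maximise x0 + x1
    penalty = P * (x0 + x1 + s - 1) ** 2  # encodes x0 + x1 + s == 1
    return objective + penalty

landscape = {bits: energy(*bits) for bits in product((0, 1), repeat=3)}
feasible = [b for b in landscape if sum(b) == 1]  # penalty-free assignments

# The optimum (1,0,0) has energy -1, but every single-bit flip away
# from it violates the constraint and pays the penalty P:
print(landscape[(1, 0, 0)])                                       # -1
print([landscape[b] for b in [(0, 0, 0), (1, 1, 0), (1, 0, 1)]])  # [10, 8, 9]
```

Every feasible point is surrounded by infeasible neighbours, which is exactly the "ocean of infeasible solutions" described above.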

The above discussion points out that even with a perfect QA, the problem it tries to solve may be really hard simply because it was converted to a QUBO. In such cases it is better to keep the original formulation and use a standard simplex-based solver with branch-and-bound; it will do much better.

Ok, let's say we are not interested in the largest class of problems and instead want to solve native problems like MaxCut, the quadratic assignment problem, spin-glass systems, etc. These problems are naturally QUBO, so the above discussion about a messed-up energy landscape and additional slack variables does not apply. But we have another little issue: we still need to embed the problem on the QPU, and the embedding problem is itself NP-hard. So before you start solving an optimization problem, you need to solve the embedding problem! If you are an experienced D-Wave user, this is less of an issue: pros usually have a handy library of mappings that lets them bypass the embedding stage to some extent.
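For contrast, a native problem needs no slack variables at all. Here is a minimal sketch (toy 4-vertex graph of my own choosing) of MaxCut written directly as a QUBO, solved by brute force:

```python
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # hypothetical small graph

def qubo_energy(x):
    # each cut edge (x_u != x_v) contributes -1, so the minimum
    # QUBO energy equals minus the maximum cut size
    return -sum(x[u] + x[v] - 2 * x[u] * x[v] for u, v in edges)

best = min(product((0, 1), repeat=4), key=qubo_energy)
print(best, -qubo_energy(best))  # (0, 1, 0, 1) cuts 4 edges
```

Note that even for such native problems the embedding step still remains: this logical QUBO graph must be mapped onto the QPU's physical hardware graph before it can be solved.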

So to conclude, given a perfectly working QA, it is only reasonable to solve a rather limited class of problems. If you want to extend QA to a wider class of problems, then prepare to pay an unreasonable tax in performance and quantum resource requirements.

Now, why does QA have potential? QA is a heuristic, so it is fair to compare it with classical heuristics such as simulated annealing, tabu search, parallel tempering, evolutionary algorithms, etc. These are all local-search heuristics with some hill-climbing mechanism: hill-climbing is how they avoid getting trapped in local minima, and each algorithm deploys its own scheme for climbing the energy landscape in search of the minimum. Climbing hills is really hard: if an energy peak is very high, the chances of traversing it with a classical heuristic are pretty low, because the algorithm would have to accept too many uphill moves, and if it accepts too many uphill moves it may never converge. This is where QA can make a difference. QA can hill-climb and, most importantly, it can quantum-tunnel through high energy peaks towards the global minimum. So, in theory, QA has a much more robust landscape-exploration mechanism and can tunnel towards the global optimum thanks to the adiabatic theorem! This feature sets QA apart from all the classical heuristics.
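To make the comparison concrete, here is a minimal simulated-annealing sketch (my own toy code on a small MaxCut-style instance; the cooling schedule and parameters are arbitrary) showing the classical mechanism QA competes with: uphill moves are accepted with probability exp(-Δ/T), which vanishes for high barriers as T drops.

```python
import math
import random

random.seed(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # toy graph

def energy(x):
    # MaxCut as QUBO: each cut edge contributes -1
    return -sum(x[u] + x[v] - 2 * x[u] * x[v] for u, v in edges)

x = [0, 0, 0, 0]
T = 2.0
for step in range(2000):
    i = random.randrange(4)
    y = x[:]
    y[i] ^= 1  # propose a single-bit flip
    delta = energy(y) - energy(x)
    # Metropolis rule: always accept downhill, sometimes accept uphill
    if delta <= 0 or random.random() < math.exp(-delta / T):
        x = y
    T *= 0.995  # geometric cooling schedule

print(x, energy(x))  # final state and its energy
```

On a 4-bit toy instance this easily finds a good cut; the point of the answer above is that when barriers are exponentially high, the exp(-Δ/T) acceptance makes crossing them essentially impossible, which is where tunnelling could in principle help.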

You can simulate some sort of tunnelling classically by flipping multiple bits of the current solution, but there is no guarantee that the newly obtained solution is closer to the global minimum.

So, to conclude, tunnelling and the adiabatic process are the two main things that set QA apart from classical heuristics that blindly climb the landscape. There is no way to simulate QA's tunnelling classically, because we don't know the direction in which the global minimum lies. So, if D-Wave irons out their tech issues (whatever they are), the future looks promising for the optimization of native QUBO problems.

And now here are my two cents: I think D-Wave fell victim to its own marketing. There is a lot of undeserved bad talk on the internet about their tech. They created high expectations, and various folks ran to benchmark QA and published their findings with claims that it didn't work as expected. But this is the same as trying to benchmark IBM's or Google's QPUs. They work... but kinda don't work for anything useful. Does that mean they have no potential? No! Like anything quantum, QA is a very early tech, so the smartest thing to do is to hold it to the same expectations you have for other quantum companies, that is: "Yes, they got a quantum something-something. Yes, they claimed some quantum advantage over something-something... but let's check in in a couple of years."

kludg
MonteNero
  • Following up on your last paragraph, I suppose the interesting question there is about trajectory. QPUs have a clear roadmap that will allow them to deal with noise etc and eventually do something useful. Is there such a clear roadmap for dWave? – DaftWullie Jul 12 '22 at 06:45
  • D-Wave already does something useful. Their QPU can actually find decent solutions to some optimization problems. But in my opinion, its applications are limited so far. Regarding the roadmap, that's a question for D-Wave. – MonteNero Jul 12 '22 at 07:09
  • thank you for the answer. I can't tell whether you're arguing D-Wave devices already do something "useful" or not, though. Surely "decent solutions" need to be compared with classical solutions obtained with a comparable amount of spent resources, no? Such comparisons are inherently quite tricky, but can you point to any specific references showing such a thing? That aside, it seems like your main points are here that the potential advantages are provided by tunneling and adiabatic theorem. But there's many moving parts in adiabatic optimisation algorithms. Is there some clear argument (...) – glS Jul 12 '22 at 07:33
  • (...) showing, in at least some specific instances, that tunneling & adiabatic theorem can provide an advantage in finding the solution to a given problem? Something like: "assuming the cost landscape is such and such, then tunneling can be shown to find the minimum in X time whereas classical algorithms tend to take Y>>X time" – glS Jul 12 '22 at 07:34
  • I definitely claim that QA can do useful stuff and solve moderately-large industrial problems. But from my own very very limited experience classical methods worked better for my problems. I guess, if I had free credits to run and experiment with QA more, I could MAYBE achieve better results? – MonteNero Jul 12 '22 at 07:41
  • I see. You want to see more rigorous mathematical or quantitative argument. Unfortunately I'm not able to quantitatively characterize energy peaks, tunneling behavior and give timing in some units. This not only requires expertise that I don't have but also very specific optimization problem, problem formulation, detailed knowledge of hardware and classical solver we compare with etc. Sorry can't point you to a good paper that I know of. But yes, you got the essence. Tunneling is what sets QA apart from SA, Tabu and the likes. – MonteNero Jul 12 '22 at 07:49
  • @MonteNero I'm just looking for any specific example of said usefulness being on display. Where I come from is that I've been hearing about D-Wave devices doing "potentially useful stuff and solving real-world problems" for years, and yet I've never got one good example of this being shown that was not highly controversial. I've also asked the same question to people that actively published papers doing things with D-Wave, and even then I didn't get a clear answer. I'm just looking for a good reference here. From the papers I looked at I couldn't find direct clear comparisons – glS Jul 12 '22 at 09:00
  • or in lack of any such direct (even if only empirical) evidence, at least some theoretical argument to show why one should expect any advantage at all – glS Jul 12 '22 at 09:01
  • I get your point. QA can solve moderately large problems. No need for papers, we can verify this on our own. Just go on their website, select the pure QPU solver (this is FREE) and solve a MaxCut. It is very likely that you will observe the ground state. Now, if you ask somebody whether it has an advantage over a classical solver, the answer is not clear-cut. But again, asking about the advantage of QA is the same as asking about the advantages of QAOA, VQE, or the Grover minimum finding algo. Is there any advantage? Can these algorithms solve a 20-vertex MaxCut problem with a low Time To Solution? – MonteNero Jul 12 '22 at 09:18
  • I read this a few weeks ago https://www.nature.com/articles/s41598-022-06070-5.pdf?origin=ppub. The paper benchmarks D-Wave's solver vs Digital Annealer, Simulated Bifurcation Machine and Simulated Annealing solver. What I dislike about this paper is that it uses D-Wave's hybrid solver. This solver is powerful but it is hybrid! So it is not clear how much of optimization goes into their QPU. It is probably possible to find some other papers where D-Waves QPU solves fairly large problems. Problems large enough to show that it can actually do useful work that other QPUs can't. – MonteNero Jul 12 '22 at 09:27
  • Again, I see this strange bias that I was talking about in my answer. People want to see the proof of advantage from D-Wave, but they do not ask the same questions about QAOA, VQE, Grover Min Finder that run on the actual hardware. When it comes to big brand names like Google and IBM, the consensus is that it is too early to say anything about the performance of their QPUs and optimization algorithms. But when it is D-Wave, it is always about "show me the advantage now!". I feel like people are conditioned to have more trust in these mogul-giants and accept their excuses more leniently. – MonteNero Jul 12 '22 at 09:42