4

Suppose I have an algorithm that prepares a state of $N$ qubits ($N$ variable); I was recently told that the algorithm is called efficient if the depth of the circuit is independent of $N$. However, they said that there is variability in how "efficient" is defined.

What is the definition of an "efficient" algorithm? Or are there many ways to define it?

Martin Vesely
  • 13,891
  • 4
  • 28
  • 65
David Raveh
  • 141
  • 4
  • 2
    From the point of view of complexity theory, an efficient algorithm runs in polynomial time. – Martin Vesely Mar 02 '24 at 07:35
  • Circuit depth and time (to execute the circuit/algorithm) are directly proportional. Thus, poly depth implies poly time, and vice versa. Also, a constant function can be taken as a polynomial for the sake of completeness. – Manish Kumar Mar 02 '24 at 13:37

3 Answers

3

As pointed out by @Martin Vesely, complexity theorists generally call an algorithm "efficient" if it has polynomial runtime. But this usage is by no means universal [see wiki]. Still, the complexity-theoretic definition is the most suitable one for the context of this problem.

Circuit depth and computational time (of the algorithm) are directly related. A poly-depth circuit implies a polynomial runtime and vice versa.

Some quantum states require only constant-depth circuits (with respect to the number of qubits), such as the uniform superposition state. Some require logarithmic or polynomial depth, and some go beyond polynomial depth, too.
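
A minimal sketch of the constant-depth case (assuming Qiskit is available; any circuit framework would do): one layer of Hadamards prepares the uniform superposition, and the reported depth stays 1 no matter how many qubits there are.

```python
# Sketch: uniform superposition via a single layer of Hadamards (assumes Qiskit).
from qiskit import QuantumCircuit

for n in (2, 8, 32):
    qc = QuantumCircuit(n)
    qc.h(range(n))        # H on every qubit, all applied in parallel
    print(n, qc.depth())  # depth is 1 for every n
```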

A more rigorous definition (of an efficient algorithm) requires an explicit mention of:

  1. the model of computation (classical, quantum, etc.),
  2. the specific computation resource (time, space, randomness, etc.)

We get these details by inspecting the context of the discussion. In this problem, for instance, you are concerned with the quantum model, and time is the resource of interest (time is taken as the default if nothing is mentioned explicitly).

Manish Kumar
  • 450
  • 5
  • 16
2

In complexity theory, we study the number of "steps" that an algorithm takes to answer a question of the form "is $x$ in the language $L$?". Complexity is defined in relation to the size of the input $x$. An algorithm is typically said to be "efficient" if it terminates in a number of steps polynomial in the input length $|x|$.

In classical complexity theory, there are two main models of computation: Turing machines and Boolean circuits. For Turing machines, a "step" is an application of the transition function, and the complexity is the number of times the machine, with its tape initialized to $x$, applies its transition function before halting in an accepting or rejecting state. For Boolean circuits, the "steps" are the logical gates of the circuit (e.g. OR, AND, NOT gates); you could also care about circuit depth. Since a Boolean circuit takes a fixed-size input, we consider families of circuits $\{C_1,C_2,\dots\}$ (one for each input size). Typically, we want a family of circuits to be uniform, i.e. there exists a poly-time Turing machine that on input $1^n$ outputs a description of $C_n$.
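
To make uniformity concrete, here is a toy sketch (my own illustration; the gate-list encoding is made up for this example, not a standard format): a single poly-time generator that, given $n$, outputs a description of $C_n$ for the language $L = \{x : \text{every bit of } x \text{ is } 1\}$, together with an evaluator.

```python
# Toy illustration of a *uniform* family of Boolean circuits {C_n} for the
# language L = {x : every bit of x is 1}. One poly-time procedure covers all n,
# which is exactly what uniformity asks for.

def describe_Cn(n: int):
    """Poly-time generator: on input 1^n (here just the integer n),
    output a description of C_n as (gate list, output wire).
    Wires 0..n-1 carry the input bits."""
    gates, out = [], 0
    for i in range(1, n):
        new_wire = n + i - 1
        gates.append(("AND", out, i, new_wire))  # new_wire := out AND x_i
        out = new_wire
    return gates, out                            # n-1 gates: polynomial in n

def evaluate(description, x: str) -> bool:
    """Evaluate C_{|x|} on the input string x."""
    gates, out = description
    wires = {i: bit == "1" for i, bit in enumerate(x)}
    for _, a, b, c in gates:                     # only AND gates in this family
        wires[c] = wires[a] and wires[b]
    return wires[out]

print(evaluate(describe_Cn(4), "1111"))  # True:  x is in L
print(evaluate(describe_Cn(4), "1101"))  # False: x is not in L
```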

In quantum computing, we are typically interested in unitary transformations in the form of quantum circuits. The definition of complexity is now quite similar to that for classical circuits, with a family of quantum circuits $U_1,U_2,\dots$. In the quantum setting you might also need a number $s(|x|)$ of auxiliary qubits (depending on $|x|$) initialized in the state $|0\rangle$. To decide whether $x\in L$, the circuit $U_{|x|}$ is applied to the initial state $|x\rangle|0^{s(|x|)}\rangle$ and the qubits are measured to get the answer.
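
A hedged sketch of this setup (names like `build_Un` and `decide` are made up for illustration, and Qiskit is assumed to be installed): a uniform family $U_n$ deciding the toy language of odd-parity strings, applied to $|x\rangle|0^{s(|x|)}\rangle$ with $s(|x|)=1$.

```python
# Uniform family of quantum circuits U_n deciding L = {x : x has odd parity},
# acting on n input qubits plus s(n) = 1 ancilla (so the width is n + s(n)).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def build_Un(n: int) -> QuantumCircuit:
    """Poly-time classical procedure that outputs U_n on n + 1 qubits."""
    qc = QuantumCircuit(n + 1)
    for i in range(n):
        qc.cx(i, n)            # XOR input qubit i into the ancilla
    return qc                  # n gates: linear, hence polynomial, in n

def decide(x: str) -> bool:
    """Apply U_{|x|} to |x>|0> and read out the ancilla."""
    n = len(x)
    prep = QuantumCircuit(n + 1)
    for i, bit in enumerate(x):
        if bit == "1":
            prep.x(i)          # load the classical input as the state |x>
    circuit = prep.compose(build_Un(n))
    p_one = Statevector(circuit).probabilities([n])[1]  # P(ancilla = 1)
    return p_one > 0.5

print(decide("1101"))  # True: odd parity
print(decide("1100"))  # False: even parity
```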

Other metrics that people care about include space complexity, which corresponds to circuit width in the case of quantum computing. In the definition I've given, the circuit width is $|x|+s(|x|)$, so space efficiency amounts to $s(\cdot)$ being a polynomial.

Some quantum algorithms (most famously Shor's) alternate classical and quantum subroutines. These algorithms don't directly fit the definition above, but in theory nothing prevents one from doing the classical part inside the quantum circuit, with some overhead in the complexity.

lamontap
  • 121
  • 3
1

Just an addition on preparing quantum states. For a general state, the circuit depth is exponential in the number of qubits, as you can see from the construction in this paper: Transformation of quantum states using uniformly controlled rotations. So, if your algorithm requires a general state as input and the state is prepared as part of the algorithm (i.e. it is not provided by another algorithm, a sensor, etc.), then it is not efficient. Of course, there are some special states which can be prepared efficiently, e.g. the uniform superposition (a Hadamard gate applied to each qubit, hence constant depth regardless of the number of qubits), GHZ states (depth increasing linearly with the number of qubits), W states (see here) or Gaussian states (see here and here).
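
A rough numerical check of this scaling (assuming a recent Qiskit; its generic StatePreparation routine is in the same spirit as the cited uniformly-controlled-rotation construction, though the exact decomposition may differ between versions): the transpiled depth for a random state grows steeply with the number of qubits, while a GHZ circuit stays linear.

```python
# Sketch: depth of generic state preparation vs. GHZ preparation (assumes Qiskit).
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit.circuit.library import StatePreparation

rng = np.random.default_rng(0)
for n in (2, 4, 6, 8):
    amps = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
    amps /= np.linalg.norm(amps)                 # a random "general" state
    qc = QuantumCircuit(n)
    qc.append(StatePreparation(amps), range(n))
    depth = transpile(qc, basis_gates=["cx", "u"]).depth()
    print(n, depth)                              # depth blows up with n

# Contrast: a GHZ state needs only a chain of CNOTs, so depth grows linearly.
n = 8
ghz = QuantumCircuit(n)
ghz.h(0)
for i in range(n - 1):
    ghz.cx(i, i + 1)
print("GHZ depth:", ghz.depth())                 # equal to n here
```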

Once we have quantum memory (qRAM), we should be able to prepare any state efficiently; however, qRAM is for now far from being realized. See here for more information on how qRAM should work.

Martin Vesely
  • 13,891
  • 4
  • 28
  • 65
  • 1
    could you fix the second link? "...Gaussian state (see here.." – Manish Kumar Mar 03 '24 at 07:55
  • @ManishKumar: Fixed. Thanks for pointing this out. – Martin Vesely Mar 03 '24 at 09:41
  • 1
    @ManishKumar: I just added several links to papers on efficient algorithms for preparing quantum states (W states, Gaussian and description how qRAM works). – Martin Vesely Mar 03 '24 at 22:03
  • 1
    Shouldn't one rather look at the approximate construction of quantum states? – Norbert Schuch Mar 03 '24 at 23:22
  • @NorbertSchuch: Sounds logical. Do you have any reference on how to prepare states approximately? – Martin Vesely Mar 04 '24 at 05:38
  • 1
    @MartinVesely That's what the Solovay-Kitaev theorem is all about. More from a many-qubit (or many-body) point of view, I believe the "convenient illusion of Hilbert space" paper addresses the question of the complexity of approximate preparation. – Norbert Schuch Mar 04 '24 at 07:49
  • 1
    On a different note, "So, if your algorithm requires input of a general state, then it is not efficient." is a tricky statement -- I'd say it does not make sense: If you provide the input as a quantum state, all is fine, and if you provide a classical description, this will be exponentially long in the number of qubits, but since "efficient" is measured in terms of the size of the input, any quantum circuit (even exponentially long) can even be simulated efficiently classically. – Norbert Schuch Mar 04 '24 at 07:51
  • @NorbertSchuch: Thank you for the comment. I meant the algorithm is inefficient if a general state is prepared as part of the algorithm. So, I changed my answer accordingly. – Martin Vesely Mar 04 '24 at 11:40
  • 1
    @MartinVesely I still find this a problematic statement -- partly because it is very hard to write down (or otherwise specify) a generic state. Most states you can think of are entirely ungeneric ;) ... and once you specify the state in any way as part of the algorithm description, then it is part of the input. -- The "convenient illusion" paper makes quite the opposite point: Most states are not needed, so you shouldn't think about how to prepare them, as they are completely unphysical. – Norbert Schuch Mar 04 '24 at 18:52
  • @NorbertSchuch: Still, even though we can neglect the vast majority of the states, we have to somehow prepare the physical ones. If there is no algorithm allowing polynomial scaling of the number of gates, preparing such states is inefficient. – Martin Vesely Mar 05 '24 at 07:21
  • 1
  • @MartinVesely But then you have to explain how you specify this complicated quantum state efficiently as an input to the quantum computer. (I mean, if your algorithm does not say in some way "Prepare this and that state", it cannot be used as an input anyway, so there is no point reasoning about it.) You could of course say "Start the algorithm by preparing this and that ground state of a QMA-hard Hamiltonian" -- that will be hard to prepare, but for some other reason. Similarly, you could have a classical computation which you demand to start from something uncomputable, ... – Norbert Schuch Mar 05 '24 at 07:59
  • ... like a busy beaver number, and then you could argue that this whole classical computation is very hard (in fact, uncomputable), but that's not what we mean when we say a problem is hard (at least not in the usual sense). – Norbert Schuch Mar 05 '24 at 08:00
  • 1
    @NorbertSchuch: To make our long discussion short: if an algorithm is efficient and preparation of its input is efficient, then the whole setting is efficient. If either part of the setting is inefficient, then the whole setting is inefficient. That was my point. – Martin Vesely Mar 05 '24 at 15:12