
I've just gotten started with Nielsen and Chuang's text, and I'm a little stuck. They mention that theoretically, it would be possible to store an infinite amount of information in the state of a single qubit. I'm not sure I completely comprehend this.

Here's how I rationalized it: You take all the information you want to store, put it in binary form, and make it the real component of $\alpha $ or $\beta$ (the coefficients of the computational basis states).

I'm not sure whether I've understood this correctly, and since it's still fuzzy in my head, it would be great to get some kind of ELI5 explanation, or possibly a more detailed picture of how this would be possible, even theoretically.

Apologies if the question doesn't meet standards. I'm new to the forum and would be open to feedback regarding asking questions or answering them.

agiri
  • Do they say that you can store an infinite amount of information in one qubit (in other words, there's something you can do to a qubit so that the qubit actually contains an infinite amount of information), or that you can store an arbitrarily large amount of information in one qubit? Those are very different claims. – Tanner Swett Nov 01 '19 at 21:24
  • The exact words that I came across were: "Paradoxically, there are an infinite number of points on the unit sphere, so that in principle one could store an entire text of Shakespeare in the infinite binary expansion of $\theta$." I realize that this does not explicitly mention that an infinite amount of information could be stored, but it seems to follow from the fact that they mention an infinite number of points on the Bloch sphere. Also, if I remember correctly, this bit from one of Nielsen's YouTube videos clearly calls it "infinite" information. – agiri Nov 02 '19 at 05:42
  • @TannerSwett It's infinite in the same sense as the binary expansion of $\pi$ or $e$ (allegedly) containing all the works of Shakespeare (along with every literary piece that can ever be penned in the future). Not really insightful, and I don't see any reason to mention it in a textbook apart from getting newbies and laymen hyped up about quantum computing. – Sanchayan Dutta Nov 02 '19 at 07:18

3 Answers


I'm not sure which passage in Nielsen and Chuang you have in mind, but I see all of this differently. I don't see any need to believe that it is "theoretically" possible to store an infinite amount of information in a qubit. My answer to the paradox is that amplitudes aren't stored information. A qubit does not know its amplitudes any more than a randomized bit knows the chance that it is a 1. If the bit has a 0.637 chance of being 1, that does not mean that 0.637 has been stored anywhere. The size of the bit's brain is exactly one bit; it can only tell you 0 or 1 if you ask it the one question that it can answer.

Now, a qubit can answer any one out of a continuous family of binary questions; but it can still only answer one such question, in the sense that the answer completely determines its posterior state for future questions. A qubit is still too small to give a clean answer to any question with more than two answers, and it certainly does not have room to store decimal expansions of numbers.

To reiterate, quantum amplitudes are similar to classical probabilities. They are statistical features that are not directly stored by the systems that obey the statistics.

The Holevo-Nayak theorem says that $n$ qubits cannot store any more than $n$ classical bits. That's the real answer to the question of how a qubit can encode or store infinite information, "theoretically" or otherwise. Answer: It can't.
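To make the randomized-bit analogy concrete, here is a minimal simulation sketch (the `measure` helper and the value 0.637 are my own, purely illustrative, not anything from Nielsen and Chuang). No matter how many digits are hidden in the amplitudes, each measurement hands back a single bit; only statistics over many freshly prepared copies approximate $|\beta|^2$, exactly as with a biased classical coin:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(beta_sq):
    """Computational-basis measurement of alpha|0> + beta|1>.
    Born rule: P(outcome = 1) = |beta|^2. Returns a single bit."""
    return int(rng.random() < beta_sq)

beta_sq = 0.637                  # the "hidden" amplitude data
print(measure(beta_sq))          # one qubit, one question, one bit

# Only statistics over many independently prepared copies approach 0.637,
# just as with a classical bit that has a 0.637 chance of being 1:
shots = [measure(beta_sq) for _ in range(10_000)]
print(sum(shots) / len(shots))   # ~0.637 +/- 0.005
```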

Greg Kuperberg
  • Umm, isn't whether or not a system obeying certain statistics stores that statistical information basically a philosophical question (depending on your definition of store and information)? – Sanchayan Dutta Nov 01 '19 at 16:21
  • You could argue that. I admit that I am taking a specific philosophical position here based on the theorems in the field. I consider something stored when I can get it back out, and I define information by entropy, in this case von Neumann entropy. – Greg Kuperberg Nov 01 '19 at 16:29
  • I also consider the analogy between quantum amplitudes and classical probabilities to be persuasive. Who believes that a randomized bit stores the digits of the probability that it is a 1? To some extent the real philosophical dispute here is whether quantum amplitudes are on the same footing as classical probabilities. I say yes! – Greg Kuperberg Nov 01 '19 at 16:32
  • So @Greg Kuperberg, if I understand correctly, what you're saying is that even theoretically, there is no way that a qubit could store an infinite amount of information - because the information would be non-retrievable and therefore of no use. In that sense, the amplitude is just analogous to the classical notion of probability and shouldn't really be thought of as an "information store". Does that make sense? – agiri Nov 01 '19 at 17:22
  • @Aditya - That's right, those are the two basic reasons that I don't think of quantum amplitudes as stored information. (1) They are highly analogous to classical probabilities, which no one thinks of as stored information. (2) They are not information that you can directly retrieve, not even theoretically, insofar as theory is governed by theorems. – Greg Kuperberg Nov 01 '19 at 17:47
  • Hold on @Greg Kuperberg. Just to be pedantic... if you had a LOT of identical-amplitude qbits, couldn't you theoretically query each one, thereby obtain the statistical limit for the amplitude, and thereby extract the (tremendously inefficiently stored) string encoded in the amplitude? Of course if you were doing this it would be a lot more efficient to break up the string into smaller strings encoded into smaller-sized qbit bunches precisely because it becomes orders of magnitude more expensive to store longer info in the trailing digits of the big-bunch's statistical amplitude. – M M Nov 02 '19 at 13:45
  • @MM Sure, but then you're querying "a LOT of identical-amplitude qubits" rather than a single qubit. An isolated system containing a single qubit will still give you only 1 bit of "information", at max. In fact, "the number of binary questions an isolated system can answer" is roughly the definition of "information content" of a system, in the widely studied field of information theory. Note that Greg is using the term "information" in a very specific manner (using von Neumann entropy), which doesn't quite match up with the human intuition of information. – Sanchayan Dutta Nov 02 '19 at 15:30
  • @MM - Yes, if you have many qubits in the same state, then you can certainly do state tomography to learn that state. (Note the standard spelling of "qubit". Mermin incorrectly spells it "qbit".) In that sense, one qubit does store a tiny fragment of its quantum amplitudes. But this isn't any different from storing 0.637 in a large mass of randomized bits, each with a 0.637 chance of being a 1. In both cases, it's getting an answer by collecting statistics. – Greg Kuperberg Nov 02 '19 at 16:29
  • @SanchayanDutta - Not quite. I'm a human and von Neumann entropy does match my intuition of information. It's an immediate generalization of Shannon entropy in classical probability, which was always meant to agree with intuition even if it is a modern formula. Intuition doesn't spring from nowhere, it's something that you create from experience. If you truly internalize the mathematics of QCQI rather than feeling that you must dance on one toe to understand it, then that's the best way to hone your intuition. – Greg Kuperberg Nov 02 '19 at 16:39
  • @SanchayanDutta - Anyway, otherwise your comment is excellent, except that I would make one small emendation. The entropy of a system is the limiting ratio A/B, where B copies of the system can store the answers to A binary questions. The calculation of that limit indeed gives you Shannon-von Neumann entropy. – Greg Kuperberg Nov 02 '19 at 16:46
  • @GregKuperberg "If you truly internalize the mathematics of QCQI rather than feeling that you must dance on one toe to understand it, then that's the best way to hone your intuition." --- Yep, we're in strong agreement. Perhaps I should have mentioned "a layman's notion of intuition" instead. Once one gets familiar with the Shannon-von Neumann notion of information, that does indeed become the most sensible (and practical) way to approach the topic. – Sanchayan Dutta Nov 02 '19 at 16:56
  • Apologies for the diversion, but since I find it difficult to follow your conversation I'd appreciate it if you could point me to material that would give me the background required. I have a fairly good knowledge of Linear Algebra but that's about it, and I'd like to move a little quicker. – agiri Nov 02 '19 at 18:22

Highly Relevant: (Physics SE) Informational capacity of qubits and photons


> Here's how I rationalized it: You take all the information you want to store, put it in binary form, and make it the real component of $\alpha$ or $\beta$ (the coefficients of the computational basis states).

Yes, and if you could then prepare a qubit precisely in the state $\alpha|0\rangle + \beta|1\rangle$, in some sense you would be storing infinite information in a single qubit.
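For concreteness, here is a sketch of that encoding step (the `encode`/`decode` helpers are my own, purely hypothetical): pack the message's bits into the binary expansion of a number in $[0, 1)$ and declare it to be $\alpha$. The decoding works on paper, where you know $\alpha$ exactly; the rest of this answer is about why nothing lets you read $\alpha$ back out of a physical qubit.

```python
from fractions import Fraction

def encode(message: bytes) -> Fraction:
    """Pack the message's bits into the binary expansion of alpha in [0, 1)."""
    bits = ''.join(f'{b:08b}' for b in message)
    return Fraction(int(bits, 2), 2 ** len(bits))

def decode(alpha: Fraction, nbytes: int) -> bytes:
    """Invert encode() -- trivial on paper, impossible from one physical qubit."""
    num = alpha.numerator * 2 ** (8 * nbytes) // alpha.denominator
    bits = f'{num:0{8 * nbytes}b}'
    return bytes(int(bits[i:i + 8], 2) for i in range(0, 8 * nbytes, 8))

msg = b"To be, or not to be"
alpha = encode(msg)             # in principle, any text fits in one amplitude
print(decode(alpha, len(msg)))  # b'To be, or not to be'
# beta = sqrt(1 - alpha**2) would complete the state alpha|0> + beta|1>
```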

There are two drawbacks, though. Firstly, it's not possible to prepare quantum states that precisely in practice, due to noise and other engineering limitations. Secondly, even if you managed to do it, you wouldn't be able to recover the information by measuring the qubit, since upon measurement a qubit immediately collapses to one of the measurement basis states ($|0\rangle$ and $|1\rangle$ being the standard basis states).

The "encoding infinite information" idea is funny because it's certainly possible to claim that hypothetically if you can produce radio waves with frequency an integer multiple of $\pi$ or any other non-recurring, non-terminating irrational number for that matter that has unlimited decimal places, you are storing infinite information in that radio wave. It doesn't mean that the information is useful or practically retrievable!

Sanchayan Dutta
  • Right. So although it is a theoretical possibility, it is, for all practical purposes, impossible. The radio waves example certainly helps rationalize it! Thanks for answering. – agiri Nov 01 '19 at 13:45

Here is another way to think about it. You can, in principle, store an infinite amount of information into a qubit, in the sense that you might need arbitrarily many bits to exactly pinpoint its state.

However, this is not as weird or surprising as one might think. You can make the same argument about a (classical) probability distribution. Given any amount of information, I can always find a way to encode it into a probability distribution over a single bit. For example, given $N$ bits of classical information in the form of a bitstring $(x_1, \dots, x_N)$, just define $x$ as the number having that bitstring as its binary decomposition, and then use the probability distribution with $p_0 = x\, 2^{-M}$ for a big enough $M$.
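Here is a quick numerical sketch of that classical encoding (the variable names and numbers are my own, for illustration). The bits survive in $p_0$ on paper, but estimating $p_0$ back to $M$ bits of precision by sampling takes on the order of $4^M$ samples, since the statistical error only shrinks like $1/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(1)

M = 16
x = 0b1011001110001111      # M = 16 bits of "information"
p0 = x / 2 ** M             # encode the bitstring as p_0 = x * 2^-M

# Each sample of the biased bit reveals almost nothing about x;
# recovering all M bits requires the error to drop below 2^-M.
for n in (10 ** 2, 10 ** 4, 10 ** 6):
    estimate = np.mean(rng.random(n) < p0)
    print(n, estimate, abs(estimate - p0))
```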

As for retrieving the information "stored" this way, you find in both the classical and the quantum case that there is no way to do it with a single measurement. In other words, the more information you want to retrieve from a probability distribution, the more you need to sample from it. Holevo's theorem essentially tells you that quantum mechanics doesn't give any advantage over the classical case in this task.

glS
  • +1 Though a problem with this example is that it quickly turns into a discussion on the philosophy of information theory, to avoid which we're forced to mention our definitions upfront. Someone could argue that as $\pi/4$ is a computable number, storing an algorithm that can generate $\pi/4$ up to arbitrary precision is equivalent information, and that can be encoded with a finite number of bits. To make your example work you'd then have to take a non-computable number. I suppose it's really important to precisely define storage and information in these discussions, like Kuperberg did. – Sanchayan Dutta Nov 02 '19 at 12:39
  • @SanchayanDutta I don't think "philosophy" enters the discussion. You can argue that storing the algorithm producing $\pi/4$ is equivalent to knowing $\pi/4$, sure, but you can only do that assuming you already know that it is $\pi/4$ that was stored. If you are interested in retrieving information from a probability distribution, presumably you don't know what the information is (otherwise what would be the point?), and so how do you apply your "computability" argument? – glS Nov 02 '19 at 12:42
  • in other words, computable or not, with a finite number of samples you can never be sure that the "true" probability was, say, $p_0=0.2$ rather than $0.2+10^{-N}$ for some large enough $N$. – glS Nov 02 '19 at 12:48
  • Erm, I was mostly addressing the "then you need an infinite amount of bits to exactly describe the system" part of your answer. That's not exactly true, considering I can binary encode a finite algorithm for describing the probability distribution instead. I wasn't talking about retrieving the probability distribution. I'm sure you understand what the issue here is; I'm merely pointing it out for the other readers, as this is essentially a precursor to information theory. – Sanchayan Dutta Nov 02 '19 at 12:49
  • yes, I think I see what you mean. I edited to make the discussion about probability distributions less "controversial". – glS Nov 02 '19 at 13:01