
Consider the following protocol.

We are given either $|\psi\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ or $|\phi\rangle = \alpha_{0} |0\rangle + \alpha_{1}|1\rangle$, where $\alpha_{0}^{2}$ is chosen uniformly at random from $[0, 1]$ and $\alpha_{1}^{2}$ is chosen so that the state is normalized.

We measure in the computational basis and, depending on the result, try to discriminate between $|\psi\rangle$ and $|\phi\rangle$.

Intuitively, this protocol is highly likely to fail. But what might be a mathematical way to reach the same conclusion?
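
To make the intuition concrete, here is a minimal Monte Carlo sketch of the single-shot protocol (my own illustration; the function names are made up for the example). It estimates how often a single computational-basis measurement yields $|0\rangle$ under each preparation, and both frequencies come out near $\frac{1}{2}$, so one shot carries no information about which state was given.

```python
# Sketch of the single-shot protocol: prepare either |psi> = |+> or
# |phi> with alpha_0^2 drawn uniformly from [0, 1], measure once in the
# computational basis, and record how often the outcome is |0>.
import random

def measure_psi() -> int:
    """One computational-basis measurement of |+>: outcome 0 with probability 1/2."""
    return 0 if random.random() < 0.5 else 1

def measure_phi() -> int:
    """Draw alpha_0^2 uniformly from [0, 1], then measure once."""
    p0 = random.random()          # alpha_0^2 ~ Uniform[0, 1]
    return 0 if random.random() < p0 else 1

trials = 100_000
freq_psi = sum(measure_psi() == 0 for _ in range(trials)) / trials
freq_phi = sum(measure_phi() == 0 for _ in range(trials)) / trials
print(freq_psi, freq_phi)   # both are ~0.5, so one shot tells you nothing
```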

  • Do you mean that $\alpha_0$ is chosen uniformly and then $\alpha_1$ is chosen to be the real number that normalizes the state? The way it is phrased sounds like two independent draws. – AHusain Jun 08 '19 at 00:08
  • Question edited. – NewUser2020 Jun 08 '19 at 01:22
  • I think it matters if $\alpha_0$ were chosen uniformly at random from $[0,1]$, as opposed to if $\alpha_0^2$ were chosen uniformly at random from $[0,1]$. Without squaring I think that you might be able to distinguish $\vert \psi\rangle$ from $\vert\phi\rangle$ - I think $\vert 0\rangle$ is more likely on $\vert\phi\rangle$ than $\vert\psi\rangle$. With drawing $\alpha_0^2$ uniformly at random, and measuring in the computational basis, I think it is intuitive that you can't distinguish. – Mark Spinelli Jun 08 '19 at 02:52
  • Thanks for the edit - note that, as now written, neither $\alpha_0$ nor $\alpha_1$ is restricted to being real. As to the question - consider $\psi$ as a classical fair coin with a 50% chance of landing heads, and $\phi$ as a coin whose probability of landing heads is chosen uniformly at random from $[0,1]$. If we are given a coin and need to decide which coin we were given, we can't distinguish them with only a single toss. But if we can toss more than once, we could distinguish them. Does this help? – Mark Spinelli Jun 08 '19 at 12:36
  • Thanks for the answer. However, since according to the answer below $P(|0\rangle)$ is exactly $\frac{1}{2}$ for $|\phi\rangle$, isn't it true that even if we measure more than once we still cannot distinguish the two cases any better than by random guessing? – NewUser2020 Jun 09 '19 at 13:18
  • If you give me a coin and I must decide whether it is biased or fair, I can toss the same coin $100$ times and bank on Chernoff to get a good idea whether it is biased or fair. If you give me $100$ copies of qubits prepared either as $\vert\psi\rangle$ or as the same $\vert\phi\rangle$ with $\alpha_0^2$ fixed each time, then I can distinguish them by repeated measurements (see the sketch after these comments). – Mark Spinelli Jun 10 '19 at 01:21
  • Thanks! It's clear now. – NewUser2020 Jun 10 '19 at 04:44
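
The repeated-measurement idea from the comments above can be sketched numerically. This is a rough illustration under my own assumptions ($100$ shots and an ad hoc acceptance window of $0.15$ around $\frac{1}{2}$), not an optimal test: given many copies of the *same* state, the empirical frequency of $|0\rangle$ separates a fixed $|\phi\rangle$ from $|+\rangle$ far better than chance.

```python
# Given `shots` copies of the same state -- either |+> or a fixed |phi>
# with alpha_0^2 drawn once from [0, 1] -- estimate P(|0>) from the sample
# frequency and guess "|+>" when it is close to 1/2.
import random

def guess_is_plus(p0: float, shots: int = 100, tol: float = 0.15) -> bool:
    """Measure `shots` copies of a state with P(|0>) = p0; guess |+> if the
    empirical frequency is within `tol` of 1/2 (threshold chosen ad hoc)."""
    freq = sum(random.random() < p0 for _ in range(shots)) / shots
    return abs(freq - 0.5) < tol

trials = 10_000
correct_on_psi = sum(guess_is_plus(0.5) for _ in range(trials)) / trials
correct_on_phi = sum(not guess_is_plus(random.random()) for _ in range(trials)) / trials
print(correct_on_psi, correct_on_phi)   # both well above 1/2
```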

1 Answer


This is a question from classical probability theory. I suppose the $\alpha_i$ are real (and $\alpha_i \ge 0$), though it's also possible to consider the complex case. The probability of obtaining $|0\rangle$ from a measurement of $\alpha_{0} |0\rangle + \alpha_{1}|1\rangle$ is $\alpha_0^2$. This is in fact a conditional probability $P(|0\rangle ~~\vert~~|\phi\rangle = \alpha_{0} |0\rangle + \alpha_{1}|1\rangle )$ of obtaining $|0\rangle$ from $|\phi\rangle$, where the condition is $|\phi\rangle = \alpha_{0} |0\rangle + \alpha_{1}|1\rangle$. In the discrete case (if $\alpha_0$ has only a finite number of possible values), to calculate the overall probability of obtaining $|0\rangle$ from $|\phi\rangle$ we could use the law of total probability $$ P(|0\rangle) = \sum_{\alpha_0} P(|0\rangle ~~\vert~~|\phi\rangle = \alpha_{0} |0\rangle + \alpha_{1}|1\rangle )P(|\phi\rangle = \alpha_{0} |0\rangle + \alpha_{1}|1\rangle). $$ But here we have a continuous case with $P(|\phi\rangle = \alpha_{0} |0\rangle + \alpha_{1}|1\rangle) = 0$.
So we need to take an integral. There is a theory of how to calculate such probabilities with integrals, but in our case the correct way is simply to assume that $\alpha_0^2$ is uniformly distributed on $\{\frac{k}{n} : k=1,\dots,n\}$, calculate the probability $P_n(|0\rangle)$, and then take the limit as $n \rightarrow \infty$.

$$ P_n(|0\rangle) = \sum_{k=1}^n P(|0\rangle ~~\vert~~|\phi\rangle = \sqrt{\frac{k}{n}} |0\rangle + \sqrt{\frac{n-k}{n}}|1\rangle )P(|\phi\rangle = \sqrt{\frac{k}{n}} |0\rangle + \sqrt{\frac{n-k}{n}}|1\rangle) = $$ $$ = \sum_{k=1}^n \frac{k}{n} \cdot \frac{1}{n} = \sum_{k=1}^n \frac{k}{n^2} = \frac{n(n+1)}{2}\cdot\frac{1}{n^2} = \frac{n+1}{2n} $$

The limit is $\frac{1}{2}$, which coincides with the probability of obtaining $|0\rangle$ from measuring $|\psi\rangle = |+\rangle$.
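
As a quick sanity check (my addition), the discretised probability can be evaluated numerically and compared against the closed form $\frac{n+1}{2n}$:

```python
# Numerical check of P_n(|0>) = sum_{k=1}^{n} (k/n) * (1/n) = (n+1)/(2n),
# which tends to 1/2 as n grows.
def p_n(n: int) -> float:
    return sum((k / n) * (1 / n) for k in range(1, n + 1))

for n in (10, 100, 1000, 10_000):
    print(n, p_n(n), (n + 1) / (2 * n))   # the two columns agree and approach 0.5
```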

Remark
Note that I assumed that you are given many copies of $|\psi\rangle$ or many samples of $|\phi\rangle$ (with varying $\alpha_0$) and are asked to distinguish the two cases. If you are given only one copy of $|\psi\rangle$ or $|\phi\rangle$, then you can't distinguish them (with certainty), simply because a measurement of $|+\rangle$ gives a random result.

Update
Another approach is to calculate the expected density matrix of $|\phi\rangle$:
$$ E(|\phi\rangle\langle\phi|) = \int_0^1(\sqrt{x_0}|0\rangle+\sqrt{1-x_0}|1\rangle)(\sqrt{x_0}\langle 0|+\sqrt{1-x_0}\langle 1|)dx_0 = $$ $$ = \int_0^1x_0dx_0 \cdot |0\rangle\langle 0| + \int_0^1(1-x_0)dx_0 \cdot |1\rangle\langle 1| + $$ $$ + \int_0^1\sqrt{x_0(1-x_0)}dx_0 \cdot |0\rangle\langle 1| + \int_0^1\sqrt{(1-x_0)x_0}dx_0 \cdot |1\rangle\langle 0| = $$ $$ = \frac{1}{2}(|0\rangle\langle 0| + |1\rangle\langle 1|) + \frac{\pi}{8}(|0\rangle\langle 1| + |1\rangle\langle 0|) $$

Hence the overall probability of obtaining $|0\rangle$ from measuring $|\phi\rangle$ in the computational basis is $\text{Tr}(E(|\phi\rangle\langle\phi|) \cdot |0\rangle\langle 0|) = \frac{1}{2}$.

Here we can see the difference from the complex case, where the expected density matrix would be $\frac{1}{2}I$.
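
The integral above can also be checked numerically. The following is a rough Monte Carlo sketch (my own, under the assumption that "complex case" means a uniformly random relative phase between the amplitudes): the off-diagonal entry tends to $\frac{\pi}{8} \approx 0.3927$ in the real case and to $0$ in the complex case.

```python
# Monte Carlo estimate of the expected density matrix E(|phi><phi|)
# for real amplitudes vs. a uniformly random relative phase.
import cmath, math, random
import numpy as np

def avg_rho(complex_phase: bool, samples: int = 200_000) -> np.ndarray:
    rho = np.zeros((2, 2), dtype=complex)
    for _ in range(samples):
        x0 = random.random()                                        # alpha_0^2 ~ Uniform[0, 1]
        phase = cmath.exp(1j * random.uniform(0, 2 * math.pi)) if complex_phase else 1
        v = np.array([math.sqrt(x0), phase * math.sqrt(1 - x0)])    # |phi>
        rho += np.outer(v, v.conj())                                 # |phi><phi|
    return rho / samples

print(avg_rho(False))        # off-diagonals ~ pi/8
print(avg_rho(True))         # ~ I/2
print(math.pi / 8)           # 0.3926...
```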

Danylo Y
  • Thank you for an excellent answer. I have two queries. Why does assuming $\alpha_{0}^{2}$ is uniformly distributed over $\{\frac{k}{n} : k = 1, 2, \dots, n\}$ correspond to $\alpha_{0}^{2}$ being uniformly distributed on $[0, 1]$? Also, why is the probability $P(|\phi\rangle = \sqrt{\frac{k}{n}} |0\rangle + \sqrt{\frac{n-k}{n}}|1\rangle )$ equal to $\frac{1}{n}$ for each $k$? – NewUser2020 Jun 09 '19 at 12:34
  • And what might be the changes for the complex case? – NewUser2020 Jun 09 '19 at 12:48
  • Also, since we get $P(|0\rangle)$ as exactly $\frac{1}{2}$, are we inferring that even if we are given many copies, say exponentially many, of $|\psi\rangle$ and $|\phi\rangle$, we can still do no better than random guessing? – NewUser2020 Jun 09 '19 at 13:06
  • Well, in the case of the uniform distribution on $[0,1]$, the probability of getting a sample from the interval $(\frac{k-1}{n}, \frac{k}{n}]$ is exactly $\frac{1}{n}$. So it is natural to associate each small interval with one of its points, carrying the corresponding probability $\frac{1}{n}$. – Danylo Y Jun 09 '19 at 14:48
  • In the complex case everything is almost the same. There can be different $|\phi\rangle$ satisfying $|\alpha_0|^2 = \frac{k}{n}$, but this does not matter since the conditional probability is the same: $P(|\phi\rangle ~|~ |\alpha_0|^2 = \frac{k}{n}) = \frac{1}{n}$. – Danylo Y Jun 09 '19 at 14:50
  • Yes, because the probabilities are the same in both cases. But if we are allowed to use measurements in other bases, then we can recover the state exactly and check whether it is equal to $|+\rangle$ or not. – Danylo Y Jun 09 '19 at 14:51