In his landmark 1985 paper, "Quantum theory, the Church–Turing principle and the universal quantum computer", Deutsch gives a quantum algorithm that computes $G(\mathbf f)=f(0)\oplus f(1)$ with only one query to $f$, but half of the time the algorithm flags that it has failed. As noted in this question, and as I understand it, Deutsch thought that we couldn't improve on this fifty-percent chance of failure.
But we know that we can run Deutsch's algorithm (or the Deutsch–Jozsa algorithm) without any chance of failure, and the circuit is "just" two Hadamards sandwiching the query. So why does Deutsch's original algorithm only achieve a 50% probability of success?
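For concreteness, here is a minimal NumPy sketch of the modern, deterministic version (the helper names `oracle` and `deutsch` are my own): prepare the target qubit in $|-\rangle$, query $U_f$ once, and close the interference with a second Hadamard on the input qubit.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
I = np.eye(2)

def oracle(f):
    """U_f |x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1.0, 0.0], [0.0, 1.0])    # |0>|1>
    state = np.kron(H, H) @ state              # input -> |+>, target -> |->
    state = oracle(f) @ state                  # one query; phase kickback
    state = np.kron(H, I) @ state              # close the interference
    p1 = state[2]**2 + state[3]**2             # Prob(input qubit measures 1)
    return int(round(p1))                      # = f(0) XOR f(1), with certainty

for f in (lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x):
    assert deutsch(f) == f(0) ^ f(1)           # succeeds on all four f's, every run
```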
He wasn't using the circuit model yet, and was instead envisioning Turing tapes in superposition. It looks like he properly prepared the superposition and evaluated $f(x)$ with a single query, but did he get hung up on not closing the interference, i.e., on not applying the final Hadamard that brings the two computational paths back together?
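For comparison, here is how I'm modeling his original procedure (his paper phrases the measurement differently, but this reproduces the statistics as I understand them, reusing the same $U_f$ as above): the target starts in $|0\rangle$ rather than $|-\rangle$, and both qubits are measured in the $\pm$ basis at the end. The $|-\rangle$ outcome on the target carries the parity; the $|+\rangle$ outcome is the "fail" flag, and each occurs with probability $1/2$.

```python
import numpy as np
rng = np.random.default_rng()

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def oracle(f):                                   # same U_f as in the sketch above
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch_1985(f):
    state = np.kron(H @ [1.0, 0.0], [1.0, 0.0])  # (|0> + |1>)/sqrt(2) (x) |0>
    state = oracle(f) @ state                    # (|0,f(0)> + |1,f(1)>)/sqrt(2)
    state = np.kron(H, H) @ state                # rotate both qubits to the +/- basis
    a, b = divmod(rng.choice(4, p=state**2), 2)  # sample; a: input qubit, b: target
    return a if b == 1 else "fail"               # target |->: a = f(0) XOR f(1)

results = [deutsch_1985(lambda x: x) for _ in range(1000)]
print(results.count("fail") / 1000)              # ~0.5; the other runs all return 1
```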