
When proving that weak OWFs imply the existence of strong OWFs, the standard construction concatenates many independent applications of the weak OWF (see, e.g., here). That is, given a weak OWF $f$, the following is a strong OWF: $$F(x_1, \dots, x_t) = (f(x_1), \dots, f(x_t)),$$ where $|x_i| = n$, $t = n \cdot p(n)$, and $p \in \text{poly}(n)$ is such that $$\Pr_{x \in \{0,1\}^n}\left[A(f(x), 1^n) \in f^{-1}(f(x))\right] < 1 - \frac{1}{p(n)}$$ for every PPT $A$.
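
For concreteness, here is a minimal executable sketch of the construction, assuming $n$ is a multiple of 8 so inputs fit into whole bytes; `weak_owf` is a hypothetical stand-in for $f$, and $t$ would be set to $n \cdot p(n)$ as above:

```python
import hashlib
import os

def weak_owf(x: bytes) -> bytes:
    """Stand-in for the weak OWF f (hypothetical placeholder; any fixed
    function on n-bit strings serves to illustrate the construction)."""
    return hashlib.sha256(x).digest()

def F(xs):
    """Direct product: F(x_1, ..., x_t) = (f(x_1), ..., f(x_t))."""
    return tuple(weak_owf(x) for x in xs)

def sample_input(n_bytes: int, t: int):
    """Pick x_1, ..., x_t uniformly and independently (n bits = n_bytes * 8)."""
    return tuple(os.urandom(n_bytes) for _ in range(t))
```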

The proof is by a reducibility argument: one supposes there is a PPT $B$ which inverts $F$ with non-negligible probability (i.e., $F$ is not strongly one-way) and then constructs a PPT $A$ which inverts the weak OWF $f$ with probability greater than $1 - \frac{1}{p(n)}$, contradicting the weak one-wayness of $f$.

Most versions of the construction(*) I have seen are as follows: $A$ receives input $y = f(x)$ and $1^n$, where $x \in \{0,1\}^n$ is picked uniformly (but not revealed to $A$). $A$ then picks $i \in \{1, \dots, t \}$ and $x_j \in \{0,1\}^n$ uniformly for $j \neq i$ and runs $B$ on $Y = (f(x_1), \dots, f(x_{i-1}), y, f(x_{i+1}), \dots, f(x_t))$; if $B$ returns a preimage of $Y$, its $i$-th block is (with non-negligible probability) a preimage of $y$. If $B$ does not produce a preimage, the procedure is repeated a certain number of times.
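
A sketch of this reduction, reusing the hypothetical `weak_owf` above and assuming an $F$-inverter `B` that returns a preimage tuple or `None` (both names are placeholders, not anything from the literature):

```python
import os
import random

def A(y: bytes, n_bytes: int, t: int, trials: int):
    """Attempt to invert f on the challenge y by embedding it at a
    random position of an F-instance and running the assumed inverter B."""
    for _ in range(trials):
        i = random.randrange(t)                       # uniform position for y
        xs = [os.urandom(n_bytes) for _ in range(t)]  # fresh uniform x_j for j != i
        Y = [weak_owf(x) for x in xs]
        Y[i] = y                                      # plug the challenge into slot i
        X = B(tuple(Y))                               # assumed to return a tuple or None
        if X is not None and weak_owf(X[i]) == y:
            return X[i]                               # the i-th block is a preimage of y
    return None
```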

My question is: Why must $y$ be placed at a random position in $Y$? Why can't, for example, $i = 1$ be fixed? After all, $y = f(x)$ and $x$ is guaranteed to be picked uniformly (like every other $x_j$).

These lecture notes here hint that this is done to "balance the probabilities", which is unfortunately too vague for me to comprehend.


(*) I have also found an alternative construction in Oded Goldreich's "Computational Complexity: A Conceptual Perspective" in which $A$ does not pick $i$ randomly; instead, it iterates over each possible value of $i$. That said, I can see how the two are essentially equivalent.

dkaeae

1 Answer


As you write, the proof is via a reduction that transforms any adversary $B$ that inverts $F$ with non-negligible probability $\varepsilon(n)$ into an adversary $A$ that inverts $f$ with probability at least $1-1/p(n) \approx 1$.

Assume for simplicity that $f$ is an injective function. Consider a potential adversary $B$ that internally works as follows: given $Y=(y_1, \ldots, y_t)$, it checks whether $y_1$ belongs to a certain “special” $\varepsilon(n)$-fraction of the image $f(\{0,1\}^n)$. If so, it computes and outputs the preimage $X = F^{-1}(Y)$. Otherwise, it outputs nothing, i.e., it fails to invert this $Y$. Clearly, this $B$ satisfies the above hypothesis, because $y_1$ has an $\varepsilon(n)$ probability of being in the “special” set. (How $B$ checks $y_1$ and inverts $F$ is not important, because the reduction treats $B$ as a “black box.” So we can think of $B$ as using unbounded computation to perform these steps.)
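
To make this concrete, such a $B$ could look like the following toy sketch, where `special` (the membership test for the $\varepsilon(n)$-fraction) and `invert_f` (unbounded brute-force inversion) are hypothetical helpers:

```python
def B(Y):
    """Pathological F-inverter: inverts exactly when y_1 is 'special'."""
    if not special(Y[0]):                 # y_1 outside the eps-fraction: give up
        return None
    return tuple(invert_f(y) for y in Y)  # unbounded inversion of every block
```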

Now, consider a reduction $A$ that works as you propose, by always “plugging in” its external challenge $y=f(x)$ in the first position, letting $y_1=y$, and choosing the rest of $Y$ itself. Clearly, $B$ outputs the preimage $F^{-1}(Y)$ if and only if $y$ belongs to the “special” set, which occurs with probability $\varepsilon(n)$. Therefore, $A$ succeeds in outputting the preimage $f^{-1}(y)$ with the same probability $\varepsilon(n)$. But this is not enough: we need $A$ to succeed with much larger probability $1-1/p(n)$.

If $A$ repeats its procedure many times, always letting $y_1=y$ but changing the other $y_i$, the probability of success would not improve: whether $B$ inverts $Y$ or not depends only on the value of $y_1=y$, and this does not change. (Remember, $A$ gets only one challenge value $y$ and needs to invert on it with high probability.)
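
In symbols, under the same toy $B$ as above: writing $S$ for the “special” set, a fixed-position reduction making $m$ attempts succeeds with probability $$\Pr[\text{some attempt succeeds}] \;=\; \Pr_{x \in \{0,1\}^n}[f(x) \in S] \;=\; \varepsilon(n) \qquad \text{for every } m,$$ because $B$'s verdict depends only on the coordinate $y_1 = y$, which is identical across all attempts.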

The correct reduction and its analysis circumvent this problem by showing that there must exist some position $i$ for which a large ($1-1/p(n)$) fraction of $y_i$ have the following property: if we choose the other $y_j = f(x_j)$ at random as described, then $B$ inverts that $Y$ with non-negligible probability (over the choice of the $y_j$ alone). By guessing such an $i$ at random (or enumerating all of them) and repeating the core procedure many times, we can ensure a high probability that $B$ actually inverts one of the $Y$ we provide it, thereby handing $A$ a preimage of $y$. (The actual proof does not depend on $f$ being injective.)
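
For illustration, the enumerate-all-positions variant (the one mentioned in the question's footnote) might be sketched as follows, again with the hypothetical `weak_owf` and the assumed $F$-inverter `B`:

```python
import os

def A_enumerate(y: bytes, n_bytes: int, t: int, reps: int):
    """For every position i, repeatedly embed y at slot i with fresh
    companion blocks and run B; return a preimage of y if any attempt works."""
    for i in range(t):
        for _ in range(reps):
            xs = [os.urandom(n_bytes) for _ in range(t)]  # fresh x_j each attempt
            Y = [weak_owf(x) for x in xs]
            Y[i] = y                                      # challenge at slot i
            X = B(tuple(Y))
            if X is not None and weak_owf(X[i]) == y:
                return X[i]
    return None
```

With `reps` set to a suitable polynomial (depending on $t$ and $\varepsilon$), each attempt at the good position succeeds independently with non-negligible probability, so some attempt succeeds with all but negligible probability.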

Chris Peikert