
Consider two honest parties $A$ and $B$ who outsource their private data to a malicious server $S$, so the server stores both parties' data. At a later point in time they ask the server to run some computation over both datasets and return a private result to one of the parties.

To prove the security of the scheme, we construct, in the ideal world, a simulator $SIM_s$ that simulates the malicious server (the real-world adversary $A_s$).

Question: Does $SIM_s$ pick two random datasets, make them private, and send them to $A_s$?

What is not clear in this case is whether the parties (or the simulator) provide the inputs, or whether the server does, since the parties have already outsourced their datasets.

user13676

2 Answers


The objective of the simulator is to make the simulated world (often called the ideal world) indistinguishable from the real world (running the actual protocol). See my write-up on the UC framework here for more detail.

In the proof setup, the entity attempting to distinguish between the two worlds is often assumed to provide the inputs to the parties. That keeps things as generic as possible.

So what should be done is to have the environment $\mathcal{Z}$ pick the inputs for the parties.
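The setup above can be sketched as a toy program. This is only an illustration of the proof structure, not a security proof, and all names in it (`real_world_server_view`, `ideal_world_server_view`, the one-time-pad "protocol") are invented for the sketch: $\mathcal{Z}$ picks the parties' inputs, and the simulator must produce the server's view without ever seeing them.

```python
import secrets

MASK = (1 << 32) - 1  # work with 32-bit toy inputs

def real_world_server_view(x_a: int, x_b: int) -> tuple[int, int]:
    """In this toy protocol the parties one-time-pad their data before
    outsourcing, so the malicious server's view consists of two
    uniformly random 32-bit values."""
    r_a = secrets.randbits(32)
    r_b = secrets.randbits(32)
    return ((x_a ^ r_a) & MASK, (x_b ^ r_b) & MASK)

def ideal_world_server_view() -> tuple[int, int]:
    """The simulator SIM_s receives no inputs at all: it simply samples
    random values of the right length for the simulated server A_s."""
    return (secrets.randbits(32), secrets.randbits(32))

# The environment Z chooses the inputs; the real and ideal views are
# identically distributed (both uniform), so no distinguisher can tell
# the two worlds apart in this toy example.
x_a, x_b = 42, 1337  # chosen by Z
real = real_world_server_view(x_a, x_b)
ideal = ideal_world_server_view()
```

This mirrors the point of the answer: the simulator does not need to invent meaningful datasets, because indistinguishability is judged relative to inputs that $\mathcal{Z}$ supplies.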

mikeazo
  • Why does the simulator in the ideal world sometimes have more power than the adversary in the real world? For instance, the simulator can extract the secret value used by the other party (in particular in zero-knowledge proofs of knowledge), whereas the real-world adversary has no such capability. Or the simulator can receive the other party's input to the zero-knowledge ideal call. See "Efficient Two Party Protocols", page 190, full-simulation oblivious transfer. – user13676 Jul 26 '15 at 13:25
  • The simulator in the example can even make the adversary believe that the zero-knowledge proof verified when it did not. – user13676 Jul 26 '15 at 13:27
  • @user13676 you should ask that as a separate question, instead of in the comments. – mikeazo Jul 26 '15 at 14:20
  • In the ideal model, where the simulator deals with the adversary, can we allow the adversary to cheat the simulator with the same (negligible) probability as in the real world? Or must the simulator in the ideal model detect all misbehavior of the adversary? – user13676 Jul 29 '15 at 13:58
  • @user13676 The simulator does not detect anything. The simulator's goal is to make the two worlds indistinguishable. In particular, a quote from an answer on your other question is important to remember: "the simulator typically works by running the real-world adversary". So the simulator actually acts as the adversary in some sense. – mikeazo Jul 29 '15 at 14:04
  • @user13676 That said, who determines whether or not cheating is taking place? In the UC world (which I am most familiar with), it would be $\mathcal{Z}$. – mikeazo Jul 29 '15 at 14:06
  • @user13676 So clients in the real world only detect malicious behavior with negligible probability? Doesn't sound like a very useful protocol. – mikeazo Jul 29 '15 at 14:15
  • I said it wrong: the malicious party can escape detection with negligible probability in both worlds. I'll modify the above comment. – user13676 Jul 29 '15 at 14:17
  • Consider a case where two parties send their private data to a malicious server, which computes the result and sends it back to a party. My question is: in the ideal world, can we allow the adversary to escape detection with the same (negligible) probability as it can in the real world? Or must the simulator in the ideal world detect the adversary's misbehavior with probability exactly 1? – user13676 Jul 29 '15 at 14:19
  • I figured it was a mistake. :) The simulator should mimic the real world. So if cheating can occur undetected in the real world with some probability, it should hold in the ideal world too. – mikeazo Jul 29 '15 at 14:23

$SIM_s$ can do that, but it doesn't need to. The distinguisher chooses the parties' inputs.