
Hi all, I am working on federated learning, and here is my question:

Suppose two participants run federated learning together. For some models (e.g., logistic regression, where one party holds features $X_1$ and the label $y$, the other holds features $X_2$, and the corresponding coefficients are denoted $W_1$ and $W_2$), the schemes use "full encryption/masking": they encrypt or secret-share all intermediate results of the training process.

However, some schemes hide only part of the intermediate results. In the most "aggressive" variant, only $W_2X_2$ is encrypted or secret-shared; it is then sent to the other party, which decrypts/reconstructs $W_2X_2$ and continues the computation in plaintext. These schemes argue that exposing $W_2X_2$ reveals no further information about $X_2$ and is therefore secure.
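To make the setting concrete, here is a minimal sketch of such a vertically partitioned logistic-regression round in NumPy. All names and the exact update flow are my own illustration, not a specific published scheme; the point is that in the "aggressive" variant only `u2 = X2 @ w2` is protected in transit, and everything downstream runs in plaintext at party A:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vertical federated LR: party A holds X1 and labels y, party B holds X2.
n, d1, d2 = 8, 3, 2
X1 = rng.normal(size=(n, d1))      # party A's features
X2 = rng.normal(size=(n, d2))      # party B's private features
y = rng.integers(0, 2, size=n)     # labels, held by party A
w1 = np.zeros(d1)
w2 = np.zeros(d2)
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(100):
    # Party B computes its partial logits. In the "aggressive" scheme this
    # vector is the ONLY quantity that is encrypted/secret-shared in transit.
    u2 = X2 @ w2
    # Party A decrypts u2 and continues the computation in plaintext.
    logits = X1 @ w1 + u2
    residual = sigmoid(logits) - y           # plaintext at party A
    w1 -= lr * X1.T @ residual / n
    # Party A returns the residual so party B can update w2 locally.
    w2 -= lr * X2.T @ residual / n
```

Note that party A observes a fresh vector $u_2 = X_2 w_2$ every round, which is exactly the quantity whose leakage the question asks about.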

My question is how to evaluate the security of the latter kind of scheme. Are there any possible attacks by which a single participant could recover the original data, or build a new model that is almost identical to the final federated model?
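One way to see why such exposure can be risky: each revealed vector $u^{(t)} = X_2 w_2^{(t)}$ is a set of linear constraints on the rows of $X_2$. Here is a toy linear-algebra sketch (my own illustration, not a published attack) under the strong assumption that the observing party also learns the weight vectors $w_2^{(t)}$, e.g. because the updates are deterministic and the final model is shared. With enough linearly independent rounds, $X_2$ is recoverable exactly by least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d2, T = 5, 3, 4                 # T >= d2 observed rounds

X2 = rng.normal(size=(n, d2))      # party B's private features
W = rng.normal(size=(T, d2))       # hypothetical: the w2 vectors the attacker learned
U = X2 @ W.T                       # the exposed W2*X2 values, one column per round

# Row i of U satisfies U[i] = W @ X2[i]: T linear equations in d2 unknowns.
# With T >= d2 linearly independent rounds, least squares recovers X2 exactly.
X2_rec = np.linalg.lstsq(W, U.T, rcond=None)[0].T
assert np.allclose(X2_rec, X2)
```

In practice the attacker may not know $w_2^{(t)}$, so real attacks are more involved, but the sketch shows that "$W_2X_2$ alone leaks nothing about $X_2$" does not hold once multiple correlated observations accumulate.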

alexander

1 Answer


Here's one attack: "Feature Inference Attack on Model Predictions in Vertical Federated Learning" (https://arxiv.org/pdf/2011.09290.pdf).
Hope that helps.

vince.h