Summary
Machine learning algorithms are data-hungry and perform better the more data they are exposed to. Such data is collected in massive amounts by internet giants, and is often sensitive and private: examples include users' purchases and browsing history, their health data and exercise activity, the locations they travel to, and the messages they type into their mobile phones. The amount of raw data that must be collected can be significantly reduced using cryptographic techniques, in particular secure multiparty computation. Secure computation enables mutually distrustful parties to compute a joint function of their inputs without revealing those inputs to one another.
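To make the idea concrete, the following is a minimal sketch of secure two-party addition via additive secret sharing, one of the standard building blocks of secure multiparty computation. The modulus Q and all names below are illustrative assumptions for this sketch, not artifacts of the PRIMAL project itself.

import secrets

Q = 2**61 - 1  # an illustrative prime modulus; shares live in the integers mod Q

def share(x):
    """Split a private value x into two additive shares."""
    r = secrets.randbelow(Q)
    return r, (x - r) % Q  # each share alone reveals nothing about x

def reconstruct(s0, s1):
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % Q

# Party A holds a, party B holds b; neither learns the other's input.
a, b = 42, 1337
a0, a1 = share(a)   # A sends a1 to B and keeps a0
b0, b1 = share(b)   # B sends b0 to A and keeps b1

# Each party adds the shares it holds locally; only the recombined sum
# of the two inputs is ever revealed, never the inputs themselves.
sum_share_A = (a0 + b0) % Q
sum_share_B = (a1 + b1) % Q
assert reconstruct(sum_share_A, sum_share_B) == (a + b) % Q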
In this research, we will address secure computation techniques for machine learning tasks. The first task is private classification: one party holds a model trained on a sensitive dataset, and another party holds a sample and wishes to evaluate the model on that private sample. Our objective is to fulfill this task while achieving a significantly stronger security notion than previous works, namely security even if one of the parties deviates from the protocol specification (malicious security). The second task is federated learning, a technique that enables thousands of participants to train a neural network on their joint data without revealing the data to one another. However, recent work showed that such training is susceptible to the injection of backdoors: a single user can manipulate the joint model to their own benefit, significantly reducing the usefulness of federated learning in practice. Our objective is to guarantee immunity to such injections. Both objectives will be achieved by improving specific cryptographic building blocks and applying them to these applications.
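As a hedged illustration of why unprotected federated learning is fragile, the toy Python snippet below averages client weight updates in the spirit of federated averaging and shows how a single scaled, malicious update can dominate the aggregate (the "model replacement" idea behind backdoor injection). It is only a sketch; the precise attack and defence studied in this project may differ.

import numpy as np

def fed_avg(global_w, client_updates):
    """Average client weight updates into the global model."""
    return global_w + np.mean(client_updates, axis=0)

global_w = np.zeros(3)
honest = [np.array([0.1, 0.0, 0.1]) for _ in range(9)]

# A malicious client scales its update by the number of clients so that,
# after averaging, the aggregate is pulled toward its chosen direction.
backdoor_direction = np.array([0.0, 5.0, 0.0])
malicious = 10 * backdoor_direction

print(fed_avg(global_w, honest + [malicious]))
# -> [0.09, 5.0, 0.09]: the second coordinate is dragged far from the
#    honest consensus by a single participant.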
PRIMAL will speed up secure computation in practice and carries immense potential to enhance privacy in the digital era.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/891234
Start date: 01-06-2020
End date: 28-02-2024
Total budget: 185 464,32 Euro; Public funding: 185 464,00 Euro
Cordis data
Status: CLOSED
Call topic: MSCA-IF-2019
Update Date: 28-04-2024