Instance-Hiding Schemes for Private Distributed Learning

An important problem today is how to allow multiple distributed entities to train a shared neural network on their private data while protecting that data's privacy. Federated learning is a standard framework for distributed deep learning, and one would like to ensure full privacy within it. Previously proposed methods, such as homomorphic encryption and differential privacy, come with drawbacks such as large computational overhead or a significant drop in accuracy. This work introduces a new and simple encryption of training data that hides the information in it while still allowing its use in the usual deep learning pipeline. The encryption is inspired by the classic notion of instance hiding in cryptography. Experiments show that training on encrypted data has only a small effect on final accuracy. We also give a theoretical analysis of the privacy guarantees of this encryption, showing that violating privacy requires an attacker to solve a difficult computational problem. Joint work with Yangsibo Huang, Zhao Song, and Kai Li. To appear at ICML 2020.
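To make the idea concrete, the sketch below shows one way an instance-hiding encoding of image data might look: each private example is mixed with a few randomly chosen public examples (a mixup-style convex combination) and then scrambled by a one-time random per-pixel sign flip. This is a minimal illustration, not the paper's exact construction; the helper name `instance_hide`, the mixing parameter `k`, and the use of a public pool are assumptions made for the example.

```python
import numpy as np

def instance_hide(x, public_pool, k=4, rng=None):
    """Hide one private example x (a flat float array, e.g. pixel
    values in [-1, 1]) by mixing it with k-1 random public examples
    and applying a one-time random per-coordinate sign flip.

    NOTE: illustrative sketch only; k and the mixing scheme here are
    assumptions, not the exact construction from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Random convex-combination weights summing to 1.
    lam = rng.dirichlet(np.ones(k))

    # Draw k-1 "mask" examples from the public pool.
    idx = rng.choice(len(public_pool), size=k - 1, replace=False)
    mixed = lam[0] * x + lam[1:] @ public_pool[idx]

    # A fresh random sign pattern destroys per-pixel information.
    sigma = rng.choice(np.array([-1.0, 1.0]), size=x.shape)
    return sigma * mixed

# Example: encode one private CIFAR-sized image against a public pool.
rng = np.random.default_rng(0)
public_pool = rng.uniform(-1, 1, size=(10_000, 32 * 32 * 3))
private_x = rng.uniform(-1, 1, size=32 * 32 * 3)
encoded = instance_hide(private_x, public_pool, k=4, rng=rng)
```

Under this kind of scheme, a training pipeline would feed the encoded examples (with correspondingly mixed labels) to a standard network, while recovering the original example would require an attacker to undo both the random mixture and the random sign pattern.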

Affiliation

Princeton University; Distinguished Visiting Professor, School of Mathematics