This is a simple implementation of Federated Learning (FL) with Differential Privacy (DP). The bare FL model (without DP) reproduces the paper *Communication-Efficient Learning of Deep Networks from Decentralized Data* [1]. Each client trains its local model with DP-SGD ([2], implemented in tensorflow-privacy [3]) to perturb the model parameters it shares with the server.
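The core DP-SGD step is per-example gradient clipping followed by Gaussian noise. A minimal NumPy sketch of that step is below; the function name and signature are illustrative, not taken from this repository's code:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One DP-SGD update [2]: clip each per-example gradient to clip_norm,
    sum, add Gaussian noise with std noise_multiplier * clip_norm, average."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds clip_norm.
        clipped.append(g / max(1.0, norm / clip_norm))
    n = len(clipped)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / n
    return params - lr * noisy_mean
```

With `noise_multiplier = 0` this reduces to plain SGD on clipped gradients, which is a convenient sanity check.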
- torch 1.7.1
- tensorflow-privacy 0.5.1
- numpy 1.16.2
- FLModel.py: definitions of the FL client and FL server classes
- MLModel.py: CNN model for the MNIST dataset
- utils.py: samples MNIST in a non-i.i.d. manner
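One common non-i.i.d. split is the shard-based scheme from [1]: sort examples by label, cut them into equal shards, and hand each client a couple of shards so it sees only a few classes. The sketch below illustrates that scheme in general; it is not necessarily the exact code in utils.py:

```python
import numpy as np

def sample_noniid(labels, client_num, shards_per_client=2, rng=None):
    """Shard-based non-i.i.d. split [1]: sort indices by label, cut into
    client_num * shards_per_client equal shards, assign shards at random."""
    rng = rng or np.random.default_rng(0)
    order = np.argsort(labels, kind="stable")          # indices sorted by label
    num_shards = client_num * shards_per_client
    shards = np.array_split(order, num_shards)
    shard_ids = rng.permutation(num_shards)            # shuffle shard assignment
    return [
        np.concatenate(
            [shards[s] for s in shard_ids[i * shards_per_client:(i + 1) * shards_per_client]]
        )
        for i in range(client_num)
    ]
```

With `shards_per_client=2`, each client's data covers at most two digit classes, which is the pathological split used in the MNIST experiments of [1].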
- Download the MNIST dataset
- Install tensorflow-privacy
- Set the parameters in test.py/test.ipynb
- Execute test.ipynb to train the model on the MNIST dataset
```python
# code segment in test.py/test.ipynb
lr = 0.1
fl_param = {
    'output_size': 10,         # number of units in the output layer
    'client_num': client_num,  # number of clients
    'model': MnistCNN,         # local model class
    'data': d,                 # dataset
    'lr': lr,                  # learning rate
    'E': 100,                  # number of local iterations per round
    'eps': 8.0,                # privacy budget epsilon
    'delta': 1e-5,             # delta in (epsilon, delta)-DP
    'q': 0.05,                 # sampling rate
    'clip': 8,                 # gradient clipping norm
    'tot_T': 10,               # number of aggregation times (communication rounds)
}
```
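After each round of `E` local iterations, the server combines the clients' parameters by the FedAvg rule of [1]: a weighted average by local dataset size. A minimal sketch of that aggregation (the function name is illustrative, not this repository's API):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation [1]: average each parameter tensor across clients,
    weighted by the number of local examples."""
    total = sum(client_sizes)
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for a, w in zip(agg, weights):
            a += (n / total) * w
    return agg
```

The size weighting means a client holding three times as much data pulls the global model three times as hard, which matters under the non-i.i.d. split above.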
[1] McMahan, Brendan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. "Communication-Efficient Learning of Deep Networks from Decentralized Data." In Proc. Artificial Intelligence and Statistics (AISTATS), 2017.
[2] Abadi, Martín, et al. "Deep Learning with Differential Privacy." In Proc. ACM SIGSAC Conference on Computer and Communications Security (CCS), 2016.
[3] TensorFlow Privacy: https://github.com/tensorflow/privacy