Know thy self, know thy enemy. A thousand battles, a thousand victories. -- Sun Tzu
Blades is an experimental simulator for Byzantine-robust federated learning with attacks and defenses.
Blades is designed to simulate attacks and defenses in federated learning, enabling high-performance, fast evaluation of both existing strategies and new techniques. Key features of Blades include:
- Specificity: Unlike existing federated learning simulators, Blades is specifically designed to simulate attacks and defenses. It therefore provides built-in implementations of representative attack strategies as well as robust aggregation schemes, so that users can efficiently validate their approaches and compare them with existing solutions.
- Scalability: Blades is scalable in terms of both clients and computing resources. In resource-constrained systems, it allows each trainer/actor to serve multiple clients' requests sequentially, so the scale of an experiment is not limited by the number of trainers/actors. Built on Ray, Blades can be deployed either on a single machine or on a computing cluster.
- Extensibility: Blades is highly compatible with PyTorch, allowing any combination of model, dataset, and optimizer. It supports diverse federated learning configurations, including standardized implementations such as FedSGD and FedAvg, with PyTorch as the framework of choice for implementing the models. Blades allows end users to incorporate new types of attacks, defenses, and optimization algorithms in a straightforward fashion.
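To give a flavor of what a user-defined attack could look like, here is a minimal sketch of a sign-flipping client in plain Python. The class name and method hooks are purely illustrative assumptions, not the actual Blades API:

```python
# Hypothetical sketch of a user-defined attack; the class name and the
# callback hook below are illustrative, NOT the real Blades API.
class MaliciousClient:
    """A client that flips the sign of its local update before upload."""

    def local_update(self):
        # In a real client this would come from local training;
        # here we use a fixed toy update vector.
        return [0.5, -1.2, 3.0]

    def poison_update(self, update):
        # Sign-flipping attack: negate every coordinate of the update,
        # pushing the global model away from the descent direction.
        return [-u for u in update]


client = MaliciousClient()
poisoned = client.poison_update(client.local_update())  # [-0.5, 1.2, -3.0]
```

A real integration would subclass the framework's client class and override its update hook instead of defining a standalone class.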
NOTE: More features are under development and the APIs are subject to change. If you are interested in this project, don't hesitate to contact us or open a PR directly.
You can also develop your own attack/defense and evaluate it by cloning Blades:

```bash
git clone https://github.com/lishenghui/blades.git
cd blades
pip install -v -e .
# "-v" means verbose, i.e., more output
# "-e" means installing the project in editable mode,
# so any local modifications to the code take effect without reinstallation.

cd scripts
python main.py --config_path ../config/example.yaml
```
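The experiment is driven by the YAML file passed via `--config_path`. As a purely hypothetical illustration of the kind of settings such a file would carry (all field names below are assumptions; consult `config/example.yaml` for the actual schema):

```yaml
# Hypothetical experiment configuration; field names are illustrative only.
dataset: cifar10
model: resnet18
num_clients: 20
num_byzantine: 5        # number of malicious clients
attack: signflipping    # one of the built-in attack strategies
aggregator: fltrust     # robust aggregation scheme
global_rounds: 100
local_steps: 1
```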
In detail, the following attack and defense strategies are currently implemented:
Attacks:

Strategy | Description | Source |
---|---|---|
Noise | Adds random noise to the updates. | Source |
Labelflipping | Fang et al., Local Model Poisoning Attacks to Byzantine-Robust Federated Learning, USENIX Security '20 | Source |
Signflipping | Li et al., RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets, AAAI '19 | Source |
ALIE | Baruch et al., A Little Is Enough: Circumventing Defenses for Distributed Learning, NeurIPS '19 | Source |
IPM | Xie et al., Fall of Empires: Breaking Byzantine-Tolerant SGD by Inner Product Manipulation, UAI '20 | Source |
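To illustrate the flavor of these attacks, below is a minimal plain-Python sketch of the IPM idea: the malicious clients submit a negatively scaled mean of the benign updates (assuming, as the attack does, full knowledge of those updates). The function name and scaling constant are illustrative, not taken from the Blades codebase:

```python
def ipm_attack(benign_updates, epsilon=0.5):
    """Inner Product Manipulation sketch: craft an update whose inner
    product with the benign mean is negative, by returning the benign
    mean negated and scaled by a small epsilon."""
    n = len(benign_updates)
    dim = len(benign_updates[0])
    # Coordinate-wise mean of the benign updates.
    mean = [sum(u[i] for u in benign_updates) / n for i in range(dim)]
    # Flip the direction and shrink the magnitude to stay inconspicuous.
    return [-epsilon * m for m in mean]


benign = [[1.0, 2.0], [3.0, 2.0]]
malicious = ipm_attack(benign)  # [-1.0, -1.0]
```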
Adaptive attacks (tailored to the aggregation rule):

Strategy | Description | Source |
---|---|---|
FangAttack | Fang et al., Local Model Poisoning Attacks to Byzantine-Robust Federated Learning, USENIX Security '20 | Source |
DistanceMaximization | Shejwalkar et al., Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning, NDSS '21 | Source |
Defenses:

Strategy | Description | Source |
---|---|---|
FLTrust | Cao et al., FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping, NDSS '21 | Source |
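As a rough sketch of how FLTrust-style aggregation works (simplified, in plain Python, with no claim to match the Blades implementation): each client update is weighted by the ReLU of its cosine similarity with a trusted server update, rescaled to the server update's norm, and the weighted average is taken:

```python
import math


def fltrust_aggregate(server_update, client_updates):
    """Simplified FLTrust-style aggregation sketch: trust scores are
    ReLU(cosine similarity with the trusted server update); client
    updates are rescaled to the server update's norm, then averaged
    with the trust scores as weights."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v))

    s_norm = norm(server_update)
    scores, rescaled = [], []
    for u in client_updates:
        cos = sum(a * b for a, b in zip(server_update, u)) / (s_norm * norm(u))
        scores.append(max(cos, 0.0))  # ReLU: zero trust for opposing directions
        rescaled.append([x * s_norm / norm(u) for x in u])

    total = sum(scores)
    if total == 0.0:
        return [0.0] * len(server_update)
    return [
        sum(s * v[i] for s, v in zip(scores, rescaled)) / total
        for i in range(len(server_update))
    ]


# A sign-flipped client ([-1, 0]) gets zero weight against server update [1, 0].
result = fltrust_aggregate([1.0, 0.0], [[2.0, 0.0], [-1.0, 0.0]])  # [1.0, 0.0]
```

The key property is that updates pointing away from the trusted direction receive zero weight, so sign-flipping-style attackers are filtered out entirely.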
To run Blades on a cluster, you only need to deploy a Ray cluster according to the official guide.
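For reference, a basic Ray cluster can also be brought up manually with the `ray` CLI (the address and port below are placeholders to fill in for your environment):

```bash
# On the head node (choose any free port):
ray start --head --port=6379

# On each worker node, pointing at the head node's address:
ray start --address='<head-node-ip>:6379'
```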
Please cite our paper (and the respective papers of the methods used) if you use this code in your own work:
```bibtex
@article{li2022blades,
  title   = {Blades: A Simulator for Attacks and Defenses in Federated Learning},
  author  = {Li, Shenghui and Ju, Li and Zhang, Tianru and Ngai, Edith and Voigt, Thiemo},
  journal = {arXiv preprint arXiv:2206.05359},
  year    = {2022}
}
```