
federated-learning's Introduction

Federated Learning Simulator

Simulate Federated Learning with compressed communication on a large number of Clients.

Recreate experiments described in Sattler, F., Wiedemann, S., Müller, K. R., & Samek, W. (2019). Robust and Communication-Efficient Federated Learning from Non-IID Data. arXiv preprint arXiv:1903.02891.

Usage

First, set the environment variable 'TRAINING_DATA' to point to the directory where you want the training data to be stored. MNIST, Fashion-MNIST, and CIFAR-10 will be downloaded automatically.

python federated_learning.py

will run the Federated Learning experiment specified in

federated_learning.json.

You can specify:

Task

  • "dataset" : Choose from ["mnist", "cifar10", "kws", "fashionmnist"]
  • "net" : Choose from ["logistic", "lstm", "cnn", "vgg11", "vgg11s"]

Federated Learning Environment

  • "n_clients" : Number of Clients
  • "classes_per_client" : Number of different Classes every Client holds in it's local data
  • "participation_rate" : Fraction of Clients which participate in every Communication Round
  • "batch_size" : Batch-size used by the Clients
  • "balancedness" : Default 1.0, if <1.0 data will be more concentrated on some clients
  • "iterations" : Total number of training iterations
  • "momentum" : Momentum used during training on the clients

Compression Method

  • "compression" : Choose from [["none", {}], ["fedavg", {"n" : ?}], ["signsgd", {"lr" : ?}], ["stc_updown", [{"p_up" : ?, "p_down" : ?}]], ["stc_up", {"p_up" : ?}], ["dgc_updown", [{"p_up" : ?, "p_down" : ?}]], ["dgc_up", {"p_up" : ?}] ]

Logging

  • "log_frequency" : Number of communication rounds after which results are logged and saved to disk
  • "log_path" : e.g. "results/experiment1/"

Run multiple experiments by listing different configurations.
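
The exact schema is defined by the repository's federated_learning.json; the fragment below is only a sketch of what such a file might look like, assuming a list of experiment configurations built from the keys documented above (all values are illustrative):

```json
[
  {
    "dataset": "mnist",
    "net": "logistic",
    "n_clients": 100,
    "classes_per_client": 2,
    "participation_rate": 0.1,
    "batch_size": 20,
    "balancedness": 1.0,
    "iterations": 10000,
    "momentum": 0.9,
    "compression": ["stc_up", {"p_up": 0.01}],
    "log_frequency": 100,
    "log_path": "results/experiment1/"
  },
  {
    "dataset": "cifar10",
    "net": "vgg11s",
    "n_clients": 100,
    "classes_per_client": 10,
    "participation_rate": 0.1,
    "batch_size": 20,
    "balancedness": 1.0,
    "iterations": 20000,
    "momentum": 0.9,
    "compression": ["fedavg", {"n": 10}],
    "log_frequency": 100,
    "log_path": "results/experiment2/"
  }
]
```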

Options

  • --schedule : specify which batch of experiments to run, defaults to "main"

Citation

Paper

Sattler, F., Wiedemann, S., Müller, K. R., & Samek, W. (2019). Robust and Communication-Efficient Federated Learning from Non-IID Data. arXiv preprint arXiv:1903.02891.

federated-learning's People

Contributors

felisat


federated-learning's Issues

RuntimeError when running with all VGG models

When I try to run experiments with the VGG models, i.e. "vgg11" or "vgg11s" specified in the configuration file federated_learning.json, I get the following error message:

"RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 1, 32, 32] to have 3 channels, but got 1 channels instead"

Applying Compression

The compression of the gradients takes place in compress_weight_update_up, using the compress function with and without error accumulation. However, after compression is applied, the new gradient values do not appear to be applied to the server or client models. So how are the models evaluated?

NameError: name 'fedlearnCNN' is not defined

Hello,

In the configuration file federated_learning.json, I get the error message shown in the title when I set "net" to "cnn". A piece of code seems to be missing where the object tries to inherit from fedlearnCNN. Can you please provide it?

Why doesn't the accuracy fluctuate?

(screenshot of hyperparameter settings)
For a simple test, I set the hyperparameters as in the picture above and ran it, but why does the accuracy not exceed 0.12?
Does the number of clients have a large impact on learning? Setting the number of communication rounds to a large constant produces the same result.
