FedCache

This repository is the official PyTorch implementation DEMO of FedCache: A Knowledge Cache-driven Federated Learning Architecture for Personalized Edge Intelligence, published in IEEE Transactions on Mobile Computing (TMC), 2024.

News

[Apr. 2024] FedCache's remote information retrieval has been advanced and implemented by PTaaS. Privacy-Preserving Training-as-a-Service for On-Device Intelligence: Concept, Architectural Scheme, and Open Problems (arxiv.org).

[Mar. 2024] FedCache is featured by Tencent. 机器人再度大幅进化!阿西莫夫三法则还有效吗?(Robots are Evolving Dramatically Again! Is Asimov's "Three Laws of Robotics" Still Valid?).

[Mar. 2024] I was invited to give a talk to the Network System and Machine Learning Group, School of Computer Science, Peking University. 面向个性化边缘智能的缓存驱动联邦学习: 研究进展与开放性问题 (Cache-driven Federated Learning for Personalized Edge Intelligence: Research Progress and Open Problems).

[Mar. 2024] FedCache is featured by NGUI. 缓存驱动联邦学习架构赋能个性化边缘智能 (Cache-Driven Federated Learning Architecture Energizes Personalized Edge Intelligence).

[Mar. 2024] FedCache is included by the first survey investigating the application of knowledge distillation in federated edge learning. Knowledge Distillation in Federated Edge Learning: A Survey (arxiv.org).

[Feb. 2024] FedCache is featured on Phoenix Tech. 缓存驱动联邦学习架构来了!专为个性化边缘智能打造 (The Cache-Driven Federated Learning Architecture is Coming! Built for Personalized Edge Intelligence).

[Feb. 2024] FedCache is accepted by IEEE Transactions on Mobile Computing (TMC). FedCache: A Knowledge Cache-Driven Federated Learning Architecture for Personalized Edge Intelligence (IEEE Xplore).

[Jan. 2024] One follow-up paper examines the impact of logits poisoning attacks on FedCache. Logits Poisoning Attack in Federated Distillation (arxiv.org).

[Dec. 2023] We discovered a Chinese blog on CSDN that interprets FedCache. 缓存驱动的联邦学习架构FedCache (FedCache: Cache-Driven Federated Learning Architecture).

[Dec. 2023] One follow-up paper shows that FedCache's communication efficiency can be further improved by accumulating local updates. Improving Communication Efficiency of Federated Distillation via Accumulating Local Updates (arxiv.org).

[Aug. 2023] FedCache is featured by Netease. AI在量子计算中的研究进展 (Research Progress of AI in Quantum Computing).

[Aug. 2023] FedCache is released on arxiv. FedCache: A Knowledge Cache-driven Federated Learning Architecture for Personalized Edge Intelligence (arxiv.org).

Highlight

  • FedCache is a device-friendly, scalable, and effective personalized federated learning architecture tailored for edge computing.
  • FedCache guarantees satisfactory performance while conforming to diverse personalized device-side constraints.
  • FedCache improves communication efficiency by up to ×200 over previous architectures, and accommodates heterogeneous devices as well as asynchronous interactions between devices and the server. A minimal sketch of the knowledge-cache idea follows this list.
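
FedCache's central mechanism is a server-side knowledge cache: clients register a hash per local sample once, then exchange per-sample logits instead of model parameters, and the server answers each request with knowledge retrieved from the most related cached samples. Below is a minimal, illustrative sketch of that lookup built on hnswlib (one of the listed requirements). The function names, the mean-ensemble step, and the data layout are assumptions for illustration, not the repository's actual API.

```python
# Illustrative sketch of a knowledge cache with approximate nearest-neighbor
# retrieval; NOT the repository's exact code.
import numpy as np
import hnswlib

DIM, R = 128, 16  # assumed hash dimension; R = neighbors fetched per query

# HNSW index over sample hashes (hashes are assumed to come from a
# pre-trained encoder on each device).
index = hnswlib.Index(space="cosine", dim=DIM)
index.init_index(max_elements=100_000, ef_construction=200, M=16)

knowledge = {}  # server-side cache: sample id -> latest uploaded logits

def register(sample_id: int, sample_hash: np.ndarray) -> None:
    """Initialization: each client uploads one hash per local sample."""
    index.add_items(sample_hash[None, :], np.array([sample_id]))

def update(sample_id: int, logits: np.ndarray) -> None:
    """Training: clients upload per-sample logits instead of model weights."""
    knowledge[sample_id] = logits

def fetch(sample_hash: np.ndarray) -> np.ndarray:
    """Server returns ensembled logits of the R most related cached samples
    (assumes at least R samples are registered and have uploaded logits)."""
    ids, _ = index.knn_query(sample_hash[None, :], k=R)
    return np.mean([knowledge[i] for i in ids[0] if i in knowledge], axis=0)
```

Clients then distill locally from the fetched logits, which is why only small knowledge vectors, never model parameters, cross the network.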

Family of FedCache

  • Foundation Works: FedICT, FedDKC, MTFL, DS-FL, FD
  • Derivative Works:
    • Communication: ALU
    • Poisoning Attack: FDLA
    • Generalization: Coming soon...
    • Security: Coming soon...
    • Application: Coming soon...
    • Robustness: TBD
    • Scaling: TBD
    • Fairness: TBD
    • Deployment: TBD

If you have any ideas or questions regarding FedCache, please feel free to contact [email protected].

Requirements

  • Python: 3.10
  • PyTorch: 1.13.1
  • torchvision: 0.14.1
  • hnswlib
  • Other dependencies
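
A matching environment can be set up roughly as follows (pip package names assumed from the list above; pick the torch build matching your CUDA version):

```
pip install torch==1.13.1 torchvision==0.14.1 hnswlib
```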

Run this DEMO

python main_fedcache.py


Evaluation

MAUA denotes the maximum average user model accuracy.

Model Homogeneous Setting

MNIST Dataset

| Method | MAUA (%) | Communication Cost (G) | Speed-up Ratio |
| --- | --- | --- | --- |
| pFedMe | 94.89 | 13.25 | ×1.0 |
| MTFL | 95.59 | 7.77 | ×1.7 |
| FedDKC | 89.62 | 9.13 | ×1.5 |
| FedICT | 84.62 | - | - |
| FD | 84.19 | - | - |
| FedCache | 87.77 | 0.99 | ×13.4 |
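
Across these tables, the speed-up ratio is consistent with dividing the ×1.0 baseline's communication cost by each method's cost; a dash presumably marks an unreported value. A quick sanity check against the MNIST table above (this reading is inferred from the numbers, not quoted from the paper):

```python
# Inferred relation: speed-up = baseline communication cost / method cost.
baseline_cost = 13.25  # pFedMe on MNIST, the x1.0 baseline (G)
fedcache_cost = 0.99   # FedCache on MNIST (G)
print(f"x{baseline_cost / fedcache_cost:.1f}")  # -> x13.4, matching the table
```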

FashionMNIST Dataset

| Method | MAUA (%) | Communication Cost (G) | Speed-up Ratio |
| --- | --- | --- | --- |
| pFedMe | 81.57 | 20.71 | ×1.0 |
| MTFL | 83.92 | 12.33 | ×1.7 |
| FedDKC | 78.24 | 8.43 | ×2.5 |
| FedICT | 76.90 | 13.34 | ×1.6 |
| FD | 76.32 | - | - |
| FedCache | 77.71 | 0.08 | ×258.9 |

CIFAR-10 Dataset

| Method | MAUA (%) | Communication Cost (G) | Speed-up Ratio |
| --- | --- | --- | --- |
| pFedMe | 37.49 | - | - |
| MTFL | 43.43 | 52.99 | ×1.0 |
| FedDKC | 45.87 | 11.46 | ×4.6 |
| FedICT | 43.61 | 10.69 | ×5.0 |
| FD | 42.77 | - | - |
| FedCache | 44.42 | 0.19 | ×278.9 |

CINIC-10 Dataset

| Method | MAUA (%) | Communication Cost (G) | Speed-up Ratio |
| --- | --- | --- | --- |
| pFedMe | 31.65 | - | - |
| MTFL | 34.09 | - | - |
| FedDKC | 43.95 | 4.12 | ×1.3 |
| FedICT | 42.79 | 5.50 | ×1.0 |
| FD | 39.36 | - | - |
| FedCache | 40.45 | 0.07 | ×78.6 |

Model Heterogeneous Setting

MNIST Dataset

| Method | MAUA (%) | Communication Cost (G) | Speed-up Ratio |
| --- | --- | --- | --- |
| FedDKC | 85.38 | 10.53 | ×1.0 |
| FedICT | 80.53 | - | - |
| FD | 79.90 | - | - |
| FedCache | 83.94 | 0.10 | ×105.3 |

FashionMNIST Dataset

| Method | MAUA (%) | Communication Cost (G) | Speed-up Ratio |
| --- | --- | --- | --- |
| FedDKC | 77.96 | 12.64 | ×1.0 |
| FedICT | 76.11 | - | - |
| FD | 75.57 | - | - |
| FedCache | 77.26 | 0.08 | ×158.0 |

CIFAR-10 Dataset

| Method | MAUA (%) | Communication Cost (G) | Speed-up Ratio |
| --- | --- | --- | --- |
| FedDKC | 44.53 | 4.58 | ×1.2 |
| FedICT | 43.96 | 5.35 | ×1.0 |
| FD | 40.40 | - | - |
| FedCache | 41.59 | 0.05 | ×107.0 |

CINIC-10 Dataset

| Method | MAUA (%) | Communication Cost (G) | Speed-up Ratio |
| --- | --- | --- | --- |
| FedDKC | 44.80 | 4.12 | ×1.3 |
| FedICT | 43.40 | 5.50 | ×1.0 |
| FD | 40.76 | - | - |
| FedCache | 41.71 | 0.07 | ×78.6 |

Cite this work

@ARTICLE{wu2024fedcache,
  author={Wu, Zhiyuan and Sun, Sheng and Wang, Yuwei and Liu, Min and Xu, Ke and Wang, Wen and Jiang, Xuefeng and Gao, Bo and Lu, Jinda},
  journal={IEEE Transactions on Mobile Computing}, 
  title={FedCache: A Knowledge Cache-Driven Federated Learning Architecture for Personalized Edge Intelligence}, 
  year={2024},
  volume={},
  number={},
  pages={1-15},
  keywords={Computer architecture;Training;Servers;Computational modeling;Data models;Adaptation models;Performance evaluation;Communication efficiency;distributed architecture;edge computing;knowledge distillation;personalized federated learning},
  doi={10.1109/TMC.2024.3361876}
}

Related Works

FedICT: Federated Multi-task Distillation for Multi-access Edge Computing. IEEE Transactions on Parallel and Distributed Systems (TPDS). 2023

Agglomerative Federated Learning: Empowering Larger Model Training via End-Edge-Cloud Collaboration. IEEE International Conference on Computer Communications (INFOCOM). 2024

Exploring the Distributed Knowledge Congruence in Proxy-data-free Federated Distillation. ACM Transactions on Intelligent Systems and Technology (TIST). 2024

Federated Class-Incremental Learning with New-Class Augmented Self-Distillation. arXiv preprint arXiv:2401.00622. 2024

Survey of Knowledge Distillation in Federated Edge Learning. arXiv preprint arXiv:2301.05849. 2023

