pfl-research: Simulation Framework for Accelerating Research in Private Federated Learning

Federated Learning (FL) is an emerging ML training paradigm in which clients own their data and collaborate to train a global model without revealing any data to the server or to other participants.

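To make this concrete, below is a minimal sketch of one federated averaging (FedAvg) round simulated in PyTorch. It illustrates the loop described above, in which only model parameters ever leave a client; it is a conceptual illustration, not the pfl-research API, and the model, client data, and hyperparameters are hypothetical placeholders.

```python
# Minimal FedAvg simulation sketch (conceptual; NOT the pfl-research API).
import copy
import torch
import torch.nn as nn

def local_train(global_model, data, targets, epochs=1, lr=0.1):
    """Train a copy of the global model on one client's private data."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss_fn(model(data), targets).backward()
        optimizer.step()
    return model.state_dict()

def fedavg_round(global_model, client_batches):
    """One round: clients train locally, the server averages their weights."""
    states = [local_train(global_model, x, y) for x, y in client_batches]
    avg_state = {
        name: torch.stack([s[name] for s in states]).mean(dim=0)
        for name in states[0]
    }
    global_model.load_state_dict(avg_state)
    return global_model

# Hypothetical toy setup: 4 simulated clients with random 10-feature data.
model = nn.Linear(10, 2)
clients = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(4)]
model = fedavg_round(model, clients)
```
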
Researchers commonly perform experiments in a simulation environment to quickly iterate on ideas. However, existing open-source tools lack the efficiency required to simulate FL on larger, more realistic datasets. We introduce pfl-research, a fast, modular, and easy-to-use Python framework for simulating FL. It supports TensorFlow, PyTorch, and non-neural-network models, and is tightly integrated with state-of-the-art privacy algorithms.

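As a rough illustration of the kind of privacy algorithm such a framework integrates, the sketch below applies central differential privacy to simulated client updates: each update is clipped to a fixed L2 norm, the clipped updates are summed, and Gaussian noise calibrated to the clipping bound is added. The function name and parameter defaults are assumptions for illustration, not pfl-research's actual interface.

```python
# Central DP aggregation sketch: clip per-client updates, add Gaussian noise.
# Illustrative only; names and defaults are assumptions, not pfl-research's API.
import torch

def dp_aggregate(client_deltas, clip_norm=1.0, noise_multiplier=1.0):
    """Noisy average of client model deltas via the Gaussian mechanism."""
    clipped = []
    for delta in client_deltas:
        # Scale down any update whose L2 norm exceeds the clipping bound,
        # limiting each client's influence (sensitivity) to clip_norm.
        scale = torch.clamp(clip_norm / (delta.norm() + 1e-12), max=1.0)
        clipped.append(delta * scale)
    total = torch.stack(clipped).sum(dim=0)
    # Noise std is proportional to the per-client sensitivity (clip_norm).
    noise = torch.randn_like(total) * noise_multiplier * clip_norm
    return (total + noise) / len(client_deltas)

# Hypothetical usage with 4 random flattened update vectors.
deltas = [torch.randn(100) for _ in range(4)]
noisy_mean = dp_aggregate(deltas)
```

A higher noise multiplier gives a stronger privacy guarantee at the cost of a noisier, slower-converging model.
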
We benchmark the speed of open-source FL frameworks and show that pfl-research is 7–72× faster than alternative frameworks on common cross-device setups. This speedup will significantly boost the productivity of the FL research community and enable testing hypotheses on realistic FL datasets that were previously too resource-intensive. We also release a suite of benchmarks that evaluates an algorithm’s overall performance across a diverse set of realistic scenarios.

