
Google AI Proposes FAX: A JAX-Based Python Library for Defining Scalable Distributed and Federated Computations in the Data Center


In recent research, a team from Google Research has introduced FAX, a software library built on top of JAX to improve the computations used in federated learning (FL). It has been designed to support large-scale distributed and federated computations across diverse settings, including data center and cross-device applications.

By leveraging JAX’s sharding features, FAX integrates smoothly with TPUs (Tensor Processing Units) and advanced JAX runtimes such as Pathways. It embeds the building blocks needed for federated computations directly as primitives inside JAX, which yields several important benefits.
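To make the sharding idea concrete, the snippet below is a minimal sketch in plain JAX (it does not use FAX’s actual API): a client-indexed array of model updates is placed on a one-dimensional device mesh so that each accelerator holds a slice of the clients. The sizes are hypothetical.

```python
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Hypothetical sizes: 8 clients, each contributing a 128-dimensional update.
num_clients, model_dim = 8, 128
client_updates = jnp.ones((num_clients, model_dim))

# Build a 1-D device mesh over whatever devices are present (CPU, GPU, or TPU).
mesh = Mesh(jax.devices(), axis_names=("clients",))

# Split the leading client axis across the mesh; each device owns a block of clients.
sharding = NamedSharding(mesh, P("clients"))
sharded_updates = jax.device_put(client_updates, sharding)

print(sharded_updates.sharding)
```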

The library provides scalability, straightforward just-in-time (JIT) compilation, and automatic differentiation (AD). In FL, clients collaborate on Machine Learning (ML) tasks without disclosing their private data, and federated computations frequently have many clients training models concurrently with periodic synchronization. While FL applications can run on on-device clients, high-performance data center software remains essential.

FAX addresses these issues by offering a framework for specifying scalable distributed and federated computations in the data center. Through JAX’s Primitive mechanism, it embeds a federated programming model into JAX, allowing FAX to make use of JIT compilation and sharding to XLA.
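As a rough illustration of how such a computation might look (this is a hand-written sketch in plain JAX, not FAX’s actual programming model), one round of federated averaging can be expressed as an ordinary function that broadcasts the server model to a leading client axis, applies a local update per client, and aggregates with a mean; jax.jit then compiles the whole round to XLA.

```python
import jax
import jax.numpy as jnp

def local_loss(params, x, y):
    # Simple least-squares loss for a linear model on one client's data.
    return jnp.mean((x @ params - y) ** 2)

def client_update(params, x, y, lr=0.1):
    # One local gradient step on a single client.
    grads = jax.grad(local_loss)(params, x, y)
    return params - lr * grads

@jax.jit
def federated_round(server_params, client_xs, client_ys):
    # "Broadcast": every client starts from the same server parameters.
    updated = jax.vmap(client_update, in_axes=(None, 0, 0))(
        server_params, client_xs, client_ys)
    # "Federated mean": aggregate the per-client models back at the server.
    return jnp.mean(updated, axis=0)

# Hypothetical toy data: 4 clients, 16 examples each, 8 features.
key = jax.random.PRNGKey(0)
params = jnp.zeros(8)
xs = jax.random.normal(key, (4, 16, 8))
ys = jax.random.normal(key, (4, 16))
params = federated_round(params, xs, ys)
```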

FAX can shard computations across models and clients, as well as within-client data, over logical and physical device meshes. It draws on advances in distributed data-center training such as Pathways and GSPMD. The team has shared that FAX can also provide federated automatic differentiation (federated AD) by supporting forward- and reverse-mode differentiation through JAX’s Primitive mechanism, which preserves data-location information during differentiation.
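The snippet below sketches the idea behind federated AD using only standard JAX transformations (again, an assumed illustration, not FAX’s own federated AD): the server loss is the mean of per-client losses, and both reverse-mode (jax.grad) and forward-mode (jax.jvp) differentiation pass through that aggregation back to the server parameters.

```python
import jax
import jax.numpy as jnp

def per_client_loss(params, x, y):
    # Loss of a linear model on one client's examples.
    return jnp.mean((x @ params - y) ** 2)

def federated_loss(params, client_xs, client_ys):
    # Evaluate each client's loss, then take the federated mean at the server.
    losses = jax.vmap(per_client_loss, in_axes=(None, 0, 0))(
        params, client_xs, client_ys)
    return jnp.mean(losses)

params = jnp.zeros(8)
xs = jnp.ones((4, 16, 8))   # 4 clients, 16 examples each, 8 features
ys = jnp.ones((4, 16))

# Reverse-mode AD: gradient of the aggregated loss w.r.t. server parameters.
server_grad = jax.grad(federated_loss)(params, xs, ys)

# Forward-mode AD: directional derivative along a unit tangent in parameter space.
_, loss_jvp = jax.jvp(lambda p: federated_loss(p, xs, ys),
                      (params,), (jnp.ones_like(params),))

print(server_grad.shape, loss_jvp)
```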

The team has summarized their primary contributions as follows. 

  1. FAX computations translate efficiently to XLA HLO (the XLA High-Level Optimizer format). XLA HLO is the intermediate representation used by the XLA domain-specific compiler, which prepares computational graphs for a range of hardware accelerators. By leveraging this, FAX can fully utilize accelerators such as TPUs, improving efficiency and performance (see the sketch after this list).
  2. A thorough implementation of federated automatic differentiation is included in FAX. This feature automates gradient computation across the federated learning setup, greatly simplifying the expression of federated computations, and it speeds up automatic differentiation, a crucial part of training ML models, especially for federated learning tasks.
  3. FAX computations are designed to interoperate with existing cross-device federated compute systems. Computations created with FAX, whether they involve data center servers or on-device clients, can be quickly deployed and executed in real-world federated learning settings.
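As a small illustration of the first point, standard JAX already exposes the lowering that a jitted function hands to the XLA compiler; a federated-style aggregation written in plain JAX (the function name below is made up for this example, and is not FAX’s API) can be inspected the same way.

```python
import jax
import jax.numpy as jnp

def federated_mean(client_values):
    # Stand-in for a federated aggregation: average over the client axis.
    return jnp.mean(client_values, axis=0)

# Lower the jitted computation and print the compiler IR text that XLA consumes.
lowered = jax.jit(federated_mean).lower(jnp.ones((4, 8)))
print(lowered.as_text()[:400])
```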

In conclusion, FAX is flexible and can be used for a variety of ML computations in the data center. Beyond FL, it can express a broad range of distributed and parallel algorithms, including FedAvg, FedOpt, branch-train-merge, DiLoCo, and PAPA.


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.




Tanya Malhotra is a final year undergrad from the University of Petroleum & Energy Studies, Dehradun, pursuing BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical thinking skills, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.




