University of Michigan AI Lab Publishes Breakthrough in Federated Learning
A team of researchers at the University of Michigan's AI Laboratory has published a groundbreaking paper in Nature Machine Intelligence detailing a new federated learning framework called "FedSparse" that dramatically reduces the communication overhead of distributed machine learning. The technique allows AI models to be trained across thousands of devices while transmitting 90% less data between nodes.
"Federated learning is critical for privacy-preserving AI, but the bandwidth requirements have been a major bottleneck for real-world deployment," said lead author Professor Wei-Lun Chao of U-M's Computer Science and Engineering department. "FedSparse uses a novel sparse gradient compression technique that makes federated training practical even on low-bandwidth connections."
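The paper's exact compression algorithm is not described in this article, but the general idea behind sparse gradient compression can be illustrated with a standard top-k sparsification sketch: each client transmits only the largest-magnitude fraction of its gradient entries, and the server reconstructs a sparse update. The function names and the 10% keep ratio below are illustrative assumptions, not FedSparse itself.

```python
import numpy as np

def topk_sparsify(grad, keep_ratio=0.1):
    """Keep only the largest-magnitude fraction of gradient entries.

    Returns (indices, values) -- the sparse payload a client would
    transmit instead of the full dense gradient.
    """
    flat = grad.ravel()
    k = max(1, int(flat.size * keep_ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def densify(indices, values, shape):
    """Server-side reconstruction of the sparse update."""
    flat = np.zeros(int(np.prod(shape)))
    flat[indices] = values
    return flat.reshape(shape)

# A client compresses its local gradient before upload;
# with keep_ratio=0.1 the payload is roughly 90% smaller.
grad = np.random.randn(1000, 100)
idx, vals = topk_sparsify(grad, keep_ratio=0.1)
restored = densify(idx, vals, grad.shape)
```

Production systems typically pair this with error feedback (accumulating the discarded residual locally for the next round) so that repeated sparsification does not bias training.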
The research has immediate implications for healthcare, autonomous vehicles, and mobile computing, where data privacy concerns often prevent centralized model training. Several Ann Arbor companies have already expressed interest in licensing the technology, including May Mobility for its fleet learning systems and Michigan Medicine for cross-hospital clinical AI models.
The paper, co-authored by graduate students Priya Ramanathan and James Kolinski, was developed at U-M's Bob and Betty Beyster Building on North Campus. The research was supported by a $2.4 million grant from the National Science Foundation and industry partnerships with Toyota Research Institute and Google. U-M's Office of Technology Transfer has filed a provisional patent on the core algorithms, and the team plans to release an open-source reference implementation by summer 2026.