Lucas Grativol Ribeiro, from the Mathematical and Electrical Engineering (MEE) department and the Lab-STICC laboratory, will present his research on:
"Neural Network Compression in the Context of Federated Learning and Edge Devices"
Driven by increasing concerns over data privacy, federated learning has emerged as a collaborative, decentralized machine learning framework.
By offloading model training to different participants and keeping data local, the framework allows for a more privacy-aware training routine. However, this trade-off imposes extra communication and computation costs on entities that want to train with such a method. This manuscript discusses the emerging challenges of the domain and points out possible solutions to increase its efficiency and reduce its hardware requirements. This is achieved by studying classic compression techniques, such as pruning, and by re-purposing low-rank adaptations to lower federated learning costs. Moreover, for cases where participants have limited communication capabilities, a co-design methodology for an embedded few-shot learning algorithm is proposed. The presented solution takes hardware limitations into account in a deployment pipeline for an FPGA platform, yielding a low-latency algorithm that can also be used to implement post-federated-learning models.
Organizer(s)
Thesis accredited by IMT Atlantique, in partnership with the doctoral school SPIN
Keywords: Federated Learning, Pruning, Low-Rank Adaptation, Few-Shot Learning, FPGA