Private Federated Learning with Autotuned Compression

Enayat Ullah, Christopher A. Choquette-Choo, Peter Kairouz, Sewoong Oh
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:34668-34708, 2023.

Abstract

We propose new techniques for reducing communication in private federated learning without the need for setting or tuning compression rates. Our on-the-fly methods automatically adjust the compression rate based on the error induced during training, while maintaining provable privacy guarantees through the use of secure aggregation and differential privacy. Our techniques are provably instance-optimal for mean estimation, meaning that they can adapt to the “hardness of the problem” with minimal interactivity. We demonstrate the effectiveness of our approach on real-world datasets by achieving favorable compression rates without the need for tuning.
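The abstract sketches the core idea: clients compress their updates, the error induced by that compression is tracked during training, and the compression rate for later rounds is tuned from this feedback, with privacy preserved via secure aggregation and differential privacy. As a rough illustration only, below is a minimal Python sketch of such an error-feedback loop for distributed mean estimation. Everything in it (the stochastic quantizer, the bit-width update rule, all function names) is our own assumption rather than the paper's algorithm, and it omits the privacy machinery entirely; it even uses the uncompressed mean as an error oracle, which a private server could not do, whereas the paper adapts with minimal interactivity under provable privacy guarantees.

```python
import numpy as np

def quantize(v, num_bits, rng):
    """Unbiased stochastic uniform quantization of v onto 2**num_bits levels."""
    lo, hi = v.min(), v.max()
    if hi == lo:                      # constant vector: nothing to quantize
        return v.copy()
    levels = 2 ** num_bits - 1
    scaled = (v - lo) / (hi - lo) * levels
    floor = np.floor(scaled)
    frac = scaled - floor
    # Round up with probability equal to the fractional part, so E[q] = scaled.
    q = floor + (rng.random(v.shape) < frac)
    return lo + q / levels * (hi - lo)

def autotuned_mean_rounds(client_vectors, num_rounds=10, init_bits=2,
                          target_rel_err=0.05, seed=0):
    """Toy loop: pick next round's bit-width from the error observed this round.

    NOTE: the feedback rule here is a hypothetical heuristic, not the paper's
    method, and the exact mean is used as an error oracle for illustration only.
    """
    rng = np.random.default_rng(seed)
    bits = init_bits
    for t in range(num_rounds):
        exact = np.mean(client_vectors, axis=0)
        compressed = np.mean([quantize(v, bits, rng) for v in client_vectors],
                             axis=0)
        rel_err = (np.linalg.norm(compressed - exact)
                   / (np.linalg.norm(exact) + 1e-12))
        print(f"round {t}: bits={bits}  rel_err={rel_err:.4f}")
        # Spend more bits when the error is too high; reclaim bandwidth
        # when there is slack.
        if rel_err > target_rel_err:
            bits = min(bits + 1, 16)
        elif rel_err < target_rel_err / 2:
            bits = max(bits - 1, 1)
    return bits

# Example usage on synthetic client data:
clients = [np.random.default_rng(i).normal(size=256) for i in range(50)]
autotuned_mean_rounds(clients)
```

The point of the sketch is the control loop, not the quantizer: no compression rate is fixed in advance, and easy instances (clients whose vectors agree closely) settle at low bit-widths while hard ones drive the rate up, mirroring the instance-adaptivity the abstract claims.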

Cite this Paper

BibTeX
@InProceedings{pmlr-v202-ullah23b,
  title     = {Private Federated Learning with Autotuned Compression},
  author    = {Ullah, Enayat and Choquette-Choo, Christopher A. and Kairouz, Peter and Oh, Sewoong},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {34668--34708},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/ullah23b/ullah23b.pdf},
  url       = {https://proceedings.mlr.press/v202/ullah23b.html},
  abstract  = {We propose new techniques for reducing communication in private federated learning without the need for setting or tuning compression rates. Our on-the-fly methods automatically adjust the compression rate based on the error induced during training, while maintaining provable privacy guarantees through the use of secure aggregation and differential privacy. Our techniques are provably instance-optimal for mean estimation, meaning that they can adapt to the “hardness of the problem” with minimal interactivity. We demonstrate the effectiveness of our approach on real-world datasets by achieving favorable compression rates without the need for tuning.}
}
Endnote
%0 Conference Paper
%T Private Federated Learning with Autotuned Compression
%A Enayat Ullah
%A Christopher A. Choquette-Choo
%A Peter Kairouz
%A Sewoong Oh
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-ullah23b
%I PMLR
%P 34668--34708
%U https://proceedings.mlr.press/v202/ullah23b.html
%V 202
%X We propose new techniques for reducing communication in private federated learning without the need for setting or tuning compression rates. Our on-the-fly methods automatically adjust the compression rate based on the error induced during training, while maintaining provable privacy guarantees through the use of secure aggregation and differential privacy. Our techniques are provably instance-optimal for mean estimation, meaning that they can adapt to the “hardness of the problem” with minimal interactivity. We demonstrate the effectiveness of our approach on real-world datasets by achieving favorable compression rates without the need for tuning.
APA
Ullah, E., Choquette-Choo, C. A., Kairouz, P., & Oh, S. (2023). Private Federated Learning with Autotuned Compression. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:34668-34708. Available from https://proceedings.mlr.press/v202/ullah23b.html.
