Paper 2023/597

FedVS: Straggler-Resilient and Privacy-Preserving Vertical Federated Learning for Split Models

Songze Li, Hong Kong University of Science and Technology
Duanyi Yao, Hong Kong University of Science and Technology
Jin Liu, Hong Kong University of Science and Technology
Abstract

In a vertical federated learning (VFL) system consisting of a central server and many distributed clients, the training data are vertically partitioned such that different features are privately stored on different clients. The problem of split VFL is to train a model split between the server and the clients. This paper aims to address two major challenges in split VFL: 1) performance degradation due to straggling clients during training; and 2) data and model privacy leakage from clients' uploaded data embeddings. We propose FedVS to simultaneously address these two challenges. The key idea of FedVS is to design secret sharing schemes for the local data and models, such that information-theoretic privacy against colluding clients and a curious server is guaranteed, and the aggregation of all clients' embeddings is reconstructed losslessly by decrypting computation shares from the non-straggling clients. Extensive experiments on various types of VFL datasets (including tabular, CV, and multi-view) demonstrate the universal advantages of FedVS in straggler mitigation and privacy protection over baseline protocols.
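
The lossless-reconstruction property invoked above, recovering an exact aggregate from the shares of any sufficiently large subset of parties while smaller coalitions learn nothing, is the defining feature of threshold secret sharing. Below is a minimal Python sketch of that principle using plain Shamir sharing over a prime field. It is an illustration only, not the FedVS protocol itself (which secret-shares both the local data and the local models); all names, values, and the field size are assumptions made for the example.

import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is over the field GF(P)

def share(secret, n, k):
    """Split `secret` into n Shamir shares with reconstruction threshold k."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # The share for party i is the random degree-(k-1) polynomial at x = i.
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the shared polynomial at x = 0 from k shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

# Three clients each secret-share a (scalar) embedding among 5 parties with
# threshold 3. Shares at the same evaluation point add homomorphically, so
# the per-point sums are shares of the aggregate, and any 3 of them suffice.
embeddings = [17, 42, 99]
n, k = 5, 3
all_shares = [share(e, n, k) for e in embeddings]
aggregated = [(x, sum(s[idx][1] for s in all_shares) % P)
              for idx, (x, _) in enumerate(all_shares[0])]
survivors = random.sample(aggregated, k)  # two stragglers drop out
assert reconstruct(survivors) == sum(embeddings) % P

Because the shares are additively homomorphic, the server can aggregate before decoding; any parties beyond the threshold that straggle are simply ignored, which is the straggler resilience the abstract describes.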

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Published elsewhere. ICML 2023
Keywords
Vertical Federated Learning, Straggler Mitigation, Privacy Protection
Contact author(s)
songzeli8824@gmail.com
dyao@connect.ust.hk
jliu577@connect.hkust-gz.edu.cn
History
2023-04-28: approved
2023-04-26: received
Short URL
https://ia.cr/2023/597
License
Creative Commons Attribution-NonCommercial-ShareAlike
CC BY-NC-SA

BibTeX

@misc{cryptoeprint:2023/597,
      author = {Songze Li and Duanyi Yao and Jin Liu},
      title = {{FedVS}: Straggler-Resilient and Privacy-Preserving Vertical Federated Learning for Split Models},
      howpublished = {Cryptology {ePrint} Archive, Paper 2023/597},
      year = {2023},
      url = {https://eprint.iacr.org/2023/597}
}