Towards privacy-preserving and fairness-aware federated learning framework
Conference paper, 2025


Abstract

Federated Learning (FL) enables the distributed training of a model across multiple data owners under the orchestration of a central server responsible for aggregating the models produced by the different clients. However, the original FL approach has significant shortcomings with respect to privacy and fairness requirements. Specifically, observing the model updates may lead to privacy issues such as membership inference attacks, while the use of imbalanced local datasets can introduce or amplify classification biases, especially against minority groups. In this work, we show that these biases can be exploited to increase the likelihood of privacy attacks against these groups. To do so, we propose a novel inference attack that exploits knowledge of group fairness metrics during the training of the global model. Then, to thwart this attack, we define a fairness-aware encrypted-domain aggregation algorithm that is differentially private by design thanks to the approximate precision loss of the threshold multi-key CKKS homomorphic encryption scheme. Finally, we demonstrate the performance of our proposal, in terms of both fairness and privacy, through experiments conducted on three real datasets.
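The abstract does not specify which group fairness metrics the attack relies on; as a purely illustrative sketch (not the authors' attack, nor their encrypted-domain aggregation), the Python snippet below computes one standard such metric, the statistical parity difference, from a model's binary predictions and a binary sensitive attribute. The function name, data layout, and example values are hypothetical.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Standard group fairness metric:
    P(y_hat = 1 | group = 1) - P(y_hat = 1 | group = 0).

    y_pred : binary predictions of the (global) model on some labelled data
    group  : binary sensitive attribute (e.g., minority vs. majority group)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_minority = y_pred[group == 1].mean()   # positive rate for group 1
    rate_majority = y_pred[group == 0].mean()   # positive rate for group 0
    return rate_minority - rate_majority

# Hypothetical usage: an observer who can evaluate the global model each FL
# round could track how this per-group gap evolves during training; the paper's
# attack exploits knowledge of such group fairness metrics.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([1, 1, 1, 0, 0, 0, 0, 1])
print(statistical_parity_difference(y_pred, group))
```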
Main file: popets-2025-0044.pdf (1.78 MB)
Origin: Publisher files authorized on an open archive

Dates and versions

hal-04782394, version 1 (14-11-2024)

Identifiers

Cite

Adda-Akram Bendoukha, Didem Demirag, Nesrine Kaaniche, Aymen Boudguiga, Renaud Sirdey, et al. Towards privacy-preserving and fairness-aware federated learning framework. Privacy Enhancing Technologies (PETs), Jul 2025, Washington, DC, United States. pp.845-865, ⟨10.56553/popets-2025-0044⟩. ⟨hal-04782394⟩