A key issue in current federated learning research is how to improve the performance of federated learning algorithms by reducing communication overhead and computing costs while ensuring data privacy. This paper proposed an efficient wireless transmission scheme, termed the subsampling privacy-enabled Rényi differential privacy (RDP) wireless transmission system (SS-RDP-WTS), which not only reduces the communication and computing overhead of the learning process but also enhances the privacy protection of federated learning. We proved our scheme's convergence, analyzed its privacy guarantee and demonstrated its performance on the Modified National Institute of Standards and Technology (MNIST) and Canadian Institute for Advanced Research 10-class (CIFAR10) datasets.
Citation: Qingjie Tan, Xujun Che, Shuhui Wu, Yaguan Qian, Yuanhong Tao. Privacy amplification for wireless federated learning with Rényi differential privacy and subsampling[J]. Electronic Research Archive, 2023, 31(11): 7021-7039. doi: 10.3934/era.2023356
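The scheme's core mechanism is client subsampling combined with Gaussian noise whose privacy cost is tracked under Rényi differential privacy. The snippet below is a minimal, hypothetical sketch of that sampled-Gaussian aggregation step, not the paper's SS-RDP-WTS implementation; the function name `private_round` and the parameters `sample_rate`, `clip_norm` and `noise_multiplier` are illustrative choices of our own.

```python
# Minimal sketch (assumed, not the paper's implementation): each round, a
# random subset of clients is selected, their clipped updates are summed,
# Gaussian noise calibrated to the clipping norm is added, and the server
# uses the noisy average. Subsampling amplifies the RDP guarantee of the
# Gaussian mechanism.
import numpy as np

rng = np.random.default_rng(0)

def clip(update, clip_norm):
    """Scale an update so its L2 norm is at most clip_norm (the sensitivity bound)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def private_round(client_updates, sample_rate, clip_norm, noise_multiplier):
    """One aggregation round with client subsampling and Gaussian noise.

    sample_rate      -- probability that each client participates this round
    clip_norm        -- L2 sensitivity bound C for a single client's update
    noise_multiplier -- sigma, so the noise standard deviation is sigma * C
    """
    mask = rng.random(len(client_updates)) < sample_rate   # Poisson subsampling
    selected = [clip(u, clip_norm) for u, m in zip(client_updates, mask) if m]
    if not selected:
        return np.zeros_like(client_updates[0])
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=client_updates[0].shape)
    # Noise is added to the sum before averaging, so individual client
    # updates are never released in the clear.
    return (np.sum(selected, axis=0) + noise) / len(selected)

# Toy usage: 100 clients with 10-dimensional model updates.
updates = [rng.normal(size=10) for _ in range(100)]
aggregate = private_round(updates, sample_rate=0.1, clip_norm=1.0,
                          noise_multiplier=1.2)
print(aggregate)
```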