- [1] P. Bukaty. The California Consumer Privacy Act (CCPA): An implementation guide. IT Governance Ltd, 2019.
- [2] European Parliament and Council of the European Union, (2022) “Regulation (EU) 2022/1925 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act)” Official Journal of the European Union L 265: 1–66.
- [3] R. Creemers, (2023) “Cybersecurity Law and regulation in China: Securing the smart state” China Law and Society Review 6(2): 111–145.
- [4] M. Ye, X. Fang, B. Du, P. C. Yuen, and D. Tao, (2023) “Heterogeneous federated learning: State-of-the-art and research challenges” ACM Computing Surveys 56(3): 1–44. DOI: 10.1145/3625558.
- [5] J. Wen, Z. Zhang, Y. Lan, Z. Cui, J. Cai, and W. Zhang, (2023) “A survey on federated learning: challenges and applications” International Journal of Machine Learning and Cybernetics 14(2): 513–535. DOI: 10.1007/s13042-022-01647-y.
- [6] J. So, C. He, C.-S. Yang, S. Li, Q. Yu, R. E. Ali, B. Guler, and S. Avestimehr, (2022) “LightSecAgg: a lightweight and versatile design for secure aggregation in federated learning” Proceedings of Machine Learning and Systems 4: 694–720. DOI: 10.48550/arXiv.2109.14236.
- [7] J. Kim, S. Kim, J. Choi, J. Park, D. Kim, and J. H. Ahn. “SHARP: A short-word hierarchical accelerator for robust and practical fully homomorphic encryption”. In: Proceedings of the 50th Annual International Symposium on Computer Architecture. 2023, 1–15. DOI: 10.1145/3579371.3589053.
- [8] S. Lee, G. Lee, J. W. Kim, J. Shin, and M.-K. Lee. “HETAL: Efficient privacy-preserving transfer learning with homomorphic encryption”. In: International Conference on Machine Learning. PMLR. 2023, 19010–19035. DOI: 10.48550/arXiv.2403.14111.
- [9] K. Zhao, J. Hu, H. Shao, and J. Hu, (2023) “Federated multi-source domain adversarial adaptation framework for machinery fault diagnosis with data privacy” Reliability Engineering & System Safety 236: 109246. DOI: 10.1016/j.ress.2023.109246.
- [10] R. Hu, Y. Guo, and Y. Gong, (2023) “Federated learning with sparsified model perturbation: Improving accuracy under client-level differential privacy” IEEE Transactions on Mobile Computing 23(8): 8242–8255. DOI: 10.1109/TMC.2023.3343288.
- [11] P. Paillier. “Public-key cryptosystems based on composite degree residuosity classes”. In: International Conference on the Theory and Applications of Cryptographic Techniques. Springer. 1999, 223–238. DOI: 10.1007/3-540-48910-X_16.
- [12] Y. Zhang, K. Tian, Y. Lu, F. Liu, C. Li, Z. Gong, Z. Hu, J. Li, and Q. Xu. “Reparable Threshold Paillier Encryption Scheme for Federated Learning”. DOI: 10.21203/rs.3.rs-3453596/v1.
- [13] S. Mohammadi, S. Sinaei, A. Balador, and F. Flammini. “Optimized Paillier Homomorphic Encryption in Federated Learning for Speech Emotion Recognition”. In: 2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC). IEEE. 2023, 1021–1022. DOI: 10.1109/COMPSAC57700.2023.00156.
- [14] C. Xu, Y. Wang, and J. Wang. “An Improvement Paillier Algorithm Applied to Federated Learning”. In: 2023 IEEE 29th International Conference on Parallel and Distributed Systems (ICPADS). IEEE. 2023, 1445–1451. DOI: 10.1109/ICPADS60453.2023.00205.
- [15] S. KR and J. Judith. “An Augmented Salp-Swarm Optimization Based on Paillier Federated Multi-Layer Perceptron (PF-MLP) and Homomorphic Encryption Standard (HES) Techniques for Data Security in Cloud Systems”. 2023. DOI: 10.21203/rs.3.rs-3123750/v1.
- [16] P. Yao, H. Wang, C. Zheng, J. Yang, and L. Wang. “Efficient federated learning aggregation protocol using approximate homomorphic encryption”. In: 2023 26th International Conference on Computer Supported Cooperative Work in Design (CSCWD). IEEE. 2023, 1884–1889. DOI: 10.1109/CSCWD57460.2023.10152829.
- [17] C. Zhang, S. Li, J. Xia, W. Wang, F. Yan, and Y. Liu. “BatchCrypt: Efficient homomorphic encryption for Cross-Silo federated learning”. In: 2020 USENIX annual technical conference (USENIX ATC 20). 2020, 493–506.
- [18] Y. Wang and F. Zhu, (2023) “Distributed dynamic event-triggered control for multi-agent systems with quantization communication” IEEE Transactions on Circuits and Systems II: Express Briefs 71(4): 2054–2058. DOI: 10.1109/TCSII.2023.3329875.
- [19] S. Horváth, D. Kovalev, K. Mishchenko, P. Richtárik, and S. Stich, (2023) “Stochastic distributed learning with gradient quantization and double-variance reduction” Optimization Methods and Software 38(1): 91–106. DOI: 10.1080/10556788.2022.2117355.
- [20] B. Wan, J. Zhao, and C. Wu, (2023) “Adaptive message quantization and parallelization for distributed full-graph GNN training” Proceedings of Machine Learning and Systems 5: 203–218. DOI: 10.48550/arXiv.2306.01381.
- [21] F. Seide, H. Fu, J. Droppo, G. Li, and D. Yu. “1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs”. In: Fifteenth Annual Conference of the International Speech Communication Association. 2014. DOI: 10.21437/interspeech.2014-274.
- [22] J. Duchi, E. Hazan, and Y. Singer, (2011) “Adaptive subgradient methods for online learning and stochastic optimization” Journal of Machine Learning Research 12(7): 2121–2159.
- [23] D. Alistarh, D. Grubic, J. Li, R. Tomioka, and M. Vojnovic. “QSGD: Communication-efficient SGD via gradient quantization and encoding”. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017, 1707–1718. DOI: 10.48550/arXiv.1610.02132.
- [24] W. Wen, C. Xu, F. Yan, C. Wu, Y. Wang, Y. Chen, and H. Li. “Terngrad: Ternary gradients to reduce communication in distributed deep learning”. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017, 1508–1518. DOI: 10.48550/arXiv.1705.07878.
- [25] A. F. Aji and K. Heafield, (2017) “Sparse communication for distributed gradient descent” arXiv preprint arXiv:1704.05021. DOI: 10.18653/v1/D17-1045.
- [26] N. Strom. “Scalable distributed DNN training using commodity GPU cloud computing”. In: Interspeech. 2015. DOI: 10.21437/Interspeech.2015-354.
- [27] N. Dryden, T. Moon, S. A. Jacobs, and B. Van Essen. “Communication quantization for data-parallel training of deep neural networks”. In: 2016 2nd Workshop on Machine Learning in HPC Environments (MLHPC). IEEE. 2016, 1–8. DOI: 10.1109/MLHPC.2016.004.
- [28] C.-Y. Chen, J. Choi, D. Brand, A. Agrawal, W. Zhang, and K. Gopalakrishnan. “AdaComp: Adaptive residual gradient compression for data-parallel distributed training”. In: Proceedings of the AAAI Conference on Artificial Intelligence. 2018, 2827–2835. DOI: 10.1609/aaai.v32i1.11728.
- [29] Y. Lin, S. Han, H. Mao, Y. Wang, and W. J. Dally, (2017) “Deep gradient compression: Reducing the communication bandwidth for distributed training” arXiv preprint arXiv:1712.01887.
- [30] J. Wangni, J. Wang, J. Liu, and T. Zhang. “Gradient sparsification for communication-efficient distributed optimization”. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 2018, 1306–1316. DOI: 10.48550/arXiv.1710.09854.
- [31] X. Chen, C. Liang, D. Huang, E. Real, K. Wang, H. Pham, X. Dong, T. Luong, C.-J. Hsieh, Y. Lu, and Q. V. Le. “Symbolic discovery of optimization algorithms”. In: Proceedings of the 37th International Conference on Neural Information Processing Systems. Article 2140. Curran Associates Inc., 2024, 49205–49233. DOI: 10.48550/arXiv.2302.06675.