Abstract
In recent years, with the rapid development of artificial intelligence (AI), people have paid increasing attention to data privacy and security, and countries around the world have issued a series of laws and regulations to protect user privacy. To address the data silos and the data privacy and security issues that restrict the development of AI, federated learning (FL) has emerged as a new paradigm of distributed machine learning. However, high communication overhead hinders the further development of FL. To this end, this paper proposes an efficient federated learning algorithm based on a selective communication policy. Specifically, exploiting the network structure of FL, the algorithm adopts a selective communication policy: on the client side, it measures the correlation between the local model and the global model via the maximum mean discrepancy (MMD) and filters out local models with low correlation; on the server side, it performs a weighted aggregation of the uploaded local models according to their correlations. Through these operations, the proposed algorithm effectively reduces communication overhead while ensuring rapid model convergence. Simulation results show that, compared with FedAvg and FedProx, the proposed algorithm reduces the number of communication rounds by about 54% and 60%, respectively, while maintaining accuracy.
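The client-side MMD filtering and server-side correlation-weighted aggregation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian kernel, the upload threshold, the inverse-distance weighting, and all function names (`mmd2`, `should_upload`, `aggregate`) are assumptions introduced here for illustration.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel between two parameter vectors (kernel choice is an assumption)
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    # Squared maximum mean discrepancy between two sets of vectors:
    # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    kxx = sum(gaussian_kernel(a, b, sigma) for a in X for b in X) / len(X) ** 2
    kyy = sum(gaussian_kernel(a, b, sigma) for a in Y for b in Y) / len(Y) ** 2
    kxy = sum(gaussian_kernel(a, b, sigma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2 * kxy

def should_upload(local_params, global_params, threshold=0.5, sigma=1.0):
    # Client-side selective communication: skip the upload when the local
    # model's MMD distance to the global model exceeds a threshold
    # (i.e., the local model has low correlation with the global model).
    return mmd2([local_params], [global_params], sigma) <= threshold

def aggregate(local_models, global_params, sigma=1.0, eps=1e-6):
    # Server-side: weight each uploaded model inversely to its MMD distance
    # from the current global model (one plausible correlation-based
    # weighting; the paper's exact scheme may differ), then average.
    weights = [1.0 / (mmd2([m], [global_params], sigma) + eps) for m in local_models]
    total = sum(weights)
    dim = len(global_params)
    return [sum(w * m[i] for w, m in zip(weights, local_models)) / total
            for i in range(dim)]
```

With this sketch, a client whose parameters drift far from the global model is filtered before transmission, which is where the claimed communication savings come from; the server then biases the new global model toward the uploads most consistent with the current one.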
Authors
LI Qun; CHEN Siguang (School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210003, China)
Source
Journal of Chinese Computer Systems
CSCD
Peking University Core Journal
2024, No. 3, pp. 549-554 (6 pages)
Funding
Supported by the National Natural Science Foundation of China (61971235)
Supported by the "333 High-Level Talent Training Project" of Jiangsu Province
Supported by the China Postdoctoral Science Foundation (First-Class General Program, 2018M630590)
Supported by the Jiangsu Postdoctoral Research Funding Program (2021K501C)
Supported by the "1311" Talent Plan of Nanjing University of Posts and Telecommunications
Keywords
federated learning
communication overhead
maximum mean discrepancy