Title

Complementary Knowledge Distillation for Robust and Privacy-Preserving Model Serving in Vertical Federated Learning

Authors
Gao, Dashan; Wan, Sheng; Fan, Lixin; et al.
DOI
Publication Date
2024-03-25
Conference Name
38th AAAI Conference on Artificial Intelligence, AAAI 2024
ISSN
2159-5399
EISSN
2374-3468
ISBN
9781577358879
Proceedings Title
Volume
38
Pages
19832-19839
Conference Dates
February 20, 2024 - February 27, 2024
Conference Location
Vancouver, BC, Canada
Proceedings Editor / Conference Organizer
Association for the Advancement of Artificial Intelligence
Place of Publication
2275 E BAYSHORE RD, STE 160, PALO ALTO, CA 94303 USA
Publisher
Abstract
Vertical Federated Learning (VFL) enables an active party with labeled data to enhance model performance (utility) by collaborating with multiple passive parties that possess auxiliary features corresponding to the same sample identifiers (IDs). Model serving in VFL is vital for real-world, delay-sensitive applications, and it faces two major challenges: 1) robustness against arbitrarily-aligned data and stragglers; and 2) privacy protection, ensuring minimal label leakage to passive parties. Existing methods fail to transfer knowledge among parties to improve robustness in a privacy-preserving way. In this paper, we introduce a privacy-preserving knowledge transfer framework, Complementary Knowledge Distillation (CKD), designed to enhance the robustness and privacy of multi-party VFL systems. Specifically, we formulate a Complementary Label Coding (CLC) objective to encode only complementary label information of the active party’s local model for passive parties to learn. Then, CKD selectively transfers the CLC-encoded complementary knowledge 1) from the passive parties to the active party, and 2) among the passive parties themselves. Experimental results on four real-world datasets demonstrate that CKD outperforms existing approaches in terms of robustness against arbitrarily-aligned data, while also minimizing label privacy leakage.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
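
As context for the abstract: frameworks of this kind build on soft-label knowledge distillation, in which one party's model learns from another party's softened predictions rather than from raw labels. Below is a minimal, hypothetical sketch of a temperature-scaled distillation loss; it is not the paper's CLC/CKD implementation, and the PyTorch framing, function name, and tensor shapes are assumptions added for illustration only.

# Hypothetical sketch (not the authors' code): a temperature-scaled
# knowledge distillation loss of the kind such frameworks build on.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student output distributions."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Multiply by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

# Example usage: a party's local model (student) distills soft predictions
# received from another party (teacher) for a batch of 8 samples, 10 classes.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
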
Institutional Authorship
First
Language
English
Related Links [Source Record]
Indexed By
Funding Projects
We thank the anonymous reviewers/SPC/AC for their constructive comments and suggestions. This work was supported by the National Natural Science Foundation of China (Grant No. 62250710682), the Guangdong Provincial Key Laboratory (Grant No. 2020B121201001), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386), and the Research Institute of Trustworthy Autonomous Systems (RITAS).
WOS Research Area
Computer Science
WOS Categories
Computer Science, Artificial Intelligence ; Computer Science, Theory & Methods
WOS Accession Number
WOS:001241509500004
EI Accession Number
20241515864523
EI Keywords
Artificial intelligence ; Delay-sensitive applications ; Knowledge management ; Learning systems ; Privacy-preserving techniques ; Sensitive data
EI Classification Codes
Telecommunication; Radar, Radio and Television:716 ; Telephone Systems and Related Technologies; Line Communications:718 ; Data Processing and Image Processing:723.2 ; Artificial Intelligence:723.4 ; Computer Applications:723.5 ; Control System Applications:731.2 ; Chemical Operations:802.3 ; Information Retrieval and Use:903.3
Source Database
EV Compendex
Citation Statistics
Document Type
Conference Paper
Item Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/794527
Collection
Southern University of Science and Technology
Author Affiliations
1.Southern University of Science and Technology, Shenzhen, China
2.Hong Kong University of Science and Technology, Hong Kong
3.Webank AI Lab, Shenzhen, China
First Author's Affiliation
Southern University of Science and Technology
First Author's First Affiliation
Southern University of Science and Technology
Recommended Citation
GB/T 7714
Gao, Dashan, Wan, Sheng, Fan, Lixin, et al. Complementary Knowledge Distillation for Robust and Privacy-Preserving Model Serving in Vertical Federated Learning[C]//Association for the Advancement of Artificial Intelligence. 2275 E BAYSHORE RD, STE 160, PALO ALTO, CA 94303 USA: Association for the Advancement of Artificial Intelligence, 2024: 19832-19839.
Files in This Item
There are no files associated with this item.
