Title | Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models |
Authors | |
Corresponding Author | Feng Zheng |
Co-first Authors | Dong Lu; Zhiqiang Wang |
DOI | |
Publication Date | 2023-10-02 |
Conference Name | ICCV 2023 (IEEE/CVF International Conference on Computer Vision) |
ISSN | 1550-5499 |
ISBN | 979-8-3503-0719-1 |
Proceedings Title | |
Pages | 102-111 |
Conference Dates | 2023-10-02 to 2023-10-06 |
Conference Location | Paris, France |
Place of Publication | 10662 Los Vaqueros Circle, PO Box 3014, Los Alamitos, CA 90720-1264, USA |
Publisher | IEEE Computer Society |
Abstract | Vision-language pre-training (VLP) models have shown vulnerability to adversarial examples in multimodal tasks. Furthermore, malicious adversarial examples can be deliberately transferred to attack other black-box models. However, existing work has mainly focused on investigating white-box attacks. In this paper, we present the first study to investigate the adversarial transferability of recent VLP models. We observe that existing methods exhibit much lower transferability compared to their strong attack performance in white-box settings. The degradation in transferability is partly caused by the under-utilization of cross-modal interactions. In particular, unlike unimodal learning, VLP models rely heavily on cross-modal interactions, and the multimodal alignments are many-to-many, e.g., an image can be described by many different natural-language captions. To this end, we propose a highly transferable Set-level Guidance Attack (SGA) that thoroughly leverages modality interactions and incorporates alignment-preserving augmentation with cross-modal guidance. Experimental results demonstrate that SGA can generate adversarial examples that transfer strongly across different VLP models on multiple downstream vision-language tasks. On image-text retrieval, SGA significantly improves the attack success rate of transfer attacks from ALBEF to TCL compared to the state-of-the-art (by at least 9.78% and up to 30.21%). Our code is available at https://github.com/Zoky-2020/SGA. |
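For readers who want a concrete picture of what "set-level guidance" means on the image side, the snippet below is a minimal, hedged sketch, not the authors' released implementation (see the GitHub link above for that). It assumes a CLIP-style model exposing `encode_image`/`encode_text`; the function name `sga_image_attack` and the hyperparameters `eps`, `alpha`, `steps` are illustrative choices only.

```python
import torch
import torch.nn.functional as F

def sga_image_attack(model, image, caption_set, eps=8/255, alpha=2/255, steps=10):
    """PGD-style image attack guided by a *set* of paired captions (illustrative sketch)."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        img_feat = F.normalize(model.encode_image(adv), dim=-1)          # assumed CLIP-style API
        txt_feat = F.normalize(model.encode_text(caption_set), dim=-1)   # assumed CLIP-style API
        # Set-level guidance: reduce alignment with *every* paired caption, not just one.
        sim = (img_feat @ txt_feat.t()).mean()
        grad = torch.autograd.grad(sim, adv)[0]
        adv = adv.detach() - alpha * grad.sign()            # step away from the caption set
        adv = image + torch.clamp(adv - image, -eps, eps)   # project back into the L_inf ball
        adv = adv.clamp(0, 1)                               # keep a valid image
    return adv.detach()
```

In the full method described in the abstract, the guidance set is built by alignment-preserving augmentation (multiple matched captions and scaled image copies) and the perturbation is optimized across both modalities with cross-modal guidance; this sketch only illustrates the image-side, set-level guidance signal.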
Keywords | |
University Authorship | First; Co-first; Corresponding |
Language | English |
Related Links | [IEEE Record] |
Indexed By | |
Funding | National Key R&D Program of China [2022YFF1202903]; National Natural Science Foundation of China [62122035] |
WOS Research Areas | Computer Science; Imaging Science & Photographic Technology |
WOS Categories | Computer Science, Artificial Intelligence; Computer Science, Theory & Methods; Imaging Science & Photographic Technology |
WOS Accession Number | WOS:001159644300010 |
Source | Manually submitted |
Full-text Link | https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10376813 |
Citation Statistics | Times Cited [WOS]: 1 |
Publication Type | Conference Paper |
Item Identifier | http://sustech.caswiz.com/handle/2SGJ60CL/646947 |
Collection | Southern University of Science and Technology, College of Engineering, Department of Computer Science and Engineering |
Author Affiliations | 1. Southern University of Science and Technology; 2. The University of Hong Kong; 3. Monash University; 4. Temple University; 5. Peng Cheng Laboratory |
First Author's Affiliation | Southern University of Science and Technology |
Corresponding Author's Affiliation | Southern University of Science and Technology |
First Author's First Affiliation | Southern University of Science and Technology |
Recommended Citation (GB/T 7714) | Dong Lu, Zhiqiang Wang, Teng Wang, et al. Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models[C]. Los Alamitos, CA, USA: IEEE Computer Society, 2023: 102-111. |
Files in This Item |
File Name/Size | Document Type | Version | Access | License |
Set-Level Guidance A(946KB) | -- | -- | Restricted access | -- |
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.