Title | Knowledge-Aware Prompt Tuning for Generalizable Vision-Language Models |
Authors | |
Corresponding Authors | Wenpeng Lu; Feng Zheng |
Co-first Author | Teng Wang |
DOI | |
Publication Date | 2023-10-02 |
Conference Name | ICCV 2023 (IEEE/CVF International Conference on Computer Vision) |
ISSN | 1550-5499 |
ISBN | 979-8-3503-0719-1 |
Proceedings Title | |
Pages | 15624-15634 |
Conference Dates | 2023-10-02 to 2023-10-06 |
Conference Location | Paris, France |
Place of Publication | 10662 Los Vaqueros Circle, PO Box 3014, Los Alamitos, CA 90720-1264, USA |
Publisher | |
Abstract | Pre-trained vision-language models, e.g., CLIP, working with manually designed prompts have demonstrated a great capacity for transfer learning. Recently, learnable prompts have achieved state-of-the-art performance; however, they are prone to overfitting to seen classes and fail to generalize to unseen classes. In this paper, we propose a Knowledge-Aware Prompt Tuning (KAPT) framework for vision-language models. Our approach takes inspiration from human intelligence, in which external knowledge is usually incorporated when recognizing novel categories of objects. Specifically, we design two complementary types of knowledge-aware prompts for the text encoder to leverage the distinctive characteristics of category-related external knowledge. The discrete prompt extracts the key information from descriptions of an object category, and the learned continuous prompt captures overall contexts. We further design an adaptation head for the visual encoder to aggregate salient attentive visual cues, which establishes discriminative and task-aware visual representations. We conduct extensive experiments on 11 widely used benchmark datasets, and the results verify the effectiveness of our approach in few-shot image classification, especially in generalizing to unseen categories. Compared with the state-of-the-art CoCoOp method, KAPT exhibits favorable performance and achieves an absolute gain of 3.22% on new classes and 2.57% in terms of harmonic mean. |
Keywords | |
University Authorship | Co-first; Corresponding |
Language | English |
Related Links | [IEEE Record] |
Indexed By | |
Funding Project | National Key R&D Program of China [2022YFF1202903] |
WOS Research Areas | Computer Science; Imaging Science & Photographic Technology |
WOS Categories | Computer Science, Artificial Intelligence; Computer Science, Theory & Methods; Imaging Science & Photographic Technology |
WOS Accession Number | WOS:001169499008011 |
Source | Manually submitted |
Full-text Link | https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10376778 |
Citation Statistics | Times Cited [WOS]: 4 |
Document Type | Conference Paper |
Item Identifier | http://sustech.caswiz.com/handle/2SGJ60CL/646945 |
Collection | Southern University of Science and Technology, College of Engineering, Department of Computer Science and Engineering |
Author Affiliations | 1. Qilu University of Technology (Shandong Academy of Sciences); 2. Southern University of Science and Technology; 3. The University of Hong Kong; 4. United Imaging Healthcare; 5. Monash University |
Corresponding Author Affiliation | Southern University of Science and Technology |
Recommended Citation (GB/T 7714) | Baoshuo Kan, Teng Wang, Wenpeng Lu, et al. Knowledge-Aware Prompt Tuning for Generalizable Vision-Language Models[C]. Los Alamitos, CA, USA: IEEE Computer Society, 2023: 15624-15634. |
Files in This Item |
File Name/Size | Document Type | Version | Access | License |
Knowledge-Aware Prom(3725KB) | -- | -- | Restricted Access | -- |
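Illustrative note on the prompt construction described in the abstract: the sketch below shows, under stated assumptions, how a discrete prompt (token embeddings taken from an external category description) and a learnable continuous prompt could be concatenated with the class-name tokens before a frozen CLIP-style text encoder. This is not the authors' released code; the module name KnowledgeAwarePrompt, the dimensions, and the stand-in token embedding layer are all hypothetical.

# Minimal sketch (hypothetical, not the authors' implementation) of combining a
# discrete knowledge prompt with learnable continuous context vectors for a
# CLIP-style text encoder. Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class KnowledgeAwarePrompt(nn.Module):
    def __init__(self, vocab_size=49408, embed_dim=512, n_ctx=4):
        super().__init__()
        # Stand-in for the frozen text encoder's token embedding table.
        self.token_embedding = nn.Embedding(vocab_size, embed_dim)
        # Learnable continuous prompt (trained on the few-shot task).
        self.ctx = nn.Parameter(torch.randn(n_ctx, embed_dim) * 0.02)

    def forward(self, knowledge_token_ids, class_token_ids):
        # Discrete prompt: embeddings of key tokens extracted from an external
        # description of the category (e.g., an encyclopedia sentence).
        knowledge = self.token_embedding(knowledge_token_ids)   # [k, d]
        class_name = self.token_embedding(class_token_ids)      # [c, d]
        # Concatenate [continuous context | discrete knowledge | class name]
        # to form the sequence that would be fed to the frozen text encoder.
        return torch.cat([self.ctx, knowledge, class_name], dim=0)  # [n_ctx+k+c, d]

prompt = KnowledgeAwarePrompt()
embeds = prompt(torch.tensor([11, 42, 7]), torch.tensor([99]))
print(embeds.shape)  # torch.Size([8, 512])

Per the abstract, text-side prompts built along these lines are paired with an adaptation head on the visual encoder that aggregates salient attentive cues; as is typical in prompt tuning, only such lightweight parameters would be trained while the pre-trained backbone stays frozen.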