Title | Prompting for Multi-Modal Tracking |
Authors | Jinyu Yang; Zhe Li; Feng Zheng; et al. |
Corresponding Author | Feng Zheng |
DOI | |
Publication Date | 2022-08-01 |
Conference Name | The 30th ACM International Conference on Multimedia |
Conference Dates | 2022-10-10 to 2022-10-14 |
Conference Location | Lisbon, Portugal |
Abstract | Multi-modal tracking has gained attention because it is more accurate and robust in complex scenarios than traditional RGB-based tracking. Its key challenge lies in how to fuse multi-modal data and reduce the gap between modalities. However, multi-modal tracking still suffers severely from data deficiency, which results in insufficient learning of fusion modules. Instead of building such a fusion module, in this paper we provide a new perspective on multi-modal tracking by attaching importance to multi-modal visual prompts. We design a novel multi-modal prompt tracker (ProTrack), which can transfer multi-modal inputs to a single modality through the prompt paradigm. By fully exploiting the tracking ability of pre-trained RGB trackers learned at scale, ProTrack achieves high-performance multi-modal tracking by only altering the inputs, even without any extra training on multi-modal data. Extensive experiments on 5 benchmark datasets demonstrate the effectiveness of the proposed ProTrack. |
University Attribution | First author; Corresponding author |
Language | English |
Source Repository | Manual submission |
Publication Status | Published online |
Citation Statistics | Times Cited [WOS]: 24 |
Output Type | Conference paper |
Item Identifier | http://sustech.caswiz.com/handle/2SGJ60CL/415620 |
Collection | Southern University of Science and Technology, College of Engineering, Department of Computer Science and Engineering |
Author Affiliations | 1. Southern University of Science and Technology, China; 2. University of Birmingham, UK; 3. University of Electronic Science and Technology of China |
First Author Affiliation | Southern University of Science and Technology |
Corresponding Author Affiliation | Southern University of Science and Technology |
First Author's First Affiliation | Southern University of Science and Technology |
Recommended Citation (GB/T 7714) | Jinyu Yang, Zhe Li, Feng Zheng, et al. Prompting for Multi-Modal Tracking[C], 2022. |
Files in This Item | No files associated with this item. |
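
The abstract describes mapping multi-modal inputs into a single modality so that a frozen, pre-trained RGB tracker can consume them unchanged. A minimal sketch of that idea follows, assuming a simple convex combination of an RGB frame with a colorized auxiliary modality; the blending weight `alpha`, the `colorize` helper, and the `rgb_tracker` call are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def colorize(aux: np.ndarray) -> np.ndarray:
    # Hypothetical helper: replicate a single-channel auxiliary modality
    # (e.g., thermal or depth) across three channels so it lives in RGB space.
    return np.repeat(aux[..., np.newaxis], 3, axis=-1)

def multimodal_prompt(rgb: np.ndarray, aux: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    # Convex combination of the two modalities into one pseudo-RGB "prompt"
    # frame; alpha is an assumed blending weight, not a value from the paper.
    blended = alpha * rgb.astype(np.float32) + (1.0 - alpha) * colorize(aux).astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Toy frames standing in for a spatially aligned RGB-thermal pair.
rgb_frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
thermal_frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

prompted = multimodal_prompt(rgb_frame, thermal_frame)
# rgb_tracker.track(prompted)  # hypothetical frozen, pre-trained RGB tracker
```

Because only the input frame changes, the tracker's weights stay untouched, which is consistent with the abstract's claim that no extra training on multi-modal data is required.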