Title | VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix |
Authors | Teng Wang; Wenhao Jiang; Zhichao Lu; et al. |
Corresponding Author | Feng Zheng |
DOI | |
Publication Date | 2022-06-17 |
Conference | International Conference on Machine Learning |
Conference Dates | 2022-07-17 to 2022-07-23 |
Conference Venue | Baltimore Convention Center |
Abstract | Existing vision-language pre-training (VLP) methods primarily rely on paired image-text datasets, which are either annotated with enormous human labor or crawled from the internet and then subjected to elaborate data cleaning. To reduce the dependency on well-aligned image-text pairs, it is promising to directly leverage large-scale text-only and image-only corpora. This paper proposes a data augmentation method, namely cross-modal CutMix (CMC), for implicit cross-modal alignment learning in unpaired VLP. Specifically, CMC transforms natural sentences from the textual view into a multi-modal view, where visually-grounded words in a sentence are randomly replaced by diverse image patches with similar semantics (an illustrative sketch of this replacement step follows this record). The proposed CMC has several appealing properties. First, it enhances data diversity while keeping the semantic meaning intact, which helps when aligned data are scarce. Second, by attaching cross-modal noise to uni-modal data, it guides models to learn token-level interactions across modalities for better denoising. Furthermore, we present a new unpaired VLP method, dubbed VLMixer, that integrates CMC with contrastive learning to pull together the uni-modal and multi-modal views for better instance-level alignment across modalities. Extensive experiments on five downstream tasks show that VLMixer surpasses previous state-of-the-art unpaired VLP methods. |
University Authorship | First; Corresponding |
Source | Manual submission |
Citation Statistics | Citations [WOS]: 0 |
Output Type | Conference paper |
Identifier | http://sustech.caswiz.com/handle/2SGJ60CL/534763 |
Collection | College of Engineering_Department of Computer Science and Engineering |
Author Affiliations | 1. Department of Computer Science and Engineering, Southern University of Science and Technology; 2. Department of Computer Science, The University of Hong Kong; 3. Data Platform, Tencent |
First Author's Affiliation | Department of Computer Science and Engineering |
Corresponding Author's Affiliation | Department of Computer Science and Engineering |
First Author's First Affiliation | Department of Computer Science and Engineering |
Recommended Citation (GB/T 7714) | Teng Wang, Wenhao Jiang, Zhichao Lu, et al. VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix[C], 2022. |
Files in This Item |
File Name/Size | Document Type | Version | Access | License |
ICML2022_VLMixer Unp(517KB) | -- | -- | Restricted access | -- |
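Illustrative sketch | The abstract describes CMC as a token-level replacement: visually-grounded words in a sentence are swapped for image patches with similar semantics, yielding a multi-modal view of the text. The minimal Python sketch below illustrates only that replacement step; the patch gallery contents and the names cmc_mix and replace_prob are hypothetical illustrations, not code or interfaces from the paper.

# Hypothetical sketch of the cross-modal CutMix (CMC) replacement step
# described in the abstract. The gallery, probabilities, and token format
# are illustrative assumptions, not the authors' implementation.
import random

# Assumed patch gallery: word tag -> candidate image-patch identifiers
# (in the paper, patches with such tags would come from image-only data).
patch_gallery = {
    "dog": [("patch", "dog_0"), ("patch", "dog_1")],
    "ball": [("patch", "ball_0")],
}

def cmc_mix(tokens, gallery, replace_prob=0.5, seed=None):
    """Return a multi-modal token sequence: each visually-grounded word
    (one with entries in the gallery) is swapped for a semantically
    similar image patch with probability replace_prob."""
    rng = random.Random(seed)
    mixed = []
    for tok in tokens:
        candidates = gallery.get(tok)
        if candidates and rng.random() < replace_prob:
            mixed.append(rng.choice(candidates))   # cross-modal replacement
        else:
            mixed.append(("word", tok))            # keep the textual view
    return mixed

print(cmc_mix("a dog chases a ball".split(), patch_gallery, seed=0))

Per the abstract, VLMixer would then treat the original sentence and the mixed sequence as two views of the same instance, pulling them together with a contrastive objective for instance-level alignment. |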
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.