Title

VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix

Authors
Teng Wang ; Wenhao Jiang ; Zhichao Lu ; et al.
Corresponding Author: Feng Zheng
Publication Date
2022
Conference Name
39th International Conference on Machine Learning (ICML)
ISSN
2640-3498
Proceedings Title
Conference Dates
JUL 17-23, 2022
Conference Venue
Baltimore, MD, United States
Place of Publication
1269 Law St, San Diego, CA, United States
Publisher
JMLR - Journal of Machine Learning Research
Abstract
Existing vision-language pre-training (VLP) methods primarily rely on paired image-text datasets, which are either annotated with enormous human labor or crawled from the internet and then subjected to elaborate data cleaning. To reduce the dependency on well-aligned image-text pairs, it is promising to directly leverage large-scale text-only and image-only corpora. This paper proposes a data augmentation method, namely cross-modal CutMix (CMC), for implicit cross-modal alignment learning in unpaired VLP. Specifically, CMC transforms natural sentences from the textual view into a multi-modal view, where visually-grounded words in a sentence are randomly replaced by diverse image patches with similar semantics. The proposed CMC has several appealing properties. First, it enhances data diversity while keeping the semantic meaning intact, which helps in settings where aligned data are scarce. Second, by attaching cross-modal noise to uni-modal data, it guides models to learn token-level interactions across modalities for better denoising. Furthermore, we present a new unpaired VLP method, dubbed VLMixer, that integrates CMC with contrastive learning to pull together the uni-modal and multi-modal views for better instance-level alignment among different modalities. Extensive experiments on five downstream tasks show that VLMixer surpasses previous state-of-the-art unpaired VLP methods. Project page: https://github.com/ttengwang/VLMixer
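The CMC augmentation described in the abstract can be illustrated with a minimal token-replacement sketch. Everything below is a toy assumption for illustration — the function name, the `patch_bank` structure, and the fixed replacement probability are not from the paper, whose actual method retrieves image-patch embeddings by semantic similarity and feeds them to a transformer:

```python
import random

def cross_modal_cutmix(tokens, patch_bank, replace_prob=0.5, rng=None):
    """Toy sketch of cross-modal CutMix (CMC).

    Visually-grounded words (those present in `patch_bank`) are randomly
    swapped for an image patch with similar semantics, turning a purely
    textual sequence into a mixed text/patch sequence. Here patches are
    represented by identifiers; in the paper they are patch embeddings.
    """
    rng = rng or random.Random()
    mixed = []
    for tok in tokens:
        if tok in patch_bank and rng.random() < replace_prob:
            # Replace the textual token with a randomly sampled visual patch.
            mixed.append(("patch", rng.choice(patch_bank[tok])))
        else:
            # Keep the token in its textual form.
            mixed.append(("word", tok))
    return mixed

# Example: "a dog on grass", where "dog" and "grass" are visually grounded.
bank = {"dog": ["patch_dog_1", "patch_dog_2"], "grass": ["patch_grass_1"]}
mixed_view = cross_modal_cutmix(["a", "dog", "on", "grass"], bank,
                                replace_prob=1.0, rng=random.Random(0))
```

The resulting multi-modal view keeps the sentence's meaning while injecting cross-modal noise, which is what lets the model learn token-level alignments during denoising.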
University Attribution
First ; Corresponding
Language
English
Related Links [Source Record]
Indexing Category
Funding Projects
National Natural Science Foundation of China (61972188, 62122035, 61906081, 62106097) ; China Postdoctoral Science Foundation (2021M691424, 27208720, 17212120)
WOS Research Area
Computer Science
WOS Category
Computer Science, Artificial Intelligence
WOS Accession Number
WOS:000900130203039
Source Database
Web of Science
Citation Statistics
Times Cited [WOS]: 0
Output Type: Conference Paper
Item Identifier: http://sustech.caswiz.com/handle/2SGJ60CL/415616
Collection: College of Engineering, Department of Computer Science and Engineering
Author Affiliations
1.Department of Computer Science and Engineering, Southern University of Science and Technology
2.Department of Computer Science, The University of Hong Kong
3.Data Platform, Tencent
First Author's Affiliation: Department of Computer Science and Engineering
Corresponding Author's Affiliation: Department of Computer Science and Engineering
First Author's First Affiliation: Department of Computer Science and Engineering
Recommended Citation
GB/T 7714
Teng Wang, Wenhao Jiang, Zhichao Lu, et al. VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix[C]. San Diego, CA, United States: JMLR - Journal of Machine Learning Research, 2022.
Files in This Item
No files are associated with this item.

Unless otherwise specified, all content in this system is protected by copyright, with all rights reserved.