Title

Learning Target-Aware Vision Transformers for Real-Time UAV Tracking

Authors
Shuiwang Li; Xiangyang Yang; Xucheng Wang; Dan Zeng; Hengzhou Ye; Qijun Zhao
Publication Date
2024
DOI
Journal
IEEE Transactions on Geoscience and Remote Sensing
ISSN
1558-0644
Volume
62
Abstract
In recent years, the field of unmanned aerial vehicle (UAV) tracking has grown rapidly, finding numerous applications across various industries. While discriminative correlation filter (DCF)-based trackers remain the most efficient and widely used in UAV tracking, lightweight convolutional neural network (CNN)-based trackers using filter pruning have recently also demonstrated impressive efficiency and precision. However, the performance of these lightweight CNN-based trackers is still far from satisfactory. In generic visual tracking, emerging vision transformer (ViT)-based trackers have shown great success by using cross-attention instead of the correlation operation, enabling more effective capture of the relationships between the target and the search image. However, to the best of the authors' knowledge, the UAV tracking community has not yet thoroughly explored the potential of ViTs for more effective and efficient template-search coupling in UAV tracking. In this article, we propose an efficient ViT-based tracking framework for real-time UAV tracking. Our framework integrates feature learning and template-search coupling into an efficient one-stream ViT, avoiding an extra heavyweight relation-modeling module. However, we observe that this design tends to weaken the target information through the transformer blocks, because background tokens significantly outnumber target tokens. To address this problem, we propose to maximize the mutual information (MI) between the template image and its feature representation produced by the ViT. The proposed method is dubbed TATrack. In addition, to further enhance efficiency, we introduce a novel MI maximization-based knowledge distillation, which strikes a better trade-off between accuracy and efficiency. Exhaustive experiments on five benchmarks show that the proposed tracker achieves state-of-the-art performance in UAV tracking. Code is released at: https://github.com/xyyang317/TATrack.
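MI between a template and its learned representation is intractable to compute exactly, so objectives like the one in the abstract are typically optimized through a tractable lower bound. As an illustration only (the paper's actual estimator is not specified in this record, and all names below are hypothetical), the following sketches an InfoNCE-style lower bound over a batch of paired template/feature embeddings, where matching rows are positive pairs and all other rows serve as negatives:

```python
# Hedged sketch: an InfoNCE-style lower bound on mutual information,
# a common practical surrogate for MI maximization. This is NOT the
# authors' implementation; names and hyperparameters are illustrative.
import numpy as np

def infonce_mi_lower_bound(z_template, z_feature, temperature=0.1):
    """Estimate a lower bound on MI between paired embeddings.

    z_template, z_feature: (batch, dim) arrays; row i of each array
    forms a positive pair, all other rows act as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    zt = z_template / np.linalg.norm(z_template, axis=1, keepdims=True)
    zf = z_feature / np.linalg.norm(z_feature, axis=1, keepdims=True)
    logits = zt @ zf.T / temperature  # (batch, batch) similarity matrix
    # Row-wise log-softmax; diagonal entries are the positive pairs
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # InfoNCE bound: log(batch) + mean log-probability of positives
    batch = z_template.shape[0]
    return np.log(batch) + np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Perfectly correlated pairs push the bound toward its cap, log(batch)
print(infonce_mi_lower_bound(z, z))
```

In a training loop, the negative of this bound would be added as an auxiliary loss so the ViT is encouraged to preserve target (template) information against the dominant background tokens.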
Related Links
[IEEE Record]
Indexed By
SCI; EI
SUSTech Authorship
Other
ESI Research Field
GEOSCIENCES
Citation Statistics
Cited Times [WOS]: 1
Document Type
Journal Article
Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/783819
Collection
College of Engineering_Department of Computer Science and Engineering
Affiliations
1.Guangxi Key Laboratory of Embedded Technology and Intelligent System, Guilin University of Technology, Guilin, China
2.Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
3.College of Computer Science, Sichuan University, Chengdu, China
Recommended Citation
GB/T 7714
Shuiwang Li, Xiangyang Yang, Xucheng Wang, et al. Learning Target-Aware Vision Transformers for Real-Time UAV Tracking[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62.
APA
Shuiwang Li, Xiangyang Yang, Xucheng Wang, Dan Zeng, Hengzhou Ye, & Qijun Zhao. (2024). Learning Target-Aware Vision Transformers for Real-Time UAV Tracking. IEEE Transactions on Geoscience and Remote Sensing, 62.
MLA
Shuiwang Li, et al. "Learning Target-Aware Vision Transformers for Real-Time UAV Tracking". IEEE Transactions on Geoscience and Remote Sensing 62 (2024).
Files in This Item
There are no files associated with this item.

Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.