Title

Unsupervised Action Recognition using Spatiotemporal, Adaptive, and Attention-guided Refining-Network

Authors
Yin, Xinpeng; Zhang, Cheng; Huang, ZiXu; He, Zhihai; Cao, Wenming
Date Issued
2024
DOI
Journal
IEEE Transactions on Artificial Intelligence
ISSN
2691-4581
EISSN
2691-4581
Volume: PP; Issue: 99; Pages: 1-12
Abstract
Previous work on unsupervised skeleton-based action recognition has focused primarily on strategies for utilizing features to drive model optimization, such as contrastive learning and reconstruction. However, designing such application-level strategies is challenging. This paper shifts the focus to generation-level modeling and introduces the Spatiotemporal Adaptive Attention-guided Refining Network (AgRNet). AgRNet reduces cost and improves efficiency by constructing the Adaptive Activity-Guided Attention (AAGA) and Adaptive Dominant-Guided Attenuation (ADGA) modules. AAGA leverages the sparsity of the correlation matrix in the attention mechanism to adaptively filter and retain the active components of the sequence during modeling. ADGA embeds the local dominant features of the sequence, obtained through convolutional distillation, into the globally dominant features under the attention mechanism, guided by a defined attenuation factor. Additionally, a Progressive Feature Modeling (PFM) module is introduced to recover the progressive features of motion sequences that AAGA and ADGA overlook. AgRNet demonstrates its efficiency on three public datasets: NTU-RGBD 60, NTU-RGBD 120, and UWA3D.
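The abstract gives enough detail to illustrate the core idea behind AAGA: score how "active" each token of a skeleton sequence is from the attention correlation matrix, then retain only the active components. Below is a minimal PyTorch sketch of that reading; the function name, the keep_ratio parameter, and the column-sum activity score are all assumptions made for illustration, not the authors' released implementation.

```python
# Illustrative sketch only: one plausible reading of the abstract's
# "activity-guided attention filtering". All names and design details
# here are assumptions, not the paper's actual method.
import torch
import torch.nn.functional as F

def activity_guided_attention(x, w_q, w_k, w_v, keep_ratio=0.5):
    """x: (batch, tokens, dim) skeleton-sequence features.
    Computes standard scaled dot-product attention, then keeps only the
    most 'active' tokens, scored by the attention mass they receive."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    attn = F.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
    out = attn @ v                               # (batch, tokens, dim)

    # Activity score (assumed): column sums of the attention matrix, so
    # tokens that many other tokens attend to count as "active".
    activity = attn.sum(dim=-2)                  # (batch, tokens)
    k_keep = max(1, int(keep_ratio * x.shape[1]))
    idx = activity.topk(k_keep, dim=-1).indices  # indices of active tokens

    # Retain only the active components for downstream modeling.
    idx_exp = idx.unsqueeze(-1).expand(-1, -1, out.shape[-1])
    return out.gather(1, idx_exp), idx

# Toy usage: 2 sequences, 25 joint tokens, 64-dim features.
if __name__ == "__main__":
    torch.manual_seed(0)
    dim = 64
    x = torch.randn(2, 25, dim)
    w = [torch.randn(dim, dim) / dim ** 0.5 for _ in range(3)]
    kept, idx = activity_guided_attention(x, *w, keep_ratio=0.4)
    print(kept.shape, idx.shape)  # torch.Size([2, 10, 64]) torch.Size([2, 10])
```

Under this reading, ADGA would then blend locally distilled convolutional features into the retained global features via the attenuation factor; consult the paper itself for the actual formulation of both modules.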
Related Links: [IEEE Record]
Indexed By
Language
English
SUSTech Authorship
Other
Funding Projects
This work is supported by the Shenzhen Science and Technology Program under Grant JCYJ20220531100814033 and the National Natural Science Foundation of China under Grant 61771322. †These authors contributed equally to this work and should be considered co-first authors. Corresponding author: Wenming Cao. Xinpeng Yin, Cheng Zhang, and Wenming Cao are with the State Key Laboratory of Radio Frequency Heterogeneous Integration, Department of Electronics and Information Engineering, Shenzhen University (e-mail: 2110436215@email.szu.edu.cn; 2210433095@email.szu.edu.cn; wm-cao@szu.edu.cn). ZiXu Huang is with the Department of Computer Science and Engineering, Southern University of Science and Technology; Zhihai He is with the Department of Electronic and Electrical Engineering, Southern University of Science and Technology (e-mail: 12111227@mail.sustech.edu.cn; hezh@sustech.edu.cn).
Publisher
IEEE
EI Accession Number
20243016744494
EI Keywords
Distillation ; Efficiency ; Musculoskeletal system ; Refining
EI Classification Codes
Biomechanics, Bionics and Biomimetics:461.3 ; Chemical Operations:802.3 ; Production Engineering:913.1 ; Mathematical Statistics:922.2
Source Database
EV Compendex
Citation Statistics
Document Type: Journal Article
Identifier: http://sustech.caswiz.com/handle/2SGJ60CL/794551
Collections
College of Engineering_Department of Computer Science and Engineering
Southern University of Science and Technology
College of Engineering_Department of Electronic and Electrical Engineering
Author Affiliations
1. State Key Laboratory of Radio Frequency Heterogeneous Integration, Department of Electronics and Information Engineering, Shenzhen University
2. Department of Computer Science and Engineering, Southern University of Science and Technology
3. Department of Electronic and Electrical Engineering, Southern University of Science and Technology
Recommended Citation
GB/T 7714
Yin, Xinpeng, Zhang, Cheng, Huang, ZiXu, et al. Unsupervised Action Recognition using Spatiotemporal, Adaptive, and Attention-guided Refining-Network[J]. IEEE Transactions on Artificial Intelligence, 2024, PP(99): 1-12.
APA
Yin, Xinpeng, Zhang, Cheng, Huang, ZiXu, He, Zhihai, & Cao, Wenming. (2024). Unsupervised Action Recognition using Spatiotemporal, Adaptive, and Attention-guided Refining-Network. IEEE Transactions on Artificial Intelligence, PP(99), 1-12.
MLA
Yin, Xinpeng, et al. "Unsupervised Action Recognition using Spatiotemporal, Adaptive, and Attention-guided Refining-Network". IEEE Transactions on Artificial Intelligence PP.99 (2024): 1-12.
Files in This Item
No files are associated with this item.

Unless otherwise specified, all content in this system is protected by copyright, and all rights are reserved.