Title | Adaptive graphical model network for 2D handpose estimation
Authors | Kong, Deying; Chen, Yifei; Ma, Haoyu; et al.
Publication date | 2020
Proceedings title |
Abstract | In this paper, we propose a new architecture called Adaptive Graphical Model Network (AGMN) to tackle the task of 2D hand pose estimation from a monocular RGB image. The AGMN consists of two branches of deep convolutional neural networks for calculating unary and pairwise potential functions, followed by a graphical model inference module for integrating unary and pairwise potentials. Unlike existing architectures proposed to combine DCNNs with graphical models, our AGMN is novel in that the parameters of its graphical model are conditioned on and fully adaptive to individual input images. Experiments show that our approach outperforms the state-of-the-art method used in 2D hand keypoints estimation by a notable margin on two public datasets.
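The abstract describes the architecture only at a high level: a unary branch, a pairwise branch, and a graphical-model inference step whose parameters are predicted per image. The sketch below illustrates that general idea in PyTorch. It is not the authors' implementation; the module names, layer sizes, heatmap resolution, kernel size, the toy three-joint chain (standing in for the full hand kinematic tree), and the single round of message passing are all assumptions made for illustration.

```python
# Hypothetical sketch of an AGMN-style two-branch model (not the authors' code).
# Assumptions: 21 hand keypoints, 256x256 input, 32x32 heatmaps, a toy kinematic
# chain instead of the full hand tree, and one message-passing pass implemented
# as convolution of unary heatmaps with per-image predicted pairwise kernels.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 21          # number of hand keypoints (assumed)
H = W = 32      # heatmap resolution (assumed)
KSIZE = 9       # spatial size of each predicted pairwise kernel (assumed)

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class UnaryBranch(nn.Module):
    """CNN mapping an RGB image to K unary heatmaps (keypoint score maps)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.MaxPool2d(2),
            conv_block(64, 64), nn.MaxPool2d(2),     # 256 -> 32
            nn.Conv2d(64, K, 1),
        )
    def forward(self, img):
        return self.net(img)                          # (B, K, H, W)

class PairwiseBranch(nn.Module):
    """CNN predicting, per image, one KSIZE x KSIZE kernel for each graph edge.

    Conditioning these kernels on the input image is the 'adaptive' part: the
    pairwise potentials vary per image instead of being fixed weights."""
    def __init__(self, num_edges):
        super().__init__()
        self.num_edges = num_edges
        self.features = nn.Sequential(
            conv_block(3, 32), nn.MaxPool2d(4),
            conv_block(32, 64), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_edges * KSIZE * KSIZE)
    def forward(self, img):
        f = self.features(img).flatten(1)             # (B, 64)
        kernels = self.head(f).view(-1, self.num_edges, KSIZE, KSIZE)
        return torch.softmax(kernels.flatten(2), dim=-1).view_as(kernels)

class AGMNSketch(nn.Module):
    """Unary branch + pairwise branch + one round of message passing on a chain."""
    def __init__(self, edges):
        super().__init__()
        self.edges = edges                            # list of (parent, child) pairs
        self.unary = UnaryBranch()
        self.pairwise = PairwiseBranch(len(edges))
    def forward(self, img):
        u = torch.sigmoid(self.unary(img))            # (B, K, H, W) unary scores
        kernels = self.pairwise(img)                  # (B, E, KSIZE, KSIZE)
        refined = u.clone()
        for e, (p, c) in enumerate(self.edges):
            # Message from parent to child: convolve the parent's heatmap with
            # the image-specific kernel for this edge (grouped conv over batch).
            b = u.shape[0]
            parent = u[:, p:p + 1]                    # (B, 1, H, W)
            msg = F.conv2d(parent.reshape(1, b, H, W),
                           kernels[:, e:e + 1],       # (B, 1, KSIZE, KSIZE)
                           padding=KSIZE // 2, groups=b).reshape(b, 1, H, W)
            refined[:, c:c + 1] = refined[:, c:c + 1] * msg
        return u, refined

# Usage: a toy chain 0 -> 1 -> 2 standing in for the hand's kinematic tree.
model = AGMNSketch(edges=[(0, 1), (1, 2)])
img = torch.randn(2, 3, 256, 256)
unary_maps, refined_maps = model(img)
print(unary_maps.shape, refined_maps.shape)           # torch.Size([2, 21, 32, 32]) each
```

In this sketch the adaptivity lives in `PairwiseBranch`, which predicts a separate convolution kernel per graph edge from each input image, so the pairwise potentials change with the image rather than being fixed network parameters.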
University authorship | Other
Language | English
Related links | [Scopus record]
Scopus record ID | 2-s2.0-85085507108
Source database | Scopus
Publication type | Conference paper
Item identifier | http://sustech.caswiz.com/handle/2SGJ60CL/395660
Collection | Southern University of Science and Technology
Author affiliations | 1. University of California, Irvine, United States; 2. Tencent; 3. Southeast University; 4. Southern University of Science and Technology
Recommended citation (GB/T 7714) | Kong Deying, Chen Yifei, Ma Haoyu, et al. Adaptive graphical model network for 2D handpose estimation[C], 2020.
Files in this item | No files are associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.