Title

Training data selection and optimal sensor placement for deep-learning-based sparse inertial sensor human posture reconstruction

Authors
Zheng, Zhaolong; Ma, Hao; Yan, Weichao; Liu, Haoyang; Yang, Zaiyue
Corresponding author: Liu, Haoyang
Publication date
2021-05-01
DOI
Journal
Entropy
EISSN
1099-4300
Volume: 23  Issue: 5
Abstract
Although commercial motion-capture systems have been widely used in various applications, their complex setup limits the application scenarios for ordinary consumers. To overcome the drawbacks in wearability, human posture reconstruction based on a few wearable sensors has been actively studied in recent years. In this paper, we propose a deep-learning-based sparse inertial sensor human posture reconstruction method. The method uses a bidirectional recurrent neural network (Bi-RNN) to build an a priori model of human motion from a large motion dataset, so that low-dimensional motion measurements are mapped to whole-body posture. To improve motion reconstruction performance for specific application scenarios, two fundamental problems in model construction are investigated: training data selection and sparse sensor placement. The training data selection problem is to select, from an accumulated and imbalanced motion dataset, independent and identically distributed (IID) data for a given scenario that carry sufficient information. We formulate data selection as an optimization problem whose solution yields continuous IID data segments that comply with a small reference dataset collected from the target scenario, and propose a two-step heuristic algorithm to solve it. The optimal sensor placement problem, in turn, is studied to exploit the most information from partial observations of human movement. We propose a mutual-information-based method for evaluating the amount of motion information captured by any group of wearable inertial sensors, and adopt a greedy search to obtain an approximately optimal placement for a given number of sensors, achieving maximum motion information with minimum redundancy. Finally, human posture reconstruction performance is evaluated under different training data and sensor placement selection methods, and experimental results show that the proposed method offers advantages in both posture reconstruction accuracy and model training time. In the six-sensor configuration, the posture reconstruction errors of our model for walking, running, and playing basketball are 7.25, 8.84, and 14.13, respectively.
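The greedy sensor-placement search described in the abstract is, in spirit, a maximum-relevance / minimum-redundancy selection driven by mutual information. The sketch below is a minimal illustration, not the authors' implementation: it assumes that per-sensor relevance scores (mutual information between each candidate sensor's measurements and whole-body posture) and pairwise inter-sensor mutual information have already been estimated, and it greedily adds the sensor with the best relevance-minus-redundancy trade-off until the sensor budget is reached.

```python
import numpy as np

def greedy_sensor_placement(relevance, redundancy, k):
    """Greedily pick k sensor locations (illustrative mRMR-style criterion).

    relevance  : (n,) array, estimated mutual information between each
                 candidate sensor's measurements and whole-body posture.
    redundancy : (n, n) array, pairwise mutual information between sensors.
    k          : number of sensors to place.

    At each step the candidate with the largest relevance minus mean
    redundancy with already-selected sensors is added; the paper's exact
    objective and MI estimator may differ.
    """
    n = len(relevance)
    selected, remaining = [], set(range(n))
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in remaining:
            penalty = np.mean([redundancy[j, s] for s in selected]) if selected else 0.0
            score = relevance[j] - penalty
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        remaining.remove(best)
    return selected

# Example with synthetic scores for 17 hypothetical candidate body segments,
# choosing 6 sensors (mirroring the six-sensor configuration evaluated in the paper).
rng = np.random.default_rng(0)
rel = rng.random(17)
red = rng.random((17, 17)); red = (red + red.T) / 2
print(greedy_sensor_placement(rel, red, k=6))
```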
Keywords
Indexed by
Language
English
Institutional authorship position
First
WOS accession number
WOS:000653911300001
Scopus record ID
2-s2.0-85106652687
Source database
Scopus
Citation statistics
Times cited [WOS]: 7
Publication type: Journal article
Item identifier: http://sustech.caswiz.com/handle/2SGJ60CL/229589
Collection: College of Engineering_Department of Mechanical and Energy Engineering
Author affiliations
1. Shenzhen Key Laboratory of Biomimetic Robotics and Intelligent Systems, Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
2. Guangdong Provincial Key Laboratory of Human-Augmentation and Rehabilitation Robotics in Universities, Southern University of Science and Technology, Shenzhen, 518055, China
3. School of Sports Engineering, Beijing Sport University, Beijing, 100084, China
First author affiliation: Department of Mechanical and Energy Engineering; Southern University of Science and Technology
First author's primary affiliation: Department of Mechanical and Energy Engineering
Recommended citation
GB/T 7714
Zheng, Zhaolong, Ma, Hao, Yan, Weichao, et al. Training data selection and optimal sensor placement for deep-learning-based sparse inertial sensor human posture reconstruction[J]. Entropy, 2021, 23(5).
APA
Zheng, Zhaolong, Ma, Hao, Yan, Weichao, Liu, Haoyang, & Yang, Zaiyue. (2021). Training data selection and optimal sensor placement for deep-learning-based sparse inertial sensor human posture reconstruction. Entropy, 23(5).
MLA
Zheng, Zhaolong, et al. "Training data selection and optimal sensor placement for deep-learning-based sparse inertial sensor human posture reconstruction". Entropy 23.5 (2021).
Files in this item
No files are associated with this item.
