Title

水下外肢体机器人的设计及其人机交互方法研究

Alternative Title
DESIGN AND HUMAN-ROBOT INTERACTION RESEARCH OF UNDERWATER SUPERNUMERARY ROBOTIC LIMB
Name
郭宇芹
Name (Pinyin)
GUO Yuqin
Student ID
12032421
Degree Type
Master
Degree Discipline
0801Z1 Intelligent Manufacturing and Robotics
Subject Category
08 Engineering
Supervisor
宋超阳 (SONG Chaoyang)
Supervisor's Department
Department of Mechanical and Energy Engineering
Thesis Defense Date
2023-05-13
Thesis Submission Date
2023-06-28
Degree-Granting Institution
Southern University of Science and Technology
Place of Degree Conferral
Shenzhen
Abstract

Underwater operations by technical divers suffer from a lack of automation and low efficiency, making technical diving a profession of generally high workload and high risk. From a survey of the literature, we summarize two shortcomings of existing technology: first, existing underwater robots are limited to observing or tracking divers rather than acting as reliable partners that assist a diver in completing a specific underwater task; second, current interaction methods between divers and underwater robots cannot meet the needs of underwater operations. To address these needs and shortcomings, this thesis studies a supernumerary robotic limb for assisting divers in underwater operations.

First, guided by the technical requirements of divers' underwater operations, this thesis proposes a novel underwater supernumerary robotic limb system and builds a human-robot interaction hardware interface based on a throat microphone and a wearable IMU. Prototype experiments verify the basic motion performance of the designed prototype and the effectiveness of its human-robot interaction interface.

Second, to realize a human-robot interaction method based on head motion, this thesis analyzes the degrees of freedom of head motion and its pose representation, derives a similarity measure for head-motion sequences based on attitude quaternions, and builds a head-motion classification and recognition algorithm based on dynamic time warping and attitude quaternions. Classification experiments on the collected dataset confirm the feasibility of the head-motion recognition algorithm, with an average recognition accuracy of 94%.

Third, to realize a human-robot interaction method based on throat vibration signals, this thesis analyzes the generation of throat vibration and conventional speech signals and their common origin, and introduces the corresponding noise reduction and signal pre-processing methods. On this basis, a throat vibration recognition method based on Mel-frequency cepstral coefficients and a long short-term memory network is proposed. Experiments on the collected dataset of musical-scale throat vibration signals verify the effectiveness of the proposed algorithm, with an average recognition accuracy of 87%.

Finally, to verify the basic motion performance of the designed underwater supernumerary robotic limb, we conducted field tests in a swimming pool. The results show that the prototype can assist a tester in completing a variety of underwater maneuvers, including straight-line diving, left and right turns, and straight-line spiral diving, with a propulsion speed above 0.6 m/s. In addition, several sets of human-robot interaction experiments further verified the feasibility of the head-motion and throat-vibration interaction methods and demonstrated high recognition success rates.

Alternative Abstract

Underwater work by technical divers suffers from insufficient automation and low efficiency, resulting in a high workload and high risk for the technical diving profession. Based on a literature survey, we identified two shortcomings in the application of existing technologies: first, existing underwater robots are limited to observing or following divers, rather than acting as reliable partners that assist divers in a particular underwater task; second, the way divers interact with underwater robots does not meet their underwater operational needs. To address these needs and shortcomings, this paper studies a supernumerary robotic limb used to assist divers in underwater operations.

Firstly, a new underwater supernumerary robotic limb system is proposed to meet the technical requirements of divers’ underwater operations, and a human-robot interaction hardware interface based on a throat microphone and a wearable IMU is built. Through prototype experiments, the basic motion performance of the designed prototype and the effectiveness of its human-robot interaction interface are verified.

Secondly, in order to implement a head-motion-based human-robot interaction method, the paper analyses the degrees of freedom of head motion and its pose description, derives a method to calculate the similarity of head-motion sequences based on attitude quaternions, and builds a head-motion classification and recognition algorithm based on dynamic time warping and attitude quaternions. Classification experiments on the dataset confirmed the feasibility of the proposed head-motion recognition algorithm, which achieved an average recognition accuracy of 94%.
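The pipeline above (attitude quaternions as pose descriptors, dynamic time warping as the sequence-similarity measure) can be sketched as a minimal nearest-template classifier. This is an illustrative sketch, not the thesis's implementation: the quaternion distance, template gestures, and sequence lengths below are assumptions.

```python
import numpy as np

def quat_dist(q1, q2):
    # Angular distance between two unit quaternions, in radians.
    # The absolute value handles the double cover: q and -q are the same rotation.
    d = abs(float(np.dot(q1, q2)))
    return 2.0 * np.arccos(min(d, 1.0))

def dtw(seq_a, seq_b, dist=quat_dist):
    # Classic dynamic time warping cost between two pose sequences,
    # tolerant of differences in speed and length.
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(sample, templates):
    # Nearest-template classification: return the label of the recorded
    # template with the smallest DTW cost to the observed head-motion sequence.
    return min(templates, key=lambda lbl: dtw(sample, templates[lbl]))
```

In practice each gesture class would hold one or more recorded template sequences, and a rejection threshold on the winning DTW cost can filter out unintended head movements.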

Thirdly, in order to implement a human-robot interaction method based on throat vibration signals, this paper analyses the generation of throat vibration and conventional speech signals and their common origin, and introduces the corresponding noise reduction and signal pre-processing methods. Accordingly, a throat vibration recognition method based on Mel-frequency cepstral coefficients and long short-term memory (LSTM) networks is further proposed. The effectiveness of the proposed throat vibration recognition algorithm is verified on the collected dataset of musical-scale throat vibration signals, with an average recognition accuracy of 87%.
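The MFCC front end named above follows a standard pipeline: pre-emphasis, framing and windowing, power spectrum, triangular mel filterbank, and a DCT over the log filterbank energies. A minimal NumPy sketch is given below; the sample rate, frame sizes, and filter counts are illustrative assumptions, and the LSTM classifier that would consume these features is omitted.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=8000, frame_len=256, hop=128, n_mels=26, n_ceps=13):
    # 1. Pre-emphasis boosts the weaker high-frequency components.
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # 2. Split into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(sig) - frame_len) // hop
    frames = np.stack([sig[i * hop: i * hop + frame_len] for i in range(n_frames)])
    frames *= np.hamming(frame_len)
    # 3. Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, frame_len)) ** 2 / frame_len
    # 4. Triangular mel filterbank: filters spaced uniformly on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((frame_len + 1) * mel_to_hz(mels) / sr).astype(int)
    fbank = np.zeros((n_mels, frame_len // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(power @ fbank.T + 1e-10)
    # 5. A DCT-II decorrelates the log energies; keep the first n_ceps
    #    coefficients as the MFCC feature vector of each frame.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return log_energy @ dct.T  # shape: (n_frames, n_ceps)
```

Each utterance then becomes an (n_frames × n_ceps) feature matrix, i.e. the sequence input on which a recurrent classifier such as an LSTM would be trained.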

Finally, in order to verify the basic motion performance of the prototype of the underwater supernumerary robotic limb system, a field test was conducted in a swimming pool. The results showed that the designed prototype could assist the tester in a variety of underwater movements, including diving forward, turning left/right, and straight-line spiral diving, and that its propulsion speed could exceed 0.6 m/s. In addition, several sets of human-robot interaction experiments were conducted to further verify the feasibility of the interaction methods based on head movements and throat vibrations, which also demonstrated high recognition success rates.

Keywords
Alternative Keywords
Language
Chinese
Training Category
Independently trained
Year of Enrollment
2020
Year of Degree Conferral
2023-07

Degree Assessment Subcommittee
Mechanics
Chinese Library Classification (CLC) Number
TP242.6
Source Repository
Manually submitted
Document Type
Thesis
Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/544197
Collection
College of Engineering, Department of Mechanical and Energy Engineering
Recommended Citation
GB/T 7714
郭宇芹. 水下外肢体机器人的设计及其人机交互方法研究[D]. 深圳: 南方科技大学, 2023.
Files in This Item
12032421-郭宇芹-机械与能源工程 (9721 KB), Restricted Access