Title

Robot Traversability Analysis over Unstructured Terrain Based on Information Fusion

Name
张文垚
Name in Pinyin
Zhang Wenyao
Student ID
11930249
Degree Type
Master
Degree Program
0801 Mechanics
Discipline Category / Professional Degree Category
08 Engineering
Supervisor
贾振中
Supervisor's Department
Department of Mechanical and Energy Engineering
Thesis Defense Date
2022-05-10
Thesis Submission Date
2022-06-12
Degree-Granting Institution
Southern University of Science and Technology
Place of Degree Conferral
Shenzhen
Abstract

Traversability analysis is critical to the successful execution of a robot's mission, especially over unstructured terrain (e.g., planetary exploration, agricultural harvesting, and disaster relief). The diverse terrain types and numerous obstacles in unstructured environments pose great difficulties for robot locomotion. On soft terrain in particular, robots often suffer excessive slip and sinkage, which can lead to mission failure or even damage to the robot. In such conditions the robot needs more accurate and robust perception and traversability analysis, which is important for navigation and path planning in complex working environments and can greatly improve the mission success rate. Previous studies of traversability analysis usually relied only on the geometric information of the terrain, ignoring the importance of semantic information and neglecting the complex mechanical properties of soft terrain, the most dangerous terrain type in unstructured environments.

To address this problem, this thesis proposes an information-fusion-based traversability analysis method for unstructured terrain, studied from three aspects: terrain classification, traversability analysis based on multiple sources of information, and traversability prediction on soft terrain. The main contributions are as follows:

(1) Terrain classification via semantic segmentation. To address the unclear boundary classification in current terrain segmentation, this thesis adopts the dual-stream semantic segmentation network Gated-SCNN. To address the inaccurate segmentation of tail classes caused by the long-tail distribution of unstructured-terrain datasets, a weighted cross-entropy loss function is used.

(2) An algorithmic framework is proposed that combines the robot's own mobility with semantic and geometric information to analyze traversability in unstructured terrain. Local elevation and semantic maps are built from the point cloud acquired by a low-cost RGB-D camera, and terrain slope and step height are computed from the depth information. The traversability of the surrounding terrain is then analyzed from the geometric information, the semantic information, and the robot's mobility.

(3) For the perception of soft terrain, the most dangerous terrain in unstructured environments, this thesis proposes a vision-based sinkage estimation algorithm. Experiments under different lighting conditions and with complex backgrounds show that the algorithm is highly robust and accurate. In addition, this thesis builds a novel sensing-wheel system, the Articulated Wheeled Bevameter, with which the robot can predict the sinkage and slip ratio of its supporting wheels in unknown regions without putting itself at risk; combining these two quantities with the robot's mobility, it predicts the traversability of soft ground and selects the best path.
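To make the weighted cross-entropy idea in contribution (1) concrete, the following is a minimal sketch (not the thesis code) of how inverse-frequency class weights can be passed to a standard cross-entropy loss for long-tailed terrain segmentation; the class list and pixel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical pixel counts per terrain class in a long-tailed dataset
# (head classes such as grass dominate; tail classes such as puddles are rare).
class_pixel_counts = torch.tensor([5.0e7, 2.0e7, 3.0e6, 4.0e5, 6.0e4])

# Inverse-frequency weights, normalized so that their mean is 1.
weights = class_pixel_counts.sum() / (len(class_pixel_counts) * class_pixel_counts)

# Weighted cross-entropy: rare (tail) classes contribute more to the loss,
# so the segmentation network is penalized for ignoring them.
criterion = nn.CrossEntropyLoss(weight=weights)

# logits: (batch, classes, H, W) raw network outputs; labels: (batch, H, W) class ids.
logits = torch.randn(2, 5, 64, 64)
labels = torch.randint(0, 5, (2, 64, 64))
loss = criterion(logits, labels)
```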

Other Abstract

Robot mobility is critical for mission success, especially in the unstructured terrain encountered in planetary exploration, agriculture, and disaster relief. The diverse terrain types and numerous obstacles in such environments make robot traversal difficult, so robots need more accurate and robust perception and mobility-prediction algorithms, which are important for navigation and path planning in complex environments. Most previous studies analyze robot traversability using only geometric information; they ignore the importance of semantic information and do not consider the complex terramechanical properties of soft terrain. Therefore, this thesis proposes a method of traversability analysis over unstructured terrain using information fusion, studied from three aspects: terrain classification, traversability analysis based on multiple sources of information, and traversability prediction on soft terrain. The detailed contributions are as follows:

(1) A dual-stream neural network is used to classify terrain, exploiting edge information to resolve unclear boundary recognition, and a weighted cross-entropy loss function is used to address the long-tail problem.

(2) An algorithmic framework is proposed to analyze the robot's traversability in unstructured terrain by combining the robot's mobility with semantic and geometric information. The algorithm builds a local elevation map and a semantic map and then computes the slope and step height of the terrain. Finally, terrain traversability is analyzed according to the geometric information, the semantic information, and the robot's mobility.

(3) To perceive soft terrain accurately, a vision-based wheel-soil contact contour estimation method is proposed. Experiments under different lighting conditions and with complex backgrounds show that the algorithm is highly robust and accurate. In addition, a new articulated wheeled bevameter is built, with which the robot can estimate the sinkage and slip ratio of unknown regions without putting itself at risk; by predicting these two quantities, it predicts traversability and chooses the safest path.
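As a rough illustration of contributions (2) and (3), the sketch below shows the standard driving slip-ratio definition together with a simple threshold test that compares slope, step height, predicted sinkage, and slip against assumed mobility limits; the limit values and function names are hypothetical and not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class MobilityLimits:
    max_slope_deg: float = 25.0   # assumed climbing limit
    max_step_m: float = 0.15      # assumed obstacle-crossing limit
    max_sinkage_m: float = 0.05   # assumed safe sinkage on soft soil
    max_slip: float = 0.4         # assumed safe slip ratio

def slip_ratio(wheel_radius: float, wheel_omega: float, body_speed: float) -> float:
    """Standard driving-slip definition: s = 1 - v / (r * omega)."""
    return 1.0 - body_speed / (wheel_radius * wheel_omega)

def is_traversable(slope_deg: float, step_m: float, sinkage_m: float, slip: float,
                   limits: MobilityLimits = MobilityLimits()) -> bool:
    """A terrain cell is accepted only if every quantity is within the robot's limits."""
    return (slope_deg <= limits.max_slope_deg and
            step_m <= limits.max_step_m and
            sinkage_m <= limits.max_sinkage_m and
            slip <= limits.max_slip)

# Example: geometry alone (slope, step) is acceptable, but the predicted sinkage
# exceeds its limit, so the cell is rejected.
s = slip_ratio(wheel_radius=0.1, wheel_omega=5.0, body_speed=0.35)  # s = 0.3
print(is_traversable(slope_deg=10.0, step_m=0.05, sinkage_m=0.08, slip=s))  # False
```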

Keywords
Other Keywords
Language
Chinese
Training Category
Independent training
Year of Enrollment
2019
Year Degree Conferred
2022-06

Degree Evaluation Sub-committee
Department of Mechanical and Energy Engineering
Chinese Library Classification Number
O39
Source Database
Manual submission
Item Type
Degree thesis
Item Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/335884
Collection
College of Engineering / Department of Mechanical and Energy Engineering
Recommended Citation
GB/T 7714
张文垚. 非结构地形下基于信息融合的机器人可通过性分析[D]. 深圳: 南方科技大学, 2022.
Files in This Item
11930249-张文垚-机械与能源工程 (15818 KB): restricted access, full text available on request
