[1] SHRIVASTAVA S, KARSAI A, AYDIN Y O, et al. Material remodeling and unconventional gaits facilitate locomotion of a robophysical rover over granular terrain[J]. Science Robotics, 2020, 5(42).
[2] PAPADAKIS P. Terrain traversability analysis methods for unmanned ground vehicles: A survey[J/OL]. Engineering Applications of Artificial Intelligence, 2013, 26(4): 1373-1385. https://www.sciencedirect.com/science/article/pii/S095219761300016X. DOI: 10.1016/j.engappai.2013.01.006.
[3] WERMELINGER M, FANKHAUSER P, DIETHELM R, et al. Navigation planning for legged robots in challenging terrain[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 1184-1189.
[4] YANG S, HUANG Y, SCHERER S. Semantic 3D occupancy mapping through efficient high order CRFs[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 590-597.
[5] PAZ D, ZHANG H, LI Q, et al. Probabilistic semantic mapping for urban autonomous driving applications[C]//Intelligent Robots and Systems. 2020.
[6] JIA Z, SMITH W, PENG H. Terramechanics-based wheel–terrain interaction model and its applications to off-road wheeled mobile robots[J]. Robotica, 2012, 30(3): 491-503.
[7] JIA Z, SMITH W, PENG H. Fast analytical models of wheeled locomotion in deformable terrain for mobile robots[J]. Robotica, 2013, 31(1): 35-53.
[8] IAGNEMMA K, DUBOWSKY S. Mobile robots in rough terrain: Estimation, motion planning, and control with application to planetary rovers: volume 12[M]. Springer Science & Business Media, 2004.
[9] WONG J Y. Theory of ground vehicles[M]. John Wiley & Sons, 2008.
[10] ENDO M, ENDO S, NAGAOKA K, et al. Terrain-dependent slip risk prediction for planetary exploration rovers[J]. Robotica, 2021, 39: 1883-1896.
[11] ISHIGAMI G. Terramechanics-based analysis and control for lunar/planetary exploration robots[D]. Graduate School of Engineering, Tohoku University, 2008.
[12] GAO H, LV F, YUAN B, et al. Sinkage definition and visual detection for planetary rovers wheels on rough terrain based on wheel-soil interaction boundary[J]. Robotics and Autonomous Systems, 2017: 222-240.
[13] IAGNEMMA K, BROOKS C A, DUBOWSKY S. Visual, tactile, and vibration-based terrain analysis for planetary rovers[C]//IEEE Aerospace Conference. 2004.
[14] REINA G, OJEDA L, MILELLA A, et al. Wheel slippage and sinkage detection for planetary rovers[J]. IEEE/ASME Transactions on Mechatronics, 2006, 11: 185-195.
[15] HEGDE G M, YE C, ROBINSON C A, et al. Computer-vision-based wheel sinkage estimation for robot navigation on lunar terrain[J]. IEEE/ASME Transactions on Mechatronics, 2013, 18: 1346-1356.
[16] GEIGER A, LENZ P, URTASUN R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]//Computer Vision and Pattern Recognition. 2012.
[17] OBERWEGER M, WOHLHART P, LEPETIT V. Hands deep in deep learning for hand pose estimation[J]. arXiv: Computer Vision and Pattern Recognition, 2015: 1-10.
[18] KO T Y, LEE S H. Novel method of semantic segmentation applicable to augmented reality[J]. Sensors, 2020, 20: 1737.
[19] LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 3431-3440.
[20] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
[21] SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions[C]//Computer Vision and Pattern Recognition. 2015: 1-9.
[22] RONNEBERGER O, FISCHER P, BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015: 234-241.
[23] MILLETARI F, NAVAB N, AHMADI S A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation[C]//2016 Fourth International Conference on 3D Vision (3DV). IEEE, 2016: 565-571.
[24] BADRINARAYANAN V, KENDALL A, CIPOLLA R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12): 2481-2495.
[25] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]// Computer Vision and Pattern Recognition. 2017.
[26] ZHAO H, SHI J, QI X, et al. Pyramid scene parsing network[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 2881-2890.
[27] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.
[28] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J/OL]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834-848. DOI: 10.1109/TPAMI.2017.2699184.
[29] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. Semantic image segmentation with deep convolutional nets and fully connected crfs[C]//International Conference on Learning Representations. 2015.
[30] CHEN L C, PAPANDREOU G, SCHROFF F, et al. Rethinking atrous convolution for semantic image segmentation[J]. arXiv: Computer Vision and Pattern Recognition, 2017.
[31] CHEN L C, ZHU Y, PAPANDREOU G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[C]//European Conference on Computer Vision. 2018.
[32] IAGNEMMA K D, DUBOWSKY S. Terrain estimation for high-speed rough-terrain autonomous vehicle navigation[C]//Unmanned Ground Vehicle Technology IV: volume 4715. International Society for Optics and Photonics, 2002.
[33] WEISS C, FRÖHLICH H, ZELL A. Vibration-based terrain classification using support vector machines[C]//2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2006.
[34] HÖPFLINGER M, REMY C D, HUTTER M, et al. Haptic terrain classification for legged robots[C]//IEEE International Conference on Robotics and Automation (ICRA). 2010.
[35] KOLVENBACH H, BÄRTSCHI C, WELLHAUSEN L, et al. Haptic inspection of planetary soils with legged robots[J/OL]. IEEE Robotics and Automation Letters, 2019, 4(2): 1626-1632. DOI: 10.1109/LRA.2019.2896732.
[36] VALADA A, BURGARD W. Deep spatiotemporal models for robust proprioceptive terrain classification[J]. The International Journal of Robotics Research, 2017, 36(13-14): 1521-1539.
[37] DIMA C S, VANDAPEL N, HEBERT M. Classifier fusion for outdoor obstacle detection[C]//IEEE International Conference on Robotics and Automation (ICRA 2004): volume 1. IEEE, 2004: 665-671.
[38] KELLY A, STENTZ A, AMIDI O, et al. Toward reliable off road autonomous vehicles operating in challenging environments[J]. The International Journal of Robotics Research, 2006, 25(5-6): 449-483.
[39] FILITCHKIN P, BYL K. Feature-based terrain classification for LittleDog[C/OL]//2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. 2012: 1387-1392. DOI: 10.1109/IROS.2012.6386042.
[40] RASMUSSEN C. Laser range-, color-, and texture-based classifiers for segmenting marginal roads[C]//IEEE Conference on Computer Vision and Pattern Recognition Technical Sketches: volume 1. Citeseer, 2001.
[41] XU Y, ZHANG S, LI J, et al. Extracting terrain texture features for landform classification using wavelet decomposition[J]. ISPRS International Journal of Geo-Information, 2021, 10(10): 658.
[42] ROTHROCK B, KENNEDY R, CUNNINGHAM C, et al. SPOC: Deep learning-based terrain classification for Mars rover missions[C]//AIAA SPACE 2016. 2016: 5539.
[43] VALADA A, OLIVEIRA G, BROX T, et al. Deep multispectral semantic scene understanding of forested environments using multimodal fusion[C]//International Symposium on Experimental Robotics (ISER). 2016.
[44] ZHOU R, DING L, GAO H, et al. Mapping for planetary rovers from terramechanics perspective[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2019.
[45] IWASHITA Y, NAKASHIMA K, RAFOL S, et al. MU-Net: Deep learning-based thermal IR image estimation from RGB image[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2019.
[46] HALATCI I, BROOKS C A, IAGNEMMA K. Terrain classification and classifier fusion for planetary exploration rovers[C]//2007 IEEE aerospace conference. IEEE, 2007: 1-11.
[47] BROOKS C A, IAGNEMMA K. Self-supervised terrain classification for planetary surface exploration rovers[J]. Journal of Field Robotics, 2012.
[48] OTSU K, ONO M, FUCHS T J, et al. Autonomous terrain classification with co- and self-training approach[J/OL]. IEEE Robotics and Automation Letters, 2016, 1(2): 814-819. DOI: 10.1109/LRA.2016.2525040.
[49] WELLHAUSEN L, DOSOVITSKIY A, RANFTL R, et al. Where should I walk? Predicting terrain properties from images via self-supervised learning[C]//International Conference on Robotics and Automation. 2019.
[50] ZÜRN J, BURGARD W, VALADA A. Self-supervised visual terrain classification from unsupervised acoustic feature learning[J/OL]. IEEE Transactions on Robotics, 2021, 37(2): 466-481. DOI: 10.1109/TRO.2020.3031214.
[51] BEKKER M G. Off-the-road locomotion: Research and development in terramechanics[M]. Ann Arbor: University of Michigan Press, 1960.
[52] DING L, GAO H, DENG Z, et al. Wheel slip-sinkage and its prediction model of lunar rover [J]. Journal of Central South University of Technology, 2010, 17: 129-135.
[53] MEIRION-GRIFFITH G, SPENKO M. A modified pressure–sinkage model for small, rigid wheels on deformable terrains[J]. Journal of Terramechanics, 2011, 48: 149-155.
[54] YOSHIDA K, SHIWA T. Development of a research testbed for exploration rover at Tohoku University[J]. The Journal of Space Technology and Science, 1996, 12(1): 1_9-1_16.
[55] APOSTOLOPOULOS D S. Analytical configuration of wheeled robotic locomotion[D]. Pittsburgh: Carnegie Mellon University, 2001.
[56] CENTER S S. Rover mobility performance evaluation tool (RMPET): a systematic tool for rover chassis evaluation via application of Bekker theory[C]//Proceedings of the 8th ESA Workshop on Advanced Space Technologies for Robotics and Automation (ASTRA 2004). 2004.
[57] IAGNEMMA K, SHIBLY H, DUBOWSKY S. A laboratory single wheel testbed for studying planetary rover wheel-terrain interaction[J]. MIT field and space robotics laboratory technical report, 2005, 1: 05-05.
[58] DING L, GAO H, DENG Z, et al. Experimental study and analysis on driving wheels' performance for planetary exploration rovers moving in deformable soil[J]. Journal of Terramechanics, 2011, 48(1): 27-45.
[59] DING L, GAO H, LIU Z, et al. Identifying mechanical property parameters of planetary soil using in-situ data obtained from exploration rovers[J]. Planetary and Space Science, 2015, 119: 121-136.
[60] HEGDE G M, ROBINSON C, YE C, et al. Computer vision based wheel sinkage detection for robotic lunar exploration tasks[C]//International Conference on Mechatronics and Automation. 2010.
[61] ANGELOVA A, MATTHIES L, HELMICK D, et al. Learning and prediction of slip from visual information[J]. Journal of Field Robotics, 2007, 24: 205-231.
[62] SKONIECZNY K, SHUKLA D K, FARAGALLI M, et al. Data-driven mobility risk prediction for planetary rovers[J]. Journal of Field Robotics, 2019, 36: 475-491.
[63] CUNNINGHAM C, NESNAS I A D, WHITTAKER W. Improving slip prediction on Mars using thermal inertia measurements[C]//Robotics: Science and Systems. 2017.
[64] HERMANS A, FLOROS G, LEIBE B. Dense 3D semantic mapping of indoor scenes from RGB-D images[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014.
[65] MCCORMAC J, HANDA A, DAVISON A J, et al. SemanticFusion: Dense 3D semantic mapping with convolutional neural networks[C]//International Conference on Robotics and Automation. 2017.
[66] WHELAN T, LEUTENEGGER S, SALAS-MORENO R F, et al. ElasticFusion: Dense SLAM without a pose graph[C]//Robotics: Science and Systems. 2015.
[67] SÜNDERHAUF N, PHAM T, LATIF Y, et al. Meaningful maps with object-oriented semantic mapping[C]//Intelligent Robots and Systems. 2017.
[68] CARTILLIER V, REN Z, JAIN N, et al. Semantic MapNet: Building allocentric semantic maps and representations from egocentric views[C]//AAAI Conference on Artificial Intelligence. 2021.
[69] PAPADAKIS P. Terrain traversability analysis methods for unmanned ground vehicles: A survey[J]. Engineering Applications of Artificial Intelligence, 2013, 26: 1373-1385.
[70] HIROSE N, SADEGHIAN A, VÁZQUEZ M, et al. GONet: A semi-supervised deep learning approach for traversability estimation[J]. arXiv: Robotics, 2018.
[71] DARGAZANY A. Stereo-based terrain traversability analysis using normal-based segmentation and superpixel surface analysis[J]. arXiv: Computer Vision and Pattern Recognition, 2019.
[72] HOSSEINPOOR S, TORRESEN J, MANTELLI M, et al. Traversability analysis by semantic terrain segmentation for mobile robots[C/OL]//2021 IEEE 17th International Conference on Automation Science and Engineering (CASE). 2021: 1407-1413. DOI: 10.1109/CASE49439.2021.9551629.
[73] FAN D D, OTSU K, KUBO Y, et al. STEP: Stochastic traversability evaluation and planning for safe off-road navigation[J]. arXiv preprint arXiv:2103.02828, 2021.
[74] GUAN T, HE Z, MANOCHA D, et al. TTM: Terrain traversability mapping for autonomous excavator navigation in unstructured environments[J]. arXiv: Robotics, 2021.
[75] GUAN T, KOTHANDARAMAN D, CHANDRA R, et al. GANav: Group-wise attention network for classifying navigable regions in unstructured outdoor environments[J]. arXiv: Robotics, 2021.
[76] ZHENG S, LU J, ZHAO H, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers[C]//Computer Vision and Pattern Recognition. 2021.
[77] LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: Common objects in context[J]. arXiv: Computer Vision and Pattern Recognition, 2014.
[78] EVERINGHAM M, GOOL L V, WILLIAMS C K I, et al. The PASCAL visual object classes (VOC) challenge[J]. International Journal of Computer Vision, 2010, 88: 303-338.
[79] CORDTS M, OMRAN M, RAMOS S, et al. The cityscapes dataset for semantic urban scene understanding[C]//Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016.
[80] MATURANA D, CHOU P W, UENOYAMA M, et al. Real-time semantic mapping for autonomous off-road navigation[C]//Field and Service Robotics. Springer, 2018: 335-350.
[81] WIGNESS M, EUM S, ROGERS J G, et al. A RUGD dataset for autonomous navigation and visual perception in unstructured outdoor environments[C]//International Conference on Intelligent Robots and Systems (IROS). 2019.
[82] JIANG P, OSTEEN P, WIGNESS M, et al. RELLIS-3D dataset: Data, benchmarks and analysis[C]//2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021: 1110-1116.
[83] TAKIKAWA T, ACUNA D, JAMPANI V, et al. Gated-SCNN: Gated shape CNNs for semantic segmentation[C]//International Conference on Computer Vision. 2019.
[84] XIE S, TU Z. Holistically-nested edge detection[C]//International Conference on Computer Vision. 2015.
[85] JANG E, GU S, POOLE B. Categorical reparameterization with Gumbel-Softmax[J]. arXiv preprint arXiv:1611.01144, 2016.
[86] ZHANG Y, KANG B, HOOI B, et al. Deep long-tailed learning: A survey[J]. arXiv preprint, 2021.
[87] KUINDERSMA S, DEITS R, FALLON M, et al. Optimization-based locomotion planning, estimation, and control design for the Atlas humanoid robot[J]. Autonomous Robots, 2016, 40: 429-455.
[88] MASTALLI C, FOCCHI M, HAVOUTIS I, et al. Trajectory and foothold optimization using low-dimensional models for rough terrain locomotion[C]//International Conference on Robotics and Automation. 2017.
[89] HEBERT M, CAILLAS C, KROTKOV E, et al. Terrain mapping for a roving planetary explorer[C]//International Conference on Robotics and Automation. 1989.
[90] KWEON I S, KANADE T. High resolution terrain map from multiple sensor data[C]//Intelligent Robots and Systems. 1990.
[91] BELTER D, ŁABĘCKI P, SKRZYPCZYŃSKI P. Estimating terrain elevation maps from sparse and uncertain multi-sensor data[C]//Robotics and Biomimetics. 2012.
[92] WOODEN D, MALCHANO M D, BLANKESPOOR K, et al. Autonomous navigation for BigDog[C]//International Conference on Robotics and Automation. 2010.
[93] FANKHAUSER P, BLOESCH M, HUTTER M. Probabilistic terrain mapping for mobile robots with uncertain localization[J/OL]. IEEE Robotics and Automation Letters (RA-L), 2018, 3(4): 3019-3026. DOI: 10.1109/LRA.2018.2849506.
[94] BLOESCH M, SOMMER H, LAIDLOW T, et al. A primer on the differential calculus of 3D orientations[J]. arXiv: Robotics, 2016.
[95] KLEINER A, DORNHEGE C. Real-time localization and elevation mapping within urban search and rescue scenarios[J]. Journal of Field Robotics, 2007.
[96] FANKHAUSER P, HUTTER M. A universal grid map library: Implementation and use case for rough terrain navigation[M]//Robot Operating System (ROS): The Complete Reference (Volume 1). Springer, 2016.
[97] ZHANG Z. Flexible camera calibration by viewing a plane from unknown orientations[C]//Proceedings of the Seventh IEEE International Conference on Computer Vision: volume 1. IEEE, 1999: 666-673.
[98] ISHIGAMI G, KEWLANI G, IAGNEMMA K. Predictable mobility[J]. IEEE Robotics & Automation Magazine, 2009, 16(4): 61-70.
[99] OLSON E. AprilTag: A robust and flexible visual fiducial system[C]//Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2011: 3400-3407.
[100] MEIRION-GRIFFITH G, SPENKO M. A pressure-sinkage model for small-diameter wheels on compactive, deformable terrain[J]. Journal of Terramechanics, 2013, 50: 37-44.
[101] BALLARD D H. Generalizing the Hough transform to detect arbitrary shapes[J]. Pattern Recognition, 1981, 13(2): 111-122.
[102] OTSU N. A threshold selection method from gray-level histograms[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1979, 9: 62-66.
[103] CANNY J. A computational approach to edge detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986(6): 679-698.
[104] ROUSSEEUW P J, LEROY A M. Robust regression and outlier detection: volume 589[M]. John Wiley & Sons, 2005.
[105] DING L, GAO H, DENG Z, et al. Experimental study and analysis on driving wheels’ performance for planetary exploration rovers moving in deformable soil[J]. Journal of Terramechanics, 2011, 48: 27-45.
[106] CUNNINGHAM C, ONO M, NESNAS I, et al. Locally-adaptive slip prediction for planetary rovers using gaussian processes[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 5487-5494.