[1] GREICIUS T. Mars Science Laboratory - Curiosity Rover[EB/OL]. NASA (2015-01-20T10:42-05:00) [2023-02-12]. http://www.nasa.gov/mission_pages/msl/index.html.
[2] DURRANT-WHYTE H F. Where am I? A tutorial on mobile vehicle localization[J/OL]. Industrial Robot: An International Journal, 1994, 21(2): 11-16. DOI: 10.1108/EUM0000000004145.
[3] LIU H M, ZHANG G F, BAO H J. A survey of simultaneous localization and mapping based on monocular vision[J]. Journal of Computer-Aided Design & Computer Graphics, 2016, 28(6): 855-868.
[4] GAO X, ZHANG T, LIU Y, et al. 14 Lectures on Visual SLAM: From Theory to Practice[M]. 2nd ed. Beijing: Publishing House of Electronics Industry, 2019.
[5] MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras[J/OL]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262. DOI: 10.1109/TRO.2017.2705103.
[6] STURM J, ENGELHARD N, ENDRES F, et al. A benchmark for the evaluation of RGB-D SLAM systems[C/OL]//2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. 2012: 573-580. DOI: 10.1109/IROS.2012.6385773.
[7] SMITH R C, CHEESEMAN P. On the Representation and Estimation of Spatial Uncertainty[J/OL]. The International Journal of Robotics Research, 1986, 5(4): 56-68. DOI: 10.1177/027836498600500404.
[8] SMITH R, SELF M, CHEESEMAN P. Estimating uncertain spatial relationships in robotics[C/OL]//Proceedings of the 1987 IEEE International Conference on Robotics and Automation: volume 4. 1987: 850. DOI: 10.1109/ROBOT.1987.1087846.
[9] DAVISON A J, REID I D, MOLTON N D, et al. MonoSLAM: Real-Time Single Camera SLAM[J/OL]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(6): 1052-1067. DOI: 10.1109/TPAMI.2007.1049.
[10] CADENA C, CARLONE L, CARRILLO H, et al. Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age[J/OL]. IEEE Transactions on Robotics, 2016, 32(6): 1309-1332. DOI: 10.1109/TRO.2016.2624754.
[11] SHI J, TOMASI C. Good features to track[C/OL]//1994 Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. 1994: 593-600. DOI: 10.1109/CVPR.1994.323794.
[12] KLEIN G, MURRAY D. Parallel Tracking and Mapping for Small AR Workspaces[C/OL]//2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. 2007: 225-234. DOI: 10.1109/ISMAR.2007.4538852.
[13] MUR-ARTAL R, MONTIEL J M M, TARDÓS J D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System[J/OL]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163. DOI: 10.1109/TRO.2015.2463671.
[14] GÁLVEZ-LÓPEZ D, TARDÓS J D. Bags of Binary Words for Fast Place Recognition in Image Sequences[J/OL]. IEEE Transactions on Robotics, 2012, 28(5): 1188-1197. DOI: 10.1109/TRO.2012.2197158.
[15] CAMPOS C, MONTIEL J M M, TARDÓS J D. Inertial-Only Optimization for Visual-Inertial Initialization[C/OL]//2020 IEEE International Conference on Robotics and Automation (ICRA). 2020: 51-57. DOI: 10.1109/ICRA40945.2020.9197334.
[16] MUR-ARTAL R, TARDÓS J D. Visual-Inertial Monocular SLAM With Map Reuse[J/OL]. IEEE Robotics and Automation Letters, 2017, 2(2): 796-803. DOI: 10.1109/LRA.2017.2653359.
[17] CAMPOS C, ELVIRA R, RODRÍGUEZ J J G, et al. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial, and Multimap SLAM[J/OL]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890. DOI: 10.1109/TRO.2021.3075644.
[18] GEIGER A, LENZ P, URTASUN R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C/OL]//2012 IEEE Conference on Computer Vision and Pattern Recognition. 2012: 3354-3361. DOI: 10.1109/CVPR.2012.6248074.
[19] NEWCOMBE R A, LOVEGROVE S J, DAVISON A J. DTAM: Dense tracking and mapping in real-time[C/OL]//2011 International Conference on Computer Vision. 2011: 2320-2327. DOI: 10.1109/ICCV.2011.6126513.
[20] ENGEL J, SCHÖPS T, CREMERS D. LSD-SLAM: Large-Scale Direct Monocular SLAM[C/OL]//ECCV: volume 8690. 2014: 834-849. DOI: 10.1007/978-3-319-10605-2_54.
[21] ENGEL J, KOLTUN V, CREMERS D. Direct Sparse Odometry[J/OL]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(3): 611-625. DOI: 10.1109/TPAMI.2017.2658577.
[22] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: Fast semi-direct monocular visual odometry[C/OL]//2014 IEEE International Conference on Robotics and Automation (ICRA). 2014: 15-22. DOI: 10.1109/ICRA.2014.6906584.
[23] FORSTER C, ZHANG Z, GASSNER M, et al. SVO: Semidirect Visual Odometry for Monocular and Multicamera Systems[J/OL]. IEEE Transactions on Robotics, 2017, 33(2): 249-265. DOI: 10.1109/TRO.2016.2623335.
[24] VIDAL A R, REBECQ H, HORSTSCHAEFER T, et al. Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High-Speed Scenarios[J/OL]. IEEE Robotics and Automation Letters, 2018, 3(2): 994-1001. DOI: 10.1109/LRA.2018.2793357.
[25] BESL P J, MCKAY N D. A method for registration of 3-D shapes[J/OL]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(2): 239-256. DOI: 10.1109/34.121791.
[26] ZHANG J, SINGH S. LOAM: Lidar Odometry and Mapping in Real-time[C/OL]//Proceedings of Robotics: Science and Systems. Berkeley, USA, 2014. DOI: 10.15607/RSS.2014.X.007.
[27] ZHANG J, SINGH S. Visual-lidar odometry and mapping: low-drift, robust, and fast[C/OL]//2015 IEEE International Conference on Robotics and Automation (ICRA). 2015: 2174-2181. DOI: 10.1109/ICRA.2015.7139486.
[28] SHAN T, ENGLOT B. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain[C/OL]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2018: 4758-4765. DOI: 10.1109/IROS.2018.8594299.
[29] SHAN T, ENGLOT B, MEYERS D, et al. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping[C/OL]//2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2020: 5135-5142. DOI: 10.1109/IROS45743.2020.9341176.
[30] SHAN T, ENGLOT B, RATTI C, et al. LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping[C/OL]//2021 IEEE International Conference on Robotics and Automation (ICRA). 2021: 5692-5698. DOI: 10.1109/ICRA48506.2021.9561996.
[31] LIN J, ZHANG F. Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV[C/OL]//2020 IEEE International Conference on Robotics and Automation (ICRA). 2020: 3126-3131. DOI: 10.1109/ICRA40945.2020.9197440.
[32] XU W, ZHANG F. FAST-LIO: A Fast, Robust LiDAR-Inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter[J/OL]. IEEE Robotics and Automation Letters, 2021, 6(2): 3317-3324. DOI: 10.1109/LRA.2021.3064227.
[33] XU W, CAI Y, HE D, et al. FAST-LIO2: Fast Direct LiDAR-Inertial Odometry[J/OL]. IEEE Transactions on Robotics, 2022, 38(4): 2053-2073. DOI: 10.1109/TRO.2022.3141876.
[34] LIN J, ZHENG C, XU W, et al. R2LIVE: A Robust, Real-Time, LiDAR-Inertial-Visual Tightly-Coupled State Estimator and Mapping[J/OL]. IEEE Robotics and Automation Letters, 2021, 6(4): 7469-7476. DOI: 10.1109/LRA.2021.3095515.
[35] LIN J, ZHANG F. R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state estimation and mapping package[C/OL]//2022 International Conference on Robotics and Automation (ICRA). 2022: 10672-10678. DOI: 10.1109/ICRA46639.2022.9811935.
[36] ZHENG C, ZHU Q, XU W, et al. FAST-LIVO: Fast and Tightly-coupled Sparse-Direct LiDAR-Inertial-Visual Odometry[C/OL]//2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2022: 4003-4009. DOI: 10.1109/IROS47612.2022.9981107.
[37] LIN J, YUAN C, CAI Y, et al. ImMesh: An Immediate LiDAR Localization and Meshing Framework[A/OL]. 2023.
[38] HARTLEY R, ZISSERMAN A. Multiple View Geometry in Computer Vision[M/OL]. West Nyack: Cambridge University Press, 2004. DOI: 10.1017/CBO9780511811685.
[39] FISCHLER M A, BOLLES R C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography[J]. Communications of the ACM, 1981, 24(6): 381-395.
[40] SUN D, YANG X, LIU M Y, et al. PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume[C/OL]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2018: 8934-8943. DOI: 10.1109/CVPR.2018.00931.
[41] SUN Y, LIU M, MENG M Q H. Improving RGB-D SLAM in Dynamic Environments: A Motion Removal Approach[J/OL]. Robotics and Autonomous Systems, 2016, 89: 110-122. DOI: 10.1016/j.robot.2016.11.012.
[42] KIM D H, KIM J H. Effective Background Model-Based RGB-D Dense Visual Odometry in a Dynamic Environment[J/OL]. IEEE Transactions on Robotics, 2016, 32(6): 1565-1573. DOI: 10.1109/TRO.2016.2609395.
[43] SCONA R, JAIMEZ M, PETILLOT Y R, et al. StaticFusion: Background Reconstruction for Dense RGB-D SLAM in Dynamic Environments[C/OL]//2018 IEEE International Conference on Robotics and Automation (ICRA). 2018: 3849-3856. DOI: 10.1109/ICRA.2018.8460681.
[44] ZHANG T, ZHANG H, LI Y, et al. FlowFusion: Dynamic Dense RGB-D SLAM Based on Optical Flow[C/OL]//2020 IEEE International Conference on Robotics and Automation (ICRA). 2020: 7322-7328. DOI: 10.1109/ICRA40945.2020.9197349.
[45] DAI W, ZHANG Y, LI P, et al. RGB-D SLAM in Dynamic Environments Using Point Correlations[J/OL]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 373-389. DOI: 10.1109/TPAMI.2020.3010942.
[46] YU C, LIU Z, LIU X J, et al. DS-SLAM: A Semantic Visual SLAM towards Dynamic Environments[C/OL]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2018: 1168-1174. DOI: 10.1109/IROS.2018.8593691.
[47] BADRINARAYANAN V, KENDALL A, CIPOLLA R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation[J/OL]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12): 2481-2495. DOI: 10.1109/TPAMI.2016.2644615.
[48] BESCOS B, FÁCIL J M, CIVERA J, et al. DynaSLAM: Tracking, Mapping, and Inpainting in Dynamic Scenes[J/OL]. IEEE Robotics and Automation Letters, 2018, 3(4): 4076-4083. DOI: 10.1109/LRA.2018.2860039.
[49] HE K, GKIOXARI G, DOLLÁR P, et al. Mask R-CNN[C/OL]//2017 IEEE International Conference on Computer Vision (ICCV). 2017: 2980-2988. DOI: 10.1109/ICCV.2017.322.
[50] WANG K, LIN Y, WANG L, et al. A Unified Framework for Mutual Improvement of SLAM and Semantic Segmentation[C/OL]//2019 International Conference on Robotics and Automation (ICRA). 2019: 5224-5230. DOI: 10.1109/ICRA.2019.8793499.
[51] JI T, WANG C, XIE L. Towards Real-time Semantic RGB-D SLAM in Dynamic Environments[C/OL]//2021 IEEE International Conference on Robotics and Automation (ICRA). 2021: 11175-11181. DOI: 10.1109/ICRA48506.2021.9561743.
[52] WANG C C, THORPE C, THRUN S, et al. Simultaneous Localization, Mapping and Moving Object Tracking[J/OL]. The International Journal of Robotics Research, 2007, 26(9): 889-916. DOI: 10.1177/0278364907081229.
[53] KUNDU A, KRISHNA K M, JAWAHAR C V. Realtime multibody visual SLAM with a smoothly moving monocular camera[C/OL]//2011 International Conference on Computer Vision. 2011: 2080-2087. DOI: 10.1109/ICCV.2011.6126482.
[54] REDDY N D, SINGHAL P, CHARI V, et al. Dynamic body VSLAM with semantic constraints[C/OL]//2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2015: 1897-1904. DOI: 10.1109/IROS.2015.7353626.
[55] JUDD K M, GAMMELL J D, NEWMAN P. Multimotion Visual Odometry (MVO): Simultaneous Estimation of Camera and Third-Party Motions[C/OL]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2018: 3949-3956. DOI: 10.1109/IROS.2018.8594213.
[56] YANG S, SCHERER S. CubeSLAM: Monocular 3-D Object SLAM[J/OL]. IEEE Transactions on Robotics, 2019, 35(4): 925-938. DOI: 10.1109/TRO.2019.2909168.
[57] MCCORMAC J, HANDA A, DAVISON A, et al. SemanticFusion: Dense 3D semantic mapping with convolutional neural networks[C/OL]//2017 IEEE International Conference on Robotics and Automation (ICRA). 2017: 4628-4635. DOI: 10.1109/ICRA.2017.7989538.
[58] HERMANS A, FLOROS G, LEIBE B. Dense 3D semantic mapping of indoor scenes from RGB-D images[C/OL]//2014 IEEE International Conference on Robotics and Automation (ICRA). 2014: 2631-2638. DOI: 10.1109/ICRA.2014.6907236.
[59] MCCORMAC J, CLARK R, BLOESCH M, et al. Fusion++: Volumetric Object-Level SLAM[C/OL]//2018 International Conference on 3D Vision (3DV). 2018: 32-41. DOI: 10.1109/3DV.2018.00015.
[60] RUNZ M, BUFFIER M, AGAPITO L. MaskFusion: Real-Time Recognition, Tracking and Reconstruction of Multiple Moving Objects[C/OL]//2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 2018: 10-20. DOI: 10.1109/ISMAR.2018.00024.
[61] XU B, LI W, TZOUMANIKAS D, et al. MID-Fusion: Octree-based Object-Level Multi-Instance Dynamic SLAM[C/OL]//2019 International Conference on Robotics and Automation (ICRA). 2019: 5231-5237. DOI: 10.1109/ICRA.2019.8794371.
[62] CHU Q, OUYANG W, LI H, et al. Online Multi-object Tracking Using CNN-Based Single Object Tracker with Spatial-Temporal Attention Mechanism[C/OL]//2017 IEEE International Conference on Computer Vision (ICCV). 2017: 4846-4855. DOI: 10.1109/ICCV.2017.518.
[63] HENEIN M, ZHANG J, MAHONY R, et al. Dynamic SLAM: The Need For Speed[C/OL]//2020 IEEE International Conference on Robotics and Automation (ICRA). 2020: 2123-2129. DOI: 10.1109/ICRA40945.2020.9196895.
[64] ZHANG J, HENEIN M, MAHONY R, et al. VDO-SLAM: A Visual Dynamic Object-aware SLAM System[A/OL]. arXiv: 2005.11052, 2020. DOI: 10.48550/arXiv.2005.11052.
[65] BESCOS B, CAMPOS C, TARDÓS J D, et al. DynaSLAM II: Tightly-Coupled Multi-Object Tracking and SLAM[J/OL]. IEEE Robotics and Automation Letters, 2021, 6(3): 5191-5198. DOI: 10.1109/LRA.2021.3068640.
[66] QIU Y, WANG C, WANG W, et al. AirDOS: Dynamic SLAM benefits from Articulated Objects[C/OL]//2022 International Conference on Robotics and Automation (ICRA). 2022: 8047-8053. DOI: 10.1109/ICRA46639.2022.9811667.
[67] FANG H S, XIE S, TAI Y W, et al. RMPE: Regional Multi-person Pose Estimation[C/OL]//2017 IEEE International Conference on Computer Vision (ICCV). 2017: 2353-2362. DOI: 10.1109/ICCV.2017.256.
[68] KIM G, KIM A. Remove, then Revert: Static Point cloud Map Construction using Multiresolution Range Images[C/OL]//2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2020: 10758-10765. DOI: 10.1109/IROS45743.2020.9340856.
[69] MILDENHALL B, SRINIVASAN P P, TANCIK M, et al. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis[C/OL]//ECCV. 2020. DOI: 10.1007/978-3-030-58452-8_24.
[70] TEED Z, DENG J. DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras[C/OL]//RANZATO M, BEYGELZIMER A, DAUPHIN Y, et al. Advances in Neural Information Processing Systems: volume 34. Curran Associates, Inc., 2021: 16558-16569. https://proceedings.neurips.cc/paper_files/paper/2021/file/89fcd07f20b6785b92134bd6c1d0fa42-Paper.pdf.
[71] ROSINOL A, GUPTA A, ABATE M, et al. 3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans[C/OL]//Proceedings of Robotics: Science and Systems. Corvallis, Oregon, USA, 2020. DOI: 10.15607/RSS.2020.XVI.079.
[72] LI Y, ZHAO H, QI X, et al. Fully Convolutional Networks for Panoptic Segmentation[C/OL]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2021: 214-223. DOI: 10.1109/CVPR46437.2021.00028.
[73] LEE J H, HAN M K, KO D W, et al. From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation[A/OL]. arXiv: 1907.10326, 2019. DOI: 10.48550/arXiv.1907.10326.
[74] VIGUERAS F, HERNÁNDEZ A, MALDONADO I. Iterative Linear Solution of the Perspective n-Point Problem Using Unbiased Statistics[C/OL]//2009 Eighth Mexican International Conference on Artificial Intelligence. 2009: 59-64. DOI: 10.1109/MICAI.2009.39.
[75] CHEN X, MA H, WAN J, et al. Multi-view 3D Object Detection Network for Autonomous Driving[C/OL]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017: 6526-6534. DOI: 10.1109/CVPR.2017.691.
[76] MILIOTO A, VIZZO I, BEHLEY J, et al. RangeNet++: Fast and Accurate LiDAR Semantic Segmentation[C/OL]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2019: 4213-4220. DOI: 10.1109/IROS40897.2019.8967762.
[77] KIRILLOV A, HE K, GIRSHICK R, et al. Panoptic Segmentation[C/OL]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2019: 9396-9405. DOI: 10.1109/CVPR.2019.00963.
[78] LIN T, MAIRE M, BELONGIE S J, et al. Microsoft COCO: Common Objects in Context[C/OL]//ECCV: Lecture Notes in Computer Science, volume 8693. 2014: 740-755. DOI: 10.1007/978-3-319-10602-1_48.
[79] GRINVALD M, FURRER F, NOVKOVIC T, et al. Volumetric Instance-Aware Semantic Mapping and 3D Object Discovery[J/OL]. IEEE Robotics and Automation Letters, 2019, 4(3): 3037-3044. DOI: 10.1109/LRA.2019.2923960.
[80] BARATH D, MATAS J, NOSKOVA J. MAGSAC: Marginalizing Sample Consensus[C/OL]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2019: 10189-10197. DOI: 10.1109/CVPR.2019.01044.
[81] YU C, LIU Z, LIU X J, et al. DS-SLAM: A Semantic Visual SLAM towards Dynamic Environments[C/OL]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2018: 1168-1174. DOI: 10.1109/IROS.2018.8593691.
[82] LIU Y, MIURA J. RDS-SLAM: Real-Time Dynamic SLAM Using Semantic Segmentation Methods[J/OL]. IEEE Access, 2021, 9: 23772-23785. DOI: 10.1109/ACCESS.2021.3050617.