Title

Research on SLAM Based on Multi-Source Information Fusion in Dynamic Environments

Alternative Title
RESEARCH ON SIMULTANEOUS LOCALIZATION AND MAPPING BASED ON MULTI-SOURCE INFORMATION FUSION IN DYNAMIC ENVIRONMENTS
Name
朱虎
Name (Pinyin)
ZHU Hu
Student ID
12032412
Degree Type
Master
Degree Discipline
0801Z1 Intelligent Manufacturing and Robotics
Subject Category / Professional Degree Category
08 Engineering
Supervisor
贾振中
Supervisor's Department
Department of Mechanical and Energy Engineering
Thesis Defense Date
2023-05-13
Thesis Submission Date
2023-06-29
Degree-Granting Institution
Southern University of Science and Technology
Degree-Granting Location
Shenzhen
Abstract

With the development and commercialization of fields such as intelligent robotics and autonomous driving, Simultaneous Localization and Mapping (SLAM), as one of their key technologies, has attracted extensive attention from both academia and industry. Although this research area has achieved remarkable results in recent years, the robustness and scalability of SLAM in practical applications still face challenges. Previous SLAM algorithms assume that the surrounding environment is static and perform localization and mapping by extracting corresponding features or landmarks. Real scenes, however, are usually complex and full of dynamic objects, which poses great challenges to SLAM-related algorithms in practice. Moving objects in real scenes not only cause drift in localization but also leave accumulated ghosting artifacts in the constructed map, rendering the map non-reusable. This paper studies how to improve the localization robustness and mapping perception capability of SLAM algorithms by combining deep learning networks (e.g., panoptic segmentation and depth estimation) with geometric information. The main research contents are as follows:
(1) To address self-localization and vehicle tracking in dynamic outdoor scenes, this paper proposes a joint optimization model for localization, mapping, and multi-object tracking. Using multimodal data inputs, motion priors in the scene are obtained through panoptic segmentation, so that moving objects can be represented and tracked in a unified way. In the SLAM back end, a new optimization model is built over the landmarks of moving objects and the static background together with the camera pose, jointly handling the poses of moving objects, the static map, and the camera pose. Experimental results show that the method tracks moving objects effectively.
(2) To quickly build a static map free of moving points, this paper designs a point-cloud-visibility-based method with multiple projection views and multiple resolutions, and adopts a two-stage scheme for removing moving points that first recovers static points and then removes moving ones, which significantly improves the accuracy of moving-point detection. Experimental results show that the method can quickly build a map with moving-object points removed, solving the problem that previous maps had to be processed offline.
(3) To address the degradation of localization accuracy caused by dynamic objects, this paper proposes a scheme that fuses panoptic segmentation with geometric information to remove the influence of moving objects. A method combining learned semantic cues and geometric constraints is used to separate moving objects from the static background in the scene. For unknown moving objects, a coarse-to-fine two-stage feature-point classification method is designed to ensure that the feature points used for mapping and localization are static. This improves localization accuracy, and a higher-level semantic map is built from the semantic information of the panoptic segmentation. Experiments on both datasets and real scenes show higher localization accuracy, and the resulting semantic map is free of the ghosting artifacts produced by large numbers of moving objects.

For the above research contents, the proposed algorithms are systematically tested and validated on public datasets such as KITTI and TUM RGB-D as well as on self-collected scenes, and the outputs are evaluated both qualitatively and quantitatively. The results show that the proposed algorithms can remove and track moving objects in complex scenes, improve localization accuracy, and build cleaner maps, helping to improve the intelligence and robustness of SLAM algorithms in robotic applications.

Other Abstract

With the development and commercialization of fields such as intelligent robotics and autonomous driving, Simultaneous Localization and Mapping (SLAM) has received significant attention from both academia and industry. This research area has achieved remarkable results in recent years. However, the robustness and scalability of SLAM in practical applications still pose certain challenges. Previous SLAM algorithms have assumed the surrounding environment to be in a static state, relying on extracting corresponding features or landmarks for localization and mapping.

However, real-world scenarios are typically complex and filled with dynamic objects, which presents significant challenges for SLAM-related algorithms in practical applications. Moving objects in real scenes not only cause drift in localization but also leave accumulated ghosting artifacts in the constructed maps, rendering them non-reusable. This paper investigates how to enhance the robustness of SLAM localization and the perceptual capability of mapping through the integration of deep learning networks (such as panoptic segmentation and depth estimation) with geometric information. The main research contents of this paper are as follows:

(1) To address the self-localization and vehicle tracking challenges in dynamic outdoor scenes, this paper proposes a joint optimization model for localization, mapping, and multi-object tracking. The model utilizes multimodal data inputs and leverages panoptic segmentation to obtain motion priors in the scene, enabling a unified representation and tracking of moving objects. In the SLAM backend, a new optimization model is developed to handle the landmark points of both moving objects and static background, as well as the camera’s own pose. The proposed method effectively tracks the poses of moving objects, as demonstrated by experimental results.
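To make the structure of such a joint back-end concrete, the following is a minimal, self-contained sketch (not the thesis implementation): a toy 2-D problem in which the second camera pose, the static landmarks, and a moving object's per-frame poses are refined together from noisy relative observations. The use of scipy.optimize.least_squares, the 2-D state, and all numeric values are illustrative assumptions.

    # Toy example (illustrative, not the thesis code): jointly optimize the second
    # camera pose, static landmarks, and a moving object's per-frame poses in 2-D,
    # given noisy relative observations. Requires numpy and scipy.
    import numpy as np
    from scipy.optimize import least_squares

    def rot(th):
        c, s = np.cos(th), np.sin(th)
        return np.array([[c, -s], [s, c]])

    def observe(cam, world_pt):
        """Express a world point in the frame of a camera pose (x, y, theta)."""
        return rot(cam[2]).T @ (world_pt - cam[:2])

    # Ground truth, used only to simulate measurements.
    cam_gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.2, 0.1]])      # camera poses at t = 0, 1
    stat_gt = np.array([[4.0, 1.0], [3.0, -2.0]])              # static landmarks (world frame)
    obj_gt = np.array([[5.0, 0.0, 0.0], [5.5, 0.5, 0.2]])      # moving-object pose per frame
    obj_pts = np.array([[0.5, 0.0], [-0.5, 0.3]])              # points fixed in the object frame

    rng = np.random.default_rng(0)
    z_stat = np.array([[observe(c, l) for l in stat_gt] for c in cam_gt])
    z_obj = np.array([[observe(c, rot(o[2]) @ p + o[:2]) for p in obj_pts]
                      for c, o in zip(cam_gt, obj_gt)])
    z_stat += 0.01 * rng.standard_normal(z_stat.shape)
    z_obj += 0.01 * rng.standard_normal(z_obj.shape)

    def residuals(x):
        """Joint residuals; the first camera pose is fixed at the origin (gauge)."""
        cams = np.vstack([cam_gt[0], x[:3]])
        lms = x[3:7].reshape(2, 2)
        objs = x[7:13].reshape(2, 3)
        res = []
        for t, c in enumerate(cams):
            for j, l in enumerate(lms):
                res.append(observe(c, l) - z_stat[t, j])         # static-map factors
            for k, p in enumerate(obj_pts):
                w = rot(objs[t, 2]) @ p + objs[t, :2]            # object point in world frame
                res.append(observe(c, w) - z_obj[t, k])          # dynamic-object factors
        return np.concatenate(res)

    x0 = np.concatenate([cam_gt[1], stat_gt.ravel(), obj_gt.ravel()])
    x0 += 0.1 * rng.standard_normal(x0.shape)                    # perturbed initial guess
    sol = least_squares(residuals, x0)
    print("cost before/after:", 0.5 * np.sum(residuals(x0) ** 2),
          0.5 * np.sum(residuals(sol.x) ** 2))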

(2) In order to achieve online construction of a static map without moving point clouds, this paper proposes a method based on point cloud visualization with multiple projection views and multiple resolutions. To remove moving point clouds, a two-stage processing pipeline is adopted that first restores static point clouds and then removes moving point clouds. This significantly improves the accuracy of detecting moving point clouds. Experimental results demonstrate that the proposed method can achieve online construction of maps without moving point clouds, solving the problem of offline processing required by previous mapping methods.
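The visibility reasoning behind this kind of moving-point detection can be illustrated with a small sketch. This is a simplified single-view, single-resolution version under assumed sensor parameters (1° x 2° range-image bins, a 0.3 m margin); the method described above additionally uses multiple projection views, multiple resolutions, and a restore-then-remove two-stage scheme.

    # Illustrative sketch (not the thesis code): flag map points that the current
    # LiDAR scan "sees through" by comparing ranges in a spherical range image.
    # Assumes map and scan points are already expressed in the scan's sensor frame.
    import numpy as np

    def spherical_project(points, h_res_deg=1.0, v_res_deg=2.0):
        """Return (row, col, range) of each point in a spherical range image."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.linalg.norm(points, axis=1)
        az = np.degrees(np.arctan2(y, x))                        # azimuth in [-180, 180]
        el = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))      # elevation
        col = ((az + 180.0) / h_res_deg).astype(int)
        row = ((el + 90.0) / v_res_deg).astype(int)
        return row, col, r

    def dynamic_point_mask(map_pts, scan_pts, h_res_deg=1.0, v_res_deg=2.0, margin=0.3):
        """True for map points whose pixel is observed farther away by the scan."""
        rows_s, cols_s, r_s = spherical_project(scan_pts, h_res_deg, v_res_deg)
        h = int(180 / v_res_deg) + 1
        w = int(360 / h_res_deg) + 1
        scan_range = np.full((h, w), np.inf)
        np.minimum.at(scan_range, (rows_s, cols_s), r_s)         # nearest return per pixel

        rows_m, cols_m, r_m = spherical_project(map_pts, h_res_deg, v_res_deg)
        seen = scan_range[rows_m, cols_m]
        # If the scan's return in this pixel is farther than the map point (plus a
        # margin), the space the map point occupied is now free -> it likely moved.
        return np.isfinite(seen) & (seen > r_m + margin)

    # Toy usage: a "wall" at 10 m stays in place, a "car" at 5 m has driven away.
    rng = np.random.default_rng(1)
    wall = np.column_stack([np.full(100, 10.0), rng.uniform(-2, 2, 100), rng.uniform(-1, 1, 100)])
    car = np.column_stack([np.full(50, 5.0), rng.uniform(-1, 1, 50), rng.uniform(-0.5, 0.5, 50)])
    map_pts = np.vstack([wall, car])
    scan_pts = wall + 0.02 * rng.standard_normal(wall.shape)     # current scan sees only the wall
    mask = dynamic_point_mask(map_pts, scan_pts)
    print("flagged as dynamic:", mask.sum(), "of", len(map_pts))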

(3) To address the degradation of localization accuracy caused by moving objects, this paper proposes a solution that fuses panoptic segmentation with geometric information to remove their influence. A method combining learned semantic cues and geometric constraints is employed to separate moving objects from the static background in the scene. For unknown moving objects, a coarse-to-fine two-stage feature-point classification method is designed, ensuring that the feature points used for mapping and localization are static. The method improves localization accuracy and constructs a higher-level semantic map from the semantic information provided by panoptic segmentation. Experiments on both public datasets and real-world scenes demonstrate higher localization accuracy and a semantic map largely free of ghosting artifacts caused by moving objects.
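A minimal sketch of a coarse-to-fine static/dynamic split for matched feature points is given below, assuming hypothetical panoptic class ids, toy camera intrinsics, and a known relative pose; in a real pipeline the fundamental matrix would be estimated robustly (e.g., by RANSAC) from points that the semantic prior already deems static.

    # Illustrative sketch (not the thesis code): coarse semantic prior + fine
    # epipolar-geometry check to keep only static feature matches. All class ids,
    # intrinsics, and thresholds are assumptions for this toy example.
    import numpy as np

    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])  # toy intrinsics
    DYNAMIC_PRIOR = {11, 13}   # hypothetical panoptic ids treated as "known dynamic" classes

    def skew(t):
        return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

    def epipolar_distance(F, p1, p2):
        """Symmetric point-to-epipolar-line distance (pixels) for matches p1 <-> p2."""
        x1 = np.column_stack([p1, np.ones(len(p1))])
        x2 = np.column_stack([p2, np.ones(len(p2))])
        l2 = x1 @ F.T                                   # epipolar lines in image 2
        l1 = x2 @ F                                     # epipolar lines in image 1
        d2 = np.abs(np.sum(x2 * l2, axis=1)) / np.linalg.norm(l2[:, :2], axis=1)
        d1 = np.abs(np.sum(x1 * l1, axis=1)) / np.linalg.norm(l1[:, :2], axis=1)
        return 0.5 * (d1 + d2)

    def classify_static(p1, p2, labels, R, t, thresh_px=1.0):
        """Coarse: drop classes with a dynamic prior. Fine: epipolar check on the rest."""
        coarse = np.array([lab not in DYNAMIC_PRIOR for lab in labels])
        F = np.linalg.inv(K).T @ skew(t) @ R @ np.linalg.inv(K)   # from relative pose (R, t)
        fine = epipolar_distance(F, p1, p2) < thresh_px
        return coarse & fine

    def project(P):
        uvw = P @ K.T
        return uvw[:, :2] / uvw[:, 2:3]

    # Toy usage: four 3-D points seen by two cameras; the last point moves between
    # frames, the third is vetoed by the semantic prior despite being consistent.
    X1 = np.array([[0.0, 0.0, 5.0], [1.0, -0.5, 6.0], [-1.0, 0.5, 4.0], [0.5, 0.0, 5.0]])
    R, t = np.eye(3), np.array([-0.2, 0.0, 0.0])        # camera1 -> camera2: X2 = R @ X1 + t
    X2 = X1.copy()
    X2[3] += np.array([0.0, 0.4, 0.0])                  # independent object motion
    p1 = project(X1)
    p2 = project(X2 @ R.T + t)
    labels = np.array([0, 0, 11, 0])                    # panoptic class id per feature
    print(classify_static(p1, p2, labels, R, t))        # expected: [ True  True False False]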

Regarding the above research content, the proposed algorithms were systematically tested and evaluated on publicly available datasets such as KITTI and TUM RGB-D, as well as on scenes collected by the author. The outputs were evaluated both qualitatively and quantitatively, showing that the proposed algorithms can remove and track moving objects in complex scenes. The experimental results demonstrate improved localization accuracy and cleaner point-cloud maps, which helps enhance the intelligence and robustness of SLAM algorithms in robotics applications.

 

Keywords
Other Keywords
Language
Chinese
Training Category
Independent training
Year of Enrollment
2020
Year Degree Conferred
2023-06

Degree Evaluation Subcommittee
Mechanics
Chinese Library Classification (CLC) Number
TP242.6
Source Repository
Manual submission
Document Type: Thesis
Identifier: http://sustech.caswiz.com/handle/2SGJ60CL/544602
Collection: College of Engineering - Department of Mechanical and Energy Engineering
Recommended Citation
GB/T 7714
朱虎. 动态环境下基于多源信息融合的 SLAM 研究[D]. 深圳: 南方科技大学, 2023.
Files in This Item
12032412-朱虎-机械与能源工程系 (41319 KB): restricted access (full text available on request)