Title

面向自动驾驶3D目标检测数据集的主动学习优化和快速统计评估

Alternative Title
ACTIVE LEARNING OPTIMIZATION AND FAST STATISTICAL EVALUATION FOR 3D OBJECT DETECTION DATASET IN AUTONOMOUS DRIVING
Name
雷晨阳
Name (Pinyin)
LEI Chenyang
Student ID
12132336
Degree Type
Master
Degree Discipline
0809 Electronic Science and Technology
Discipline Category / Professional Degree Category
08 Engineering
Supervisor
郝祁 (HAO Qi)
Supervisor's Affiliation
Department of Computer Science and Engineering
Thesis Defense Date
2024-05-12
Thesis Submission Date
2024-06-26
Degree-Granting Institution
Southern University of Science and Technology
Place of Degree Conferral
Shenzhen
Abstract

In autonomous driving, 3D object detection provides critical environment perception capability and is an essential part of the overall system. Obtaining a 3D object detection model with stronger generalization ability typically requires a large amount of point cloud data for training, yet annotating point clouds is time-consuming and difficult. Under a limited annotation budget, how to construct and annotate a high-quality, optimized dataset from the raw unlabeled data, so as to improve the performance of the 3D object detection model trained on it, is the first problem this thesis addresses. Once the optimized dataset has been constructed, it must also be evaluated to verify the quality of the optimization. Current studies usually evaluate by training detection algorithms, which consumes a great deal of time and computing resources; how to design a fast and effective dataset evaluation method that does not rely on training is therefore another problem to be considered.

For dataset optimization, most current research adopts active learning. However, existing active learning work focuses mainly on classification and 2D object detection, and does not address the class imbalance, heavy redundancy, and lack of complex scenes in 3D object detection datasets. With these problems in mind, this thesis proposes an active learning dataset optimization method for the 3D object detection task, whose contributions are: 1) a class-entropy metric, which effectively alleviates class imbalance by computing the entropy of the distribution of category counts in a scene; 2) a scene-similarity metric, which builds a graph model of each scene and computes the similarity between graphs, so that less redundant data can be selected; 3) a perception-uncertainty metric, which builds a mixture density network to efficiently estimate aleatoric and epistemic uncertainty, so that more complex scene data can be selected; and 4) a three-stage hybrid sampling strategy, which applies the three metrics stage by stage to better balance their respective effects.

For the evaluation of the optimized dataset, statistics-based methods can shorten the computation time, but current research in this direction neither designs comprehensive indicators for a specific task nor avoids subjectivity, since the weights of different indicators are assigned directly from prior knowledge. With these problems in mind, this thesis proposes a multi-indicator statistical dataset evaluation method, whose contributions are: 1) several statistical data indicators designed for the 3D object detection task, quantified with methods such as entropy and density; and 2) an indicator-weight computation method based on machine learning and the analytic hierarchy process (AHP), which effectively reduces the influence of subjective weight assignment.
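Since the abstract names the class-entropy metric only at a high level, the following is a minimal Python sketch, under stated assumptions, of how such a score could be computed: a scene is represented by the list of its object category labels, and the Shannon entropy of the resulting class-count distribution serves as the acquisition score, so scenes with a more balanced class mix score higher. The category set, function name, and normalization by the maximum entropy are illustrative choices, not the thesis's exact formulation.

```python
import math
from collections import Counter
from typing import Iterable, Sequence

# Hypothetical category set; the thesis's actual class list is not specified here.
CLASSES = ("Car", "Pedestrian", "Cyclist")

def class_entropy(labels: Iterable[str], classes: Sequence[str] = CLASSES) -> float:
    """Shannon entropy of the per-scene class-count distribution.

    A scene whose objects are spread evenly over the categories scores close
    to 1.0 (after normalization); a scene dominated by one class scores 0.0.
    """
    counts = Counter(lbl for lbl in labels if lbl in classes)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    entropy = 0.0
    for c in classes:
        p = counts[c] / total
        if p > 0:
            entropy -= p * math.log(p)
    # Normalize by the maximum possible entropy so scores are comparable
    # across category sets of different sizes (an assumption, not from the thesis).
    return entropy / math.log(len(classes))

# Toy usage: a mixed scene scores higher than a car-only scene.
print(class_entropy(["Car", "Pedestrian", "Cyclist", "Car"]))  # ~0.95
print(class_entropy(["Car", "Car", "Car", "Car"]))             # 0.0
```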
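The analytic hierarchy process used for indicator weighting in the evaluation method can be sketched in the same spirit: a pairwise-comparison matrix over the statistical indicators yields weights from its principal eigenvector, and a consistency ratio checks whether the comparisons are coherent. The example matrix and indicator names below are hypothetical, and the machine learning component that the thesis combines with AHP is not shown.

```python
import numpy as np

# Standard AHP random-consistency indices for matrix sizes 1..9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise: np.ndarray) -> tuple[np.ndarray, float]:
    """Return (weights, consistency_ratio) for a pairwise-comparison matrix."""
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                             # normalized indicator weights
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)             # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0       # consistency ratio (< 0.1 is acceptable)
    return w, cr

# Toy pairwise matrix over three hypothetical indicators
# (class balance, scene diversity, scene complexity).
A = np.array([
    [1.0, 2.0, 3.0],
    [1/2, 1.0, 2.0],
    [1/3, 1/2, 1.0],
])
weights, cr = ahp_weights(A)
print(weights, cr)  # roughly [0.54, 0.30, 0.16], CR well below 0.1
```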

Experiments with the above optimization and evaluation methods are conducted on the KITTI and Lyft datasets. The results show that the optimized datasets obtained by the proposed active learning method effectively improve the performance of 3D object detection models and outperform existing state-of-the-art methods, and that the proposed multi-indicator statistical evaluation method can evaluate an optimized dataset in a short time with results largely consistent with those of training-based evaluation.

Keywords
Language
Chinese
Training Category
Independent Training
Year of Enrollment
2021
Year of Degree Conferral
2024-06

Degree Evaluation Subcommittee
Electronic Science and Technology
Chinese Library Classification (CLC) Number
TP391.4
Source Database
Manually Submitted
Item Type
Thesis
Item Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/766135
Collection
College of Engineering_Department of Computer Science and Engineering
Recommended Citation
GB/T 7714
雷晨阳. 面向自动驾驶3D目标检测数据集的主动学习优化和快速统计评估[D]. 深圳: 南方科技大学, 2024.