[1] LI Y, MA L, ZHONG Z, et al. Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(8): 3412-3432.
[2] WU B, WAN A, YUE X, et al. SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D Lidar Point Cloud[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 1887-1893.
[3] ZHAN X, WANG Q, HUANG K, et al. A Comparative Survey of Deep Active Learning[J/OL]. CoRR, 2022, abs/2203.13450. https://doi.org/10.48550/arXiv.2203.13450.
[4] WANG D, SHANG Y. A New Active Labeling Method for Deep Learning[C]//2014 International Joint Conference on Neural Networks (IJCNN). IEEE, 2014: 112-119.
[5] KAO C C, LEE T Y, SEN P, et al. Localization-Aware Active Learning for Object Detection [C]//14th Asian Conference on Computer Vision (ACCV). Springer, 2019: 506-522.
[6] ROY S, UNMESH A, NAMBOODIRI V P. Deep Active Learning for Object Detection[C]//29th British Machine Vision Conference (BMVC). BMVA Press, 2018: 91.
[7] AGHDAM H H, GONZALEZ-GARCIA A, WEIJER J V D, et al. Active Learning for Deep Detection Neural Networks[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019: 3672-3680.
[8] LI X, GUO Y. Adaptive Active Learning for Image Classification[C]//2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2013: 859-866.
[9] YIN C, QIAN B, CAO S, et al. Deep Similarity-Based Batch Mode Active Learning with Exploration-Exploitation[C]//2017 IEEE International Conference on Data Mining (ICDM). IEEE, 2017: 575-584.
[10] HEKIMOGLU A, SCHMIDT M, MARCOS-RAMIRO A, et al. Efficient Active Learning Strategies for Monocular 3D Object Detection[C]//2022 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2022: 295-302.
[11] GEIGER A, LENZ P, URTASUN R. Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite[C]//2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2012: 3354-3361.
[12] HOUSTON J, ZUIDHOF G, BERGAMINI L, et al. One Thousand and One Hours: Self-driving Motion Prediction Dataset[C]//4th Conference on Robot Learning (CoRL). PMLR, 2021: 409-418.
[13] SADAT A, SEGAL S, CASAS S, et al. Diverse Complexity Measures for Dataset Curation in Self-Driving[C]//2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021: 8609-8616.
[14] LIU M, YURTSEVER E, ZHOU X, et al. A Survey on Autonomous Driving Datasets: Data Statistic, Annotation, and Outlook[J/OL]. CoRR, 2024, abs/2401.01454. https://doi.org/10.48550/arXiv.2401.01454.
[15] WANG Y, LI K, HU Y, et al. Modeling and Quantitative Assessment of Environment Complexity for Autonomous Vehicles[C]//2020 Chinese Control And Decision Conference (CCDC). IEEE, 2020: 2124-2129.
[16] BI H, PERELLO-NIETO M, SANTOS-RODRIGUEZ R, et al. Human Activity Recognition Based on Dynamic Active Learning[J]. IEEE Journal of Biomedical and Health Informatics, 2021, 25(4): 922-934.
[17] CAO X, YAO J, XU Z, et al. Hyperspectral Image Classification with Convolutional Neural Network and Active Learning[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(7): 4604-4616.
[18] YAO J, CAO X, HONG D, et al. Semi-Active Convolutional Neural Networks for Hyperspectral Image Classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 1-15.
[19] BELUCH W H, GENEWEIN T, NÜRNBERGER A, et al. The Power of Ensembles for Active Learning in Image Classification[C]//2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018: 9368-9377.
[20] SCHMIDT S, RAO Q, TATSCH J, et al. Advanced Active Learning Strategies for Object Detection[C]//2020 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2020: 871-876.
[21] TSYMBALOV E, PANOV M, SHAPEEV A. Dropout-Based Active Learning for Regression[C]//Analysis of Images, Social Networks and Texts: 7th International Conference (AIST). Springer, 2018: 247-258.
[22] FENG D, WEI X, ROSENBAUM L, et al. Deep Active Learning for Efficient Training of a LiDAR 3D Object Detector[C]//2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2019: 667-674.
[23] CHOI J, ELEZI I, LEE H J, et al. Active Learning for Deep Object Detection via Probabilistic Modeling[C]//2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021: 10264-10273.
[24] PITROPOV M, HUANG C, ABDELZAD V, et al. LiDAR-MIMO: Efficient Uncertainty Estimation for LiDAR-based 3D Object Detection[C]//2022 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2022: 813-820.
[25] LI C, MA H, KANG Z, et al. On Deep Unsupervised Active Learning[C]//Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI). 2020: 2626-2632.
[26] SENER O, SAVARESE S. Active Learning for Convolutional Neural Networks: A Core-Set Approach[C]//6th International Conference on Learning Representations (ICLR). OpenReview.net, 2018.
[27] SINHA S, EBRAHIMI S, DARRELL T. Variational Adversarial Active Learning[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019: 5972-5981.
[28] TANG Y P, HUANG S J. Self-Paced Active Learning: Query the Right Thing at the Right Time[C]//The 33rd AAAI Conference on Artificial Intelligence (AAAI). AAAI Press, 2019: 5117-5124.
[29] EBERT S, FRITZ M, SCHIELE B. RALF: A Reinforced Active Learning Formulation for Object Class Recognition[C]//2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2012: 3626-3633.
[30] CITOVSKY G, DESALVO G, GENTILE C, et al. Batch Active Learning at Scale[C]//Advances in Neural Information Processing Systems 34 (NeurIPS). 2021: 11933-11944.
[31] LIU P, WANG L, RANJAN R, et al. A Survey on Active Deep Learning: From Model Driven to Data Driven[J]. ACM Computing Surveys, 2022, 54(10s): 1-34.
[32] SENGE R, BÖSNER S, DEMBCZYŃSKI K, et al. Reliable Classification: Learning Classifiers that Distinguish Aleatoric and Epistemic Uncertainty[J]. Information Sciences, 2014, 255: 16-29.
[33] KENDALL A, GAL Y. What Uncertainties do We Need in Bayesian Deep Learning for Computer Vision?[C]//Advances in Neural Information Processing Systems 30 (NeurIPS). 2017: 5574-5584.
[34] GAL Y, GHAHRAMANI Z. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning[C]//Proceedings of the 33rd International Conference on Machine Learning (ICML). JMLR, 2016: 1050-1059.
[35] CHOI S, LEE K, LIM S, et al. Uncertainty-Aware Learning from Demonstration Using Mixture Density Networks with Sampling-Free Variance Modeling[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 6915-6922.
[36] DABLAIN D, KRAWCZYK B, CHAWLA N V. DeepSMOTE: Fusing Deep Learning and SMOTE for Imbalanced Data[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(9): 6390-6404.
[37] HUANG Y, BAI B, ZHAO S, et al. Uncertainty-Aware Learning Against Label Noise on Imbalanced Datasets[C]//The 36th AAAI Conference on Artificial Intelligence (AAAI). AAAI Press, 2022: 6960-6969.
[38] CHEN Z, LUO Y, WANG Z, et al. Revisiting Domain-Adaptive 3D Object Detection by Reliable, Diverse and Class-Balanced Pseudo-Labeling[C]//2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023: 3714-3726.
[39] PEREZ-ORTIZ M, TIŇO P, MANTIUK R, et al. Exploiting Synthetically Generated Data with Semi-Supervised Learning for Small and Imbalanced Datasets[C]//The 33rd AAAI Conference on Artificial Intelligence (AAAI). AAAI Press, 2019: 4715-4722.
[40] DONG Y, XIAO H, DONG Y. SA-CGAN: An Oversampling Method Based on Single Attribute Guided Conditional GAN for Multi-class Imbalanced Learning[J]. Neurocomputing, 2022, 472: 326-337.
[41] KASHIMA H, TSUDA K, INOKUCHI A. Marginalized Kernels Between Labeled Graphs[C]//Proceedings of the 20th International Conference on Machine Learning (ICML). JMLR, 2003: 321-328.
[42] ZHAN X, LIU H, LI Q, et al. A Comparative Survey: Benchmarking for Pool-based Active Learning[C]//Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI). 2021: 4679-4686.
[43] ASH J T, ZHANG C, KRISHNAMURTHY A, et al. Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds[C]//8th International Conference on Learning Representations (ICLR). OpenReview.net, 2020.
[44] VIJAYANARASIMHAN S, GRAUMAN K. Large-Scale Live Active Learning: Training Object Detectors with Crawled Data and Crowds[J]. International Journal of Computer Vision, 2014, 108: 97-114.
[45] DESAI S V, LAGANDULA A C, GUO W, et al. An Adaptive Supervision Framework for Active Learning in Object Detection[C]//30th British Machine Vision Conference (BMVC). BMVA Press, 2019: 230.
[46] HAUSSMANN E, FENZI M, CHITTA K, et al. Scalable Active Learning for Object Detection [C]//2020 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2020: 1430-1435.
[47] MOSES A, JAKKAMPUDI S, DANNER C, et al. Localization-Based Active Learning (LOCAL) for Object Detection in 3D Point Clouds[C]//Geospatial Informatics XII. SPIE, 2022: 44-58.
[48] LIANG Z, XU X, DENG S, et al. Exploring Diversity-based Active Learning for 3D Object Detection in Autonomous Driving[J/OL]. CoRR, 2022, abs/2205.07708. https://doi.org/10.48550/arXiv.2205.07708.
[49] LUO Y, CHEN Z, WANG Z, et al. Exploring Active 3D Object Detection from a Generalization Perspective[C]//11th International Conference on Learning Representations (ICLR). OpenReview.net, 2023.
[50] EVERINGHAM M, VAN GOOL L, WILLIAMS C K, et al. The Pascal Visual Object Classes (VOC) Challenge[J]. International Journal of Computer Vision, 2010, 88(2): 303-338.
[51] YANG S, GAO L, ZHAO Y, et al. Research on the Quantitative Evaluation of the Traffic Environment Complexity for Unmanned Vehicles in Urban Roads[J]. IEEE Access, 2021, 9: 23139-23152.
[52] ZHANG L, MA Y, XING X, et al. Research on the Complexity Quantification Method of Driving Scenarios Based on Information Entropy[C]//24th IEEE International Intelligent Transportation Systems Conference (ITSC). IEEE, 2021: 3476-3481.
[53] WU X, XING X, CHEN J, et al. Risk Assessment Method for Driving Scenarios of Autonomous Vehicles Based on Drivable Area[C]//25th IEEE International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2022: 2206-2213.
[54] WANG J, WU J, LI Y. The Driving Safety Field Based on Driver–Vehicle–Road Interactions [J]. IEEE Transactions on Intelligent Transportation Systems, 2015, 16(4): 2203-2214.
[55] CHENG Y, LIU Z, GAO L, et al. Traffic Risk Environment Impact Analysis and Complexity Assessment of Autonomous Vehicles Based on the Potential Field Method[J]. International Journal of Environmental Research and Public Health, 2022, 19(16): 10337.
[56] ZIPFL M, JAROSCH M, ZOLLNER J M. Self Supervised Clustering of Traffic Scenes Using Graph Representations[C]//2022 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME). IEEE, 2022: 1-7.
[57] ZIPFL M, ZÖLLNER J M. Towards Traffic Scene Description: The Semantic Scene Graph [C]//25th IEEE International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2022: 3748-3755.
[58] CAESAR H, BANKITI V, LANG A H, et al. NuScenes: A Multimodal Dataset for Autonomous Driving[C]//2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020: 11621-11631.
[59] PHAM Q H, SEVESTRE P, PAHWA R S, et al. A*3D Dataset: Towards Autonomous Driving in Challenging Environments[C]//2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020: 2267-2273.
[60] PATIL A, MALLA S, GANG H, et al. The H3D Dataset for Full-Surround 3D Multi-Object Detection and Tracking in Crowded Urban Scenes[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 9552-9557.
[61] HUANG X, CHENG X, GENG Q, et al. The Apolloscape Dataset for Autonomous Driving[C]// 2018 IEEE Conference on Computer Vision and Pattern Recognition Workshops. IEEE, 2018: 954-960.
[62] SUN P, KRETZSCHMAR H, DOTIWALLA X, et al. Scalability in Perception for Autonomous Driving: Waymo Open Dataset[C]//2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020: 2446-2454.
[63] SIMONELLI A, BULO S R, PORZI L, et al. Disentangling Monocular 3D Object Detection [C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). 2019: 1991-1999.
[64] ALABDULMOHSIN I, GAO X, ZHANG X. Efficient Active Learning of Halfspaces via Query Synthesis[C]//The 29th AAAI Conference on Artificial Intelligence (AAAI). AAAI Press, 2015: 2483-2489.
[65] SCHUMANN R, REHBEIN I. Active Learning via Membership Query Synthesis for Semi-Supervised Sentence Classification[C]//Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). ACL, 2019: 472-481.
[66] ENGLHARDT A, BÖHM K. Exploring the Unknown–Query Synthesis in One-Class Active Learning[C]//Proceedings of the 2020 SIAM International Conference on Data Mining (SDM). SIAM, 2020: 145-153.
[67] LIU S, XUE S, WU J, et al. Online Active Learning for Drifting Data Streams[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(1): 186-200.
[68] CACCIARELLI D, KULAHCI M, TYSSEDAL J S. Stream-based Active Learning with Linear Models[J]. Knowledge-Based Systems, 2022, 254: 109664.
[69] SCHMIDT S, GÜNNEMANN S. Stream-based Active Learning by Exploiting Temporal Properties in Perception with Temporal Predicted Loss[C]//34th British Machine Vision Conference (BMVC). BMVA Press, 2023: 664.
[70] REN P, XIAO Y, CHANG X, et al. A Survey of Deep Active Learning[J]. ACM Computing Surveys, 2021, 54(9): 1-40.
[71] BEN-DAVID S, BLITZER J, CRAMMER K, et al. A Theory of Learning from Different Domains[J]. Machine Learning, 2010, 79: 151-175.
[72] HOULSBY N, HUSZAR F, GHAHRAMANI Z, et al. Bayesian Active Learning for Classifi- cation and Preference Learning[J/OL]. CoRR, 2011, abs/1112.5745. http://arxiv.org/abs/1112.5745.
[73] SUSTech ISUS Group. SUScape Dataset[EB/OL]. 2023. https://suscape.net/home.
[74] WANG Y, CHEN X, YOU Y, et al. Train in Germany, Test in the USA: Making 3D Object Detectors Generalize[C]//2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020: 11713-11723.
[75] SHI S, WANG X, LI H. PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud[C]//2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019: 770-779.
[76] ZHOU Y, TUZEL O. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection[C]//2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018: 4490-4499.
[77] LANG A H, VORA S, CAESAR H, et al. PointPillars: Fast Encoders for Object Detection from Point Clouds[C]//2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2019: 12697-12705.
[78] AGHDAM H H, HERAVI E J, DEMILEW S S, et al. RAD: Realtime and Accurate 3D Object Detection on Embedded Systems[C]//2021 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021: 2875-2883.
[79] LIU W, ANGUELOV D, ERHAN D, et al. SSD: Single Shot MultiBox Detector[C]//14th European Conference on Computer Vision (ECCV). Springer, 2016: 21-37.
[80] REN S, HE K, GIRSHICK R B, et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[81] DUAN K, BAI S, XIE L, et al. CenterNet: Keypoint Triplets for Object Detection[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019: 6569-6578.
[82] YIN T, ZHOU X, KRAHENBUHL P. Center-Based 3D Object Detection and Tracking[C]// 2021 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021: 11784-11793.
[83] TEAM O D. OpenPCDet: An Open-Source Toolbox for 3D Object Detection from Point Clouds [EB/OL]. 2020. https://github.com/open-mmlab/OpenPCDet.
[84] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal Loss for Dense Object Detection[C]//2017 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2017: 2980-2988.
[85] GIRSHICK R. Fast R-CNN[C]//2015 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2015: 1440-1448.
[86] DING L, LI D, LIU B, et al. Capture Uncertainties in Deep Neural Networks for Safe Operation of Autonomous Driving Vehicles[C]//2021 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking. IEEE, 2021: 826-835.
[87] LIAO N, LI X. Traffic Anomaly Detection Model Using K-Means and Active Learning Method [J]. International Journal of Fuzzy Systems, 2022, 24(5): 2264-2282.
[88] SADAT A, SEGAL S, CASAS S, et al. Diverse Complexity Measures for Dataset Curation in Self-Driving[C]//2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021: 8609-8616.
[89] ALONSO J A, LAMATA M T. Consistency in the Analytic Hierarchy Process: A New Approach[J]. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 2006, 14(4): 445-459.
[90] DING G, ZHANG M, LI E, et al. JST: Joint Self-Training for Unsupervised Domain Adaptation on 2D&3D Object Detection[C]//2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022: 477-483.
[91] GAO J, SUN C, ZHAO H, et al. VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized Representation[C]//2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020: 11525-11533.