[1] MCINTOSH S, KAMEI Y, ADAMS B, et al. The impact of code review coverage and code review participation on software quality: A case study of the qt, vtk, and itk projects[C]//Proceedings of the 11th working conference on mining software repositories. 2014: 192-201.
[2] SPADINI D, ÇALIKLI G, BACCHELLI A. Primers or reminders? The effects of existing review comments on code review[C]//Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering. 2020: 1171-1182.
[3] SADOWSKI C, SÖDERBERG E, CHURCH L, et al. Modern code review: a case study at google[C]//Proceedings of the 40th international conference on software engineering: Software engineering in practice. 2018: 181-190.
[4] BALACHANDRAN V. Reducing human effort and improving quality in peer code reviews using automatic static analysis and reviewer recommendation[C]//2013 35th International Conference on Software Engineering (ICSE). IEEE, 2013: 931-940.
[5] FAGAN M. Design and code inspections to reduce errors in program development[J]. Software pioneers: contributions to software engineering, 2002: 575-607.
[6] DAVILA N, NUNES I. A systematic literature review and taxonomy of modern code review[J]. Journal of Systems and Software, 2021, 177: 110951.
[7] YANG C, ZHANG X, ZENG L, et al. An empirical study of reviewer recommendation in pull-based development model[C]//Proceedings of the 9th Asia-Pacific Symposium on Internetware. 2017: 1-6.
[8] THONGTANUNAM P, TANTITHAMTHAVORN C, KULA R G, et al. Who should review my code? a file location-based code-reviewer recommendation approach for modern code review[C]//2015 IEEE 22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER). IEEE, 2015: 141-150.
[9] ZHANG J, XU L, LI Y. Classifying python code comments based on supervised learning[C]//Web Information Systems and Applications: 15th International Conference, WISA 2018, Taiyuan, China, September 14–15, 2018, Proceedings. Springer, 2018: 39-47.
[10] AL-KASWAN A, AHMED T, IZADI M, et al. Extending Source Code Pre-Trained Language Models to Summarise Decompiled Binaries[A]. 2023.
[11] 2021 中国软件供应链安全分析报告[EB/OL]. https://www.freebuf.com/articles/paper/276171.html.
[12] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. The Journal of Machine Learning Research, 2020, 21(1): 5485-5551.
[13] WANG Y, WANG W, JOTY S, et al. Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation[A]. 2021.
[14] HINTON G E, SALAKHUTDINOV R R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786): 504-507.
[15] BEYER L, HÉNAFF O J, KOLESNIKOV A, et al. Are we done with imagenet?[A]. 2020.
[16] QIU X, SUN T, XU Y, et al. Pre-trained models for natural language processing: A survey[J]. Science China Technological Sciences, 2020, 63(10): 1872-1897.
[17] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[J]. Advances in neural information processing systems, 2013, 26.
[18] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space[A]. 2013.
[19] PENNINGTON J, SOCHER R, MANNING C D. Glove: Global vectors for word representation[C]//Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). 2014: 1532-1543.
[20] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[J/OL]. CoRR, 2018, abs/1802.05365. http://arxiv.org/abs/1802.05365.
[21] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. Advances in neural information processing systems, 2017, 30.
[22] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[M]. OpenAI, 2018.
[23] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[J]. OpenAI blog, 2019, 1(8): 9.
[24] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[J]. Advances in neural information processing systems, 2020, 33: 1877-1901.
[25] DEVLIN J, CHANG M W, LEE K, et al. Bert: Pre-training of deep bidirectional transformers for language understanding[A]. 2018.
[26] LIU Y, OTT M, GOYAL N, et al. Roberta: A robustly optimized bert pretraining approach[A]. 2019.
[27] FENG Z, GUO D, TANG D, et al. Codebert: A pre-trained model for programming and natural languages[A]. 2020.
[28] GUO D, REN S, LU S, et al. Graphcodebert: Pre-training code representations with data flow[A]. 2020.
[29] TUFANO R, MASIERO S, MASTROPAOLO A, et al. Using pre-trained models to boost code review automation[C]//Proceedings of the 44th International Conference on Software Engineering. 2022: 2291-2302.
[30] LI L, YANG L, JIANG H, et al. AUGER: automatically generating review comments with pre-training models[C]//Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 2022: 1009-1021.
[31] MACLEOD L, GREILER M, STOREY M A, et al. Code reviewing in the trenches: Challenges and best practices[J]. IEEE Software, 2017, 35(4): 34-42.
[32] JIANG J, HE J H, CHEN X Y. Coredevrec: Automatic core member recommendation for contribution evaluation[J]. Journal of Computer Science and Technology, 2015, 30: 998-1016.
[33] YANG C, ZHANG X H, ZENG L B, et al. RevRec: A two-layer reviewer recommendation algorithm in pull-based development model[J]. Journal of Central South University, 2018, 25(5): 1129-1143.
[34] XIA Z, SUN H, JIANG J, et al. A hybrid approach to code reviewer recommendation with collaborative filtering[C]//2017 6th International Workshop on Software Mining (SoftwareMining). IEEE, 2017: 24-31.
[35] SÜLÜN E, TÜZÜN E, DOĞRUSÖZ U. Reviewer recommendation using software artifact traceability graphs[C]//Proceedings of the fifteenth international conference on predictive models and data analytics in software engineering. 2019: 66-75.
[36] CHOUCHEN M, OUNI A, MKAOUER M W, et al. WhoReview: A multi-objective search-based approach for code reviewers recommendation in modern code review[J]. Applied Soft Computing, 2021, 100: 106908.
[37] YANG X, LO D, XIA X, et al. Deep learning for just-in-time defect prediction[C]//2015 IEEE International Conference on Software Quality, Reliability and Security. IEEE, 2015: 17-26.
[38] SHI S T, LI M, LO D, et al. Automatic code review by learning the revision of source code[C]//Proceedings of the AAAI Conference on Artificial Intelligence: volume 33. 2019: 4910-4917.
[39] SIOW J K, GAO C, FAN L, et al. Core: Automating review recommendation for code changes[C]//2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER). IEEE, 2020: 284-295.
[40] TUFANO R, PASCARELLA L, TUFANO M, et al. Towards automating code review activities[C]//2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). IEEE, 2021: 163-174.
[41] LI Z, LU S, GUO D, et al. CodeReviewer: Pre-Training for Automating Code Review Activities[A]. 2022.
[42] SINGH D, SEKAR V R, STOLEE K T, et al. Evaluating how static analysis tools can reduce code review effort[C]//2017 IEEE symposium on visual languages and human-centric computing (VL/HCC). IEEE, 2017: 101-105.
[43] ZHENG J, WILLIAMS L, NAGAPPAN N, et al. On the value of static analysis for fault detection in software[J]. IEEE Transactions on Software Engineering, 2006, 32(4): 240-253.
[44] AYEWAH N, PUGH W, MORGENTHALER J D, et al. Evaluating static analysis defect warnings on production software[C]//Proceedings of the 7th ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering. 2007: 1-8.
[45] GHORBANI N, GARCIA J, MALEK S. Detection and Repair of Architectural Inconsistencies in Java[C/OL]//2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). 2019: 560-571. DOI: 10.1109/ICSE.2019.00067.
[46] HU X, LI G, XIA X, et al. Deep code comment generation[C]//Proceedings of the 26th conference on program comprehension. 2018: 200-210.
[47] LI B, YAN M, XIA X, et al. DeepCommenter: a deep code comment generation tool with hybrid lexical and syntactical information[C]//Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 2020: 1571-1575.
[48] WEN F, NAGY C, BAVOTA G, et al. A large-scale empirical study on code-comment inconsistencies[C]//2019 IEEE/ACM 27th International Conference on Program Comprehension (ICPC). IEEE, 2019: 53-64.
[49] STULOVA N, BLASI A, GORLA A, et al. Towards detecting inconsistent comments in java source code automatically[C]//2020 IEEE 20th international working conference on source code analysis and manipulation (SCAM). IEEE, 2020: 65-69.
[50] RANI P, PANICHELLA S, LEUENBERGER M, et al. How to identify class comment types? A multi-language approach for class comment classification[J]. Journal of Systems and Software, 2021, 181: 111047.
[51] HAIDUC S, APONTE J, MORENO L, et al. On the Use of Automated Text Summarization Techniques for Summarizing Source Code[C/OL]//2010 17th Working Conference on Reverse Engineering. 2010: 35-44. DOI: 10.1109/WCRE.2010.13.
[52] HINDLE A, BARR E T, GABEL M, et al. On the naturalness of software[J]. Communications of the ACM, 2016, 59(5): 122-131.
[53] 霍丽春, 张丽萍. 代码注释演化及分类研究综述[J]. 内蒙古师范大学学报 (自然科学汉文版), 2020, 49(5): 423-432.
[54] RATOL I K, ROBILLARD M P. Detecting fragile comments[C]//2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2017: 112-122.
[55] PASCARELLA L, BACCHELLI A. Classifying code comments in Java open-source software systems[C]//2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR). IEEE, 2017: 227-237.
[56] PASCARELLA L, BRUNTINK M, BACCHELLI A. Classifying code comments in Java software systems[J]. Empirical Software Engineering, 2019, 24(3): 1499-1537.
[57] DI SORBO A, PANICHELLA S, VISAGGIO C A, et al. Exploiting natural language structures in software informal documentation[J]. IEEE Transactions on Software Engineering, 2019, 47(8): 1587-1604.
[58] MNIH V, HEESS N, GRAVES A, et al. Recurrent models of visual attention[J]. Advances in neural information processing systems, 2014, 27.
[59] BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate[A]. 2014.
[60] NIU Z, ZHONG G, YU H. A review on the attention mechanism of deep learning[J]. Neurocomputing, 2021, 452: 48-62.
[61] BAVOTA G, RUSSO B. Four eyes are better than two: On the impact of code reviews on software quality[C]//2015 IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, 2015: 81-90.
[62] The 2021 Open Source Jobs Report: 9th Annual Report on Critical Skills, Hiring Trends and Education[EB/OL]. https://www.linuxfoundation.org/tools/the-2021-open-source-jobs-report/.
[63] BOSU A, GREILER M, BIRD C. Characteristics of useful code reviews: An empirical study at microsoft[C]//2015 IEEE/ACM 12th Working Conference on Mining Software Repositories. IEEE, 2015: 146-156.
[64] RAHMAN M M, ROY C K, KULA R G. Predicting usefulness of code review comments using textual features and developer experience[C]//2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR). IEEE, 2017: 215-226.
[65] RAYCHEV V, VECHEV M, YAHAV E. Code completion with statistical language models[C]//Proceedings of the 35th ACM SIGPLAN conference on programming language design and implementation. 2014: 419-428.
[66] JIANG H, ZHANG J, LI X, et al. Recommending new features from mobile app descriptions[J]. ACM Transactions on Software Engineering and Methodology (TOSEM), 2019, 28(4): 1-29.
[67] LIU S, GAO C, CHEN S, et al. ATOM: Commit message generation based on abstract syntax tree and hybrid ranking[J]. IEEE Transactions on Software Engineering, 2020, 48(5): 1800-1817.
[68] SPADINI D, PALOMBA F, BAUM T, et al. Test-driven code review: an empirical study[C]//2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). IEEE, 2019: 1061-1072.
[69] PAPINENI K, ROUKOS S, WARD T, et al. Bleu: a method for automatic evaluation of machine translation[C]//Proceedings of the 40th annual meeting of the Association for Computational Linguistics. 2002: 311-318.
[70] RANI P, PANICHELLA S, LEUENBERGER M, et al. What do class comments tell us? An investigation of comment evolution and practices in Pharo Smalltalk[J]. Empirical Software Engineering, 2021, 26(6): 112.
[71] HUSAIN H, WU H H, GAZIT T, et al. Codesearchnet challenge: Evaluating the state of semantic code search[A]. 2019.
[72] LU S, GUO D, REN S, et al. Codexglue: A machine learning benchmark dataset for code understanding and generation[A]. 2021.
[73] SENNRICH R, HADDOW B, BIRCH A. Neural machine translation of rare words with subword units[A]. 2015.
[74] BAEZA-YATES R, RIBEIRO-NETO B, et al. Modern information retrieval: volume 463[M]. ACM press New York, 1999.
[75] INDIKA A, WASHINGTON P Y, PERUMA A. Performance Comparison of Binary Machine Learning Classifiers in Identifying Code Comment Types: An Exploratory Study[A]. 2023.
[76] GUO Q, QIU X, LIU P, et al. Star-transformer[A]. 2019.