[1] 杨少春, 赵世军. 我国金融安全的影响因素与应对策略研究[J]. 财务与金融, 2021(05): 1-5+21.
[2] 国务院. 国务院关于印发社会信用体系建设规划纲要(2014—2020 年)的通知[EB/OL]. 2014. http://www.gov.cn/zhengce/content/2014-06/27/content_8913.htm.
[3] 国务院. 关于推进社会信用体系建设高质量发展促进形成新发展格局的意见[EB/OL]. 2022. http://www.gov.cn/zhengce/2022-03/29/content_5682283.htm.
[4] 剧锦文, 常耀中. 消费信贷与中国经济转型的实证研究[J]. 经济与管理研究, 2016, 37(07): 29-36.
[5] 王立勇, 石颖. 互联网金融的风险机理与风险度量研究——以 P2P 网贷为例[J]. 东南大学学报 (哲学社会科学版), 2016, 18(02): 103-112+148.
[6] 王钊. P2P 借贷信用风险动态评价方法研究[D]. 合肥工业大学, 2019.
[7] GUO Y H, ZHOU W J, LUO C Y, et al. Instance-Based Credit Risk Assessment for Investment Decisions in P2P Lending[J]. European Journal of Operational Research, 2016, 249(2): 417-426.
[8] 朱俊宇. 基于公共信用信息的高新技术企业信用评价[D]. 北方工业大学, 2021.
[9] JIANG C Q, WANG Z, ZHAO H M. A Prediction-Driven Mixture Cure Model and its Application in Credit Scoring[J]. European Journal of Operational Research, 2019, 277(1): 20-31.
[10] BASTANI K, ASGARI E, NAMAVARI H. Wide and Deep Learning for Peer-to-Peer Lending[J]. Expert Systems with Applications, 2019, 134: 209-224.
[11] CHANG Y C, CHANG K H, WU G J. Application of Extreme Gradient Boosting Trees in the Construction of Credit Risk Assessment Models for Financial Institutions[J]. Applied Soft Computing, 2018, 73: 914-920.
[12] LIU Y, LI X E, ZHANG Z M. A New Approach in Reject Inference of Using Ensemble Learning Based on Global Semi-Supervised Framework[J]. Future Generation Computer Systems-The International Journal of eScience, 2020, 109: 382-391.
[13] GUNNARSSON B R, BROUCKE S V, BAESENS B, et al. Deep Learning for Credit Scoring: Do or Don’t?[J]. European Journal of Operational Research, 2021, 295(1): 292-305.
[14] TRIPATHI D, EDLA D R, BABLANI A, et al. Experimental Analysis of Machine Learning Methods for Credit Score Classification[J]. Progress in Artificial Intelligence, 2021, 10(3): 217-243.
[15] DASTILE X, CELIK T, POTSANE M. Statistical and Machine Learning Models in Credit Scoring: A Systematic Literature Survey[J]. Applied Soft Computing, 2020, 91: 106263.
[16] BROWN I, MUES C. An experimental comparison of classification algorithms for imbalanced credit scoring data sets[J]. Expert Systems with Applications, 2012, 39(3): 3446-3453.
[17] ZHOU Z H. Open-environment machine learning[J]. National Science Review, 2022, 9(08): 211-221.
[18] DIETTERICH T G. Steps Toward Robust Artificial Intelligence[J]. AI Magazine, 2017, 38(3): 3-24.
[19] 赵鹏, 周志华. 基于决策树模型重用的分布变化流数据学习[J]. 中国科学: 信息科学, 2021, 51(01): 1-12.
[20] PARMAR J, CHOUHAN S S, RAYCHOUDHURY V, et al. Open-world Machine Learning: Applications, Challenges, and Opportunities[J]. ACM Computing Surveys, 2022.
[21] SEHWAG V, BHAGOJI A, SONG L, et al. Analyzing the Robustness of Open-World Machine Learning[C]//AISec’19: Conference on Computer and Communications Security. ACM, 2019: 105-116.
[22] DIETTERICH T G. Robust Artificial Intelligence and Robust Human Organizations[M]. Cornell University Library, arXiv.org, 2018.
[23] ADDO P M, GUEGAN D, HASSANI B. Credit Risk Analysis Using Machine and Deep Learning Models[J]. Risks, 2018, 6(2): 1-38.
[24] HAMORI S, KAWAI M, KUME T, et al. Ensemble learning or deep learning? Application to default risk analysis[J]. Journal of Risk and Financial Management, 2018, 11(1): 12.
[25] GOMES H M, BIFET A, READ J, et al. Adaptive Random Forests for Evolving Data Stream Classification[J]. Machine Learning, 2017, 106(9-10): 1469-1495.
[26] DOMINGOS P, HULTEN G. Mining High-Speed Data Streams[C]//Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Association for Computing Machinery, 2000: 71-80.
[27] HULTEN G, SPENCER L, DOMINGOS P. Mining Time-Changing Data Streams[C]//Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Association for Computing Machinery, 2001: 97-106.
[28] 胡晶. Logistic 及其改进方法在个人网贷信用评分中的应用[D]. 华中师范大学, 2019.
[29] DEVI S S, RADHIKA Y. A Survey on Machine Learning and Statistical Techniques in Bankruptcy Prediction[J]. International Journal of Machine Learning and Computing, 2018, 8(2): 133-139.
[30] HARRIS T. Credit Scoring Using the Clustered Support Vector Machine[J]. Expert Systems with Applications, 2015, 42(2): 741-750.
[31] LESSMANN S, BAESENS B, SEOW H V, et al. Benchmarking State-of-the-Art Classification Algorithms for Credit Scoring: An Update of Research[J]. European Journal of Operational Research, 2015, 247(1): 124-136.
[32] CHEN S Q, GUO Z F, ZHAO X L. Predicting mortgage early delinquency with machine learning methods[J]. European Journal of Operational Research, 2021, 290(1): 358-372.
[33] ZHANG D, ZHOU X, LEUNG S C H, et al. Vertical Bagging Decision Trees Model for Credit Scoring[J]. Expert Systems with Applications, 2010, 37(12): 7838-7843.
[34] DU P, SHU H. Exploration of Financial Market Credit Scoring and Risk Management and Prediction Using Deep Learning and Bionic Algorithm[J]. Journal of Global Information Management, 2022, 30(9): 1-29.
[35] DASTILE X, CELIK T. Making Deep Learning-Based Predictions for Credit Scoring Explainable[J]. IEEE Access, 2021, 9: 50426-50440.
[36] ZHU B, YANG W, WANG H, et al. A Hybrid Deep Learning Model for Consumer Credit Scoring[C]//2018 International Conference on Artificial Intelligence and Big Data (ICAIBD). IEEE, 2018: 205-208.
[37] LUO C C, WU D S, WU D X. A Deep Learning Approach for Credit Scoring Using Credit Default Swaps[J]. Engineering Applications of Artificial Intelligence, 2017, 65: 465-470.
[38] 王灿. 基于不平衡数据与集成学习算法的信用评价模型[J]. 中国新技术新产品, 2022, No.472(18): 10-12.
[39] BAUDER R A, KHOSHGOFTAAR T M, HASANIN T. An Empirical Study on Class Rarity in Big Data[C]//2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2018: 785-790.
[40] JOHNSON J M, KHOSHGOFTAAR T M. Survey on Deep Learning with Class Imbalance[J]. Journal of Big Data, 2019, 6(1): 1-54.
[41] KRAWCZYK B. Learning from Imbalanced Data: Open Challenges and Future Directions[J]. Progress in Artificial Intelligence, 2016, 5(4): 221-232.
[42] BUDA M, MAKI A, MAZUROWSKI M A. A Systematic Study of the Class Imbalance Problem in Convolutional Neural Networks[J]. Neural Networks, 2018, 106: 249-259.
[43] CHAWLA N V, BOWYER K W, HALL L O, et al. SMOTE: Synthetic Minority Over-Sampling Technique[J]. Journal of Artificial Intelligence Research, 2002, 16: 321-357.
[44] 夏利宇, 何晓群. 基于重抽样法处理不平衡问题的信用评级模型[J]. 管理评论, 2020, 32(03): 75-84.
[45] 朱安安. 基于过采样 SVM 的不平衡数据信用评价模型[J]. 软件导刊, 2018, 17(10): 64-67.
[46] NIU K, ZHANG Z, LIU Y, et al. Resampling Ensemble Model Based on Data Distribution for Imbalanced Credit Risk Evaluation in P2P Lending[J]. Information Sciences, 2020, 536: 120-134.
[47] HAN H, WANG W Y, MAO B H. Borderline-SMOTE: A New Over-Sampling Method in Imbalanced Data Sets Learning[C]//International Conference on Intelligent Computing: volume 3644. 2005: 878-887.
[48] BUNKHUMPORNPAT C, SINAPIROMSARAN K, LURSINSAP C. Safe-Level-SMOTE: Safe-Level-Synthetic Minority Over-Sampling Technique for Handling the Class Imbalanced Problem[C]//Advances in Knowledge Discovery and Data Mining, Proceedings: volume 5476. 2009: 475-482.
[49] LEE T, LEE K B, KIM C O. Performance of Machine Learning Algorithms for Class-Imbalanced Process Fault Detection Problems[J]. IEEE Transactions on Semiconductor Manufacturing, 2016, 29(4): 436-445.
[50] DHAR S, CHERKASSKY V. Development and Evaluation of Cost-Sensitive Universum-SVM[J]. IEEE Transactions on Cybernetics, 2014, 45(4): 806-818.
[51] GHAZIKHANI A, MONSEFI R, YAZDI H S. Online Cost-Sensitive Neural Network Classifiers for Non-Stationary and Imbalanced Data Streams[J]. Neural Computing & Applications, 2013, 23(5): 1283-1295.
[52] 杨莲, 石宝峰, 董轶哲. 基于 Class Balanced Loss 修正交叉熵的非均衡样本信用风险评价模型[J]. 系统管理学报, 2022, 31(02): 255-269+289.
[53] SUN Z B, SONG Q B, ZHU X Y, et al. A Novel Ensemble Method for Classifying Imbalanced Data[J]. Pattern Recognition, 2015, 48(5): 1623-1637.
[54] FERREIRA L E B, GOMES H M, BIFET A, et al. Adaptive Random Forests with Resampling for Imbalanced Data Streams[C]//2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019.
[55] 李小娟, 韩萌, 王乐, 等. 监督与半监督学习下的数据流集成分类综述[J]. 计算机应用研究, 2021, 38(07): 1921-1929.
[56] AGGARWAL C C. A Survey of Stream Classification Algorithms[J]. Data Classification: Algorithms and Applications, 2014, 245: 273-302.
[57] KREMPL G, ŽLIOBAITĖ I, BRZEZIŃSKI D, et al. Open Challenges for Data Stream Mining Research[J]. ACM SIGKDD Explorations Newsletter, 2014, 16(1): 1-10.
[58] 翟婷婷, 高阳, 朱俊武. 面向流数据分类的在线学习综述[J]. 软件学报, 2020, 31(04): 912-931.
[59] ANGLUIN D. Queries and Concept Learning[J]. Machine learning, 1988, 2(4): 319-342.
[60] 孙大为, 张广艳, 郑纬民. 大数据流式计算: 关键技术及系统实例[J]. 软件学报, 2014, 25(04): 839-862.
[61] BIFET A, HOLMES G, KIRKBY R, et al. MOA: Massive Online Analysis[J]. Journal of Machine Learning Research, 2010, 11: 1601-1604.
[62] 许冠英, 韩萌, 王少峰, 等. 数据流集成分类算法综述[J]. 计算机应用研究, 2020, 37(01): 1-8+15.
[63] BIGGIO B, CORONA I, NELSON B, et al. Security Evaluation of Support Vector Machines in Adversarial Environments[M]. Springer, 2014: 105-153.
[64] CAUWENBERGHS G, POGGIO T. Incremental and Decremental Support Vector Machine Learning[J]. Advances in Neural Information Processing Systems, 2000, 13: 409-415.
[65] LU Y, BOUKHAROUBA K, BOONAERT J, et al. Application of An Incremental SVM Algorithm for On-line Human Recognition from Video Surveillance Using Texture and Color Features[J]. Neurocomputing, 2014, 126: 132-140.
[66] OZA N C. Online Bagging and Boosting[C]//2005 IEEE International Conference on Systems, Man and Cybernetics: volume 3. IEEE, 2005: 2340-2345.
[67] SAFFARI A, LEISTNER C, SANTNER J, et al. On-line Random Forests[C]//2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops. IEEE, 2009: 1393-1400.
[68] LOEZER L, ENEMBRECK F, BARDDAL J P, et al. Cost-sensitive learning for imbalanced data streams[C]//Proceedings of the 35th Annual ACM Symposium on Applied Computing. 2020: 498-504.
[69] LOSING V, HAMMER B, WERSING H. Interactive Online Learning for Obstacle Classification on A Mobile Robot[C]//2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015: 1-8.
[70] ELWELL R, POLIKAR R. Incremental Learning of Variable Rate Concept Drift[C]//Multiple Classifier Systems, Proceedings: volume 5519. 2009: 142-151.
[71] DITZLER G, POLIKAR R. Incremental Learning of Concept Drift from Streaming Imbalanced Data[J]. IEEE Transactions on Knowledge and Data Engineering, 2013, 25(10): 2283-2301.
[72] DITZLER G, POLIKAR R. An Ensemble Based Incremental Learning Framework for Concept Drift and Class Imbalance[C]//The 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010: 1-8.
[73] CHAWLA N V, BOWYER K W, HALL L O, et al. SMOTE: Synthetic Minority Over-Sampling Technique[J]. Journal of Artificial Intelligence Research, 2002, 16(1): 321–357.
[74] WEBB G I, HYDE R, CAO H, et al. Characterizing Concept Drift[J]. Data Mining and Knowledge Discovery, 2016, 30(4): 964-994.
[75] 文益民, 刘帅, 缪裕青, 等. 概念漂移数据流半监督分类综述[J]. 软件学报, 2022, 33(04): 1287-1314.
[76] KELLY M G, HAND D J, ADAMS N M. The Impact of Changing Populations on Classifier Performance[C]//Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1999: 367-371.
[77] WARES S, ISAACS J, ELYAN E. Data Stream Mining: Methods and Challenges for Handling Concept Drift[J]. SN Applied Sciences, 2019, 1(11): 1-19.
[78] 陈志强, 韩萌, 李慕航, 等. 数据流概念漂移处理方法研究综述[J]. 计算机科学, 2022: 1-19.
[79] GAMA J, MEDAS P, CASTILLO G, et al. Learning with Drift Detection[C]//Brazilian Symposium on Artificial Intelligence: volume 3171. 2004: 286-295.
[80] BIFET A, GAVALDA R. Learning from Time-Changing Data with Adaptive Windowing[C]//Proceedings of the 2007 SIAM International Conference on Data Mining. SIAM, 2007: 443-448.
[81] SUN Y G, WANG Z H, LIU H Y, et al. Online Ensemble Using Adaptive Windowing for Data Streams with Concept Drift[J]. International Journal of Distributed Sensor Networks, 2016, 12 (5): 4218973.
[82] ZHOU Z H, CHEN Z Q. Hybrid decision tree[J]. Knowledge-Based Systems, 2002, 15(8): 515-528.
[83] KHALID S, KHALIL T, NASREEN S. A Survey of Feature Selection and Feature Extraction Techniques in Machine Learning[C]//2014 Science and Information Conference (SAI). 2014: 372-378.
[84] ZHOU G, SOHN K, LEE H. Online incremental feature learning with denoising autoencoders[C]//Artificial Intelligence and Statistics. PMLR, 2012: 1453-1461.
[85] KAWEWONG A, HASEGAWA O. Fast and Incremental Attribute Transferring and Classifying System for Detecting Unseen Object Classes: volume 6354[Z]. 2010: 563-568.
[86] KANKUEKUL P, KAWEWONG A, TANGRUAMSUB S, et al. Online Incremental Attribute-based Zero-shot Learning[Z]. 2012: 3657-3664.
[87] HOU C P, ZHOU Z H. One-Pass Learning with Incremental and Decremental Features[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(11): 2776-2792.
[88] HOU B J, ZHANG L J, ZHOU Z H. Learning With Feature Evolvable Streams[J]. IEEE Transactions on Knowledge and Data Engineering, 2021, 33(6): 2602-2615.
[89] BREIMAN L. Random Forests[J]. Machine Learning, 2001, 45(1): 5-32.
[90] IVERSON L R, PRASAD A M. Predicting Abundance of 80 Tree Species Following Climate Change in the Eastern United States[J]. Ecological Monographs, 1998, 68(4): 465-485.
[91] OZA N C, RUSSELL S J. Online Bagging and Boosting[C]//International Workshop on Artificial Intelligence and Statistics. PMLR, 2001: 229-236.
[92] OZA N C, RUSSELL S. Experimental Comparisons of Online and Batch Versions of Bagging and Boosting[C]//Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2001: 359-364.
[93] BIFET A, HOLMES G, PFAHRINGER B. Leveraging Bagging for Evolving Data Streams[C]// Machine Learning and Knowledge Discovery in Databases: volume 6321. 2010: 135-150.
[94] CATLETT J. Megainduction: Machine Learning on Very Large Databases[D]. University of Sydney, 1991.
[95] MARON O, MOORE A. Hoeffding Races: Accelerating Model Selection Search for Classification and Function Approximation[J]. Advances in Neural Information Processing Systems, 1993, 6(1): 59-66.
[96] 褚耀奇. 基于自适应随机森林的非平衡数据流分类方法[D]. 中国矿业大学, 2021.
[97] FRIEDMAN M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance[J]. Journal of the American Statistical Association, 1937, 32(200): 675-701.
[98] NEMENYI P B. Distribution-free multiple comparisons[D]. Princeton University, 1963.
[99] BEYGELZIMER A, KALE S, LUO H. Optimal and Adaptive Algorithms for Online Boosting [C]//International Conference on Machine Learning. PMLR, 2015: 2323-2331.
[100] CHEN S T, LIN H T, LU C J. An Online Boosting Algorithm with Theoretical Justifications [C]//Proceedings of the 29th International Conference on Machine Learning. Omnipress, 2012.
[101] PELOSSOF R, JONES M, VOVSHA I, et al. Online Coordinate Boosting[C]//2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops. IEEE, 2009: 1354-1361.
[102] ELKAN C. The foundations of cost-sensitive learning[C]//International Joint Conference on Artificial Intelligence: volume 17. Lawrence Erlbaum Associates Ltd, 2001: 973-978.
[103] PICEK S, HEUSER A, JOVIC A, et al. The Curse of Class Imbalance and Conflicting Metrics with Machine Learning for Side-Channel Evaluations[J]. IACR Transactions on Cryptographic Hardware and Embedded Systems, 2019, 2019(1): 1-29.
[104] BALDI P, BRUNAK S, CHAUVIN Y, et al. Assessing the Accuracy of Prediction Algorithms for Classification: An Overview[J]. Bioinformatics, 2000, 16(5): 412-424.
[105] CHICCO D, JURMAN G. The Advantages of the Matthews Correlation Coefficient (MCC) over F1 Score and Accuracy in Binary Classification Evaluation[J]. BMC Genomics, 2020, 21(1): 1-13.
[106] ERHAN D, BENGIO Y, COURVILLE A, et al. Why Does Unsupervised Pre-training Help Deep Learning?[J]. Journal of Machine Learning Research, 2010, 11: 625-660.
[107] HE X, PAN J, JIN O, et al. Practical Lessons from Predicting Clicks on Ads at Facebook[C]// Proceedings of The Eighth International Workshop on Data Mining for Online Advertising. 2014: 1-9.
[108] KARUMBAIAH S, LAN A, NAGPAL S, et al. Using Past Data to Warm Start Active Machine Learning: Does Context Matter?[C]//11th International Learning Analytics and Knowledge Conference. 2021: 151-160.
[109] SUTSKEVER I, MARTENS J, DAHL G, et al. On the Importance of Initialization and Momentum in Deep Learning[C]//International Conference on Machine Learning. PMLR, 2013: 1139-1147.
[110] SRIVASTAVA R K, GREFF K, SCHMIDHUBER J. Training Very Deep Networks[J]. Advances in Neural Information Processing Systems, 2015, 28: 2377-2385.
[111] BERNARDO A, GOMES H M, MONTIEL J, et al. C-SMOTE: Continuous Synthetic Minority Oversampling for Evolving Data Streams[C]//2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020: 483-492.
[112] BIFET A, HOLMES G, PFAHRINGER B, et al. New ensemble methods for evolving data streams[C]//KDD ’09: International Conference on Knowledge Discovery and Data Mining. ACM, 2009: 139-148.
[113] VLACHOS M, DOMENICONI C, GUNOPULOS D, et al. Non-linear dimensionality reduction techniques for classification and visualization[C]//KDD ’02: International Conference on Knowledge Discovery and Data Mining. ACM, 2002: 645-651.