[1] Lida Abdi and Sattar Hashemi. To combat multi-class imbalanced problems by means of over-sampling techniques. IEEE Transactions on Knowledge and Data Engineering, 28(1):238–251, 2016.
[2] M’Hamed B. Abidine and Belkacem Fergani. A new multi-class WSVM classification to imbalanced human activity dataset. Journal of Computers, 9(7):1560–1565, 2014.
[3] Charu C Aggarwal, S Yu Philip, Jiawei Han, and Jianyong Wang. A framework for clustering evolving data streams. In Proceedings of the 2003 VLDB Conference, pages 81–92. Elsevier, 2003.
[4] Tahira Alam, Chowdhury Farhan Ahmed, Sabit Anwar Zahin, Muhammad Asif Hossain Khan, and Maliha Tashfia Islam. An effective recursive technique for multi-class classification and regression for imbalanced data. IEEE Access, 7:127615–127630, 2019.
[5] Jesús Alcalá-Fdez, Alberto Fernández, Julián Luengo, Joaquín Derrac, Salvador García, Luciano Sánchez, and Francisco Herrera. KEEL data-mining software tool: data set repository, integration of algorithms and experimental analysis framework. Journal of Multiple-Valued Logic and Soft Computing, 17(2-3):255–287, 2011.
[6] Roberto Alejo, Jose M Sotoca, Rosa Maria Valdovinos, and Gustavo A Casañ. The multi-class imbalance problem: Cost functions with modular and non-modular neural networks. In International Symposium on Neural Networks, pages 421–431. Springer, 2009.
[7] N Anupama and Sudarson Jena. A novel approach using incremental oversampling for data stream mining. Evolving Systems, 10(3):351–362, 2019.
[8] Sukarna Barua, Md. Monirul Islam, Xin Yao, and Kazuyuki Murase. Mwmote–majority weighted minority oversampling technique for imbalanced data set learning. IEEE Transactions on Knowledge and Data Engineering, 26(2):405–425, 2014.
[9] Alessio Bernardo, Heitor Murilo Gomes, Jacob Montiel, Bernhard Pfahringer, Albert Bifet, and Emanuele Della Valle. C-SMOTE: continuous synthetic minority oversampling for evolving data streams. In 2020 IEEE International Conference on Big Data, pages 483–492, 2020.
[10] Albert Bifet and Ricard Gavalda. Learning from time-changing data with adaptive windowing. In Proceedings of the 2007 SIAM international conference on data mining, pages 443–448. SIAM, 2007.
[11] Dariusz Brzezinski, Leandro L Minku, Tomasz Pewinski, Jerzy Stefanowski, and Artur Szumaczuk. The impact of data difficulty factors on classification of imbalanced and concept drifting data streams. Knowledge and Information Systems, 63(6):1429–1469, 2021.
[12] Mateusz Buda, Atsuto Maki, and Maciej A Mazurowski. A systematic study of the class imbalance problem in convolutional neural networks. Neural networks, 106:249–259, 2018.
[13] Chumphol Bunkhumpornpat, Krung Sinapiromsaran, and Chidchanok Lursinsap. Safe-level-SMOTE: Safe-level-synthetic minority over-sampling technique for handling the class imbalanced problem. In Advances in Knowledge Discovery and Data Mining, pages 475–482. Springer Berlin Heidelberg, 2009.
[14] Alberto Cano and Bartosz Krawczyk. Kappa Updated Ensemble for drifting data stream mining. Machine Learning, 109(1):175–218, 2020.
[15] Alberto Cano and Bartosz Krawczyk. ROSE: robust online self-adjusting ensemble for continual learning on imbalanced drifting data streams. Machine Learning, pages 1–39, 2022.
[16] Feng Cao, Martin Ester, Weining Qian, and Aoying Zhou. Density-based clustering over an evolving data stream with noise. In Proceedings of the 2006 SIAM international conference on data mining, pages 328–339. SIAM, 2006.
[17] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. Proceedings of the 33rd International Conference on Neural Information Processing Systems, pages 1567–1578, 2019.
[18] Jie Chang, Xiaoci Zhang, Minquan Ye, Daobin Huang, and Chuanwen Yao. Brain tumor segmentation based on 3d unet with multi-class focal loss. In International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), 2018.
[19] Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16(1):321–357, 2002.
[20] Nitesh V Chawla, Aleksandar Lazarevic, Lawrence O Hall, and Kevin W Bowyer. SMOTEBoost: Improving prediction of the minority class in boosting. In European conference on principles of data mining and knowledge discovery, pages 107–119. Springer, 2003.
[21] Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9268–9277, 2019.
[22] Shounak Datta, Sayak Nag, and Swagatam Das. Boosting with lexicographic programming: Addressing class imbalance without cost tuning. IEEE Transactions on Knowledge and Data Engineering, 32(5):883–897, 2020.
[23] Janez Demšar. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7:1–30, 2006.
[24] Li Deng. The mnist database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141–142, 2012.
[25] Shuya Ding, Bilal Mirza, Zhiping Lin, Jiuwen Cao, Xiaoping Lai, Tam V Nguyen, and Jose Sepulveda. Kernel based online learning for imbalance multiclass classification. Neurocomputing, 277:139–148, 2018.
[26] Pedro Domingos. Metacost: A general method for making classifiers cost-sensitive. In International Conference on Knowledge Discovery and Data Mining (ICKDD), pages 155–164, 1999.
[27] Charles Elkan. The foundations of cost-sensitive learning. In International joint conference on artificial intelligence, pages 973–978, 2001.
[28] James D Evans. Straightforward statistics for the behavioral sciences. Thomson Brooks/Cole Publishing Co, 1996.
[29] Alberto Fernández, María José del Jesus, and Francisco Herrera. Multi-class imbalanced data-sets with linguistic fuzzy rule based classification systems based on pairwise learning. In Computational Intelligence for Knowledge-Based Systems Design, pages 89–98, 2010.
[30] Edgar C Fieller, Herman O Hartley, and Egon S Pearson. Tests for rank correlation coefficients. I. Biometrika, 44(3/4):470–481, 1957.
[31] Yoav Freund and Robert E. Schapire. A desicion-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory, pages 23–37. Springer Berlin Heidelberg, 1995.
[32] Yoav Freund and Robert E. Schapire. Experiments with a new boosting algorithm. In International Conference on Machine Learning, volume 96, pages 148–156. Citeseer, 1996.
[33] Joao Gama. Knowledge discovery from data streams. CRC Press, 2010.
[34] Haixiang Guo, Yijing Li, Shang Jennifer, Mingyun Gu, Yuanyue Huang, and Bing Gong. Learning from class-imbalanced data: Review of methods and applications. Expert systems with applications, 73:220–239, 2017.
[35] Michael Hahsler and Matthew Bolaños. Clustering data streams based on shared density between micro-clusters. IEEE Transactions on Knowledge and Data Engineering, 28(6):1449–1461, 2016.
[36] Hui Han, Wen-Yuan Wang, and Bing-Huan Mao. Borderline-SMOTE: A new over-sampling method in imbalanced data sets learning. In International Conference on Intelligent Computing, pages 878–887, 2005.
[37] David J Hand and Robert J Till. A simple generalisation of the area under the roc curve for multiple class classification problems. Machine learning, 45(2):171–186, 2001.
[38] Trevor Hastie and Robert Tibshirani. Classification by pairwise coupling. In Advances in Neural Information Processing Systems, 1998.
[39] Munawar Hayat, Salman Khan, Syed Waqas Zamir, Jianbing Shen, and Ling Shao. Gaussian affinity for max-margin class imbalanced learning. In Proceedings of the IEEE/CVF international conference on computer vision, pages 6469–6479, 2019.
[40] Haibo He, Yang Bai, Edwardo A. Garcia, and Shutao Li. ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), pages 1322–1328. IEEE, 2008.
[41] Haibo He and Edwardo A. Garcia. Learning from imbalanced data. IEEE Transactions on Knowledge and Data Engineering, 21(9):1263–1284, 2009.
[42] Sture Holm. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6(2):65–70, 1979.
[43] Geoff Hulten, Laurie Spencer, and Pedro Domingos. Mining time-changing data streams. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pages 97–106, 2001.
[44] Zohaib Jan, Juan Carlos Munos, and Asim Ali. A novel method for creating an optimized ensemble classifier by introducing cluster size reduction and diversity. IEEE Transactions on Knowledge and Data Engineering, 34(7):3072–3081, 2022.
[45] Rong Jin and Jian Zhang. Multi-class Learning by Smoothed Boosting. Kluwer Academic Publishers, 2007.
[46] Justin M Johnson and Taghi M Khoshgoftaar. Survey on deep learning with class imbalance. Journal of Big Data, 6(1):1–54, 2019.
[47] Salman H. Khan, Munawar Hayat, Mohammed Bennamoun, Ferdous A. Sohel, and Roberto Togneri. Cost-sensitive learning of deep feature representations from imbalanced data. IEEE Transactions on Neural Networks and Learning Systems, 29(8):3573–3587, 2018.
[48] Varsha S Khandekar and Pravin Srinath. Non-stationary data stream analysis: state-of-the-art challenges and solutions. In Proceedings of International Conference on Computational Science and Applications, pages 67–80. Springer, 2020.
[49] Bartosz Krawczyk. Cost-sensitive one-vs-one ensemble for multi-class imbalanced data. In International Joint Conference on Neural Networks, pages 2447–2452. IEEE, 2016.
[50] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical Report 0, University of Toronto, Toronto, Ontario, 2009.
[51] Amisha Kumari and Urjita Thakar. Hellinger distance based oversampling method to solve multi-class imbalance problem. In International Conference on Communication Systems and Network Technologies, pages 137–141. IEEE, 2017.
[52] Mateusz Lango and Jerzy Stefanowski. What makes multi-class imbalanced problems difficult? An experimental study. Expert Systems with Applications, 199:116962, 2022.
[53] Bum Ju Lee, Keun Ho Kim, Boncho Ku, Jun Su Jang, and Jong Yeol Kim. Prediction of body mass index status from voice signals based on machine learning for automated medical applications. Artificial Intelligence in Medicine, 58(1):51–61, 2013.
[54] Hansang Lee, Minseok Park, and Junmo Kim. Plankton classification on imbalanced large scale database via convolutional neural networks with transfer learning. In 2016 IEEE international conference on image processing (ICIP), pages 3713–3717. IEEE, 2016.
[55] Joffrey L Leevy, Taghi M Khoshgoftaar, Richard A Bauder, and Naeem Seliya. A survey on addressing high-class imbalance in big data. Journal of Big Data, 5(1):1–30, 2018.
[56] Baofeng Li, Shaohua Fan, Xin Gao, Feng Zhai, and Yang He. Smart meters fault prediction technology based on cost-sensitive XGBoost algorithm for imbalanced data. In International Conference on Robotics and Automation Sciences (ICRAS), 2019.
[57] Lusi Li, Haibo He, and Jie Li. Entropy-based sampling approaches for multi-class imbalanced problems. IEEE Transactions on Knowledge and Data Engineering, 32(11):2159–2170, 2020.
[58] Shuxian Li, Liyan Song, Yiu-ming Cheung, and Xin Yao. BEDCOE: Borderline enhanced disjunct cluster based oversampling ensemble for online multi-class imbalance learning. In 26th European Conference on Artificial Intelligence (ECAI 2023), pages 1414–1421, 2023.
[59] Shuxian Li, Liyan Song, Xiaoyu Wu, Zheng Hu, Yiu-ming Cheung, and Xin Yao. ARConvL: Adaptive region-based convolutional learning for multi-class imbalance classification. In 22nd European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2023), pages 103–120, 2023.
[60] Shuxian Li, Liyan Song, Xiaoyu Wu, Zheng Hu, Yiu-ming Cheung, and Xin Yao. Multi-class imbalance classification based on data distribution and adaptive weights. IEEE Transactions on Knowledge and Data Engineering, in press, 2024, doi: 10.1109/TKDE.2024.3384961.
[61] Lijun Liang, Tingting Jin, and Meiya Huo. Feature identification from imbalanced data sets for diagnosis of cardiac arrhythmia. In International Symposium on Computational Intelligence & Design, 2018.
[62] Lijun Liang, Tingting Jin, and Meiya Huo. Feature identification from imbalanced data sets for diagnosis of cardiac arrhythmia. In International Symposium on Computational Intelligence and Design, volume 02, pages 52–55. IEEE, 2018.
[63] Nan-Ying Liang, Guang-Bin Huang, Paramasivan Saratchandran, and Narasimhan Sundararajan. A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Transactions on neural networks, 17(6):1411–1423, 2006.
[64] Minlong Lin, Ke Tang, and Xin Yao. Dynamic sampling approach to training neural networks for multiclass imbalance classification. IEEE Transactions on Neural Networks and Learning Systems, 24(4):647–660, 2013.
[65] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988, 2017.
[66] Jialun Liu, Yifan Sun, Chuchu Han, Zhaopeng Dou, and Wenhui Li. Deep representation learning on long-tailed data: A learnable embedding augmentation perspective. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2970–2979, 2020.
[67] Xu-Ying Liu and Qian-Qian Li. Learning from combination of data chunks for multi-class imbalanced data. In International Joint Conference on Neural Networks, pages 1680–1687. IEEE, 2014.
[68] Xu-Ying Liu, Qian-Qian Li, and Zhi-Hua Zhou. Learning imbalanced multiclass data with optimal dichotomy weights. In International Conference on Data Mining, pages 478–487, 2013.
[69] Zhen Liu, Deyu Tang, Jincheng Li, and Ruoyu Wang. Objective costsensitive-boosting-welm for handling multi class imbalance problem. In International Joint Conference on Neural Networks, pages 1975–1982. IEEE, 2017.
[70] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
[71] Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, and Sanjiv Kumar. Long-tail learning via logit adjustment. In International Conference on Learning Representations, 2021.
[72] M’Hamed B. Abidine and Belkacem Fergani. A new multi-class WSVM classification to imbalanced human activity dataset. Journal of Computers, 9(7):1560–1565, 2014.
[73] Bilal Mirza, Zhiping Lin, Jiuwen Cao, and Xiaoping Lai. Voting based weighted online sequential extreme learning machine for imbalance multiclass classification. In 2015 IEEE International Symposium on Circuits and Systems (ISCAS), pages 565–568, 2015.
[74] Bilal Mirza, Zhiping Lin, and Kar-Ann Toh. Weighted online sequential extreme learning machine for class imbalance learning. Neural processing letters, 38(3):465–486, 2013.
[75] Frederick Mosteller and John W Tukey. Data analysis, including statistics. Handbook of social psychology, 2:80–203, 1968.
[76] Sankha Subhra Mullick, Shounak Datta, and Swagatam Das. Generative adversarial minority oversampling. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1695–1704, 2019.
[77] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[78] S. Pouyanfar, S. Chen, and M. Shyu. Deep spatio-temporal representation learning for multi-class imbalanced data classification. In 2018 IEEE International Conference on Information Reuse and Integration (IRI), pages 386–393, July 2018.
[79] Samira Pouyanfar, Shu-Ching Chen, and Mei-Ling Shyu. Deep spatiotemporal representation learning for multi-class imbalanced data classification. In International Conference on Information Reuse and Integration, pages 386–393. IEEE, 2018.
[80] Part Pramokchon and Punpiti Piamsa-nga. Reducing Effects of Class Imbalance Distribution in Multi-class Text Categorization. Springer International Publishing, 2014.
[81] Jiongming Qin, Cong Wang, Qinhong Zou, Yubin Sun, and Bin Chen. Active learning with extreme learning machine for online imbalanced multiclass classification. Knowledge-Based Systems, 231:107385, 2021.
[82] Jiawei Ren, Cunjun Yu, Shunan Sheng, Xiao Ma, Haiyu Zhao, Shuai Yi, and Hongsheng Li. Balanced meta-softmax for long-tailed visual recognition. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 4175–4186. Curran Associates, Inc., 2020.
[83] Siqi Ren, Wen Zhu, Bo Liao, Zeng Li, Peng Wang, Keqin Li, Min Chen, and Zejun Li. Selection-based resampling ensemble algorithm for nonstationary imbalanced stream data learning. Knowledge-Based Systems, 163:705–722, 2019.
[84] Ryan Rifkin and Aldebaro Klautau. In defense of one-vs-all classification. Journal of Machine Learning Research, 5:101–141, 2004.
[85] Chris Seiffert, Taghi M Khoshgoftaar, and Jason Van Hulse. Hybrid sampling for imbalanced data. Integrated Computer-Aided Engineering, 16(3):193–210, 2009.
[86] Chris Seiffert, Taghi M. Khoshgoftaar, Jason Van Hulse, and Amri Napolitano. RUSBoost: A hybrid approach to alleviating class imbalance. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 40(1):185–197, 2010.
[87] Anish Shah, Eashan Kadam, Hena Shah, Sameer Shinde, and Sandip Shingade. Deep residual networks with exponential linear unit. In Proceedings of the third international symposium on computer vision and the internet, pages 59–65, 2016.
[88] Parinaz Sobhani, Herna Viktor, and Stan Matwin. Learning from imbalanced data using ensemble methods and cluster-based undersampling. In International Workshop on New Frontiers in Mining Complex Patterns, pages 69–83. Springer, 2014.
[89] Vinicius MA Souza, Denis M dos Reis, Andre G Maletzke, and Gustavo EAPA Batista. Challenges in benchmarking stream learning algorithms with real-world data. Data Mining and Knowledge Discovery, 34:1805–1858, 2020.
[90] Jerzy Stefanowski. Classification of multi-class imbalanced data: Data difficulty factors and selected methods for improving classifiers. In International Joint Conference on Rough Sets, pages 57–72. Springer International Publishing, 2021.
[91] Yanmin Sun, Mohamed S Kamel, and Yang Wang. Boosting for learning multiple classes with imbalanced class distribution. In International Conference on Data Mining, pages 592–602. IEEE, 2006.
[92] Aboozar Taherkhani, Georgina Cosma, and T Martin McGinnity. AdaBoost-CNN: An adaptive boosting algorithm for convolutional neural networks to classify multi-class imbalanced datasets using transfer learning. Neurocomputing, 404:351–366, 2020.
[93] Jingru Tan, Changbao Wang, Buyu Li, Quanquan Li, Wanli Ouyang, Changqing Yin, and Junjie Yan. Equalization loss for long-tailed object recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11662–11671, 2020.
[94] Duc Tran, Hieu Mac, Van Tong, Hai Anh Tran, and Linh Giang Nguyen. A lstm based framework for handling multiclass imbalance in dga botnet detection. Neurocomputing, 275:2401–2413, 2018.
[95] Hamed Valizadegan, Rong Jin, and Anil K Jain. Semi-supervised boosting for multi-class classification. In Machine Learning and Knowledge Discovery in Databases, pages 522–537, 2008.
[96] Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The inaturalist species classification and detection dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8769–8778, 2018.
[97] Chi-Man Vong, Jie Du, Chi-Man Wong, and Jiu-Wen Cao. Postboosting using extended G-mean for online sequential multiclass imbalance learning. IEEE Transactions on Neural Networks and Learning Systems, 29(12):6163–6177, 2018.
[98] Boyu Wang and Joelle Pineau. Online bagging and boosting for imbalanced data streams. IEEE Transactions on Knowledge and Data Engineering, 28(12):3353–3366, 2016.
[99] Shuo Wang, Huanhuan Chen, and Xin Yao. Negative correlation learning for classification ensembles. In International Joint Conference on Neural Networks, pages 1–8. IEEE, 2010.
[100] Shuo Wang, Leandro L. Minku, and Xin Yao. Resampling-based ensemble methods for online class imbalance learning. IEEE Transactions on Knowledge and Data Engineering, 27(5):1356–1368, 2015.
[101] Shuo Wang, Leandro L Minku, and Xin Yao. Dealing with multiple classes in online class imbalance learning. In IJCAI, pages 2118–2124, 2016.
[102] Shuo Wang, Leandro L. Minku, and Xin Yao. A systematic study of online class imbalance learning with concept drift. IEEE Transactions on Neural Networks and Learning Systems, 29(10):4802–4821, 2018.
[103] Shuo Wang and Xin Yao. Multiclass imbalance problems: Analysis and potential solutions. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 42(4):1119–1130, 2012.
[104] Shuo Wang and Xin Yao. Relationships between diversity of classification ensembles and single-class performance measures. IEEE Transactions on Knowledge and Data Engineering, 25(1):206–219, 2013.
[105] Xinyue Wang, Yilin Lyu, and Liping Jing. Deep generative model for robust imbalance classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14124–14133, 2020.
[106] Frank Wilcoxon. Individual comparisons by ranking methods. In Breakthroughs in statistics, pages 196–202. Springer, 1992.
[107] Liuyu Xiang, Guiguang Ding, and Jungong Han. Learning from multiple experts: Self-paced knowledge distillation for long-tailed classification. In European Conference on Computer Vision, pages 247–263. Springer, 2020.
[108] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
[109] Hong-Ming Yang, Xu-Yao Zhang, Fei Yin, and Cheng-Lin Liu. Robust classification with convolutional prototype learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3474–3482, 2018.
[110] Xuebing Yang, Qiuming Kuang, Wensheng Zhang, and Guoping Zhang. AMDO: An over-sampling technique for multi-class imbalanced problems. IEEE Transactions on Knowledge and Data Engineering, 30(9):1672–1685, 2018.
[111] Yifan Zhang, Bingyi Kang, Bryan Hooi, Shuicheng Yan, and Jiashi Feng. Deep long-tailed learning: A survey. arXiv preprint arXiv:2110.04596, 2021.
[112] Zhongliang Zhang, Bartosz Krawczyk, Salvador Garcia, Alejandro Rosales-Pérez, and Francisco Herrera. Empowering one-vs-one decomposition with ensemble learning for multi-class imbalanced data. Knowledge-Based Systems, pages 251–263, 2016.
[113] Xing-Ming Zhao, Xin Li, Luonan Chen, and Kazuyuki Aihara. Protein classification with imbalanced data. Proteins: Structure, Function, and Bioinformatics, 70(4):1125–1132, 2008.
[114] Boyan Zhou, Quan Cui, Xiu-Shen Wei, and Zhao-Min Chen. BBN: Bilateral-branch network with cumulative learning for long-tailed visual recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9719–9728, 2020.
[115] Zhi-Hua Zhou. Machine Learning (Chinese Version). Tsinghua University Press, 2016.