[1] LIEM C, LANGER M, DEMETRIOU A, et al. Psychology meets machine learning: Interdisciplinary perspectives on algorithmic job candidate screening[M]//Explainable and interpretable models in computer vision and machine learning. Springer, 2018: 197-253.
[2] FUJIYOSHI H, HIRAKAWA T, YAMASHITA T. Deep learning-based image recognition for autonomous driving[J]. IATSS Research, 2019, 43(4): 244-252.
[3] WU Y, SCHUSTER M, CHEN Z, et al. Google’s neural machine translation system: Bridging the gap between human and machine translation[J]. arXiv preprint arXiv:1609.08144, 2016.
[4] MUKERJEE A, BISWAS R, DEB K, et al. Multi–objective evolutionary algorithms for the risk–return trade–off in bank loan management[J]. International Transactions in Operational Research, 2002, 9(5): 583-597.
[5] LI L, LASSITER T, OH J, et al. Algorithmic hiring in practice: Recruiter and HR professional’s perspectives on AI use in hiring[C]//AIES ’21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, 2021: 166–176.
[6] BRENNAN T, DIETERICH W, EHRET B. Evaluating the predictive validity of the COMPAS risk and needs assessment system[J]. Criminal Justice and Behavior, 2009, 36(1): 21-40.
[7] MEHRABI N, MORSTATTER F, SAXENA N, et al. A survey on bias and fairness in machine learning[J]. ACM Computing Surveys (CSUR), 2021, 54(6): 1-35.
[8] CORBETT-DAVIES S, PIERSON E, FELLER A, et al. Algorithmic decision making and the cost of fairness[C]//Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2017: 797-806.
[9] HUANG C W, ZHANG Z Q, MAO B F, et al. An overview of artificial intelligence ethics[J]. IEEE Transactions on Artificial Intelligence, 2022, Early Access: 1-21.
[10] 刘文炎, 沈楚云, 王祥丰, 金博, 卢兴见, 王晓玲, 查宏远, 何积丰. 可信机器学习的公平性综述[J]. 软件学报, 2021, 32(5): 1404-1426.
[11] CATON S, HAAS C. Fairness in machine learning: A survey[J]. arXiv preprint arXiv:2010.04053, 2020.
[12] PESSACH D, SHMUELI E. A review on fairness in machine learning[J]. ACM Computing Surveys (CSUR), 2022, 55(3): 1-44.
[13] FELDMAN M, FRIEDLER S A, MOELLER J, et al. Certifying and removing disparate impact[C]//Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2015: 259-268.
[14] KAMIRAN F, CALDERS T. Data preprocessing techniques for classification without discrimination[J]. Knowledge and Information Systems, 2012, 33(1): 1-33.
[15] KAMISHIMA T, AKAHO S, ASOH H, et al. Fairness-aware classifier with prejudice remover regularizer[C]//Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2012: 35-50.
[16] GOH G, COTTER A, GUPTA M, et al. Satisfying real-world goals with dataset constraints[J]. Advances in Neural Information Processing Systems, 2016, 29.
[17] ZHANG B H, LEMOINE B, MITCHELL M. Mitigating unwanted biases with adversarial learning[C]//Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. 2018: 335-340.
[18] HARDT M, PRICE E, SREBRO N. Equality of opportunity in supervised learning[C]//Advances in Neural Information Processing Systems. 2016: 3315-3323.
[19] MENON A K, WILLIAMSON R C. The cost of fairness in binary classification[C]//Conference on Fairness, Accountability and Transparency. PMLR, 2018: 107-118.
[20] RATHORE A, DEV S, PHILLIPS J M, et al. VERB: Visualizing and interpreting bias mitigation techniques for word representations[J]. arXiv preprint arXiv:2104.02797, 2021.
[21] CHEN Y L, JOO J. Understanding and mitigating annotation bias in facial expression recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2021: 14980-14991.
[22] WU H L, MA C, MITRA B, et al. Multi-FR: A multi-objective optimization method for achieving two-sided fairness in E-commerce recommendation[J]. arXiv preprint arXiv:2105.02951, 2021.
[23] XU D P, YUAN S H, ZHANG L, et al. FairGAN+: Achieving fair data generation and classification through generative adversarial nets[C]//2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019: 1401-1406.
[24] DWORK C, IMMORLICA N, KALAI A T, et al. Decoupled classifiers for group-fair and efficient machine learning[C]//Conference on Fairness, Accountability and Transparency. PMLR, 2018: 119-133.
[25] BELLAMY R K, DEY K, HIND M, et al. AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias[J]. IBM Journal of Research and Development, 2019, 63(4/5): 4-1.
[26] AHN Y, LIN Y R. Fairsight: Visual analytics for fairness in decision making[J]. IEEE Transactions on Visualization and Computer Graphics, 2019, 26(1): 1086-1095.
[27] SALEIRO P, KUESTER B, HINKSON L, et al. Aequitas: A bias and fairness audit toolkit[J]. arXiv preprint arXiv:1811.05577, 2018.
[28] ZHANG Q Q, LIU J L, ZHANG Z Q, et al. Fairer machine learning through multi-objective evolutionary learning[C]//International Conference on Artificial Neural Networks. Springer, 2021: 111-123.
[29] VERMA S, RUBIN J. Fairness definitions explained[C]//2018 IEEE/ACM International Workshop on Software Fairness (FairWare). 2018: 1-7.
[30] DWORK C, HARDT M, PITASSI T, et al. Fairness through awareness[C]//Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. 2012: 214-226.
[31] KOHAVI R. Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid[C]//KDD’96: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining. AAAI Press, 1996: 202-207.
[32] BERK R, HEIDARI H, JABBARI S, et al. Fairness in criminal justice risk assessments: The state of the art[J]. Sociological Methods & Research, 2021, 50(1): 3-44.
[33] CHOULDECHOVA A. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments[J]. Big Data, 2017, 5(2): 153-163.
[34] GALHOTRA S, BRUN Y, MELIOU A. Fairness testing: Testing software for discrimination [C]//Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. 2017: 498-510.
[35] SPEICHER T, HEIDARI H, GRGIC-HLACA N, et al. A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices[C]//Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018: 2239-2248.
[36] WAN M Y, ZHA D C, LIU N H, et al. In-processing modeling techniques for machine learning fairness: A survey[J]. ACM Transactions on Knowledge Discovery from Data (TKDD), 2022.
[37] KAMISHIMA T, AKAHO S, SAKUMA J. Fairness-aware learning through regularization approach[C]//2011 IEEE 11th International Conference on Data Mining Workshops. IEEE, 2011: 643-650.
[38] DI STEFANO P G, HICKEY J M, VASILEIOU V. Counterfactual fairness: Removing direct effects through regularization[J]. arXiv preprint arXiv:2002.10774, 2020.
[39] KUSNER M, LOFTUS J, RUSSELL C, et al. Counterfactual fairness[C]//NIPS’17: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, California, USA: Curran Associates Inc., 2017: 4069–4079.
[40] ZAFAR M B, VALERA I, RODRIGUEZ M, et al. Fairness constraints: A flexible approach for fair classification[J]. The Journal of Machine Learning Research, 2019, 20(1): 2737-2778.
[41] ZAFAR M B, VALERA I, RODRIGUEZ M G, et al. Fairness constraints: Mechanisms for fair classification[C]//SINGH A, ZHU J. Proceedings of Machine Learning Research: volume 54 Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. PMLR, 2017: 962-970.
[42] WOODWORTH B, GUNASEKAR S, OHANNESSIAN M I, et al. Learning non-discriminatory predictors[C]//KALE S, SHAMIR O. Proceedings of Machine Learning Research: volume 65 Proceedings of the 2017 Conference on Learning Theory. PMLR, 2017: 1920-1953.
[43] COTTER A, JIANG H, GUPTA M, et al. Optimization with non-differentiable constraints with applications to fairness, recall, churn, and other goals[J]. The Journal of Machine Learning Research, 2019, 20(172): 1-59.
[44] MADRAS D, CREAGER E, PITASSI T, et al. Learning adversarially fair and transferable representations[C]//International Conference on Machine Learning. PMLR, 2018: 3384-3393.
[45] BEUTEL A, CHEN J L, ZHAO Z, et al. Data decisions and theoretical implications when adversarially learning fair representations[J]. arXiv preprint arXiv:1707.00075, 2017.
[46] ADEL T, VALERA I, GHAHRAMANI Z, et al. One-network adversarial fairness[C]//Proceedings of the AAAI Conference on Artificial Intelligence: volume 33. 2019: 2412-2420.
[47] LI X X, CUI Z T, WU Y F, et al. Estimating and improving fairness with adversarial learning[J]. arXiv preprint arXiv:2103.04243, 2021.
[48] ZHANG Q Q, LIU J L, ZHANG Z Q, et al. Mitigating unfairness via evolutionary multi-objective ensemble learning[J]. IEEE Transactions on Evolutionary Computation, 2022, Early Access: 1-15.
[49] LIU S Y, VICENTE L N. Accuracy and fairness trade-offs in machine learning: A stochastic multi-objective approach[J]. Computational Management Science, 2022: 1-25.
[50] LIU S Y, VICENTE L N. The stochastic multi-gradient algorithm for multi-objective optimization and its application to supervised machine learning[J]. arXiv preprint arXiv:1907.04472, 2019.
[51] YAO X. Evolving artificial neural networks[J]. Proceedings of the IEEE, 1999, 87(9): 1423-1447.
[52] GUPTA M R, COTTER A, FARD M M, et al. Proxy fairness[J]. arXiv preprint arXiv:1806.11212, 2018.
[53] LI M Q, YAO X. Quality evaluation of solution sets in multiobjective optimisation: A survey[J]. ACM Computing Surveys (CSUR), 2019, 52(2): 1-38.
[54] DEB K, PRATAP A, AGARWAL S, et al. A fast and elitist multiobjective genetic algorithm: NSGA-II[J]. IEEE Transactions on Evolutionary Computation, 2002, 6(2): 182-197.
[55] WANG H D, JIAO L C, YAO X. Two_Arch2: An improved two-archive algorithm for many-objective optimization[J]. IEEE Transactions on Evolutionary Computation, 2014, 19(4): 524-541.
[56] LI B, TANG K, LI J, et al. Stochastic ranking algorithm for many-objective optimization based on multiple indicators[J]. IEEE Transactions on Evolutionary Computation, 2016, 20(6): 924-938.
[57] MARLER R T, ARORA J S. Survey of multi-objective optimization methods for engineering[J]. Structural and Multidisciplinary Optimization, 2004, 26: 369-395.
[58] MEI Y, NGUYEN S, XUE B, et al. An efficient feature selection algorithm for evolving job shop scheduling rules with genetic programming[J]. IEEE Transactions on Emerging Topics in Computational Intelligence, 2017, 1(5): 339-353.
[59] GONG Z C, CHEN H H, YUAN B, et al. Multiobjective learning in the model space for time series classification[J]. IEEE Transactions on Cybernetics, 2019, 49(3): 918-932.
[60] MINKU L L, YAO X. Software effort estimation as a multiobjective learning problem[J]. ACM Transactions on Software Engineering and Methodology (TOSEM), 2013, 22(4): 1-32.
[61] RUNARSSON T, YAO X. Stochastic ranking for constrained evolutionary optimization[J]. IEEE Transactions on Evolutionary Computation, 2000, 4(3): 284-294.
[62] PESSACH D, SHMUELI E. Algorithmic fairness[J]. arXiv preprint arXiv:2001.09784, 2020.
[63] FRIEDLER S A, SCHEIDEGGER C, VENKATASUBRAMANIAN S, et al. A comparative study of fairness-enhancing interventions in machine learning[C]//Proceedings of the Conference on Fairness, Accountability, and Transparency. 2019: 329-338.
[64] KINGMA D P, BA J. Adam: A method for stochastic optimization[J]. arXiv preprint arXiv:1412.6980, 2014.
[65] ZHANG Q F, LI H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition[J]. IEEE Transactions on Evolutionary Computation, 2007, 11(6): 712-731.
[66] ZITZLER E, LAUMANNS M, THIELE L. SPEA2: Improving the strength Pareto evolutionary algorithm[R]. TIK-Report 103, Computer Engineering and Networks Laboratory (TIK), ETH Zurich, 2001.
[67] TIAN Y, CHENG R, ZHANG X Y, et al. Diversity assessment of multi-objective evolutionary algorithms: Performance metric and benchmark problems[J]. IEEE Computational Intelligence Magazine, 2019, 14(3): 61-74.
[68] PASZKE A, GROSS S, MASSA F, et al. Pytorch: An imperative style, high-performance deep learning library[J]. Advances in Neural Information Processing Systems, 2019, 32.
[69] WATKINS E A, MCKENNA M, CHEN J. The four-fifths rule is not disparate impact: A woeful tale of epistemic trespassing in algorithmic fairness[J]. arXiv preprint arXiv:2202.09519, 2022.
[70] YANG T, LINDER J, BOLCHINI D. DEEP: Design-oriented evaluation of perceived usability [J]. International Journal of Human-Computer Interaction, 2012, 28(5): 308-346.