[1] LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521(7553): 436-444.
[2] NGUYEN A, YOSINSKI J, CLUNE J. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images[C/OL]//2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2015: 427-436. DOI: 10.1109/CVPR.2015.7298640.
[3] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[J]. arXiv preprint arXiv:1412.6572, 2014.
[4] LI J, MAO B, LIANG Z, et al. Trust and trustworthiness: What they are and how to achieve them[C]//International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). IEEE, 2021: 711-717.
[5] HUANG X, KROENING D, RUAN W, et al. A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability[J]. Computer Science Review, 2020, 37: 100270.
[6] BRUNDAGE M, AVIN S, WANG J, et al. Toward trustworthy AI development: Mechanisms for supporting verifiable claims[J]. arXiv preprint arXiv:2004.07213, 2020.
[7] TSIPRAS D, SANTURKAR S, ENGSTROM L, et al. Robustness may be at odds with accuracy[J]. arXiv preprint arXiv:1805.12152, 2018.
[8] KONG L, SUN J, ZHANG C. SDE-Net: Equipping deep neural networks with uncertainty estimates[C]//International Conference on Machine Learning. 2020.
[9] ABDELZAD V, CZARNECKI K, SALAY R, et al. Detecting out-of-distribution inputs in deep neural networks using an early-layer output[J]. arXiv preprint arXiv:1910.10307, 2019.
[10] VERNEKAR S, GAURAV A, DENOUDEN T, et al. Analysis of confident-classifiers for out-of-distribution detection[J]. arXiv preprint arXiv:1904.12220, 2019.
[11] RAN X, XU M, MEI L, et al. Detecting out-of-distribution samples via variational auto-encoder with reliable uncertainty estimation[J]. arXiv preprint arXiv:2007.08128, 2020.
[12] GONG D, LIU L, LE V, et al. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 1705-1714.
[13] ILYAS A, SANTURKAR S, TSIPRAS D, et al. Adversarial examples are not bugs, they are features[C]//Advances in Neural Information Processing Systems. 2019: 125-136.
[14] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[J]. arXiv preprint arXiv:1706.06083, 2017.
[15] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]//2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017: 39-57.
[16] CROCE F, ANDRIUSHCHENKO M, SEHWAG V, et al. RobustBench: A standardized adversarial robustness benchmark[C/OL]//Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. 2021. https://openreview.net/forum?id=SSKZPJCt7B.
[17] YU F, QIN Z, LIU C, et al. Interpreting and evaluating neural network robustness[J]. arXiv preprint arXiv:1905.04270, 2019.
[18] HEIN M, ANDRIUSHCHENKO M. Formal guarantees on the robustness of a classifier against adversarial manipulation[C]//Advances in Neural Information Processing Systems. 2017: 2266-2276.
[19] WENG T W, ZHANG H, CHEN P Y, et al. Evaluating the robustness of neural networks: An extreme value theory approach[J]. arXiv preprint arXiv:1801.10578, 2018.
[20] SHAFAHI A, NAJIBI M, GHIASI M A, et al. Adversarial training for free![J]. Advances in Neural Information Processing Systems, 2019, 32.
[21] WONG E, RICE L, KOLTER J Z. Fast is better than free: Revisiting adversarial training[C]//International Conference on Learning Representations. 2020.
[22] CARLINI N, WAGNER D. Adversarial examples are not easily detected: Bypassing ten detection methods[C]//Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. 2017: 3-14.
[23] LU J, ISSARANON T, FORSYTH D. SafetyNet: Detecting and rejecting adversarial examples robustly[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 446-454.
[24] BAKER B, GUPTA O, NAIK N, et al. Designing neural network architectures using reinforcement learning[J]. arXiv preprint arXiv:1611.02167, 2016.
[25] ZOPH B, LE Q V. Neural architecture search with reinforcement learning[J]. arXiv preprint arXiv:1611.01578, 2016.
[26] LIU H, SIMONYAN K, YANG Y. DARTS: Differentiable architecture search[J]. arXiv preprint arXiv:1806.09055, 2018.
[27] LUO R, TIAN F, QIN T, et al. Neural architecture optimization[C]//Advances in Neural Information Processing Systems. 2018: 7816-7827.
[28] YAO X, LIU Y, LIN G. Evolutionary programming made faster[J]. IEEE Transactions on Evolutionary Computation, 1999, 3(2): 82-102.
[29] YAO X. Evolving artificial neural networks[J]. Proceedings of the IEEE, 1999, 87(9): 1423-1447.
[30] YING C, KLEIN A, REAL E, et al. NAS-Bench-101: Towards reproducible neural architecture search[J]. arXiv preprint arXiv:1902.09635, 2019.
[31] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.
[32] LI L, TALWALKAR A. Random search and reproducibility for neural architecture search[C]//Uncertainty in Artificial Intelligence. PMLR, 2020: 367-377.
[33] LIU Y, SUN Y, XUE B, et al. A survey on evolutionary neural architecture search[J]. arXiv preprint arXiv:2008.10937, 2020.
[34] REAL E, MOORE S, SELLE A, et al. Large-scale evolution of image classifiers[J]. arXiv preprint arXiv:1703.01041, 2017.
[35] XIE L, YUILLE A. Genetic CNN[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 1379-1388.
[36] ELSKEN T, METZEN J H, HUTTER F. Neural architecture search: A survey[J]. arXiv preprint arXiv:1808.05377, 2018.
[37] SUN Y, XUE B, ZHANG M, et al. Automatically designing CNN architectures using the genetic algorithm for image classification[J]. IEEE Transactions on Cybernetics, 2020, 50(9): 3840-3854.
[38] SUN Y, XUE B, ZHANG M, et al. Automatically evolving CNN architectures based on blocks[J]. arXiv preprint arXiv:1810.11875, 2018.
[39] SUGANUMA M, SHIRAKAWA S, NAGAO T. A genetic programming approach to designing convolutional neural network architectures[C]//Proceedings of the Genetic and Evolutionary Computation Conference. 2017: 497-504.
[40] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4700-4708.
[41] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2818-2826.
[42] DONG X, YANG Y. NAS-Bench-102: Extending the scope of reproducible neural architecture search[J]. arXiv preprint arXiv:2001.00326, 2020.
[43] LIU H, SIMONYAN K, VINYALS O, et al. Hierarchical representations for efficient architecture search[J]. arXiv preprint arXiv:1711.00436, 2017.
[44] REAL E, AGGARWAL A, HUANG Y, et al. Regularized evolution for image classifier architecture search[C]//Proceedings of the AAAI Conference on Artificial Intelligence: volume 33. 2019: 4780-4789.
[45] BAKER B, GUPTA O, RASKAR R, et al. Accelerating neural architecture search using performance prediction[J]. arXiv preprint arXiv:1705.10823, 2017.
[46] LUO R, TAN X, WANG R, et al. Neural architecture search with GBDT[J]. arXiv preprint arXiv:2007.04785, 2020.
[47] WEN W, LIU H, CHEN Y, et al. Neural predictor for neural architecture search[C]//European Conference on Computer Vision. Springer, 2020: 660-676.
[48] JIN Y. A comprehensive survey of fitness approximation in evolutionary computation[J]. Soft Computing, 2005, 9(1): 3-12.
[49] BRINKER K. Incorporating diversity in active learning with support vector machines[C]//Proceedings of the 20th International Conference on Machine Learning (ICML-03). 2003: 59-66.
[50] SUDHOLT D. The benefits of population diversity in evolutionary algorithms: a survey of rigorous runtime analyses[M]//Theory of Evolutionary Computation. Springer, 2020: 359-404.
[51] CHEN T, HE T, BENESTY M, et al. XGBoost: Extreme gradient boosting[J]. R Package Version 0.4-2, 2015: 1-4.
[52] WHITE C, NEISWANGER W, NOLEN S, et al. A study on encodings for neural architecture search[J]. Advances in Neural Information Processing Systems, 2020, 33: 20309-20319.
[53] YING C. Enumerating unique computational graphs via an iterative graph invariant[J]. arXiv preprint arXiv:1902.06192, 2019.
[54] AMODEI D, OLAH C, STEINHARDT J, et al. Concrete problems in AI safety[J/OL]. CoRR, 2016, abs/1606.06565. http://arxiv.org/abs/1606.06565.
[55] GAL Y, GHAHRAMANI Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning[C]//International Conference on Machine Learning. PMLR, 2016: 1050-1059.
[56] ABDAR M, POURPANAH F, HUSSAIN S, et al. A review of uncertainty quantification in deep learning: Techniques, applications and challenges[J]. Information Fusion, 2021, 76: 243-297.
[57] GAWLIKOWSKI J, TASSI C R N, ALI M, et al. A survey of uncertainty in deep neural networks[J]. arXiv preprint arXiv:2107.03342, 2021.
[58] KENDALL A, GAL Y. What uncertainties do we need in Bayesian deep learning for computer vision?[J]. Advances in Neural Information Processing Systems, 2017, 30.
[59] LAKSHMINARAYANAN B, PRITZEL A, BLUNDELL C. Simple and scalable predictive uncertainty estimation using deep ensembles[J]. Advances in Neural Information Processing Systems, 2017, 30.
[60] NORTHCUTT C G, ATHALYE A, MUELLER J. Pervasive label errors in test sets destabilize machine learning benchmarks[C]//Proceedings of the 35th Conference on Neural Information Processing Systems Track on Datasets and Benchmarks. 2021.
[61] CORDEIRO F R, CARNEIRO G. A survey on deep learning with noisy labels: How to train your model when you cannot trust on the annotations?[C]//2020 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, 2020: 9-16.
[62] ALGAN G, ULUSOY I. Image classification with deep learning in the presence of noisy labels: A survey[J]. Knowledge-Based Systems, 2021, 215: 106771.
[63] KARIMI D, DOU H, WARFIELD S K, et al. Deep learning with noisy labels: Exploring techniques and remedies in medical image analysis[J]. Medical Image Analysis, 2020, 65: 101759.
[64] VALDENEGRO-TORO M. I find your lack of uncertainty in computer vision disturbing[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 1263-1272.
[65] SEOH R. Qualitative analysis of Monte Carlo dropout[J]. arXiv preprint arXiv:2007.01720, 2020.
[66] DER KIUREGHIAN A, DITLEVSEN O. Aleatory or epistemic? Does it matter?[J]. Structural Safety, 2009, 31(2): 105-112.
[67] HÜLLERMEIER E, WAEGEMAN W. Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods[J]. Machine Learning, 2021, 110(3): 457-506.
[68] VAN AMERSFOORT J, SMITH L, TEH Y W, et al. Uncertainty estimation using a single deep deterministic neural network[C]//International Conference on Machine Learning. PMLR, 2020: 9690-9700.
[69] DENKER J, LECUN Y. Transforming neural-net output levels to probability distributions[J]. Advances in Neural Information Processing Systems, 1990, 3.
[70] NEAL R M. Bayesian learning for neural networks: volume 118[M]. Springer Science & Business Media, 2012.
[71] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. The Journal of Machine Learning Research, 2014, 15(1): 1929-1958.
[72] HENDRYCKS D, GIMPEL K. A baseline for detecting misclassified and out-of-distribution examples in neural networks[J]. arXiv preprint arXiv:1610.02136, 2016.
[73] SCHWAIGER A, SINHAMAHAPATRA P, GANSLOSER J, et al. Is uncertainty quantification in deep learning sufficient for out-of-distribution detection?[C]//AISafety@IJCAI. 2020.
[74] SALVADOR T, VOLETI V, IANNANTUONO A, et al. Improved predictive uncertainty using corruption-based calibration[J]. Stat, 2021, 1050: 7.
[75] SHIN W, HA J W, LI S, et al. Which strategies matter for noisy label classification? Insight into loss and uncertainty[J]. arXiv preprint arXiv:2008.06218, 2020.
[76] YU X, LIU T, GONG M, et al. Transfer learning with label noise[J]. arXiv preprint arXiv:1707.09724, 2017.
[77] SPETH J, HAND E M. Automated label noise identification for facial attribute recognition[C]//CVPR Workshops. 2019: 25-28.
[78] LECUN Y. The MNIST database of handwritten digits[J]. http://yann.lecun.com/exdb/mnist/, 1998.
[79] KRIZHEVSKY A, HINTON G, et al. Learning multiple layers of features from tiny images[J]. Technical Report, University of Toronto, 2009.
[80] TAJWAR F, KUMAR A, XIE S M, et al. No true state-of-the-art? OOD detection methods are inconsistent across datasets[J]. arXiv preprint arXiv:2109.05554, 2021.
[81] GOEL P, CHEN L. On the robustness of Monte Carlo dropout trained with noisy labels[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 2219-2228.
[82] HAN B, YAO Q, LIU T, et al. A survey of label-noise representation learning: Past, present and future[J]. arXiv preprint arXiv:2011.04406, 2020.
[83] HAN B, YAO Q, YU X, et al. Co-teaching: Robust training of deep neural networks with extremely noisy labels[J]. Advances in Neural Information Processing Systems, 2018, 31.
[84] CHEN P, LIAO B B, CHEN G, et al. Understanding and utilizing deep neural networks trained with noisy labels[C]//International Conference on Machine Learning. PMLR, 2019: 1062-1070.
[85] CHANG H S, LEARNED-MILLER E, MCCALLUM A. Active bias: Training more accurate neural networks by emphasizing high variance samples[J]. Advances in Neural Information Processing Systems, 2017, 30.
[86] KÖHLER J M, AUTENRIETH M, BELUCH W H. Uncertainty based detection and relabeling of noisy image labels[C]//CVPR Workshops. 2019: 33-37.
[87] NORTHCUTT C G, WU T, CHUANG I L. Learning with confident examples: Rank pruning for robust classification with noisy labels[C/OL]//UAI’17: Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence. Sydney, Australia: AUAI Press, 2017. http://auai.org/uai2017/proceedings/papers/35.pdf.
[88] SONG H, KIM M, LEE J G. SELFIE: Refurbishing unclean samples for robust deep learning[C]//International Conference on Machine Learning. PMLR, 2019: 5907-5915.
[89] GHOSH A, KUMAR H, SASTRY P. Robust loss functions under label noise for deep neural networks[C]//Proceedings of the AAAI Conference on Artificial Intelligence: volume 31. 2017.
[90] ZHANG Z, SABUNCU M. Generalized cross entropy loss for training deep neural networks with noisy labels[J]. Advances in Neural Information Processing Systems, 2018, 31.
[91] XIAO H, RASUL K, VOLLGRAF R. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms[J]. arXiv preprint arXiv:1708.07747, 2017.
[92] YU F, SEFF A, ZHANG Y, et al. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop[J]. arXiv preprint arXiv:1506.03365, 2015.
[93] DENG J, DONG W, SOCHER R, et al. ImageNet: A large-scale hierarchical image database[C]//2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009: 248-255.
[94] NETZER Y, WANG T, COATES A, et al. Reading digits in natural images with unsupervised feature learning[C]//NIPS Workshop on Deep Learning and Unsupervised Feature Learning. 2011.
[95] EL-SAWY A, EL-BAKRY H, LOEY M. CNN for handwritten Arabic digits recognition based on LeNet-5[C]//International Conference on Advanced Intelligent Systems and Informatics. Springer, 2016: 566-575.
[96] HENDRYCKS D, MAZEIKA M, DIETTERICH T. Deep anomaly detection with outlier exposure[J]. arXiv preprint arXiv:1812.04606, 2018.
[97] LIANG S, LI Y, SRIKANT R. Enhancing the reliability of out-of-distribution image detection in neural networks[J]. arXiv preprint arXiv:1706.02690, 2017.
[98] LEE K, LEE H, LEE K, et al. Training confidence-calibrated classifiers for detecting out-of-distribution samples[J]. arXiv preprint arXiv:1711.09325, 2017.
[99] REN J, LIU P J, FERTIG E, et al. Likelihood ratios for out-of-distribution detection[J]. Advances in Neural Information Processing Systems, 2019, 32.
[100] SHVETSOVA N, BAKKER B, FEDULOVA I, et al. Anomaly detection in medical imaging with deep perceptual autoencoders[J]. IEEE Access, 2021, 9: 118571-118583.
[101] SARAFIJANOVIC-DJUKIC N, DAVIS J. Fast distance-based anomaly detection in images using an inception-like autoencoder[C]//International Conference on Discovery Science. Springer, 2019: 493-508.
[102] XU D, RICCI E, YAN Y, et al. Learning deep representations of appearance and motion for anomalous event detection[J]. arXiv preprint arXiv:1510.01553, 2015.
[103] ZHOU C, PAFFENROTH R C. Anomaly detection with robust deep autoencoders[C]//Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2017: 665-674.
[104] KINGMA D P, WELLING M. Auto-encoding variational bayes[J]. arXiv preprint arXiv:1312.6114, 2013.
[105] AN J, CHO S. Variational autoencoder based anomaly detection using reconstruction probability[J]. Special Lecture on IE, 2015, 2(1): 1-18.
[106] DENOUDEN T, SALAY R, CZARNECKI K, et al. Improving reconstruction autoencoder out-of-distribution detection with Mahalanobis distance[J]. arXiv preprint arXiv:1812.02765, 2018.
[107] SHANNON C E. A mathematical theory of communication[J]. ACM SIGMOBILE Mobile Computing and Communications Review, 2001, 5(1): 3-55.
[108] BERKHAHN F, KEYS R, OUERTANI W, et al. Augmenting variational autoencoders with sparse labels: A unified framework for unsupervised, semi-(un)supervised, and supervised learning[J]. arXiv preprint arXiv:1908.03015, 2019.