[1] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet Classification with Deep Convolutional Neural Networks[J]. Advances in Neural Information Processing Systems, 2012, 25.
[2] HINTON G, DENG L, YU D, et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups[J]. IEEE Signal Processing Magazine, 2012, 29(6): 82-97.
[3] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient Estimation of Word Representations in Vector Space[C]//International Conference on Learning Representations. 2013.
[4] JUMPER J, EVANS R, PRITZEL A, et al. Highly Accurate Protein Structure Prediction with AlphaFold[J]. Nature, 2021, 596(7873): 583-589.
[5] DAVIES A, VELIČKOVIĆ P, BUESING L, et al. Advancing Mathematics by Guiding Human Intuition with AI[J]. Nature, 2021, 600(7887): 70-74.
[6] DEGRAVE J, FELICI F, BUCHLI J, et al. Magnetic Control of Tokamak Plasmas through Deep Reinforcement Learning[J]. Nature, 2022, 602(7897): 414-419.
[7] GAL Y, GHAHRAMANI Z. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning[C]//International Conference on Machine Learning. 2016: 1050-1059.
[8] HABIB K. ODI Resume Report on Investigation PE 16-007 Concerning Tesla Automatic Vehicle Control Systems[R]. NHTSA Office of Defects Investigation, 2016.
[9] LIU X, LI Y, WU C, et al. Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network[C]//International Conference on Learning Representations. 2019.
[10] HOULSBY N, HUSZÁR F, GHAHRAMANI Z, et al. Bayesian Active Learning for Classification and Preference Learning[A]. 2011. arXiv: 1112.5745.
[11] KENDALL A, GAL Y. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?[C]//Advances in Neural Information Processing Systems. 2017: 5574-5584.
[12] GAL Y. Uncertainty in Deep Learning[D]. University of Cambridge, 2016.
[13] MACKAY D J. A Practical Bayesian Framework for Backpropagation Networks[J]. Neural Computation, 1992, 4(3): 448-472.
[14] NEAL R M. Bayesian Learning for Neural Networks: volume 118[M]. Springer Science & Business Media, 2012.
[15] WELLING M, TEH Y W. Bayesian Learning via Stochastic Gradient Langevin Dynamics[C]//International Conference on Machine Learning. 2011: 681-688.
[16] CHEN T, FOX E, GUESTRIN C. Stochastic Gradient Hamiltonian Monte Carlo[C]//International Conference on Machine Learning. 2014: 1683-1691.
[17] YAO J, PAN W, GHOSH S, et al. Quality of Uncertainty Quantification for Bayesian Neural Network Inference[A]. 2019. arXiv: 1906.09686.
[18] GRAVES A. Practical Variational Inference for Neural Networks[C]//Advances in Neural Information Processing Systems. 2011: 2348-2356.
[19] BLUNDELL C, CORNEBISE J, KAVUKCUOGLU K, et al. Weight Uncertainty in Neural Network[C]//International Conference on Machine Learning. 2015: 1613-1622.
[20] LOUIZOS C, WELLING M. Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors[C]//International Conference on Machine Learning. 2016: 1708-1716.
[21] LOUIZOS C, WELLING M. Multiplicative Normalizing Flows for Variational Bayesian Neural Networks[C]//International Conference on Machine Learning. 2017: 2218-2227.
[22] PAWLOWSKI N, BROCK A, LEE M C, et al. Implicit Weight Uncertainty in Neural Networks[A]. 2017. arXiv: 1711.01297.
[23] HERNÁNDEZ-LOBATO J M, ADAMS R. Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks[C]//International Conference on Machine Learning. 2015: 1861-1869.
[24] HERNÁNDEZ-LOBATO J M, LI Y, ROWLAND M, et al. Black-Box Alpha Divergence Minimization[C]//International Conference on Machine Learning. 2016: 1511-1520.
[25] KINGMA D P, SALIMANS T, WELLING M. Variational Dropout and the Local Reparameterization Trick[J]. Advances in Neural Information Processing Systems, 2015, 28.
[26] WILLIAMS C K. Computing with Infinite Networks[C]//Advances in Neural Information Processing Systems. 1997: 295-301.
[27] LAKSHMINARAYANAN B, PRITZEL A, BLUNDELL C. Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles[C]//Advances in Neural Information Processing Systems. 2017: 6402-6413.
[28] TAGASOVSKA N, LOPEZ-PAZ D. Frequentist Uncertainty Estimates for Deep Learning[A]. 2018. arXiv: 1811.00908.
[29] WENZEL F, SNOEK J, TRAN D, et al. Hyperparameter Ensembles for Robustness and Uncertainty Quantification[A]. 2020. arXiv: 2006.13570.
[30] HAVASI M, JENATTON R, FORT S, et al. Training Independent Subnetworks for Robust Prediction[C]//International Conference on Learning Representations. 2021.
[31] AMINI A, SCHWARTING W, SOLEIMANY A, et al. Deep Evidential Regression[J]. Advances in Neural Information Processing Systems, 2020, 33.
[32] SENSOY M, KAPLAN L, KANDEMIR M. Evidential Deep Learning to Quantify Classification Uncertainty[C]//Advances in Neural Information Processing Systems: volume 31. Curran Associates, Inc., 2018.
[33] VAN AMERSFOORT J, SMITH L, TEH Y W, et al. Uncertainty Estimation Using a Single Deep Deterministic Neural Network[C]//International Conference on Machine Learning. PMLR, 2020: 9690-9700.
[34] LIU J, LIN Z, PADHY S, et al. Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness[J]. Advances in Neural Information Processing Systems, 2020, 33: 7498-7512.
[35] VAN AMERSFOORT J, SMITH L, JESSON A, et al. On Feature Collapse and Deep Kernel Learning for Single Forward Pass Uncertainty[A]. 2021. arXiv: 2102.11409.
[36] MUKHOTI J, VAN AMERSFOORT J, TORR P H, et al. Deep Deterministic Uncertainty for Semantic Segmentation[A]. 2021. arXiv: 2111.00079.
[37] LITJENS G, KOOI T, BEJNORDI B E, et al. A Survey on Deep Learning in Medical Image Analysis[J]. Medical Image Analysis, 2017, 42: 60-88.
[38] NAIR T, PRECUP D, ARNOLD D L, et al. Exploring Uncertainty Measures in Deep Networks for Multiple Sclerosis Lesion Detection and Segmentation[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2018: 655-663.
[39] GHESU F C, GEORGESCU B, GIBSON E, et al. Quantifying and Leveraging Classification Uncertainty for Chest Radiograph Assessment[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2019: 676-684.
[40] KARIMI D, ZENG Q, MATHUR P, et al. Accurate and Robust Deep Learning-Based Segmentation of the Prostate Clinical Target Volume in Ultrasound Images[J]. Medical Image Analysis, 2019, 57: 186-196.
[41] KWON Y, WON J H, KIM B J, et al. Uncertainty Quantification Using Bayesian Neural Networks in Classification: Application to Biomedical Image Segmentation[J]. Computational Statistics & Data Analysis, 2020, 142: 106816.
[42] JOSKOWICZ L, COHEN D, CAPLAN N, et al. Inter-Observer Variability of Manual Contour Delineation of Structures in CT[J]. European Radiology, 2019, 29(3): 1391-1399.
[43] BECKER A S, CHAITANYA K, SCHAWKAT K, et al. Variability of Manual Segmentation of the Prostate in Axial T2-Weighted MRI: A Multi-Reader Study[J]. European Journal of Radiology, 2019, 121: 108716.
[44] WARFIELD S K, ZOU K H, WELLS W M. Simultaneous Truth and Performance Level Estimation (STAPLE): an Algorithm for the Validation of Image Segmentation[J]. IEEE Transactions on Medical Imaging, 2004, 23(7): 903-921.
[45] ASMAN A J, LANDMAN B A. Formulating Spatially Varying Performance in the Statistical Fusion Framework[J]. IEEE Transactions on Medical Imaging, 2012, 31(6): 1326-1336.
[46] CARDOSO M J, LEUNG K, MODAT M, et al. STEPS: Similarity and Truth Estimation for Propagated Segmentations and Its Application to Hippocampal Segmentation and Brain Parcelation[J]. Medical Image Analysis, 2013, 17(6): 671-684.
[47] AKHONDI-ASL A, HOYTE L, LOCKHART M E, et al. A Logarithmic Opinion Pool Based STAPLE Algorithm for the Fusion of Segmentations with Associated Reliability Weights[J]. IEEE Transactions on Medical Imaging, 2014, 33(10): 1997-2009.
[48] JOSKOWICZ L, COHEN D, CAPLAN N, et al. Automatic Segmentation Variability Estimation with Segmentation Priors[J]. Medical Image Analysis, 2018, 50: 54-64.
[49] KOHL S, ROMERA-PAREDES B, MEYER C, et al. A Probabilistic U-Net for Segmentation of Ambiguous Images[C]//Advances in Neural Information Processing Systems. 2018: 6965-6975.
[50] BAUMGARTNER C F, TEZCAN K C, CHAITANYA K, et al. PHiSeg: Capturing Uncertainty in Medical Image Segmentation[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2019: 119-127.
[51] TANNO R, SAEEDI A, SANKARANARAYANAN S, et al. Learning from Noisy Labels by Regularized Estimation of Annotator Confusion[C]//IEEE Conference on Computer Vision and Pattern Recognition. 2019: 11244-11253.
[52] ZHANG L, TANNO R, XU M C, et al. Disentangling Human Error from Ground Truth in Segmentation of Medical Images[J]. Advances in Neural Information Processing Systems, 2020, 33: 15750-15762.
[53] MONTEIRO M, LE FOLGOC L, COELHO DE CASTRO D, et al. Stochastic Segmentation Networks: Modelling Spatially Correlated Aleatoric Uncertainty[C]//Advances in Neural Information Processing Systems: volume 33. Curran Associates, Inc., 2020: 12756-12767.
[54] HU S, WORRALL D, KNEGT S, et al. Supervised Uncertainty Quantification for Segmentation with Multiple Annotations[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2019: 137-145.
[55] ROSENBLATT F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms[R]. Buffalo, NY: Cornell Aeronautical Laboratory, 1961.
[56] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-Based Learning Applied to Document Recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324.
[57] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is All You Need[J]. Advances in Neural Information Processing Systems, 2017, 30.
[58] KINGMA D P, WELLING M. Auto-Encoding Variational Bayes[C]//International Conference on Learning Representations. 2014.
[59] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: A Simple Way to Prevent Neural Networks from Overfitting[J]. The Journal of Machine Learning Research, 2014, 15(1): 1929-1958.
[60] GULRAJANI I, AHMED F, ARJOVSKY M, et al. Improved Training of Wasserstein GANs[J]. Advances in Neural Information Processing Systems, 2017, 30.
[61] MIYATO T, KATAOKA T, KOYAMA M, et al. Spectral Normalization for Generative Adversarial Networks[C]//International Conference on Learning Representations. 2018.
[62] GOUK H, FRANK E, PFAHRINGER B, et al. Regularisation of Neural Networks by Enforcing Lipschitz Continuity[J]. Machine Learning, 2021, 110(2): 393-416.
[63] HE K, ZHANG X, REN S, et al. Deep Residual Learning for Image Recognition[C]//IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.
[64] JACOBSEN J H, SMEULDERS A, OYALLON E. i-RevNet: Deep Invertible Networks[C]//International Conference on Learning Representations. 2018.
[65] BEHRMANN J, GRATHWOHL W, CHEN R T, et al. Invertible Residual Networks[C]//International Conference on Machine Learning. PMLR, 2019: 573-582.
[66] TITSIAS M. Variational Learning of Inducing Variables in Sparse Gaussian Processes[C]//Artificial Intelligence and Statistics. PMLR, 2009: 567-574.
[67] HASTIE T, TIBSHIRANI R, FRIEDMAN J. The Elements of Statistical Learning[M]. New York: Springer, 2001.
[68] BRIER G W. Verification of Forecasts Expressed in Terms of Probability[J]. Monthly Weather Review, 1950, 78(1): 1-3.
[69] LECUN Y, BENGIO Y, HINTON G. Deep Learning[J]. Nature, 2015, 521(7553): 436-444.
[70] NAEINI M P, COOPER G, HAUSKRECHT M. Obtaining Well Calibrated Probabilities Using Bayesian Binning[C]//AAAI Conference on Artificial Intelligence. 2015.
[71] HARRELL F E, CALIFF R M, PRYOR D B, et al. Evaluating the Yield of Medical Tests[J]. JAMA, 1982, 247(18): 2543-2546.
[72] KRIZHEVSKY A, HINTON G. Learning Multiple Layers of Features from Tiny Images[R]. University of Toronto, 2009.
[73] XIAO H, RASUL K, VOLLGRAF R. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms[A]. 2017. arXiv: 1708.07747.
[74] NETZER Y, WANG T, COATES A, et al. Reading Digits in Natural Images with Unsupervised Feature Learning[C]//NIPS Workshop on Deep Learning and Unsupervised Feature Learning. 2011.
[75] ZAGORUYKO S, KOMODAKIS N. Wide Residual Networks[C]//British Machine Vision Conference (BMVC). BMVA Press, 2016: 87.1-87.12.
[76] SHRIDHAR K, LAUMANN F, LIWICKI M. Uncertainty Estimations by Softplus Normalization in Bayesian Convolutional Neural Networks with Variational Inference[A]. 2018. arXiv: 1806.05978.
[77] SHRIDHAR K, LAUMANN F, LIWICKI M. A Comprehensive Guide to Bayesian Convolutional Neural Network with Variational Inference[A]. 2019. arXiv: 1901.02731.
[78] RONNEBERGER O, FISCHER P, BROX T. U-Net: Convolutional Networks for Biomedical Image Segmentation[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015: 234-241.
[79] OKTAY O, SCHLEMPER J, FOLGOC L L, et al. Attention U-Net: Learning Where to Look for the Pancreas[A]. 2018. arXiv: 1804.03999.
[80] ARMATO III S G, MCLENNAN G, BIDAUT L, et al. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a Completed Reference Database of Lung Nodules on CT Scans[J]. Medical Physics, 2011, 38(2): 915-931.