[1] ARJOVSKY M, BOTTOU L, GULRAJANI I, et al. Invariant risk minimization[J]. CoRR, 2019, abs/1907.02893.
[2] ZHU H, XU J, LIU S, et al. Federated learning on non-iid data: A survey[J]. Neurocomputing, 2021, 465: 371-390.
[3] MINTUN E, KIRILLOV A, XIE S. On interaction between augmentations and corruptions in natural corruption robustness[J]. CoRR, 2021, abs/2102.11273.
[4] HENDRYCKS D, DIETTERICH T G. Benchmarking neural network robustness to common corruptions and perturbations[C]//ICLR (Poster). OpenReview.net, 2019.
[5] KOH P W, SAGAWA S, MARKLUND H, et al. WILDS: A benchmark of in-the-wild distribution shifts[C]//Proceedings of Machine Learning Research: volume 139 ICML. PMLR, 2021: 5637-5664.
[6] ZHANG R. Making convolutional networks shift-invariant again[C]//Proceedings of Machine Learning Research: volume 97 ICML. PMLR, 2019: 7324-7334.
[7] VASCONCELOS C, LAROCHELLE H, DUMOULIN V, et al. An effective anti-aliasing approach for residual networks[J]. CoRR, 2020, abs/2011.10675.
[8] VASCONCELOS C, LAROCHELLE H, DUMOULIN V, et al. Impact of aliasing on generalization in deep convolutional networks[C]//ICCV. IEEE, 2021: 10509-10518.
[9] DAPELLO J, MARQUES T, SCHRIMPF M, et al. Simulating a primary visual cortex at the front of cnns improves robustness to image perturbations[C]//NeurIPS. 2020.
[10] BAIDYA A, DAPELLO J, DICARLO J J, et al. Combining different V1 brain model variants to improve robustness to image corruptions in cnns[J]. CoRR, 2021, abs/2110.10645.
[11] LOPES R G, SMULLIN S J, CUBUK E D, et al. Affinity and diversity: Quantifying mechanisms of data augmentation[J]. CoRR, 2020, abs/2002.08973.
[12] RECHT B, ROELOFS R, SCHMIDT L, et al. Do CIFAR-10 classifiers generalize to CIFAR-10?[J]. CoRR, 2018, abs/1806.00451.
[13] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. Imagenet classification with deep convolutional neural networks[C]//NIPS. 2012: 1106-1114.
[14] XIE S, GIRSHICK R B, DOLLÁR P, et al. Aggregated residual transformations for deep neural networks[C]//CVPR. IEEE Computer Society, 2017: 5987-5995.
[15] GEIRHOS R, TEMME C R M, RAUBER J, et al. Generalisation in humans and deep neural networks[C]//NeurIPS. 2018: 7549-7561.
[16] DAO T, GU A, RATNER A, et al. A kernel theory of modern data augmentation[C]//Proceedings of Machine Learning Research: volume 97 ICML. PMLR, 2019: 1528-1537.
[17] CHAPELLE O, WESTON J, BOTTOU L, et al. Vicinal risk minimization[C]//NIPS. MIT Press, 2000: 416-422.
[18] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[C]//ICLR (Poster). 2015.
[19] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[C]//ICLR (Poster). OpenReview.net, 2018.
[20] DEVRIES T, TAYLOR G W. Improved regularization of convolutional neural networks with cutout[J]. CoRR, 2017, abs/1708.04552.
[21] ZHANG H, CISSÉ M, DAUPHIN Y N, et al. mixup: Beyond empirical risk minimization[C]//ICLR (Poster). OpenReview.net, 2018.
[22] YUN S, HAN D, CHUN S, et al. Cutmix: Regularization strategy to train strong classifiers with localizable features[C]//ICCV. IEEE, 2019: 6022-6031.
[23] GEIRHOS R, RUBISCH P, MICHAELIS C, et al. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness[C]//ICLR. OpenReview.net, 2019.
[24] RUSAK E, SCHOTT L, ZIMMERMANN R S, et al. A simple way to make neural networks robust against diverse image corruptions[C]//Lecture Notes in Computer Science: volume 12348 ECCV (3). Springer, 2020: 53-69.
[25] HENDRYCKS D, BASART S, MU N, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization[J]. CoRR, 2020, abs/2006.16241.
[26] CUBUK E D, ZOPH B, MANÉ D, et al. Autoaugment: Learning augmentation strategies from data[C]//CVPR. Computer Vision Foundation / IEEE, 2019: 113-123.
[27] HENDRYCKS D, MU N, CUBUK E D, et al. Augmix: A simple data processing method to improve robustness and uncertainty[C]//ICLR. OpenReview.net, 2020.
[28] BENGIO Y, YAO L, ALAIN G, et al. Generalized denoising auto-encoders as generative models[C]//NIPS. 2013: 899-907.
[29] YE M, SHEN J, LIN G, et al. Deep learning for person re-identification: A survey and outlook[J]. CoRR, 2020, abs/2001.04193.
[30] ZHENG L, SHEN L, TIAN L, et al. Scalable person re-identification: A benchmark[C]//ICCV. IEEE Computer Society, 2015: 1116-1124.
[31] LI W, ZHAO R, XIAO T, et al. Deepreid: Deep filter pairing neural network for person re-identification[C]//CVPR. IEEE Computer Society, 2014: 152-159.
[32] WEI L, ZHANG S, GAO W, et al. Person transfer GAN to bridge domain gap for person re-identification[C]//CVPR. IEEE Computer Society, 2018: 79-88.
[33] NGUYEN D T, HONG H G, KIM K, et al. Person recognition system based on a combination of body images from visible light and thermal cameras[J]. Sensors, 2017, 17(3): 605.
[34] WU A, ZHENG W, YU H, et al. Rgb-infrared cross-modality person re-identification[C]//ICCV. IEEE Computer Society, 2017: 5390-5399.
[35] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[C]//ICLR. OpenReview.net, 2021.
[36] HE S, LUO H, WANG P, et al. Transreid: Transformer-based object re-identification[J]. CoRR, 2021, abs/2102.04378.
[37] ZHONG Z, ZHENG L, KANG G, et al. Random erasing data augmentation[C]//AAAI. AAAI Press, 2020: 13001-13008.
[38] TAORI R, DAVE A, SHANKAR V, et al. Measuring robustness to natural distribution shifts in image classification[C]//NeurIPS. 2020.
[39] VASILJEVIC I, CHAKRABARTI A, SHAKHNAROVICH G. Examining the impact of blur on recognition by convolutional networks[J]. CoRR, 2016, abs/1611.05760.
[40] DODGE S F, KARAM L J. A study and comparison of human and deep learning recognition performance under visual distortions[C]//ICCCN. IEEE, 2017: 1-7.
[41] AZULAY A, WEISS Y. Why do deep convolutional networks generalize so poorly to small image transformations?[J]. J. Mach. Learn. Res., 2019, 20: 184:1-184:25.
[42] HE F, TAO D. Recent advances in deep learning theory[J]. arXiv preprint arXiv:2012.10931, 2020.
[43] SCHNEIDER S, RUSAK E, ECK L, et al. Improving robustness against common corruptions by covariate shift adaptation[C]//NeurIPS. 2020.
[44] TANG Z, GAO Y, ZHU Y, et al. Selfnorm and crossnorm for out-of-distribution robustness[J]. CoRR, 2021, abs/2102.02811.
[45] KAMANN C, ROTHER C. Increasing the robustness of semantic segmentation models with painting-by-numbers[C]//Lecture Notes in Computer Science: volume 12355 ECCV (10). Springer, 2020: 369-387.
[46] MICHAELIS C, MITZKUS B, GEIRHOS R, et al. Benchmarking robustness in object detection: Autonomous driving when winter is coming[J]. CoRR, 2019, abs/1907.07484.
[47] BIRODKAR V, MOBAHI H, KRISHNAN D, et al. A closed-form learned pooling for deep classification networks[J]. CoRR, 2019, abs/1906.03808.
[48] LI Q, SHEN L, GUO S, et al. Wavecnet: Wavelet integrated cnns to suppress aliasing effect for noise-robust image classification[J]. IEEE Trans. Image Process., 2021, 30: 7074-7089.
[49] HOSSAIN M T, TENG S W, SOHEL F, et al. Robust image classification using a low-pass activation function and DCT augmentation[J]. IEEE Access, 2021, 9: 86460-86474.
[50] YU H, LIU A, LIU X, et al. Pda: Progressive data augmentation for general robustness of deep neural networks[J]. arXiv preprint arXiv:1909.04839, 2019.
[51] LEE J, ZAHEER M Z, ASTRID M, et al. Smoothmix: a simple yet effective data augmentation to train robust classifiers[C]//CVPR Workshops. Computer Vision Foundation / IEEE, 2020: 3264-3274.
[52] WONG E, KOLTER J Z. Learning perturbation sets for robust machine learning[C]//ICLR. OpenReview.net, 2021.
[53] XU Z, LIU D, YANG J, et al. Robust and generalizable visual representation learning via random convolutions[C]//ICLR. OpenReview.net, 2021.
[54] LAUGROS A, CAPLIER A, OSPICI M. Addressing neural network robustness with mixup and targeted labeling adversarial training[C]//Lecture Notes in Computer Science: volume 12539 ECCV Workshops (5). Springer, 2020: 178-195.
[55] LIN H, VAN ZUIJLEN M, PONT S C, et al. What can style transfer and paintings do for model robustness?[C]//CVPR. Computer Vision Foundation / IEEE, 2021: 11028-11037.
[56] CHEN X, XIE C, TAN M, et al. Robust and accurate object detection via adversarial learning[C]//CVPR. Computer Vision Foundation / IEEE, 2021: 16622-16631.
[57] CALIAN D A, STIMBERG F, WILES O, et al. Defending against image corruptions through adversarial augmentations[J]. CoRR, 2021, abs/2104.01086.
[58] WANG J, JIN S, LIU W, et al. When human pose estimation meets robustness: Adversarial algorithms and benchmarks[C]//CVPR. Computer Vision Foundation / IEEE, 2021: 11855-11864.
[59] KAMANN C, ROTHER C. Benchmarking the robustness of semantic segmentation models[C]//CVPR. IEEE, 2020: 8825-8835.
[60] WANG J, JIN S, LIU W, et al. When human pose estimation meets robustness: Adversarial algorithms and benchmarks[J]. CoRR, 2021, abs/2105.06152.
[61] ZHANG S, NI Q, LI B, et al. Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images[C]//Lecture Notes in Computer Science: volume 12265 MICCAI (5). Springer, 2020: 372-381.
[62] NAVARRO F, WATANABE C, SHIT S, et al. Evaluating the robustness of self-supervised learning in medical imaging[J]. CoRR, 2021, abs/2105.06986.
[63] KAR O F, YEO T, ATANOV A, et al. 3D common corruptions and data augmentation[J]. CoRR, 2022, abs/2203.01441.
[64] REN J, PAN L, LIU Z. Benchmarking and analyzing point cloud classification under corruptions[J]. CoRR, 2022, abs/2202.03377.
[65] RYCHALSKA B, BASAJ D, GOSIEWSKA A, et al. Models in the wild: On corruption robustness of neural NLP systems[C]//Lecture Notes in Computer Science: volume 11955 ICONIP (3). Springer, 2019: 235-247.
[66] SIMARD P Y, STEINKRAUS D, PLATT J C. Best practices for convolutional neural networks applied to visual document analysis[C]//ICDAR. IEEE Computer Society, 2003: 958-962.
[67] SHORTEN C, KHOSHGOFTAAR T M. A survey on image data augmentation for deep learning[J]. J. Big Data, 2019, 6: 60.
[68] CHEN P, LIU S, ZHAO H, et al. Gridmask data augmentation[J]. CoRR, 2020, abs/2001.04086.
[69] YOO J, AHN N, SOHN K. Rethinking data augmentation for image super-resolution: A comprehensive analysis and a new strategy[C]//CVPR. Computer Vision Foundation / IEEE, 2020: 8372-8381.
[70] INOUE H. Data augmentation by pairing samples for images classification[J]. CoRR, 2018, abs/1801.02929.
[71] HARRIS E, MARCU A, PAINTER M, et al. Understanding and enhancing mixed sample data augmentation[J]. CoRR, 2020, abs/2002.12047.
[72] KIM J, CHOO W, SONG H O. Puzzle mix: Exploiting saliency and local statistics for optimal mixup[C]//Proceedings of Machine Learning Research: volume 119 ICML. PMLR, 2020: 5275-5285.
[73] HUANG S, WANG X, TAO D. Snapmix: Semantically proportional mixing for augmenting fine-grained data[C]//AAAI. AAAI Press, 2021: 1628-1636.
[74] UDDIN A F M S, MONIRA M S, SHIN W, et al. Saliencymix: A saliency guided data augmentation strategy for better regularization[C]//ICLR. OpenReview.net, 2021.
[75] ZHANG H, YU Y, JIAO J, et al. Theoretically principled trade-off between robustness and accuracy[C]//Proceedings of Machine Learning Research: volume 97 ICML. PMLR, 2019: 7472-7482.
[76] CHEN P, SHARMA Y, ZHANG H, et al. EAD: elastic-net attacks to deep neural networks via adversarial examples[C]//AAAI. AAAI Press, 2018: 10-17.
[77] DONG Y, LIAO F, PANG T, et al. Boosting adversarial attacks with momentum[C]//CVPR. Computer Vision Foundation / IEEE Computer Society, 2018: 9185-9193.
[78] JACKSON P T G, ABARGHOUEI A A, BONNER S, et al. Style augmentation: Data augmentation via style randomization[C]//CVPR Workshops. Computer Vision Foundation / IEEE, 2019: 83-92.
[79] XU Y, GOEL A. Cross-domain image classification through neural-style transfer data augmentation[J]. CoRR, 2019, abs/1910.05611.
[80] ANTONIOU A, STORKEY A J, EDWARDS H. Augmenting image classifiers using data augmentation generative adversarial networks[C]//Lecture Notes in Computer Science: volume 11141 ICANN (3). Springer, 2018: 594-603.
[81] WEI J W, SURIAWINATA A A, VAICKUS L J, et al. Generative image translation for data augmentation in colorectal histopathology images[C]//Proceedings of Machine Learning Research: volume 116 ML4H@NeurIPS. PMLR, 2019: 10-24.
[82] LIM S, KIM I, KIM T, et al. Fast autoaugment[C]//NeurIPS. 2019: 6662-6672.
[83] LI Y, HU G, WANG Y, et al. DADA: differentiable automatic data augmentation[J]. CoRR, 2020, abs/2003.03780.
[84] CUBUK E D, ZOPH B, SHLENS J, et al. Randaugment: Practical automated data augmentation with a reduced search space[C]//NeurIPS. 2020.
[85] RUSAK E, SCHNEIDER S, GEHLER P V, et al. Adapting imagenet-scale models to complex distribution shifts with self-learning[J]. CoRR, 2021, abs/2104.12928.
[86] LAUGROS A, CAPLIER A, OSPICI M. Using the overlapping score to improve corruption benchmarks[C]//ICIP. IEEE, 2021: 959-963.
[87] SANTURKAR S, TSIPRAS D, MADRY A. BREEDS: benchmarks for subpopulation shift[C]//ICLR. OpenReview.net, 2021.
[88] YE N, LI K, HONG L, et al. Ood-bench: Benchmarking and understanding out-of-distribution generalization datasets and algorithms[J]. CoRR, 2021, abs/2106.03721.
[89] HE Y, SHEN Z, CUI P. Towards non-i.i.d. image classification: A dataset and baselines[J]. Pattern Recognit., 2021, 110: 107383.
[90] LI X, ZHENG W, WANG X, et al. Multi-scale learning for low-resolution person re-identification[C]//ICCV. IEEE Computer Society, 2015: 3765-3773.
[91] WANG Y, WANG L, YOU Y, et al. Resource aware person re-identification across multiple resolutions[C]//CVPR. IEEE Computer Society, 2018: 8042-8051.
[92] LI S, XIAO T, LI H, et al. Person search with natural language description[C]//CVPR. IEEE Computer Society, 2017: 5187-5196.
[93] CHEN T, DING S, XIE J, et al. Abd-net: Attentive but diverse person re-identification[C]//ICCV. IEEE, 2019: 8350-8360.
[94] CHEN B, DENG W, HU J. Mixed high-order attention network for person re-identification[C]//ICCV. IEEE, 2019: 371-381.
[95] CHEN X, FU C, ZHAO Y, et al. Salience-guided cascaded suppression network for person re-identification[C]//CVPR. Computer Vision Foundation / IEEE, 2020: 3297-3307.
[96] ZHOU K, YANG Y, CAVALLARO A, et al. Omni-scale feature learning for person re-identification[C]//ICCV. IEEE, 2019: 3701-3711.
[97] HERZOG F, JI X, TEEPE T, et al. Lightweight multi-branch network for person re-identification[J]. CoRR, 2021, abs/2101.10774.
[98] XIE B, WU X, ZHANG S, et al. Learning diverse features with part-level resolution for person re-identification[C]//Lecture Notes in Computer Science: volume 12307 PRCV (3). Springer, 2020: 16-28.
[99] SUN Y, ZHENG L, YANG Y, et al. Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline)[C]//Lecture Notes in Computer Science: volume 11208 ECCV (4). Springer, 2018: 501-518.
[100] WANG G, YUAN Y, CHEN X, et al. Learning discriminative features with multiple granularities for person re-identification[C]//ACM Multimedia. ACM, 2018: 274-282.
[101] ZHENG F, DENG C, SUN X, et al. Pyramidal person re-identification via multi-loss dynamic training[C]//CVPR. Computer Vision Foundation / IEEE, 2019: 8514-8522.
[102] PARK H, HAM B. Relation network for person re-identification[C]//AAAI. AAAI Press, 2020: 11839-11847.
[103] SUN Y, XU Q, LI Y, et al. Perceive where to focus: Learning visibility-aware part-level features for partial person re-identification[C]//CVPR. Computer Vision Foundation / IEEE, 2019: 393-402.
[104] LUO H, JIANG W, ZHANG X, et al. Alignedreid++: Dynamically matching local information for person re-identification[J]. Pattern Recognit., 2019, 94: 53-61.
[105] YU F, JIANG X, GONG Y, et al. Devil’s in the details: Aligning visual clues for conditional embedding in person re-identification[J]. arXiv preprint arXiv:2009.05250, 2020.
[106] ZHANG Z, LAN C, ZENG W, et al. Relation-aware global attention for person re-identification[C]//CVPR. Computer Vision Foundation / IEEE, 2020: 3183-3192.
[107] HOU R, MA B, CHANG H, et al. Interaction-and-aggregation network for person re-identification[C]//CVPR. Computer Vision Foundation / IEEE, 2019: 9317-9326.
[108] SHARMA C, KAPIL S R, CHAPMAN D. Person re-identification with a locally aware transformer[J]. CoRR, 2021, abs/2106.03720.
[109] BRYAN B, GONG Y, ZHANG Y, et al. Second-order non-local attention networks for person re-identification[C]//ICCV. IEEE, 2019: 3759-3768.
[110] ZHOU S, WANG F, HUANG Z, et al. Discriminative feature learning with consistent attention regularization for person re-identification[C]//ICCV. IEEE, 2019: 8039-8048.
[111] LUO C, CHEN Y, WANG N, et al. Spectral feature transformation for person re-identification[C]//ICCV. IEEE, 2019: 4975-4984.
[112] ZHENG L, ZHANG H, SUN S, et al. Person re-identification in the wild[C]//CVPR. IEEE Computer Society, 2017: 3346-3355.
[113] HERMANS A, BEYER L, LEIBE B. In defense of the triplet loss for person re-identification[J]. CoRR, 2017, abs/1703.07737.
[114] LIAO S, LI S Z. Efficient PSD constrained asymmetric metric learning for person re-identification[C]//ICCV. IEEE Computer Society, 2015: 3685-3693.
[115] CHEN W, CHEN X, ZHANG J, et al. Beyond triplet loss: A deep quadruplet network for person re-identification[C]//CVPR. IEEE Computer Society, 2017: 1320-1329.
[116] XIAO Q, LUO H, ZHANG C. Margin sample mining loss: A deep learning based method for person re-identification[J]. CoRR, 2017, abs/1710.00478.
[117] SUN Y, CHENG C, ZHANG Y, et al. Circle loss: A unified perspective of pair similarity optimization[C]//CVPR. Computer Vision Foundation / IEEE, 2020: 6397-6406.
[118] ZHONG Z, ZHENG L, CAO D, et al. Re-ranking person re-identification with k-reciprocal encoding[C]//CVPR. IEEE Computer Society, 2017: 3652-3661.
[119] BAI S, BAI X, TIAN Q. Scalable person re-identification on supervised smoothed manifold[C]//CVPR. IEEE Computer Society, 2017: 3356-3365.
[120] SARFRAZ M S, SCHUMANN A, EBERLE A, et al. A pose-sensitive embedding for person re-identification with expanded cross neighborhood re-ranking[C]//CVPR. Computer Vision Foundation / IEEE Computer Society, 2018: 420-429.
[121] BENGIO Y, MONPERRUS M. Non-local manifold tangent learning[C]//NIPS. 2004: 129-136.
[122] LASSERRE J A, BISHOP C M, MINKA T P. Principled hybrids of generative and discriminative models[C]//CVPR (1). IEEE Computer Society, 2006: 87-94.
[123] SIMARD P Y, VICTORRI B, LECUN Y, et al. Tangent prop - A formalism for specifying selected invariances in an adaptive network[C]//NIPS. Morgan Kaufmann, 1991: 895-903.
[124] SIMARD P Y, LECUN Y, DENKER J S, et al. Transformation invariance in pattern recognition - tangent distance and tangent propagation[M]//Lecture Notes in Computer Science: volume 1524 Neural Networks: Tricks of the Trade. Springer, 1996: 239-274.
[125] CARLINI N, WAGNER D A. Towards evaluating the robustness of neural networks[C]//IEEE Symposium on Security and Privacy. IEEE Computer Society, 2017: 39-57.
[126] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//NIPS. 2014: 2672-2680.
[127] ZHU J, ZHANG R, PATHAK D, et al. Toward multimodal image-to-image translation[C]//NIPS. 2017: 465-476.
[128] ISOLA P, ZHU J, ZHOU T, et al. Image-to-image translation with conditional adversarial networks[C]//CVPR. IEEE Computer Society, 2017: 5967-5976.
[129] RONNEBERGER O, FISCHER P, BROX T. U-net: Convolutional networks for biomedical image segmentation[C]//Lecture Notes in Computer Science: volume 9351 MICCAI (3). Springer, 2015: 234-241.
[130] DING G W, WANG L, JIN X. Advertorch v0.1: An adversarial robustness toolbox based on pytorch[J]. arXiv preprint arXiv:1902.07623, 2019.
[131] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[J]. arXiv preprint arXiv:1706.06083, 2017.
[132] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[J]. arXiv preprint arXiv:1412.6572, 2014.
[133] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial machine learning at scale[J]. arXiv preprint arXiv:1611.01236, 2016.
[134] DONG Y, LIAO F, PANG T, et al. Boosting adversarial attacks with momentum[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 9185-9193.
[135] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]//2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017: 39-57.
[136] DENG J, DONG W, SOCHER R, et al. Imagenet: A large-scale hierarchical image database[C]//CVPR. IEEE Computer Society, 2009: 248-255.
[137] SPRINGENBERG J T, DOSOVITSKIY A, BROX T, et al. Striving for simplicity: The all convolutional net[C]//ICLR (Workshop). 2015.
[138] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]//CVPR. IEEE Computer Society, 2017: 2261-2269.
[139] ZAGORUYKO S, KOMODAKIS N. Wide residual networks[C]//BMVC. BMVA Press, 2016.
[140] ZHAO L, LIU T, PENG X, et al. Maximum-entropy adversarial data augmentation for improved generalization and robustness[C]//NeurIPS. 2020.
[141] LOPES R G, YIN D, POOLE B, et al. Improving robustness without sacrificing accuracy with patch gaussian augmentation[J]. CoRR, 2019, abs/1906.02611.
[142] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J]. arXiv preprint arXiv:1406.2661, 2014.
[143] GULRAJANI I, AHMED F, ARJOVSKY M, et al. Improved training of wasserstein gans[J]. arXiv preprint arXiv:1704.00028, 2017.
[144] MAO X, LI Q, XIE H, et al. Least squares generative adversarial networks[C]//Proceedings of the IEEE international conference on computer vision. 2017: 2794-2802.
[145] RONNEBERGER O, FISCHER P, BROX T. U-net: Convolutional networks for biomedical image segmentation[C]//International Conference on Medical image computing and computer-assisted intervention. Springer, 2015: 234-241.
[146] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.
[147] ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 1125-1134.
[148] ZHU J Y, ZHANG R, PATHAK D, et al. Toward multimodal image-to-image translation[J].arXiv preprint arXiv:1711.11586, 2017.
[149] GILMER J, FORD N, CARLINI N, et al. Adversarial examples are a natural consequence of test error in noise[C]//Proceedings of Machine Learning Research: volume 97 ICML. PMLR, 2019: 2280-2289.
[150] MOOSAVI-DEZFOOLI S, FAWZI A, FAWZI O, et al. Universal adversarial perturbations[C]//CVPR. IEEE Computer Society, 2017: 86-94.
[151] ZHANG J, XU X, HAN B, et al. Attacks which do not kill training make adversarial learning stronger[C]//Proceedings of Machine Learning Research: volume 119 ICML. PMLR, 2020: 11278-11287.
[152] HENDRYCKS D, MU N, CUBUK E D, et al. Augmix: A simple data processing method to improve robustness and uncertainty[J]. arXiv preprint arXiv:1912.02781, 2019.
[153] ATHALYE A, CARLINI N, WAGNER D A. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples[C]//Proceedings of Machine Learning Research: volume 80 ICML. PMLR, 2018: 274-283.
[154] WANG X, DORETTO G, SEBASTIAN T, et al. Shape and appearance context modeling[C]//ICCV. IEEE Computer Society, 2007: 1-8.
[155] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//CVPR. IEEE Computer Society, 2016: 770-778.
[156] LUO H, GU Y, LIAO X, et al. Bag of tricks and a strong baseline for deep person re-identification[C]//CVPR Workshops. Computer Vision Foundation / IEEE, 2019: 1487-1495.
[157] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//NIPS. 2017: 5998-6008.
[158] TOUVRON H, CORD M, DOUZE M, et al. Training data-efficient image transformers & distillation through attention[C]//Proceedings of Machine Learning Research: volume 139 ICML. PMLR, 2021: 10347-10357.
[159] ZHENG Z, ZHENG L, YANG Y. A discriminatively learned CNN embedding for person re-identification[J]. ACM Trans. Multim. Comput. Commun. Appl., 2018, 14(1): 13:1-13:20.
[160] IOFFE S, SZEGEDY C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[C]//JMLR Workshop and Conference Proceedings: volume 37 ICML. JMLR.org, 2015: 448-456.
[161] DAI Z, CHEN M, GU X, et al. Batch dropblock network for person re-identification and beyond[C]//ICCV. IEEE, 2019: 3690-3700.
[162] GONG Y, ZENG Z. An effective data augmentation for person re-identification[J]. CoRR, 2021, abs/2101.08533.
[163] GONG Y, ZENG Z, CHEN L, et al. A person re-identification data augmentation method with adversarial defense effect[J]. CoRR, 2021, abs/2101.08783.
[164] QUISPE R, PEDRINI H. Top-db-net: Top dropblock for activation enhancement in person re-identification[C]//ICPR. IEEE, 2020: 2980-2987.
[165] FU D, CHEN D, BAO J, et al. Unsupervised pre-training for person re-identification[J]. CoRR, 2020, abs/2012.03753.
[166] PAN X, LUO P, SHI J, et al. Two at once: Enhancing learning and generalization capacities via ibn-net[C]//Lecture Notes in Computer Science: volume 11208 ECCV (4). Springer, 2018: 484-500.
[167] HE L, LIAO X, LIU W, et al. Fastreid: A pytorch toolbox for general instance re-identification[J]. CoRR, 2020, abs/2006.02631.
[168] GE Y, ZHU F, CHEN D, et al. Self-paced contrastive learning with hybrid memory for domain adaptive object re-id[C]//NeurIPS. 2020.
[169] RISTANI E, SOLERA F, ZOU R S, et al. Performance measures and a data set for multi-target, multi-camera tracking[C]//Lecture Notes in Computer Science: volume 9914 ECCV Workshops (2). 2016: 17-35.
[170] MERLER M, RATHA N K, FERIS R S, et al. Diversity in faces[J]. CoRR, 2019, abs/1901.10436.
[171] ORHAN A E. Robustness properties of facebook’s resnext WSL models[J]. CoRR, 2019, abs/1907.07640.
[172] HERMANN K L, CHEN T, KORNBLITH S. The origins and prevalence of texture bias in convolutional neural networks[C]//NeurIPS. 2020.
[173] LI Y, YU Q, TAN M, et al. Shape-texture debiased neural network training[C]//ICLR. OpenReview.net, 2021.
[174] MUMMADI C K, SUBRAMANIAM R, HUTMACHER R, et al. Does enhanced shape bias improve neural network robustness to common corruptions?[C]//ICLR. OpenReview.net, 2021.
[175] HENDRYCKS D, ZHAO K, BASART S, et al. Natural adversarial examples[C]//CVPR. Computer Vision Foundation / IEEE, 2021: 15262-15271.
[176] VAN DER WILK M, BAUER M, JOHN S T, et al. Learning invariances using the marginal likelihood[C]//NeurIPS. 2018: 9960-9970.
[177] SCHWÖBEL P E, JØRGENSEN M, OBER S W, et al. Last layer marginal likelihood for invariance learning[J]. CoRR, 2021, abs/2106.07512.