[1] O’SHEA T J, ROY T, CLANCY T C. Over-the-air deep learning based radio signal classification[J]. IEEE Journal of Selected Topics in Signal Processing, 2018, 12(1): 168-179.
[2] ZENG Y, ZHANG M, HAN F, et al. Spectrum analysis and convolutional neural network for automatic modulation recognition[J]. IEEE Wireless Communications Letters, 2019, 8(3): 929-932.
[3] O’SHEA T J, CORGAN J, CLANCY T C. Convolutional radio modulation recognition networks[C]//International Conference on Engineering Applications of Neural Networks (EANN). 2016.
[4] RAJENDRAN S, MEERT W, GIUSTINIANO D, et al. Deep learning models for wireless signal classification with distributed low-cost spectrum sensors[J]. IEEE Transactions on Cognitive Communications and Networking, 2018, 4(3): 433-445.
[5] ZHANG M, ZENG Y, HAN Z, et al. Automatic modulation recognition using deep learning architectures[C]//International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). 2018.
[6] HANNA S S, DICK C, CABRIC D. Combining deep learning and linear processing for modulation classification and symbol decoding[C]//IEEE Global Communications Conference (GLOBECOM). 2020.
[7] SHANG X, HU H, LI X, et al. Dive into deep learning based automatic modulation classification: A disentangled approach[J]. IEEE Access, 2020, 8: 113271-113284.
[8] HAMEED F, DOBRE O A, POPESCU D C. On the likelihood-based approach to modulation classification[J]. IEEE Transactions on Wireless Communications, 2009, 8(12): 5884-5892.
[9] ZHENG J, LV Y. Likelihood-based automatic modulation classification in OFDM with index modulation[J]. IEEE Transactions on Vehicular Technology, 2018, 67(9): 8192-8204.
[10] ZHANG J, CABRIC D, WANG F, et al. Cooperative modulation classification for multipath fading channels via expectation-maximization[J]. IEEE Transactions on Wireless Communications, 2017, 16(10): 6698-6711.
[11] WU H C, SAQUIB M, YUN Z. Novel automatic modulation classification using cumulant features for communications via multipath channels[J]. IEEE Transactions on Wireless Communications, 2008, 7(8): 3098-3105.
[12] PUNCHIHEWA A, ZHANG Q, DOBRE O A, et al. On the cyclostationarity of OFDM and single carrier linearly digitally modulated signals in time dispersive channels: Theoretical developments and application[J]. IEEE Transactions on Wireless Communications, 2010, 9(8): 2588-2599.
[13] SWAMI A, SADLER B M. Hierarchical digital modulation classification using cumulants[J]. IEEE Transactions on Communications, 2000, 48(3): 416-429.
[14] HONG L. Classification of BPSK and QPSK signals in fading environment using the ICA technique[C]//Southeastern Symposium on System Theory (SSST). 2005.
[15] SAFAVIAN S R, LANDGREBE D. A survey of decision tree classifier methodology[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1991, 21(3): 660-674.
[16] SCHÖLKOPF B, TSUDA K, VERT J P. Advanced application of support vector machines[M]. MIT Press, 2004: 275-275.
[17] LIPPMANN R. An introduction to computing with neural nets[J]. IEEE ASSP Magazine, 1987, 4(2): 4-22.
[18] YE H, LI G Y, JUANG B H. Power of deep learning for channel estimation and signal detection in OFDM systems[J]. IEEE Wireless Communications Letters, 2017, 7(1): 114-117.
[19] HE H, WEN C K, JIN S, et al. Model-driven deep learning for MIMO detection[J]. IEEE Transactions on Signal Processing, 2020, 68: 1702-1715.
[20] SOLTANIEH N, NOROUZI Y, YANG Y, et al. A review of radio frequency fingerprinting techniques[J]. IEEE Journal of Radio Frequency Identification, 2020, 4(3): 222-233.
[21] BU K, HE Y, JING X, et al. Adversarial transfer learning for deep learning based automatic modulation classification[J]. IEEE Signal Processing Letters, 2020, 27: 880-884.
[22] ZHANG F, LUO C, XU J, et al. An efficient deep learning model for automatic modulation recognition based on parameter estimation and transformation[J]. IEEE Communications Letters, 2021, 25(10): 3287-3290.
[23] O’SHEA T J, WEST N. Radio machine learning dataset generation with GNU Radio[C]//GNU Radio Conference: volume 1. 2016.
[24] WEST N E, O’SHEA T. Deep architectures for modulation recognition[C]//IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN). 2017.
[25] WANG Y, GUI G, OHTSUKI T, et al. Multi-task learning for generalized automatic modulation classification under non-Gaussian noise with varying SNR conditions[J]. IEEE Transactions on Wireless Communications, 2021, 20(6): 3587-3596.
[26] ZHANG J, LI Y, YIN J. Modulation classification method for frequency modulation signals based on the time–frequency distribution and CNN[J]. IET Radar, Sonar & Navigation, 2017, 12(2): 244-249.
[27] LI Z, ZHA X. Modulation recognition based on IQ-eyes diagrams and deep learning[C]//IEEE International Conference on Computer and Communications (ICCC). 2019.
[28] XU J, LUO C, PARR G, et al. A spatiotemporal multi-channel learning framework for automatic modulation recognition[J]. IEEE Wireless Communications Letters, 2020, 9(10): 1629-1632.
[29] HUANG L, ZHANG Y, PAN W, et al. Visualizing deep learning-based radio modulation classifier[J]. IEEE Transactions on Cognitive Communications and Networking, 2020, 7(1): 47-58.
[30] LI L, HUANG J, CHENG Q, et al. Automatic modulation recognition: A few-shot learning method based on the capsule network[J]. IEEE Wireless Communications Letters, 2020, 10(3): 474-477.
[31] WANG Y, GUI G, GACANIN H, et al. Transfer learning for semi-supervised automatic modulation classification in ZF-MIMO systems[J]. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2020, 10(2): 231-239.
[32] HUANG S, LIN C, XU W, et al. Identification of active attacks in Internet of Things: Joint model- and data-driven automatic modulation classification approach[J]. IEEE Internet of Things Journal, 2020, 8(3): 2051-2065.
[33] YU J, HU A, ZHOU F, et al. Radio frequency fingerprint identification based on denoising autoencoders[C]//International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob). 2019.
[34] KE Z, VIKALO H. Real-time radio technology and modulation classification via an LSTM auto-encoder[J]. IEEE Transactions on Wireless Communications, 2021.
[35] RONNEBERGER O, FISCHER P, BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). 2015.
[36] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016.
[37] BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate[C]//International Conference on Learning Representations (ICLR). 2015.
[38] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. Advances in Neural Information Processing Systems, 2017, 30.
[39] LUONG T, PHAM H, MANNING C D. Effective approaches to attention-based neural machine translation[C]//Conference on Empirical Methods in Natural Language Processing (EMNLP). 2015.
[40] XU K, BA J, KIROS R, et al. Show, attend and tell: Neural image caption generation with visual attention[C]//International Conference on Machine Learning (ICML). 2015.
[41] LU J, XIONG C, PARIKH D, et al. Knowing when to look: Adaptive attention via a visual sentinel for image captioning[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017.
[42] LIU G, GUO J. Bidirectional LSTM with attention mechanism and convolutional layer for text classification[J]. Neurocomputing, 2019, 337: 325-338.
[43] LI Y, YANG L, XU B, et al. Improving user attribute classification with text and social network attention[J]. Cognitive Computation, 2019, 11(4): 459-468.
[44] WANG S, HU L, CAO L, et al. Attention-based transactional context embedding for next-item recommendation[C]//AAAI Conference on Artificial Intelligence (AAAI). 2018.
[45] YING H, ZHUANG F, ZHANG F, et al. Sequential recommender system based on hierarchical attention network[C]//International Joint Conference on Artificial Intelligence (IJCAI). 2018.
[46] SUTSKEVER I, VINYALS O, LE Q V. Sequence to sequence learning with neural networks[J]. Advances in Neural Information Processing Systems, 2014, 27.
[47] BRITZ D, GOLDIE A, LUONG M, et al. Massive exploration of neural machine translation architectures[J]. CoRR, 2017, abs/1703.03906.
[48] MORITZ N, HORI T, ROUX J L. Semi-supervised speech recognition via graph-based temporal classification[C]//IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2021.
[49] ZHAO Y, WANG D. Noisy-reverberant speech enhancement using DenseUNet with time-frequency attention[C]//Proc. INTERSPEECH. 2020: 3261-3265.
[50] SONG S, LAN C, XING J, et al. An end-to-end spatio-temporal attention model for human action recognition from skeleton data[C]//AAAI Conference on Artificial Intelligence (AAAI). 2017.
[51] TIAN Y, HU W, JIANG H, et al. Densely connected attentional pyramid residual network for human pose estimation[J]. Neurocomputing, 2019, 347: 13-23.
[52] ZHAO A, QI L, LI J, et al. LSTM for diagnosis of neurodegenerative diseases using gait data[C]//Ninth International Conference on Graphic and Image Processing (ICGIP). 2018.
[53] ZHANG P, XUE J, LAN C, et al. Adding attentiveness to the neurons in recurrent neural networks[C]//European Conference on Computer Vision (ECCV). 2018.
[54] SONG K, YAO T, LING Q, et al. Boosting image sentiment analysis with visual attention[J]. Neurocomputing, 2018, 312: 218-228.
[55] YAN X, HU S, MAO Y, et al. Deep multi-view learning methods: A review[J]. Neurocomputing, 2021, 448: 106-129.
[56] CHOROWSKI J, BAHDANAU D, CHO K, et al. End-to-end continuous speech recognition using attention-based recurrent NN: First results[C]//NIPS 2014 Workshop on Deep Learning. 2014.
[57] CHAN W, JAITLY N, LE Q, et al. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition[C]//IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2016.
[58] ITTI L, KOCH C, NIEBUR E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254-1259.
[59] MNIH V, HEESS N, GRAVES A, et al. Recurrent models of visual attention[J]. Advances in Neural Information Processing Systems, 2014, 27.
[60] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018: 7132-7141.
[61] WOO S, PARK J, LEE J Y, et al. CBAM: Convolutional block attention module[C]//European Conference on Computer Vision (ECCV). 2018.
[62] ZHANG K, ZHONG G, DONG J, et al. Stock market prediction based on generative adversarial network[J]. Procedia Computer Science, 2019, 147: 400-406.
[63] IERACITANO C, PAVIGLIANITI A, CAMPOLO M, et al. A novel automatic classification system based on hybrid unsupervised and supervised machine learning for electrospun nanofibers[J]. IEEE/CAA Journal of Automatica Sinica, 2020, 8(1): 64-76.
[64] FAN Z, ZHONG G, LI H. A feature fusion network for multi-modal mesoscale eddy detection[C]//International Conference on Neural Information Processing. Springer, 2020.
[65] LIU X, XIA Y, YU H, et al. Region based parallel hierarchy convolutional neural network for automatic facial nerve paralysis evaluation[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2020, 28(10): 2325-2332.
[66] MING Y, MENG X, FAN C, et al. Deep learning for monocular depth estimation: A review[J]. Neurocomputing, 2021, 438: 14-33.
[67] XIA Y, YU H, WANG F Y. Accurate and robust eye center localization via fully convolutional networks[J]. IEEE/CAA Journal of Automatica Sinica, 2019, 6(5): 1127-1138.
[68] GUIDOTTI R, MONREALE A, RUGGIERI S, et al. A survey of methods for explaining black box models[J]. ACM Computing Surveys, 2018, 51(5): 1-42.
[69] JAIN S, WALLACE B C. Attention is not explanation[C]//North American Chapter of the Association for Computational Linguistics (NAACL). 2019.
[70] SERRANO S, SMITH N A. Is attention interpretable?[C]//Annual Meeting of the Association for Computational Linguistics (ACL). 2019.
[71] LI L H, YATSKAR M, YIN D, et al. What does BERT with vision look at?[C]//Annual Meeting of the Association for Computational Linguistics (ACL). 2020.
[72] LETARTE G, PARADIS F, GIGUÈRE P, et al. Importance of self-attention for sentiment analysis[C]//Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. 2018: 267-275.
[73] VASHISHTH S, UPADHYAY S, TOMAR G S, et al. Attention interpretability across NLP tasks[J]. CoRR, 2019, abs/1909.11218.
[74] WIEGREFFE S, PINTER Y. Attention is not not explanation[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2019: 11-20.
[75] O’SHEA T J, PEMULA L, BATRA D, et al. Radio transformer networks: Attention models for learning to synchronize in wireless systems[C]//Asilomar Conference on Signals, Systems and Computers. 2016.
[76] LIANG Z, TAO M, WANG L, et al. Automatic modulation recognition based on adaptive attention mechanism and ResNeXt WSL model[J]. IEEE Communications Letters, 2021.
[77] KEHTARNAVAZ N. Frequency domain processing[M]//KEHTARNAVAZ N. Digital Signal Processing System Design (Second Edition). Burlington: Academic Press, 2008: 175-196.
[78] HEARN G E, METCALFE A V. Hull roughness and ship resistance[M]//HEARN G E, METCALFE A V. Spectral Analysis in Engineering. Oxford: Butterworth-Heinemann, 1995: 261-273.
[79] GLOROT X, BORDES A, BENGIO Y. Deep sparse rectifier neural networks[C]//International Conference on Artificial Intelligence and Statistics (AISTATS). 2011.
[80] RENSINK R A. The dynamic representation of scenes[J]. Visual Cognition, 2000, 7(1-3): 17-42.
[81] CORBETTA M, SHULMAN G L. Control of goal-directed and stimulus-driven attention in the brain[J]. Nature Reviews Neuroscience, 2002, 3(3): 201-215.
[82] TSOTSOS J K, CULHANE S M, WAI W Y K, et al. Modeling visual attention via selective tuning[J]. Artificial Intelligence, 1995, 78(1-2): 507-545.
[83] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[84] CHO K, VAN MERRIENBOER B, GÜLÇEHRE Ç, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[C]//Conference on Empirical Methods in Natural Language Processing (EMNLP). 2014.
[85] NONG Y J, WANG J J, et al. Remote sensing image captioning method based on attention and reinforcement learning[J]. Acta Optica Sinica, 2021, 41(22): 2228001.
[86] LONG S, TU C, LIU Z, et al. Automatic judgment prediction via legal reading comprehension[C]//China National Conference on Chinese Computational Linguistics. 2019.
[87] MI H, WANG Z, ITTYCHERIAH A. Supervised attentions for neural machine translation[C]//Conference on Empirical Methods in Natural Language Processing (EMNLP). 2016.
[88] LIU L, UTIYAMA M, FINCH A, et al. Neural machine translation with supervised attention[C]//International Conference on Computational Linguistics (COLING). 2016.
[89] ZHAO S, ZHANG Z. Attention-via-attention neural machine translation[C]//AAAI Conference on Artificial Intelligence (AAAI). 2018.
[90] YANG B, TU Z, WONG D F, et al. Modeling localness for self-attention networks[C]//Conference on Empirical Methods in Natural Language Processing (EMNLP). 2018.
[91] WANG S I, MANNING C D. Baselines and bigrams: Simple, good sentiment and topic classification[C]//Annual Meeting of the Association for Computational Linguistics (ACL). 2012.
[92] MAAS A, DALY R E, PHAM P T, et al. Learning word vectors for sentiment analysis[C]//Annual Meeting of the Association for Computational Linguistics (ACL). 2011.
[93] PANG B, LEE L. Opinion mining and sentiment analysis[J]. Foundations and Trends® in Information Retrieval, 2008, 2(1–2): 1-135.
[94] SAHAMI M, DUMAIS S, HECKERMAN D, et al. A Bayesian approach to filtering junk e-mail[C]//Learning for Text Categorization: Papers from the 1998 workshop: volume 62. Citeseer, 1998: 98-105.
[95] LIN Z, FENG M, DOS SANTOS C N, et al. A structured self-attentive sentence embedding[C]//International Conference on Learning Representations (ICLR). 2017.
[96] SHEN T, ZHOU T, LONG G, et al. DiSAN: Directional self-attention network for RNN/CNN-free language understanding[C]//AAAI Conference on Artificial Intelligence (AAAI). 2018.
[97] YANG Z, YANG D, DYER C, et al. Hierarchical attention networks for document classification[C]//North American Chapter of the Association for Computational Linguistics (NAACL). 2016.
[98] SONG Y, WANG J, JIANG T, et al. Attentional encoder network for targeted sentiment classification[J]. CoRR, 2019, abs/1902.09314.
[99] AMBARTSOUMIAN A, POPOWICH F. Self-attention: A better building block for sentiment analysis neural network classifiers[C]//Workshop on Computational Approaches to Subjectivity and Sentiment Analysis. 2018.
[100] TANG D, QIN B, LIU T. Aspect level sentiment classification with deep memory network[C]//Conference on Empirical Methods in Natural Language Processing (EMNLP). 2016.
[101] ZHU P, QIAN T. Enhanced aspect level sentiment classification with auxiliary memory[C]//International Conference on Computational Linguistics (COLING). 2018.
[102] SUKHBAATAR S, WESTON J, FERGUS R, et al. End-to-end memory networks[J]. Advances in Neural Information Processing Systems, 2015, 28.
[103] CUI Y, CHEN Z, WEI S, et al. Attention-over-attention neural networks for reading comprehension[C]//Annual Meeting of the Association for Computational Linguistics (ACL). 2017.
[104] WANG B, LIU K, ZHAO J. Inner attention based recurrent neural networks for answer selection[C]//Annual Meeting of the Association for Computational Linguistics (ACL). 2016.
[105] KIM Y, DENTON C, HOANG L, et al. Structured attention networks[C]//International Conference on Learning Representations (ICLR). 2017.
[106] TAY Y, LUU A T, HUI S C. Hermitian co-attention networks for text matching in asymmetrical domains[C]//International Joint Conference on Artificial Intelligence (IJCAI). 2018.
[107] LU J, YANG J, BATRA D, et al. Hierarchical question-image co-attention for visual question answering[J]. Advances in Neural Information Processing Systems, 2016, 29.
[108] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space[C]//International Conference on Learning Representations (ICLR). 2013.
[109] PENNINGTON J, SOCHER R, MANNING C D. GloVe: Global vectors for word representation[C]//Conference on Empirical Methods in Natural Language Processing (EMNLP). 2014.
[110] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[C]//North American Chapter of the Association for Computational Linguistics (NAACL). 2018.
[111] DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[C]//North American Chapter of the Association for Computational Linguistics (NAACL). 2019.
[112] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[Z]. 2018.
[113] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[J]. OpenAI Blog, 2019, 1(8): 9.
[114] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[J]. Advances in Neural Information Processing Systems, 2020, 33: 1877-1901.
[115] JADERBERG M, SIMONYAN K, ZISSERMAN A, et al. Spatial transformer networks[J]. Advances in Neural Information Processing Systems, 2015, 28.
[116] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018.
[117] LI X, WANG W, HU X, et al. Selective kernel networks[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2019.
[118] STOLLENGA M F, MASCI J, GOMEZ F, et al. Deep networks with internal selective attention through feedback connections[J]. Advances in Neural Information Processing Systems, 2014, 27.
[119] WANG X, GIRSHICK R, GUPTA A, et al. Non-local neural networks[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018.
[120] FU J, LIU J, TIAN H, et al. Dual attention network for scene segmentation[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2019.
[121] CAO Y, XU J, LIN S, et al. GCNet: Non-local networks meet squeeze-excitation networks and beyond[C]//International Conference on Computer Vision Workshops (ICCV Workshops). 2019.
[122] WANG F, JIANG M, QIAN C, et al. Residual attention network for image classification[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017.
[123] HUANG Z, WANG X, HUANG L, et al. CCNet: Criss-cross attention for semantic segmentation[C]//International Conference on Computer Vision (ICCV). 2019.
[124] BISHOP C M. Pattern recognition and machine learning[M]. New York: Springer, 2006.
[125] YU F, KOLTUN V. Multi-scale context aggregation by dilated convolutions[C]//International Conference on Learning Representations (ICLR). 2016.
[126] HAN J, MORAGA C. The influence of the sigmoid function parameters on the speed of backpropagation learning[C]//International Workshop on Artificial Neural Networks. 1995.
[127] RADFORD A, METZ L, CHINTALA S. Unsupervised representation learning with deep convolutional generative adversarial networks[C]//International Conference on Learning Representations (ICLR). 2016.
[128] KINGMA D P, BA J. Adam: A method for stochastic optimization[C]//International Conference on Learning Representations (ICLR). 2015.