[1] DIMITRAKOPOULOS G, DEMESTICHAS P. Intelligent transportation systems[J]. IEEE Vehicular Technology Magazine, 2010, 5(1): 77-84.
[2] FIGUEIREDO L, JESUS I, MACHADO J T, et al. Towards the development of intelligent transportation systems[C]//ITSC 2001. 2001 IEEE intelligent transportation systems. Proceedings (Cat. No. 01TH8585). IEEE, 2001: 1206-1211.
[3] NAGY A M, SIMON V. Survey on traffic prediction in smart cities[J]. Pervasive and Mobile Computing, 2018, 50: 148-163.
[4] BARROS J, ARAUJO M, ROSSETTI R J. Short-term real-time traffic prediction methods: A survey[C]//2015 International Conference on Models and Technologies for Intelligent Transportation Systems (MT-ITS). IEEE, 2015: 132-139.
[5] ISHAK S, AL-DEEK H. Performance evaluation of short-term time-series traffic prediction model[J]. Journal of transportation engineering, 2002, 128(6): 490-498.
[6] BAI L, YAO L, LI C, et al. Adaptive graph convolutional recurrent network for traffic forecasting[J]. Advances in neural information processing systems, 2020, 33: 17804-17815.
[7] ZHU J, WANG Q, TAO C, et al. AST-GCN: Attribute-augmented spatiotemporal graph convolutional network for traffic forecasting[J]. IEEE Access, 2021, 9: 35973-35983.
[8] LI Y, YU R, SHAHABI C, et al. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting[A]. 2017.
[9] LIN Z, FENG J, LU Z, et al. Deepstn+: Context-aware spatial-temporal neural network for crowd flow prediction in metropolis[C]//Proceedings of the AAAI conference on artificial intelligence: volume 33. 2019: 1020-1027.
[10] ZHENG C, FAN X, WANG C, et al. Gman: A graph multi-attention network for traffic prediction[C]//Proceedings of the AAAI conference on artificial intelligence: volume 34. 2020: 1234-1241.
[11] WU Z, PAN S, LONG G, et al. Graph wavenet for deep spatial-temporal graph modeling[A]. 2019.
[12] CIRSTEA R G, KIEU T, GUO C, et al. EnhanceNet: Plugin neural networks for enhancing correlated time series forecasting[C]//2021 IEEE 37th International Conference on Data Engineering (ICDE). IEEE, 2021: 1739-1750.
[13] WU Z, PAN S, LONG G, et al. Connecting the dots: Multivariate time series forecasting with graph neural networks[C]//Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining. 2020: 753-763.
[14] JIANG R, YIN D, WANG Z, et al. Dl-traff: Survey and benchmark of deep learning models for urban traffic prediction[C]//Proceedings of the 30th ACM international conference on information & knowledge management. 2021: 4515-4525.
[15] LAN S, MA Y, HUANG W, et al. Dstagnn: Dynamic spatial-temporal aware graph neural network for traffic flow forecasting[C]//International Conference on Machine Learning. PMLR, 2022: 11906-11917.
[16] ZHAO L, SONG Y, ZHANG C, et al. T-gcn: A temporal graph convolutional network for traffic prediction[J]. IEEE Transactions on Intelligent Transportation Systems, 2019, 21(9): 3848-3858.
[17] XU M, DAI W, LIU C, et al. Spatial-temporal transformer networks for traffic flow forecasting[A]. 2020.
[18] YU B, YIN H, ZHU Z. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting[A]. 2017.
[19] FANG Z, LONG Q, SONG G, et al. Spatial-temporal graph ode networks for traffic flow forecasting[C]//Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining. 2021: 364-373.
[20] ZHOU H, REN D, XIA H, et al. Ast-gnn: An attention-based spatio-temporal graph neural network for interaction-aware pedestrian trajectory prediction[J]. Neurocomputing, 2021, 445: 298-308.
[21] PAN B, DEMIRYUREK U, SHAHABI C. Utilizing real-world transportation data for accurate traffic prediction[C]//2012 IEEE 12th international conference on data mining. IEEE, 2012: 595-604.
[22] ZHANG Y, LIU Y. Traffic forecasting using least squares support vector machines[J]. Transportmetrica, 2009, 5(3): 193-213.
[23] ZHANG J, ZHENG Y, QI D. Deep spatio-temporal residual networks for citywide crowd flows prediction[C]//Thirty-first AAAI conference on artificial intelligence. 2017.
[24] YAO H, TANG X, WEI H, et al. Revisiting spatial-temporal similarity: A deep learning framework for traffic prediction[C]//Proceedings of the AAAI conference on artificial intelligence: volume 33. 2019: 5668-5675.
[25] SONG C, LIN Y, GUO S, et al. Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting[C]//Proceedings of the AAAI conference on artificial intelligence: volume 34. 2020: 914-921.
[26] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. Imagenet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[27] SCARSELLI F, GORI M, TSOI A C, et al. The graph neural network model[J]. IEEE transactions on neural networks, 2008, 20(1): 61-80.
[28] KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks[A]. 2016.
[29] LEA C, FLYNN M D, VIDAL R, et al. Temporal convolutional networks for action segmentation and detection[C]//proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 156-165.
[30] BI J, ZHANG X, YUAN H, et al. A hybrid prediction method for realistic network traffic with temporal convolutional network and LSTM[J]. IEEE Transactions on Automation Science and Engineering, 2021, 19(3): 1869-1879.
[31] KUANG L, HUA C, WU J, et al. Traffic volume prediction based on multi-sources GPS trajectory data by temporal convolutional network[J]. Mobile Networks and Applications, 2020, 25: 1405-1417.
[32] ZHAO W, GAO Y, JI T, et al. Deep temporal convolutional networks for short-term traffic flow forecasting[J]. IEEE Access, 2019, 7: 114496-114507.
[33] ITTI L, KOCH C, NIEBUR E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE Transactions on pattern analysis and machine intelligence, 1998, 20(11): 1254-1259.
[34] MNIH V, HEESS N, GRAVES A, et al. Recurrent models of visual attention[J]. Advances in neural information processing systems, 2014, 27.
[35] BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate[A]. 2014.
[36] XU K, BA J, KIROS R, et al. Show, attend and tell: Neural image caption generation with visual attention[C]//International conference on machine learning. PMLR, 2015: 2048-2057.
[37] LU J, XIONG C, PARIKH D, et al. Knowing when to look: Adaptive attention via a visual sentinel for image captioning[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 375-383.
[38] LIU G, GUO J. Bidirectional LSTM with attention mechanism and convolutional layer for text classification[J]. Neurocomputing, 2019, 337: 325-338.
[39] SUTSKEVER I, VINYALS O, LE Q V. Sequence to sequence learning with neural networks[J]. Advances in neural information processing systems, 2014, 27.
[40] LUONG M T, PHAM H, MANNING C D. Effective approaches to attention-based neural machine translation[A]. 2015.
[41] ZHANG P, XUE J, LAN C, et al. Adding attentiveness to the neurons in recurrent neural networks[C]//Proceedings of the European conference on computer vision (ECCV). 2018: 135-151.
[42] SONG K, YAO T, LING Q, et al. Boosting image sentiment analysis with visual attention[J]. Neurocomputing, 2018, 312: 218-228.
[43] CHOROWSKI J, BAHDANAU D, CHO K, et al. End-to-end continuous speech recognition using attention-based recurrent NN: First results[A]. 2014.
[44] CHAN W, JAITLY N, LE Q, et al. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition[C]//2016 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, 2016: 4960-4964.
[45] YING H, ZHUANG F, ZHANG F, et al. Sequential recommender system based on hierarchical attention network[C]//IJCAI International Joint Conference on Artificial Intelligence. 2018.
[46] VELIČKOVIĆ P, CUCURULL G, CASANOVA A, et al. Graph attention networks[A]. 2017.
[47] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. Advances in neural information processing systems, 2017, 30.
[48] DEVLIN J, CHANG M W, LEE K, et al. Bert: Pre-training of deep bidirectional transformers for language understanding[A]. 2018.
[49] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding with unsupervised learning[R]. OpenAI, 2018.
[50] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[J]. OpenAI blog, 2019, 1(8): 9.
[51] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[J]. Advances in neural information processing systems, 2020, 33: 1877-1901.
[52] PARMAR N, VASWANI A, USZKOREIT J, et al. Image transformer[C]//International conference on machine learning. PMLR, 2018: 4055-4064.
[53] CHILD R, GRAY S, RADFORD A, et al. Generating long sequences with sparse transformers[A]. 2019.
[54] HO J, KALCHBRENNER N, WEISSENBORN D, et al. Axial attention in multidimensional transformers[A]. 2019.
[55] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[A]. 2020.
[56] STOCK J H, WATSON M W. Vector autoregressions[J]. Journal of Economic Perspectives, 2001, 15(4): 101-115.
[57] HUANG W, SONG G, HONG H, et al. Deep architecture for traffic flow prediction: deep belief networks with multitask learning[J]. IEEE Transactions on Intelligent Transportation Systems, 2014, 15(5): 2191-2201.
[58] LV Y, DUAN Y, KANG W, et al. Traffic flow prediction with big data: a deep learning approach[J]. IEEE Transactions on Intelligent Transportation Systems, 2014, 16(2): 865-873.
[59] MA X, YU H, WANG Y, et al. Large-scale transportation network congestion evolution prediction using deep learning theory[J]. PloS one, 2015, 10(3): e0119044.
[60] LAI G, CHANG W C, YANG Y, et al. Modeling long- and short-term temporal patterns with deep neural networks[C]//The 41st international ACM SIGIR conference on research & development in information retrieval. 2018: 95-104.
[61] HAMILTON J D, SUSMEL R. Autoregressive conditional heteroskedasticity and changes in regime[J]. Journal of econometrics, 1994, 64(1-2): 307-333.
[62] MA X, TAO Z, WANG Y, et al. Long short-term memory neural network for traffic speed prediction using remote microwave sensor data[J]. Transportation Research Part C: Emerging Technologies, 2015, 54: 187-197.
[63] LEE H, JIN S, CHU H, et al. Learning to Remember Patterns: Pattern Matching Memory Networks for Traffic Forecasting[A]. 2021.
[64] LI S, JIN X, XUAN Y, et al. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting[J]. Advances in neural information processing systems, 2019, 32.
[65] WU H, XU J, WANG J, et al. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting[J]. Advances in Neural Information Processing Systems, 2021, 34: 22419-22430.
[66] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. Advances in neural information processing systems, 2017, 30.
[67] WU S, XIAO X, DING Q, et al. Adversarial sparse transformer for time series forecasting[J]. Advances in neural information processing systems, 2020, 33: 17105-17115.
[68] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[J]. Advances in neural information processing systems, 2020, 33: 1877-1901.
[69] HUANG C Z A, VASWANI A, USZKOREIT J, et al. Music transformer[A]. 2018.
[70] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[A]. 2020.
[71] LIU Z, LIN Y, CAO Y, et al. Swin transformer: Hierarchical vision transformer using shifted windows[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 10012-10022.
[72] ZHOU H, ZHANG S, PENG J, et al. Informer: Beyond efficient transformer for long sequence time-series forecasting[C]//Proceedings of the AAAI conference on artificial intelligence: volume 35. 2021: 11106-11115.
[73] KITAEV N, KAISER Ł, LEVSKAYA A. Reformer: The efficient transformer[A]. 2020.
[74] SCARSELLI F, GORI M, TSOI A C, et al. The graph neural network model[J]. IEEE transactions on neural networks, 2008, 20(1): 61-80.
[75] HAMILTON W, YING Z, LESKOVEC J. Inductive representation learning on large graphs[J].Advances in neural information processing systems, 2017, 30.
[76] VELICKOVIC P, CUCURULL G, CASANOVA A, et al. Graph attention networks[J]. stat, 2017, 1050: 20.
[77] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural computation,1997, 9(8): 1735-1780.
[78] CHUNG J, GULCEHRE C, CHO K, et al. Empirical evaluation of gated recurrent neural networks on sequence modeling[A]. 2014.
[79] LI M, ZHU Z. Spatial-temporal fusion graph neural networks for traffic flow forecasting[C]//Proceedings of the AAAI conference on artificial intelligence: volume 35. 2021: 4189-4196.
[80] CHEN Y, SEGOVIA I, GEL Y R. Z-GCNETs: Time zigzags at graph convolutional networks for time series forecasting[C]//International Conference on Machine Learning. PMLR, 2021: 1684-1694.
[81] PAN Z, KE S, YANG X, et al. AutoSTG: Neural Architecture Search for Predictions of Spatio-Temporal Graph[C]//Proceedings of the Web Conference 2021. 2021: 1846-1855.