[1] HOROWITZ M. Computing’s energy problem (and what we can do about it)[C]//IEEE International Solid-State Circuits Conference. 2014: 10-14.
[2] NAIR V, HINTON G E. Rectified Linear Units Improve Restricted Boltzmann Machines[C]//International Conference on Machine Learning. 2010: 807-814.
[3] GERSTNER W, KISTLER W M, NAUD R, et al. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition[M]. Cambridge University Press, 2014: 5-9.
[4] MAYNARD A. Navigating the fourth industrial revolution[J]. Nature Nanotechnology, 2015, 10: 1005-1006.
[5] ŚLUSARCZYK B. Industry 4.0 - are we ready?[J]. Polish Journal of Management Studies, 2018, 17.
[6] SARKER I. AI-Based Modeling: Techniques, Applications and Research Issues Towards Automation, Intelligent and Smart Systems[J]. SN Computer Science, 2022, 3.
[7] TOUVRON H, LAVRIL T, IZACARD G, et al. LLaMA: Open and Efficient Foundation Language Models[A]. 2023. arXiv: 2302.13971.
[8] SAMSI S, ZHAO D, MCDONALD J, et al. From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference[C]//IEEE High Performance Extreme Computing Conference. 2023: 1-9.
[9] CAPATINA L, CERNIAN A, MOISESCU M A. Efficient training models of Spiking Neural Networks deployed on neuromorphic computing architectures[C]//International Conference on Control Systems and Computer Science. 2023: 383-390.
[10] PANCHAPAKESAN S, FANG Z, CHANDRACHOODAN N. EASpiNN: Effective Automated Spiking Neural Network Evaluation on FPGA[C]//IEEE International Symposium on Field-Programmable Custom Computing Machines. 2020: 242.
[11] CHEN Y, LIU H, SHI K, et al. Spiking neural network with working memory can integrate and rectify spatiotemporal features[J]. Frontiers in Neuroscience, 2023, 17: 1167134.
[12] MAASS W. Networks of spiking neurons: The third generation of neural network models[J]. Neural Networks, 1997, 10(9): 1659-1671.
[13] GHOSH-DASTIDAR S, ADELI H. Spiking neural networks[J]. International Journal of Neural Systems, 2009, 19(4): 295-308.
[14] WANG X, LIN X, DANG X. Supervised learning in spiking neural networks: A review of algorithms and evaluations[J]. Neural Networks, 2020, 125: 258-280.
[15] MCCULLOCH W S, PITTS W. A Logical Calculus of the Ideas Immanent in Nervous Activity[J]. The Bulletin of Mathematical Biophysics, 1943, 5(4): 115-133.
[16] HODGKIN A L, HUXLEY A F. A quantitative description of membrane current and its application to conduction and excitation in nerve[J]. The Journal of Physiology, 1952, 117(4): 500-544.
[17] STEIN R B. The Information Capacity of Nerve Cells Using a Frequency Code[J]. Biophysical Journal, 1967, 7(6): 797-826.
[18] KOCH C, SEGEV I. Methods in Neuronal Modeling: From Ions to Networks[M]. MIT Press, 1998.
[19] IZHIKEVICH E. Simple model of spiking neurons[J]. IEEE Transactions on Neural Networks, 2003, 14(6): 1569-1572.
[20] BRETTE R, GERSTNER W. Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity[J]. Journal of Neurophysiology, 2005, 94(5): 3637-3642.
[21] HEBB D O. The Organization of Behavior: A Neuropsychological Theory[M]. Psychology Press, 2005.
[22] BLISS T V, GARDNER-MEDWIN A R. Long-lasting potentiation of synaptic transmission in the dentate area of the unanaesthetized rabbit following stimulation of the perforant path[J]. The Journal of Physiology, 1973, 232(2): 357-374.
[23] MASQUELIER T, GUYONNEAU R, THORPE S J. Spike Timing Dependent Plasticity Finds the Start of Repeating Patterns in Continuous Spike Trains[J]. PLOS ONE, 2008, 3(1): 1-9.
[24] BELL C C, HAN V Z, SUGAWARA Y, et al. Synaptic plasticity in a cerebellum-like structure depends on temporal order[J]. Nature, 1997, 387(6630): 278-281.
[25] BURBANK K S. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons[J]. PLOS Computational Biology, 2015, 11(12): 1-25.
[26] TAVANAEI A, MASQUELIER T, MAIDA A S. Acquisition of visual features through probabilistic spike-timing-dependent plasticity[C]//International Joint Conference on Neural Networks. 2016: 307-314.
[27] ZHOU R. A Method of Converting ANN to SNN for Image Classification[C]//IEEE International Conference on Electronic Technology, Communication and Information. 2023: 819-822.
[28] WANG Y, LIU H, ZHANG M, et al. A universal ANN-to-SNN framework for achieving high accuracy and low latency deep Spiking Neural Networks[J]. Neural Networks, 2024, 174.
[29] CAO Y, CHEN Y, KHOSLA D. Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition[J]. International Journal of Computer Vision, 2015, 113: 54-66.
[30] DIEHL P U, NEIL D, BINAS J, et al. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing[C]//International Joint Conference on Neural Networks. 2015: 1-8.
[31] RUECKAUER B, LUNGU I A, HU Y, et al. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification[J]. Frontiers in Neuroscience, 2017, 11: 682.
[32] SIMONYAN K, ZISSERMAN A. Very Deep Convolutional Networks for Large-Scale Image Recognition[C]//International Conference on Learning Representations. 2015.
[33] HE K, ZHANG X, REN S, et al. Deep Residual Learning for Image Recognition[C]//IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.
[34] SENGUPTA A, YE Y, WANG R, et al. Going Deeper in Spiking Neural Networks: VGG and Residual Architectures[J]. Frontiers in Neuroscience, 2019, 13: 95.
[35] BOHTE S M, KOK J N, LA POUTRÉ H. SpikeProp: backpropagation for networks of spiking neurons[C]//European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. 2000: 419-424.
[36] ZENKE F, GANGULI S. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks[J]. Neural Computation, 2018, 30(6): 1514-1541.
[37] WU Y, DENG L, LI G, et al. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks[J]. Frontiers in Neuroscience, 2018, 12: 331.
[38] XIAO M, MENG Q, ZHANG Z, et al. Online Training Through Time for Spiking Neural Networks[C]//Advances in Neural Information Processing Systems: volume 35. 2022: 20717-20730.
[39] DENG S, LI Y, ZHANG S, et al. Temporal efficient training of spiking neural network via gradient re-weighting[C]//International Conference on Learning Representations. 2022.
[40] LI Y, GUO Y, ZHANG S, et al. Differentiable Spike: Rethinking Gradient-Descent for Training Spiking Neural Networks[C]//Advances in Neural Information Processing Systems: volume 34. 2021: 23426-23439.
[41] CHE K, LENG L, ZHANG K, et al. Differentiable hierarchical and surrogate gradient search for spiking neural networks[C]//Advances in Neural Information Processing Systems: volume 35. 2022: 24975-24990.
[42] GUO Y, CHEN Y, ZHANG L, et al. IM-Loss: Information Maximization Loss for Spiking Neural Networks[C]//Advances in Neural Information Processing Systems: volume 35. 2022: 156-166.
[43] GUO Y, TONG X, CHEN Y, et al. RecDis-SNN: Rectifying Membrane Potential Distribution for Directly Training Spiking Neural Networks[C]//IEEE Conference on Computer Vision and Pattern Recognition. 2022: 326-335.
[44] DING R, CHIN T W, LIU Z, et al. Regularizing Activation Distribution for Training Binarized Deep Networks[C]//IEEE Conference on Computer Vision and Pattern Recognition. 2019: 11400-11409.
[45] MENG Q, XIAO M, YAN S, et al. Training high-performance low-latency spiking neural networks by differentiation on spike representation[C]//IEEE Conference on Computer Vision and Pattern Recognition. 2022: 12444-12453.
[46] LIAN S, SHEN J, LIU Q, et al. Learnable Surrogate Gradient for Direct Training Spiking Neural Networks[C]//International Joint Conference on Artificial Intelligence. 2023: 3002-3010.
[47] ZENKE F, VOGELS T P. The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks[J]. Neural Computation, 2021, 33(4): 899-925.
[48] WANG Z, JIANG R, LIAN S, et al. Adaptive Smoothing Gradient Learning for Spiking Neural Networks[C]//International Conference on Machine Learning: volume 202. 2023: 35798-35816.
[49] GALLEGO G, DELBRÜCK T, ORCHARD G, et al. Event-Based Vision: A Survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 154-180.
[50] ZHANG J, DONG B, ZHANG H, et al. Spiking Transformers for Event-based Single Object Tracking[C]//IEEE Conference on Computer Vision and Pattern Recognition. 2022: 8791-8800.
[51] ZHU Z, HOU J, LYU X. Learning Graph-embedded Key-event Back-tracing for Object Tracking in Event Clouds[C]//Advances in Neural Information Processing Systems: volume 35. 2022: 7462-7476.
[52] MESSIKOMMER N, FANG C, GEHRIG M, et al. Data-Driven Feature Tracking for Event Cameras[C]//IEEE Conference on Computer Vision and Pattern Recognition. 2023: 5642-5651.
[53] LIU M, DELBRUCK T. EDFLOW: Event Driven Optical Flow Camera With Keypoint Detection and Adaptive Block Matching[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(9): 5776-5789.
[54] WAN Z, DAI Y, MAO Y. Learning Dense and Continuous Optical Flow From an Event Camera[J]. IEEE Transactions on Image Processing, 2022, 31: 7237-7251.
[55] ERCAN B, EKER O, SAGLAM C, et al. HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks[J]. IEEE Transactions on Image Processing, 2024, 33: 1826-1837.
[56] ZHANG P, LIU H, GE Z, et al. Neuromorphic Imaging With Joint Image Deblurring and Event Denoising[J]. IEEE Transactions on Image Processing, 2024, 33: 2318-2333.
[57] GAO L, SU H, GEHRIG D, et al. A 5-Point Minimal Solver for Event Camera Relative Motion Estimation[C]//IEEE International Conference on Computer Vision. 2023: 8015-8025.
[58] GUO S, GALLEGO G. CMax-SLAM: Event-Based Rotational-Motion Bundle Adjustment and SLAM System Using Contrast Maximization[J]. IEEE Transactions on Robotics, 2024, 40: 2442-2461.
[59] ROSENBLATT F. The perceptron: a probabilistic model for information storage and organization in the brain[J]. Psychological Review, 1958, 65(6): 386-408.
[60] MINSKY M, PAPERT S. Perceptrons[M]. Cambridge, MA: MIT Press, 1969.
[61] GIDON A, ZOLNIK T A, FIDZINSKI P, et al. Dendritic action potentials and computation in human layer 2/3 cortical neurons[J]. Science, 2020, 367(6473): 83-87.
[62] RUMELHART D E, HINTON G E, WILLIAMS R J. Learning internal representations by error propagation[M]//Neurocomputing, Volume 1: Foundations of Research. MIT Press, 1988.
[63] FUKUSHIMA K, MIYAKE S. Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Visual Pattern Recognition[C]//AMARI S I, ARBIB M A. Competition and Cooperation in Neural Nets. 1982: 267-285.
[64] YU F, KOLTUN V. Multi-scale context aggregation by dilated convolutions[C]//International Conference on Learning Representations. 2016.
[65] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is All you Need[C]//Advances in Neural Information Processing Systems: volume 30. 2017.
[66] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale[C]//International Conference on Learning Representations. 2021.
[67] TOLSTIKHIN I O, HOULSBY N, KOLESNIKOV A, et al. MLP-Mixer: An all-MLP Architecture for Vision[C]//Advances in Neural Information Processing Systems: volume 34. 2021: 24261-24272.
[68] TANG C, ZHAO Y, WANG G, et al. Sparse MLP for Image Recognition: Is Self-Attention Really Necessary?[C]//AAAI Conference on Artificial Intelligence: volume 36. 2022: 2344-2351.
[69] GU A, DAO T. Mamba: Linear-Time Sequence Modeling with Selective State Spaces[A]. 2023. arXiv: 2312.00752.
[70] TOUVRON H, CORD M, DOUZE M, et al. Training data-efficient image transformers & distillation through attention[C]//International Conference on Machine Learning: volume 139. 2021: 10347-10357.
[71] MENG Q, XIAO M, YAN S, et al. Towards memory- and time-efficient backpropagation for training spiking neural networks[C]//IEEE International Conference on Computer Vision. 2023: 6166-6176.
[72] IZHIKEVICH E M. Which model to use for cortical spiking neurons?[J]. IEEE Transactions on Neural Networks, 2004, 15(5): 1063-1070.
[73] DECO G, JIRSA V K, ROBINSON P A, et al. The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields[J]. PLOS Computational Biology, 2008, 4(8): 1-35.
[74] KORCSAK-GORZO A, MÜLLER M G, BAUMBACH A, et al. Cortical oscillations support sampling-based computations in spiking neural networks[J]. PLOS Computational Biology, 2022, 18(3): 1-41.
[75] SHRESTHA S B, ORCHARD G. SLAYER: Spike Layer Error Reassignment in Time[C]//Advances in Neural Information Processing Systems: volume 31. 2018.
[76] WU Y, DENG L, LI G, et al. Direct training for spiking neural networks: Faster, larger, better[C]//AAAI Conference on Artificial Intelligence: volume 33. 2019: 1311-1318.
[77] FANG W, YU Z, CHEN Y, et al. Deep Residual Learning in Spiking Neural Networks[C]//Advances in Neural Information Processing Systems: volume 34. 2021: 21056-21069.
[78] KIM S, PARK S, NA B, et al. Spiking-YOLO: Spiking neural network for energy-efficient object detection[C]//AAAI Conference on Artificial Intelligence: volume 34. 2020: 11270-11277.
[79] KIM Y, CHOUGH J, PANDA P. Beyond classification: directly training spiking neural networks for semantic segmentation[J]. Neuromorphic Computing and Engineering, 2022, 2(4): 044015.
[80] RANÇON U, CUADRADO-ANIBARRO J, COTTEREAU B R, et al. StereoSpike: Depth Learning With a Spiking Neural Network[J]. IEEE Access, 2022, 10: 127428-127439.
[81] ZHU L, WANG X, CHANG Y, et al. Event-Based Video Reconstruction via Potential-Assisted Spiking Neural Network[C]//IEEE Conference on Computer Vision and Pattern Recognition. 2022: 3594-3604.
[82] LIU Z, LIN Y, CAO Y, et al. Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows[C]//IEEE International Conference on Computer Vision. 2021: 10012-10022.
[83] ROY K, JAISWAL A, PANDA P. Towards spike-based machine intelligence with neuromorphic computing[J]. Nature, 2019, 575(7784): 607-617.
[84] RATHI N, ROY K. DIET-SNN: A low-latency spiking neural network with direct input encoding and leakage and threshold optimization[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021.
[85] LIU H, SIMONYAN K, YANG Y. DARTS: Differentiable Architecture Search[C]//International Conference on Learning Representations. 2019.
[86] IOFFE S, SZEGEDY C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift[C]//International Conference on Machine Learning: volume 37. 2015: 448-456.
[87] TATSUNAMI Y, TAKI M. RaftMLP: How Much Can Be Done Without Attention and with Less Spatial Locality?[C]//Asian Conference on Computer Vision. 2022: 3172-3188.
[88] HOU Q, JIANG Z, YUAN L, et al. Vision permutator: A permutable mlp-like architecture for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
[89] WANG Z, JIANG W, ZHU Y M, et al. DynaMixer: A Vision MLP Architecture with Dynamic Mixing[C]//International Conference on Machine Learning: volume 162. 2022: 22691-22701.
[90] HU Y, TANG H, PAN G. Spiking Deep Residual Networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2018.
[91] RATHI N, SRINIVASAN G, PANDA P, et al. Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation[C]//International Conference on Learning Representations. 2020.
[92] LEE C, SARWAR S S, PANDA P, et al. Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures[J]. Frontiers in Neuroscience, 2020, 14: 119.
[93] ZHENG H, WU Y, DENG L, et al. Going Deeper With Directly-Trained Larger Spiking Neural Networks[C]//AAAI Conference on Artificial Intelligence: volume 35. 2021: 11062-11070.
[94] HUBEL D H, WIESEL T N. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex[J]. The Journal of Physiology, 1962, 160(1): 106-154.
[95] LIU R, LI Y, TAO L, et al. Are we ready for a new paradigm shift? A survey on visual deep MLP[J]. Patterns, 2022, 3(7): 100520.
[96] TANG Y, HAN K, GUO J, et al. An Image Patch Is a Wave: Phase-Aware Vision MLP[C]//IEEE Conference on Computer Vision and Pattern Recognition. 2022: 10935-10944.
[97] ZHANG D J, LI K, WANG Y, et al. MorphMLP: An efficient MLP-like backbone for spatial-temporal representation learning[C]//European Conference on Computer Vision. 2022: 230-248.