Title

基于FPGA的专用神经网络硬件加速与资源优化

Alternative Title
HARDWARE ACCELERATION AND RESOURCE OPTIMIZATION FOR APPLICATION-SPECIFIC NEURAL NETWORKS ON FPGA
Name
李四龙
Name in Pinyin
LI Silong
Student ID
12032792
Degree Type
Master's
Degree Discipline
080902 Circuits and Systems
Subject Category
08 Engineering
Supervisor
叶涛
Supervisor's Department
Department of Electronic and Electrical Engineering
Thesis Defense Date
2023-05-08
Thesis Submission Date
2023-06-26
Degree-Granting Institution
Southern University of Science and Technology
Place of Degree Conferral
Shenzhen
Abstract

With the development and popularization of artificial-intelligence applications, the hardware implementation and acceleration of neural networks has become a research hotspot. In edge applications especially, limits on computing and memory resources and on power consumption make deploying neural networks a challenge. The field-programmable gate array (FPGA), a device commonly used at the edge, combines the energy efficiency of application-specific integrated circuits with the programmability of general-purpose processors, and is widely used for neural-network hardware acceleration. This thesis studies FPGA-based hardware acceleration and resource optimization for application-specific neural networks, in two parts: first, accelerating a convolutional neural network for diabetic retinopathy (DR) diagnosis; second, software and hardware optimization of a spiking-neural-network implementation on FPGA.

Diabetic retinopathy is one of the leading causes of blindness, and early DR screening significantly reduces the risk of vision loss. Traditional DR diagnosis relies on professional ophthalmologists examining fundus images of potential patients; in remote and underdeveloped regions, however, limited medical resources stand in the way of expert, timely DR screening. Against this background, convenient and fast DR screening methods are urgently needed to provide effective prevention. In this work, a DR grading network was trained, quantized, and deployed on an FPGA, then connected to a fundus camera to build a real-time diagnosis system. Inference on a single image takes only 5.89 ms, an improvement in both speed and energy efficiency over CPU (Central Processing Unit) and GPU (Graphics Processing Unit) implementations.
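The abstract does not name the quantization or deployment toolchain (for a Xilinx FPGA, a vendor flow such as Vitis-AI targeting a DPU overlay would be typical, but that is an assumption here). As a minimal sketch of the post-training INT8 quantization step alone, the PyTorch fragment below quantizes a stand-in network; `SmallDRNet`, the five-grade output, and the random calibration tensors are illustrative placeholders, not the thesis's actual model or data.

```python
import torch
import torch.nn as nn

class SmallDRNet(nn.Module):
    """Tiny stand-in for a DR grading CNN (hypothetical architecture)."""

    def __init__(self, num_classes=5):  # five DR severity grades (0-4)
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # fp32 -> int8 at the input
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, num_classes)
        self.dequant = torch.quantization.DeQuantStub()  # int8 -> fp32 at the output

    def forward(self, x):
        x = self.quant(x)
        x = self.features(x)
        x = self.fc(torch.flatten(x, 1))
        return self.dequant(x)

model = SmallDRNet().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
prepared = torch.quantization.prepare(model)

# Calibration: run representative batches through the observer-instrumented
# model so activation ranges can be estimated (random data stands in here).
with torch.no_grad():
    for _ in range(8):
        prepared(torch.randn(1, 3, 224, 224))

int8_model = torch.quantization.convert(prepared)   # fold observers into int8 ops
print(int8_model(torch.randn(1, 3, 224, 224)))      # int8 inference, fp32 logits out
```

After conversion, weights and activations are INT8, which is what lets the network map onto fixed-point DSP and LUT resources on the FPGA; in a real flow the calibration loop would iterate over representative fundus images rather than random tensors.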

Spiking neural networks have many advantages for realizing low-power neuromorphic computing. Building on the LIF (Leaky-Integrate-and-Fire) neuron model, this thesis proposes a fast-leak neuron mechanism that further reduces the memory and compute resources a spiking convolutional neural network requires in hardware. With the fast-leak mechanism, the convolution between the input matrix and the kernel can be simplified to a lookup-table operation, with no additional compute resources spent on dot products. Experiments with different network models on different datasets validate the performance of the fast-leak mechanism and the lookup-table-based convolution: with negligible accuracy loss, the memory footprint and compute usage of spiking convolutional layers in hardware are greatly reduced, and computation speed is improved.
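The abstract does not spell out the fast-leak mechanism; one consistent reading is that the standard LIF update V[t] = λ·V[t-1] + I[t] is given a leak so fast that no residual potential survives between time steps (λ ≈ 0), so each output depends only on the current binary spike pattern in the receptive field. Under that assumption, every possible dot product between a k-element kernel and a binary spike patch can be precomputed into a 2^k-entry table. The NumPy sketch below is a minimal illustration of such a lookup-table convolution; the 2×2 kernel, threshold, and random spike frame are made up for the example.

```python
import numpy as np
from itertools import product

# Hypothetical trained 2x2 kernel and firing threshold (illustrative values only).
kernel = np.array([[0.50, -0.25],
                   [0.75,  0.125]])
v_th = 0.6
k = kernel.size  # receptive-field size; the table has 2**k entries

# Precompute the dot product of the kernel with every possible binary spike
# pattern of the receptive field, indexed by the packed spike bits.
lut = np.zeros(2 ** k)
for bits in product((0, 1), repeat=k):
    idx = int("".join(map(str, bits)), 2)
    lut[idx] = float(np.dot(bits, kernel.ravel()))

def spiking_conv_lut(spikes):
    """'Valid' convolution of a binary spike map with the kernel via table
    lookups only, followed by thresholding. With a fast leak, no membrane
    potential is carried over, so no per-pixel state needs to be stored."""
    kh, kw = kernel.shape
    oh, ow = spikes.shape[0] - kh + 1, spikes.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = spikes[i:i + kh, j:j + kw].ravel()
            idx = int("".join(str(b) for b in patch), 2)
            out[i, j] = lut[idx]           # table lookup replaces the dot product
    return (out >= v_th).astype(int)       # output spikes for this time step

rng = np.random.default_rng(0)
spikes = (rng.random((6, 6)) > 0.7).astype(int)  # toy binary spike frame
print(spiking_conv_lut(spikes))
```

In hardware the table index would presumably come from concatenating the spike bits directly, eliminating the multiply-accumulate array; since the table grows as 2^k, this approach suits small kernels or kernels partitioned into sub-windows.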

Keywords
Language
Chinese
Training Category
Independent training
Year of Enrollment
2020
Year of Degree Conferral
2023-06

Degree Assessment Subcommittee
Electronic Science and Technology
Chinese Library Classification Number
TN492
Source Repository
Manually submitted
Output Type
Thesis
Item Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/544037
Collection
College of Engineering_Department of Electronic and Electrical Engineering
Recommended Citation
GB/T 7714
李四龙. 基于FPGA的专用神经网络硬件加速与资源优化[D]. 深圳: 南方科技大学, 2023.
Files in This Item
12032792-李四龙-电子与电气工程 (2891 KB) — Restricted access; full text available on request