Title

基于 FPGA 软硬件协同设计的混合精度深度学习加速器

Alternative Title
HARDWARE AND SOFTWARE CO-DESIGN FOR MIXED-PRECISION DEEP LEARNING ACCELERATOR ON FPGA
Name
李秋峰
Name (Pinyin)
LI Qiufeng
Student ID
12132454
Degree Type
Master's
Degree Discipline
080903 Microelectronics and Solid-State Electronics
Subject Category / Professional Degree Category
08 Engineering
Supervisor
YU Hao (余浩)
Supervisor's Affiliation
School of Microelectronics (Shenzhen-Hong Kong)
Thesis Defense Date
2024-05-08
Thesis Submission Date
2024-06-18
Degree-Granting Institution
Southern University of Science and Technology
Place of Degree Conferral
Shenzhen
Abstract

Since 2012, with the rapid development of artificial intelligence, the design and deployment of deep learning models, particularly convolutional neural networks (CNNs), on edge devices has become a research area of intense interest, constrained by limits on computing resources and cost. As a compact and efficient class of algorithms, CNNs are widely used in image recognition, fault detection, and simultaneous localization and mapping (SLAM). However, CNNs face challenges such as very large parameter counts, high inference latency on edge devices, and the lack of convenient deployment flows. This thesis adopts a hardware/software co-design methodology to enable efficient deployment of deep learning algorithms on edge devices.

Building on the characteristics of mixed-precision quantized models, this thesis designs a deep learning accelerator that supports mixed precision, using an improved Booth multiplier and a vector-systolic dataflow to achieve higher data parallelism and throughput, thereby reducing inference latency. In addition, to support a broader set of operators, a RISC-V-based deep learning microprocessor is designed. On top of this hardware system, the thesis leverages the existing TVM stack, applying graph optimizations and related techniques to reduce data movement and improve data reuse, automatically generating operator code for a variety of network models on the mixed-precision accelerator and thus enabling automated deployment of deep learning models.

Experimental results show that, when computing convolutions on a ZCU102 development board, the proposed mixed-precision accelerator reaches peak throughputs of 1603 GOPS, 796 GOPS, and 421 GOPS in 2-, 4-, and 8-bit modes respectively, reaching an internationally advanced level. Furthermore, experiments on deep-learning-based SLAM and deep-learning-based human pose estimation validate the effectiveness and agility of the proposed system.
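The accelerator's multiply units are built around an improved Booth multiplier. As a rough illustration of the underlying radix-4 Booth recoding idea only (a generic textbook sketch, not the thesis's improved multiplier or its vector-systolic dataflow), a minimal software model in Python:

```python
def booth_radix4_multiply(a: int, b: int, bits: int = 8) -> int:
    """Multiply two signed integers via radix-4 (modified) Booth recoding.

    The multiplier b is scanned in overlapping 3-bit windows, each recoded
    to a digit in {-2, -1, 0, +1, +2}, halving the number of partial
    products compared with a bit-serial scan.
    """
    # Two's-complement view of b, with an implicit 0 appended below the LSB.
    m = (b & ((1 << bits) - 1)) << 1
    recode = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
              0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}
    result = 0
    for i in range(0, bits, 2):
        digit = recode[(m >> i) & 0b111]  # overlapping 3-bit window
        result += (digit * a) << i        # one partial product per window
    return result
```

For 8-bit operands the loop produces four partial products; a hardware implementation would generate them in parallel and sum them in a compressor tree rather than iterating.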

Keywords
Language
Chinese
Training Category
Independently trained
Year of Enrollment
2021
Year of Degree Conferral
2024-07

Degree Assessment Subcommittee
Electronic Science and Technology
Chinese Library Classification (CLC) Number
TN402
Source Repository
Manual submission
Document Type
Thesis (dissertation)
Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/765621
Collection
Southern University of Science and Technology
Preparatory Office of the SUSTech-HKUST Shenzhen-Hong Kong Joint School of Microelectronics
Recommended Citation
GB/T 7714
LI Qiufeng (李秋峰). 基于 FPGA 软硬件协同设计的混合精度深度学习加速器 [D]. Shenzhen: Southern University of Science and Technology, 2024.
Files in This Item
File Name/Size | Document Type | Version Type | Access Type | License
12132454-李秋峰-南方科技大学 (6176 KB) | — | — | Restricted access | Request full text

Unless otherwise specified, all content in this system is protected by copyright, with all rights reserved.