Title

基于比特级稀疏混合精度卷积神经网络的高能效存算一体加速器设计

Alternative Title
A High-Energy-Efficiency Computing-in-Memory Accelerator Design for Bit-Level Sparse Mixed-Precision Convolutional Neural Networks
Name
周浩翔
Name (Pinyin)
ZHOU Haoxiang
Student ID
12132489
Degree Type
Master's
Degree Discipline
0809 Electronic Science and Technology
Disciplinary Category
08 Engineering
Supervisor
余浩
Supervisor's Affiliation
深港微电子学院 (School of Microelectronics)
Thesis Defense Date
2024-05-08
Thesis Submission Date
2024-06-12
Degree-Granting Institution
南方科技大学 (Southern University of Science and Technology)
Degree-Granting Location
深圳 (Shenzhen)
Abstract

As the field of Artificial Intelligence (AI) advances rapidly, computational power has become a key driver of its development, especially in the face of rapidly growing model parameter counts. Traditional von Neumann architecture accelerators, however, run into the so-called "memory wall" when coping with this growth: data transfer becomes the bottleneck that limits system performance. To address this issue, Computing-in-Memory (CIM) accelerators have emerged, aiming to significantly improve computational and energy efficiency through the co-optimization of algorithms and hardware. Despite CIM technology's theoretical potential to alleviate this problem, it still faces numerous challenges in practical design and implementation.

After thoroughly analyzing the current state of CIM accelerator research in China and abroad, this thesis proposes a high-energy-efficiency CIM accelerator design. At the algorithmic level, a neural architecture search algorithm selects the most suitable precision bit-width for each layer of a convolutional neural network (CNN), and bit-level sparsity quantization then guides the training process to generate more bit-level zeros. In tests of ResNet-18 and VGG-16 on the CIFAR-10 dataset, the proportion of zero bits in each convolutional layer's weights averages over 90%, exceeding traditional methods and leaving far more room for hardware-level sparsity optimization.
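To make the bit-level sparsity metric concrete, here is a minimal sketch of how the zero-bit proportion of a quantized weight tensor can be measured. It assumes uniform symmetric quantization and a two's-complement bit view; the quantizer and function names are illustrative stand-ins, not the thesis's actual training pipeline, and the over-90% figure is reached only after the sparsity-inducing training described above.

```python
import numpy as np

def quantize_symmetric(w, bits):
    # Uniform symmetric quantization of a float tensor to signed integers
    # (an assumed quantizer, for illustration only).
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int64)
    return q, scale

def bit_level_zero_ratio(q, bits):
    # Fraction of zero bits over all bit-planes of the quantized weights,
    # using the two's-complement view a signed CIM array would store.
    u = q & ((1 << bits) - 1)
    zeros = 0
    for b in range(bits):
        zeros += np.sum(((u >> b) & 1) == 0)
    return zeros / (q.size * bits)

w = np.random.randn(64, 64) * 0.1   # stand-in for one conv layer's weights
q, _ = quantize_symmetric(w, 4)     # per-layer bit-width as a NAS might choose
print(f"bit-level zero ratio: {bit_level_zero_ratio(q, 4):.2%}")
```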

In terms of hardware design, this thesis proposes a novel 9T1C SRAM CIM cell that tightly integrates storage and computation and significantly improves the read static noise margin (RSNM): experimental results show that, compared with the traditional 6T SRAM cell, the proposed 9T1C cell improves RSNM by up to 41.99%. In addition, a sparsity-aware ADC is presented that adjusts dynamically to the network's bit-level sparsity to reduce power consumption. Furthermore, a mixed-precision shift-add unit supporting signed weights is developed, greatly reducing area overhead compared with schemes that store positive and negative weights separately. Finally, the thesis details how mixed-precision CNNs are mapped onto the CIM array and specifies the overall dataflow.
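The signed shift-add idea can be modeled numerically. The sketch below is a simplified behavioral model under stated assumptions, not the thesis's circuit: each weight bit-plane's column sum stands in for one ADC readout, and the digital shift-add unit scales it by 2^b, negating the most-significant plane so two's-complement signed weights resolve without separate positive and negative weight storage.

```python
import numpy as np

def bit_serial_signed_mac(w_q, x, bits):
    # Dot product evaluated bit-plane by bit-plane, mimicking a CIM array:
    # each step the array yields sum_i(w_bit[i] * x[i]) for one weight
    # bit-plane (one ADC readout); the shift-add unit scales it by 2^b and
    # negates the MSB plane to handle the two's-complement sign.
    u = w_q.astype(np.int64) & ((1 << bits) - 1)   # two's-complement bit view
    acc = 0
    for b in range(bits):
        partial = int(np.dot((u >> b) & 1, x))     # column sum for plane b
        acc += (-partial << b) if b == bits - 1 else (partial << b)
    return acc

w_q = np.array([-3, 5, -8, 7])   # 4-bit signed weights
x = np.array([2, 1, 3, 4])       # activations
assert bit_serial_signed_mac(w_q, x, 4) == int(np.dot(w_q, x))  # exact match
```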

System performance evaluation shows that the CIM accelerator, implemented in a TSMC 28 nm process, not only meets manufacturability requirements but also achieves an average energy efficiency of 245.7 TOPS/W and a peak throughput of 7.37 TOPS when evaluated with a bit-level sparse mixed-precision ResNet-18 network, effectively accelerating CNN inference. The results of this work offer new directions for CIM accelerator design and open up optimization opportunities for future AI hardware.
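As a rough sanity check on these figures (an illustration, not a result reported in the thesis): if the average 245.7 TOPS/W efficiency also held at the 7.37 TOPS peak throughput, the implied power draw would be about 30 mW.

```python
# Illustrative arithmetic only; average efficiency and peak throughput need
# not coincide in the same operating point.
peak_tops = 7.37
avg_tops_per_w = 245.7
print(f"implied power ~ {peak_tops / avg_tops_per_w * 1e3:.1f} mW")  # ~30.0 mW
```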

Keywords
Other Keywords
Language
Chinese
Training Category
Independent
Year of Enrollment
2021
Year Degree Conferred
2024-06

Degree Assessment Subcommittee
Electronic Science and Technology
Chinese Library Classification Number
TN402
Source Repository
Manual submission
Output Type
Thesis
Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/765610
Collection
南方科技大学
南方科技大学-香港科技大学深港微电子学院筹建办公室
Recommended Citation (GB/T 7714)
周浩翔. 基于比特级稀疏混合精度卷积神经网络的高能效存算一体加速器设计[D]. 深圳: 南方科技大学, 2024.