Title | VSPIM: SRAM Processing-in-Memory DNN Acceleration via Vector-Scalar Operations |
Authors | |
Date Published | 2024-10-01 |
DOI | |
Journal | IEEE Transactions on Computers |
ISSN | 2326-3814 |
Volume | 73, Issue 10 |
Abstract | Processing-in-Memory (PIM) has been widely explored for accelerating data-intensive machine learning computation, which mainly consists of general matrix multiplication (GEMM), by mitigating the burden of data movement and exploiting ultra-high memory parallelism. The two mainstreams of PIM, the analog and digital types, have both been exploited for accelerating machine learning workloads by numerous outstanding prior works. Currently, digital PIM is increasingly favored due to its broader computing support and its avoidance of errors caused by intrinsic non-idealities, e.g., process variation. Nevertheless, it still lacks further optimization tailored to the characteristics of GEMM computation, including more efficient data layout and scheduling, and the ability to handle the sparsity of activations at the bit level. To boost the performance and efficiency of digital SRAM PIM, we propose an architecture called VSPIM that performs computation in a bit-serial fashion, with unique support for a vector-scalar computing pattern. The novelties of VSPIM can be summarized as follows: 1) support for bit-serial scalar-vector computing via ingenious parallel bit-broadcasting; 2) a refined GEMM mapping strategy and computing pattern that enhance performance and efficiency; 3) powered by the introduced scalar-vector operation, the bit-sparsity of activations is leveraged to halt unnecessary computation, maximizing efficiency and throughput. Our comprehensive evaluation shows that, compared to the state-of-the-art SRAM-based digital-PIM design (Neural Cache), VSPIM can significantly boost performance and energy efficiency by up to 8.87× and 4.81× respectively, with negligible area overhead, across multiple representative neural networks. |
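The bit-serial computing with bit-level activation sparsity described in the abstract can be illustrated with a minimal sketch. This is an illustrative software model only, not the VSPIM hardware design; the function name and cycle count are assumptions introduced here for clarity:

```python
# Illustrative model of bit-serial multiply-accumulate with bit-level
# activation sparsity skipping; NOT the paper's actual implementation.

def bit_serial_dot(activations, weights, bits=8):
    """Compute dot(activations, weights) one activation bit-plane at a
    time; all-zero bit-planes are skipped, mirroring how bit-sparsity
    lets the array halt unnecessary compute cycles."""
    acc = 0
    cycles = 0  # bit-planes actually processed
    for b in range(bits):
        # extract bit b of every activation (one "bit-plane")
        plane = [(a >> b) & 1 for a in activations]
        if not any(plane):
            continue  # bit-sparsity: nothing to add for this plane
        cycles += 1
        # each set bit contributes its weight shifted by the bit position
        acc += sum(w << b for p, w in zip(plane, weights) if p)
    return acc, cycles

acts = [3, 0, 5, 1]  # 8-bit unsigned activations
wts = [2, 7, 1, 4]
result, used = bit_serial_dot(acts, wts)
# result equals the ordinary dot product 3*2 + 0*7 + 5*1 + 1*4 = 15,
# while only 3 of the 8 bit-planes required any work
```

The cycle count makes the sparsity benefit concrete: the fewer set bits the activations contain, the fewer bit-planes must be broadcast, which is the mechanism the abstract credits for the throughput gain.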
Related Links | [IEEE Record] |
University Attribution | Other |
Citation Statistics | |
Output Type | Journal Article |
Item Identifier | http://sustech.caswiz.com/handle/2SGJ60CL/705260 |
Collection | College of Engineering_School of Microelectronics |
Author Affiliations | 1. Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; 2. University of Central Florida, Orlando, FL, USA; 3. School of Microelectronics, Southern University of Science and Technology, Shenzhen, China; 4. Microsoft Research Asia, Beijing, China; 5. Alibaba Group US Inc, San Diego, CA, USA; 6. University of Michigan-Shanghai Jiao Tong University Joint Institute, Ann Arbor, MI, USA; 7. Department of Micro-Nano Electronics, Shanghai Jiao Tong University, Shanghai, China |
Recommended Citation (GB/T 7714) |
Chen Nie, Chenyu Tang, Jie Lin, et al. VSPIM: SRAM Processing-in-Memory DNN Acceleration via Vector-Scalar Operations[J]. IEEE Transactions on Computers, 2024, 73(10). |
APA |
Chen Nie, Chenyu Tang, Jie Lin, Huan Hu, Chenyang Lv, ... & Zhezhi He. (2024). VSPIM: SRAM Processing-in-Memory DNN Acceleration via Vector-Scalar Operations. IEEE Transactions on Computers, 73(10). |
MLA |
Chen Nie, et al. "VSPIM: SRAM Processing-in-Memory DNN Acceleration via Vector-Scalar Operations." IEEE Transactions on Computers 73.10 (2024). |
Files in This Item |
File Name/Size | Document Type | Version Type | Access Type | License | Action
VSPIM_SRAM_Processin(851KB) | -- | -- | Restricted Access | -- | |
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.