Title

High Throughput, Area-Efficient, and Variation-Tolerant 3D In-memory Compute System for Deep Convolutional Neural Networks

Authors
Veluri, Hasita; Li, Yida; Niu, Jessie Xuhua; Zamburg, Evgeny; Thean, Aaron Voon Yew
Publication Date
2021
DOI
Journal
IEEE Internet of Things Journal
ISSN
2372-2541
EISSN
2327-4662
Volume: 8, Issue: 11, Pages: 9219-9232
Abstract

Untethered computing using Deep Convolutional Neural Networks (DCNNs) at the edge of IoT with limited resources requires systems that are exceedingly power- and area-efficient. Analog in-memory matrix-matrix multiplications (M2M) enabled by emerging memories can significantly reduce the energy budget of such systems and result in compact accelerators. In this paper, we report a high-throughput RRAM-based DCNN processor that boasts 7.12× area-efficiency (AE) and 6.52× power-efficiency (PE) enhancements over state-of-the-art accelerators. We achieve this by coupling a novel in-memory computing methodology with a staggered-3D memristor array. Our variation-tolerant in-memory compute method, which performs operations on signed floating-point numbers within a single array, leverages charge-domain operations and conductance discretization to reduce peripheral overheads. Voltage pulses applied at the staggered bottom electrodes of the 3D array generate a concurrent input shift and parallelize convolution operations to boost throughput. The high density and low footprint of the 3D array, along with the modified in-memory M2M execution, improve peak AE to 9.1 TOPs mm⁻², while the elimination of input regeneration improves PE to 10.6 TOPs W⁻¹. This work provides a path towards infallible RRAM-based hardware accelerators that are fast, low-power, and low-area.
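The general principle behind the analog in-memory multiply-accumulate described above can be illustrated with a minimal numerical sketch. This is not the authors' implementation: the uniform conductance quantizer, the parameter values, and the Gaussian variation model below are illustrative assumptions. Weights are stored as discretized memristor conductances, inputs are applied as read voltages, and Ohm's law plus Kirchhoff's current law produce the dot products as column currents.

```python
import numpy as np

def quantize_conductance(weights, g_min=1e-6, g_max=1e-4, levels=16):
    """Map real-valued weights to a discrete set of conductance levels (Siemens).

    Illustrative uniform quantizer; the paper's conductance-discretization
    scheme and its handling of signed floating-point operands are more involved.
    """
    w_min, w_max = weights.min(), weights.max()
    norm = (weights - w_min) / (w_max - w_min + 1e-12)      # scale to [0, 1]
    steps = np.round(norm * (levels - 1)) / (levels - 1)    # snap to discrete levels
    return g_min + steps * (g_max - g_min)

def analog_mvm(conductances, v_in, sigma_rel=0.05, rng=None):
    """Analog matrix-vector multiply on an idealized crossbar.

    Each column current is the sum of V_i * G_ij (Kirchhoff's current law),
    i.e. a dot product computed in place. Relative Gaussian noise (sigma_rel)
    models device-to-device conductance variation.
    """
    rng = rng or np.random.default_rng(0)
    g_actual = conductances * (1 + sigma_rel * rng.standard_normal(conductances.shape))
    return v_in @ g_actual   # column currents (Amperes)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    weights = rng.standard_normal((64, 16))    # one unrolled conv kernel per column
    v_in = rng.uniform(0.0, 0.2, size=64)      # input activations as read voltages

    g = quantize_conductance(weights)
    i_out = analog_mvm(g, v_in)
    print("column currents (uA):", np.round(i_out * 1e6, 2))
```

In the actual processor, the signed floating-point representation, charge-domain accumulation, and the staggered-3D array's concurrent input shift replace the idealized arithmetic sketched here.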

Keywords
Related Link: [Scopus Record]
Indexed By
SCI ; EI
Language
English
University Authorship
Other
WOS Accession No.
WOS:000652798400039
EI Accession No.
20210809936067
EI Keywords
Budget control ; Convolution ; Deep neural networks ; Digital arithmetic ; Matrix algebra ; RRAM
EI Classification Codes
Information Theory and Signal Processing:716.1 ; Algebra:921.1 ; Numerical Methods:921.6
Scopus Record ID
2-s2.0-85100838042
Source Database
Scopus
Full-text Link: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9351816
Citation Statistics
Times Cited [WOS]: 15
Document Type: Journal Article
Identifier: http://sustech.caswiz.com/handle/2SGJ60CL/221818
Collection: Southern University of Science and Technology > College of Engineering, School of Microelectronics
Author Affiliations
1. Electrical & Computer Engineering, National University of Singapore, 4 Engineering Drive 3, Singapore 117583.
2. Engineering Research Center of Integrated Circuits for Next-Generation Communications, Ministry of Education, Southern University of Science and Technology, Shenzhen 518055, China.
3. Electrical & Computer Engineering, National University of Singapore, 4 Engineering Drive 3, Singapore 117583 (e-mail: aaron.thean@nus.edu.sg).
Recommended Citation
GB/T 7714
Veluri, Hasita, Li, Yida, Niu, Jessie Xuhua, et al. High Throughput, Area-Efficient, and Variation-Tolerant 3D In-memory Compute System for Deep Convolutional Neural Networks[J]. IEEE Internet of Things Journal, 2021, 8(11): 9219-9232.
APA
Veluri, Hasita, Li, Yida, Niu, Jessie Xuhua, Zamburg, Evgeny, & Thean, Aaron Voon Yew. (2021). High Throughput, Area-Efficient, and Variation-Tolerant 3D In-memory Compute System for Deep Convolutional Neural Networks. IEEE Internet of Things Journal, 8(11), 9219-9232.
MLA
Veluri, Hasita, et al. "High Throughput, Area-Efficient, and Variation-Tolerant 3D In-memory Compute System for Deep Convolutional Neural Networks". IEEE Internet of Things Journal 8.11 (2021): 9219-9232.
Files in This Item
File Name/Size | Document Type | Version | Access | License
High-Throughput_Area (5152 KB) | -- | -- | Restricted Access | --
