Title

Multiple-Precision Floating-Point Dot Product Unit for Efficient Convolution Computation

Authors
Corresponding Author: Li, Kai
DOI
Publication Date
2021
Conference Name
IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)
ISBN
978-1-6654-3025-8
Proceedings Title
Pages
1-4
Conference Dates
JUN 06-09, 2021
Conference Location
Online (electronic network)
Place of Publication
345 E 47TH ST, NEW YORK, NY 10017 USA
Publisher
Abstract
In the computation of convolutional neural networks (CNNs) and high-performance computing workloads, the convolution operation dominates the hardware cost and performance of the overall system. To improve hardware efficiency, dot-product units (DPUs) have been used for convolution computation. In addition, multiple-precision floating-point (FP) support is essential to meet the accuracy requirements of various applications. In this work, a multiple-precision FP many-term DPU is designed with a single-instruction multiple-data (SIMD) structure. The proposed design supports multiple-precision operation through a configurable multiplier and alignment shifter. FP16 twenty-term, FP32 five-term, or FP64 one-term dot-product operations can be executed in two successive clock cycles without idle multiplication resources. To speed up the summation process, a carry-select adder (CSLA) is designed with excellent area-delay product (ADP) and power-delay product (PDP) performance. The proposed design is implemented in a UMC 55-nm process and evaluated experimentally. Compared with the state-of-the-art multiple-precision work, the proposed design achieves up to a 3.76x improvement in power performance for FP16 operations. Compared with previous CSLA designs, the proposed work improves ADP and PDP by 4.7% and 3.91%, respectively.
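The three SIMD modes named in the abstract share one multiplier budget per issue: twenty FP16 products, five FP32 products, or one FP64 product. The short Python sketch below illustrates only the numeric behaviour of those modes; it is not the authors' hardware design, and the helper name dot_product, the use of NumPy, and the sample data are illustrative assumptions.

# Software sketch (not the authors' RTL): numeric behaviour of the three
# SIMD dot-product modes described in the abstract.
import numpy as np

def dot_product(a, b, dtype):
    """Multiply element-wise and accumulate in the given FP precision."""
    a = np.asarray(a, dtype=dtype)
    b = np.asarray(b, dtype=dtype)
    # The real hardware's accumulation path is set by the alignment shifter
    # and adder tree; here we simply accumulate in the same data type.
    return np.sum(a * b, dtype=dtype)

rng = np.random.default_rng(0)

# FP16 twenty-term mode: 20 half-precision products per issue.
a16, b16 = rng.standard_normal(20), rng.standard_normal(20)
print("FP16 20-term:", dot_product(a16, b16, np.float16))

# FP32 five-term mode: 5 single-precision products per issue.
a32, b32 = rng.standard_normal(5), rng.standard_normal(5)
print("FP32  5-term:", dot_product(a32, b32, np.float32))

# FP64 one-term mode: a single double-precision product per issue.
print("FP64  1-term:", dot_product([1.5], [2.25], np.float64))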
Keywords
University Attribution
First ; Corresponding
Language
English
Related Links [Source Record]
Indexing Category
Funding Projects
Science and Technology Innovation Committee Foundation of Shenzhen[JCYJ20180504170454381]
WOS Research Areas
Computer Science ; Engineering
WOS Categories
Computer Science, Artificial Intelligence ; Computer Science, Hardware & Architecture ; Engineering, Electrical & Electronic
WOS Accession Number
WOS:000722241000061
EI Accession Number
20213410817462
EI Controlled Terms
Convolution ; Convolutional neural networks
EI Classification Codes
Information Theory and Signal Processing:716.1 ; Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory:721.1
Source Database
Web of Science
Full-Text Link: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9458534
Citation Statistics
Times Cited [WOS]: 1
Document Type: Conference Paper
Item Identifier: http://sustech.caswiz.com/handle/2SGJ60CL/222052
Collection: College of Engineering_School of Microelectronics
Author Affiliations
1.Southern Univ Sci & Technol, Sch Microelect, Shenzhen, Peoples R China
2.Huawei Technol Co Ltd, Shenzhen, Peoples R China
First Author Affiliation: School of Microelectronics
Corresponding Author Affiliation: School of Microelectronics
First Author's First Affiliation: School of Microelectronics
Recommended Citation
GB/T 7714
Li, Kai, Mao, Wei, Xie, Xinang, et al. Multiple-Precision Floating-Point Dot Product Unit for Efficient Convolution Computation[C]//2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS). New York, NY, USA: IEEE, 2021: 1-4.
Files in This Item
File Name/Size | Document Type | Version | Access | License
C132.Multiple-Precis (486KB) | - | - | Restricted Access | -
