Title | Optimization based layer-wise magnitude-based pruning for DNN compression
Authors | Li, Guiying; Qian, Chao; Jiang, Chunhui; et al.
Publication date | 2018
ISSN | 1045-0823
Proceedings title |
Volume | 2018-July
Pages | 2383-2389
Abstract | Layer-wise magnitude-based pruning (LMP) is a very popular method for deep neural network (DNN) compression. However, tuning the layer-specific thresholds is a difficult task, since the space of threshold candidates is exponentially large and the evaluation is very expensive. Previous methods tune the thresholds mainly by hand and require expertise. In this paper, we propose an automatic tuning approach based on optimization, named OLMP. The idea is to transform the threshold tuning problem into a constrained optimization problem (i.e., minimizing the size of the pruned model subject to a constraint on the accuracy loss), and then use powerful derivative-free optimization algorithms to solve it. To compress a trained DNN, OLMP is conducted within a new iterative pruning and adjusting pipeline. Empirical results show that OLMP can achieve the best pruning ratio on LeNet-style models (i.e., 114 times for LeNet-300-100 and 298 times for LeNet-5) compared with some state-of-the-art DNN pruning methods, and can reduce the size of an AlexNet-style network up to 82 times without accuracy loss.
University affiliation | Other
Language | English
Related links | [Scopus record]
Scopus record ID | 2-s2.0-85055705469
Source database | Scopus
Publication type | Conference paper
Item identifier | http://sustech.caswiz.com/handle/2SGJ60CL/44353
Collection | College of Engineering_Department of Computer Science and Engineering
Author affiliations | 1. Anhui Province Key Lab of Big Data Analysis and Application, University of Science and Technology of China, Hefei 230027, China; 2. CERCIA, School of Computer Science, University of Birmingham, Birmingham B15 2TT, United Kingdom; 3. Shenzhen Key Lab of Computational Intelligence, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
Recommended citation (GB/T 7714) | Li, Guiying, Qian, Chao, Jiang, Chunhui, et al. Optimization based layer-wise magnitude-based pruning for DNN compression[C], 2018: 2383-2389.
Files in this item | No files are associated with this item.
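The abstract above frames layer-wise threshold tuning as a constrained optimization problem: minimize the pruned model's size subject to a bound on accuracy loss. The sketch below illustrates that idea only in outline; it is not the authors' OLMP implementation. The two-layer toy "network", the surrogate accuracy_loss function, the search range, and the random search standing in for a derivative-free optimizer are all placeholder assumptions.

```python
"""Minimal sketch of layer-wise magnitude-based pruning (LMP) with
threshold tuning posed as a constrained search. Illustrative only."""

import numpy as np

rng = np.random.default_rng(0)

# Toy "network": layer name -> weight matrix (assumed shapes, random values).
weights = {
    "fc1": rng.normal(size=(300, 100)),
    "fc2": rng.normal(size=(100, 10)),
}


def prune(weights, thresholds):
    """Layer-wise magnitude pruning: zero weights whose magnitude is
    below that layer's threshold."""
    return {
        name: np.where(np.abs(w) < thresholds[name], 0.0, w)
        for name, w in weights.items()
    }


def model_size(weights):
    """Number of remaining (non-zero) parameters."""
    return sum(int(np.count_nonzero(w)) for w in weights.values())


def accuracy_loss(pruned):
    """Placeholder for the accuracy drop measured on validation data.
    Here: a crude surrogate that grows with the pruned fraction."""
    total = sum(w.size for w in pruned.values())
    return 0.1 * (1.0 - model_size(pruned) / total)


def tune_thresholds(weights, max_loss=0.02, n_trials=200):
    """Constrained search: minimize pruned model size subject to
    accuracy_loss <= max_loss. Random search is used here purely as a
    stand-in for a derivative-free optimizer."""
    best_thr, best_size = None, model_size(weights)
    for _ in range(n_trials):
        # Sample one candidate threshold per layer (assumed search range).
        cand = {name: rng.uniform(0.0, 2.0) for name in weights}
        pruned = prune(weights, cand)
        if accuracy_loss(pruned) <= max_loss and model_size(pruned) < best_size:
            best_thr, best_size = cand, model_size(pruned)
    return best_thr, best_size


if __name__ == "__main__":
    thr, size = tune_thresholds(weights)
    print("best thresholds:", thr)
    print("remaining parameters:", size, "of",
          sum(w.size for w in weights.values()))
```

In the paper's setting, the surrogate would be replaced by evaluating the pruned network on validation data, and the search would run inside the iterative pruning-and-adjusting pipeline described in the abstract.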