Title | Model Compression by Iterative Pruning with Knowledge Distillation and Its Application to Speech Enhancement |
Authors | Wei, Zeyuan; Li, Hao; Zhang, Xueliang |
DOI | |
Publication Date | 2022 |
Conference Name | Interspeech Conference |
ISSN | 2308-457X |
EISSN | 1990-9772 |
Proceedings Title | |
Volume | 2022-September |
Pages | 941-945 |
Conference Dates | September 18-22, 2022 |
Conference Location | Incheon, South Korea |
Place of Publication | C/O EMMANUELLE FOXONET, 4 RUE DES FAUVETTES, LIEU DIT LOUS TOURILS, BAIXAS, F-66390, FRANCE |
Publisher | ISCA - International Speech Communication Association |
Abstract | Over the past decade, deep learning has demonstrated its effectiveness and keeps setting new records in a wide variety of tasks. However, good model performance usually comes with a huge number of parameters and extremely high computational complexity, which greatly limits the use of deep learning models, particularly in embedded systems. Model compression is therefore attracting increasing attention. In this paper, we propose a compression strategy based on iterative pruning and knowledge distillation. Specifically, in each iteration we first apply a pruning criterion to drop the weights that have the least impact on performance. Then the model before pruning is used as a teacher to fine-tune the student, i.e., the model after pruning. After several iterations, we obtain the final compressed model. The proposed method is evaluated on a gated convolutional recurrent network (GCRN) and a long short-term memory (LSTM) network for single-channel speech enhancement. Experimental results show that the proposed compression strategy can reduce the size of the GCRN by 40x without significant performance degradation. |
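A minimal sketch of the compression loop described in the abstract is given below: iterative pruning where, in each iteration, the model before pruning distills into the pruned model during fine-tuning. It is not the authors' implementation; the global L1-magnitude pruning criterion, the MSE distillation loss, the toy model, and all hyperparameters (`amount_per_iter`, `alpha`, etc.) are illustrative assumptions layered on PyTorch's `torch.nn.utils.prune` utilities.

```python
# Hypothetical sketch of iterative pruning with knowledge distillation (not the paper's code).
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def prune_step(model: nn.Module, amount: float) -> None:
    """Zero out the globally smallest-magnitude weights of Linear/Conv layers."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Linear, nn.Conv1d, nn.Conv2d))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=amount)


def distill_finetune(student, teacher, loader, epochs=1, alpha=0.5, lr=1e-3):
    """Fine-tune the pruned student on the target loss plus an MSE distillation loss."""
    teacher.eval()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for noisy, clean in loader:
            with torch.no_grad():
                soft_target = teacher(noisy)          # teacher's enhanced output
            pred = student(noisy)
            loss = alpha * mse(pred, clean) + (1.0 - alpha) * mse(pred, soft_target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student


def iterative_compress(model, loader, n_iters=3, amount_per_iter=0.5):
    for _ in range(n_iters):
        teacher = copy.deepcopy(model)        # snapshot before pruning acts as the teacher
        prune_step(model, amount_per_iter)    # drop the weights with the least (magnitude) impact
        model = distill_finetune(model, teacher, loader)
    # Make the pruning permanent by folding the masks back into the weights.
    for m in model.modules():
        if isinstance(m, (nn.Linear, nn.Conv1d, nn.Conv2d)) and prune.is_pruned(m):
            prune.remove(m, "weight")
    return model


if __name__ == "__main__":
    # Toy demo: random tensors stand in for (noisy, clean) spectrogram-frame pairs.
    net = nn.Sequential(nn.Linear(257, 512), nn.ReLU(), nn.Linear(512, 257))
    data = [(torch.randn(8, 257), torch.randn(8, 257)) for _ in range(4)]
    compressed = iterative_compress(net, data)
    zeros = sum((p == 0).sum().item() for p in compressed.parameters())
    total = sum(p.numel() for p in compressed.parameters())
    print(f"sparsity: {zeros / total:.2%}")
```

Note that unstructured zeros alone do not shrink a dense checkpoint; a size reduction such as the reported 40x additionally requires storing the pruned weights in a sparse or compressed format.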
Keywords | |
University Attribution | Other |
Language | English |
Related Links | [Scopus Record] |
Indexed By | |
WOS Research Areas | Acoustics; Audiology & Speech-Language Pathology; Computer Science; Engineering |
WOS Categories | Acoustics; Audiology & Speech-Language Pathology; Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic |
WOS Accession Number | WOS:000900724501024 |
Scopus ID | 2-s2.0-85140075848 |
Source Database | Scopus |
Citation Statistics | Times Cited [WOS]: 2 |
Output Type | Conference Paper |
Item Identifier | http://sustech.caswiz.com/handle/2SGJ60CL/406914 |
Collections | School of Engineering_Department of Electrical and Electronic Engineering; School of Engineering_Department of Computer Science and Engineering |
Author Affiliations | 1. Department of Computer Science, Inner Mongolia University, China; 2. Department of Electrical and Electronic Engineering, Southern University of Science and Technology, China |
Recommended Citation (GB/T 7714) | Wei, Zeyuan, Li, Hao, Zhang, Xueliang. Model Compression by Iterative Pruning with Knowledge Distillation and Its Application to Speech Enhancement[C]. Baixas, France: ISCA-INT SPEECH COMMUNICATION ASSOC, 2022: 941-945. |
Files in This Item | No files are associated with this item. |