Title

Multi-View Self-Attention Based Transformer for Speaker Recognition

Authors
DOI
Publication Date
2022
Conference Name
47th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
ISSN
1520-6149
ISBN
978-1-6654-0541-6
Proceedings Title
Volume
2022-May
Pages
6732-6736
Conference Dates
23-27 May 2022
Conference Location
Singapore, Singapore
Place of Publication
345 E 47TH ST, NEW YORK, NY 10017 USA
Publisher
Abstract
Initially developed for natural language processing (NLP), the Transformer model is now widely used for speech processing tasks such as speaker recognition, owing to its powerful sequence modeling capabilities. However, conventional self-attention mechanisms were originally designed for modeling textual sequences, without considering the characteristics of speech and speaker modeling. Besides, different Transformer variants for speaker recognition have not been well studied. In this work, we propose a novel multi-view self-attention mechanism and present an empirical study of different Transformer variants, with and without the proposed attention mechanism, for speaker recognition. Specifically, to balance the capability of capturing global dependencies with that of modeling locality, we propose a multi-view self-attention mechanism for the speaker Transformer, in which different attention heads can attend to different ranges of the receptive field. Furthermore, we introduce and compare five Transformer variants with different network architectures, embedding locations, and pooling methods to learn speaker embeddings. Experimental results on the VoxCeleb1 and VoxCeleb2 datasets show that the proposed multi-view self-attention mechanism improves speaker recognition performance, and that the proposed speaker Transformer network attains excellent results compared with state-of-the-art models.
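The core idea in the abstract, letting different attention heads attend to different ranges of the receptive field, can be illustrated with a short sketch. This is not the authors' implementation; the function name, the symmetric per-head window scheme (with `None` meaning unrestricted global attention), and the tensor shapes are assumptions made for demonstration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_view_attention(q, k, v, window_sizes):
    """Multi-head attention where each head sees its own "view":
    head h may only attend to positions within window_sizes[h] steps
    of the query position; None means global (unrestricted) attention.
    q, k, v: arrays of shape (num_heads, seq_len, d_head).
    """
    num_heads, seq_len, d_head = q.shape
    out = np.empty_like(v)
    for h, w in enumerate(window_sizes):
        scores = q[h] @ k[h].T / np.sqrt(d_head)  # (seq_len, seq_len)
        if w is not None:
            # mask out positions beyond this head's receptive field |i - j| > w
            idx = np.arange(seq_len)
            mask = np.abs(idx[:, None] - idx[None, :]) > w
            scores = np.where(mask, -1e9, scores)
        out[h] = softmax(scores) @ v[h]
    return out
```

Mixing window sizes, e.g. `window_sizes=[1, 4, None]` for a three-head layer, gives some heads a local view for fine-grained acoustic detail and others a global view for long-range dependencies, which is the balance the abstract describes.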
Keywords
University Affiliation
Other
Language
English
Related Link [IEEE Record]
Indexed By
Funding Projects
National Natural Science Foundation of China (61976160, 62076182, 61906137)
WOS Research Areas
Acoustics ; Computer Science ; Engineering
WOS Categories
Acoustics ; Computer Science, Artificial Intelligence ; Engineering, Electrical & Electronic
WOS Record Number
WOS:000864187907007
EI Accession Number
20222312199281
Source Database
IEEE
Full Text Link: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9746639
Citation Statistics
Cited Times [WOS]: 18
Document Type: Conference Paper
Item Identifier: http://sustech.caswiz.com/handle/2SGJ60CL/347982
Collection: College of Engineering_Department of Computer Science and Engineering
Author Affiliations
1. Tongji University, Department of Computer Science and Technology
2. Southern University of Science and Technology, Department of Computer Science and Engineering
3. Microsoft Research Asia
4. The Hong Kong Polytechnic University, Department of Computing
Recommended Citation
GB/T 7714
Rui Wang, Junyi Ao, Long Zhou, et al. Multi-View Self-Attention Based Transformer for Speaker Recognition[C]. New York, NY, USA: IEEE, 2022: 6732-6736.
Files in This Item
No files are associated with this item.

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.