Title

SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing

Authors
Ao, Junyi; Wang, Rui; Zhou, Long (Corresponding Author); et al.
Publication Date
2022
Conference Name
60th Annual Meeting of the Association for Computational Linguistics (ACL)
Proceedings Title
Conference Dates
May 22-27, 2022
Conference Location
Dublin, Ireland
Place of Publication
209 N EIGHTH STREET, STROUDSBURG, PA 18360 USA
Publisher
Association for Computational Linguistics (ACL)
Abstract
Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
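The abstract describes a concrete architecture, so a small sketch may help make it tangible. The PyTorch code below is a minimal illustration of the described layout: modal-specific pre-nets feeding one shared encoder-decoder, modal-specific post-nets, and a toy version of the cross-modal vector quantization that randomly mixes encoder states with latent units. It is a sketch under assumptions, not the authors' released implementation: the module sizes, the nearest-neighbor quantizer, the mix_prob value, and the collapsing of the paper's six pre/post-nets into four reusable modules are all simplifications introduced here.

import torch
import torch.nn as nn

class SpeechT5Sketch(nn.Module):
    """Toy layout: shared encoder-decoder plus modal-specific pre/post-nets."""

    def __init__(self, d_model=768, vocab_size=10000, n_mels=80, n_units=100):
        super().__init__()
        # Modal-specific pre-nets map raw text/speech into the shared space.
        self.text_prenet = nn.Embedding(vocab_size, d_model)
        self.speech_prenet = nn.Linear(n_mels, d_model)
        # Shared encoder-decoder models the sequence-to-sequence transformation.
        self.backbone = nn.Transformer(d_model=d_model, nhead=12,
                                       num_encoder_layers=12,
                                       num_decoder_layers=6,
                                       batch_first=True)
        # Modal-specific post-nets generate output in the target modality.
        self.text_postnet = nn.Linear(d_model, vocab_size)
        self.speech_postnet = nn.Linear(d_model, n_mels)
        # Shared codebook of latent units acting as the encoder-decoder
        # interface (size and initialization are assumptions of this sketch).
        self.codebook = nn.Parameter(torch.randn(n_units, d_model))

    def mix_with_units(self, states, mix_prob=0.3):
        # Toy cross-modal vector quantization: snap each encoder state to its
        # nearest latent unit, then randomly replace a fraction of the states
        # with those units so speech and text meet in one shared space.
        batch = states.size(0)
        codes = self.codebook.unsqueeze(0).expand(batch, -1, -1)
        nearest = torch.cdist(states, codes).argmin(dim=-1)   # (B, T)
        quantized = self.codebook[nearest]                    # (B, T, d_model)
        mask = (torch.rand(states.shape[:2], device=states.device)
                < mix_prob).unsqueeze(-1)
        return torch.where(mask, quantized, states)

    def forward(self, src, tgt, src_is_speech=True, tgt_is_speech=False):
        # Attention masks are omitted for brevity.
        enc_in = self.speech_prenet(src) if src_is_speech else self.text_prenet(src)
        dec_in = self.speech_prenet(tgt) if tgt_is_speech else self.text_prenet(tgt)
        memory = self.mix_with_units(self.backbone.encoder(enc_in))
        hidden = self.backbone.decoder(dec_in, memory)
        postnet = self.speech_postnet if tgt_is_speech else self.text_postnet
        return postnet(hidden)

As a usage illustration, an ASR-style pass would feed a (batch, frames, 80) log-mel tensor as src with src_is_speech=True and token ids as tgt with tgt_is_speech=False, yielding vocabulary logits from the text post-net. The pre-training objectives over large-scale unlabeled speech and text mentioned in the abstract are not reproduced here.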
University Authorship Position
First
Language
English
Indexed By
WOS Research Areas
Computer Science ; Linguistics
WOS Categories
Computer Science, Artificial Intelligence ; Computer Science, Interdisciplinary Applications ; Linguistics
WOS Record Number
WOS:000828702305058
Source Database
Web of Science
Citation Statistics
Times Cited [WOS]: 28
Document Type
Conference Paper
Item Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/401486
Collection
College of Engineering_Department of Computer Science and Engineering
Author Affiliations
1.Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen, Peoples R China
2.Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
3.Tongji Univ, Dept Comp Sci & Technol, Shanghai, Peoples R China
4.Microsoft, Redmond, WA 98052 USA
5.Peng Cheng Lab, Shenzhen, Peoples R China
First Author Affiliation
Department of Computer Science and Engineering
First Author's First Affiliation
Department of Computer Science and Engineering
Recommended Citation
GB/T 7714
Ao, Junyi, Wang, Rui, Zhou, Long, et al. SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing[C]. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022.
Files in This Item
No related files.
Related Rights Policy
No data available.
