Title | Proximal policy optimization with model-based methods |
Authors | Li, Shuailong; Zhang, Wei; Zhang, Huiwen; Zhang, Xin; Leng, Yuquan |
Corresponding authors | Zhang, Wei; Leng, Yuquan |
Publication date | 2022 |
DOI | |
Journal | JOURNAL OF INTELLIGENT & FUZZY SYSTEMS |
ISSN | 1064-1246 |
EISSN | 1875-8967 |
Volume, issue, pages | 42(6): 5399-5410 |
Abstract | Model-free reinforcement learning methods have been applied successfully to practical decision-making problems such as Atari games. However, these methods have inherent shortcomings, such as high variance and low sample efficiency. To improve policy performance and sample efficiency in model-free reinforcement learning, we propose proximal policy optimization with model-based methods (PPOMM), which fuses model-based and model-free reinforcement learning. PPOMM considers not only information from past experience but also predictive information about the future state. PPOMM adds information about the next state to the objective function of the proximal policy optimization (PPO) algorithm through a model-based method. The method uses two components to optimize the policy: the error of PPO and the error of model-based reinforcement learning. We use the latter to optimize a latent transition model and to predict information about the next state. When evaluated across 49 Atari games in the Arcade Learning Environment (ALE), this method outperforms the state-of-the-art PPO algorithm on most games. The experimental results show that PPOMM performs as well as or better than the original algorithm in 33 games. |
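The abstract describes an objective with two error terms: the clipped surrogate error of PPO and the prediction error of a latent transition model trained on next-state information. The following sketch (Python/PyTorch) illustrates one way such a combined loss could be assembled; the mean-squared prediction loss, the weighting coefficient model_coef, and the LatentTransitionModel class are assumptions for illustration only, not the authors' published implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentTransitionModel(nn.Module):
    # Hypothetical latent dynamics model: predicts the next latent state
    # from the current latent state and the taken action.
    def __init__(self, latent_dim, action_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, latent, action):
        return self.net(torch.cat([latent, action], dim=-1))

def ppomm_loss(new_log_probs, old_log_probs, advantages,
               predicted_next_latent, target_next_latent,
               clip_eps=0.2, model_coef=0.5):
    # Term 1: standard PPO clipped surrogate objective (negated for minimization).
    ratio = torch.exp(new_log_probs - old_log_probs)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    ppo_term = -torch.min(ratio * advantages, clipped * advantages).mean()

    # Term 2: model-based error between predicted and observed next latent state
    # (assumed here to be a mean-squared error; the paper's exact form may differ).
    model_term = F.mse_loss(predicted_next_latent, target_next_latent)

    return ppo_term + model_coef * model_term

In a training loop, the latent encoding of the current observation and the chosen action would be passed through the transition model, its output compared against the latent encoding of the observed next state, and the resulting loss backpropagated jointly with the PPO term.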
Keywords | |
Related links | [Source record] |
Indexed in | |
Language | English |
Institutional authorship | Corresponding author |
Funding projects | National Natural Science Foundation of China [52175272]
; State Key Laboratory of Robotics, China [2020-KF-22-03]
; State Key Laboratory of Robotics Foundation [Y91Z0303]
; China Postdoctoral Science Foundation [2020M670814]
; Liaoning Provincial Natural Science Foundation [2020-MS-033] |
WOS research area | Computer Science |
WOS category | Computer Science, Artificial Intelligence |
WOS accession number | WOS:000790690300042 |
Publisher | |
EI accession number | 20222012111244 |
EI subject terms | Computer aided instruction
; Decision making
; Reinforcement learning |
EI classification codes | Artificial Intelligence: 723.4
; Computer Applications: 723.5
; Education: 901.2
; Management: 912.2
; Production Engineering: 913.1 |
ESI discipline | COMPUTER SCIENCE |
Source database | Web of Science |
Citation statistics | Times cited [WOS]: 1 |
Output type | Journal article |
Item identifier | http://sustech.caswiz.com/handle/2SGJ60CL/334722 |
Collection | College of Engineering_Department of Mechanical and Energy Engineering |
Author affiliations | 1.Chinese Acad Sci, Shenyang Inst Automat, State Key Lab Robot, Shenyang, Peoples R China 2.Chinese Acad Sci, Inst Robot & Intelligent Mfg, Shenyang, Peoples R China 3.Univ Chinese Acad Sci, Beijing, Peoples R China 4.CVTE Res, Guangzhou, Peoples R China 5.Southern Univ Sci & Technol, Dept Mech & Energy Engn, Shenzhen Key Lab Biomimet Robot & Intelligent Sys, Shenzhen, Peoples R China 6.Southern Univ Sci & Technol, Guangdong Prov Key Lab Human Augmentat & Rehabil, Shenzhen, Peoples R China |
Corresponding author affiliation | Department of Mechanical and Energy Engineering; Southern University of Science and Technology |
Recommended citation (GB/T 7714) | Li, Shuailong, Zhang, Wei, Zhang, Huiwen, et al. Proximal policy optimization with model-based methods[J]. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2022, 42(6): 5399-5410. |
APA | Li, Shuailong, Zhang, Wei, Zhang, Huiwen, Zhang, Xin, & Leng, Yuquan. (2022). Proximal policy optimization with model-based methods. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 42(6), 5399-5410. |
MLA | Li, Shuailong, et al. "Proximal policy optimization with model-based methods". JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 42.6 (2022): 5399-5410. |
Files in this item | No files are associated with this item. |