Title | Adaptive Policy Learning for Offline-to-Online Reinforcement Learning |
Authors | Zheng, Han; Luo, Xufang; Wei, Pengfei; et al. |
Corresponding Author | Luo, Xufang |
Publication Date | 2023-06-27 |
Conference Name | 37th AAAI Conference on Artificial Intelligence (AAAI) / 35th Conference on Innovative Applications of Artificial Intelligence / 13th Symposium on Educational Advances in Artificial Intelligence |
ISSN | 2159-5399 |
EISSN | 2374-3468 |
ISBN | ***************** |
Proceedings Title | |
Volume | 37 |
Pages | 11372-11380 |
Conference Dates | FEB 07-14, 2023 |
Conference Location | Washington, DC |
Place of Publication | 2275 E BAYSHORE RD, STE 160, PALO ALTO, CA 94303 USA |
Publisher | ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE |
Abstract | Conventional reinforcement learning (RL) needs an environment from which to collect fresh data, which is impractical when online interactions are costly. Offline RL provides an alternative by learning directly from a previously collected dataset. However, it yields unsatisfactory performance when the quality of the offline dataset is poor. In this paper, we consider an offline-to-online setting where the agent is first trained on the offline dataset and then trained online, and we propose a framework called Adaptive Policy Learning to effectively take advantage of both offline and online data. Specifically, we explicitly account for the difference between online and offline data and apply an adaptive update scheme accordingly: a pessimistic update strategy for the offline dataset and an optimistic/greedy update scheme for the online dataset. This simple and effective method provides a way to mix offline and online RL and achieve the best of both worlds. We further provide two concrete algorithms that implement the framework by embedding value-based or policy-based RL algorithms into it. Finally, we conduct extensive experiments on popular continuous control tasks; the results show that our algorithm can learn the expert policy with high sample efficiency even when the quality of the offline dataset is poor, e.g., a random dataset. |
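The adaptive update scheme summarized in the abstract can be illustrated with a minimal sketch: a pessimistic (conservative) value target for transitions drawn from the offline dataset and a greedy target for fresh online transitions. This is an illustrative toy, not the authors' implementation; all function names, the penalty form, and the parameters here are assumptions.

```python
import numpy as np

def adaptive_q_target(q_next, reward, gamma, is_offline, pessimism=1.0):
    """One-step TD target with a data-source-dependent update.

    q_next: Q-values of candidate next actions.
    is_offline: True if the transition came from the offline dataset.
    pessimism: strength of the conservative penalty (assumed form).
    """
    greedy_value = np.max(q_next)
    if is_offline:
        # Pessimistic update: shrink the bootstrapped value so actions
        # unsupported by the offline data are not overestimated.
        target_value = greedy_value - pessimism * np.std(q_next)
    else:
        # Optimistic/greedy update: trust fresh online data fully.
        target_value = greedy_value
    return reward + gamma * target_value

# Same transition, treated as offline vs. online data:
q_next = np.array([1.0, 2.0, 3.0])
offline_target = adaptive_q_target(q_next, reward=0.5, gamma=0.99, is_offline=True)
online_target = adaptive_q_target(q_next, reward=0.5, gamma=0.99, is_offline=False)
assert offline_target < online_target  # offline target is more conservative
```

The only point the sketch makes is the framework's core asymmetry: the same transition produces a lower (more conservative) target when labeled offline than when labeled online.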
University Attribution | Other |
Language | English |
Related Links | [Scopus Record] |
Indexing Category | |
WOS Research Areas | Computer Science; Mathematics |
WOS Categories | Computer Science, Artificial Intelligence; Computer Science, Theory & Methods; Mathematics, Applied |
WOS Accession Number | WOS:001243747800120 |
EI Accession Number | 20233414600821 |
EI Subject Terms | E-learning |
EI Classification Codes | Artificial Intelligence: 723.4 |
Scopus Record ID | 2-s2.0-85162680903 |
Source Database | Scopus |
Citation Statistics | Times Cited [WOS]: 4 |
Output Type | Conference Paper |
Item Identifier | http://sustech.caswiz.com/handle/2SGJ60CL/559917 |
Collection | Southern University of Science and Technology |
Author Affiliations | 1. University of Technology Sydney, Australia; 2. Microsoft Research Asia, China; 3. National University of Singapore, Singapore; 4. Southern University of Science and Technology, China |
Recommended Citation (GB/T 7714) | Zheng, Han, Luo, Xufang, Wei, Pengfei, et al. Adaptive Policy Learning for Offline-to-Online Reinforcement Learning[C]. Palo Alto, CA, USA: ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE, 2023: 11372-11380. |
Files in This Item | No files are associated with this item. |
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.