Title

Collective Learning of Low-Memory Matrix Adaptation for Large-Scale Black-Box Optimization

Authors
Corresponding Author: Duan, Qiqi
DOI
Publication Date
2022
Conference Name
17th International Conference on Parallel Problem Solving from Nature (PPSN)
ISSN
0302-9743
EISSN
1611-3349
ISBN
978-3-031-14720-3
Proceedings Title
Volume
13399 LNCS
Pages
281-294
Conference Dates
SEP 10-14, 2022
Conference Location
Dortmund, GERMANY
Place of Publication
GEWERBESTRASSE 11, CHAM, CH-6330, SWITZERLAND
Publisher
Abstract
Growth in computing power can continue to be driven by parallelism, despite the end of Moore’s law. To exploit this trend, we propose to parallelize the low-memory matrix adaptation evolution strategy (LM-MA-ES), recently proposed for large-scale black-box optimization, aiming to further improve its scalability (w.r.t. CPU cores) on modern distributed computing platforms. To achieve this aim, three key design choices are carefully made and naturally combined within the multilevel learning framework. First, to fit into the memory hierarchy and reduce communication cost, which is critical for parallel performance on modern multi-core computer architectures, the well-known island model with a star interaction network is employed to run multiple concurrent LM-MA-ES instances, each of which can be executed efficiently and serially on a separate island owing to its low computational complexity. Second, to support fast convergence under the multilevel learning framework, we adopt Meta-ES to hierarchically exploit spatial-nonlocal information for global step-size adaptation at the outer-ES level, combined with cumulative step-size adaptation, which exploits temporal-nonlocal information at the inner-ES (i.e., serial LM-MA-ES) level. Third, a set of fitter individuals at the outer-ES level, represented as (distribution mean, evolution path, transformation matrix)-tuples, is collectively recombined to exploit the desirable genetic-repair effect for statistically more stable online learning. Experiments in a cluster computing environment empirically validate the parallel performance of our approach on high-dimensional, memory-costly test functions. Its Python code is available at https://github.com/Evolutionary-Intelligence/D-LM-MA.
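The minimal Python sketch below illustrates the outer-ES loop described in the abstract: several islands run with log-normally perturbed global step-sizes (Meta-ES), and the (distribution mean, evolution path, transformation matrix)-tuples of the fitter islands are collectively recombined. All names (e.g., run_lmmaes_island, sphere) are placeholders, and the inner serial LM-MA-ES is replaced by a stub; this is not the authors' D-LM-MA implementation (see the GitHub link above for that).

import numpy as np

def sphere(x):
    # Simple test function used only for illustration.
    return float(np.dot(x, x))

def run_lmmaes_island(mean, path, matrix, sigma, fitness):
    # Placeholder for one serial LM-MA-ES island; here it merely perturbs the
    # distribution mean and returns the updated tuple plus its fitness.
    new_mean = mean + sigma * np.random.randn(mean.size)
    return new_mean, path, matrix, fitness(new_mean)

n_dim, n_islands, n_parents = 100, 8, 4
mean, path = np.random.randn(n_dim), np.zeros(n_dim)
matrix = np.zeros((4, n_dim))  # low-memory transformation matrix (few rows << n_dim)
global_sigma = 1.0

for generation in range(50):
    # Meta-ES: every island receives a perturbed copy of the global step-size.
    sigmas = global_sigma * np.exp(0.3 * np.random.randn(n_islands))
    results = [run_lmmaes_island(mean, path, matrix, s, sphere) for s in sigmas]
    # Collective learning: select the fitter islands and recombine their
    # (mean, evolution path, transformation matrix)-tuples ("genetic repair").
    fitter = np.argsort([r[3] for r in results])[:n_parents]
    mean = np.mean([results[i][0] for i in fitter], axis=0)
    path = np.mean([results[i][1] for i in fitter], axis=0)
    matrix = np.mean([results[i][2] for i in fitter], axis=0)
    global_sigma = float(np.exp(np.log(sigmas[fitter]).mean()))  # geometric mean

print("best recombined fitness:", sphere(mean))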
Keywords
University Attribution
Corresponding
Language
English
Related Links [Scopus Record]
Indexing Category
Funding Project
Shenzhen Fundamental Research Program[JCYJ20200109141235597]
WOS Research Area
Computer Science
WOS Category
Computer Science, Artificial Intelligence
WOS Accession Number
WOS:000871753400020
EI Accession Number
20223712707331
EI Subject Terms
Computing power ; Evolutionary algorithms ; Learning systems ; Linear transformations ; Matrix algebra ; Memory architecture
EI Classification Codes
Computer Systems and Equipment:722 ; Computer Peripheral Equipment:722.2 ; Digital Computers and Systems:722.4 ; Computer Software, Data Handling and Applications:723 ; Algebra:921.1 ; Mathematical Transformations:921.3 ; Optimization Techniques:921.5
Scopus Record ID
2-s2.0-85137275010
Source Database
Scopus
Citation Statistics
Times Cited [WOS]: 2
Document Type: Conference Paper
Item Identifier: http://sustech.caswiz.com/handle/2SGJ60CL/401661
Collection: Southern University of Science and Technology
Author Affiliations
1.Harbin Institute of Technology,Harbin,China
2.University of Technology Sydney,Sydney,Australia
3.Southern University of Science and Technology,Shenzhen,China
First Author Affiliation: Southern University of Science and Technology
Corresponding Author Affiliation: Southern University of Science and Technology
Recommended Citation
GB/T 7714
Duan Qiqi, Zhou Guochen, Shao Chang, et al. Collective Learning of Low-Memory Matrix Adaptation for Large-Scale Black-Box Optimization[C]. Gewerbestrasse 11, Cham, CH-6330, Switzerland: Springer International Publishing AG, 2022: 281-294.
Files in This Item
No files associated with this item.
