Title

Research on Competitive Multi-task Optimization Based on Bayesian Optimization (基于贝叶斯优化的竞争多任务优化问题的研究)

Alternative Title
RESEARCH ON COMPETITIVE MULTI-TASK OPTIMIZATION BASED ON BAYESIAN OPTIMIZATION
Name
欧伟铭
Name (Pinyin)
OU Weiming
Student ID
12133007
Degree Type
Master
Degree Program
0701Z1 Business Intelligence and Big Data Management
Discipline Category
07 Science
Supervisor
王松昊
Supervisor's Affiliation
Business School
Thesis Defense Date
2023-05-18
Thesis Submission Date
2023-06-09
Degree-Granting Institution
Southern University of Science and Technology
Degree-Granting Location
Shenzhen
Abstract

Multi-task Bayesian optimization is a method that can effectively handle multiple optimization tasks at the same time. Existing research has focused mainly on two settings: the primary task is known (only the primary task needs to be optimized, with the remaining tasks providing assistance), and all tasks are equal (all tasks are optimized simultaneously). In practice, however, there are problems in which the primary task is unknown and must be identified and optimized during the course of optimization. In the process of identifying the primary task, the tasks compete to become the primary task based on their performance during optimization, so we refer to this class of problems as competitive multi-task optimization problems. We point out that competitive multi-task optimization problems apply broadly in machine learning and management science. The key to such a problem is to identify and optimize the primary task, that is, to characterize the competitiveness of the tasks. In fact, solving a competitive multi-task optimization problem does not require finding the optimum of every task; to solve it efficiently, we therefore need to assess the quality of each task and its likelihood of becoming the primary task, and allocate computational resources accordingly.

To this end, working within the Bayesian optimization framework, this thesis models the tasks with a multi-task Gaussian process, characterizes the correlation between tasks with the intrinsic coregionalization model (ICM), and uses the upper confidence bound (UCB) as the acquisition function. To address the competitiveness of the tasks, we cast the problem as a multi-armed bandit problem and design three algorithms that characterize inter-task competitiveness in order to allocate resources reasonably. Algorithm 1 is based on each task's current best observed value and selection count; Algorithm 2 is based on each task's UCB acquisition function; and Algorithm 3 is based on the successive elimination method for multi-armed bandit problems.

We prove theoretically that all three algorithms are no-regret, and we verify on test functions and real-world problems that, compared with other algorithms, they are effective at allocating computational resources and finding the optimal value. Moreover, our no-regret results provide the first theoretical guarantee for the problem of identifying the arm with the largest 100% quantile in the multi-armed bandit literature.
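The GP-UCB building block described above can be sketched compactly. The snippet below is a minimal single-task illustration, not the thesis's multi-task implementation: it computes an exact Gaussian process posterior under a squared-exponential kernel (unit signal variance; the length scale of 0.5 and the exploration weight beta = 2 are hypothetical choices) and scores each candidate point as mu(x) + sqrt(beta) * sigma(x).

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X_obs, y_obs, X_cand, noise=1e-6):
    """Exact GP posterior mean and standard deviation at candidate points."""
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_s = rbf_kernel(X_cand, X_obs)
    K_inv = np.linalg.inv(K)
    mu = K_s @ K_inv @ y_obs
    # Unit prior variance minus the explained part (quadratic form per row).
    var = 1.0 - np.einsum("ij,jk,ik->i", K_s, K_inv, K_s)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def ucb_select(X_obs, y_obs, X_cand, beta=2.0):
    """Pick the candidate maximizing the UCB score mu + sqrt(beta) * sigma."""
    mu, sigma = gp_posterior(X_obs, y_obs, X_cand)
    scores = mu + np.sqrt(beta) * sigma
    return int(np.argmax(scores)), scores

# Toy run: maximize f(x) = -(x - 0.3)^2 on [0, 1] from three observations.
f = lambda x: -(x - 0.3) ** 2
X_obs = np.array([[0.0], [0.5], [1.0]])
y_obs = f(X_obs).ravel()
X_cand = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
idx, _ = ucb_select(X_obs, y_obs, X_cand)
```

Extending this sketch to the multi-task setting would replace the kernel with an ICM covariance over (input, task) pairs, as the thesis does.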

 

Other Abstract

Multi-task Bayesian optimization is an approach that can solve multiple optimization tasks simultaneously. Most existing research focuses on problems in which the primary task is known or in which all tasks are equal. However, some problems have an unknown primary task, which must be identified and optimized during the optimization process. During this identification, each task competes to become the primary task according to its performance. We call these problems competitive multi-task optimization problems; they can be widely applied in the machine learning and management science domains. The key to this problem is to find the optimum and the corresponding task, that is, to characterize the tasks' competitiveness. In fact, the problem does not require solving every task to optimality; we should therefore evaluate the performance of each task in order to allocate computational resources reasonably.

We employ the Bayesian optimization framework to solve this problem. We model the tasks with a multi-task Gaussian process with an ICM covariance function, which characterizes the correlation across tasks, and use an upper confidence bound (UCB) acquisition function. We formulate the tasks' competitiveness as a multi-armed bandit (MAB) problem and design three algorithms to select the task to query in each iteration. The first algorithm is based on the current best observation and selection count of each task; the second is based on the UCB acquisition function of each task; and the third is based on the successive elimination approach in MAB. These yield three competitive multi-task Bayesian optimization algorithms. We prove theoretically that the proposed algorithms are no-regret and test their performance, in allocating computational resources and finding the optimal value, on several synthetic functions and real-world problems. In addition, our convergence results provide a novel theoretical guarantee for the problem of identifying the best 100%-quantile arm.
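As a rough sketch of the ingredient behind the third algorithm, the toy example below runs successive elimination on a small bandit instance. The arms stand in for tasks; the Bernoulli rewards, round budget, and Hoeffding-style confidence constants are illustrative assumptions, not the thesis's actual model or analysis.

```python
import numpy as np

def successive_elimination(pull, n_arms, rounds=500, delta=0.05, rng=None):
    """Pull every active arm once per round and drop any arm whose upper
    confidence bound falls below the lower confidence bound of the
    empirically best arm."""
    if rng is None:
        rng = np.random.default_rng(0)
    active = list(range(n_arms))
    counts = np.zeros(n_arms)
    sums = np.zeros(n_arms)
    for t in range(1, rounds + 1):
        for a in active:
            sums[a] += pull(a, rng)
            counts[a] += 1
        means = sums[active] / counts[active]
        # Hoeffding-style radius for rewards in [0, 1]; the constants are
        # illustrative, not tuned to match any particular analysis.
        radius = np.sqrt(np.log(4 * n_arms * t ** 2 / delta) / (2 * counts[active]))
        best_lcb = np.max(means - radius)
        active = [a for a, m, r in zip(active, means, radius) if m + r >= best_lcb]
        if len(active) == 1:
            break
    return active

# Toy instance: three "tasks" with Bernoulli rewards; task 2 is best.
mu = [0.2, 0.4, 0.8]
surviving = successive_elimination(lambda a, rng: rng.binomial(1, mu[a]), n_arms=3)
```

In the competitive multi-task setting, the per-arm statistics would come from the multi-task GP surrogate rather than from raw reward averages.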

 

Keywords
Other Keywords
Language
Chinese
Training Category
Independently trained
Year of Enrollment
2021
Year of Degree Conferral
2023-06
Degree Assessment Subcommittee
Mathematics
Chinese Library Classification Number
110.7480
Source Repository
Manual submission
Document Type
Dissertation
Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/543859
Collection
Business School, Department of Information Systems and Management Engineering
Recommended Citation
GB/T 7714
OU Weiming. Research on Competitive Multi-task Optimization Based on Bayesian Optimization[D]. Shenzhen: Southern University of Science and Technology, 2023.
Files in This Item
12133007-欧伟铭-信息系统与管理 (1813 KB), restricted access