[1] GARNETT R. Bayesian optimization[M]. Cambridge University Press, 2022 (in preparation).
[2] DAI S, SONG J, YUE Y. Multi-task Bayesian optimization via Gaussian process upper confidence bound[C]//ICML 2020 Workshop on Real World Experiment Design and Active Learning. 2020: 1-12.
[3] LI G, ZHANG Q, WANG Z. Evolutionary competitive multitasking optimization[J]. IEEE Transactions on Evolutionary Computation, 2022, 26(2): 278-289.
[4] GOODFELLOW I, BENGIO Y, COURVILLE A. Deep learning[M]. MIT Press, 2016.
[5] SHEN B, GNANASAMBANDAM R, WANG R, et al. Multi-task Gaussian process upper confidence bound for hyperparameter tuning and its application for simulation studies of additive manufacturing[J]. IISE Transactions, 2023, 55(5): 496-508.
[6] FRAZIER P I. A tutorial on Bayesian optimization[A]. 2018.
[7] SHAHRIARI B, SWERSKY K, WANG Z, et al. Taking the human out of the loop: A review of Bayesian optimization[J]. Proceedings of the IEEE, 2016, 104(1): 148-175.
[8] RASMUSSEN C E, WILLIAMS C K I. Gaussian processes for machine learning[M]. Cambridge, MA: MIT Press, 2006.
[9] JONES D R, SCHONLAU M, WELCH W J. Efficient global optimization of expensive black-box functions[J]. Journal of Global Optimization, 1998, 13(4): 455-492.
[10] SRINIVAS N, KRAUSE A, KAKADE S M, et al. Gaussian process optimization in the bandit setting: No regret and experimental design[A]. 2009.
[11] FRAZIER P, POWELL W, DAYANIK S. The knowledge-gradient policy for correlated normal beliefs[J]. INFORMS Journal on Computing, 2009, 21(4): 599-613.
[12] HENNIG P, SCHULER C J. Entropy search for information-efficient global optimization[J]. Journal of Machine Learning Research, 2012, 13(6).
[13] HERNÁNDEZ-LOBATO J M, GELBART M, HOFFMAN M, et al. Predictive entropy search for Bayesian optimization with unknown constraints[C]//International Conference on Machine Learning. PMLR, 2015: 1699-1707.
[14] MINKA T P. A family of algorithms for approximate Bayesian inference[D]. Massachusetts Institute of Technology, 2001.
[15] CHAPELLE O, LI L. An empirical evaluation of Thompson sampling[J]. Advances in Neural Information Processing Systems, 2011, 24.
[16] WU J, FRAZIER P. The parallel knowledge gradient method for batch Bayesian optimization[J]. Advances in Neural Information Processing Systems, 2016, 29.
[17] WU J, POLOCZEK M, WILSON A G, et al. Bayesian optimization with gradients[J]. Advances in Neural Information Processing Systems, 2017, 30.
[18] LIU H, CAI J, ONG Y S. Remarks on multi-output Gaussian process regression[J]. Knowledge-Based Systems, 2018, 144: 102-121.
[19] GONG M, TANG Z, LI H, et al. Evolutionary multitasking with dynamic resource allocating strategy[J]. IEEE Transactions on Evolutionary Computation, 2019, 23(5): 858-869.
[20] BALI K K, ONG Y S, GUPTA A, et al. Multifactorial evolutionary algorithm with online transfer parameter estimation: MFEA-II[J]. IEEE Transactions on Evolutionary Computation, 2019, 24(1): 69-83.
[21] GUPTA A, ONG Y S, FENG L. Multifactorial evolution: toward evolutionary multitasking[J]. IEEE Transactions on Evolutionary Computation, 2015, 20(3): 343-357.
[22] DING J, YANG C, JIN Y, et al. Generalized multitasking for evolutionary optimization of expensive problems[J]. IEEE Transactions on Evolutionary Computation, 2017, 23(1): 44-58.
[23] ZHENG X, QIN A K, GONG M, et al. Self-regulated evolutionary multitask optimization[J]. IEEE Transactions on Evolutionary Computation, 2019, 24(1): 16-28.
[24] BONILLA E V, CHAI K, WILLIAMS C. Multi-task Gaussian process prediction[J]. Advances in Neural Information Processing Systems, 2007, 20.
[25] SWERSKY K, SNOEK J, ADAMS R P. Multi-task Bayesian optimization[J]. Advances in Neural Information Processing Systems, 2013, 26.
[26] PENG Y, CHEN C H, FU M C, et al. Efficient sampling allocation procedures for optimal quantile selection[J]. INFORMS Journal on Computing, 2021, 33(1): 230-245.
[27] RYZHOV I O. On the convergence rates of expected improvement methods[J]. Operations Research, 2016, 64(6): 1515-1528.
[28] BULL A D. Convergence rates of efficient global optimization algorithms[J]. Journal of Machine Learning Research, 2011, 12(10).
[29] KRAUSE A, ONG C. Contextual Gaussian process bandit optimization[J]. Advances in Neural Information Processing Systems, 2011, 24.
[30] WILLIAMS C, KLANKE S, VIJAYAKUMAR S, et al. Multi-task Gaussian process learning of robot inverse dynamics[J]. Advances in Neural Information Processing Systems, 2008, 21.
[31] HAKHAMANESHI K, ABBEEL P, STOJANOVIC V, et al. JUMBO: Scalable multi-task Bayesian optimization using offline data[A]. 2021.
[32] RU B, ALVI A, NGUYEN V, et al. Bayesian optimisation over multiple continuous and categorical inputs[C]//International Conference on Machine Learning. PMLR, 2020: 8276-8285.
[33] DAVID Y, SHIMKIN N. Pure exploration for max-quantile bandits[C]//Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2016: 556-571.
[34] NIKOLAKAKIS K E, KALOGERIAS D S, SHEFFET O, et al. Quantile multi-armed bandits: Optimal best-arm identification and a differentially private scheme[J]. IEEE Journal on Selected Areas in Information Theory, 2021, 2(2): 534-548.
[35] HOWARD S R, RAMDAS A. Sequential estimation of quantiles with applications to A/B testing and best-arm identification[J]. Bernoulli, 2022, 28(3): 1704-1728.
[36] ZHANG M, ONG C S. Quantile bandits for best arms identification[C]//International Conference on Machine Learning. PMLR, 2021: 12513-12523.
[37] YANG Y, BUETTNER F. Multi-output Gaussian Processes for uncertainty-aware recommender systems[C]//Uncertainty in Artificial Intelligence. PMLR, 2021: 1505-1514.
[38] BARDENET R, BRENDEL M, KÉGL B, et al. Collaborative hyperparameter tuning[C]//International Conference on Machine Learning. PMLR, 2013: 199-207.
[39] FEURER M, SPRINGENBERG J, HUTTER F. Initializing Bayesian hyperparameter optimization via meta-learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence: volume 29. 2015.
[40] YOGATAMA D, MANN G. Efficient transfer learning method for automatic hyperparameter tuning[C]//Artificial Intelligence and Statistics. PMLR, 2014: 1077-1085.
[41] HUTTER F, HOOS H H, LEYTON-BROWN K. Sequential model-based optimization for general algorithm configuration[C]//International Conference on Learning and Intelligent Optimization. Springer, 2011: 507-523.
[42] GOOVAERTS P. Geostatistics for natural resources evaluation[M]. Oxford University Press, 1997.
[43] PINHEIRO J C, BATES D M. Unconstrained parametrizations for variance-covariance matrices[J]. Statistics and Computing, 1996, 6(3): 289-296.
[44] TEH Y W, SEEGER M, JORDAN M I. Semiparametric latent factor models[C]//International Workshop on Artificial Intelligence and Statistics. PMLR, 2005: 333-340.
[45] PEARCE M, BRANKE J. Continuous multi-task bayesian optimisation with correlation[J]. European Journal of Operational Research, 2018, 270(3): 1074-1085.
[46] ZHANG Y, APLEY D W, CHEN W. Bayesian optimization for materials design with mixed quantitative and qualitative variables[J]. Scientific Reports, 2020, 10(1): 1-13.
[47] SURJANOVIC S, BINGHAM D. Virtual Library of Simulation Experiments: Test Functions and Datasets[Z]. Retrieved March 26, 2023, from http://www.sfu.ca/~ssurjano.