[1] 工业和信息化部关于推进工业机器人产业发展的指导意见[J]. 新型工业化, 2014, 4: 1-2.
[2] 工业和信息化部关于推进工业机器人产业发展的指导意见[J]. 科技导报, 33: 79.
[3] 工信部装备工业司. 《中国制造 2025》解读之:推动机器人发展[EB/OL]. https://www.gov.cn/zhuanti/2016-05/12/content_5072768.htm.
[4] NEHMZOW U. Mobile robotics: a practical introduction[M]. Springer Science & Business Media, 2012.
[5] ISLAM M J, HONG J, SATTAR J. Person-following by autonomous robots: A categorical overview[J]. The International Journal of Robotics Research, 2019, 38(14): 1581-1618.
[6] TRIEBEL R, ARRAS K, ALAMI R, et al. Spencer: A socially aware service robot for passenger guidance and help in busy airports[C]//Field and Service Robotics: Results of the 10th International Conference. Springer, 2016: 607-622.
[7] KOIDE K, MIURA J. Identification of a specific person using color, height, and gait features for a person following robot[J]. Robotics and Autonomous Systems, 2016, 84: 76-87.
[8] KOIDE K, MIURA J, MENEGATTI E. Monocular person tracking and identification with online deep feature selection for person following robots[J]. Robotics and Autonomous Systems, 2020, 124: 103348.
[9] MI W, WANG X, REN P, et al. A system for an anticipative front human following robot[C]//Proceedings of the International Conference on Artificial Intelligence and Robotics and the International Conference on Automation, Control and Robotics Engineering. 2016: 1-6.
[10] GUPTA M, KUMAR S, BEHERA L, et al. A novel vision-based tracking algorithm for a human-following mobile robot[J]. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2016, 47(7): 1415-1427.
[11] MAC T T, COPOT C, TRAN D T, et al. Heuristic approaches in robot path planning: A survey[J]. Robotics and Autonomous Systems, 2016, 86: 13-28.
[12] MOHANTY P K, PARHI D R. Controlling the motion of an autonomous mobile robot using various techniques: a review[J]. Journal of Advance Mechanical Engineering, 2013, 1(1): 24-39.
[13] DIJKSTRA E W. A note on two problems in connexion with graphs[M]//Edsger Wybe Dijkstra: His Life, Work, and Legacy. 2022: 287-290.
[14] KARUR K, SHARMA N, DHARMATTI C, et al. A survey of path planning algorithms for mobile robots[J]. Vehicles, 2021, 3(3): 448-468.
[15] HOLTE R, FELNER A, SHARON G, et al. Bidirectional search that is guaranteed to meet in the middle[C]//Proceedings of the AAAI Conference on Artificial Intelligence: Vol. 30. 2016.
[16] SÁNCHEZ-IBÁÑEZ J R, PÉREZ-DEL PULGAR C J, GARCÍA-CEREZO A. Path planning for autonomous mobile robots: A review[J]. Sensors, 2021, 21(23): 7898.
[17] CLAUSSMANN L, REVILLOUD M, GRUYER D, et al. A review of motion planning for highway autonomous driving[J]. IEEE Transactions on Intelligent Transportation Systems, 2019, 21(5): 1826-1848.
[18] YURTSEVER E, LAMBERT J, CARBALLO A, et al. A survey of autonomous driving: Common practices and emerging technologies[J]. IEEE Access, 2020, 8: 58443-58469.
[19] PADEN B, ČÁP M, YONG S Z, et al. A survey of motion planning and control techniques for self-driving urban vehicles[J]. IEEE Transactions on Intelligent Vehicles, 2016, 1(1): 33-55.
[20] KARAMAN S, FRAZZOLI E. Sampling-based algorithms for optimal motion planning[J]. The International Journal of Robotics Research, 2011, 30(7): 846-894.
[21] KUFFNER J J, LAVALLE S M. RRT-connect: An efficient approach to single-query path planning[C]//Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065): Vol. 2. IEEE, 2000: 995-1001.
[22] LATOMBE J C. Robot motion planning: Vol. 124[M]. Springer Science & Business Media, 2012.
[23] CHENG C, SHA Q, HE B, et al. Path planning and obstacle avoidance for AUV: A review[J]. Ocean Engineering, 2021, 235: 109355.
[24] RÖSMANN C, FEITEN W, WÖSCH T, et al. Trajectory modification considering dynamic constraints of autonomous robots[C]//ROBOTIK 2012; 7th German Conference on Robotics. VDE, 2012: 1-6.
[25] RÖSMANN C, FEITEN W, WÖSCH T, et al. Efficient trajectory optimization using a sparse model[C]//2013 European Conference on Mobile Robots. IEEE, 2013: 138-143.
[26] KATOCH S, CHAUHAN S S, KUMAR V. A review on genetic algorithm: past, present, and future[J]. Multimedia Tools and Applications, 2021, 80: 8091-8126.
[27] HU Y, YANG S X. A knowledge based genetic algorithm for path planning of a mobile robot[C]//IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA '04: Vol. 5. IEEE, 2004: 4350-4355.
[28] YUN S C, PARASURAMAN S, GANAPATHY V. Dynamic path planning algorithm in mobile robot navigation[C]//2011 IEEE Symposium on Industrial Electronics and Applications. IEEE, 2011: 364-369.
[29] GAD A G. Particle swarm optimization algorithm and its applications: a systematic review[J]. Archives of Computational Methods in Engineering, 2022, 29(5): 2531-2561.
[30] YANG C, SIMON D. A new particle swarm optimization technique[C]//18th International Conference on Systems Engineering (ICSEng’05). IEEE, 2005: 164-169.
[31] WANG D, TAN D, LIU L. Particle swarm optimization algorithm: an overview[J]. Soft Computing, 2018, 22: 387-408.
[32] ZHANG Y, GONG D W, ZHANG J H. Robot path planning in uncertain environment using multi-objective particle swarm optimization[J]. Neurocomputing, 2013, 103: 172-185.
[33] TANG J, LIU G, PAN Q. A review on representative swarm intelligence algorithms for solving optimization problems: Applications and trends[J]. IEEE/CAA Journal of Automatica Sinica, 2021, 8(10): 1627-1643.
[34] BELLMAN R. Dynamic programming[J]. Science, 1966, 153(3731): 34-37.
[35] IHME M, CHUNG W T, MISHRA A A. Combustion machine learning: Principles, progress and prospects[J]. Progress in Energy and Combustion Science, 2022, 91: 101010.
[36] BELLMAN R. A Markovian decision process[J]. Journal of Mathematics and Mechanics, 1957: 679-684.
[37] SUTTON R S. Learning to predict by the methods of temporal differences[J]. Machine Learning, 1988, 3: 9-44.
[38] SILVER D, SINGH S, PRECUP D, et al. Reward is enough[J]. Artificial Intelligence, 2021, 299: 103535.
[39] WATKINS C J, DAYAN P. Q-learning[J]. Machine Learning, 1992, 8: 279-292.
[40] AFSAR M M, CRUMP T, FAR B. Reinforcement learning based recommender systems: A survey[J]. ACM Computing Surveys, 2022, 55(7): 1-38.
[41] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Playing atari with deep reinforcement learning[A]. 2013.
[42] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Human-level control through deep reinforcement learning[J]. Nature, 2015, 518(7540): 529-533.
[43] YANG Y, JUNTAO L, LINGLING P. Multi-robot path planning based on a deep reinforcement learning DQN algorithm[J]. CAAI Transactions on Intelligence Technology, 2020, 5(3): 177-183.
[44] VAN HASSELT H, GUEZ A, SILVER D. Deep reinforcement learning with double q-learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence: Vol. 30. 2016.
[45] MOHAMED A, LEE H Y, BORGHOLT L, et al. Self-supervised speech representation learning: A review[J]. IEEE Journal of Selected Topics in Signal Processing, 2022, 16(6): 1179-1210.
[46] SCHULMAN J, LEVINE S, ABBEEL P, et al. Trust region policy optimization[C]//International conference on machine learning. PMLR, 2015: 1889-1897.
[47] SCHULMAN J, WOLSKI F, DHARIWAL P, et al. Proximal policy optimization algorithms[A]. 2017.
[48] ACHIAM J. Spinning Up in Deep Reinforcement Learning[Z]. 2018.
[49] SILVER D, LEVER G, HEESS N, et al. Deterministic policy gradient algorithms[C]//International conference on machine learning. PMLR, 2014: 387-395.
[50] MNIH V, BADIA A P, MIRZA M, et al. Asynchronous methods for deep reinforcement learning[C]//International conference on machine learning. PMLR, 2016: 1928-1937.
[51] DABNEY W, ROWLAND M, BELLEMARE M, et al. Distributional reinforcement learning with quantile regression[C]//Proceedings of the AAAI Conference on Artificial Intelligence: Vol. 32. 2018.
[52] REDMON J, FARHADI A. Yolov3: An incremental improvement[A]. 2018.