[1] BURGER B, MAFFETTONE P M, GUSEV V V, et al. A mobile robotic chemist[J]. Nature, 2020, 583: 237-241.
[2] ZHU Q, ZHANG F, HUANG Y, et al. An all-round AI-Chemist with a scientific mind[J]. National Science Review, 2022, 9(10): nwac190.
[3] ZHU Q, HUANG Y, ZHOU D, et al. Automated synthesis of oxygen-producing catalysts from Martian meteorites by a robotic AI chemist[J]. Nature Synthesis, 2024, 3: 319-328.
[4] 赵明, 郑泽宇, 么庆丰, 等. 基于改进人工势场法的移动机器人路径规划方法[J]. 计算机应用研究, 2020, 37(S2): 66-68+72.
[5] 祝敬, 杨马英. 基于改进人工势场法的机械臂避障路径规划[J]. 计算机测量与控制, 2018, 26(10): 205-210.
[6] 谢龙. 冗余机械臂动态避障规划[D]. 浙江大学, 2018: 29-68.
[7] HART P E, NILSSON N J, RAPHAEL B. A formal basis for the heuristic determination of minimum cost paths[J]. IEEE Transactions on Systems Science and Cybernetics, 1968, 4(2): 100-107.
[8] DIJKSTRA E W. A note on two problems in connexion with graphs[J]. Numerische Mathematik, 1959, 1(1): 269-271.
[9] KAVRAKI L E, SVESTKA P, LATOMBE J C, et al. Probabilistic roadmaps for path planning in high-dimensional configuration spaces[J]. IEEE Transactions on Robotics and Automation, 1996, 12(4): 566-580.
[10] UMARI H, MUKHOPADHYAY S. Autonomous robotic exploration based on multiple rapidly-exploring randomized trees[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017: 1396-1402.
[11] KUFFNER J J, LAVALLE S M. RRT-Connect: An efficient approach to single-query path planning[C]. IEEE International Conference on Robotics and Automation (ICRA), 2000: 995-1001.
[12] BOHLIN R, KAVRAKI L E. Path planning using Lazy PRM[C]. IEEE International Conference on Robotics and Automation (ICRA), 2000: 521-528.
[13] KARAMAN S, FRAZZOLI E. Sampling-based algorithms for optimal motion planning[J]. The International Journal of Robotics Research, 2011: 846-894.
[14] QURESHI A H, AYAZ Y. Potential functions based sampling heuristic for optimal path planning[J]. Autonomous Robots, 2016: 1079-1093.
[15] GAMMELL J D, SRINIVASA S S, BARFOOT T D. Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014: 2997-3004.
[16] GAMMELL J D, SRINIVASA S S, BARFOOT T D. Batch Informed Trees (BIT*): Sampling-based optimal planning via the heuristically guided search of implicit random geometric graphs[C]. IEEE International Conference on Robotics and Automation (ICRA), 2015: 3067-3074.
[17] TAHIR Z, QURESHI A H, AYAZ Y, et al. Potentially guided bidirectionalized RRT* for fast optimal path planning in cluttered environments[J]. Robotics and Autonomous Systems, 2018, 108: 13-27.
[18] 邹宇星, 李立君, 高自成. 基于改进PRM的采摘机器人机械臂避障路径规划[J]. 传感器与微系统, 2019, 38(1): 52-56.
[19] 卞永明, 季鹏成, 周怡和, 等. 基于改进型DWA的移动机器人避障路径规划[J]. 中国工程机械学报, 2021, 19(1): 44-49.
[20] 曹毅, 郭梦诗, 刘海洲. 基于改进RRT算法的串联机械臂避障空间路径规划[J]. 机床与液压, 2018, 46(18): 70-76.
[21] 王功亮, 王好臣, 李振雨, 等. 基于优化遗传算法的移动机器人路径规划[J]. 机床与液压, 2019, 47(3): 37-40+100.
[22] LEVEN P, HUTCHINSON S. A framework for real-time path planning in changing environments[J]. The International Journal of Robotics Research, 2002, 21(12): 999-1030.
[23] POMARLAN M, ŞUCAN I A. Motion planning for manipulators in dynamically changing environments using real-time mapping of free workspace[C]. IEEE 14th International Symposium on Computational Intelligence and Informatics (CINTI), 2013: 483-487.
[24] NADERI K, RAJAMAKI J, HAMALAINEN P. RT-RRT*: A real-time path planning algorithm based on RRT*[C]. Proceedings of the 8th ACM SIGGRAPH Conference on Motion in Games (MIG), 2015: 113-118.
[25] PAN J, MANOCHA D. GPU-based parallel collision detection for fast motion planning[J]. The International Journal of Robotics Research, 2012, 31(2): 187-200.
[26] MURRAY S, FLOYD W, QI Y, et al. Robot motion planning on a chip[C]. Robotics: Science and Systems, 2016: 1-28.
[27] RATLIFF N, ZUCKER M, BAGNELL J A, et al. CHOMP: Gradient optimization techniques for efficient motion planning[C]. IEEE International Conference on Robotics and Automation (ICRA), 2009: 489-494.
[28] KALAKRISHNAN M, CHITTA S, THEODOROU E, et al. STOMP: Stochastic trajectory optimization for motion planning[C]. IEEE International Conference on Robotics and Automation (ICRA), 2011: 4569-4574.
[29] PARK C, PAN J, MANOCHA D. Real-time optimization-based planning in dynamic environments using GPUs[C]. IEEE International Conference on Robotics and Automation (ICRA), 2013: 4090-4097.
[30] TAPIA L, THOMAS S, BOYD B, et al. An unsupervised adaptive strategy for constructing probabilistic roadmaps[C]. IEEE International Conference on Robotics and Automation (ICRA), 2009: 4037-4044.
[31] BERENSON D, ABBEEL P, GOLDBERG K. A robot path planning framework that learns from experience[C]. IEEE International Conference on Robotics and Automation (ICRA), 2012: 3671-3678.
[32] ICHTER B, HARRISON J, PAVONE M. Learning sampling distributions for robot motion planning[C]. IEEE International Conference on Robotics and Automation (ICRA), 2018: 7087-7094.
[33] QURESHI A H, YIP M C. Deeply informed neural sampling for robot motion planning[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018: 6582-6588.
[34] TRAN T, EKENNA C. Identifying valid robot configurations via a deep learning approach[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021: 8973-8978.
[35] SHI C, LAN X, WANG Y. Motion planning for unmanned vehicle based on hybrid deep learning[C]. International Conference on Security Pattern Analysis and Cybernetics (SPAC), 2017: 473-478.
[36] ZHANG C, HUH J, LEE D D. Learning implicit sampling distributions for motion planning[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018: 3654-3661.
[37] QURESHI A H, YIP M C. Deeply informed neural sampling for robot motion planning[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018: 6582-6588.
[38] GERAERTS R, OVERMARS M H. Creating high-quality paths for motion planning[J]. The International Journal of Robotics Research, 2007, 26(8): 845-863.
[39] PAN J, ZHANG L, MANOCHA D. Collision-free and smooth trajectory computation in cluttered environments[J]. The International Journal of Robotics Research, 2012, 31(10): 1155-1175.
[40] FUJII S, PHAM Q C. Realtime trajectory smoothing with neural nets[C]. IEEE International Conference on Robotics and Automation (ICRA), 2022: 7248-7254.
[41] DAI S, ORTON M, SCHAFFERT S, et al. Improving trajectory optimization using a roadmap framework[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018: 8674-8681.
[42] ZHAO R, SIDOBRE D. Trajectory smoothing using jerk bounded shortcuts for service manipulator robots[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015: 4929-4934.
[43] KAPPLER D. Real-time perception meets reactive motion generation[J]. IEEE Robotics and Automation Letters, 2018, 3(3): 1864-1871.
[44] ICHTER B, PAVONE M. Robot motion planning in learned latent spaces[J]. IEEE Robotics and Automation Letters, 2019, 4(3): 2407-2414.
[45] YE G, ALTEROVITZ R. Demonstration-guided motion planning[C]. Robotics Research. Switzerland: Springer, 2017: 291-307.
[46] QURESHI A H, MIAO Y, SIMEONOV A, et al. Motion planning networks: bridging the gap between learning-based and classical motion planners[J]. IEEE Transactions on Robotics, 2021, 37(1): 48-66.
[47] PARQUE V. Learning motion planning functions using a linear transition in the c-space: networks and kernels[C]. IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC), 2021: 1538-1543.
[48] CARVALHO J, LE A T, BAIERL M, et al. Motion planning diffusion: learning and planning of robot motions with diffusion models[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023: 1916-1923.
[49] BENCY M J, QURESHI A H, YIP M C. Neural path planning: Fixed time near-optimal path generation via oracle imitation[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019: 3965-3972.
[50] ICHTER B, HARRISON J, PAVONE M. Learning sampling distributions for robot motion planning[C]. IEEE International Conference on Robotics and Automation (ICRA), 2018: 7087-7094.
[51] ZHANG C, HUH J, LEE D D. Learning implicit sampling distributions for motion planning[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018: 3654-3661.
[52] BHARDWAJ M, CHOUDHURY S, SCHERER S. Learning heuristic search via imitation[C]. 1st Annual Conference on Robot Learning (CoRL), 2017: 271-280.
[53] WANG J, JIA X, ZHANG T, et al. Deep neural network enhanced sampling-based path planning in 3D space[J]. IEEE Transactions on Automation Science and Engineering, 2022, 19(4): 3434-3443.
[54] LEE S U, GONZALEZ R, IAGNEMMA K. Robust sampling-based motion planning for autonomous tracked vehicles in deformable high slip terrain[C]. IEEE International Conference on Robotics and Automation (ICRA), 2016: 2569-2574.
[55] WANG J, MENG M Q-H. Optimal path planning using generalized voronoi graph and multiple potential functions[J]. IEEE Transactions on Industrial Electronics, 2020, 67(12): 10621-10630.
[56] LI Q, GAMA F, RIBEIRO A, et al. Graph neural networks for decentralized multi-robot path planning[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020: 11785-11792.
[57] MOLINA D, KUMAR K, SRIVASTAVA S. Learn and link: learning critical regions for efficient planning[C]. IEEE International Conference on Robotics and Automation (ICRA), 2020: 10605-10611.
[58] LI X, CAO Q, SUN M, et al. Fast motion planning via free C-space estimation based on deep neural network[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019: 3542-3548.
[59] CHARLES R Q, SU H, KAICHUN M, et al. PointNet: Deep learning on point sets for 3D classification and segmentation[C]. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 77-85.
[60] GAEBERT C, THOMAS U. Learning-based adaptive sampling for manipulator motion planning[C]. IEEE 18th International Conference on Automation Science and Engineering (CASE), 2022: 715-721.
[61] DAYAN D, SOLOVEY K, PAVONE M, et al. Near-optimal multi-robot motion planning with finite sampling[J]. IEEE Transactions on Robotics, 2023, 39(5): 3422-3436.
[62] KIM B, KAELBLING L P, LOZANO-PÉREZ T. Guiding search in continuous state-action spaces by learning an action sampler from off-target search experience[C]. AAAI Conference on Artificial Intelligence, 2018: 6509-6516.
[63] KUO Y L, BARBU A, KATZ B. Deep sequential models for sampling-based planning[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018: 6490-6497.
[64] MOLINA D, KUMAR K, SRIVASTAVA S. Learn and link: Learning critical regions for efficient planning[C]. IEEE International Conference on Robotics and Automation (ICRA), 2020: 10605-10611.
[65] KEW J C, ICHTER B, BANDARI M, et al. Neural collision clearance estimator for batched motion planning[J]. arXiv preprint arXiv:1910.05917, 2019.
[66] GUO N, LI C, WANG D, et al. A fusion method of local path planning for mobile robots based on LSTM neural network and reinforcement learning[J]. Mathematical Problems in Engineering, 2021: 1-21.
[67] BHARDWAJ M, CHOUDHURY S, SCHERER S. Learning heuristic search via imitation[C]. 1st Annual Conference on Robot Learning (CoRL), 2017: 271-280.
[68] TERASAWA R, ARIKI Y, NARIHIRA T, et al. 3D-CNN based heuristic guided task-space planner for faster motion planning[C]. IEEE International Conference on Robotics and Automation (ICRA), 2020: 9548-9554.
[69] URTANS E, VECINS V. Value iteration solver networks[C]. 3rd International Conference on Intelligent Autonomous Systems (ICoIAS), 2020: 8-13.
[70] RICKERT M, SIEVERLING A, BROCK O. Balancing exploration and exploitation in sampling-based motion planning[J]. IEEE Transactions on Robotics, 2014, 30(6): 1305-1317.
[71] MESESAN G, ROA M A, ICER E, et al. Hierarchical path planner using workspace decomposition and parallel task-space RRTs[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018: 1-9.
[72] LILLICRAP T P, HUNT J J, PRITZEL A, et al. Continuous control with deep reinforcement learning[J]. arXiv preprint arXiv:1509.02971, 2015.
[73] MNIH V, BADIA A P, MIRZA M, et al. Asynchronous methods for deep reinforcement learning[C]. International Conference on Machine Learning (ICML), 2016: 1928-1937.
[74] SCHULMAN J, WOLSKI F, DHARIWAL P, et al. Proximal policy optimization algorithms[J]. arXiv preprint arXiv:1707.06347, 2017.
[75] ZHANG J, GUO J, BAI C. Heuristic reward function for reinforcement learning based manipulator motion planning[C]. IEEE International Conference on Unmanned Systems (ICUS), 2022: 1545-1550.
[76] NUBERT J, KÖHLER J, BERENZ V, et al. Safe and fast tracking on a robot manipulator: robust mpc and neural network control[J]. IEEE Robotics and Automation Letters, 2020, 5(2): 3050-3057.
[77] YANG S, WANG Q. Robotic arm motion planning with autonomous obstacle avoidance based on deep reinforcement learning[C]. The 41st Chinese Control Conference (CCC), 2022: 3692-3697.
[78] SHEN T, LIU X, DONG Y, et al. Energy-efficient motion planning and control for robotic arms via deep reinforcement learning[C]. 34th Chinese Control and Decision Conference (CCDC), 2022: 5502-5507.
[79] ZHOU D, JIA R, YAO H, et al. Robotic arm motion planning based on residual reinforcement learning[C]. The 13th International Conference on Computer and Automation Engineering (ICCAE), 2021: 89-94.
[80] ZHOU D, JIA R, YAO H. Robotic arm motion planning based on curriculum reinforcement learning[C]. 6th International Conference on Control and Robotics Engineering (ICCRE), 2021: 44-49.
[81] ANDRYCHOWICZ M, WOLSKI F, RAY A, et al. Hindsight experience replay[C]. Advances in Neural Information Processing Systems, 2017: 5048-5058.
[82] LIU W, NIU H, MAHYUDDIN M N, et al. A model-free deep reinforcement learning approach for robotic manipulators path planning[C]. 21st International Conference on Control, Automation and Systems (ICCAS), 2021: 512-517.
[83] NAIR A, MCGREW B, ANDRYCHOWICZ M. Overcoming exploration in reinforcement learning with demonstrations[C]. IEEE International Conference on Robotics and Automation (ICRA), 2018: 6292-6299.
[84] COHEN B, CHITTA S, LIKHACHEV M. Single- and dual-arm motion planning with heuristic search[J]. The International Journal of Robotics Research, 2014, 33(2): 305-320.
[85] WANG B, XIAO Y. Reinforcement learning based end-to-end control of bimanual robotic coordination[C]. 9th International Conference on Systems and Informatics (ICSAI), 2023: 1-7.
[86] SANGIOVANNI B, RENDINIELLO A, INCREMONA G P, et al. Deep reinforcement learning for collision avoidance of robotic manipulators[C]. European Control Conference (ECC), 2018: 2063-2068.
[87] AKINOLA I, WANG Z, ALLEN P. CLAMGen: Closed-loop arm motion generation via multi-view vision-based RL[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021: 2376-2382.
[88] ZHANG G, WANG B, LIU K, et al. A deep reinforcement learning-based motion planner for human-robot collaboration in pathological experiments[C]. IEEE 7th Information Technology and Mechatronics Engineering Conference (ITOEC), 2023: 504-508.
[89] JURGENSON T, TAMAR A. Harnessing reinforcement learning for neural motion planning[J]. arXiv preprint arXiv:1906.00214, 2019.
[90] QIAN L, DONG Z, LIN Q, et al. Intelligent control method of robotic arm follow-up based on reinforcement learning[C]. 2nd International Symposium on Control Engineering and Robotics (ISCER), 2023: 48-55.
[91] PANOV A I, YAKOVLEV K S, SUVOROV R. Grid path planning with deep reinforcement learning: preliminary results[C]. International Conference on Biologically Inspired Cognitive Architectures, 2017: 347-353.
[92] CHIANG H T L, HSU J, FISER M, et al. RL-RRT: Kinodynamic motion planning via learning reachability estimators from RL policies[J]. IEEE Robotics and Automation Letters, 2019: 4298-4305.
[93] NGUYEN L H, DAO M K, HUA H Q B, et al. Motion navigation algorithm based on deep reinforcement learning for manipulators[C]. IEEE 3rd International Conference on Electronic Communications, Internet of Things and Big Data (ICEIB), 2023: 537-541.
[94] YAO Q, ZHENG Z, QI L, et al. Path planning method with improved artificial potential field: a reinforcement learning perspective[J]. IEEE Access, 2020, 8: 135513-135523.
[95] DONG M, YING F, LI X, et al. Efficient policy learning for general robotic tasks with adaptive dual-memory hindsight experience replay based on deep reinforcement learning[C]. 7th International Conference on Robotics, Control and Automation (ICRCA), 2023: 62-66.
[96] HAN H, XI Z, CHENG J, et al. Obstacle avoidance based on deep reinforcement learning and artificial potential field[C]. The 9th International Conference on Control, Automation and Robotics (ICCAR), 2023: 215-220.
[97] TANG Z, XU X, SHI Y. Grasp planning based on deep reinforcement learning: a brief survey[C]. China Automation Congress (CAC), 2021: 7293-7299.
[98] LINDNER T, MILECKI A. Reinforcement learning-based algorithm to avoid obstacles by the anthropomorphic robotic arm[J]. Applied Sciences, 2022, 12(13): 6629.
[99] WANG H, ZHU H, CAO F. Trajectory planning algorithm of manipulator in small space based on reinforcement learning[C]. China Automation Congress (CAC), 2023: 5780-5785.
[100] XIANG Y, WEN J, LUO W, et al. Research on collision-free control and simulation of single-agent based on an improved DDPG algorithm[C]. The 35th Youth Academic Annual Conference of Chinese Association of Automation (YAC), 2020: 552-556.
[101] LI Z, MA H, DING Y, et al. Motion planning of six-DOF arm robot based on improved DDPG algorithm[C]. 39th Chinese Control Conference (CCC), 2020: 3954-3959.
[102] FAUST A, OSLUND K, RAMIREZ O, et al. PRM-RL: Long-range robotic navigation tasks by combining reinforcement learning and sampling-based planning[C]. IEEE International Conference on Robotics and Automation (ICRA), 2018: 5113-5120.
[103] KAMALI K, BONEV I A, DESROSIERS C. Real-time motion planning for robotic teleoperation using dynamic-goal deep reinforcement learning[C]. 17th Conference on Computer and Robot Vision (CRV), 2020: 182-189.
[104] BANSAL S, TOLANI V, GUPTA S, et al. Combining optimal control and learning for visual navigation in novel environments[C]. Conference on Robot Learning, 2020: 420-429.
[105] ZHANG J, GUO J, BAI C. Heuristic reward function for reinforcement learning based manipulator motion planning[C]. IEEE International Conference on Unmanned Systems (ICUS), 2022: 1545-1550.
[106] KÄSTNER L. Connecting deep-reinforcement-learning-based obstacle avoidance with conventional global planners using waypoint generators[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021: 1213-1220.
[107] GIESELMANN R, POKORNY F T. Planning-augmented hierarchical reinforcement learning[J]. IEEE Robotics and Automation Letters, 2021, 6(3): 5097-5104.
[108] SUTTON R S, MCALLESTER D A, SINGH S P, et al. Policy gradient methods for reinforcement learning with function approximation[C]. Advances in Neural Information Processing Systems (NIPS), 1999: 1057-1063.