[1] UNITREE ROBOTICS. Quadruped Robot B1 Empower Fire Scouting and Emergency Rescue[EB/OL]. 2023. https://www.youtube.com/watch?v=V_tsqLtuKBI.
[2] BOSTON DYNAMICS. Spot at AB InBev Belgium[EB/OL]. 2024. https://www.youtube.com/watch?v=9pZQ29RSz4I.
[3] BOSTON DYNAMICS. Spot at Ontario Power Generation: Automating Circuit Breaker Tripping and Racking[EB/OL]. 2023. https://www.youtube.com/watch?v=CyjYIgnsIeY.
[4] ALIBABATECH. Quadruped robots dancing on Alibaba Cloud Intelligence’s Hangzhou EFC campus[EB/OL]. 2021. https://www.youtube.com/watch?v=ZK-ZO8ovw_E.
[5] DI CARLO J, WENSING P M, KATZ B, et al. Dynamic Locomotion in the MIT Cheetah 3 Through Convex Model-Predictive Control[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 1-9.
[6] BLEDT G, POWELL M J, KATZ B, et al. MIT Cheetah 3: Design and Control of a Robust, Dynamic Quadruped Robot[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 2245-2252.
[7] HWANGBO J, LEE J, DOSOVITSKIY A, et al. Learning Agile and Dynamic Motor Skills for Legged Robots[J]. Science Robotics, 2019, 4(26): eaau5872.
[8] KUMAR A, FU Z, PATHAK D, et al. RMA: Rapid Motor Adaptation for Legged Robots[C]//Robotics: Science and Systems (RSS). 2021: 1-15.
[9] CHEN H, HONG Z, YANG S, et al. Quadruped Capturability and Push Recovery via a Switched-Systems Characterization of Dynamic Balance[J]. IEEE Transactions on Robotics (T-RO), 2023.
[10] SCHULMAN J, LEVINE S, ABBEEL P, et al. Trust Region Policy Optimization[C]//International Conference on Machine Learning (ICML). PMLR, 2015: 1889-1897.
[11] SCHULMAN J, WOLSKI F, DHARIWAL P, et al. Proximal Policy Optimization Algorithms[A]. 2017.
[12] MAKOVIYCHUK V, WAWRZYNIAK L, GUO Y, et al. Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning[C]//Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track. 2021.
[13] TOBIN J, FONG R, RAY A, et al. Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 23-30.
[14] PENG X B, ANDRYCHOWICZ M, ZAREMBA W, et al. Sim-to-Real Transfer of Robotic Control with Dynamics Randomization[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 3803-3810.
[15] LEE J, HWANGBO J, WELLHAUSEN L, et al. Learning Quadrupedal Locomotion over Challenging Terrain[J]. Science Robotics, 2020, 5(47): eabc5986.
[16] TODOROV E, EREZ T, TASSA Y. MuJoCo: A Physics Engine for Model-based Control[C]//2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2012: 5026-5033.
[17] COUMANS E, BAI Y. PyBullet, a Python Module for Physics Simulation for Games, Robotics and Machine Learning[EB/OL]. 2016. http://pybullet.org.
[18] RUDIN N, HOELLER D, REIST P, et al. Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning[C]//Conference on Robot Learning (CoRL). PMLR, 2022: 91-100.
[19] PENG X B, ABBEEL P, LEVINE S, et al. DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills[J]. ACM Transactions on Graphics (TOG), 2018, 37(4): 1-14.
[20] PENG X B, COUMANS E, ZHANG T, et al. Learning Agile Robotic Locomotion Skills by Imitating Animals[C]//Robotics: Science and Systems (RSS). 2020.
[21] PENG X B, MA Z, ABBEEL P, et al. AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control[J]. ACM Transactions on Graphics (TOG), 2021, 40(4): 1-20.
[22] ESCONTRELA A, PENG X B, YU W, et al. Adversarial Motion Priors Make Good Substitutes for Complex Reward Functions[C]//2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022: 25-32.
[23] FUCHIOKA Y, XIE Z, VAN DE PANNE M. OPT-Mimic: Imitation of Optimized Trajectories for Dynamic Quadruped Behaviors[A]. 2022.
[24] KANG D, CHENG J, ZAMORA M, et al. RL + Model-Based Control: Using On-Demand Optimal Control to Learn Versatile Legged Locomotion[A]. 2023.
[25] SHAO Y, JIN Y, LIU X, et al. Learning Free Gait Transition for Quadruped Robots via Phase-Guided Controller[J]. IEEE Robotics and Automation Letters (RA-L), 2021, 7(2): 1230-1237.
[26] JIN Y, LIU X, SHAO Y, et al. High-Speed Quadrupedal Locomotion by Imitation-Relaxation Reinforcement Learning[J]. Nature Machine Intelligence, 2022: 1-11.
[27] MILLER A J, FAHMI S, CHIGNOLI M, et al. Reinforcement Learning for Legged Robots: Motion Imitation from Model-Based Optimal Control[A]. 2023.
[28] MARGOLIS G B, YANG G, PAIGWAR K, et al. Rapid Locomotion via Reinforcement Learning[J]. The International Journal of Robotics Research (IJRR), 2022.
[29] WU J, XIN G, QI C, et al. Learning Robust and Agile Legged Locomotion Using Adversarial Motion Priors[J]. IEEE Robotics and Automation Letters (RA-L), 2023.
[30] CHEN D, ZHOU B, KOLTUN V, et al. Learning by Cheating[C]//Conference on Robot Learning (CoRL). PMLR, 2020: 66-75.
[31] MIKI T, LEE J, HWANGBO J, et al. Learning Robust Perceptive Locomotion for Quadrupedal Robots in the Wild[J]. Science Robotics, 2022, 7(62): eabk2822.
[32] CHENG X, SHI K, AGARWAL A, et al. Extreme Parkour with Legged Robots[A]. 2023.
[48] ACEITUNO-CABEZAS B, MASTALLI C, DAI H, et al. Simultaneous Contact, Gait, and Motion Planning for Robust Multilegged Locomotion via Mixed-integer Convex Optimization[J]. IEEE Robotics and Automation Letters (RA-L), 2017, 3(3): 2531-2538.
[49] GRIFFIN R J, WIEDEBACH G, MCCRORY S, et al. Footstep Planning for Autonomous Walking over Rough Terrain[C]//2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids). IEEE, 2019: 9-16.
[50] LEE Y H, LEE Y H, LEE H, et al. Whole-body Motion and Landing Force Control for Quadrupedal Stair Climbing[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 4746-4751.
[51] QI S, LIN W, HONG Z, et al. Perceptive Autonomous Stair Climbing for Quadrupedal Robots[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2021.
[52] GRANDIA R, TAYLOR A J, AMES A D, et al. Multi-layered Safety for Legged Robots via Control Barrier Functions and Model Predictive Control[C]//2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021: 8352-8358.
[53] GRANDIA R, JENELTEN F, YANG S, et al. Perceptive Locomotion Through Nonlinear Model-Predictive Control[J]. IEEE Transactions on Robotics (T-RO), 2023.
[54] FISCHLER M A, BOLLES R C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography[J]. Communications of the ACM, 1981, 24(6): 381-395.
[55] SCHNABEL R, WAHL R, KLEIN R. Efficient RANSAC for Point-cloud Shape Detection[C]//Computer Graphics Forum: volume 26. Wiley Online Library, 2007: 214-226.
[56] LEE M, KWON Y, LEE S, et al. Dynamic Humanoid Locomotion over Rough Terrain with Streamlined Perception-control Pipeline[C]//2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021: 4111-4117.
[57] BERTRAND S, LEE I, MISHRA B, et al. Detecting Usable Planar Regions for Legged Robot Locomotion[C]//2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020: 4736-4742.
[58] POPPINGA J, VASKEVICIUS N, BIRK A, et al. Fast Plane Detection and Polygonalization in Noisy 3D Range Images[C]//2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2008: 3378-3383.
[59] HOLZ D, BEHNKE S. Fast Range Image Segmentation and Smoothing Using Approximate Surface Reconstruction and Region Growing[C]//IAS (2). 2012: 61-73.
[60] FENG C, TAGUCHI Y, KAMAT V R. Fast Plane Extraction in Organized Point Clouds Using Agglomerative Hierarchical Clustering[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 6218-6225.
[61] PROENÇA P F, GAO Y. Fast Cylinder and Plane Extraction from Depth Cameras for Visual Odometry[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 6813-6820.
[62] PRINCE S J. Computer Vision: Models, Learning, and Inference[M]. Cambridge University Press, 2012: 264-266.
[63] XU Z, ZHU H, CHEN H, et al. Polytopic Planar Region Characterization of Rough Terrains for Legged Locomotion[C]//2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022: 8682-8689.
[64] SUTTON R S, BARTO A G. Reinforcement Learning: An Introduction[M]. MIT Press, 2018.
[65] WILLIAMS R J. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning[J]. Machine Learning, 1992, 8: 229-256.
[66] SUTTON R S, MCALLESTER D, SINGH S, et al. Policy Gradient Methods for Reinforcement Learning with Function Approximation[J]. Advances in Neural Information Processing Systems (NeurIPS), 1999, 12.
[67] SCHULMAN J, MORITZ P, LEVINE S, et al. High-dimensional Continuous Control Using Generalized Advantage Estimation[A]. 2015.
[68] MEAGHER D J. Octree Encoding: A New Technique for the Representation, Manipulation and Display of Arbitrary 3-D Objects by Computer[M]. 1980.
[69] LIN Z H, HUANG S Y, WANG Y C F. Convolution in the Cloud: Learning Deformable Kernels in 3D Graph Convolution Networks for Point Cloud Analysis[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2020: 1800-1809.
[70] QI C R, SU H, MO K, et al. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017: 652-660.
[71] HORNUNG A, WURM K M, BENNEWITZ M, et al. OctoMap: An Efficient Probabilistic 3DMapping Framework Based on Octrees[J/OL]. Autonomous Robots, 2013. https://octomap.github.io. DOI: 10.1007/s10514-012-9321-0.
[72] RUSU R B, COUSINS S. 3D is Here: Point Cloud Library (PCL)[C]//IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2011: 1-4.
[73] MIKI T, LEE J, HWANGBO J, et al. Learning Robust Perceptive Locomotion for Quadrupedal Robots in the Wild[J]. Science Robotics, 2022, 7(62): eabk2822.