Title | Rigid-Soft Interactive Learning for Robotic Manipulation |
Name | |
Name (Pinyin) | YANG Linhan |
Student ID | 11950013 |
Degree Type | Doctoral |
Degree Discipline | Computer Science |
Supervisor | |
Supervisor Affiliation | Department of Mechanical and Energy Engineering |
Thesis Defense Date | 2024-07-30 |
Thesis Submission Date | 2024-08-23 |
Degree-Granting Institution | The University of Hong Kong |
Degree-Granting Location | The University of Hong Kong |
Abstract | Recent years have witnessed significant advancements in robotic manipulation through the adoption of machine learning methods. Unlike domains such as computer vision and natural language processing, robotic manipulation involves complex physical interactions that pose substantial challenges for developing scalable and generalized control policies. In this thesis, we explore the understanding and representation learning of these interactions across various robotic manipulation scenarios. We classify these interactions into two categories: internal interactions between the manipulator (gripper or robot) and the objects, and external interactions between the objects/robots and their external environments. Focusing on internal interactions, we first investigate a grasp prediction task. We vary factors such as gripper stiffness (rigid or soft fingers) and grasp type (power or precision), which implicitly encodes interaction data within our dataset. Our experiments reveal that this configuration greatly improves both training speed and grasping performance. Furthermore, these interactions can be explicitly represented through force and torque data, obtained by equipping the finger surfaces with multi-channel optical fibers. We have developed an interactive grasp policy that utilizes this local interaction data. The proprioceptive capabilities of the fingers enable them to conform to object contact regions, ensuring a stable grasp. We then extend our research to dexterous in-hand manipulation, specifically rotating two spheres within the hand by 180 degrees. During this task, interactions between the objects and the hand are continuously broken and re-formed. We use a hand equipped with four fingers and a tactile sensor array to gather comprehensive interaction data. To effectively represent this data, we introduce TacGNN, a generalized model for tactile information across various shapes.
This model allows us to achieve in-hand manipulation using solely proprioceptive tactile sensing. In our exploration of external interactions between objects/robots and their environments, we begin with a rigid-rigid interaction in a loco-manipulation problem. Our aim is to merge interaction data from both locomotion and manipulation into a unified graph-based framework, encapsulated within the graph representation. A shared control policy is then developed in simulation and transferred directly to real-world applications in a zero-shot manner. Additionally, we investigate rigid-soft interactions through a fabric manipulation task involving deformable objects. We have developed a graph-based, environment-aware representation for fabric that integrates environmental data. This model logically encodes interaction data, enabling each fabric segment to detect and respond to environmental contact. Employing this strategy, we successfully execute a goal-conditioned manipulation task, placing the fabric in a specified configuration within complex scenarios on the first attempt. |
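The graph-based representations described in the abstract (TacGNN for tactile arrays, and the environment-aware fabric graph) share a common idea: sensor or material elements become graph nodes, spatial proximity defines edges, and message passing propagates interaction signals between neighbors. The following is a minimal illustrative sketch of that idea for a tactile taxel array; the node features, adjacency rule, and numbers are hypothetical and are not the thesis implementation.

```python
# Hypothetical sketch: taxel pressures as node features, grid proximity as
# edges, and one round of mean-aggregation message passing. This illustrates
# the general graph-representation idea, not the actual TacGNN architecture.

def build_taxel_graph(pressures, positions, radius=1.0):
    """Connect taxels whose grid positions lie within `radius` of each other."""
    n = len(pressures)
    return [
        (i, j)
        for i in range(n)
        for j in range(n)
        if i != j
        and ((positions[i][0] - positions[j][0]) ** 2
             + (positions[i][1] - positions[j][1]) ** 2) ** 0.5 <= radius
    ]

def message_pass(features, edges):
    """One round of mean aggregation: each node averages itself with neighbors."""
    n = len(features)
    sums = list(features)          # start from each node's own feature
    counts = [1] * n
    for i, j in edges:
        sums[j] += features[i]     # neighbor i sends its feature to node j
        counts[j] += 1
    return [s / c for s, c in zip(sums, counts)]

# A 2x2 taxel patch: one corner pressed hard, the rest touched lightly.
positions = [(0, 0), (0, 1), (1, 0), (1, 1)]
pressures = [1.0, 0.1, 0.1, 0.1]
edges = build_taxel_graph(pressures, positions)
smoothed = message_pass(pressures, edges)
print(smoothed)  # the pressed corner's signal spreads to adjacent taxels
```

A learned model would replace the fixed mean aggregation with trainable transforms (as in a graph convolutional layer), but the node/edge construction step is the part both tactile and fabric representations have in common.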
Keywords | |
Language | English |
Training Category | Joint Training |
Year of Enrollment | 2019 |
Degree Conferral Date | 2024-08 |
References | |
“REPLAB: A Reproducible Low Cost Arm Benchmark for Robotic Learning”. In: 2019 International Conference onRobotics and Automation (ICRA). May 2019, pp. 8691–8697. DOI: 10.1109/ICRA.2019.8794390. [148] L. Yang, X. Han, W. Guo, F. Wan, J. Pan, and C. Song. “Learning-based optoelec tronically innervated tactile finger for rigid-soft interactive grasping”. In: IEEERobotics and Automation Letters 6.2 (2021), pp. 3817–3824. [149] L. Yang, B. Huang, Q. Li, Y.-Y. Tsai, W. W. Lee, C. Song, and J. Pan. “TacGNN:Learning Tactile-Based In-Hand Manipulation With a Blind Robot Using Hier archical Graph Neural Network”. In: IEEE Robotics and Automation Letters 8.6(2023), pp. 3605–3612. [150] L. Yang, F. Wan, H. Wang, X. Liu, Y. Liu, J. Pan, and C. Song. “Rigid-soft inter active learning for robust grasping”. In: IEEE Robotics and Automation Letters 5.2(2020), pp. 1720–1727. [151] Z. Yang, S. Ge, F. Wan, Y. Liu, and C. Song. “Scalable Tactile Sensing for anOmni-adaptive Soft Robot Finger”. In: 2020 3rd IEEE International Conference onSoft Robotics (RoboSoft). 2020, pp. 572–577. DOI: 10.1109/RoboSoft48309.2020.9116026. [152] H. Yin, A. Varava, and D. Kragic. “Modeling, learning, perception, and controlmethods for deformable object manipulation”. In: Science Robotics 6.54 (2021),eabd8803. [153] Y. Yu and E. Miyako. “Recent advances in liquid metal manipulation towardsoft robotics and biotechnologies”. In: Chemistry–A European Journal 24.38 (2018),pp. 9456–9462. [154] S. Yuan, A. D. Epps, J. B. Nowak, and J. K. Salisbury. “Design of a Roller-BasedDexterous Hand for Object Grasping and Within-Hand Manipulation”. In: 2020IEEE International Conference on Robotics and Automation. 2020. [155] W. Yuan, S. Dong, and E. H. Adelson. “Gelsight: High-resolution robot tactilesensors for estimating geometry and force”. In: Sensors 17.12 (2017), p. 2762. [156] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong,I. Krasin, D. Duong, V. Sindhwani, and J. Lee. 
“Transporter Networks: Rear ranging the Visual World for Robotic Manipulation”. In: Proceedings of the 2020Conference on Robot Learning. Ed. by J. Kober, F. Ramos, and C. Tomlin. Vol. 155.Proceedings of Machine Learning Research. PMLR, 16–18 Nov 2021, pp. 726–747. [157] F. Zhang and Y. Demiris. “Learning garment manipulation policies toward robot assisted dressing”. In: Science Robotics 7.65 (2022), eabm6010. [158] X. Zhang, Z. Jiang, H. Zhang, and Q. Wei. “Vision-based pose estimation fortextureless space objects by contour points matching”. In: IEEE Transactions onAerospace and Electronic Systems 54.5 (2018), pp. 2342–2355. [159] H. Zhao, K. O’Brien, S. Li, and R. F. Shepherd. “Optoelectronically innervatedsoft prosthetic hand via stretchable optical waveguides”. In: Science robotics 1.1(2016). [160] H. Zhen, X. Qiu, P. Chen, J. Yang, X. Yan, Y. Du, Y. Hong, and C. Gan. “3D-VLA:A 3D Vision-Language-Action Generative World Model”. In: arXiv preprint arXiv:2403.09631(2024). [161] P. Zhou, P. Zheng, J. Qi, C. Li, H.-Y. Lee, A. Duan, L. Lu, Z. Li, L. Hu, andD. Navarro-Alarcon. “Reactive human–robot collaborative manipulation of de formable linear objects using a new topological latent control model”. In: Roboticsand Computer-Integrated Manufacturing 88 (2024), p. 102727. [162] P. Zhou, J. Zhu, S. Huo, and D. Navarro-Alarcon. “LaSeSOM: A Latent andSemantic Representation Framework for Soft Object Manipulation”. In: IEEERobotics and Automation Letters 6.3 (2021), pp. 5381–5388. [163] Q. Zhu, C. Huang, R. Jiao, S. Lan, H. Liang, X. Liu, Y. Wang, Z. Wang, and S. Xu.“Safety-Assured Design and Adaptation of Learning-Enabled Autonomous Sys tems”. In: Proceedings of the 26th Asia and South Pacific Design Automation Confer ence. 2021, pp. 753–760. ISBN: 9781450379991. DOI: 10.1145/3394885.3431623. [164] W. Zhu, C. Lu, Q. Zheng, Z. Fang, H. Che, K. Tang, M. Zhu, S. Liu, and Z. Wang.“A soft-rigid hybrid gripper with lateral compliance and dexterous in-hand ma nipulation”. 
In: IEEE/ASME Transactions on Mechatronics 28.1 (2022), pp. 104–115. [165] T. G. Zimmerman. Optical flex sensor. US Patent 4,542,291. Sept. 1985. [166] S. Zimmermann, R. Poranne, and S. Coros. “Go Fetch! - Dynamic Grasps us ing Boston Dynamics Spot with External Robotic Arm”. In: 2021 IEEE Interna tional Conference on Robotics and Automation (ICRA). 2021, pp. 4488–4494. DOI:10.1109/ICRA48506.2021.9561835. |
Source repository | Manually submitted |
Output type | Doctoral dissertation |
Identifier | http://sustech.caswiz.com/handle/2SGJ60CL/804719 |
Collection | College of Engineering, Department of Mechanical and Energy Engineering |
Recommended citation (GB/T 7714) | Yang LH. Rigid-Soft Interactive Learning for Robotic Manipulation[D]. The University of Hong Kong, 2024. |
Files in this item |
File name/size | Document type | Version | Access | License |
11950013-杨林瀚-机械与能源工程 (18342 KB) | -- | -- | Restricted access | -- |