[1] HADDADIN S, JOHANNSMEIER L, DÍAZ LEDEZMA F. Tactile robots as a central embodiment of the tactile internet[J/OL]. Proceedings of the IEEE, 2019, 107(2): 471-487. DOI: 10.1109/JPROC.2018.2879870.
[2] ZHANG J, YAO H, MO J, et al. Finger-inspired rigid-soft hybrid tactile sensor with superior sensitivity at high frequency[J]. Nature Communications, 2022, 13(1): 5076.
[3] LASCHI C, MAZZOLAI B, CIANCHETTI M. Soft robotics: Technologies and systems pushing the boundaries of robot abilities[J]. Science Robotics, 2016, 1(1): eaah3690.
[4] GALLOWAY K C, BECKER K P, PHILLIPS B, et al. Soft robotic grippers for biological sampling on deep reefs[J]. Soft Robotics, 2016, 3(1): 23-33.
[5] SUBAD R A S I, CROSS L B, PARK K. Soft robotic hands and tactile sensors for underwater robotics[J]. Applied Mechanics, 2021, 2(2): 356-382.
[6] SANTINA C D, KATZSCHMANN R K, BICCHI A, et al. Model-based dynamic feedback control of a planar soft robot: trajectory tracking and interaction with the environment[J/OL]. The International Journal of Robotics Research, 2020, 39(4): 490-513. DOI: 10.1177/0278364919897292.
[7] GONG Z, FANG X, CHEN X, et al. A soft manipulator for efficient delicate grasping in shallow water: Modeling, control, and real-world experiments[J]. The International Journal of Robotics Research, 2021, 40(1): 449-469.
[8] RENDA F, BOYER F, DIAS J, et al. Discrete Cosserat approach for multisection soft manipulator dynamics[J/OL]. IEEE Transactions on Robotics, 2018, 34(6): 1518-1533. DOI: 10.1109/TRO.2018.2868815.
[9] LI H, XUN L, ZHENG G. Piecewise linear strain Cosserat model for soft slender manipulator[J/OL]. IEEE Transactions on Robotics, 2023, 39(3): 2342-2359. DOI: 10.1109/TRO.2023.3236942.
[10] FAURE F, DURIEZ C, DELINGETTE H, et al. Sofa: A multi-model framework for interactive physical simulation[J]. Soft tissue biomechanical modeling for computer assisted surgery, 2012: 283-321.
[11] NAVARRO S E, NAGELS S, ALAGI H, et al. A model-based sensor fusion approach for force and shape estimation in soft robotics[J/OL]. IEEE Robotics and Automation Letters, 2020, 5(4): 5621-5628. DOI: 10.1109/LRA.2020.3008120.
[12] WU K, ZHENG G, ZHANG J. FEM-based trajectory tracking control of a soft trunk robot[J]. Robotics and Autonomous Systems, 2022, 150: 103961.
[13] ZHANG Z, BIEZE T M, DEQUIDT J, et al. Visual servoing control of soft robots based on finite element model[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 2895-2901.
[14] FANG G, MATTE C D, SCHARFF R B, et al. Kinematics of soft robots by geometric computing[J]. IEEE Transactions on Robotics, 2020, 36(4): 1272-1286.
[15] DELLA SANTINA C, DURIEZ C, RUS D. Model-based control of soft robots: A survey of the state of the art and open challenges[J/OL]. IEEE Control Systems Magazine, 2023, 43(3): 30-65. DOI: 10.1109/MCS.2023.3253419.
[16] LIU J P, WANG J B, ZHAO W, et al. Design and modeling of a multifunctional soft manipulator[J]. Journal of Mechanical Engineering, 2022, 58(9): 9. (in Chinese)
[17] MEI D, ZHAO X, TANG G Q, et al. Research progress on modeling and control technologies of soft robots[J]. Robot, 2024, 46(2): 234. (in Chinese)
[18] ZHANG Y, ZHOU X, ZHANG N, et al. Ultrafast piezocapacitive soft pressure sensors with over 10 kHz bandwidth via bonded microstructured interfaces[J]. Nature Communications, 2024, 15(1): 3048.
[19] SUNDARAM S, KELLNHOFER P, LI Y, et al. Learning the signatures of the human grasp using a scalable tactile glove[J/OL]. Nature, 2019, 569(7758). DOI: 10.1038/s41586-019-1234-z.
[20] YAN Y, HU Z, YANG Z, et al. Soft magnetic skin for super-resolution tactile sensing with force self-decoupling[J]. Science Robotics, 2021, 6(51): eabc8801.
[21] ZHANG S, CHEN Z, GAO Y, et al. Hardware technology of vision-based tactile sensor: A review[J/OL]. IEEE Sensors Journal, 2022, 22(22): 21410-21427. DOI: 10.1109/JSEN.2022.3210210.
[22] YAMAGUCHI A, ATKESON C G. Recent progress in tactile sensing and sensors for robotic manipulation: can we turn tactile sensing into vision?[J]. Advanced Robotics, 2019, 33(14): 661-673.
[23] YUAN W, DONG S, ADELSON E H. Gelsight: High-resolution robot tactile sensors for estimating geometry and force[J]. Sensors, 2017, 17(12): 2762.
[24] LIU S Q, ADELSON E H. Gelsight fin ray: Incorporating tactile sensing into a soft compliant robotic gripper[C/OL]//2022 IEEE 5th International Conference on Soft Robotics (RoboSoft). 2022: 925-931. DOI: 10.1109/RoboSoft54090.2022.9762175.
[25] LIU S Q, MA Y, ADELSON E H. Gelsight baby fin ray: A compact, compliant, flexible finger with high-resolution tactile sensing[C/OL]//2023 IEEE International Conference on Soft Robotics (RoboSoft). 2023: 1-8. DOI: 10.1109/RoboSoft55895.2023.10122078.
[26] LIU S Q, YAÑEZ L Z, ADELSON E H. Gelsight endoflex: A soft endoskeleton hand with continuous high-resolution tactile sensing[C/OL]//2023 IEEE International Conference on Soft Robotics (RoboSoft). 2023: 1-6. DOI: 10.1109/RoboSoft55895.2023.10122053.
[27] TIPPUR M H, ADELSON E H. Gelsight360: An omnidirectional camera-based tactile sensor for dexterous robotic manipulation[C/OL]//2023 IEEE International Conference on Soft Robotics (RoboSoft). 2023: 1-8. DOI: 10.1109/RoboSoft55895.2023.10122097.
[28] ZHAO J, ADELSON E H. Gelsight svelte: A human finger-shaped single-camera tactile robot finger with large sensing coverage and proprioceptive sensing[C/OL]//2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2023: 8979-8984. DOI: 10.1109/IROS55552.2023.10341646.
[29] LAMBETA M, CHOU P W, TIAN S, et al. Digit: A novel design for a low-cost compact high-resolution tactile sensor with application to in-hand manipulation[J/OL]. IEEE Robotics and Automation Letters, 2020, 5(3): 3838-3845. DOI: 10.1109/LRA.2020.2977257.
[30] CUI S, WANG R, HU J, et al. In-hand object localization using a novel high-resolution visuotactile sensor[J/OL]. IEEE Transactions on Industrial Electronics, 2022, 69(6): 6015-6025. DOI: 10.1109/TIE.2021.3090697.
[31] ALSPACH A, HASHIMOTO K, KUPPUSWAMY N, et al. Soft-bubble: A highly compliant dense geometry tactile sensor for robot manipulation[C/OL]//2019 2nd IEEE International Conference on Soft Robotics (RoboSoft). 2019: 597-604. DOI: 10.1109/ROBOSOFT.2019.8722713.
[32] FUNK N, HELMUT E, CHALVATZAKI G, et al. Evetac: An event-based optical tactile sensor for robotic manipulation[A]. 2023. arXiv: 2312.01236.
[33] KUPPUSWAMY N, ALSPACH A, UTTAMCHANDANI A, et al. Soft-bubble grippers for robust and perceptive manipulation[C/OL]//2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2020: 9917-9924. DOI: 10.1109/IROS45743.2020.9341534.
[34] WARD-CHERRIER B, PESTELL N, CRAMPHORN L, et al. The tactip family: Soft optical tactile sensors with 3d-printed biomimetic morphologies[J]. Soft Robotics, 2018, 5(2): 216-227.
[35] PADMANABHA A, EBERT F, TIAN S, et al. Omnitact: A multi-directional high-resolution touch sensor[C/OL]//2020 IEEE International Conference on Robotics and Automation (ICRA). 2020: 618-624. DOI: 10.1109/ICRA40945.2020.9196712.
[36] DO W K, KENNEDY M. Densetact: Optical tactile sensor for dense shape reconstruction[C]//2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022: 6188-6194.
[37] SUN H, KUCHENBECKER K J, MARTIUS G. A soft thumb-sized vision-based sensor with accurate all-round force perception[J]. Nature Machine Intelligence, 2022, 4(2): 135-145.
[38] LEPORA N F. Soft biomimetic optical tactile sensing with the tactip: A review[J]. IEEE Sensors Journal, 2021, 21(19): 21131-21143.
[39] KAMIYAMA K, KAJIMOTO H, INAMI M, et al. A vision-based tactile sensor[C]//International Conference on Artificial Reality and Telexistence. 2001: 127-134.
[40] GUO F, ZHANG C, YAN Y, et al. Measurement of three-dimensional deformation and load using vision-based tactile sensor[C]//2016 IEEE 25th International Symposium on Industrial Electronics (ISIE). IEEE, 2016: 1252-1257.
[41] YAMAGUCHI A, ATKESON C G. Implementing tactile behaviors using fingervision[C]//2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids). IEEE, 2017: 241-248.
[42] MA D, DONLON E, DONG S, et al. Dense tactile force estimation using gelslim and inverse FEM[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 5418-5424.
[43] BAI H, LI S, BARREIROS J, et al. Stretchable distributed fiber-optic sensors[J]. Science, 2020, 370(6518): 848-852.
[44] TAPIA J, KNOOP E, MUTNÝ M, et al. Makesense: Automated sensor design for proprioceptive soft robots[J]. Soft Robotics, 2020, 7(3): 332-345.
[45] WANG H, ZHANG R, CHEN W, et al. Shape detection algorithm for soft manipulator based on fiber Bragg gratings[J]. IEEE/ASME Transactions on Mechatronics, 2016, 21(6): 2977-2982.
[46] XU W, ZHANG H, YUAN H, et al. A compliant adaptive gripper and its intrinsic force sensing method[J]. IEEE Transactions on Robotics, 2021, 37(5): 1584-1603.
[47] FARIS O, MUTHUSAMY R, RENDA F, et al. Proprioception and exteroception of a soft robotic finger using neuromorphic vision-based sensing[J]. Soft Robotics, 2023, 10(3): 467-481.
[48] SHE Y, LIU S Q, YU P, et al. Exoskeleton-covered soft finger with vision-based proprioception and tactile sensing[C]//2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020: 10075-10081.
[49] CUI S W, WANG S, HU J Y, et al. A survey of visuo-tactile sensing technologies for robotic manipulation tasks[J]. Chinese Journal of Intelligent Science and Technology, 2022(004-002). (in Chinese)
[50] GOLDFEDER C, ALLEN P K, LACKNER C, et al. Grasp planning via decomposition trees[C]//Proceedings 2007 IEEE International Conference on Robotics and Automation. IEEE, 2007: 4679-4684.
[51] WETTELS N, SANTOS V J, JOHANSSON R S, et al. Biomimetic tactile sensor array[J]. Advanced Robotics, 2008, 22(8): 829-849.
[52] FACCIO M, BOTTIN M, ROSATI G. Collaborative and traditional robotic assembly: a comparison model[J]. The International Journal of Advanced Manufacturing Technology, 2019, 102: 1355-1372.
[53] DE CONINCK E, VERBELEN T, VAN MOLLE P, et al. Learning robots to grasp by demonstration[J]. Robotics and Autonomous Systems, 2020, 127: 103474.
[54] BEKIROGLU Y, HUEBNER K, KRAGIC D. Integrating grasp planning with online stability assessment using tactile sensing[C]//2011 IEEE International Conference on Robotics and Automation. 2011: 4750-4755.
[55] LYNCH P, CULLINAN M F, MCGINN C. Adaptive grasping of moving objects through tactile sensing[J]. Sensors, 2021, 21(24): 8339.
[56] KOENIG A, LIU Z, JANSON L, et al. Tactile sensing and its role in learning and deploying robotic grasping controllers[C]//ICRA 2022 Workshop: Reinforcement Learning for Contact-Rich Manipulation. 2022.
[57] NEWBURY R, GU M, CHUMBLEY L, et al. Deep learning approaches to grasp synthesis: A review[J]. IEEE Transactions on Robotics, 2023.
[58] BOHG J, MORALES A, ASFOUR T, et al. Data-driven grasp synthesis—a survey[J]. IEEE Transactions on Robotics, 2013, 30(2): 289-309.
[59] CUI S W, WEI J H, WANG R, et al. Slip detection in robotic grasping based on visual-tactile fusion[J]. Journal of Huazhong University of Science and Technology (Natural Science Edition), 2020, 48(1): 5. (in Chinese)
[60] KABOLI M, YAO K, CHENG G. Tactile-based manipulation of deformable objects with dynamic center of mass[C]//2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids). 2016: 752-757.
[61] VEIGA F, PETERS J, HERMANS T. Grip stabilization of novel objects using slip prediction[J]. IEEE Transactions on Haptics, 2018, 11(4): 531-542.
[62] YUAN W, LI R, SRINIVASAN M A, et al. Measurement of shear and slip with a gelsight tactile sensor[C]//2015 IEEE International Conference on Robotics and Automation (ICRA). 2015: 304-311.
[63] ZAPATA-IMPATA B S, GIL P, TORRES F. Tactile-driven grasp stability and slip prediction[J]. Robotics, 2019, 8(4): 85.
[64] BEKIROGLU Y, SONG D, WANG L, et al. A probabilistic framework for task-oriented grasp stability assessment[C]//2013 IEEE International Conference on Robotics and Automation. IEEE, 2013: 3040-3047.
[65] GARCIA-GARCIA A, ZAPATA-IMPATA B S, ORTS-ESCOLANO S, et al. Tactilegcn: A graph convolutional network for predicting grasp stability with tactile sensors[C]//2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019: 1-8.
[66] LI J, DONG S, ADELSON E. Slip detection with combined tactile and visual information[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 7772-7777.
[67] ZHANG Y, YUAN W, KAN Z, et al. Towards learning to detect and predict contact events on vision-based tactile sensors[C]//Conference on Robot Learning. PMLR, 2020: 1395-1404.
[68] TIAN S, EBERT F, JAYARAMAN D, et al. Manipulation by feel: Touch-based control with deep predictive models[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 818-824.
[69] YAMAGUCHI A, ATKESON C G. Tactile behaviors with the vision-based tactile sensor fingervision[J]. International Journal of Humanoid Robotics, 2019, 16(03): 1940002.
[70] ZHANG Y, KAN Z, YANG Y, et al. Effective estimation of contact force and torque for vision-based tactile sensors with Helmholtz–Hodge decomposition[J]. IEEE Robotics and Automation Letters, 2019, 4(4): 4094-4101.
[71] MARTINEZ-HERNANDEZ U, DODD T J, EVANS M H, et al. Active sensorimotor control for tactile exploration[J/OL]. Robotics and Autonomous Systems, 2017, 87: 15-27. DOI: 10.1016/j.robot.2016.09.014.
[72] LUO S, BIMBO J, DAHIYA R, et al. Robotic tactile perception of object properties: A review[J]. Mechatronics, 2017, 48: 54-67.
[73] YAN F, WANG D, HE H. Robotic understanding of spatial relationships using neural-logic learning[C/OL]//2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2020: 8358-8365. DOI: 10.1109/IROS45743.2020.9340917.
[74] VISERAS A, SHUTIN D, MERINO L. Robotic active information gathering for spatial field reconstruction with rapidly-exploring random trees and online learning of Gaussian processes[J]. Sensors, 2019, 19(5).
[75] AHL A S. The role of vibrissae in behavior: a status review[J]. Veterinary Research Communications, 1986, 10(1): 245-268.
[76] ADIBI M. Whisker-mediated touch system in rodents: from neuron to behavior[J]. Frontiers in Systems Neuroscience, 2019, 13: 40.
[77] HUET L A, RUDNICKI J W, HARTMANN M J. Tactile sensing with whiskers of various shapes: Determining the three-dimensional location of object contact based on mechanical signals at the whisker base[J]. Soft Robotics, 2017, 4(2): 88-102.
[78] WANG Z, LO F P W, HUANG Y, et al. Tactile perception: a biomimetic whisker-based method for clinical gastrointestinal diseases screening[J]. npj Robotics, 2023, 1(1): 3.
[79] STAROSTIN E, GOSS V, VAN DER HEIJDEN G. Whisker sensing by force and moment measurements at the whisker base[J]. Soft Robotics, 2023, 10(2): 326-335.
[80] SAYEGH M A, DARAGHMA H, MEKID S, et al. Review of recent bio-inspired design and manufacturing of whisker tactile sensors[J]. Sensors, 2022, 22(7): 2705.
[81] XIAO C, XU S, WU W, et al. Active multiobject exploration and recognition via tactile whiskers[J/OL]. IEEE Transactions on Robotics, 2022, 38(6): 3479-3497. DOI: 10.1109/TRO.2022.3182487.
[82] HU J Y, CUI S W, ZHANG C F, et al. A 3D object edge reconstruction method based on tactile perception and servoing[J]. Chinese Journal of Intelligent Science and Technology, 2022(004-002). (in Chinese)
[83] JAMALI N, CILIBERTO C, ROSASCO L, et al. Active perception: Building objects' models using tactile exploration[C]//2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids). IEEE, 2016: 179-185.
[84] SURESH S, SI Z, MANGELSON J G, et al. Shapemap 3-d: Efficient shape mapping through dense touch and vision[C]//2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022: 7073-7080.
[85] MAO H, XIAO J. Object shape estimation through touch-based continuum manipulation[C]//Robotics Research: The 18th International Symposium ISRR. Springer, 2020: 573-588.
[86] XAVIER M S, FLEMING A J, YONG Y K. Finite element modeling of soft fluidic actuators: Overview and recent developments[J]. Advanced Intelligent Systems, 2021, 3(2): 2000187.
[87] DESHMUKH M, BHOSLE U. A survey of image registration[J]. International Journal of Image Processing (IJIP), 2011, 5(3): 245.
[88] NGO D T, ÖSTLUND J, FUA P. Template-based monocular 3d shape recovery using Laplacian meshes[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 38(1): 172-187.
[89] TRETSCHK E, KAIRANDA N, B R M, et al. State of the art in dense monocular non-rigid 3d reconstruction[C]//Computer Graphics Forum: Vol. 42. Wiley Online Library, 2023: 485-520.
[90] SOTIRAS A, DAVATZIKOS C, PARAGIOS N. Deformable medical image registration: A survey[J]. IEEE Transactions on Medical Imaging, 2013, 32(7): 1153-1190.
[91] MENGALDO G, RENDA F, BRUNTON S L, et al. A concise guide to modelling the physics of embodied intelligence in soft robotics[J]. Nature Reviews Physics, 2022, 4(9): 595-610.
[92] NOCEDAL J, WRIGHT S J. Numerical optimization[M]. 2nd ed. Springer, 2006.
[93] WAN F, LIU X, GUO N, et al. Visual learning towards soft robot force control using a 3d metamaterial with differential stiffness[C]//Conference on Robot Learning. PMLR, 2022: 1269-1278.
[94] HAO J, ZHANG Z, WANG S, et al. 2d shape estimation of a pneumatic-driven soft finger with a large bending angle based on learning from two sensing modalities[J]. Advanced Intelligent Systems, 2023, 5(10): 2200324.
[95] BAAIJ T, HOLKENBORG M K, STÖLZLE M, et al. Learning 3d shape proprioception for continuum soft robots with multiple magnetic sensors[J]. Soft Matter, 2023, 19(1): 44-56.
[96] ZHANG Z, HU Y, YU G, et al. DeepTag: A general framework for fiducial marker design and detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(3): 2931-2944.
[97] MARCHAND E, UCHIYAMA H, SPINDLER F. Pose estimation for augmented reality: a hands-on survey[J]. IEEE Transactions on Visualization and Computer Graphics, 2015, 22(12): 2633-2651.
[98] HARTLEY R, ZISSERMAN A. Multiple view geometry in computer vision[M]. Cambridge University Press, 2003.
[99] GEUZAINE C, REMACLE J F. Gmsh: A 3-d finite element mesh generator with built-in pre- and post-processing facilities[J]. International Journal for Numerical Methods in Engineering, 2009, 79(11): 1309-1331.
[100] FABRI A, PION S. Cgal: The computational geometry algorithms library[C]//Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. 2009: 538-539.
[101] RABINOVICH M, PORANNE R, PANOZZO D, et al. Scalable locally injective mappings[J/OL]. ACM Transactions on Graphics, 2017, 36(2). DOI: 10.1145/2983621.
[102] ARMIJO L. Minimization of functions having Lipschitz continuous first partial derivatives[J]. Pacific Journal of Mathematics, 1966, 16(1): 1-3.
[103] SHIH B, SHAH D, LI J, et al. Electronic skins and machine learning for intelligent soft robots[J]. Science Robotics, 2020, 5(41): eaaz9239.
[104] LIU F, DESWAL S, CHRISTOU A, et al. Neuro-inspired electronic skin for robots[J]. Science Robotics, 2022, 7(67): eabl7344.
[105] BAI N, XUE Y, CHEN S, et al. A robotic sensory system with high spatiotemporal resolution for texture recognition[J]. Nature Communications, 2023, 14(1): 7121.
[106] SATO K, KAMIYAMA K, KAWAKAMI N, et al. Finger-shaped gelforce: sensor for measuring surface traction fields for robotic hand[J]. IEEE Transactions on Haptics, 2009, 3(1): 37-47.
[107] ZHANG G, DU Y, YU H, et al. Deltact: A vision-based tactile sensor using a dense color pattern[J]. IEEE Robotics and Automation Letters, 2022, 7(4): 10778-10785.
[108] BRADSKI G. The OpenCV Library[J]. Dr. Dobb’s Journal of Software Tools, 2000.
[109] BONET J, WOOD R D. Nonlinear continuum mechanics for finite element analysis[M]. 2nd ed. Cambridge University Press, 2008.
[110] BONNET M, CONSTANTINESCU A. Inverse problems in elasticity[J]. Inverse Problems, 2005, 21(2): R1.
[111] XU T, LI M, WANG Z, et al. A method for determining elastic constants and boundary conditions of three-dimensional hyperelastic materials[J]. International Journal of Mechanical Sciences, 2022, 225: 107329.
[112] XU K, DARVE E. Physics constrained learning for data-driven inverse modeling from sparse observations[J]. Journal of Computational Physics, 2022, 453: 110938.
[113] HUANG D Z, XU K, FARHAT C, et al. Learning constitutive relations from indirect observations using deep neural networks[J]. Journal of Computational Physics, 2020, 416: 109491.
[114] GIVOLI D. A tutorial on the adjoint method for inverse problems[J/OL]. Computer Methods in Applied Mechanics and Engineering, 2021, 380: 113810. DOI: 10.1016/j.cma.2021.113810.
[115] LIU J, WANG Z. Non-commutative discretize-then-optimize algorithms for elliptic PDE-constrained optimal control problems[J]. Journal of Computational and Applied Mathematics, 2019, 362: 596-613.
[116] MESTDAGH G, COTIN S. An optimal control problem for elastic registration and force estimation in augmented surgery[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2022: 74-83.
[117] SIN F S, SCHROEDER D, BARBIČ J. Vega: Non-linear FEM deformable object simulator[J]. Computer Graphics Forum, 2013, 32(1): 36-48.
[118] BOLLAPRAGADA R, NOCEDAL J, MUDIGERE D, et al. A progressive batching L-BFGS method for machine learning[C]//International Conference on Machine Learning. PMLR, 2018: 620-629.
[119] COTIN S, MESTDAGH G, PRIVAT Y. Organ registration from partial surface data in augmented surgery from an optimal control perspective[J/OL]. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2024, 480(2281): 20230197. DOI: 10.1098/rspa.2023.0197.
[120] HE Y, XIANG S, KANG C, et al. Cross-modal retrieval via deep and bidirectional representation learning[J/OL]. IEEE Transactions on Multimedia, 2016, 18(7): 1363-1377. DOI: 10.1109/TMM.2016.2558463.
[121] GUO W, WANG J, WANG S. Deep multimodal representation learning: A survey[J/OL]. IEEE Access, 2019, 7: 63373-63394. DOI: 10.1109/ACCESS.2019.2916887.
[122] GING S, ZOLFAGHARI M, PIRSIAVASH H, et al. Coot: Cooperative hierarchical transformer for video-text representation learning[C]//LAROCHELLE H, RANZATO M, HADSELL R, et al. Advances in Neural Information Processing Systems: Vol. 33. Curran Associates, Inc., 2020: 22605-22618.
[123] PATRICK M, HUANG P Y, ASANO Y, et al. Support-set bottlenecks for video-text representation learning[C]//International Conference on Learning Representations. 2021.
[124] LIN C C, LIN K, WANG L, et al. Cross-modal representation learning for zero-shot action recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2022: 19978-19988.
[125] ANDONIAN A, CHEN S, HAMID R. Robust cross-modal representation learning with progressive self-distillation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2022: 16430-16441.
[126] ZHANG X, ZHANG F, XU C. Explicit cross-modal representation learning for visual commonsense reasoning[J/OL]. IEEE Transactions on Multimedia, 2022, 24: 2986-2997. DOI: 10.1109/TMM.2021.3091882.
[127] SATHIAN K, LACEY S. Cross-modal interactions of the tactile system[J/OL]. Current Directions in Psychological Science, 2022, 31(5): 411-418. DOI: 10.1177/09637214221101877.
[128] ZHAO N, JIAO J, XIE W, et al. Cali-nce: Boosting cross-modal video representation learning with calibrated alignment[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. 2023: 6317-6327.
[129] WANG S, ZHU L, SHI L, et al. A survey of full-cycle cross-modal retrieval: From a representation learning perspective[J/OL]. Applied Sciences, 2023, 13(7): 4571. DOI: 10.3390/app13074571.
[130] LEE J T, BOLLEGALA D, LUO S. "Touching to see" and "seeing to feel": Robotic cross-modal sensory data generation for visual-tactile perception[C/OL]//2019 International Conference on Robotics and Automation (ICRA). 2019: 4276-4282. DOI: 10.1109/ICRA.2019.8793763.
[131] CAI S, ZHU K, BAN Y, et al. Visual-tactile cross-modal data generation using residue-fusion GAN with feature-matching and perceptual losses[J/OL]. IEEE Robotics and Automation Letters, 2021, 6(4): 7525-7532. DOI: 10.1109/LRA.2021.3095925.
[132] LUO S, LEPORA N F, MARTINEZ-HERNANDEZ U, et al. Vitac: Integrating vision and touch for multimodal and cross-modal perception[J]. Frontiers in Robotics and AI, 2021, 8: 697601.
[133] BISHOP C M, NASRABADI N M. Pattern recognition and machine learning: Vol. 4[M]. Springer, 2006.
[134] GOODFELLOW I, BENGIO Y, COURVILLE A. Deep learning[M]. MIT Press, 2016.
[135] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[J]. Advances in Neural Information Processing Systems, 2014, 27.
[136] KINGMA D P, WELLING M. Auto-encoding variational Bayes[C]//International Conference on Learning Representations (ICLR), Banff, AB, Canada, April 14-16. 2014.
[137] CRESWELL A, WHITE T, DUMOULIN V, et al. Generative adversarial networks: An overview[J]. IEEE Signal Processing Magazine, 2018, 35(1): 53-65.
[138] LEDIG C, THEIS L, HUSZÁR F, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4681-4690.
[139] LU Y, WU S, TAI Y W, et al. Image generation from sketch constraint using contextual GAN[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 205-220.
[140] FRID-ADAR M, DIAMANT I, KLANG E, et al. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification[J]. Neurocomputing, 2018, 321: 321-331.
[141] ZHANG J, LI K, LAI Y K, et al. Pise: Person image synthesis and editing with decoupled GAN[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 7982-7990.
[142] CHLAP P, MIN H, VANDENBERG N, et al. A review of medical image data augmentation techniques for deep learning applications[J]. Journal of Medical Imaging and Radiation Oncology, 2021, 65(5): 545-563.
[143] KINGMA D P, MOHAMED S, REZENDE D J, et al. Semi-supervised learning with deep generative models[C]//Advances in Neural Information Processing Systems (NIPS). 2014: 3581-3589.
[144] REZENDE D J, MOHAMED S, WIERSTRA D. Stochastic backpropagation and approximate inference in deep generative models[C]//International Conference on Machine Learning. PMLR, 2014: 1278-1286.
[145] HIGGINS I, MATTHEY L, PAL A, et al. beta-vae: Learning basic visual concepts with a constrained variational framework[C]//ICLR. 2017.
[146] TAKAHASHI H, IWATA T, YAMANAKA Y, et al. Variational autoencoder with implicit optimal priors[C]//Proceedings of the AAAI Conference on Artificial Intelligence: Vol. 33. 2019: 5066-5073.
[147] GUO N, HAN X, LIU X, et al. Autoencoding a soft touch to learn grasping from on-land to underwater[J]. Advanced Intelligent Systems, 2024, 6(1): 2300382.
[148] TU J, WANG M, LI W, et al. Electronic skins with multimodal sensing and perception[J]. Soft Science, 2023.
[149] LAWRENCE N. Gaussian process latent variable models for visualisation of high dimensional data[J]. Advances in Neural Information Processing Systems, 2003, 16.
[150] SONI M, DAHIYA R. Soft eskin: distributed touch sensing with harmonized energy and computing[J]. Philosophical Transactions of the Royal Society A, 2020, 378(2164): 20190156.
[151] CHOI S, LEE K, LIM S, et al. Uncertainty-aware learning from demonstration using mixture density networks with sampling-free variance modeling[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 6915-6922.
[152] MAKANSI O, ILG E, CICEK O, et al. Overcoming limitations of mixture density networks: A sampling and fitting framework for multimodal future prediction[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 7144-7153.
[153] ERRICA F, BACCIU D, MICHELI A. Graph mixture density networks[C]//International Conference on Machine Learning. PMLR, 2021: 3025-3035.
[154] AMARI S I. Backpropagation and stochastic gradient descent method[J]. Neurocomputing, 1993, 5(4-5): 185-196.
[155] ZINKEVICH M, WEIMER M, LI L, et al. Parallelized stochastic gradient descent[J]. Advances in Neural Information Processing Systems, 2010, 23.
[156] HARDT M, RECHT B, SINGER Y. Train faster, generalize better: Stability of stochastic gradient descent[C]//International Conference on Machine Learning. PMLR, 2016: 1225-1234.
[157] LOUIZOS C, SHALIT U, MOOIJ J M, et al. Causal effect inference with deep latent-variable models[J]. Advances in Neural Information Processing Systems, 2017, 30.
[158] MATTEI P A, FRELLSEN J. Leveraging the exact likelihood of deep latent variable models[J]. Advances in Neural Information Processing Systems, 2018, 31.
[159] YOUSEF H, BOUKALLEL M, ALTHOEFER K. Tactile sensing for dexterous in-hand manipulation in robotics—a review[J]. Sensors and Actuators A: Physical, 2011, 167(2): 171-187.
[160] JORDAN M I, MITCHELL T M. Machine learning: Trends, perspectives, and prospects[J].Science, 2015, 349(6245): 255-260.
[161] KUBO S. Inverse problems related to the mechanics and fracture of solids and structures[J]. JSME International Journal. Ser. 1, Solid Mechanics, Strength of Materials, 1988, 31(2): 157-166.
[162] HOFMANN B. Regularization for applied inverse and ill-posed problems: a numerical approach: Vol. 85[M]. Springer-Verlag, 2013.
[163] SCARSELLI F, TSOI A C. Universal approximation using feedforward neural networks: A survey of some existing methods, and some new results[J]. Neural Networks, 1998, 11(1): 15-37.
[164] KINGMA D P, BA J. Adam: A method for stochastic optimization[C]//International Conference on Learning Representations (ICLR), San Diego, CA, USA. 2015.
[165] ASPERTI A, TRENTIN M. Balancing reconstruction error and Kullback-Leibler divergence in variational autoencoders[J]. IEEE Access, 2020, 8: 199440-199448.
[166] ZHU X, DAMARLA S K, HAO K, et al. Parallel interaction spatiotemporal constrained variational autoencoder for soft sensor modeling[J]. IEEE Transactions on Industrial Informatics, 2022, 18(8): 5190-5198.
[167] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.
[168] RYBKIN O, DANIILIDIS K, LEVINE S. Simple and effective VAE training with calibrated decoders[C]//International Conference on Machine Learning. PMLR, 2021: 9179-9189.
[169] ZHANG Y, YANG Q. An overview of multi-task learning[J]. National Science Review, 2018, 5(1): 30-43.
[170] ACHILLE A, SOATTO S. Emergence of invariance and disentanglement in deep representations[J]. Journal of Machine Learning Research, 2018, 19(50): 1-34.
[171] TIPPING M E, BISHOP C M. Probabilistic principal component analysis[J]. Journal of the Royal Statistical Society Series B: Statistical Methodology, 1999, 61(3): 611-622.
[172] HUBER M, RICKERT M, KNOLL A, et al. Human-robot interaction in handing-over tasks[C]//RO-MAN 2008 - The 17th IEEE International Symposium on Robot and Human Interactive Communication. 2008: 107-112.
[173] CHAN W P, PARKER C A C, VAN DER LOOS H F M, et al. Grip forces and load forces in handovers: Implications for designing human-robot handover controllers[C]//2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2012: 9-16.
[174] COSTANZO M, DE MARIA G, NATALE C. Handover control for human-robot and robot-robot collaboration[J]. Frontiers in Robotics and AI, 2021, 8: 672995.
[175] LI G, WONG T W, SHIH B, et al. Bioinspired soft robots for deep-sea exploration[J]. Nature Communications, 2023, 14(1): 7097.
[176] QU J, XU Y, LI Z, et al. Recent advances on underwater soft robots[J]. Advanced Intelligent Systems, 2024, 6(2): 2300299.
[177] CHENG H D, JIANG X H, SUN Y, et al. Color image segmentation: advances and prospects[J]. Pattern Recognition, 2001, 34(12): 2259-2281.
[178] LYNCH K M, PARK F C. Modern robotics: Mechanics, planning, and control[M]. Cambridge University Press, 2017.
[179] FEATHERSTONE R. Rigid body dynamics algorithms[M]. Springer-Verlag, 2007.
[180] DE SCHUTTER J, DE LAET T, RUTGEERTS J, et al. Constraint-based task specification and estimation for sensor-based robot systems in the presence of geometric uncertainty[J]. The International Journal of Robotics Research, 2007, 26(5): 433-455.
[181] KEEMINK A Q, VAN DER KOOIJ H, STIENEN A H. Admittance control for physical human–robot interaction[J/OL]. The International Journal of Robotics Research, 2018, 37(11): 1421-1444. DOI: 10.1177/0278364918768950.
[182] LEE K K, BUSS M. Force tracking impedance control with variable target stiffness[J]. IFAC Proceedings Volumes, 2008, 41(2): 6751-6756.
[183] LIU Y, CHEN Y, FANG X. A review of turbidity detection based on computer vision[J/OL]. IEEE Access, 2018, 6: 60586-60604. DOI: 10.1109/ACCESS.2018.2875071.
[184] DRAGIEV S, TOUSSAINT M, GIENGER M. Gaussian process implicit surfaces for shape estimation and grasping[C/OL]//2011 IEEE International Conference on Robotics and Automation. 2011: 2845-2850. DOI: 10.1109/ICRA.2011.5980395.
[185] RASMUSSEN C E, WILLIAMS C K I. Gaussian processes for machine learning[M]. The MIT Press, 2005.
[186] THAYANANTHAN A, STENGER B, TORR P, et al. Shape context and chamfer matching in cluttered scenes[C/OL]//2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition: Vol. 1. 2003: I-I. DOI: 10.1109/CVPR.2003.1211346.