[1] 黄群慧, 贺俊. 中国制造业的核心能力、功能定位与发展战略——兼评《中国制造 2025》[Z]. 2015: 13.
[2] 毕胜. 国内外工业机器人的发展现状[J]. 机械工程师, 2008(7): 4.
[3] NAGARAJAN P, SARAVANA PERUMAAL S, YOGAMEENA B. Vision Based Pose Estimation of Multiple Peg-in-Hole for Robotic Assembly[M]//Computer Vision, Graphics, and Image Processing. Cham: Springer International Publishing, 2017: 50-62.
[4] LIU S, LIU C, LIU Z, et al. Laser tracker-based control for peg-in-hole assembly robot[J/OL]. The 4th Annual IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems, 2014: 569-573. https://api.semanticscholar.org/CorpusID:18849643.
[5] LIU Z, XIE Y, XU J, et al. Laser tracker based robotic assembly system for large scale peg-hole parts[M/OL]//The 4th Annual IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems. 2014: 574-578. DOI: 10.1109/CYBER.2014.6917527.
[6] PARK H, PARK J, LEE D H, et al. Compliance-Based Robotic Peg-in-Hole Assembly Strategy Without Force Feedback[J/OL]. IEEE Transactions on Industrial Electronics, 2017, 64(8): 6299-6309. DOI: 10.1109/TIE.2017.2682002.
[7] PARK H, KIM P K, BAE J H, et al. Dual arm peg-in-hole assembly with a programmed compliant system[M/OL]//2014 11th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI). 2014: 431-433. DOI: 10.1109/URAI.2014.7057477.
[8] CHANG W C. Robotic assembly of smartphone back shells with eye-in-hand visual servoing[J/OL]. Robotics and Computer-Integrated Manufacturing, 2018, 50: 102-113. https://www.sciencedirect.com/science/article/pii/S073658451630271X. DOI: 10.1016/j.rcim.2017.09.010.
[9] WANG R, LIANG C, PAN D, et al. Research on a Visual Servo Method of a Manipulator Based on Velocity Feedforward[J/OL]. Space: Science & Technology, 2021, 2021. DOI: 10.34133/2021/9763179.
[10] NIU X, PU J, ZHANG C. An Improved SIFT Algorithm for Monocular Vision Positioning[J/OL]. IOP Conference Series: Materials Science and Engineering, 2019, 612(3): 032124. https://dx.doi.org/10.1088/1757-899X/612/3/032124.
[11] DING G, LIU Y, ZANG X, et al. A Task-Learning Strategy for Robotic Assembly Tasks from Human Demonstrations[J/OL]. Sensors, 2020, 20(19). https://www.mdpi.com/1424-8220/20/19/5505.
[12] KANG H, ZANG Y, WANG X, et al. Uncertainty-Driven Spiral Trajectory for Robotic Peg-in-Hole Assembly[J/OL]. IEEE Robotics and Automation Letters, 2022, 7(3): 6661-6668. DOI: 10.1109/LRA.2022.3176718.
[13] GU J, ZHU M, CAO L, et al. Improved Uncalibrated Visual Servo Strategy for Hyper-Redundant Manipulators in On-Orbit Automatic Assembly[J/OL]. Applied Sciences, 2020, 10(19). https://www.mdpi.com/2076-3417/10/19/6968.
[14] MOL N, SMISEK J, BABUŠKA R, et al. Nested compliant admittance control for robotic mechanical assembly of misaligned and tightly toleranced parts[M/OL]//2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC). 2016: 002717-002722. DOI: 10.1109/SMC.2016.7844650.
[15] LISTMANN K D, HANS F, WAHRBURG A. Compliance Control for Robust Assembly with Redundant Manipulators[J/OL]. IFAC-PapersOnLine, 2020, 53(2): 8538-8545. https://www.sciencedirect.com/science/article/pii/S2405896320318231. DOI: 10.1016/j.ifacol.2020.12.1413.
[16] CAO H, CHEN X, HE Y, et al. Dynamic Adaptive Hybrid Impedance Control for Dynamic Contact Force Tracking in Uncertain Environments[J/OL]. IEEE Access, 2019, 7: 83162-83174. DOI: 10.1109/ACCESS.2019.2924696.
[17] XU J, HOU Z, LIU Z, et al. Compare contact model-based control and contact model-free learning: A survey of robotic peg-in-hole assembly strategies[A]. 2019.
[18] JASIM I F, PLAPPER P W, VOOS H. Contact-state modelling in force-controlled robotic peg-in-hole assembly processes of flexible objects using optimised Gaussian mixtures[J/OL]. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 2017, 231(8): 1448-1463. DOI: 10.1177/0954405415598945.
[19] HOU Z, FEI J, DENG Y, et al. Data-Efficient Hierarchical Reinforcement Learning for Robotic Assembly Control Applications[J/OL]. IEEE Transactions on Industrial Electronics, 2021, 68(11): 11565-11575. DOI: 10.1109/TIE.2020.3038072.
[20] HOU Z, LI Z, HSU C, et al. Fuzzy Logic-Driven Variable Time-Scale Prediction-Based Reinforcement Learning for Robotic Multiple Peg-in-Hole Assembly[J/OL]. IEEE Transactions on Automation Science and Engineering, 2022, 19(1): 218-229. DOI: 10.1109/TASE.2020.3024725.
[21] WANG Y, BELTRAN-HERNANDEZ C C, WAN W, et al. Robotic Imitation of Human Assembly Skills Using Hybrid Trajectory and Force Learning[M/OL]//2021 IEEE International Conference on Robotics and Automation (ICRA). 2021: 11278-11284. DOI: 10.1109/ICRA48506.2021.9561619.
[22] CALINON S. Robot programming by demonstration[M]. EPFL Press, 2009.
[23] ARGALL B D, CHERNOVA S, VELOSO M, et al. A survey of robot learning from demonstration[J]. Robotics and Autonomous Systems, 2009, 57(5): 469-483.
[24] ZHU Z, HU H. Robot learning from demonstration in robotic assembly: A survey[J]. Robotics, 2018, 7(2): 17.
[25] CALINON S, GUENTER F, BILLARD A. On learning, representing, and generalizing a task in a humanoid robot[J]. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2007, 37(2): 286-298.
[26] CALINON S, BILLARD A. A probabilistic programming by demonstration framework handling constraints in joint space and task space[M]//2008 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2008: 367-372.
[27] CALINON S, ALIZADEH T, CALDWELL D G. On improving the extrapolation capability of task-parameterized movement models[M]//2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2013: 610-616.
[28] ALIZADEH T, CALINON S, CALDWELL D G. Learning from demonstrations with partially observable task parameters[M]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 3309-3314.
[29] OSA T, ESFAHANI A M G, STOLKIN R, et al. Guiding trajectory optimization by demonstrated distributions[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 819-826.
[30] ZHU Z, HU H. Robot learning from demonstration in robotic assembly: A survey[J]. Robotics, 2018, 7(2): 17.
[31] YANG C, ZENG C, CONG Y, et al. A learning framework of adaptive manipulative skills from human to robot[J]. IEEE Transactions on Industrial Informatics, 2018, 15(2): 1153-1161.
[32] DING L, LI S, GAO H, et al. Adaptive partial reinforcement learning neural network-based tracking control for wheeled mobile robotic systems[J]. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2018, 50(7): 2512-2523.
[33] ZHANG M Y, TIAN G H, LI C C, et al. Learning to transform service instructions into actions with reinforcement learning and knowledge base[J]. International Journal of Automation and Computing, 2018, 15: 582-592.
[34] WANG Y, JIAO Y, XIONG R, et al. MASD: A multimodal assembly skill decoding system for robot programming by demonstration[J]. IEEE Transactions on Automation Science and Engineering, 2018, 15(4): 1722-1734.
[35] WAN A, XU J, CHEN H, et al. Optimal path planning and control of assembly robots for hard-measuring easy-deformation assemblies[J]. IEEE/ASME Transactions on Mechatronics, 2017, 22(4): 1600-1609.
[36] GU S, HOLLY E, LILLICRAP T, et al. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates[M/OL]//2017 IEEE International Conference on Robotics and Automation (ICRA). 2017: 3389-3396. DOI: 10.1109/ICRA.2017.7989385.
[37] HOPPE S, LOU Z, HENNES D, et al. Planning approximate exploration trajectories for model-free reinforcement learning in contact-rich manipulation[J]. IEEE Robotics and Automation Letters, 2019, 4(4): 4042-4047.
[38] HOU Z, FEI J, DENG Y, et al. Data-efficient hierarchical reinforcement learning for robotic assembly control applications[J]. IEEE Transactions on Industrial Electronics, 2020, 68(11): 11565-11575.
[39] MNIH V, BADIA A P, MIRZA M, et al. Asynchronous methods for deep reinforcement learning[M]//International Conference on Machine Learning. PMLR, 2016: 1928-1937.
[40] XU J, HOU Z, WANG W, et al. Feedback deep deterministic policy gradient with fuzzy reward for robotic multiple peg-in-hole assembly tasks[J]. IEEE Transactions on Industrial Informatics, 2018, 15(3): 1658-1667.
[41] HOU Z, LI Z, HSU C, et al. Fuzzy logic-driven variable time-scale prediction-based reinforcement learning for robotic multiple peg-in-hole assembly[J]. IEEE Transactions on Automation Science and Engineering, 2020, 19(1): 218-229.
[42] SAKATA N, KINOSHITA Y, KATO Y. Predicting a pedestrian trajectory using seq2seq for mobile robot navigation[M]//IECON 2018 - 44th Annual Conference of the IEEE Industrial Electronics Society. IEEE, 2018: 4300-4305.
[43] ALAHI A, GOEL K, RAMANATHAN V, et al. Social LSTM: Human trajectory prediction in crowded spaces[M]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 961-971.
[44] DEO N, TRIVEDI M M. Multi-modal trajectory prediction of surrounding vehicles with maneuver based LSTMs[M]//2018 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2018: 1179-1184.
[45] BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate[A]. 2014.
[46] CHEN J, JIANG D, ZHANG Y. A hierarchical bidirectional GRU model with attention for EEG-based emotion classification[J]. IEEE Access, 2019, 7: 118530-118540.
[47] LU S, ZHANG Q, CHEN G, et al. A combined method for short-term traffic flow prediction based on recurrent neural network[J]. Alexandria Engineering Journal, 2021, 60(1): 87-94.
[48] LIU S, SHENG J, OU Y. Learning Compliant Assembly Strategy From Demonstration[M]//2023 IEEE International Conference on Real-time Computing and Robotics (RCAR). IEEE, 2023: 929-934.
[49] SHU W, CAI K, XIONG N N. A short-term traffic flow prediction model based on an improved gate recurrent unit neural network[J]. IEEE Transactions on Intelligent Transportation Systems, 2021, 23(9): 16654-16665.
[50] XUE H, HUYNH D Q, REYNOLDS M. Bi-prediction: Pedestrian trajectory prediction based on bidirectional LSTM classification[M]//2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2017: 1-8.
[51] CHEN C P, LIU Z. Broad learning system: An effective and efficient incremental learning system without the need for deep architecture[J]. IEEE Transactions on Neural Networks and Learning Systems, 2017, 29(1): 10-24.
[52] GONG X, ZHANG T, CHEN C P, et al. Research review for broad learning system: Algorithms, theory, and applications[J]. IEEE Transactions on Cybernetics, 2021, 52(9): 8922-8950.
[53] XU S, LIU J, YANG C, et al. A learning-based stable servo control strategy using broad learning system applied for microrobotic control[J]. IEEE Transactions on Cybernetics, 2021, 52(12): 13727-13737.
[54] GONG X, ZHANG T, CHEN C L P, et al. Research Review for Broad Learning System: Algorithms, Theory, and Applications[J/OL]. IEEE Transactions on Cybernetics, 2022, 52(9): 8922-8950. DOI: 10.1109/TCYB.2021.3061094.
[55] ZHOU P, SHI W, TIAN J, et al. Attention-based bidirectional long short-term memory networks for relation classification[M]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2016: 207-212.