Title

针对建筑表面质量的自动化无损检测系统 (Automatic Nondestructive Inspection System for Building Surface Quality)

Alternative Title
AUTOMATIC NONDESTRUCTIVE INSPECTION SYSTEM FOR BUILDING SURFACE QUALITY
Name
张文韬
Name (Pinyin)
ZHANG Wentao
Student ID
12032432
Degree Type
Master
Degree Major
080902 Circuits and Systems
Discipline Category
08 Engineering
Supervisor
陈义明
Supervisor's Department
Department of Mechanical and Energy Engineering
Thesis Defense Date
2023-05-13
Thesis Submission Date
2023-06-29
Degree-Granting Institution
Southern University of Science and Technology
Degree-Granting Location
Shenzhen
Abstract

The arrival of the Industry 4.0 era has set off a wave of integration between information technology and industry. Information technologies that advanced rapidly during the third information revolution are now being put into practice, seeking application scenarios in industry. Seizing this opportunity, the construction industry has likewise begun to combine with information technology, opening up a new direction of intelligent construction. The development of application technologies such as Building Information Modeling (BIM), intelligent integrated assembly, and construction robotics has become a key factor in this intelligent transformation. Building quality inspection, an indispensable step in the building acceptance process, has become a focus of attention for researchers in the field. Traditional building quality inspection relies mainly on manual surveying with hand-held tools, which is inefficient and offers no guarantee of accuracy. This thesis proposes an automatic nondestructive inspection system for building surface quality, intended to replace manual labor with efficient and accurate surface quality inspection.

To match the intended deployment scenario, an experimental site was set up, and the inspection system was built on a mobile base that meets the requirements. The system relies on the Robot Operating System (ROS) to control base movement and inter-module communication, and carries out data-acquisition tasks such as capturing images and point clouds. In addition, the system consults upper-level BIM data to confirm the regions to be inspected, and plans navigation routes that guarantee coverage of the surfaces under inspection.

The system's detection model consists of two parts: crack detection and building structural index detection. The crack detection model combines a residual network with a fully convolutional network in an encoder-decoder deep learning architecture, achieving preliminary pixel-level crack detection. To address the model's susceptibility to clutter and its insensitivity to fine cracks, this thesis preprocesses the model input with YOLO object detection, Hough line detection, and image super-resolution, resolving each problem in a targeted way and improving the detection model's performance.

The building structural index detection model is built on point cloud data and a plane-fitting algorithm. By combining normal-vector computation with analysis of the dispersion of points about the fitted plane, the model measures the flatness and levelness of building surfaces with high accuracy. Experiments with the algorithm on surfaces of different materials confirm its reliability and provide reference values for setting its parameters.

Other Abstract

The advent of the Industry 4.0 era has triggered a significant convergence of information technology and industry. Information technology, which advanced remarkably during the third information revolution, is now being applied across industries. The construction sector, in particular, has begun actively exploring the integration of information technology to pave the way for intelligent buildings. Within intelligent construction, notable areas of development include Building Information Modeling (BIM) technology, intelligent integrated assembly, construction robotics, and other related application technologies. While these advancements are promising, quality inspection remains a challenge for the industry. Traditional building quality inspections rely heavily on manual tools, leading to inefficiency and inaccuracy. To address this, this thesis proposes an automatic nondestructive inspection system for building surface quality. By replacing manual labor, the system aims to improve the efficiency and accuracy of surface quality inspection during the building acceptance process.

In this thesis, an experimental site is established to simulate the future deployment scenario, and a quality inspection system is developed on a mobile base that meets the deployment requirements. The system leverages the Robot Operating System (ROS) to control the movement of the base and to handle inter-module communication, and it performs data-acquisition tasks such as capturing images and generating point clouds. Furthermore, the system uses Building Information Modeling data to identify the areas that require inspection and plans a navigation route that ensures comprehensive surface coverage during detection.
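The coverage-oriented route planning described above can be illustrated with a minimal boustrophedon ("lawn-mower") sweep. The rectangular region, sensor footprint, and waypoint spacing below are assumptions for illustration only, not the thesis's actual BIM-driven planner.

```python
# Sketch: generate lawn-mower waypoints covering a rectangular surface patch
# with a sensor of fixed footprint. All dimensions are assumed values.

def coverage_waypoints(width, height, footprint, step):
    """Return a lawn-mower sequence of (x, y) waypoints that sweeps a
    width x height region with a sensor of the given footprint."""
    waypoints = []
    y = footprint / 2.0              # center the first sweep line
    left_to_right = True
    while y <= height:
        xs = [i * step for i in range(int(width / step) + 1)]
        if not left_to_right:
            xs.reverse()             # alternate sweep direction
        waypoints.extend((x, y) for x in xs)
        left_to_right = not left_to_right
        y += footprint               # adjacent, non-overlapping sweep line
    return waypoints

if __name__ == "__main__":
    wps = coverage_waypoints(width=4.0, height=2.4, footprint=0.8, step=1.0)
    print(len(wps))                  # 3 sweep lines x 5 waypoints each
```

Alternating the sweep direction on each line minimizes dead travel between lines, which is why this pattern is a common baseline for surface-coverage tasks.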

The detection model employed in the system consists of two main components: crack detection and building structure index detection. For crack detection, a residual learning network is combined with a fully convolutional network to form an encoder-decoder deep learning architecture, which achieves pixel-level crack detection as an initial result. To address challenges such as interference from debris and insensitivity to fine cracks, several preprocessing methods are applied to the model input: YOLO object detection, Hough line detection, and image super-resolution. By incorporating these techniques into the input processing, the system resolves the associated issues and optimizes the detection model's performance.
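As one example of the preprocessing mentioned above, a bare-bones Hough line transform can flag long straight edges (e.g. panel joints or skirting lines) so they are not mistaken for cracks. The 1-degree / 1-pixel accumulator resolution here is an assumed choice, not the thesis's actual setting.

```python
import math

# Sketch of the classic Hough line transform: every edge pixel votes for all
# (rho, theta) line parameterizations passing through it; accumulator peaks
# correspond to straight lines in the edge map.

def hough_lines(edge_pixels, height, width, n_theta=180):
    """Vote each (x, y) edge pixel into a (rho, theta) accumulator."""
    diag = int(math.ceil(math.hypot(height, width)))
    acc = [[0] * n_theta for _ in range(2 * diag)]   # rho shifted by +diag
    for x, y in edge_pixels:
        for t in range(n_theta):
            theta = math.radians(t)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[rho + diag][t] += 1
    return acc, diag

def strongest_line(acc, diag):
    """Return (votes, rho, theta_degrees) of the best-supported line."""
    return max((votes, r - diag, t)
               for r, row in enumerate(acc)
               for t, votes in enumerate(row))

if __name__ == "__main__":
    joint = [(x, 25) for x in range(50)]    # horizontal edge at y = 25
    acc, diag = hough_lines(joint, 50, 50)
    print(strongest_line(acc, diag))        # (50, 25, 90): y*sin(90°) = 25
```

A crack segmentation stage could then down-weight or mask pixels lying on such high-vote lines, since genuine cracks are rarely perfectly straight over long spans.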

The building structure index detection model relies on point cloud data and a plane-fitting algorithm. By combining normal-vector calculation with analysis of the dispersion of points about the fitted plane, the model achieves precise measurement of surface flatness and levelness. To verify the algorithm's reliability, experiments are conducted on walls covered with different materials, and the results validate its effectiveness. The thesis also discusses the key parameters and their recommended settings based on practical implementation, providing guidance for real-world operation.
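The plane-fitting idea can be sketched as a least-squares fit of z = a*x + b*y + c, with flatness taken as the largest point-to-plane deviation and levelness as the angle between the fitted normal and vertical. This is an illustrative stand-in, not the thesis's actual algorithm or parameter settings.

```python
import math

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3-D points (floor-like
    surfaces; a wall would first be rotated into such a frame).
    Assumes the points are spread out enough that the fit is non-degenerate."""
    sx = sy = sz = sxx = syy = sxy = sxz = syz = 0.0
    n = float(len(points))
    for x, y, z in points:
        sx += x; sy += y; sz += z
        sxx += x * x; syy += y * y; sxy += x * y
        sxz += x * z; syz += y * z
    # Normal equations A @ [a, b, c] = v, solved with Cramer's rule.
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    coeffs = []
    for i in range(3):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][i] = v[r]          # replace column i with v
        coeffs.append(det3(m) / d)
    return tuple(coeffs)            # (a, b, c)

def flatness_and_levelness(points):
    """Flatness: largest |point-to-plane distance|.
    Levelness: angle (degrees) between the plane normal and vertical."""
    a, b, c = fit_plane(points)
    norm = math.sqrt(a * a + b * b + 1.0)   # |(-a, -b, 1)|
    flat = max(abs(z - (a * x + b * y + c)) / norm for x, y, z in points)
    level = math.degrees(math.acos(1.0 / norm))
    return flat, level
```

For a perfectly level plane both indices are zero; a plane tilted by slope 0.1 yields a levelness of about 5.7 degrees (arctan 0.1). A production pipeline would typically precede this fit with outlier rejection such as RANSAC.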

Keywords
Other Keywords
Language
Chinese
Training Category
Independent training
Year of Enrollment
2020
Degree Conferral Date
2023-06

Degree Evaluation Subcommittee
Electronic Science and Technology
Chinese Library Classification
TP242
Source Repository
Manually submitted
Document Type
Thesis
Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/544669
Collection
College of Engineering / Department of Mechanical and Energy Engineering
Recommended Citation (GB/T 7714)
张文韬. 针对建筑表面质量的自动化无损检测系统[D]. 深圳: 南方科技大学, 2023.
Files in This Item
12032432-张文韬-机械与能源工程 (11690 KB), restricted access

Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.