Title

Segmentation and Topology Analysis of Rat Liver Vessels Based on Deep Learning

Alternative Title
SEGMENTATION AND TOPOLOGY ANALYSIS OF RAT LIVER VESSELS BASED ON DEEP LEARNING
Name
王雪莹
Name (Pinyin)
WANG Xueying
Student ID
11930187
Degree Type
Master's
Degree Discipline
0809 Electronic Science and Technology
Discipline Category / Professional Degree Category
08 Engineering
Supervisor
唐晓颖 (TANG Xiaoying)
Supervisor's Affiliation
Department of Electronic and Electrical Engineering
Thesis Defense Date
2022-05-09
Thesis Submission Date
2022-06-12
Degree-Granting Institution
Southern University of Science and Technology
Degree-Conferral Location
Shenzhen
Abstract

Liver cancer currently ranks third among causes of cancer death worldwide and places a heavy burden on society. For unresectable intermediate-stage primary liver cancer, transarterial chemoembolization (TACE) is the most widely used treatment: a catheter delivers an embolic agent loaded with chemotherapy drugs into the hepatic artery, cutting off the blood supply of the tumor while killing its cells. Assessing the diffusion range and concentration of the drug is therefore of great importance for killing tumor cells while sparing as much of the patient's liver as possible. Against this background, this thesis studies and designs a deep-learning-based visualization model of hepatic chemotherapy drug diffusion: the drug-containing vessel regions are segmented and extracted, and a topology analysis algorithm is designed to quantify the local extent and concentration of the drug in the vasculature.

For vessel segmentation, the proposed U-DRLSE combines U-Net with a level-set method to achieve coarse-to-fine segmentation. To further improve accuracy, this thesis designs U-GAN, a vessel segmentation network based on a generative adversarial network with U-Net as its generator, which yields an initial performance gain. The thesis then explores the effect of attention mechanisms on segmentation performance, adding a dual attention mechanism over the spatial and channel dimensions to reweight the feature maps extracted by the network, yielding the DAU-GAN network.
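To make the dual-attention idea above concrete, the following is a minimal pure-Python sketch of channel gating followed by spatial gating (in the spirit of squeeze-and-excitation and CBAM); the function names, the toy list-of-lists feature map, and the single-statistic gates are illustrative assumptions, not the thesis's actual DAU-GAN implementation.

```python
import math

def channel_attention(fmap):
    """fmap: list of C channels, each an HxW nested list.
    Rescale each channel by a sigmoid gate computed from its
    global average (a minimal SE-style channel weighting)."""
    weights = []
    for ch in fmap:
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        weights.append(1.0 / (1.0 + math.exp(-mean)))  # sigmoid gate
    return [[[v * w for v in row] for row in ch]
            for ch, w in zip(fmap, weights)]

def spatial_attention(fmap):
    """Weight each pixel position by a sigmoid of its cross-channel mean."""
    h, w = len(fmap[0]), len(fmap[0][0])
    gate = [[1.0 / (1.0 + math.exp(-sum(ch[i][j] for ch in fmap) / len(fmap)))
             for j in range(w)] for i in range(h)]
    return [[[ch[i][j] * gate[i][j] for j in range(w)] for i in range(h)]
            for ch in fmap]

def dual_attention(fmap):
    """Dual attention: channel gate first, then spatial gate."""
    return spatial_attention(channel_attention(fmap))
```

In the actual network such gates would be learned (small convolutions or fully connected layers producing the weights); here the gates are fixed statistics purely to show how the two reweighting steps compose.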

For topology analysis, this thesis designs a fully automatic skeleton extraction and topological grading algorithm to statistically analyze the drug's diffusion range and concentration. A skeleton is first extracted from the segmentation result, and the vessels are then divided into multiple levels (from the main trunk to the fine branches) according to their bifurcation order and length. The thesis analyzes the drug diffusion length, diameter, area, and concentration of each vessel level at each time point, and builds semi-quantitative drug-diffusion models for three embolization methods, greatly reducing the time researchers spend on manual analysis.
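The grading step above can be sketched as a walk over the skeleton once it has been reduced to a graph of junction and end nodes: start from the root (the main-trunk inlet) and raise the vessel level by one at every bifurcation. The toy graph, node names, and the exact bump rule below are illustrative assumptions, not the thesis's algorithm verbatim.

```python
from collections import deque

def grade_vessel_tree(adjacency, root):
    """adjacency: dict node -> list of neighbors (undirected skeleton tree).
    Returns dict node -> level, where the root is level 1 and the level
    grows by 1 whenever a parent node branches into >= 2 children."""
    level = {root: 1}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        children = [n for n in adjacency[node] if n not in level]
        bump = 1 if len(children) >= 2 else 0  # new level only at bifurcations
        for child in children:
            level[child] = level[node] + bump
            queue.append(child)
    return level

# Toy skeleton: trunk A-B, B bifurcates into C and D, D bifurcates into E and F.
tree = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"],
        "D": ["B", "E", "F"], "E": ["D"], "F": ["D"]}
levels = grade_vessel_tree(tree, "A")
```

With per-level labels in hand, statistics such as diffusion length, diameter, area, and concentration can be accumulated per level by grouping skeleton points on their assigned level.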

Experimental results show that the proposed methods perform well in both vessel segmentation accuracy and topology analysis stability.

Keywords
Language
Chinese
Training Category
Independently trained
Year of Enrollment
2019
Year Degree Conferred
2022-06

Degree Evaluation Subcommittee
Department of Electronic and Electrical Engineering
Chinese Library Classification (CLC) Number
TP391.41
Source Repository
Manually submitted
Document Type
Degree thesis
Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/335679
Collection
College of Engineering_Department of Electronic and Electrical Engineering
Recommended Citation (GB/T 7714)
WANG Xueying. Segmentation and Topology Analysis of Rat Liver Vessels Based on Deep Learning[D]. Shenzhen: Southern University of Science and Technology, 2022.
Files in This Item
11930187-王雪莹-电子与电气工程 (9005 KB): restricted access
