Title

LoGo Transformer: Hierarchy Lightweight Full Self-Attention Network for Corneal Endothelial Cell Segmentation

Authors
DOI
Publication Date
2023
Conference Name
International Joint Conference on Neural Networks (IJCNN)
ISSN
2161-4393
ISBN
978-1-6654-8868-6
Proceedings Title
Volume
2023-June
Pages
1-7
Conference Date
18-23 June 2023
Conference Location
Gold Coast, Australia
Place of Publication
345 E 47TH ST, NEW YORK, NY 10017 USA
Publisher
Abstract
Corneal endothelial cell segmentation plays an important role in quantifying clinical indicators for evaluating corneal health. Although Convolutional Neural Networks (CNNs) are widely used for medical image segmentation, their receptive fields are limited. Transformers outperform convolution in modeling long-range dependencies but lack local inductive bias, so pure transformer networks are difficult to train on small medical image datasets. Moreover, transformer networks cannot be effectively deployed on specular microscopes because they are parameter-heavy and computationally complex. To this end, we find that appropriately limiting attention spans and modeling information at different granularities can introduce local constraints and enhance attention representations. This paper explores a hierarchical, lightweight, full self-attention network for medical image segmentation, using Local and Global (LoGo) transformers to separately model attention representations at low-level and high-level layers. Specifically, the local efficient transformer (LoTr) layer decomposes features into finer-grained elements to model local attention representations, while the global axial transformer (GoTr) builds long-range dependencies across the entire feature space. With this hierarchical structure, we gradually and efficiently aggregate semantic features from different levels. Experimental results on segmentation tasks for the corneal endothelial cell, the ciliary body, and the liver demonstrate the accuracy, effectiveness, and robustness of our method. Compared with CNN and hybrid CNN-Transformer state-of-the-art (SOTA) methods, the LoGo transformer obtains the best results.
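The abstract describes two complementary attention styles: a span-limited local transformer (LoTr) applied at low-level layers and an axial global transformer (GoTr) applied at high-level layers. Below is a minimal, hedged PyTorch sketch of those two building blocks; it is not the authors' released code, and the class names, window size, and head counts are illustrative assumptions.

```python
# Sketch only: window-limited local self-attention (in the spirit of LoTr) and
# axial self-attention along rows then columns (in the spirit of GoTr).
import torch
import torch.nn as nn


class LocalWindowAttention(nn.Module):
    """Self-attention restricted to non-overlapping windows (limited attention span)."""

    def __init__(self, dim: int, window: int = 8, heads: int = 4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W), H and W divisible by window
        B, C, H, W = x.shape
        w = self.window
        # Partition the feature map into w x w windows and attend within each window.
        x = x.view(B, C, H // w, w, W // w, w)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(B * (H // w) * (W // w), w * w, C)
        out, _ = self.attn(x, x, x)
        # Reassemble the windows back into a (B, C, H, W) feature map.
        out = out.reshape(B, H // w, W // w, w, w, C)
        return out.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)


class AxialAttention(nn.Module):
    """Global attention factorized along the height axis and then the width axis."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        B, C, H, W = x.shape
        # Attend along W for every row.
        rows = x.permute(0, 2, 3, 1).reshape(B * H, W, C)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(B, H, W, C).permute(0, 3, 1, 2)
        # Attend along H for every column.
        cols = x.permute(0, 3, 2, 1).reshape(B * W, H, C)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(B, W, H, C).permute(0, 3, 2, 1)


if __name__ == "__main__":
    feat = torch.randn(1, 32, 64, 64)            # toy feature map
    print(LocalWindowAttention(32)(feat).shape)  # torch.Size([1, 32, 64, 64])
    print(AxialAttention(32)(feat).shape)        # torch.Size([1, 32, 64, 64])
```

Restricting attention to small windows keeps the cost of the low-level layers roughly linear in the number of windows, while axial attention reduces full 2-D attention from O((HW)^2) to O(HW(H+W)) interactions at the high-level layers, which is consistent with the lightweight design described in the abstract.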
Keywords
University Attribution
Other
Language
English
Related Link [IEEE Record]
Indexed By
WOS Research Areas
Computer Science ; Engineering
WOS Categories
Computer Science, Artificial Intelligence ; Computer Science, Hardware & Architecture ; Engineering, Electrical & Electronic
WOS Accession Number
WOS:001046198700079
EI Accession Number
20233614678334
EI Subject Terms
Convolution ; Cytology ; Medical imaging ; Semantic Segmentation ; Semantics
EI Classification Codes
Biomedical Engineering:461.1 ; Biological Materials and Tissue Engineering:461.2 ; Biology:461.9 ; Information Theory and Signal Processing:716.1 ; Artificial Intelligence:723.4 ; Imaging Techniques:746
Source Database
IEEE
Full-Text Link: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10191116
Citation Statistics
Times Cited [WOS]: 0
Document Type: Conference Paper
Item Identifier: http://sustech.caswiz.com/handle/2SGJ60CL/553194
Collection: College of Engineering_Research Institute of Trustworthy Autonomous Systems
Author Affiliations
1.School of Computer Science, University of Nottingham Ningbo China, Ningbo, China
2.Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
3.Tomey Corporation, Nagoya, Japan
Recommended Citation
GB/T 7714
Yinglin Zhang, Zichao Cai, Risa Higashita, et al. LoGo Transformer: Hierarchy Lightweight Full Self-Attention Network for Corneal Endothelial Cell Segmentation[C]. New York, NY, USA: IEEE, 2023: 1-7.
Files in This Item
No files associated with this item.
