Automated Live Cell Evaluation via a CNN-Transformer Combined Microscopy Image Enhancement Network

IF 6.4 | CAS Zone 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS
Wenyuan Chen;Haocong Song;Zhuoran Zhang;Changsheng Dai;Guanqiao Shan;Hang Liu;Aojun Jiang;Chen Sun;Wenkun Dou;Changhai Ru;Clifford Librach;Yu Sun
{"title":"通过CNN-Transformer组合显微镜图像增强网络的自动活细胞评估","authors":"Wenyuan Chen;Haocong Song;Zhuoran Zhang;Changsheng Dai;Guanqiao Shan;Hang Liu;Aojun Jiang;Chen Sun;Wenkun Dou;Changhai Ru;Clifford Librach;Yu Sun","doi":"10.1109/TASE.2025.3585728","DOIUrl":null,"url":null,"abstract":"Automated morphological measurement of cellular and subcellular structures in live cells is important for evaluating cell functions. Due to their small size and transparent appearance, visualizing cellular and subcellular structures often requires high magnification microscopy and fluorescent staining. However, high magnification microscopy gives a limited field of view, and fluorescent staining alters cell viability and/or activity. Therefore, microscopy image enhancement methods have been developed to predict detailed intracellular structures in live cells. Existing image enhancement networks are mostly CNN-based models lacking global information or Transformer-based models lacking local information. For these purposes, a novel CNN-Transformer combined bilateral U-Net (CTBUnet) is proposed to effectively aggregate both local and global information. Experiments on the collected sperm cell enhancement dataset demonstrate the effectiveness of proposed network for both super-resolution and virtual staining prediction. Note to Practitioners—Automated and accurate intracellular morphology measurement is crucial for cell quality analysis. Microscopy image enhancement methods including super-resolution and virtual staining prediction were proposed to enhance or highlight details of subcellular structures without high magnification microscopy or invasive staining. To effectively combine local and global information, a novel CNN-Transformer combined image enhancement network is proposed. Different from traditional CNN-Transformer combined structures that only directionally fuse outputs from CNN and Transformer, the proposed bilateral fusion module bidirectionally fuses and exchanges features from CNN and Transformer. The proposed bilateral fusion module incorporates not only channel-wise fusion but also spatial-wise fusion to effectively aggregate local and global features. Additionally, a region-aware attention gate is proposed to urge the network to only focus on reconstructing cell structures regardless of background. The proposed method outperformed existing networks with a better enhancement effect for subcellular details.","PeriodicalId":51060,"journal":{"name":"IEEE Transactions on Automation Science and Engineering","volume":"22 ","pages":"18269-18280"},"PeriodicalIF":6.4000,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Automated Live Cell Evaluation via a CNN-Transformer Combined Microscopy Image Enhancement Network\",\"authors\":\"Wenyuan Chen;Haocong Song;Zhuoran Zhang;Changsheng Dai;Guanqiao Shan;Hang Liu;Aojun Jiang;Chen Sun;Wenkun Dou;Changhai Ru;Clifford Librach;Yu Sun\",\"doi\":\"10.1109/TASE.2025.3585728\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automated morphological measurement of cellular and subcellular structures in live cells is important for evaluating cell functions. Due to their small size and transparent appearance, visualizing cellular and subcellular structures often requires high magnification microscopy and fluorescent staining. However, high magnification microscopy gives a limited field of view, and fluorescent staining alters cell viability and/or activity. 
Therefore, microscopy image enhancement methods have been developed to predict detailed intracellular structures in live cells. Existing image enhancement networks are mostly CNN-based models lacking global information or Transformer-based models lacking local information. For these purposes, a novel CNN-Transformer combined bilateral U-Net (CTBUnet) is proposed to effectively aggregate both local and global information. Experiments on the collected sperm cell enhancement dataset demonstrate the effectiveness of proposed network for both super-resolution and virtual staining prediction. Note to Practitioners—Automated and accurate intracellular morphology measurement is crucial for cell quality analysis. Microscopy image enhancement methods including super-resolution and virtual staining prediction were proposed to enhance or highlight details of subcellular structures without high magnification microscopy or invasive staining. To effectively combine local and global information, a novel CNN-Transformer combined image enhancement network is proposed. Different from traditional CNN-Transformer combined structures that only directionally fuse outputs from CNN and Transformer, the proposed bilateral fusion module bidirectionally fuses and exchanges features from CNN and Transformer. The proposed bilateral fusion module incorporates not only channel-wise fusion but also spatial-wise fusion to effectively aggregate local and global features. Additionally, a region-aware attention gate is proposed to urge the network to only focus on reconstructing cell structures regardless of background. The proposed method outperformed existing networks with a better enhancement effect for subcellular details.\",\"PeriodicalId\":51060,\"journal\":{\"name\":\"IEEE Transactions on Automation Science and Engineering\",\"volume\":\"22 \",\"pages\":\"18269-18280\"},\"PeriodicalIF\":6.4000,\"publicationDate\":\"2025-07-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Automation Science and Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11068996/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Automation Science and Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11068996/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Automated morphological measurement of cellular and subcellular structures in live cells is important for evaluating cell functions. Because of their small size and transparent appearance, visualizing cellular and subcellular structures often requires high-magnification microscopy and fluorescent staining. However, high-magnification microscopy gives a limited field of view, and fluorescent staining alters cell viability and/or activity. Therefore, microscopy image enhancement methods have been developed to predict detailed intracellular structures in live cells. Existing image enhancement networks are mostly CNN-based models that lack global information or Transformer-based models that lack local information. To address these limitations, a novel CNN-Transformer combined bilateral U-Net (CTBUnet) is proposed to effectively aggregate both local and global information. Experiments on the collected sperm cell enhancement dataset demonstrate the effectiveness of the proposed network for both super-resolution and virtual staining prediction.

Note to Practitioners—Automated and accurate intracellular morphology measurement is crucial for cell quality analysis. Microscopy image enhancement methods, including super-resolution and virtual staining prediction, have been proposed to enhance or highlight details of subcellular structures without high-magnification microscopy or invasive staining. To effectively combine local and global information, a novel CNN-Transformer combined image enhancement network is proposed. Unlike traditional CNN-Transformer combined structures, which only unidirectionally fuse outputs from the CNN and the Transformer, the proposed bilateral fusion module bidirectionally fuses and exchanges features between the CNN and the Transformer. The bilateral fusion module incorporates not only channel-wise fusion but also spatial-wise fusion to effectively aggregate local and global features. Additionally, a region-aware attention gate is proposed to encourage the network to focus on reconstructing cell structures regardless of background. The proposed method outperformed existing networks, providing a better enhancement effect for subcellular details.
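The two architectural ideas highlighted in the abstract, a bilateral fusion module that exchanges CNN and Transformer features in both directions through channel-wise and spatial-wise gating, and a region-aware attention gate that suppresses background regions, can be illustrated with a short sketch. The PyTorch-style code below is a minimal illustration only: the module names (BilateralFusion, RegionAwareGate), layer sizes, and gating details are assumptions for the sake of the example and are not taken from the paper.

# Minimal sketch (not the authors' released code) of bidirectional CNN-Transformer
# feature fusion with channel-wise and spatial-wise gates, plus a region-aware gate.
import torch
import torch.nn as nn

class BilateralFusion(nn.Module):
    """Bidirectionally exchange local (CNN) and global (Transformer) features."""
    def __init__(self, channels: int):
        super().__init__()
        # Channel-wise gates: squeeze each branch to per-channel weights.
        self.cnn_channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.trans_channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        # Spatial-wise gates: squeeze each branch to a per-pixel weight map.
        self.cnn_spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())
        self.trans_spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, f_cnn: torch.Tensor, f_trans: torch.Tensor) -> torch.Tensor:
        # Transformer -> CNN: re-weight local features with global channel/spatial cues.
        f_cnn_out = f_cnn * self.trans_channel_gate(f_trans) * self.trans_spatial_gate(f_trans)
        # CNN -> Transformer: re-weight global features with local channel/spatial cues.
        f_trans_out = f_trans * self.cnn_channel_gate(f_cnn) * self.cnn_spatial_gate(f_cnn)
        return self.merge(torch.cat([f_cnn_out, f_trans_out], dim=1))

class RegionAwareGate(nn.Module):
    """Predict a soft cell-region mask so reconstruction ignores the background."""
    def __init__(self, channels: int):
        super().__init__()
        self.mask_head = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, 1), nn.Sigmoid())

    def forward(self, feat: torch.Tensor):
        mask = self.mask_head(feat)   # foreground probability map in [0, 1]
        return feat * mask, mask      # gated features + mask for optional supervision

if __name__ == "__main__":
    fusion, gate = BilateralFusion(64), RegionAwareGate(64)
    f_cnn = torch.randn(1, 64, 128, 128)    # local features from the CNN branch
    f_trans = torch.randn(1, 64, 128, 128)  # global features from the Transformer branch
    gated, mask = gate(fusion(f_cnn, f_trans))
    print(gated.shape, mask.shape)          # (1, 64, 128, 128), (1, 1, 128, 128)

The point of the bidirectional exchange in this sketch is that each branch re-weights the other, rather than one branch simply being appended to the output of the other, which is the "directional" fusion the abstract contrasts against.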
Source Journal
IEEE Transactions on Automation Science and Engineering (Engineering & Technology - Automation & Control Systems)
CiteScore: 12.50
Self-citation rate: 14.30%
Annual publications: 404
Review time: 3.0 months
Journal introduction: The IEEE Transactions on Automation Science and Engineering (T-ASE) publishes fundamental papers on Automation, emphasizing scientific results that advance efficiency, quality, productivity, and reliability. T-ASE encourages interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, operations research, and other fields. T-ASE welcomes results relevant to industries such as agriculture, biotechnology, healthcare, home automation, maintenance, manufacturing, pharmaceuticals, retail, security, service, supply chains, and transportation. T-ASE addresses a research community willing to integrate knowledge across disciplines and industries. For this purpose, each paper includes a Note to Practitioners that summarizes how its results can be applied or how they might be extended to apply in practice.