Latest Articles: International Journal of Imaging Systems and Technology

SSANet—Novel Residual Network for Computer-Aided Diagnosis of Pulmonary Nodules in Chest Computed Tomography
IF 3.0 · CAS Tier 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-14 · DOI: 10.1002/ima.23176
Yu Gu, Jiaqi Liu, Lidong Yang, Baohua Zhang, Jing Wang, Xiaoqi Lu, Jianjun Li, Xin Liu, Dahua Yu, Ying Zhao, Siyuan Tang, Qun He

Early lung cancer often manifests in medical imaging as pulmonary nodules, which may be benign or malignant, and deep-learning-based computer-aided diagnosis is increasingly used to assist in their diagnosis. This study introduces SSANet, a novel three-dimensional (3D) residual network that integrates split-based convolution, shuffle attention, and a novel activation function, aiming to improve the accuracy of benign/malignant classification with convolutional neural networks (CNNs) and reduce the reading burden on physicians. To fully extract nodule information from chest CT images, the original residual network is extended to a 3D CNN. A 3D split-based convolution (SPConv) is designed and integrated into the feature extraction module to reduce redundancy in feature maps and speed up inference. The SSABlock of the network also adopts the ACON (Activate or Not) activation function, and an attention module captures critical nodule characteristics. Training uses the PolyLoss function. Once SSANet produces a diagnosis, a heat map generated with Score-CAM is used to check whether the network has correctly localized the nodule. On the final test set, the network achieves an accuracy of 89.13%, an F1-score of 84.85%, and a G-mean of 86.20%, improvements of 5.43%, 5.98%, and 4.09%, respectively, over the original base network. These results align with previous pulmonary nodule diagnosis networks, supporting the reliability and clinical applicability of the diagnostic outcomes. (Vol. 34, Issue 5)
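The ACON activation the abstract mentions can be sketched numerically. Below is a minimal numpy illustration of the ACON-C form from the activation-function literature, f(x) = (p1 − p2)·x·σ(β(p1 − p2)x) + p2·x; the parameter values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def acon_c(x, p1=1.0, p2=0.0, beta=1.0):
    """ACON-C activation: (p1-p2)*x*sigmoid(beta*(p1-p2)*x) + p2*x.
    beta smoothly switches the unit between linear and ReLU-like behaviour."""
    d = (p1 - p2) * x
    return d * sigmoid(beta * d) + p2 * x

x = np.linspace(-3.0, 3.0, 7)
y_relu_like = acon_c(x, beta=20.0)   # large beta: approximately ReLU
y_linear = acon_c(x, beta=0.0)       # beta = 0: linear, slope (p1+p2)/2
```

With a learnable β per channel, the network can decide how strongly each unit activates, which is the "activate or not" idea.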
Citations: 0
A Novel Dual Attention Approach for DNN Based Automated Diabetic Retinopathy Grading
IF 3.0 · CAS Tier 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-13 · DOI: 10.1002/ima.23175
Tareque Bashar Ovi, Nomaiya Bashree, Hussain Nyeem, Md Abdul Wahed, Faiaz Hasanuzzaman Rhythm, Ayat Subah Alam

Diabetic retinopathy (DR) poses a serious threat to vision, emphasising the need for early detection. Manual analysis of fundus images, though common, is error-prone and time-intensive, and existing automated diagnostic methods lack precision, particularly in the early stages of DR. This paper introduces the Soft Convolutional Block Attention Module-based Network (Soft-CBAMNet), a deep-learning network for severity grading that uses Soft-CBAM attention to capture complex features from fundus images. The network integrates the convolutional block attention module (CBAM) with soft-attention components, processing input features through both simultaneously; the resulting attention maps undergo max-pooling, and the refined features are concatenated before a dropout layer with a rate of 50%. On the APTOS dataset, Soft-CBAMNet achieves 85.4% accuracy in multiclass DR grading, and it shows strong robustness and general feature learning, reaching a mean AUC of 0.81 on the IDRiD dataset. Inspection of intermediate feature maps further confirms dynamic feature extraction across all classes. The model identifies all stages of DR with increased precision, surpassing contemporary approaches, and offers improved accuracy and efficiency for timely intervention. (Vol. 34, Issue 5)
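The channel-attention half of CBAM, which Soft-CBAMNet builds on, can be sketched in a few lines of numpy: a shared two-layer MLP is applied to both average- and max-pooled channel descriptors, the results are summed, and a sigmoid gate rescales each channel. This is an illustrative reimplementation of the published CBAM idea only; the weight shapes, reduction ratio, and the omission of the soft-attention branch are assumptions.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention (illustrative).
    feat: (C, H, W); w1: (C//r, C); w2: (C, C//r) with reduction ratio r."""
    avg = feat.mean(axis=(1, 2))                        # (C,) average-pooled
    mx = feat.max(axis=(1, 2))                          # (C,) max-pooled
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)        # shared MLP with ReLU
    gate = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # per-channel gate
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2
feat = rng.standard_normal((C, 6, 6))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(feat, w1, w2)
```

Because the gate lies strictly in (0, 1), attention can only suppress channels, never amplify them, which is the usual CBAM behaviour.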
Citations: 0
Lightweight Deep Learning Model Optimization for Medical Image Analysis
IF 3.0 · CAS Tier 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-13 · DOI: 10.1002/ima.23173
Zahraa Al-Milaji, Hayder Yousif

Medical image labeling requires specialized knowledge, so the challenge of medical image classification lies in using the few available labeled samples efficiently to build a high-performance model. Such models are typically complicated convolutional neural networks (CNNs) with numerous trainable parameters, which makes training and deployment expensive. In this paper, we propose optimizing a lightweight deep learning model with only five convolutional layers, using the particle swarm optimization (PSO) algorithm to find the best number of kernel filters for each convolutional layer. For colored red, green, and blue (RGB) images acquired from different data sources, we suggest stain separation using color deconvolution, together with horizontal and vertical flipping, to produce new versions of the images that concentrate their representation on structures and patterns. Because disease grades may differ only slightly, some images are labeled incorrectly or uncertainly; to mitigate this, we apply a second training pass that excludes uncertain data. With a small number of parameters and higher accuracy, the proposed lightweight deep learning model optimization (LDLMO) algorithm shows strong resilience and generalization ability compared with most recent research on four MedMNIST datasets (RetinaMNIST, BreastMNIST, DermMNIST, and OCTMNIST), Medical-MNIST, and brain tumor MRI datasets. (Vol. 34, Issue 5)
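The PSO search over per-layer kernel-filter counts can be illustrated with a toy swarm. The surrogate cost below (a made-up validation-error proxy minimized near 32 filters in each of five layers) stands in for actually training candidate networks, and all swarm hyperparameters are assumptions.

```python
import numpy as np

def pso(cost, lo, hi, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (illustrative). Positions are rounded
    to integers before evaluation, mimicking a search over filter counts."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([cost(np.rint(p)) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([cost(np.rint(p)) for p in pos])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return np.rint(g), float(pbest_val.min())

# Hypothetical surrogate: error is lowest near 32 filters per layer.
cost = lambda f: float(np.sum((f - 32.0) ** 2))
best, best_val = pso(cost, lo=np.array([8.0] * 5), hi=np.array([64.0] * 5))
```

In the paper's setting, each cost evaluation would be a short training run of the five-layer network with the candidate filter counts.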
Citations: 0
A Dynamic Multi-Output Convolutional Neural Network for Skin Lesion Classification
IF 3.0 · CAS Tier 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-09 · DOI: 10.1002/ima.23164
Yingyue Zhou, Junfei Guo, Hanmin Yao, Jiaqi Zhao, Xiaoxia Li, Jiamin Qin, Shuangli Liu

Skin cancer is a pressing global health issue with high incidence and mortality rates. Convolutional neural network (CNN) models improve skin lesion image classification and reduce the medical burden, but the class imbalance inherent in dermoscopy training data causes categorical overfitting and limits the ability of data-driven models to recognize few-shot categories. To address this, we propose a dynamic multi-output convolutional neural network (DMO-CNN) that incorporates exit nodes into the standard CNN structure and includes feature refinement layers (FRLs) and an adaptive output scheduling (AOS) module. The model improves feature representation through multi-scale sub-feature maps and reduces inter-layer dependencies during backpropagation. The FRLs ensure efficient, low-loss down-sampling, while the AOS module uses a trainable layer-selection mechanism to refocus the model's attention on few-shot lesion categories; a novel correction-factor loss supervises and promotes AOS convergence. Evaluation on the HAM10000 dataset demonstrates effective multi-class skin lesion classification and superior performance on few-shot categories: despite using a very simple VGG backbone, DMO-CNN achieves 0.885 BACC and 0.983 weighted AUC, comparable to the ensemble model that won the ISIC 2018 challenge. (Vol. 34, Issue 5)
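The multi-output idea of supervising several exit nodes at once can be sketched as a weighted sum of per-exit cross-entropies. The exit logits and fixed weights below are hypothetical; the paper's AOS module learns the scheduling and adds a correction-factor loss, neither of which is reproduced here.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_exit_loss(exit_logits, label, weights):
    """Weighted sum of cross-entropy losses, one per exit node (sketch)."""
    losses = [-np.log(softmax(lg)[label] + 1e-12) for lg in exit_logits]
    return float(np.dot(weights, losses))

logits_exit1 = np.array([2.0, 0.5, -1.0])   # early exit, less confident
logits_exit2 = np.array([4.0, 0.0, -2.0])   # final exit, more confident
loss = multi_exit_loss([logits_exit1, logits_exit2], label=0, weights=[0.3, 0.7])
```

Supervising intermediate exits gives shallower layers a direct gradient signal, which is one way such architectures reduce inter-layer dependency during backpropagation.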
Citations: 0
Reduction Accelerated Adaptive Step-Size FISTA Based Smooth-Lasso Regularization for Fluorescence Molecular Tomography Reconstruction
IF 3.0 · CAS Tier 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-09 · DOI: 10.1002/ima.23166
Xiaoli Luo, Renhao Jiao, Tao Ma, Yunjie Liu, Zhu Gao, Xiuhong Shen, Qianqian Ren, Heng Zhang, Xiaowei He

This paper proposes a reduction-accelerated, adaptive step-size fast iterative shrinkage-thresholding algorithm based on Smooth-Lasso regularization (SL-RAFISTA-BB) for 3D fluorescence molecular tomography (FMT) reconstruction. The Smooth-Lasso regularization fuses group-sparse prior information, balancing the sparsity and smoothness of the solution and simplifying the calculation. In particular, the convergence of FISTA is accelerated by introducing a reduction strategy and a Barzilai-Borwein variable step-size factor, and by constructing a continuation strategy that reduces computing costs and the number of iterations. Experimental results show that the proposed algorithm not only converges faster but also improves the localization accuracy of the tumor target, alleviates over-sparse or over-smooth reconstructions, and clearly outlines the boundary information of the tumor target. We hope this method can promote the development of optical molecular tomography. (Vol. 34, Issue 5)
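The algorithmic core can be sketched in numpy: FISTA with soft-thresholding for the L1 term, a first-difference quadratic standing in for the smoothing part of a Smooth-Lasso-style penalty, and a safeguarded Barzilai-Borwein step estimate. This is a simplified illustration, not the authors' SL-RAFISTA-BB: the reduction and continuation strategies are omitted, and the BB step is capped at 1/L for stability.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_bb(A, b, lam1=0.05, lam2=0.05, iters=300):
    """FISTA for 0.5||Ax-b||^2 + lam1||x||_1 + 0.5*lam2||Dx||^2 (sketch),
    D = first-difference operator, with a safeguarded Barzilai-Borwein step."""
    n = A.shape[1]
    D = np.eye(n) - np.eye(n, k=1)                       # rough smoother
    grad = lambda v: A.T @ (A @ v - b) + lam2 * (D.T @ (D @ v))
    L = np.linalg.norm(A, 2) ** 2 + 4.0 * lam2           # Lipschitz bound
    x = np.zeros(n); y = x.copy(); t = 1.0
    y_old, g_old = y.copy(), grad(y)
    for _ in range(iters):
        g = grad(y)
        s, r = y - y_old, g - g_old
        # BB1 step, capped at the safe 1/L step
        step = min(s @ s / (s @ r), 1.0 / L) if s @ r > 1e-12 else 1.0 / L
        y_old, g_old = y.copy(), g
        x_new = soft_threshold(y - step * g, step * lam1)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20); x_true[3:7] = 1.0   # sparse, piecewise-smooth signal
b = A @ x_true
x_hat = fista_bb(A, b)
```

In FMT reconstruction, A would be the system (sensitivity) matrix, b the boundary fluorescence measurements, and x the unknown source distribution.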
Citations: 0
A Cascade U-Net With Transformer for Retinal Multi-Lesion Segmentation
IF 3.0 · CAS Tier 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-05 · DOI: 10.1002/ima.23163
Haiyang Zheng, Feng Liu

Diabetic retinopathy (DR) is an important cause of blindness; if not diagnosed and treated in a timely manner, it can lead to irreversible vision loss, and its diagnosis relies heavily on specialized ophthalmologists. With the development of artificial intelligence, a number of automated diagnostics have appeared; one approach segments four common kinds of lesions from color fundus images: exudates (EX), soft exudates (SE), hemorrhages (HE), and microaneurysms (MA). This paper proposes a deep-learning segmentation model for DR whose main part consists of two cascaded, transformer-enhanced U-Net networks corresponding to coarse and fine segmentation stages; the model segments all four lesion types from an input fundus image simultaneously. Tests on three public datasets (IDRiD, DDR, and DIARETDB1) show competitive results against existing methods in PR-AUC, ROC-AUC, Dice, and IoU, especially for segmentation of SE and MA. (Vol. 34, Issue 5)
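The Dice and IoU scores used in the evaluation are straightforward to compute for binary lesion masks; below is a small worked example on hypothetical 4×4 masks.

```python
import numpy as np

def dice_iou(pred, gt, eps=1e-7):
    """Dice and IoU for binary segmentation masks of the same shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + eps)
    iou = inter / (union + eps)
    return dice, iou

pred = np.zeros((4, 4), bool); pred[:2, :] = True   # predicted lesion: rows 0-1
gt = np.zeros((4, 4), bool); gt[1:3, :] = True      # true lesion: rows 1-2
d, j = dice_iou(pred, gt)   # 4 overlapping pixels: Dice = 0.5, IoU = 1/3
```

Dice weights the overlap against the two mask sizes, while IoU weights it against their union, so Dice is always at least as large as IoU on the same pair of masks.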
Citations: 0
Integrating Multi-Scale Feature Boundary Module and Feature Fusion With CNN for Accurate Skin Cancer Segmentation and Classification
IF 3.0 · CAS Tier 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-05 · DOI: 10.1002/ima.23167
S. Malaiarasan, R. Ravi

The skin is a crucial protective organ, which makes early detection of skin diseases important for preventing potential progression to skin cancer. Diagnosis at early stages is difficult because visually similar lesions are hard to differentiate, motivating an automated method for precisely identifying skin lesions in biomedical images. This paper introduces a holistic methodology combining DenseNet, a multi-scale feature boundary module (MFBM), and a feature fusion and decoding engine (FFDE) to tackle shortcomings of existing deep-learning segmentation methods, together with a convolutional neural network for classifying the segmented images. The DenseNet encoder efficiently extracts features at four resolution levels, leveraging dense connectivity to capture intricate hierarchical features. The MFBM extracts boundary information using parallel dilated convolutions with various dilation rates for effective multi-scale capture. To avoid losing context as features are transformed during segmentation, the FFDE adaptively fuses features from different levels, restoring lesion location information while preserving local detail. Evaluation on the HAM10000 dataset of 10,015 dermoscopy images yields promising results. (Vol. 34, Issue 5)
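The parallel dilated convolutions behind the MFBM can be illustrated in one dimension: the same small edge kernel applied at several dilation rates responds to a boundary at several scales. The kernel and rates here are assumptions for illustration, not the module's actual configuration.

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """'Same'-padded 1D correlation with a dilation rate (illustrative)."""
    k = len(kernel)
    span = (k - 1) * rate
    xp = np.pad(x, (span // 2, span - span // 2))
    return np.array([np.dot(xp[i:i + span + 1:rate], kernel)
                     for i in range(len(x))])

def multi_scale_boundary(x, rates=(1, 2, 4)):
    """MFBM-flavoured sketch: one edge kernel run at several dilation rates
    in parallel, responses stacked as separate feature channels."""
    edge = np.array([-1.0, 0.0, 1.0])   # simple derivative kernel
    return np.stack([dilated_conv1d(x, edge, r) for r in rates])

x = np.concatenate([np.zeros(8), np.ones(8)])   # a step edge at index 8
responses = multi_scale_boundary(x)
```

Each dilation rate widens the receptive field without adding parameters, so the stacked responses capture the same boundary at progressively coarser scales.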
Citations: 0
Unsupervised Domain Adaptation for Simultaneous Segmentation and Classification of the Retinal Arteries and Veins
IF 3.0 · CAS Tier 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-04 · DOI: 10.1002/ima.23151
Lanyan Xue, Wenjun Zhang, Lizheng Lu, Yunsheng Chen, Kaibin Li

Automatic segmentation of fundus retinal vessels and accurate classification of arterial versus venous vessels play an important role in clinical diagnosis. This article proposes a retinal vessel segmentation and arteriovenous classification network that combines adversarial training with an attention mechanism to address arteriovenous misclassification and the ambiguous segmentation of fine vessels. It consists of three core components: discriminator, generator, and segmenter. To address domain shift, U-Net is employed as a discriminator, and data samples for arterial and venous vessels are produced by the generator using an unsupervised domain adaptation (UDA) approach. A self-attention mechanism improves attention to vessel-edge features and terminal fine vessels, improving both artery/vein (A/V) classification and fine-vessel segmentation, while non-strided convolution and non-pooled downsampling avoid losing fine-grained information and learning weak feature representations. On the DRIVE dataset, multi-class vessel segmentation achieves an F1-score of 0.7496 and an accuracy of 0.9820, and A/V classification accuracy improves by 1.35% compared with AU-Net. The results demonstrate that the proposed strategy enhances automated vessel classification and segmentation over the baseline U-Net. (Vol. 34, Issue 5)
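The self-attention idea used to sharpen vessel-edge features reduces to the generic scaled dot-product form, sketched below on a toy feature matrix. The paper embeds its attention inside a U-Net, which is not reproduced here; this shows only the core computation.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over n feature vectors (sketch).
    x: (n, d), used as queries, keys, and values at once."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)             # each row sums to 1
    return w @ x, w

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 8))   # 5 hypothetical pixel/feature vectors
out, attn = self_attention(x)
```

Each output vector is a convex combination of all inputs, weighted by similarity, which lets distant but related pixels (e.g., along a thin vessel) reinforce each other.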
Citations: 0
Computational Synthesis of Histological Stains: A Step Toward Virtual Enhanced Digital Pathology
IF 3.0 · CAS Tier 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-04 · DOI: 10.1002/ima.23165
Massimo Salvi, Nicola Michielli, Lorenzo Salamone, Alessandro Mogetta, Alessandro Gambella, Luca Molinaro, Mauro Papotti, Filippo Molinari

Histological staining plays a crucial role in anatomic pathology for the analysis of biological tissues and the formulation of diagnostic reports. Traditional methods like hematoxylin and eosin (H&E) primarily offer morphological information but lack insight into functional details, such as the expression of biomarkers indicative of cellular activity. To overcome this limitation, the authors propose a computational approach that synthesizes virtual immunohistochemical (IHC) stains from H&E input, transferring imaging features across staining domains. The approach comprises two stages: (i) a multi-stage registration framework ensuring precise alignment of cellular and subcellular structures between the source H&E and target IHC stains, and (ii) a deep-learning-based generative model that incorporates functional attributes from the target IHC stain by learning cell-to-cell mappings from paired training data. Virtual restaining of H&E slides was evaluated by simulating IHC staining for phospho-histone H3 on inguinal lymph node and bladder tissues. Blind pathologist assessments and quantitative metrics validated the diagnostic quality of the synthetic slides; notably, mitotic counts derived from synthetic images correlated strongly with physical staining, and global and stain-specific metrics confirmed the high quality of the synthetic IHC images. This methodology is an important advance in automated functional restaining, achieved through robust registration and a model trained on precisely paired H&E and IHC data to transfer function cell by cell, and it forms the basis for multiparameter histology analysis and comprehensive cohort staining using only digitized H&E slides. (Vol. 34, Issue 5; open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23165)
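One rigid stage of such a registration pipeline can be sketched with classical phase correlation, which recovers an integer translation between two images via the FFT. This is a generic textbook technique used as a stand-in; the paper's multi-stage framework also handles finer, subcellular, non-rigid alignment.

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the integer (dy, dx) that rolls mov back onto ref.
    Normalized cross-power spectrum -> inverse FFT -> peak location."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    F /= np.abs(F) + 1e-12
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the far half of each axis back to negative shifts
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(2)
ref = rng.random((64, 64))                        # stand-in for an H&E tile
mov = np.roll(ref, shift=(5, -3), axis=(0, 1))    # displaced "IHC" tile
shift = phase_correlation(ref, mov)               # correction to apply to mov
```

Applying the recovered shift with `np.roll(mov, shift, axis=(0, 1))` realigns the moved tile with the reference, which is the property a coarse rigid stage needs before finer deformable registration.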
Citations: 0
GLAN: Global Local Attention Network for Thoracic Disease Classification to Enhance Heart Failure Diagnosis
IF 3.0 · CAS Tier 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-08-30 · DOI: 10.1002/ima.23168
Dengao Li, Yujia Mu, Jumin Zhao, Changcheng Shi, Fei Wang

Chest x-ray (CXR) is an effective method for diagnosing heart failure, and identifying important features such as cardiomegaly, effusion, and edema on patient chest x-rays is significant for aiding its treatment. However, manually reading vast amounts of CXR data places a huge burden on physicians. Many deep-learning studies addressing this either use global learning methods, in which every pixel contributes equally to the classification, or focus too narrowly on small lesion areas while neglecting global context. In response, the authors propose the Global Local Attention Network (GLAN), which incorporates an improved attention module on a branched structure, enabling the network to capture small lesion areas while considering both local and global features. Evaluated on multiple public and real-world datasets, GLAN identifies the three key features (cardiomegaly, effusion, and edema) more accurately and effectively than state-of-the-art methods, providing more targeted support for diagnosing and treating heart failure. (Vol. 34, Issue 5)
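The global-plus-local gating idea can be caricatured in numpy: one gate computed from globally pooled context and one per-pixel local gate, both applied to the same feature tensor and summed. This toy sketch only conveys the flavor of combining the two views; GLAN's branched attention module is considerably more elaborate, and every shape and weight here is an assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def global_local_gate(feat, w_global):
    """Combine a global channel gate (from pooled context) with a local
    per-pixel gate, then fuse both gated views. feat: (C, H, W)."""
    g = sigmoid(w_global @ feat.mean(axis=(1, 2)))      # global channel gate
    local = sigmoid(feat.mean(axis=0, keepdims=True))   # local spatial gate
    return feat * g[:, None, None] + feat * local

rng = np.random.default_rng(3)
feat = rng.standard_normal((4, 5, 5))
w = rng.standard_normal((4, 4)) * 0.1
out = global_local_gate(feat, w)
```

The global branch decides which channels matter for the whole image (e.g., an enlarged cardiac silhouette), while the local branch highlights small salient regions such as a focal effusion.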
Citations: 0