{"title":"Advancing Visual Perception Through VCANet-Crossover Osprey Algorithm: Integrating Visual Technologies.","authors":"Yuwen Ning, Jiaxin Li, Shuyi Sun","doi":"10.1007/s10278-025-01467-w","DOIUrl":"https://doi.org/10.1007/s10278-025-01467-w","url":null,"abstract":"<p><p>Diabetic retinopathy (DR) is a significant vision-threatening condition, necessitating accurate and efficient automated screening methods. Traditional deep learning (DL) models struggle to detect subtle lesions and also suffer from high computational complexity. Existing models primarily mimic the primary visual cortex (V1) of the human visual system, neglecting other higher-order processing regions. To overcome these limitations, this research introduces the vision core-adapted network-based crossover osprey algorithm (VCANet-COP) for subtle lesion recognition with better computational efficiency. The model integrates sparse autoencoders (SAEs) to extract vascular structures and lesion-specific features at a pixel level for improved abnormality detection. The front-end network in the VCANet emulates the V1, V2, V4, and inferotemporal (IT) regions to derive subtle lesions effectively and improve lesion detection accuracy. Additionally, the COP algorithm leveraging the osprey optimization algorithm (OOA) with a crossover strategy optimizes hyperparameters and network configurations to ensure better computational efficiency, faster convergence, and enhanced performance in lesion recognition. The experimental assessment of the VCANet-COP model on multiple DR datasets namely Diabetic_Retinopathy_Data (DR-Data), Structured Analysis of the Retina (STARE) dataset, Indian Diabetic Retinopathy Image Dataset (IDRiD), Digital Retinal Images for Vessel Extraction (DRIVE) dataset, and Retinal fundus multi-disease image dataset (RFMID) demonstrates superior performance over baseline works, namely EDLDR, FFU_Net, LSTM_MFORG, fundus-DeepNet, and CNN_SVD by achieving average outcomes of 98.14% accuracy, 97.9% sensitivity, 98.08% specificity, 98.4% precision, 98.1% F1-score, 96.2% kappa coefficient, 2.0% false positive rate (FPR), 2.1% false negative rate (FNR), and 1.5-s execution time. By addressing critical limitations, VCANet-COP provides a scalable and robust solution for real-world DR screening and clinical decision support.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143782301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the Impact of a Ki-67 Decision Support Algorithm on Pathology Residents' Scoring Accuracy.","authors":"Mine İlayda Şengör Aygün, Özben Yalçın, Burak Uzel, Gamze Kulduk, Cem Çomunoğlu","doi":"10.1007/s10278-025-01490-x","DOIUrl":"https://doi.org/10.1007/s10278-025-01490-x","url":null,"abstract":"<p><p>Ki-67 scoring is of essential importance in the evaluation of breast cancer. We evaluated a Ki-67 algorithm as a decision support tool to improve accuracy for pathology residents. We retrospectively evaluated Ki-67 scores on whole slide images (WSI) obtained from 156 consecutive breast cancer patients. Two senior pathologists determined the 2.1 mm<sup>2</sup> hotspot to be evaluated. Ki-67 scores from senior pathologists were compared with results generated by the algorithm, results from 10 pathology residents, and results from pathology residents with the assistance of the algorithm. In addition to numerical results from the algorithm, residents were also presented with a visual representation of nuclei that were counted and excluded. Statistical analysis was performed using Wilcoxon and intra-class correlation (ICC) tests. The mean Ki-67 scores from senior pathologists and the algorithm were 23 ± 18 and 24 ± 18, respectively (ICC, 0.98). Ki-67 scores from the residents were 19 ± 16 and 22 ± 16, without and with input from the algorithm, respectively. With input from the algorithm, residents' scores were significantly closer to those obtained by senior pathologists (p = 0.008). Residents modified their scores in 53.8% of the cases where 74% of the better scores were characterized by an increase in the original scores. The results obtained by the Ki-67 algorithm were highly correlated with those assessed by senior pathologists. We demonstrated that the algorithm may serve as a decision support tool for residents to align their results with those of senior pathologists.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143782304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction of Future Risk of Moderate to Severe Kidney Function Loss Using a Deep Learning Model-Enabled Chest Radiography.","authors":"Kai-Chieh Chen, Shang-Yang Lee, Dung-Jang Tsai, Kai-Hsiung Ko, Yi-Chih Hsu, Wei-Chou Chang, Wen-Hui Fang, Chin Lin, Yu-Juei Hsu","doi":"10.1007/s10278-025-01489-4","DOIUrl":"https://doi.org/10.1007/s10278-025-01489-4","url":null,"abstract":"<p><p>Chronic kidney disease (CKD) remains a major public health concern, requiring better predictive models for early intervention. This study evaluates a deep learning model (DLM) that utilizes raw chest X-ray (CXR) data to predict moderate to severe kidney function decline. We analyzed data from 79,219 patients with an estimated Glomerular Filtration Rate (eGFR) between 65 and 120, segmented into development (n = 37,983), tuning (n = 15,346), internal validation (n = 14,113), and external validation (n = 11,777) sets. Our DLM, pretrained on CXR-report pairs, was fine-tuned with the development set. We retrospectively examined data spanning April 2011 to February 2022, with a 5-year maximum follow-up. Primary and secondary endpoints included CKD stage 3b progression, ESRD/dialysis, and mortality. The overall concordance index (C-index) values for the internal and external validation sets were 0.903 (95% CI, 0.885-0.922) and 0.851 (95% CI, 0.819-0.883), respectively. In these sets, the incidences of progression to CKD stage 3b at 5 years were 19.2% and 13.4% in the high-risk group, significantly higher than those in the median-risk (5.9% and 5.1%) and low-risk groups (0.9% and 0.9%), respectively. The sex, age, and eGFR-adjusted hazard ratios (HR) for the high-risk group compared to the low-risk group were 16.88 (95% CI, 10.84-26.28) and 7.77 (95% CI, 4.77-12.64), respectively. The high-risk group also exhibited higher probabilities of progressing to ESRD/dialysis or experiencing mortality compared to the low-risk group. Further analysis revealed that the high-risk group compared to the low/median-risk group had a higher prevalence of complications and abnormal blood/urine markers. Our findings demonstrate that a DLM utilizing CXR can effectively predict CKD stage 3b progression, offering a potential tool for early intervention in high-risk populations.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143775244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Breast Ultrasound Image Segmentation Using Multi-branch Skip Connection Search.","authors":"Yue Wu, Lin Huang, Tiejun Yang","doi":"10.1007/s10278-025-01487-6","DOIUrl":"https://doi.org/10.1007/s10278-025-01487-6","url":null,"abstract":"<p><p>To reduce the cost of designing neural networks and improve the accuracy of breast ultrasound image segmentation, an encoder-decoder neural network architecture search method is proposed, tailored for constructing segmentation models automatically. Initially, a multi-branch skip connection module is designed in which each branch utilizes distinct operations to extract features of varying scales and types from subsets of channels. Subsequently, a learnable operation weight search strategy is introduced that employs Gumbel-Softmax for reparameterizing discrete operation weights. This strategy explores optimal operations within the multi-branch skip connection module through both shared and non-shared methodologies. The candidate neural networks incorporate encoder-decoder block pairs that utilize the Swin Transformer from Swin-Unet and convolutional blocks from TransUNet, respectively. Experimental results demonstrate that the method identifies the optimal encoder-decoder model in approximately two hours. The automatically constructed model achieves superior segmentation accuracy, with Dice scores of approximately 85.94% and 84.44% on the BUSI and OASBUD datasets, respectively. It outperforms state-of-the-art (SOTA) methods such as AAU-Net, SK-U-Net, and TransUNet. High-precision segmentation results offer clear localization of lesion boundaries, thereby reducing the risk of missed diagnoses. The model's quantitative metrics, such as lesion area and morphology, can be seamlessly incorporated into diagnostic reports, facilitating the development of personalized treatment plans.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143775243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Cerebrovascular Segmentation and Visualization of Intracranial Time-of-Flight Magnetic Resonance Angiography Based on Deep Learning.","authors":"Yuqin Min, Jing Li, Shouqiang Jia, Yuehua Li, Shengdong Nie","doi":"10.1007/s10278-024-01215-6","DOIUrl":"10.1007/s10278-024-01215-6","url":null,"abstract":"<p><p>Time-of-flight magnetic resonance angiography (TOF-MRA) is a non-contrast technique used to visualize neurovascular. However, manual reconstruction of the volume render (VR) by radiologists is time-consuming and labor-intensive. Deep learning-based (DL-based) vessel segmentation technology may provide intelligent automation workflow. To evaluate the image quality of DL vessel segmentation for automatically acquiring intracranial arteries in TOF-MRA. A total of 394 TOF-MRA scans were selected, which included cerebral vascular health, aneurysms, or stenoses. Both our proposed method and two state-of-the-art DL methods are evaluated on external datasets for generalization ability. For qualitative assessment, two experienced clinical radiologists evaluated the image quality of cerebrovascular diagnostic and visualization (scoring 0-5 as unacceptable to excellent) obtained by manual VR reconstruction or automatic convolutional neural network (CNN) segmentation. The proposed CNN outperforms the other two DL-based methods in clinical scoring on external datasets, and its visualization was evaluated by readers as having the appearance of the radiologists' manual reconstructions. Scoring of proposed CNN and VR of intracranial arteries demonstrated good to excellent agreement with no significant differences (median, 5.0 and 5.0, P ≥ 12) at healthy-type scans. All proposed CNN image quality were considered to have adequate diagnostic quality (median scores > 2). Quantitative analysis demonstrated a superior dice similarity coefficient of cerebrovascular overlap (training sets and validation sets; 0.947 and 0.927). Automatic cerebrovascular segmentation using DL is feasible and the image quality in terms of vessel integrity, collateral circulation and lesion morphology is comparable to expert manual VR without significant differences.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"703-716"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950609/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141918551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PelviNet: A Collaborative Multi-agent Convolutional Network for Enhanced Pelvic Image Registration.","authors":"Rguibi Zakaria, Hajami Abdelmajid, Zitouni Dya, Allali Hakim","doi":"10.1007/s10278-024-01249-w","DOIUrl":"10.1007/s10278-024-01249-w","url":null,"abstract":"<p><p>PelviNet introduces a groundbreaking multi-agent convolutional network architecture tailored for enhancing pelvic image registration. This innovative framework leverages shared convolutional layers, enabling synchronized learning among agents and ensuring an exhaustive analysis of intricate 3D pelvic structures. The architecture combines max pooling, parametric ReLU activations, and agent-specific layers to optimize both individual and collective decision-making processes. A communication mechanism efficiently aggregates outputs from these shared layers, enabling agents to make well-informed decisions by harnessing combined intelligence. PelviNet's evaluation centers on both quantitative accuracy metrics and visual representations to elucidate agents' performance in pinpointing optimal landmarks. Empirical results demonstrate PelviNet's superiority over traditional methods, achieving an average image-wise error of 2.8 mm, a subject-wise error of 3.2 mm, and a mean Euclidean distance error of 3.0 mm. These quantitative results highlight the model's efficiency and precision in landmark identification, crucial for medical contexts such as radiation therapy, where exact landmark identification significantly influences treatment outcomes. By reliably identifying critical structures, PelviNet advances pelvic image analysis and offers potential enhancements for broader medical imaging applications, marking a significant step forward in computational healthcare.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"957-966"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950488/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142157124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implementing a Photodocumentation Program.","authors":"Eric K Lai, Evan Slavik, Bessie Ganim, Laurie A Perry, Caitlin Treuting, Troy Dee, Melissa Osborne, Cieara Presley, Alexander J Towbin","doi":"10.1007/s10278-024-01236-1","DOIUrl":"10.1007/s10278-024-01236-1","url":null,"abstract":"<p><p>The widespread availability of smart devices has facilitated the use of medical photography, yet photodocumentation workflows are seldom implemented in healthcare organizations due to integration challenges with electronic health records (EHR) and standard clinical workflows. This manuscript details the implementation of a comprehensive photodocumentation workflow across all phases of care at a large healthcare organization, emphasizing efficiency and patient safety. From November 2018 to December 2023, healthcare workers at our institution uploaded nearly 32,000 photodocuments spanning 54 medical specialties. The photodocumentation process requires as few as 11 mouse clicks and keystrokes within the EHR and on smart devices. Automation played a crucial role in driving workflow efficiency and patient safety. For example, body part rules were used to automate the application of a sensitive label to photos of the face, chest, external genitalia, and buttocks. This automation was successful, with over 50% of the uploaded photodocuments being labeled as sensitive. Our implementation highlights the potential for standardizing photodocumentation workflows, thereby enhancing clinical documentation, improving patient care, and ensuring the secure handling of sensitive images.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"671-680"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950542/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142038686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimized Spatial Transformer for Segmenting Pancreas Abnormalities.","authors":"Banavathu Sridevi, B John Jaidhan","doi":"10.1007/s10278-024-01224-5","DOIUrl":"10.1007/s10278-024-01224-5","url":null,"abstract":"<p><p>The precise delineation of the pancreas from clinical images poses a substantial obstacle in the realm of medical image analysis and surgical procedures. Challenges arise from the complexities of clinical image analysis and complications in clinical practice related to the pancreas. To tackle these challenges, a novel approach called the Spatial Horned Lizard Attention Approach (SHLAM) has been developed. As a result, a preprocessing function has been developed to examine and eliminate noise barriers from the trained MRI data. Furthermore, an assessment of the current attributes is conducted, followed by the identification of essential elements for forecasting the impacted region. Once the affected region has been identified, the images undergo segmentation. Furthermore, it is crucial to emphasize that the present study assigns 80% of the data for training and 20% for testing purposes. The optimal parameters were assessed based on precision, accuracy, recall, F-measure, error rate, Dice, and Jaccard. The performance improvement has been demonstrated by validating the method on various existing models. The SHLAM method proposed demonstrated an accuracy rate of 99.6%, surpassing that of all alternative methods.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"931-945"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950475/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142127895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lesion Classification by Model-Based Feature Extraction: A Differential Affine Invariant Model of Soft Tissue Elasticity in CT Images.","authors":"Weiguo Cao, Marc J Pomeroy, Zhengrong Liang, Yongfeng Gao, Yongyi Shi, Jiaxing Tan, Fangfang Han, Jing Wang, Jianhua Ma, Hongbin Lu, Almas F Abbasi, Perry J Pickhardt","doi":"10.1007/s10278-024-01178-8","DOIUrl":"10.1007/s10278-024-01178-8","url":null,"abstract":"<p><p>The elasticity of soft tissues has been widely considered a characteristic property for differentiation of healthy and lesions and, therefore, motivated the development of several elasticity imaging modalities, for example, ultrasound elastography, magnetic resonance elastography, and optical coherence elastography to directly measure the tissue elasticity. This paper proposes an alternative approach of modeling the elasticity for prior knowledge-based extraction of tissue elastic characteristic features for machine learning (ML) lesion classification using computed tomography (CT) imaging modality. The model describes a dynamic non-rigid (or elastic) soft tissue deformation in differential manifold to mimic the tissues' elasticity under wave fluctuation in vivo. Based on the model, a local deformation invariant is formulated using the 1<sup>st</sup> and 2<sup>nd</sup> order derivatives of the lesion volumetric CT image and used to generate elastic feature map of the lesion volume. From the feature map, tissue elastic features are extracted and fed to ML to perform lesion classification. Two pathologically proven image datasets of colon polyps and lung nodules were used to test the modeling strategy. The outcomes reached the score of area under the curve of receiver operating characteristics of 94.2% for the polyps and 87.4% for the nodules, resulting in an average gain of 5 to 20% over several existing state-of-the-art image feature-based lesion classification methods. The gain demonstrates the importance of extracting tissue characteristic features for lesion classification, instead of extracting image features, which can include various image artifacts and may vary for different protocols in image acquisition and different imaging modalities.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"804-818"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950485/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142010199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EAAC-Net: An Efficient Adaptive Attention and Convolution Fusion Network for Skin Lesion Segmentation.","authors":"Chao Fan, Zhentong Zhu, Bincheng Peng, Zhihui Xuan, Xinru Zhu","doi":"10.1007/s10278-024-01223-6","DOIUrl":"10.1007/s10278-024-01223-6","url":null,"abstract":"<p><p>Accurate segmentation of skin lesions in dermoscopic images is of key importance for quantitative analysis of melanoma. Although existing medical image segmentation methods significantly improve skin lesion segmentation, they still have limitations in extracting local features with global information, do not handle challenging lesions well, and usually have a large number of parameters and high computational complexity. To address these issues, this paper proposes an efficient adaptive attention and convolutional fusion network for skin lesion segmentation (EAAC-Net). We designed two parallel encoders, where the efficient adaptive attention feature extraction module (EAAM) adaptively establishes global spatial dependence and global channel dependence by constructing the adjacency matrix of the directed graph and can adaptively filter out the least relevant tokens at the coarse-grained region level, thus reducing the computational complexity of the self-attention mechanism. The efficient multiscale attention-based convolution module (EMA⋅C) utilizes multiscale attention for cross-space learning of local features extracted from the convolutional layer to enhance the representation of richly detailed local features. In addition, we designed a reverse attention feature fusion module (RAFM) to enhance the effective boundary information gradually. To validate the performance of our proposed network, we compared it with other methods on ISIC 2016, ISIC 2018, and PH<sup>2</sup> public datasets, and the experimental results show that EAAC-Net has superior segmentation performance under commonly used evaluation metrics.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1120-1136"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950606/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141989826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}