International Journal of Imaging Systems and Technology: Latest Articles

U2-DBA: A Dual-Scale Boundary-Aware Network With Feature-Boundary-Skeleton Loss for Robust Skin Lesion Segmentation
IF 2.5 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2026-02-17 DOI: 10.1002/ima.70315
Zhiyan Che, Ruyun Chen, Hao Chen, Yonggui Li

Abstract: Accurate segmentation of skin lesions is crucial for dependable computer-aided diagnosis of melanoma. However, many existing deep learning models still struggle with vague lesion borders, uneven lesion appearance, and unstable performance on new datasets. This paper proposes a dual-scale boundary-aware network (U²-DBA) for dermoscopic image segmentation. The model includes a nested U-in-U encoder that captures both local and global features, a dual-branch gating module that balances semantic and structural information, and a decoder that focuses on preserving boundary details. We further propose a novel Feature-Boundary-Skeleton (FBS) loss function, which integrates region overlap, edge gradient, and skeleton-level shape constraints to enhance segmentation accuracy and structural consistency. To evaluate model efficiency, we introduce the Smooth Accuracy-Compactness Score (SACS), which combines Dice and IoU metrics with a logarithmic penalty on model size. Experiments on the ISIC 2018 dataset demonstrate that U²-DBA achieves high performance (Dice = 0.884, IoU = 0.799) and outperforms six state-of-the-art models in SACS. When evaluated directly on PH2 and HAM10000 without fine-tuning, the model retains strong performance. These findings indicate that U²-DBA is not only accurate and compact but also generalizes effectively across diverse datasets, offering a practical and deployable solution for clinical dermoscopic lesion segmentation. The code is available at https://github.com/kid-od/U2-DBA.

Citations: 0
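The abstract defines SACS only loosely (Dice and IoU combined with a logarithmic penalty on model size), so a minimal sketch under assumed weighting might look like the following; the equal Dice/IoU weighting, the `alpha` coefficient, and the parameter scaling are all illustrative, not the paper's actual formula:

```python
import math

def sacs(dice: float, iou: float, n_params: float, alpha: float = 0.5) -> float:
    """Hypothetical Smooth Accuracy-Compactness Score: accuracy terms
    (Dice, IoU) minus a logarithmic penalty on model size, as the
    abstract describes. Exact weighting is assumed, not from the paper."""
    accuracy = 0.5 * (dice + iou)                        # equal-weight accuracy term
    penalty = alpha * math.log10(1.0 + n_params / 1e6)   # size penalty, params in millions
    return accuracy - penalty

# With the reported metrics, a compact model scores higher than a heavy one:
small = sacs(0.884, 0.799, n_params=5e6)
large = sacs(0.884, 0.799, n_params=60e6)
```

The logarithmic penalty makes the score insensitive to small parameter differences but still rewards order-of-magnitude compactness gains.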
Robust Segmentation of Skin Lesions via Confidence-Guided ConvLSTM Integrated With Advanced Curriculum Strategies
IF 2.5 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2026-02-10 DOI: 10.1002/ima.70307
Neetu Verma, Ranvijay Ranvijay, Dharmendra Kumar Yadav

Abstract: Skin cancer is one of the fastest-growing cancers and requires early diagnosis for effective treatment. Its diagnosis depends heavily on image segmentation, yet traditional models frequently struggle to capture the intricacy and diversity of lesion features. This research proposes a new approach to improve segmentation accuracy by combining evolved curriculum learning, multi-model training, and ConvLSTM-based refinement. Lesion complexity (size, contrast, texture, and borders) is used to stratify the dataset into easy, moderate, and tough categories, after which specialized models are trained: UNet for easy lesions, UNet++ for moderate lesions, and Attention UNet for tough lesions. During inference, each model processes the same image, and the outputs are weighted by confidence masks that reflect each model's reliability. A ConvLSTM refinement module then integrates these outputs, using temporal and spatial connections to produce precise and cohesive segmentation masks. The method outperforms existing methods in validation on the ISIC 2017, ISIC 2018, and PH2 datasets. The curriculum-based strategy ensures directed learning and keeps simpler instances from dominating the training process. By showing how curriculum learning combined with multi-model refinement can increase the robustness of medical image segmentation, this research opens the way for sophisticated automated diagnostic tools in clinical practice.

Citations: 0
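The confidence-mask weighting step described above can be sketched as a pixel-wise weighted average of the per-model probability maps; this is a simplified stand-in for the paper's pipeline, which further refines the fused map with a ConvLSTM module (omitted here), and the function name is illustrative:

```python
import numpy as np

def confidence_fuse(probs: list, confs: list) -> np.ndarray:
    """Pixel-wise confidence-weighted fusion of per-model probability maps.
    `probs` and `confs` are lists of (H, W) arrays, one per model."""
    probs = np.stack(probs)                                  # (n_models, H, W)
    confs = np.stack(confs)                                  # (n_models, H, W)
    weights = confs / (confs.sum(axis=0, keepdims=True) + 1e-8)
    return (weights * probs).sum(axis=0)                     # fused (H, W) map

# Three models (e.g., UNet, UNet++, Attention UNet) on a toy 2x2 image:
p = [np.full((2, 2), v) for v in (0.9, 0.5, 0.1)]
c = [np.full((2, 2), v) for v in (0.8, 0.1, 0.1)]
fused = confidence_fuse(p, c)
```

Normalizing the confidence masks per pixel lets whichever model is most reliable at a given location dominate the fused prediction there.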
Hybrid BeeHive Algorithm: Proposed Ensemble Model for Multiclass Multi-Label Ophthalmological Eye Diseases Prediction
IF 2.5 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2026-02-09 DOI: 10.1002/ima.70308
Akanksha Bali, Kuljeet Singh, Vibhakar Mansotra

Abstract: The purpose of this study is to develop a novel and effective deep learning method, the Hybrid BeeHive Algorithm (HBA), for intelligent identification and classification of retinal diseases using the Ocular Disease Intelligent Recognition (ODIR) dataset. The HBA incorporates key elements from the Inception, DenseNet, and VGG architectures. The ODIR dataset, consisting of 5000 cases from various Chinese hospitals, is used to train and evaluate the proposed model. Preprocessing includes standardizing image resolutions, normalizing pixel values, and augmenting data to enhance the model's generalizability. The HBA integrates data from color fundus photographs to create a robust multimodal system, and backpropagation is used to optimize the model's parameters, enhancing its ability to recognize complex patterns in the data. The results demonstrate the efficacy of the HBA in classifying various retinal diseases. Performance of the constituent models (DenseNet121, InceptionV3, and VGG19) is compared in terms of accuracy, F1 score, precision, recall, and specificity for diagnosing age-related macular degeneration (AMD) and other conditions. DenseNet121 and InceptionV3 exhibit high performance, with InceptionV3 achieving near-perfect metrics. Traditional machine learning models such as random forests and decision trees also perform commendably, but with some trade-offs in recall. Building upon existing multimodal ensemble approaches, the HBA introduces a novel hybrid fusion strategy that integrates three deep CNN backbones for improved multi-label retinal disease prediction, marking a significant advance in automated ocular disease diagnostics.

Citations: 0
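The baseline idea behind a multi-label CNN ensemble like the one described can be sketched as averaging per-backbone sigmoid probabilities and thresholding; the paper's HBA fusion is more elaborate than this plain average, and the function name and logits are illustrative:

```python
import numpy as np

def ensemble_multilabel(logits: list, threshold: float = 0.5) -> np.ndarray:
    """Average sigmoid probabilities from several backbones (e.g.,
    DenseNet121, InceptionV3, VGG19) and threshold each label
    independently, giving a multi-label 0/1 prediction vector."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    probs = np.mean([sigmoid(l) for l in logits], axis=0)
    return (probs >= threshold).astype(int)

# Two disease labels, three hypothetical backbone logit vectors:
preds = ensemble_multilabel([np.array([2.0, -1.0]),
                             np.array([1.5, -2.0]),
                             np.array([0.5, 1.0])])
```

Thresholding each label independently is what makes the output multi-label: a fundus image can be positive for several conditions at once.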
DeepMixNet: Deep Multi-Scale Interactive Feature Mixing Network for Automated Skin Lesion Segmentation
IF 2.5 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2026-02-08 DOI: 10.1002/ima.70289
Ying Wang, XinYu Wang, Meng Zhang, Meiyan Liang, Jian'an Liang

Abstract: In recent years, medical image segmentation has emerged as a pivotal technology in medical image analysis, playing a particularly critical role in the diagnosis of skin diseases such as melanoma. Enhancing the accuracy and robustness of segmentation therefore remains a core challenge. U-Net, known for its efficient image processing and outstanding medical segmentation performance, has been widely adopted, yet many current approaches exhibit substantial limitations in real-world clinical scenarios. In this paper, we propose DeepMixNet, a novel deep multi-scale interactive feature fusion network tailored for automated skin lesion segmentation. Specifically, we introduce DMixblock, a deep multi-scale interactive feature mixing block integrated into the U-shaped model, which lets low-level spatial details and high-level semantic information enhance each other through bidirectional paths. We conducted comparative experiments on three public skin lesion datasets (ISIC 2017, ISIC 2018, and HAM10000) and used the PH2 dataset for external validation. The results show that DeepMixNet offers significant accuracy advantages in skin lesion segmentation tasks.

Citations: 0
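The bidirectional-path idea in the DMixblock description can be illustrated with a toy exchange between a high-resolution low-level map and a low-resolution high-level map; the real block is a learned module, so the plain up/down-sampling and addition here merely stand in for it:

```python
import numpy as np

def bidirectional_mix(low: np.ndarray, high: np.ndarray):
    """Toy bidirectional feature interaction: the high-level map is
    upsampled and added to the low-level map, while the low-level map
    is average-pooled and added to the high-level map. Illustrative
    only; DMixblock itself is a trainable mixing block."""
    s = low.shape[0] // high.shape[0]
    up = np.repeat(np.repeat(high, s, axis=0), s, axis=1)     # high -> low-level resolution
    down = low.reshape(high.shape[0], s,
                       high.shape[1], s).mean(axis=(1, 3))    # low -> high-level resolution
    return low + up, high + down

low = np.ones((4, 4))          # high-resolution, low-level detail map
high = np.full((2, 2), 2.0)    # low-resolution, high-level semantic map
new_low, new_high = bidirectional_mix(low, high)
```

Each path injects information the other branch lacks: spatial detail flows down, semantic context flows up.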
SF2AL-GNN: A Spatiotemporal Feature Fusion Adaptive Learning Graph Neural Network for ASD Diagnosis
IF 2.5 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2026-02-08 DOI: 10.1002/ima.70306
Ming Liu, Hongyuan Gu, Chaolei Sun, Yudong Zhang, Shuaiqi Liu

Abstract: As modern medical imaging technology advances, resting-state functional magnetic resonance imaging (rs-fMRI) is becoming a preferred method for studying brain activity and identifying autism spectrum disorder (ASD), owing to its cost-effectiveness and noninvasive nature. To better exploit the temporal and spatial dimensions of rs-fMRI signals, this paper proposes a spatiotemporal feature fusion adaptive learning graph neural network (SF2AL-GNN) for ASD diagnosis. SF2AL-GNN first creates a functional connectivity (FC) matrix for each individual. Combining gated recurrent units (GRU), a transformer, and graph convolution, it then uses a spatiotemporal local feature learning module to extract temporal features from the 1D time series and spatial features from the FC matrix. These features are fused to construct a global subject graph from the multimodal information. A self-adjusting global feature learning (SGFL) module adds adaptive weights during GCN updates to better obtain ASD-related feature embeddings, and an MLP performs the final classification. The model was trained and evaluated on the ABIDE I dataset, where it outperformed the latest advanced approaches.

Citations: 0
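The FC-matrix construction mentioned as the first step is conventionally the pairwise Pearson correlation between regional rs-fMRI time series; a minimal sketch of that standard step (the function name and toy dimensions are illustrative):

```python
import numpy as np

def functional_connectivity(ts: np.ndarray) -> np.ndarray:
    """Functional connectivity matrix as pairwise Pearson correlation
    between regional time series; rows of `ts` are brain regions,
    columns are time points."""
    return np.corrcoef(ts)

rng = np.random.default_rng(0)
ts = rng.standard_normal((4, 100))    # 4 regions, 100 time points
fc = functional_connectivity(ts)
```

The resulting symmetric matrix is what graph methods then treat as (weighted) edges between brain-region nodes.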
A Manually Annotated, Open-Source Dataset for Coronary, Aortic, and Valvular Calcium Segmentation in Non-Gated Chest CT
IF 2.5 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2026-02-07 DOI: 10.1002/ima.70311
Keola Ching, Brady Carlson, Ernesto Robinson, Kevin Jiang, Kyra Rozanitis, Brian Desalme, Divya Beeram, John Ciubuc, Kal Clark

Abstract: Automated cardiovascular calcium scoring on non-gated chest CT offers a major opportunity for opportunistic cardiovascular risk stratification. However, progress is hindered by the scarcity of publicly available, annotated datasets that differentiate between coronary, aortic, and valvular sources. This study aimed to develop, characterize, and release a manually annotated, open-source dataset for cardiovascular segmentation and to provide a baseline deep learning performance benchmark. A dataset of 203 non-gated chest CT scans from the Stanford AIMI COCA dataset was manually annotated for calcium across eight anatomical classes, with all segmentations verified by an expert radiologist. A YOLOv8m-Seg model was trained on an 80% patient-level split (171 scans) and evaluated on the remaining 20% (42 scans). Performance was assessed using instance segmentation metrics, and the dataset's utility was demonstrated with an illustrative risk stratification benchmark based on a total thoracic calcium score. The final dataset comprised 1649 distinct calcium instances, exhibiting significant class imbalance (thoracic aorta: 68.65%). The baseline YOLOv8m-Seg model yielded moderate segmentation performance (Mask mAP50: 14.4%). The illustrative benchmark demonstrated that models trained on this data can learn features correlated with high calcium burden. This publicly available, expertly verified dataset is a valuable resource for developing and validating algorithms for comprehensive cardiovascular calcium analysis on non-gated CT. The primary value lies in the dataset itself, which enables the development of next-generation AI models capable of anatomically precise distinction between coronary and non-coronary calcifications for more accurate cardiovascular risk stratification, warranting further research and external validation.

Citations: 0
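The patient-level 80/20 split mentioned above, which guarantees that no patient's scans leak between train and test sets, can be sketched as follows; the helper name, seed, and the use of scan indices as patient IDs are assumptions for illustration:

```python
import random

def patient_level_split(patient_ids, train_frac: float = 0.8, seed: int = 42):
    """Shuffle unique patient IDs deterministically and cut at the
    requested fraction, so each patient lands entirely in one split."""
    ids = sorted(set(patient_ids))
    random.Random(seed).shuffle(ids)
    cut = round(len(ids) * train_frac)
    return ids[:cut], ids[cut:]

# 203 scans, as in the dataset described above (one scan per ID here):
train_ids, test_ids = patient_level_split(range(203))
```

Splitting on patient identity rather than on individual scans is what prevents optimistic bias when one patient contributes multiple images.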
MAR: Cancer Grading Metric With AI-Based Histopathological Image Assessment
IF 2.5 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2026-02-06 DOI: 10.1002/ima.70305
Nooshin Nemati, Merve Ozkan, Refik Samet

Abstract: The aim of this study is to propose MAR (Mitosis Area Rate), a cancer grading metric based on AI-assisted histopathological image assessment. The proposed MAR metric is defined as the ratio of the mitosis pixel area to the total image area in a histopathological image. The mitosis pixel area is the total pixel area of all mitoses in the image patches, determined by a CNN that distinguishes true mitoses from false positives; the total image area is the total pixel area of the histopathological image. HSV color-space segmentation in 32 × 32 pixel patches of annotated H&E-stained histopathological images from the ICPR12, MiDeSeC, and MIDOG21 datasets is used to compute the mitosis pixel area. MAR categorizes mitotic activity into low, moderate, or high levels. The existing Ki-67 index, defined as the ratio of the number of mitoses to the total number of cells, is used to validate the MAR metric. The proposed AI-based MAR metric improves accuracy and consistency in assessing tumor proliferation in histopathological images. Performance was evaluated using ROC analysis and Cohen's kappa. The results show that MAR gives a better assessment than the state of the art, aligns well with the Ki-67 index, and improves diagnostic consistency. MAR is the first metric in the literature to use mitosis pixel area as a quantitative measure in automated histopathology.

Citations: 0
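The MAR definition given in the abstract (mitosis pixel area divided by total image area) translates directly into code; what follows assumes a binary mitosis mask has already been produced by the CNN stage, and the low/moderate/high cutoffs are not specified in the abstract, so none are shown:

```python
import numpy as np

def mar(mitosis_mask: np.ndarray) -> float:
    """Mitosis Area Rate: mitosis pixel area / total image area,
    per the definition in the abstract. `mitosis_mask` is a binary
    array where 1 marks pixels the CNN accepted as true mitosis."""
    return float(mitosis_mask.sum()) / mitosis_mask.size

mask = np.zeros((32, 32), dtype=np.uint8)
mask[:4, :4] = 1               # a 16-pixel mitosis region in a 32x32 patch
rate = mar(mask)               # 16 / 1024
```

Because it is an area ratio rather than a count ratio like Ki-67, MAR is insensitive to how the image is tiled into patches.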
SegRef3D: A Versatile Open-Source Platform for Artificial Intelligence-Assisted Segmentation and Three-Dimensional Reconstruction in Morphological Research
IF 2.5 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2026-02-06 DOI: 10.1002/ima.70313
Satoru Muro, Takuya Ibara, Akimoto Nimura, Keiichi Akita

Abstract: Accurate and efficient image segmentation is crucial in anatomy, histology, and pathology research. Conventional manual approaches are time-consuming, whereas fully automated artificial intelligence segmentation requires substantial manual correction owing to inaccuracy. To address this, we developed SegRef3D, a tool integrating the Segment Anything Model 2 with multiframe tracking and interactive refinement functions, enabling streamlined segmentation workflows for anatomical research. SegRef3D is implemented as a standalone, offline desktop application that operates entirely in a local environment, eliminating the need for cloud-based services. It provides a unified workflow from data import to segmentation, object tracking, refinement, and three-dimensional model export. Users can specify segmentation prompts through bounding-box input, track objects across multiple frames with start-end range selection, and refine results using intuitive "Add to Mask" and "Erase from Mask" tools. Up to 20 objects can be handled simultaneously, each assigned a unique color. The software supports Standard Tessellation Language (STL) output for three-dimensional modeling and includes volume measurement functions. The SegRef3D prototype, called Seg&Ref, has been applied in studies using serial histological sections, correlative microscopy with block-face imaging, and pelvic magnetic resonance imaging. Building on these applications, SegRef3D further enhances usability and enables a seamless workflow, offering an accessible, efficient, and accurate segmentation environment tailored for morphological and anatomical studies. Combining artificial intelligence-powered automatic segmentation with human-guided refinement in a user-friendly graphical interface bridges the gap between research needs and computational methods. By supporting applications that span traditional anatomy and modern pathology, SegRef3D provides a versatile platform for integrative morphological analysis, and its open-source availability ensures broad applicability in research, education, and clinical training in the anatomical sciences.

Citations: 0
A Multi-Scale Boundary-Enhanced Denoising Diffusion Network for Medical Image Segmentation
IF 2.5 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2026-02-06 DOI: 10.1002/ima.70314
Bing Wang, Xinming Zhang, Xiaofang Liu, Shiyin Zhang

Abstract: Automatic and accurate medical image segmentation (MIS) can help doctors identify regions of interest (ROIs) more efficiently and provide more reliable diagnostic information and treatment options. In recent years, the denoising diffusion model, known for its excellent detail expression and good generalization, has shown promising results in MIS. Existing diffusion-based segmentation networks typically take the original images as conditional information and ignore the ambiguity of the ROIs' boundaries, resulting in inconsistent boundary predictions and inaccurate segmentation. The variability in the size and shape of ROIs poses additional challenges for applying diffusion models to MIS. To solve these problems, we propose a multi-scale boundary-enhanced diffusion segmentation network (MBDS-Net) that improves the accuracy of boundary segmentation. Specifically, we design a multi-scale boundary-aware enhancement (MBE) module to strengthen boundary restoration for ROIs of different scales and shapes. We also propose an attention denoising residual (ADR) module that focuses on extracting key features during the progressive denoising process, reducing the impact of noise on segmentation and enhancing the robustness of the model. Furthermore, we adopt deep supervision in the decoder to improve training convergence and feature discriminability. Experiments on three public datasets against existing advanced segmentation models demonstrate its superiority in MIS. The code is available at https://github.com/FionaYeager/MBDS-Net.

Citations: 0
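For context on the "progressive denoising" the abstract refers to: diffusion segmentation models learn to reverse the standard DDPM forward process, which noises the mask according to q(x_t | x_0) = sqrt(ᾱ_t)·x_0 + sqrt(1-ᾱ_t)·ε. This is the textbook formula, not code from the paper:

```python
import numpy as np

def forward_diffuse(x0: np.ndarray, alpha_bar_t: float, eps: np.ndarray) -> np.ndarray:
    """Generic DDPM forward step: blend the clean mask x0 with Gaussian
    noise eps according to the cumulative noise schedule alpha_bar_t.
    A diffusion segmentation network is trained to undo this,
    conditioned on the input image."""
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps

mask = np.ones((2, 2))                    # toy clean segmentation mask
noise = np.zeros((2, 2))                  # zero noise, to make the blend visible
xt = forward_diffuse(mask, 0.25, noise)   # sqrt(0.25) * mask = 0.5 everywhere
```

At alpha_bar_t = 1 the mask is untouched; as it falls toward 0 the signal fades into noise, which is why boundary cues in the conditioning image matter so much late in denoising.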