"Application of SVM Based on Optimization of Newton Raphson's Algorithm in Non-Invasive Blood Glucose Detection"
Yingnian Wu, Meiqi Sheng, Ding Wang, Shiwei Gao, Hao Tan
International Journal of Imaging Systems and Technology, vol. 35, no. 3. DOI: 10.1002/ima.70100. Published 2025-05-20.

Abstract: Traditional invasive blood glucose monitoring methods carry risks such as wound infection and patient discomfort. To address these issues, we propose a non-invasive method based on facial infrared thermography, aiming to improve patient comfort as well as the accuracy and convenience of blood glucose detection. To counter data imbalance, a wavelet-based sample pairing fusion technique was used to augment the thermal imaging dataset. Features extracted by the MobileNetV3 network were fed into an SVM model for training, and the Newton–Raphson-based optimization algorithm was applied to tune the SVM parameters. Compared with the standalone MobileNetV3 model, the MobileNetV3-NRBO-SVM regression network performs better in terms of maximum error and root mean square error (RMSE). The predicted blood glucose values of the proposed model all fall within region A of the Clarke error grid, with a maximum deviation of less than 10%. These results indicate that the non-invasive blood glucose detection technique based on infrared thermography and the MobileNetV3-NRBO-SVM model achieves clinically acceptable accuracy.
"Discriminative Consistency Semi-Supervised Carotid Ultrasound Plaque Segmentation by Exploiting Global Context"
Yanchao Yuan, Shangming Zhu, Yao Qu, Jifeng Sun, Min He, Jinlong Ma, Hanxing Song, Jinting Zhang, Zhiqiang Yin, Jicong Zhang, Xunming Ji
International Journal of Imaging Systems and Technology, vol. 35, no. 3. DOI: 10.1002/ima.70114. Published 2025-05-19.

Abstract: Carotid plaques in ultrasound images are a routine indicator for stroke risk evaluation. However, plaque segmentation for diagnosis is difficult because artifacts and heterogeneity can obscure plaque boundaries, and pixel-level labeling of large numbers of images is time-consuming and laborious. In this paper, we propose a discriminative consistency semi-supervised method that exploits global context, named DCGC-Net, to segment carotid ultrasound plaques. First, student-teacher consistency learning with data perturbations is adopted to leverage unlabeled images. Because the unsupervised outputs may lack shape constraints, we introduce an adversarial network to make the predictions for unlabeled images more reliable. Finally, a global dilated convolution block (GDCB) embedded in U-Net is designed to capture global context and reduce the effect of artifacts. Extensive experiments are performed on 1400 images from 1259 patients using 1/2, 1/4, and 1/8 of the labeled training images. Compared to cutting-edge semi-supervised methods, the proposed method achieves superior results on the DSC and MHD metrics (p value < 0.05). Ablation experiments demonstrate the validity of each proposed module. In addition, clinical plaque parameters are automatically calculated to produce a short diagnostic report. The proposed semi-supervised method can support clinical segmentation of carotid ultrasound plaques using limited labeled images and numerous unlabeled images.
{"title":"Fused Texture Feature of Segmented Retinal Image Based Multiretinal Disease Classification","authors":"Prem Kumari Verma, Jagdeep Kaur, Nagendra Pratap Singh","doi":"10.1002/ima.70112","DOIUrl":"https://doi.org/10.1002/ima.70112","url":null,"abstract":"<div>\u0000 \u0000 <p>The examination of retinal blood vessels is crucial for ophthalmologists to diagnose various eye abnormalities, including diabetic retinopathy, glaucoma, cardiovascular diseases, high blood pressure, arteriosclerosis, and age-related macular degeneration. The manual scrutiny of retinal vasculature poses a significant challenge for medical professionals due to the intricate structure of the eye, the minuscule size of blood vessels, and the variability in vessel width. In recent literature, numerous automated techniques for retinal vessel extraction have been proposed, offering valuable assistance to ophthalmologists in promptly identifying and diagnosing eye disorders. The study introduces a comprehensive model that assesses and evaluates the performance of 13 state-of-the-art machine learning networks. This model aims to contribute to deep feature extraction and image classification for fundus images. The proposed approach extracts the Gray-Level Co-occurrence Matrix, Histogram of Oriented Gradients, Wavelet, Tamura, Law's of Texture Energy, and Local Binary Pattern texture feature vector from the segmented retinal blood vessel structure. After extracting the feature, create the group of all possible combinations by using six feature vectors. After an exhaustive experimental analysis, we select a suitable group of feature vectors and apply a machine Learning Classifier to classify the four Retinal blood vessel-related diseases, namely Hypertensive Retinopathy, Pathological myopia, Moderated Diabetic retinopathy, and Healthy retina. Finally, obtain the accuracy of 97.7% with Cubic SVM.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144085023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pioneering Precision in Magnetic Resonance Imaging Training: The Introduction of the MRI Interpretation Competency Scale","authors":"Halil Yilmaz, Dilber Polat","doi":"10.1002/ima.70115","DOIUrl":"https://doi.org/10.1002/ima.70115","url":null,"abstract":"<p>Despite the central role of magnetic resonance imaging (MRI) in clinical diagnosis and medical education, there is a notable absence of standardized, validated tools specifically designed to assess MRI interpretation competencies. Existing assessment methods often evaluate general diagnostic reasoning but fail to address the unique cognitive demands of MRI interpretation, such as spatial orientation, recognition of sectional anatomy, and differentiation of normal and pathological structures. In response to these challenges, this study aimed to develop the MRI Interpretation Competency Scale (MRI-ICS), a tool specifically targeting the skills required for accurate MRI interpretation. A sequential exploratory mixed methods approach was employed. Semi-structured interviews with experienced MRI interpreter students (selected via snowball sampling) informed item development. Exploratory factor analysis (EFA) was conducted to establish construct validity, supported by the Kaiser–Meyer–Olkin measure (KMO) and Bartlett's Test of Sphericity (BTS). Reliability was assessed using Cronbach's alpha (α). The MRI-ICS identified three factors: (1) ability to discern structures in MRI images (eight items, explained variance 27.46%, Cronbach's <i>α</i> = 0.89); (2) necessity for professional development (seven items, explained variance 20.25%, Cronbach's <i>α</i> = 0.80); and (3) utilization in the diagnostic process (six items, explained variance 14.01%, Cronbach's <i>α</i> = 0.84). The total explained variance was 61.72%, with an overall Cronbach's α of 0.89. The MRI-ICS offers a reliable, validated framework to enhance MRI interpretation training globally, filling a critical gap in medical education assessment.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70115","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144091835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"GTMamba: Graph Tri-Orientated Mamba Network for 3D Brain Tumor Segmentation"
Ye Zhang, Muqing Zhang, Jianxin Zhang, Yangyang Shen, Datian Niu
International Journal of Imaging Systems and Technology, vol. 35, no. 3. DOI: 10.1002/ima.70111. Published 2025-05-12.

Abstract: Mamba has recently garnered increasing attention for its efficiency and effectiveness in modeling long-range dependencies. However, adapting it to non-sequential brain tumor image data remains a significant challenge. To address this, we propose the Graph Tri-Orientated Mamba network (GTMamba) for brain tumor image segmentation. The network flexibly captures the relationships between vertices and their neighboring vertices, enhancing the selection mechanism of the Mamba module; this allows the network to better adapt to non-sequential image data and significantly improves segmentation accuracy. On the BraTS 2021 and MSD Task01_BrainTumour datasets, GTMamba achieved Dice values of 94.29%/92.08%, 94.01%/90.58%, and 88.44%/74.02% for the whole tumor, tumor core, and enhancing tumor segmentation tasks, respectively. Compared to other state-of-the-art methods, GTMamba demonstrates superior overall performance in terms of segmentation accuracy and parameter efficiency.
{"title":"Dual-Region Consistency Learning With Contrastive Refinement for Semi-Supervised Medical Image Segmentation","authors":"Junmei Sun, Meixi Wang, Jianxiang Zhao, Defu Yang, Huang Bai, Xiumei Li","doi":"10.1002/ima.70091","DOIUrl":"https://doi.org/10.1002/ima.70091","url":null,"abstract":"<div>\u0000 \u0000 <p>Consistency regularization methods based on uncertainty estimation are a promising strategy for improving semi-supervised medical image segmentation. However, existing consistency regularization methods based on uncertainty estimation often neglect comprehensive feature extraction from both low and high uncertainty regions. Additionally, the lack of class separability in segmentation limits the learning of more robust representations from unlabeled images. To address these issues, this paper proposes a novel semi-supervised medical image segmentation framework named Dual-Region Consistency Learning with Contrastive Refinement. The proposed Dual-Region Balanced Consistency Learning (DRBCL) strategy assigns different learning weights to low and high uncertainty regions in predictions to fully learn complete images. Furthermore, the proposed Contrastive Learning with Hard Negative Samples (CLHNS) module incorporates the idea of contrastive learning. Positive and hard negative sample pairs constructed by the CLHNS module further improve inter-class contrast and intra-class consistency in segmentation. In the 10% labeled image experiment, the proposed method achieves Dice coefficients of 89.50% on the LA MR dataset and 72.08% on the Pancreas CT dataset, which surpass existing benchmarks and establishes new state-of-the-art performance.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143938994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Review of U-Net-Based Deep Learning Models for Skin Lesion Segmentation","authors":"S. S. Kumar, R. S. Vinod Kumar, D. Subbulekshmi","doi":"10.1002/ima.70107","DOIUrl":"https://doi.org/10.1002/ima.70107","url":null,"abstract":"<div>\u0000 \u0000 <p>Automated skin lesion segmentation is crucial for early and accurate skin cancer diagnosis. Deep learning, particularly U-Net, has revolutionized the field of automatic skin lesion segmentation. This review comprehensively examines U-Net and its variants employed for automated skin lesion segmentation. It outlines the foundational U-Net architecture and explores diverse architectural innovations, including attention mechanisms, advanced skip connections, residual and dilated convolutions, transformer models, and hybrid models. The review highlights how these adaptations address inherent challenges in skin lesion segmentation, including data limitations and lesion heterogeneity. It also discusses the commonly used datasets, evaluation metrics, and compares model performance and computational cost. Finally, it addresses the existing challenges and outlines future research directions to advance automated skin cancer diagnosis.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Task Network Guided by Ultrasound Features for Predicting BRAFV600E Mutation Status in Thyroid Ultrasound Images","authors":"Yansheng Xu, Lucheng Chang, Xiaohong Han, Xi Wei","doi":"10.1002/ima.70098","DOIUrl":"https://doi.org/10.1002/ima.70098","url":null,"abstract":"<div>\u0000 \u0000 <p>Thyroid cancer is recognized as one of the most prevalent malignancies worldwide, with its incidence often linked to the BRAF<sup>V600E</sup> mutation, a mutation in the BRaf protooncogene serine/threonine kinase (BRAF). The conventional method for detecting this mutation involves invasive fine-needle aspiration, highlighting the urgent need for a noninvasive alternative. This study aims to establish a predictive framework for BRAF<sup>V600E</sup> mutation status in thyroid cancer by leveraging the correlation between BRAF<sup>V600E</sup> and various ultrasound image features. The goal is to introduce a noninvasive technique for determining the mutation status, thus advancing thyroid cancer diagnostics. The investigation thoroughly examined ultrasound images of 3310 thyroid nodules, including 2115 instances of the BRAF<sup>V600E</sup> mutation, using a dataset approved by the Ethics Committee of Tianjin Medical University Cancer Institute and Hospital. A deep learning-based multitask model was developed and trained on a collection of 2718 images, which were marked by imbalanced feature labels. The model was then rigorously tested on a balanced set of 592 images to determine the mutation status. Using advanced deep learning techniques, the study designed a multitask learning model proficient in predicting the presence of the BRAF<sup>V600E</sup> mutation. This model utilized ultrasound characteristics such as composition, echogenicity, margin, echogenic foci, and shape. The model combines methods for local and global feature extraction, selection, and fusion. It begins by deriving feature representations from the ultrasound characteristics of thyroid nodules via multitask learning and then merges these features to pinpoint the signature representation indicative of the BRAF<sup>V600E</sup> mutation. The code is publicly available at https://github.com/xuyansheng07/MTL_BRAFV600E. The model exhibited significant predictive performance, achieving an accuracy rate of 92.91%, a sensitivity of 97.94%, and a specificity of 83.25%. Additionally, relationship exploration experiments conducted in this study meticulously explored the connection between gene mutations and ultrasound features, highlighting the critical role of echogenic foci features in predicting the BRAF<sup>V600E</sup> status. This study proposes a noninvasive method for predicting the BRAF<sup>V600E</sup> mutation status in thyroid nodules. The findings not only demonstrate the high predictive accuracy of the model but also highlight the importance of echogenic foci in determining mutation status. 
The introduction of this noninvasive predictive framework opens new avenues for future research.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
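A sketch of a shared-backbone multitask arrangement of the kind described, with one head per ultrasound characteristic plus a BRAF V600E head; the backbone, head sizes, and class counts are assumptions, not the released model (see the repository linked above for the authors' code):

```python
# Sketch: shared backbone with per-task heads for ultrasound characteristics
# and BRAF V600E status. Backbone choice and class counts are illustrative.
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskThyroidNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()               # 512-d shared representation
        self.backbone = backbone
        self.heads = nn.ModuleDict({
            "composition":    nn.Linear(512, 4),
            "echogenicity":   nn.Linear(512, 4),
            "margin":         nn.Linear(512, 3),
            "echogenic_foci": nn.Linear(512, 3),
            "shape":          nn.Linear(512, 2),
            "braf_v600e":     nn.Linear(512, 2),
        })

    def forward(self, x):
        feat = self.backbone(x)
        return {name: head(feat) for name, head in self.heads.items()}

outputs = MultiTaskThyroidNet()(torch.randn(2, 3, 224, 224))
print({k: v.shape for k, v in outputs.items()})
# Training would sum a cross-entropy term per task, possibly with task weights.
```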
"A Novel Approach for Dental X-Ray Enhancement and Caries Detection"
Sajid Ullah Khan, Sultan Alanazi, Fahdah Almarshad, Tallha Akram
International Journal of Imaging Systems and Technology, vol. 35, no. 3. DOI: 10.1002/ima.70108. Published 2025-05-07.

Abstract: Manual radiological diagnosis is time-consuming, error-prone, and subjective, especially for complex cases. Although current artificial intelligence models show promising results for identifying caries, they often fall short when images are not well pre-processed. This work is two-fold. First, we propose a novel layer-division non-zero elimination model to reduce Poisson noise and de-blur the acquired images. Second, we propose a more accurate and intuitive method for segmenting and classifying dental caries. A total of 17,840 radiographs, a mix of bitewing and periapical X-rays, were used for classification with ResNet-50 and segmentation with ResUNet. ResNet-50 uses skip connections within its residual blocks to mitigate gradient problems when detecting cavities, while ResUNet combines the encoder-decoder structure of U-Net with residual blocks to improve segmentation performance on radiographs with cavities. The stochastic gradient descent optimizer was employed during training to promote convergence and improve accuracy. ResNet-50 outperformed earlier variants such as ResNet-18 and ResNet-34, achieving a recognition accuracy of 87% on the classification task, a promising result. Similarly, ResUNet outperformed existing state-of-the-art models such as CariesNet, DeepLab v3, and U-Net++, achieving a segmentation accuracy of 98%.
{"title":"A Leakage-Resistant Spatially Weighted Active Contour for Brain Tumor Segmentation","authors":"Bijay Kumar Sa, Sanjay Agrawal, Rutuparna Panda","doi":"10.1002/ima.70110","DOIUrl":"https://doi.org/10.1002/ima.70110","url":null,"abstract":"<div>\u0000 \u0000 <p>Accurate delineation of brain tumor in a magnetic resonance (MR) image is crucial for its prognosis. Recently, active contour models (ACM) are increasingly being applied in brain tumor segmentation, owing to their flexibility in capturing intricate boundaries and optimization-driven approach. However, the accuracy of these models often gets limited due to the image's intensity inhomogeneity induced false convergence and leakage through weak edged boundaries. In contrast to the traditional ACMs that use fixed or adaptive scalar weights, we propose to counter these limitations using spatially adaptive weights for the contour's regularization energy terms. This keeps the ACM independent of the weight initializations. Further, no exclusive image-fitting term is required in its overall energy, as the spatial weighting of the regularization terms can inhibit the contour's motion near the boundary pixels. Our model dynamically adjusts the variable weight elements along the contour based on Hellinger distances of the local intensity distributions from a reference. It mitigates leakage by using a special weighting factor that checks contour motion particularly at points of changing intensity statistics. Despite the overhead caused by the local evaluation of spatial weights along the contour, implementation using parallel processing maintains a decent computational efficiency. Experimental results obtained on Cheng's brain MR dataset demonstrate the model's accuracy and robustness against various levels of inhomogeneity and boundary smoothness. Further tests on multiple other medical images highlight its generality. It outperforms the compared state-of-the-art machine learning models and major ACMs.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143919588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}