Intelligence-based medicine: latest articles

Enhancing emotion recognition through multi-modal data fusion and graph neural networks
Intelligence-based medicine Pub Date : 2025-01-01 DOI: 10.1016/j.ibmed.2025.100291
Kasthuri Devarajan, Suresh Ponnan, Sundresan Perumal
{"title":"Enhancing emotion recognition through multi-modal data fusion and graph neural networks","authors":"Kasthuri Devarajan ,&nbsp;Suresh Ponnan ,&nbsp;Sundresan Perumal","doi":"10.1016/j.ibmed.2025.100291","DOIUrl":"10.1016/j.ibmed.2025.100291","url":null,"abstract":"<div><div>In this paper, a novel emotion detection system is proposed based on Graph Neural Network (GNN) architecture, which is used to integrate and learn from multiple data sets (EEG, face expression, physiological signals). The proposed GNN is able to learn about interactions between multiple modalities, so as to extract a single picture of emotion categorization. This model is very good and gets 91.25 % accuracy, 91.26 % precision, 91.25 % recall and 91.25 % F1-score. Moreover, the proposed GNN is a sensible trade-off between speed and precision, with a calculation time of 163 ms. The Proposed GNN is better, primarily due to its ability to represent complex relations between multi-modal inputs, thereby improving its real-time emotional state recognition and classification performance. The proposed GNN demonstrates its suitability for powerful emotion detection by outperforming all models in classification precision and multi-modal data fusion, surpassing traditional models such as SVM, KNN, CCA, CNN, and RNN. The Proposed GNN consistently proves to be the most accurate and robust solution, having been the most dominant technique in emotion detection, despite CNN and RNN achieving slightly better results.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100291"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144893233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
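The abstract gives no implementation details, so the following is only a minimal sketch, in plain PyTorch, of the kind of multi-modal graph fusion it describes: each modality (EEG, facial expression, physiological signals) is projected into a shared space, treated as a node in a small fully connected graph, mixed by one graph-convolution-style step, and pooled for classification. The layer sizes, the 3-node adjacency, and the four emotion classes are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiModalGNN(nn.Module):
    """Toy fusion: one node per modality, one graph-convolution-style mixing step."""

    def __init__(self, feat_dim=64, hidden_dim=32, num_classes=4):
        super().__init__()
        # Project each modality (EEG, facial expression, physiological) into a shared space.
        self.project = nn.ModuleList([nn.Linear(feat_dim, hidden_dim) for _ in range(3)])
        self.mix = nn.Linear(hidden_dim, hidden_dim)
        self.classify = nn.Linear(hidden_dim, num_classes)
        # Fully connected 3-node modality graph, row-normalized.
        adj = torch.ones(3, 3)
        self.register_buffer("adj", adj / adj.sum(dim=1, keepdim=True))

    def forward(self, eeg, face, physio):
        # Stack modality embeddings as graph nodes: (batch, 3, hidden_dim).
        nodes = torch.stack([p(x) for p, x in zip(self.project, (eeg, face, physio))], dim=1)
        nodes = torch.relu(self.mix(self.adj @ nodes))   # neighborhood aggregation
        graph_embedding = nodes.mean(dim=1)              # read-out over the 3 modality nodes
        return self.classify(graph_embedding)

model = MultiModalGNN()
logits = model(torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 64))
print(logits.shape)                                      # torch.Size([8, 4])
```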
Clinical-ready CNN framework for lung cancer classification: Systematic optimization for healthcare deployment with enhanced computational efficiency
Intelligence-based medicine Pub Date : 2025-01-01 DOI: 10.1016/j.ibmed.2025.100292
G. Inbasakaran, J. Anitha Ruth
{"title":"Clinical-ready CNN framework for lung cancer classification: Systematic optimization for healthcare deployment with enhanced computational efficiency","authors":"G. Inbasakaran,&nbsp;J. Anitha Ruth","doi":"10.1016/j.ibmed.2025.100292","DOIUrl":"10.1016/j.ibmed.2025.100292","url":null,"abstract":"<div><h3>Purpose</h3><div>This study develops a computationally efficient Convolutional Neural Network (CNN) for lung cancer classification in Computed Tomography (CT) images, addressing the critical need for accurate diagnostic tools deployable in resource-constrained clinical settings.</div></div><div><h3>Methods</h3><div>Using the IQ-OTH/NCCD dataset (1190 CT images: normal, benign, and malignant classes from 110 patients), we implemented systematic architecture optimization with strategic data augmentation to address class imbalance and limited dataset challenges. Patient-level data splitting prevented leakage, ensuring valid performance metrics. The model was evaluated using 5-fold cross-validation and compared against established architectures using McNemar's test for statistical significance.</div></div><div><h3>Results</h3><div>The optimized CNN achieved 94 % classification accuracy with only 4.2 million parameters and 18 ms inference time. Performance significantly exceeded AlexNet (85 %), VGG-16 (88 %), ResNet-50 (90 %), InceptionV3 (87 %), and DenseNet (86 %) with p &lt; 0.05. Malignant case detection showed excellent clinical metrics (precision: 0.96, recall: 0.95, F1-score: 0.95), critical for minimizing false negatives. Ablation studies revealed data augmentation contributed 6.6 % accuracy improvement, with rotation and translation proving most effective. The model operates 4.3 × faster than ResNet-50 while using 6 × fewer parameters, enabling deployment on standard clinical workstations with 4–8 GB GPU memory.</div></div><div><h3>Conclusions</h3><div>Carefully optimized CNN architectures can achieve superior diagnostic performance while meeting computational constraints of real-world medical settings. Our approach demonstrates that systematic optimization strategies effectively balance accuracy with clinical deployment feasibility, providing a practical framework for implementing AI-assisted lung cancer detection in resource-limited healthcare environments. The model's high sensitivity for malignant cases positions it as a valuable clinical decision support tool.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100292"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144893234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
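The patient-level splitting described in the Methods (no patient contributes images to both the training and test folds) can be illustrated with scikit-learn's GroupKFold. The sketch below uses random placeholder arrays sized like IQ-OTH/NCCD (1190 images, 110 patients); it is not the paper's pipeline.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_images = 1190
features = rng.normal(size=(n_images, 128))           # stand-in for CT images or embeddings
labels = rng.integers(0, 3, size=n_images)            # 0 = normal, 1 = benign, 2 = malignant
patient_ids = rng.integers(0, 110, size=n_images)     # 110 patients, as in IQ-OTH/NCCD

gkf = GroupKFold(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(gkf.split(features, labels, groups=patient_ids)):
    # No patient appears on both sides of a fold, which is what prevents leakage.
    assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
    print(f"fold {fold}: {len(train_idx)} train images, {len(test_idx)} test images")
```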
A drug recommendation system based on response prediction: Integrating gene expression and K-mer fragmentation of drug SMILES using LightGBM
Intelligence-based medicine Pub Date : 2025-01-01 DOI: 10.1016/j.ibmed.2025.100206
Sajid Naveed, Mujtaba Husnain
{"title":"A drug recommendation system based on response prediction: Integrating gene expression and K-mer fragmentation of drug SMILES using LightGBM","authors":"Sajid Naveed ,&nbsp;Mujtaba Husnain","doi":"10.1016/j.ibmed.2025.100206","DOIUrl":"10.1016/j.ibmed.2025.100206","url":null,"abstract":"<div><div>Medical experts and physicians examine the gene expression abnormality in glioblastoma (GBM) cancer patients to identify the drug response. The main objective of this research is to build a machine learning (ML) based model for improve the outcome of cancer medication to save the time and effort of medical practitioners. Developing a drug response recommendation system is our goal that uses the gene expression data of cancer cell lines to predict the response of anticancer drugs in terms of half-maximal inhibitory concentration (IC50). Genetic data from a GBM cancer patient is used as input into a system to predict and recommend the response of multiple anticancer drugs in a particular cancer sample. In this research, we used K-mer molecular fragmentation to process drug SMILES in a novel way, which enabled us to build a competent model that provides drug response. We used the Light Gradient Boosting Machine (LightGBM) regression algorithm and Genomics of Drug Sensitivity of Cancer (GDSC) data for this proposed recommendation system. The results showed that all predicted IC50 values are fall within the range of the real values when examining GBM data. Two drugs, temozolomide and carmustine, were predicted with a Mean Squared Error (MSE) of 0.10 and 0.11 respectively, and 0.41 in unseen test samples. These recommended responses were then verified by expert doctors, who confirmed that the responses to these drugs were very close to the actual response. These recommendation are also effective in slowing the growth of these tumors and improving patients quality of life by monitoring medication effects.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100206"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143173636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
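A minimal sketch of the two ingredients named in the abstract: character K-mer fragmentation of a drug SMILES string and a LightGBM regressor over the resulting counts concatenated with gene-expression features. The toy SMILES strings, the k value, the vocabulary, and the synthetic IC50 targets are assumptions for illustration and do not reproduce the authors' GDSC preprocessing.

```python
import numpy as np
from collections import Counter
from lightgbm import LGBMRegressor

def smiles_kmers(smiles, k=3):
    """Count overlapping character k-mers of a SMILES string."""
    return Counter(smiles[i:i + k] for i in range(len(smiles) - k + 1))

# Two toy SMILES strings used only to build an illustrative k-mer vocabulary.
drugs = ["CN1C(=O)N(C)C(=O)C2=C1N=CN2", "ClCCN(N=O)C(=O)NCCCl"]
vocab = sorted({km for s in drugs for km in smiles_kmers(s)})

def featurize(smiles, gene_expression):
    counts = smiles_kmers(smiles)
    kmer_vec = np.array([counts.get(km, 0) for km in vocab], dtype=float)
    return np.concatenate([kmer_vec, gene_expression])

rng = np.random.default_rng(0)
X = np.stack([featurize(drugs[i % 2], rng.normal(size=50)) for i in range(200)])
y = rng.normal(loc=2.0, size=200)                     # synthetic stand-in for ln(IC50) targets

model = LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X[:150], y[:150])
print("test MSE:", float(np.mean((model.predict(X[150:]) - y[150:]) ** 2)))
```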
Arkangel AI: A conversational agent for real-time, evidence-based medical question-answering
Intelligence-based medicine Pub Date : 2025-01-01 DOI: 10.1016/j.ibmed.2025.100274
Maria Camila Villa, Natalia Castano-Villegas, Isabella Llano, Julian Martinez, Maria Fernanda Guevara, Jose Zea, Laura Velásquez
{"title":"Arkangel AI: A conversational agent for real-time, evidence-based medical question-answering","authors":"Maria Camila Villa,&nbsp;Natalia Castano-Villegas,&nbsp;Isabella Llano,&nbsp;Julian Martinez,&nbsp;Maria Fernanda Guevara,&nbsp;Jose Zea,&nbsp;Laura Velásquez","doi":"10.1016/j.ibmed.2025.100274","DOIUrl":"10.1016/j.ibmed.2025.100274","url":null,"abstract":"<div><h3>Introduction</h3><div>Large Language Models (LLMs) have been trained and tested on several medical question-answering (QA) datasets built from medical licensing exams and natural interactions between doctors and patients to fine-tune them for specific health-related tasks.</div></div><div><h3>Objective</h3><div>We aimed to develop LLM-powered Conversational Agents (CAs) equipped to produce fast, accurate, and real-time responses to medical queries in different clinical and scientific scenarios. This paper presents Arkangel AI, our first conversational agent and research assistant.</div></div><div><h3>Methods</h3><div>The model is based on a system containing five LLMs; each is classified within a specific workflow with pre-defined instructions to produce the best search strategy and provide evidence-based answers. We assessed accuracy, intra/inter-class variability, and Cohen's Kappa using the question-answer (QA) dataset MedQA. Additionally, we used the PubMedQA dataset and assessed both databases using the RAGAS framework, including Context, Response Relevance, and Faithfulness. Traditional statistical analysis was performed with hypothesis tests and 95 % IC.</div></div><div><h3>Results</h3><div>Accuracy for MedQA (n: 1273) was 90.26 % and Cohen's kappa was 87 %, surpassing current SoTAs for other LLMs (GPT-4o, MedPaLM2). The model retrieved 80 % of the expected articles and provided relevant answers in 82 % of PubMedQA.</div></div><div><h3>Conclusion</h3><div>Arkangel AI showed proficient retrieval and reasoning abilities and unbiased responses. Evenly distributed medical QA datasets to train improved LLMs and external validation for the model with real-world physicians in clinical scenarios are needed. Clinical decision-making remains in the hands of trained healthcare professionals.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100274"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144829353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
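The headline metrics reported here (accuracy and Cohen's kappa on multiple-choice answers) can be computed with scikit-learn as in the short sketch below; the answer key and predictions shown are made-up placeholders, not MedQA data or Arkangel AI outputs.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

gold      = ["A", "C", "B", "D", "A", "B", "C", "D", "A", "B"]   # placeholder answer key
predicted = ["A", "C", "B", "D", "A", "B", "C", "A", "A", "B"]   # placeholder model answers

print(f"accuracy: {accuracy_score(gold, predicted):.2%}")
print(f"Cohen's kappa: {cohen_kappa_score(gold, predicted):.2f}")
```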
Design and implementation of a low-cost malaria diagnostic system based on convolutional neural network
Intelligence-based medicine Pub Date : 2025-01-01 DOI: 10.1016/j.ibmed.2025.100272
Ekobo Akoa Brice, Ndoumbe Jean, Mohamadou Madina
{"title":"Design and implementation of a low-cost malaria diagnostic system based on convolutional neural network","authors":"Ekobo Akoa Brice ,&nbsp;Ndoumbe Jean ,&nbsp;Mohamadou Madina","doi":"10.1016/j.ibmed.2025.100272","DOIUrl":"10.1016/j.ibmed.2025.100272","url":null,"abstract":"<div><div>This work focuses on the design and implementation of an intelligent system that can diagnose malaria from blood smear images. This system takes data in the image format and provides an instant and automated diagnosis to output the result of the patient’s condition on a screen. The methodology for achieving the system is based on the CNN (convolutional neural network). The latter has the specificity to function as a feature extractor and image classifier. The software part thus obtained is implemented in an electronic device that serves as a kit mounted with our care. The establishment of such a system has innumerable assets, such as rapidity during diagnosis by a laboratory technician or not; its portability that will facilitate its use wherever needed. From an ergonomic and functional point of view, the system has a real impact in the diagnosis of a large-scale malaria endemic. The CNN was trained on a large dataset of blood smears and was able to accurately classify infected and uninfected samples with high sensitivity and specificity. Insofar as the system carried out after testing on several samples reaches an average sensitivity of 89.50% and an average precision of 89%, this improves decision-making on the diagnosis of malaria. The system thus created allows malaria to be diagnosed at low cost from blood smear images. The use of CNNs in this project has the advantage of automatically extracting features from blood smear images and classifying them efficiently. The major advantage of the proposed system is its portability and lower cost. The performance of the proposed algorithm was evaluated on a publicly available malaria data set.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100272"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144596406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
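The abstract does not give the network layout, so the following is a minimal Keras sketch of a binary blood-smear classifier of the kind described, tracking recall (sensitivity) and precision during training. The patch size, layer widths, and optimizer are assumptions, not the authors' configuration.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),                 # small RGB smear patch (assumed size)
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # infected vs. uninfected
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.Recall(name="sensitivity"),    # recall on the infected class
             tf.keras.metrics.Precision(name="precision")],
)
model.summary()
```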
Feature selection using hybridized Genghis Khan Shark with snow ablation optimization technique for multi-disease prognosis
Intelligence-based medicine Pub Date : 2025-01-01 DOI: 10.1016/j.ibmed.2025.100249
Ruqsar Zaitoon, Shaik Salma Asiya Begum, Sachi Nandan Mohanty, Deepa Jose
{"title":"Feature selection using hybridized Genghis Khan Shark with snow ablation optimization technique for multi-disease prognosis","authors":"Ruqsar Zaitoon ,&nbsp;Shaik Salma Asiya Begum ,&nbsp;Sachi Nandan Mohanty ,&nbsp;Deepa Jose","doi":"10.1016/j.ibmed.2025.100249","DOIUrl":"10.1016/j.ibmed.2025.100249","url":null,"abstract":"<div><div>The exponential growth in medical data and feature dimensionality presents significant challenges in building accurate and efficient diagnostic models. High-dimensional datasets often contain redundant or irrelevant features that degrade classification performance and increase computational burden. Feature selection (FS) is therefore a critical step in medical data analysis to enhance model accuracy and interpretability. While many recent FS techniques rely on optimization algorithms, tuning their parameters and avoiding early convergence remain major challenges. This study introduces a novel hybrid optimization technique—Hybridized Genghis Khan Shark with Snow Ablation Optimization (HyGKS-SAO)—to identify the most informative features for multi-disease classification. The raw medical datasets are first pre-processed using a Tanh-based normalization method. The HyGKS-SAO algorithm then selects optimal features, balancing exploration and exploitation effectively. Finally, a multi-kernel support vector machine (SVM) is employed to classify diseases based on the selected features. The proposed framework is evaluated on six publicly available medical datasets, including breast cancer, diabetes, heart disease, stroke, lung cancer, and chronic kidney disease. Experimental results demonstrate the effectiveness of the proposed method, achieving 98 % accuracy, 97.99 % MCC, 96.31 % PPV, 97.35 % G-mean, 98.03 % Kappa Coefficient, and a low computation time of 50 s, outperforming several state-of-the-art approaches.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100249"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143848482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
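The HyGKS-SAO optimizer itself is novel and not reproduced here; the sketch below only shows the generic wrapper structure such a method plugs into: normalize features (a simplified tanh-style transform, echoing the abstract), score candidate binary feature masks with a cross-validated SVM, and keep the best mask. Random search stands in for the metaheuristic, and scikit-learn's single-kernel SVC stands in for the paper's multi-kernel SVM.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = np.tanh((X - X.mean(axis=0)) / X.std(axis=0))     # simplified tanh-style normalization

rng = np.random.default_rng(0)
best_mask, best_score = None, -np.inf
for _ in range(30):                                   # random candidate masks stand in for HyGKS-SAO
    mask = rng.random(X.shape[1]) < 0.5
    if not mask.any():
        continue
    score = cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=5).mean()
    if score > best_score:
        best_mask, best_score = mask, score

print(f"selected {int(best_mask.sum())} of {X.shape[1]} features, CV accuracy {best_score:.3f}")
```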
AI speechbots and 3D segmentations in virtual reality improve radiology on-call training in resource-limited settings
Intelligence-based medicine Pub Date : 2025-01-01 DOI: 10.1016/j.ibmed.2025.100245
Yusuf Alibrahim, Muhieldean Ibrahim, Devindra Gurdayal, Muhammad Munshi
{"title":"AI speechbots and 3D segmentations in virtual reality improve radiology on-call training in resource-limited settings","authors":"Yusuf Alibrahim ,&nbsp;Muhieldean Ibrahim ,&nbsp;Devindra Gurdayal ,&nbsp;Muhammad Munshi","doi":"10.1016/j.ibmed.2025.100245","DOIUrl":"10.1016/j.ibmed.2025.100245","url":null,"abstract":"<div><h3>Objective</h3><div>Evaluate the use of large-language model (LLM) speechbot tools and deep learning-assisted generation of 3D reconstructions when integrated in a virtual reality (VR) setting to teach radiology on-call topics to radiology residents.</div></div><div><h3>Methods</h3><div>Three first year radiology residents in Guyana were enrolled in an 8-week radiology course that focused on preparation for on-call duties. The course, delivered via VR headsets with custom software integrating LLM-powered speechbots trained on imaging reports and 3D reconstructions segmented with the help of a deep learning model. Each session focused on a specific radiology area, employing a didactic and case-based learning approach, enhanced with 3D reconstructions and an LLM-powered speechbot. Post-session, residents reassessed their knowledge and provided feedback on their VR and LLM-powered speechbot experiences.</div></div><div><h3>Results/discussion</h3><div>Residents found that the 3D reconstructions segmented semi-automatically by deep learning algorithms and AI-driven self-learning via speechbot was highly valuable. The 3D reconstructions, especially in the interventional radiology session, were helpful and the benefit is augmented by VR where navigating the models is seamless and perception of depth is pronounced. Residents also found conversing with the AI-speechbot seamless and was valuable in their post session self-learning. The major drawback of VR was motion sickness, which was mild and improved over time.</div></div><div><h3>Conclusion</h3><div>AI-assisted VR radiology education could be used to develop new and accessible ways of teaching a variety of radiology topics in a seamless and cost-effective way. This could be especially useful in supporting radiology education remotely in regions which lack local radiology expertise.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100245"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143747483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep learning based detection of endometriosis lesions in laparoscopic images with 5-fold cross-validation
Intelligence-based medicine Pub Date : 2025-01-01 DOI: 10.1016/j.ibmed.2025.100230
Shujaat Ali Zaidi, Varin Chouvatut, Chailert Phongnarisorn, Dussadee Praserttitipong
{"title":"Deep learning based detection of endometriosis lesions in laparoscopic images with 5-fold cross-validation","authors":"Shujaat Ali Zaidi ,&nbsp;Varin Chouvatut ,&nbsp;Chailert Phongnarisorn ,&nbsp;Dussadee Praserttitipong","doi":"10.1016/j.ibmed.2025.100230","DOIUrl":"10.1016/j.ibmed.2025.100230","url":null,"abstract":"<div><div>Endometriosis, a complex gynecological condition, presents significant diagnostic challenges due to the subtle and varied appearance of its lesions. This study leverages deep learning to classify endometriosis lesions in laparoscopic images using the Gynecologic Laparoscopy Endometriosis Dataset (GLENDA). Three deep learning models VGG19, ResNet50, and Inception V3 were trained and evaluated with 5-fold cross-validation to enhance generalizability and mitigate overfitting. Robust data augmentation techniques were applied to address dataset limitations. The models were tasked with classifying lesions into pathological and nonpathological categories. Experimental results demonstrated strong performance, with VGG19, ResNet50, and Inception V3 achieving accuracies of 0.89, 0.91, and 0.93, respectively. Inception V3 outperformed the others, highlighting its efficacy for this task. The findings underscore the potential of deep learning in improving endometriosis diagnosis, offering a reliable tool for clinicians. This study contributes to the growing field of AI-driven medical image analysis, emphasizing the value of cross-validation and data augmentation in enhancing model performance for specialized medical applications.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100230"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
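A hedged sketch of a transfer-learning classifier of the kind compared in this study, using the Inception V3 backbone that performed best. The frozen ImageNet weights (downloaded on first use), the binary pathological/non-pathological head, the input size, and the optimizer settings are assumptions rather than the paper's exact training setup.

```python
import tensorflow as tf

# Frozen ImageNet backbone; weights are downloaded on first use.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg"
)
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # pathological vs. non-pathological
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```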
Leveraging Conv-XGBoost algorithm for perceived mental stress detection using Photoplethysmography
Intelligence-based medicine Pub Date : 2025-01-01 DOI: 10.1016/j.ibmed.2025.100209
Geethu S. Kumar, B. Ankayarkanni
{"title":"Leveraging Conv-XGBoost algorithm for perceived mental stress detection using Photoplethysmography","authors":"Geethu S. Kumar,&nbsp;B. Ankayarkanni","doi":"10.1016/j.ibmed.2025.100209","DOIUrl":"10.1016/j.ibmed.2025.100209","url":null,"abstract":"<div><div>Stress detection is crucial for monitoring mental health and preventing stress-related disorders. Real-time stress detection shows promise with photoplethysmography (PPG), a non-invasive optical technology that analyzes blood volume changes in the microvascular bed of tissue. This study introduces a novel hybrid model, Conv-XGBoost, which combines Convolutional Neural Networks (CNN) and eXtreme Gradient Boosting (XGBoost) to improve the accuracy and robustness of stress detection from PPG signals. The Conv-XGBoost model utilizes the feature extraction capabilities of CNNs to process PPG signals, converting them into spectrograms that capture the time–frequency characteristics of data. The XGBoost component is essential for handling the complex, high-dimensional feature sets provided by the CNN, enhancing prediction capabilities through gradient boosting. This customized approach addresses the limitations of traditional machine learning algorithms in dealing with hand-crafted features. The Pulse Rate Variability-based Photoplethysmography dataset was chosen for training and validation. The outcomes of the experiments revealed that the proposed Conv-XGBoost model outperformed more conventional machine learning techniques with a training accuracy of 98.87%, validation accuracy of 93.28% and an F1-score of 97.25%. Additionally, the model demonstrated superior resilience to noise and variability in PPG signals, common in real-world scenarios. This study underscores how hybrid models can improve stress detection and sets the stage for future research integrating physiological signals with advanced deep learning techniques.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100209"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143377342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
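The sketch below mimics only the shape of the pipeline described (PPG segment to spectrogram to gradient-boosted classifier); simple spectrogram band averages stand in for the CNN feature extractor of Conv-XGBoost, and the signals, sampling rate, and labels are synthetic placeholders rather than the Pulse Rate Variability-based Photoplethysmography dataset.

```python
import numpy as np
from scipy.signal import spectrogram
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
fs = 64                                               # assumed PPG sampling rate (Hz)

def make_segment(stressed):
    """Synthetic 30-second PPG-like segment with a crude pulse-rate difference."""
    t = np.arange(30 * fs) / fs
    pulse_hz = 1.6 if stressed else 1.1
    return np.sin(2 * np.pi * pulse_hz * t) + 0.3 * rng.normal(size=t.size)

def band_features(segment):
    """Mean spectrogram power in the 0-5 Hz bands, standing in for CNN features."""
    freqs, _, power = spectrogram(segment, fs=fs, nperseg=256)
    return power.mean(axis=1)[freqs <= 5]

labels = rng.integers(0, 2, size=200)
X = np.stack([band_features(make_segment(s)) for s in labels])

clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(X[:150], labels[:150])
print("held-out accuracy:", (clf.predict(X[150:]) == labels[150:]).mean())
```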
BreastCare application: Moroccan Breast cancer diagnosis through deep learning-based image segmentation and classification
Intelligence-based medicine Pub Date : 2025-01-01 DOI: 10.1016/j.ibmed.2025.100254
Nouhaila Erragzi, Nabila Zrira, Safae Lanjeri, Youssef Omor, Anwar Jimi, Ibtissam Benmiloud, Rajaa Sebihi, Rachida Latib, Nabil Ngote, Haris Ahmad Khan, Shah Nawaz
{"title":"BreastCare application: Moroccan Breast cancer diagnosis through deep learning-based image segmentation and classification","authors":"Nouhaila Erragzi ,&nbsp;Nabila Zrira ,&nbsp;Safae Lanjeri ,&nbsp;Youssef Omor ,&nbsp;Anwar Jimi ,&nbsp;Ibtissam Benmiloud ,&nbsp;Rajaa Sebihi ,&nbsp;Rachida Latib ,&nbsp;Nabil Ngote ,&nbsp;Haris Ahmad Khan ,&nbsp;Shah Nawaz","doi":"10.1016/j.ibmed.2025.100254","DOIUrl":"10.1016/j.ibmed.2025.100254","url":null,"abstract":"<div><div>Breast cancer remains a critical health problem worldwide. Increasing survival rates requires early detection. Accurate classification and segmentation are crucial for effective diagnosis and treatment. Although breast imaging modalities offer many advantages for the diagnosis of breast cancer, the interpretation of breast ultrasound images has always been a vital issue for physicians and radiologists due to misdiagnosis. Moreover, detecting cancer at an early stage increases the chances of survival. This article presents two approaches: Attention-DenseUNet for the segmentation task and EfficientNetB7 for the classification task using public datasets: BUSI, UDIAT, BUSC, BUSIS, and STUHospital. These models are proposed in the context of Computer-Aided Diagnosis (CAD) for breast cancer detection. In the first study, we obtained an impressive Dice coefficient for all datasets, with scores of 88.93%, 95.35%, 92.79%, 93.29%, and 94.24%, respectively. In the classification task, we achieved a high accuracy using only four public datasets that include the two classes benign and malignant: BUSI, UDIAT, BUSC, and BUSIS, with an accuracy of 97%, 100%, 99%, and 94%, respectively. Generally, the results show that our proposed methods are considerably better than other state-of-the-art methods, which will undoubtedly help improve cancer diagnosis and reduce the number of false positives. Finally, we used the suggested approaches to create “Moroccan BreastCare”, an advanced breast cancer segmentation and classification software that automatically processes, segments, and classifies breast ultrasound images.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100254"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143943162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
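The Dice coefficient quoted for the segmentation results is computed on binary masks as in the short sketch below; the toy masks are placeholders, and the epsilon smoothing term is an implementation convenience rather than something taken from the paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A and B| / (|A| + |B|) on binary masks; eps avoids division by zero."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

rng = np.random.default_rng(0)
ground_truth = rng.random((256, 256)) > 0.7           # toy lesion mask
prediction = ground_truth.copy()
prediction[:10] = ~prediction[:10]                    # corrupt a few rows to mimic errors
print(f"Dice: {dice_coefficient(prediction, ground_truth):.4f}")
```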