{"title":"ESC-UNET: A hybrid CNN and Swin Transformers for skin lesion segmentation","authors":"Anwar Jimi , Nabila Zrira , Oumaima Guendoul , Ibtissam Benmiloud , Haris Ahmad Khan , Shah Nawaz","doi":"10.1016/j.ibmed.2025.100257","DOIUrl":"10.1016/j.ibmed.2025.100257","url":null,"abstract":"<div><div>One of the most important tasks in computer-aided diagnostics is the automatic segmentation of skin lesions, which plays an essential role in the early diagnosis and treatment of skin cancer. In recent years, the Convolutional Neural Network (CNN) has largely replaced traditional methods for segmenting skin lesions. However, skin lesion image segmentation remains challenging due to insufficient information and unclear lesion regions. In this paper, we propose a novel deep medical image segmentation approach named “ESC-UNET”, which combines the advantages of CNNs and Transformers, effectively leveraging both local information and long-range dependencies to enhance medical image segmentation. For local information, we use a CNN-based encoder and decoder framework: the CNN branch mines local information from medical images using the locality of convolution operations and a pre-trained EfficientNetB5 network. For long-range dependencies, we build a Transformer branch that emphasizes the global context. In addition, we employ Atrous Spatial Pyramid Pooling (ASPP) to gather relevant information across the network, and the Convolution Block Attention Module (CBAM) is added to promote informative features and suppress irrelevant ones during segmentation. We evaluated our network on the ISIC 2016, ISIC 2017, and ISIC 2018 datasets. The results demonstrate the effectiveness of the proposed model in segmenting skin lesions.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100257"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144168078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
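Segmentation results on the ISIC benchmarks are conventionally scored with an overlap metric such as the Dice similarity coefficient; the abstract does not name its metric, so the following is an illustrative sketch, not the paper's evaluation code:

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient for two binary masks given as flat 0/1 lists.

    Measures overlap: 1.0 for identical masks, near 0.0 for disjoint ones.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * intersection + eps) / (total + eps)

# identical masks score 1.0; disjoint masks score near 0.0
print(dice_coefficient([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0
```

In practice the masks would be flattened model outputs thresholded at 0.5, compared against expert annotations.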
{"title":"In-silo federated learning vs. centralized learning for segmenting acute and chronic ischemic brain lesions","authors":"Joon Kim , Hoyeon Lee , Jonghyeok Park , Sang Hyun Park , Myungjae Lee , Leonard Sunwoo , Chi Kyung Kim , Beom Joon Kim , Dong-Eog Kim , Wi-Sun Ryu","doi":"10.1016/j.ibmed.2025.100283","DOIUrl":"10.1016/j.ibmed.2025.100283","url":null,"abstract":"<div><h3>Objectives</h3><div>To investigate the efficacy of federated learning (FL) compared to industry-level centralized learning (CL) for segmenting acute infarct and white matter hyperintensity.</div></div><div><h3>Materials and methods</h3><div>This retrospective study included 13,546 diffusion-weighted images (DWI) from 10 hospitals and 8421 fluid-attenuated inversion recovery (FLAIR) images from 9 hospitals for acute (Task I) and chronic (Task II) lesion segmentation. We trained on datasets originating from 9 and 3 institutions for Task I and Task II, respectively, and externally tested on datasets originating from 1 and 6 institutions each. For FL, the central server aggregated training results every four rounds with FedYogi (Task I) and FedAvg (Task II). A batch clipping strategy was tested for the FL models. Performance was evaluated with the Dice similarity coefficient (DSC).</div></div><div><h3>Results</h3><div>The mean ages (SD) in the training datasets were 68.1 (12.8) for Task I and 67.4 (13.0) for Task II. The proportion of male participants was 51.5 % and 60.4 %, respectively. In Task I, the FL model employing batch clipping trained for 360 epochs achieved a DSC of 0.754 ± 0.183, surpassing an equivalently trained CL model (DSC 0.691 ± 0.229; p < 0.001) and comparable to the best-performing CL model at 940 epochs (DSC 0.755 ± 0.207; p = 0.701). In Task II, no significant differences were observed among the FL model with clipping, the FL model without clipping, and the CL model after 48 epochs (DSCs of 0.761 ± 0.299, 0.751 ± 0.304, and 0.744 ± 0.304). Few-shot FL showed significantly lower performance. In Task II, batch clipping reduced training time from 3.5 to 1.75 h.</div></div><div><h3>Conclusions</h3><div>Comparisons between CL and FL in identical settings suggest the feasibility of FL for medical image segmentation.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100283"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144771995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
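The central server's aggregation step can be illustrated with FedAvg, the simpler of the two aggregators used in this study (FedYogi adds adaptive server-side moments). A minimal sketch, assuming each client reports a flat parameter vector and its local dataset size:

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameter vectors,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# the second client holds 3x the data, so the average is pulled toward its weights
print(fed_avg([[1.0, 0.0], [5.0, 4.0]], [1, 3]))  # [4.0, 3.0]
```

In the real system each "weight vector" is the full set of segmentation-network parameters after a local training round.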
{"title":"Optimizing ResNet50 performance using stochastic gradient descent on MRI images for Alzheimer's disease classification","authors":"Mohamed Amine Mahjoubi , Driss Lamrani , Shawki Saleh , Wassima Moutaouakil , Asmae Ouhmida , Soufiane Hamida , Bouchaib Cherradi , Abdelhadi Raihani","doi":"10.1016/j.ibmed.2025.100219","DOIUrl":"10.1016/j.ibmed.2025.100219","url":null,"abstract":"<div><div>The field of optimization is focused on the formulation, analysis, and resolution of problems involving the minimization or maximization of functions. A particular subclass of optimization problems, known as empirical risk minimization, involves fitting a model to observed data. These problems play a central role in areas such as machine learning, statistical modeling, and decision theory, where the objective is to find a model that best approximates underlying patterns in the data by minimizing a specified loss or risk function. In deep learning (DL) systems, various optimization algorithms are utilized, with the gradient descent (GD) algorithm being one of the most significant and effective. Research has improved the GD algorithm and produced various successful variants, including stochastic gradient descent (SGD) with momentum, AdaGrad, RMSProp, and Adam. This article provides a comparative analysis of these stochastic gradient descent algorithms in terms of accuracy, loss, and training time, as well as the quality of the optimization solution each produces. Experiments were conducted using the Transfer Learning (TL) technique with a pre-trained ResNet50 base model for image classification, focusing on stochastic gradient methods for performance optimization. The case study is based on a data extract from the Alzheimer's image dataset, which contains four classes: Mild Demented, Moderate Demented, Non-Demented, and Very Mild Demented. The Adam and SGD-with-momentum optimizers achieved the highest accuracies, 97.66 % and 97.58 %, respectively.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100219"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143173637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
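The two best-performing optimizers differ only in how they accumulate gradient statistics. A minimal single-parameter sketch of the standard textbook update rules (not the paper's code; hyperparameter defaults follow common convention):

```python
import math

def sgd_momentum_step(w, grad, v, lr=0.01, beta=0.9):
    """One SGD-with-momentum update: v is a decaying sum of past gradients."""
    v = beta * v + grad
    return w - lr * v, v

def adam_step(w, grad, m, s, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected first (m) and second (s) moment estimates."""
    m = b1 * m + (1 - b1) * grad
    s = b2 * s + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)  # bias correction for step t (t >= 1)
    s_hat = s / (1 - b2 ** t)
    return w - lr * m_hat / (math.sqrt(s_hat) + eps), m, s

# first step from w=1.0 with gradient 0.5
w, v = sgd_momentum_step(1.0, grad=0.5, v=0.0)
print(w)  # 0.995
```

Adam's per-parameter scaling by the second-moment estimate is what typically lets it converge faster on the same architecture, at the cost of two extra state variables per weight.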
{"title":"A mobile application LukaKu as a tool for detecting external wounds with artificial intelligence","authors":"Dessy Novita , Herika Hayurani , Eva Krishna Sutedja , Firdaus Ryan Pratomo , Achmad Dino Saputra , Zahra Ramadhanti , Nuryadin Abutani , Muhammad Rafi Triandi , Aldin Mubarok Guferol , Anindya Apriliyanti Pravitasari , Fajar Wira Adikusuma , Atiek Rostika Noviyanti","doi":"10.1016/j.ibmed.2025.100200","DOIUrl":"10.1016/j.ibmed.2025.100200","url":null,"abstract":"<div><div>This study was motivated by the lack of applications that can assist people in treating common external wounds. We therefore propose LukaKu, an image-based detection application that takes images of external wounds and identifies them using Artificial Intelligence. Beyond detecting the type of wound, the application is expected to suggest first aid and medication for each external wound label. The model used is YOLOv5 in its various versions: YOLOv5n, YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x. On the validation data, each version was evaluated by precision, recall, F1-score, and Mean Average Precision (mAP), the factors used to determine the best model version: YOLOv5l achieved the highest mAP (0.785) and YOLOv5n the lowest (0.588). Model development required datasets of external wounds for training and a test dataset for each model version. After each version was built and analysed, the best-performing model was implemented in the mobile application, making it easily accessible to users.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100200"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143174331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
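The model-selection step described above reduces to simple arithmetic over the reported metrics; a sketch of the F1 computation and of picking the best version by mAP, using the two mAP values quoted in the abstract (the other versions' scores are not given):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def best_by_map(map_by_version):
    """Return the model version with the highest mAP."""
    return max(map_by_version, key=map_by_version.get)

# best and worst mAP values reported in the abstract
print(best_by_map({"YOLOv5l": 0.785, "YOLOv5n": 0.588}))  # YOLOv5l
```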
{"title":"Image-based machine learning model as a tool for classification of [18F]PR04.MZ PET images in patients with parkinsonian syndrome","authors":"Maria Jiménez , Cristian Soza-Ried , Vasko Kramer , Sebastian A. Ríos , Arlette Haeger , Carlos Juri , Horacio Amaral , Pedro Chana-Cuevas","doi":"10.1016/j.ibmed.2025.100232","DOIUrl":"10.1016/j.ibmed.2025.100232","url":null,"abstract":"<div><div>Parkinsonian syndrome (PS) is characterized by bradykinesia, resting tremor, and rigidity, and encapsulates the clinical manifestations observed in various neurodegenerative disorders. Positron emission tomography (PET) imaging plays an important role in diagnosing PS by detecting the progressive loss of dopaminergic neurons. This study aimed to develop and compare five machine-learning models for the automatic classification of 204 [<sup>18</sup>F]PR04.MZ PET images, distinguishing between patients with PS and subjects without clinical evidence for dopaminergic deficit (SWEDD). The dataset, previously analyzed and classified by three blinded expert readers as PS-compatible (1) or SWEDD (0), was processed in both two-dimensional and three-dimensional formats. Five widely used pattern-recognition algorithms were trained and their performance validated, then compared against the majority reading of the expert diagnoses, considered the gold standard. Comparing the accuracy of the 2D and 3D formats suggests that, without the depth dimension, a single image may overemphasize specific regions. Overall, three models stood out with accuracies greater than 98 %, demonstrating that machine-learning models trained on [<sup>18</sup>F]PR04.MZ PET images can provide a highly accurate and precise tool to support clinicians in automatic PET image analysis. This approach may be a first step toward reducing interpretation time and increasing certainty in the diagnostic process.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100232"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143628516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparison of techniques for predicting telehealth visit failure","authors":"Alexander J. Idarraga , David F. Schneider","doi":"10.1016/j.ibmed.2025.100235","DOIUrl":"10.1016/j.ibmed.2025.100235","url":null,"abstract":"<div><h3>Objective</h3><div>Telehealth is an increasingly important method for delivering care. Health systems lack the ability to accurately predict which telehealth visits will fail due to poor connection, poor technical literacy, or other reasons. This results in wasted resources and disrupted patient care. The purpose of this study is to characterize and compare various methods for predicting telehealth visit failure, and to determine the prediction method best suited for implementation in a real-time operational setting.</div></div><div><h3>Methods</h3><div>A single-center, retrospective cohort study was conducted using data sourced from our data warehouse. Patient demographic information and data characterizing prior visit success and engagement with electronic health tools were included. Three main model types were evaluated: an existing scoring model developed by Hughes et al., a regression-based scoring model, and Machine Learning classifiers. Variables were selected for their importance and anticipated availability; Number Needed to Treat (NNT) was used to demonstrate the number of interventions (e.g. pre-visit phone calls) required to improve success rates in the context of weekly patient volumes.</div></div><div><h3>Results</h3><div>217,229 visits spanning 480 days were evaluated, of which 22,443 (10.33 %) met criteria for failure. Hughes et al.’s model applied to our data yielded an Area Under the Receiver Operating Characteristics Curve (AUC ROC) of 0.678 when predicting failure. A score-based model achieved an AUC ROC of 0.698. Logistic Regression, Random Forest (RF), and Gradient Boosting models demonstrated AUC ROCs ranging from 0.7877 to 0.7969. An NNT of 32 was achieved if the 263 highest-risk patients were selected in a low-volume week using the RF classifier, compared to an expected NNT of 90 if the same number of patients were randomly selected.</div></div><div><h3>Conclusions</h3><div>Machine Learning classifiers demonstrated superiority over score-based methods for predicting telehealth visit failure. Prospective evaluation is required; evaluation using NNT as a metric can help to operationalize these models.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100235"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143747628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
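Number Needed to Treat follows from the absolute difference in failure rates between randomly targeted and model-ranked patients; a sketch of the conventional arithmetic (the standard NNT definition, not code from the paper):

```python
import math

def number_needed_to_treat(rate_without, rate_with):
    """NNT = 1 / absolute risk reduction, rounded up to whole patients."""
    arr = rate_without - rate_with
    if arr <= 0:
        raise ValueError("intervention must lower the failure rate")
    return math.ceil(1.0 / arr)

# halving a 50% failure rate means 4 interventions per failure prevented
print(number_needed_to_treat(0.50, 0.25))  # 4
```

Ranking by a classifier concentrates interventions on patients most likely to fail, which raises the achievable risk reduction per contact and thus lowers the NNT, as in the 32-vs-90 comparison above.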
{"title":"Comparative analysis of deep learning and machine learning techniques for forecasting new malaria cases in Cameroon’s Adamaoua region","authors":"Esaie Naroum , Ebenezer Maka Maka , Hamadjam Abboubakar , Paul Dayang , Appolinaire Batoure Bamana , Benjamin Garga , Hassana Daouda Daouda , Mohsen Bakouri , Ilyas Khan","doi":"10.1016/j.ibmed.2025.100220","DOIUrl":"10.1016/j.ibmed.2025.100220","url":null,"abstract":"<div><div>The Plasmodium parasite, which causes malaria, is transmitted by Anopheles mosquitoes, and the disease remains a major development barrier in Africa, particularly given environmental conditions conducive to its spread. This study examines several machine learning approaches, such as long short-term memory (LSTM) networks, random forests (RF), support vector machines (SVM), and regularized models including Ridge, Lasso, and ElasticNet, to forecast the occurrence of malaria in the Adamaoua region of Cameroon. The LSTM, a recurrent neural network variant, performed best, with 76% accuracy and a low error rate (RMSE = 0.08). Statistical evidence indicates that temperatures exceeding 34 degrees halt mosquito vector reproduction, thereby slowing the spread of malaria, whereas humidity increases morbidity. The survey also identified high-risk areas in Ngaoundéré Rural, Ngaoundéré Urban, and Meiganga, which accounted for 20.1%, 12.3%, and 10.0% of the Adamaoua region's malaria cases, respectively, between 2018 and 2022. According to the forecast, the number of malaria cases in the Adamaoua region will rise gradually between 2023 and 2026, peak in 2029, and decline by 2031.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100220"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
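Forecasting case counts with an LSTM requires framing the case series as supervised lookback windows; a minimal sketch of that preprocessing step (window length and toy data are illustrative, not from the study):

```python
def make_windows(series, lookback):
    """Frame a univariate series as (lookback values -> next value) pairs,
    the standard supervised setup for LSTM-style forecasting."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return X, y

# toy monthly case counts
X, y = make_windows([10, 12, 15, 14, 18, 21], lookback=3)
print(X[0], y[0])  # [10, 12, 15] 14
```

Each window of past counts (optionally joined with temperature and humidity covariates) becomes one training input, and the following count becomes its target.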
{"title":"Optimizing breast cancer diagnosis with convolutional autoencoders: Enhanced performance through modified loss functions","authors":"ArunaDevi Karuppasamy , Hamza zidoum , Majda Said Sultan Al-Rashdi , Maiya Al-Bahri","doi":"10.1016/j.ibmed.2025.100248","DOIUrl":"10.1016/j.ibmed.2025.100248","url":null,"abstract":"<div><div>Deep Learning (DL) has had a substantial impact on various pattern recognition applications, resulting in significant advancements in areas such as visual recognition, autonomous cars, language processing, and healthcare. Deep learning is now widely applied to medical images to identify diseases efficiently. Still, only a small number of applications are used in clinical settings, largely due to inadequate annotated data, image noise, and challenges related to data collection. Our research proposes a convolutional autoencoder to classify breast cancer tumors, using the Sultan Qaboos University Hospital (SQUH) and BreakHis datasets. The proposed model, named Convolutional AutoEncoder with modified Loss Function (CAE-LF), achieved good performance, attaining an F1-score of 0.90, a recall of 0.89, and an accuracy of 91%. These results are comparable to those reported in earlier research. Additional analyses on the SQUH dataset demonstrate good performance, with F1-scores of 0.91, 0.93, 0.92, and 0.93 for 4x, 10x, 20x, and 40x magnifications, respectively. Our study highlights the potential of deep learning in analyzing medical images to classify breast tumors.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100248"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143887937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
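The abstract does not spell out how the loss function is modified; one common modification for imbalanced histopathology data is a class-weighted binary cross-entropy, sketched here purely as an illustration (the weighting scheme is an assumption, not CAE-LF's actual loss):

```python
import math

def weighted_bce(y_true, y_pred, pos_weight=2.0, eps=1e-7):
    """Binary cross-entropy with the positive (e.g. malignant) class up-weighted,
    so missed positives cost more than missed negatives."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(pos_weight * t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# with pos_weight=1.0 this reduces to plain BCE: -log(0.5) ~= 0.693
print(round(weighted_bce([1], [0.5], pos_weight=1.0), 3))  # 0.693
```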
{"title":"Nanotechnology and machine learning: a promising confluence for the advancement of precision medicine","authors":"Shuaibu Saidu Musa , Adamu Muhammad Ibrahim , Muhammad Yasir Alhassan , Abubakar Hafs Musa , Abdulrahman Garba Jibo , Auwal Rabiu Auwal , Olalekan John Okesanya , Zhinya Kawa Othman , Muhammad Sadiq Abubakar , Mohamed Mustaf Ahmed , Carina Joane V. Barroso , Abraham Fessehaye Sium , Manuel B. Garcia , James Brian Flores , Adamu Safiyanu Maikifi , M.B.N. Kouwenhoven , Don Eliseo Lucero-Prisno","doi":"10.1016/j.ibmed.2025.100267","DOIUrl":"10.1016/j.ibmed.2025.100267","url":null,"abstract":"<div><div>The fusion of molecular-scale engineering in nanotechnology with machine learning (ML) analytics is reshaping the field of precision medicine. Nanoparticles enable ultrasensitive diagnostics, targeted drug and gene delivery, and high-resolution imaging, whereas ML models mine vast multimodal datasets to optimize nanoparticle design, enhance predictive accuracy, and personalize treatment in real-time. Recent breakthroughs include ML-guided formulations of lipid, polymeric, and inorganic carriers that cross biological barriers; AI-enhanced nanosensors that flag early disease from breath, sweat, or blood; and nanotheranostic agents that simultaneously track and treat tumors. Comparative insights into Retrieval-Augmented Generation and supervised learning pipelines reveal distinct advantages for nanodevice engineering across diverse data environments. An expanded focus on explainable AI tools, such as SHAP, LIME, Grad-CAM, and Integrated Gradients, highlights their role in enhancing transparency, trust, and interpretability in nano-enabled clinical decisions. A structured narrative review method was applied, and key ML model performances were synthesized to strengthen analytical clarity. 
Emerging biodegradable nanomaterials, autonomous micro-nanorobots, and hybrid lab-on-chip systems promise faster point-of-care decisions but raise pressing questions about data integrity, interpretability, scalability, regulation, ethics, and equitable access. Addressing these hurdles will require robust data standards, privacy safeguards, interdisciplinary R&D networks, and flexible approval pathways to translate bench advances into bedside benefits for patients. This review synthesizes the current landscape, critical challenges, and future directions at the intersection of nanotechnology and ML in precision medicine.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100267"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144271155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PU-MLP: A PU-learning based method for polypharmacy side-effects detection based on multi-layer perceptron and feature extraction techniques","authors":"Abedin Keshavarz, Amir Lakizadeh","doi":"10.1016/j.ibmed.2025.100265","DOIUrl":"10.1016/j.ibmed.2025.100265","url":null,"abstract":"<div><div>Polypharmacy, or the concurrent use of multiple medications, increases the risk of adverse effects due to drug interactions. As polypharmacy becomes more prevalent, forecasting these interactions is essential in the pharmaceutical field. Due to the limitations of clinical trials in detecting rare side effects associated with polypharmacy, computational methods are being developed to model these adverse effects. This study introduces a method named PU-MLP, based on a Multi-Layer Perceptron, to predict side effects from drug combinations. This research utilizes advanced machine learning techniques to explore the connections between medications and their adverse effects. The approach consists of three key stages: first, it creates an optimal representation of each drug using a combination of a random forest classifier, Graph Neural Networks (GNNs), and dimensionality reduction techniques. Second, it employs Positive Unlabeled learning to address data uncertainty. Finally, a Multi-Layer Perceptron model is utilized to predict polypharmacy side effects. 
Performance evaluation using 5-fold cross-validation shows that the proposed method surpasses other approaches, achieving impressive scores of 0.99, 0.99, and 0.98 in AUPR, AUC, and F1 measures, respectively.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100265"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144220989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
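The 5-fold cross-validation protocol partitions the drug-pair examples into five disjoint folds, each serving once as the held-out test set; a minimal index-splitting sketch (shuffling and stratification omitted for brevity):

```python
def kfold_indices(n, k=5):
    """Split range(n) into k contiguous, near-equal folds; each fold is used
    once as the test set while the remaining folds form the training set."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

# 12 examples split into 5 folds of sizes 3, 3, 2, 2, 2
print([len(f) for f in kfold_indices(12, k=5)])  # [3, 3, 2, 2, 2]
```

Reported AUPR, AUC, and F1 are then the averages of the per-fold scores.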