{"title":"Artificial intelligence and deep learning algorithms for epigenetic sequence analysis: A review for epigeneticists and AI experts.","authors":"Muhammad Tahir, Mahboobeh Norouzi, Shehroz S Khan, James R Davie, Soichiro Yamanaka, Ahmed Ashraf","doi":"10.1016/j.compbiomed.2024.109302","DOIUrl":"10.1016/j.compbiomed.2024.109302","url":null,"abstract":"<p><p>Epigenetics encompasses mechanisms that can alter the expression of genes without changing the underlying genetic sequence. The epigenetic regulation of gene expression is initiated and sustained by several mechanisms such as DNA methylation, histone modifications, chromatin conformation, and non-coding RNA. The changes in gene regulation and expression can manifest in the form of various diseases and disorders such as cancer and congenital deformities. Over the last few decades, high-throughput experimental approaches have been used to identify and understand epigenetic changes, but these laboratory experimental approaches and biochemical processes are time-consuming and expensive. To overcome these challenges, machine learning and artificial intelligence (AI) approaches have been extensively used for mapping epigenetic modifications to their phenotypic manifestations. In this paper, we provide a narrative review of published research on AI models trained on epigenomic data to address a variety of problems such as prediction of disease markers, gene expression, enhancer-promoter interaction, and chromatin states. The purpose of this review is twofold, as it is addressed to both AI experts and epigeneticists. For AI researchers, we provide a taxonomy of epigenetics research problems that can benefit from an AI-based approach. For epigeneticists, we provide a list of candidate AI solutions from the literature for each of the above problems. 
We have also identified several gaps in the literature, research challenges, and recommendations to address these challenges.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"183 ","pages":"109302"},"PeriodicalIF":7.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142580972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
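The review above surveys deep models trained on epigenomic sequence data. As a point of orientation for AI readers, here is a minimal sketch of the one-hot encoding step that most such sequence models start from; the base mapping and example sequence are illustrative choices of ours, not taken from the review.

```python
# Illustrative sketch: one-hot encoding of a DNA sequence, the usual
# input representation for sequence-based deep learning models.
# The mapping and example sequence are our own, not from the review.

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a list of 4-element indicator vectors.
    Unknown bases (e.g. 'N') map to an all-zero vector."""
    table = {b: i for i, b in enumerate(BASES)}
    out = []
    for base in seq.upper():
        vec = [0] * len(BASES)
        if base in table:
            vec[table[base]] = 1
        out.append(vec)
    return out

encoded = one_hot("ACGTN")  # 5 positions, last one all-zero
```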
{"title":"Integrating multimodal learning for improved vital health parameter estimation.","authors":"Ashish Marisetty, Prathistith Raj Medi, Praneeth Nemani, Venkanna Udutalapally, Debanjan Das","doi":"10.1016/j.compbiomed.2024.109104","DOIUrl":"10.1016/j.compbiomed.2024.109104","url":null,"abstract":"<p><p>Malnutrition poses a significant threat to global health, resulting from an inadequate intake of essential nutrients that adversely impacts vital organs and overall bodily functioning. Periodic examinations and mass screenings, incorporating both conventional and non-invasive techniques, have been employed to combat this challenge. However, these approaches suffer from critical limitations, such as the need for additional equipment, lack of comprehensive feature representation, absence of suitable health indicators, and the unavailability of smartphone implementations for precise estimations of Body Fat Percentage (BFP), Basal Metabolic Rate (BMR), and Body Mass Index (BMI) to enable efficient smart-malnutrition monitoring. To address these constraints, this study presents a groundbreaking, scalable, and robust smart malnutrition-monitoring system that leverages a single full-body image of an individual to estimate height, weight, and other crucial health parameters within a multi-modal learning framework. Our proposed methodology involves the reconstruction of a highly precise 3D point cloud, from which 512-dimensional feature embeddings are extracted using a headless-3D classification network. Concurrently, facial and body embeddings are also extracted, and through the application of learnable parameters, these features are then utilized to estimate weight accurately. Furthermore, essential health metrics, including BMR, BFP, and BMI, are computed to comprehensively analyze the subject's health, subsequently facilitating the provision of personalized nutrition plans. 
While remaining robust to a wide range of lighting conditions across multiple devices, our model achieves a low Mean Absolute Error (MAE) of 4.7 cm in estimating height and 5.3 kg in estimating weight.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"183 ","pages":"109104"},"PeriodicalIF":7.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142459865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
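The abstract above describes combining 3D point-cloud, facial, and body embeddings through learnable parameters before estimating weight. A toy sketch of one plausible late-fusion scheme, softmax-gated concatenation, follows; the dimensions, gate values, and fusion rule are our assumptions, not the paper's architecture.

```python
import math

# Hypothetical sketch of late fusion: per-modality embeddings
# (3D point cloud, face, body) scaled by learnable softmax gates
# and concatenated before a downstream regressor. All values invented.

def softmax(ws):
    m = max(ws)
    exps = [math.exp(w - m) for w in ws]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(embeddings, gates):
    """Scale each modality embedding by its softmax gate, then concatenate."""
    alphas = softmax(gates)
    fused = []
    for a, emb in zip(alphas, embeddings):
        fused.extend(a * x for x in emb)
    return fused

cloud, face, body = [1.0, 2.0], [0.5, 0.5], [3.0, 1.0]
fused = fuse([cloud, face, body], gates=[0.0, 0.0, 0.0])  # equal gates -> 1/3 each
```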
{"title":"Riemannian manifold-based geometric clustering of continuous glucose monitoring to improve personalized diabetes management.","authors":"Jiafeng Song, Jocelyn McNeany, Yifei Wang, Tanicia Daley, Arlene Stecenko, Rishikesan Kamaleswaran","doi":"10.1016/j.compbiomed.2024.109255","DOIUrl":"10.1016/j.compbiomed.2024.109255","url":null,"abstract":"<p><strong>Background: </strong>Continuous Glucose Monitoring (CGM) provides a detailed representation of glucose fluctuations in individuals, offering a rich dataset for understanding glycemic control in diabetes management. This study explores the potential of Riemannian manifold-based geometric clustering to analyze and interpret CGM data for individuals with Type 1 Diabetes (T1D) and healthy controls (HC), aiming to enhance diabetes management and treatment personalization.</p><p><strong>Methods: </strong>We utilized CGM data from publicly accessible datasets, covering both T1D individuals on insulin and HC. Data were segmented into daily intervals, from which 27 distinct glycemic features were extracted. Uniform Manifold Approximation and Projection (UMAP) was then applied to reduce dimensionality and visualize the data, with model performance validated through correlation analysis between the Silhouette Score (SS), computed against the HC cluster, and HbA1c levels.</p><p><strong>Results: </strong>UMAP effectively distinguished between T1D on daily insulin and HC groups, with data points clustering according to glycemic profiles. Moderate inverse correlations were observed between the SS against the HC cluster and HbA1c levels, supporting the clinical relevance of the UMAP-derived metric.</p><p><strong>Conclusions: </strong>This study demonstrates the utility of UMAP in enhancing the analysis of CGM data for diabetes management. We revealed distinct clustering of glycemic profiles between healthy individuals and diabetics on daily insulin, indicating that in most instances insulin does not restore a normal glycemic phenotype. 
In addition, the SS quantifies the degree of this continued dysglycemia on a day-by-day basis and therefore potentially offers a novel approach for personalized diabetes care.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"183 ","pages":"109255"},"PeriodicalIF":7.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142459879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
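The study above segments CGM traces into daily intervals and extracts 27 glycemic features per day before UMAP. A toy sketch of that per-day feature-extraction step follows, using three common glycemic features (mean, SD, time-in-range) as stand-ins; the paper's 27-feature set is not reproduced here.

```python
import math

# Sketch of per-day feature extraction: from one day of CGM readings
# (mg/dL), compute a few illustrative glycemic features. The thresholds
# 70-180 mg/dL are the conventional time-in-range bounds.

def daily_features(readings, lo=70, hi=180):
    n = len(readings)
    mean = sum(readings) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in readings) / n)
    tir = sum(lo <= r <= hi for r in readings) / n  # fraction in range
    return {"mean": mean, "sd": sd, "tir": tir}

day = [90, 110, 150, 200, 130, 100]  # invented toy readings
feats = daily_features(day)
```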
{"title":"Deep learning approaches for automated classification of neonatal lung ultrasound with assessment of human-to-AI interrater agreement.","authors":"Noreen Fatima, Umair Khan, Xi Han, Emanuela Zannin, Camilla Rigotti, Federico Cattaneo, Giulia Dognini, Maria Luisa Ventura, Libertario Demi","doi":"10.1016/j.compbiomed.2024.109315","DOIUrl":"10.1016/j.compbiomed.2024.109315","url":null,"abstract":"<p><p>Neonatal respiratory disorders pose significant challenges in clinical settings, often requiring rapid and accurate diagnostic solutions for effective management. Lung ultrasound (LUS) has emerged as a promising tool to evaluate respiratory conditions in neonates. This evaluation is mainly based on the interpretation of visual patterns (horizontal artifacts, vertical artifacts, and consolidations). Automated interpretation of these patterns can assist clinicians in their evaluations. However, developing AI-based solutions for this purpose is challenging, primarily due to the lack of annotated data and inherent subjectivity in expert interpretations. This study aims to propose an automated solution for the reliable interpretation of patterns in LUS videos of newborns. We employed two distinct strategies. The first strategy is a frame-to-video-level approach that computes frame-level predictions, either from deep learning (DL) models trained from scratch (F2V-TS) or from fine-tuned pre-trained models (F2V-FT), followed by aggregation of those predictions for video-level evaluation. The second strategy is a direct video classification approach (DV) for evaluating LUS data. To evaluate our methods, we used LUS data from 34 neonatal patients comprising 70 exams with annotations provided by three expert human operators (3HOs). Results show that within the frame-to-video-level approach, F2V-FT achieved the best performance with an accuracy of 77%, showing moderate agreement with the 3HOs. 
The direct video classification approach resulted in an accuracy of 72%, showing substantial agreement with the 3HOs. Our proposed study lays the foundation for reliable AI-based solutions for newborn LUS data evaluation.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"183 ","pages":"109315"},"PeriodicalIF":7.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142590267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
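The frame-to-video-level strategy described above aggregates frame predictions into a single video label. The abstract does not specify the aggregation rule; mean pooling of frame-level class probabilities followed by an argmax is one plausible choice, sketched here on invented data.

```python
# Sketch of frame-to-video aggregation: average per-frame class
# probabilities over the video and take the argmax as the video label.
# The actual rule used by F2V-TS/F2V-FT is an assumption here.

def video_label(frame_probs):
    """frame_probs: list of per-frame probability vectors (equal length)."""
    n_classes = len(frame_probs[0])
    avg = [sum(p[c] for p in frame_probs) / len(frame_probs)
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

frames = [[0.7, 0.2, 0.1],   # e.g. horizontal artifact, vertical, consolidation
          [0.4, 0.5, 0.1],
          [0.6, 0.3, 0.1]]
label = video_label(frames)  # class 0 wins on average
```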
{"title":"An adaptive enhanced human memory algorithm for multi-level image segmentation for pathological lung cancer images.","authors":"Mahmoud Abdel-Salam, Essam H Houssein, Marwa M Emam, Nagwan Abdel Samee, Mona M Jamjoom, Gang Hu","doi":"10.1016/j.compbiomed.2024.109272","DOIUrl":"10.1016/j.compbiomed.2024.109272","url":null,"abstract":"<p><p>Lung cancer is a critical health issue that demands swift and accurate diagnosis for effective treatment. In medical imaging, segmentation is crucial for identifying and isolating regions of interest, which is essential for precise diagnosis and treatment planning. Traditional metaheuristic-based segmentation methods often struggle with slow convergence, poorly optimized thresholds, and imbalanced exploration and exploitation, leading to suboptimal performance in multi-threshold segmentation of lung cancer images. This study presents ASG-HMO, an enhanced variant of the Human Memory Optimization (HMO) algorithm, selected for its simplicity, versatility, and minimal parameters. Although HMO has never been applied to multi-thresholding image segmentation, its characteristics make it well suited to improving pathological lung cancer image segmentation. ASG-HMO incorporates four innovative strategies that address key challenges in the segmentation process. First, an enhanced adaptive mutualism phase balances exploration and exploitation to accurately delineate tumor boundaries without getting trapped in suboptimal solutions. Second, a spiral motion strategy adaptively refines segmentation solutions by focusing on both the overall lung structure and the intricate tumor details. Third, a Gaussian mutation strategy introduces diversity into the search process, enabling the exploration of a broader range of segmentation thresholds to enhance the accuracy of segmented regions. 
Finally, the adaptive t-distribution disturbance strategy is proposed to help the algorithm avoid local optima and refine segmentation in later stages. The effectiveness of ASG-HMO is validated through rigorous testing on the IEEE CEC'17 and CEC'20 benchmark suites, followed by its application to multilevel thresholding segmentation in nine histopathology lung cancer images. In these experiments, six different segmentation thresholds were tested, and the algorithm was compared to several classical, recent, and advanced segmentation algorithms. In addition, the proposed ASG-HMO leverages 2D Renyi entropy and 2D histograms to enhance the precision of the segmentation process. Quantitative result analysis in pathological lung cancer segmentation showed that ASG-HMO achieved superior maximum Peak Signal-to-Noise Ratio (PSNR) of 31.924, Structural Similarity Index Measure (SSIM) of 0.919, Feature Similarity Index Measure (FSIM) of 0.990, and Probability Rand Index (PRI) of 0.924. These results indicate that ASG-HMO significantly outperforms existing algorithms in both convergence speed and segmentation accuracy. This demonstrates the robustness of ASG-HMO as a framework for precise segmentation of pathological lung cancer images, offering substantial potential for improving clinical diagnostic processes.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"183 ","pages":"109272"},"PeriodicalIF":7.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142459858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
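Among the metrics reported above, PSNR is simple enough to sketch directly. The toy signals below are illustrative and not drawn from the paper's images.

```python
import math

# Peak Signal-to-Noise Ratio for 8-bit data: 10 * log10(MAX^2 / MSE).
# Higher is better; identical signals give infinite PSNR.

def psnr(original, segmented, max_val=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(original, segmented)) / len(original)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [52, 60, 61, 200]   # invented toy pixel values
out = [50, 60, 63, 198]
score = psnr(ref, out)    # MSE = 3 here
```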
{"title":"The efficient classification of breast cancer on low-power IoT devices: A study on genetically evolved U-Net.","authors":"Mohit Agarwal, Amit Kumar Dwivedi, Dibyanarayan Hazra, Preeti Sharma, Suneet Kumar Gupta, Deepak Garg","doi":"10.1016/j.compbiomed.2024.109296","DOIUrl":"10.1016/j.compbiomed.2024.109296","url":null,"abstract":"<p><p>Breast cancer is the most common cancer among women, and in some cases, it also affects men. Since early detection allows for proper treatment, automated data classification is essential. Although such classifications provide timely results, the resource requirements for such models, i.e., computation and storage, are high. As a result, these models are not suitable for resource-constrained devices (for example, IoT devices). In this work, we highlight the U-Net model, and to deploy it to IoT devices, we compress the model using a genetic algorithm. We assess the proposed method using a publicly accessible, benchmarked dataset. To verify the efficacy of the suggested methodology, we conducted experiments on two more datasets, specifically CamVid and Potato leaf disease. In addition, we used the suggested method to shrink the MiniSegNet and FCN 32 models, which shows that the compressed U-Net approach works for classifying breast cancer. The results of the study indicate a significant decrease in the storage requirements of U-Net, with 96.12% compression for the breast cancer dataset and a 1.97x improvement in inference time. 
However, after compression of the model, there is a drop in accuracy of only 1.33%.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"183 ","pages":"109296"},"PeriodicalIF":7.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142581370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
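The genetic-algorithm compression idea above can be sketched as a toy binary-chromosome search, where bit i keeps (1) or prunes (0) filter i. The importance scores and fitness function below are invented stand-ins; the paper's actual fitness would be driven by the pruned U-Net's accuracy, which is not reproduced here.

```python
import random

# Toy GA for filter pruning. Fitness rewards keeping "important" filters
# while penalizing model size; both IMPORTANCE and SIZE_PENALTY are
# invented for illustration.

IMPORTANCE = [0.9, 0.1, 0.8, 0.2, 0.7, 0.1]  # hypothetical per-filter scores
SIZE_PENALTY = 0.3

def fitness(chrom):
    kept = sum(chrom)
    score = sum(i for c, i in zip(chrom, IMPORTANCE) if c)
    return score - SIZE_PENALTY * kept / len(chrom)

def evolve(pop_size=20, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in IMPORTANCE] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # elitist selection
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(len(child))] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```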
{"title":"A conflict-free multi-modal fusion network with spatial reinforcement transformers for brain tumor segmentation.","authors":"Tianyun Hu, Hongqing Zhu, Ziying Wang, Ning Chen, Bingcang Huang, Weiping Lu, Ying Wang","doi":"10.1016/j.compbiomed.2024.109331","DOIUrl":"10.1016/j.compbiomed.2024.109331","url":null,"abstract":"<p><p>Brain gliomas are a leading cause of cancer mortality worldwide. Existing glioma segmentation approaches using multi-modal inputs often rely on a simplistic approach of stacking images from all modalities, disregarding modality-specific features that could optimize diagnostic outcomes. This paper introduces STE-Net, a spatial reinforcement hybrid Transformer-based tri-branch multi-modal evidential fusion network designed for conflict-free brain tumor segmentation. STE-Net features two independent encoder-decoder branches that process distinct modality sets, along with an additional branch that integrates features through a cross-modal channel-wise fusion (CMCF) module. The encoder employs a spatial reinforcement hybrid Transformer (SRHT), which combines a Swin Transformer block and a modified convolution block to capture richer spatial information. At the output level, a conflict-free evidential fusion mechanism (CEFM) is developed, leveraging the Dempster-Shafer (D-S) evidence theory and a conflict-solving strategy within a complex network framework. This mechanism ensures balanced reliability among the three output heads and mitigates potential conflicts. Each output is treated as a node in the complex network, and its importance is reassessed through the computation of direct and indirect weights to prevent potential mutual conflicts. We evaluate STE-Net on three public datasets: BraTS2018, BraTS2019, and BraTS2021. Both qualitative and quantitative results demonstrate that STE-Net outperforms several state-of-the-art methods. Statistical analysis further confirms the strong correlation between predicted tumors and ground truth. 
The code for this project is available at https://github.com/whotwin/STE-Net.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"183 ","pages":"109331"},"PeriodicalIF":7.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142590258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
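The CEFM described above builds on Dempster-Shafer (D-S) evidence theory. A minimal sketch of Dempster's rule of combination for two mass functions over singleton hypotheses follows; the paper's three-head, conflict-solving extension within a complex network is not reproduced here, and the mass values are invented.

```python
# Dempster's rule of combination restricted to singleton hypotheses:
# agreeing mass products are kept and renormalized by (1 - conflict).

def dempster_combine(m1, m2):
    hypotheses = set(m1) | set(m2)
    conflict = sum(m1.get(a, 0) * m2.get(b, 0)
                   for a in m1 for b in m2 if a != b)
    if conflict >= 1.0:
        raise ValueError("total conflict; masses cannot be combined")
    norm = 1.0 - conflict
    return {h: m1.get(h, 0) * m2.get(h, 0) / norm for h in hypotheses}

m1 = {"tumor": 0.8, "background": 0.2}  # invented masses from two heads
m2 = {"tumor": 0.6, "background": 0.4}
combined = dempster_combine(m1, m2)     # agreement on "tumor" is reinforced
```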
{"title":"Simulation of hip bony range of motion (BROM) corresponds to the observed functional range of motion (FROM) for pure flexion, internal rotation in deep flexion, and external rotation in minimal flexion-extension - A cadaver study.","authors":"Arnab Palit, Mark A Williams, Ercihan Kiraci, Vineet Seemala, Vatsal Gupta, Jim Pierrepont, Christopher Plaskos, Richard King","doi":"10.1016/j.compbiomed.2024.109270","DOIUrl":"10.1016/j.compbiomed.2024.109270","url":null,"abstract":"<p><strong>Background: </strong>The study investigated the relationship between computed bony range of motion (BROM) and actual functional range of motion (FROM) as directly measured in cadaveric hips. The hypothesis was that some hip movements are not substantially restricted by soft tissues, and therefore, computed BROM for these movements may effectively represent FROM, providing a reliable parameter for computational pre-operative planning.</p><p><strong>Methods: </strong>Maximum passive FROM was measured in nine cadaveric hips using optical tracking. Each hip was measured in at least ninety FROM positions, covering flexion, extension, abduction, flexion-internal rotation (IR), flexion-external rotation (ER), extension-IR, and extension-ER movements. The measured FROM was virtually recreated using 3D models of the femur and pelvis derived from CT scans, and the corresponding BROM was computed. The relationship between FROM and BROM was classified into three groups: close (mean difference<5°), moderate (mean difference 5-15°), and weak (mean difference>15°).</p><p><strong>Results: </strong>The relationship between FROM and BROM was close for pure flexion (difference = 3.1° ± 3.9°) and IR in deep (>70°) flexion (difference = 4.3° ± 4.6°). The relationship was moderate for ER in minimal flexion (difference = 10.3° ± 5.8°) and ER in minimal extension (difference = 11.7° ± 7.2°). Bony impingement was observed in some cases during these movements. 
Other movements showed a weak relationship: large differences were observed in extension (51.9° ± 14.4°), abduction (18.6° ± 11.3°), flexion-IR at flexion<70° (37.1° ± 9.4°), extension-IR (79.6° ± 4.8°), flexion-ER at flexion>30° (45.9° ± 11.3°), and extension-ER at extension>20° (15.8° ± 4.8°).</p><p><strong>Conclusion: </strong>BROM simulations of hip flexion, IR in deep flexion, and ER in low flexion/extension may be useful in dynamic pre-operative planning of total hip arthroplasty.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"183 ","pages":"109270"},"PeriodicalIF":7.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142590285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
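The study above classifies the FROM-BROM relationship three ways by mean difference (close below 5 degrees, moderate 5-15 degrees, weak above 15 degrees). That rule is easy to state in code, applied here to a few of the reported values:

```python
# Three-way classification of mean FROM-BROM difference (in degrees),
# following the thresholds stated in the abstract.

def classify(mean_diff_deg):
    if mean_diff_deg < 5:
        return "close"
    if mean_diff_deg <= 15:
        return "moderate"
    return "weak"

results = {
    "pure flexion": classify(3.1),
    "IR in deep flexion": classify(4.3),
    "ER in minimal flexion": classify(10.3),
    "extension": classify(51.9),
}
```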
{"title":"The untapped potential of 3D virtualization using high resolution scanner-based and photogrammetry technologies for bone bank digital modeling.","authors":"Anuar Giménez-El-Amrani, Andres Sanz-Garcia, Néstor Villalba-Rojas, Vicente Mirabet, Alfonso Valverde-Navarro, Carmen Escobedo-Lucea","doi":"10.1016/j.compbiomed.2024.109340","DOIUrl":"10.1016/j.compbiomed.2024.109340","url":null,"abstract":"<p><p>Three-dimensional (3D) scanning technologies could transform medical practices by creating virtual tissue banks. In bone transplantation, new approaches are needed to provide surgeons with accurate tissue measurements while minimizing contamination risks and avoiding repeated freeze-thaw cycles of banked tissues. This study evaluates three prominent non-contact 3D scanning methods-structured light scanning (SLG), laser scanning (LAS), and photogrammetry (PHG)-to support tissue banking operations. We conducted a thorough examination of each technology and the precision of the 3D scanned bones using relevant anatomical specimens under sterile conditions. Cranial caps were scanned as separate inner and outer surfaces, automatically aligned, and merged with post-processing. A colorimetric analysis based on CIEDE2000 was performed, and the results were compared with questionnaires distributed among neurosurgeons. The findings indicate that certain 3D scanning methods were more appropriate for specific bones. Among the technologies, SLG emerged as optimal for tissue banking, offering a superior balance of accuracy, minimal distortion, cost-efficiency, and ease of use. All methods slightly underestimated the volume of the specimens in their virtual models. According to the colorimetric analysis and the questionnaires given to the neurosurgeons, our low-cost PHG system performed better than others in capturing cranial caps, although it exhibited the least dimensional accuracy. 
In conclusion, this study provides valuable insights for surgeons and tissue bank personnel in selecting the most efficient 3D non-contact scanning technology and optimizing protocols for modernized tissue banking. Future work will advance towards smart healthcare solutions and explore the development of virtual tissue banks.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"183 ","pages":"109340"},"PeriodicalIF":7.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142590297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
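The colorimetric analysis above is based on CIEDE2000. That formula is lengthy, so as an illustration here is the much simpler CIE76 color difference (Euclidean distance in CIELAB space), which CIEDE2000 refines with perceptual corrections; the Lab values below are invented.

```python
import math

# CIE76 delta-E: Euclidean distance between two CIELAB colors.
# CIEDE2000 (used in the study) adds lightness/chroma/hue weighting
# on top of this basic idea.

def delta_e_cie76(lab1, lab2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

bone_ref = (70.0, 5.0, 15.0)   # invented L*a*b* values for a specimen
scan_out = (68.0, 6.0, 13.0)   # invented values for its virtual model
diff = delta_e_cie76(bone_ref, scan_out)
```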
{"title":"Transformative artificial intelligence in gastric cancer: Advancements in diagnostic techniques.","authors":"Mobina Khosravi, Seyedeh Kimia Jasemi, Parsa Hayati, Hamid Akbari Javar, Saadat Izadi, Zhila Izadi","doi":"10.1016/j.compbiomed.2024.109261","DOIUrl":"10.1016/j.compbiomed.2024.109261","url":null,"abstract":"<p><p>Gastric cancer represents a significant global health challenge with elevated incidence and mortality rates, highlighting the need for advancements in diagnostic and therapeutic strategies. This review paper addresses the critical need for a thorough synthesis of the role of artificial intelligence (AI) in the management of gastric cancer. It provides an in-depth analysis of current AI applications, focusing on their contributions to early diagnosis, treatment planning, and outcome prediction. The review identifies key gaps and limitations in the existing literature by examining recent studies and technological developments. It aims to clarify the evolution of AI-driven methods and their impact on enhancing diagnostic accuracy, personalizing treatment strategies, and improving patient outcomes. The paper emphasizes the transformative potential of AI in overcoming the challenges associated with gastric cancer management and proposes future research directions to further harness AI's capabilities. 
Through this synthesis, the review underscores the importance of integrating AI technologies into clinical practice to revolutionize gastric cancer management.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"183 ","pages":"109261"},"PeriodicalIF":7.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142564110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}