{"title":"Maize yield prediction using machine learning: a systematic literature review.","authors":"Jabulani Nyengere, Frank Tchuwa, Harineck Mayamiko Tholo, Lucius Malalu, Allena Laura Njala, Petros Kachulu, Rodney Maganga, Brenda Matewere, Lackson Jamu, Clement Nyirenda, Jones Kanjira, Macdonald Chabwera, Patson Nalivata, Weston Mwase, Agness Mwangwela","doi":"10.3389/frai.2026.1735157","DOIUrl":"https://doi.org/10.3389/frai.2026.1735157","url":null,"abstract":"<p><strong>Introduction: </strong>Accurate maize yield prediction is critical for food security planning, particularly in sub-Saharan Africa, where maize is essential to national economies and livelihoods. This systematic review assesses the use of machine learning (ML) techniques in maize yield estimation, focusing on the methodologies, predictor variables, and results in peer-reviewed studies.</p><p><strong>Methods: </strong>The review followed the PRISMA 2021 guidelines, synthesizing 81 peer-reviewed studies published between 2014 and 2025. The analysis examined the ML algorithms, predictor variables, evaluation metrics, and methodological gaps identified in these studies.</p><p><strong>Results: </strong>The review found a significant increase in publications after 2021, reflecting growing confidence in the application of ML for agronomic decision-support. Random Forest (49.4%), XGBoost (16.1%), and Support Vector Machines (12.4%) were the most common algorithms, with hybrid deep-learning frameworks showing superior performance. Environmental variables, remote-sensing indices, and soil properties were the most frequently used predictors. RMSE and <i>R</i> <sup>2</sup> were the primary evaluation metrics.</p><p><strong>Discussion: </strong>The findings underscore the challenges of data scarcity, limited interpretability, and geographical imbalance in the research, with Africa contributing less than 25% of the studies. 
There is a need for open-access agricultural data systems, hybrid explainable AI frameworks, and capacity building in computational agronomy to improve the effectiveness of ML applications in maize yield prediction.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"9 ","pages":"1735157"},"PeriodicalIF":4.7,"publicationDate":"2026-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13136094/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147843679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
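The record above names RMSE and R² as the primary evaluation metrics for maize yield models. As a minimal sketch (the yield values below are invented for illustration, not taken from the reviewed studies), the two metrics reduce to:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root-mean-square error; for maize yield the units are typically t/ha.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical observed vs. predicted yields (t/ha), for illustration only.
yields_true = [4.2, 5.1, 3.8, 6.0]
yields_pred = [4.0, 5.3, 4.1, 5.7]
print(rmse(yields_true, yields_pred), r2(yields_true, yields_pred))
```

A low RMSE with an R² near 1 indicates predictions that track observed yields closely; the two are usually reported together because RMSE is scale-dependent while R² is not.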
{"title":"Machine learning based approach to intrusion detection in internet of things environments.","authors":"Oluwatoyin Esther Akinbowale, Adebola Tajudeen Adesina, Mulatu Fekadu Zerihun, Polly Mashigo","doi":"10.3389/frai.2026.1760137","DOIUrl":"https://doi.org/10.3389/frai.2026.1760137","url":null,"abstract":"<p><p>The growing security requirements of Internet of Things (IoT) networks where heterogeneous networks and resource-constrained devices offer exponentially increased attack surface, was addressed using machine learning based intrusion detection system. Open source secondary quantitative IoT intrusion traffic data was obtained and trained using machine learning models. The dataset comprises of over one million labeled flow records consisting of 34 kinds of attacks and benign traffic. First, extensive preprocessing including managing of missing values, encoding features, scaling, and removal of redundancy was carried out followed by the training of three supervised machine learning (ML) classifiers namely Decision Tree (DT), Random Forest (RF), and Support Vector Machine (SVM) for the differentiation of the different types of intrusions. The performance evaluation of the ML models was conducted by evaluating the accuracy, precision, and recall, and F1-score. It was observed that Decision Tree model was the most outstanding in terms of overall accuracy (99.36%) and respectable performance in prevalent attack classes, and was closely followed ccy Random Forest (99.27%) while SVM lagged behind with an accuracy of 80.08% due to computational constraints in handling massive amounts of big data. Inter-arrival time and total packet size were identified as the significant discriminators in malicious behavior through feature-importance analysis. 
In conclusion, tree-based models, and Decision Trees in particular, offer an effective and interpretable solution for real-time IoT intrusion detection; future work should address class imbalance and examine lightweight, ensemble, and deep-learning approaches for robust detection of rare and unknown threats. This study contributes to cybersecurity through the identification and classification of intrusions in IoT devices for proper mitigation.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"9 ","pages":"1760137"},"PeriodicalIF":4.7,"publicationDate":"2026-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13133079/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147821609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
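The classifier comparison above rests on accuracy, precision, recall, and F1-score. A minimal sketch of the per-class metrics computed from confusion-matrix counts (the counts in the usage line are hypothetical, not from the study):

```python
def precision_recall_f1(tp, fp, fn):
    # Per-class metrics from confusion-matrix counts:
    #   precision = TP / (TP + FP), recall = TP / (TP + FN),
    #   F1 = harmonic mean of precision and recall.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical counts for one attack class: 90 true positives,
# 10 false positives, 30 false negatives.
p, r, f = precision_recall_f1(90, 10, 30)
print(p, r, f)
```

For multi-class intrusion detection these per-class values are typically macro- or weighted-averaged across the 34 attack types before being reported as single numbers.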
{"title":"Enhancing segmentation fairness through curriculum learning and progressive loss: a centralized and federated perspective on radiograph analysis.","authors":"Ehsan E Alam, Nickolas Littlefield, Arash Shaban-Nejad, Hamidreza Moradi","doi":"10.3389/frai.2026.1793305","DOIUrl":"https://doi.org/10.3389/frai.2026.1793305","url":null,"abstract":"<p><strong>Background: </strong>Bias in medical image segmentation can lead to unequal performance across demographic subgroups, raising concerns about fairness and reliability in clinical AI systems. While deep learning models have achieved high segmentation accuracy, ensuring equitable performance across race and gender remains a significant challenge, particularly in privacy-sensitive healthcare environments.</p><p><strong>Methods: </strong>This study investigates fairness-aware medical image segmentation for hip and knee radiographs using deep learning models evaluated in both centralized and Federated Learning (FL) settings. We introduce Curriculum Learning (CL) strategies and Progressive Loss (PL) functions to regulate sample difficulty during training. In addition, we propose two novel fairness-oriented federated learning algorithms, Federated Intersection over Union (FedIoU) and Federated Intersection over Union with Outlier Analysis (FedIoUoutlier). Experiments are conducted using multiple segmentation backbones and simulated multi-site data partitions derived from the Osteoarthritis Initiative dataset. Model performance is evaluated using Intersection over Union (IoU), IoU standard deviation, Skewed Error Ratio (SER), and Min-Max Disparity across race and gender subgroups. 
Statistical significance was verified using paired <i>t</i>-tests to compare per-sample IoU performance against baseline configurations.</p><p><strong>Results: </strong>Across both hip and knee segmentation tasks, curriculum learning and progressive loss strategies consistently improved segmentation accuracy and reduced demographic performance disparities in centralized training. In federated settings, fairness-aware aggregation further enhanced performance. Notably, FedIoUoutlier combined with balanced curriculum learning and tiered progressive loss achieved the highest mean IoU while yielding the lowest SER and Min-Max Disparity, indicating improved fairness without sacrificing accuracy. In several configurations, federated models matched or exceeded the performance of optimized centralized models, with statistically significant improvements in per-sample IoU over baseline configurations.</p><p><strong>Conclusion: </strong>The results demonstrate that structured training strategies and fairness-aware federated aggregation can jointly improve accuracy, stability, and demographic fairness in medical image segmentation. By integrating curriculum learning, progressive loss, and novel FL algorithms, this work provides a practical pathway toward equitable and privacy-preserving AI systems for medical imaging.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"9 ","pages":"1793305"},"PeriodicalIF":4.7,"publicationDate":"2026-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13133066/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147821268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
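The fairness evaluation above combines IoU with Skewed Error Ratio (SER) and Min-Max Disparity across subgroups. A sketch under common conventions — SER is taken here as the ratio of the worst to the best subgroup error (1 − IoU), which is an assumption about the definition, not necessarily the paper's exact formulation:

```python
import numpy as np

def iou(pred_mask, true_mask):
    # Intersection over Union for binary segmentation masks.
    pred = np.asarray(pred_mask, bool)
    true = np.asarray(true_mask, bool)
    union = np.logical_or(pred, true).sum()
    return float(np.logical_and(pred, true).sum() / union) if union else 1.0

def fairness_summary(subgroup_ious):
    # subgroup_ious: mapping subgroup name -> mean IoU for that subgroup.
    vals = list(subgroup_ious.values())
    errors = [1.0 - v for v in vals]
    ser = max(errors) / min(errors)        # Skewed Error Ratio; 1.0 = balanced
    min_max_disparity = max(vals) - min(vals)
    return ser, min_max_disparity

# Hypothetical per-subgroup mean IoUs, for illustration only.
print(fairness_summary({"group_a": 0.9, "group_b": 0.8}))
```

Lower SER and disparity values at comparable mean IoU indicate a fairer model, which is the pattern the abstract reports for FedIoUoutlier.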
{"title":"Fusing appearance and vein morphology using dual-branch deep networks for accurate medicinal plant identification.","authors":"Chembon Rajeendran Karthik, Parthiban Maheswari Adithya, Naveen Nidadavolu, Ananthakrishnan Balasundaram, Ayesha Shaik","doi":"10.3389/frai.2026.1771431","DOIUrl":"https://doi.org/10.3389/frai.2026.1771431","url":null,"abstract":"<p><p>Accurate identification of medicinal plants from leaf images is essential for pharmacognosy, biodiversity conservation, and agricultural decisions. But, accurate identification of medicinal leaves still poses a potential challenge in real-world conditions due to high similarity between species, variability within classes, uneven lighting, background clutter, partial views and occlusions. Existing RGB-based deep models often overfit to color-texture cues that vary with environmental conditions, whereas venation-based (skeleton) methods provide anatomically stable morphology but inherently suppress the critical appearance information needed to distinguish visually similar species. In this study, we introduced a novel dual-branch deep learning framework that explicitly separates and preserves appearance and venation learning using two independent pre-trained feature extractors, instead of relying on traditional fusion methods that combine the modalities at the input level or compress both cues into a single fused image stream. Specifically, MobileNetV2 is used to capture global appearance descriptors (texture, pigmentation, and shape), while DenseNet121 learns fine-grained vascular topology from skeletonized vein representations; the resulting embeddings are then combined via late feature-level fusion to form a unified discriminative representation that minimizes modality interference and maximizes complementarity. 
To further improve robustness and reduce bias introduced by dataset imbalance, we integrate a class-frequency aware augmentation strategy that adaptively strengthens minority-class transformations while preserving majority-class fidelity, alongside transfer learning, class weighting, and regularization. The proposed approach is trained and evaluated on a curated dataset of 14,344 paired RGB-skeleton images spanning seven medicinal plant species. It is rigorously benchmarked against RGB-only, skeleton-only, and fused-image baselines. Experimental results show that the proposed dual-branch model achieves 97% overall accuracy with high precision, recall, and F1-score, demonstrating that structured dual-stream learning of appearance and vein morphology provides a robust solution for medicinal plant recognition in varied, real-world settings.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"9 ","pages":"1771431"},"PeriodicalIF":4.7,"publicationDate":"2026-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13133058/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147821485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
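The late feature-level fusion described above can be sketched as concatenating the two branch embeddings into one vector; the per-branch L2 normalization below is an illustrative assumption (it keeps either modality from dominating by scale), not a detail stated in the abstract:

```python
import numpy as np

def late_fuse(appearance_emb, vein_emb):
    # Late feature-level fusion: L2-normalize each branch embedding,
    # then concatenate, so the classifier sees both modalities on a
    # comparable scale. Normalization is an assumed design choice here.
    a = np.asarray(appearance_emb, float)
    v = np.asarray(vein_emb, float)
    a = a / (np.linalg.norm(a) + 1e-8)
    v = v / (np.linalg.norm(v) + 1e-8)
    return np.concatenate([a, v])

# Hypothetical toy embeddings standing in for the MobileNetV2 (appearance)
# and DenseNet121 (vein skeleton) outputs.
fused = late_fuse([3.0, 4.0], [0.0, 5.0])
print(fused.shape)
```

In the actual framework the fused vector would feed a shared classification head; keeping the branches separate until this point is what the abstract credits with minimizing modality interference.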
{"title":"FSD-Net: underwater object detection based on frequency and spatial domain feature enhancement.","authors":"Chao Zhang, Shuang Wu, Baohua Huang, Binchen Zhao, Fengqi Cui, Xingkun Li","doi":"10.3389/frai.2026.1770342","DOIUrl":"https://doi.org/10.3389/frai.2026.1770342","url":null,"abstract":"<p><strong>Background: </strong>Complex underwater visual conditions cause severe missed and false detections in conventional object detection models, hindering reliable autonomous underwater exploration. This work addresses these key performance limitations.</p><p><strong>Methods: </strong>We propose FSD-Net, a novel underwater detection model with two core enhancement modules. The Frequency Attention Convolution Module reduces missed detections via frequency-domain spatial feature preservation, and the Multi-dimensional Feature Enhancement Module suppresses false detections via enhanced semantic fusion. Experiments include ablation studies and state-of-the-art method comparisons on the UTDAC2020 and Brackish datasets.</p><p><strong>Results: </strong>FSD-Net achieves state-of-the-art performance on both tested datasets. On the UTDAC2020 dataset, it reaches 85.7% AP50 and 82.5% F1-score, with a 3.8% AP50 improvement over the baseline model. On the Brackish dataset, it achieves 98.1% AP50 and 97.0% F1-score, with a 3.9% AP50 improvement over the baseline. The model outperforms all compared mainstream detection algorithms, and ablation studies validate the effectiveness of both proposed modules.</p><p><strong>Conclusion: </strong>FSD-Net's joint frequency-spatial enhancement strategy effectively mitigates underwater image degradation challenges, providing a robust detection solution for autonomous underwater exploration. 
The proposed dual-module design offers a practical reference for detection model optimization in complex visual environments, with future work focused on lightweight model optimization.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"9 ","pages":"1770342"},"PeriodicalIF":4.7,"publicationDate":"2026-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13133022/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147821306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
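AP50 scores like those reported above are computed from confidence-ranked detections. A generic all-point-interpolation sketch — the matching of detections to ground truth at IoU ≥ 0.50 is assumed to happen upstream, and the function name and inputs are illustrative, not FSD-Net's evaluation code:

```python
import numpy as np

def ap50(scores, is_tp, num_gt):
    # Average precision: sort detections by confidence, accumulate
    # precision/recall, take the monotone precision envelope, and
    # integrate it over recall (all-point interpolation).
    order = np.argsort(scores)[::-1]
    tp = np.asarray(is_tp, float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / num_gt
    precision = cum_tp / (cum_tp + cum_fp)
    # Precision envelope: non-increasing from the right.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, precision):
        ap += (r - prev_r) * p
        prev_r = r
    return float(ap)

# Three hypothetical detections (confidence, TP flag) against 2 ground truths.
print(ap50([0.9, 0.8, 0.7], [1, 0, 1], num_gt=2))
```

Per-class AP values computed this way are then averaged across classes to give the dataset-level AP50 figures quoted in the abstract.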
{"title":"A structured framework for effective and responsible generative artificial intelligence chatbot prompt engineering throughout the scientific process: a comprehensive guide for the health and medical researcher.","authors":"Jeremy Y Ng","doi":"10.3389/frai.2026.1745928","DOIUrl":"https://doi.org/10.3389/frai.2026.1745928","url":null,"abstract":"<p><p>Generative artificial intelligence (GenAI) chatbots powered by large language models (LLMs) are becoming increasingly integrated into health and medical research workflows, offering researchers new tools to enhance efficiency, support innovation, and assist with knowledge translation. Although their use in health and medical research is expanding rapidly, the practical application of these tools across the broader health and medical research landscape remains complex and evolving. Health and medical researchers often engage with complex study designs, theoretical frameworks, and population needs, all of which require thoughtful, effective and responsible use when involving AI tools. This 10-chapter guide serves as a practical, evidence-informed resource for health and medical researchers to engage effectively and responsibly with GenAI chatbots through the practice of prompt engineering, the design of clear, structured, and purposeful prompts that guide GenAI chatbot outputs. It presents strategies to improve prompt quality and adapt GenAI chatbot interactions to the varied methodological and disciplinary contexts found across health and medical research. The article outlines a structured framework for how GenAI chatbots can be applied throughout the research cycle, including research question development, study design, literature searching, querying for appropriate reporting guidelines and appraisal tools, quantitative and qualitative data analysis, writing and dissemination, and implementation. 
AI-generated content should be treated as a preliminary draft and must always be reviewed, verified against credible sources, and aligned with disciplinary standards. Risks such as hallucinated content, embedded biases, and ethical challenges are addressed, particularly in sensitive or high-stakes settings. Transparency in AI use and researcher accountability are essential. While GenAI chatbots have the potential to expand access to research support and foster innovation, they cannot replace critical thinking, methodological rigour, or contextual understanding. Instead, they should augment, not replace, human expertise. This guide encourages effective and responsible use of GenAI chatbots and supports their thoughtful integration into the health and medical research process.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"9 ","pages":"1745928"},"PeriodicalIF":4.7,"publicationDate":"2026-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13130486/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147821697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MedChat: a fully offline multimodal AI system for privacy-preserving clinical anamnesis.","authors":"Jan Benedikt Ruhland, Doguhan Bahcivan, Jan-Peter Sowa, Ali Canbay, Dominik Heider","doi":"10.3389/frai.2026.1809142","DOIUrl":"https://doi.org/10.3389/frai.2026.1809142","url":null,"abstract":"<p><p>Recent advances in large language models made it possible to achieve high conversational performance with substantially reduced computational demands, enabling practical on-site deployment in clinical environments. Such progress allows for local integration of AI systems that uphold strict data protection and patient privacy requirements, yet their secure implementation in medicine necessitates careful consideration of ethical, regulatory, and technical constraints. In this study, we introduce MedChat, a locally deployable virtual physician framework that integrates an LLM-based medical chatbot with a diffusion-driven avatar for automated and structured anamnesis. The chatbot was fine-tuned using a corpus of LLM-generated medical dialogues derived from publicly available symptom-disease datasets, enabling scalable and privacy-preserving training. A secure and isolated database interface was implemented to ensure complete separation between patient data and the model's inference process. The avatar component was realized through a conditional diffusion model operating in latent space, trained on researcher video datasets and synchronized with mel-frequency audio features for realistic speech and facial animation. We demonstrate that the complete multimodal pipeline can operate fully offline on consumer-grade hardware while maintaining interactive response times (average latency: 2.9 ± 0.3 s) and stable system performance. Preliminary evaluation of generated dialogue indicates high linguistic coherence, supporting its suitability for structured anamnesis tasks. 
MedChat provides a privacy-preserving, resource-efficient, and multimodal solution for clinical data collection. While clinical validation is ongoing, the presented framework establishes a foundation for secure, locally deployable AI-assisted anamnesis in real-world healthcare settings.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"9 ","pages":"1809142"},"PeriodicalIF":4.7,"publicationDate":"2026-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13128582/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147821568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic recognition of dynamic signs of Mexican sign language using deep learning.","authors":"Jesús Antonio Navarrete-López, Michelle Sainos-Vizuett, Irvin Hussein Lopez-Nava","doi":"10.3389/frai.2026.1794923","DOIUrl":"https://doi.org/10.3389/frai.2026.1794923","url":null,"abstract":"<p><strong>Introduction: </strong>Over four million individuals in Mexico face communication barriers due to hearing impairments. Sign language serves as an essential communication tool within the deaf community; however, automatic translation between sign and oral languages remains a significant challenge. This study proposes an approach for recognizing dynamic gestures from Mexican Sign Language (LSM) to support the development of assistive communication technologies.</p><p><strong>Methods: </strong>In collaboration with expert interpreters, an LSM corpus comprising 121 signs was developed, including a specialized lexicon focused on medical emergencies and accident scenarios. A standardized video acquisition protocol was implemented with both expert and non-expert participants. The proposed methodology consists of skeletal keypoint extraction using MediaPipe, data augmentation through frame sampling, and dataset normalization. Multiple deep learning architectures were evaluated, including ResNet, Simple RNN, LSTM, Bidirectional LSTM (BiLSTM), Gated Recurrent Units (GRU), a Transformer encoder, and a hybrid ResNet-Transformer model.</p><p><strong>Results: </strong>Among the evaluated models, the ResNet architecture achieved the best performance, obtaining an F1-score of 0.948 under subject-independent evaluation, with an average inference time of 0.468 seconds. 
Hyperparameter optimization analysis indicated that performance improvements were primarily driven by training dynamics and regularization strategies rather than increases in architectural depth.</p><p><strong>Discussion: </strong>The results demonstrate the effectiveness of deep learning-based approaches for dynamic LSM gesture recognition and highlight the importance of optimization strategies for robust generalization. This work contributes toward LSM-to-Spanish translation systems and provides a foundation for advancing data-driven sign language recognition technologies.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"9 ","pages":"1794923"},"PeriodicalIF":4.7,"publicationDate":"2026-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13128546/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147821674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
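The frame-sampling step mentioned in the methods above can be sketched as uniform index sampling, which normalizes variable-length sign clips to a fixed number of frames (an illustrative convention, not necessarily the authors' exact procedure):

```python
import numpy as np

def sample_frames(num_frames, target_len):
    # Uniformly sample target_len frame indices from a clip of
    # num_frames frames, so every sign video yields a fixed-length
    # keypoint sequence for the recognition model.
    idx = np.linspace(0, num_frames - 1, target_len)
    return np.round(idx).astype(int)

# A hypothetical 10-frame clip reduced to 5 representative frames.
print(list(sample_frames(10, 5)))
```

Sampling different index offsets from the same clip is one simple way such a scheme doubles as data augmentation.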
{"title":"Legal and ethical reflections on the use of artificial intelligence in the diagnosis and treatment of cancer: who assumes responsibility?","authors":"Virgiliu-Mihail Prunoiu, Ovidiu Juverdeanu, Codruta Cosma, Simion Laurentiu, Victor Strâmbu, Adrian Radu Petru, Mihai Stana, Mircea-Nicolae Brătucu","doi":"10.3389/frai.2026.1812408","DOIUrl":"https://doi.org/10.3389/frai.2026.1812408","url":null,"abstract":"<p><p>Artificial intelligence (AI) offers multiple advantages, such as: improvement and accuracy of the diagnosis, decrease of the doctors' workload, decrease of the hospitalization costs, and becoming increasingly widespread, studied, and applied in medicine. AI is already used in image recognition, has haptic perception, and can manipulate instruments. Thus, surgical robots will likely be driven by AI. In the near future, machine learning (ML) will also appear. The use of AI and the study of the specialty literature raise ethical and legal questions for which there is no unanimous answer yet. Medical liability (malpractice) for AI-related errors and damages to the patient prompts legal reflections on this topic. The diagnostic algorithms of AI raise questions regarding the risks of using AI in the diagnosis and treatment of cancer (especially in rare cases), the information provided to the patient, all of these having moral and legal implications, as well as regarding the impact on the empathic doctor-patient relationship. Actually, the use of AI in the medical field has triggered a revolution in the doctor-patient relationship, but it has possible medico-legal consequences as well. The current legal framework regulating medical liability when AI is applied is inadequate and requires urgent measures, because there is no specific and uniform legislation to regulate the liability of the various parties involved in applying AI, or that of the end-users. 
Consequently, greater attention should be paid to the risks of applying AI, to the need to regulate its safe use, and to maintaining patient safety standards by continuously adapting and updating the system.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"9 ","pages":"1812408"},"PeriodicalIF":4.7,"publicationDate":"2026-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13124942/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147821449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}