{"title":"Co-Learning: code learning for multi-agent reinforcement collaborative framework with conversational natural language interfaces.","authors":"Jiapeng Yu, Yuqian Wu, Yajing Zhan, Wenhao Guo, Zhou Xu, Raymond Lee","doi":"10.3389/frai.2025.1431003","DOIUrl":"https://doi.org/10.3389/frai.2025.1431003","url":null,"abstract":"<p><p>Online question-and-answer (Q&A) systems based on Large Language Models (LLMs) have progressively shifted from recreational to professional use. However, beginners in programming often struggle to correct code errors independently, limiting their learning efficiency. This paper proposes a Multi-Agent framework with environmental reinforcement learning (E-RL) for code correction, called the Code Learning (Co-Learning) community, which assists beginners in correcting code errors independently. The framework evaluates multiple LLMs on an original dataset of 702 error codes and uses their performance as the reward-or-punishment criterion for E-RL; it analyzes each input error code with the current agent and selects the most suitable LLM-based agent to maximize error-correction accuracy and minimize correction time. Experimental results showed a 3% improvement in precision score and a 15% reduction in time cost compared with the method without E-RL. 
The results indicate that integrating E-RL with a multi-agent selection strategy can effectively enhance both the accuracy and efficiency of LLM-based code correction systems, making them more practical for educational and professional programming support scenarios.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1431003"},"PeriodicalIF":3.0,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12120352/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144183442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
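The agent-selection strategy this abstract describes — rewarding each LLM-based agent for successful corrections and routing new error codes to the best performer — resembles a multi-armed-bandit policy. A minimal epsilon-greedy sketch, with invented agent names and success rates (the paper's actual E-RL formulation may differ):

```python
import random

random.seed(0)
# hidden per-agent correction success rates -- purely illustrative
agents = {"agent_a": 0.55, "agent_b": 0.70, "agent_c": 0.40}
counts = {a: 0 for a in agents}    # times each agent was selected
values = {a: 0.0 for a in agents}  # running mean reward per agent
eps = 0.1                          # exploration probability

for _ in range(2000):
    if random.random() < eps:
        choice = random.choice(list(agents))   # explore a random agent
    else:
        choice = max(values, key=values.get)   # exploit best estimate so far
    # simulated reward: 1 if the chosen agent fixes the error code
    reward = 1.0 if random.random() < agents[choice] else 0.0
    counts[choice] += 1
    values[choice] += (reward - values[choice]) / counts[choice]  # incremental mean
```

Over many episodes the running means converge toward the true success rates, so the policy increasingly routes code to the strongest agent.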
{"title":"Can chatbots teach us how to behave? Examining assumptions about user interactions with AI assistants and their social implications.","authors":"Eleonora Lima, Tiffany Morisseau","doi":"10.3389/frai.2025.1545607","DOIUrl":"https://doi.org/10.3389/frai.2025.1545607","url":null,"abstract":"<p><p>In this article we examine the issue of AI assistants, and the way they respond to insults and sexually explicit requests. Public concern over these responses, particularly because AI assistants are usually female-voiced, prompted tech companies to make them more assertive. Researchers have explored whether these female-voiced AI assistants could encourage abusive behavior and reinforce societal sexism. However, the extent and nature of the problem are unclear due to a lack of data on user interactions. By combining psychological and socio-cultural perspectives, we problematize these assumptions and outline a number of research questions for leveraging AI assistants to promote gender inclusivity more effectively.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1545607"},"PeriodicalIF":3.0,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12116430/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144175127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning classification of drainage crossings based on high-resolution DEM-derived geomorphological information.","authors":"Michael Edidem, Bill Xu, Ruopu Li, Di Wu, Banafsheh Rekabdar, Guangxing Wang","doi":"10.3389/frai.2025.1561281","DOIUrl":"10.3389/frai.2025.1561281","url":null,"abstract":"<p><p>High-resolution digital elevation models (HRDEMs) from LiDAR and InSAR technologies have significantly improved the accuracies of mapping hydrographic features such as river boundaries, streamlines, and waterbodies over large areas. However, drainage crossings that facilitate the passage of drainage flows beneath roads are not often represented in HRDEMs, resulting in erratic or distorted hydrographic features. At present, drainage crossing datasets are largely missing or available with variable quality. While previous studies have investigated basic convolutional neural network (CNN) models for drainage crossing characterization, it remains unclear if advanced deep learning models will improve the accuracy of drainage crossing classification. Although HRDEM-derived geomorphological features have been identified to enhance feature extraction in other hydrography applications, the contributions of these features to drainage crossing image classification have yet to be sufficiently investigated. This study develops advanced CNN models, EfficientNetV2, using four co-registered 1-meter resolution geomorphological data layers derived from HRDEMs for drainage crossing classification. These layers include positive openness (POS), geometric curvature, and two topographic position index (TPI) layers utilizing 3 × 3 and 21 × 21 cell windows. The findings reveal that the advanced CNN models with HRDEM, TPI (21 × 21), and a combination of HRDEM, POS, and TPI (21 × 21) improve classification accuracy in comparison to the baseline model by 3.39, 4.27, and 4.93%, respectively. 
The study culminates in an explainable artificial intelligence (XAI) analysis that identifies the image segments most critical for characterizing drainage crossings.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1561281"},"PeriodicalIF":3.0,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12106317/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144162495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
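For readers unfamiliar with the topographic position index (TPI) layers used in the record above: TPI is simply each cell's elevation minus the mean elevation of a surrounding window (3 × 3 or 21 × 21 in the study). A minimal NumPy sketch on an invented toy DEM (a production version would use a vectorized sliding-window mean rather than this explicit loop):

```python
import numpy as np

def tpi(dem, w):
    """Topographic position index: cell elevation minus the mean
    elevation of its (w x w) neighborhood (w odd), edge-padded."""
    assert w % 2 == 1
    pad = w // 2
    padded = np.pad(dem, pad, mode="edge")
    out = np.empty_like(dem, dtype=float)
    for i in range(dem.shape[0]):
        for j in range(dem.shape[1]):
            out[i, j] = dem[i, j] - padded[i:i + w, j:j + w].mean()
    return out

# toy 3x3 DEM with a single high cell in the middle
dem = np.array([[1., 1., 1.],
                [1., 5., 1.],
                [1., 1., 1.]])
result = tpi(dem, 3)
```

Positive TPI marks cells above their surroundings (ridges, road embankments over culverts), negative TPI marks depressions — which is why it helps flag drainage crossings in HRDEMs.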
{"title":"A comprehensive review of machine learning for heart disease prediction: challenges, trends, ethical considerations, and future directions.","authors":"Raman Kumar, Sarvesh Garg, Rupinder Kaur, M G M Johar, Sehijpal Singh, Soumya V Menon, Pulkit Kumar, Ali Mohammed Hadi, Shams Abbass Hasson, Jasmina Lozanović","doi":"10.3389/frai.2025.1583459","DOIUrl":"10.3389/frai.2025.1583459","url":null,"abstract":"<p><p>This review provides a thorough and organized overview of machine learning (ML) applications in predicting heart disease, covering technological advancements, challenges, and future prospects. As cardiovascular diseases (CVDs) are the leading cause of global mortality, there is an urgent demand for early and precise diagnostic tools. ML models hold considerable potential by utilizing large-scale healthcare data to enhance predictive diagnostics. To systematically investigate this field, the literature is organized into five thematic categories: \"Heart Disease Detection and Diagnostics,\" \"Machine Learning Models and Algorithms for Healthcare,\" \"Feature Engineering and Optimization Techniques,\" \"Emerging Technologies in Healthcare,\" and \"Applications of AI Across Diseases and Conditions.\" The review incorporates performance benchmarking of various ML models, highlighting that hybrid deep learning (DL) frameworks, e.g., convolutional neural network-long short-term memory (CNN-LSTM), consistently outperform traditional models in terms of sensitivity, specificity, and area under the curve (AUC). Several real-world case studies are presented to demonstrate the successful deployment of ML models in clinical and wearable settings. This review showcases the progression of ML approaches from traditional classifiers to hybrid DL structures and federated learning (FL) frameworks. It also discusses ethical issues, dataset limitations, and model transparency. 
The conclusions provide important insights for the development of artificial intelligence (AI) powered, clinically applicable heart disease prediction systems.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1583459"},"PeriodicalIF":3.0,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12106346/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144162482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From data extraction to analysis: a comparative study of ELISE capabilities in scientific literature.","authors":"Maxime Gobin, Muriel Gosnat, Seindé Toure, Lina Faik, Joel Belafa, Antoine Villedieu de Torcy, Florence Armstrong","doi":"10.3389/frai.2025.1587244","DOIUrl":"10.3389/frai.2025.1587244","url":null,"abstract":"<p><p>The exponential growth of scientific literature presents challenges for pharmaceutical, biotechnological, and Medtech industries, particularly in regulatory documentation, clinical research, and systematic reviews. Ensuring accurate data extraction, literature synthesis, and compliance with industry standards requires AI tools that not only streamline workflows but also uphold scientific rigor. This study evaluates the performance of AI tools designed for bibliographic review, data extraction, and scientific synthesis, assessing their impact on decision-making, regulatory compliance, and research productivity. The AI tools assessed include general-purpose models like ChatGPT and specialized solutions such as ELISE (Elevated LIfe SciencEs), SciSpace/Typeset, Humata, and Epsilon. The evaluation is based on three main criteria: Extraction, Comprehension, and Analysis, with Compliance and Traceability (ECACT) as additional dimensions. Human experts established reference benchmarks, while AI Evaluator models ensure objective performance measurement. The study introduces the ECACT score, a structured metric assessing AI reliability in scientific literature analysis, regulatory reporting, and clinical documentation. Results demonstrate that ELISE consistently outperforms other AI tools, excelling in precise data extraction, deep contextual comprehension, and advanced content analysis. ELISE's ability to generate traceable, well-reasoned insights makes it particularly well-suited for high-stakes applications such as regulatory affairs, clinical trials, and medical documentation, where accuracy, transparency, and compliance are paramount. 
Unlike other AI tools, ELISE provides expert-level reasoning and explainability, ensuring AI-generated insights align with industry best practices. ChatGPT is efficient in data retrieval but lacks precision in complex analysis, limiting its use in high-stakes decision-making. Epsilon, Humata, and SciSpace/Typeset exhibit moderate performance, with variability affecting their reliability in critical applications. In conclusion, while AI tools such as ELISE enhance literature review, regulatory writing, and clinical data interpretation, human oversight remains essential to validate AI outputs and ensure compliance with scientific and regulatory standards. For pharmaceutical, biotechnological, and Medtech industries, AI integration must strike a balance between automation and expert supervision to maintain data integrity, transparency, and regulatory adherence.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1587244"},"PeriodicalIF":3.0,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12104259/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144152132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring trust in artificial intelligence: validation of an established scale and its short form.","authors":"Melanie J McGrath, Oliver Lack, James Tisch, Andreas Duenser","doi":"10.3389/frai.2025.1582880","DOIUrl":"10.3389/frai.2025.1582880","url":null,"abstract":"<p><p>An understanding of the nature and function of human trust in artificial intelligence (AI) is fundamental to the safe and effective integration of these technologies into organizational settings. The Trust in Automation Scale (TIAS) is a commonly used self-report measure of trust in automated systems; however, it has not yet been subjected to comprehensive psychometric validation. Across two studies, we tested the capacity of the scale to effectively measure trust across a range of AI applications. Results indicate that the Trust in Automation Scale is a valid and reliable measure of human trust in AI; however, with 12 items, it is often impractical for contexts requiring frequent and minimally disruptive measurements. To address this limitation, we developed and validated a three-item version of the TIAS, the Short Trust in Automation Scale (S-TIAS). In two further studies, we tested the sensitivity of the S-TIAS to manipulations of the trustworthiness of an AI system, as well as the convergent validity of the scale and its capacity to predict intentions to rely on AI-generated recommendations. In both studies, the S-TIAS also demonstrated convergent validity and significantly predicted intentions to rely on the AI system in patterns similar to the TIAS. 
This suggests that the S-TIAS is a practical and valid alternative for measuring trust in automation and AI for the purposes of identifying antecedent factors of trust and predicting trust outcomes.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1582880"},"PeriodicalIF":3.0,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12098057/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144143791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using machine learning models to predict post-revascularization thrombosis in PAD.","authors":"Samir Ghandour, Adriana A Rodriguez Alvarez, Isabella F Cieri, Shiv Patel, Mounika Boya, Rahul Chaudhary, Anna Poucey, Anahita Dua","doi":"10.3389/frai.2025.1540503","DOIUrl":"10.3389/frai.2025.1540503","url":null,"abstract":"<p><strong>Background: </strong>Graft/stent thrombosis after lower extremity revascularization (LER) is a serious complication in patients with peripheral arterial disease (PAD), often leading to amputation. Thus, predicting arterial thrombotic events (ATE) within 1 year is crucial. Given the high rates of thrombosis post-revascularization, this study aimed to develop a machine learning model (MLM) incorporating viscoelastic testing and patient-specific variables to predict ATE following LER.</p><p><strong>Methods: </strong>We prospectively enrolled PAD patients undergoing LER from 2020 to 2024, collecting demographic, clinical, and intervention-related data alongside perioperative thromboelastography with platelet mapping (TEG-PM) values over 12 months post-revascularization. Univariate analysis identified predictors from 52 candidate variables. Multiple MLMs, including logistic regression, XGBoost, and decision tree algorithms, were developed and evaluated using a 70-30 train-test split and five-fold cross-validation. The Synthetic Minority Oversampling Technique (SMOTE) was employed to address the class imbalance between the primary outcomes (ATE vs. no ATE). Model performance was assessed by area under the curve (AUC), accuracy, sensitivity, specificity, negative predictive value, and positive predictive value.</p><p><strong>Results: </strong>Of the 308 patients analyzed, 66% were male, 84% were White, and 18.3% experienced an ATE during the one-year post-revascularization follow-up period. 
The logistic regression MLM demonstrated the best combined discrimination and calibration performance, especially when TEG-PM parameters were used in combination with patient-specific baseline characteristics, with an AUC of 0.76, classification accuracy of 70%, sensitivity of 68%, and specificity of 71%.</p><p><strong>Conclusion: </strong>Combining patient-specific characteristics with TEG-PM values in MLMs can effectively predict ATE following LER in PAD patients, enhancing high-risk patient identification and enabling tailored thromboprophylaxis.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1540503"},"PeriodicalIF":3.0,"publicationDate":"2025-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092403/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144120987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
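The Methods of the record above combine SMOTE oversampling of the training fold, a stratified 70-30 split, and logistic regression scored by AUC. A self-contained sketch of that pipeline on synthetic data (the features, coefficients, and ~18% event rate are invented stand-ins for the study's clinical variables; a real pipeline would typically use imbalanced-learn's `SMOTE` rather than this hand-rolled version):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def smote(X, y, minority=1, k=5):
    """Minimal SMOTE: synthesize minority-class samples by interpolating
    between a minority point and one of its k nearest minority neighbors."""
    Xm = X[y == minority]
    n_needed = int((y != minority).sum() - len(Xm))
    synth = np.empty((n_needed, X.shape[1]))
    for s in range(n_needed):
        i = rng.integers(len(Xm))
        d = np.linalg.norm(Xm - Xm[i], axis=1)
        j = rng.choice(np.argsort(d)[1:k + 1])  # a near minority neighbor (skip self)
        lam = rng.random()                      # interpolation factor in [0, 1)
        synth[s] = Xm[i] + lam * (Xm[j] - Xm[i])
    return np.vstack([X, synth]), np.concatenate([y, np.full(n_needed, minority)])

# synthetic stand-in dataset: 308 "patients", ~18% events
X = rng.normal(size=(308, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=308) > 1.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
X_bal, y_bal = smote(X_tr, y_tr)  # oversample the training fold only
clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

Oversampling only the training fold, as here, keeps the held-out AUC honest; applying SMOTE before the split would leak synthetic copies of test-adjacent points into training.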
{"title":"Exploring ChatGPT's potential for augmenting post-editing in machine translation across multiple domains: challenges and opportunities.","authors":"Jeehaan Algaraady, Mohammad Mahyoob","doi":"10.3389/frai.2025.1526293","DOIUrl":"https://doi.org/10.3389/frai.2025.1526293","url":null,"abstract":"<p><strong>Introduction: </strong>Post-editing plays a crucial role in enhancing the quality of machine-generated translation (MGT) by correcting errors and ensuring cohesion and coherence. With advancements in artificial intelligence, Large Language Models (LLMs) like ChatGPT-4o offer promising capabilities for post-editing tasks. This study investigates the effectiveness of ChatGPT-4o as a natural language processing tool in post-editing Arabic translations across various domains, aiming to evaluate its performance in improving productivity, accuracy, consistency, and overall translation quality.</p><p><strong>Methods: </strong>The study involved a comparative analysis of Arabic translations generated by Google Translate. These texts, drawn from multiple domains, were post-edited by two professional human translators and ChatGPT-4o. Subsequently, three additional professional human post-editors evaluated both sets of post-edited outputs. To statistically assess the differences in quality between humans and ChatGPT-4o post-edits, a paired <i>t</i>-test was employed, focusing on metrics such as fluency, accuracy, coherence, and efficiency.</p><p><strong>Results: </strong>The findings indicated that human post-editors outperformed ChatGPT-4o in most quality metrics. However, ChatGPT-4o demonstrated superior efficiency, yielding a positive <i>t</i>-statistic of 8.00 and a <i>p</i>-value of 0.015, indicating a statistically significant difference. 
Regarding fluency, no significant difference was observed between the two methods (<i>t</i>-statistic = -3.5, <i>p</i>-value = 0.074), suggesting comparable performance in ensuring the natural flow of text.</p><p><strong>Discussion: </strong>ChatGPT-4o showed competitive performance in English-to-Arabic post-editing, particularly in producing fluent, coherent, and stylistically consistent text. Its conversational design enables efficient and consistent editing across various domains. Nonetheless, the model faced challenges in handling grammatical and syntactic nuances, domain-specific idioms, and complex terminology, especially in medical and sports contexts. Overall, the study highlights the potential of ChatGPT-4o as a supportive tool in translation post-editing workflows, complementing human translators by enhancing productivity and maintaining acceptable quality standards.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1526293"},"PeriodicalIF":3.0,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12078335/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144080979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
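The paired t-test used in the record above compares two sets of scores for the same items (SciPy's `ttest_rel`). A minimal sketch with invented ratings — the study's raw data are not public, and a t-statistic of 8.00 with p = 0.015 is consistent with only a handful of paired observations:

```python
from scipy.stats import ttest_rel

# invented paired quality ratings for the same three texts,
# one human-post-edited and one ChatGPT-post-edited score per text
human_scores   = [8.5, 7.9, 8.2]
chatgpt_scores = [7.8, 7.5, 7.7]

t_stat, p_value = ttest_rel(human_scores, chatgpt_scores)
```

Because the test operates on within-item differences, it controls for per-text difficulty, which is why it suits this "same texts, two editors" design.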
{"title":"Precision enhancement in wireless capsule endoscopy: a novel transformer-based approach for real-time video object detection.","authors":"Tsedeke Temesgen Habe, Keijo Haataja, Pekka Toivanen","doi":"10.3389/frai.2025.1529814","DOIUrl":"https://doi.org/10.3389/frai.2025.1529814","url":null,"abstract":"<p><strong>Background: </strong>Wireless Capsule Endoscopy (WCE) enables non-invasive imaging of the gastrointestinal tract but generates vast video data, making real-time and accurate abnormality detection challenging. Traditional detection methods struggle with uncontrolled illumination, complex textures, and high-speed processing demands.</p><p><strong>Methods: </strong>This study presents a novel approach using Real-Time Detection Transformer (RT-DETR), a transformer-based object detection model, specifically optimized for WCE video analysis. The model captures contextual information between frames and handles variable image conditions. It was evaluated using the Kvasir-Capsule dataset, with performance assessed across three RT-DETR variants: Small (S), Medium (M), and X-Large (X).</p><p><strong>Results: </strong>RT-DETR-X achieved the highest detection precision. RT-DETR-M offered a practical trade-off between accuracy and speed, while RT-DETR-S processed frames at 270 FPS, enabling real-time performance. All three models demonstrated improved detection accuracy and computational efficiency compared to baseline methods.</p><p><strong>Discussion: </strong>The RT-DETR framework significantly enhances precision and real-time performance in gastrointestinal abnormality detection using WCE. Its clinical potential lies in supporting faster and more accurate diagnosis. 
Future work will focus on further optimization and deployment in endoscopic video analysis systems.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1529814"},"PeriodicalIF":3.0,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12075415/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144080982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Navigating AI ethics: ANN and ANFIS for transparent and accountable project evaluation amidst contesting AI practices and technologies.","authors":"Sandeep Wankhade, Manoj Sahni, Ernesto León-Castro, Maricruz Olazabal-Lugo","doi":"10.3389/frai.2025.1535845","DOIUrl":"10.3389/frai.2025.1535845","url":null,"abstract":"<p><strong>Introduction: </strong>The rapid evolution of Artificial Intelligence (AI) necessitates robust ethical frameworks to ensure responsible project deployment. This study addresses the challenge of quantifying ethical criteria in AI projects amidst contesting communicative practices, organizational structures, and enabling technologies, which shape AI's societal implications.</p><p><strong>Methods: </strong>We propose a novel framework integrating Artificial Neural Networks (ANN) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS) to evaluate AI project performance and model ethical uncertainties using Fuzzy logic. A Fuzzy weighted average approach quantifies critical ethical dimensions: transparency, fairness, accountability, privacy, security, explainability, human involvement, and societal impact.</p><p><strong>Results: </strong>The framework enables a structured assessment of AI projects, enhancing transparency and accountability by mapping ethical criteria to project outcomes. ANN evaluates performance metrics, while ANFIS models uncertainties, providing a comprehensive ethical evaluation under complex conditions.</p><p><strong>Discussion: </strong>By combining ANN and ANFIS, this study advances the understanding of AI's ethical dimensions, offering a scalable approach for accountable AI systems. It reframes organizational communication and decision-making, embedding ethics within AI's technological and structural contexts. 
This work contributes to responsible AI innovation, fostering trust and societal alignment in AI deployments.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1535845"},"PeriodicalIF":3.0,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12083503/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144095085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
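The fuzzy weighted average named in the record above reduces, in its crisp (defuzzified) form, to a weights-normalized mean over the eight ethical criteria the abstract lists. A toy sketch — the weights and scores are invented, and the paper's actual membership functions are richer than this:

```python
# (weight, score) per ethical criterion, both in [0, 1] -- illustrative values only
criteria = {
    "transparency":      (0.9, 0.8),
    "fairness":          (0.8, 0.7),
    "accountability":    (0.9, 0.6),
    "privacy":           (0.7, 0.9),
    "security":          (0.7, 0.8),
    "explainability":    (0.8, 0.6),
    "human_involvement": (0.6, 0.7),
    "societal_impact":   (0.5, 0.8),
}

# crisp fuzzy weighted average: weighted sum of scores over the sum of weights
fwa = (sum(w * s for w, s in criteria.values())
       / sum(w for w, _ in criteria.values()))
```

Dividing by the sum of weights keeps the aggregate on the same [0, 1] scale as the inputs, so one project's ethics score can be compared against another's regardless of how the weights are chosen.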