{"title":"Exploring artificial intelligence techniques to research low energy nuclear reactions.","authors":"Anasse Bari, Tanya Pushkin Garg, Yvonne Wu, Sneha Singh, David Nagel","doi":"10.3389/frai.2024.1401782","DOIUrl":"https://doi.org/10.3389/frai.2024.1401782","url":null,"abstract":"<p><p>The world urgently needs new sources of clean energy due to a growing global population, rising energy use, and the effects of climate change. Nuclear energy is one of the most promising solutions for meeting the world's energy needs now and in the future. One type of nuclear energy, Low Energy Nuclear Reactions (LENR), has gained interest as a potential clean energy source. Recent AI advancements create new ways to help research LENR and to comprehensively analyze the relationships between experimental parameters, materials, and outcomes across diverse LENR research endeavors worldwide. This study investigates the effectiveness of modern AI capabilities leveraging embedding models and topic modeling techniques, including Latent Dirichlet Allocation (LDA), BERTopic, and Top2Vec, in elucidating the underlying structure and prevalent themes within a large LENR research corpus. These methodologies offer unique perspectives on understanding relationships and trends within the LENR research landscape, thereby facilitating advancements in this crucial energy research area. Furthermore, the study presents LENRsim, an experimental machine learning tool to identify similar LENR studies, along with a user-friendly web interface for widespread adoption and utilization. The findings contribute to the understanding and progression of LENR research through data-driven analysis and tool development, enabling more informed decision-making and strategic planning for future research in this field. The insights derived from this study, along with the experimental tools we developed and deployed, hold the potential to significantly aid researchers in advancing their studies of LENR.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11377257/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multitask connected U-Net: automatic lung cancer segmentation from CT images using PET knowledge guidance.","authors":"Lu Zhou, Chaoyong Wu, Yiheng Chen, Zhicheng Zhang","doi":"10.3389/frai.2024.1423535","DOIUrl":"https://doi.org/10.3389/frai.2024.1423535","url":null,"abstract":"<p><p>Lung cancer is a predominant cause of cancer-related mortality worldwide, necessitating precise tumor segmentation of medical images for accurate diagnosis and treatment. However, the intrinsic complexity and variability of tumor morphology pose substantial challenges to segmentation tasks. To address this issue, we propose a multitask connected U-Net model with a teacher-student framework to enhance the effectiveness of lung tumor segmentation. The proposed model and framework integrate PET knowledge into the segmentation process, leveraging complementary information from both CT and PET modalities to improve segmentation performance. Additionally, we implemented a tumor area detection method to enhance tumor segmentation performance. In extensive experiments on four datasets, the average Dice coefficient of 0.56, obtained using our model, surpassed those of existing methods such as Segformer (0.51), Transformer (0.50), and UctransNet (0.43). These findings validate the efficacy of the proposed method in lung tumor segmentation tasks.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11377414/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AttentionTTE: a deep learning model for estimated time of arrival.","authors":"Mu Li, Yijun Feng, Xiangdong Wu","doi":"10.3389/frai.2024.1258086","DOIUrl":"https://doi.org/10.3389/frai.2024.1258086","url":null,"abstract":"<p><p>Estimating travel time (ETA) for arbitrary paths is crucial in urban intelligent transportation systems. Previous studies primarily focus on constructing complex feature systems for individual road segments or sub-segments, which fail to effectively model the influence of each road segment on others. To address this issue, we propose an end-to-end model, AttentionTTE. It utilizes a self-attention mechanism to capture global spatial correlations and a recurrent neural network to capture temporal dependencies from local spatial correlations. Additionally, a multi-task learning module integrates global spatial correlations and temporal dependencies to estimate the travel time for both the entire path and each local path. We evaluate our model on a large trajectory dataset, and extensive experimental results demonstrate that AttentionTTE achieves state-of-the-art performance compared to other methods.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11378341/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implications of causality in artificial intelligence.","authors":"Luís Cavique","doi":"10.3389/frai.2024.1439702","DOIUrl":"10.3389/frai.2024.1439702","url":null,"abstract":"<p><p>Over the last decade, investment in artificial intelligence (AI) has grown significantly, driven by technology companies and the demand for PhDs in AI. However, new challenges have emerged, such as the 'black box' and bias in AI models. Several approaches have been developed to reduce these problems. Responsible AI focuses on the ethical development of AI systems, considering social impact. Fair AI seeks to identify and correct algorithm biases, promoting equitable decisions. Explainable AI aims to create transparent models that allow users to interpret results. Finally, Causal AI emphasizes identifying cause-and-effect relationships and plays a crucial role in creating more robust and reliable systems, thereby promoting fairness and transparency in AI development. Responsible, Fair, and Explainable AI each have several weaknesses. Causal AI, however, has drawn the least criticism, offering reassurance about the ethical development of AI.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11371780/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142134036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing classification and feature selection strategies for diabetes prediction across diverse diabetes datasets.","authors":"Jayakumar Kaliappan, I J Saravana Kumar, S Sundaravelan, T Anesh, R R Rithik, Yashbir Singh, Diana V Vera-Garcia, Yassine Himeur, Wathiq Mansoor, Shadi Atalla, Kathiravan Srinivasan","doi":"10.3389/frai.2024.1421751","DOIUrl":"10.3389/frai.2024.1421751","url":null,"abstract":"<p><strong>Introduction: </strong>In the evolving landscape of healthcare and medicine, the merging of extensive medical datasets with the powerful capabilities of machine learning (ML) models presents a significant opportunity for transforming diagnostics, treatments, and patient care.</p><p><strong>Methods: </strong>This research paper delves into the realm of data-driven healthcare, placing a special focus on identifying the most effective ML models for diabetes prediction and uncovering the critical features that aid in this prediction. The prediction performance is analyzed using a variety of ML models, such as Random Forest (RF), XGBoost (XGB), Linear Regression (LR), Gradient Boosting (GB), and Support Vector Machine (SVM), across numerous medical datasets. The study of feature importance is conducted using methods including Filter-based, Wrapper-based techniques, and Explainable Artificial Intelligence (Explainable AI). By utilizing Explainable AI techniques, specifically Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), the decision-making process of the models is ensured to be transparent, thereby bolstering trust in AI-driven decisions.</p><p><strong>Results: </strong>Features identified by RF in Wrapper-based techniques and the Chi-square in Filter-based techniques have been shown to enhance prediction performance. Notable precision and recall values, reaching up to 0.9, are achieved in predicting diabetes.</p><p><strong>Discussion: </strong>Both approaches are found to assign considerable importance to features like age, family history of diabetes, polyuria, polydipsia, and high blood pressure, which are strongly associated with diabetes. In this age of data-driven healthcare, the research presented here aspires to substantially improve healthcare outcomes.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11371799/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142134035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Refinement of machine learning arterial waveform models for predicting blood loss in canines.","authors":"Jose M Gonzalez, Thomas H Edwards, Guillaume L Hoareau, Eric J Snider","doi":"10.3389/frai.2024.1408029","DOIUrl":"10.3389/frai.2024.1408029","url":null,"abstract":"<p><strong>Introduction: </strong>Hemorrhage remains a leading cause of death in civilian and military trauma. Hemorrhages also extend to military working dogs, who can experience injuries similar to those of the humans they work alongside. Unfortunately, current physiological monitoring is often inadequate for early detection of hemorrhage. Here, we evaluate if features extracted from the arterial waveform can allow for early hemorrhage prediction and improved intervention in canines.</p><p><strong>Methods: </strong>In this effort, we extracted more than 1,900 features from an arterial waveform in canine hemorrhage datasets prior to hemorrhage, during hemorrhage, and during a shock hold period. Different features were used as input to decision tree machine learning (ML) model architectures to track three model predictors-total blood loss volume, estimated percent blood loss, and area under the time versus hemorrhaged blood volume curve.</p><p><strong>Results: </strong>ML models were successfully developed for total and estimated percent blood loss, with the total blood loss having a higher correlation coefficient. The area predictors were unsuccessful at being directly predicted by decision tree ML models but could be calculated indirectly from the ML prediction models for blood loss. Overall, the area under the hemorrhage curve had the highest sensitivity for detecting hemorrhage at approximately 4 min after hemorrhage onset, compared to more than 45 min before detection based on mean arterial pressure.</p><p><strong>Conclusion: </strong>ML methods successfully tracked hemorrhage and provided earlier prediction in canines, potentially improving hemorrhage detection and objectifying triage for veterinary medicine. Further, its use can potentially be extended to human use with proper training datasets.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11371769/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142134037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diagnostic performance of AI-based models versus physicians among patients with hepatocellular carcinoma: a systematic review and meta-analysis.","authors":"Feras Al-Obeidat, Wael Hafez, Muneir Gador, Nesma Ahmed, Marwa Muhammed Abdeljawad, Antesh Yadav, Asrar Rashed","doi":"10.3389/frai.2024.1398205","DOIUrl":"10.3389/frai.2024.1398205","url":null,"abstract":"<p><strong>Background: </strong>Hepatocellular carcinoma (HCC) is a common primary liver cancer that requires early diagnosis due to its poor prognosis. Recent advances in artificial intelligence (AI) have facilitated hepatocellular carcinoma detection using multiple AI models; however, their performance is still uncertain.</p><p><strong>Aim: </strong>This meta-analysis aimed to compare the diagnostic performance of different AI models with that of clinicians in the detection of hepatocellular carcinoma.</p><p><strong>Methods: </strong>We searched the PubMed, Scopus, Cochrane Library, and Web of Science databases for eligible studies. The R package was used to synthesize the results. The outcomes of various studies were aggregated using fixed-effect and random-effects models. Statistical heterogeneity was evaluated using I-squared (I<sup>2</sup>) and chi-square statistics.</p><p><strong>Results: </strong>We included seven studies in our meta-analysis. Both physicians and AI-based models scored an average sensitivity of 93%. Great variation in sensitivity, accuracy, and specificity was observed depending on the model and diagnostic technique used. The region-based convolutional neural network (RCNN) model showed high sensitivity (96%). Physicians had the highest specificity in diagnosing hepatocellular carcinoma (100%); furthermore, models based on convolutional neural networks achieved high sensitivity. Models based on AI-assisted contrast-enhanced ultrasound (CEUS) showed poor accuracy (69.9%) compared to physicians and other models. The leave-one-out sensitivity analysis revealed high heterogeneity among studies, representing true differences between them.</p><p><strong>Conclusion: </strong>Models based on Faster R-CNN excel in image classification and data extraction, while both CNN-based models and models combining contrast-enhanced ultrasound (CEUS) with artificial intelligence (AI) had good sensitivity. Although AI models outperform physicians in diagnosing HCC, they should be utilized as supportive tools to help make more accurate and timely decisions.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11368160/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142120780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Person-based design and evaluation of MIA, a digital medical interview assistant for radiology.","authors":"Kerstin Denecke, Daniel Reichenpfader, Dominic Willi, Karin Kennel, Harald Bonel, Knud Nairz, Nikola Cihoric, Damien Papaux, Hendrik von Tengg-Kobligk","doi":"10.3389/frai.2024.1431156","DOIUrl":"10.3389/frai.2024.1431156","url":null,"abstract":"<p><strong>Introduction: </strong>Radiologists frequently lack direct patient contact due to time constraints. Digital medical interview assistants aim to facilitate the collection of health information. In this paper, we propose leveraging conversational agents to realize a medical interview assistant to facilitate medical history taking, while at the same time offering patients the opportunity to ask questions on the examination.</p><p><strong>Methods: </strong>MIA, the digital medical interview assistant, was developed using a person-based design approach, involving patient opinions and expert knowledge during the design and development with a specific use case in collecting information before a mammography examination. MIA consists of two modules: the interview module and the question answering module (Q&A). To ensure interoperability with clinical information systems, we use HL7 FHIR to store and exchange the results collected by MIA during the patient interaction. The system was evaluated according to an existing evaluation framework that covers a broad range of aspects related to the technical quality of a conversational agent including usability, but also accessibility and security.</p><p><strong>Results: </strong>Thirty-six patients recruited from two Swiss hospitals (Lindenhof group and Inselspital, Bern) and two patient organizations conducted the usability test. MIA was favorably received by the participants, who particularly noted the clarity of communication. However, there is room for improvement in the perceived quality of the conversation, the information provided, and the protection of privacy. The Q&A module achieved a precision of 0.51, a recall of 0.87 and an F-Score of 0.64 based on 114 questions asked by the participants. Security and accessibility also require improvements.</p><p><strong>Conclusion: </strong>The applied person-based process described in this paper can provide best practices for future development of medical interview assistants. The application of a standardized evaluation framework helped in saving time and ensures comparability of results.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11363708/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142112829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning models for the early detection of maize streak virus and maize lethal necrosis diseases in Tanzania.","authors":"Flavia Mayo, Ciira Maina, Mvurya Mgala, Neema Mduma","doi":"10.3389/frai.2024.1384709","DOIUrl":"10.3389/frai.2024.1384709","url":null,"abstract":"<p><p>Agriculture is considered the backbone of Tanzania's economy, with more than 60% of the residents depending on it for survival. Maize is the country's dominant and primary food crop, accounting for 45% of all farmland production. However, its productivity is challenged by the limitation to detect maize diseases early enough. Maize streak virus (MSV) and maize lethal necrosis virus (MLN) are common diseases often detected too late by farmers. This has led to the need to develop a method for the early detection of these diseases so that they can be treated on time. This study investigated the potential of developing deep-learning models for the early detection of maize diseases in Tanzania. The regions where data was collected are Arusha, Kilimanjaro, and Manyara. Data was collected through observation by a plant. The study proposed convolutional neural network (CNN) and vision transformer (ViT) models. Four classes of imagery data were used to train both models: MLN, Healthy, MSV, and WRONG. The results revealed that the ViT model surpassed the CNN model, with 93.1 and 90.96% accuracies, respectively. Further studies should focus on mobile app development and deployment of the model with greater precision for early detection of the diseases mentioned above in real life.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362060/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142112753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial: Explainable AI in Natural Language Processing.","authors":"Somnath Banerjee, David Tomás","doi":"10.3389/frai.2024.1472086","DOIUrl":"https://doi.org/10.3389/frai.2024.1472086","url":null,"abstract":"","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362634/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142112828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}