Frontiers in Artificial Intelligence: Latest Articles

Artificial intelligence and machine learning applications for cultured meat.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-24 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1424012
Michael E Todhunter, Sheikh Jubair, Ruchika Verma, Rikard Saqe, Kevin Shen, Breanna Duffy
Abstract: Cultured meat has the potential to provide a complementary meat industry with reduced environmental, ethical, and health impacts. However, major technological challenges remain that require time- and resource-intensive research and development efforts. Machine learning has the potential to accelerate cultured meat technology by streamlining experiments, predicting optimal results, and reducing experimentation time and resources. However, the use of machine learning in cultured meat is in its infancy. This review covers the work available to date on the use of machine learning in cultured meat and explores future possibilities. We address four major areas of cultured meat research and development: establishing cell lines, cell culture media design, microscopy and image analysis, and bioprocessing and food processing optimization. In addition, we include a survey of datasets relevant to cultured meat research. This review aims to provide the foundation necessary for both cultured meat and machine learning scientists to identify research opportunities at the intersection of the two fields.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11460582/pdf/
Citations: 0
Towards enhanced creativity in fashion: integrating generative models with hybrid intelligence.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-23 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1460217
Alexander Ryjov, Vagan Kazaryan, Andrey Golub, Alina Egorova
Abstract:
Introduction: This study explores the role and potential of large language models (LLMs) and generative intelligence in the fashion industry. These technologies are reshaping traditional methods of design, production, and retail, leading to innovation, product personalization, and enhanced customer interaction.
Methods: Our research analyzes the current applications and limitations of LLMs in fashion, identifying challenges such as the need for better spatial understanding and design detail processing. We propose a hybrid intelligence approach to address these issues.
Results: We find that while LLMs offer significant potential, their integration into fashion workflows requires improvements in understanding spatial parameters and creating tools for iterative design.
Discussion: Future research should focus on overcoming these limitations and developing hybrid intelligence solutions to maximize the potential of LLMs in the fashion industry.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11468243/pdf/
Citations: 0
Image restoration in frequency space using complex-valued CNNs.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-23 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1353873
Zafran Hussain Shah, Marcel Müller, Wolfgang Hübner, Henning Ortkrass, Barbara Hammer, Thomas Huser, Wolfram Schenck
Abstract: Real-valued convolutional neural networks (RV-CNNs) in the spatial domain have outperformed classical approaches in many image restoration tasks such as image denoising and super-resolution. Fourier analysis of the results produced by these spatial-domain models reveals their limitations in properly processing the full frequency spectrum; this lack of complete spectral information can result in missing textural and structural elements. To address this limitation, we explore the potential of complex-valued convolutional neural networks (CV-CNNs) for image restoration tasks. CV-CNNs have shown remarkable performance in tasks such as image classification and segmentation, but they have not been fully investigated for image restoration problems in the frequency domain. Here, we propose several novel CV-CNN-based models equipped with complex-valued attention gates for image denoising and super-resolution in the frequency domain. We show that our CV-CNN-based models outperform their real-valued counterparts for denoising super-resolution structured illumination microscopy (SR-SIM) and conventional image datasets. Furthermore, the experimental results show that our models preserve the frequency spectrum better than their real-valued counterparts in the denoising task. Based on these findings, we conclude that CV-CNN-based methods provide a plausible and beneficial deep learning approach for image restoration in the frequency domain.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11456741/pdf/
Citations: 0
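The models above hinge on convolution over complex-valued frequency data. As a rough sketch of that building block (a hypothetical illustration, not the authors' code; all layer sizes and names are invented), a complex 2D convolution can be composed from two real convolutions via (a+ib)(c+id) = (ac-bd) + i(ad+bc) and applied to an image's Fourier transform:

```python
# Hypothetical sketch: complex-valued 2D convolution in frequency space,
# built from real convolutions so no complex conv kernel is needed.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """(a+ib) * (w_r + i*w_i) realized with two real Conv2d layers."""
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, z):                       # z: complex, shape (B, C, H, W)
        a, b = z.real, z.imag
        real = self.conv_r(a) - self.conv_i(b)  # ac - bd
        imag = self.conv_r(b) + self.conv_i(a)  # ad + bc
        return torch.complex(real, imag)

x = torch.randn(1, 1, 64, 64)                     # dummy grayscale image
spectrum = torch.fft.fft2(x)                      # to the frequency domain
layer = ComplexConv2d(1, 8, kernel_size=3, padding=1)
restored = torch.fft.ifft2(layer(spectrum)).real  # back to the spatial domain
print(restored.shape)                             # torch.Size([1, 8, 64, 64])
```

Realizing the complex product with pairs of real convolutions is a common trick that sidesteps the need for complex-dtype support in the convolution kernels themselves.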
A global model-agnostic rule-based XAI method based on Parameterized Event Primitives for time series classifiers.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-20 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1381921
Ephrem Tibebe Mekonnen, Luca Longo, Pierpaolo Dondio
Abstract: Time series classification is a challenging research area in which machine learning and deep learning techniques have shown remarkable performance. However, these models are often seen as black boxes due to their minimal interpretability. While there is a plethora of eXplainable AI (XAI) methods designed to elucidate the functioning of models trained on image and tabular data, adapting these methods to explain deep-learning-based time series classifiers is not straightforward because of the temporal nature of time series data. This research proposes a novel global post-hoc explainable method for unearthing the key time steps behind the inferences made by deep-learning-based time series classifiers. The approach generates a decision-tree graph, a specific set of rules that can be seen as explanations, potentially enhancing interpretability. The methodology involves two major phases: (1) training and evaluating deep-learning-based time series classification models, and (2) extracting parameterized primitive events, such as increasing, decreasing, local max, and local min, from each instance of the evaluation set and clustering such events to extract prototypical ones. These prototypical primitive events are then used as input to a decision-tree classifier trained to fit the model predictions of the test set rather than the ground-truth data. Experiments were conducted on diverse real-world datasets from the UCR archive, employing metrics such as accuracy, fidelity, robustness, number of nodes, and depth of the extracted rules. The findings indicate that this global post-hoc method can improve the global interpretability of complex time series classification models.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11449859/pdf/
Citations: 0
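As a concrete (and assumed) illustration of phase (2) above, the four named primitive events can be extracted from a single series with a few lines of NumPy; the paper's exact parameterization and the subsequent clustering into prototypes are not reproduced here:

```python
# Hypothetical sketch: extracting the primitive events named in the abstract
# (increasing, decreasing, local max, local min) from a 1D time series.
import numpy as np

def extract_primitive_events(series):
    """Return (event_type, index, value) tuples for interior time steps."""
    events = []
    diff = np.diff(series)
    for t in range(1, len(series) - 1):
        if diff[t - 1] > 0 and diff[t] < 0:
            events.append(("local_max", t, series[t]))
        elif diff[t - 1] < 0 and diff[t] > 0:
            events.append(("local_min", t, series[t]))
        elif diff[t - 1] > 0 and diff[t] > 0:
            events.append(("increasing", t, series[t]))
        elif diff[t - 1] < 0 and diff[t] < 0:
            events.append(("decreasing", t, series[t]))
    return events

x = np.sin(np.linspace(0, 4 * np.pi, 50))
print(extract_primitive_events(x)[:3])
```

In the paper, prototypical events obtained by clustering such primitives feed a decision tree fit to the classifier's predictions rather than the ground truth, making the tree a global surrogate of the black-box model.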
MLGCN: an ultra efficient graph convolutional neural model for 3D point cloud analysis.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-20 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1439340
Mohammad Khodadad, Ali Shiraee Kasmaee, Hamidreza Mahyar, Morteza Rezanejad
Abstract: With the rapid advancement of 3D acquisition technologies, 3D sensors such as LiDARs, 3D scanners, and RGB-D cameras have become increasingly accessible and cost-effective. These sensors generate 3D point cloud data that require efficient algorithms for tasks such as 3D model classification and segmentation. While deep learning techniques have proven effective in these areas, existing models often rely on complex architectures, leading to high computational costs that are impractical for real-time applications like augmented reality and robotics. In this work, we propose the Multi-level Graph Convolutional Neural Network (MLGCN), an ultra-efficient model for 3D point cloud analysis. The MLGCN model utilizes shallow Graph Neural Network (GNN) blocks to extract features at various spatial locality levels, leveraging precomputed KNN graphs shared across GCN blocks. This approach significantly reduces computational overhead and memory usage, making the model well-suited for deployment on low-memory and low-CPU devices. Despite its efficiency, MLGCN achieves competitive performance in object classification and part segmentation tasks, demonstrating results comparable to state-of-the-art models while requiring up to a thousand times fewer floating-point operations and significantly less storage. The contributions of this paper include the introduction of a lightweight, multi-branch graph-based network for 3D shape analysis, the demonstration of the model's efficiency in both computation and storage, and a thorough theoretical and experimental evaluation of the model's performance. We also conduct ablation studies to assess the impact of different branches within the model, providing valuable insights into the role of specific components.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11449895/pdf/
Citations: 0
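The key efficiency idea, precomputing one KNN graph and sharing it across all GCN blocks, can be sketched as follows (an assumed illustration with invented dimensions and aggregation, not the published MLGCN architecture):

```python
# Hypothetical sketch: one KNN graph computed once, reused by every block.
import torch

def knn_indices(points, k):
    """points: (N, 3) -> (N, k) neighbor indices, self excluded."""
    dists = torch.cdist(points, points)              # (N, N) pairwise distances
    return dists.topk(k + 1, largest=False).indices[:, 1:]

def gcn_block(features, idx, mlp):
    """Pointwise MLP, then max-aggregation over each point's neighbors."""
    neighbors = mlp(features)[idx]                   # (N, k, F')
    return neighbors.max(dim=1).values               # (N, F')

pts = torch.randn(1024, 3)                           # a toy point cloud
idx = knn_indices(pts, k=16)                         # computed once ...
feats = pts
for mlp in (torch.nn.Linear(3, 32), torch.nn.Linear(32, 64)):
    feats = gcn_block(feats, idx, mlp)               # ... shared by all blocks
print(feats.shape)                                   # torch.Size([1024, 64])
```

Because the O(N²) neighbor search happens once rather than per block, each block reduces to a gather plus a pointwise MLP, which is where savings of this kind come from.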
David vs. Goliath: comparing conventional machine learning and a large language model for assessing students' concept use in a physics problem.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-18 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1408817
Fabian Kieser, Paul Tschisgale, Sophia Rauh, Xiaoyu Bai, Holger Maus, Stefan Petersen, Manfred Stede, Knut Neumann, Peter Wulff
Abstract: Large language models have been shown to excel in many different tasks across disciplines and research sites. They provide novel opportunities to enhance educational research and instruction, for example in assessment. However, these methods also have fundamental limitations relating, among others, to hallucinated knowledge, explainability of model decisions, and resource expenditure. Conventional machine learning algorithms can therefore be more convenient for specific research problems because they allow researchers more control over their research. Yet the circumstances in which either conventional machine learning or large language models are the preferable choice are not well understood. This study asks to what extent conventional machine learning algorithms or a recently advanced large language model perform better in assessing students' concept use in a physics problem-solving task. We found that the conventional machine learning algorithms in combination outperformed the large language model. Model decisions were then analyzed through closer examination of the models' classifications. We conclude that in specific contexts, conventional machine learning can supplement large language models, especially when labeled data are available.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11445140/pdf/
Citations: 0
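The "conventional" side of such a comparison is typically a feature-based text classifier trained on labeled student answers. A hypothetical scikit-learn baseline in that spirit (the study's actual algorithms, features, and data are not shown here):

```python
# Hypothetical sketch: TF-IDF features + logistic regression to flag whether
# a student answer uses a target physics concept. Data is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

answers = [
    "energy is conserved, so the speed follows from the drop height",
    "the ball moves because it was pushed at the start",
]
uses_concept = [1, 0]  # 1 = answer uses energy conservation

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(answers, uses_concept)
print(clf.predict(["conservation of energy gives the final velocity"]))
```

Unlike an LLM, a model like this is cheap to train, deterministic, and inspectable through its learned coefficients, which matches the control argument made in the abstract.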
Investigating the contribution of image time series observations to cauliflower harvest-readiness prediction.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-18 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1416323
Jana Kierdorf, Timo Tjarden Stomberg, Lukas Drees, Uwe Rascher, Ribana Roscher
Abstract: Cauliflower cultivation is subject to high quality-control criteria at the point of sale, which underlines the importance of accurate harvest timing. Using time series data for plant phenotyping can provide insights into the dynamic development of cauliflower and allow more accurate predictions of when the crop is ready for harvest than single-time observations. However, data acquisition on a daily or weekly basis is resource-intensive, making the selection of acquisition days highly important. We investigate which data acquisition days and development stages positively affect model accuracy, to gain insight into prediction-relevant observation days and aid future data acquisition planning. We analyze harvest-readiness using the cauliflower image time series of the GrowliFlower dataset. We use an adjusted ResNet18 classification model with positional encoding of the data acquisition dates to add implicit information about development. The explainable machine learning approach GroupSHAP quantifies the contribution of each time point, and time points with the lowest mean absolute contribution are excluded from the time series to determine their effect on model accuracy. Using image time series rather than single time points, we achieve a 4% increase in accuracy. GroupSHAP allows the selection of time points that positively affect model accuracy: by using seven selected time points instead of all 11, accuracy improves by an additional 4%, to an overall 89.3%. The selection of time points may therefore reduce future data collection effort.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11445755/pdf/
Citations: 0
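One plausible reading of "positional encoding of the data acquisition dates" is the standard sinusoidal encoding concatenated with the image features before the classification head. The sketch below is an assumption for illustration; the paper's dimensions and fusion scheme may differ:

```python
# Hypothetical sketch: sinusoidal encoding of the acquisition day, fused with
# pooled image features (e.g., from a ResNet18 backbone).
import torch

def date_encoding(day_of_year, dim=8):
    """Classic sinusoidal positional encoding of a scalar date."""
    i = torch.arange(dim // 2, dtype=torch.float32)
    freqs = 1.0 / (10000.0 ** (2 * i / dim))
    angles = day_of_year * freqs
    return torch.cat([torch.sin(angles), torch.cos(angles)])

img_feats = torch.randn(512)                 # stand-in for pooled CNN features
enc = date_encoding(day_of_year=142.0)       # the image's acquisition day
fused = torch.cat([img_feats, enc])          # input to the classification head
print(fused.shape)                           # torch.Size([520])
```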
Uncertainty quantification in multi-class image classification using chest X-ray images of COVID-19 and pneumonia.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-18 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1410841
Albert Whata, Katlego Dibeco, Kudakwashe Madzima, Ibidun Obagbuwa
Abstract: This paper investigates uncertainty quantification (UQ) techniques in multi-class classification of chest X-ray images (COVID-19, pneumonia, and normal). We evaluate Bayesian Neural Networks (BNNs) and deep neural networks with UQ (DNNs with UQ), including Monte Carlo dropout, Ensemble Bayesian Neural Network (EBNN), and Ensemble Monte Carlo (EMC) dropout, across different evaluation metrics. Our analysis reveals that DNNs with UQ, especially EBNN and EMC dropout, consistently outperform BNNs. For example, in Class 0 vs. All, EBNN achieved a UAcc of 92.6%, a UAUC-ROC of 95.0%, and a Brier Score of 0.157, significantly surpassing BNN's performance. Similarly, EMC dropout excelled in Class 1 vs. All with a UAcc of 83.5%, a UAUC-ROC of 95.8%, and a Brier Score of 0.165. These advanced models demonstrated higher accuracy, better discriminative capability, and more accurate probabilistic predictions. Our findings highlight the efficacy of DNNs with UQ in enhancing model reliability and interpretability, making them highly suitable for critical healthcare applications like chest X-ray image classification.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11445153/pdf/
Citations: 0
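Of the techniques named above, Monte Carlo dropout is the most compact to sketch: keep dropout active at inference, average the softmax outputs of T stochastic passes, and score uncertainty with predictive entropy. The model below is a stand-in, not the paper's network:

```python
# Hypothetical sketch of Monte Carlo dropout for multi-class UQ.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(128, 3))  # 3 classes

def mc_dropout_predict(model, x, T=50):
    model.train()                            # keeps Dropout stochastic
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(T)])
    mean = probs.mean(dim=0)                 # predictive class probabilities
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy                     # prediction + uncertainty score

x = torch.randn(4, 1, 64, 64)                # batch of dummy X-ray crops
mean, uncertainty = mc_dropout_predict(model, x)
print(mean.shape, uncertainty)               # (4, 3) probs, per-image entropy
```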
Evolving intellectual property landscape for AI-driven innovations in the biomedical sector: opportunities in stable IP regime for shared success.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-17 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1372161
Abhijit Poddar, S R Rao
Abstract: Artificial intelligence (AI) has revolutionized the biomedical sector in advanced diagnosis, treatment, and personalized medicine. While these AI-driven innovations promise vast benefits for patients and service providers, they also raise complex intellectual property (IP) challenges due to the inherent nature of AI technology. In this review, we discuss the multifaceted impact of AI on IP within the biomedical sector, exploring implications in areas like drug research and discovery, personalized medicine, and medical diagnostics. We dissect critical issues surrounding AI inventorship, patent and copyright protection for AI-generated works, data ownership, and licensing. To provide context, we analyze the current IP legislative landscape in the United States, EU, China, and India, highlighting convergences, divergences, and precedent-setting cases relevant to the biomedical sector. Recognizing the need for harmonization, we review current developments and discuss a way forward. We advocate for a collaborative approach that convenes policymakers, clinicians, researchers, industry players, legal professionals, and patient advocates to navigate this dynamic landscape, create a stable IP regime, and unlock the full potential of AI for enhanced healthcare delivery and improved patient outcomes.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11442499/pdf/
Citations: 0
Comparing emotions in ChatGPT answers and human answers to the coding questions on Stack Overflow.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-16 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1393903
Somayeh Fatahi, Julita Vassileva, Chanchal K Roy
Abstract:
Introduction: Recent advances in generative artificial intelligence (AI) and natural language processing (NLP) have led to the development of large language models (LLMs) and AI-powered chatbots like ChatGPT, which have numerous practical applications. Notably, these models assist programmers with coding queries, debugging, solution suggestions, and guidance on software development tasks. Despite known issues with the accuracy of ChatGPT's responses, its comprehensive and articulate language continues to attract frequent use, indicating potential for ChatGPT to support educators and serve as a virtual tutor for students.
Methods: To explore this potential, we conducted a comprehensive analysis comparing the emotional content in responses from ChatGPT and human answers to 2,000 questions sourced from Stack Overflow (SO). The emotional aspects of the answers were examined to understand how the emotional tone of AI responses compares to that of human responses.
Results: Our analysis revealed that ChatGPT's answers are generally more positive than human answers, which often exhibit emotions such as anger and disgust. Significant differences were observed in emotional expression between ChatGPT and human responses, particularly for anger, disgust, and joy. Human responses displayed a broader emotional spectrum than ChatGPT, suggesting greater emotional variability among humans.
Discussion: The findings highlight a distinct emotional divergence between ChatGPT and human responses, with ChatGPT exhibiting a uniformly more positive tone and humans displaying a wider range of emotions. This variance underscores the need for further research into the role of emotional content in AI and human interactions, particularly in educational contexts where emotional nuances can impact learning and communication.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11439875/pdf/
Citations: 0
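At its core, a comparison like this tests whether two distributions of emotion labels differ. Assuming the labels have already been produced by some upstream emotion classifier (the study's own labeling pipeline is not reproduced), a minimal sketch of such a test:

```python
# Hypothetical sketch: chi-square test on emotion-label counts from two
# sources. The labels below are made up for illustration.
from collections import Counter
from scipy.stats import chi2_contingency

chatgpt_labels = ["joy", "joy", "neutral", "joy", "neutral", "joy"]
human_labels = ["anger", "joy", "disgust", "neutral", "anger", "joy"]

emotions = sorted(set(chatgpt_labels) | set(human_labels))
table = [[Counter(chatgpt_labels)[e] for e in emotions],
         [Counter(human_labels)[e] for e in emotions]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # small p: the distributions differ
```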