Frontiers in Artificial Intelligence: Latest Articles

Comparing emotions in ChatGPT answers and human answers to the coding questions on Stack Overflow.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-16 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1393903
Somayeh Fatahi, Julita Vassileva, Chanchal K Roy
Introduction: Recent advances in generative Artificial Intelligence (AI) and Natural Language Processing (NLP) have led to the development of Large Language Models (LLMs) and AI-powered chatbots like ChatGPT, which have numerous practical applications. Notably, these models assist programmers with coding queries, debugging, solution suggestions, and guidance on software development tasks. Despite known issues with the accuracy of ChatGPT's responses, its comprehensive and articulate language continues to attract frequent use. This indicates potential for ChatGPT to support educators and serve as a virtual tutor for students.
Methods: To explore this potential, we conducted a comprehensive analysis comparing the emotional content of ChatGPT answers and human answers to 2,000 questions sourced from Stack Overflow (SO), examining how the emotional tone of AI responses compares to that of human responses.
Results: Our analysis revealed that ChatGPT's answers are generally more positive than human responses, which often exhibit emotions such as anger and disgust. Significant differences in emotional expression were observed between ChatGPT and human responses, particularly for anger, disgust, and joy. Human responses also displayed a broader emotional spectrum than ChatGPT's, suggesting greater emotional variability among humans.
Discussion: The findings highlight a distinct emotional divergence between ChatGPT and human responses, with ChatGPT exhibiting a uniformly positive tone and humans displaying a wider range of emotions. This variance underscores the need for further research into the role of emotional content in AI-human interactions, particularly in educational contexts where emotional nuance can affect learning and communication.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11439875/pdf/
Citations: 0
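Emotion comparisons like this are typically run with a lexicon-based scorer. A minimal sketch with a tiny illustrative lexicon (the study's actual emotion model is not specified in this abstract; real work would use a validated resource such as the NRC emotion lexicon):

```python
from collections import Counter

# Toy emotion lexicon -- these entries are illustrative only.
LEXICON = {
    "thanks": "joy", "great": "joy", "happy": "joy",
    "annoying": "anger", "hate": "anger",
    "ugly": "disgust", "awful": "disgust",
}

def emotion_profile(answers):
    """Count lexicon emotions across a list of answer strings."""
    counts = Counter()
    for text in answers:
        for word in text.lower().split():
            word = word.strip(".,!?")
            if word in LEXICON:
                counts[LEXICON[word]] += 1
    return counts

human = ["I hate this awful API, the docs are ugly."]
bot = ["Great question! Happy to help, thanks for the details."]
print(emotion_profile(human))  # anger/disgust dominate
print(emotion_profile(bot))    # joy dominates
```

Aggregating such profiles over the 2,000 answer pairs would yield per-emotion counts that can then be compared statistically.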
OLTW-TEC: online learning with sliding windows for text classifier ensembles.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-11 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1401126
Khrystyna Lipianina-Honcharenko, Yevgeniy Bodyanskiy, Nataliia Kustra, Andrii Ivasechko
In the digital age, the rapid dissemination of information has elevated the challenge of distinguishing authentic news from disinformation. This challenge is particularly acute in regions experiencing geopolitical tensions, where information plays a pivotal role in shaping public perception and policy. The prevalence of disinformation in the Ukrainian-language information space, intensified by the hybrid war with Russia, necessitates the development of sophisticated tools for its detection and mitigation. Our study introduces the "Online Learning with Sliding Windows for Text Classifier Ensembles" (OLTW-TEC) method, designed to address this urgent need. This research aims to develop and validate an advanced machine learning method capable of dynamically adapting to evolving disinformation tactics, with a focus on a highly accurate, flexible, and efficient system for detecting disinformation in Ukrainian-language texts. OLTW-TEC leverages an ensemble of classifiers combined with a sliding-window technique that continuously updates the model with the most recent data, enhancing its adaptability and accuracy over time. A unique dataset comprising both authentic and fake news items was used to evaluate the method's performance; precision, recall, and F1-score supported a comprehensive analysis of its effectiveness. OLTW-TEC demonstrated exceptional performance, achieving a classification accuracy of 93%. The integration of the sliding-window technique with a classifier ensemble significantly contributed to the system's ability to identify disinformation accurately, making it a robust tool in the ongoing battle against fake news in the Ukrainian context. The method's adaptability to the specifics of the Ukrainian language and to the dynamic nature of information warfare offers valuable insights for developing similar tools for other languages and regions, underscoring the importance of innovative machine learning techniques in combating fake news and paving the way for further research in digital information integrity.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11422347/pdf/
Citations: 0
Dimensions of artificial intelligence on family communication.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-11 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1398960
Nada Mohammed Alfeir
Introduction: Artificial intelligence (AI) has created a plethora of prospects for communication. This study examines the impact of AI dimensions on family communication, aiming to provide valuable insights, uncover potential concerns, and offer recommendations for families and society at large in the digital era.
Method: A convenience sampling technique was used to recruit 300 participants.
Results: A linear regression model fitted to the data showed statistically significant effects for accessibility (p = 0.001), personalization (p = 0.001), and language translation (p = 0.016).
Discussion: Differences between males and females were found for accessibility (p = 0.006) and language translation (p = 0.010), but not for personalization (p = 0.126). Use of multiple AI tools was statistically associated with heightened parental concerns about bias and privacy (p = 0.015) and about safety and dependence (p = 0.049).
Conclusion: The results revealed a lack of knowledge and transparency about the data-storage and privacy policies of AI-enabled communication systems. Overall, AI dimensions had a positive impact on family communication.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11422382/pdf/
Citations: 0
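The reported effects come from a linear regression over survey responses. A minimal sketch of fitting a single-predictor ordinary-least-squares model to hypothetical Likert-scale data (the variables and numbers below are invented for illustration; the study reports only p-values, not coefficients):

```python
def ols_fit(xs, ys):
    """Ordinary least squares for one predictor: y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical 1-5 Likert scores: AI accessibility vs. family-communication rating
access = [1, 2, 3, 4, 5]
comm   = [2, 2, 3, 5, 5]
a, b = ols_fit(access, comm)
print(round(b, 2))  # 0.9 -- positive slope: higher accessibility, higher rating
```

A full analysis would also compute standard errors and p-values for the slope, which is what the study's significance tests report.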
A deep-learning pipeline for the diagnosis and grading of common blinding ophthalmic diseases based on lesion-focused classification model.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-11 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1444136
Zhihuan Li, Junxiong Huang, Jingfang Chen, Jin Zeng, Hong Jiang, Lin Ding, TianZi Zhang, Wen Sun, Rong Lu, Qiuli Zhang, Lizhong Liang
Background: Glaucoma (GLAU), Age-related Macular Degeneration (AMD), Retinal Vein Occlusion (RVO), and Diabetic Retinopathy (DR) are common blinding ophthalmic diseases worldwide.
Purpose: This approach is expected to enhance the early detection and treatment of common blinding ophthalmic diseases, helping reduce the individual and economic burdens associated with these conditions.
Methods: We propose an effective deep-learning pipeline that combines a segmentation model and a classification model for the diagnosis and grading of four common blinding ophthalmic diseases and normal retinal fundus.
Results: In total, 102,786 fundus images of 75,682 individuals were used for training, validation, and external validation. On the internal validation set, the model achieved a micro-averaged Area Under the Receiver Operating Characteristic curve (AUROC) of 0.995. We then fine-tuned the diagnosis model to classify each of the four diseases into early and late stages, achieving AUROCs of 0.597 (GLAU), 0.877 (AMD), 0.972 (RVO), and 0.961 (DR), respectively. To test generalization, we conducted two external validation experiments on the Neimeng and Guangxi cohorts, both of which maintained high accuracy.
Conclusion: Our algorithm provides an accurate artificial-intelligence diagnosis pipeline for common blinding ophthalmic diseases based on lesion-focused fundus images. It overcomes the low accuracy of traditional classification methods based on raw retinal images and generalizes well to diverse cases from different regions.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11422385/pdf/
Citations: 0
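The headline metric here is AUROC. For a single binary task it can be computed directly with the rank (Mann-Whitney) formulation; a small sketch with made-up labels and scores:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney formulation: the probability that a
    random positive is scored above a random negative (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 0, 0, 1, 0]          # made-up ground-truth labels
s = [0.9, 0.8, 0.85, 0.3, 0.95, 0.1]  # made-up model scores
print(round(auroc(y, s), 3))  # 0.889
```

The paper's "micro" AUROC pools the per-class binary decisions of the multi-class model before applying this same calculation.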
Large language model triaging of simulated nephrology patient inbox messages.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-09 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1452469
Justin H Pham, Charat Thongprayoon, Jing Miao, Supawadee Suppadungsuk, Priscilla Koirala, Iasmina M Craici, Wisit Cheungpasitporn
Background: Efficient triage of patient communications is crucial for timely medical attention and improved care. This study evaluates ChatGPT's accuracy in categorizing nephrology patient inbox messages, assessing its potential in outpatient settings.
Methods: One hundred and fifty simulated patient inbox messages were created based on cases typically encountered in everyday practice at a nephrology outpatient clinic. These messages were triaged as non-urgent, urgent, or emergent by two nephrologists, then submitted to ChatGPT-4 for independent triage into the same categories. The inquiry process was performed twice, two weeks apart. ChatGPT responses were graded as correct (agreement with physicians), overestimation (higher priority), or underestimation (lower priority).
Results: In the first trial, ChatGPT correctly triaged 140 (93%) messages, overestimated the priority of 4 (3%), and underestimated the priority of 6 (4%). In the second trial, it correctly triaged 140 (93%) messages, overestimated the priority of 9 (6%), and underestimated the priority of 1 (1%). Accuracy did not depend on the urgency level of the message (p = 0.19). The internal agreement of ChatGPT responses was 92%, with an intra-rater Kappa score of 0.88.
Conclusion: ChatGPT-4 demonstrated high accuracy in triaging nephrology patient messages, highlighting the potential for AI-driven triage systems to enhance operational efficiency and improve patient care in outpatient clinics.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11417033/pdf/
Citations: 0
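The intra-rater agreement figure (92% raw agreement, Kappa 0.88) is Cohen's kappa computed between the two ChatGPT trials. A sketch of that calculation on a small made-up set of triage labels:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa between two rating passes over the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # chance agreement from the marginal label frequencies
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (po - pe) / (1 - pe)

# Made-up triage labels for six messages, rated twice
trial1 = ["urgent", "urgent", "emergent", "non-urgent", "non-urgent", "urgent"]
trial2 = ["urgent", "urgent", "emergent", "non-urgent", "urgent", "urgent"]
print(round(cohens_kappa(trial1, trial2), 2))  # 0.71
```

Kappa discounts the agreement expected by chance, which is why it is lower than the raw 5/6 agreement in this toy example.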
A modified U-Net to detect real sperms in videos of human sperm cell.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-09 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1376546
Hanan Saadat, Mohammad Mehdi Sepehri, Mahdi-Reza Borna, Behnam Maleki
Background: This study addresses sperm segmentation, a pivotal component of male infertility diagnosis, exploring the efficacy of diverse architectural configurations coupled with various encoders and using frames from the VISEM dataset for evaluation.
Methods: Multiple deep-learning architectures, each paired with distinct encoders, were examined through extensive experimentation on the VISEM dataset.
Results: While each model configuration exhibited distinct strengths and weaknesses, UNet++ with a ResNet34 encoder emerged as a top performer, demonstrating exceptional accuracy in distinguishing sperm cells from non-sperm cells. However, challenges persist in accurately identifying closely adjacent sperm cells. These findings provide valuable insights for improving automated sperm segmentation in male infertility diagnosis.
Discussion: The study underscores the importance of selecting model combinations suited to specific diagnostic requirements and highlights the difficulty of separating closely adjacent sperm cells.
Conclusion: This research advances automated sperm segmentation for male infertility diagnosis, showcasing the potential of deep-learning techniques. Future work should aim to improve accuracy in scenarios involving close proximity between sperm cells, ultimately improving clinical sperm analysis.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11418809/pdf/
Citations: 0
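Segmentation models such as UNet++ are usually scored with overlap metrics like the Dice coefficient. A minimal sketch on flat binary masks (the abstract does not detail the paper's exact evaluation metrics, so this is illustrative):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat lists of 0/1:
    2*|A∩B| / (|A| + |B|)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0]  # predicted sperm pixels
truth = [0, 1, 1, 0, 0, 0]  # ground-truth annotation
print(round(dice(pred, truth), 2))  # 0.8
```

The "closely adjacent cells" failure mode the authors describe shows up in per-instance Dice: two touching cells merged into one predicted blob score well on pixel overlap but poorly on instance counts.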
Transparency and precision in the age of AI: evaluation of explainability-enhanced recommendation systems.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-05 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1410790
Jaime Govea, Rommel Gutierrez, William Villegas-Ch
In today's information age, recommender systems have become an essential tool for filtering and personalizing the massive data flow reaching users. However, the increasing complexity and opaque nature of these systems have raised concerns about transparency and user trust: a lack of explainability in recommendations can lead to ill-informed decisions and decreased confidence in these advanced systems. Our study addresses this problem by integrating explainability techniques into recommendation systems to improve both the precision of the recommendations and their transparency. We implemented and evaluated recommendation models on the MovieLens and Amazon datasets, applying explainability methods such as LIME and SHAP to disentangle the model decisions. The results showed significant improvements in recommendation precision and a notable increase in users' ability to understand and trust the system's suggestions; for example, we observed a 3% increase in recommendation precision when incorporating these explainability techniques, demonstrating their added value for both performance and user experience.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11410769/pdf/
Citations: 0
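LIME-style explanations attribute a prediction to input features by perturbing them and observing the score change. A toy leave-one-feature-out sketch against a hypothetical weighted-sum recommender (the feature names and weights are invented; the real LIME library fits a local surrogate model over many random perturbations rather than simple deletion):

```python
def feature_importance(model, features):
    """LIME-flavoured sketch: importance = score drop when a feature is removed."""
    base = model(features)
    importance = {}
    for name in features:
        reduced = {k: v for k, v in features.items() if k != name}
        importance[name] = base - model(reduced)
    return importance

# Hypothetical recommender score: weighted sum of user-item match signals
WEIGHTS = {"genre_match": 0.6, "recency": 0.1, "popularity": 0.3}

def toy_recommender(features):
    return sum(WEIGHTS[k] * v for k, v in features.items())

imp = feature_importance(toy_recommender,
                         {"genre_match": 1.0, "recency": 0.5, "popularity": 0.2})
print(max(imp, key=imp.get))  # genre_match explains most of the score
```

Surfacing such attributions alongside each recommendation ("suggested because it matches your genre history") is what gives users the transparency the study measures.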
Noise-induced modality-specific pretext learning for pediatric chest X-ray image classification.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-05 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1419638
Sivaramakrishnan Rajaraman, Zhaohui Liang, Zhiyun Xue, Sameer Antani
Introduction: Deep learning (DL) has significantly advanced medical image classification, but it often relies on transfer learning (TL) from models pretrained on large, generic non-medical image datasets such as ImageNet. Medical images, however, possess unique visual characteristics that such general models may not adequately capture.
Methods: This study examines the effectiveness of modality-specific pretext learning, strengthened by image denoising and deblurring, in enhancing the classification of pediatric chest X-ray (CXR) images into those exhibiting no findings (i.e., normal lungs) or cardiopulmonary disease manifestations. Specifically, we use a VGG-16-Sharp-U-Net architecture and leverage its encoder, in conjunction with a classification head, to distinguish normal from abnormal pediatric CXR findings. We benchmark this against the traditional TL approach, viz. the VGG-16 model pretrained only on ImageNet. Performance measures are balanced accuracy, sensitivity, specificity, F-score, Matthews correlation coefficient (MCC), Kappa statistic, and Youden's index.
Results: Models developed from CXR modality-specific pretext encoders substantially outperform the ImageNet-only pretrained baseline, achieving significantly higher sensitivity (p < 0.05) with marked improvements in balanced accuracy, F-score, MCC, Kappa statistic, and Youden's index. A novel attention-based fuzzy ensemble of the pretext-learned models further improves performance across these metrics (balanced accuracy: 0.6376; sensitivity: 0.4991; F-score: 0.5102; MCC: 0.2783; Kappa: 0.2782; Youden's index: 0.2751), compared to the baseline (balanced accuracy: 0.5654; sensitivity: 0.1983; F-score: 0.2977; MCC: 0.1998; Kappa: 0.1599; Youden's index: 0.1327).
Discussion: The superior results of CXR modality-specific pretext learning and its ensemble underscore its potential as a viable alternative to conventional ImageNet pretraining for medical image classification, and motivate further exploration of medical modality-specific TL techniques for various medical imaging applications.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11410760/pdf/
Citations: 0
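The reported metrics are internally consistent: balanced accuracy is the mean of sensitivity and specificity, and Youden's index is sensitivity + specificity − 1. Recovering the ensemble's Youden's index from its reported balanced accuracy and sensitivity:

```python
def youden_index(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1

# Reported ensemble values: balanced accuracy 0.6376, sensitivity 0.4991.
# Since balanced accuracy = (sensitivity + specificity) / 2, specificity follows:
sens = 0.4991
spec = 2 * 0.6376 - sens  # 0.7761
print(round(youden_index(sens, spec), 4))  # 0.2752, matching the reported 0.2751 up to rounding
```

This cross-check is a quick way to catch transcription errors when several threshold-dependent metrics are reported together.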
MixTrain: accelerating DNN training via input mixing.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-04 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1387936
Sarada Krithivasan, Sanchari Sen, Swagath Venkataramani, Anand Raghunathan
Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. An important factor contributing to long training times is the increasing dataset complexity required to reach state-of-the-art performance in real-world applications. To address this challenge, we explore input mixing, where multiple inputs are combined into a single composite input with an associated composite label for training. The goal is for training on the mixed input to achieve a similar effect as training separately on each of the constituent inputs, so that fewer inputs (or mini-batches) are processed in each epoch, proportionally reducing training time. We find that naive input mixing leads to a considerable drop in learning performance and model accuracy due to interference between the forward/backward propagation of the mixed inputs. We propose two strategies to address this challenge and realize training speedups from input mixing with minimal impact on accuracy. First, we reduce inter-input interference by exploiting the spatial separation between the features of the constituent inputs in the network's intermediate representations, and we adaptively vary the mixing ratio of constituent inputs based on their loss in previous epochs. Second, we propose heuristics to automatically identify the subset of the training dataset that is subject to mixing in each epoch. Across ResNets of varying depth, MobileNetV2, and two Vision Transformer networks, we obtain up to 1.6× and 1.8× training speedups on the ImageNet and CIFAR-10 datasets, respectively, on an Nvidia RTX 2080Ti GPU, with negligible loss in classification accuracy.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11443600/pdf/
Citations: 0
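Input mixing builds one composite example from several inputs. A minimal mixup-style sketch (convex combination of inputs with a soft composite label); the paper's method additionally exploits spatial separation of features in intermediate representations and adapts the mixing ratio per epoch, which this sketch omits:

```python
def mix_inputs(x1, x2, y1, y2, lam):
    """Combine two training examples into one composite input and label.
    lam is the mixing ratio assigned to the first example."""
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]  # element-wise blend
    y = {y1: lam, y2: 1 - lam}  # soft label carrying both classes
    return x, y

# Two toy 2-pixel "images" with different labels, mixed 70/30
x, y = mix_inputs([1.0, 0.0], [0.0, 1.0], "cat", "dog", lam=0.7)
print(x, y)  # roughly [0.7, 0.3] and {'cat': 0.7, 'dog': 0.3}
```

Training on the composite (x, y) replaces two forward/backward passes with one, which is where the reported epoch-time reduction comes from.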
Artificial intelligence in respiratory care: knowledge, perceptions, and practices-a cross-sectional study.
IF 3.0
Frontiers in Artificial Intelligence Pub Date: 2024-09-03 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1451963
Jithin K Sreedharan, Asma Alharbi, Amal Alsomali, Gokul Krishna Gopalakrishnan, Abdullah Almojaibel, Rawan Alajmi, Ibrahim Albalawi, Musallam Alnasser, Meshal Alenezi, Abdullah Alqahtani, Mohammed Alahmari, Eidan Alzahrani, Manjush Karthika
Background: Artificial intelligence (AI) is reshaping healthcare, particularly respiratory medicine and critical care, by utilizing big and synthetic data to improve diagnostic accuracy and therapeutic benefit. This survey evaluated the knowledge, perceptions, and practices of respiratory therapists (RTs) regarding AI, to support the effective incorporation of these technologies into clinical practice.
Methods: The study, approved by the institutional review board, targeted RTs working in the Kingdom of Saudi Arabia. A validated questionnaire collected responses from 448 RTs. Descriptive statistics, thematic analysis, Fisher's exact test, and the chi-square test were used to evaluate the data.
Results: The survey revealed a nearly equal gender distribution (51% female, 49% male). Most respondents were in the 20-25 age group (54%), held bachelor's degrees (69%), and had 0-5 years of experience (73%). While 28% had some knowledge of AI, only 8.5% had practical experience. Significant gender disparities in AI knowledge were noted (p < 0.001). Key findings included 59% advocating for AI basics in the curriculum, 51% believing AI would play a vital role in respiratory care, and 41% calling for specialized AI personnel. Major challenges identified were knowledge deficiencies (23%), skill enhancement (23%), and limited access to training (17%).
Conclusion: This study highlights differences in knowledge and perceptions of AI among respiratory care professionals, underlining its recognized significance in the field. Tailored education and strategic planning are crucial for enhancing the quality of respiratory care with AI integration, and addressing these gaps is essential for realizing AI's full potential in advancing respiratory care practice.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11405306/pdf/
Citations: 0
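Associations such as the reported gender disparity in AI knowledge are tested with a chi-square test on a contingency table. A pure-Python sketch of the Pearson statistic for a hypothetical 2x2 table (the counts below are invented; the study reports only the resulting p-values):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    given as [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # expected counts under independence: row_total * col_total / n
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    observed = [[a, b], [c, d]]
    return sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))

# Hypothetical counts for 448 respondents: rows = male/female,
# columns = some AI knowledge / none
stat = chi_square_2x2([[80, 140], [45, 183]])
print(round(stat, 2))  # well above the 3.84 critical value at p = 0.05, df = 1
```

Comparing the statistic against the chi-square distribution with one degree of freedom yields the p-value the study reports; Fisher's exact test is preferred when expected counts are small.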