Informatics | Pub Date: 2024-07-26 | DOI: 10.3390/informatics11030054
Moonkyoung Jang
{"title":"AI Literacy and Intention to Use Text-Based GenAI for Learning: The Case of Business Students in Korea","authors":"Moonkyoung Jang","doi":"10.3390/informatics11030054","DOIUrl":"https://doi.org/10.3390/informatics11030054","url":null,"abstract":"With the increasing use of large-scale language model-based AI tools in modern learning environments, it is important to understand students’ motivations, experiences, and contextual influences. These tools offer new support dimensions for learners, enhancing academic achievement and providing valuable resources, but their use also raises ethical and social issues. In this context, this study aims to systematically identify factors influencing the usage intentions of text-based GenAI tools among undergraduates. By modifying the core variables of the Unified Theory of Acceptance and Use of Technology (UTAUT) with AI literacy, a survey was designed to measure GenAI users’ intentions to collect participants’ opinions. The survey, conducted among business students at a university in South Korea, gathered 239 responses during March and April 2024. Data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) with SmartPLS software (Ver. 4.0.9.6). The findings reveal that performance expectancy significantly affects the intention to use GenAI, while effort expectancy does not. In addition, AI literacy and social influence significantly influence performance, effort expectancy, and the intention to use GenAI. This study provides insights into determinants affecting GenAI usage intentions, aiding the development of effective educational strategies and policies to support ethical and beneficial AI use in academic settings.","PeriodicalId":507941,"journal":{"name":"Informatics","volume":"53 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141798692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Informatics | Pub Date: 2024-07-23 | DOI: 10.3390/informatics11030053
I. Akpan, Onyebuchi Felix Offodile, A. Akpanobong, Y. Kobara
{"title":"A Comparative Analysis of Virtual Education Technology, E-Learning Systems Research Advances, and Digital Divide in the Global South","authors":"I. Akpan, Onyebuchi Felix Offodile, A. Akpanobong, Y. Kobara","doi":"10.3390/informatics11030053","DOIUrl":"https://doi.org/10.3390/informatics11030053","url":null,"abstract":"This pioneering study evaluates the digital divide and advances in virtual education (VE) and e-learning research in the Global South Countries (GSCs). Using metadata from bibliographic and World Bank data on research and development (R&D), we conduct quantitative bibliometric performance analyses and evaluate the connection between R&D expenditures on VE/e-learning research advances in GSCs. The results show that ‘East Asia and the Pacific’ (EAP) spent significantly more on (R&D) and achieved the highest scientific literature publication (SLP), with significant impacts. Other GSCs’ R&D expenditure was flat until 2020 (during COVID-19), when R&D funding increased, achieving a corresponding 42% rise in SLPs. About 67% of ‘Arab States’ (AS) SLPs and 60% of citation impact came from SLPs produced from global north and other GSCs regions, indicating high dependence. Also, 51% of high-impact SLPs were ‘Multiple Country Publications’, mainly from non-GSC institutions, indicating high collaboration impact. The EAP, AS, and ‘South Asia’ (SA) regions experienced lower disparity. In contrast, the less developed countries (LDCs), including ‘Sub-Sahara Africa’, ‘Latin America and the Caribbean’, and ‘Europe (Eastern) and Central Asia’, showed few dominant countries with high SLPs and higher digital divides. We advocate for increased educational research funding to enhance innovative R&D in GSCs, especially in LDCs.","PeriodicalId":507941,"journal":{"name":"Informatics","volume":"88 19","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141812561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Informatics | Pub Date: 2024-07-22 | DOI: 10.3390/informatics11030052
Parya Fathi, Mita Bhattacharya, Sankar Bhattacharya, Nemai Karmakar
{"title":"Use of Chipless Radio Frequency Identification Technology for Smart Food Packaging: An Economic Analysis for an Australian Seafood Industry","authors":"Parya Fathi, Mita Bhattacharya, Sankar Bhattacharya, Nemai Karmakar","doi":"10.3390/informatics11030052","DOIUrl":"https://doi.org/10.3390/informatics11030052","url":null,"abstract":"Effective monitoring of perishable food products has become increasingly important for ensuring quality, enabling smart packaging to be a key consideration for food companies. Among the promising technologies available for transforming packaging into intelligent packaging, chipless radio frequency identification (RFID) sensors stand out. Despite the high initial implementation costs associated with chipless RFID technology, the potential benefits could outweigh the costs if electrical challenges can be overcome. We examine various economic methods to analyze the economic benefits of chipless RFID technology, evaluating the benefits of using this technology for the quality monitoring of seafood products of an Australian seafood producer, Tassal. The analysis considers three primary business drivers, viz. quality monitoring, operational efficiency, and tracking and tracing, using net present value and return on investment as the key indicators to assess the feasibility of implementing the technology. Based on sensitivity analysis, we suggest chipless RFID technology is currently best suited for large firms facing significant quality monitoring and operational efficiency challenges. However, as the cost of chipless RFID sensors decreases with further development, this technology may become a more viable option for small businesses in the future.","PeriodicalId":507941,"journal":{"name":"Informatics","volume":"70 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141817594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non-Invasive Diagnostic Approach for Diabetes Using Pulse Wave Analysis and Deep Learning","authors":"Hiruni Gunathilaka, Rumesh Rajapaksha, Thosini Kumarika, Dinusha Perera, Uditha Herath, Charith Jayathilaka, Janitha Liyanage, Sudath Kalingamudali","doi":"10.3390/informatics11030051","DOIUrl":"https://doi.org/10.3390/informatics11030051","url":null,"abstract":"The surging prevalence of diabetes globally necessitates advancements in non-invasive diagnostics, particularly for the early detection of cardiovascular anomalies associated with the condition. This study explores the efficacy of Pulse Wave Analysis (PWA) for distinguishing diabetic from non-diabetic individuals through morphological examination of pressure pulse waveforms. The research unfolds in four phases: data accrual, preprocessing, Convolutional Neural Network (CNN) model construction, and performance evaluation. Data were procured using a multipara patient monitor, resulting in 2000 pulse waves equally divided between healthy individuals and those with diabetes. These were used to train, validate, and test three distinct CNN architectures: the conventional CNN, Visual Geometry Group (VGG16), and Residual Networks (ResNet18). The accuracy, precision, recall, and F1 score gauged each model’s proficiency. The CNN demonstrated a training accuracy of 82.09% and a testing accuracy of 80.6%. The VGG16, with its deeper structure, surpassed the baseline with training and testing accuracies of 90.2% and 86.57%, respectively. ResNet18 excelled, achieving a training accuracy of 92.50% and a testing accuracy of 92.00%, indicating its robustness in pattern recognition within pulse wave data. Deploying deep learning for diabetes screening marks progress, suggesting clinical use and future studies on bigger datasets for refinement.","PeriodicalId":507941,"journal":{"name":"Informatics","volume":" 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141823008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine Learning to Estimate Workload and Balance Resources with Live Migration and VM Placement","authors":"Taufik Hidayat, K. Ramli, Nadia Thereza, Amarudin Daulay, Rushendra Rushendra, Rahutomo Mahardiko","doi":"10.3390/informatics11030050","DOIUrl":"https://doi.org/10.3390/informatics11030050","url":null,"abstract":"Currently, utilizing virtualization technology in data centers often imposes an increasing burden on the host machine (HM), leading to a decline in VM performance. To address this issue, live virtual migration (LVM) is employed to alleviate the load on the VM. This study introduces a hybrid machine learning model designed to estimate the direct migration of pre-copied migration virtual machines within the data center. The proposed model integrates Markov Decision Process (MDP), genetic algorithm (GA), and random forest (RF) algorithms to forecast the prioritized movement of virtual machines and identify the optimal host machine target. The hybrid models achieve a 99% accuracy rate with quicker training times compared to the previous studies that utilized K-nearest neighbor, decision tree classification, support vector machines, logistic regression, and neural networks. The authors recommend further exploration of a deep learning approach (DL) to address other data center performance issues. This paper outlines promising strategies for enhancing virtual machine migration in data centers. The hybrid models demonstrate high accuracy and faster training times than previous research, indicating the potential for optimizing virtual machine placement and minimizing downtime. The authors emphasize the significance of considering data center performance and propose further investigation. Moreover, it would be beneficial to delve into the practical implementation and dissemination of the proposed model in real-world data centers.","PeriodicalId":507941,"journal":{"name":"Informatics","volume":"104 23","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141821890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Informatics | Pub Date: 2024-07-19 | DOI: 10.3390/informatics11030049
Yan Cong
{"title":"AI Language Models: An Opportunity to Enhance Language Learning","authors":"Yan Cong","doi":"10.3390/informatics11030049","DOIUrl":"https://doi.org/10.3390/informatics11030049","url":null,"abstract":"AI language models are increasingly transforming language research in various ways. How can language educators and researchers respond to the challenge posed by these AI models? Specifically, how can we embrace this technology to inform and enhance second language learning and teaching? In order to quantitatively characterize and index second language writing, the current work proposes the use of similarities derived from contextualized meaning representations in AI language models. The computational analysis in this work is hypothesis-driven. The current work predicts how similarities should be distributed in a second language learning setting. The results suggest that similarity metrics are informative of writing proficiency assessment and interlanguage development. Statistically significant effects were found across multiple AI models. Most of the metrics could distinguish language learners’ proficiency levels. Significant correlations were also found between similarity metrics and learners’ writing test scores provided by human experts in the domain. However, not all such effects were strong or interpretable. Several results could not be consistently explained under the proposed second language learning hypotheses. Overall, the current investigation indicates that with careful configuration and systematic metrics design, AI language models can be promising tools in advancing language education.","PeriodicalId":507941,"journal":{"name":"Informatics","volume":"101 51","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141821807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Informatics | Pub Date: 2024-07-18 | DOI: 10.3390/informatics11030048
P. Ariza-Colpas, M. Piñeres-Melo, M. Urina-Triana, E. Barcelo-Martínez, Camilo Barceló-Castellanos, Fabian Roman
{"title":"Machine Learning Applied to the Analysis of Prolonged COVID Symptoms: An Analytical Review","authors":"P. Ariza-Colpas, M. Piñeres-Melo, M. Urina-Triana, E. Barcelo-Martínez, Camilo Barceló-Castellanos, Fabian Roman","doi":"10.3390/informatics11030048","DOIUrl":"https://doi.org/10.3390/informatics11030048","url":null,"abstract":"The COVID-19 pandemic continues to constitute a public health emergency of international importance, although the state of emergency declaration has indeed been terminated worldwide, many people continue to be infected and present different symptoms associated with the illness. Undoubtedly, solutions based on divergent technologies such as machine learning have made great contributions to the understanding, identification, and treatment of the disease. Due to the sudden appearance of this virus, many works have been carried out by the scientific community to support the detection and treatment processes, which has generated numerous publications, making it difficult to identify the status of current research and future contributions that can continue to be generated around this problem that is still valid among us. To address this problem, this article shows the result of a scientometric analysis, which allows the identification of the various contributions that have been generated from the line of automatic learning for the monitoring and treatment of symptoms associated with this pathology. The methodology for the development of this analysis was carried out through the implementation of two phases: in the first phase, a scientometric analysis was carried out, where the countries, authors, and magazines with the greatest production associated with this subject can be identified, later in the second phase, the contributions based on the use of the Tree of Knowledge metaphor are identified. The main concepts identified in this review are related to symptoms, implemented algorithms, and the impact of applications. These results provide relevant information for researchers in the field in the search for new solutions or the application of existing ones for the treatment of still-existing symptoms of COVID-19.","PeriodicalId":507941,"journal":{"name":"Informatics","volume":" 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141826540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Informatics | Pub Date: 2024-07-16 | DOI: 10.3390/informatics11030047
Inas Al Khatib, A. Shamayleh, Malick Ndiaye
{"title":"Healthcare and the Internet of Medical Things: Applications, Trends, Key Challenges, and Proposed Resolutions","authors":"Inas Al Khatib, A. Shamayleh, Malick Ndiaye","doi":"10.3390/informatics11030047","DOIUrl":"https://doi.org/10.3390/informatics11030047","url":null,"abstract":"In recent years, the Internet of medical things (IoMT) has become a significant technological advancement in the healthcare sector. This systematic review aims to identify and summarize the various applications, key challenges, and proposed technical solutions within this domain, based on a comprehensive analysis of the existing literature. This review highlights diverse applications of the IoMT, including mobile health (mHealth) applications, remote biomarker detection, hybrid RFID-IoT solutions for scrub distribution in operating rooms, IoT-based disease prediction using machine learning, and the efficient sharing of personal health records through searchable symmetric encryption, blockchain, and IPFS. Other notable applications include remote healthcare management systems, non-invasive real-time blood glucose measurement devices, distributed ledger technology (DLT) platforms, ultra-wideband (UWB) radar systems, IoT-based pulse oximeters, accident and emergency informatics (A&EI), and integrated wearable smart patches. The key challenges identified include privacy protection, sustainable power sources, sensor intelligence, human adaptation to sensors, data speed, device reliability, and storage efficiency. The proposed mitigations encompass network control, cryptography, edge-fog computing, and blockchain, alongside rigorous risk planning. The review also identifies trends and advancements in the IoMT architecture, remote monitoring innovations, the integration of machine learning and AI, and enhanced security measures. This review makes several novel contributions compared to the existing literature, including (1) a comprehensive categorization of IoMT applications, extending beyond the traditional use cases to include emerging technologies such as UWB radar systems and DLT platforms; (2) an in-depth analysis of the integration of machine learning and AI in IoMT, highlighting innovative approaches in disease prediction and remote monitoring; (3) a detailed examination of privacy and security measures, proposing advanced cryptographic solutions and blockchain implementations to enhance data protection; and (4) the identification of future research directions, providing a roadmap for addressing current limitations and advancing the scientific understanding of IoMT in healthcare. By addressing current limitations and suggesting future research directions, this work aims to advance scientific understanding of the IoMT in healthcare.","PeriodicalId":507941,"journal":{"name":"Informatics","volume":"6 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141642169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Informatics | Pub Date: 2024-07-15 | DOI: 10.3390/informatics11030046
Helia Farhood, I. Joudah, Amin Beheshti, Samuel Muller
{"title":"Evaluating and Enhancing Artificial Intelligence Models for Predicting Student Learning Outcomes","authors":"Helia Farhood, I. Joudah, Amin Beheshti, Samuel Muller","doi":"10.3390/informatics11030046","DOIUrl":"https://doi.org/10.3390/informatics11030046","url":null,"abstract":"Predicting student outcomes is an essential task and a central challenge among artificial intelligence-based personalised learning applications. Despite several studies exploring student performance prediction, there is a notable lack of comprehensive and comparative research that methodically evaluates and compares multiple machine learning models alongside deep learning architectures. In response, our research provides a comprehensive comparison to evaluate and improve ten different machine learning and deep learning models, either well-established or cutting-edge techniques, namely, random forest, decision tree, support vector machine, K-nearest neighbours classifier, logistic regression, linear regression, and state-of-the-art extreme gradient boosting (XGBoost), as well as a fully connected feed-forward neural network, a convolutional neural network, and a gradient-boosted neural network. We implemented and fine-tuned these models using Python 3.9.5. With a keen emphasis on prediction accuracy and model performance optimisation, we evaluate these methodologies across two benchmark public student datasets. We employ a dual evaluation approach, utilising both k-fold cross-validation and holdout methods, to comprehensively assess the models’ performance. Our research focuses primarily on predicting student outcomes in final examinations by determining their success or failure. Moreover, we explore the importance of feature selection using the ubiquitous Lasso for dimensionality reduction to improve model efficiency, prevent overfitting, and examine its impact on prediction accuracy for each model, both with and without Lasso. This study provides valuable guidance for selecting and deploying predictive models for tabular data classification like student outcome prediction, which seeks to utilise data-driven insights for personalised education.","PeriodicalId":507941,"journal":{"name":"Informatics","volume":"27 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141646475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Informatics | Pub Date: 2024-07-11 | DOI: 10.3390/informatics11030045
Raza Nowrozy
{"title":"GPTs or Grim Position Threats? The Potential Impacts of Large Language Models on Non-Managerial Jobs and Certifications in Cybersecurity","authors":"Raza Nowrozy","doi":"10.3390/informatics11030045","DOIUrl":"https://doi.org/10.3390/informatics11030045","url":null,"abstract":"ChatGPT, a Large Language Model (LLM) utilizing Natural Language Processing (NLP), has caused concerns about its impact on job sectors, including cybersecurity. This study assesses ChatGPT’s impacts in non-managerial cybersecurity roles using the NICE Framework and Technological Displacement theory. It also explores its potential to pass top cybersecurity certification exams. Findings reveal ChatGPT’s promise to streamline some jobs, especially those requiring memorization. Moreover, this paper highlights ChatGPT’s challenges and limitations, such as ethical implications, LLM limitations, and Artificial Intelligence (AI) security. The study suggests that LLMs like ChatGPT could transform the cybersecurity landscape, causing job losses, skill obsolescence, labor market shifts, and mixed socioeconomic impacts. A shift in focus from memorization to critical thinking, and collaboration between LLM developers and cybersecurity professionals, is recommended.","PeriodicalId":507941,"journal":{"name":"Informatics","volume":"32 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141658675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}