Title: A Comprehensive Study on Metaverse and Its Impacts on Humans
Authors: Ajay Sudhir Bale, Naveen Ghorpade, Muhammed Furqaan Hashim, Jatin Vaishnav, Z. Almaspoor
DOI: 10.1155/2022/3247060 (https://doi.org/10.1155/2022/3247060)
Published: 2022-09-19, Adv. Hum. Comput. Interact.
Abstract: Virtual Reality (VR) and Augmented Reality (AR) have revolutionized technology and established the foundation for numerous future innovations. Both are now widely employed to improve user experiences in many areas, and over time more and more companies and businesses have begun to use this cutting-edge technology to improve their products and services. Recently, attention to VR and AR has exploded as the concept of the "Metaverse" surfaced in mainstream media, and many major companies have already set their goals in motion and are working on building the core of their metaverses. This review paper explains the concept of the metaverse, its history, and its associated benefits. Through a survey, it examines people's concerns about the metaverse and how it can affect humans mentally, physically, and psychologically. The analysis can help people prepare themselves for what the new technologies have to offer, in addition to assisting companies in building a flawless metaverse.

Title: Facial Expression Recognition Using a Novel Modeling of Combined Gray Local Binary Pattern
Authors: An Hoa Ton-That, Nhan T. Cao
DOI: 10.1155/2022/6798208 (https://doi.org/10.1155/2022/6798208)
Published: 2022-09-15, Adv. Hum. Comput. Interact.
Abstract: Facial Expression Recognition (FER) is an active research field. Deep learning is widely used and effective here, but its heavy hardware requirements make it hard to deploy on ordinary terminal devices, so other methods are being studied for such devices and systems. This work proposes a new modeling of the Combined Gray Local Binary Pattern (CGLBP) for feature extraction in facial expression recognition, enhancing the recognition rate so that FER can run on such devices. The main steps are: cropping the input face image from a camera or dataset, dividing face images into nonoverlapping regions for LBP feature extraction, applying the new CGLBP modeling to extract features, using uniform patterns to reduce descriptor length, and finally classifying emotions with a Support Vector Machine (SVM). Experiments on four popular facial emotion datasets of different sizes show that the recognition rate of the proposed method is better than that of two existing feature types, Local Binary Pattern (LBP) and Combined Local Binary Pattern (CLBP), with accuracies ranging from about 95% to more than 99%.

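The LBP pipeline summarized in the abstract (per-pixel binary codes, then a shortened "uniform" histogram) can be sketched in a few lines. This is a generic, pure-Python illustration of plain 8-neighbor LBP and the uniform-pattern test, not the paper's CGLBP variant:

```python
def lbp_code(img, r, c):
    """Basic 8-neighbor Local Binary Pattern code for pixel (r, c).
    img is a 2D list of gray values; each neighbor >= center sets a bit."""
    center = img[r][c]
    # clockwise neighbor offsets starting from the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def is_uniform(code):
    """A pattern is 'uniform' if its circular bit string has at most two
    0/1 transitions. Uniform patterns each keep their own histogram bin
    while all others share a single bin, which shortens the descriptor."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2
```

In the approach described, such codes are histogrammed per region after the face is divided into nonoverlapping blocks, and the concatenated histograms form the feature vector fed to the SVM.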
Title: A Delphi Evaluation of User Interface Design Guidelines: The Case of Arabic
Authors: Ahmed Al-Sa'di, H. Al-Samarraie
DOI: 10.1155/2022/5492230 (https://doi.org/10.1155/2022/5492230)
Published: 2022-09-14, Adv. Hum. Comput. Interact.
Abstract: Given the importance of design guidelines in facilitating user experience and promoting efficiency, it is essential to determine the effectiveness of particular design guidelines for a specific population. A number of challenges have been reported in the literature when designing learning courses for Arabic users, and confirming the suitability of design guidelines for specific users' needs can be challenging in the context of the Arabic language. This is mainly due to the unique characteristics of the language, which contribute to users' satisfaction with the interface. This study evaluated the feasibility of using Arabic User Interface (UI) guidelines for tablet PCs. The UI guidelines were developed, evaluated, and refined using the Delphi technique with six recruited UI design experts. The results yielded a set of guidelines for the design of Arabic UIs. The proposed guidelines can standardise Arabic UI design by offering future directions on how to effectively apply design principles for tablet PCs.

Title: Usability Evaluation of Educational Games: An Analysis of Culture as a Factor Affecting Children's Educational Attainment
Authors: M. Alshar'e, A. Al-Badi, Malik Jawarneh, Noman Tahir, Marya Al Amri
DOI: 10.1155/2022/9427405 (https://doi.org/10.1155/2022/9427405)
Published: 2022-08-22, Adv. Hum. Comput. Interact.
Abstract: Educational games have been employed in Omani schools, but the games used locally were imported and mostly designed for western contexts. For Omani children, these games may be culturally inappropriate and difficult to comprehend and follow, impeding learning. Three questionnaires and one observational checklist were used to gather data from 40 respondents (observers), and SPSS was used for analysis. Through experiments, the behavior of Omani students towards imported educational games was examined. Five main factors of educational games (efficiency, learnability, memorability, errors, and satisfaction) were measured using three purpose-built frameworks: the Hybrid User Evaluation Methodology for Remote Evaluation (HUEMRE), the Training Framework for Untrained Observers (TFUO), and the Framework on Educational Games Behavior Intention (EGsBI). The results show that Omani children face difficulties using the imported games, and that culture, language, animation, and interaction contribute heavily to how much children benefit from educational games; these factors should therefore be strongly considered when designing educational games, to facilitate and ensure children's learning. The findings also deepen the understanding of how these factors positively affect the behavioral intention of Omani students toward educational games and improve these students' level of behavioral intention.

Title: A Comparative Study of Some Automatic Arabic Text Diacritization Systems
Authors: Ali Mijlad, Yacine El Younoussi
DOI: 10.1155/2022/3613710 (https://doi.org/10.1155/2022/3613710)
Published: 2022-08-16, Adv. Hum. Comput. Interact.
Abstract: Arabic diacritization is the task of restoring diacritics (vowels) to Arabic texts, which are mostly written without them. Automating this task improves results on several natural language processing tasks, making it important for the field of Arabic language processing. In this paper, we present a comparative study of automatic diacritization systems. One uses a variant of the hidden Markov model; the other is a pipeline combining a Long Short-Term Memory deep learning model, a rule-based correction component, and a statistical component. We also propose some modifications to these systems. We trained and tested them on the same benchmark dataset using the evaluation metrics proposed in previous work. The best system achieves 9.42% diacritic error rate (DER) and 22.82% word error rate (WER).

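For reference, the two metrics quoted above can be computed as follows. This is a simplified sketch that assumes the reference and hypothesis are already character- and word-aligned; published DER/WER definitions vary (e.g., in whether case endings or undiacritized letters are counted), so treat the exact scoring rules here as illustrative:

```python
def diacritic_error_rate(ref, hyp):
    """DER: fraction of characters whose predicted diacritic differs from
    the reference. ref and hyp are aligned lists of (letter, diacritic)
    pairs; letters are assumed identical, only diacritics are scored."""
    assert len(ref) == len(hyp)
    errors = sum(rd != hd for (_, rd), (_, hd) in zip(ref, hyp))
    return errors / len(ref)

def word_error_rate(ref_words, hyp_words):
    """WER, whole-word variant common in diacritization work: a word
    counts as an error if any of its diacritics is wrong, i.e. if the
    fully diacritized word strings differ."""
    assert len(ref_words) == len(hyp_words)
    errors = sum(r != h for r, h in zip(ref_words, hyp_words))
    return errors / len(ref_words)
```

Because one wrong diacritic flips an entire word, WER is always at least as large as DER on the same output, which matches the 9.42% DER vs. 22.82% WER gap reported above.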
Title: Multiclass Classification of Imagined Speech Vowels and Words of Electroencephalography Signals Using Deep Learning
Authors: Nrushingh Charan Mahapatra, Prachet Bhuyan
DOI: 10.1155/2022/1374880 (https://doi.org/10.1155/2022/1374880)
Published: 2022-07-20, Adv. Hum. Comput. Interact.
Abstract: The paper focuses on decoding imagined speech from electroencephalography (EEG) neural signals, extending brain-computer interfaces to individuals with speech problems who face communication challenges. Decoding an individual's imagined speech from nonstationary and nonlinear EEG signals is a complex task, and related work has shown that decoding performance and accuracy still need improvement. Advances in deep learning increase the likelihood of decoding imagined speech from EEG signals with enhanced performance. We propose a novel supervised deep learning model that combines temporal convolutional networks with convolutional neural networks to retrieve information from the EEG signals. The experiment used an open-access dataset of fifteen subjects' multichannel imagined-speech signals for vowels and words. The raw multichannel EEG signals were preprocessed using the discrete wavelet transformation technique; the model was trained and evaluated on the preprocessed signals, and hyperparameters were adjusted to achieve higher classification accuracy. The proposed model achieved an overall multiclass imagined-speech classification accuracy of 0.9649 with a classification error rate of 0.0350. The results indicate that individuals with speech difficulties might well be able to leverage a noninvasive EEG-based imagined-speech brain-computer interface as a long-term alternative medium for artificial verbal communication.

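The discrete-wavelet preprocessing step mentioned in the abstract can be illustrated with a single-level Haar transform on one channel. Real EEG pipelines typically use a wavelet library (e.g., PyWavelets) with multi-level decompositions and other mother wavelets, so this is only a minimal dependency-free sketch:

```python
import math

def haar_dwt_level(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficients: the approximation is a
    smoothed half-rate version of the signal, while the detail carries
    the high-frequency residue. Signal length must be even."""
    assert len(signal) % 2 == 0
    s = math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail
```

Applying this recursively to the approximation coefficients yields the familiar multi-level subband decomposition from which time-frequency features are typically taken.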
Title: Full Diacritization of the Arabic Text to Improve Screen Readers for the Visually Impaired
Authors: Batool Abuali, Mohamad-Bassam Kurdy
DOI: 10.1155/2022/1186678 (https://doi.org/10.1155/2022/1186678)
Published: 2022-07-18, Adv. Hum. Comput. Interact.
Abstract: This paper examines the relationship between full diacritization of Arabic text and the quality of the speech synthesized by screen readers, and presents a new methodology for developing screen readers for the visually impaired that focuses on preprocessing and diacritizing the text before converting it to audio. First, the actual need for the proposal was measured with a MOS (Mean Opinion Score) questionnaire evaluating synthesized speech quality before and after full diacritization in the NVDA screen reader (https://www.nvda-ar.org/). Then, an e-reader was built by integrating two models: one for automatic Arabic diacritization (based on Shakkala) and a TTS model (based on Tacotron). The quality of the proposed system was measured in terms of (1) pronunciation and (2) intelligibility, on which it outperformed the commercial screen readers NVDA and IBSAR (https://www.sakhr.com): it recorded 60.67%, 17.67%, and 21.67% as correct, incorrect, and partially correct, respectively, on the isolated-word test; 84% correct on the homograph test; and 78.50% and 93% correct on the DRT and DMRT tests, respectively.

Title: Midair Gestural Techniques for Translation Tasks in Large-Display Interaction
Authors: V. Remizova, Y. Gizatdinova, Veikko Surakka
DOI: 10.1155/2022/9362916 (https://doi.org/10.1155/2022/9362916)
Published: 2022-07-09, Adv. Hum. Comput. Interact.
Abstract: Midair gestural interaction has gained much attention over the past decades, with numerous attempts to apply midair gestural interfaces to large displays (and TVs), interactive walls, and smart meeting rooms. These attempts, reviewed in numerous studies, used differing gestural techniques for the same action, making them hard to compare and making it difficult to distill recommendations for developing midair gestural applications. We therefore took a closer look at one common action, translation: dragging (or moving) an entity to a predefined target position while retaining its size and rotation. We compared the performance and subjective experiences (30 participants) of four midair gestural techniques (fist, palm, pinch, and sideways) in the repetitive translation of 2D objects over short and long distances on a large display. The results showed statistically significant differences in movement time and error rate favoring translation by palm over pinch and sideways at both distances. The fist and sideways techniques also performed well, especially at short and long distances, respectively. We summarize the implications of the results for the design of midair gestural interfaces, which should be useful for interaction designers and gesture-recognition researchers.

Title: Participant Observation to Apply an Empirical Method of Codesign with Children
Authors: M. C. Romero-Ternero, R. Robles, D. Cagigas-Muñiz, O. Rivera-Romero, M. J. Romero-Ternero
DOI: 10.1155/2022/1101847 (https://doi.org/10.1155/2022/1101847)
Published: 2022-07-09, Adv. Hum. Comput. Interact.
Abstract: Dental anxiety in children is a well-documented problem in the scientific literature. Tools mediated by information technology have been shown to positively influence children's mood through distraction as well as relaxing activities. We propose an empirical method of codesign with children to generate app content for reducing dental anxiety. The results are embedded in text through a thick description as an ethnographic technique. The method was applied to 163 children (6-8 years old) from a summer school and a primary school, yielding multimedia products that were integrated into an app prototype. Finally, although this use case applies the method to the health field, it can be transferred to any other field where codesign with children is used, given material specific to the new scenarios.

Title: Infrared Thermal Image Gender Classifier Based on the Deep ResNet Model
Authors: Alyaa J. Jalil, Naglaa M. Reda
DOI: 10.1155/2022/3852054 (https://doi.org/10.1155/2022/3852054)
Published: 2022-07-08, Adv. Hum. Comput. Interact.
Abstract: Gender classification from human face images has attracted researchers over the past decade. It has great impact in different fields including defense, human-computer interaction, the surveillance industry, and mobile applications. Many methods and techniques have been proposed, but they depend on clear digital images and complex feature-extraction preprocessing, while most recent critical real-world systems use thermal cameras. This paper has the novelty of utilizing thermal images for gender classification. It proposes a unique approach called IRT_ResNet that adopts the residual network (ResNet) model in different layer configurations: 18, 50, and 101. Two different datasets of thermal images were used to train and test these models. The proposed approach was compared with a convolutional neural network (CNN), principal component analysis (PCA), local binary pattern (LBP), and the scale-invariant feature transform (SIFT). The experimental results show that the proposed model has higher overall classification accuracy, precision, and F-score than the other techniques.

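The metrics reported above (accuracy, precision, F-score) for a binary gender classifier can be computed directly from label lists. A dependency-free sketch, with the positive-class label as an illustrative parameter (libraries such as scikit-learn provide the same computations):

```python
def binary_metrics(y_true, y_pred, positive="female"):
    """Accuracy, precision, and F1 for a binary classifier, computed
    from raw label lists. 'positive' names the class treated as the
    positive class for precision/recall."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, f1
```

Reporting precision and F-score alongside accuracy, as the paper does, guards against one class dominating the score when the two genders are imbalanced in the test set.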