{"title":"The Twenty First Century E- Learning Education Management & Implication for Media Technology Adoption in the Period of Pandemic","authors":"U. O. Matthew, J. S. Kazaure, Ado Saleh Kazaure, Ogechukwu N. Onyedibe, Abraham N. Okafor","doi":"10.4108/eetel.v8i1.2342","DOIUrl":"https://doi.org/10.4108/eetel.v8i1.2342","url":null,"abstract":"INTRODUCTION: The relevance of multimedia electronic learning (e-learning) education during the ongoing COVID-19 pandemic in developing nations is justified by the pedagogical connection between twenty-first-century digital automation and education itself. Multimedia is a creative combination of computer hardware, software, and liveware that allows the integration of video, animation, audio, graphical information, and text resources into an interactive engagement, in which information is accessed interactively with any information-processing device. OBJECTIVES: To enable personalized and autonomous learning accomplishment when multimedia educational tools are merged, allowing for diversity in curriculum presentation. METHODS: The research investigated 400 postgraduate students of the faculty of computer science and information technology who adopted the multimedia e-learning education approach to ensure that their expected graduation dates were not extended during the recent institutional lockdown. RESULTS: The research observed that, of the six multimedia e-learning education tools used, e-mail, chat applications, audio/video applications, and discussion forums were most used to provide meaningful interactive engagement, while blogs and webcasts were less utilized. CONCLUSION: The research proposed an enhanced electronic-participation, electronic-readiness, and e-learning education framework that matches the standards of the smartest educational reform, enabling regular and consistent educational accomplishment without disruption of academic workflow in the global educational business, notwithstanding the severity of any future pandemic similar to the ongoing COVID-19.","PeriodicalId":298151,"journal":{"name":"EAI Endorsed Trans. e Learn.","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131556899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Review on One-Stage Object Detection Based on Deep Learning","authors":"Hang Zhang, Rayan S. Cloutier","doi":"10.4108/eai.9-6-2022.174181","DOIUrl":"https://doi.org/10.4108/eai.9-6-2022.174181","url":null,"abstract":"As a popular research direction in computer vision, deep learning technology has driven breakthroughs in the field of object detection. In recent years, the combination of object detection and the Internet of Things (IoT) has been widely applied in face recognition, pedestrian detection, autonomous driving, and customs inspection. With the development of object detection, two distinct families of detection algorithms, one-stage and two-stage, have gradually formed. This paper mainly introduces one-stage object detection algorithms. Firstly, the development of the convolutional neural network is briefly reviewed. Then, the current mainstream one-stage object detection models are summarized: starting from YOLOv1, the successive optimizations, improvements, and remaining shortcomings are discussed in detail. Finally, the difficulties and challenges of one-stage object detection algorithms are summarized.","PeriodicalId":298151,"journal":{"name":"EAI Endorsed Trans. e Learn.","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129439467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognition system for fruit classification based on 8-layer convolutional neural network","authors":"Jiaji Wang","doi":"10.4108/eai.17-2-2022.173455","DOIUrl":"https://doi.org/10.4108/eai.17-2-2022.173455","url":null,"abstract":"INTRODUCTION: Automatic fruit classification is a challenging task. The types, shapes, and colors of fruits are all essential factors affecting classification. OBJECTIVES: This paper aimed to use deep learning methods to improve the overall accuracy of fruit classification, thereby improving the sorting efficiency of fruit factories. METHODS: In this study, our recognition system is based on an 8-layer convolutional neural network (CNN) combined with the RMSProp optimization algorithm, verified by 10 runs of 10-fold cross-validation. CONCLUSION: Our method achieves an accuracy of 91.63%, which is superior to four other state-of-the-art methods.","PeriodicalId":298151,"journal":{"name":"EAI Endorsed Trans. e Learn.","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117012359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
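The fruit-classification abstract above reports results from 10 runs of 10-fold cross-validation. As a minimal pure-Python sketch of that evaluation protocol (not the authors' code; the CNN itself is stood in for by an arbitrary `train_fn` callable that the caller supplies):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle sample indices and split them into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def repeated_cv_accuracy(samples, labels, train_fn, k=10, runs=10):
    """Average accuracy over `runs` repetitions of k-fold cross-validation.

    `train_fn` receives a list of (sample, label) pairs and must return a
    predict(sample) callable; a fresh model is trained for every fold.
    """
    accs = []
    for run in range(runs):
        # Re-shuffle the data on each run so the folds differ between runs.
        for fold in k_fold_indices(len(samples), k, seed=run):
            held_out = set(fold)
            train = [(samples[i], labels[i])
                     for i in range(len(samples)) if i not in held_out]
            model = train_fn(train)
            correct = sum(model(samples[i]) == labels[i] for i in fold)
            accs.append(correct / len(fold))
    return sum(accs) / len(accs)
```

In practice `train_fn` would build and fit the 8-layer CNN; here any classifier-returning callable works, which keeps the protocol separate from the model.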
{"title":"Covid-19 Recognition by Chest CT and Deep Learning","authors":"Lin Yang, Dimas Lima","doi":"10.4108/eai.7-1-2022.172812","DOIUrl":"https://doi.org/10.4108/eai.7-1-2022.172812","url":null,"abstract":"INTRODUCTION: The current RT-qPCR approach to identifying Covid-19 is slow and non-optimal for large numbers of candidates. OBJECTIVES: Several studies have demonstrated that deep learning can help healthcare professionals diagnose Covid-19 patients. The deep learning model proposed in this paper significantly enhanced the accuracy of identifying Covid-19 patients compared to prior approaches. METHODS: This paper applies transfer learning and the deep residual network ResNet152V2 to detect Covid-19 patients from CT scan images. Monte Carlo cross-validation has been applied to obtain an accurate and valid result. RESULTS: The proposed model can identify Covid-19 disease with an overall accuracy of 95.06%, along with an average precision and recall of 97.19% and 92.81%, respectively. It also obtained a specificity of 93.14% and an F1-score of 94.96%. CONCLUSION: The performance of this proposed ResNet152V2 model is superior to most current Covid-19 detection models.","PeriodicalId":298151,"journal":{"name":"EAI Endorsed Trans. e Learn.","volume":"284 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123116025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
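The RESULTS figures in the Covid-19 CT abstract (accuracy, precision, recall, specificity, F1) are all derived from a binary confusion matrix. The abstract does not give the underlying counts, so the counts used in the test are hypothetical; this sketch only shows how the reported metrics relate to TP/FP/FN/TN:

```python
def binary_metrics(tp, fp, fn, tn):
    """Derive standard binary-classification metrics from a 2x2 confusion matrix.

    tp/fp/fn/tn are true-positive, false-positive, false-negative, and
    true-negative counts (example counts only; not taken from the paper).
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}
```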
{"title":"Teachers' Use of Technology: Examining the Domains of Laptop Competence","authors":"P. Moses, S. Wong","doi":"10.4108/eai.26-10-2021.171598","DOIUrl":"https://doi.org/10.4108/eai.26-10-2021.171598","url":null,"abstract":"INTRODUCTION: The current coronavirus pandemic has forced the transition from face-to-face teaching and learning to fully online delivery in Malaysia. OBJECTIVES: To examine the domains of laptop competence and their relationship with laptop use among teachers. METHODS: A quantitative descriptive survey design using questionnaires involved 133 Mathematics and Science secondary school teachers. RESULTS: Based on the results, the teachers are highly competent in word processing; basic laptop operation skills; telecommunication; spreadsheets; multimedia integration; and setup, maintenance, and troubleshooting of laptops, but only moderately competent in media communication and databases. There was a significant positive relationship between teachers’ laptop competence and laptop use. CONCLUSION: The findings act as a guide for planning effective training according to teachers’ needs, based on each item of the domains outlined, to promote more rigorous use of laptops among teachers.","PeriodicalId":298151,"journal":{"name":"EAI Endorsed Trans. e Learn.","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114440667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An immersive game of simulated visually impaired","authors":"J. Lei, Xi Huang, T. Chang, Qijun Wu, Yingying Long, Xianwei Jiang","doi":"10.4108/eai.1-7-2021.170254","DOIUrl":"https://doi.org/10.4108/eai.1-7-2021.170254","url":null,"abstract":"INTRODUCTION: \"Inverse Light\" gives players an immersive gaming experience of visual impairment. Hearing, sight, and touch are fully integrated into the game so that players experience real-life scenes from the point of view of the blind, take a rational view of the accessibility issues that exist in today's society, and gain an appreciation of the difficulties blind people face in life and study. The paper also analyses several important issues, such as fun, playability, and education in games, and elaborates the connotation of educational games and gamified learning, the source of the creative idea, and the design and implementation of the game. OBJECTIVES: To raise public attention to visually impaired people and strengthen quality education, the mobile game is built around the inconveniences that visually impaired people encounter in life, to arouse players' thinking and call on people to be more patient with and helpful to the visually impaired. METHODS: The game is mainly based on a black-screen effect. With vision limited, the player judges direction by clicking, holding, and sliding keys and by the background sound effects, so as to bypass obstacles and finally pass the level. The backgrounds and animations were designed with Photoshop, the modelling was completed with 3DMAX, C# scripts were written in Microsoft Visual Studio, and Unity was used for scene construction and overall coordination. RESULTS: The experiment shows that the game analyses the current situation of quality education well and combines quality education with gameplay. The system framework of the game has clear logic, the operation difficulty is moderate, and appropriate sounds are used to simulate visual impairment, which increases the immersive sense of the game. At the same time, the problem of the game becoming boring and losing players is solved, achieving a win-win of education and playability. CONCLUSION: Analysed in terms of design ideas, development framework, difficulty, and other aspects, the \"Inverse Light\" game reflects its subject well; its interface design is exquisite and has a certain artistic level, and it fulfils the original intention of letting players experience the feelings of visually impaired people.","PeriodicalId":298151,"journal":{"name":"EAI Endorsed Trans. e Learn.","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126303440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tea category classification via 5-layer customized convolutional neural network","authors":"Xiang Li, Mengyao Zhai, Junding Sun","doi":"10.4108/EAI.5-5-2021.169811","DOIUrl":"https://doi.org/10.4108/EAI.5-5-2021.169811","url":null,"abstract":"INTRODUCTION: Green tea, oolong, and black tea are the three most popular teas in the world. Classifying tea manually not only takes a lot of time but is also affected by other factors, such as smell, vision, and emotion. OBJECTIVES: Other methods of tea category classification suffer from low classification accuracy and weak robustness. To solve these problems, we proposed a deep learning method. METHODS: This paper proposed a 5-layer customized convolutional neural network for 3-category tea classification. RESULTS: The experimental results show that the method classifies tea quickly and with high accuracy, reaching 97.96%. CONCLUSION: Our method outperforms six state-of-the-art methods.","PeriodicalId":298151,"journal":{"name":"EAI Endorsed Trans. e Learn.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129694856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating Factors for Student Success in a Flipped Classroom Approach","authors":"A. Moore","doi":"10.4108/eai.3-12-2020.167293","DOIUrl":"https://doi.org/10.4108/eai.3-12-2020.167293","url":null,"abstract":"This paper describes the application of flipped and blended learning techniques to a Final Year Computer Science course in a UK University. All student interactions with the course are recorded in the institution’s Virtual Learning Environment and a number of metrics of engagement are identified and assessed as potential indicators of success. A statistical analysis of these metrics revealed that although attendance remains a significant indicator of student success in this scenario, consistency of engagement is a more accurate guide. Results show that students with a longest average gap between engagements of 12 days or fewer are likely to achieve the highest grade, while measuring the number of days on which material was accessed or measuring the total time spent engaging with the material are less reliable measures.","PeriodicalId":298151,"journal":{"name":"EAI Endorsed Trans. e Learn.","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131952514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
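The flipped-classroom study above keys its main finding to the longest average gap between a student's engagements. Under one plausible reading of that metric, the longest gap between consecutive engagement dates, and assuming per-student engagement dates exported from the VLE (the input format here is hypothetical, not the institution's actual export), the per-student figure could be computed as:

```python
from datetime import date

def longest_gap_days(engagement_dates):
    """Longest gap, in days, between consecutive dates a student engaged.

    `engagement_dates` is any iterable of datetime.date values; duplicates
    (several engagements on one day) are collapsed. Returns None when fewer
    than two distinct days are present, since no gap is defined.
    """
    days = sorted(set(engagement_dates))
    if len(days) < 2:
        return None
    return max((later - earlier).days for earlier, later in zip(days, days[1:]))
```

A cohort-level analysis would then compare each student's figure against the 12-day threshold reported in the paper.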
{"title":"Chinese fingerspelling recognition via gray-level co-occurrence matrix and fuzzy support vector machine","authors":"Ya Gao, Chen Xue, Ran Wang, Xianwei Jiang","doi":"10.4108/eai.12-10-2020.166554","DOIUrl":"https://doi.org/10.4108/eai.12-10-2020.166554","url":null,"abstract":"INTRODUCTION: Chinese deaf-mutes communicate in their native language, Chinese sign language, which contains gesture language and finger language. Chinese finger language conveys information through various movements of the fingers, and its expression is accurate and convenient for classification and recognition. OBJECTIVES: In this paper, we proposed a new model using the gray-level co-occurrence matrix (GLCM) and a fuzzy support vector machine (FSVM) to improve sign language recognition accuracy. METHODS: Firstly, we acquired sign language images directly with a digital camera or selected key frames from video as the data set, and segmented the hand shapes from the background. Secondly, we resized each image to N×N and converted it to gray level. Thirdly, we reduced the dimension of the intensity values using Principal Component Analysis (PCA) and acquired the data features by creating the gray-level co-occurrence matrix. Finally, we sent the extracted, dimension-reduced features to the fuzzy support vector machine for classification tests. RESULTS: We compared the model with similar algorithms, and the results show that our method achieves the highest classification accuracy, up to 86.7%. CONCLUSION: The experimental results show that our model performs well in Chinese finger language recognition and has potential for further research.","PeriodicalId":298151,"journal":{"name":"EAI Endorsed Trans. e Learn.","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114940513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
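The gray-level co-occurrence matrix used in the fingerspelling pipeline above counts how often pairs of gray levels co-occur at a fixed pixel offset. A minimal pure-Python sketch for a single offset (production feature extractors, not shown here, typically combine several offsets and derive Haralick statistics such as contrast and homogeneity from the normalized matrix):

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy).

    `image` is a list of equal-length rows of integer gray levels in
    [0, levels). Entry [i][j] counts the pixel pairs in which a pixel of
    level i has a neighbour of level j at the given offset.
    """
    m = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:   # skip pairs falling off the image
                m[image[y][x]][image[ny][nx]] += 1
    return m
```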
{"title":"Chinese fingerspelling sign language recognition using a nine-layer convolutional neural network","authors":"Ya Gao, Chengchong Jia, Hongli Chen, Xianwei Jiang","doi":"10.4108/eai.12-10-2020.166555","DOIUrl":"https://doi.org/10.4108/eai.12-10-2020.166555","url":null,"abstract":"INTRODUCTION: Sign language is a form of communication and exchange of ideas used by people who are hearing-impaired or unable to speak. Chinese fingerspelling is an important component of Chinese sign language, suitable for denoting terminology and serving as the basis of gesture-based sign language learning. OBJECTIVES: We propose a nine-layer convolutional neural network (CNN) for the classification of Chinese sign language. METHODS: With self-learning and self-organization abilities, a CNN is well suited to processing image-structured data; it has good application prospects in image classification and plays a very important role in the classification of Chinese sign language. RESULTS: Through experiments on 1320 data samples in 30 categories, the results show that the classification accuracy of the nine-layer convolutional neural network reaches 89.69 ± 2.10%, demonstrating that this method can effectively classify Chinese gestures. CONCLUSION: We proposed a nine-layer convolutional neural network (CNN) that can classify Chinese fingerspelling sign language.","PeriodicalId":298151,"journal":{"name":"EAI Endorsed Trans. e Learn.","volume":"232 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131477728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
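The accuracy figure of 89.69 ± 2.10% in the record above is a mean and spread over repeated runs. The abstract does not state the aggregation protocol, so as an assumption, the sketch below reports the sample mean and sample standard deviation of per-run accuracies, which is the usual way such a figure is produced:

```python
import statistics

def summarize_accuracy(per_run_accuracies):
    """Return (mean %, sample-std %) for a list of per-run accuracies in [0, 1].

    Assumed aggregation: the reported 'a ± b %' is the sample mean and
    sample standard deviation across repeated evaluation runs.
    """
    mean_pct = 100 * statistics.mean(per_run_accuracies)
    std_pct = 100 * statistics.stdev(per_run_accuracies)
    return mean_pct, std_pct
```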