2022 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET): Latest Publications

Speech-Based Number Recognition Using KNN and SVM
R. R. Porle, Suzanih Embok
{"title":"Speech-Based Number Recognition Using KNN and SVM","authors":"R. R. Porle, Suzanih Embok","doi":"10.1109/IICAIET55139.2022.9936761","DOIUrl":"https://doi.org/10.1109/IICAIET55139.2022.9936761","url":null,"abstract":"Speech-Based Number Recognition is a system that recognizes numbers based on the speech of the user. Most of the research makes use of English, Bangla, Tamil, etc., but the Malay language has received little attention. In this paper, the Malay numbers one through ten are recognized and implemented on devices consisting primarily of the Arduino UNO, the ELECHOUSE Voice Recognition Module v3, Microphone, and Light Emitting Diode. This system employs database creation, preprocessing, feature extraction, Mel-frequency cepstral coefficients, and classification utilizing using K-Nearest Neighbour and Support Vector Machine. Two experiments were carried out using 900 samples. In the first experiment, 80 percent of the training samples and 20 percent of the test samples were used. The second experiment utilized 70 percent of the training samples and 30 percent of the testing samples. The results show that the Support Vector Machine outperformed K-Nearest Neighbour with an average accuracy of 91.27 percent.","PeriodicalId":142482,"journal":{"name":"2022 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129827686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
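A minimal sketch, assuming librosa and scikit-learn, of the MFCC-plus-KNN/SVM pipeline the abstract describes; the file layout, the `mfcc_features` helper, and the hyper-parameters are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def mfcc_features(path, n_mfcc=13):
    """Load a recording and average its MFCC frames into one feature vector."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def train_and_compare(samples, test_size=0.2):
    """samples: list of (wav_path, digit_label) pairs, e.g. Malay digits 1-10."""
    X = np.array([mfcc_features(path) for path, _ in samples])
    y = np.array([label for _, label in samples])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size, stratify=y)
    for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                      ("SVM", SVC(kernel="rbf"))]:
        clf.fit(X_tr, y_tr)
        print(name, accuracy_score(y_te, clf.predict(X_te)))
```

Changing `test_size` from 0.2 to 0.3 reproduces the 80/20 versus 70/30 comparison described in the abstract.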
Performance Evaluation of HMI based on AHP and GRT for GUI
Jianbang Liu, M. Ang, J. Chaw, K. Ng
{"title":"Performance Evaluation of HMI based on AHP and GRT for GUI","authors":"Jianbang Liu, M. Ang, J. Chaw, K. Ng","doi":"10.1109/IICAIET55139.2022.9936844","DOIUrl":"https://doi.org/10.1109/IICAIET55139.2022.9936844","url":null,"abstract":"With consideration to the multi-factor and multi-level characteristics of the human-machine evaluation system for graphical user interface (GUI), the main factors affecting the interactive performance are analyzed. The evaluation system was established to evaluate the performance of human-machine interaction (HMI) for the GUI based on the analytic hierarchy process (AHP) and grey relational theory (GRT) model. Furthermore, we conducted an actual HMI experiment for four interactive products (desktop computer, intelligent refrigerator, smart car, and recreational machine) to verify the validation of the performance evaluation system. The application value of the evaluation model is demonstrated through the calculation. In conclusion, this study can provide a reference to designers for the scientific evaluation of human-machine performance of interactive products, which will help them design user-friendly interactive products with higher interaction efficiency.","PeriodicalId":142482,"journal":{"name":"2022 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130674224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
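As a rough illustration of the AHP step such an evaluation relies on, the sketch below derives criterion weights from a pairwise-comparison matrix via its principal eigenvector and checks Saaty's consistency ratio; the matrix values and the `ahp_weights` helper are assumptions, and the paper's grey relational step is not reproduced.

```python
import numpy as np

def ahp_weights(pairwise):
    """Return (weights, consistency_ratio) for a reciprocal comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                   # normalized criterion weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)     # Saaty's random index
    return w, ci / ri

# Example: three interaction-quality criteria compared on Saaty's 1-9 scale.
weights, cr = ahp_weights([[1, 3, 5],
                           [1 / 3, 1, 2],
                           [1 / 5, 1 / 2, 1]])
print(weights, cr)   # weights sum to 1; CR < 0.1 indicates acceptable consistency
```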
Efficient Distributed Consensus Algorithm For Swarm Robotic
S. Ranganathan, M. Mariappan, Karthigayan Muthukaruppan
{"title":"Efficient Distributed Consensus Algorithm For Swarm Robotic","authors":"S. Ranganathan, M. Mariappan, Karthigayan Muthukaruppan","doi":"10.1109/IICAIET55139.2022.9936787","DOIUrl":"https://doi.org/10.1109/IICAIET55139.2022.9936787","url":null,"abstract":"Swarm robotics is a network based multi-device system designed to achieve shared objectives in a synchronized way. This system is widely used in industries like farming, manufacturing, and defense applications. In recent implementations, swarm robotics is integrated with Blockchain based networks to enhance communication, security, and decentralized decision-making capabilities. As most of the current blockchain applications are based on complex consensus algorithms, every individual robot in the swarm network requires high computing power to run these complex algorithms. Thus, it is a challenging task to achieve consensus between the robots in the network. This paper will discuss the details of designing an effective consensus algorithm that meets the requirements of swarm robotics network.","PeriodicalId":142482,"journal":{"name":"2022 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125398683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
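The abstract does not spell out the proposed algorithm, so the following is only a generic sketch of one lightweight distributed-consensus primitive (iterative average consensus over a neighbour graph) of the kind such a low-compute design would be compared against; the `average_consensus` helper and the ring topology are assumptions.

```python
import numpy as np

def average_consensus(values, neighbours, step=0.2, iters=50):
    """Each robot repeatedly nudges its value toward the mean of its neighbours."""
    x = np.array(values, dtype=float)
    for _ in range(iters):
        new_x = x.copy()
        for i, nbrs in neighbours.items():
            new_x[i] = x[i] + step * sum(x[j] - x[i] for j in nbrs)
        x = new_x
    return x

# Four robots on a ring network converge to the average of their readings.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(average_consensus([1.0, 4.0, 2.0, 7.0], ring))
```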
Snail Recognition Using YOLO
Juan Ricardo I. Borreta, Justin A. Bautista, A. Yumang
{"title":"Snail Recognition Using YOLO","authors":"Juan Ricardo I. Borreta, Justin A. Bautista, A. Yumang","doi":"10.1109/IICAIET55139.2022.9936736","DOIUrl":"https://doi.org/10.1109/IICAIET55139.2022.9936736","url":null,"abstract":"Many species of snails inhabit different areas in the world. Some species have made their way to farmlands and the urban regions, surviving through eating plants and breeding unnoticed making them a cause for concern and a known threat to some crops. A study on snail detection has been previously conducted, but recognizing individual species for their risk has not yet been pursued. This study aims to develop a Tiny-YOLOv4 snail recognition system using a Raspberry Pi. The model focuses on four snail species subject to an input image processed through the system. The outputs show the image with the relevant bounding boxes and labels and notify a user through email for any recognitions. The system produced an overall accuracy of 92%, proving successful in the study's objectives and providing a basis for future literature.","PeriodicalId":142482,"journal":{"name":"2022 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126673206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
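A hedged sketch of how a Tiny-YOLOv4 detector of this kind can be run with OpenCV's DNN module on a Raspberry Pi class device; the config, weights, class-name, and image files named below are assumptions rather than artefacts released with the paper, and the email notification step is omitted.

```python
import cv2

# Darknet files assumed to come from training Tiny-YOLOv4 on the four snail classes.
net = cv2.dnn.readNetFromDarknet("yolov4-tiny-snails.cfg", "yolov4-tiny-snails.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

with open("snail_classes.txt") as f:
    class_names = [line.strip() for line in f]

image = cv2.imread("sample.jpg")
classes, scores, boxes = model.detect(image, confThreshold=0.5, nmsThreshold=0.4)
for cls, score, box in zip(classes, scores, boxes):
    x, y, w, h = box
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(image, f"{class_names[int(cls)]} {float(score):.2f}", (x, y - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv2.imwrite("detections.jpg", image)   # image annotated with boxes and labels
```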
Real-time Detection of Aquarium Fish Species Using YOLOv4-tiny on Raspberry Pi 4
Cyril Jay L. Chan, Ethan James A. Reyes, N. Linsangan, Roben A. Juanatas
{"title":"Real-time Detection of Aquarium Fish Species Using YOLOv4-tiny on Raspberry Pi 4","authors":"Cyril Jay L. Chan, Ethan James A. Reyes, N. Linsangan, Roben A. Juanatas","doi":"10.1109/IICAIET55139.2022.9936790","DOIUrl":"https://doi.org/10.1109/IICAIET55139.2022.9936790","url":null,"abstract":"A version of the YOLO detection algorithm, the YOLOv4, has yet to find much use on aquatic species. Detection systems optimized for aquarium fish species are also currently lacking. This study provides a detection program for select fish species, namely the dwarf gourami, guppy, and zebrafish, using the YOLOv4-tiny detection model. The program was implemented in the Raspberry Pi 4 Model B single-board computer with an 8MP camera. The YOLOv4-tiny model was trained using images from Kaggle, FishBase, and the Global Biodiversity Information Facility, along with local images. The program was tested on live samples of the three fish species along with one irrelevant fish species, the petticoat tetra. There were three live samples of each species. Close shots for each sample were taken from the aquarium's front, left, right, and back sides, making a total of 48 images for detection. Training data and the confusion matrix from the experiment were utilized to determine the program's reliability in detecting the fish species. For the results, the trained model achieved a mAP of 97.81% during training and a global accuracy of 91.67% during the experiment. The program exhibited reliable performance across the board, achieving above 90% AP and accuracy in all classes.","PeriodicalId":142482,"journal":{"name":"2022 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126319784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
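A small sketch of how per-class and global accuracy can be read off a confusion matrix like the one this study uses; the counts below are made-up placeholders chosen only so the arithmetic illustrates a 91.67% global accuracy of the kind reported, not the actual experimental matrix.

```python
import numpy as np

def per_class_and_global_accuracy(cm, labels):
    """cm rows are true classes, columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    global_acc = cm.trace() / cm.sum()
    per_class = {lab: cm[i, i] / cm[i].sum() for i, lab in enumerate(labels)}
    return global_acc, per_class

labels = ["dwarf gourami", "guppy", "zebrafish", "other"]
cm = [[11, 0, 1, 0],   # placeholder counts: 12 test images per class, 48 total
      [0, 12, 0, 0],
      [1, 0, 11, 0],
      [0, 1, 1, 10]]
print(per_class_and_global_accuracy(cm, labels))   # global accuracy = 44/48 ≈ 0.9167
```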
Maximizing Power Generation in Variable Speed Micro-Hydro with Power Point Tracking
M. K. Tan, Norafe Maximo Javinez, Kit Guan Lim, A. Haron, Pungut Ibrahim, K. Teo
{"title":"Maximizing Power Generation in Variable Speed Micro-Hydro with Power Point Tracking","authors":"M. K. Tan, Norafe Maximo Javinez, Kit Guan Lim, A. Haron, Pungut Ibrahim, K. Teo","doi":"10.1109/IICAIET55139.2022.9936859","DOIUrl":"https://doi.org/10.1109/IICAIET55139.2022.9936859","url":null,"abstract":"Conventional variable speed micro-hydro control systems suffer from non-optimal input control. The controllers estimate the changes in flow rate without anticipating the global maximum power curve. As such, this paper aims to explore and develop a feasible maximum power point tracker (MPPT) with perturb and observe (P&O) and genetic algorithm (GA) in providing optimal power generation for variable speed micro-hydro system. This research first introduces a mathematical model for an experimental variable speed micro-hydro platform and then simulates the micro-hydro in MATLAB. Conventional P&O MPPT algorithm used fixed perturbation size which requires large computation time when the perturbation size is small and suffers from power fluctuation issues when the perturbation size is large. Thus, a GA-based P&O MPPT algorithm with adaptive perturbation size is proposed to provide a large perturbation size during transient response and a small perturbation size at a steady state. The simulation results showed that the proposed GA-based P&O MPPT algorithm was able to track the global maximum power point (MPP).","PeriodicalId":142482,"journal":{"name":"2022 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121882167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
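A simplified sketch of perturb-and-observe hill-climbing with an adaptive perturbation size, the behaviour the proposed method targets; the GA tuning layer is not reproduced, and `power_curve` is an assumed stand-in for the micro-hydro model rather than the paper's MATLAB simulation.

```python
def power_curve(speed):
    """Assumed unimodal turbine power curve, peaking near speed = 1.0."""
    return max(0.0, 1.0 - (speed - 1.0) ** 2)

def perturb_and_observe(speed=0.4, step=0.2, min_step=0.01, iters=60):
    """Climb the power curve; shrink the perturbation once the peak is overshot."""
    power = power_curve(speed)
    direction = 1.0
    for _ in range(iters):
        new_speed = speed + direction * step
        new_power = power_curve(new_speed)
        if new_power < power:
            direction = -direction            # overshot the peak: reverse
            step = max(min_step, step / 2)    # smaller steps near steady state
        speed, power = new_speed, new_power
    return speed, power

print(perturb_and_observe())   # converges close to the maximum power point
```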
Forest Fire Detection for Edge Devices
Teo Khai Xian, Hermawan Nugroho
{"title":"Forest Fire Detection for Edge Devices","authors":"Teo Khai Xian, Hermawan Nugroho","doi":"10.1109/IICAIET55139.2022.9936786","DOIUrl":"https://doi.org/10.1109/IICAIET55139.2022.9936786","url":null,"abstract":"It is observed that the forest land mass was reducing rapidly from 1990 to 2020. As many plants and animals are depending on the forest, this is very alarming. Forest fire is one of the major causes of such loss. Forest fires tend to spread quickly and are difficult to control in a short time. Early detection of these forest fires is the key to mitigate the forest fire. There are many methods developed by researchers to monitor forest fire. An aerial-based detection system with unmanned aerial vehicles (U A V s) is one of the emerging methods which can provider a bird's eye view of the forest from above. Monitoring with UAVs however requires trained personnel to operate and manually monitor the forest. In this paper, we develop a fire detection algorithm that can analyzed images taken by UAVs and can be equipped into an autonomous UA V. The developed method does not require a lot computing power. It is based on YOLOv5 which is build and converted into optimized model that can run on an embedded board. Result shows that the method has a high MAP (>97%) with acceptable inference time indicating a good potential of the developed model.","PeriodicalId":142482,"journal":{"name":"2022 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131794032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
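A hedged sketch, assuming PyTorch and the public ultralytics/yolov5 hub entry point, of loading a YOLOv5 model and timing single-image inference, the two properties the abstract reports alongside mAP; the `yolov5s` checkpoint and image path are placeholders, not the authors' trained fire model or their embedded-board conversion.

```python
import time
import torch

# 'yolov5s' is the small public checkpoint; the paper trains and optimizes its own model.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
model.conf = 0.4                      # confidence threshold

start = time.time()
results = model("aerial_frame.jpg")   # assumed UAV image on disk
elapsed = time.time() - start

results.print()                       # class, confidence, and box summary
print(f"inference time: {elapsed:.3f} s")
```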
Hand-Foot-Mouth Disease Classification using Features from Fibre Grating Biosensor Spectral Data
A. Mahmood, S. Azzuhri, Adnan N. Qureshi, Palwasha Jaan, Iqra Sadia
{"title":"Hand-Foot-Mouth Disease Classification using Features from Fibre Grating Biosensor Spectral Data","authors":"A. Mahmood, S. Azzuhri, Adnan N. Qureshi, Palwasha Jaan, Iqra Sadia","doi":"10.1109/IICAIET55139.2022.9936818","DOIUrl":"https://doi.org/10.1109/IICAIET55139.2022.9936818","url":null,"abstract":"Hand, Foot and Mouth disease (HFMD) is a common viral childhood disease affected by the family of enterovirus and Coxsackie. Current laboratory identification is based on the RT-PCR test, which is expensive, time-consuming, and unsuitable for the pandemic. The SPR- TFBG was biofunctionalized with monoclonal antibody (Mab). Mab is a bioreceptor with an affinity for the virus for detecting EV-A71. A dataset of reflectance spectra of 660 samples of different virus impurities measured with SPR-TFBG biosensor to detect EV-71 virus was developed. The extracted signal has around 4000 different features based on wavelength information. The first subset was selected based on the region of interest analysis, and the dimension has reduced from 4000 to 1496 features. The dimensionality of the large feature set is reduced based on the statistical feature engineering procedure using 10 features including mean, variance, skewness, RMS, kurtosis, standard deviation, range, crest factor, impulse factor and shape factor. Subsequently, classification of the virus (signal) data is achieved through SVM and it is evaluated with different types of kernels. For the evaluation of classifiers, we used accuracy, sensitivity, precision and F1 score performance metrics. The obtained results of accuracy are 87.88 for linear SVM, 86.06 for radial basis, 75.76 for sigmoid SVM, and 75.15 for polynomial SVM, respectively. The results show that for our experiments, Linear SVM performs better than radial, polynomial and sigmoid kernels. This is because projecting the data onto higher dimensions is not required as data exhibits linear properties confirmed by White Neural Network (WNN) test for nonlinearity.","PeriodicalId":142482,"journal":{"name":"2022 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132247635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
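A sketch, assuming NumPy, SciPy, and scikit-learn, of the ten-statistic feature reduction and SVM kernel comparison described above; the `spectral_features` helper and the train/test handling are illustrative assumptions, and the input spectra are not the SPR-TFBG dataset.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def spectral_features(x):
    """Reduce one reflectance spectrum to ten summary statistics."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    abs_mean = np.mean(np.abs(x))
    peak = np.max(np.abs(x))
    return np.array([
        x.mean(), x.var(), skew(x), rms, kurtosis(x), x.std(),
        x.max() - x.min(),   # range
        peak / rms,          # crest factor
        peak / abs_mean,     # impulse factor
        rms / abs_mean,      # shape factor
    ])

def compare_kernels(spectra, labels):
    X = np.array([spectral_features(s) for s in spectra])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, stratify=labels)
    for kernel in ("linear", "rbf", "sigmoid", "poly"):
        clf = SVC(kernel=kernel).fit(X_tr, y_tr)
        print(kernel, accuracy_score(y_te, clf.predict(X_te)))
```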
Performance of Content-Based Features to Detect Depression Tendencies in Different Text Lengths
N. Z. Zulkarnain, N. Yusof, Sharifah Sakinah Syed Ahmad, Zuraini Othman, Azura Hanim Hashim
{"title":"Performance of Content-Based Features to Detect Depression Tendencies in Different Text Lengths","authors":"N. Z. Zulkarnain, N. Yusof, Sharifah Sakinah Syed Ahmad, Zuraini Othman, Azura Hanim Hashim","doi":"10.1109/IICAIET55139.2022.9936811","DOIUrl":"https://doi.org/10.1109/IICAIET55139.2022.9936811","url":null,"abstract":"Text analytics have been widely used nowadays in the field of mental health to predict onset mental health issues such as depression and anxiety, with the intention to perform early intervention. Most existing works focuses on looking at how such mental health issues can be predicted based on social media data. These texts are often short and straightforward as compared to blogs and journals. In this paper, we are interested in comparing the performance of a classification model in classifying long texts and short texts as having depression tendencies. An existing model that can perform well in classifying short texts using content-based features was adopted and tested on longer texts. From the result, it is found that compared to shorter text, content-based features performed worst in long texts whereby all five classifiers used produced an accuracy of less than 0.65.","PeriodicalId":142482,"journal":{"name":"2022 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132280208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
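A minimal sketch of one way content-based text features can feed several classifiers for this kind of long-versus-short-text comparison; TF-IDF and the three classifiers below are assumed stand-ins, not the adopted model's actual feature set or classifier list.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def compare_classifiers(texts, labels):
    """Cross-validated accuracy of several classifiers over the same text features."""
    classifiers = {
        "logreg": LogisticRegression(max_iter=1000),
        "naive_bayes": MultinomialNB(),
        "linear_svm": LinearSVC(),
    }
    for name, clf in classifiers.items():
        pipe = make_pipeline(TfidfVectorizer(min_df=2, ngram_range=(1, 2)), clf)
        scores = cross_val_score(pipe, texts, labels, cv=5, scoring="accuracy")
        print(name, scores.mean())
```

Running the same function separately on a short-text corpus and a long-text corpus reproduces the kind of length comparison the abstract reports.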
Pre-trained Deep Learning Models for COVID19 Classification: CNNs vs. Vision Transformer
Mai Sufian, E. Moung, J. Dargham, Farashazillah Yahya, S. Omatu
{"title":"Pre-trained Deep Learning Models for COVID19 Classification: CNNs vs. Vision Transformer","authors":"Mai Sufian, E. Moung, J. Dargham, Farashazillah Yahya, S. Omatu","doi":"10.1109/IICAIET55139.2022.9936852","DOIUrl":"https://doi.org/10.1109/IICAIET55139.2022.9936852","url":null,"abstract":"The fast proliferation of the coronavirus disease 2019 (COVID19) has pushed many countries' healthcare systems to the brink of disaster. It has become a necessity to automate the screening procedures to reduce the ongoing cost to the healthcare systems. Although the use of the Convolutional Neural Networks (CNNs) is gaining attention in the field of COVID19 diagnosis based on medical images, these models have disadvantages due to their image-specific inductive bias, which contradict to the Vision Transformer (ViT). This paper conducts comparative study of the use of the three most established CNN models and a ViT to deal with the classification of COVID19 and Non-COVID19 cases. This study uses 2481 computed tomography (CT) images of 1252 COVID19 and 1229 Non-COVID19 patients. Confusion metrics and performance metrics were used to analyze the models. The experimental results show all the pre-trained CNNs (VGG16, ResNet50, and IncetionV3)outperformed the pre-trained ViT model, with InceptionV3 as the best performing model (99.20% of accuracy).","PeriodicalId":142482,"journal":{"name":"2022 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133097263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
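A hedged sketch, assuming TensorFlow/Keras, of the pre-trained-CNN transfer-learning setup such a comparison typically uses, with InceptionV3 (the best-performing backbone reported) as a frozen feature extractor; the image size, classification head, and training settings are assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

def build_covid_classifier(input_shape=(299, 299, 3)):
    """Binary COVID19 vs. Non-COVID19 classifier on top of frozen ImageNet features."""
    base = InceptionV3(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False                              # keep pre-trained features frozen
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.3)(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_covid_classifier()
model.summary()
```

Swapping `InceptionV3` for `VGG16` or `ResNet50` from `tensorflow.keras.applications` gives the other CNN baselines compared in the paper.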