Jetson Nano and Arduino-Based Robot for Physical Distancing Using YOLOv4 Algorithm with Thermal Scanner
John Arvic J. Hizon, Ramon G. Garcia, Ma. Rica J. Rebustillo
2022 IEEE 18th International Colloquium on Signal Processing & Applications (CSPA), 2022-05-12. DOI: 10.1109/CSPA55076.2022.9781901

Abstract: Coronavirus disease, better known as COVID-19, was first discovered in Wuhan, China, and was declared a global pandemic by the WHO in March 2020. Due to the threatening characteristics of the virus, governments and health authorities imposed precautions to bring the situation under control. To mitigate further transmission, the "New Normal" was introduced to the public: practicing the minimum safety protocols of wearing a face mask, frequently washing hands, and observing physical distancing. This study builds an autonomous robot that monitors physical distancing, focusing on queues of people. The robot uses the YOLOv4 algorithm to detect individuals and computes their pairwise Euclidean distance to check whether they are observing the 1.5-meter distancing protocol. The robot also includes a voice alarm that alerts violators and reminds them to follow the practice. In addition, the robot can measure the body temperature of the people it detects. In assessing the robot's program, the implemented object detection achieved an accuracy of 93%, a precision of 87.5%, an error rate of 7%, and a recall of 94.6%. At the robot's constraint distance of 3.5 meters, the physical-distancing program obtained a percent error of 4.26%.
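The distancing check described above — detect people, take the Euclidean distance between them, and flag pairs closer than 1.5 m — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `(x, y, w, h)` box format and the `PIXELS_PER_METER` calibration constant are assumptions, since the abstract does not specify how pixel distances are converted to meters.

```python
import math

PIXELS_PER_METER = 100.0  # assumed calibration constant (not from the paper)

def centroid(box):
    """Center point of a detection box given as (x, y, w, h) in pixels."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def distancing_violations(boxes, min_dist_m=1.5):
    """Return index pairs of detected people closer than min_dist_m meters."""
    pts = [centroid(b) for b in boxes]
    violations = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d_px = math.dist(pts[i], pts[j])  # Euclidean distance in pixels
            if d_px / PIXELS_PER_METER < min_dist_m:
                violations.append((i, j))
    return violations

# Three detections: the first two are 0.6 m apart, the third is far away.
print(distancing_violations([(0, 0, 50, 100), (60, 0, 50, 100), (400, 0, 50, 100)]))
# → [(0, 1)]
```

In the robot, each flagged pair would trigger the voice alarm; a real deployment would also need a proper pixel-to-meter calibration that accounts for camera perspective.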
Key Area Acquisition Training for Practical Image-based Plant Disease Diagnosis
Kaito Odagiri, Shogo Shibuya, Q. H. Cap, H. Iyatomi
2022 IEEE 18th International Colloquium on Signal Processing & Applications (CSPA), 2022-05-12. DOI: 10.1109/CSPA55076.2022.9781877

Abstract: Automatic diagnosis of plant diseases from images is a fine-grained task, and disease symptoms are often ambiguous and highly variable. Pre-extraction of the region of interest (ROI) exhibiting disease symptoms (such as one or more leaves) is known to improve accuracy. However, ROI extraction at runtime is time-consuming, hurting system usability. This paper proposes a new training method called key area acquisition training (KAAT). KAAT reduces the variation in prediction results between images before and after ROI extraction. By directing the model's attention to the ROI during training, KAAT improves diagnostic performance without sacrificing execution time at diagnosis. In the evaluation, we conducted a nine-class diagnosis task (eight diseases plus healthy) using 77K and 9K images of cucumber leaves (collected from different fields) for training and testing, respectively. The proposed KAAT improved diagnostic accuracy by 3.8% in macro-F1 and 2.0% in micro-accuracy without increasing execution time.
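The core idea stated in the abstract — penalize disagreement between the model's predictions on the full image and on the extracted ROI, so that at inference time the full image alone suffices — can be sketched as a consistency term added to the usual classification loss. The exact loss form is not given in the abstract; the mean-squared-difference term, the `lam` weight, and the function names below are all illustrative assumptions.

```python
import math

def consistency_loss(p_full, p_roi):
    """Mean squared difference between class-probability vectors predicted
    from the full image and from the ROI crop (assumed KAAT-style term)."""
    assert len(p_full) == len(p_roi)
    return sum((a - b) ** 2 for a, b in zip(p_full, p_roi)) / len(p_full)

def cross_entropy(probs, target_idx):
    """Standard cross-entropy for a single example."""
    return -math.log(max(probs[target_idx], 1e-12))

def kaat_loss(p_full, p_roi, target_idx, lam=1.0):
    """Classification loss on the full image plus a weighted consistency
    penalty tying it to the ROI prediction (illustrative combination)."""
    return cross_entropy(p_full, target_idx) + lam * consistency_loss(p_full, p_roi)
```

If the two predictions agree and the full-image prediction is correct, the loss is zero; disagreement between full-image and ROI predictions is penalized even when the label is right, which is what pushes the model's attention toward the ROI.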
{"title":"SMARTFLORA Mobile Flower Recognition Application Using Machine Learning Tools","authors":"F. Khalid, Azfar Husna Abdullah, L. N. Abdullah","doi":"10.1109/CSPA55076.2022.9781961","DOIUrl":"https://doi.org/10.1109/CSPA55076.2022.9781961","url":null,"abstract":"There are around 369,000 flowering plant species documented globally. However, the majority of people have difficulties telling these blooms apart. Usually, people often consult specialists, study floral reference books, or do keyword searches on relevant web resources. Therefore, this flower recognition mobile application was proposed to ease those people to recognize types of flowers without using any computer or machine. In this paper, a system architecture is designed based on Teachable Machine Learning platform, Tensorflow Lite Model and Android Studio to develop a SMARTFLORA Mobile Flower Recognition application that allows users to identify three types of flower species: daisies, roses, and sunflowers. Kaggle dataset has been used and the accuracy was 88%.","PeriodicalId":174315,"journal":{"name":"2022 IEEE 18th International Colloquium on Signal Processing & Applications (CSPA)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129466175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}