Warisara Asawaponwiput, Panyawut Sriiesaranusorn, Thawat Mohchit, N. Thatphithakkul, D. Surangsrirat
{"title":"Application of Machine Learning in Lifestyle: Weight-In Image Classification using Convolutional Neural Networks","authors":"Warisara Asawaponwiput, Panyawut Sriiesaranusorn, Thawat Mohchit, N. Thatphithakkul, D. Surangsrirat","doi":"10.1109/ICA55837.2022.00018","DOIUrl":null,"url":null,"abstract":"Nowadays, people are increasingly concerned for their health as being healthy is regarded as a profitable investment. Obesity is one of the most common health problems that leads to multiple diseases. We work with the team that developed a mobile application to encourage users to change their eating and activity behaviors to improve their health based on a virtual competition platform. Participants are required to upload a weight-in photo to verify their weight before the challenge. Manually verifying these images can be time-consuming and error-prone due to the large number of images in each competition. In this study, we proposed an image classification approach to help screen incorrect images of the weight-in photo for the virtual competition. The image augmentation techniques were applied to the training images before being input into the classification model. Since the goal is to deploy the model in a mobile application, the suitable model must be small and efficient enough for use in a limited resources environment. Therefore, VGGNet-16 and MobileNet-V2 were selected as the classification models. The experimental results show that the model could learn from the preprocessed images and obtain satisfactory classification results from pre-trained VGGNet-16 with the highest accuracy and F1-score of 95.00% and 95.23%, respectively. MobileNet-V2 inference time was approximately 10 times faster but the performance was lower with the highest accuracy and F1-score of 93.00% and 93.32%, respectively.","PeriodicalId":150818,"journal":{"name":"2022 IEEE International Conference on Agents (ICA)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Agents (ICA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICA55837.2022.00018","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Nowadays, people are increasingly concerned about their health, as being healthy is regarded as a profitable investment. Obesity is one of the most common health problems and leads to multiple diseases. We work with the team that developed a mobile application that encourages users to change their eating and activity behaviors to improve their health through a virtual competition platform. Participants are required to upload a weight-in photo to verify their weight before the challenge. Manually verifying these images can be time-consuming and error-prone due to the large number of images in each competition. In this study, we propose an image classification approach to help screen out incorrect weight-in photos for the virtual competition. Image augmentation techniques were applied to the training images before they were input into the classification model. Since the goal is to deploy the model in a mobile application, a suitable model must be small and efficient enough for use in a resource-limited environment. Therefore, VGGNet-16 and MobileNet-V2 were selected as the classification models. The experimental results show that the models could learn from the preprocessed images and achieve satisfactory classification results, with the pre-trained VGGNet-16 obtaining the highest accuracy and F1-score of 95.00% and 95.23%, respectively. MobileNet-V2 inference was approximately 10 times faster, but its performance was lower, with the highest accuracy and F1-score of 93.00% and 93.32%, respectively.
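To make the described pipeline concrete, below is a minimal sketch of how one might combine image augmentation with an ImageNet-pretrained VGGNet-16 or MobileNet-V2 backbone for a binary weight-in/not-weight-in classifier in Keras. This is not the authors' implementation: the image size, augmentation transforms, directory layout (`data/train/{valid,invalid}`), and training hyperparameters are all assumptions for illustration.

```python
# Hypothetical sketch, not the paper's code: augment training images, then
# fine-tune a frozen ImageNet-pretrained backbone (VGG16 or MobileNetV2)
# as a binary classifier for weight-in photo screening.
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16, MobileNetV2

IMG_SIZE = (224, 224)  # assumed input resolution

# Data augmentation applied only at training time (assumed transforms).
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

def build_classifier(backbone_name="mobilenet_v2"):
    """Build a binary classifier on top of a frozen pretrained backbone."""
    if backbone_name == "vgg16":
        backbone = VGG16(include_top=False, weights="imagenet",
                         input_shape=IMG_SIZE + (3,))
    else:
        backbone = MobileNetV2(include_top=False, weights="imagenet",
                               input_shape=IMG_SIZE + (3,))
    backbone.trainable = False  # transfer learning: reuse ImageNet features

    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = augment(inputs)                 # random transforms are no-ops at inference
    x = backbone(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # valid vs. invalid photo
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Assumed directory layout: data/train/valid/*.jpg, data/train/invalid/*.jpg
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=IMG_SIZE, batch_size=32)
    model = build_classifier("mobilenet_v2")
    model.fit(train_ds, epochs=10)
```

In this sketch the MobileNet-V2 backbone would be the natural choice for on-device deployment (far fewer parameters and faster inference), while VGG16 trades speed for the slightly higher accuracy reported in the abstract; backbone-specific input preprocessing is omitted for brevity.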