{"title":"A Study on the Prediction Model for Bioactive Components of Cnidium officinale Makino according to Climate Change using Machine Learning","authors":"Hyunjo Lee, Hyun Jung Koo, Kyeong Cheol Lee, Won-Kyun Joo, Cheol-Joo Chae","doi":"10.30693/smj.2023.12.10.93","DOIUrl":"https://doi.org/10.30693/smj.2023.12.10.93","url":null,"abstract":"Climate change has emerged as a global problem, bringing frequent temperature increases, droughts, and floods, and it is predicted to have a major impact on the characteristics and productivity of crops. Cnidium officinale is used not only as a traditional herbal medicine but also as a raw material for various industries, including health functional foods, natural medicines, and living materials; however, its productivity is decreasing due to threats such as continuous cropping damage and climate change. Therefore, this paper proposes a model that predicts the bioactive-component index of Cnidium officinale, a representative medicinal crop vulnerable to climate change, under climate change scenarios. First, data were augmented using the CTGAN algorithm to resolve the imbalance in the collected environmental, physiological-response, and bioactive-component data. Column Shapes and Column Pair Trends were used to measure the quality of the augmented data, and an overall average quality of 88% was achieved. In addition, five models (RF, SVR, XGBoost, AdaBoost, and LightGBM) were trained on the augmented data to predict phenol and flavonoid content, separately for the above-ground and underground parts. In the model evaluation, the XGBoost model showed the best performance in predicting the bioactive components of Cnidium officinale and was confirmed to be about twice as accurate as the SVR model.","PeriodicalId":249252,"journal":{"name":"Korean Institute of Smart Media","volume":"67 13","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139206587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
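The data-quality metrics named in the abstract above, Column Shapes and Column Pair Trends, are the metrics reported by the SDV library that ships CTGAN; for a numeric column, the Column Shapes score is commonly computed as the complement of the two-sample Kolmogorov–Smirnov statistic. A minimal sketch of that per-column score, assuming NumPy and SciPy and using placeholder data (not the paper's):

```python
import numpy as np
from scipy.stats import ks_2samp

def column_shapes_score(real: np.ndarray, synthetic: np.ndarray) -> float:
    """KS-complement for one numeric column: 1 - KS statistic.
    1.0 means the synthetic column's distribution matches the real one exactly."""
    result = ks_2samp(real, synthetic)
    return 1.0 - result.statistic

rng = np.random.default_rng(0)
real = rng.normal(loc=5.0, scale=1.0, size=1000)        # e.g. measured phenol content
good_synth = rng.normal(loc=5.0, scale=1.0, size=1000)  # well-matched synthetic column
bad_synth = rng.normal(loc=9.0, scale=3.0, size=1000)   # poorly-matched synthetic column

# A faithful synthetic column scores much higher than a mismatched one.
assert column_shapes_score(real, good_synth) > column_shapes_score(real, bad_synth)
print(round(column_shapes_score(real, good_synth), 3))
```

The overall Column Shapes figure in a quality report is the average of this score across columns (with a total-variation complement used for categorical columns).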
{"title":"A Study on the Generation of Webtoons through Fine-Tuning of Diffusion Models","authors":"Kyungho Yu, Hyungju Kim, Jeongin Kim, Chanjun Chun, Pankoo Kim","doi":"10.30693/smj.2023.12.7.76","DOIUrl":"https://doi.org/10.30693/smj.2023.12.7.76","url":null,"abstract":"This study proposes a method to assist webtoon artists in the webtoon creation process by using a pretrained Text-to-Image model to generate webtoon images from text. The proposed approach fine-tunes a pretrained Stable Diffusion model on a webtoon dataset transformed into the desired webtoon style. The fine-tuning process, which uses the LoRA technique, completes in approximately 4.5 hours of training over 30,000 steps. The generated images depict shapes and backgrounds that reflect the input text, resulting in webtoon-like images. Furthermore, quantitative evaluation using the Inception score shows that the proposed method outperforms DCGAN-based Text-to-Image models. If webtoon artists adopt the proposed Text-to-Image model for webtoon creation, it is expected to significantly reduce the time required for the creative process.","PeriodicalId":249252,"journal":{"name":"Korean Institute of Smart Media","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128645135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
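The LoRA fine-tuning mentioned above trains only a low-rank update on top of frozen pretrained weights, which is why it completes quickly. A minimal NumPy sketch of the idea (toy dimensions, not Stable Diffusion's actual attention shapes):

```python
import numpy as np

rng = np.random.default_rng(42)
d_out, d_in, r, alpha = 64, 64, 4, 8   # toy sizes; r is the LoRA rank

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x):
    # LoRA forward pass: h = W x + (alpha / r) * B A x -- only A, B are trained.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)

# With B zero-initialized, the adapted model starts identical to the base model.
assert np.allclose(lora_forward(x), W @ x)

# After training, the update can be merged into W for zero-overhead inference.
W_merged = W + (alpha / r) * (B @ A)
assert np.allclose(W_merged @ x, lora_forward(x))
```

Only the r x d_in and d_out x r factors are optimized, so the trainable parameter count is a small fraction of the full weight matrix.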
{"title":"Deep Learning-based system for plant disease detection and classification","authors":"YuJin Ko, HyunJun Lee, HeeJa Jeong, Li Yu, NamHo Kim","doi":"10.30693/smj.2023.12.7.9","DOIUrl":"https://doi.org/10.30693/smj.2023.12.7.9","url":null,"abstract":"Plant diseases and pests affect the growth of various plants, so it is very important to identify them at an early stage. Although many machine learning (ML) models have already been used for the inspection and classification of plant pests, advances in deep learning (DL), a subset of machine learning, have driven much progress in this field of research. In this study, disease and pest inspection was performed for abnormal crops, and maturity classification was performed for normal crops, using a YOLOX detector and a MobileNet classifier. With this method, various plant pest features can be extracted effectively. For the experiments, image datasets of various resolutions covering strawberries, peppers, and tomatoes were prepared and used for plant pest classification. According to the experimental results, the average test accuracy was 84% and the maturity classification accuracy was 83.91% on images with complex backgrounds. The model effectively detected six diseases across three plants and classified the maturity of each plant under natural conditions.","PeriodicalId":249252,"journal":{"name":"Korean Institute of Smart Media","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128693040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
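The two-stage flow described above (a detector that flags abnormal crops, followed by a classifier that grades the maturity of normal ones) can be sketched as follows; `yolox_detect` and `mobilenet_maturity` are stand-in stubs for the trained networks, and all class names are illustrative, not the paper's:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    box: tuple      # (x1, y1, x2, y2) in pixels
    label: str      # "normal" or a disease class from the detector
    score: float

# Stubs standing in for YOLOX and MobileNet inference on an image tensor.
def yolox_detect(image) -> List[Detection]:
    return [Detection((10, 10, 80, 90), "normal", 0.92),
            Detection((100, 20, 180, 110), "leaf_spot", 0.88)]

def mobilenet_maturity(image, box) -> str:
    return "ripe"   # one of the maturity classes for normal crops

def inspect(image, conf_threshold: float = 0.5) -> List[dict]:
    """Two-stage pipeline: abnormal crops keep the disease label from the
    detector; normal crops are routed on to the maturity classifier."""
    results = []
    for det in yolox_detect(image):
        if det.score < conf_threshold:
            continue
        if det.label == "normal":
            results.append({"box": det.box,
                            "maturity": mobilenet_maturity(image, det.box)})
        else:
            results.append({"box": det.box, "disease": det.label})
    return results

print(inspect(image=None))
```

The design point is that the classifier only ever sees crops the detector judged normal, so its maturity classes need not cover diseased plants.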
{"title":"Performance Comparison for Exercise Motion classification using Deep Learing-based OpenPose","authors":"Nam Rye Son, Min A Jung","doi":"10.30693/smj.2023.12.7.59","DOIUrl":"https://doi.org/10.30693/smj.2023.12.7.59","url":null,"abstract":"Recently, research on behavior analysis that tracks human posture and movement has been actively conducted. In particular, OpenPose, open-source software developed by CMU in 2017, is a representative method for estimating human pose and behavior. OpenPose can detect and estimate various body parts of a person, such as the body, face, and hands, in real time, making it applicable to fields such as smart healthcare, exercise training, security systems, and medicine. In this paper, we propose a method for classifying the four exercise movements most commonly performed by users in the gym - Squat, Walk, Wave, and Fall-down - using OpenPose-based deep learning models, a DNN and a CNN. The training data are collected by capturing the user's movements through recorded videos and real-time camera captures. The collected dataset undergoes preprocessing using OpenPose, and the preprocessed dataset is then used to train the proposed DNN and CNN models for exercise movement classification. The performance of the proposed models is evaluated using MSE, RMSE, and MAE. The evaluation results showed that the proposed DNN model outperformed the proposed CNN model.","PeriodicalId":249252,"journal":{"name":"Korean Institute of Smart Media","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129004406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
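The error metrics used above relate in a fixed way: RMSE is the square root of MSE, and MAE penalizes large errors less heavily. A small NumPy sketch with placeholder predictions (not the paper's data):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: average of squared residuals.
    return float(np.mean((y_true - y_pred) ** 2))

def rmse(y_true, y_pred):
    # Root mean squared error: same units as the target.
    return float(np.sqrt(mse(y_true, y_pred)))

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of residuals.
    return float(np.mean(np.abs(y_true - y_pred)))

y_true = np.array([1.0, 0.0, 1.0, 1.0])
y_pred = np.array([0.8, 0.1, 0.6, 1.0])

print(mse(y_true, y_pred), rmse(y_true, y_pred), mae(y_true, y_pred))
assert np.isclose(rmse(y_true, y_pred) ** 2, mse(y_true, y_pred))
```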
{"title":"Role Based Smart Health Service Access Control in F2C environment","authors":"Mi Sun Kim, Kyung Woo Park, Jae Hyun Seo","doi":"10.30693/smj.2023.12.7.27","DOIUrl":"https://doi.org/10.30693/smj.2023.12.7.27","url":null,"abstract":"The development of cloud services and IoT technology has radically changed the cloud environment, which has evolved into the new concepts of fog computing and F2C (fog-to-cloud). However, as heterogeneous cloud/fog layers are integrated, problems of access control and security management for end users and edge devices may occur. In this paper, an F2C-based IoT smart health monitoring system architecture was designed to operate a medical information service that can respond quickly to medical emergencies. In addition, a role-based service access control technique was proposed to protect users' personal health information and sensor information during service interoperation. Through simulation, it was shown that role-based access control is achieved by sharing role-registration and user-role-token issuance information through a blockchain. End users can receive services from the device with the fastest response time, and by performing service access control according to roles, direct access to data can be minimized and security for personal health information can be enhanced.","PeriodicalId":249252,"journal":{"name":"Korean Institute of Smart Media","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114967764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
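The core of the role-based access check described above can be sketched in a few lines; the role names, permission strings, and token shape below are illustrative assumptions, not the paper's schema:

```python
# Permissions granted to each role in a hypothetical health-monitoring service.
ROLE_PERMISSIONS = {
    "doctor":    {"read_vitals", "read_history", "write_prescription"},
    "caregiver": {"read_vitals"},
    "device":    {"write_vitals"},
}

def check_access(token: dict, permission: str) -> bool:
    """Grant access only if the role carried by the user's role token
    includes the requested permission; unknown roles get nothing."""
    return permission in ROLE_PERMISSIONS.get(token["role"], set())

# A role token as issued at registration (per the paper, the registration and
# token-issuance records are shared through a blockchain).
token = {"user": "alice", "role": "caregiver"}

assert check_access(token, "read_vitals")
assert not check_access(token, "read_history")   # data access is minimized by role
```

Because authorization is decided by role rather than per-user rules, revoking or re-scoping a role updates every holder of that role at once.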
{"title":"A Comparative Study of the CNN Model for AD Diagnosis","authors":"Ramineni Vyshnavi, Goo-Rak Kwon","doi":"10.30693/smj.2023.12.7.52","DOIUrl":"https://doi.org/10.30693/smj.2023.12.7.52","url":null,"abstract":"Alzheimer's disease (AD) is a type of dementia whose symptoms can be managed more effectively when the disease is detected at an early stage. Recently, many computer-aided diagnosis studies using magnetic resonance imaging (MRI) have shown good results in the classification of AD. The MRI images are fed to the FreeSurfer software to extract features. In this study, T1-weighted images are used and classified with convolutional neural network (CNN) models. Subcortical and cortical features of 190 subjects were taken from ADNI. To reduce model complexity, a single layer is used in ResNet, VGG, and AlexNet. Multi-class classification is used to distinguish four stages: CN, EMCI, LMCI, and AD. In the experiments, VGG achieved the best accuracy at 96%, while ResNet, GoogLeNet, and AlexNet achieved 91%, 93%, and 89%, respectively.","PeriodicalId":249252,"journal":{"name":"Korean Institute of Smart Media","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117164337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
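The four-stage classification above (CN, EMCI, LMCI, AD) is an ordinary multi-class accuracy problem; a confusion matrix additionally shows which adjacent stages get confused. A small NumPy sketch with hypothetical labels (the study reports only aggregate accuracies, so these arrays are illustrative):

```python
import numpy as np

CLASSES = ["CN", "EMCI", "LMCI", "AD"]

# Hypothetical ground truth and predictions for 10 subjects.
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3, 3])
y_pred = np.array([0, 0, 1, 2, 1, 2, 2, 3, 3, 0])

accuracy = float(np.mean(y_true == y_pred))

# Rows are true classes, columns are predicted classes.
confusion = np.zeros((4, 4), dtype=int)
for t, p in zip(y_true, y_pred):
    confusion[t, p] += 1

print(f"accuracy: {accuracy:.0%}")   # 8 of 10 correct -> 80%
print(confusion)
```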
{"title":"The impact of virtual Brand experience using Metaverse on Interest, Immersion, and Recommendation intention","authors":"Sung Bok Chang","doi":"10.30693/smj.2023.12.7.84","DOIUrl":"https://doi.org/10.30693/smj.2023.12.7.84","url":null,"abstract":"This study tested its hypotheses through confirmatory factor analysis to examine the relationships between brand experiences in the Metaverse (deviant, entertainment, and aesthetic experiences) and interest and immersion, and to verify whether interest and immersion have a significant impact on recommendation intention. The results confirmed that all brand experience factors had a positive (+) effect on interest and immersion, that interest had a positive (+) effect on immersion, and that interest and immersion had a positive (+) effect on recommendation intention.","PeriodicalId":249252,"journal":{"name":"Korean Institute of Smart Media","volume":"435 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121461411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Study on the Speed of Message Output for Smooth Cummunication for Chat Platform between Idol and Fandom","authors":"Jungha Kim","doi":"10.30693/smj.2023.12.7.68","DOIUrl":"https://doi.org/10.30693/smj.2023.12.7.68","url":null,"abstract":"This study examines the design of the chat UX between fandom and idols. The one-to-many message-exposure method between idols and fandom was studied, and the optimal exposure form and speed were tested. Based on the experimental results, an output speed of 30 sentences per second was selected from among the various options, and this setting was applied to the live Universe platform to facilitate communication between idols and fans.","PeriodicalId":249252,"journal":{"name":"Korean Institute of Smart Media","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128569325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
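A fixed output speed like the one tested above implies evenly spaced message emission. A hypothetical pacing sketch (the function and names are illustrative, not the platform's API):

```python
def schedule_messages(messages, rate_per_sec=30):
    """Return (delay_seconds, message) pairs that space the queued
    messages evenly at the chosen output rate."""
    interval = 1.0 / rate_per_sec
    return [(i * interval, msg) for i, msg in enumerate(messages)]

queue = [f"msg{i}" for i in range(5)]
for delay, msg in schedule_messages(queue):
    print(f"t={delay:.3f}s  {msg}")
```

At 30 messages per second, consecutive messages are roughly 33 ms apart, fast enough to feel continuous in a one-to-many chat stream.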
{"title":"Textile material classification in clothing images using deep learning","authors":"So Young Lee, Hye Seon Jeong, Yoon Sung Choi, Choong Kwon Lee","doi":"10.30693/smj.2023.12.7.43","DOIUrl":"https://doi.org/10.30693/smj.2023.12.7.43","url":null,"abstract":"As online transactions increase, clothing images have a great influence on consumer purchasing decisions. The importance of image information about clothing materials has been emphasized, and it is important for the fashion industry to analyze clothing images and identify the materials used. Textile materials used in clothing are difficult to identify with the naked eye, and much time and cost are consumed in sorting them. This study aims to classify textile materials from clothing images using deep learning algorithms. Classifying materials can help reduce clothing production costs, increase the efficiency of the manufacturing process, and support services that recommend products of specific materials to consumers. We used the machine vision deep learning algorithms ResNet and Vision Transformer to classify clothing images. A total of 760,949 images were collected and preprocessed to detect abnormal images; finally, 167,299 clothing images, 19 textile labels, and 20 fabric labels were used. We classified clothing materials with ResNet and Vision Transformer and compared the algorithms' performance using the Top-k Accuracy Score metric. In the performance comparison, the Vision Transformer algorithm outperformed ResNet.","PeriodicalId":249252,"journal":{"name":"Korean Institute of Smart Media","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131115062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
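The Top-k Accuracy Score used above counts a prediction as correct when the true label appears among the k highest-scored classes, which is a natural fit for fine-grained labels like 19 textile classes. A NumPy sketch with placeholder scores:

```python
import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int) -> float:
    """Fraction of samples whose true label is among the k highest-scored classes."""
    topk = np.argsort(scores, axis=1)[:, -k:]          # indices of the k best classes
    hits = [labels[i] in topk[i] for i in range(len(labels))]
    return float(np.mean(hits))

scores = np.array([[0.1, 0.7, 0.2],    # per-class scores for three samples
                   [0.5, 0.3, 0.2],
                   [0.2, 0.3, 0.5]])
labels = np.array([1, 2, 2])

print(top_k_accuracy(scores, labels, k=1))   # 2/3 correct at top-1
print(top_k_accuracy(scores, labels, k=3))   # every label is within the top 3
```

Top-k is more forgiving than plain accuracy: a model that ranks the true material second still gets credit at k >= 2.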
{"title":"A climbing movement detection system through efficient cow behavior recognition based on YOLOX and OC-SORT","authors":"Li Yu, NamHo Kim","doi":"10.30693/smj.2023.12.7.18","DOIUrl":"https://doi.org/10.30693/smj.2023.12.7.18","url":null,"abstract":"In this study, we propose a cow behavior recognition system based on YOLOX and OC-SORT. YOLOX detects targets in real time and provides information on cow location and behavior. The OC-SORT module tracks cows in the video and assigns unique IDs. The quantitative analysis module analyzes the behavior and location information of the cows. Experimental results show that our system achieves high accuracy and precision in target detection and tracking. The average precision (AP) of YOLOX was 82.2%, the average recall (AR) was 85.5%, the number of parameters was 54.15M, and the computation was 194.16 GFLOPs. OC-SORT maintained high-precision real-time target tracking in complex environments and occlusion situations. By analyzing changes in cow movement and the frequency of mounting behavior, our system can help discern the estrus behavior of cows more accurately.","PeriodicalId":249252,"journal":{"name":"Korean Institute of Smart Media","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127825292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
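The persistent IDs that OC-SORT assigns rest on associating each new detection with an existing track by bounding-box overlap. A greatly simplified greedy IoU-association sketch (OC-SORT itself uses observation-centric re-association and velocity cues, not this greedy rule):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(tracks, detections, iou_threshold=0.3):
    """Greedy association: each new detection keeps the ID of the best
    overlapping, not-yet-claimed track; unmatched detections map to None."""
    assignments = {}
    for i, det in enumerate(detections):
        best_id, best_iou = None, iou_threshold
        for track_id, box in tracks.items():
            score = iou(det, box)
            if score > best_iou and track_id not in assignments.values():
                best_id, best_iou = track_id, score
        assignments[i] = best_id
    return assignments

tracks = {1: (0, 0, 50, 50), 2: (100, 100, 160, 160)}   # cow ID -> last known box
detections = [(5, 5, 55, 55), (98, 102, 158, 162)]      # boxes in the new frame
print(match_detections(tracks, detections))   # {0: 1, 1: 2}
```

Stable IDs are what make the downstream analysis possible: counting mounting events per cow requires knowing that the same animal is being followed across frames.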