2022 14th International Conference on Knowledge and Smart Technology (KST): Latest Publications

Comparison Analysis of Data Augmentation using Bootstrap, GANs and Autoencoder
2022 14th International Conference on Knowledge and Smart Technology (KST) Pub Date : 2022-01-26 DOI: 10.1109/KST53302.2022.9729065
Mukrin Nakhwan, Rakkrit Duangsoithong
Abstract: To improve predictive accuracy when observations are insufficient, data augmentation is a well-known and widely useful technique that increases the number of samples by generating new data, avoiding data collection problems. This paper presents a comparison analysis of three data augmentation methods for increasing the number of samples: the bootstrap method, generative adversarial networks (GANs), and autoencoders. The proposal is applied to 8 binary-classification datasets from data repository websites. The evaluation proceeds in three steps: first, new additional data are generated by each augmentation method; second, the generated samples are combined with the original data; finally, performance is validated on four classifier models. The experimental results showed that increasing samples with autoencoders and GANs achieved better predictive performance than the original data alone, whereas the bootstrap method gave the lowest predictive performance.
Citations: 3
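Of the three augmentation methods compared above, the bootstrap is the simplest to illustrate: extra samples are drawn by resampling the original data with replacement and appended to it. A minimal NumPy sketch (the toy arrays are invented placeholder data, not from the paper):

```python
import numpy as np

def bootstrap_augment(X, y, n_new, seed=0):
    """Generate n_new extra samples by resampling (X, y) with replacement."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=n_new)   # indices drawn with replacement
    X_aug = np.vstack([X, X[idx]])              # combine generated and original data
    y_aug = np.concatenate([y, y[idx]])
    return X_aug, y_aug

# Toy binary-classification data (placeholder, not one of the paper's datasets)
X = np.array([[0.1, 1.0], [0.4, 0.8], [0.9, 0.2], [0.7, 0.3]])
y = np.array([0, 0, 1, 1])
X_aug, y_aug = bootstrap_augment(X, y, n_new=4)
print(X_aug.shape)  # (8, 2)
```

The augmented set would then be fed to the downstream classifiers exactly like the original data, which is what makes the three methods directly comparable.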
Traffic Light and Crosswalk Detection and Localization Using Vehicular Camera
2022 14th International Conference on Knowledge and Smart Technology (KST) Pub Date : 2022-01-26 DOI: 10.1109/KST53302.2022.9729066
S. Wangsiripitak, Keisuke Hano, S. Kuchii
Abstract: An improved convolutional neural network model for traffic light and crosswalk detection and localization using visual information from a vehicular camera is proposed. YOLOv4 Darknet and its pretrained model are used for transfer learning on our datasets of traffic lights and crosswalks; the trained model is intended for detecting red-light running by the preceding vehicle. Compared with the pretrained model learned only from the Microsoft COCO dataset, experimental results showed improved traffic light detection on our test images, which were taken under various lighting conditions and interferences: 36.91% higher recall and a 39.21% lower false positive rate. The crosswalk, which the COCO model cannot detect at all, was detected with 93.37% recall and a 7.74% false positive rate.
Citations: 1
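The recall and false-positive-rate figures reported above follow from standard confusion-matrix counts; a quick sketch of those two definitions (the counts below are invented for illustration, not the paper's data):

```python
def detection_metrics(tp, fp, fn, tn):
    """Recall and false positive rate from confusion-matrix counts."""
    recall = tp / (tp + fn)   # fraction of real objects that were detected
    fpr = fp / (fp + tn)      # fraction of negatives wrongly flagged
    return recall, fpr

# Invented counts for illustration only
recall, fpr = detection_metrics(tp=93, fp=8, fn=7, tn=92)
print(f"recall={recall:.2%}, fpr={fpr:.2%}")  # recall=93.00%, fpr=8.00%
```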
Developing an Automatic Speech Recognizer For Filipino with English Code-Switching in News Broadcast
2022 14th International Conference on Knowledge and Smart Technology (KST) Pub Date : 2022-01-26 DOI: 10.1109/KST53302.2022.9727235
Mark Louis Lim, A. J. Xu, C. Lin, Zi-He Chen, Ronald M. Pascual
Abstract: Closed-captioning systems are well known among video-based broadcasting companies as society transitions to internet-based information consumption, and they are used to serve most consumers. However, a captioning system for the Filipino language is not readily available to the public. News anchors in the Philippines tend to code-switch between English and Filipino, the two major languages that Filipinos use. The goal of this research is to develop an automatic speech recognizer (ASR) for a captioning system for Filipino news broadcast domain videos. Experiments were conducted on finding the optimal speech models and features, and on how code-switching affects the system. The best results were obtained by using linear discriminant analysis with maximum likelihood linear transform (LDA+MLLT) and speaker adaptive training (SAT) for acoustic modeling. Initial investigation also shows that there is no general pattern in the ASR's performance as a function of code-switching frequency.
Citations: 2
Development of Anomaly Detection Model for Welding Classification Using Arc Sound
2022 14th International Conference on Knowledge and Smart Technology (KST) Pub Date : 2022-01-26 DOI: 10.1109/KST53302.2022.9729058
Phongsin Jirapipattanaporn, Worawat Lawanont
Abstract: This study introduces a method to classify weld bead type from the arc sound of the gas metal arc welding process by applying machine learning techniques. We focused on two weld bead types: normal and burn-through. Signal processing was used to visualize the welding sound data, which were recorded with a microphone array. All recorded sounds were converted to spectrograms in Python using the Fourier transform to analyze the differences between the sounds produced by different weld bead types. Features extracted from the sound data form the dataset used to develop the models. Three machine learning models were trained with three different algorithms: a recurrent neural network (RNN), long short-term memory (LSTM), and a one-class support vector machine (one-class SVM). Each model was evaluated with accuracy and a confusion matrix; after training and testing, every model achieved an overall accuracy greater than 80 percent. Given this performance, the models can be applied to the welding process, and the method can also be applied to other manufacturing processes in future work.
Citations: 1
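The spectrogram step described above (a Fourier transform over windowed frames of the recorded sound) can be sketched with NumPy alone; the input here is a synthetic 1 kHz tone, not an actual arc-sound recording:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i*hop : i*hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, frame_len//2 + 1)

fs = 8000                               # sampling rate in Hz (arbitrary choice)
t = np.arange(fs) / fs                  # one second of audio
tone = np.sin(2 * np.pi * 1000 * t)     # synthetic 1 kHz stand-in for arc sound
S = spectrogram(tone)
peak_bin = S.mean(axis=0).argmax()
print(peak_bin * fs / 256)              # 1000.0: energy sits at the tone frequency
```

Features for the classifiers (RNN, LSTM, one-class SVM) would then be extracted from arrays like `S`.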
A Novel Relational Deep Network for Single Object Tracking
2022 14th International Conference on Knowledge and Smart Technology (KST) Pub Date : 2022-01-26 DOI: 10.1109/KST53302.2022.9729070
Pimpa Cheewaprakobkit, T. Shih, Chih-Yang Lin, Hung-Chun Liao
Abstract: Visual object tracking is an active research area in computer vision; it aims to estimate the location of a target object in video frames. In recent years, deep learning has been widely used for object tracking to improve accuracy, yet challenges in performance and accuracy remain. This study enhances an object detection model for single object tracking using a Siamese network architecture and a correlation filter to find the relationship between the target object and the search region across a series of continuous images. We mitigate some challenging problems in the Siamese network by adding a variance loss that helps the model distinguish foreground from background, and we add an attention mechanism and process the cropped image to find relationships between objects. Our experiments used the VOT2019 dataset for testing object tracking and the CUHK03 dataset for training. The results demonstrate that the proposed model achieves promising prediction performance, addressing the image occlusion problem and reducing false alarms from object detection: an accuracy of 0.608, a robustness of 0.539, and an expected average overlap (EAO) score of 0.217. Our tracker runs at approximately 26 fps on a GPU.
Citations: 0
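The correlation filter at the heart of the tracker above scores how well a target template matches each position of a search region, and the peak of the response map localizes the target. A toy NumPy version of that cross-correlation scoring (a synthetic image patch, not the paper's learned network features):

```python
import numpy as np

def correlate(search, template):
    """Slide template over search; return the response map of inner products."""
    sh, sw = search.shape
    th, tw = template.shape
    out = np.empty((sh - th + 1, sw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(search[i:i+th, j:j+tw] * template)
    return out

# Synthetic search image with a bright 8x8 target at row 10, column 14
search = np.zeros((32, 32))
search[10:18, 14:22] = 1.0
template = np.ones((8, 8))
response = correlate(search, template)
loc = np.unravel_index(response.argmax(), response.shape)
print(loc == (10, 14))  # True: the response peak recovers the target position
```

Siamese trackers compute the same kind of response map, but between deep feature embeddings of the template and search crops rather than raw pixels.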
VAPE-BRIDGE: Bridging OpenVAS Results for Automating Metasploit Framework
2022 14th International Conference on Knowledge and Smart Technology (KST) Pub Date : 2022-01-26 DOI: 10.1109/KST53302.2022.9729085
Kankanok Vimala, S. Fugkeaw
Abstract: Vulnerability assessment (VA) and penetration testing (PenTest) are required by many organizations to satisfy security auditing and compliance. VA and PenTest are conducted at different stages and are carried out with software tools; building a system that converts VA scan results into a form a PenTest tool can act on is a real challenge. This paper proposes the design and development of a system called VAPE-BRIDGE, which automatically converts scan results from the Open Vulnerability Assessment Scanner (OpenVAS) into exploit scripts to be executed in Metasploit, a widely used open-source PenTest framework. Specifically, the tool automatically extracts the vulnerabilities listed in the OWASP Top 10 and prepares them for testing in Metasploit. VAPE-BRIDGE comprises three main components: (1) Scan Result Extraction, which extracts the VA scan results related to the OWASP Top 10; (2) Target List Repository, which retains the lists of vulnerabilities to be used by Metasploit; and (3) Automated Shell Script Exploitation, which generates the scripts that render the exploit modules executed in Metasploit. For the implementation, the VAPE-BRIDGE prototype was tested with a number of test cases covering the conversion of scan results into shell code and the rendering of results for testing in Metasploit. The experimental results showed that the system is functionally correct for all cases.
Citations: 1
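The Scan Result Extraction component described above amounts to pulling vulnerability entries out of an OpenVAS XML report and filtering them into a target list. A minimal standard-library sketch on an invented report fragment (the element names are simplified placeholders, not the exact OpenVAS report schema):

```python
import xml.etree.ElementTree as ET

# Invented, simplified report fragment; real OpenVAS reports are much richer
report_xml = """
<report>
  <result><name>SQL Injection</name><host>10.0.0.5</host><severity>9.8</severity></result>
  <result><name>Outdated TLS</name><host>10.0.0.7</host><severity>5.3</severity></result>
</report>
"""

def extract_results(xml_text, min_severity=7.0):
    """Keep only findings at or above min_severity, as (name, host) pairs."""
    root = ET.fromstring(xml_text)
    return [(r.findtext("name"), r.findtext("host"))
            for r in root.iter("result")
            if float(r.findtext("severity")) >= min_severity]

targets = extract_results(report_xml)
print(targets)  # [('SQL Injection', '10.0.0.5')]
```

A bridge like VAPE-BRIDGE would then map each retained finding to a matching Metasploit exploit module and emit the corresponding script.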
Efficient Image Embedding for Fine-Grained Visual Classification
2022 14th International Conference on Knowledge and Smart Technology (KST) Pub Date : 2022-01-26 DOI: 10.1109/KST53302.2022.9729062
Soranan Payatsuporn, B. Kijsirikul
Abstract: Fine-grained visual classification (FGVC) is a multiple sub-category classification task. It is challenging due to high intra-class variation and inter-class similarity, and most existing methods address these problems by capturing discriminative semantic parts. In this paper, we introduce a two-level network, named "Efficient Image Embedding", which consists of a raw-level and an object-level network. Training proceeds in two stages: the raw level performs localization by aggregating feature maps, and the second stage performs classification. Both levels use an Adaptive Angular Margin loss (AAM-loss), which improves the intra-class compactness and inter-class variety of the image embedding. Our approach identifies object regions without any hand-crafted bounding box and can be trained in an end-to-end manner. It achieves better accuracy than existing work on two datasets: 89.0% on CUB200-2011 and 93.3% on FGVC-Aircraft.
Citations: 1
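The AAM-loss mentioned above builds on the additive-angular-margin idea (as in ArcFace): add a margin m to the angle between an embedding and its true-class weight vector before the softmax, which tightens intra-class compactness. A NumPy sketch of that standard formulation (the paper's adaptive variant may differ in its details):

```python
import numpy as np

def angular_margin_logits(embedding, weights, label, s=30.0, m=0.5):
    """Scaled cosine logits with an additive angular margin on the true class."""
    e = embedding / np.linalg.norm(embedding)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = w @ e                              # cosine similarity with each class
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    cos[label] = np.cos(theta[label] + m)    # widen the true-class angle by m
    return s * cos

# Tiny deterministic example: 3 classes in a 2-d embedding space
emb = np.array([1.0, 0.0])
W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
logits = angular_margin_logits(emb, W, label=0)
loss = -np.log(np.exp(logits[0]) / np.exp(logits).sum())  # cross-entropy
print(logits[0] < 30.0)  # True: the margin shrinks the true-class logit
```

Because the true-class logit is penalized during training, the network must push embeddings closer to their class direction to recover the same softmax probability.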
A Hybrid Deep Neural Network for Classifying Transportation Modes based on Human Activity Vibration
2022 14th International Conference on Knowledge and Smart Technology (KST) Pub Date : 2022-01-26 DOI: 10.1109/KST53302.2022.9729079
S. Mekruksavanich, Ponnipa Jantawong, I. You, A. Jitpattanakul
Abstract: Advances in sensor technology have enabled many solutions for recognizing human movement through wearable devices. Characterizing the means of transportation has become a beneficial application in intelligent transportation systems, since it enables context-aware support for systems such as driver assistance and intelligent transportation management. Smartphone sensing has been employed to capture accurate real-time transportation information to improve urban transportation planning. Several recent studies introduced machine learning and deep learning techniques to investigate transportation usage from multimodal sensors, including accelerometers, gyroscopes, and magnetometers; however, prior work has been constrained by models with too many parameters for practical mobile computing. We tackle this issue by providing a hybrid deep learning model for identifying vehicle usage from smartphone sensor data. We conducted experiments on a publicly available dataset of human activity vibrations, the HAV dataset, and evaluated the proposed model against a variety of conventional deep learning algorithms. The performance assessment demonstrates that the proposed hybrid deep learning model classifies people's transportation behaviors more accurately than previous studies.
Citations: 2
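Before a hybrid network like the one above sees smartphone sensor data, the accelerometer/gyroscope streams are typically segmented into fixed-length windows. A sketch of that common preprocessing step (the window and step sizes here are arbitrary illustrative choices, not the paper's):

```python
import numpy as np

def sliding_windows(stream, win=128, step=64):
    """Split a (timesteps, channels) stream into overlapping windows."""
    n = 1 + (len(stream) - win) // step
    return np.stack([stream[i*step : i*step + win] for i in range(n)])

# Fake 6-channel stream (3-axis accelerometer + 3-axis gyroscope), 1000 steps
stream = np.zeros((1000, 6))
batches = sliding_windows(stream)
print(batches.shape)  # (14, 128, 6): ready for a CNN/LSTM-style input layer
```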
Loan Default Risk Prediction Using Knowledge Graph
2022 14th International Conference on Knowledge and Smart Technology (KST) Pub Date : 2022-01-26 DOI: 10.1109/KST53302.2022.9729073
Md. Nurul Alam, M. Ali
Abstract: Credit risk, also known as loan default risk, is one of the significant financial challenges for banks and financial institutions, since it involves uncertainty about borrowers' ability to fulfill their contractual obligations. Banks and financial institutions rely on statistical and machine learning methods to predict loan default and reduce the potential losses on issued loans, but these machine learning applications may never reach their full potential without the semantic context in the data. A knowledge graph is a collection of linked entities and objects that includes semantic information to contextualize them; knowledge graphs allow machines to incorporate human expertise into their decision-making and provide context to machine learning applications. We therefore propose a loan default prediction model based on knowledge graph technology to improve the prediction model's accuracy and interpretability. The experimental results demonstrated that incorporating knowledge graph embeddings as features can boost the performance of conventional machine learning classifiers in predicting loan default risk.
Citations: 1
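Knowledge graph embeddings of the kind used as features above map entities and relations to vectors; in the common TransE formulation, a triple (head, relation, tail) is plausible when head + relation is close to tail. A toy NumPy illustration (the vectors are invented for the example, and the paper does not state which embedding model it uses):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: smaller ||h + r - t|| means a more plausible triple."""
    return np.linalg.norm(h + r - t)

# Invented 2-d embeddings for a tiny lending graph
borrower = np.array([0.9, 0.1])
has_loan = np.array([0.0, 0.8])   # relation vector
loan     = np.array([0.9, 0.9])   # entity consistent with (borrower, has_loan, .)
car      = np.array([0.1, 0.0])   # unrelated entity

print(transe_score(borrower, has_loan, loan) <
      transe_score(borrower, has_loan, car))  # True: the real triple scores better
```

In a pipeline like the paper's, trained entity embeddings of this sort are concatenated with conventional borrower features before classification.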
Exploring Machine Learning Pipelines for Raman Spectral Classification of COVID-19 Samples
2022 14th International Conference on Knowledge and Smart Technology (KST) Pub Date : 2022-01-26 DOI: 10.1109/KST53302.2022.9729081
S. Deepaisarn, Chanvichet Vong, M. Perera
Abstract: Raman spectroscopy can analyze and identify the chemical composition of samples. This study develops a computational method based on machine learning algorithms to classify Raman spectra of serum samples from COVID-19-infected and non-infected human subjects. The method can potentially serve as a tool for rapid and accurate classification of COVID-19 versus non-COVID-19 patients and point toward biomarker discovery in research. Different machine learning classifiers were compared using pipelines with different dimensionality reduction and scaler techniques, and the performance of each pipeline was investigated by varying the associated parameters. The assessment of dimensionality reduction suggests that the pipelines generally performed better when the number of components did not exceed 50. The LightGBM model with ICA and MMScaler applied yielded the highest test accuracy (98.38%) among pipelines with dimensionality reduction, while the SVM model with MMScaler applied yielded the highest test accuracy (96.77%) among pipelines without dimensionality reduction. This study shows the effectiveness of Raman spectroscopy for classifying COVID-19-induced characteristics in serum samples.
Citations: 0
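A pipeline of the kind explored above (scaler, then dimensionality reduction, then a classifier) is straightforward to assemble in scikit-learn. This sketch uses MinMaxScaler, PCA, and an SVM on synthetic data, standing in for the paper's Raman spectra and its exact model choices (ICA, LightGBM):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Synthetic stand-in for high-dimensional spectra: 200 samples, 300 features
X, y = make_classification(n_samples=200, n_features=300, n_informative=20,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", MinMaxScaler()),         # scaler stage
    ("reduce", PCA(n_components=20)),  # keep components well under 50
    ("clf", SVC()),                    # classifier stage
])
pipe.fit(X_tr, y_tr)
print(round(pipe.score(X_te, y_te), 2))
```

Swapping stages (e.g., `FastICA` for PCA, or a gradient-boosting classifier for the SVM) is a one-line change, which is what makes this pipeline pattern convenient for the kind of systematic comparison the paper performs.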