{"title":"Design and Development of A Cloud-Based IDS using Apache Kafka and Spark Streaming","authors":"Len Wirz, Rinrada Tanthanathewin, Asipan Ketphet, S. Fugkeaw","doi":"10.1109/jcsse54890.2022.9836264","DOIUrl":"https://doi.org/10.1109/jcsse54890.2022.9836264","url":null,"abstract":"Owing to the efficient resource management, accessibility, and high service availability, cloud computing has been leveraged by several intensive-data processing applications such as big data analytics, social media applications. These applications are typically based on the development of web service and web application. Even though web-based technology offers effective communication and implementation, it has been susceptible to various kinds of attack. In this paper, we investigate possible attacks on REST which is a commonly used protocol for the web service implementation. In REST, HTTP requests are mapped to GET, POST, PUT, and DELETE that have been proven to be prone to common attacks including Automated Brute Forcing on web-based login, HTTP flood attacks, SQL injections (SQLi), and Cross-Site Scripting (XSS). To this end, we propose a design and implementation of the cloud-based IDS to detect such attacks by employing Apache Kafka and Spark streaming to classify and process the high volume of user inputs in REST HTTP communication. To detect the anomalous inputs, we apply the signature-based approach to construct an IDS engine based on a set of known attack patterns that will be leveraged by the Spark Streaming. Specifically, we introduce a new string comparison collection that improves the False Positive (FP) rate in SQL injection detection, which has been a major issue in most proposed IDS currently available. In our experiment, the system is able to determine malicious patterns with high performance as well as to generate SMS alerts and log the event in a Google Cloud Storage Bucket in an efficient manner.","PeriodicalId":284735,"journal":{"name":"2022 19th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129089519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Latent Dirichlet Allocation to investigate guest experience in Airbnb accommodation during COVID-19 pandemic in the United Kingdom","authors":"Nathakit Keawtoomla, Arinya Pongwat, Jakramate Bootkrajang","doi":"10.1109/jcsse54890.2022.9836314","DOIUrl":"https://doi.org/10.1109/jcsse54890.2022.9836314","url":null,"abstract":"The sharing economy in the accommodation business provides the alternatives for travelers, while this market segment is growing significantly. It is important to understand the requirement of the guests' experience in order to provide the better service comparing to the traditional accommodation services, especially during the pandemic crisis of Covid-19 when tourism industry was frozen globally. The current study explores the reviews on Airbnb platform by employing the Latent Dirichlet Allocation technique in order to understand the experiences among Airbnb guests during the Covid-19 crisis. The results revealed that several latent topics from previous studies were discovered, such as accommodation, location, neighborhood, accessibility, amenities, etc., with some unique topics that can be suggested to the existing knowledge. The theoretical and practical contributions in both tourism and the analysis technique contexts were discussed.","PeriodicalId":284735,"journal":{"name":"2022 19th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125168609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MAX-CUT on Samplings of Dense Graphs","authors":"Jittat Fakcharoenphol, Phanu Vajanopath","doi":"10.1109/jcsse54890.2022.9836261","DOIUrl":"https://doi.org/10.1109/jcsse54890.2022.9836261","url":null,"abstract":"The maximum cut problem finds a partition of a graph that maximizes the number of crossing edges. When the graph is dense or is sampled based on certain planted assumptions, there exist polynomial-time approximation schemes that given a fixed <tex>$epsilon > 0$</tex>., find a solution whose value is at least <tex>$1-epsilon$</tex> of the optimal value. This paper presents another random model relating to both successful cases. Consider an n-vertex graph <tex>$G$</tex> whose edges are sampled from an unknown dense graph <tex>$H$</tex> independently with probability <tex>$p=Omega(1/sqrt{log n});$</tex> this input graph <tex>$G$</tex> has <tex>$O(n^{2}/sqrt{log n})$</tex> edges and is no longer dense. We show how to modify a PTAS by de la Vega for dense graphs to find an <tex>$(1-epsilon)$</tex> -approximate solution for <tex>$G$</tex>. Although our algorithm works for a very narrow range of sampling probability <tex>$p$</tex>, the sampling model itself generalizes the planted models fairly well.","PeriodicalId":284735,"journal":{"name":"2022 19th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132797606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identifying Significant Customer Opinion Information of Each Aspect from Hotel Reviews","authors":"J. Polpinij, Umaporn Saisangchan, Vorakit Vorakitphan, B. Luaphol","doi":"10.1109/jcsse54890.2022.9836251","DOIUrl":"https://doi.org/10.1109/jcsse54890.2022.9836251","url":null,"abstract":"Recognizing whether customers like or dislike a product or service from online reviews may not be sufficient for other customers to make decisions or for owners to improve their merchandising. This was taken up as a challenge in this study that focused on finding significant sentiment information from customer reviews on each hotel aspect. The proposed framework first separated customer reviews into sentences, and then assembled all customer review sentences relating to each aspect of customer reviews using the k-means clustering. Later, those customer sentences are classified them into positive and negative sentiment polarity classes. The classifier was developed by Support Vector Machines (SVM). This can help other customers or the owner to understand why customers like or dislike a particular hotel aspect. The experimental results were evaluated using recall, precision, F1 and accuracy. The clustering method returned satisfactory results of 0.81, 0.80, 0.80 and 0.80, respectively. Meanwhile, the classification method also gave satisfactory results at 0.81, 0.79, 0.80 and 0.79, respectively. Compared to the baseline using F1 and accuracy, our proposed method produces very similar experimental results to the baseline method but our proposed method requires less computational time than the baseline.","PeriodicalId":284735,"journal":{"name":"2022 19th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116282574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Thai Ticket Classification By Using Machine Learning For IT Infrastructure Company","authors":"Kraidet Khowongprasoed, Taravichet Titijaroonroj","doi":"10.1109/jcsse54890.2022.9836250","DOIUrl":"https://doi.org/10.1109/jcsse54890.2022.9836250","url":null,"abstract":"Ticket classification is a process to define the category name of each ticket before assigning the resolution team to serve each ticket. It is an important process to support the customers inside and outside the company. It can make customer dissatisfaction if the processing time is high or delayed. Based on the recording data in 2019-2021 at the studying company, we found that the manual ticket classification got an error rate about 53 percent because the office workers misunderstand. To alleviate this problem, we propose the methodology for automatic Thai ticket classification by using Term Frequency-Inverse Document Frequency with Support Vector Machine. The experimental result shows that the performance of the proposed methodology is higher than the manual classification by 2 times or 41 percent.","PeriodicalId":284735,"journal":{"name":"2022 19th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129951599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Outcome Based Education: An Evaluation from SOs to PLOs","authors":"Thana Hongsuwan, Nattarat Serirat, Noppanat Panlutan, Thanapat Danpattanachaikul, C. Jinjakam","doi":"10.1109/jcsse54890.2022.9836307","DOIUrl":"https://doi.org/10.1109/jcsse54890.2022.9836307","url":null,"abstract":"The main processes of outcome-based education management are setting learning outcomes, measuring, evaluating learning results, and improving. Teachers must plan exercises, homework, classroom activities, midterm exams, and final exams that are consistent and align with program outcomes and educational standards. The proposed Outcome-Based Education (OBED) Web Application supports administrators and faculty staff for program and course management. After administrators or course committees designed the program learning outcomes to meet the referred educational standards (ex. TQF and TABEE), this web app supports their alignment PLOs to courses. Then, faculty staff can create expected course learning outcomes connected to the assigned PLOs. In addition, the system supports data transfer from the learning management system. With this web app, administrators and teachers can observe student outcomes from student activity results compared with the expected program learning outcome reported by course, cohort, or individual student. Therefore, teachers can adjust activities during the semester to get much closer to the target expected PLOs. Moreover, by OBED app support, semester assessment reports are ready to use according to the form prescribed by departments and educational standards.","PeriodicalId":284735,"journal":{"name":"2022 19th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131331947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Sensor Fusion with Extended Kalman Filter for Indoor Localization system of Multirotor UAV","authors":"Pawarut Karaked, Watcharapol Saengphet, S. Tantrairatn","doi":"10.1109/jcsse54890.2022.9836275","DOIUrl":"https://doi.org/10.1109/jcsse54890.2022.9836275","url":null,"abstract":"This research presents the method to improve the robustness of indoor UAV localization via fusion of visual SLAM and Lidar SLAM with Extended Kalman Filter (EKF). The visual and Lidar SLAM methodologies are applied to compensate for different pose errors in various situations, such as various lighting and reflection, respectively. In the experiment, Lidar and a stereo camera with SLAM methods are installed on the drone. When starting SLAM in both methods will localize and provide position and orientation data. The data will be fused by Extended Kalman Filter and provides updated data. Therefore, if there is an error in either of the SLAM methods, the system will continue to work properly. In the test, the drone was conducted in various situations where the drone is used to have an error using both SLAM. A result shows that the data is obtained from the EKF remains normal in various situations.","PeriodicalId":284735,"journal":{"name":"2022 19th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126765183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Clinical Assessment in Diabetic Retinopathy Retinal Images: A Review","authors":"B. Sriman, Nittaya Muangnak, Chaiwat Sirawattananon","doi":"10.1109/jcsse54890.2022.9836245","DOIUrl":"https://doi.org/10.1109/jcsse54890.2022.9836245","url":null,"abstract":"There are no early symptoms associated with retinal diseases. Diabetic retinopathy (DR) is the leading cause of macular degeneration in people with diabetes in their 40s and 50s. It is a critical step in determining the stage of an ophthalmology preliminary abnormality diagnosis. DR lesions detected on images taken with the hospital's high-quality imaging equipment can now be screened and identified automatically by an image processing system. It is proposed to screen for early symptoms of DR by detecting abnormalities within retinal images using computer-based imaging. The purpose of this study is to conduct a review of existing works in the fields of artificial intelligence and image processing to develop an algorithm for an automatic DR screening system. A review paper on the use of deep learning with DR detection was introduced, as well as a section experimenting with DR in retinal fundus images from publicly available datasets. To enhance DR detection performance, feature extraction techniques would be suggested.","PeriodicalId":284735,"journal":{"name":"2022 19th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129207916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantitative analysis of the 2D tissue skin layer with fluorescent dyes","authors":"Rachata Chaiprasongsuk, Paniti Achararit, Pasit Jarutatsanangkoon, Pawaree Nonthasaen, Wiriya Mahikul, Anyamanee Chaiprasongsuk, U. Panich, P. Prombut","doi":"10.1109/jcsse54890.2022.9836243","DOIUrl":"https://doi.org/10.1109/jcsse54890.2022.9836243","url":null,"abstract":"Development of an automated process for quantifying proteins in multiple layers of skin is essential to research in the field of cell biology. Researchers aimed to quantify the cell density and levels of biomarkers on tissue samples. Immunofluorescence tissue staining is one of the standard methods used for the protein detection by tagging with the fluorescein isothiocyanate (FITC), the green fluorescent agent. Recently, the single-cell (monolayer tissue) analysis software automation has been obtained to quantify the protein levels. However, there are limitations for the analysis of overlapping multilayer tissue. Since the automatic analysis cannot be operated on the rough and muti-layered surface, the analysis is then manually performed using ImageJ software which process more than 5 minutes/image. Problems arise when many images of tissues need to be analyzed. Therefore, an automatic process has been developed for the image analysis to reduce time and error. This study developed the automatic process by adjusting the pixel intensity and splitting into the RGB channels and converting into gray with different intensities. The intensity can be measured on grayscale from 0 to 255 units based on the KODAK grayscale zone. The FITC staining is converted to be the grayscale intensity for the pixel intensity calculation. The gray level < 25 units was setup to be the exclusion criteria to solve the problem of over-exposure due to background noise. Therefore, this automated data processing enhances the technical capabilities and computational accuracy, which proposes an alternative approach to improve the tissue analysis for cell biology research.","PeriodicalId":284735,"journal":{"name":"2022 19th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116879586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two-Dimensional Variational Mode Decomposition with Texture Feature Extraction for Glaucoma Classification from Retinal Images","authors":"Aekapop Bunpeng, Ungsumalee Suttapakti","doi":"10.1109/jcsse54890.2022.9836303","DOIUrl":"https://doi.org/10.1109/jcsse54890.2022.9836303","url":null,"abstract":"Image decomposition is very important for glaucoma classification from retinal images. Conventional methods can extract features, but the performance of those methods is insufficient because of loss information from the decomposition step. In this paper, 2D-VMD with texture feature extraction is proposed for classifying glaucoma. It decomposes a retinal image into different frequency sub-images by means of two-dimensional variational mode decomposition due to adaptive decomposition according to its data. Texture features are extracted by using GLCM with statistical approaches. Significant texture features are selected with high t-test values. From 1,544 retinal images in the Harvard dataverse dataset, the proposed method achieves 98.19%, which is higher than the conventional methods. Our method can extract the significant texture features with high accuracy, improving the performance of glaucoma classification.","PeriodicalId":284735,"journal":{"name":"2022 19th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131278639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}