{"title":"Optimization of Vehicle Object Detection Based on UAV Dataset: CNN Model and Darknet Algorithm","authors":"A. H. Rangkuti, Varyl Hasbi Athala","doi":"10.30630/joiv.7.1.1159","DOIUrl":"https://doi.org/10.30630/joiv.7.1.1159","url":null,"abstract":"This study identifies several types of vehicles captured using drone, or Unmanned Aerial Vehicle (UAV), technology. Recognizing vehicles passing along a highway from an altitude of more than 300-400 meters above ground level is a problem that requires careful investigation so that no errors occur in determining the vehicle type. The study was conducted at mining sites to identify the classes of vehicles passing along the highway and to count how many vehicles of each type pass through. Vehicle recognition used a deep learning approach with several CNN models, namely Yolo V4, Yolo V3, Densenet 201, and CSResNext-Panet 50, supported by the Darknet framework for the training process. Experiments were also carried out with other CNN models, but given the available hardware and peripheral devices, only four CNN models achieved optimal accuracy. Based on the experimental results, the CSResNext-Panet 50 model has the highest accuracy and detected 100% of the vehicles in the captured UAV video data, including the volume of detected vehicles, followed by Densenet and Yolo V4, which detected up to 98%-99%. 
This research should be developed further to cover all vehicle classes reachable by UAV technology, supported by adequate hardware and peripheral technology for the training process.","PeriodicalId":32468,"journal":{"name":"JOIV International Journal on Informatics Visualization","volume":"63 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80970466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Summarization of Court Decision Documents over Narcotic Cases Using BERT","authors":"G. Wicaksono, Sheila Fitria Al asqalani, Yufis Azhar, N. Hidayah, Andreawana Andreawana","doi":"10.30630/joiv.7.2.1811","DOIUrl":"https://doi.org/10.30630/joiv.7.2.1811","url":null,"abstract":"Reviewing court decision documents for references when handling similar cases can be time-consuming. From this perspective, a system that can summarize court decision documents is needed to enable adequate information extraction. This study used 50 court decision documents taken from the official website of the Supreme Court of the Republic of Indonesia, covering narcotics and psychotropics cases. The dataset was divided into two types: court decision documents with the defendant's identity and court decision documents without the defendant's identity. We used BERT, specifically the IndoBERT model, to summarize the court decision documents. This study uses four IndoBERT variants: IndoBERT-Base-Phase 1, IndoBERT-Lite-Base-Phase 1, IndoBERT-Large-Phase 1, and IndoBERT-Lite-Large-Phase 1. It also uses three summarization ratios (20%, 30%, and 40%) and three ROUGE-N metrics (ROUGE-1, ROUGE-2, and ROUGE-3). The results show that the pre-trained IndoBERT model performed better at summarizing court decision documents, with or without the defendant's identity, at a 40% summarization ratio. The highest ROUGE score was produced by the IndoBERT-Lite-Base-Phase 1 model, with an R-1 value of 1.00 for documents with the defendant's identity and 0.970 for documents without the defendant's identity at a 40% ratio. 
Future research is expected to use other BERT variants, such as IndoBERT Phase-2 and LegalBERT.","PeriodicalId":32468,"journal":{"name":"JOIV International Journal on Informatics Visualization","volume":"76 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83857789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hemp-Alumina Composite Radar Absorption Reflection Loss Classification","authors":"Muhlasah Novitasari Mara, Budi Basuki Subagio, Efrilia M. Khusna, Bagus Satrio Utomo","doi":"10.30630/joiv.7.2.1169","DOIUrl":"https://doi.org/10.30630/joiv.7.2.1169","url":null,"abstract":"The Radar Absorption Material (RAM) method is a coating that reduces the energy of received electromagnetic waves by converting the electromagnetic waves emitted by radar into heat. Hemp has been shown to have the strongest and most stable tensile characteristics, at 5.5 g/den, and higher heat resistance than other natural fibers. Combining hemp with alumina powder (Al2O3) and epoxy resin could provide a stealth technology system able to absorb radar waves more optimally, considering that alumina is light, rust-resistant, and conductive. The electromagnetic properties of absorbent coatings can be predicted using machine learning. This study classifies the reflection loss of the Hemp-Alumina composite using Random Forest, ANN, KNN, Logistic Regression, and Decision Tree. These machine learning classifiers can generate predictions immediately and learn critical spectral properties across a wide energy range without human bias. The frequency range of 2-12 GHz was used for the measurements. The most effective structure thickness for the Hemp-Alumina composite used as a RAM is 5 mm, with optimum absorption of -15.158 dB in the S-Band, -16.398 dB in the C-Band, and -23.135 dB in the X-Band. The highest and optimum reflection loss is found at the X-Band frequency with a thickness of 5 mm, equal to -23.135 dB, with an absorption bandwidth of 1000 MHz and an efficiency of 93.1%. These results show that the Hemp-Alumina composite is very effective as a RAM at X-Band frequencies. 
Based on the experimental results, the Random Forest classifier achieved the highest accuracy (0.97) and F1 score (0.98); its results do not differ significantly from those of KNN.","PeriodicalId":32468,"journal":{"name":"JOIV International Journal on Informatics Visualization","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87207088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Various Convolutional Neural Network to Detect Pneumonia from Chest X-Ray Images: A Systematic Literature Review","authors":"Darnell Kikoo, Bryan Tamin, Stephen Hardjadilaga, -. Anderies, Irene Anindaputri Iswanto","doi":"10.30630/joiv.7.2.1015","DOIUrl":"https://doi.org/10.30630/joiv.7.2.1015","url":null,"abstract":"Pneumonia is one of the world's leading causes of mortality, especially for children. Chest X-rays (CXR) play an important part in diagnosing pneumonia due to their cost-effectiveness and the quick advancement of the technology. Detecting pneumonia from CXR images is a challenging and time-consuming process requiring trained professionals. Machine learning has made it possible to automate this task. Moreover, Deep Learning (DL), a branch of machine learning that uses algorithms inspired by the human brain, can predict more accurately and is now dependable enough to predict pneumonia. A further improvement in Deep Learning is Transfer Learning, in which specific layers are extracted from a pre-trained network and reused on other datasets, reducing training time and improving model performance. Although numerous algorithms are already available for pneumonia identification, comprehensive literature evaluations and clinical recommendations are still few in number. This review will assist practitioners in choosing some of the best procedures from recent research, reviewing the available datasets, and comprehending the outcomes gained in this domain. The reviewed papers show that the best score for predicting pneumonia using DL from CXR images was 99.4% accuracy. 
The exceptional techniques and results from the reviewed papers serve as valuable references for future research.","PeriodicalId":32468,"journal":{"name":"JOIV International Journal on Informatics Visualization","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79370994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of the Compatibility of TRMM Satellite Data with Precipitation Observation Data","authors":"N. Nurhamidah, Rafika Andari, A. Junaidi, D. Daoed","doi":"10.30630/joiv.7.2.1578","DOIUrl":"https://doi.org/10.30630/joiv.7.2.1578","url":null,"abstract":"The availability of hydrological data is one of the challenges associated with developing water infrastructure in different areas. This led NASA to design the TRMM (Tropical Rainfall Measuring Mission), which uses satellite weather monitoring technology to monitor and analyze tropical precipitation in different parts of the world. Therefore, this validation study was conducted to compare TRMM precipitation data with observed precipitation to determine its suitability as an alternative source of hydrological data. The Kuranji watershed was selected as the study site due to the availability of suitable data. The validation analyses applied include the Root Mean Squared Error (RMSE), Nash-Sutcliffe Efficiency (NSE), Correlation Coefficient (R), and Relative Error (RE), computed in two forms: one for the uncorrected data and another for the corrected data. The results showed that the best corrected-data validation, from the Gunung Nago station in 2016, was RMSE = 62.298, NSE = 0.044, R = 0.902, and RE = 11.328. The closeness of the R-value to one implies that the corrected TRMM data outperform the uncorrected data. 
Therefore, it was generally concluded that the TRMM data match the observed precipitation data and can be used for hydrological studies in the Kuranji watershed.","PeriodicalId":32468,"journal":{"name":"JOIV International Journal on Informatics Visualization","volume":"64 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87234983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Gamification of E-learning Environments for Learning Programming","authors":"Christian Garcia Villegas, Nilson Augusto Lemos Aguero","doi":"10.30630/joiv.7.2.1602","DOIUrl":"https://doi.org/10.30630/joiv.7.2.1602","url":null,"abstract":"Gamification is the most active methodology used in E-learning environments for teaching and learning in computing; however, its use is not restricted to that area of knowledge. Gamification combines elements of play and game-design techniques in a non-ludic context, creating a motivating factor for students. This systematic study aimed to collect and synthesize scientific evidence from the gamification field for learning programming through E-learning environments. To do this, a systematic literature review was conducted following the guidelines proposed by Petersen, which include defining research questions, search strategies, inclusion/exclusion criteria, and characterization. As a result of this process, eighty-one works were fully reviewed, analyzed, and categorized. The results revealed favorable learning outcomes among students; the most used platforms and gamification elements; the most used programming languages and focuses; and the education levels where gamification is most used to learn programming in an E-learning environment. These findings show that gamification is a good active strategy for introducing beginning students to programming through an E-learning environment. 
Within this context, learning programming through gamification is a topic that is growing and gaining traction. After what occurred during the pandemic, it is projected that there will continue to be more students focused on understanding its implementation and its impact on the different levels of education and areas of knowledge.","PeriodicalId":32468,"journal":{"name":"JOIV International Journal on Informatics Visualization","volume":"123 2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77450586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey on Forms of Visualization and Tools Used in Topic Modelling","authors":"R. Maskat, S. M. Shaharudin, Deden Witarsyah, H. Mahdin","doi":"10.30630/joiv.7.2.1313","DOIUrl":"https://doi.org/10.30630/joiv.7.2.1313","url":null,"abstract":"In this paper, we survey recent publications on topic modeling and analyze the forms of visualization and tools used. This information should help Natural Language Processing (NLP) researchers make better decisions about which types of visualization are appropriate for them and which tools can help. It could also spark further development of existing visualizations, or the emergence of new ones where a gap is present. Topic modeling is an NLP technique used to identify topics hidden in a collection of documents. Visualizing these topics permits a faster understanding of the underlying subject matter in terms of its domain. This survey covers publications from 2017 to early 2022. The PRISMA methodology was used to review the publications. One hundred articles were collected, and 42 were found eligible for this study after filtering. Two research questions were formulated. The first asks, \"What are the different forms of visualizations used to display the result of topic modeling?\" and the second asks, \"What visualization software or API is used?\" From our results, we discovered that different forms of visualization meet different display purposes. We categorized them as maps, networks, evolution-based charts, and others. We also discovered that LDAvis is the most frequently used software/API, followed by R language packages and D3.js. The primary limitation of this survey is that it is not exhaustive. 
Hence, some eligible publications may not be included.","PeriodicalId":32468,"journal":{"name":"JOIV International Journal on Informatics Visualization","volume":"68 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91152669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face Recognition Using Convolution Neural Network Method with Discrete Cosine Transform Image for Login System","authors":"Ari Setiawan, R. Sigit, Rika Rokhana","doi":"10.30630/joiv.7.2.1546","DOIUrl":"https://doi.org/10.30630/joiv.7.2.1546","url":null,"abstract":"These days, the application of image processing in computer vision is becoming more crucial, and some situations require solutions based on computer vision and growing deep learning. One method continuously developed in deep learning is the Convolutional Neural Network (CNN), with MobileNet, EfficientNet, VGG16, and others being widely used architectures. With a CNN architecture, the dataset consists primarily of images; the more data there are, the more image storage space is required. Compression via the Discrete Cosine Transform (DCT) is one way to address this issue. In this research, we implement DCT compression to work around the system's limited storage space, and we compare compressed and uncompressed images. Every trained user was tested five times each, for a total of 150 tests. Based on the testing results, compression reduced image size by 25% relative to the uncompressed images. The case study is face recognition, and the training results indicate that the accuracy on DCT-compressed images ranges from 91.33% to 100%, while the accuracy on uncompressed facial images ranges from 98.17% to 100%. In addition, the accuracy of the proposed CNN architecture increased to 87.43%, while the accuracy of MobileNet increased by 16.75%. The accuracy of EfficientNetB1 with noisy-student weights was measured at 74.91%, and the accuracy of EfficientNetB1 with ImageNet weights can reach 100%. 
Facial biometric authentication using a deep learning algorithm and DCT-compressed images was successfully accomplished with an accuracy value of 95.33% and an error value of 4.67%.","PeriodicalId":32468,"journal":{"name":"JOIV International Journal on Informatics Visualization","volume":"56 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85755116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Utilization of Business Analytics by SMEs In Halal Supply Chain Management Transactions","authors":"S. Marjudi, Roziyani Setik, R. M. T. Raja Lope Ahmad, W. A. Wan Hassan, A. A. Md Kassim","doi":"10.30630/joiv.7.2.1308","DOIUrl":"https://doi.org/10.30630/joiv.7.2.1308","url":null,"abstract":"Halal supply chain management has transformed beyond food and beverage certification. However, the extant literature shows that Halal transaction management still has much room to improve in terms of transaction permissibility; the main gap is that the understanding of Halal businesses and their transactions is limited to systems that define e-commerce and financial technology data separately within the IT business environment. This study aims to demonstrate the usefulness of managing Halal transactions and their permissibility analysis through a proposed Halal Supply Chain Management Transactions (HSCMT) model and prototype, applying a business analytics approach to integrate both e-commerce and financial technology data. The study uses literature analysis to ensure the correct structure of the integrated datasets before modeling transaction permissibility and prototyping its analytics into decision-making analytics. The developed HSCMT prototype uses a payment gateway that can be embedded into a Halal SME owner's e-commerce site. This creates a holistic Halal financial technology (FinTech) transaction permissibility dashboard, increasing the effectiveness of HSCMT for Malaysia Halal SME Owners (MHSO) with an average usability score of 83.67%. The results also indicate that the key mechanisms for verifying transactional permissibility are the source of the transaction, the use of the transaction, the transaction flow, and the transaction agreement. Furthermore, these mechanisms must be mapped onto a submodule after transformation and modeling of the transaction dataset. 
Multisource data points can be considered in future improvements, as this research focuses only on local data points from one payment gateway service, owing to restrictions in data policy involving overseas supply chain and transaction documentation. This research utilizes available business data through data management, optimization, mining, and visualization to measure performance and drive a company's growth. The competency of business analytics can benefit Halal SME players by providing insights into the permissibility decision-making process.","PeriodicalId":32468,"journal":{"name":"JOIV International Journal on Informatics Visualization","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87327166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inversed Control Parameter in Whale Optimization Algorithm and Grey Wolf Optimizer for Wrapper-based Feature Selection: A comparative study","authors":"Liu Yab, Noorhaniza Wahid, Rahayu A Hamid","doi":"10.30630/joiv.7.2.1509","DOIUrl":"https://doi.org/10.30630/joiv.7.2.1509","url":null,"abstract":"The Whale Optimization Algorithm (WOA) and Grey Wolf Optimizer (GWO) are well-performing metaheuristic algorithms used by various researchers to solve feature selection problems. Yet, their slow convergence speed can degrade feature selection performance and classification accuracy. Therefore, to overcome this issue, a modified WOA (mWOA) and a modified GWO (mGWO) for wrapper-based feature selection were proposed in this study. The proposed mWOA and mGWO were given a new inversed control parameter intended to give the search agents a larger search area in the early phase of the algorithms, resulting in faster convergence. The objective of this comparative study is to investigate and compare the effectiveness of the inversed control parameter in the proposed methods against the original algorithms in terms of the number of selected features and classification accuracy. The proposed methods were implemented in MATLAB, where 12 datasets of different dimensionality from the UCI repository were used. kNN was chosen as the classifier to evaluate the classification accuracy of the selected features. Based on the experimental results, mGWO did not show significant improvements in feature reduction and maintained accuracy similar to the original GWO. In contrast, mWOA outperformed the original WOA on both criteria, even on high-dimensional datasets. 
Evaluating the execution time of the proposed methods, utilizing different classifiers, and hybridizing the proposed methods with other metaheuristic algorithms to solve feature selection problems are future work worth exploring.","PeriodicalId":32468,"journal":{"name":"JOIV International Journal on Informatics Visualization","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84394806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}