{"title":"Design of Smart Shoes for Blind People","authors":"Muhammad Aiman Mohd Razin, M. A. Husman, S. Toha, Aisyah Ibrahim","doi":"10.51662/jiae.v3i1.89","DOIUrl":"https://doi.org/10.51662/jiae.v3i1.89","url":null,"abstract":"Our daily lives depend heavily on our eyes. Eyesight is our most valuable gift, enabling us to see the world around us. However, some people suffer from visual impairments that hinder their ability to perceive their surroundings. As a result, such people experience difficulties moving comfortably in public places. One crucial aspect of mobile accessibility is detecting elevation changes: changes in the height of the ground or a floor, such as stairs, curbs, and potholes, which are common in both indoor and outdoor environments. People who are blind or visually impaired must detect these changes and assess their distance and extent to navigate them safely and effectively. Depth perception is essential to doing so and can be challenging for those with visual impairments. Therefore, this research aims to design a smart shoe that assists in climbing up and down stairs, using an IMU sensor to detect the user's movement. Before constructing a controller, the system is modelled mathematically and physically. The mathematical model is derived from the mobility of people with visual impairment. The smart shoes are modelled in a 3D virtual world using the SolidWorks software. In addition, the shoe integrates ultrasonic sensors; whenever an obstacle or barrier is detected, the user is alerted via vibration. As a result, the smart shoes unlock the heels whenever a low or high elevation is detected and vibrate if there is an obstacle. 
With the help of this device, the confidence of people with visual impairment to walk independently is expected to improve.","PeriodicalId":424190,"journal":{"name":"Journal of Integrated and Advanced Engineering (JIAE)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115570996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
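The heel-unlock and vibration behaviour described in the abstract can be sketched as a simple decision rule. This is an illustrative Python reconstruction, not the authors' controller: the `shoe_actions` function and both thresholds are hypothetical.

```python
# Illustrative sketch of the decision logic in the abstract: unlock the heel
# when an IMU-derived elevation change is detected, and vibrate when the
# ultrasonic sensor reports an obstacle in range. Thresholds are made up.

def shoe_actions(elevation_change_cm: float, obstacle_distance_cm: float,
                 elevation_threshold_cm: float = 5.0,
                 obstacle_threshold_cm: float = 50.0) -> dict:
    """Map sensor readings to actuator commands."""
    return {
        # height change beyond the threshold (up or down) unlocks the heel
        "unlock_heel": abs(elevation_change_cm) >= elevation_threshold_cm,
        # ultrasonic range below the threshold triggers the vibration motor
        "vibrate": 0 < obstacle_distance_cm <= obstacle_threshold_cm,
    }

print(shoe_actions(elevation_change_cm=12.0, obstacle_distance_cm=30.0))
```

In a real controller the elevation change would come from integrating IMU readings rather than arriving as a clean number, but the mapping from sensed state to actuator command is the same.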
{"title":"Design and Simulation High Pass Filter Second Order and C-Type Filter for Reducing Harmonics as Power Quality Repair Effort in the Automotive Industry","authors":"Mochamad Irlan Malik, E. Ihsanto","doi":"10.51662/jiae.v3i1.79","DOIUrl":"https://doi.org/10.51662/jiae.v3i1.79","url":null,"abstract":"Electrical distribution is one of the most important parameters in industrial processes. Therefore, good power quality is needed as a supply to industrial machines. The use of industrial machines leads to the emergence of harmonics. Large harmonics degrade power quality, affecting productivity in the industry. Therefore, samples were taken using a Power Quality Analyzer on the secondary side of an 800 kVA transformer to maximise the supply of electricity to consumers. The measurements showed a THDi of 23.1% in phase L1, 24.7% in phase L2, and 21% in phase L3, and a 5th-order IHDi of 18.3% in phase L1, 20.7% in phase L2, and 16.6% in phase L3; according to IEEE Std 3002.8-2018 and SPLN D5.004-1:2012, the IHDi value should not exceed 7%. The system was then simulated in MATLAB/Simulink with a second-order high-pass filter and a C-type filter. 
Combining the two filters reduced the THDi to 2.53% in phase L1, 2.69% in phase L2, and 2.22% in phase L3, and the 5th-order IHDi to 1.48% in phase L1, 1.62% in phase L2, and 1.33% in phase L3.","PeriodicalId":424190,"journal":{"name":"Journal of Integrated and Advanced Engineering (JIAE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128867038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
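The THDi figures quoted in this abstract follow the standard definition of total harmonic distortion of current: the RMS of the harmonic components divided by the fundamental. A minimal sketch, with made-up per-order currents (the paper reports measured percentages, not raw currents):

```python
import math

def thd(fundamental: float, harmonics: list[float]) -> float:
    """Total harmonic distortion as a fraction (multiply by 100 for percent):
    sqrt(sum of squared harmonic magnitudes) / fundamental magnitude."""
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Hypothetical per-order harmonic currents (A) for one phase
i1 = 100.0                     # fundamental current
i_h = [18.0, 9.0, 5.0, 3.0]    # e.g. 5th, 7th, 11th, 13th orders
print(f"THDi = {thd(i1, i_h) * 100:.1f}%")
```

The individual harmonic distortion (IHDi) for a single order is simply that order's magnitude divided by the fundamental, which is why the 5th-order IHDi can dominate the THDi when one harmonic is large.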
{"title":"Counting Various Vehicles using YOLOv4 and DeepSORT","authors":"Alfan Pahreza Kusumah, Dena Djayusman, Galih Rizki Setiadi, Ade Chandra Nugraha, Priyanto Hidayatullah","doi":"10.51662/jiae.v3i1.68","DOIUrl":"https://doi.org/10.51662/jiae.v3i1.68","url":null,"abstract":"The Ministry of Public Works and Public Housing (PUPR) conducted a traffic survey to determine the total number of vehicles and classify them according to the Bina Marga vehicle categorisation. The survey has thus far been carried out manually. As a result, surveys take a lot of time and money to perform. Additionally, as the survey scope grows, so will the requirement for surveyors. Therefore, a substitute that can execute the survey procedure automatically and with tolerable accuracy is required. One solution is to utilise deep learning technology to detect and categorise vehicles, which can be used in applications. The program is designed as a web application that provides a summary of vehicle counts and receives video data from traffic recordings. The deep learning model used is YOLOv4, which is trained to recognise vehicle classes following the Bina Marga vehicle types. The model was trained and tested using the Python programming language and the Darknet framework on the Google Colab platform. 
The YOLOv4 and DeepSORT method with a custom dataset reached a decent accuracy of 67.94%, considering that only 1,000 images were used for training the model.","PeriodicalId":424190,"journal":{"name":"Journal of Integrated and Advanced Engineering (JIAE)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126223358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identification of Whatsapp Digital Evidence on Android Smartphones using The Android Backup APK (Application Package Kit) Downgrade Method","authors":"Deny Sulisdyantoro, M. I. Marzuki","doi":"10.51662/jiae.v3i1.70","DOIUrl":"https://doi.org/10.51662/jiae.v3i1.70","url":null,"abstract":"The use of WhatsApp for actions that lead to unlawful acts is a serious matter that needs to be proven in court. Android and the WhatsApp messaging application continue to update their features and security to provide maximum service and protection to their users, such as WhatsApp database encryption using crypt14. With crypt14 encryption on the WhatsApp database, investigations of WhatsApp digital evidence against Electronic Evidence (BBE) require an acquisition and extraction method to identify artefacts relevant to digital evidence needs. The National Institute of Standards and Technology (NIST) reference methodology, from the collection, examination, and analysis to reporting stages, has become a widely used framework for digital forensics against BBE. The Android Backup Application Package Kit (APK) Downgrade method can decrypt the crypt14 WhatsApp database, making it a viable mobile-forensics approach for investigations into certain criminal cases, including recovery of data that users have deleted. 
With the Cellebrite tools, the Android Backup Application Package Kit (APK) Downgrade method can identify approximately 651% more artefacts than the Android Backup and logical acquisition methods using the FinalData and MobilEdit tools.","PeriodicalId":424190,"journal":{"name":"Journal of Integrated and Advanced Engineering (JIAE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129779089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conversational Analysis Agents for Depression Detection: A Systematic Review","authors":"Akeem Olowolayemo, Maymuna Gulfam Tanni, Intiser Ahmed Emon, Umayma Ahhmed, ‘Arisya Mohd Dzahier, Md Rounak Safin, Nusrat Zahan Nisha","doi":"10.51662/jiae.v3i1.85","DOIUrl":"https://doi.org/10.51662/jiae.v3i1.85","url":null,"abstract":"Depression is known as a non-cognitive disturbance seen among different people all over the world. It pertains to disturbances of cognition and behavior arising from overt disorders of cerebral function. It occurs across age groups, from young adults to the elderly, and is associated with lifestyle, work pressure, personal problems, disease, stroke or hemorrhage, certain brain diseases, and paralysis. This paper focuses on reviewing research previously done on detecting depression. Using predefined search systems, we examined studies focusing on depression that involved conversational data for detection and diagnosis. The objective of this research is to review studies on whether conversational agents can detect and diagnose depression using smart texting analysis. The study was done by searching IEEE Xplore, Sci-Hub, DOI, Scopus, and PubMed using a predefined search strategy. The review focused on studies covering the detection and diagnosis of depression that involved conversational data or analysis agents, after independent reviewers assessed them for relevance and eligibility. From more than 117 references retrieved initially, the set was narrowed down to 95 references found relevant, as most of them applied analytical techniques and technology-based solutions. Detecting and diagnosing depression through smart texting analysis is a broad and emerging field with a promising future, but not every study was robust enough to yield valid results. 
This study aimed to keep the review as precise and informative as possible. ","PeriodicalId":424190,"journal":{"name":"Journal of Integrated and Advanced Engineering (JIAE)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114147908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sentiment Analysis and Text Classification for Depression Detection","authors":"Iffah Nadhirah Joharee, Nik Nur Wahidah Nik Hashim, Nur Syahirah Mohd Shah","doi":"10.51662/jiae.v3i1.86","DOIUrl":"https://doi.org/10.51662/jiae.v3i1.86","url":null,"abstract":"Depression is an illness that can harm someone's life. However, many people still do not know that they have depression and tend to express their feelings through text or social media. Thus, text-based depression detection could help in the early identification of the illness. Therefore, this research aims to build a depression detection system that can identify possible depression cues in Bahasa Malaysia text. The data, in the form of text, were collected from depressed and healthy people via a Google Form. Three questions were asked: “Apakah kenangan manis yang anda ingat?” (“What sweet memories do you remember?”), “Apakah rutin harian anda?” (“What is your daily routine?”), and “Apakah keadaan yang membuatkan anda stress?” (“What situations make you stressed?”), which obtained 172, 169, and 170 responses respectively. All the datasets are stored in a CSV file. Using Python, TF-IDF features were extracted and fed through a pipeline into several classifier models such as Random Forest, Multinomial Naïve Bayes, and Logistic Regression. The results were presented using the classification metrics of confusion matrix, accuracy, and F1-score. In addition, the sentiment techniques VADER and TextBlob were applied to the datasets to identify whether depressive text falls under negative sentiment or vice versa. The percentage differences between the actual sentiment and the VADER and TextBlob sentiment were determined. In the experiment, the highest score was achieved by the AdaBoost classifier with an F1-score of 0.66. 
The best model is chosen to be utilized in the Graphical User Interface (GUI).","PeriodicalId":424190,"journal":{"name":"Journal of Integrated and Advanced Engineering (JIAE)","volume":"401 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126872036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
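The TF-IDF features this paper feeds into its classifiers weight each term by how frequent it is in a document and how rare it is across documents. A minimal, self-contained sketch of that computation (the paper used a library pipeline; the two toy Bahasa Malaysia documents below are made up):

```python
import math
from collections import Counter

def tfidf(docs: list[str]) -> list[dict[str, float]]:
    """Plain TF-IDF: term frequency times log(N / document frequency)."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    # document frequency: in how many documents each term appears
    df = Counter(t for doc in tokenized for t in set(doc))
    out = []
    for doc in tokenized:
        tf = Counter(doc)
        out.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return out

docs = ["saya rasa sedih", "saya suka rutin harian saya"]
vecs = tfidf(docs)
# 'saya' appears in every document, so its IDF (and TF-IDF weight) is zero
print(vecs[0]["saya"])   # 0.0
```

Library implementations such as scikit-learn's `TfidfVectorizer` add smoothing and normalisation on top of this, but the discriminative idea — common words carry no signal, distinctive words carry the most — is the same one the classifiers exploit.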
{"title":"Learning a Multimodal 3D Face Embedding for Robust RGBD Face Recognition","authors":"Ahmed Rimaz Faizabadi, Hasan Firdaus Mohd Zaki, Z. Zainal Abidin, M. A. Husman, Nik Nur Wahidah Nik Hashim","doi":"10.51662/jiae.v3i1.84","DOIUrl":"https://doi.org/10.51662/jiae.v3i1.84","url":null,"abstract":"Machine vision will play a significant role in the next generation of IR 4.0 systems. Recognition and analysis of faces are essential in many vision-based applications. Deep learning provides the thrust for the advancement in visual recognition, and Convolutional Neural Networks (CNN) are an important tool for visual recognition tasks. However, 2D methods for machine vision suffer from Pose, Illumination, and Expression (PIE) challenges and occlusions. 3D Face Recognition (3DFR) is very promising for dealing with PIE and a certain degree of occlusion and is suitable for unconstrained environments. However, 3D data is highly irregular, affecting the performance of deep networks. Most 3D face recognition models are implemented from a research perspective, and complete 3DFR applications are rare. This work attempts to implement a complete end-to-end robust 3DFR pipeline. For this purpose, we implemented CuteFace3D, a face recognition model trained on the most challenging dataset, on which the state-of-the-art model had below 95% accuracy. An accuracy of 98.89% is achieved on the intellifusion test dataset. Further, for open-world and unseen-domain adaptation, embedding learning is achieved using KNN. A complete FR pipeline for RGBD face recognition is then implemented using a RealSense D435 depth camera. With the KNN classifier and k-fold validation, we achieved 99.997% for the open-set RGBD pipeline on registered users. 
The proposed method with early fusion four-channel input is found to be more robust and has achieved higher accuracy in the benchmark dataset.","PeriodicalId":424190,"journal":{"name":"Journal of Integrated and Advanced Engineering (JIAE)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121795996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
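The open-set recognition step described above — matching a query embedding against registered users with KNN — can be sketched as nearest-neighbour search with a rejection threshold. This is an illustrative reconstruction, not the authors' pipeline; the 3-dimensional embeddings and the 0.7 threshold are made up (real face embeddings have hundreds of dimensions).

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest_identity(query, gallery, threshold=0.7):
    """gallery: dict name -> registered embedding.
    Returns the best-matching name, or 'unknown' if the match is too weak
    (this rejection is what makes the pipeline open-set)."""
    best_name, best_sim = "unknown", -1.0
    for name, emb in gallery.items():
        sim = cosine(query, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else "unknown"

gallery = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
print(nearest_identity([0.9, 0.1, 0.0], gallery))   # close to alice's embedding
print(nearest_identity([0.1, 0.1, 0.9], gallery))   # far from everyone: unknown
```

With k greater than 1, the same idea votes over the k nearest registered embeddings instead of taking the single best match.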
{"title":"Detection of Road Cracks Using Convolutional Neural Networks and Threshold Segmentation","authors":"Arselan Ashraf, A. Sophian, A. Shafie, T. Gunawan, N. N. Ismail, Ali Aryo Bawono","doi":"10.51662/jiae.v2i2.82","DOIUrl":"https://doi.org/10.51662/jiae.v2i2.82","url":null,"abstract":"Automatic road crack detection is an important transportation maintenance responsibility for ensuring driving comfort and safety. Manual inspection is considered risky because it is time-consuming, costly, and dangerous for the inspectors. Automated road crack detection techniques have been extensively researched and developed in order to overcome this issue. Despite the difficulties, most of the proposed methodologies and solutions involve machine vision and machine learning, which have lately gained traction largely due to increasingly affordable processing power. Nonetheless, it remains a difficult task due to the inhomogeneity of crack intensity and the intricacy of the background. In this paper, a convolutional neural network-based method for crack detection is proposed, inspired by recent advancements in applying machine learning to computer vision. The primary goal of this work is to employ convolutional neural networks to detect road cracks. Images are used as input, and preprocessing and threshold segmentation are applied to them. The processed output is fed to a CNN for feature extraction and classification. 
The training accuracy was found to be 96.20 %, the validation accuracy to be 96.50 %, and the testing accuracy to be 94.5 %.","PeriodicalId":424190,"journal":{"name":"Journal of Integrated and Advanced Engineering (JIAE)","volume":"95 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114091684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
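The threshold-segmentation preprocessing described above exploits the fact that crack pixels are typically darker than the road surface. A minimal sketch of that binarisation step, assuming a simple global threshold (the paper does not specify its thresholding variant; the 4x4 "image" and the threshold of 100 are made up):

```python
def threshold_segment(gray, threshold=100):
    """gray: 2D list of 0-255 intensities.
    Returns a binary mask where 1 marks a crack-candidate (dark) pixel."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

# Toy grayscale patch: bright road surface with a dark diagonal streak
image = [
    [200, 210,  40, 205],
    [198,  35,  42, 201],
    [ 30, 205, 210, 199],
    [202, 208, 207, 203],
]
mask = threshold_segment(image)
print(mask)   # the dark streak is flagged; the CNN then classifies such regions
```

Because lighting varies across road images, practical systems often use an adaptive or Otsu threshold instead of a fixed one; the mask is then passed to the CNN, which handles the inhomogeneity the thresholding alone cannot.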
{"title":"Chatbot System for Mental Health in Bahasa Malaysia","authors":"Muhammad Imran Ismael, Nik Nur Wahidah Nik Hashim, Nur Syahirah Mohd Shah, Nur Syuhada Mohd Munir","doi":"10.51662/jiae.v2i2.83","DOIUrl":"https://doi.org/10.51662/jiae.v2i2.83","url":null,"abstract":"Chatbots have been a driving force of modern communication in business, customer service, and even mental healthcare. At the same time, there is little research and few projects regarding mental health chatbots in Bahasa Malaysia. This project focuses on developing a chatbot application for mental healthcare in Bahasa Malaysia. The chatbot system is integrated with artificial intelligence and natural language processing. The chatbot utilizes a feedforward neural network model trained on the datasets. Apart from the backend of the application, Kivy and KivyMD are used to create the app's graphical user interface.","PeriodicalId":424190,"journal":{"name":"Journal of Integrated and Advanced Engineering (JIAE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130804986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
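The intent-classification flow behind such a chatbot — message in, bag-of-words vector, network scores per intent, canned reply out — can be shown in miniature. This is a highly simplified sketch, not the authors' system: the real project trains a feedforward neural network, whereas the weight matrix, vocabulary, intents, and Bahasa Malaysia replies below are all made up for illustration.

```python
VOCAB = ["sedih", "stress", "hello", "hai"]
INTENTS = ["distress", "greeting"]
# one weight row per intent (stand-in for a trained network's weights)
WEIGHTS = [
    [1.0, 1.0, 0.0, 0.0],   # distress keywords
    [0.0, 0.0, 1.0, 1.0],   # greeting keywords
]
REPLIES = {"distress": "Saya faham. Ceritakan lebih lanjut.",   # "I understand. Tell me more."
           "greeting": "Hai! Apa khabar?"}                      # "Hi! How are you?"

def respond(message: str) -> str:
    """Bag-of-words encode, score each intent, reply for the argmax intent."""
    tokens = message.lower().split()
    bow = [1.0 if w in tokens else 0.0 for w in VOCAB]
    scores = [sum(w * x for w, x in zip(row, bow)) for row in WEIGHTS]
    intent = INTENTS[scores.index(max(scores))]
    return REPLIES[intent]

print(respond("hai apa khabar"))
print(respond("saya rasa sedih dan stress"))
```

In the real system, the linear scoring here is replaced by a trained feedforward network with hidden layers and a softmax over many intents, but the encode-score-reply loop is the same.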
{"title":"Design & Fabrication of Automatic Color & Weight-Based Sorting System on Conveyor Belt","authors":"Tasnuva Jahan Nuva, Md Imteaz Ahmed, S. Mahmud","doi":"10.51662/jiae.v2i2.87","DOIUrl":"https://doi.org/10.51662/jiae.v2i2.87","url":null,"abstract":"Object sorting is a basic process that is employed in a variety of disciplines in our daily lives for our convenience. Previously, the sorting operation was done by hand using manual labor. Because product quality does not remain consistent during the typical sorting process, it adds complexity to the segregation of products based on their height, color, size, and weight. This method is also time-consuming and slows down output. To overcome these problems, Low-Cost Automation (LCA) has been implemented in the sorting system, which aims to reduce production time, labor cost, and processing complexity, improve product quality, and increase the production rate. So, in this project, an effective method has been developed for automatically sorting objects based on color and weight. This method uses a conveyor belt, strain gauge load cell, DC motor, servo motor, TCS 34725 RGB color sensor, LCD, LED, and LDR to identify, separate, and collect the objects according to their color and weight. An Arduino is used to control all the processes. This work sorts three colors (red, green, and blue) and weights in different ranges. First, objects are sorted by weight using the load cell, and then the desired colors are sorted by the color sensor. A bucket at the end of the conveyor belt can be rotated depending on the signal sent from the Arduino to collect the box. The collecting box has a specific portion for each color. 
Hence, it could be rotated at a specific angle for an exact weight for red, blue, and green colors.","PeriodicalId":424190,"journal":{"name":"Journal of Integrated and Advanced Engineering (JIAE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130800173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
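The sorting decision — map a detected colour to a bucket angle, gated by a weight range — is a small lookup. An illustrative sketch (not the authors' Arduino firmware, which runs on-device in C++): the `sort_object` function, the angles, and the weight limits are hypothetical.

```python
# Colour -> servo angle for the collection bucket (degrees); made-up values
ANGLES = {"red": 0, "green": 60, "blue": 120}

def sort_object(color: str, weight_g: float,
                min_g: float = 10.0, max_g: float = 500.0):
    """Return the bucket rotation angle for an accepted object,
    or None to reject (unknown colour or out-of-range weight)."""
    if color not in ANGLES or not (min_g <= weight_g <= max_g):
        return None
    return ANGLES[color]

print(sort_object("green", 120.0))   # rotate bucket to the green portion
print(sort_object("red", 700.0))     # overweight: rejected
```

On the actual hardware, the load-cell reading gates the decision first and the colour-sensor reading then selects the servo angle, matching the weight-then-colour order described in the abstract.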