{"title":"Hybrid Machine Learning Model for Lie-Detection","authors":"Rupali J Dhabarde, D. V. Kodawade, Sheetal Zalte","doi":"10.1109/I2CT57861.2023.10126460","DOIUrl":"https://doi.org/10.1109/I2CT57861.2023.10126460","url":null,"abstract":"A technique for recognizing a person from his photograph is facial recognition. Due to its extensive range of applications in several fields, it has drawn the attention of numerous researchers in the field of computer vision in recent years (Cyber security, crime cases, and biometrics). This technology's operation is based on the extraction of features from an input picture using methods like PCA, ICA, LDA etc. After comparing them with others from another image to verify or assert an individual's identification. Via this work, we applied amalgamation of CNN and SVM techniques to two face datasets that will be split into two groups in a machine learning-based methodology. We assessed different machine learning-based lie detectors using our amassed dataset. Our findings demonstrate that combined CNN with SVM task achieved accuracy up to 58%.","PeriodicalId":150346,"journal":{"name":"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115580918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relative Study of Intelligent Control Techniques to Maintain Variable Pitch-Angle of the Wind Turbine","authors":"V. Khatavkar, Snehal Andhale, Panchshila Pillewar, Utkarsh Alset","doi":"10.1109/I2CT57861.2023.10126335","DOIUrl":"https://doi.org/10.1109/I2CT57861.2023.10126335","url":null,"abstract":"The wind turbine requires a robust and time-responsive system to control the pitch–angle (Pit–Ang) of the mechanical actuator. If the response of speed is very efficient then, the controller can act according to the prescribed logic and its mechanical mechanism can work faster with its response time. In this paper, real-time Data from IMD (Indian Meteorological Department) is used for the relative study of the model of wind turbine created in MATLAB / Simulink® environment using Fuzzy Logic Toolbox™. The principle of wind turbine used here is to supply a controlled input to the system and after synthesis, these rules in the form of signal are transferred to the plant which has a drive train and pitch actuator. The output responses of the proposed controller are compared amongst proportional– integral–derivative (PID), fuzzy, and adaptive fuzzy–PID Controllers. 
The simulation results seen between the adaptive fuzzy– based PID controller surpasses the expected results by Tr = 95.454%, Ts = 61.409% and negligible overshoot as compared to open–looped and conventional responsive controller.","PeriodicalId":150346,"journal":{"name":"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115634459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
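The controller comparison above can be illustrated with a minimal sketch: a discrete PID loop whose proportional gain is boosted by a simple fuzzy-style rule when the pitch error is large. The first-order actuator model, the gains, and the adaptation rule are illustrative assumptions, not the paper's Simulink model.

```python
# Discrete PID pitch-angle loop with a crude fuzzy-style gain schedule:
# when the error is large, the proportional gain is boosted.
def simulate(adaptive=True, setpoint=10.0, steps=200, dt=0.05):
    kp, ki, kd = 2.0, 1.0, 0.1
    integral, angle = 0.0, 0.0
    prev_err = setpoint          # so the first derivative term is zero
    for _ in range(steps):
        err = setpoint - angle
        # Fuzzy-style rule: boost Kp while the error exceeds half the setpoint.
        gain = kp * (1.5 if adaptive and abs(err) > 0.5 * setpoint else 1.0)
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = gain * err + ki * integral + kd * derivative
        angle += (u - angle) * dt        # first-order actuator dynamics
        prev_err = err
    return angle

final_adaptive = simulate(adaptive=True)
final_plain = simulate(adaptive=False)
print(final_adaptive, final_plain)
```

Both loops settle near the setpoint; the adaptive schedule acts early in the transient, which is where the paper reports its rise-time and settling-time gains.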
{"title":"Survey on Smartphone Sensors and User Intent in Smartphone Usage","authors":"Priyanka Bhatele, Dr Mangesh Bedekar","doi":"10.1109/I2CT57861.2023.10126192","DOIUrl":"https://doi.org/10.1109/I2CT57861.2023.10126192","url":null,"abstract":"Smartphone/Tablet users are approximately 3 million all over the world. It is likely to increase by several 100 million in the next few years. Around 40% of these users read online. Explicit means of feedback system is strongly based. It provides the most accuracy when rating an online learning application. Increase in the availability of content over the web and high user engagements, has led to the demand of the means that implicitly provide feedback. Implicit feedback relies on understanding the quality of the content based on the user activities performed over the web applications. Less accuracy is the limitation. It needs to stand with a support to provide as strong base as the explicit model does. Clipboard copy operations on the webpage provide an implicit insight to the user intentions. Screen activities like scrolling and pinch to zoom further can statistically be proven the positive indicators of user interest. Smartphone sensors like Gyroscope and Accelerometer silently sense human screen activities and mobile gestures. This review paper is based on the understanding of smartphone sensors and the inferences of user intent through it. 
The dig is based on various implicit indicators like mobile gestures, smartphone sensors and clipboard copy operations.","PeriodicalId":150346,"journal":{"name":"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114583812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
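The implicit-feedback idea the survey covers (combining scrolls, pinch-zooms, and clipboard copies into an interest signal) can be sketched as a weighted count. The weights below are illustrative assumptions, not values from the survey.

```python
# Toy implicit-interest score: weight each implicit indicator and sum.
# Weights are illustrative; a real system would fit them to explicit ratings.
WEIGHTS = {"scroll": 0.2, "pinch_zoom": 0.5, "clipboard_copy": 1.0}

def interest_score(events):
    """Weighted count of implicit-interest indicators for one page visit."""
    return sum(WEIGHTS.get(kind, 0.0) * count for kind, count in events.items())

engaged = interest_score({"scroll": 12, "pinch_zoom": 3, "clipboard_copy": 2})
skimmed = interest_score({"scroll": 2})
print(engaged, skimmed)  # an engaged reader scores higher than a skimmer
```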
{"title":"Design of an Efficient Bioinspired Model for Optimizing Robotic Arm Movements via Ensemble Learning Operations","authors":"Prachi V. Karlekar, Swapna Choudhary, Atul Deshmukh, Harish Banote","doi":"10.1109/I2CT57861.2023.10126406","DOIUrl":"https://doi.org/10.1109/I2CT57861.2023.10126406","url":null,"abstract":"Robotic arm movements are highly dependent on design and deployment of sensors & actuation devices & their duty cycles. Optimizing current-level duty cycles for these devices can reduce the power consumption, and maximize the efficiency of control for different device operations. Existing duty cycle control models for robotic arms are highly complex, or have lower efficiency levels. To overcome these issues, this text proposes design of an efficient bioinspired model for optimizing robotic arm movements via ensemble learning operations. The arm is built using Arduino controller along with stepper motors, which assist in controlled movements for different arm operations. The proposed model uses Mayfly Optimization (MO) in order to identify duty cycles of different arm components for different movement types. The MO Model uses delay, energy and jitter parameters in order to estimate a fitness function that is optimized in order to identify arm movement sets. These movement sets are classified into performance-aware movements via a combination of Naïve Bayes (NB), k Nearest Neighbours (kNN), Support Vector Machine (SVM), Logistic Regression (LR), and Multilayer Perceptron (MLP) classifiers. 
Due to which the model is able to reduce the delay needed for control the arms by 8.3%, reduce the energy needed for control operations by 2.9%, and reduce the control jitter by 4.5% under real-time scenarios.","PeriodicalId":150346,"journal":{"name":"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116748604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
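The fitness evaluation described above (folding delay, energy, and jitter into one value to be minimized over duty cycles) can be sketched as below. Plain random search stands in for Mayfly Optimization to keep the example short, and the weights and cost model are illustrative assumptions, not the paper's measurements.

```python
# Weighted delay/energy/jitter fitness over a duty cycle, minimized by a
# random-search optimizer standing in for Mayfly Optimization.
import random

def fitness(duty_cycle, w_delay=0.5, w_energy=0.3, w_jitter=0.2):
    # Hypothetical cost model: low duty cycles save energy but add delay.
    delay = 1.0 / duty_cycle
    energy = duty_cycle ** 2
    jitter = abs(duty_cycle - 0.5)
    return w_delay * delay + w_energy * energy + w_jitter * jitter

random.seed(0)
best = min((random.uniform(0.05, 1.0) for _ in range(500)), key=fitness)
print(f"best duty cycle ~ {best:.3f}, fitness = {fitness(best):.3f}")
```

In the paper's pipeline the duty-cycle sets found this way are then fed to the NB/kNN/SVM/LR/MLP ensemble for classification into performance-aware movements.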
{"title":"Strategies for Improving Object Detection in Real-Time Projects that use Deep Learning Technology","authors":"Niloofar Abed, Ramu Murugan","doi":"10.1109/I2CT57861.2023.10126449","DOIUrl":"https://doi.org/10.1109/I2CT57861.2023.10126449","url":null,"abstract":"The popularity and prevalence of devices equipped with object detection technology and controllable via the Internet of Things (IoT) have increased, especially in the post-Corona era. The development of neural networks and artificial intelligence by combining them with IoT systems has achieved acceptable satisfaction among users in adverse conditions by reducing the need for manpower and increasing productivity. Therefore, the scope of using such mechanisms has expanded in most fields, from self-driving vehicles to agricultural crops. Beginners will be confronted with a massive amount of complex information as a result of the design and application of such technologies in interdisciplinary fields. Due to the popularity of using the You Only Look Once (YOLO) object detection algorithm, this article provided a guideline as a traffic light subject classification and, offers suggested solutions and exclusive approches to increase the accuracy of object detection in real-time projects with a practical application attitude for the enthusiasts and developers particularly in object detection scenarios by employing YOLO.","PeriodicalId":150346,"journal":{"name":"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-07","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115886182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Forest Fire Detection using Convolutional Neural Network Model","authors":"Shubham Sah, S. Prakash, S. Meena","doi":"10.1109/I2CT57861.2023.10126370","DOIUrl":"https://doi.org/10.1109/I2CT57861.2023.10126370","url":null,"abstract":"Everyone recalls the destruction brought on by the Australian forest fires in 2019. Even though there isn’t much we can do to battle forest fires on our own, we can always rely on technology. By this we are trying to predict the accuracy of these models on forest fire data set. We are trying to detect forest fire in dense forest; our data set is very diverse and consist of images having forest fires, smokes, non-smoke and fire images. We have found out that Sensor detection and real-time geological data analysis are two methods for detecting forest fires. However, using image classification, for which Deep learning is the most efficient solution, is one of the best methods for detecting fire. In addition, these algorithms can be integrated with drones using deep learning techniques so that images can be taken frequently from the sky with ease, smoke can be detected in dense forests, and the authorities can be notified to take immediate action. The convolutional neural network algorithm for fire detection was the sole focus of our paper. The value of various epochs is used to evaluate these results.","PeriodicalId":150346,"journal":{"name":"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115472819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Mango Fruit Disease Severity Assessment with CNN and SVM-Based Classification","authors":"D. Banerjee, V. Kukreja, S. Hariharan, Vishal Jain","doi":"10.1109/I2CT57861.2023.10126397","DOIUrl":"https://doi.org/10.1109/I2CT57861.2023.10126397","url":null,"abstract":"The mango leaf powdery mildew disease poses a serious threat to mango production society globally by significantly lowering yield and quality. For timely intervention and efficient management, early disease detection and classification are important. In this research and education area, a novel hybrid approach utilizes Convolutional Neural Networks (CNN) and Support Vector Machines to identify the mango leaf powdery mildew disease based on four severity levels (SVM). Three phases make up the proposed approach: data structure, CNN-selected attributes, and SVM classification. We collect and preprocess images of mango leaves during the data organization step, and in the CNN - attributes selection phase, we apply a CNN model for feature extraction and selection. For the mango leaf powdery mildew dataset, we improve the CNN model to find the most relevant features for the classification task. The SVM - classification step includes training an SVM model on the obtained features and refining the hyperparameters via k-fold cross-validation. The proposed CNN and SVM hybrid multi-classification model for mango leaf powdery mildew disease achieved an overall accuracy of 89.29%. A dataset of 2559 images with 4 severity levels was utilized. The model works well overall, as a macro-average F1-score of 90.10, the weighted average F1-score's minimal value of 53.85%. 
The model is less successful in predicting instances for classes with smaller support proportions, as shown by the micro-average F1-score, which is 89.29% and is lower overall than the macro-average F1-score.","PeriodicalId":150346,"journal":{"name":"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123135445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
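The SVM-classification phase above (train an SVM on CNN-extracted features, tune hyperparameters by k-fold cross-validation) can be sketched with scikit-learn's grid search. Random features stand in for the CNN output, and the parameter grid and class structure are illustrative assumptions.

```python
# k-fold hyperparameter tuning of an SVM over stand-in "CNN" features.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in features: 200 samples, 16 features, 4 severity levels.
X = rng.normal(size=(200, 16))
y = rng.integers(0, 4, size=200)
X[:, 0] += y  # make the classes weakly separable so the search finds signal

# 5-fold cross-validation over a small C/kernel grid.
search = GridSearchCV(
    SVC(), {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]}, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 2))
```

In the paper's setup the rows of `X` would be the CNN features for the 2559 leaf images and `y` the four severity labels.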
{"title":"Eye Health Monitoring System","authors":"Krishi Godhani, Adit Patel, Harsh Shah, Achal Mehta, Devlina Adhikari","doi":"10.1109/I2CT57861.2023.10126343","DOIUrl":"https://doi.org/10.1109/I2CT57861.2023.10126343","url":null,"abstract":"The current research focuses on examining the negative impacts of blue light on human eyes. With the increasing usage of digital devices such as laptops, smartphones, and televisions, individuals are spending most of their time in front of screens. This prolonged screen time puts immense strain on the eyes, and blue light with wavelengths between 415 nm and 455 nm is a significant contributor to eye strain and damage. To understand the extent of damage, we considered various parameters such as the size of the screen, light intensity, and luminous intensity. We used a TCS34725 RGB sensor to measure the blue light emissions reaching the human eye and established a relationship between sensor outputs and light intensity. To classify the data, we utilized both KNN and Naïve Bayes algorithms for efficient analysis and quicker results.","PeriodicalId":150346,"journal":{"name":"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123163045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimized Recognition Of CAPTCHA Through Attention Models","authors":"Raghavendra A Hallyal, S. C, P. Desai, M. M","doi":"10.1109/I2CT57861.2023.10126193","DOIUrl":"https://doi.org/10.1109/I2CT57861.2023.10126193","url":null,"abstract":"Information retrieval from the CAPTCHA is a crucial part, this CAPTCHA always contains some unwanted information along with required information, so attention technique comes in handy to select such useful information discarding the unwanted part. The attention concept has become a very important part in the field of deep learning which uses Natural Language Processing(NLP) and Computer Vision(CV). Attention mechanism is rigorously used in OCR based applications which requires generating of selected information rather than every information available. Our work includes implementation of general, global and local Attention mechanisms used with two different models which were transfer learning model and the parameter search model. As OCR with attention technique is computationally costly it is required to optimize the entire process so we suggest optimized retrieval of information from CAPTCHA using parameter search algorithm. This retrieval includes using weights that reduced the training time from 4.03 minutes to 3.33 minutes and the number of training images which were used for training were reduced than before. 
We obtained the highest accuracy of 87.34% for general attention with parameter search model and local attention model with parameter search model proved to have less computation and less training time than the general attention with parameter search model.","PeriodicalId":150346,"journal":{"name":"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123330931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
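The attention weighting at the core of the CAPTCHA models above can be shown in a few lines: a query scores each encoder feature, a softmax turns the scores into weights, and the weighted sum (context vector) emphasizes the useful image regions over the clutter. Shapes and values here are illustrative, not the paper's models.

```python
# Dot-product attention: score positions, softmax, weighted sum of values.
import numpy as np

def attention(query, keys, values):
    scores = keys @ query                 # dot-product alignment scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over positions
    return weights @ values, weights

keys = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])   # encoder features
values = np.array([[5.0], [1.0], [4.0]])                # per-position content
context, weights = attention(np.array([1.0, 0.0]), keys, values)
print(weights.round(2), context.round(2))
```

"Global" attention computes these weights over every position, while "local" attention restricts the softmax to a window around a predicted position, which is why the paper finds it cheaper to compute.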
{"title":"Comparison of VGG-19 and RESNET-50 Algorithms in Brain Tumor Detection","authors":"J. Periasamy, Buvana S, J. P","doi":"10.1109/I2CT57861.2023.10126451","DOIUrl":"https://doi.org/10.1109/I2CT57861.2023.10126451","url":null,"abstract":"The brain is the organ that governs all of the body's functions. A brain tumor is a malignant or noncancerous development of aberrant cells and tissues in the brain. The average survival rate for people with primary brain tumors is 75.2 percent, thus early detection is critical. The identification of brain tumors is a crucial but time-consuming procedure. Traditional procedures are time-consuming and prone to human error. Computer-assisted diagnosis of brain cancers is unavoidable to overcome these constraints. Automated Brain Tumor Recognition from Magnetic Resonance Images could be a good answer to this problem.This study uses Deep Learning models to diagnose a brain tumor based on MRI scan results. The Brain tumor detection system analyzes MRI data using image processing and deep learning algorithms to detect cancers. This study compares the VGG19, and ResNet50 models for processing and detecting brain cancers based on their accuracy while using the same dataset.","PeriodicalId":150346,"journal":{"name":"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124972184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}