{"title":"Automatic Sleep Quality Analysis Using an Artificial Intelligence algorithm and EEG Signal Processing","authors":"M. Touil, Lhoussain Bahatti, Abdelmounime Elmagri","doi":"10.37965/jait.2023.0424","DOIUrl":"https://doi.org/10.37965/jait.2023.0424","url":null,"abstract":"Sleep is not a luxury. It is a necessity. If people sleep well, they will be more productive and start the morning in an excellent mood. On the other hand, people who don’t sleep well. They start their morning very drowsy irrespective of the other effects on their health. Such as the disturbance of the circadian rhythm and so on. In this paper an automatic hybrid algorithm is developed to analyze sleep quality using basically the EEG (Electroencephalogram) signal and polysomnographic report. The idea behind this is to perform the EEG signal processing in such a way as to be classified according to the sleep stages. Finally, we check if the subject passed through all the sleep cycles or not. in order to carry out this work Python version 3 was used.","PeriodicalId":135863,"journal":{"name":"Journal of Artificial Intelligence and Technology","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138966170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recruiting PE Teachers Based on Regional Socio-Economic Status Evaluation and Recommendation Algorithm","authors":"Haitao Long, Yinfu Lu","doi":"10.37965/jait.2023.0446","DOIUrl":"https://doi.org/10.37965/jait.2023.0446","url":null,"abstract":"The most important step in creating a teaching force for physical education (PE) is finding enough qualified teachers. In order to better absorb the PE teaching talents who are more suitable for the job requirements, the ability variables of sports talents, the expected regional social and economic status, and historical data are considered, the intelligent matching of talents and positions is made, and the Bayesian variational network recommendation model considering the needs are constructed. According to the experimental findings, this model's highest recommendation accuracy in the normal scenario is 0.5888 and its maximum recommendation accuracy in the training and test sets is roughly 0.6 and 0.68. The model has good convergence and high accuracy of recommendation, which is conducive to matching PE teaching talents and teaching positions, providing job seekers with positions that meet their needs, providing teaching talents to meet the requirements, and creating a team of PE teachers that match people and posts.","PeriodicalId":135863,"journal":{"name":"Journal of Artificial Intelligence and Technology","volume":"9 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138980202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Approach for Iris Recognition System Using Genetic Algorithm","authors":"J. Sarwade, Sandip Bankar, Surekha Janrao, Kishor Sakure, Rohini Patil, Shudhodhan Bokefode, Nilesh Kulal","doi":"10.37965/jait.2023.0434","DOIUrl":"https://doi.org/10.37965/jait.2023.0434","url":null,"abstract":"In a brand-new era, with chaotic scenario that exists within the world, people are undermined with diverse psychological assaults. There have been numerous sensible approaches on the way to understand and lessen those attacks. Bioscrypt developments have verified to be one of the beneficial approaches for intercepting these troubles. Identifying recognition through human iris organ is said as one of the well-known biometric strategies because of its reliability and higher accurate return in comparison to different developments. Reviewing beyond literatures, terrible imaging condition, low flexibility of version, and small length iris image dataset are the constraints desiring solutions. Among these kinds of developments, the iris popularity structures are suitable gear for the human identification. Iris popularity has been an energetic studies location for the duration of previous couple of decades, due to its extensive packages in the areas, from airports to native land protection border protection. In the past, various functions and methods for iris recognition have been presented. Despite of the very fact that there are many approaches published in this field, there are still liberal amount of problems in this methodology like tedious and computational intricacy. We suggest an all-encompassing deep learning architecture for iris recognition supported by a genetic algorithm and a Wavelet Transformation, which may jointly learn the feature representation and perform recognition to realize high efficiency. With just a few training photos from each class, we train our model on a well-known iris recognition dataset and demonstrate improvements over prior methods. We think that this architecture can be frequently employed for various biometric recognition jobs, assisting in the development of a more scalable and precise system. The exploratory aftereffects of the proposed technique uncover that the strategy is effective inside the iris acknowledgment.","PeriodicalId":135863,"journal":{"name":"Journal of Artificial Intelligence and Technology","volume":"71 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138596188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Brain Tumor Segmentation Based on The Learning Statistical Texture","authors":"Yufeng Guo, Feiba Chang, Xiaoyu Chen, Fengjun Sun, Zihong Wang","doi":"10.37965/jait.2023.0442","DOIUrl":"https://doi.org/10.37965/jait.2023.0442","url":null,"abstract":"Achieving accurate segmentation of brain tumors in Magnetic Resonance Imaging (MRI) is important for clinical diagnosis and accurate treatment, and the efficient extraction and analysis of MRI multimodal feature information is the key to achieving accurate segmentation. In this paper, we propose a multimodal information fusion method for brain tumor segmentation, aimed at achieving full utilization of multimodal information for accurate segmentation in MRI. In our method, the semantic information processing module (SIPM) and Multimodal Feature Reasoning Module (MFRM) are included: (1) SIPM is introduced to achieve free multiscale feature enhancement and extraction; (2) MFRM is constructed to process both the backbone network feature information layer and semantic feature information layer. Using extensive experiments, the proposed method is validated. The experimental results based on BraTS2018 and BraTS2019 datasets show that the method has unique advantages over existing brain tumor segmentation methods.","PeriodicalId":135863,"journal":{"name":"Journal of Artificial Intelligence and Technology","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139229449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining Handcrafted Features and Deep Learning for Automatic Classification of Lung Cancer on CT Scans","authors":"Pallavi Deshpande, Mohammed Wasim Bhatt, S. Shinde, Neelam Labhade-Kumar, N. Ashokkumar, K. G. S. Venkatesan, F. D. Shadrach","doi":"10.37965/jait.2023.0388","DOIUrl":"https://doi.org/10.37965/jait.2023.0388","url":null,"abstract":"On a global scale, lung cancer is responsible for around 27% of all cancer fatalities. Even though there have been great strides in diagnosis and therapy in recent years, the five-year cure rate is just 19%. Classification is crucial for diagnosing lung nodules. This is especially true today that automated categorization may provide a professional opinion that can be used by doctors. New computer vision and machine learning techniques have made possible accurate and quick categorization of CT images. This field of research has exploded in popularity in recent years because of its high efficiency and ability to decrease labour requirements. Here, they want to look carefully at the current state of automated categorization of lung nodules. General-purpose structures are briefly discussed, and typical algorithms are described. Our results show deep learning-based lung nodule categorization quickly becomes the industry standard. Therefore, it is critical to pay greater attention to the coherence of the data inside the study and the consistency of the research topic. Furthermore, there should be greater collaboration between designers, medical experts, and others in the field.","PeriodicalId":135863,"journal":{"name":"Journal of Artificial Intelligence and Technology","volume":"198 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139241199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing the Isolation Forest Algorithm for Identifying Abnormal Behaviors of Students in Education Management Big Data","authors":"Bibo Feng, Lingling Zhang","doi":"10.37965/jait.2023.0445","DOIUrl":"https://doi.org/10.37965/jait.2023.0445","url":null,"abstract":"With the changes in educational models, applying computer algorithms and artificial intelligence technologies to data analysis in universities has become a research hotspot in the field of intelligent education. In response to the increasing amount of student data in universities, this study proposes to use an optimized isolated forest algorithm for recognizing features to detect abnormal student behavior concealed in big data for educational management. Firstly, it uses logistic regression algorithm to update the calculation method of isolated forest weights, and then uses residual statistics to eliminate redundant forests. Finally, it utilizes discrete particle swarm optimization to optimize the isolated forest algorithm. On this basis, improvements have also been made to the traditional gated loop unit network. It merges the two improved algorithm models and builds an anomaly detection model for collecting college student education data. The experiment shows that the optimized isolated forest algorithm has a recognition accuracy of 0.986 and a training time of 1 second. The recognition accuracy of the improved gated loop unit network is 0.965, and the training time is 0.16 seconds. In summary, the constructed model can effectively identify abnormal data of college students, thereby helping educators to detect students' problems in time and helping students to improve their learning status.","PeriodicalId":135863,"journal":{"name":"Journal of Artificial Intelligence and Technology","volume":"252 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139246356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computer Vision Assisted Real-Time Bird Eye Chili Classification Using YOLO V5 Framework","authors":"Abubeker K M, None Abhijit, None Akhil S, None Akshat Kumar V K, None Ben K Jose","doi":"10.37965/jait.2023.0251","DOIUrl":"https://doi.org/10.37965/jait.2023.0251","url":null,"abstract":"Computer vision-based classification systems have become increasingly popular in the agricultural industry in recent years. This paper proposes a computer vision-assisted bird eye chili or 'kantahri mulaku' classification framework using the You Only Look Once V5 (YOLO V5) object detection model. Automated sorting systems based on computer vision can accurately identify and classify chilies based on attributes such as size, shape, colour, and texture. The dataset for the research consists of images of bird-eye chilies in different positions and backgrounds. The model was trained using this dataset, and it could correctly identify and categorize bird-eye chili. The chilies was then picked up by a robot manipulator and sorted by ripeness. Bird-eye chili images captured in real agricultural situations have used to assess the effectiveness of the suggested framework. Images of red and green chili was taken from above using a high-resolution Raspberry pi 4B camera attached to a custom-built 3-degrees-of-freedom (DoF) robot arm. We used public and real-time images to train the YOLO algorithm on photographs of bird-eye chili captured in real-time. As the robot arm goes around the chili plants, this model is connected with the robot's software control system to allow real-time detection and localization of the chili's. By automating bird-eye chili crop monitoring and management, this system has the potential to significantly contribute to the growth and viability of the agricultural sector. We got a mAP of 0.94 and an average accuracy of 0.90 with the suggested method. Using a robotic manipulator for chili grading improves productivity and reduces human error compared to traditional methods. To test the robustness of the YOLO V5 framework, it has implemented on the Raspberry pi 4B graphical processing unit (GPU) computer.","PeriodicalId":135863,"journal":{"name":"Journal of Artificial Intelligence and Technology","volume":"53 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135036761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Indoor Positioning System Based on UWB Rapid Integration with Unity Cross Platform Development Engine through IoT","authors":"None Zikrul Hakiem Ishak, None Sallehuddin Mohamed Haris","doi":"10.37965/jait.2023.0365","DOIUrl":"https://doi.org/10.37965/jait.2023.0365","url":null,"abstract":"The changes in time have made divergences of endless possibilities in localization technology. Localization in an indoor environment is surely a concerning matter as several shortcomings always arise when dealing with indoor localization. To optimize localization in an indoor environment, tracking a subject’s position in real-time is a certainly vital interest. Challenges in obtaining an accurate position in precise millimetre accuracy whilst the subject perceive visual information rendered in real-time is somewhat always a matter in hand to address in an indoor environment. The main objective of this research is to implement a positioning method in an indoor environment base on ultra-wideband (UWB) technology to obtain position accuracy in millimetres by rapidly integrating with Unity three-dimensional (3D) engine hence obtaining a detailed inertial measurement unit (IMU) data via wireless Message Queuing Telemetry Transport (MQTT) network protocol. The key results of this research should ensure an establishment of an indoor positioning system based on providing the finest selection of positioning and UWB parameters. These fine selections are an important design choice impacting the system’s performance in obtaining an accurate position within the range of 0.15mm to 115mm. The technological benefits involve the innovation of wireless communication based on the internet of things (IoT) concept relevant to the Industrial Revolution 4.0 (IR 4.0) enabling participants to move freely in an indoor environment whilst obtaining accurate and precise positioning coordinates. Future recommendations comprise of real-time two-dimensional (2D) Pozyx Creator Controller integrated with Unity 3D user graphical user interface (GUI) purposes intended for mobile mapping navigations.","PeriodicalId":135863,"journal":{"name":"Journal of Artificial Intelligence and Technology","volume":"279 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136233447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cognitive Inspired & Computationally-Intelligent Early Melanoma Detection Using Feature Analysis Techniques","authors":"Sunil Gupta, Neha Sharma, Ritu Tyagi, Pardeep Singh, Alankrita Aggarwal, Sunil Chawla","doi":"10.37965/jait.2023.0334","DOIUrl":"https://doi.org/10.37965/jait.2023.0334","url":null,"abstract":"Melanoma is the most malignant kind of skin cancer, and it is responsible for the majority of deaths caused by skin cancer. However, this can be easily addressed by reverting to the standard method of damage removal if it is discovered in a timely manner. In this view, it is of the utmost importance to develop procedures for the early and reliable identification of melanomas. Since images for melanoma diagnosis are recorded in the clinic as an epiluminance image using a specific kind of equipment, the technique is machine-dependent. We make use of a method for managing images as well as a high-resolution shading image of a skin ulcer captured by a high-resolution camera or other device. Instead of depending on the epilumination images that are produced by the emergency equipment, the initial task that needs to be done in the present research is to capture high-resolution photographs of the skin injuries. The category limit will be determined with the use of machine learning, and then highlight extraction will take place. The research focuses on clinical photos obtained from fast cameras that were taken of individuals suffering from skin cancer. The problem of uneven illumination was kept at a strategic distance by the use of medial separation and pre-processing using the histogram. Utilisation of a brand new image segmentation method called \"Otsu\" for the extraction of sores. The extraction of ABCD (Asymmetry, Border, Color, and Dimension) involves the use of innovative methodologies as well as the Total Dermoscopic Value in order to characterise the weight coefficients. A solution to the problem of upgrading the categorization error that is based on machine learning. The parameters that are taken into consideration for evaluating the proposed model are senstivity, specificity, precision and accuracy and the results obtained are 1, 0.93, 0.93, and 0.93 respectively. It has been observed that the proposed model performs better as compared to the ones present in existing literature.","PeriodicalId":135863,"journal":{"name":"Journal of Artificial Intelligence and Technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135197871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Ensemble Classification Method Based on Deep Neural Networks for Breast Cancer Diagnosis","authors":"Yan Gao, None Amin Rezaeipanah","doi":"10.37965/jait.2023.0310","DOIUrl":"https://doi.org/10.37965/jait.2023.0310","url":null,"abstract":"Advances in technology have led to advances in breast cancer screening by detecting symptoms that doctors have overlooked. In this paper, an automatic detection system for breast cancer cases based on Internet of Things (IoT) is proposed. First, using IoT technology, direct medical images are sent to the data repository after the suspicious person's visit through medical equipment equipped with IoT. Then, in order to help radiologists, interpret medical images as best as possible, we use four pre-trained convolutional neural network models including InceptionResNetV2, InceptionV3, VGG19 and ResNet152. These models are combined by an ensemble classifier. Also, these models are used to accurately predict cases with breast cancer, healthy people, and cases with pneumonia by using two datasets of X-RAY and CT-scan in a three-class classification. Finally, the best result obtained for CT-scan images belongs to InceptionResNetV2 architecture with 99.36% accuracy and for X-RAY images belongs to InceptionV3 architecture with 96.94% accuracy. The results show that this method leads to a reduction in daily visits to medical centers and thus reduces the pressure on the medical care system. It also helps radiologists and medical staff to detect breast cancer in its early stages.","PeriodicalId":135863,"journal":{"name":"Journal of Artificial Intelligence and Technology","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135407204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}