{"title":"Automatic Lung Cancer Detection Using Computed Tomography Based on Chan Vese Segmentation and SENET","authors":"C. S. Parvathy, J. P. Jayan","doi":"10.3103/S1060992X2470022X","DOIUrl":"10.3103/S1060992X2470022X","url":null,"abstract":"<p>Lung cancer is the most common cancer and the primary cause of cancer-related fatalities globally. Lung cancer patients have a 14% overall survival rate. If the cancer is found in its early stages, patients’ lives may be saved. A variety of conventional machine and deep learning algorithms have been developed for effective automatic diagnosis of lung cancer, but they still suffer from limited recognition accuracy and long analysis times. To overcome these issues, this paper presents a deep-learning-assisted Squeeze-and-Excitation Convolutional Neural Network (SENET) to predict lung cancer from computed tomography (CT) images. The raw lung CT images are preprocessed using an Adaptive Bilateral Filter (ABF) and Reformed Histogram Equalization (RHE) to remove noise and enhance image clarity. The Tuna Swarm Optimization algorithm is used to determine the tunable parameters of the RHE approach. The preprocessed image is then passed to the segmentation stage, where the Chan–Vese segmentation model is used to partition the image. The segmentation output is then fed into the SENET classifier for the final lung cancer prediction. The test results demonstrate that the proposed model can identify lung cancer with 99.2% accuracy, 99.1% precision, and 0.8% error. The proposed SENET system successfully predicts lung cancer from CT scan images.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 3","pages":"339 - 354"},"PeriodicalIF":1.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142414131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancement of Neural Network Performance with the Use of Two Novel Activation Functions: modExp and modExpm","authors":"Heena Kalim, Anuradha Chug, Amit Prakash Singh","doi":"10.3103/S1060992X24700152","DOIUrl":"10.3103/S1060992X24700152","url":null,"abstract":"<p>The paper introduces two novel activation functions known as modExp and modExp<sub>m</sub>. The activation functions possess several desirable properties, such as being continuously differentiable, bounded, smooth, and non-monotonic. Our studies have shown that modExp and modExp<sub>m</sub> consistently outperform ReLU and other activation functions across a range of challenging datasets and complex models. Initially, the experiments involve training and classifying with a multi-layer perceptron (MLP) on benchmark datasets such as the Diagnostic Wisconsin Breast Cancer and Iris Flower datasets. Both modExp and modExp<sub>m</sub> demonstrate impressive performance, each achieving 94.15 and 95.56% respectively on these datasets, when compared to ReLU, ELU, Tanh, Mish, Softsign, Leaky ReLU, and TanhExp. In addition, a series of experiments was carried out on five deeper neural network configurations, ranging from five to eight layers, using the MNIST dataset. The modExp<sub>m</sub> activation function demonstrated superior accuracy across the various configurations, achieving 95.56, 95.43, 94.72, 95.14, and 95.61% on the wider 5-layer, slimmer 5-layer, 6-layer, 7-layer, and 8-layer networks respectively. The modExp activation function also performed well, achieving the second-highest accuracies of 95.42, 94.33, 94.76, 95.06, and 95.37% on the same network configurations, outperforming ReLU, ELU, Tanh, Mish, Softsign, Leaky ReLU, and TanhExp. The statistical feature measures show that both activation functions have the highest mean accuracy and the lowest standard deviation, variance, Root Mean Squared Error, and Mean Squared Error. According to the experiments, both functions converge more quickly than ReLU, which is a significant advantage in neural network learning.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 3","pages":"286 - 301"},"PeriodicalIF":1.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142414006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Recognition Capacity of a Phase Neural Network","authors":"B. V. Kryzhanovsky","doi":"10.3103/S1060992X24700188","DOIUrl":"10.3103/S1060992X24700188","url":null,"abstract":"<p>The paper studies the properties of a fully connected neural network built around phase neurons. The signals traveling through the interconnections of the network are unit pulses with fixed phases. The phases encoding the components of associative memory vectors are distributed at random within the interval [0, 2π]. The simplest case in which the connection matrix is defined according to Hebbian learning rule is considered. The Chernov–Chebyshev technique, which is independent of the type of distribution of encoding phases, is used to evaluate the recognition error. The associative memory of this type of network is shown to be four times as large as that of a conventional Hopfield-type network using binary patterns. Correspondingly, the radius of the domain of attraction is also four times larger.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 3","pages":"259 - 263"},"PeriodicalIF":1.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142414036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Numerical Analysis of All-Optical Binary to Gray Code Converter Using Silicon Microring Resonator","authors":"Manjur Hossain, Kalimuddin Mondal","doi":"10.3103/S1060992X24700085","DOIUrl":"10.3103/S1060992X24700085","url":null,"abstract":"<p>The present manuscript designs and numerically analyzes an all-optical binary-to-Gray-code (BTGC) converter utilizing a silicon microring resonator. A waveguide-based silicon microring resonator has been employed to achieve optical switching under low-power conditions using the two-photon absorption effect. Gray code (GC) is a binary numerical system in which two consecutive codes differ by only one bit. The GC is critical in optical communication because it prevents spurious output from optical switches and facilitates error correction. MATLAB is used to design and analyze the architecture at an operational speed of almost 260 Gbps. The faster response times and compact design of the demonstrated circuits make them especially useful for optical communication systems. Performance-indicating factors are evaluated from the MATLAB results and analyzed. Optimized design parameters have been chosen so that the model can be constructed practically.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"193 - 204"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DAGM-Mono: Deformable Attention-Guided Modeling for Monocular 3D Reconstruction","authors":"Youshaa Murhij, Dmitry Yudin","doi":"10.3103/S1060992X2470005X","DOIUrl":"10.3103/S1060992X2470005X","url":null,"abstract":"<p>Accurate 3D pose estimation and shape reconstruction from monocular images is a challenging task in the field of autonomous driving. Our work introduces a novel approach to solve this task for vehicles, called Deformable Attention-Guided Modeling for Monocular 3D Reconstruction (DAGM-Mono). Our proposed solution addresses the challenge of detailed shape reconstruction by leveraging deformable attention mechanisms. Specifically, given 2D primitives, DAGM-Mono reconstructs vehicle shapes using deformable attention-guided modeling, considering the relevance between detected objects and vehicle shape priors. Our method introduces two additional loss functions, Chamfer Distance (CD) and Hierarchical Chamfer Distance, to enhance shape reconstruction by additionally capturing fine-grained shape details at different scales. Our bi-contextual deformable attention framework estimates 3D object pose, capturing both inter-object relations and scene context. Experiments on the ApolloCar3D dataset demonstrate that DAGM-Mono achieves state-of-the-art performance and significantly enhances the performance of mature monocular 3D object detectors. Code and data are publicly available at: https://github.com/YoushaaMurhij/DAGM-Mono.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"144 - 156"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stacked BI-LSTM and E-Optimized CNN-A Hybrid Deep Learning Model for Stock Price Prediction","authors":"Swarnalata Rath, Nilima R. Das, Binod Kumar Pattanayak","doi":"10.3103/S1060992X24700024","DOIUrl":"10.3103/S1060992X24700024","url":null,"abstract":"<p>Univariate stocks and multivariate equities are increasingly common due to partnerships. Accurate future stock predictions benefit investors and stakeholders. Although the study has limitations, hybrid architectures can outperform a single deep learning (DL) approach in price prediction. This study presents a hybrid attention-based optimal DL model that leverages multiple neural networks to enhance stock price prediction accuracy. The model strategically optimizes individual model components, extracting crucial insights from stock price time series data. The process involves initial pre-processing, wavelet-transform denoising, and min-max normalization, followed by division of the data into training and test sets. The proposed model integrates stacked Bi-directional Long Short Term Memory (Bi-LSTM) networks, an attention module, and an Equilibrium-optimized 1D Convolutional Neural Network (CNN). The stacked Bi-LSTM networks extract enriched temporal features, while the attention mechanism reduces historical data loss and highlights significant information. A dropout layer with tailored dropout rates is introduced to address overfitting. The Conv1D layer within the 1D CNN detects abrupt data changes using residual features from the dropout layer. The model incorporates Equilibrium Optimization (EO) for training the CNN, allowing the algorithm to select optimal weights based on mean square error. Model efficiency is evaluated through diverse metrics, including Mean Absolute Error (MAE), Mean Square Error (MSE), Root Mean Square Error (RMSE), and R-squared (R2), to confirm the model’s predictive performance.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"102 - 120"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Latent Semantic Index Based Feature Reduction for Enhanced Severity Prediction of Road Accidents","authors":"Saurabh Jaglan, Sunita Kumari, Praveen Aggarwal","doi":"10.3103/S1060992X24700103","DOIUrl":"10.3103/S1060992X24700103","url":null,"abstract":"<p>Traditional approaches cannot analyse road accident severity across different road characteristics, areas, and injury types. Hence, a road accident severity prediction model with variable factors is designed using the ANN algorithm. In this model, past accident records with road characteristics are obtained and pre-processed using adaptive data cleaning and min-max normalization; these techniques remove noise from the collected data and separate the data according to their relations. The Pearson correlation coefficient is utilized to select features from the pre-processed data, and the ANN algorithm is used to train and validate these features. The proposed model achieves 99, 98, 99 and 98% for accuracy, precision, specificity and recall, respectively. Thus, the designed road accident severity prediction model with variable factors using the ANN algorithm performs better than existing techniques.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"221 - 235"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transfer Learning Based Face Emotion Recognition Using Meshed Faces and Oval Cropping: A Novel Approach","authors":"Ennaji Fatima Zohra, El Kabtane Hamada","doi":"10.3103/S1060992X24700073","DOIUrl":"10.3103/S1060992X24700073","url":null,"abstract":"<p>The potential applications of emotion recognition from facial expressions have generated considerable interest across multiple domains, encompassing areas such as human-computer interaction, camera systems, and mental health analysis. In this article, a novel approach is proposed for face emotion recognition (FER) using several data preprocessing and feature extraction steps, such as Face Mesh, data augmentation, and oval cropping of the faces. Transfer learning using the VGG19 architecture and a Deep Convolutional Neural Network (DCNN) has been proposed. We demonstrate the effectiveness of the proposed approach through extensive experiments on the Cohn-Kanade+ (CK+) dataset, comparing it with existing state-of-the-art methods. An accuracy of 99.79% was achieved using VGG19. Finally, a set of images generated by an AI tool that produces images from textual descriptions was collected and tested using our model. The results indicate that the solution achieves superior performance, offering a promising approach for accurate, real-time face emotion recognition.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"178 - 192"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Aspect Based Suggestion Classification Using Deep Neural Network and Principal Component Analysis with Honey Badger Optimization","authors":"Nandula Anuradha, Panuganti VijayaPal Reddy","doi":"10.3103/S1060992X24700036","DOIUrl":"10.3103/S1060992X24700036","url":null,"abstract":"<p>Aspect-based suggestion classification is the process of analyzing the aspect of a review and classifying it as a suggestion or non-suggestion comment. Today, online reviews are becoming an increasingly popular way to express suggestions. Manually analyzing and extracting recommendations from such a large volume of reviews is practically impossible, and existing algorithms yield low accuracy with many errors. A deep-learning-based DNN (Deep Neural Network) is created to address these problems. Raw data are collected and pre-processed to remove unnecessary content. After that, a count vectorizer is utilized to convert the words into vectors and to extract features from the data. The dimension of the feature vector is then reduced by applying a hybrid PCA-HBA (Principal Component Analysis-Honey Badger Algorithm), where HBA optimization selects the optimal number of components to enhance the accuracy of the proposed model. The features are then classified using two trained deep neural networks: one identifies the aspect of the review, and the other identifies whether the aspect is a suggestion or a non-suggestion. The experimental analysis shows that the proposed approach achieves 93% accuracy and 93% specificity for aspect identification, as well as 87% accuracy and 66% specificity for suggestion classification. Thus, the designed model is a strong choice for aspect-based suggestion classification.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"121 - 132"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved Equilibrium Optimizer for Accurate Training of Feedforward Neural Networks","authors":"Seyed Sina Mohammadi, Mohammadreza Salehirad, Mohammad Mollaie Emamzadeh, Mojtaba Barkhordari Yazdi","doi":"10.3103/S1060992X24700048","DOIUrl":"10.3103/S1060992X24700048","url":null,"abstract":"<p>One of the most demanding applications of accurate Artificial Neural Networks (ANNs) is in medical fields, mainly for making critical decisions. To achieve this goal, an efficient optimization and training method is required to tune the parameters of the ANN and to reach their globally optimal values. The Equilibrium Optimizer (EO) has recently been introduced to solve optimization problems more reliably than other optimization methods, owing to its ability to escape local optima and reach the global optimum. In this paper, to achieve higher performance, some modifications are applied to the EO algorithm, and the Improved Equilibrium Optimizer (IEO) method is presented, which has sufficient accuracy and reliability to be used in critical medical applications. This IEO approach is then utilized to train the ANN, yielding the IEO-ANN algorithm. The proposed IEO-ANN is applied to real-world medical problems such as breast cancer detection and heart failure prediction. The results of IEO are compared with those of four other well-known approaches: EO, Particle Swarm Optimizer (PSO), Salp Swarm Optimizer (SSO), and Back Propagation (BP). The recorded results show that the proposed IEO algorithm has much higher prediction accuracy than the others. Therefore, the presented IEO can improve the accuracy and convergence rate of neural network training, making the proposed IEO-ANN a suitable classification and prediction approach for critical medical decisions where high accuracy is needed.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"133 - 143"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}