{"title":"Parametric optimization for electrical discharge diamond grinding (EDDG) system using dual approach.","authors":"Vijay Kumar, Shailendra Kumar Jha","doi":"10.1080/0954898X.2025.2525564","DOIUrl":"https://doi.org/10.1080/0954898X.2025.2525564","url":null,"abstract":"<p><p>Generally, electrically conductive materials are extremely sturdy and stiff, electric discharge milling (EDM) is a broadly utilized method. The usage of diamond grinding together with EDM in a machine is called the \" and Electrical Discharge Diamond Grinding \" (EDDG) gadget is an extensively used method for producing strong, long-lasting electrically conductive substances. The Modified Ant Lion Optimization- Artificial Neural Network (MALO-ANN) technique is recommended to boost the performance of EDDG machine. The MALO technique improves the overall performance of ANN by optimizing hidden layers and weights, which are regularly the cause of issues in traditional models. Input factors, along with grit size, pulse-on/off duration, height modern and pulse-off duration, are analysed to see if they affect Material Removal Rate (MRR) along with Surface Roughness (SR). The findings suggest that the MALO-ANN method greatly enhances the parametric optimization of EDDG gadget. The result indicates tremendous ability in improving the efficiency of EDDG systems, because conventional ANN models regularly struggle because of insifficient hidden layers and weights. The best MRR and SR were obtained with an absolute error interval ranging from 1.03% to 4.49%, achieving a convergence rate of 89%, performing enhanced accuracy in EDDG processes.</p>","PeriodicalId":520718,"journal":{"name":"Network (Bristol, England)","volume":" ","pages":"1-26"},"PeriodicalIF":1.6,"publicationDate":"2025-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144819059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel lung cancer diagnosis model using hybrid convolution (2D/3D)-based adaptive DenseUnet with attention mechanism.
J Deepa, Liya Badhu Sasikala, P Indumathy, A Jerrin Simla
Network (Bristol, England), pp. 1-58, published 2025-08-05. DOI: 10.1080/0954898X.2025.2533871

Existing Lung Cancer Diagnosis (LCD) models have difficulty detecting early-stage lung cancer because the disease is asymptomatic, which leads to an increased death rate among patients. It is therefore important to diagnose lung disease at an early stage to save the lives of affected persons. Hence, this work develops an efficient lung disease diagnosis pipeline using deep learning techniques for the early and accurate detection of lung cancer. The proposed model first collects the required CT images from standard benchmark datasets. Lung cancer segmentation is then performed using the developed Hybrid Convolution (2D/3D)-based Adaptive DenseUnet with Attention mechanism (HC-ADAM). Hybrid Sewing Training with Spider Monkey Optimization (HSTSMO) is introduced to optimize the parameters of the HC-ADAM segmentation approach. Finally, the segmented lung nodule images are passed to the lung cancer classification stage, where a Hybrid Adaptive Dilated Network with Attention mechanism (HADN-AM) is implemented with a serial cascade of ResNet and Long Short-Term Memory (LSTM) to attain better categorization performance. The accuracy, precision, and F1-score of the developed model on the LIDC-IDRI dataset are 96.3%, 96.38%, and 96.36%, respectively.

AI-driven plant disease detection with tailored convolutional neural network.
Sk Mahmudul Hassan, Keshab Nath, Michal Jasinski, Arnab Kumar Maji
Network (Bristol, England), pp. 1-26, published 2025-07-31. DOI: 10.1080/0954898X.2025.2537680

In recent times, deep learning has been widely used in agriculture to identify crop diseases and to predict weather and crop yield. However, designing efficient deep learning models that are lightweight, cost-effective, and suitable for deployment on small devices remains a challenge. This paper addresses this gap by proposing a Convolutional Neural Network (CNN) architecture optimized with a Genetic Algorithm (GA) that automates the selection of critical hyperparameters, such as the number and size of filters, ensuring high performance with minimal computational overhead. In this work, we built our own tea leaf disease dataset consisting of three different tea leaf diseases: two caused by pests and one caused by pathogens (infectious organisms) and environmental conditions. The proposed genetic-algorithm-based CNN achieved an accuracy of 97.6% on the tea leaf disease dataset. To further validate its robustness, the model was tested on two additional datasets, PlantVillage and a rice leaf disease dataset, achieving accuracies of 96.99% and 99%, respectively. The proposed model is also compared with several state-of-the-art deep learning models, and the results show that it outperforms several DL architectures with fewer parameters.

{"title":"Hybrid optimization enabled Eff-FDMNet for Parkinson's disease detection and classification in federated learning.","authors":"Sangeetha Subramaniam, Umarani Balakrishnan","doi":"10.1080/0954898X.2025.2514187","DOIUrl":"https://doi.org/10.1080/0954898X.2025.2514187","url":null,"abstract":"<p><p>Parkinson's Disease (PD) is a progressive neurodegenerative disorder and the early diagnosis is crucial for managing symptoms and slowing disease progression. This paper proposes a framework named Federated Learning Enabled Waterwheel Shuffled Shepherd Optimization-based Efficient-Fuzzy Deep Maxout Network (FedL_WSSO based Eff-FDMNet) for PD detection and classification. In local training model, the input image from the database \"Image and Data Archive (IDA)\" is given for preprocessing that is performed using Gaussian filter. Consequently, image augmentation takes place and feature extraction is conducted. These processes are executed for every input image. Therefore, the collected outputs of images are used for PD detection using Shepard Convolutional Neural Network Fuzzy Zeiler and Fergus Net (ShCNN-Fuzzy-ZFNet). Then, PD classification is accomplished using Eff-FDMNet, which is trained using WSSO. At last, based on CAViaR, local updation and aggregation are changed in server. The developed method obtained highest accuracy as 0.927, mean average precision as 0.905, lowest false positive rate (FPR) as 0.082, loss as 0.073, Mean Squared Error (MSE) as 0.213, and Root Mean Squared Error (RMSE) as 0.461. The high accuracy and low error rates indicate that the potent framework can enhance patient outcomes by enabling more reliable and personalized diagnosis.</p>","PeriodicalId":520718,"journal":{"name":"Network (Bristol, England)","volume":" ","pages":"1-45"},"PeriodicalIF":1.6,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144763036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Layer modified residual Unet++ for speech enhancement using Aquila Black widow optimizer algorithm.","authors":"Thangappanpillai Murugan Minipriya, Ramadoss Rajavel","doi":"10.1080/0954898X.2025.2533866","DOIUrl":"10.1080/0954898X.2025.2533866","url":null,"abstract":"<p><p>Speech enhancement techniques face computational demands, well-developed datasets, and better quality speech signals. Deep learners help deal with different noise types; still, the challenges offered by environmental noises require highly efficient and robust systems. This paper presents a lightweight deep-learning design with a heuristic-inspired model for generating an enhanced speech signal from noisy speech data. The model aims to remove different environmental noises affecting the speech signal. The noisy speech data are converted into spectrograms with Short-Time Fourier Transform (STFT). The noisy spectrogram is processed through the newly developed speech enhancement model namely, Layer Modified Residual Unet++ (LMResUnet++). The developed LMResUnet++ is designed through an atrous convolution layer, and it can capture multi-scale information without additional training parameter requirements. Also, the design is made compactable through the proposed hybrid optimization algorithm namely, Aquila Black Widow Optimization (ABWO), and it optimizes various hyperparameters of the developed LMResUnet++. The final denoised spectrogram from the LMResUnet++ undergoes Inverse STFT, and the final enhanced speech signal is restored. Further, different experiments are held to prove the efficacy of the system. Results prove that the developed LMResUnet++ achieved PESQ values of 7.93%, 5.75%, 3.86%, and 1.90% improved than DeepUnet, MTCNN, STCNN, and ResUnet++ respectively.</p>","PeriodicalId":520718,"journal":{"name":"Network (Bristol, England)","volume":" ","pages":"1-49"},"PeriodicalIF":1.6,"publicationDate":"2025-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144736588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid deep learning model for image de-noising and de-mosaicking with adaptive Gannet optimization algorithm.
John Peter K, SylajaVallee Narayan S R, Muthuvairavan Pillai N, Predeep Kumar S P
Network (Bristol, England), pp. 1-27, published 2025-07-23. DOI: 10.1080/0954898X.2025.2529299

Image reconstruction is a critical step in applications such as art restoration, medical image processing, and agriculture, but it faces challenges due to noise and mosaic artefacts. This research introduces a novel approach to de-noising and de-mosaicking images that enhances image reconstruction quality. The proposed model consists of three main steps: detail layer extraction, image de-noising using an Efficient Generative Adversarial Network (E-GAN), and de-mosaicking using an Adaptive Gannet-based Residual DenseNet (AG_DenseResNet). The publicly available Kodak dataset is used to evaluate the proposed model. The results show that the proposed model outperforms conventional methods in terms of Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Mean Squared Error (MSE), and Learned Perceptual Image Patch Similarity (LPIPS), achieving values of 53.93, 0.98, 2.76, and 0.23, respectively.

CNN filter sizes, effects, limitations, and challenges: An exploratory study.
Mohamed Aboukhair, Fahad Alsheref, Adel Assiri, Abdelrahim Koura, Mohammed Kayed
Network (Bristol, England), pp. 1-29, published 2025-07-20. DOI: 10.1080/0954898X.2025.2533865

This study explores the impact of filter size on convolutional neural network (CNN) models, moving away from the common belief that small filters (3x3) give better results. The goal is to highlight the potential of large filters and encourage researchers to investigate their capabilities. Using large filters increases the computational cost, which leads many researchers to reduce the filter size to conserve that budget; however, other researchers note the potential of large filters to enhance the performance of CNN models. Currently, few pure CNN models achieve optimal performance with large filters, showing how little the topic of large filter sizes has been addressed by the community. As available computing power and image sizes increase, the traditional obstacles that keep researchers from using large filter sizes will gradually diminish. This paper guides researchers by analysing the limitations, challenges, and impacts of CNN filter sizes across different CNN architectures, helping them exploit the distinctive opportunities and potential of large filters. To our knowledge, we identify four such opportunities. A comprehensive comparison of studies on different CNN architectures shows a bias toward small filters (3x3) and the possible potential of large filters.

{"title":"Hybrid optimization with constraints handling for combinatorial test case prioritization problems.","authors":"Selvakumar J, Sudhir Sharma, Mukesh Kumar Tripathi","doi":"10.1080/0954898X.2025.2517130","DOIUrl":"https://doi.org/10.1080/0954898X.2025.2517130","url":null,"abstract":"<p><p>In software development, software testing is very crucial for developing good quality software, where the effectiveness of software is to be tested. For software testing, test suites and test cases need to be prepared in minimum execution time with the test case prioritization (TCP) problems. Generally, some of the researchers mainly focus on the constraint problems, such as time and fault on TCP. In this research, the novel Fractional Hybrid Leader Based Optimization (FHLO) is introduced with constraint handling for combinatorial TCP. To detect faults earlier, the TCP is an important technique as it reduces the regression testing cost and prioritizes the test case execution. Based on the detected fault and branch coverage, the priority of the test case for program execution is decided. Furthermore, the FHLO algorithm establishes the TCP for detecting the program fault, which prioritizes the test case, and relies on maximum values of Average Percentage of Branch Coverage (APBC) and Average Percentage of Fault Detected (APFD). From the analysis, the devised FHLO algorithm attains a maximum value of 0.966 for APFD and 0.888 for APBC.</p>","PeriodicalId":520718,"journal":{"name":"Network (Bristol, England)","volume":" ","pages":"1-31"},"PeriodicalIF":0.0,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144610954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gradient energy valley optimization enabled segmentation and Spinal VGG-16 Net for brain tumour detection.
Kishore Bhamidipati, G Anuradha, Satish Muppidi, S Anjali Devi
Network (Bristol, England), pp. 1-35, published 2025-06-23. DOI: 10.1080/0954898X.2025.2513690

The abnormal growth of brain cells, known as a Brain Tumour (BT), can cause serious damage to blood vessels and nerves in the human body. Precise and early detection of BT is of foremost importance in preventing severe illness. Thus, a SpinalNet Visual Geometry Group-16 (Spinal VGG-16 Net) is introduced for early BT detection. First, Magnetic Resonance Imaging (MRI) images obtained from the data sample are denoised with a bilateral filter. The BT area is then segmented from the image using entropy-based Kapur thresholding, where the threshold values are optimally selected by Gradient Energy Valley Optimization (GEVO), designed by incorporating Energy Valley Optimization (EVO) with the Stochastic Gradient Descent (SGD) algorithm. Image augmentation is then applied, and feature extraction is performed to mine the most significant features. Finally, BT is detected using the proposed Spinal VGG-16 Net, which combines SpinalNet and VGG-16 Net. Compared with several existing schemes, the Spinal VGG-16 Net attained a maximum accuracy of 92.14%, True Positive Rate (TPR) of 93.16%, True Negative Rate (TNR) of 91.35%, Negative Predictive Value (NPV) of 89.73%, and Positive Predictive Value (PPV) of 92.13%.

{"title":"A smoothing gradient-based neural network strategy for solving semidefinite programming problems.","authors":"Asiye Nikseresht, Alireza Nazemi","doi":"10.1080/0954898X.2022.2104463","DOIUrl":"https://doi.org/10.1080/0954898X.2022.2104463","url":null,"abstract":"<p><p>Linear semidefinite programming problems have received a lot of attentions because of large variety of applications. This paper deals with a smooth gradient neural network scheme for solving semidefinite programming problems. According to some properties of convex analysis and using a merit function in matrix form, a neural network model is constructed. It is shown that the proposed neural network is asymptotically stable and converges to an exact optimal solution of the semidefinite programming problem. Numerical simulations are given to show that the numerical behaviours are in good agreement with the theoretical results.</p>","PeriodicalId":520718,"journal":{"name":"Network (Bristol, England)","volume":" ","pages":"187-213"},"PeriodicalIF":7.8,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40668778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}