{"title":"Hybrid machine learning-based 3-dimensional UAV node localization for UAV-assisted wireless networks","authors":"Workeneh Geleta Negassa, Demissie J. Gelmecha, Ram Sewak Singh, Davinder Singh Rathee","doi":"10.1016/j.cogr.2025.01.002","DOIUrl":"10.1016/j.cogr.2025.01.002","url":null,"abstract":"<div><div>This paper presents a hybrid machine-learning framework for optimizing 3-Dimensional (3D) Unmanned Aerial Vehicles (UAV) node localization and resource distribution in UAV-assisted THz 6G networks to ensure efficient coverage in dynamic, high-density environments. The proposed model efficiently managed interference, adapted to UAV mobility, and ensured optimal throughput by dynamically optimizing UAV trajectories. The hybrid framework combined the strengths of Graph Neural Networks (GNN) for feature aggregation, Deep Neural Networks (DNN) for efficient resource allocation, and Double Deep Q-Networks (DDQN) for distributed decision-making. Simulation results demonstrated that the proposed model outperformed traditional machine learning models, significantly improving energy efficiency, latency, and throughput. The hybrid model achieved an optimized energy efficiency of 90 Tbps/J, reduced latency to 0.0105 ms, and delivered a network throughput of approximately 96 Tbps. The model adapts to varying link densities, maintaining stable performance even in high-density scenarios. 
These findings underscore the framework's potential to address key challenges in UAV-assisted 6G networks, paving the way for scalable and efficient communication in next-generation wireless systems.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 61-76"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A transformation model for vision-based navigation of agricultural robots","authors":"Abdelkrim Abanay , Lhoussaine Masmoudi , Dirar Benkhedra , Khalid El Amraoui , Mouataz Lghoul , Javier-Gonzalez Jimenez , Francisco-Angel Moreno","doi":"10.1016/j.cogr.2025.03.002","DOIUrl":"10.1016/j.cogr.2025.03.002","url":null,"abstract":"<div><div>This paper presents a Top-view Transformation Model (TTM) for a vision-based autonomous navigation of an agricultural mobile robot. The TTM transforms images captured by an onboard camera into a virtual Top-view, eliminating perspective distortions such as the vanishing point effect and ensuring uniform pixel distribution. The transformed images are analyzed to ensure an autonomous navigation of the robot between crop rows. The navigation method involves real-time estimation of the robot's position relative to crop rows and the control low is derived from the estimated robot's heading and lateral offset for steering the robot along the crop rows. A simulated scenario has been generated in Gazebo in order to implement the developed approach using the Robot Operating System (ROS), while an evaluation on a real agricultural mobile robot has also been performed. The experimental results demonstrate the feasibility of the TTM approach and its implementation for autonomous navigation, reaching good performance.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 140-151"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143759893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improvement of multi-parameter anomaly detection method: Addition of a relational token between parameters","authors":"Hironori Uchida , Keitaro Tominaga , Hideki Itai , Yujie Li , Yoshihisa Nakatoh","doi":"10.1016/j.cogr.2025.03.004","DOIUrl":"10.1016/j.cogr.2025.03.004","url":null,"abstract":"<div><div>In the continuous development of systems, the increasing volume and complexity of data that engineers must analyze have become significant challenges. To address this issue, extensive research has been conducted on automated anomaly detection in logs. However, due to the limited variety of available datasets, most studies have focused on sequence-based anomalies in logs, with relatively little attention paid to parameter-based anomaly detection. To bridge this gap, we prepared a labeled dataset specifically designed for parameter-based anomaly detection and propose a novel method utilizing BERTMaskedLM. Since continuously changing logs in system development are difficult to label, we also propose a method that enables learning without labeled data. Previous studies have employed BERTMaskedLM to capture relationships between parameters in multi-parameter logs for anomaly detection. However, a known issue arises when the ranges of numerical parameters overlap, resulting in reduced detection accuracy. To mitigate this, we introduced tokens that encode the relationships between parameters, improving the independence of parameter combinations and enhancing anomaly detection accuracy (increasing the F1-score by more than 0.002). In this study, we employed a simple yet effective approach by using the total value of each token as the added token. Since only the parameter portions vary within the same log template structure, these proposed tokens effectively capture the relationships between parameters. 
Additionally, we visualized the influence of the added tokens and conducted experiments using a new dataset to assess the reliability of our proposed method.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 176-191"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143868045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multi-view graph neural network approach for magnetic resonance imaging-based diagnosis of knee injuries","authors":"Biyong Deng , Jiashan Pan , Xiaoyu Tang , Haitao Fu , Shushan Hu","doi":"10.1016/j.cogr.2025.05.001","DOIUrl":"10.1016/j.cogr.2025.05.001","url":null,"abstract":"<div><div>The knee plays a pivotal role in the human anatomy, serving as a cornerstone for support, mobility, shock attenuation, and balance. Currently, magnetic resonance imaging (MRI) remains the preferred method for diagnosing knee injuries, including anterior cruciate ligament (ACL) tears and meniscal tears, due to its efficiency and accuracy in medical imaging. However, the interpretation and understanding of knee MRI images are time-consuming, laborious, require sufficient expertise, and are also prone to diagnostic errors. Thus, it is imperative to devise a computational method employing knee MRI for intelligent diagnosis of knee injuries, as this could expedite medical assessments by physicians, reduce costs, and substantially reduce the risk of misdiagnosis. Although several computational methods have been proposed to diagnose knee injuries, most rely heavily on local features in MRI images and exhibit low prediction accuracy. In this paper, we proposed a novel multi-view graph neural network, abbreviated as MVGNN, to identify knee injuries (specifically ACL tears and meniscal tears) by leveraging graph representations derived from multiple MRI views. 
Comprehensive experiments demonstrate that MVGNN achieves state-of-the-art results for diagnosing knee injuries, with a 5.9% improvement in accuracy on ACL data and a 6.5% improvement on Men data, compared to the second-best method, MVCNN.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 201-210"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144106866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robotic terrain classification based on convolutional and long short-term memory neural networks","authors":"YiGe Hu","doi":"10.1016/j.cogr.2025.04.002","DOIUrl":"10.1016/j.cogr.2025.04.002","url":null,"abstract":"<div><div>Robotic mobility remains constrained by complex terrains and technological limitations, hindering real-world applications. This study presents a terrain classification framework integrating Fourier transform, adaptive filtering, and deep learning to enhance adaptability. Leveraging CNNs, LSTMs, and an attention mechanism, the approach improves feature fusion and classification accuracy. Evaluations on the Tampere University dataset demonstrate an 81 % classification accuracy, validating its effectiveness in terrain perception and autonomous navigation. The findings contribute to advancing robotic mobility in unstructured environments.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 166-175"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143864324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design cloud computing to monitor and controller for high voltage networks 400 KV","authors":"Hamed Khudair Khalil, Laith Ali Abdul Rahaim, Shamam Fadhil Alwash","doi":"10.1016/j.cogr.2025.03.005","DOIUrl":"10.1016/j.cogr.2025.03.005","url":null,"abstract":"<div><div>A high-voltage network (400 kV) is a system that has multiple control and communication elements and acts as a link between generating stations and transmission lines; it is considered one of the smart networks. The advantage of a smart grid over a traditional utility grid is that it uses a two-way communication mechanism. The monitoring and control system for this network utilizes SCADA and RTU, but it comes at a high cost. Nonetheless, it is preferable to have a system that is economical, intelligent, and dependable. In this research, we will design a remote monitoring and control system for high-voltage networks using cloud computing technology with IoT applications that support the above-mentioned systems and can be developed in case of any expansion in electrical networks. We use this system to remotely monitor smart network equipment and control the closing and opening of breakers using protection relays and sensors. This proposed system uses the ESP 32 microcontroller to send warning signals to remote operators via the Internet, utilizing the MQTT protocol. 
This system utilizes the Thing Board platform in conjunction with Quick Set (5030) software, enabling control via a laptop and smartphone.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 192-200"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143922363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LiPE: Lightweight human pose estimator for mobile applications towards automated pose analysis","authors":"Chengxiu Li , Ni Duan","doi":"10.1016/j.cogr.2024.11.005","DOIUrl":"10.1016/j.cogr.2024.11.005","url":null,"abstract":"<div><div>Current human pose estimation models adopt heavy backbones and complex feature enhance- ment modules to pursue higher accuracy. However, they ignore the need for model efficiency in real-world applications. In real-world scenarios such as sports teaching and automated sports analysis for better preservation of traditional folk sports, human pose estimation often needs to be performed on mobile devices with limited computing resources. In this paper, we propose a lightweight human pose estimator termed LiPE. LiPE adopts a lightweight MobileNetV2 backbone for feature extraction and lightweight depthwise separable deconvolution modules for upsampling. Predictions are made at a high resolution with a lightweight prediction head. Compared with the baseline, our model reduces MACs by 93.2 %, and reduces the number of parameters by 93.9 %, while the accuracy drops by only 3.2 %. Based on LiPE, we develop a real- time human pose estimation and evaluation system for automated pose analysis. Experimental results show that our LiPE achieves high computational efficiency and good accuracy for application on mobile devices.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 26-36"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mobile robot path planning using deep deterministic policy gradient with differential gaming (DDPG-DG) exploration","authors":"Shripad V. Deshpande , Harikrishnan R , Babul Salam KSM Kader Ibrahim , Mahesh Datta Sai Ponnuru","doi":"10.1016/j.cogr.2024.08.002","DOIUrl":"10.1016/j.cogr.2024.08.002","url":null,"abstract":"<div><p>Mobile robot path planning involves decision-making in uncertain, dynamic conditions, where Reinforcement Learning (RL) algorithms excel in generating safe and optimal paths. The Deep Deterministic Policy Gradient (DDPG) is an RL technique focused on mobile robot navigation. RL algorithms must balance exploitation and exploration to enable effective learning. The balance between these actions directly impacts learning efficiency.</p><p>This research proposes a method combining the DDPG strategy for exploitation with the Differential Gaming (DG) strategy for exploration. The DG algorithm ensures the mobile robot always reaches its target without collisions, thereby adding positive learning episodes to the memory buffer. An epsilon-greedy strategy determines whether to explore or exploit. When exploration is chosen, the DG algorithm is employed. The combination of DG strategy with DDPG facilitates faster learning by increasing the number of successful episodes and reducing the number of failure episodes in the experience buffer. The DDPG algorithm supports continuous state and action spaces, resulting in smoother, non-jerky movements and improved control over the turns when navigating obstacles. Reward shaping considers finer details, ensuring even small advantages in each iteration contribute to learning.</p><p>Through diverse test scenarios, it is demonstrated that DG exploration, compared to random exploration, results in an average increase of 389% in successful target reaches and a 39% decrease in collisions. 
Additionally, DG exploration shows a 69% improvement in the number of episodes where convergence is achieved within a maximum of 2000 steps.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"4 ","pages":"Pages 156-173"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241324000119/pdfft?md5=8c083de5d6ac1af9d3cedcb0733a30fa&pid=1-s2.0-S2667241324000119-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142271646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emerging trends in human upper extremity rehabilitation robot","authors":"Sk. Khairul Hasan, Subodh B. Bhujel, Gabrielle Sara Niemiec","doi":"10.1016/j.cogr.2024.09.001","DOIUrl":"10.1016/j.cogr.2024.09.001","url":null,"abstract":"<div><p>Stroke is a leading cause of neurological disorders that result in physical disability, particularly among the elderly. Neurorehabilitation plays a crucial role in helping stroke patients recover from physical impairments and regain mobility. Physical therapy is one of the most effective forms of neurorehabilitation, but the growing number of patients requires a large workforce of trained therapists, which is currently insufficient. Robotic rehabilitation offers a promising alternative, capable of supplementing or even replacing human-assisted physical therapy through the use of rehabilitation robots. To design effective robotic devices for rehabilitation, a solid foundation of knowledge is essential. This article provides a comprehensive overview of the key elements needed to develop human upper extremity rehabilitation robots. It covers critical aspects such as upper extremity anatomy, joint range of motion, anthropometric parameters, disability assessment techniques, and robot-assisted training methods. Additionally, it reviews recent advancements in rehabilitation robots, including exoskeletons, end-effector-based robots, and planar robots. 
The article also evaluates existing upper extremity rehabilitation robots based on their mechanical design and functionality, identifies their limitations, and suggests future research directions for further improvement.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"4 ","pages":"Pages 174-190"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241324000120/pdfft?md5=a51e80d94f3f2f6ca53c667c4682ef83&pid=1-s2.0-S2667241324000120-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142271647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fourier Hilbert: The input transformation to enhance CNN models for speech emotion recognition","authors":"Bao Long Ly","doi":"10.1016/j.cogr.2024.11.002","DOIUrl":"10.1016/j.cogr.2024.11.002","url":null,"abstract":"<div><div>Signal processing in general, and speech emotion recognition in particular, have long been familiar Artificial Intelligence (AI) tasks. With the explosion of deep learning, CNN models are used more frequently, accompanied by the emergence of many signal transformations. However, these methods often require significant hardware and runtime. In an effort to address these issues, we analyze and learn from existing transformations, leading us to propose a new method: Fourier Hilbert Transformation (FHT). In general, this method applies the Hilbert curve to Fourier images. The resulting images are small and dense, which is a shape well-suited to the CNN architecture. Additionally, the better distribution of information on the image allows the filters to fully utilize their power. These points support the argument that FHT provides an optimal input for CNN. Experiments conducted on popular datasets yielded promising results. FHT saves a large amount of hardware usage and runtime while maintaining high performance, even offers greater stability compared to existing methods. 
This opens up opportunities for deploying signal processing tasks on real-time systems with limited hardware.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"4 ","pages":"Pages 228-236"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142748300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}