{"title":"Navigation control of unmanned aerial vehicles in dynamic collaborative indoor environment using probability fuzzy logic approach","authors":"Sameer Agrawal , Bhumeshwar K. Patle , Sudarshan Sanap","doi":"10.1016/j.cogr.2025.02.002","DOIUrl":"10.1016/j.cogr.2025.02.002","url":null,"abstract":"<div><div>The growing use of drones in various applications makes it essential to address the critical issue of providing collision-free and optimal navigation in uncertain environments. The current research work aims to develop, simulate and experimentally validate a Probability Fuzzy Logic (PFL) controller for route planning and obstacle avoidance for drones in uncertain static and dynamic environments. The PFL system uses probability-based impact assessment and fuzzy logic rules to deal with unknowns and environmental changes. The fuzzy logic system takes as inputs the distances of obstacles from the drone's front, left, and right sides, as well as the probability of collision based on the drone's speed and its proximity to the obstacles. A set of thirty fuzzy rules, based on the distance of the obstacle from the front, left, and right, is defined to decide the outputs, i.e., the drone's speed and heading angle. The simulation environment is developed using MATLAB, with grid-based motion planning that accounts for static and dynamic obstacles. The system's performance is validated through simulations and real-world experiments, comparing path length and travel time. On comparing the simulation and experimental results, the proposed PFL-based controller proves efficient, accurate, and robust in both static and dynamic environments, from simple to complex. The drone plans the shortest collision-free path across all the scenarios, as depicted in the simulation and experimentation results. However, due to communication delay, sensor inaccuracy, environmental effects and motor delay, there are slight deviations between the simulation and experimental values. Error analysis shows that the deviation between the simulation and experimental values is within 6.66 % in all the studied scenarios.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 86-113"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143579569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
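As an illustration of the rule-based mapping the abstract describes, here is a minimal Sugeno-style fuzzy sketch in Python. All membership ranges, rule consequents, and names (`tri`, `fuzzy_speed`) are assumptions for illustration, not the paper's thirty-rule base:

```python
# Illustrative fuzzy speed command: front-obstacle distance and collision
# probability in, commanded speed out. Ranges and consequents are assumed.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(front_dist_m, p_collision):
    # Input memberships (ranges are assumed, not taken from the paper).
    near = tri(front_dist_m, -0.1, 0.0, 2.0)
    far = tri(front_dist_m, 1.0, 5.0, 10.1)
    p_low = tri(p_collision, -0.1, 0.0, 0.6)
    p_high = tri(p_collision, 0.4, 1.0, 1.1)

    # Rule firing strengths (min as AND) with crisp consequent speeds in m/s.
    rules = [
        (min(near, p_high), 0.0),  # near obstacle, likely collision -> stop
        (min(near, p_low), 0.5),   # near but unlikely -> creep forward
        (min(far, p_low), 2.0),    # clear path -> cruise
        (min(far, p_high), 1.0),   # far but risky -> slow cruise
    ]
    # Weighted-average defuzzification.
    total = sum(w for w, _ in rules)
    return sum(w * v for w, v in rules) / total if total else 0.0
```

A heading-angle output would be defuzzified the same way from the left/right distance memberships.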
{"title":"Zero-shot intelligent fault diagnosis via semantic fusion embedding","authors":"Honghua Xu, Zijian Hu, Ziqiang Xu, Qilong Qian","doi":"10.1016/j.cogr.2024.12.001","DOIUrl":"10.1016/j.cogr.2024.12.001","url":null,"abstract":"<div><div>Most fault diagnosis studies rely on man-made data collected in the laboratory, where the operating conditions are controlled and stable. However, such approaches can hardly adapt to practical conditions, since man-made data can hardly model fault patterns across domains. Aiming to solve this problem, this paper proposes a novel deep fault semantic fusion embedding model (DFSFEM) to realize zero-shot intelligent fault diagnosis. The novelties of DFSFEM lie in two aspects. On the one hand, a novel semantic fusion embedding module is proposed to enhance the representability and adaptability of feature learning across domains. On the other hand, a neural network-based metric module is designed to replace traditional distance measurements, enhancing the transfer capability between domains. These novelties jointly help DFSFEM provide faithful diagnoses of unseen fault types. Experiments on bearing datasets are conducted to evaluate zero-shot intelligent fault diagnosis performance. Extensive experimental results and comprehensive analysis demonstrate the superiority of the proposed DFSFEM in terms of diagnosis correctness and adaptability.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 37-47"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DECTNet: A detail enhanced CNN-Transformer network for single-image deraining","authors":"Liping Wang , Guangwei Gao","doi":"10.1016/j.cogr.2024.12.002","DOIUrl":"10.1016/j.cogr.2024.12.002","url":null,"abstract":"<div><div>Recently, Convolutional Neural Networks (CNN) and Transformers have been widely adopted in image restoration tasks. While CNNs are highly effective at extracting local information, they struggle to capture global context. Conversely, Transformers excel at capturing global information but often face challenges in preserving spatial and structural details. To address these limitations and harness both global and local features for single-image deraining, we propose a novel approach called the Detail Enhanced CNN-Transformer Network (DECTNet). DECTNet integrates two key components: the Enhanced Residual Feature Distillation Block (ERFDB) and the Dual Attention Spatial Transformer Block (DASTB). In the ERFDB, we introduce a mixed attention mechanism, incorporating channel information-enhanced layers within the residual feature distillation structure. This design facilitates a more effective step-by-step extraction of detailed information, enabling the network to restore fine-grained image details progressively. Additionally, in the DASTB, we utilize spatial attention to refine features obtained from multi-head self-attention, while the feed-forward network leverages channel information to enhance detail preservation further. This complementary use of CNNs and Transformers allows DECTNet to balance global context understanding with detailed spatial restoration. Extensive experiments have demonstrated that DECTNet outperforms some state-of-the-art methods on single-image deraining tasks. Furthermore, our model achieves competitive results on three low-light datasets and a single-image desnowing dataset, highlighting its versatility and effectiveness across different image restoration challenges.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 48-60"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attention-assisted dual-branch interactive face super-resolution network","authors":"Xujie Wan , Siyu Xu , Guangwei Gao","doi":"10.1016/j.cogr.2025.01.001","DOIUrl":"10.1016/j.cogr.2025.01.001","url":null,"abstract":"<div><div>We propose a deep learning-based Attention-Assisted Dual-Branch Interactive Network (ADBINet) to improve facial super-resolution by addressing key challenges like inadequate feature extraction and poor multi-scale information handling. ADBINet features a multi-scale encoder-decoder architecture that captures and integrates features across scales, enhancing detail and reconstruction quality. The key to our approach is the Transformer and CNN Interaction Module (TCIM), which includes a Dual Attention Collaboration Module (DACM) for improved local and spatial feature extraction. The Channel Attention Guidance Module (CAGM) refines CNN and Transformer fusion, ensuring precise facial detail restoration. Additionally, the Attention Feature Fusion Unit (AFFM) optimizes multi-scale feature integration. Experimental results demonstrate that ADBINet outperforms existing methods in both quantitative and qualitative facial super-resolution metrics.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 77-85"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Small target drone algorithm in low-altitude complex urban scenarios based on ESMS-YOLOv7","authors":"Yuntao Wei, Xiujia Wang, Chunjuan Bo, Zhan Shi","doi":"10.1016/j.cogr.2024.11.004","DOIUrl":"10.1016/j.cogr.2024.11.004","url":null,"abstract":"<div><div>The increasing use and militarization of UAV technology presents significant challenges to nations and societies. Notably, there is a deficit in anti-UAV technologies for civilian use, particularly in complex urban environments at low altitudes. This paper proposes the ESMS-YOLOv7 algorithm, which is specifically engineered to detect small target UAVs in such challenging urban landscapes. The algorithm focuses on the extraction of features from small target UAVs in urban contexts. Enhancements to YOLOv7 include the integration of the ELAN-C module, the SimSPPFCSPC-R module, and the MP-CBAM module, which collectively improve the network's ability to extract features and focus on small target UAVs. Additionally, the SIOU loss function is employed to increase the model's robustness. The effectiveness of the ESMS-YOLOv7 algorithm is validated through its performance on the DUT Anti-UAV dataset, where it exhibits superior capabilities relative to other leading algorithms.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 14-25"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
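The SIOU loss mentioned in the record above extends plain Intersection-over-Union with angle, distance, and shape cost terms. The base IoU quantity that such losses build on can be sketched as follows (the `iou` helper and corner-format boxes are illustrative assumptions, not the paper's implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extents, clamped at zero when the boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

An IoU-family loss is then typically `1 - iou(pred, target)` plus the extra penalty terms.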
{"title":"Rehab-Bot: A home-based lower-extremity rehabilitation robot for muscle recovery","authors":"Sandro Mihradi , Edgar Buwana Sutawika , Vani Virdyawan , Rachmat Zulkarnain Goesasi , Masahiro Todoh","doi":"10.1016/j.cogr.2025.02.001","DOIUrl":"10.1016/j.cogr.2025.02.001","url":null,"abstract":"<div><div>This paper presents a proof-of-concept for a lower-extremity rehabilitation device, called Rehab-bot, that would aid patients with lower-limb impairments in continuing their rehabilitation in its required intensity at home after inpatient care. This research focuses on developing the patient's muscle training feature using admittance control to generate resistance for isotonic exercise, particularly emphasizing the potential for progressive resistance training. The mechanical structure of the Rehab-bot was inspired by a continuous passive motion machine that can be optimized to be a light and compact device suitable for home-based use. Systems design, development, and experimental evaluation are presented. Experiments were performed with one healthy subject by monitoring two parameters: the forces exerted by leg muscles through a force sensor and the resulting position of the foot support that is actuated by the robot. Results have shown that Rehab-bot can demonstrate lower-limb isotonic exercise by generating a virtual load that can be progressively increased.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 114-125"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143601104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
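The admittance-control idea in the record above (a measured leg force drives a virtual mass-damper-spring whose motion the foot-support actuator tracks) can be sketched as below. The virtual parameters `M`, `B`, `K` and the time step are assumed values for illustration, not Rehab-bot's; raising the virtual stiffness `K` emulates a progressively heavier resistance:

```python
# Minimal admittance loop sketch: M*a + B*v + K*x = measured force,
# integrated with explicit Euler; x is the commanded support position.

def admittance_step(x, v, force, M=2.0, B=8.0, K=40.0, dt=0.01):
    """One Euler step of the virtual dynamics; returns updated (x, v)."""
    a = (force - B * v - K * x) / M
    v += a * dt
    x += v * dt
    return x, v

def simulate(force, steps=1000):
    """Hold a constant measured force and return the settled position."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        x, v = admittance_step(x, v, force)
    return x
```

With a constant force the position settles near force/K, which is how the virtual load shows up to the user.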
{"title":"Zero-dynamics attack detection based on data association in feedback pathway","authors":"Zeyu Zhang , Hongran Li , Yuki Todo","doi":"10.1016/j.cogr.2025.03.003","DOIUrl":"10.1016/j.cogr.2025.03.003","url":null,"abstract":"<div><div>This paper considers the security of non-minimum phase systems, a typical class of cyber-physical systems. Non-minimum phase systems are characterized by unstable zeros in their transfer functions, making them particularly susceptible to disturbances and attacks, and they are more vulnerable to zero-dynamics attack (ZDA) than minimum phase systems. ZDA is a stealthy attack strategy that exploits the internal dynamics of a system, remaining undetectable while causing gradual system destabilization. Recent cyber incidents have demonstrated the increasing risk of such hidden attacks in critical infrastructures, such as power grids and transportation systems. This paper first demonstrates that the existing ZDA has the limitation of falling into local convergence, and then proposes an enhanced zero-dynamics attack (EZDA), which overcomes local convergence by driving the system data to diverge. Furthermore, this paper presents an autoregressive model which builds the data association between the original data and the forged data. By observing the fluctuations in state values, the presented model can detect not only ZDA but also EZDA. Finally, numerical simulations and an application example are provided to verify the theoretical results.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 126-139"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143739005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
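A minimal version of residual-based detection with an autoregressive model, in the spirit of the detector the record above describes, might look like this. The AR(1) form, the threshold, and the function names are simplifying assumptions; the paper's model builds a richer data association between original and forged data:

```python
# Sketch: fit x[k+1] ~ a * x[k] on nominal (attack-free) state data, then
# flag samples whose one-step prediction residual is anomalously large.

def fit_ar1(xs):
    """Least-squares AR(1) coefficient for x[k+1] = a * x[k]."""
    num = sum(xs[k] * xs[k + 1] for k in range(len(xs) - 1))
    den = sum(xs[k] ** 2 for k in range(len(xs) - 1))
    return num / den

def detect(xs, a, threshold):
    """Indices k+1 whose residual |x[k+1] - a*x[k]| exceeds the threshold."""
    return [k + 1 for k in range(len(xs) - 1)
            if abs(xs[k + 1] - a * xs[k]) > threshold]
```

A stealthy attack that forges the feedback data breaks the learned association, so its onset shows up as a residual spike.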
{"title":"Integrated model for segmentation of glomeruli in kidney images","authors":"Gurjinder Kaur, Meenu Garg, Sheifali Gupta","doi":"10.1016/j.cogr.2024.11.007","DOIUrl":"10.1016/j.cogr.2024.11.007","url":null,"abstract":"<div><div>Kidney diseases, especially those that affect the glomeruli, have become more common worldwide in recent years. Accurate and early detection of glomeruli is critical for accurately diagnosing kidney problems and determining the most effective treatment options. Our study proposes an advanced model, FResMRCNN, an enhanced version of Mask R-CNN, for automatically detecting and segmenting the glomeruli in PAS-stained human kidney images. The model integrates the power of FPN with a ResNet101 backbone, which was selected after assessing seven different backbone architectures. The integration of FPN and ResNet101 into the FResMRCNN model improves glomeruli detection, segmentation accuracy and stability by representing multi-scale features. We trained and tested our model using the HuBMAP Kidney dataset, which contains high-resolution PAS-stained microscopy images. During the study, the effectiveness of our proposed model is examined by generating bounding boxes and predicted masks of glomeruli. The performance of the FResMRCNN model is evaluated using three metrics (the Dice coefficient, the Jaccard index, and binary cross-entropy loss), which show promising results in accurately segmenting glomeruli.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 1-13"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
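The two overlap metrics named in the record above can be computed on binary masks as follows. Flat 0/1 lists are used here for brevity (the paper evaluates predicted vs. ground-truth glomerulus masks); note the exact relationship Dice = 2J/(1+J) with the Jaccard index J:

```python
# Overlap metrics for binary segmentation masks, as flat 0/1 sequences.

def dice(pred, target):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0

def jaccard(pred, target):
    """Jaccard index (IoU): |A∩B| / |A∪B|."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(pred) + sum(target) - inter
    return inter / union if union else 1.0
```

Both metrics range over [0, 1], with 1 meaning a perfect mask match; binary cross-entropy is computed on the predicted probabilities instead.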
{"title":"Hybrid machine learning-based 3-dimensional UAV node localization for UAV-assisted wireless networks","authors":"Workeneh Geleta Negassa, Demissie J. Gelmecha, Ram Sewak Singh, Davinder Singh Rathee","doi":"10.1016/j.cogr.2025.01.002","DOIUrl":"10.1016/j.cogr.2025.01.002","url":null,"abstract":"<div><div>This paper presents a hybrid machine-learning framework for optimizing 3-Dimensional (3D) Unmanned Aerial Vehicle (UAV) node localization and resource distribution in UAV-assisted THz 6G networks to ensure efficient coverage in dynamic, high-density environments. The proposed model efficiently managed interference, adapted to UAV mobility, and ensured optimal throughput by dynamically optimizing UAV trajectories. The hybrid framework combined the strengths of Graph Neural Networks (GNN) for feature aggregation, Deep Neural Networks (DNN) for efficient resource allocation, and Double Deep Q-Networks (DDQN) for distributed decision-making. Simulation results demonstrated that the proposed model outperformed traditional machine learning models, significantly improving energy efficiency, latency, and throughput. The hybrid model achieved an optimized energy efficiency of 90 Tbps/J, reduced latency to 0.0105 ms, and delivered a network throughput of approximately 96 Tbps. The model adapts to varying link densities, maintaining stable performance even in high-density scenarios. These findings underscore the framework's potential to address key challenges in UAV-assisted 6G networks, paving the way for scalable and efficient communication in next-generation wireless systems.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 61-76"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A transformation model for vision-based navigation of agricultural robots","authors":"Abdelkrim Abanay , Lhoussaine Masmoudi , Dirar Benkhedra , Khalid El Amraoui , Mouataz Lghoul , Javier-Gonzalez Jimenez , Francisco-Angel Moreno","doi":"10.1016/j.cogr.2025.03.002","DOIUrl":"10.1016/j.cogr.2025.03.002","url":null,"abstract":"<div><div>This paper presents a Top-view Transformation Model (TTM) for vision-based autonomous navigation of an agricultural mobile robot. The TTM transforms images captured by an onboard camera into a virtual top view, eliminating perspective distortions such as the vanishing point effect and ensuring uniform pixel distribution. The transformed images are analyzed to enable autonomous navigation of the robot between crop rows. The navigation method involves real-time estimation of the robot's position relative to the crop rows, and the control law is derived from the estimated heading and lateral offset to steer the robot along the crop rows. A simulated scenario was generated in Gazebo to implement the developed approach using the Robot Operating System (ROS), and an evaluation on a real agricultural mobile robot was also performed. The experimental results demonstrate the feasibility of the TTM approach and its implementation for autonomous navigation, achieving good performance.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 140-151"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143759893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
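A top-view warp like the TTM in the record above is, in general, a plane-to-plane mapping expressible as a 3x3 homography. The sketch below estimates one from four ground-plane point correspondences via the direct linear transform (DLT); this is an illustrative generic estimator, not the paper's camera-geometry derivation, and the point pairs in the example are assumed:

```python
import numpy as np

def homography(src, dst):
    """DLT estimate of H mapping each src (x, y) to the matching dst (u, v)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H's 9 entries.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of the constraint matrix (last right-singular vector)
    # holds H up to scale.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def warp_point(H, x, y):
    """Apply H to a pixel and dehomogenize."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Mapping the trapezoid that a crop-row corridor projects to in the camera image onto a rectangle in the top view removes the vanishing-point convergence, which is what makes the subsequent heading and lateral-offset estimation straightforward.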