{"title":"Navigation control of unmanned aerial vehicles in dynamic collaborative indoor environment using probability fuzzy logic approach","authors":"Sameer Agrawal , Bhumeshwar K. Patle , Sudarshan Sanap","doi":"10.1016/j.cogr.2025.02.002","DOIUrl":"10.1016/j.cogr.2025.02.002","url":null,"abstract":"<div><div>The development of drones for various applications makes it essential to address the critical issue of providing collision-free and optimal navigation in uncertain environments. The current research work aims to develop, simulate and experimentally validate a Probability Fuzzy Logic (PFL) controller for route planning and obstacle avoidance for drones in uncertain static and dynamic environments. The PFL system uses probability-based impact assessment and fuzzy logic rules to deal with unknowns and environmental changes. The fuzzy logic system takes as input the distances of objects on the drone's front, left, and right sides, as well as the probability of collision based on the drone's speed and proximity to the obstacles. A set of thirty fuzzy rules based on the obstacle distances from the front, left, and right is defined to decide the outputs, i.e. the drone's speed and heading angle. The simulation environment is developed in MATLAB, with grid-based motion planning that accounts for static and dynamic obstacles. The system's performance is validated through simulations and real-world experiments, comparing path length and travel time. On comparing the simulation and experimental results, the proposed PFL-based controller proves efficient, accurate, and robust for both static and dynamic environments, from simple to complex. The drone can plan the shortest collision-free path across all the scenarios, as depicted in the simulation and experimentation results. However, due to communication delay, inaccuracy of sensor response, environmental impact and motor delay, there are slight deviations between the simulation and experimentation values. Upon performing the error analysis, it is found that the error between the simulation and experimental values is within 6.66 % in all the studied scenarios.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 86-113"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143579569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Zero-shot intelligent fault diagnosis via semantic fusion embedding","authors":"Honghua Xu, Zijian Hu, Ziqiang Xu, Qilong Qian","doi":"10.1016/j.cogr.2024.12.001","DOIUrl":"10.1016/j.cogr.2024.12.001","url":null,"abstract":"<div><div>Most fault diagnosis studies rely on man-made data collected in the laboratory, where the operating conditions are controlled and stable. However, such approaches can hardly adapt to practical conditions, since the man-made data can hardly model fault patterns across domains. To solve this problem, this paper proposes a novel deep fault semantic fusion embedding model (DFSFEM) to realize zero-shot intelligent fault diagnosis. The novelties of DFSFEM lie in two aspects. On the one hand, a novel semantic fusion embedding module is proposed to enhance the representability and adaptability of feature learning across domains. On the other hand, a neural network-based metric module is designed to replace traditional distance measurements, enhancing the transfer capability between domains. These novelties jointly help DFSFEM provide faithful diagnosis of unseen fault types. Experiments on bearing datasets are conducted to evaluate the zero-shot intelligent fault diagnosis performance. Extensive experimental results and comprehensive analysis demonstrate the superiority of the proposed DFSFEM in terms of diagnosis correctness and adaptability.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 37-47"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DECTNet: A detail enhanced CNN-Transformer network for single-image deraining","authors":"Liping Wang , Guangwei Gao","doi":"10.1016/j.cogr.2024.12.002","DOIUrl":"10.1016/j.cogr.2024.12.002","url":null,"abstract":"<div><div>Recently, Convolutional Neural Networks (CNN) and Transformers have been widely adopted in image restoration tasks. While CNNs are highly effective at extracting local information, they struggle to capture global context. Conversely, Transformers excel at capturing global information but often face challenges in preserving spatial and structural details. To address these limitations and harness both global and local features for single-image deraining, we propose a novel approach called the Detail Enhanced CNN-Transformer Network (DECTNet). DECTNet integrates two key components: the Enhanced Residual Feature Distillation Block (ERFDB) and the Dual Attention Spatial Transformer Block (DASTB). In the ERFDB, we introduce a mixed attention mechanism, incorporating channel information-enhanced layers within the residual feature distillation structure. This design facilitates a more effective step-by-step extraction of detailed information, enabling the network to restore fine-grained image details progressively. Additionally, in the DASTB, we utilize spatial attention to refine features obtained from multi-head self-attention, while the feed-forward network leverages channel information to enhance detail preservation further. This complementary use of CNNs and Transformers allows DECTNet to balance global context understanding with detailed spatial restoration. Extensive experiments have demonstrated that DECTNet outperforms some state-of-the-art methods on single-image deraining tasks. Furthermore, our model achieves competitive results on three low-light datasets and a single-image desnowing dataset, highlighting its versatility and effectiveness across different image restoration challenges.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 48-60"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DSR-YOLO: A lightweight and efficient YOLOv8 model for enhanced pedestrian detection","authors":"Mustapha Oussouaddi , Omar Bouazizi , Aimad El mourabit , Zine el Abidine Alaoui Ismaili , Yassine Attaoui , Mohamed Chentouf","doi":"10.1016/j.cogr.2025.04.001","DOIUrl":"10.1016/j.cogr.2025.04.001","url":null,"abstract":"<div><div>This paper presents DSR-YOLO, a pedestrian detection network that addresses critical challenges, such as scale variations and complex backgrounds. Built on the lightweight YOLOv8n architecture, it incorporates DCNv4 modules to enhance the detection rates and reduce missed detections by effectively learning key pedestrian features. A new head component enables detection across various scales, whereas RFB modules improve accuracy for smaller or occluded objects. Additionally, we enhance the initial C2f layers with a modified block that integrates SimAM and DCNv4, minimizing the background noise and sharpening the focus on the relevant features. A second version of the C2f block using SimAM and standard convolutions ensures robust feature extraction in deeper layers with optimized computational efficiency. The WIoUv3 loss function was utilized to reduce the regression loss associated with bounding boxes, further boosting performance. Evaluated on the CityPersons dataset, DSR-YOLO outperformed YOLOv8n with a 14.9 % increase in mAP@50 and a 6.3 % increase in mAP@50:95, while maintaining competitive FLOPS, parameter counts, and inference speed.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 152-165"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143844145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attention-assisted dual-branch interactive face super-resolution network","authors":"Xujie Wan , Siyu Xu , Guangwei Gao","doi":"10.1016/j.cogr.2025.01.001","DOIUrl":"10.1016/j.cogr.2025.01.001","url":null,"abstract":"<div><div>We propose a deep learning-based Attention-Assisted Dual-Branch Interactive Network (ADBINet) to improve facial super-resolution by addressing key challenges like inadequate feature extraction and poor multi-scale information handling. ADBINet features a multi-scale encoder-decoder architecture that captures and integrates features across scales, enhancing detail and reconstruction quality. The key to our approach is the Transformer and CNN Interaction Module (TCIM), which includes a Dual Attention Collaboration Module (DACM) for improved local and spatial feature extraction. The Channel Attention Guidance Module (CAGM) refines CNN and Transformer fusion, ensuring precise facial detail restoration. Additionally, the Attention Feature Fusion Unit (AFFM) optimizes multi-scale feature integration. Experimental results demonstrate that ADBINet outperforms existing methods in both quantitative and qualitative facial super-resolution metrics.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 77-85"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rehab-Bot: A home-based lower-extremity rehabilitation robot for muscle recovery","authors":"Sandro Mihradi , Edgar Buwana Sutawika , Vani Virdyawan , Rachmat Zulkarnain Goesasi , Masahiro Todoh","doi":"10.1016/j.cogr.2025.02.001","DOIUrl":"10.1016/j.cogr.2025.02.001","url":null,"abstract":"<div><div>This paper presents a proof-of-concept for a lower-extremity rehabilitation device, called Rehab-bot, that would aid patients with lower-limb impairments in continuing their rehabilitation at the required intensity at home after inpatient care. This research focuses on developing the patient's muscle training feature using admittance control to generate resistance for isotonic exercise, particularly emphasizing the potential for progressive resistance training. The mechanical structure of the Rehab-bot was inspired by a continuous passive motion machine that can be optimized to be a light and compact device suitable for home-based use. Systems design, development, and experimental evaluation are presented. Experiments were performed with one healthy subject by monitoring two parameters: the forces exerted by leg muscles through a force sensor and the resulting position of the foot support that is actuated by the robot. Results have shown that Rehab-bot can demonstrate lower-limb isotonic exercise by generating a virtual load that can be progressively increased.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 114-125"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143601104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Small target drone algorithm in low-altitude complex urban scenarios based on ESMS-YOLOv7","authors":"Yuntao Wei, Xiujia Wang, Chunjuan Bo, Zhan Shi","doi":"10.1016/j.cogr.2024.11.004","DOIUrl":"10.1016/j.cogr.2024.11.004","url":null,"abstract":"<div><div>The increasing use and militarization of UAV technology present significant challenges to nations and societies. Notably, there is a deficit in anti-UAV technologies for civilian use, particularly in complex urban environments at low altitudes. This paper proposes the ESMS-YOLOv7 algorithm, which is specifically engineered to detect small target UAVs in such challenging urban landscapes. The algorithm focuses on the extraction of features from small target UAVs in urban contexts. Enhancements to YOLOv7 include the integration of the ELAN-C module, the SimSPPFCSPC-R module, and the MP-CBAM module, which collectively improve the network's ability to extract features and focus on small target UAVs. Additionally, the SIOU loss function is employed to increase the model's robustness. The effectiveness of the ESMS-YOLOv7 algorithm is validated through its performance on the DUT Anti-UAV dataset, where it exhibits superior capabilities relative to other leading algorithms.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 14-25"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Zero-dynamics attack detection based on data association in feedback pathway","authors":"Zeyu Zhang , Hongran Li , Yuki Todo","doi":"10.1016/j.cogr.2025.03.003","DOIUrl":"10.1016/j.cogr.2025.03.003","url":null,"abstract":"<div><div>This paper considers the security of non-minimum phase systems, a typical class of cyber-physical system. Non-minimum phase systems are characterized by unstable zeros in their transfer functions, making them particularly susceptible to disturbances and attacks, and they are more vulnerable to zero-dynamics attack (ZDA) than minimum phase systems. ZDA is a stealthy attack strategy that exploits the internal dynamics of a system, remaining undetectable while causing gradual system destabilization. Recent cyber incidents have demonstrated the increasing risk of such hidden attacks in critical infrastructures, such as power grids and transportation systems. This paper first demonstrates that the existing ZDA has the limitation of falling into local convergence, and then proposes an enhanced zero-dynamics attack (EZDA), which overcomes local convergence by diverging system data. Furthermore, this paper presents an autoregressive model that builds the data association between the original data and the forged data. By observing the fluctuations in state values, the presented model can detect not only ZDA but also EZDA. Finally, numerical simulations and an application example are provided to verify the theoretical results.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 126-139"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143739005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A transformation model for vision-based navigation of agricultural robots","authors":"Abdelkrim Abanay , Lhoussaine Masmoudi , Dirar Benkhedra , Khalid El Amraoui , Mouataz Lghoul , Javier-Gonzalez Jimenez , Francisco-Angel Moreno","doi":"10.1016/j.cogr.2025.03.002","DOIUrl":"10.1016/j.cogr.2025.03.002","url":null,"abstract":"<div><div>This paper presents a Top-view Transformation Model (TTM) for vision-based autonomous navigation of an agricultural mobile robot. The TTM transforms images captured by an onboard camera into a virtual Top-view, eliminating perspective distortions such as the vanishing point effect and ensuring uniform pixel distribution. The transformed images are analyzed to enable autonomous navigation of the robot between crop rows. The navigation method involves real-time estimation of the robot's position relative to the crop rows, and the control law is derived from the estimated heading and lateral offset for steering the robot along the crop rows. A simulated scenario has been generated in Gazebo in order to implement the developed approach using the Robot Operating System (ROS), while an evaluation on a real agricultural mobile robot has also been performed. The experimental results demonstrate the feasibility of the TTM approach and its implementation for autonomous navigation, achieving good performance.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 140-151"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143759893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improvement of multi-parameter anomaly detection method: Addition of a relational token between parameters","authors":"Hironori Uchida , Keitaro Tominaga , Hideki Itai , Yujie Li , Yoshihisa Nakatoh","doi":"10.1016/j.cogr.2025.03.004","DOIUrl":"10.1016/j.cogr.2025.03.004","url":null,"abstract":"<div><div>In the continuous development of systems, the increasing volume and complexity of data that engineers must analyze have become significant challenges. To address this issue, extensive research has been conducted on automated anomaly detection in logs. However, due to the limited variety of available datasets, most studies have focused on sequence-based anomalies in logs, with relatively little attention paid to parameter-based anomaly detection. To bridge this gap, we prepared a labeled dataset specifically designed for parameter-based anomaly detection and propose a novel method utilizing BERTMaskedLM. Since continuously changing logs in system development are difficult to label, we also propose a method that enables learning without labeled data. Previous studies have employed BERTMaskedLM to capture relationships between parameters in multi-parameter logs for anomaly detection. However, a known issue arises when the ranges of numerical parameters overlap, resulting in reduced detection accuracy. To mitigate this, we introduced tokens that encode the relationships between parameters, improving the independence of parameter combinations and enhancing anomaly detection accuracy (increasing the F1-score by more than 0.002). In this study, we employed a simple yet effective approach by using the total value of each token as the added token. Since only the parameter portions vary within the same log template structure, these proposed tokens effectively capture the relationships between parameters. Additionally, we visualized the influence of the added tokens and conducted experiments using a new dataset to assess the reliability of our proposed method.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 176-191"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143868045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}