{"title":"Zero-shot intelligent fault diagnosis via semantic fusion embedding","authors":"Honghua Xu, Zijian Hu, Ziqiang Xu, Qilong Qian","doi":"10.1016/j.cogr.2024.12.001","DOIUrl":"10.1016/j.cogr.2024.12.001","url":null,"abstract":"<div><div>Most fault diagnosis studies rely on man-made data collected in the laboratory, where operating conditions are controlled and stable. However, such studies can hardly adapt to practical conditions, since man-made data can hardly model fault patterns across domains. To solve this problem, this paper proposes a novel deep fault semantic fusion embedding model (DFSFEM) to realize zero-shot intelligent fault diagnosis. The novelties of DFSFEM lie in two aspects. On the one hand, a novel semantic fusion embedding module is proposed to enhance the representability and adaptability of feature learning across domains. On the other hand, a neural network-based metric module is designed to replace traditional distance measurements, enhancing the transfer capability between domains. These novelties jointly enable DFSFEM to provide faithful diagnoses of unseen fault types. Experiments on bearing datasets are conducted to evaluate zero-shot intelligent fault diagnosis performance. Extensive experimental results and comprehensive analysis demonstrate the superiority of the proposed DFSFEM in terms of diagnosis correctness and adaptability.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 37-47"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DECTNet: A detail enhanced CNN-Transformer network for single-image deraining","authors":"Liping Wang , Guangwei Gao","doi":"10.1016/j.cogr.2024.12.002","DOIUrl":"10.1016/j.cogr.2024.12.002","url":null,"abstract":"<div><div>Recently, Convolutional Neural Networks (CNN) and Transformers have been widely adopted in image restoration tasks. While CNNs are highly effective at extracting local information, they struggle to capture global context. Conversely, Transformers excel at capturing global information but often face challenges in preserving spatial and structural details. To address these limitations and harness both global and local features for single-image deraining, we propose a novel approach called the Detail Enhanced CNN-Transformer Network (DECTNet). DECTNet integrates two key components: the Enhanced Residual Feature Distillation Block (ERFDB) and the Dual Attention Spatial Transformer Block (DASTB). In the ERFDB, we introduce a mixed attention mechanism, incorporating channel information-enhanced layers within the residual feature distillation structure. This design facilitates a more effective step-by-step extraction of detailed information, enabling the network to restore fine-grained image details progressively. Additionally, in the DASTB, we utilize spatial attention to refine features obtained from multi-head self-attention, while the feed-forward network leverages channel information to enhance detail preservation further. This complementary use of CNNs and Transformers allows DECTNet to balance global context understanding with detailed spatial restoration. Extensive experiments have demonstrated that DECTNet outperforms some state-of-the-art methods on single-image deraining tasks. Furthermore, our model achieves competitive results on three low-light datasets and a single-image desnowing dataset, highlighting its versatility and effectiveness across different image restoration challenges.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 48-60"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attention-assisted dual-branch interactive face super-resolution network","authors":"Xujie Wan , Siyu Xu , Guangwei Gao","doi":"10.1016/j.cogr.2025.01.001","DOIUrl":"10.1016/j.cogr.2025.01.001","url":null,"abstract":"<div><div>We propose a deep learning-based Attention-Assisted Dual-Branch Interactive Network (ADBINet) to improve facial super-resolution by addressing key challenges like inadequate feature extraction and poor multi-scale information handling. ADBINet features a multi-scale encoder-decoder architecture that captures and integrates features across scales, enhancing detail and reconstruction quality. The key to our approach is the Transformer and CNN Interaction Module (TCIM), which includes a Dual Attention Collaboration Module (DACM) for improved local and spatial feature extraction. The Channel Attention Guidance Module (CAGM) refines CNN and Transformer fusion, ensuring precise facial detail restoration. Additionally, the Attention Feature Fusion Unit (AFFM) optimizes multi-scale feature integration. Experimental results demonstrate that ADBINet outperforms existing methods in both quantitative and qualitative facial super-resolution metrics.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 77-85"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Small target drone algorithm in low-altitude complex urban scenarios based on ESMS-YOLOv7","authors":"Yuntao Wei, Xiujia Wang, Chunjuan Bo, Zhan Shi","doi":"10.1016/j.cogr.2024.11.004","DOIUrl":"10.1016/j.cogr.2024.11.004","url":null,"abstract":"<div><div>The increasing use and militarization of UAV technology present significant challenges to nations and societies. Notably, there is a deficit in anti-UAV technologies for civilian use, particularly in complex urban environments at low altitudes. This paper proposes the ESMS-YOLOv7 algorithm, which is specifically engineered to detect small target UAVs in such challenging urban landscapes. The algorithm focuses on the extraction of features from small target UAVs in urban contexts. Enhancements to YOLOv7 include the integration of the ELAN-C module, the SimSPPFCSPC-R module, and the MP-CBAM module, which collectively improve the network's ability to extract features and focus on small target UAVs. Additionally, the SIOU loss function is employed to increase the model's robustness. The effectiveness of the ESMS-YOLOv7 algorithm is validated through its performance on the DUT Anti-UAV dataset, where it exhibits superior capabilities relative to other leading algorithms.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 14-25"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrated model for segmentation of glomeruli in kidney images","authors":"Gurjinder Kaur, Meenu Garg, Sheifali Gupta","doi":"10.1016/j.cogr.2024.11.007","DOIUrl":"10.1016/j.cogr.2024.11.007","url":null,"abstract":"<div><div>Kidney diseases, especially those that affect the glomeruli, have become more common worldwide in recent years. Accurate and early detection of glomeruli is critical for accurately diagnosing kidney problems and determining the most effective treatment options. This study proposes an advanced model, FResMRCNN, an enhanced version of Mask R-CNN, for automatically detecting and segmenting the glomeruli in PAS-stained human kidney images. The model integrates the power of FPN with a ResNet101 backbone, which was selected after assessing seven different backbone architectures. The integration of FPN and ResNet101 into the FResMRCNN model improves glomeruli detection, segmentation accuracy, and stability by representing multi-scale features. We trained and tested our model using the HuBMAP Kidney dataset, which contains high-resolution PAS-stained microscopy images. The effectiveness of the proposed model is examined by generating bounding boxes and predicted masks of glomeruli. The performance of the FResMRCNN model is evaluated using three performance metrics, the Dice coefficient, Jaccard index, and binary cross-entropy loss, which show promising results in accurately segmenting glomeruli.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 1-13"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid machine learning-based 3-dimensional UAV node localization for UAV-assisted wireless networks","authors":"Workeneh Geleta Negassa, Demissie J. Gelmecha, Ram Sewak Singh, Davinder Singh Rathee","doi":"10.1016/j.cogr.2025.01.002","DOIUrl":"10.1016/j.cogr.2025.01.002","url":null,"abstract":"<div><div>This paper presents a hybrid machine-learning framework for optimizing 3-Dimensional (3D) Unmanned Aerial Vehicle (UAV) node localization and resource distribution in UAV-assisted THz 6G networks to ensure efficient coverage in dynamic, high-density environments. The proposed model efficiently managed interference, adapted to UAV mobility, and ensured optimal throughput by dynamically optimizing UAV trajectories. The hybrid framework combined the strengths of Graph Neural Networks (GNN) for feature aggregation, Deep Neural Networks (DNN) for efficient resource allocation, and Double Deep Q-Networks (DDQN) for distributed decision-making. Simulation results demonstrated that the proposed model outperformed traditional machine learning models, significantly improving energy efficiency, latency, and throughput. The hybrid model achieved an optimized energy efficiency of 90 Tbps/J, reduced latency to 0.0105 ms, and delivered a network throughput of approximately 96 Tbps. The model adapts to varying link densities, maintaining stable performance even in high-density scenarios. These findings underscore the framework's potential to address key challenges in UAV-assisted 6G networks, paving the way for scalable and efficient communication in next-generation wireless systems.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 61-76"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LiPE: Lightweight human pose estimator for mobile applications towards automated pose analysis","authors":"Chengxiu Li , Ni Duan","doi":"10.1016/j.cogr.2024.11.005","DOIUrl":"10.1016/j.cogr.2024.11.005","url":null,"abstract":"<div><div>Current human pose estimation models adopt heavy backbones and complex feature enhancement modules to pursue higher accuracy. However, they ignore the need for model efficiency in real-world applications. In real-world scenarios such as sports teaching and automated sports analysis for better preservation of traditional folk sports, human pose estimation often needs to be performed on mobile devices with limited computing resources. In this paper, we propose a lightweight human pose estimator termed LiPE. LiPE adopts a lightweight MobileNetV2 backbone for feature extraction and lightweight depthwise separable deconvolution modules for upsampling. Predictions are made at a high resolution with a lightweight prediction head. Compared with the baseline, our model reduces MACs by 93.2 %, and reduces the number of parameters by 93.9 %, while the accuracy drops by only 3.2 %. Based on LiPE, we develop a real-time human pose estimation and evaluation system for automated pose analysis. Experimental results show that our LiPE achieves high computational efficiency and good accuracy for application on mobile devices.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 26-36"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mobile robot path planning using deep deterministic policy gradient with differential gaming (DDPG-DG) exploration","authors":"Shripad V. Deshpande , Harikrishnan R , Babul Salam KSM Kader Ibrahim , Mahesh Datta Sai Ponnuru","doi":"10.1016/j.cogr.2024.08.002","DOIUrl":"10.1016/j.cogr.2024.08.002","url":null,"abstract":"<div><p>Mobile robot path planning involves decision-making in uncertain, dynamic conditions, where Reinforcement Learning (RL) algorithms excel in generating safe and optimal paths. Deep Deterministic Policy Gradient (DDPG) is an RL technique well suited to mobile robot navigation. RL algorithms must balance exploitation and exploration to enable effective learning; the balance between these actions directly impacts learning efficiency.</p><p>This research proposes a method combining the DDPG strategy for exploitation with the Differential Gaming (DG) strategy for exploration. The DG algorithm ensures the mobile robot always reaches its target without collisions, thereby adding positive learning episodes to the memory buffer. An epsilon-greedy strategy determines whether to explore or exploit. When exploration is chosen, the DG algorithm is employed. The combination of the DG strategy with DDPG facilitates faster learning by increasing the number of successful episodes and reducing the number of failure episodes in the experience buffer. The DDPG algorithm supports continuous state and action spaces, resulting in smoother, non-jerky movements and improved control over turns when navigating obstacles. Reward shaping considers finer details, ensuring even small advantages in each iteration contribute to learning.</p><p>Through diverse test scenarios, it is demonstrated that DG exploration, compared to random exploration, results in an average increase of 389% in successful target reaches and a 39% decrease in collisions. Additionally, DG exploration shows a 69% improvement in the number of episodes where convergence is achieved within a maximum of 2000 steps.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"4 ","pages":"Pages 156-173"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241324000119/pdfft?md5=8c083de5d6ac1af9d3cedcb0733a30fa&pid=1-s2.0-S2667241324000119-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142271646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emerging trends in human upper extremity rehabilitation robot","authors":"Sk. Khairul Hasan, Subodh B. Bhujel, Gabrielle Sara Niemiec","doi":"10.1016/j.cogr.2024.09.001","DOIUrl":"10.1016/j.cogr.2024.09.001","url":null,"abstract":"<div><p>Stroke is a leading cause of neurological disorders that result in physical disability, particularly among the elderly. Neurorehabilitation plays a crucial role in helping stroke patients recover from physical impairments and regain mobility. Physical therapy is one of the most effective forms of neurorehabilitation, but the growing number of patients requires a large workforce of trained therapists, which is currently insufficient. Robotic rehabilitation offers a promising alternative, capable of supplementing or even replacing human-assisted physical therapy through the use of rehabilitation robots. To design effective robotic devices for rehabilitation, a solid foundation of knowledge is essential. This article provides a comprehensive overview of the key elements needed to develop human upper extremity rehabilitation robots. It covers critical aspects such as upper extremity anatomy, joint range of motion, anthropometric parameters, disability assessment techniques, and robot-assisted training methods. Additionally, it reviews recent advancements in rehabilitation robots, including exoskeletons, end-effector-based robots, and planar robots. The article also evaluates existing upper extremity rehabilitation robots based on their mechanical design and functionality, identifies their limitations, and suggests future research directions for further improvement.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"4 ","pages":"Pages 174-190"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241324000120/pdfft?md5=a51e80d94f3f2f6ca53c667c4682ef83&pid=1-s2.0-S2667241324000120-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142271647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fourier Hilbert: The input transformation to enhance CNN models for speech emotion recognition","authors":"Bao Long Ly","doi":"10.1016/j.cogr.2024.11.002","DOIUrl":"10.1016/j.cogr.2024.11.002","url":null,"abstract":"<div><div>Signal processing in general, and speech emotion recognition in particular, have long been familiar Artificial Intelligence (AI) tasks. With the explosion of deep learning, CNN models are used more frequently, accompanied by the emergence of many signal transformations. However, these methods often require significant hardware and runtime. In an effort to address these issues, we analyze and learn from existing transformations, leading us to propose a new method: the Fourier Hilbert Transformation (FHT). In general, this method applies the Hilbert curve to Fourier images. The resulting images are small and dense, a shape well suited to the CNN architecture. Additionally, the better distribution of information across the image allows the filters to fully utilize their power. These points support the argument that FHT provides an optimal input for CNNs. Experiments conducted on popular datasets yielded promising results: FHT saves a large amount of hardware usage and runtime while maintaining high performance, and even offers greater stability compared to existing methods. This opens up opportunities for deploying signal processing tasks on real-time systems with limited hardware.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"4 ","pages":"Pages 228-236"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142748300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}