Frontiers in Neurorobotics: Latest Publications

Design and analysis of combined discrete-time zeroing neural network for solving time-varying nonlinear equation with robot application.
IF 2.8 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date: 2025-07-11 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1576473
Zhisheng Ma, Shaobin Huang
{"title":"Design and analysis of combined discrete-time zeroing neural network for solving time-varying nonlinear equation with robot application.","authors":"Zhisheng Ma, Shaobin Huang","doi":"10.3389/fnbot.2025.1576473","DOIUrl":"10.3389/fnbot.2025.1576473","url":null,"abstract":"<p><p>Zeroing neural network (ZNN) is viewed as an effective solution to time-varying nonlinear equation (TVNE). In this paper, a further study is shown by proposing a novel combined discrete-time ZNN (CDTZNN) model for solving TVNE. Specifically, a new difference formula, which is called the Taylor difference formula, is constructed for first-order derivative approximation by following Taylor series expansion. The Taylor difference formula is then used to discretize the continuous-time ZNN model in the previous study. The corresponding DTZNN model is obtained, where the direct Jacobian matrix inversion is required (being time consuming). Another DTZNN model for computing the inverse of Jacobian matrix is established to solve the aforementioned limitation. The novel CDTZNN model for solving the TVNE is thus developed by combining the two models. Theoretical analysis and numerical results demonstrate the efficacy of the proposed CDTZNN model. The CDTZNN applicability is further indicated by applying the proposed model to the motion planning of robot manipulators.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1576473"},"PeriodicalIF":2.8,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12289663/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144729707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
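The abstract describes discretizing a continuous-time ZNN and pairing it with a second DTZNN that tracks the Jacobian inverse so no explicit inversion is needed. The sketch below illustrates that combined idea only in spirit: it assumes a plain Euler-type discretization and a Newton-Schulz-style inverse update rather than the paper's Taylor difference formula, and the `dtznn_step` helper, the gains `gamma` and `tau`, and the example equation are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def dtznn_step(x, J_inv_approx, f, J, dfdt, t, tau=0.01, gamma=10.0):
    """One step of a combined discrete-time ZNN for f(x, t) = 0.

    x            : current solution estimate (n,)
    J_inv_approx : current estimate of J(x, t)^{-1} (n, n)
    f, J, dfdt   : callables returning f(x, t), its Jacobian, and its time derivative
    tau          : sampling period; gamma : ZNN convergence gain
    """
    Jk = J(x, t)
    # Newton-Schulz-style recursion: drives J_inv_approx toward Jk^{-1}
    # without an explicit matrix inversion.
    J_inv_next = J_inv_approx @ (2.0 * np.eye(len(x)) - Jk @ J_inv_approx)
    # Euler-type discrete ZNN update for the solution of the TVNE.
    x_next = x - tau * J_inv_next @ (gamma * f(x, t) + dfdt(x, t))
    return x_next, J_inv_next

# Example TVNE: x^2 - sin(t) - 2 = 0 (scalar, treated as 1-D vectors).
f    = lambda x, t: np.array([x[0] ** 2 - np.sin(t) - 2.0])
J    = lambda x, t: np.array([[2.0 * x[0]]])
dfdt = lambda x, t: np.array([-np.cos(t)])

x, Jinv, tau = np.array([1.5]), np.array([[1.0 / 3.0]]), 0.01
for k in range(1000):
    x, Jinv = dtznn_step(x, Jinv, f, J, dfdt, k * tau, tau)
print(x, np.sqrt(np.sin(10.0) + 2.0))  # estimate vs. analytic root near t = 10
```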
A robust and effective framework for 3D scene reconstruction and high-quality rendering in nasal endoscopy surgery.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date: 2025-06-27 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1630728
Xueqin Ji, Shuting Zhao, Di Liu, Feng Wang, Xinrong Chen
{"title":"A robust and effective framework for 3D scene reconstruction and high-quality rendering in nasal endoscopy surgery.","authors":"Xueqin Ji, Shuting Zhao, Di Liu, Feng Wang, Xinrong Chen","doi":"10.3389/fnbot.2025.1630728","DOIUrl":"10.3389/fnbot.2025.1630728","url":null,"abstract":"<p><p>In nasal endoscopic surgery, the narrow nasal cavity restricts the surgical field of view and the manipulation of surgical instruments. Therefore, precise real-time intraoperative navigation, which can provide precise 3D information, plays a crucial role in avoiding critical areas with dense blood vessels and nerves. Although significant progress has been made in endoscopic 3D reconstruction methods, their application in nasal scenarios still faces numerous challenges. On the one hand, there is a lack of high-quality, annotated nasal endoscopy datasets. On the other hand, issues such as motion blur and soft tissue deformations complicate the nasal endoscopy reconstruction process. To tackle these challenges, a series of nasal endoscopy examination videos are collected, and the pose information for each frame is recorded. Additionally, a novel model named Mip-EndoGS is proposed, which integrates 3D Gaussian Splatting for reconstruction and rendering and a diffusion module to reduce image blurring in endoscopic data. Meanwhile, by incorporating an adaptive low-pass filter into the rendering pipeline, the aliasing artifacts (jagged edges) are mitigated, which occur during the rendering process. Extensive quantitative and visual experiments show that the proposed model is capable of reconstructing 3D scenes within the nasal cavity in real-time, thereby offering surgeons more detailed and precise information about the surgical scene. Moreover, the proposed approach holds great potential for integration with AR-based surgical navigation systems to enhance intraoperative guidance.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1630728"},"PeriodicalIF":2.6,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12245865/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144626010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
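The abstract mentions an adaptive low-pass filter in the rendering pipeline to suppress aliasing of small splats. As a rough illustration of where such a filter sits in a 3D Gaussian Splatting pipeline, the sketch below applies the standard EWA-style projection of a Gaussian to screen space and then dilates the 2D covariance by a minimum pixel footprint, rescaling opacity to conserve mass. This is a generic anti-aliasing trick, not the paper's Mip-EndoGS filter or its diffusion module; `project_and_filter_gaussian` and its parameters are assumptions.

```python
import numpy as np

def project_and_filter_gaussian(cov3d, W, mean_cam, focal, filter_px=0.3):
    """Project a 3D Gaussian to screen space and apply a low-pass (dilation) filter.

    cov3d    : (3, 3) world-space covariance of the Gaussian
    W        : (3, 3) world-to-camera rotation
    mean_cam : (3,) Gaussian center in camera coordinates (z > 0)
    focal    : focal length in pixels
    filter_px: minimum screen-space footprint (std. dev., in pixels)
    """
    x, y, z = mean_cam
    # Jacobian of the perspective projection, linearized at the Gaussian center.
    J = np.array([[focal / z, 0.0, -focal * x / z**2],
                  [0.0, focal / z, -focal * y / z**2]])
    T = J @ W
    cov2d = T @ cov3d @ T.T                              # EWA screen-space covariance
    cov2d_filtered = cov2d + (filter_px**2) * np.eye(2)  # low-pass dilation
    # Rescale opacity so the filtered Gaussian keeps (roughly) the same total mass,
    # which is what suppresses aliasing when the footprint shrinks below a pixel.
    opacity_scale = np.sqrt(max(np.linalg.det(cov2d), 0.0) / np.linalg.det(cov2d_filtered))
    return cov2d_filtered, opacity_scale

# Toy usage with an isotropic Gaussian 2 m in front of the camera.
cov2d_f, alpha_scale = project_and_filter_gaussian(
    cov3d=np.diag([0.01, 0.01, 0.01]), W=np.eye(3),
    mean_cam=np.array([0.1, -0.05, 2.0]), focal=600.0)
print(cov2d_f, alpha_scale)
```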
Understanding human co-manipulation via motion and haptic information to enable future physical human-robotic collaborations.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date: 2025-06-19 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1480399
Kody Shaw, John L Salmon, Marc D Killpack
{"title":"Understanding human co-manipulation via motion and haptic information to enable future physical human-robotic collaborations.","authors":"Kody Shaw, John L Salmon, Marc D Killpack","doi":"10.3389/fnbot.2025.1480399","DOIUrl":"10.3389/fnbot.2025.1480399","url":null,"abstract":"<p><p>Human teams intuitively and effectively collaborate to move large, heavy, or unwieldy objects. However, understanding of this interaction in literature is limited. This is especially problematic given our goal to enable human-robot teams to work together. Therefore, to better understand how human teams work together to eventually enable intuitive human-robot interaction, in this paper we examine four sub-components of collaborative manipulation (co-manipulation), using motion and haptics. We define co-manipulation as a group of two or more agents collaboratively moving an object. We present a study that uses a large object for co-manipulation as we vary the number of participants (two or three) and the roles of the participants (leaders or followers), and the degrees of freedom necessary to complete the defined motion for the object. In analyzing the results, we focus on four key components related to motion and haptics. Specifically, we first define and examine a static or rest state to demonstrate a method of detecting transitions between the static state and an active state, where one or more agents are moving toward an intended goal. Secondly, we analyze a variety of signals (e.g. force, acceleration, etc.) during movements in each of the six rigid-body degrees of freedom of the co-manipulated object. This data allows us to identify the best signals that correlate with the desired motion of the team. Third, we examine the completion percentage of each task. The completion percentage for each task can be used to determine which motion objectives can be communicated via haptic feedback. Finally, we define a metric to determine if participants divide two degree-of-freedom tasks into separate degrees of freedom or if they take the most direct path. These four components contribute to the necessary groundwork for advancing intuitive human-robot interaction.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1480399"},"PeriodicalIF":2.6,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12222233/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144559877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
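The first analysis component defines a rest state and detects transitions to an active state from motion and haptic signals. A minimal sketch of one way to do this is shown below, assuming a moving-RMS threshold on the signal magnitude; the function name, window length, and threshold are illustrative, and the authors' actual detector may differ.

```python
import numpy as np

def detect_active_segments(signal, fs, window_s=0.25, threshold=0.5):
    """Label each sample of a force/acceleration trace as static (0) or active (1).

    signal   : (N,) or (N, d) array of force or acceleration samples
    fs       : sampling rate in Hz
    window_s : length of the moving RMS window in seconds
    threshold: RMS level above which the team is considered to be moving
    """
    sig = signal.reshape(len(signal), -1)                   # (N, d) even for 1-D input
    mag = np.linalg.norm(sig, axis=1)                       # per-sample magnitude
    mag = mag - np.median(mag)                              # remove static offset (gravity, preload)
    win = max(1, int(window_s * fs))
    kernel = np.ones(win) / win
    rms = np.sqrt(np.convolve(mag**2, kernel, mode="same")) # moving RMS
    active = (rms > threshold).astype(int)
    transitions = np.flatnonzero(np.diff(active)) / fs      # times of static <-> active switches
    return active, transitions

# Toy usage: a simulated force trace that is "active" between 3 s and 6 s.
fs = 200.0
t = np.arange(0, 10, 1 / fs)
force = np.where((t > 3) & (t < 6), 2.0, 0.0) + 0.05 * np.random.default_rng(0).normal(size=t.size)
active, times = detect_active_segments(force, fs)
print(times)  # roughly [3.0, 6.0]
```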
Multimodal fusion image enhancement technique and CFEC-YOLOv7 for underwater target detection algorithm research.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date: 2025-06-19 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1616919
Xiaorong Qiu, Yingzhong Shi
{"title":"Multimodal fusion image enhancement technique and CFEC-YOLOv7 for underwater target detection algorithm research.","authors":"Xiaorong Qiu, Yingzhong Shi","doi":"10.3389/fnbot.2025.1616919","DOIUrl":"10.3389/fnbot.2025.1616919","url":null,"abstract":"<p><p>The underwater environment is more complex than that on land, resulting in severe static and dynamic blurring in underwater images, reducing the recognition accuracy of underwater targets and failing to meet the needs of underwater environment detection. Firstly, for the static blurring problem, we propose an adaptive color compensation algorithm and an improved MSR algorithm. Secondly, for the problem of dynamic blur, we adopt the Restormer network to eliminate the dynamic blur caused by the combined effects of camera shake, camera out-of-focus and relative motion displacement, etc. then, through qualitative analysis, quantitative analysis and underwater target detection on the enhanced dataset, the feasibility of our underwater enhancement method is verified. Finally, we propose a target recognition network suitable for the complex underwater environment. The local and global information is fused through the CCBC module and the ECLOU loss function to improve the positioning accuracy. The FasterNet module is introduced to reduce redundant computations and parameter counting. The experimental results show that the CFEC-YOLOv7 model and the underwater image enhancement method proposed by us exhibit excellent performance, can better adapt to the underwater target recognition task, and have a good application prospect.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1616919"},"PeriodicalIF":2.6,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12222134/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144559876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
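The static-deblurring stage combines adaptive color compensation with an improved MSR. The sketch below shows a generic baseline for both steps: a simple green-guided red-channel compensation followed by a textbook multi-scale retinex. The paper's adaptive compensation and improved MSR are not reproduced here, and the helper names and sigma values are assumptions.

```python
import numpy as np
import cv2  # OpenCV, used only for Gaussian blurring

def compensate_red(img, alpha=1.0):
    """Simple red-channel compensation for underwater images (img in [0, 1], RGB order)."""
    r, g = img[..., 0], img[..., 1]
    r_comp = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g
    out = img.copy()
    out[..., 0] = np.clip(r_comp, 0.0, 1.0)
    return out

def multi_scale_retinex(img, sigmas=(15, 80, 250), eps=1e-6):
    """Basic multi-scale retinex: average of log(I) - log(Gaussian-blurred I)."""
    img = img.astype(np.float64) + eps
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)
        msr += np.log(img) - np.log(blurred + eps)
    msr /= len(sigmas)
    # Stretch each channel back to [0, 1] for display.
    mn = msr.min(axis=(0, 1), keepdims=True)
    mx = msr.max(axis=(0, 1), keepdims=True)
    return (msr - mn) / (mx - mn + eps)

# Toy usage on a random image; replace with a real RGB underwater image scaled to [0, 1].
img = np.random.default_rng(0).uniform(size=(120, 160, 3))
enhanced = multi_scale_retinex(compensate_red(img))
print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))
```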
User recommendation method integrating hierarchical graph attention network with multimodal knowledge graph.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date: 2025-06-18 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1587973
Xiaofei Han, Xin Dou
{"title":"User recommendation method integrating hierarchical graph attention network with multimodal knowledge graph.","authors":"Xiaofei Han, Xin Dou","doi":"10.3389/fnbot.2025.1587973","DOIUrl":"10.3389/fnbot.2025.1587973","url":null,"abstract":"<p><p>In common graph neural network (GNN), although incorporating social network information effectively utilizes interactions between users, it often overlooks the deeper semantic relationships between items and fails to integrate visual and textual feature information. This limitation can restrict the diversity and accuracy of recommendation results. To address this, the present study combines knowledge graph, GNN, and multimodal information to enhance feature representations of both users and items. The inclusion of knowledge graph not only provides a better understanding of the underlying logic behind user interests and preferences but also aids in addressing the cold-start problem for new users and items. Moreover, in improving recommendation accuracy, visual and textual features of items are incorporated as supplementary information. Therefore, a user recommendation model is proposed that integrates hierarchical graph attention network with multimodal knowledge graph. The model consists of four key components: a collaborative knowledge graph neural layer, an image feature extraction layer, a text feature extraction layer, and a prediction layer. The first three layers extract user and item features, and the recommendation is completed in the prediction layer. Experimental results based on two public datasets demonstrate that the proposed model significantly outperforms existing recommendation methods in terms of recommendation performance.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1587973"},"PeriodicalIF":2.6,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12213718/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144553235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
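The model aggregates user and item features over a knowledge graph with hierarchical graph attention while fusing image and text features. The sketch below shows a single-head, GAT-style aggregation over already-fused item features as a minimal illustration; the hierarchical structure, collaborative knowledge-graph layer, and prediction layer are not reproduced, and all names and shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(h, adj, W, a):
    """One graph-attention aggregation over node features h.

    h   : (N, F) node features (e.g., item embeddings already concatenated with
          image and text features projected to a common size)
    adj : (N, N) binary adjacency from the knowledge graph / interaction graph
    W   : (F, F') learnable projection;  a : (2F',) attention vector
    """
    z = h @ W                                                # (N, F')
    scores = np.einsum("if,f->i", z, a[: z.shape[1]])[:, None] \
           + np.einsum("jf,f->j", z, a[z.shape[1]:])[None, :]
    scores = np.where(adj > 0, np.maximum(0.2 * scores, scores), -1e9)  # LeakyReLU + edge mask
    alpha = softmax(scores, axis=1)                          # attention over neighbors
    return np.maximum(alpha @ z, 0.0)                        # ReLU(sum_j alpha_ij * z_j)

# Toy usage: 4 items with 6-d fused features and a small adjacency matrix.
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 6))
adj = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]], dtype=float)
W, a = rng.normal(size=(6, 8)), rng.normal(size=(16,))
print(gat_layer(h, adj, W, a).shape)  # (4, 8)
```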
Context-Aware Enhanced Feature Refinement for small object detection with Deformable DETR.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date: 2025-06-10 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1588565
Donghao Shi, Cunbin Zhao, Jianwen Shao, Minjie Feng, Lei Luo, Bing Ouyang, Jiamin Huang
{"title":"Context-Aware Enhanced Feature Refinement for small object detection with Deformable DETR.","authors":"Donghao Shi, Cunbin Zhao, Jianwen Shao, Minjie Feng, Lei Luo, Bing Ouyang, Jiamin Huang","doi":"10.3389/fnbot.2025.1588565","DOIUrl":"10.3389/fnbot.2025.1588565","url":null,"abstract":"<p><p>Small object detection is a critical task in applications like autonomous driving and ship black smoke detection. While Deformable DETR has advanced small object detection, it faces limitations due to its reliance on CNNs for feature extraction, which restricts global context understanding and results in suboptimal feature representation. Additionally, it struggles with detecting small objects that occupy only a few pixels due to significant size disparities. To overcome these challenges, we propose the Context-Aware Enhanced Feature Refinement Deformable DETR, an improved Deformable DETR network. Our approach introduces Mask Attention in the backbone to improve feature extraction while effectively suppressing irrelevant background information. Furthermore, we propose a Context-Aware Enhanced Feature Refinement Encoder to address the issue of small objects with limited pixel representation. Experimental results demonstrate that our method outperforms the baseline, achieving a 2.1% improvement in mAP.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1588565"},"PeriodicalIF":2.6,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12185399/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144484070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
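The abstract introduces Mask Attention in the backbone to suppress irrelevant background. One simple way such a module can be realized is a learned per-pixel foreground gate on the feature map, sketched below; this is an assumed minimal interpretation, not the authors' Mask Attention module or their Context-Aware Enhanced Feature Refinement Encoder, and all names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mask_attention(feat, w_mask):
    """Suppress background activations in a backbone feature map.

    feat   : (C, H, W) CNN feature map
    w_mask : (C,) weights of a 1x1 convolution that predicts a foreground logit
    Returns the feature map re-weighted by a per-pixel foreground mask, one simple
    way to realize the background-suppression idea described in the abstract.
    """
    logits = np.tensordot(w_mask, feat, axes=([0], [0]))  # (H, W) foreground logits
    mask = sigmoid(logits)                                # soft foreground probability
    return feat * mask[None, :, :], mask

# Toy usage on a random feature map.
rng = np.random.default_rng(0)
feat = rng.normal(size=(256, 32, 32))
w_mask = rng.normal(size=(256,)) * 0.05
refined, mask = mask_attention(feat, w_mask)
print(refined.shape, float(mask.min()), float(mask.max()))
```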
Depth-aware unpaired image-to-image translation for autonomous driving test scenario generation using a dual-branch GAN.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date: 2025-05-30 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1603964
Donghao Shi, Chenxin Zhao, Cunbin Zhao, Zhou Fang, Chonghao Yu, Jian Li, Minjie Feng
{"title":"Depth-aware unpaired image-to-image translation for autonomous driving test scenario generation using a dual-branch GAN.","authors":"Donghao Shi, Chenxin Zhao, Cunbin Zhao, Zhou Fang, Chonghao Yu, Jian Li, Minjie Feng","doi":"10.3389/fnbot.2025.1603964","DOIUrl":"10.3389/fnbot.2025.1603964","url":null,"abstract":"<p><p>Reliable visual perception is essential for autonomous driving test scenario generation, yet adverse weather and lighting variations pose significant challenges to simulation robustness and generalization. Traditional unpaired image-to-image translation methods primarily rely on RGB-based transformations, often resulting in geometric distortions and loss of structural consistency, which can negatively impact the realism and accuracy of generated test scenarios. To address these limitations, we propose a Depth-Aware Dual-Branch Generative Adversarial Network (DAB-GAN) that explicitly incorporates depth information to preserve spatial structures during scenario generation. The dual-branch generator processes both RGB and depth inputs, ensuring geometric fidelity, while a self-attention mechanism enhances spatial dependencies and local detail refinement. This enables the creation of realistic and structure-preserving test environments that are crucial for evaluating autonomous driving perception systems, especially under adverse weather conditions. Experimental results demonstrate that DAB-GAN outperforms existing unpaired image-to-image translation methods, achieving superior visual fidelity and maintaining depth-aware structural integrity. This approach provides a robust framework for generating diverse and challenging test scenarios, enhancing the development and validation of autonomous driving systems under various real-world conditions.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1603964"},"PeriodicalIF":2.6,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12162506/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144301898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
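The generator has two branches (RGB and depth) whose features are combined under a self-attention mechanism. The sketch below shows one plausible fusion step: concatenating the two branches' feature maps and running a single self-attention pass over spatial positions so depth cues can influence the translation at distant pixels. The actual DAB-GAN architecture, losses, and training are not reproduced, and all function and parameter names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dual_branch_fuse(rgb_feat, depth_feat, Wq, Wk, Wv):
    """Fuse RGB and depth feature maps with a single self-attention pass.

    rgb_feat, depth_feat : (C, H, W) features from the two encoder branches
    Wq, Wk, Wv           : (2C, D) projection matrices for queries / keys / values
    Returns a (D, H, W) fused map in which every location attends over all others.
    """
    C, H, W = rgb_feat.shape
    x = np.concatenate([rgb_feat, depth_feat], axis=0).reshape(2 * C, H * W).T  # (HW, 2C)
    q, k, v = x @ Wq, x @ Wk, x @ Wv                                            # (HW, D)
    attn = softmax(q @ k.T / np.sqrt(q.shape[1]), axis=-1)                      # (HW, HW)
    return (attn @ v).T.reshape(-1, H, W)

# Toy usage with small maps so the HW x HW attention matrix stays tiny.
rng = np.random.default_rng(0)
rgb, depth = rng.normal(size=(8, 16, 16)), rng.normal(size=(8, 16, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(dual_branch_fuse(rgb, depth, Wq, Wk, Wv).shape)  # (8, 16, 16)
```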
Gait analysis system for assessing abnormal patterns in individuals with hemiparetic stroke during robot-assisted gait training: a criterion-related validity study in healthy adults.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date: 2025-05-21 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1558009
Issei Nakashima, Daisuke Imoto, Satoshi Hirano, Hitoshi Konosu, Yohei Otaka
{"title":"Gait analysis system for assessing abnormal patterns in individuals with hemiparetic stroke during robot-assisted gait training: a criterion-related validity study in healthy adults.","authors":"Issei Nakashima, Daisuke Imoto, Satoshi Hirano, Hitoshi Konosu, Yohei Otaka","doi":"10.3389/fnbot.2025.1558009","DOIUrl":"10.3389/fnbot.2025.1558009","url":null,"abstract":"<p><strong>Introduction: </strong>Gait robots have the potential to analyze gait characteristics during gait training using mounted sensors in addition to robotic assistance of the individual's movements. However, no systems have been proposed to analyze gait performance during robot-assisted gait training. Our newly developed gait robot,\" Welwalk WW-2000 (WW-2000)\" is equipped with a gait analysis system to analyze abnormal gait patterns during robot-assisted gait training. We previously investigated the validity of the index values for the nine abnormal gait patterns. Here, we proposed new index values for four abnormal gait patterns, which are anterior trunk tilt, excessive trunk shifts over the affected side, excessive knee joint flexion, and swing difficulty; we investigated the criterion validity of the WW-2000 gait analysis system in healthy adults for these new index values.</p><p><strong>Methods: </strong>Twelve healthy participants simulated four abnormal gait patterns manifested in individuals with hemiparetic stroke while wearing the robot. Each participant was instructed to perform 16 gait trials, with four grades of severity for each of the four abnormal gait patterns. Twenty strides were recorded for each gait trial using a gait analysis system in the WW-2000 and video cameras. Abnormal gait patterns were assessed using the two parameters: the index values calculated for each stride from the WW-2000 gait analysis system, and assessor's severity scores for each stride. The correlation of the index values between the two methods was evaluated using the Spearman rank correlation coefficient for each gait pattern in each participant.</p><p><strong>Results: </strong>The median (minimum to maximum) values of Spearman rank correlation coefficient among the 12 participants between the index value calculated using the WW-2000 gait analysis system and the assessor's severity scores for anterior trunk tilt, excessive trunk shifts over the affected side, excessive knee joint flexion, and swing difficulty were 0.892 (0.749-0.969), 0.859 (0.439-0.923), 0.920 (0.738-0.969), and 0.681 (0.391-0.889), respectively.</p><p><strong>Discussion: </strong>The WW-2000 gait analysis system captured four new abnormal gait patterns observed in individuals with hemiparetic stroke with high validity, in addition to nine previously validated abnormal gait patterns. 
Assessing abnormal gait patterns is important as improving them contributes to stroke rehabilitation.</p><p><strong>Clinical trial registration: </strong>https://jrct.niph.go.jp, identifier jRCT 042190109.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1558009"},"PeriodicalIF":2.6,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12133724/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144225249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
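The validity analysis correlates per-stride index values from the robot with an assessor's severity scores using the Spearman rank correlation, computed separately for each participant. A small sketch of that computation is below; the data are synthetic and the helper name is an assumption.

```python
import numpy as np
from scipy.stats import spearmanr

def per_participant_correlations(index_values, assessor_scores):
    """Spearman rank correlation between robot-derived index values and assessor
    severity scores, computed separately for each participant.

    index_values, assessor_scores : dicts mapping participant id -> array of
    per-stride values (both arrays must have the same length for a participant).
    """
    rhos = {}
    for pid in index_values:
        rho, p = spearmanr(index_values[pid], assessor_scores[pid])
        rhos[pid] = (rho, p)
    return rhos

# Toy usage: two simulated participants, 20 strides each, 4 severity grades (0-3).
rng = np.random.default_rng(0)
scores = {pid: rng.integers(0, 4, size=20) for pid in ("P01", "P02")}
indices = {pid: scores[pid] * 1.5 + rng.normal(scale=0.5, size=20) for pid in scores}
for pid, (rho, p) in per_participant_correlations(indices, scores).items():
    print(pid, round(rho, 3), round(p, 4))
```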
Hexapod robot motion planning investigation under the influence of multi-dimensional terrain features.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date: 2025-05-21 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1605938
Chen Chen, Junbo Lin, Bo You, Jiayu Li, Biao Gao
{"title":"Hexapod robot motion planning investigation under the influence of multi-dimensional terrain features.","authors":"Chen Chen, Junbo Lin, Bo You, Jiayu Li, Biao Gao","doi":"10.3389/fnbot.2025.1605938","DOIUrl":"10.3389/fnbot.2025.1605938","url":null,"abstract":"<p><p>To address the challenges arising from the coupled interactions between multi-dimensional terrain features-encompassing both geometric and physical properties of complex field environments-and the locomotion stability of hexapod robots, this paper presents a comprehensive motion planning framework incorporating multi-dimensional terrain information. The proposed methodology systematically extracts multi-dimensional geometric and physical terrain features from a multi-layered environmental map. Based on these features, a traversal cost map is synthesized, and an enhanced A* algorithm is developed that incorporates terrain traversal metrics to optimize path planning safety across complex field environments. Furthermore, the framework introduces a foothold cost map derived from multi-dimensional terrain data, coupled with a fault-tolerant free gait planning algorithm based on foothold cost evaluation. This approach enables dynamic gait modulation to enhance overall locomotion stability while maintaining safe trajectory planning. The efficacy of the proposed framework is validated through both simulation studies and physical experiments on a hexapod robotic platform. Experimental results demonstrate that, compared to conventional hexapod motion planning approaches, the proposed multi-dimensional terrain-aware planning framework significantly enhances both locomotion safety and stability across complex field environments.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1605938"},"PeriodicalIF":2.6,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12133957/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144225250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
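The framework builds a traversal cost map and runs an enhanced A* that folds terrain traversal metrics into the path cost. The sketch below is a generic grid A* with a per-cell terrain cost added to each step, as a minimal stand-in for that idea; the paper's cost synthesis, heuristic, and specific enhancements are not reproduced.

```python
import heapq
import itertools
import numpy as np

def terrain_aware_astar(cost_map, start, goal):
    """A* over a grid whose per-cell value encodes terrain traversal difficulty.

    cost_map    : (H, W) array; higher values mean harder terrain, np.inf blocks a cell
    start, goal : (row, col) tuples
    Returns the list of cells on the cheapest path, or None if no path exists.
    """
    H, W = cost_map.shape
    heuristic = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # admissible: step cost >= 1
    tie = itertools.count()
    open_set = [(heuristic(start), next(tie), 0.0, start, None)]
    came_from, g_score = {}, {start: 0.0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue                          # already expanded via a cheaper route
        came_from[cur] = parent
        if cur == goal:                       # reconstruct the path by walking parents back
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < H and 0 <= nxt[1] < W) or np.isinf(cost_map[nxt]):
                continue
            ng = g + 1.0 + cost_map[nxt]      # unit step cost plus terrain traversal cost
            if ng < g_score.get(nxt, np.inf):
                g_score[nxt] = ng
                heapq.heappush(open_set, (ng + heuristic(nxt), next(tie), ng, nxt, cur))
    return None

# Toy usage: a 10x10 cost map with a high-cost ridge the planner should route around.
cmap = np.zeros((10, 10))
cmap[:8, 5] = 50.0
print(terrain_aware_astar(cmap, (0, 0), (9, 9)))
```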
Analysis and experiment of a positioning and pointing mechanism based on the stick-slip driving principle.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date: 2025-05-15 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1567291
Yongqi Zhu, Juan Li, Jianbin Huang, Weida Li, Gai Liu, Lining Sun
{"title":"Analysis and experiment of a positioning and pointing mechanism based on the stick-slip driving principle.","authors":"Yongqi Zhu, Juan Li, Jianbin Huang, Weida Li, Gai Liu, Lining Sun","doi":"10.3389/fnbot.2025.1567291","DOIUrl":"10.3389/fnbot.2025.1567291","url":null,"abstract":"<p><strong>Introduction: </strong>Traditional positioning and pointing mechanisms often face limitations in simultaneously achieving high speed and high resolution, and their travel range is typically constrained. To overcome these challenges, we propose a novel positioning and pointing mechanism driven by piezoelectric ceramics in this study. This mechanism is capable of achieving both high speed and high resolution by using two driving principles: resonance and stick-slip. This paper will focus on analyzing the stick-slip driving principle.</p><p><strong>Methods: </strong>We propose a configuration of the drive module within the positioning and pointing mechanism. By applying a low-frequency sawtooth wave excitation to the piezoelectric ceramics, the mechanism achieves high resolution based on the stick-slip driving principle. First, a simplified dynamic model of the drive module is established. The motion process of the drive module in stick-slip driving is divided into the stick phase and slip phase. With static and transient dynamic analyses conducted for each phase, the relationship between the output shaft angle, resolution, and driving voltage is derived. It is observed that during the stick phase, the output shaft angle and the driving voltage exhibit an approximately linear relationship, while in the slip phase, the output shaft angle and the driving voltage display nonlinearity due to impact forces and vibrations. Finally, a prototype of the positioning and pointing mechanism is designed, and an experimental platform is constructed to test the resolution of the prototype.</p><p><strong>Results: </strong>We construct a prototype of a dual-axis positioning and pointing mechanism composed of multiple drive modules and conduct resolution tests using two control methods: synchronous control and independent control. When synchronous control is used, the output shaft achieves a resolution of 0.38<i>μrad</i>, while with independent control, the resolution of the output shaft reaches 0.0276<i>μrad</i>.</p><p><strong>Discussion: </strong>The research results show that the positioning and pointing mechanism proposed in this study achieves high resolution through stick-slip driving principle, offering a novel approach for the advancement of such mechanisms.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1567291"},"PeriodicalIF":2.6,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12119557/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144181186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
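The methods describe driving the piezo stack with a low-frequency sawtooth so the output shaft follows the stator during the slow ramp (stick) and slips only partially during the fast flyback, accumulating rotation each cycle. The sketch below is a purely kinematic illustration of that cycle, assuming a fixed slip-back ratio instead of the paper's dynamic model with friction and impact forces; all parameter values and function names are illustrative.

```python
import numpy as np

def sawtooth_drive(t, freq, amplitude):
    """Asymmetric sawtooth voltage: slow rise (90% of the period), fast fall (10%)."""
    phase = (t * freq) % 1.0
    return amplitude * np.where(phase < 0.9, phase / 0.9, (1.0 - phase) / 0.1)

def simulate_stick_slip(cycles=5, fs=100_000, freq=100.0, amp=1.0,
                        gain_urad_per_v=10.0, slip_back_ratio=0.3):
    """Kinematic sketch of stick-slip actuation.

    During the slow ramp the output shaft sticks to the piezo-driven stator and
    follows it (approximately linear in voltage); during the fast flyback the
    stator snaps back but the shaft, due to inertia and limited friction, only
    loses a fraction (slip_back_ratio) of the step, leaving a net rotation per cycle.
    """
    t = np.arange(0, cycles / freq, 1.0 / fs)
    stator = gain_urad_per_v * sawtooth_drive(t, freq, amp)
    shaft = np.zeros_like(stator)
    for i in range(1, len(t)):
        d = stator[i] - stator[i - 1]
        shaft[i] = shaft[i - 1] + (d if d > 0 else slip_back_ratio * d)  # stick vs. slip
    return t, stator, shaft

t, stator, shaft = simulate_stick_slip()
print(f"net rotation after 5 cycles: {shaft[-1]:.2f} urad")
```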