Frontiers in Neurorobotics: Latest Articles

A multimodal educational robots driven via dynamic attention.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-31 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1453061
An Jianliang
Introduction: With the development of artificial intelligence and robotics technology, the application of educational robots in teaching is becoming increasingly popular. However, effectively evaluating and optimizing multimodal educational robots remains a challenge.
Methods: This study introduces Res-ALBEF, a multimodal educational robot framework driven by dynamic attention. Res-ALBEF enhances the ALBEF (Align Before Fuse) method by incorporating residual connections to align visual and textual data more effectively before fusion. In addition, the model integrates a VGG19-based convolutional network for image feature extraction and utilizes a dynamic attention mechanism to focus on the relevant parts of multimodal inputs. The model was trained on a diverse dataset of 50,000 multimodal educational instances covering a variety of subjects and instructional content.
Results and discussion: Evaluation on an independent validation set of 10,000 samples demonstrated significant performance improvements: the model achieved an overall accuracy of 97.38% in educational content recognition. These results highlight the model's ability to improve the alignment and fusion of multimodal information, making it a robust solution for multimodal educational robots.
Frontiers in Neurorobotics, vol. 18, article 1453061. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11560911/pdf/
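For readers unfamiliar with the align-before-fuse idea, the core mechanism (cross-modal attention with a residual shortcut) can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' Res-ALBEF implementation; all dimensions and names here are invented for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def residual_cross_attention(text, image):
    """Cross-attend text queries over image features, with a residual
    connection so the aligned output keeps the original text signal."""
    d = text.shape[-1]
    scores = text @ image.T / np.sqrt(d)  # (n_text, n_img) similarities
    attn = softmax(scores, axis=-1)       # dynamic attention weights
    aligned = attn @ image                # image info routed to each token
    return text + aligned                 # residual: align before fuse

rng = np.random.default_rng(0)
text = rng.normal(size=(4, 8))   # 4 text tokens, dim 8
image = rng.normal(size=(6, 8))  # 6 image patches, dim 8
out = residual_cross_attention(text, image)
print(out.shape)  # (4, 8)
```

The residual term means that even with uninformative attention weights the textual representation passes through unchanged, which is what makes the alignment step safe to stack before fusion.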
Citations: 0
LS-VIT: Vision Transformer for action recognition based on long and short-term temporal difference.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-31 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1457843
Dong Chen, Peisong Wu, Mingdong Chen, Mengtao Wu, Tao Zhang, Chuanqi Li
Over the past few years, a growing number of researchers have dedicated their efforts to temporal modeling. The advent of transformer-based methods has notably advanced the field of 2D image-based vision tasks. However, for 3D video tasks such as action recognition, applying temporal transformations directly to video data significantly increases both computational and memory demands, owing to the multiplication of data patches and the added complexity of self-attention computations. Accordingly, building efficient and precise 3D self-attentive models for video content represents a major challenge for transformers. In our research, we introduce a Long and Short-term Temporal Difference Vision Transformer (LS-VIT). This method incorporates short-term motion details into images by weighting the difference across several consecutive frames, thereby equipping the original image with the ability to model short-term motion. Concurrently, we integrate a module designed to understand long-term motion details, which enhances the model's capacity for long-term motion modeling by directly integrating temporal differences from various segments via motion excitation. Our thorough analysis confirms that LS-VIT achieves high recognition accuracy across multiple benchmarks (e.g., UCF101, HMDB51, Kinetics-400). These results indicate that LS-VIT has potential for further optimization to improve real-time performance and action prediction capabilities.
Frontiers in Neurorobotics, vol. 18, article 1457843. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11560894/pdf/
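The short-term temporal difference idea, folding weighted frame differences back into a centre frame, can be illustrated with a toy clip. The weighting scheme below is a hypothetical simplification for illustration, not the paper's exact formulation.

```python
import numpy as np

def short_term_difference(frames, alpha=0.5):
    """Inject short-term motion into a centre frame by adding the
    weighted mean of differences between consecutive frames.
    (alpha is an invented weighting; LS-VIT's scheme may differ.)"""
    centre = frames[len(frames) // 2]
    diffs = [frames[i + 1] - frames[i] for i in range(len(frames) - 1)]
    motion = alpha * np.mean(diffs, axis=0)
    return centre + motion

# A 5-frame "clip" of 2x2 images whose brightness rises by 1 per frame.
frames = np.stack([np.full((2, 2), float(t)) for t in range(5)])
enhanced = short_term_difference(frames)
print(enhanced[0, 0])  # 2.5: centre value 2.0 plus 0.5 * mean diff 1.0
```

A static clip (all frames equal) would leave the centre frame untouched, so the transform only adds information where motion actually occurs.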
Citations: 0
Neuro-motor controlled wearable augmentations: current research and emerging trends.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-31 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1443010
Haneen Alsuradi, Joseph Hong, Helin Mazi, Mohamad Eid
Wearable augmentations (WAs) designed for movement and manipulation, such as exoskeletons and supernumerary robotic limbs, are used to enhance the physical abilities of healthy individuals and to substitute or restore lost functionality for impaired individuals. Non-invasive neuro-motor (NM) technologies, including electroencephalography (EEG) and surface electromyography (sEMG), promise direct and intuitive communication between the brain and the WA. After presenting a historical perspective, this review proposes a conceptual model for NM-controlled WAs and analyzes key design aspects, such as hardware design, mounting methods, control paradigms, and sensory feedback, that have direct implications for the user experience and, in the long term, for the embodiment of WAs. The literature is surveyed and categorized into three main areas: hand WAs, upper-body WAs, and lower-body WAs. The review concludes by highlighting the primary findings, challenges, and trends in NM-controlled WAs, motivating researchers and practitioners to further explore and evaluate the development of WAs toward a better quality of life.
Frontiers in Neurorobotics, vol. 18, article 1443010. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11560910/pdf/
Citations: 0
Editorial: Assistive and service robots for health and home applications (RH3 - Robot Helpers in Health and Home).
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-29 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1503038
Paloma de la Puente, Markus Vincze, Diego Guffanti, Daniel Galan
Editorial; no abstract. Frontiers in Neurorobotics, vol. 18, article 1503038. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11554614/pdf/
Citations: 0
Erratum: Swimtrans Net: a multimodal robotic system for swimming action recognition driven via Swin-Transformer.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-21 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1508032
[This corrects the article DOI: 10.3389/fnbot.2024.1452019.] Frontiers in Neurorobotics, vol. 18, article 1508032. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11551536/pdf/
Citations: 0
Based on cross-scale fusion attention mechanism network for semantic segmentation for street scenes.
IF 3.1 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2023-08-31 eCollection Date: 2023-01-01 DOI: 10.3389/fnbot.2023.1204418
Xin Ye, Lang Gao, Jichen Chen, Mingyue Lei
Semantic segmentation is a fundamental task in computer vision: every pixel is assigned a specific semantic class. However, high-accuracy segmentation algorithms are difficult to deploy on embedded systems and mobile devices, and despite the rapid development of semantic segmentation, the balance between speed and accuracy must be improved. To address these problems, we created CFANet, a cross-scale fusion attention mechanism network that fuses feature maps from different scales. We first design a novel efficient residual module (ERM) that applies both dilated convolution and factorized convolution; CFANet is mainly constructed from ERMs. We then design a new multi-branch channel attention mechanism (MCAM) to refine the feature maps at different levels. Experimental results show that CFANet achieves 70.6% and 67.7% mean intersection over union (mIoU) on the Cityscapes and CamVid datasets, respectively, with inference speeds of 118 FPS and 105 FPS on an NVIDIA RTX 2080Ti GPU, using 0.84M parameters.
Frontiers in Neurorobotics, vol. 17, article 1204418. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10501793/pdf/
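A short sketch of why the ERM's two ingredients are cheap: factorizing a 3x3 kernel into 3x1 and 1x3 drops the weight count from 9 to 6 (exactly equivalent for separable kernels), and dilation enlarges the receptive field at no parameter cost. The naive convolution below is for illustration only, not CFANet code.

```python
import numpy as np

def conv2d(x, k, dilation=1):
    """Naive 'valid' 2D convolution with optional dilation."""
    kh, kw = k.shape
    eh, ew = (kh - 1) * dilation + 1, (kw - 1) * dilation + 1
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = (patch * k).sum()
    return out

x = np.arange(36, dtype=float).reshape(6, 6)
k3 = np.ones((3, 3))
# Factorized: a 3x1 followed by a 1x3 kernel covers the same 3x3 window
# with 6 weights instead of 9.
sep = conv2d(conv2d(x, np.ones((3, 1))), np.ones((1, 3)))
full = conv2d(x, k3)
print(np.allclose(sep, full))  # True: the all-ones kernel is separable
# Dilation 2 enlarges the receptive field to 5x5 with the same 9 weights.
print(conv2d(x, k3, dilation=2).shape)  # (2, 2)
```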
Citations: 0
Ring attractor bio-inspired neural network for social robot navigation.
IF 3.1 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2023-08-31 eCollection Date: 2023-01-01 DOI: 10.3389/fnbot.2023.1211570
Jesús D Rivero-Ortega, Juan S Mosquera-Maturana, Josh Pardo-Cabrera, Julián Hurtado-López, Juan D. Hernández, Victor Romero-Cano, David F Ramírez-Moreno
Introduction: We introduce a bio-inspired navigation system for a robot that guides a social agent to a target location while avoiding static and dynamic obstacles. Robot navigation can be accomplished through a model of ring attractor neural networks. This connectivity pattern between neurons enables the generation of stable activity patterns that can represent continuous variables such as heading direction or position. The integration of sensory representation, decision-making, and motor control through ring attractor networks offers a biologically inspired approach to navigation in complex environments.
Methods: The navigation system is divided into perception, planning, and control stages. In simulated experiments in a virtual pedestrian area with various obstacles and dynamic agents, our approach is compared to the widely used Social Force Model and Rapidly-exploring Random Tree Star methods, using the Social Individual Index and Relative Motion Index as metrics.
Results: The results demonstrate the effectiveness of this architecture in guiding a social agent while avoiding obstacles, and the evaluation metrics indicate that our proposal outperforms the widely used Social Force Model.
Discussion: Our approach aims to improve safety and comfort in human-robot interactions. By integrating the Social Individual Index and Relative Motion Index, it accounts for both social comfort and collision avoidance, resulting in better human-robot interaction in crowded environments.
Frontiers in Neurorobotics, vol. 17, article 1211570. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10501606/pdf/
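The ring attractor principle the abstract relies on, a bump of activity that stably encodes a heading, can be demonstrated with a minimal rate model. The cosine connectivity and rectified update below are a generic textbook sketch, not the authors' network.

```python
import numpy as np

n = 32
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
# Ring connectivity: local excitation, broad inhibition (cosine kernel).
w = np.cos(theta[:, None] - theta[None, :]) - 0.5

r = np.maximum(0.0, np.cos(theta - np.pi))  # brief cue at heading pi
for _ in range(50):                         # let the activity bump settle
    r = np.maximum(0.0, w @ r)              # rectified recurrent update
    r /= r.max()                            # keep the scale bounded

heading = theta[int(np.argmax(r))]          # decoded heading persists
print(round(heading, 3))  # 3.142: the bump stays anchored at the cue
```

The point of the demo is that after the cue is removed, the recurrent connectivity alone maintains a localized bump whose position encodes the heading, which is what makes such networks useful as a substrate for navigation.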
Citations: 0
Uncertainty-aware automated assessment of the arm impedance with upper-limb exoskeletons.
IF 3.1 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2023-08-24 eCollection Date: 2023-01-01 DOI: 10.3389/fnbot.2023.1167604
Samuel Tesfazgi, Ronan Sangouard, Satoshi Endo, Sandra Hirche
Providing a high degree of personalization to the specific needs of each patient is invaluable for improving the utility of robot-driven neurorehabilitation. For the desired customization of treatment strategies, precise and reliable estimation of the patient's state becomes important, as it can be used to continuously monitor the patient during training and to document rehabilitation progress. Wearable robotics have emerged as a valuable tool for this quantitative assessment, as actuation and sensing are performed at the joint level. However, upper-limb exoskeletons introduce various sources of uncertainty, which primarily result from the complex interaction dynamics at the physical interface between the patient and the robotic device. These sources of uncertainty must be considered to ensure the correctness of estimation results when performing the clinical assessment of the patient's state. In this work, we analyze these sources of uncertainty and quantify their influence on the estimation of human arm impedance. We argue that this mitigates the risk of relying on overconfident estimates and promotes more precise computational approaches in robot-based neurorehabilitation.
Frontiers in Neurorobotics, vol. 17, article 1167604. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10490610/pdf/
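As background, a common way to estimate arm impedance from joint-level recordings is to fit a second-order model, torque = I*acc + b*vel + k*pos, by least squares; the paper's contribution is analyzing how exoskeleton-induced uncertainty affects such estimates. The synthetic signals, noise level, and parameter values below are invented for illustration and are not the authors' data or method.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 400)
w1, w2 = 2 * np.pi, 6 * np.pi  # two excitation frequencies (so the
# position and acceleration columns are not collinear)
pos = np.sin(w1 * t) + 0.5 * np.sin(w2 * t)
vel = w1 * np.cos(w1 * t) + 0.5 * w2 * np.cos(w2 * t)
acc = -w1**2 * np.sin(w1 * t) - 0.5 * w2**2 * np.sin(w2 * t)

I_true, b_true, k_true = 0.05, 0.4, 12.0  # inertia, damping, stiffness
torque = I_true * acc + b_true * vel + k_true * pos
torque += rng.normal(scale=0.05, size=t.size)  # simulated sensor noise

# Least-squares fit of torque = I*acc + b*vel + k*pos.
A = np.column_stack([acc, vel, pos])
params, *_ = np.linalg.lstsq(A, torque, rcond=None)
print(np.round(params, 2))  # close to [0.05, 0.4, 12.0]
```

In practice the residual and the conditioning of A give a first handle on estimation uncertainty, which is exactly where unmodeled interface dynamics would show up.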
Citations: 0
Metric networks for enhanced perception of non-local semantic information.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2023-08-09 eCollection Date: 2023-01-01 DOI: 10.3389/fnbot.2023.1234129
Jia Li, Yu-Qian Zhou, Qiu-Yan Zhang
Introduction: Metric learning, a fundamental research direction in computer vision, has played a crucial role in image matching. Traditional metric learning methods construct two-branch siamese neural networks to address image matching, but they often overlook cross-source and cross-view scenarios.
Methods: In this article, a multi-branch metric learning model is proposed to address these limitations. The main contributions of this work are as follows. First, we design a multi-branch siamese network model that enhances measurement reliability through information compensation among data points. Second, we construct a non-local information perception and fusion model, which accurately distinguishes positive and negative samples by fusing information at different scales. Third, we enhance the model by integrating semantic information and establishing an information-consistency mapping between the branches, thereby improving robustness in cross-source and cross-view scenarios.
Results: Experiments demonstrating the effectiveness of the proposed method are carried out under homologous, heterogeneous, multi-view, and cross-view conditions. Compared to state-of-the-art algorithms, the proposed algorithm improves the similarity-measurement Recall@10 by approximately 1, 2, 1, and 1%, respectively, under these four conditions.
Discussion: In addition, our work offers an approach for improving the cross-scene applicability of UAV positioning and navigation algorithms.
Frontiers in Neurorobotics, vol. 17, article 1234129. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10445135/pdf/
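As background on two-branch siamese metric learning, the classic contrastive objective can be written in a few lines: matched pairs are pulled together and mismatched pairs are pushed at least a margin apart. This generic loss and the made-up example points stand in for the paper's more elaborate multi-branch model.

```python
import numpy as np

def contrastive_loss(d, same, margin=1.0):
    """Classic siamese objective on a pair distance d: pull matched
    pairs together, push mismatched pairs at least `margin` apart."""
    return d**2 if same else max(0.0, margin - d) ** 2

a, b, c = np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([2.0, 0.0])
d_pos = np.linalg.norm(a - b)  # small distance: matched pair
d_neg = np.linalg.norm(a - c)  # large distance: mismatched pair
print(round(contrastive_loss(d_pos, True), 4),
      contrastive_loss(d_neg, False))  # 0.01 0.0
```

Mismatched pairs already beyond the margin contribute zero loss, so training effort concentrates on the hard negatives near the decision boundary.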
Citations: 0
The hybrid bio-robotic swarm as a powerful tool for collective motion research: a perspective.
IF 2.6 | Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2023-07-14 eCollection Date: 2023-01-01 DOI: 10.3389/fnbot.2023.1215085
Amir Ayali, Gal A Kaminka
Swarming, or collective motion, is ubiquitous in natural systems and instrumental in many technological applications. Accordingly, research interest in this phenomenon is crossing discipline boundaries. A common major question is that of the intricate interactions between the individual, the group, and the environment. There are, however, major gaps in our understanding of swarming systems, very often due to the theoretical difficulty of relating embodied properties to the physical agents: individual animals or robots. Recently, there has been much progress in exploiting the complementary nature of the two disciplines, biology and robotics. This, unfortunately, is still uncommon in swarm research. Specifically, there are very few examples of joint research programs that investigate multiple biological and synthetic agents concomitantly. Here we present a novel research tool enabling a unique, tightly integrated, bio-inspired, and robot-assisted study of major questions in swarm collective motion. Utilizing a quintessential model of collective behavior (locust nymphs) and our recently developed Nymbots (locust-inspired robots), we focus on fundamental questions and gaps in the scientific understanding of swarms, providing novel interdisciplinary insights and sharing ideas across disciplines. The Nymbot-locust bio-hybrid swarm enables the investigation of biological hypotheses that would otherwise be difficult, or even impossible, to test, and the discovery of technological insights that might otherwise remain hidden from view.
Frontiers in Neurorobotics, vol. 17, article 1215085. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10375296/pdf/
Citations: 0