Frontiers in Neurorobotics: Latest Articles

Real-time location of acupuncture points based on anatomical landmarks and pose estimation models.
IF 2.6 · CAS Zone 4 · Computer Science
Frontiers in Neurorobotics Pub Date : 2024-11-08 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1484038
Hadi Sedigh Malekroodi, Seon-Deok Seo, Jinseong Choi, Chang-Soo Na, Byeong-Il Lee, Myunggi Yi
Introduction: Precise identification of acupuncture points (acupoints) is essential for effective treatment, but manual localization by untrained individuals often lacks accuracy and consistency. This study proposes two approaches that use artificial intelligence (AI), specifically computer vision, to automatically and accurately identify acupoints on the face and hand in real time, enhancing both precision and accessibility in acupuncture practice.

Methods: The first approach applies a real-time landmark detection system to locate 38 specific acupoints on the face and hand by translating anatomical landmarks from image data into acupoint coordinates. The second approach uses a convolutional neural network (CNN) optimized for pose estimation to detect five key acupoints on the arm and hand (LI11, LI10, TE5, TE3, LI4), drawing on constrained medical imaging data for training. To validate these methods, we compared the predicted acupoint locations with those annotated by experts.

Results: Both approaches demonstrated high accuracy, with mean localization errors of less than 5 mm compared to expert annotations. The landmark detection system successfully mapped multiple acupoints across the face and hand, even in complex imaging scenarios. The data-driven approach accurately detected the five arm and hand acupoints with a mean Average Precision (mAP) of 0.99 at OKS 50%.

Discussion: These AI-driven methods establish a solid foundation for the automated localization of acupoints, enhancing both self-guided and professional acupuncture practice. By enabling precise, real-time localization of acupoints, these technologies could improve the accuracy of treatments, facilitate self-training, and increase the accessibility of acupuncture. Future developments could expand these models to include additional acupoints and incorporate them into intuitive applications for broader use.

Frontiers in Neurorobotics, vol. 18, 1484038. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11609928/pdf/
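The landmark-to-acupoint translation described in the Methods can be sketched as simple geometric interpolation between detected anatomical landmarks. The landmark names, coordinates, and the linear interpolation rule below are illustrative assumptions, not the paper's actual mapping:

```python
import numpy as np

def acupoint_from_landmarks(landmarks, a, b, t):
    """Place an acupoint at fraction t along the segment between two
    detected anatomical landmarks a and b (2D image coordinates)."""
    pa = np.asarray(landmarks[a], dtype=float)
    pb = np.asarray(landmarks[b], dtype=float)
    return (1.0 - t) * pa + t * pb

# Hypothetical hand landmarks in pixel coordinates (illustrative only)
hand = {"thumb_mcp": (120.0, 200.0), "index_mcp": (180.0, 160.0)}
# Interpolate midway between the two joints
point = acupoint_from_landmarks(hand, "thumb_mcp", "index_mcp", 0.5)
print(point)  # [150. 180.]
```

In practice the landmarks would come from a real-time detector (e.g., a hand or face landmark model), and each acupoint would have its own calibrated interpolation rule.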
Citations: 0
Vahagn: VisuAl Haptic Attention Gate Net for slip detection.
IF 2.6 · CAS Zone 4 · Computer Science
Frontiers in Neurorobotics Pub Date : 2024-11-06 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1484751
Jinlin Wang, Yulong Ji, Hongyu Yang
Introduction: Slip detection is crucial for achieving stable grasping and subsequent manipulation tasks. A grasp is a continuous process that requires information from multiple sources; the success of a specific grasping maneuver depends on two factors: the spatial accuracy of the contact and the stability of the continuous process.

Methods: For the task of perceiving grasp outcomes from visual-haptic information, we propose a new slip-detection method that synergizes visual and haptic information across the spatial and temporal dimensions. The method takes as input a sequence of visual images from a first-person perspective and a sequence of haptic images from a gripper. It extracts time-dependent features of the whole process and spatial features, weighting the importance of different parts with different attention mechanisms. Inspired by neurological studies, during information fusion we adjust the temporal and spatial information from vision and touch through a combination of two-step fusion and gate units.

Results and discussion: To validate the effectiveness of the method, we compared it with a traditional CNN and with attention-based models. Our method achieves a classification accuracy of 93.59%, higher than that of previous works. Attention visualizations are further presented to support its validity.

Frontiers in Neurorobotics, vol. 18, 1484751. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11576469/pdf/
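The gate units mentioned in the Methods can be illustrated with a minimal gated-fusion sketch: a learned sigmoid gate decides, per feature dimension, how much of each modality passes through. The weight shapes and the convex-combination form are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(visual, haptic, W_g, b_g):
    """Fuse visual and haptic feature vectors with a gate: per dimension,
    the gate g in (0, 1) blends the two modalities."""
    z = np.concatenate([visual, haptic])
    g = sigmoid(W_g @ z + b_g)           # gate, same dimension as features
    return g * visual + (1.0 - g) * haptic

rng = np.random.default_rng(0)
d = 8
v = rng.standard_normal(d)               # toy visual features
h = rng.standard_normal(d)               # toy haptic features
W = rng.standard_normal((d, 2 * d)) * 0.1
b = np.zeros(d)
fused = gated_fusion(v, h, W, b)
print(fused.shape)  # (8,)
```

Because the gate output is in (0, 1), each fused dimension lies between the corresponding visual and haptic values, so neither modality can be overwritten entirely.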
Citations: 0
A multimodal educational robots driven via dynamic attention.
IF 2.6 · CAS Zone 4 · Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-31 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1453061
An Jianliang
Introduction: With the development of artificial intelligence and robotics, educational robots are increasingly used in teaching. However, effectively evaluating and optimizing multimodal educational robots remains a challenge.

Methods: This study introduces Res-ALBEF, a multimodal educational robot framework driven by dynamic attention. Res-ALBEF enhances the ALBEF (Align Before Fuse) method by incorporating residual connections to align visual and textual data more effectively before fusion. In addition, the model integrates a VGG19-based convolutional network for image feature extraction and uses a dynamic attention mechanism to focus on relevant parts of the multimodal inputs. The model was trained on a diverse dataset of 50,000 multimodal educational instances covering a variety of subjects and instructional content.

Results and discussion: Evaluation on an independent validation set of 10,000 samples demonstrated significant performance improvements: the model achieved an overall accuracy of 97.38% in educational content recognition. These results highlight the model's ability to improve the alignment and fusion of multimodal information, making it a robust solution for multimodal educational robots.

Frontiers in Neurorobotics, vol. 18, 1453061. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11560911/pdf/
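The idea of a residual connection around a cross-modal alignment step can be sketched as follows. The projection, the tanh nonlinearity, and the dimensions are illustrative assumptions, not the Res-ALBEF design:

```python
import numpy as np

def residual_align(x, W):
    """Project features toward a shared space while keeping a residual
    path, so the original signal is preserved before fusion."""
    return x + np.tanh(x @ W)   # aligned = input + learned correction

rng = np.random.default_rng(1)
x = rng.standard_normal((2, 4))          # two feature vectors, dim 4
W = rng.standard_normal((4, 4)) * 0.1    # toy alignment weights
aligned = residual_align(x, W)
print(aligned.shape)  # (2, 4)
```

The residual path guarantees that if the learned correction is zero, the input passes through unchanged, which typically stabilizes training of the alignment step.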
Citations: 0
LS-VIT: Vision Transformer for action recognition based on long and short-term temporal difference.
IF 2.6 · CAS Zone 4 · Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-31 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1457843
Dong Chen, Peisong Wu, Mingdong Chen, Mengtao Wu, Tao Zhang, Chuanqi Li
Over the past few years, a growing number of researchers have focused on temporal modeling. Transformer-based methods have notably advanced 2D image-based vision tasks. However, for 3D video tasks such as action recognition, applying temporal transformations directly to video data significantly increases both computational and memory demands, owing to the multiplication of data patches and the added complexity of self-attention computations. Building efficient and precise 3D self-attention models for video content therefore represents a major challenge for transformers. In this work, we introduce a Long and Short-term Temporal Difference Vision Transformer (LS-VIT). The method incorporates short-term motion details into images by weighting the differences across several consecutive frames, equipping the original image with the ability to model short-term motion. Concurrently, we integrate a module designed to capture long-term motion details, which enhances long-term motion modeling by directly integrating temporal differences from various segments via motion excitation. Our analysis confirms that LS-VIT achieves high recognition accuracy across multiple benchmarks (e.g., UCF101, HMDB51, Kinetics-400). These results indicate that LS-VIT has potential for further optimization to improve real-time performance and action-prediction capability.

Frontiers in Neurorobotics, vol. 18, 1457843. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11560894/pdf/
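The short-term mechanism (weighted differences of consecutive frames added to a reference frame) can be sketched in a few lines. The weights, clip shape, and choice of the center frame as the reference are assumptions for illustration, not the LS-VIT specifics:

```python
import numpy as np

def short_term_motion(frames, weights):
    """Inject short-term motion into the center frame by adding a
    weighted sum of differences between consecutive frames."""
    frames = np.asarray(frames, dtype=float)       # (T, H, W)
    diffs = frames[1:] - frames[:-1]               # (T-1, H, W)
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    motion = (w * diffs).sum(axis=0)               # aggregated motion map
    center = frames[len(frames) // 2]              # reference frame
    return center + motion

# Toy clip: 5 constant frames whose intensity increases by 1 per frame,
# so every consecutive difference is a frame of ones
clip = np.stack([np.full((4, 4), t, dtype=float) for t in range(5)])
out = short_term_motion(clip, weights=[0.1, 0.2, 0.2, 0.1])
print(out[0, 0])  # 2.6 (center frame value 2.0 + motion 0.6)
```

A static clip (all frames identical) would leave the reference frame unchanged, since every difference is zero.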
Citations: 0
Neuro-motor controlled wearable augmentations: current research and emerging trends.
IF 2.6 · CAS Zone 4 · Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-31 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1443010
Haneen Alsuradi, Joseph Hong, Helin Mazi, Mohamad Eid
Wearable augmentations (WAs) designed for movement and manipulation, such as exoskeletons and supernumerary robotic limbs, are used to enhance the physical abilities of healthy individuals and to substitute or restore lost functionality for impaired individuals. Non-invasive neuro-motor (NM) technologies, including electroencephalography (EEG) and surface electromyography (sEMG), promise direct and intuitive communication between the brain and the WA. After presenting a historical perspective, this review proposes a conceptual model for NM-controlled WAs and analyzes key design aspects (hardware design, mounting methods, control paradigms, and sensory feedback) that have direct implications for the user experience and, in the long term, for the embodiment of WAs. The literature is surveyed and categorized into three main areas: hand WAs, upper-body WAs, and lower-body WAs. The review concludes by highlighting the primary findings, challenges, and trends in NM-controlled WAs, motivating researchers and practitioners to further explore and evaluate the development of WAs toward a better quality of life.

Frontiers in Neurorobotics, vol. 18, 1443010. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11560910/pdf/
Citations: 0
Editorial: Assistive and service robots for health and home applications (RH3 - Robot Helpers in Health and Home).
IF 2.6 · CAS Zone 4 · Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-29 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1503038
Paloma de la Puente, Markus Vincze, Diego Guffanti, Daniel Galan
(No abstract available.)

Frontiers in Neurorobotics, vol. 18, 1503038. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11554614/pdf/
Citations: 0
A modified A* algorithm combining remote sensing technique to collect representative samples from unmanned surface vehicles.
IF 2.6 · CAS Zone 4 · Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-22 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1488337
Lei Wang, Danping Liu, Jun Wang
Ensuring the representativeness of collected samples is the most critical requirement of water sampling. Unmanned surface vehicles (USVs) have been widely adopted for water sampling, but current USV sampling path planning tends to overemphasize path optimization while neglecting the collection of representative samples. This study proposes a modified A* algorithm that incorporates remote sensing while considering both path length and the representativeness of collected samples. Water quality parameters were first retrieved from satellite remote sensing imagery using a deep belief network model, and the retrieved parameter value was incorporated as the coefficient Q in the heuristic function of the A* algorithm. An adjustment coefficient k was then introduced into Q to tune the trade-off between sampling representativeness and path length. To evaluate the algorithm, chlorophyll-a concentration (Chl-a) was used as the test parameter, with Chaohu Lake as the study area. Results showed that the algorithm effectively collects more representative samples under real-world conditions: as k increased, the representativeness of the collected samples improved, with the sampled Chl-a closely approximating the overall mean Chl-a and exhibiting a gradient distribution, at the cost of increased path length. This study contributes to USV water sampling and water-environment protection.

Frontiers in Neurorobotics, vol. 18, 1488337. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11535655/pdf/
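The general idea of biasing A* toward representative cells can be sketched as below. The exact form in which Q and k enter the cost here (an additive per-cell penalty k * (1 - Q)) is an assumption for illustration, not the paper's formula:

```python
import heapq
import itertools
import numpy as np

def modified_a_star(grid_q, start, goal, k=1.0):
    """A* on a grid where each cell carries a representativeness score
    Q in [0, 1]. Entering a cell costs 1 step plus a penalty k * (1 - Q),
    steering the path through high-Q (representative) cells. Larger k
    trades path length for representativeness."""
    rows, cols = grid_q.shape

    def h(p):  # Manhattan distance; admissible since each step costs >= 1
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = itertools.count()  # tie-breaker so heap never compares tuples
    open_heap = [(h(start), next(tie), 0.0, start, None)]
    came_from, best_g = {}, {start: 0.0}
    while open_heap:
        _, _, g, cur, parent = heapq.heappop(open_heap)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols:
                ng = g + 1.0 + k * (1.0 - grid_q[nb])
                if ng < best_g.get(nb, float("inf")):
                    best_g[nb] = ng
                    heapq.heappush(open_heap, (ng + h(nb), next(tie), ng, nb, cur))
    return None

# Synthetic 3x4 field of representativeness scores: the direct top row
# passes through low-Q cells, the middle row is high-Q throughout
Q = np.array([[0.9, 0.1, 0.1, 0.9],
              [0.9, 0.9, 0.9, 0.9],
              [0.1, 0.1, 0.1, 0.9]])
path = modified_a_star(Q, (0, 0), (0, 3), k=5.0)
print(path)
```

With a large k, the planner detours through the high-Q middle row rather than taking the shorter low-Q route; with k = 0 it degenerates to plain shortest-path A*.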
Citations: 0
TL-CStrans Net: a vision robot for table tennis player action recognition driven via CS-Transformer.
IF 2.6 · CAS Zone 4 · Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-21 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1443177
Libo Ma, Yan Tong
The application of robotics in sports training and competition is growing rapidly. Traditional methods rely mainly on image or video data, neglecting the effective use of textual information. To address this, we propose TL-CStrans Net: a vision robot for table tennis player action recognition driven via CS-Transformer. It is a multimodal approach that combines the CS-Transformer, CLIP, and transfer learning to integrate visual and textual information. First, we employ the CS-Transformer model as the neural computing backbone to process visual information extracted from table tennis game scenes, enabling accurate stroke recognition. Second, we introduce the CLIP model, which combines computer vision and natural language processing to jointly learn representations of images and text, aligning the visual and textual modalities. Finally, to reduce training and computational requirements, we apply transfer learning from pre-trained CS-Transformer and CLIP models, which have already acquired knowledge from relevant domains, to the table tennis stroke recognition task. Experimental results demonstrate the strong performance of TL-CStrans Net in table tennis stroke recognition. This research promotes the application of multimodal robotics in sports and helps bridge neural computing, computer vision, and neuroscience.

Frontiers in Neurorobotics, vol. 18, 1443177. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11532032/pdf/
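The CLIP-style alignment step amounts to comparing L2-normalized image and text embeddings with cosine similarity. The toy 2D embeddings below are illustrative; a real system would produce them with pre-trained encoders:

```python
import numpy as np

def clip_similarity(image_emb, text_emb):
    """Cosine-similarity matrix between L2-normalized image and text
    embeddings, as in CLIP-style joint alignment."""
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    return img @ txt.T   # entry [i, j] = similarity of image i and text j

# Toy embeddings: each image aligns with its own caption direction
imgs = np.array([[1.0, 0.0], [0.0, 1.0]])
texts = np.array([[2.0, 0.0], [0.0, 3.0]])
sims = clip_similarity(imgs, texts)
print(sims)  # identity matrix: each image best matches its own caption
```

Stroke recognition can then be posed as retrieval: score a video frame against a set of textual stroke descriptions and pick the highest-similarity label.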
Citations: 0
Erratum: Swimtrans Net: a multimodal robotic system for swimming action recognition driven via Swin-Transformer.
IF 2.6 · CAS Zone 4 · Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-21 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1508032
[This corrects the article DOI: 10.3389/fnbot.2024.1452019.]

Frontiers in Neurorobotics, vol. 18, 1508032. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11551536/pdf/
Citations: 0
Cascade contour-enhanced panoptic segmentation for robotic vision perception.
IF 2.6 · CAS Zone 4 · Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-21 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1489021
Yue Xu, Runze Liu, Dongchen Zhu, Lili Chen, Xiaolin Zhang, Jiamao Li
Panoptic segmentation plays a crucial role in enabling robots to comprehend their surroundings, providing fine-grained scene understanding for robots' intelligent tasks. Although existing methods have made progress, they are prone to failure in areas with weak textures, small objects, and the like. Inspired by biological vision research, we propose a cascaded contour-enhanced panoptic segmentation network, CCPSNet, which attempts to enhance the discriminability of instances through structural knowledge. To acquire the scene structure, a cascade contour detection stream extracts comprehensive scene contours using a channel-regulation structural perception module and a coarse-to-fine cascade strategy. A contour-guided multi-scale feature enhancement stream then boosts discrimination of small objects and weak textures by integrating contour information and multi-scale context features through a structure-aware feature modulation module and an inverse aggregation technique. Experimental results show that our method improves accuracy on the Cityscapes (61.2 PQ) and COCO (43.5 PQ) datasets while remaining robust in simulated real-world scenarios that challenge robots, such as dirty cameras and rainy conditions. The proposed network promises to help robots perceive real scenes. In future work, an unsupervised training strategy could be explored to reduce the training cost.

Frontiers in Neurorobotics, vol. 18, 1489021. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11532083/pdf/
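Contour-guided feature modulation can be illustrated with a FiLM-style sketch: each feature channel is scaled and shifted by an amount driven by the contour response at each pixel. The specific scale/shift form and parameters are assumptions, not CCPSNet's module:

```python
import numpy as np

def contour_modulate(features, contour, gamma, beta):
    """Scale and shift each feature channel by an amount driven by the
    per-pixel contour response (FiLM-style modulation sketch)."""
    # features: (C, H, W); contour: (H, W) in [0, 1]; gamma, beta: (C,)
    scale = 1.0 + gamma[:, None, None] * contour[None]
    shift = beta[:, None, None] * contour[None]
    return features * scale + shift

C, H, W = 3, 4, 4
feats = np.ones((C, H, W))
contour = np.zeros((H, W))
contour[1, :] = 1.0                       # one strong edge row
gamma = np.full(C, 0.5)
beta = np.zeros(C)
out = contour_modulate(feats, contour, gamma, beta)
print(out[0, 1, 0], out[0, 0, 0])  # 1.5 1.0 (amplified on the edge only)
```

The effect is that features along detected contours are emphasized while non-edge regions pass through unchanged, which matches the stated goal of sharpening instance boundaries.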
Citations: 0