Cognitive Robotics: Latest Publications

Development of a user-following mobile robot with a stand-up assistance function
Cognitive Robotics Pub Date: 2022-01-01, DOI: 10.1016/j.cogr.2022.03.003
Shenglin Mu, Satoru Shibata, Tomonori Yamamoto
{"title":"Development of a user-following mobile robot with a stand-up assistance function","authors":"Shenglin Mu,&nbsp;Satoru Shibata,&nbsp;Tomonori Yamamoto","doi":"10.1016/j.cogr.2022.03.003","DOIUrl":"https://doi.org/10.1016/j.cogr.2022.03.003","url":null,"abstract":"<div><p>In this paper, a user-following mobile robot which tracks and follows the user, offering stand-up assistance function is proposed. The proposed robot plays the role of a chair where the user can sit on, and offers a stand-up assistance function compensating the lack of muscle strength. In the proposed robot, a sensing method for buttocks recognition using a depth sensor is proposed. By measuring the distance from the user’s buttocks, the walking state is recognized and the tracking is performed at a fixed distance. As an approach to realize the tracking function, a human tracking method for mobile robots using PD control is constructed. According experimental study, usefulness of the proposed mobile robot with the function of user-following and stand-up assistance is confirmed. The user recognition method and the tracking method using PD control are confirmed effective. With the proposed robot system, improvement in welfare field can be expected.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 83-95"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241322000064/pdfft?md5=eb234cb3880e54b18b3ff70643c72736&pid=1-s2.0-S2667241322000064-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92091649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
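A minimal sketch of the distance-keeping idea behind the PD-based tracking, assuming the robot turns a single depth-sensor range reading into a forward velocity command; the gains, the 0.8 m target distance, and the function name are illustrative, not values from the paper:

```python
KP, KD = 1.2, 0.3        # proportional and derivative gains (hypothetical)
TARGET_DIST = 0.8        # desired following distance in meters (hypothetical)

def pd_follow(distance, prev_error, dt):
    """One control step: map a depth-sensor range reading to a forward
    velocity command; returns (velocity, error) so the caller can keep state."""
    error = distance - TARGET_DIST        # positive -> user pulling ahead
    d_error = (error - prev_error) / dt   # finite-difference derivative term
    velocity = KP * error + KD * d_error  # PD law: v = Kp*e + Kd*de/dt
    return velocity, error

# usage: one 20 Hz control step with the user 1.1 m away
v, e = pd_follow(distance=1.1, prev_error=0.25, dt=0.05)
```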
Resource allocation in UAV assisted air ground intelligent inspection system
Cognitive Robotics Pub Date: 2022-01-01, DOI: 10.1016/j.cogr.2021.12.002
Zhuoya Zhang, Fei Xu, Zengshi Qin, Yue Xie
{"title":"Resource allocation in UAV assisted air ground intelligent inspection system","authors":"Zhuoya Zhang ,&nbsp;Fei Xu ,&nbsp;Zengshi Qin ,&nbsp;Yue Xie","doi":"10.1016/j.cogr.2021.12.002","DOIUrl":"10.1016/j.cogr.2021.12.002","url":null,"abstract":"<div><p>With the progress of power grid technology and intelligent technology, intelligent inspection robot (IR) came into being and are expected to become the main force of substation inspection in the future. Among them, mobile edge computing provides a promising architecture to meet the explosive growth of communication and computing needs of inspection robot. Inspection robot can transmit the collected High Definition (HD) video to adjacent edge servers for data processing and state research and judgment. However, the communication constraints of long-distance transmission, high reliability and low delay pose challenges to task offloading optimization. Therefore, this paper introduced Unmanned Aerial Vehicle (UAV) and established UAV assisted mobile edge computing system. UAV assisted and mobile edge computing are combined to form edge computing nodes. In this way, it provided communication and computing services to the IR for fast data processing. Specifically, in order to optimize the system energy consumption, a resource allocation strategy based on genetic algorithm is proposed. By optimizing the offloading decision and computing resource allocation of the IRs, the computing task of the IRs are offloaded to an energy-efficient UAV. The experimental results show that the resource allocation strategy based on genetic algorithm can effectively reduce the energy consumption and cost of UAVs and IRs, and effectively realize the reasonable allocation of resources. The results verify the effectiveness and reliability of the algorithm in the real scene.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 1-12"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241321000215/pdfft?md5=52655729279f3a497faeb732baa533df&pid=1-s2.0-S2667241321000215-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80206059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
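The offloading optimization can be illustrated with a toy genetic algorithm over binary offload decisions; the energy numbers, population size, and mutation rate below are invented placeholders, not the paper's formulation:

```python
import random

# Gene i is 1 if task i is offloaded to the UAV edge node, 0 if computed
# locally on the inspection robot. Per-task energies are placeholders.
N_TASKS, POP, GENS, P_MUT = 8, 30, 50, 0.1
E_LOCAL = [4.0, 2.5, 6.0, 3.0, 5.5, 2.0, 4.5, 3.5]    # local computing energy
E_OFFLOAD = [1.5, 3.0, 2.0, 2.5, 1.0, 2.8, 1.8, 2.2]  # transmit + UAV energy

def energy(genome):
    """Total system energy for one offloading decision vector (lower is fitter)."""
    return sum(E_OFFLOAD[i] if g else E_LOCAL[i] for i, g in enumerate(genome))

pop = [[random.randint(0, 1) for _ in range(N_TASKS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=energy)                      # elitist selection: keep best half
    survivors = pop[:POP // 2]
    children = []
    while len(survivors) + len(children) < POP:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, N_TASKS)    # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < P_MUT:           # bit-flip mutation
            i = random.randrange(N_TASKS)
            child[i] = 1 - child[i]
        children.append(child)
    pop = survivors + children

best = min(pop, key=energy)
print("offload decisions:", best, "energy:", energy(best))
```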
A novel level set model initialized with guided filter for automated PET-CT image segmentation
Cognitive Robotics Pub Date: 2022-01-01, DOI: 10.1016/j.cogr.2022.08.003
Shuhua Bai, Xiaojian Qiu, Rongqun Hu, Yunqiang Wu
{"title":"A novel level set model initialized with guided filter for automated PET-CT image segmentation","authors":"Shuhua Bai ,&nbsp;Xiaojian Qiu ,&nbsp;Rongqun Hu ,&nbsp;Yunqiang Wu","doi":"10.1016/j.cogr.2022.08.003","DOIUrl":"10.1016/j.cogr.2022.08.003","url":null,"abstract":"<div><p>Positron emission tomography (PET) and computed tomography (CT) scanner image analysis plays an important role in clinical radiotherapy treatment. PET and CT images provide complementary cues for identifying tumor tissues. In specific, PET images can clearly denote the tumor tissue, whereas this source suffers from the problem of low spatial resolution. On the contrary, CT images have a high resolution, but they can not recognize the tumor from normal tissues. In this work, we firstly fuse PET and CT images by using the guided filter. Then, a region and edge-based level set model is proposed to segment PET-CT fusion images. At last, a normalization term is designed by combining length, distance and H<sup>1</sup> terms with the aim to improve segmentation accuracy. The proposed method was validated in the robust delineation of lung tumor tissues on 20 PET-CT samples. Both qualitative and quantitative results demonstrate significant improvement compared to both the data-independent and deep learning based segmentation methods.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 193-201"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241322000180/pdfft?md5=19e625c37228b4881aaccfb4c3123000&pid=1-s2.0-S2667241322000180-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80496687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
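For the fusion step, a standard single-channel guided filter (He et al.) can be sketched as below, with CT as the guidance image and PET as the filtering input; the window radius and regularization value are assumptions, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """Single-channel guided filter: I is the guidance (CT), p the input (PET),
    both float arrays scaled to [0, 1]; r is the box-window radius."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)         # per-window linear model q = a*I + b
    b = mean_p - a * mean_I
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * I + mean_b         # output keeps CT structure, PET intensity

# usage on random stand-ins for registered CT and PET slices
ct = np.random.rand(128, 128)
pet = np.random.rand(128, 128)
fused = guided_filter(ct, pet)
```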
Spatiotemporal cue fusion-based saliency extraction and its application in video compression
Cognitive Robotics Pub Date: 2022-01-01, DOI: 10.1016/j.cogr.2022.06.003
Ke Li, Zhonghua Luo, Tong Zhang, Yinglan Ruan, Dan Zhou
{"title":"Spatiotemporal cue fusion-based saliency extraction and its application in video compression","authors":"Ke Li ,&nbsp;Zhonghua Luo ,&nbsp;Tong Zhang ,&nbsp;Yinglan Ruan ,&nbsp;Dan Zhou","doi":"10.1016/j.cogr.2022.06.003","DOIUrl":"10.1016/j.cogr.2022.06.003","url":null,"abstract":"<div><p>Extracting salient regions plays an important role in computer vision tasks, e.g., object detection, recognition and video compression. Previous saliency detection study is mostly conducted on individual frames and tends to extract saliency with spatial cues. The development of various motion feature further extends the saliency concept to the motion saliency from videos. In contrast to image-based saliency extraction, video-based saliency extraction is more challenging due to the complicated distractors, e.g., the background dynamics and shadows. In this paper, we propose a novel saliency extraction method by fusing temporal and spatial cues. In specific, the long-term and short-term variations are comprehensively fused to extract the temporal cue, which is then utilized to establish the background guidance for generating the spatial cue. Herein, the long-term variations and spatial cues jointly highlight the contrast between objects and the background, which can solve the problem caused by shadows. The short-term variations contribute to the removal of background dynamics. Spatiotemporal cues are fully exploited to constrain the saliency extraction across frames. The saliency extraction performance of our method is demonstrated by comparing it to both unsupervised and supervised methods. Moreover, this novel saliency extraction model is applied in the video compression tasks, helping to accelerate the video compression task and achieve a larger PSNR value for the region of interest (ROI).</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 177-185"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241322000131/pdfft?md5=181cb8030eca6d4778b64500c49f1fa8&pid=1-s2.0-S2667241322000131-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76038010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
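A toy version of the temporal-cue fusion might combine a slowly updated background model (long-term variation) with frame differencing (short-term variation); the update rate and fusion weights are illustrative assumptions, not the paper's model:

```python
import numpy as np

class TemporalCue:
    """Fuses long-term variation (against a slowly updated background) with
    short-term variation (frame differencing) into a motion-saliency map."""

    def __init__(self, first_frame, alpha=0.02):
        self.bg = first_frame.astype(np.float32)    # long-term background model
        self.prev = first_frame.astype(np.float32)  # previous frame
        self.alpha = alpha                          # background update rate

    def step(self, frame):
        f = frame.astype(np.float32)
        long_term = np.abs(f - self.bg)     # object-vs-background contrast
        short_term = np.abs(f - self.prev)  # suppresses background dynamics
        self.bg = (1 - self.alpha) * self.bg + self.alpha * f
        self.prev = f
        cue = 0.6 * long_term + 0.4 * short_term    # weighted fusion
        return cue / (cue.max() + 1e-8)             # normalized to [0, 1]
```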
Knowledge graph embedding based on semantic hierarchy
Cognitive Robotics Pub Date: 2022-01-01, DOI: 10.1016/j.cogr.2022.06.002
Fan Linjuan, Sun Yongyong, Xu Fei, Zhou Hnghang
{"title":"Knowledge graph embedding based on semantic hierarchy","authors":"Fan Linjuan,&nbsp;Sun Yongyong,&nbsp;Xu Fei,&nbsp;Zhou Hnghang","doi":"10.1016/j.cogr.2022.06.002","DOIUrl":"10.1016/j.cogr.2022.06.002","url":null,"abstract":"<div><p>In view of the current knowledge graph embedding, it mainly focuses on symmetry/opposition, inversion and combination of relationship patterns, and does not fully consider the structure of the knowledge graph. We propose a Knowledge Graph Embedding Based on Semantic Hierarchy (SHKE), which fully considers the information of knowledge graph by fusing the semantic information of the knowledge graph and the hierarchical information. The knowledge graph is mapped to a polar coordinate system, where concentric circles naturally reflect the hierarchy, and entities can be divided into modulus parts and phase parts, and then the modulus part of the polar coordinate system is mapped to the relational vector space through the relational vector, thus the modulus part takes into account the semantic information of the knowledge graph, and the phase part takes into account the hierarchical information. Experiments show that compared with other models, the proposed model improves the knowledge graph link prediction index Hits@10% by about 10% and the accuracy of the triple group classification experiment by about 10%.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 147-154"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266724132200012X/pdfft?md5=eff502f209037b9c55f942f433d918f1&pid=1-s2.0-S266724132200012X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83608118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
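The modulus/phase split in polar coordinates can be illustrated with a HAKE-style triple score; the distance form and weighting below are assumptions about what SHKE's scoring might look like, not the paper's exact model:

```python
import numpy as np

def polar_score(h_m, h_p, r_m, r_p, t_m, t_p, lam=0.5):
    """Lower score = more plausible triple. *_m are modulus (radial) parts,
    *_p are phase (angular) parts in radians, all 1-D arrays."""
    modulus_dist = np.linalg.norm(h_m * r_m - t_m, ord=2)                # radial mismatch
    phase_dist = np.linalg.norm(np.sin((h_p + r_p - t_p) / 2.0), ord=1)  # angular mismatch
    return modulus_dist + lam * phase_dist

# usage with random 50-dimensional head/relation/tail embeddings
d = 50
score = polar_score(*(np.random.rand(d) for _ in range(6)))
```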
Research on plant disease identification based on CNN
Cognitive Robotics Pub Date: 2022-01-01, DOI: 10.1016/j.cogr.2022.07.001
Xuewei Sun, Guohou Li, Peixin Qu, Xiwang Xie, Xipeng Pan, Weidong Zhang
{"title":"Research on plant disease identification based on CNN","authors":"Xuewei Sun ,&nbsp;Guohou Li ,&nbsp;Peixin Qu ,&nbsp;Xiwang Xie ,&nbsp;Xipeng Pan ,&nbsp;Weidong Zhang","doi":"10.1016/j.cogr.2022.07.001","DOIUrl":"https://doi.org/10.1016/j.cogr.2022.07.001","url":null,"abstract":"<div><p>Traditional digital image processing methods extract disease features manually, which have low efficiency and low recognition accuracy. To solve this problem, In this paper, we propose a convolutional neural network architecture FL-EfficientNet (Focal loss EfficientNet), which is used for multi-category identification of plant disease images. Firstly, through the Neural Architecture Search technology, the network width, network depth, and image resolution are adaptively adjusted according to a group of composite coefficients, to improve the balance of network dimension and model stability; Secondly, the valuable features in the disease image are extracted by introducing the moving flip bottleneck convolution and attention mechanism; Finally, the Focal loss function is used to replace the traditional Cross-Entropy loss function, to improve the ability of the network model to focus on the samples that are not easy to identify. The experiment uses the public data set new plant diseases dataset (NPDD) and compares it with ResNet50, DenseNet169, and EfficientNet. The experimental results show that the accuracy of FL-EfficientNet in identifying 10 diseases of 5 kinds of crops is 99.72%, which is better than the above comparison network. At the same time, FL-EfficientNet has the fastest convergence speed, and the training time of 15 epochs is 4.7 h.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 155-163"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241322000143/pdfft?md5=7eb49b1ffcca835453b31264121944ff&pid=1-s2.0-S2667241322000143-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92080103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
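The Focal loss itself is standard and can be written for the multi-class case as below; the gamma and alpha values are common defaults rather than the paper's settings:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Multi-class focal loss. logits: (N, C); targets: (N,) int64 class ids."""
    log_pt = F.log_softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()                      # model probability of the true class
    # (1 - pt)^gamma down-weights easy samples, focusing training on hard ones
    return (-alpha * (1.0 - pt) ** gamma * log_pt).mean()

# usage: 4 samples, 10 disease classes
loss = focal_loss(torch.randn(4, 10), torch.tensor([0, 3, 7, 7]))
```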
Machine learning model for discrimination of mild dementia patients using acoustic features
Cognitive Robotics Pub Date: 2022-01-01, DOI: 10.1016/j.cogr.2021.12.003
Kazu Nishikawa, Kuwahara Akihiro, Rin Hirakawa, Hideaki Kawano, Yoshihisa Nakatoh
{"title":"Machine learning model for discrimination of mild dementia patients using acoustic features","authors":"Kazu Nishikawa,&nbsp;Kuwahara Akihiro,&nbsp;Rin Hirakawa,&nbsp;Hideaki Kawano,&nbsp;Yoshihisa Nakatoh","doi":"10.1016/j.cogr.2021.12.003","DOIUrl":"10.1016/j.cogr.2021.12.003","url":null,"abstract":"<div><p>In previous research on dementia discrimination by voice, a method using multiple acoustic features by machine learning has been proposed. However, they do not focus on speech analysis in mild dementia patients (MCI). Therefore, we propose a dementia discrimination system based on the analysis of vowel utterance features. The analysis results indicated that some cases of dementia appeared in the voice of mild dementia patients. These results can also be used as an index for future improvement of speech sounds in dementia. Taking advantage of these results, we propose an ensemble discrimination system using a classifier with statistical acoustic features and a Neural Network of transformer models, and the F-score is 0.907, which is better than the state-of-the-art methods.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 21-29"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241321000288/pdfft?md5=01f437a574b872e24a624b0dbf0fd73d&pid=1-s2.0-S2667241321000288-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76595547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
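The ensemble step can be sketched as a weighted soft vote over the two models' dementia probabilities; both upstream models, the weight, and the threshold are placeholders, not the paper's configuration:

```python
import numpy as np

def ensemble_predict(p_stat, p_transformer, w=0.5, threshold=0.5):
    """Weighted soft vote of two per-sample dementia probabilities:
    p_stat from the statistical-acoustic-feature classifier,
    p_transformer from the Transformer-based neural network."""
    p = w * np.asarray(p_stat) + (1 - w) * np.asarray(p_transformer)
    return (p >= threshold).astype(int)   # 1 = dementia, 0 = control

print(ensemble_predict([0.8, 0.3], [0.6, 0.2]))  # -> [1 0]
```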
Joint extraction of entities and relations by entity role recognition
Cognitive Robotics Pub Date: 2022-01-01, DOI: 10.1016/j.cogr.2022.11.001
Xi Han, Qi-Ming Liu
{"title":"Joint extraction of entities and relations by entity role recognition","authors":"Xi Han,&nbsp;Qi-Ming Liu","doi":"10.1016/j.cogr.2022.11.001","DOIUrl":"10.1016/j.cogr.2022.11.001","url":null,"abstract":"<div><p>Joint extracting entities and relations from unstructured text is a fundamental task in information extraction and a key step in constructing large knowledge graphs, entities and relations are constructed as relational triples of the form (subject, relation, object) or (s, r, o). Although triple extraction has been extremely successful, there are still continuing challenges due to factors such as entity overlap. Recent work has shown us the excellent performance of joint extraction models, however these methods still suffer from some problems, such as the redundancy prediction problem. Traditional methods for solving the overlap problem require triple extraction under the full class of relations defined in the dataset, however the number of relations in a sentence is much smaller than the full relational class, which leads to a large number of redundant predictions. To solve this problem, this paper decomposes the task into two steps: entity and potential relation extraction and entity-semantic role determination of triples. Specifically, we design several modules to extract the entities and relations in the sentence separately, and we use these entities and relations to construct possible candidate triples and predict the semantic roles (subject or object) of the entities under the relational constraints to obtain the correct triples. In general we propose a model for identifying the semantic roles of entities in triples under relation constraints, which can effectively solve the problem of redundant prediction, We also evaluated our model on two widely used public datasets, and our model achieved advanced performance with F1 scores of 90.8 and 92.4 on NYT and WebNLG, respectively.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 234-241"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241322000210/pdfft?md5=52b08deb4b35e7b962f6357768547469&pid=1-s2.0-S2667241322000210-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80809723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
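A toy illustration of the two-step pipeline: candidate triples are built only from the relations predicted for the sentence, and a pair is kept only when a role classifier assigns a subject and an object under that relation. Here `role_of` stands in for the learned classifier and the example data is invented:

```python
from itertools import permutations

def extract_triples(entities, relations, role_of):
    """Build candidate triples only from the relations predicted for the
    sentence, keeping a pair when the role classifier assigns subject/object."""
    triples = []
    for r in relations:
        for e1, e2 in permutations(entities, 2):
            if role_of(e1, r) == "subject" and role_of(e2, r) == "object":
                triples.append((e1, r, e2))
    return triples

# usage with a hand-written stand-in for the learned role classifier
role = lambda e, r: "subject" if e == "Marie Curie" else "object"
print(extract_triples(["Marie Curie", "Warsaw"], ["born_in"], role))
# -> [('Marie Curie', 'born_in', 'Warsaw')]
```

Because candidates are generated only under the relations actually predicted for the sentence, far fewer (entity, relation, entity) combinations need to be checked than under the dataset's full relation inventory, which is the redundancy the paper targets.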
Eye fatigue estimation using blink detection based on Eye Aspect Ratio Mapping (EARM)
Cognitive Robotics Pub Date: 2022-01-01, DOI: 10.1016/j.cogr.2022.01.003
Akihiro Kuwahara, Kazu Nishikawa, Rin Hirakawa, Hideaki Kawano, Yoshihisa Nakatoh
{"title":"Eye fatigue estimation using blink detection based on Eye Aspect Ratio Mapping(EARM)","authors":"Akihiro Kuwahara,&nbsp;Kazu Nishikawa,&nbsp;Rin Hirakawa,&nbsp;Hideaki Kawano,&nbsp;Yoshihisa Nakatoh","doi":"10.1016/j.cogr.2022.01.003","DOIUrl":"10.1016/j.cogr.2022.01.003","url":null,"abstract":"<div><p>With the advent of the information society, the eyes' health is threatened all over the world. Rules and systems have been proposed to avoid these problems, but most users do not use them due to the physical and time constraints and costs involved and the lack of awareness of eye health. In this paper, we estimate the eye fatigue sensitivity by detecting spontaneous blinks with high accuracy. The experimental results show that the proposed Eye Aspect Ratio Mapping can classify blinks with high accuracy at a low cost. We also found a strong correlation between the median SBR (Spontaneous Blink Rate) and the time between the objective estimation of eye fatigue and the subject's awareness of eye fatigue.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 50-59"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241322000039/pdfft?md5=c2e21075b740c06c6149dbaff21cd926&pid=1-s2.0-S2667241322000039-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90674752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
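The per-frame ratio underlying EARM is the standard Eye Aspect Ratio over six eye landmarks (Soukupová and Čech); the sketch below assumes the usual landmark ordering, and the numeric example is invented:

```python
import numpy as np

def eye_aspect_ratio(p):
    """p: (6, 2) landmarks ordered p1..p6 — p1/p4 eye corners, p2/p3 upper lid,
    p6/p5 lower lid. The ratio drops sharply when the eye closes."""
    v1 = np.linalg.norm(p[1] - p[5])    # ||p2 - p6||
    v2 = np.linalg.norm(p[2] - p[4])    # ||p3 - p5||
    h = np.linalg.norm(p[0] - p[3])     # ||p1 - p4||
    return (v1 + v2) / (2.0 * h)

# usage: an open eye gives an EAR around 0.3, a closed eye well below that
open_eye = np.array([[0, 0], [2, 1], [4, 1], [6, 0], [4, -1], [2, -1]], float)
print(eye_aspect_ratio(open_eye))  # (2 + 2) / (2 * 6) ≈ 0.33
```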
Pinocchio: A language for action representation
Cognitive Robotics Pub Date: 2022-01-01, DOI: 10.1016/j.cogr.2022.03.007
Pietro Morasso, Vishwanathan Mohan
{"title":"Pinocchio: A language for action representation","authors":"Pietro Morasso ,&nbsp;Vishwanathan Mohan","doi":"10.1016/j.cogr.2022.03.007","DOIUrl":"10.1016/j.cogr.2022.03.007","url":null,"abstract":"<div><p>The development of a language of action representation is a central issue for cognitive robotics, motor neuroscience, ergonomics, sport, and arts with a double goal: analysis and synthesis of action sequences that preserve the spatiotemporal invariants of biological motion, including the associated goals of learning and training. However, the notation systems proposed so far only achieved inconclusive results. By reviewing the underlying rationale of such systems, it is argued that the common flaw is the choice of the ‘primitives’ to be combined to produce complex gestures: basic movements with a different degree of “granularity”. The problem is that in motor cybernetics movements do not add: whatever the degree of granularity of the chosen primitives their simple summation is unable to produce the spatiotemporal invariants that characterize biological motion. The proposed alternative is based on the Equilibrium Point Hypothesis and, in particular, on a computational formulation named Passive Motion Paradigm, where whole-body gestures are produced by applying a small set of force fields to specific key points of the internal body schema: its animation by carefully selected force fields is analogous to the animation of a marionette using wires or strings. The crucial point is that force fields do add, thus suggesting to use force fields as a consistent set of primitives instead of basic movements. This is the starting point for suggesting a force field-based language of action representation, named Pinocchio in analogy with the famous marionette. The proposed language for action description and generation includes three main modules: 1) Primitive force field generators, 2) a Body-Model to be animated by the primitive generators, and 3) a graphical staff system for expressing any specific notated gesture. We suggest that such language is a crucial building block for the development of a cognitive architecture of cooperative robots.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 119-131"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241322000106/pdfft?md5=a0ea6d039e0a4dc852711de82c9c4bd5&pid=1-s2.0-S2667241322000106-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91431809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
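The additivity of force fields, the core of the argument for the Passive Motion Paradigm, can be shown with a single 2D key point relaxing along the sum of two linear attractive fields; the targets, stiffnesses, and step size are illustrative, not taken from the paper:

```python
import numpy as np

def field(x, target, K):
    """Linear attractive force field pulling x toward target with stiffness K."""
    return K * (target - x)

x = np.array([0.0, 0.0])                 # one key point of the body schema
fields = [(np.array([1.0, 0.5]), 2.0),   # (target, stiffness) pairs: the
          (np.array([0.2, 1.0]), 1.0)]   # "wires" pulling on the marionette

dt = 0.01
for _ in range(500):                     # passive relaxation along the summed field
    total = sum(field(x, t, K) for t, K in fields)
    x = x + total * dt

# equilibrium = stiffness-weighted average of the targets: ~[0.73, 0.67]
print(x)
```

Unlike summing two movements, summing the fields yields a well-defined equilibrium and a smooth relaxation toward it, which is why fields are the better candidate primitives.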