Cognitive Robotics: Latest Publications

Visual-based data exchange system for internal and external networks in physical isolation
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.08.002
Xin Jin, Fengyi Li, Xiaodong Li, Weihan Tian, Biao Wang, Li Chen, Xin Wang
{"title":"Visual-based data exchange system for internal and external networks in physical isolation","authors":"Xin Jin ,&nbsp;Fengyi Li ,&nbsp;Xiaodong Li ,&nbsp;Weihan Tian ,&nbsp;Biao Wang ,&nbsp;Li Chen ,&nbsp;Xin Wang","doi":"10.1016/j.cogr.2021.08.002","DOIUrl":"10.1016/j.cogr.2021.08.002","url":null,"abstract":"<div><p>How to realize data transmission between Intranet and extranet devices in physical isolation is an important problem. QR code can be used to identify text, image information, can be used as a carrier of information exchange. In this paper, QR code technology is used to propose a data transmission system between the Internet and the Internet under the physical isolation state based on visual recognition, which makes the information transmitted only through visible light, enhances the confidentiality, and can meet the daily office needs. At the sending end, the division of the transferred file is completed and multiple corresponding QR codes are generated. After the receiving end scans and recognizes the QR code, it finally forms a complete file identical to the original file. Then, the multi-QR code picture transmission was upgraded to the QR code stream transmission mode, and the transmission efficiency was improved by more than 10 times compared with similar work. In face of the real needs, we have added the function of retrieving pictures or database information from another computer and sending it back to the sending computer according to the instructions of the sending end. The paper also carries on the test analysis to each work.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 134-144"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.08.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88897848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
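The sender-side pipeline described above (split the file, encode each chunk as a QR code, reassemble on the receiving side) can be sketched as follows. This is a rough illustration only, assuming the third-party `qrcode` package; the chunk size, payload header, and file names are invented for the example rather than taken from the paper, and the camera-based scanning/decoding step on the receiving side is omitted.

```python
# Minimal sketch of sender-side file-to-QR-code chunking and receiver-side reassembly.
# Illustrative only: chunk size, header layout, and file names are assumptions.
import base64
import qrcode  # pip install qrcode[pil]

CHUNK_SIZE = 800  # bytes per QR code; real capacity depends on QR version and error correction


def file_to_qr_images(path, out_prefix="chunk"):
    """Split a file into base64 chunks and render one QR code image per chunk."""
    with open(path, "rb") as f:
        data = f.read()
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for idx, chunk in enumerate(chunks):
        # A simple "index/total:payload" header lets the receiver reassemble in order.
        payload = f"{idx}/{len(chunks)}:" + base64.b64encode(chunk).decode("ascii")
        qrcode.make(payload).save(f"{out_prefix}_{idx:04d}.png")
    return len(chunks)


def reassemble(decoded_payloads):
    """Given the decoded QR payload strings (in any order), rebuild the original bytes."""
    parts, total = {}, 0
    for p in decoded_payloads:
        header, b64 = p.split(":", 1)
        idx, total = map(int, header.split("/"))
        parts[idx] = base64.b64decode(b64)
    assert len(parts) == total, "missing chunks"
    return b"".join(parts[i] for i in range(total))


if __name__ == "__main__":
    with open("demo.bin", "wb") as f:
        f.write(b"hello, isolated network " * 100)
    print("generated", file_to_qr_images("demo.bin"), "QR images")
```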
Simulations versus tests for dynamic engagement characteristics of wet clutch
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.07.002
Zhen Zhang, Liucun Zhu, Xiaodong Zheng
{"title":"Simulations versus tests for dynamic engagement characteristics of wet clutch","authors":"Zhen Zhang,&nbsp;Liucun Zhu,&nbsp;Xiaodong Zheng","doi":"10.1016/j.cogr.2021.07.002","DOIUrl":"10.1016/j.cogr.2021.07.002","url":null,"abstract":"<div><p>In this paper, the dynamic engagement characteristics of wet clutch are simulated by finite element method. In the fluid friction, the average Reynolds equation is amended and dimensionless parameters are involved, which is applied to calculate the viscous torque. In the boundary friction, a surface elastic contact model is established to calculate rough contact torque. In the mixed friction, total torque consists of viscus torque and rough contact torque. Experimental comparisons between the simulations and the SAE#2 bench tests are provide to verify the validity of the proposed method, the engagement time errors, the output torques maximum errors and the output torques average errors are utmost 4.86%, 3.87% and 0.73% respectively. The proposed method can be used to guide the design of wet clutches in early stages of product development.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 125-133"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.07.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88155385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
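The abstract's central decomposition, total engagement torque as the sum of a viscous term and a rough-contact term, can be illustrated with a deliberately simplified toy model. The sketch below does not implement the paper's amended average Reynolds equation or its elastic contact model; every parameter value and the hand-off law between fluid and boundary friction are assumptions for illustration only.

```python
# Toy illustration of the mixed-friction idea: total torque = viscous + rough-contact.
# All parameter values and the simple film/pressure laws below are assumptions,
# not the paper's amended average Reynolds equation or elastic contact model.
import numpy as np

J = 0.8          # driven-side inertia (kg*m^2), assumed
mu_oil = 0.08    # oil dynamic viscosity (Pa*s), assumed
mu_c = 0.12      # boundary friction coefficient, assumed
A = 0.02         # friction face area (m^2), assumed
r = 0.1          # effective friction radius (m), assumed
h0 = 50e-6       # initial oil film thickness (m), assumed


def torques(omega_rel, t):
    """Return (viscous, rough-contact) torque at relative speed omega_rel and time t."""
    h = max(h0 * np.exp(-t / 0.3), 2e-6)             # film squeezed out over time (assumed law)
    p = 1.5e6 * min(t / 0.5, 1.0)                    # applied pressure ramp (Pa), assumed
    contact_share = np.clip(1.0 - h / h0, 0.0, 1.0)  # crude hand-off from fluid to boundary friction
    t_visc = mu_oil * A * r * (omega_rel * r) / h * (1.0 - contact_share)
    t_cont = mu_c * p * A * r * contact_share * np.sign(omega_rel)
    return t_visc, t_cont


omega_rel, t, dt = 150.0, 0.0, 1e-4                  # initial slip speed (rad/s), assumed
while omega_rel > 1e-2:
    t_visc, t_cont = torques(omega_rel, t)
    omega_rel -= (t_visc + t_cont) / J * dt          # slip decays under the total torque
    t += dt
print(f"toy engagement time: {t:.3f} s")
```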
Vision-based intelligent path planning for SCARA arm
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.09.002
Yogesh Gautam, Bibek Prajapati, Sandeep Dhakal, Bibek Pandeya, Bijendra Prajapati
{"title":"Vision-based intelligent path planning for SCARA arm","authors":"Yogesh Gautam ,&nbsp;Bibek Prajapati ,&nbsp;Sandeep Dhakal ,&nbsp;Bibek Pandeya ,&nbsp;Bijendra Prajapati","doi":"10.1016/j.cogr.2021.09.002","DOIUrl":"10.1016/j.cogr.2021.09.002","url":null,"abstract":"<div><p>This paper proposes a novel algorithm combining object detection and potential field algorithm for autonomous operation of SCARA arm. The start, obstacles, and goal states are located and detected through the RetinaNet Model. The model uses standard pre-trained weights as checkpoints which is trained with images from the working environment of the SCARA arm. The potential field algorithm then plans a suitable path from start to goal state avoiding obstacle state based on results from the object detection model. The algorithm is tested with a real prototype with promising results.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 168-181"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241321000161/pdfft?md5=e9df1be748e973a1418b8b610e72d135&pid=1-s2.0-S2667241321000161-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83111735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
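A minimal 2-D potential field planner of the kind the paper pairs with its detector might look like the sketch below. The start, goal, and obstacle coordinates stand in for RetinaNet detections, and the gains, influence radius, and step size are assumed values rather than the paper's settings.

```python
# Minimal 2-D potential-field planner sketch. Start/goal/obstacle coordinates stand in
# for RetinaNet detections; gains, influence radius, and step size are assumed values.
import numpy as np

K_ATT, K_REP, RHO0, STEP = 1.0, 0.5, 1.0, 0.05


def attractive_grad(q, goal):
    return K_ATT * (q - goal)                        # gradient of 0.5*k*||q - goal||^2


def repulsive_grad(q, obstacles):
    grad = np.zeros(2)
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 1e-9 < d < RHO0:                          # only obstacles inside the influence radius push back
            grad += K_REP * (1.0 / RHO0 - 1.0 / d) / d**2 * (q - obs) / d
    return grad


def plan(start, goal, obstacles, max_iter=2000, tol=0.05):
    goal = np.array(goal, float)
    obstacles = [np.array(o, float) for o in obstacles]
    q, path = np.array(start, float), [np.array(start, float)]
    for _ in range(max_iter):
        grad = attractive_grad(q, goal) + repulsive_grad(q, obstacles)
        q = q - STEP * grad                          # descend the combined potential
        path.append(q.copy())
        if np.linalg.norm(q - goal) < tol:
            break
    return np.array(path)


path = plan(start=(0.0, 0.0), goal=(4.0, 3.0), obstacles=[(2.0, 1.0), (3.2, 1.8)])
print(f"reached {path[-1].round(2)} in {len(path)} steps")
```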
MetaSeg: A survey of meta-learning for image segmentation
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.06.003
Jiaxing Sun, Yujie Li
{"title":"MetaSeg: A survey of meta-learning for image segmentation","authors":"Jiaxing Sun,&nbsp;Yujie Li","doi":"10.1016/j.cogr.2021.06.003","DOIUrl":"10.1016/j.cogr.2021.06.003","url":null,"abstract":"<div><p>Big data-driven deep learning methods have been widely used in image or video segmentation. However, in practical applications, training a deep learning model requires a large amount of labeled data, which is difficult to achieve. Meta-learning, as one of the most promising research areas in the field of artificial intelligence, is believed to be a key tool for approaching artificial general intelligence. Compared with the traditional deep learning algorithm, meta-learning can update the learning task quickly and complete the corresponding learning with less data. To the best of our knowledge, there exist few researches in the meta-learning-based visual segmentation. To this end, this paper summarizes the algorithms and current situation of image or video segmentation technologies based on meta-learning and point out the future trends of meta-learning. Meta-learning has the characteristics of segmentation that based on semi-supervised or unsupervised learning, all the recent novel methods are summarized in this paper. The principle, advantages and disadvantages of each algorithms are also compared and analyzed.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 83-91"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.06.003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89860935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
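The "adapt quickly from little data" idea the abstract attributes to meta-learning is easiest to see in a gradient-based meta-learner such as MAML. The sketch below uses the first-order variant on toy regression tasks; the task distribution, model, and step sizes are assumptions for illustration and are unrelated to the segmentation methods surveyed in the paper.

```python
# First-order MAML sketch on toy linear-regression tasks: the inner loop adapts to a
# task from a tiny support set, the outer loop improves the shared initialization.
# Task distribution, model, and learning rates are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)


def sample_task():
    """A task is a random line y = a*x + b; support/query sets are tiny samples from it."""
    a, b = rng.uniform(-2, 2, size=2)
    x_s, x_q = rng.uniform(-1, 1, 5), rng.uniform(-1, 1, 5)
    return (x_s, a * x_s + b), (x_q, a * x_q + b)


def grad(w, x, y):
    """Gradient of mean squared error for the model y_hat = w[0]*x + w[1]."""
    err = w[0] * x + w[1] - y
    return np.array([np.mean(2 * err * x), np.mean(2 * err)])


w = np.zeros(2)                                       # meta-parameters (shared initialization)
inner_lr, outer_lr = 0.1, 0.01
for step in range(2000):
    (x_s, y_s), (x_q, y_q) = sample_task()
    w_task = w - inner_lr * grad(w, x_s, y_s)         # inner loop: one adaptation step on the support set
    w = w - outer_lr * grad(w_task, x_q, y_q)         # outer loop: first-order meta-update from the query loss

# After meta-training, one inner step on 5 points adapts the initialization toward a new task.
(x_s, y_s), (x_q, y_q) = sample_task()
w_adapted = w - inner_lr * grad(w, x_s, y_s)
print("query MSE after one adaptation step:", np.mean((w_adapted[0] * x_q + w_adapted[1] - y_q) ** 2))
```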
A survey on robots controlled by motor imagery brain-computer interfaces
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.02.001
Jincai Zhang, Mei Wang
{"title":"A survey on robots controlled by motor imagery brain-computer interfaces","authors":"Jincai Zhang,&nbsp;Mei Wang","doi":"10.1016/j.cogr.2021.02.001","DOIUrl":"10.1016/j.cogr.2021.02.001","url":null,"abstract":"<div><p>A brain-computer interface (BCI) can provide a communication approach conveying brain information to the outside. Especially, the BCIs based on motor imagery play the important role for the brain-controlled robots, such as the rehabilitation robots, the wheelchair robots, the nursing bed robots, the unmanned aerial vehicles and so on. In this paper, the developments of the robots based on motor imagery BCIs are reviewed from three aspects: the electroencephalogram (EEG) evocation paradigms, the signal processing algorithms and the applications. First, the different types of the brain-controlled robots are reviewed and classified from the perspective of the evocation paradigms. Second, the relevant algorithms for the EEG signal processing are introduced, which including feature extraction methods and the classification algorithms. Third, the applications of the motor imagery brain-controlled robots are summarized. Finally, the current challenges and the future research directions of the robots controlled by the motor imagery BCIs are discussed.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 12-24"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.02.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"94276831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
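As an example of the feature extraction and classification stages such surveys cover, the sketch below runs a classic motor-imagery pipeline: Common Spatial Patterns (CSP) filtering, log-variance features, and a linear classifier. The data are synthetic, and the epoch shapes, number of filters, and classifier choice are assumptions, not a pipeline taken from the survey.

```python
# Sketch of a classic motor-imagery pipeline: CSP spatial filters, log-variance
# features, linear classifier. Synthetic data; shapes and choices are assumptions.
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import LogisticRegression


def csp(epochs_a, epochs_b, n_filters=4):
    """Common Spatial Patterns: epochs_* have shape (trials, channels, samples)."""
    cov = lambda e: np.mean([x @ x.T / np.trace(x @ x.T) for x in e], axis=0)
    Ca, Cb = cov(epochs_a), cov(epochs_b)
    evals, evecs = eigh(Ca, Ca + Cb)                 # generalized eigenproblem
    order = np.argsort(evals)
    pick = np.concatenate([order[:n_filters // 2], order[-n_filters // 2:]])
    return evecs[:, pick].T                          # (n_filters, channels) spatial filters


def features(epochs, W):
    """Log-variance of CSP-filtered signals, one feature per spatial filter."""
    return np.array([np.log(np.var(W @ x, axis=1)) for x in epochs])


# Synthetic two-class EEG-like data: 40 trials per class, 8 channels, 256 samples.
rng = np.random.default_rng(1)
cls_a = rng.normal(size=(40, 8, 256)); cls_a[:, 0] *= 3.0   # class A: channel 0 more active
cls_b = rng.normal(size=(40, 8, 256)); cls_b[:, 1] *= 3.0   # class B: channel 1 more active

W = csp(cls_a[:20], cls_b[:20])                              # fit filters on training trials only
X_train = np.vstack([features(cls_a[:20], W), features(cls_b[:20], W)])
X_test = np.vstack([features(cls_a[20:], W), features(cls_b[20:], W)])
y_train = np.array([0] * 20 + [1] * 20); y_test = y_train.copy()

clf = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```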
Deep learning method for makeup style transfer: A survey
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.09.001
Xiaohan Ma, Fengquan Zhang, Huan Wei, Liuqing Xu
{"title":"Deep learning method for makeup style transfer: A survey","authors":"Xiaohan Ma ,&nbsp;Fengquan Zhang ,&nbsp;Huan Wei ,&nbsp;Liuqing Xu","doi":"10.1016/j.cogr.2021.09.001","DOIUrl":"10.1016/j.cogr.2021.09.001","url":null,"abstract":"<div><p>Makeup transfer is one of the applications of image style transfer, which refers to transfer the reference makeup to the face without makeup, and maintaining the original appearance of the plain face and the makeup style of the reference face. In order to understand the research status of makeup transfer, this paper systematically sorts out makeup transfer technology. According to the development process of the method of makeup transfer, our paper first introduces and analyzes the traditional methods of makeup transfer. In particular, the methods of makeup transfer based on deep learning framework are summarized, covering both disadvantages and advantages. Finally, some key points in the current challenges and future development direction of makeup transfer technology are discussed.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 182-187"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266724132100015X/pdfft?md5=c5178cad6941ffa98c8c774fb2ac3ca3&pid=1-s2.0-S266724132100015X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82114756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Substantial capabilities of robotics in enhancing industry 4.0 implementation
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.06.001
Mohd Javaid, Abid Haleem, Ravi Pratap Singh, Rajiv Suman
{"title":"Substantial capabilities of robotics in enhancing industry 4.0 implementation","authors":"Mohd Javaid ,&nbsp;Abid Haleem ,&nbsp;Ravi Pratap Singh ,&nbsp;Rajiv Suman","doi":"10.1016/j.cogr.2021.06.001","DOIUrl":"10.1016/j.cogr.2021.06.001","url":null,"abstract":"<div><p>There is the increased application of new technologies in manufacturing, service, and communications. Industry 4.0 is the new fourth industrial revolution, which supports organisational efficiency. Robotics is an important technology of Industry 4.0, which provides extensive capabilities in the field of manufacturing. This technology has enhanced automation systems and does repetitive jobs precisely and at a lower cost. Robotics is progressively leading to the manufacturing of quality products while maintaining the value of existing collaborators schemes. The primary outcome of Industry 4.0 is intelligent factories developed with the aid of advanced robotics, massive data, cloud computing, solid safety, intelligent sensors, the Internet of things, and other advanced technological developments to be highly powerful, safe, and cost-effective. Thus, businesses will refine their manufacturing for mass adaptation by improving the workplace's safety and reliability on actual work and saving costs. This paper discusses the significant potential of Robotics in the field of manufacturing and allied areas. The paper discusses eighteen major applications of Robotics for Industry 4.0. Robots are ideal for collecting mysterious manufacturing data as they operate closer to the component than most other factory machines. This technology is helpful to perform a complex hazardous job, automation, sustain high temperature, working entire time and for a long duration in assembly lines. Many robots operating in intelligent factories use artificial intelligence to perform high-level tasks. Now they can also decide and learn from experience in various ongoing situations.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 58-75"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.06.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"108045927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 111
Review of the emotional feature extraction and classification using EEG signals
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.04.001
Jiang Wang, Mei Wang
{"title":"Review of the emotional feature extraction and classification using EEG signals","authors":"Jiang Wang,&nbsp;Mei Wang","doi":"10.1016/j.cogr.2021.04.001","DOIUrl":"10.1016/j.cogr.2021.04.001","url":null,"abstract":"<div><p>As a subjectively psychological and physiological response to external stimuli, emotion is ubiquitous in our daily life. With the continuous development of the artificial intelligence and brain science, emotion recognition rapidly becomes a multiple discipline research field through EEG signals. This paper investigates the relevantly scientific literature in the past five years and reviews the emotional feature extraction methods and the classification methods using EEG signals. Commonly used feature extraction analysis methods include time domain analysis, frequency domain analysis, and time-frequency domain analysis. The widely used classification methods include machine learning algorithms based on Support Vector Machine (SVM), k-Nearest Neighbor (KNN), Naive Bayes (NB), etc., and their classification accuracy ranges from 57.50% to 95.70%. The classification accuracy of the deep learning algorithms based on Neural Network (NN), Long and Short-Term Memory (LSTM), and Deep Belief Network (DBN) ranges from 63.38% to 97.56%.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 29-40"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.04.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"95534010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 48
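One concrete instance of the frequency-domain-plus-SVM family mentioned above is band-power features computed from a Welch PSD and fed to an SVM. The sketch below uses synthetic single-channel signals; the sampling rate, band edges, and SVM settings are assumptions for illustration, and the printed accuracy has no relation to the figures quoted in the review.

```python
# Sketch of a frequency-domain pipeline: Welch band-power features + SVM classifier.
# Synthetic signals; sampling rate, band edges, and SVM settings are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 128                                              # sampling rate (Hz), assumed
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}


def band_power_features(epoch):
    """Mean PSD per band for a single-channel epoch (1-D array)."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()])


def make_epoch(alpha_amp, rng):
    """Synthetic 4 s epoch: noise plus a 10 Hz (alpha-band) oscillation of given amplitude."""
    t = np.arange(4 * FS) / FS
    return rng.normal(scale=1.0, size=t.size) + alpha_amp * np.sin(2 * np.pi * 10 * t)


rng = np.random.default_rng(0)
# Two toy "emotional states" that differ only in alpha power.
X = np.array([band_power_features(make_epoch(a, rng)) for a in [0.5] * 60 + [2.0] * 60])
y = np.array([0] * 60 + [1] * 60)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])           # train on every other epoch
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```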
Coming up With Good Excuses: What to do When no Plan Can be Found
Cognitive Robotics Pub Date : 2010-05-12 DOI: 10.1609/icaps.v20i1.13421
M. Göbelbecker, Thomas Keller, Patrick Eyerich, Michael Brenner, B. Nebel
{"title":"Coming up With Good Excuses: What to do When no Plan Can be Found","authors":"M. Göbelbecker, Thomas Keller, Patrick Eyerich, Michael Brenner, B. Nebel","doi":"10.1609/icaps.v20i1.13421","DOIUrl":"https://doi.org/10.1609/icaps.v20i1.13421","url":null,"abstract":"\u0000 \u0000 When using a planner-based agent architecture, many things can go wrong. First and foremost, an agent might fail to execute one of the planned actions for some reasons. Even more annoying, however, is a situation where the agent is incompetent, i.e., unable to come up with a plan. This might be due to the fact that there are principal reasons that prohibit a successful plan or simply because the task's description is incomplete or incorrect. In either case, an explanation for such a failure would be very helpful. We will address this problem and provide a formalization of coming up with excuses for not being able to find a plan. Based on that, we will present an algorithm that is able to find excuses and demonstrate that such excuses can be found in practical settings in reasonable time.\u0000 \u0000","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"68 1","pages":"81-88"},"PeriodicalIF":0.0,"publicationDate":"2010-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78646645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 120
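The idea of an "excuse", a change to the task that would have made it solvable, can be illustrated on a toy STRIPS-style domain: if no plan exists, search for a smallest set of initial-state facts whose addition makes the goal reachable. The domain, the breadth-first solvability check, and the single-fact-change search below are all assumptions for illustration, not the paper's formalization or algorithm.

```python
# Toy illustration of the "excuse" idea: if no plan exists, look for a minimal change
# to the initial state that would make the task solvable. Domain and search are assumed.
from itertools import combinations

# Actions: name -> (preconditions, add effects, delete effects), all sets of facts.
ACTIONS = {
    "pick_key":  ({"key_on_table", "hand_empty"}, {"holding_key"}, {"key_on_table", "hand_empty"}),
    "open_door": ({"holding_key"}, {"door_open"}, set()),
    "enter":     ({"door_open"}, {"inside"}, set()),
}
ALL_FACTS = {"key_on_table", "hand_empty", "holding_key", "door_open", "inside"}


def solvable(state, goal, max_depth=10):
    """Breadth-first search over states; returns True if the goal becomes reachable."""
    frontier, seen = [frozenset(state)], set()
    for _ in range(max_depth):
        nxt = []
        for s in frontier:
            if goal <= s:
                return True
            for pre, add, dele in ACTIONS.values():
                if pre <= s:
                    s2 = frozenset((s - dele) | add)
                    if s2 not in seen:
                        seen.add(s2); nxt.append(s2)
        frontier = nxt
    return False


def find_excuse(init, goal, max_changes=2):
    """Find a smallest set of initial-state facts to add that makes the task solvable."""
    if solvable(init, goal):
        return None                                    # no excuse needed
    candidates = ALL_FACTS - init - goal               # do not "excuse" by granting the goal itself
    for k in range(1, max_changes + 1):
        for extra in combinations(candidates, k):
            if solvable(init | set(extra), goal):
                return set(extra)                      # "if only these had been true initially..."
    return "no excuse within the search bound"


init = {"hand_empty"}                                  # the key is missing, so no plan exists
goal = {"inside"}
print(find_excuse(init, goal))                         # one acceptable change, e.g. {'key_on_table'}
```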