{"title":"A survey of quantum computing hybrid applications with brain-computer interface","authors":"Dandan Huang , Mei Wang , Jianping Wang , Jiaxin Yan","doi":"10.1016/j.cogr.2022.07.002","DOIUrl":"10.1016/j.cogr.2022.07.002","url":null,"abstract":"<div><p>In recent years, researchers have paid increasing attention to hybrid applications of quantum computing and brain-computer interfaces. With the development of neural technology and artificial intelligence, research on brain-computer interfaces has intensified, and extending brain-computer interface technology to new fields has gradually become a focus of research. While the field of brain-computer interfaces has evolved rapidly over the past decades, the core technologies and innovative ideas behind seemingly unrelated brain-computer interface systems are rarely summarized from the perspective of integration with quantum computing. This paper provides a detailed report on hybrid applications of quantum computing and brain-computer interfaces, identifies current problems, and offers suggestions on directions for hybrid application research.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 164-176"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241322000155/pdfft?md5=d3e94765005e1d76d972377ee08bd0a0&pid=1-s2.0-S2667241322000155-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85395111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Significant applications of Cobots in the field of manufacturing","authors":"Mohd Javaid , Abid Haleem , Ravi Pratap Singh , Shanay Rab , Rajiv Suman","doi":"10.1016/j.cogr.2022.10.001","DOIUrl":"10.1016/j.cogr.2022.10.001","url":null,"abstract":"<div><p>The term \"collaborative robot\", commonly shortened to Cobot, refers to a partnership between a robot and a human. Besides allowing physical contact between a robot and a person working simultaneously on the same production line, the Cobot is designed to be user-friendly. Cobots enable operators to respond immediately to work done by the robot based on the company's urgent needs. This paper aims to explore the potential of Cobots in manufacturing. Cobots are widely employed in industries such as life sciences, automotive, manufacturing, electronics, aerospace, packaging, plastics, and healthcare. For many of these businesses, the capacity to maintain a lucrative man-machine shared workplace can provide a considerable competitive edge. Cobots are simple to use while being dependable, safe, and precise. A literature review was carried out using ScienceDirect, Scopus, Google Scholar, ResearchGate and other research platforms on the keywords “Cobots” or “Collaborative robots” for manufacturing. The paper briefly discusses the capabilities of this technology in manufacturing. Cobots are programmed to perform crucial tasks, from handling poisonous substances and fastening screws on a vehicle body to cooking a meal. Human operators can readily control this technology remotely and have it perform dangerous jobs. The paper briefly describes Cobots and how they differ from conventional robots, and also discusses their typical features, capabilities, and collaborative and industrial scenarios. Further, the study identifies and discusses the significant applications of Cobots in manufacturing. Cobots are utilised in several ways and across a wide range of application areas, elevating manufacturing and other operations to new heights. They also collaborate with humans to balance the demand for safety with the need for flexibility and efficiency.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 222-233"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241322000209/pdfft?md5=3d05e788ca43f15b3a9104328498ef7b&pid=1-s2.0-S2667241322000209-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88829188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Medical named entity recognition based on dilated convolutional neural network","authors":"Ruoyu Zhang, Pengyu Zhao, Weiyu Guo, Rongyao Wang, Wenpeng Lu","doi":"10.1016/j.cogr.2021.11.002","DOIUrl":"10.1016/j.cogr.2021.11.002","url":null,"abstract":"<div><p>Named entity recognition (NER) is a fundamental and important task in natural language processing. Existing methods attempt to utilize convolutional neural networks (CNNs) to solve the NER task. However, a disadvantage of the CNN is that it fails to capture the global information of texts, leading to unsatisfactory performance on the medical NER task. In view of this disadvantage, this paper proposes to utilize a dilated convolutional neural network (DCNN) and a bidirectional long short-term memory network (BiLSTM) for hierarchical encoding, exploiting the ability of the DCNN to capture global information with fast computing speed. At the same time, multiple feature words are inserted into the medical text datasets to improve the performance of medical NER. Extensive experiments on three real-world datasets demonstrate that our method is superior to the compared models.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 13-20"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241321000197/pdfft?md5=d7c76f5b56d0a24ccedc158c4fd7c2cb&pid=1-s2.0-S2667241321000197-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82664175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Overview of robotic grasp detection from 2D to 3D","authors":"Zhiyun Yin, Yujie Li","doi":"10.1016/j.cogr.2022.03.002","DOIUrl":"https://doi.org/10.1016/j.cogr.2022.03.002","url":null,"abstract":"<div><p>With the wide application of robots in life and production, robotic grasping is also experiencing continuous development. In practical applications, however, both external environmental factors and properties of the object itself affect the accuracy of grasp detection. There are many ways to classify grasp detection methods. In this paper, a parallel gripper is used as the grasping end effector for this research. Focusing on the angle problem of robot grasping, this paper summarizes the state of grasp detection research from 2D images to 3D space. Based on their respective applications, advantages, and disadvantages, this paper analyzes the development trends of the two approaches. At the same time, several commonly used grasping datasets are introduced and compared.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 73-82"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241322000052/pdfft?md5=6378a377be535e9f4a4497eee9251a1d&pid=1-s2.0-S2667241322000052-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92091650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Large scale log anomaly detection via spatial pooling","authors":"Rin Hirakawa , Hironori Uchida , Asato Nakano , Keitaro Tominaga , Yoshihisa Nakatoh","doi":"10.1016/j.cogr.2021.10.001","DOIUrl":"10.1016/j.cogr.2021.10.001","url":null,"abstract":"<div><p>Log data is an important clue to understanding the behaviour of a system at runtime, but the complexity of software systems in recent years has made the data that engineers need to analyse enormous and difficult to understand. While log-based anomaly detection methods based on deep learning have enabled highly accurate detection, the computational performance required to operate the models has become very high. In this study, we propose an anomaly detection method, SPClassifier, based on sparse features and the internal state of the model, and investigate the feasibility of anomaly detection in environments without computing resources such as GPUs. Benchmarks against the latest deep learning models on the BGL dataset show that the proposed method achieves accuracy competitive with these methods and maintains a high level of anomaly detection performance even when the amount of training data is small.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 188-196"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241321000173/pdfft?md5=7d47126ac817ab84febc1c4f3273aa7d&pid=1-s2.0-S2667241321000173-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78453136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decentralised task allocation using GDL negotiations in Multi-agent system","authors":"Hui Zou , Yan Xi","doi":"10.1016/j.cogr.2021.07.003","DOIUrl":"10.1016/j.cogr.2021.07.003","url":null,"abstract":"<div><p>In large distributed systems, task scheduling optimization algorithms may not meet the special requirements of the domain control mechanism, i.e. robustness, optimality, timeliness of solution, and computational tractability under limited communication. In order to satisfy these requirements, a novel decentralized agent scheduling method for dynamic task allocation problems based on Game Description Language (GDL) and game theory is proposed. Specifically, we define the task allocation problem as a stochastic game model, in which the agent's utility is derived from the marginal utility, and then prove that the globally optimal task allocation scheme resides in the Nash equilibrium set of the non-cooperative game. In order to generate an optimal solution, we define the Multi-agent Negotiation Game (MNG), in which negotiations are held between agents to decide which tasks to act on next. Building on this, we make a simple extension to adapt GDL for negotiations and propose to use it to model such negotiation scenarios. Finally, we use a negotiation example to show that our approach is more amenable to automatic processing by autonomous agents and more practical than a centralized task scheduler.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 197-204"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.07.003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83838608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unbundling the significance of cognitive robots and drones deployed to tackle COVID-19 pandemic: A rapid review to unpack emerging opportunities to improve healthcare in sub-Saharan Africa","authors":"Elliot Mbunge , Itai Chitungo , Tafadzwa Dzinamarira","doi":"10.1016/j.cogr.2021.11.001","DOIUrl":"10.1016/j.cogr.2021.11.001","url":null,"abstract":"<div><p>The emergence of COVID-19 brought unprecedented opportunities to deploy emerging digital technologies such as robotics and drones to provide contactless services. Robots and drones transformed initial approaches to tackling COVID-19 and proved effective in curbing the risk of COVID-19 in developed countries. Despite the significant impact of robots and drones in reducing the burden on frontline healthcare professionals, there is still limited literature on their utilization to fight the pandemic in sub-Saharan Africa. Therefore, this rapid review presents the significant capabilities of robots and drones while examining the challenges and barriers that may hinder their implementation in developing countries. The study revealed that robots and drones have been used for disinfection, delivery of medical supplies, surveillance, consultation, and screening and diagnosis. It also revealed that the adoption of robots and drones faces challenges such as infrastructural, financial and technological barriers, security and privacy issues, and a lack of policies and frameworks regulating the use of robots and drones in healthcare. We therefore propose a collaborative approach: mobilise resources and invest in infrastructure to bridge the digital divide, and craft policies and frameworks for effectively integrating robots and drones into healthcare. There is a need to include robotics in medical education and the training of health workers, to develop indigenous knowledge, and to encourage international collaboration. Partnership with civil aviation authorities to license and monitor drones, improving the monitoring and security of drone activities, could also be helpful. Robots and drones should guarantee superior safety features, since they either interact directly with humans or work in densely populated environments. Future work should focus on the long-term consequences of robots and drones for human behaviour and interaction, as well as for healthcare.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 205-213"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241321000185/pdfft?md5=ecb71beb73a0b3cba7d022c118e690a4&pid=1-s2.0-S2667241321000185-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83117275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SMILE: A verbal and graphical user interface tool for speech-control of soccer robots in Ghana","authors":"Patrick Fiati","doi":"10.1016/j.cogr.2021.03.001","DOIUrl":"10.1016/j.cogr.2021.03.001","url":null,"abstract":"<div><p>SMILE (Smartphone Intuitive Likeness and Engagement) is a portable Android application that allows a human to control a robot using speech input. SMILE is a novel open-source and platform-independent tool that will contribute to robot soccer research by allowing robot handlers to verbally command robots. The application resides on a smartphone embedded in the face of a humanoid robot, using a speech recognition engine to analyze user speech input while using facial expressions and speech generation to express comprehension feedback to the user. With the introduction of intuitive human-robot interaction into the arena of robot soccer, we discuss a couple of specific scenarios in which SMILE could improve both the pace of the game and the autonomous appearance of the robots. The ability of humans to communicate verbally is essential for any cooperative task, especially fast-paced sports. In the game of soccer, players must speak with coaches, referees, and other players on either team. Therefore, if humanoids are expected to compete on the same playing field as elite soccer players in the near future, then we must expect them to be treated like humans, which includes the ability to listen and converse. SMILE is the first platform-independent smartphone-based tool to equip robots with these capabilities. Currently, humanoid soccer research is heavily focused on walking dynamics, computer vision, and intelligent systems; however, human-robot interaction (HRI) is overlooked. We delved into this area of robot soccer by implementing SMILE, an Android application that sends data packets to the robot's onboard computer upon verbal interaction with a user.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 25-28"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.03.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76128699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual information processing for deep-sea visual monitoring system","authors":"Chunyan Ma , Xin Li , Yujie Li , Xinliang Tian , Yichuan Wang , Hyoungseop Kim , Seiichi Serikawa","doi":"10.1016/j.cogr.2020.12.002","DOIUrl":"10.1016/j.cogr.2020.12.002","url":null,"abstract":"<div><p>Due to the rising demand for minerals and metals, various deep-sea mining systems have been developed for the detection of mines and mine-like objects on the seabed. However, many of them raise issues due to the diffusion of dangerous and radioactive substances in water. Therefore, efficient and accurate visual monitoring is expected through the introduction of artificial intelligence. Most recent deep-sea mining machines have little intelligence in their visual monitoring systems, and intelligent robotics, e.g., deep learning-based edge computing for deep-sea visual monitoring systems, has not yet been established. In this paper, we propose the concept of a learning-based deep-sea visual monitoring system and use testbeds to show its efficiency. For example, to sense the underwater environment in real time, a large quantity of observation data, including captured images, must be transmitted from the seafloor to the ship, but large-capacity underwater communication is difficult. We propose using deep compressed learning for real-time communication. In addition, we propose the gradient generation adversarial network (GGAN) to recover heavily degraded underwater images. In the application layer, wavelet-aware superresolution is used to display high-resolution images. These components are expected to lead to a convenient, intelligent, remotely controlled deep-sea mining system based on deep learning.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 3-11"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2020.12.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"106753648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gesture formation: A crucial building block for cognitive-based Human–Robot Partnership","authors":"Pietro Morasso","doi":"10.1016/j.cogr.2021.06.004","DOIUrl":"10.1016/j.cogr.2021.06.004","url":null,"abstract":"<div><p>The next generation of robotic agents, to be employed in both industrial and service robotic applications, will be characterized by a high degree of Human–Robot Partnership that implies, for example, sharing common objectives, a bidirectional flow of information, the capability to learn from each other, and availability for mutual training. Moreover, there is a widespread feeling in the research community that Humans will probably not accept Robots as trustable Partners if they cannot ascribe some form of awareness and true understanding to them. This means that, in addition to incremental improvements of <em>Robotic-Bodyware,</em> there will be the need for a substantial jump in <em>Robotic-Cogniware</em>, namely a new class of Cognitive Architectures for Robots (CARs) that match the requirements and specific constraints of Human–Robot Partnership. The working hypothesis that underlies this paper is that such a class of CARs must be bio-inspired, not in the sense of fine-grained imitation of neurobiology but within the larger framework of embodied cognition. In our opinion, trajectory/gesture formation should be one of the building blocks of bio-inspired CARs because biological motion is a fundamental channel of inter-human partnership, a true body language that allows mutual understanding of intentions. Moreover, one of the main concepts of embodied cognition, related to the importance of motor imagery, is that real (or <em>overt</em>) actions and mental (or <em>covert</em>) actions are generated by the same internal model and support the cognitive capabilities of skilled human subjects. The paper reviews the field of human trajectory formation, revealing in a novel manner the fil rouge that runs through motor neuroscience, and proposes a computational framework for a robotic formulation that also addresses the Degrees of Freedom Problem and is formulated in terms of the force-field-based Passive Motion Paradigm.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 92-110"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.06.004","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77844253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}