Latest Articles in Cognitive Robotics

Overview of robotic grasp detection from 2D to 3D
Cognitive Robotics Pub Date : 2022-01-01 DOI: 10.1016/j.cogr.2022.03.002
Zhiyun Yin, Yujie Li
Abstract: With the wide application of robots in daily life and production, robotic grasping is developing continuously. In practical applications, however, external environmental factors and properties of the object itself affect the accuracy of grasp detection. Grasp detection methods can be classified in many ways; this paper studies grasping with a parallel gripper as the end-effector. Focusing on the angle problem in robotic grasping, the paper surveys grasp detection research from 2D images to 3D space, analyzes the development trends of the two approaches according to their respective applications, advantages, and disadvantages, and introduces and compares several commonly used grasping datasets.
Cognitive Robotics, Volume 2, Pages 73-82.
Citations: 0
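A common 2D representation underlying work like the survey above describes a parallel-gripper grasp as a rotated rectangle (x, y, θ, w, h): grasp center, gripper angle, opening width, and jaw height. The paper's own formulation is not reproduced in this listing, so the sketch below is only a generic illustration (all names and values are illustrative), assuming NumPy:

```python
import numpy as np

def grasp_corners(x, y, theta, w, h):
    """Corners of a rotated grasp rectangle.

    (x, y): center, theta: gripper angle in radians,
    w: gripper opening width, h: jaw height.
    Returns a (4, 2) array of corner coordinates.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])            # 2D rotation matrix
    local = np.array([[-w / 2, -h / 2],
                      [ w / 2, -h / 2],
                      [ w / 2,  h / 2],
                      [-w / 2,  h / 2]])       # axis-aligned corners
    return local @ R.T + np.array([x, y])      # rotate, then translate

# A vertical grasp (theta = 90 degrees) centered at (100, 50).
corners = grasp_corners(100.0, 50.0, np.pi / 2, 40.0, 10.0)
```

Rectangle-based detectors typically regress these five parameters; converting to corner points is how predictions are drawn or matched against ground-truth rectangles.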
Large scale log anomaly detection via spatial pooling
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.10.001
Rin Hirakawa , Hironori Uchida , Asato Nakano , Keitaro Tominaga , Yoshihisa Nakatoh
Abstract: Log data is an important clue to understanding the behaviour of a system at runtime, but the complexity of modern software systems has made the data that engineers need to analyse enormous and difficult to understand. While deep-learning-based log anomaly detection methods have enabled highly accurate detection, the computational performance required to operate the models is very high. In this study, we propose an anomaly detection method, SPClassifier, based on sparse features and the internal state of the model, and investigate the feasibility of anomaly detection in environments without computing resources such as GPUs. Benchmarks against the latest deep learning models on the BGL dataset show that the proposed method achieves competitive accuracy and maintains a high level of anomaly detection performance even when the amount of training data is small.
Cognitive Robotics, Volume 1, Pages 188-196.
Citations: 2
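SPClassifier's internals are not given in this listing, so the following is only a generic sketch of the spatial (max) pooling idea the title refers to, applied to a toy log-derived feature map; all names and values are illustrative, assuming NumPy:

```python
import numpy as np

def max_pool2d(x, k):
    """Non-overlapping k x k max pooling over a 2D feature map.

    Spatial pooling condenses a large, sparse feature map (e.g. log-event
    counts arranged on a grid) into a small summary that a lightweight,
    CPU-only classifier can consume.
    """
    h, w = x.shape
    x = x[:h - h % k, :w - w % k]              # crop to a multiple of k
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

# Toy "log feature map": mostly zeros, a few rare-event spikes.
feat = np.zeros((8, 8))
feat[1, 2] = 3.0
feat[6, 7] = 5.0
pooled = max_pool2d(feat, 4)                   # 8x8 -> 2x2 summary
```

The pooled map keeps the strongest activation per region, which is why sparse anomaly signals survive the reduction while the overall input to the classifier shrinks by k² per axis pair.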
Decentralised task allocation using GDL negotiations in Multi-agent system
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.07.003
Hui Zou , Yan Xi
Abstract: In large distributed systems, task-scheduling optimization algorithms may not meet the special requirements of the domain control mechanism: robustness, optimality, timeliness of the solution, and computational tractability under limited communication. To satisfy these requirements, a novel decentralized agent scheduling method for dynamic task allocation problems based on Game Description Language (GDL) and game theory is proposed. Specifically, we define the task allocation problem as a stochastic game model in which an agent's utility is derived from its marginal utility, and then prove that the globally optimal task allocation scheme resides in the Nash equilibrium set of the non-cooperative game. To generate an optimal solution, we define the Multi-agent Negotiation Game (MNG), in which negotiations are held between agents to decide which tasks to act on next. Building on this, we make a simple extension to make GDL more suitable for negotiations and propose to use it to model such negotiation scenarios. Finally, we use a negotiation example to show that our approach is more amenable to automatic processing by autonomous agents, and more practical, than a centralized task scheduler.
Cognitive Robotics, Volume 1, Pages 197-204.
Citations: 0
Unbundling the significance of cognitive robots and drones deployed to tackle COVID-19 pandemic: A rapid review to unpack emerging opportunities to improve healthcare in sub-Saharan Africa
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.11.001
Elliot Mbunge , Itai Chitungo , Tafadzwa Dzinamarira
Abstract: The emergence of COVID-19 brought unprecedented opportunities to deploy emerging digital technologies such as robotics and drones to provide contactless services. Robots and drones transformed initial approaches to tackling COVID-19 and have proven effective in curbing the risk of COVID-19 in developed countries. Despite the significant impact of robots and drones in reducing the burden on frontline healthcare professionals, there is still limited literature on their utilization to fight the pandemic in sub-Saharan Africa. This rapid review therefore presents significant capabilities of robots and drones while examining the challenges and barriers that may hinder their implementation in developing countries. The study revealed that robots and drones have been used for disinfection, delivery of medical supplies, surveillance, consultation, and screening and diagnosis. It also revealed that adopting robots and drones faces challenges such as infrastructural, financial, and technological barriers, security and privacy issues, and a lack of policies and frameworks regulating the use of robots and drones in healthcare. We therefore propose a collaborative approach to mobilise resources and invest in infrastructure to bridge the digital divide, and to craft policies and frameworks for effectively integrating robots and drones into healthcare. There is a need to include robotics in medical education and the training of health workers, to develop indigenous knowledge, and to encourage international collaboration. Partnership with civil aviation authorities to license and monitor drones could also improve the monitoring and security of drone activities. Robots and drones should guarantee superior safety features, since they either interact directly with humans or work in densely populated environments. Future work should focus on the long-term consequences of robots and drones for human behavior and interaction, as well as for healthcare.
Cognitive Robotics, Volume 1, Pages 205-213.
Citations: 13
SMILE: A verbal and graphical user interface tool for speech-control of soccer robots in Ghana
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.03.001
Patrick Fiati
Abstract: SMILE (Smartphone Intuitive Likeness and Engagement) is a portable Android application that allows a human to control a robot using speech input. SMILE is a novel, open-source, platform-independent tool that contributes to robot soccer research by allowing robot handlers to verbally command robots. The application resides on a smartphone embedded in the face of a humanoid robot, using a speech recognition engine to analyze user speech input while using facial expressions and speech generation to give comprehension feedback to the user. With the introduction of intuitive human-robot interaction into the arena of robot soccer, we discuss a couple of specific scenarios in which SMILE could improve both the pace of the game and the autonomous appearance of the robots. The ability of humans to communicate verbally is essential for any cooperative task, especially fast-paced sports. In the game of soccer, players must speak with coaches, referees, and other players on either team. Therefore, if humanoids are expected to compete on the same playing field as elite soccer players in the near future, we must expect them to be treated like humans, which includes the ability to listen and converse. SMILE is the first platform-independent, smartphone-based tool to equip robots with these capabilities. Currently, humanoid soccer research is heavily focused on walking dynamics, computer vision, and intelligent systems, while human-robot interaction (HRI) is overlooked. We delved into this area of robot soccer by implementing SMILE, an Android application that sends data packets to the robot's onboard computer upon verbal interaction with a user.
Cognitive Robotics, Volume 1, Pages 25-28.
Citations: 1
Visual information processing for deep-sea visual monitoring system
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2020.12.002
Chunyan Ma , Xin Li , Yujie Li , Xinliang Tian , Yichuan Wang , Hyoungseop Kim , Seiichi Serikawa
Abstract: Due to rising demand for minerals and metals, various deep-sea mining systems have been developed for the detection of mines and mine-like objects on the seabed. However, many of them raise issues due to the diffusion of dangerous and radioactive substances in water, so efficient and accurate visual monitoring through artificial intelligence is expected. Most recent deep-sea mining machines have little intelligence in their visual monitoring systems, and intelligent robotics, e.g., deep learning-based edge computing for deep-sea visual monitoring systems, has not yet been established. In this paper, we propose the concept of a learning-based deep-sea visual monitoring system and use testbeds to show its efficiency. For example, to sense the underwater environment in real time, a large quantity of observation data, including captured images, must be transmitted from the seafloor to the ship, but large-capacity underwater communication is difficult; we propose using deep compressed learning for real-time communication. In addition, we propose the gradient generation adversarial network (GGAN) to recover heavily degraded underwater images. In the application layer, wavelet-aware super-resolution is used to display high-resolution images. The development of a convenient, intelligent, remotely controlled deep-sea mining system using deep learning is therefore expected.
Cognitive Robotics, Volume 1, Pages 3-11.
Citations: 60
Gesture formation: A crucial building block for cognitive-based Human–Robot Partnership
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.06.004
Pietro Morasso
Abstract: The next generation of robotic agents, to be employed in both industrial and service robotic applications, will be characterized by a high degree of Human–Robot Partnership, which implies, for example, sharing common objectives, a bidirectional flow of information, the capability to learn from each other, and an openness to mutual training. Moreover, there is a widespread feeling in the research community that humans will probably not accept robots as trustable partners unless they can ascribe some form of awareness and true understanding to them. This means that, in addition to incremental improvements of Robotic-Bodyware, a substantial jump in Robotic-Cogniware will be needed: a new class of Cognitive Architectures for Robots (CARs) that match the requirements and specific constraints of Human–Robot Partnership. The working hypothesis underlying this paper is that such CARs must be bio-inspired, not in the sense of fine-grain imitation of neurobiology but within the larger framework of embodied cognition. In our opinion, trajectory/gesture formation should be one of the building blocks of bio-inspired CARs, because biological motion is a fundamental channel of inter-human partnership, a true body language that allows mutual understanding of intentions. Moreover, one of the main concepts of embodied cognition, related to the importance of motor imagery, is that real (or overt) actions and mental (or covert) actions are generated by the same internal model and support the cognitive capabilities of skilled human subjects. The paper reviews the field of human trajectory formation, revealing in a novel manner the common thread that runs through motor neuroscience, and proposes a computational framework for a robotic formulation that also addresses the degrees-of-freedom problem and is formulated in terms of the force-field-based Passive Motion Paradigm.
Cognitive Robotics, Volume 1, Pages 92-110.
Citations: 5
A Survey Of zero shot detection: Methods and applications
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.08.001
Chufeng Tan, Xing Xu, Fumin Shen
Abstract: Zero shot learning (ZSL) aims to identify objects whose labels are unavailable during training. This learning paradigm gives a classifier the ability to distinguish unseen classes. Traditional ZSL methods focus only on image recognition problems in which the object appears in the central part of the image, but real-world applications are far from this ideal: images can contain various objects. Zero shot detection (ZSD) has been proposed to simultaneously localize and recognize unseen objects belonging to novel categories. This paper presents a detailed survey of zero shot detection. First, we summarize the background of zero shot detection and give its definition. Second, based on the combination of traditional detection frameworks and zero shot learning methods, we categorize existing zero shot detection methods into two classes and introduce the representative methods in each category. Third, we discuss some possible application scenarios of zero shot detection and propose some future research directions.
Cognitive Robotics, Volume 1, Pages 159-167.
Citations: 5
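A core mechanism shared by many zero-shot methods such a survey covers is matching visual features, projected into a semantic (attribute) space, against prototypes of classes never seen in training. A minimal, hypothetical sketch (attribute vectors and class names invented for illustration), assuming NumPy:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative semantic prototypes (attribute vectors) for classes that
# were never seen at training time: [has_stripes, has_hooves, is_domestic].
unseen_prototypes = {
    "zebra": np.array([1.0, 1.0, 0.0]),
    "horse": np.array([0.0, 1.0, 1.0]),
}

# Pretend a trained projection has already mapped an image region's visual
# features into the same attribute space.
projected = np.array([0.9, 0.8, 0.1])

# Classify by the nearest prototype under cosine similarity.
pred = max(unseen_prototypes, key=lambda c: cosine(projected, unseen_prototypes[c]))
```

In detection (ZSD), this matching step is applied per candidate region on top of a conventional detector's localization branch, which is roughly how the two method families in such surveys differ.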
Recent trending on learning based video compression: A survey
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.08.003
Trinh Man Hoang M.E , Jinjia Zhou PhD
Abstract: The growth of video content and video resolution has recently driven more exploration of video compression techniques. Meanwhile, learning-based video compression has received much attention over the past few years because of its content adaptivity and parallelizable computation. Although several promising results have been reported, no breakthrough work has yet moved beyond the research arena. In this work, we provide an up-to-date overview of learning-based video compression research and its milestones. In particular, we report the research ideas of recent works on learning-based modules for conventional codec adaptation and on learning-based end-to-end video compression, along with their advantages and disadvantages. According to the review, compared with current video compression standards such as HEVC and VVC, BD-rate reductions of 3% to 12% have been achieved with integrated approaches, while end-to-end approaches have reported superior results on perceptual quality and structural similarity. Furthermore, suggestions for future research are provided based on the current obstacles. We conclude that, for long-term benefit, computational complexity is the major problem to be solved, especially on the decoder end, whereas rate-dependent and generative designs are promising routes to a more efficient, low-complexity learning-based codec.
Cognitive Robotics, Volume 1, Pages 145-158.
Citations: 8
A review of electroencephalogram signal processing methods for brain-controlled robots
Cognitive Robotics Pub Date : 2021-01-01 DOI: 10.1016/j.cogr.2021.07.001
Ziyang Huang, Mei Wang
Abstract: A brain-computer interface (BCI) based on electroencephalogram (EEG) signals can provide a way for humans to communicate with the outside world that is independent of the body's peripheral nerves and muscle tissue. The brain-controlled robot is a new technology combining brain-computer interface technology and robot control technology, allowing the human brain to control a robot to perform a series of actions. The processing of EEG signals plays a vital role in brain-controlled robots. This paper summarizes the EEG signal processing methods of recent years in three parts: EEG signal pre-processing, feature extraction, and feature classification. The relevant analysis methods and research content are introduced, and the advantages and disadvantages of these methods are analyzed and compared. Finally, the article looks ahead to EEG signal processing methods for brain-controlled robots.
Cognitive Robotics, Volume 1, Pages 111-124.
Citations: 10
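As a toy illustration of the feature-extraction stage such reviews describe (not a method from the paper), the band power of a synthetic EEG-like signal can be computed with an FFT; all names and values below are illustrative, assuming NumPy:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

fs = 250                                    # sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)             # two seconds of data
eeg = np.sin(2 * np.pi * 10 * t)            # synthetic 10 Hz "alpha" rhythm

alpha = band_power(eeg, fs, 8, 13)          # alpha band: dominant here
beta = band_power(eeg, fs, 14, 30)          # beta band: near zero here
```

Band-power vectors like (alpha, beta, ...) per channel are one common input to the feature-classification stage; real pipelines add artifact removal and filtering before this step.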