Latest publications in Cognitive Robotics

Unbundling the significance of cognitive robots and drones deployed to tackle COVID-19 pandemic: A rapid review to unpack emerging opportunities to improve healthcare in sub-Saharan Africa
Cognitive Robotics Pub Date: 2021-01-01 DOI: 10.1016/j.cogr.2021.11.001
Elliot Mbunge, Itai Chitungo, Tafadzwa Dzinamarira
Abstract: The emergence of COVID-19 brought unprecedented opportunities to deploy emerging digital technologies such as robotics and drones to provide contactless services. Robots and drones transformed initial approaches to tackling COVID-19 and proved effective in curbing transmission risk in developed countries. Despite their significant impact in reducing the burden on frontline healthcare professionals, literature on their use against the pandemic in sub-Saharan Africa remains limited. This rapid review therefore presents the key capabilities of robots and drones while examining the challenges and barriers that may hinder their implementation in developing countries. The study found that robots and drones have been used for disinfection, delivery of medical supplies, surveillance, consultation, and screening and diagnosis. It also found that their adoption faces infrastructural, financial, and technological barriers, security and privacy issues, and a lack of policies and frameworks regulating their use in healthcare. We therefore propose a collaborative approach: mobilise resources and invest in infrastructure to bridge the digital divide, and craft policies and frameworks for effectively integrating robots and drones into healthcare. There is a need to include robotics in medical education and the training of health workers, to develop indigenous knowledge, and to encourage international collaboration. Partnering with civil aviation authorities to license and monitor drones could also improve the oversight and security of drone activities. Robots and drones should guarantee superior safety features, since they either interact directly with humans or operate in densely populated environments. Future work should focus on the long-term consequences of robots and drones for human behaviour and interaction, as well as for healthcare.
Volume 1, Pages 205-213.
Citations: 13
SMILE: A verbal and graphical user interface tool for speech-control of soccer robots in Ghana
Cognitive Robotics Pub Date: 2021-01-01 DOI: 10.1016/j.cogr.2021.03.001
Patrick Fiati
Abstract: SMILE (Smartphone Intuitive Likeness and Engagement) is a portable Android application that allows a human to control a robot using speech input. SMILE is a novel, open-source, platform-independent tool that contributes to robot soccer research by allowing robot handlers to command robots verbally. The application resides on a smartphone embedded in the face of a humanoid robot; it uses a speech recognition engine to analyze user speech input while using facial expressions and speech generation to give comprehension feedback to the user. With the introduction of intuitive human-robot interaction into the arena of robot soccer, we discuss a couple of specific scenarios in which SMILE could improve both the pace of the game and the autonomous appearance of the robots. The ability of humans to communicate verbally is essential for any cooperative task, especially fast-paced sports. In the game of soccer, players must speak with coaches, referees, and other players on either team. Therefore, if humanoids are expected to compete on the same playing field as elite soccer players in the near future, they must be treated like humans, which includes the ability to listen and converse. SMILE is the first platform-independent, smartphone-based tool to equip robots with these capabilities. Currently, humanoid soccer research is heavily focused on walking dynamics, computer vision, and intelligent systems, whereas human-robot interaction (HRI) is overlooked. We delve into this area of robot soccer by implementing SMILE, an Android application that sends data packets to the robot's onboard computer upon verbal interaction with a user.
Volume 1, Pages 25-28.
Citations: 1
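The abstract describes an app that turns recognized speech into data packets for the robot's onboard computer. Below is a minimal Python sketch of that idea (the actual app is an Android application); the phrase-to-command table, JSON payload, IP address, and UDP port are all illustrative assumptions, since the paper's packet format is not specified here.

```python
import json
import socket

# Hypothetical mapping from recognized phrases to robot commands; the real
# SMILE vocabulary and packet format are not given in the abstract.
COMMANDS = {
    "walk forward": {"action": "walk", "direction": "forward"},
    "kick": {"action": "kick"},
    "stop": {"action": "stop"},
}

def send_command(transcript: str, robot_ip: str = "192.168.1.42", port: int = 9000) -> bool:
    """Look up the recognized phrase and send it as a JSON datagram to the robot."""
    command = COMMANDS.get(transcript.strip().lower())
    if command is None:
        return False  # the app would instead ask the user to repeat the phrase
    packet = json.dumps(command).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet, (robot_ip, port))
    return True

if __name__ == "__main__":
    print(send_command("walk forward"))  # True if the phrase is in the vocabulary
```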
Visual information processing for deep-sea visual monitoring system
Cognitive Robotics Pub Date: 2021-01-01 DOI: 10.1016/j.cogr.2020.12.002
Chunyan Ma, Xin Li, Yujie Li, Xinliang Tian, Yichuan Wang, Hyoungseop Kim, Seiichi Serikawa
Abstract: Due to rising demand for minerals and metals, various deep-sea mining systems have been developed for detecting mines and mine-like objects on the seabed. However, many of them raise concerns because of the diffusion of hazardous and radioactive substances in water, so efficient and accurate visual monitoring through artificial intelligence is expected. Most recent deep-sea mining machines have little intelligence in their visual monitoring systems, and intelligent robotics, e.g., deep-learning-based edge computing for deep-sea visual monitoring, has not yet been established. In this paper, we propose the concept of a learning-based deep-sea visual monitoring system and use testbeds to show its efficiency. For example, to sense the underwater environment in real time, a large quantity of observation data, including captured images, must be transmitted from the seafloor to the ship, but large-capacity underwater communication is difficult. We therefore propose using deep compressed learning for real-time communication. In addition, we propose the gradient generation adversarial network (GGAN) to recover heavily degraded underwater images. In the application layer, wavelet-aware super-resolution is used to display high-resolution images. The development of an intelligent, convenient, remotely controlled deep-sea mining system using deep learning is thus expected.
Volume 1, Pages 3-11.
Citations: 60
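The core transmission idea in the abstract is compressing captured frames into a compact representation before sending them over the limited underwater link. The sketch below illustrates that idea only; it is a toy convolutional autoencoder, not the paper's deep compressed learning or GGAN models, and the architecture and sizes are assumptions.

```python
import torch
import torch.nn as nn

class TinyCompressor(nn.Module):
    """Toy encoder/decoder: compress an RGB frame to a small latent for transmission."""
    def __init__(self, latent_channels: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(            # 3x128x128 -> latent_channels x 16 x 16
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_channels, 4, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(            # latent -> reconstructed 3x128x128
            nn.ConvTranspose2d(latent_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)        # what would be sent over the acoustic/optical link
        return self.decoder(z), z

frame = torch.rand(1, 3, 128, 128)
recon, latent = TinyCompressor()(frame)
print(latent.numel() / frame.numel())   # rough size ratio of the transmitted latent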
Gesture formation: A crucial building block for cognitive-based Human–Robot Partnership
Cognitive Robotics Pub Date: 2021-01-01 DOI: 10.1016/j.cogr.2021.06.004
Pietro Morasso
Abstract: The next generation of robotic agents, to be employed in both industrial and service robotic applications, will be characterized by a high degree of Human–Robot Partnership, which implies, for example, sharing common objectives, a bidirectional flow of information, the capability to learn from each other, and availability for mutual training. Moreover, there is a widespread feeling in the research community that humans will probably not accept robots as trustable partners if they cannot ascribe some form of awareness and true understanding to them. This means that, in addition to incremental improvements of Robotic-Bodyware, a substantial jump in Robotic-Cogniware will be needed, namely a new class of Cognitive Architectures for Robots (CARs) that matches the requirements and specific constraints of Human–Robot Partnership. The working hypothesis underlying this paper is that such a class of CARs must be bio-inspired, not in the sense of fine-grained imitation of neurobiology but within the broad framework of embodied cognition. In our opinion, trajectory/gesture formation should be one of the building blocks of bio-inspired CARs, because biological motion is a fundamental channel of inter-human partnership, a true body language that allows mutual understanding of intentions. Moreover, one of the main concepts of embodied cognition, related to the importance of motor imagery, is that real (or overt) actions and mental (or covert) actions are generated by the same internal model and support the cognitive capabilities of skilled human subjects. The paper reviews the field of human trajectory formation, revealing in a novel manner the fil rouge that runs through motor neuroscience, and proposes a computational framework for a robotic formulation that also addresses the Degrees of Freedom problem and is formulated in terms of the force-field-based Passive Motion Paradigm.
Volume 1, Pages 92-110.
Citations: 5
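The abstract names the force-field-based Passive Motion Paradigm as the computational core of gesture formation: a virtual force field pulls the end-effector toward a goal, and joint motion follows by relaxation through the arm's Jacobian. A minimal numpy sketch of that relaxation idea on a planar two-link arm follows; the arm model, gains, and target are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def jacobian(q, lengths=(0.3, 0.25)):
    """Planar 2-link arm Jacobian (an illustrative stand-in for a full humanoid model)."""
    l1, l2 = lengths
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def forward(q, lengths=(0.3, 0.25)):
    l1, l2 = lengths
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

# Passive Motion Paradigm sketch: virtual stiffness K pulls the end-effector
# toward the target; joint admittance A turns the Jacobian-transposed force
# into joint motion, relaxing toward the goal without explicit inverse kinematics.
K, A, dt = 5.0 * np.eye(2), 1.0 * np.eye(2), 0.01
q, target = np.array([0.4, 0.6]), np.array([0.35, 0.30])
for _ in range(500):
    force = K @ (target - forward(q))      # attractive force field in task space
    q = q + dt * (A @ jacobian(q).T @ force)
print(forward(q), target)                  # end-effector should end up near the target
```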
A Survey Of zero shot detection: Methods and applications
Cognitive Robotics Pub Date: 2021-01-01 DOI: 10.1016/j.cogr.2021.08.001
Chufeng Tan, Xing Xu, Fumin Shen
Abstract: Zero-shot learning (ZSL) aims to identify objects whose labels are unavailable during training; this learning paradigm gives a classifier the ability to distinguish unseen classes. Traditional ZSL methods focus only on image recognition problems in which the object appears in the central part of the image, but real-world applications are far from this ideal, as images can contain various objects. Zero-shot detection (ZSD) is proposed to simultaneously localize and recognize unseen objects belonging to novel categories. We present a detailed survey of zero-shot detection in this paper. First, we summarize the background of zero-shot detection and give its definition. Second, based on the combination of traditional detection frameworks and zero-shot learning methods, we categorize existing zero-shot detection methods into two classes and introduce the representative methods under each category. Third, we discuss possible application scenarios of zero-shot detection and propose some future research directions.
Volume 1, Pages 159-167.
Citations: 5
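A common ingredient of the ZSD methods the survey covers is replacing the detector's fixed classifier with scoring against semantic class embeddings (e.g., word vectors), so that unseen classes can compete through their embeddings. The toy sketch below shows only that scoring step; the random vectors stand in for learned region features, class embeddings, and the visual-to-semantic projection, which in practice are trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a detector backbone yields a feature per region proposal,
# and each class (seen or unseen) has a semantic embedding, e.g. a word vector.
class_names = ["zebra", "giraffe", "horse"]          # imagine "horse" was unseen at training time
class_embeddings = rng.normal(size=(3, 50))          # stand-in for real word vectors
projection = rng.normal(size=(256, 50))              # visual-to-semantic mapping (learned in practice)

def classify_region(region_feature: np.ndarray) -> str:
    """Project a visual region feature into semantic space and pick the most similar class."""
    semantic = region_feature @ projection
    sims = class_embeddings @ semantic
    sims /= np.linalg.norm(class_embeddings, axis=1) * np.linalg.norm(semantic)
    return class_names[int(np.argmax(sims))]          # unseen classes compete via their embeddings

print(classify_region(rng.normal(size=256)))
```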
Recent trending on learning based video compression: A survey
Cognitive Robotics Pub Date: 2021-01-01 DOI: 10.1016/j.cogr.2021.08.003
Trinh Man Hoang M.E, Jinjia Zhou PhD
Abstract: The increase in video content and video resolution has recently driven more exploration of video compression techniques. Meanwhile, learning-based video compression has received much attention over the past few years because of its content adaptivity and parallelizable computation. Although several promising reports have been introduced, no breakthrough work has yet moved the field beyond the research stage. In this work, we provide an up-to-date overview of learning-based video compression research and its milestones. In particular, the research ideas of recent works on learning-based modules for conventional codec adaptation and on learning-based end-to-end video compression are reported along with their advantages and disadvantages. According to the review, compared with current video compression standards such as HEVC or VVC, BD-rate reductions from 3% to 12% have been achieved with integrated approaches, while end-to-end approaches report superior results on perceptual quality and structural similarity. Furthermore, suggestions for future research are provided based on the current obstacles. We conclude that, for long-term benefit, computational complexity is the major problem that needs to be solved, especially on the decoder side, whereas rate-dependent and generative designs are promising for providing a more efficient, lower-complexity learning-based codec.
Volume 1, Pages 145-158.
Citations: 8
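End-to-end learned codecs of the kind the survey reviews are typically trained with a rate-distortion objective, trading the estimated bits spent on the latents against reconstruction error. A minimal sketch of such a loss follows; the tensors and entropy-model likelihoods are random placeholders, and the exact weighting convention for the distortion term varies between papers.

```python
import torch

def rate_distortion_loss(original: torch.Tensor,
                         reconstruction: torch.Tensor,
                         latent_likelihoods: torch.Tensor,
                         lam: float = 0.01) -> torch.Tensor:
    """Rate-distortion objective: estimated bits per pixel plus lambda-weighted MSE distortion."""
    num_pixels = original.shape[0] * original.shape[2] * original.shape[3]
    # Rate: negative log-likelihood of the quantized latents under the entropy model, in bits per pixel.
    bpp = -torch.log2(latent_likelihoods).sum() / num_pixels
    distortion = torch.mean((original - reconstruction) ** 2)
    return bpp + lam * distortion

# Toy call with random tensors standing in for a codec's outputs.
x = torch.rand(1, 3, 64, 64)
x_hat = x + 0.01 * torch.randn_like(x)
likelihoods = torch.rand(1, 8, 8, 8).clamp(min=1e-9)
print(rate_distortion_loss(x, x_hat, likelihoods))
```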
A review of electroencephalogram signal processing methods for brain-controlled robots
Cognitive Robotics Pub Date: 2021-01-01 DOI: 10.1016/j.cogr.2021.07.001
Ziyang Huang, Mei Wang
Abstract: A brain-computer interface (BCI) based on electroencephalogram (EEG) signals can provide a way for humans to communicate with the outside world, independent of the body's peripheral nerves and muscle tissue. The brain-controlled robot is a new technology built on brain-computer interface technology and robot control technology; it allows the human brain to control a robot to perform a series of actions. The processing of EEG signals plays a vital role in brain-controlled robots. This paper summarizes EEG signal processing methods of recent years. To better develop EEG signal processing methods for brain-controlled robots, the paper elaborates on three parts: EEG signal pre-processing, feature extraction, and feature classification. The corresponding analysis methods and research content are also introduced, and the advantages and disadvantages of these methods are analyzed and compared. Finally, the article looks ahead to EEG signal processing methods for brain-controlled robots.
Volume 1, Pages 111-124.
Citations: 10
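The three stages the review is organized around (pre-processing, feature extraction, classification) can be illustrated with one of the simplest conventional pipelines: band-pass filtering, log band-power features, and a linear classifier. The sketch below uses synthetic random data and an assumed sampling rate and band choice; it only shows the shape of the pipeline, not any specific method from the review.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # assumed EEG sampling rate in Hz

def bandpass(x: np.ndarray, low: float, high: float, fs: int = FS) -> np.ndarray:
    """Pre-processing: zero-phase Butterworth band-pass filter."""
    b, a = butter(4, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, x, axis=-1)

def band_power_features(trial: np.ndarray) -> np.ndarray:
    """Feature extraction: log band power in the mu (8-12 Hz) and beta (18-26 Hz) bands per channel."""
    feats = []
    for low, high in [(8, 12), (18, 26)]:
        filtered = bandpass(trial, low, high)
        feats.append(np.log(np.var(filtered, axis=-1) + 1e-12))
    return np.concatenate(feats)

# Synthetic stand-in data: 40 trials, 8 channels, 2 s each, two imagined-movement classes.
rng = np.random.default_rng(0)
trials = rng.normal(size=(40, 8, 2 * FS))
labels = np.repeat([0, 1], 20)

# Classification: linear discriminant analysis on the extracted features.
X = np.stack([band_power_features(t) for t in trials])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", clf.score(X, labels))  # meaningless on random data; shows the pipeline only
```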
Research on semi-partitioned scheduling algorithm in mixed-criticality system
Cognitive Robotics Pub Date: 2021-01-01 DOI: 10.1016/j.cogr.2021.12.001
Zhang Qian, Wang Jianguo, Xu Fei, Huang Shujuan
Abstract: In a mixed-criticality system, once the criticality level of the system changes, lower-criticality tasks may be abandoned to ensure the schedulability of higher-criticality tasks. To overcome this problem, we propose SPBRC, a semi-partitioned scheduling algorithm for a homogeneous multiprocessor mixed-criticality platform that combines the strengths of global scheduling and partitioned scheduling. First-fit and worst-fit bin-packing algorithms are first used to allocate high- and low-criticality tasks separately: all high-criticality tasks are assigned as fixed tasks to the processors in turn, and the lower-criticality tasks are then distributed. When the criticality of a processor changes, lower-criticality tasks are allowed to migrate to the processor that is paired with it and is still in low-criticality mode, rather than being abandoned. Thus, the overall performance of the system is improved. Simulation experiments verify the effectiveness of the method in reducing the task loss rate and the job loss rate.
Volume 1, Pages 214-221.
Citations: 2
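To make the partitioning phase concrete, here is a small Python sketch of bin-packing-style task allocation with utilization as the capacity measure. The abstract does not fully specify which heuristic applies to which task class or the ordering, so the assignment of first-fit to high-criticality tasks and worst-fit to low-criticality tasks, and the decreasing-utilization order, are assumptions; the mode-change migration step of SPBRC is omitted.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    utilization: float
    high_criticality: bool

@dataclass
class Processor:
    pid: int
    tasks: list = field(default_factory=list)

    @property
    def load(self) -> float:
        return sum(t.utilization for t in self.tasks)

def allocate(tasks: list, num_procs: int, capacity: float = 1.0) -> list:
    """Assumed split: first-fit for high-criticality tasks, worst-fit (most spare room) for low-criticality ones."""
    procs = [Processor(i) for i in range(num_procs)]
    for t in sorted((t for t in tasks if t.high_criticality), key=lambda t: -t.utilization):
        target = next((p for p in procs if p.load + t.utilization <= capacity), None)  # first fit
        if target:
            target.tasks.append(t)
    for t in sorted((t for t in tasks if not t.high_criticality), key=lambda t: -t.utilization):
        target = min(procs, key=lambda p: p.load)                                      # worst fit
        if target.load + t.utilization <= capacity:
            target.tasks.append(t)
    return procs

tasks = [Task("hc1", 0.5, True), Task("hc2", 0.4, True),
         Task("lc1", 0.3, False), Task("lc2", 0.2, False)]
for p in allocate(tasks, 2):
    print(p.pid, [t.name for t in p.tasks], round(p.load, 2))
```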
Control-theory based security control of cyber-physical power system under multiple cyber-attacks within unified model framework
Cognitive Robotics Pub Date: 2021-01-01 DOI: 10.1016/j.cogr.2021.05.001
Zi-gang Zhao, Rong-bo Ye, Chang Zhou, Da-hai Wang, Tao Shi
Abstract: Due to the integration of information technology and the Internet, power networks face more and more uncertain risks of malicious attacks. We study this problem from four aspects. First, multiple cyber-attacks (denial-of-service, information disclosure, replay attack, and deception attack) are analyzed in terms of their operating mechanisms. The subsystems are then combined into a generic modeling framework that accounts for the different types of cyber-attack. Second, secure defense scenarios are proposed for each kind of cyber-attack based on the mechanism details. Third, security control conditions are derived using the control theory of stability. Finally, the IEEE-14 and IEEE-39 systems are used as typical cases to illustrate and analyze the impact of a dynamic load-altering attack on some of their nodes.
Volume 1, Pages 41-57.
Citations: 8
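One way to picture the denial-of-service case treated in the paper is as randomly dropped control packets in a closed loop: when too many packets are lost, a plant that is stable under feedback can drift. The toy simulation below illustrates only that intuition; the plant matrices, feedback gain, and dropout model are invented for illustration and are not the paper's unified model or its derived security control conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discrete-time plant x[k+1] = A x[k] + B u[k] with a stabilizing state-feedback gain K.
A = np.array([[1.01, 0.05],
              [0.00, 0.98]])
B = np.array([[0.0],
              [0.1]])
K = np.array([[2.0, 3.0]])   # illustrative gain, not derived from the paper's conditions

def simulate(dos_probability: float, steps: int = 200) -> float:
    """Model a denial-of-service attack as randomly dropped control packets; return the final state norm."""
    x = np.array([[1.0], [0.0]])
    for _ in range(steps):
        u = -K @ x
        if rng.random() < dos_probability:   # packet lost: actuator applies zero input
            u = np.zeros((1, 1))
        x = A @ x + B @ u
    return float(np.linalg.norm(x))

for p in (0.0, 0.3, 0.8):
    print(f"DoS probability {p}: final ||x|| = {simulate(p):.3f}")
```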
Characteristics based visual servo for 6DOF robot arm control
Cognitive Robotics Pub Date: 2021-01-01 DOI: 10.1016/j.cogr.2021.06.002
Shinya Tsuchida, Huimin Lu, Tohru Kamiya, Seiichi Serikawa
Abstract: Visual servoing is a method for robot arm motion control: the arm is driven by an end-effector velocity computed from the image Jacobian (interaction matrix) and the feature-error vector. In general, automatic robotic tasks require a high-quality sensor that can measure 3-dimensional distance, together with calibration to align the sensor frame and the robot frame in Euclidean space. In this paper, we use only an RGB camera for data collection, which does not require calibration of the sensor frame; our method is therefore simpler than other automatic motion methods. The proposed characteristics-based visual servo method varies a hyper-parameter and demonstrates its effectiveness in reducing the pose error in both simulated and real environments.
Volume 1, Pages 76-82.
Citations: 5
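The control law the abstract refers to, end-effector velocity from the image Jacobian and the feature error, is usually written as v = -λ L⁺ e in the visual-servoing literature. The numpy sketch below implements that classic formulation for point features; it is not necessarily the exact variant used in the paper, and the feature points, depths, and focal length are made-up values.

```python
import numpy as np

def interaction_matrix(points_px, depths, f=800.0):
    """Stack the classic 2x6 image-Jacobian rows for each point feature (pinhole model, focal length f)."""
    rows = []
    for (u, v), z in zip(points_px, depths):
        x, y = u / f, v / f
        rows.append([-1 / z, 0, x / z, x * y, -(1 + x * x), y])
        rows.append([0, -1 / z, y / z, 1 + y * y, -x * y, -x])
    return np.asarray(rows)

def ibvs_velocity(current_px, desired_px, depths, gain=0.5, f=800.0):
    """Image-based visual servoing: v = -gain * pinv(L) * e, giving a 6-DOF camera/end-effector twist."""
    error = (np.asarray(current_px, float) - np.asarray(desired_px, float)).ravel() / f
    L = interaction_matrix(current_px, depths, f)
    return -gain * np.linalg.pinv(L) @ error

current = [(120, 80), (-100, 60), (90, -110), (-80, -70)]
desired = [(100, 100), (-100, 100), (100, -100), (-100, -100)]
depths = [0.5, 0.5, 0.5, 0.5]
print(ibvs_velocity(current, desired, depths))   # [vx, vy, vz, wx, wy, wz]
```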