Autonomous Robots: Latest Articles

Haptic-guided grasping to minimise torque effort during robotic telemanipulation
IF 3.5 · CAS Tier 3 · Computer Science
Autonomous Robots · Pub Date: 2023-04-12 · DOI: 10.1007/s10514-023-10096-7
Rahaf Rahal, Amir M. Ghalamzan-E., Firas Abi-Farraj, Claudio Pacchierotti, Paolo Robuffo Giordano
Abstract: Teleoperating robotic manipulators can be complicated and cognitively demanding for the human operator. Despite these difficulties, teleoperated robotic systems are still popular in several industrial applications, e.g., remote handling of hazardous material. In this context, we present a novel haptic shared-control method for minimising the manipulator torque effort during remote manipulative actions, in which an operator is assisted in selecting a suitable grasping pose for then displacing an object along a desired trajectory. Minimising torque is important because it reduces the system operating cost and extends the range of objects that can be manipulated. We demonstrate the effectiveness of the proposed approach in a series of representative real-world pick-and-place experiments as well as in a human-subjects study. The reported results prove the effectiveness of our shared control versus a standard teleoperation approach. We also find that haptic-only guidance performs better than visual-only guidance, although combining them leads to the best overall results.
Citations: 0
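The grasp-selection idea in the abstract, ranking candidate grasp poses by the torque effort they would induce along the planned object trajectory, can be sketched as follows. This is a toy illustration, not the authors' controller: the candidate poses and the gravity-moment torque model are invented.

```python
import math

def torque_effort(grasp_angle, trajectory):
    """Toy torque model (hypothetical): holding the object at an angular
    offset from its centre of mass creates a gravity moment at each
    trajectory point; effort is the summed squared torque."""
    lever = 0.1 * abs(math.sin(grasp_angle))   # effective lever arm (m)
    mass, g = 2.0, 9.81
    return sum((mass * g * lever * math.cos(tilt)) ** 2 for tilt in trajectory)

def select_grasp(candidates, trajectory):
    """Return the candidate pose with the lowest predicted effort,
    i.e. the pose the haptic guidance would attract the operator towards."""
    return min(candidates, key=lambda a: torque_effort(a, trajectory))

# Object tilt along a planned displacement (radians) and candidate poses.
trajectory = [0.0, 0.2, 0.4, 0.6]
candidates = [0.0, math.pi / 4, math.pi / 2]

best = select_grasp(candidates, trajectory)
```

In the paper this ranking is rendered to the operator as haptic cues rather than chosen automatically; the sketch only shows the underlying cost comparison.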
Robotic hand synergies for in-hand regrasping driven by object information
IF 3.5 · CAS Tier 3 · Computer Science
Autonomous Robots · Pub Date: 2023-04-11 · DOI: 10.1007/s10514-023-10101-z
Dimitrios Dimou, José Santos-Victor, Plinio Moreno
Abstract: We develop a conditional generative model to represent dexterous grasp postures of a robotic hand and use it to generate in-hand regrasp trajectories. Our model learns to encode robotic grasp postures into a low-dimensional space, called the Synergy Space, while taking into account additional information about the object, such as its size and shape category. We then generate regrasp trajectories through linear interpolation in this low-dimensional space, so that the hand configuration moves from one grasp type to another while keeping the object stable in the hand. We show that, by taking advantage of the grasp-size conditional variable, our model achieves a higher success rate on in-hand regrasping than previous methods used for synergy extraction.
Open Access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10101-z.pdf
Citations: 0
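The regrasp-generation step, linear interpolation between two grasp encodings in the low-dimensional synergy space, can be illustrated with a minimal sketch. The 3-D latent codes and the decoder step are placeholders, not the paper's learned model:

```python
def lerp(z_start, z_goal, steps):
    """Linearly interpolate between two latent grasp codes, yielding a
    trajectory of intermediate codes (inclusive of both endpoints)."""
    return [
        [a + (b - a) * t / (steps - 1) for a, b in zip(z_start, z_goal)]
        for t in range(steps)
    ]

# Hypothetical 3-D synergy-space encodings of two grasp types.
z_power = [0.9, -0.2, 0.1]
z_pinch = [-0.5, 0.7, 0.0]

trajectory = lerp(z_power, z_pinch, steps=5)
# In the paper, each intermediate code would then be decoded (conditioned
# on object size and shape category) into a full hand configuration.
```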
Learning instance-level N-ary semantic knowledge at scale for robots operating in everyday environments
IF 3.5 · CAS Tier 3 · Computer Science
Autonomous Robots · Pub Date: 2023-04-06 · DOI: 10.1007/s10514-023-10099-4
Weiyu Liu, Dhruva Bansal, Angel Daruna, Sonia Chernova
Abstract: Robots operating in everyday environments need to effectively perceive, model, and infer semantic properties of objects. Existing knowledge-reasoning frameworks only model binary relations between an object's class label and its semantic properties, and so cannot collectively reason about object properties detected by different perception algorithms and grounded in diverse sensory modalities. We bridge the gap between multimodal perception and knowledge reasoning by introducing an n-ary representation that models complex, inter-related object properties. To tackle the problem of collecting n-ary semantic knowledge at scale, we propose transformer neural networks that generalize knowledge from observations of object instances by learning to predict single missing properties or the joint probabilities of all properties. The learned models can reason at different levels of abstraction, effectively predicting unknown properties of objects in different environmental contexts given different amounts of observed information. We quantitatively validate our approach against prior methods on LINK, a dataset we contribute that contains 1457 object instances in different situations, covering 15 multimodal property types and 200 total properties. Compared to the top-performing baseline, a Markov Logic Network, our models obtain a 10% improvement in predicting unknown properties of novel object instances while reducing training and inference time by more than 150 times. Additionally, we apply our work to a mobile manipulation robot, demonstrating its ability to leverage n-ary reasoning to retrieve objects and actively detect object properties. The code and data are available at https://github.com/wliu88/LINK.
Citations: 6
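The missing-property prediction described above can be mimicked at toy scale by conditioning an empirical joint distribution over property tuples on the observed slots. The property names and counts below are invented, and the paper uses a learned transformer rather than a count table:

```python
from collections import Counter

# Invented instance observations: (colour, material, holds_liquid).
observations = [
    ("white", "ceramic", True),
    ("white", "ceramic", True),
    ("white", "paper", False),
    ("brown", "ceramic", True),
]
joint = Counter(observations)

def predict_missing(observed):
    """Given a partial tuple with None in the unknown slots, return the
    most probable completion under the empirical joint distribution:
    the count-table analogue of the transformer's masked prediction."""
    def matches(full):
        return all(o is None or o == f for o, f in zip(observed, full))
    candidates = {t: c for t, c in joint.items() if matches(t)}
    return max(candidates, key=candidates.get)

# "What material is a white object that holds liquid likely to be?"
print(predict_missing(("white", None, True)))  # -> ('white', 'ceramic', True)
```

A count table cannot generalize to unseen property combinations, which is exactly why the paper learns the joint distribution with a transformer instead.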
Multimodal embodied attribute learning by robots for object-centric action policies
IF 3.5 · CAS Tier 3 · Computer Science
Autonomous Robots · Pub Date: 2023-03-29 · DOI: 10.1007/s10514-023-10098-5
Xiaohan Zhang, Saeid Amiri, Jivko Sinapov, Jesse Thomason, Peter Stone, Shiqi Zhang
Abstract: Robots frequently need to perceive object attributes, such as "red", "heavy", and "empty", using multimodal exploratory behaviors, such as look, lift, and shake. One possible way for robots to do so is to learn a classifier for each perceivable attribute given an exploratory behavior. Once the attribute classifiers are learned, robots can use them to select actions and identify attributes of new objects, answering questions such as "Is this object red and empty?" In this article, we introduce a robot interactive perception problem, called Multimodal Embodied Attribute Learning (MEAL), and explore solutions to it. Under different assumptions, there are two classes of MEAL problems. OFFLINE-MEAL problems, as defined in this article, involve learning attribute classifiers from pre-collected data and sequencing actions towards attribute identification under the challenging trade-off between information gain and exploration action cost. For this purpose, we introduce Mixed Observability Robot Control (MORC), an algorithm for OFFLINE-MEAL problems that dynamically constructs both fully and partially observable components of the state for multimodal attribute identification of objects. We further investigate a more challenging class of MEAL problems, called ONLINE-MEAL, where the robot assumes no pre-collected data and works on attribute classification and attribute identification at the same time. Based on MORC, we develop an algorithm with Information-Theoretic Reward Shaping (MORC-ITRS) that actively addresses the trade-off between exploration and exploitation in ONLINE-MEAL problems. MORC and MORC-ITRS are evaluated against competitive MEAL baselines, and the results demonstrate the superiority of our methods in learning efficiency and identification accuracy.
Citations: 2
Co-design of communication and machine inference for cloud robotics
IF 3.5 · CAS Tier 3 · Computer Science
Autonomous Robots · Pub Date: 2023-03-20 · DOI: 10.1007/s10514-023-10093-w
Manabu Nakanoya, Sai Shankar Narasimhan, Sharachchandra Bhat, Alexandros Anemogiannis, Akul Datta, Sachin Katti, Sandeep Chinchali, Marco Pavone
Abstract: Today, even the most compute- and power-constrained robots can measure complex, high-data-rate video and LIDAR sensory streams. Often, such robots, ranging from low-power drones to space and subterranean rovers, need to transmit high-bitrate sensory data to a remote compute server if they are uncertain or cannot scalably run complex perception or mapping tasks locally. However, today's representations for sensory data are mostly designed for human, not robotic, perception, and thus often waste precious compute or wireless network resources transmitting unimportant parts of a scene that are unnecessary for a high-level robotic task. This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective. Our algorithm aggressively compresses robotic sensory data, by up to 11× more than competing methods. Further, it achieves high accuracy and robust generalization on diverse tasks, including Mars terrain classification with low-power deep-learning accelerators, neural motion planning, and environmental time-series classification.
Open Access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10093-w.pdf
Citations: 7
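The co-design idea, compressing for the downstream model's objective rather than for pixel-level reconstruction, can be caricatured in a few lines: if the (assumed, frozen) perception model only reads part of the sensor vector, a task-aware encoder can transmit just that part. Everything below is an invented stand-in for the learned encoder and perception model in the paper:

```python
def task_head(features):
    """Frozen, pre-trained 'perception model' (toy): classifies terrain
    from only the first two entries of a 6-D sensor vector."""
    slope, roughness = features[0], features[1]
    return "rocky" if slope + roughness > 1.0 else "smooth"

def task_aware_encode(features):
    """Transmit only the coordinates the task head actually uses:
    here a 3x bitrate reduction with no loss in task accuracy."""
    return features[:2]

def decode(code):
    """Server-side decoder: pad the untransmitted channels with zeros."""
    return list(code) + [0.0] * 4

sensor = [0.8, 0.5, 3.1, -2.0, 7.7, 0.4]   # raw 6-D reading
code = task_aware_encode(sensor)            # only 2 numbers on the wire
assert task_head(decode(code)) == task_head(sensor)
```

The paper learns this trade-off end to end by backpropagating the task loss through the encoder, rather than hand-picking channels as in this caricature.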
HeRo 2.0: a low-cost robot for swarm robotics research
IF 3.5 · CAS Tier 3 · Computer Science
Autonomous Robots · Pub Date: 2023-03-20 · DOI: 10.1007/s10514-023-10100-0
Paulo Rezeck, Héctor Azpúrua, Maurício F. S. Corrêa, Luiz Chaimowicz
Abstract: The current state of electronic-component miniaturization, coupled with increasing efficiency in hardware and software, allows the development of smaller, more compact robotic systems. The convenience of using these small, simple, yet capable robots has drawn the research community's attention towards practical applications of swarm robotics. This paper presents the design of a novel platform for swarm robotics applications that is low cost, easy to assemble from off-the-shelf components, and deeply integrated with the most widely used robotic framework available today: ROS (Robot Operating System). The robotic platform is entirely open, consisting of a 3D-printed body and open-source software. We describe its architecture, present its main features, and evaluate its functionality in experiments with a pair of robots. Results demonstrate that the proposed mobile robot can perform a variety of swarm tasks; given its small size and reduced cost, it is well suited for swarm robotics research and education.
Citations: 3
Visuo-haptic object perception for robots: an overview
IF 3.5 · CAS Tier 3 · Computer Science
Autonomous Robots · Pub Date: 2023-03-14 · DOI: 10.1007/s10514-023-10091-y
Nicolás Navarro-Guerrero, Sibel Toprak, Josip Josifovski, Lorenzo Jamone
Abstract: The object perception capabilities of humans are impressive, and this becomes even more evident when trying to develop solutions with similar proficiency in autonomous robots. While there have been notable advancements in technologies for artificial vision and touch, the effective integration of these two sensory modalities in robotic applications still needs improvement, and several open challenges remain. Taking inspiration from how humans combine visual and haptic perception to perceive object properties and drive the execution of manual tasks, this article summarises the current state of the art of visuo-haptic object perception in robots. First, the biological basis of human multimodal object perception is outlined. Then, the latest advances in sensing technologies and data-collection strategies for robots are discussed. Next, an overview of the main computational techniques is presented, highlighting the main challenges of multimodal machine learning and presenting a few representative articles in the areas of robotic object recognition, peripersonal space representation, and manipulation. Finally, informed by the latest advancements and open challenges, this article outlines promising new research directions.
Open Access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10091-y.pdf
Citations: 5
Point-based metric and topological localisation between lidar and overhead imagery
IF 3.5 · CAS Tier 3 · Computer Science
Autonomous Robots · Pub Date: 2023-03-02 · DOI: 10.1007/s10514-023-10085-w
Tim Yuqing Tang, Daniele De Martini, Paul Newman
Abstract: In this paper, we present a method for localising a ground lidar using overhead imagery only. Public overhead imagery, such as Google satellite images, is a readily available resource. It can be used as a map proxy for robot localisation, relaxing the requirement for a prior traversal for mapping as in traditional approaches. While prior approaches have focused on metric localisation between range sensors and overhead imagery, our method is the first to learn both place recognition and metric localisation of a ground lidar using overhead imagery, and it also outperforms prior methods on metric localisation with large initial pose offsets. To bridge the drastic domain gap between lidar data and overhead imagery, our method learns to transform an overhead image into a collection of 2D points, emulating the point cloud scanned by a lidar sensor situated near the centre of the overhead image. After both modalities are expressed as point sets, point-based machine-learning methods for localisation are applied.
Open Access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10085-w.pdf
Citations: 2
Robust inverse dynamics by evaluating Newton–Euler equations with respect to a moving reference and measuring angular acceleration
IF 3.5 · CAS Tier 3 · Computer Science
Autonomous Robots · Pub Date: 2023-02-28 · DOI: 10.1007/s10514-023-10092-x
Maximilian Gießler, Bernd Waltersberger
Abstract: Maintaining stability while walking on arbitrary surfaces or dealing with external perturbations is of great interest in humanoid robotics research. Increasing a system's autonomous robustness to a variety of postural threats during locomotion is key, despite the need to evaluate noisy sensor signals. The equations of motion are the foundation of all published approaches. In contrast, we propose a more adequate evaluation of the equations of motion with respect to an arbitrary moving reference point in a non-inertial reference frame. Conceptual advantages include independence from the global position and velocity vectors estimated by sensor fusion, and the ability to calculate the imaginary zero-moment point when walking on differently inclined ground surfaces. Further, we improve the calculation results by reducing noise-amplifying operations in our algorithm and exploiting specific characteristics of physical robots. We use simulation results to compare our algorithm with established approaches and test it with experimental robot data.
Open Access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10092-x.pdf
Citations: 1
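For context, the standard moment balance about an arbitrary moving reference point P (the textbook identity underlying such formulations, not necessarily the paper's exact equations) reads:

```latex
% Moment balance about a moving point P, with C the centre of mass:
% H_P = angular momentum about P, v_P = velocity of P, v_C = velocity of C,
% m = total mass. The extra cross-product term vanishes when P is fixed,
% when P coincides with C, or when v_P is parallel to v_C.
\sum \mathbf{M}_P \;=\; \dot{\mathbf{H}}_P \;+\; \mathbf{v}_P \times m\,\mathbf{v}_C ,
\qquad
\mathbf{H}_P \;=\; \mathbf{H}_C \;+\; \mathbf{r}_{C/P} \times m\,\mathbf{v}_C .
```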
Automated group motion control of magnetically actuated millirobots
IF 3.5 · CAS Tier 3 · Computer Science
Autonomous Robots · Pub Date: 2023-02-25 · DOI: 10.1007/s10514-023-10084-x
Pouria Razzaghi, Ehab Al Khatib, Yildirim Hurmuzlu
Abstract: Small robots offer access to spaces that are inaccessible to larger ones. This type of access is crucial in applications such as drug delivery, environmental sensing, and collection of small samples. However, some tasks cannot be performed using only one robot, including assembly and manufacturing at small scales, manipulation of micro- and nano-objects, and robot-based structuring of small-scale materials. In this article, we focus on tasks that can be achieved using a group of small-scale robots, such as pattern formation. These robots are typically externally actuated due to their size limitation, so one faces the challenge of controlling a group of robots with a single global input. In this study, we propose a control algorithm to position individual members of a group at predefined positions. In our previous work, we presented a small-scale magnetically actuated millirobot: an electromagnetic coil system applied external forces and steered the millirobots in various modes of motion, such as pivot walking and tumbling. In this paper, we propose two new designs of these millirobots. In the first design, the magnets are placed at the center of the body to reduce the magnetic attraction force between millirobots. In the second design, the millirobots are of identical length, with two extra legs acting as pivot points; varying the pivot separation across the designs takes advantage of variable speed in pivot-walking mode while keeping the speed constant in tumbling mode. This paper presents an algorithm for positional control of n millirobots with different lengths, moving them from given initial positions to desired final ones. The method is based on choosing a leader that is fully controllable; the motions of the other millirobots are then regulated by following the leader and determining the appropriate pivot separations that implement the intended group motion. Simulations and hardware experiments validate these results.
Open Access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10084-x.pdf
Citations: 1
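The leader-follower scheme sketched in the abstract, steering one fully controllable leader while each follower's pivot separation sets its speed ratio under the shared global input, can be illustrated as follows. The one-dimensional kinematics and the ratio values are a simplification, not the authors' controller:

```python
def step_group(positions, leader_step, speed_ratios):
    """Apply one global actuation step: every robot advances in
    proportion to its speed ratio, which in the paper is determined
    by the millirobot's pivot separation."""
    return [p + r * leader_step for p, r in zip(positions, speed_ratios)]

positions = [0.0, 0.0, 0.0]   # leader first, then two followers
ratios = [1.0, 0.5, 0.25]     # the leader fully follows the input

for _ in range(4):            # four identical global steps
    positions = step_group(positions, leader_step=1.0, speed_ratios=ratios)

print(positions)  # -> [4.0, 2.0, 1.0]
```

Because all robots share one input, distinct target positions are reachable only through such heterogeneity in the per-robot response, which is exactly what the varying pivot separations provide.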