{"title":"介绍Botmark:服务机器人的计算机基准","authors":"Andrew Murtagh, Patrick Lynch, C. McGinn","doi":"10.1109/IRC.2020.00046","DOIUrl":null,"url":null,"abstract":"Central to any robot is a computing system tasked with controlling it; acting as the robot's primary means of computation, its performance significantly impacts the performance of the robot as a whole. Accordingly, developing a rigorous and repeatable methodology to assess the performance of computers in this role is a important step towards improving the performance of robots. For this reason, we present Botmark in the first part of this paper: a benchmark suite designed exclusively for evaluating the performance of computers in the application of mobile service robotics. It comprises seven workloads representing common functionality of mobile service robots including path planning, SLAM, facial recognition, among others. In order to determine the functions to be included, we use the popular RoboCup@Home competition as a proxy for ‘real-world’ robotics applications and derive workloads for the benchmark from the tests in the competition. In the second part of this paper, we demonstrate the potential value of benchmarking tools by conducting an experiment evaluation using Botmark. We firstly make a comparison of various computing platforms representative of those commonly used in robotics and see a wide variability in the performance of each. We then look at the difference in performance between running the benchmark natively, in a virtual machine and in a container. To the extent of the authors' knowledge, this is the first computer benchmarking suite designed to address the area of mobile service robotics extensively. 
The benchmark is available for free use and we invite the community to use it to evaluate their platforms and submit their results to the authors for dissemination.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Presenting Botmark: a Computer Benchmark for Service Robotics\",\"authors\":\"Andrew Murtagh, Patrick Lynch, C. McGinn\",\"doi\":\"10.1109/IRC.2020.00046\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Central to any robot is a computing system tasked with controlling it; acting as the robot's primary means of computation, its performance significantly impacts the performance of the robot as a whole. Accordingly, developing a rigorous and repeatable methodology to assess the performance of computers in this role is a important step towards improving the performance of robots. For this reason, we present Botmark in the first part of this paper: a benchmark suite designed exclusively for evaluating the performance of computers in the application of mobile service robotics. It comprises seven workloads representing common functionality of mobile service robots including path planning, SLAM, facial recognition, among others. In order to determine the functions to be included, we use the popular RoboCup@Home competition as a proxy for ‘real-world’ robotics applications and derive workloads for the benchmark from the tests in the competition. In the second part of this paper, we demonstrate the potential value of benchmarking tools by conducting an experiment evaluation using Botmark. We firstly make a comparison of various computing platforms representative of those commonly used in robotics and see a wide variability in the performance of each. 
We then look at the difference in performance between running the benchmark natively, in a virtual machine and in a container. To the extent of the authors' knowledge, this is the first computer benchmarking suite designed to address the area of mobile service robotics extensively. The benchmark is available for free use and we invite the community to use it to evaluate their platforms and submit their results to the authors for dissemination.\",\"PeriodicalId\":232817,\"journal\":{\"name\":\"2020 Fourth IEEE International Conference on Robotic Computing (IRC)\",\"volume\":\"37 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 Fourth IEEE International Conference on Robotic Computing (IRC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IRC.2020.00046\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IRC.2020.00046","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Presenting Botmark: a Computer Benchmark for Service Robotics
Central to any robot is a computing system tasked with controlling it; acting as the robot's primary means of computation, its performance significantly impacts the performance of the robot as a whole. Accordingly, developing a rigorous and repeatable methodology to assess the performance of computers in this role is an important step towards improving the performance of robots. For this reason, we present Botmark in the first part of this paper: a benchmark suite designed exclusively for evaluating the performance of computers in the application of mobile service robotics. It comprises seven workloads representing common functionality of mobile service robots, including path planning, SLAM, and facial recognition, among others. To determine the functions to be included, we use the popular RoboCup@Home competition as a proxy for 'real-world' robotics applications and derive workloads for the benchmark from the tests in the competition. In the second part of this paper, we demonstrate the potential value of benchmarking tools by conducting an experimental evaluation using Botmark. We first compare various computing platforms representative of those commonly used in robotics and observe wide variability in their performance. We then look at the difference in performance between running the benchmark natively, in a virtual machine, and in a container. To the best of the authors' knowledge, this is the first computer benchmarking suite designed to address the area of mobile service robotics extensively. The benchmark is available for free use and we invite the community to use it to evaluate their platforms and submit their results to the authors for dissemination.
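To make the idea of a workload-based benchmark concrete, the following is a minimal illustrative sketch (not Botmark's actual code, which the abstract does not detail): a toy path-planning workload, A* on an empty grid, timed over repeated runs by a simple harness. All names here (`astar`, `run_workload`) are hypothetical.

```python
# Illustrative benchmark-harness sketch: time a toy path-planning
# workload (A* search on a 4-connected grid) over repeated runs.
import heapq
import statistics
import time


def astar(grid, start, goal):
    """A* on a 4-connected grid of 0 (free) / 1 (blocked) cells.

    Returns the shortest path length in steps, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic (admissible on a 4-grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start)]  # (f-score, g-score, cell)
    best = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best.get(nxt, float("inf")):
                    best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None


def run_workload(workload, runs=5):
    """Time a zero-argument workload; return (mean, stdev) in seconds."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        workload()
        times.append(time.perf_counter() - t0)
    return statistics.mean(times), statistics.stdev(times)


if __name__ == "__main__":
    grid = [[0] * 50 for _ in range(50)]  # empty 50x50 occupancy map
    mean, stdev = run_workload(lambda: astar(grid, (0, 0), (49, 49)))
    print(f"path-planning workload: {mean * 1000:.2f} ms "
          f"+/- {stdev * 1000:.2f} ms")
```

A real suite in this spirit would swap in production-grade workloads (a SLAM front-end, a face-recognition pipeline) and report results per platform, which is what enables the native/VM/container comparison described above.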