Autonomous Robots | Pub Date: 2023-07-23 | DOI: 10.1007/s10514-023-10117-5
Zhengguo Zhu, Weiliang Zhu, Guoteng Zhang, Teng Chen, Yibin Li, Xuewen Rong, Rui Song, Daoling Qin, Qiang Hua, Shugen Ma
Design and control of BRAVER: a bipedal robot actuated via proprioceptive electric motors

This paper presents the design and control of a high-speed running bipedal robot, BRAVER. The robot, which weighs 8.6 kg and is 0.36 m tall, has six active degrees of freedom, all driven by custom back-driveable modular actuators that enable high-bandwidth force control and proprioceptive torque feedback. We present the details of the hardware design, including the actuator, leg, foot, and onboard control systems, as well as the locomotion controller design for highly dynamic tasks and improved robustness. We demonstrate the performance of BRAVER in a series of experiments, including walking over multiple terrains, going up and down 15° slopes, push recovery, and running. The maximum running speed of BRAVER reaches 1.75 m/s.

Autonomous Robots, 47(8), 1229-1243.
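A minimal sketch of the joint-level loop a proprioceptive (back-driveable) actuator of this kind typically runs: a PD law on joint position plus a feed-forward term, saturated to the actuator's torque limit. All gains and limits below are hypothetical illustrations, not BRAVER's actual parameters.

```python
# Hypothetical joint-level torque controller for a proprioceptive actuator.
# Gains (kp, kd) and the torque limit (tau_max) are assumed values.

def joint_torque(q_des, qd_des, q, qd, tau_ff=0.0,
                 kp=40.0, kd=1.5, tau_max=17.0):
    """Return a saturated joint torque command (N*m)."""
    tau = kp * (q_des - q) + kd * (qd_des - qd) + tau_ff
    # Back-driveable actuators allow direct torque saturation without a
    # force sensor: motor current itself serves as the torque estimate.
    return max(-tau_max, min(tau_max, tau))

# A 0.1 rad position error with matched velocity yields a small torque;
# a large error saturates at the limit.
print(joint_torque(0.1, 0.0, 0.0, 0.0))
print(joint_torque(1.0, 0.0, 0.0, 0.0))
```

The saturation step is what keeps high-bandwidth force control safe on hardware: the commanded torque can never exceed what the motor driver can deliver.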
Autonomous Robots | Pub Date: 2023-07-23 | DOI: 10.1007/s10514-023-10121-9
John Harwell, Angel Sylvester, Maria Gini
An empirical characterization of ODE models of swarm behaviors in common foraging scenarios

There is a large class of real-world problems, such as warehouse transport, at different scales, swarm densities, etc., that can be characterized as Central Place Foraging Problems (CPFPs). We contribute to swarm engineering by designing an Ordinary Differential Equation (ODE) model that strives to capture the underlying behavioral dynamics of the CPFP in these application areas. Our simulation results show that a hybrid ODE modeling approach combining analytic parameter calculations and post-hoc (i.e., after running experiments) parameter fitting can be just as effective as a purely post-hoc approach to computing parameters via simulations, while requiring less tuning and iterative refinement. This makes it easier to design systems with provable bounds on behavior. Additionally, the resulting model parameters are more understandable because their values can be traced back to problem features, such as system size, robot control algorithm, etc. Finally, we perform real-robot experiments to further understand the limits of our model from an engineering standpoint.

Autonomous Robots, 47(7), 963-977.
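A toy ODE in the spirit of the compartment models used for foraging swarms: robots switch between a "searching" and a "homing" state at constant rates, integrated with forward Euler. The state structure and rate constants here are illustrative assumptions, not the paper's actual model.

```python
# Two-compartment foraging sketch: s = searching robots, h = homing robots.
# alpha, beta are assumed switching rates, not fitted parameters.

def step(s, h, alpha=0.2, beta=0.5, dt=0.01):
    """One Euler step of ds/dt = -alpha*s + beta*h, dh/dt = alpha*s - beta*h."""
    ds = -alpha * s + beta * h
    dh = alpha * s - beta * h
    return s + dt * ds, h + dt * dh

def simulate(s0, h0, steps=10_000):
    s, h = s0, h0
    for _ in range(steps):
        s, h = step(s, h)
    return s, h

# Swarm size is conserved (ds + dh = 0); the split converges to the flux
# balance alpha*s = beta*h, i.e. s/(s+h) -> beta/(alpha+beta).
s, h = simulate(16.0, 0.0)
print(s, h)
```

The appeal of this style of model, as the abstract notes, is that equilibria like the one above can be computed analytically from problem features instead of purely by simulation.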
Autonomous Robots | Pub Date: 2023-07-22 | DOI: 10.1007/s10514-023-10122-8
Riku Murai, Sajad Saeedi, Paul H. J. Kelly
High-frame rate homography and visual odometry by tracking binary features from the focal plane

Robotics faces a long-standing obstacle: the speed of the vision system's scene understanding is insufficient, impeding the robot's ability to perform agile tasks. Consequently, robots must often rely on interpolation and extrapolation of vision data to accomplish tasks in a timely and effective manner. One of the primary reasons for these delays is the analog-to-digital conversion that occurs on a per-pixel basis across the image sensor, along with the transfer of pixel-intensity information to the host device. This results in significant delays and power consumption in modern visual processing pipelines. The SCAMP-5, a general-purpose focal-plane sensor-processor array (FPSP) used in this research, performs computations in the analog domain prior to analog-to-digital conversion. By extracting features from the image on the focal plane, the amount of data that needs to be digitised and transferred is reduced, which allows the SCAMP-5 to achieve a high frame rate with low energy consumption. The focus of our work is on localising the camera within the scene, which is crucial for scene understanding and for any downstream robotics tasks. We present a localisation system that utilises the FPSP in two parts. First, a 6-DoF odometry system is introduced, which efficiently estimates its position against a known marker at over 400 FPS. Second, we extend this work to implement BIT-VO, a 6-DoF visual odometry system which operates in unknown natural environments at 300 FPS.

Autonomous Robots, 47(8), 1579-1592.
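A sketch of the off-sensor half of a pipeline like this: binary descriptors (here simply packed `uint8` rows) are matched by Hamming distance, which is cheap enough to sustain high frame rates. The descriptor width and distance threshold are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=10):
    """For each row of desc_a, return the index of the nearest row of desc_b
    by Hamming distance, or -1 if even the best match exceeds max_dist."""
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]   # pairwise XOR of bytes
    dist = np.unpackbits(xor, axis=2).sum(axis=2)   # popcount per pair
    best = dist.argmin(axis=1)
    ok = dist[np.arange(len(desc_a)), best] <= max_dist
    return np.where(ok, best, -1)

# Two 32-bit descriptors per frame: a[0] differs from b[0] by 4 bits,
# while a[1] is far from everything in b.
a = np.array([[0b10110010] * 4, [0b00000000] * 4], dtype=np.uint8)
b = np.array([[0b10110011] * 4, [0b11111111] * 4], dtype=np.uint8)
print(hamming_match(a, b))  # [0, -1]
```

Matched feature pairs like these are what feed the downstream homography and 6-DoF pose estimation.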
Autonomous Robots | Pub Date: 2023-07-21 | DOI: 10.1007/s10514-023-10124-6
J. Betancourt, P. Castillo, P. García, V. Balaguer, R. Lozano
Robust bounded control scheme for quadrotor vehicles under high dynamic disturbances

In this paper, an optimal bounded robust control algorithm for secure autonomous navigation of quadcopter vehicles is proposed. The controller combines two parts: one dedicated to stabilising the closed-loop system, and a second for estimating and compensating external disturbances as well as unknown nonlinearities inherent to real system operation. To bound the energy used by the system during a mission without losing robustness, a quadratic programming formulation is used that takes the actuator constraints into account. The resulting optimal bounded control scheme considerably improves the stability and robustness of the closed-loop system while bounding the motor control inputs. The controller is validated in real-time flights and in unconventional conditions with strong wind gusts and loss of effectiveness in two rotors. The experimental results demonstrate the good performance of the proposed controller in both scenarios.

Autonomous Robots, 47(8), 1245-1254.
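A sketch of the bounded-allocation idea: a desired wrench (thrust plus body torques) is mapped to rotor commands by least squares subject to actuator bounds, which is a small quadratic program. The mixer matrix and limits below are generic quadrotor values, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Rows: total thrust, roll torque, pitch torque, yaw torque.
# Columns: 4 rotors. Arm length l and drag coefficient k are assumed.
l, k = 0.2, 0.05
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [-l,  l,   l,  -l],
              [ l,  l,  -l,  -l],
              [-k,  k,  -k,   k]])

def allocate(wrench, u_max=6.0):
    """Per-rotor thrusts in [0, u_max] that best realise the desired wrench
    in a least-squares sense (a box-constrained QP)."""
    return lsq_linear(A, wrench, bounds=(0.0, u_max)).x

# A hover-like command splits evenly; an infeasible one saturates cleanly
# instead of commanding thrusts the motors cannot deliver.
print(allocate(np.array([12.0, 0.0, 0.0, 0.0])))
print(allocate(np.array([30.0, 0.0, 0.0, 0.0])))
```

This is the essential benefit the abstract claims: the optimisation keeps the closed loop well behaved even when the requested control effort exceeds what the actuators can produce.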
Autonomous Robots | Pub Date: 2023-07-14 | DOI: 10.1007/s10514-023-10111-x
Salvador Canas-Moreno, Enrique Piñero-Fuentes, Antonio Rios-Navarro, Daniel Cascado-Caballero, Fernando Perez-Peña, Alejandro Linares-Barranco
Towards neuromorphic FPGA-based infrastructures for a robotic arm

Muscles are actuated by bursts of spikes arriving from motor neurons connected to the cerebellum through the spinal cord; alpha motor neurons directly innervate the muscles to complete motor commands coming from upper biological structures. In contrast, classical robotic systems usually require complex computational capabilities and relatively high power consumption to process their control algorithms, which rely on information from the robot's proprioceptive sensors. The way in which information is encoded and transmitted is an important difference between biological systems and robotic machines. Neuromorphic engineering mimics these behaviours found in biology in engineering solutions, both to produce more efficient systems and to better understand neural systems. This paper presents the application of a spike-based Proportional-Integral-Derivative controller to a 6-DoF Scorbot ER-VII robotic arm, feeding the motors with Pulse-Frequency Modulation instead of Pulse-Width Modulation, mimicking the way motor neurons act on muscles. The presented frameworks allow the robot to be commanded and monitored locally or remotely, from either Python software running on a computer or spike-based neuromorphic hardware. Multi-FPGA and single-PSoC solutions are compared. These frameworks are intended for experimental use by the neuromorphic community as a testbed platform and for dataset recording for machine learning purposes.

Autonomous Robots, 47(7), 947-961.
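A sketch of the spike-based idea: the controller output's magnitude is encoded as a pulse frequency (PFM) rather than a duty cycle (PWM), mimicking motor-neuron firing rates. The gains, tick rate, and maximum firing rate below are illustrative assumptions only.

```python
# Hypothetical spike-based PID: an integrate-and-fire pulse generator whose
# firing rate is proportional to the PID output magnitude.

class SpikePID:
    def __init__(self, kp=1.0, ki=0.1, kd=0.0, dt=0.001, max_rate=500.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.max_rate = max_rate          # spikes/s at full-scale command
        self.integral = 0.0
        self.prev_err = 0.0
        self.phase = 0.0                  # accumulated firing phase

    def step(self, err):
        """Advance one tick; return True if a spike is emitted this tick."""
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        rate = min(abs(u), 1.0) * self.max_rate
        self.phase += rate * self.dt
        if self.phase >= 1.0:             # integrate-and-fire style pulse
            self.phase -= 1.0
            return True
        return False

# One second of a constant 0.5 error yields roughly half the maximum rate.
pid = SpikePID()
spikes = sum(pid.step(0.5) for _ in range(1000))
print(spikes)
```

With the motor driven directly by such pulse trains, pulse frequency plays the role that duty cycle plays in a conventional PWM drive.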
Autonomous Robots | Pub Date: 2023-07-10 | DOI: 10.1007/s10514-023-10120-w
Michael Burke, Katie Lu, Daniel Angelov, Artūras Straižys, Craig Innes, Kartic Subr, Subramanian Ramamoorthy
Learning rewards from exploratory demonstrations using probabilistic temporal ranking

Informative path-planning is a well-established approach to visual servoing and active viewpoint selection in robotics, but it typically assumes that a suitable cost function or goal state is known. This work considers the inverse problem, where the goal of the task is unknown and a reward function needs to be inferred from exploratory example demonstrations provided by a demonstrator, for use in a downstream informative path-planning policy. Unfortunately, many existing reward inference strategies are unsuited to this class of problems due to the exploratory nature of the demonstrations. In this paper, we propose an alternative approach to cope with the class of problems where these sub-optimal, exploratory demonstrations occur. We hypothesise that, in tasks which require discovery, successive states of any demonstration are progressively more likely to be associated with a higher reward, and use this hypothesis to generate time-based binary comparison outcomes and infer reward functions that support these ranks, under a probabilistic generative model. We formalise this probabilistic temporal ranking approach and show that it improves upon existing approaches to reward inference for autonomous ultrasound scanning, a novel application of learning from demonstration in medical imaging, while also being of value across a broad range of goal-oriented learning-from-demonstration tasks.

Autonomous Robots, 47(6), 733-751.
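A simplified sketch of the time-based ranking idea: for an exploratory demonstration, a state seen later is assumed more likely to carry higher reward than one seen earlier, so (earlier, later) pairs are generated and scalar state rewards are fitted with a Bradley-Terry-style logistic loss. This toy gradient-ascent version stands in for the paper's full probabilistic generative model.

```python
import math
import random

def fit_rewards(demo_len, pairs, lr=0.1, epochs=200, seed=0):
    """Fit one reward per demonstration state from (earlier, later) pairs."""
    random.seed(seed)
    r = [0.0] * demo_len
    for _ in range(epochs):
        random.shuffle(pairs)
        for i, j in pairs:                         # state j observed after i
            p = 1.0 / (1.0 + math.exp(r[i] - r[j]))  # P(reward j > reward i)
            g = 1.0 - p                            # log-likelihood gradient
            r[j] += lr * g
            r[i] -= lr * g
    return r

# A 6-state demonstration: every later state is ranked above every earlier one.
T = 6
pairs = [(i, j) for i in range(T) for j in range(i + 1, T)]
r = fit_rewards(T, pairs)
print(r)  # rewards increase monotonically along the demonstration
```

Because the comparisons are probabilistic rather than hard constraints, occasional non-monotone stretches in a real demonstration just soften the fit rather than break it.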
Autonomous Robots | Pub Date: 2023-07-06 | DOI: 10.1007/s10514-023-10118-4
Tianyu Wang, Vikas Dhiman, Nikolay Atanasov
Inverse reinforcement learning for autonomous navigation via differentiable semantic mapping and planning

This paper focuses on inverse reinforcement learning for autonomous navigation using distance and semantic category observations. The objective is to infer a cost function that explains demonstrated behavior while relying only on the expert's observations and state-control trajectory. We develop a map encoder, which infers semantic category probabilities from the observation sequence, and a cost encoder, defined as a deep neural network over the semantic features. Since the expert cost is not directly observable, the model parameters can only be optimized by differentiating the error between demonstrated controls and a control policy computed from the cost estimate. We propose a new model of expert behavior that enables error minimization using a closed-form subgradient computed only over a subset of promising states via a motion planning algorithm. Our approach allows generalizing the learned behavior to new environments with new spatial configurations of the semantic categories. We analyze the different components of our model in a minigrid environment. We also demonstrate that our approach learns to follow traffic rules in the autonomous driving CARLA simulator by relying on semantic observations of buildings, sidewalks, and road lanes.

Autonomous Robots, 47(6), 809-830.
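A toy sketch of the planning-subgradient idea: the loss compares the expert's path with the path planned under the current cost map, and a subgradient is simply +1 on planner-used cells and -1 on expert-used cells (shared cells cancel). The grid, costs, and learning rate are all invented here; the paper works with semantic features and a deep cost encoder rather than a raw cost table.

```python
import heapq

def dijkstra(cost, start, goal):
    """Shortest 4-connected path on a square grid of per-cell entry costs."""
    n = len(cost)
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= v[0] < n and 0 <= v[1] < n:
                nd = d + cost[v[0]][v[1]]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1]

def irl_step(cost, expert_path, start, goal, lr=0.5):
    """One subgradient step: penalise planner cells, reward expert cells."""
    planned = dijkstra(cost, start, goal)
    for x, y in planned:
        cost[x][y] += lr
    for x, y in expert_path:
        cost[x][y] -= lr
    return planned

# Under this cost map the planner prefers the cheap top-right route, while
# the (hypothetical) expert went down the left edge instead.
cost = [[1.0, 0.1, 0.1],
        [1.0, 1.0, 0.1],
        [1.0, 1.0, 0.1]]
expert = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
planned = irl_step(cost, expert, (0, 0), (2, 2))
print(planned)
print(cost[1][0])  # an expert-only cell got cheaper after the step
```

Iterating such steps drives the cost map toward one under which the planner reproduces the demonstration, which is the closed-form-subgradient mechanism the abstract describes, minus the neural encoders.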
Autonomous Robots | Pub Date: 2023-07-04 | DOI: 10.1007/s10514-023-10110-y
Rafiqul Islam, Habibullah Habibullah, Tagor Hossain

AGRI-SLAM: a real-time stereo visual SLAM for agricultural environment

In this research, we propose a stereo visual simultaneous localisation and mapping (SLAM) system that works efficiently in agricultural scenarios without compromising performance and accuracy in contrast to other state-of-the-art methods. The proposed system is equipped with an image enhancement technique for ORB point and LSD line feature recovery, which enables it to work in broader scenarios and to extract extensive spatial information from low-light and hazy agricultural environments. First, the method was tested on standard datasets, i.e., KITTI and EuRoC, to validate localisation accuracy by comparison with other state-of-the-art methods, namely VINS-SLAM, PL-SLAM, and ORB-SLAM2. The experimental results show that the proposed method obtains superior localisation and mapping accuracy compared to the other visual SLAM methods. Second, the proposed method was tested on the ROSARIO dataset, our own low-light agricultural dataset, and the O-HAZE dataset to validate performance in agricultural environments. In such cases, while other methods fail to operate in these complex agricultural environments, our method successfully operates with high localisation and mapping accuracy.

Autonomous Robots, 47(6), 649-668.
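A sketch of the preprocessing idea: low-light frames are contrast-enhanced before point and line feature extraction so detectors like ORB and LSD find enough features. Plain global histogram equalisation stands in here for the paper's (richer) enhancement technique.

```python
import numpy as np

def equalise(img):
    """Histogram-equalise an 8-bit grayscale image (HxW uint8 array)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map intensities so the output histogram is approximately uniform.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A dark, low-contrast frame: intensities squeezed into [10, 60].
rng = np.random.default_rng(0)
dark = rng.integers(10, 61, size=(64, 64), dtype=np.uint8)
bright = equalise(dark)
print(dark.min(), dark.max(), "->", bright.min(), bright.max())
```

Stretching the intensity range like this is what recovers enough gradient structure for feature detectors to fire in scenes that would otherwise yield too few matches for tracking.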
Autonomous Robots | Pub Date: 2023-07-04 | DOI: 10.1007/s10514-023-10112-w
Xupeng Zhu, Dian Wang, Guanang Su, Ondrej Biza, Robin Walters, Robert Platt
On robot grasp learning using equivariant models

Real-world grasp detection is challenging due to the stochasticity of grasp dynamics and the noise in hardware. Ideally, the system would adapt to the real world by training directly on physical systems. However, this is generally difficult due to the large amount of training data required by most grasp learning models. In this paper, we note that the planar grasp function is SE(2)-equivariant and demonstrate that this structure can be used to constrain the neural network used during learning. This creates an inductive bias that can significantly improve the sample efficiency of grasp learning and enable end-to-end training from scratch on a physical robot with as few as 600 grasp attempts. We call this method Symmetric Grasp learning (SymGrasp) and show that it can learn to grasp "from scratch" in less than 1.5 h of physical robot time. This paper represents an expanded and revised version of the conference paper by Zhu et al. (2022).

Autonomous Robots, 47(8), 1175-1193.
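A sketch of what equivariance means for a planar grasp-quality map, using discrete C4 rotations as a stand-in for SE(2): rotating the input image must rotate the output map. Here an arbitrary local filter is made equivariant by symmetrising over the group, a standard construction, not the paper's actual network.

```python
import numpy as np

def plain_filter(img):
    """Any translation-equivariant map; here a simple vertical-edge filter."""
    out = np.zeros_like(img, dtype=float)
    out[:, 1:] = img[:, 1:] - img[:, :-1]
    return out

def c4_equivariant(img):
    """Average the filter over all four rotations -> a C4-equivariant map:
    rotating the input is guaranteed to rotate the output."""
    acc = np.zeros_like(img, dtype=float)
    for k in range(4):
        acc += np.rot90(plain_filter(np.rot90(img, k)), -k)
    return acc / 4.0

x = np.random.default_rng(1).random((8, 8))
lhs = c4_equivariant(np.rot90(x))       # rotate, then evaluate
rhs = np.rot90(c4_equivariant(x))       # evaluate, then rotate
print(np.allclose(lhs, rhs))
```

Baking this constraint into the network, instead of hoping it is learned from data, is exactly the inductive bias the abstract credits for the sample-efficiency gain.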
Autonomous Robots | Pub Date: 2023-07-04 | DOI: 10.1007/s10514-023-10113-9
Tianrui Guan, Zhenpeng He, Ruitao Song, Liangjun Zhang

TNES: terrain traversability mapping, navigation and excavation system for autonomous excavators on worksite

We present a terrain traversability mapping and navigation system (TNS) for autonomous excavator applications in unstructured environments. We use an efficient approach to extract terrain features from RGB images and 3D point clouds and incorporate them into a global map for planning and navigation. Our system can adapt to changing environments and update the terrain information in real time. Moreover, we present a novel dataset, the Complex Worksite Terrain dataset, which consists of RGB images from construction sites with seven categories based on navigability. Our novel algorithms improve mapping accuracy over previous methods by 4.17-30.48% and reduce MSE on the traversability map by 13.8-71.4%. We have combined our mapping approach with planning and control modules in an autonomous excavator navigation system and observe a 49.3% improvement in the overall success rate. Based on TNS, we demonstrate the first autonomous excavator that can navigate through unstructured environments consisting of deep pits, steep hills, rock piles, and other complex terrain features. In addition, we combine the proposed TNS with the autonomous excavation system (AES) and deploy the new pipeline, TNES, on a more complex construction site. With minimal human intervention, we demonstrate autonomous navigation capability together with excavation tasks.

Autonomous Robots, 47(6), 695-714.
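A sketch of turning a semantic terrain map into a traversability cost map, the core data structure a system like TNS plans over: each navigability-based category gets a cost, and impassable classes get infinity. The class list and cost values below are invented for illustration; the paper learns its map from RGB images and point clouds.

```python
import numpy as np

# Hypothetical mapping from seven terrain classes to traversal costs.
CLASS_COST = {
    0: 1.0,     # flat ground
    1: 1.5,     # gravel
    2: 2.5,     # mud
    3: 4.0,     # rock pile
    4: 6.0,     # steep hill
    5: np.inf,  # deep pit (impassable)
    6: np.inf,  # obstacle (impassable)
}

def traversability(classes):
    """Map an HxW array of class ids to an HxW traversability cost map."""
    lut = np.array([CLASS_COST[c] for c in range(len(CLASS_COST))])
    return lut[classes]

terrain = np.array([[0, 1, 5],
                    [2, 3, 4]])
cost = traversability(terrain)
print(cost)
```

Using infinity for impassable classes lets a downstream planner exclude those cells without any special-case logic: any path through them has infinite cost.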