{"title":"Design of human motion detection for non-verbal collaborative robot communication cue","authors":"Wendy Cahya Kurniawan, Yeoh Wen Liang, Hiroshi Okumura, Osamu Fukuda","doi":"10.1007/s10015-024-01000-2","DOIUrl":"10.1007/s10015-024-01000-2","url":null,"abstract":"<div><p>The integration of modern manufacturing systems has promised increased flexibility, productivity, and efficiency. In such an environment, collaboration between humans and robots in a shared workspace is essential to effectively accomplish shared tasks. Strong communication among partners is essential for collaborative efficiency. This research investigates an approach to non-verbal communication cues. The system focuses on integrating human motion detection with vision sensors. This method addresses the bias human action detection in frames and enhances the accuracy of perception as information about human activities to the robot. By interpreting spatial and temporal data, the system detects human movements through sequences of human activity frames while working together. The training and validation results confirm that the approach achieves an accuracy of 91%. The sequential testing performance showed an average detection of 83%. This research not only emphasizes the importance of advanced communication in human–robot collaboration, but also effectively promotes future developments in collaborative robotics.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"12 - 20"},"PeriodicalIF":0.8,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial life and robotics celebrates its 30th anniversary","authors":"Fumitoshi Matsuno","doi":"10.1007/s10015-025-01009-1","DOIUrl":"10.1007/s10015-025-01009-1","url":null,"abstract":"","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"1 - 2"},"PeriodicalIF":0.8,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust pig extraction using ground base depth images for automatic weight estimation","authors":"Khin Dagon Win, Kikuhito Kawasue, Tadaaki Tokunaga","doi":"10.1007/s10015-025-01004-6","DOIUrl":"10.1007/s10015-025-01004-6","url":null,"abstract":"<div><p>Dark colored pigs (Berkshire, Duroc, etc.) are widely recognized nationwide in Japan for their exceptional taste, with the southern Kyushu region being a renowned production area for these esteemed breeds. However, estimating the weight of these pigs using a camera presents a unique challenge. The key process in a camera-based weight estimation system is the precise extraction of the target pig from the background. Typically, cameras capture images from above, as the top-view images provide the most specific growth indicators. However, the image from above contains a ground image. Since Berkshire and Duroc pigs are black and red, respectively, they blend into the ground, making it difficult to accurately segment the pigs in the images. Thus, it is crucial to perfectly distinguish between the ground and the pigs. Therefore, a new extraction method is proposed to distinguish between the ground and pigs by converting depth data based on the pig's position. To enhance the efficiency of pig farming and alleviate the burden on workers, our goal is to develop a system that automatically measures the weight of Berkshire pigs for shipment without background interference. In this study, we installed the system at a Berkshire pig farm and demonstrated the effectiveness of this innovative extraction method for camera-based weight estimation.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"42 - 50"},"PeriodicalIF":0.8,"publicationDate":"2025-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-025-01004-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimization of garbage collection routes for evidence-based policy-making","authors":"Tomoki Kaho, Kazutoshi Sakakibara, Mikiharu Arimura, Shinya Watanabe","doi":"10.1007/s10015-024-00988-x","DOIUrl":"10.1007/s10015-024-00988-x","url":null,"abstract":"<div><p>This study models garbage collection in a local city in Hokkaido, Japan, driven by the increasing burden of collection costs despite a declining population. A unique problem in this city is the large number of garbage stations, which exacerbates the collection burden. We examine the impact of waste volume fluctuations and the number and layout of garbage stations on collection routes and costs to find solutions to this issue. This research aims to develop cost-effective and feasible garbage collection strategies to support evidence-based policymaking. We formulated a garbage collection challenge using mixed integer linear programming to minimize travel distances and operational burdens within vehicle capacity constraints. Numerical simulations reveal significant findings: (i) optimized routes reduce total travel distance by <span>(sim)</span>25% compared to existing routes, (ii) increased waste volumes lead to non-linear increases in route lengths, and (iii) the aggregation strength of garbage stations significantly impacts route efficiency and the number of required stations. Conclusively, this study provides empirical evidence to guide policymakers in optimizing garbage collection systems, ensuring effective resource utilization and maintaining service quality.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"156 - 164"},"PeriodicalIF":0.8,"publicationDate":"2025-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal group structure for group chase and escape","authors":"Kohsuke Somemori, Takashi Shimada","doi":"10.1007/s10015-024-00991-2","DOIUrl":"10.1007/s10015-024-00991-2","url":null,"abstract":"<div><p>Chasing multiple escapees by a group of chasers is an important problem for many living animal species and for various agent systems. On this group chase problem, it has been reported that having two distinct types of chasers in the group, namely diligent and totally lazy chasers, can improve the efficiency of the group of catching all the targets. In this paper, we search for a better group structure by letting each agent have moderate laziness. We find that there exists an optimal group structure, which performs better than the previously reported group which consists of binary (fully diligent and totally lazy) types.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"143 - 147"},"PeriodicalIF":0.8,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Discrimination of structures in plant using deep learning models trained by 3D CAD semantics","authors":"Takashi Imabuchi, Kuniaki Kawabata","doi":"10.1007/s10015-024-00989-w","DOIUrl":"10.1007/s10015-024-00989-w","url":null,"abstract":"<div><p>This paper describes a 3D point cloud segmentation pipeline that contributes to the efficiency of decommissioning works at the Fukushima Daiichi Nuclear Power Station. For decommissioning works, simulations and calculations for preliminary work planning using 3D structural models are crucial from a safety and efficiency viewpoint. However, 3D modeling works typically require high costs. Therefore, we aim to improve the efficiency of 3D modeling by segmenting geometric shape regions into categories in a 3D point cloud state using deep learning. Our pipeline uses 3D computer-aided design semantics to create a training dataset that reduces annotation costs and helps learn human knowledge. Performance evaluation results show that the discriminator can discriminate major structural categories with high accuracy using deep learning models. However, we confirm that even the state-of-the-art model has limitations in discriminating structures containing similar shapes between categories and structures in categories with a small number of training data. In the analysis of evaluation results, we discuss challenges encountered by our pipeline for practical applications.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"184 - 195"},"PeriodicalIF":0.8,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proposal of SSVEP ratio for efficient ear-EEG SSVEP-BCI development and evaluation","authors":"Sodai Kondo, Hideyuki Harafuji, Hisaya Tanaka","doi":"10.1007/s10015-024-01002-0","DOIUrl":"10.1007/s10015-024-01002-0","url":null,"abstract":"<div><p>Ear electroencephalogram (ear-EEG) records electrical signals around the ear, offering a more casual and user-friendly approach to EEG measurement. Steady-state visual evoked potential (SSVEP) are brain responses elicited by gazing at flickering stimuli. Ear-EEG can enhance comfort in SSVEP-based brain–computer interface (SSVEP-BCI), but its performance is typically low behind traditional SSVEP-BCI. Additionally, predicting the performance of ear-EEG SSVEP-BCIs before experimentation is challenging, often increasing design costs. This study proposes the SSVEP ratio as a supplementary index to traditional metrics such as information transfer rate (ITR) and BCI accuracy. Using the SSVEP ratio and the KNN algorithm, we predicted BCI accuracy and ITR, aiming to lower design costs. The developed four-inputs ear-EEG SSVEP-BCI achieved a maximum BCI accuracy of 89.17 ± 3.62% and an ITR of 10.60 ± 0.36 bits/min. Predicted BCI accuracy was 90.21 ± 3.25% and an ITR was 9.43 ± 0.96 bits/min in ear-EEG SSVEP-BCI. Predicted values matched the actual results, demonstrating that the SSVEP ratio can effectively predict BCI accuracy, thereby streamlining the design process for ear-EEG SSVEP-BCI.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"32 - 41"},"PeriodicalIF":0.8,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-024-01002-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of rotary-type electrostatic motor for MEMS microrobot","authors":"Shuxin Lyu, Yuya Tamaki, Katsuyuki Morishita, Ken Saito","doi":"10.1007/s10015-024-00996-x","DOIUrl":"10.1007/s10015-024-00996-x","url":null,"abstract":"<div><p>Recently, many researchers have expected millimeter-sized microrobots to work in narrow spaces. However, it is challenging to integrate the actuators, controllers, sensors, and energy sources into millimeter-sized microrobots. A small actuator with low power consumption is required to realize millimeter-sized microrobots. Previously, the authors developed a new linear electrostatic motor for microrobots. However, most microrobots rely on rotary actuators to expand their application scenarios and enhance adaptability. In this paper, the authors designed and developed a rotary-type electrostatic motor to provide a low-power drive solution for microrobots to address the limitations of linear motors and broaden their range of applications. Through experimentation, we identified an issue with reverse rotation in the electrostatic motor and analyzed its causes. To address the reverse-rotation issue, we proposed improvements, including optimizing the electrode structure and adjusting the drive waveform, which significantly enhanced the stability of forward rotation. The author plans to refine the motor's design further and integrate it into a microrobot system.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"148 - 155"},"PeriodicalIF":0.8,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Actuator for endoscope-connected microrobot driven by compressed gas","authors":"Takamichi Funakoshi, Yuya Niki, Koki Takasumi, Chise Takeshita, Minami Kaneko, Fumio Uchikoba","doi":"10.1007/s10015-024-00994-z","DOIUrl":"10.1007/s10015-024-00994-z","url":null,"abstract":"<div><p>With the aim of reducing the mental and physical burden on physicians and patients in endoscopic treatment, an endoscope-connected microrobot actuator and a self-propelled wheeled microrobot that uses Reuleaux triangle as the wheel shape is described for the use of medical carbon dioxide gas. A turbine-type actuator measuring 5.17 mm (long) × 5.13 mm (wide) × 1.96 mm (thick) with a mass of 0.15 g showed rotational speeds of 26,784 rpm, 56,250 rpm, and 57,690 rpm at pressures of 0.1 MPa, 0.2 MPa, and 0.3 MPa and a flow rate of 1.0 L/min, respectively. The dimensions of the traveling microrobot with wheels attached to the actuator were 7.59 mm (length) × 6.49 mm (width) × 7.59 mm (height) (excluding the brass tube) with a mass of 0.25 g. The robot ran at 73 mm/s at a flow rate of 1.0 L/min at 0.3 MPa and at 56 mm/s at a flow rate of 0.9 L/min. The results confirmed that the flow rate of the material was 0.9 L/min at a pressure of 0.3 MPa.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"63 - 71"},"PeriodicalIF":0.8,"publicationDate":"2024-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on adaptive visual servo method for circular symmetrical objects","authors":"Tingting Wang, Yunlong Zhao, Kui Li, Yanyun Bi","doi":"10.1007/s10015-024-00995-y","DOIUrl":"10.1007/s10015-024-00995-y","url":null,"abstract":"<div><p>Circularly symmetric targets are widely used in industry; therefore, how to identify, locate, and grasp circularly symmetrical structures accurately is an important issue in the field of industrial robots. This paper proposed a more general visual servoing solution for circularly symmetric targets, and the proposed visual servoing scheme not only compensates for the limitation that ellipse features can only control 5-DOF (degrees of freedom) of the manipulator, but also solves the problem of slow convergence of image moment features when approaching the desired pose. An adaptive linear controller that combines ellipse features and image moment features is further proposed, thus achieving rapid convergence of the six degrees of freedom of the manipulator. Experimental results verify the effectiveness of the proposed method.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"98 - 106"},"PeriodicalIF":0.8,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}