{"title":"Research on coordinated control strategy of distributed static synchronous series compensator based on multi-objective optimization immune algorithm","authors":"Yu Wang, Zhenzhong Yan, Liting Yan, Xufei Liu, Yanpeng Liu","doi":"10.1007/s10015-024-00967-2","DOIUrl":"10.1007/s10015-024-00967-2","url":null,"abstract":"<div><p>The distributed static synchronous series compensator can optimize the transmission capacity of the power grid. However, the research on the coordinated control and interaction between the devices is not mature enough, and it still needs to be further explored. Therefore, a coordinated control strategy based on multi-objective immune optimization algorithm is proposed in this paper. To realize the feasibility of the coordination strategy, simulation experiments were carried out. The results showed that through the coordination of multi-objective optimization artificial immune algorithm, the optimization rate of active power and reactive power of the line reached 89.88%, and the optimization rate of direct current capacitance and voltage also reached 51.45%, which confirmed the effectiveness of the coordination strategy. It can improve the application of distributed static synchronous series compensator in power grid transmission.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 4","pages":"567 - 572"},"PeriodicalIF":0.8,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI robots pioneer the Smarter Inclusive Society","authors":"Yasuhisa Hirata","doi":"10.1007/s10015-024-00975-2","DOIUrl":"10.1007/s10015-024-00975-2","url":null,"abstract":"<div><p>This paper outlines a project aimed at realizing a “Smarter Inclusive Society” by 2050 through the integration of AI robots into various public facilities. Led by the Cabinet Office’s “Moonshot Research and Development Program,” the project focuses on developing Adaptable AI-enabled Robots that enhance self-efficacy by supporting users’ abilities while maintaining their sense of independence. Key to the project is the Robotic Nimbus, a soft and flexible robot designed to provide tailored assistance while preserving user agency. The concept of “Adaptable AI-enabled Robots” is introduced to ensure versatility in accommodating user needs and preferences. In addition to physical assistance, the project emphasizes creating engaging experiences through activities like dance and sports, fostering excitement and inclusivity. Collaborations, such as the “Yes We Dance!” performance, demonstrate the potential of AI technology in enhancing rehabilitation opportunities and promoting social participation. By 2050, the project aims to establish a society where AI robots contribute to mental, physical, and social wellbeing, empowering individuals to engage in independent activities and fostering a vibrant, inclusive community. This paper is a compilation of articles/papers/presentations previously presented on the Moonshot Hirata project.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 4","pages":"431 - 437"},"PeriodicalIF":0.8,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-024-00975-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Probabilistic model for high-level intention estimation and trajectory prediction in urban environments","authors":"Yunsoo Bok, Naoki Suganuma, Keisuke Yoneda","doi":"10.1007/s10015-024-00973-4","DOIUrl":"10.1007/s10015-024-00973-4","url":null,"abstract":"<div><p>To enable successful automated driving, precise behavior prediction of surrounding vehicles is indispensable in urban traffic scenarios. Furthermore, given that a vehicle’s behavior is influenced by the movements of other road users, it becomes crucial to estimate their intentions to anticipate precise future motion. However, the elevated complexity resulting from interdependencies among traffic participants and the uncertainty arising from the object recognition errors present additional challenges. Despite extensive research on inferring intentions, many studies have concentrated on estimating intentions from interactions, resulting in a lack of practicality in urban traffic environments due to low computational efficiency and low robustness against recognition failure of strongly interacting road users. In this paper, we introduce a practical stochastic model for intention estimation and trajectory prediction of surrounding vehicles in automated driving under urban traffic environments. The trajectory is forecasted based on hierarchically computed and probabilistically estimated intentions, which represent an interpretation of vehicle behavior, utilizing only the kinematic state of the focal vehicle and HD maps to ensure real-time performance and enhance robustness. The evaluated results demonstrate that the proposed model surpasses straightforward methods in terms of accuracy while maintaining computational efficiency and exhibits robustness against the recognition failure of traffic participants which strongly influence the focal vehicle.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 4","pages":"557 - 566"},"PeriodicalIF":0.8,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-024-00973-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preservation of emotional context in tweet embeddings on social networking sites","authors":"Osamu Maruyama, Asato Yoshinaga, Ken-ichi Sawai","doi":"10.1007/s10015-024-00974-3","DOIUrl":"10.1007/s10015-024-00974-3","url":null,"abstract":"<div><p>In communication, emotional information is crucial, yet its preservation in tweet embeddings remains a challenge. This study aims to address this gap by exploring three distinct methods for generating embedding vectors of tweets: word2vec models, pre-trained BERT models, and fine-tuned BERT models. We conducted an analysis to assess the degree to which emotional information is conserved in the resulting embedding vectors. Our findings indicate that the fine-tuned BERT model exhibits a higher level of preservation of emotional information compared to other methods. These results underscore the importance of utilizing advanced natural language processing techniques for preserving emotional context in text data, with potential implications for enhancing sentiment analysis and understanding human communication in social media contexts.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 4","pages":"486 - 493"},"PeriodicalIF":0.8,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-024-00974-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spiking neural networks-based generation of caterpillar-like soft robot crawling motions","authors":"SeanKein Yoshioka, Takahiro Iwata, Yuki Maruyama, Daisuke Miki","doi":"10.1007/s10015-024-00970-7","DOIUrl":"10.1007/s10015-024-00970-7","url":null,"abstract":"<div><p>Robots have been widely used in daily life in recent years. Unlike conventional robots made of rigid materials, soft robots utilize stretchable and flexible materials, allowing flexible movements similar to those of living organisms, which are difficult for traditional robots. Previous studies have used periodic signals to control soft robots, which lead to repetitive motions and make it challenging to generate environment-adapted motions. To address this issue, control methods can be learned through deep reinforcement learning to enable soft robots to select appropriate actions based on observations, improving their adaptability to environmental changes. In addition, as mobile robots have limited onboard resources, it is necessary to conserve battery consumption and achieve low-power control. Therefore, the use of spiking neural networks (SNNs) with neuromorphic chips enables low-power control of soft robots. In this study, we investigated the learning methods for SNNs aimed at controlling soft robots. Experiments were conducted using a caterpillar-like soft robot model based on previous studies, and the effectiveness of the learning method was evaluated.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 4","pages":"519 - 527"},"PeriodicalIF":0.8,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-024-00970-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time drowsiness evaluation system using marker-less facial motion capture","authors":"Yudai Koshi, Hisaya Tanaka","doi":"10.1007/s10015-024-00972-5","DOIUrl":"10.1007/s10015-024-00972-5","url":null,"abstract":"<div><p>This paper proposes a drowsiness expression rating system that can rate drowsiness in real time using only video information. Drowsiness in drivers is caused by various factors, including driving on monotonous roads, and can lead to numerous problems, e.g., traffic accidents. Previously, we developed an offline drowsiness evaluation system the uses only video image information from MediaPipe, which is a marker-less facial motion capture system. The proposed system can perform real-time drowsiness rating on multiple platforms and requires a smartphone or personal computer. Results of applied to car driving demonstrate that the accuracy of the proposed system was 89.7%, 78.8%, and 65.0% for binary, three-class, and five-class classification tasks, respectively. In addition, the proposed system outperformed existing systems in binary, three-class, and five-class classification tasks by 6.0%, 0.8%, and 4.3%, respectively. These results demonstrate that the proposed system exhibits a higher accuracy rate than the existing methods.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 4","pages":"573 - 578"},"PeriodicalIF":0.8,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A two-stage image segmentation method for harvest order decision of wood ear mushroom","authors":"Kazuya Okamura, Ryo Matsumura, Hironori Kitakaze","doi":"10.1007/s10015-024-00971-6","DOIUrl":"10.1007/s10015-024-00971-6","url":null,"abstract":"<div><p>This study proposes a method for determining the appropriate harvesting order for densely growing wood ear mushrooms by recognizing their growth stages and harvesting priorities from depth images obtained from a stereo camera. We aim to minimize crop damage and improve the quality of harvested crops during the harvesting of densely growing crops using a robot arm. The proposed two-stage method consists of two models—one of the models to recognize priority harvest regions, and the other model to identify individual wood ear mushroom regions and growth stages. The final harvesting order is determined based on the outputs of these models. The models were trained using simulated CGI data of wood ear mushroom growth. The experimental results show that the appropriate harvesting order can be outputted in 57.5% of the cases for the 40 sets of test data. The results show that it is possible to determine the harvesting order of dense wood ear mushrooms based solely on depth images. However, there is still room for improvement in operations in actual environments. Further work is needed to enhance the method’s robustness and accuracy.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 4","pages":"528 - 535"},"PeriodicalIF":0.8,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A digital hardware system for real-time biorealistic stimulation on in vitro cardiomyocytes","authors":"Pierre-Marie Faure, Agnès Tixier-Mita, Timothée Levi","doi":"10.1007/s10015-024-00968-1","DOIUrl":"10.1007/s10015-024-00968-1","url":null,"abstract":"<div><p>Every year, cardiovascular diseases cause millions of deaths worldwide. These diseases involve complex mechanisms that are difficult to study. To remedy this problem, we propose to develop a heart–brain platform capable of reproducing the mechanisms involved in generating the heartbeat. The platform will be designed to operate in real time, with the most economical and integrated design possible. To achieve this, we are implementing highly biologically coherent cellular models on FPGA, which we interconnect with in vitro cell cultures. In our case, we are using the Maltsev–Lakatta cell model, which describes the behavior of the pacemaker cells responsible for the heart rhythm, to stimulate a cardiomyocyte culture.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 4","pages":"473 - 478"},"PeriodicalIF":0.8,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142519130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Biomimetic snake locomotion using central pattern generators network and bio-hybrid robot perspective","authors":"Jérémy Cheslet, Romain Beaubois, Tomoya Duenki, Farad Khoyratee, Takashi Kohno, Yoshiho Ikeuchi, Timothée Lévi","doi":"10.1007/s10015-024-00969-0","DOIUrl":"10.1007/s10015-024-00969-0","url":null,"abstract":"<div><p>Neurological disorders affect millions globally and necessitate advanced treatments, especially with an aging population. Brain Machine Interfaces (BMIs) and neuroprostheses show promise in addressing disabilities by mimicking biological dynamics through biomimetic Spiking Neural Networks (SNNs). Central Pattern Generators (CPGs) are small neural networks that, emulated through biomimetic networks, can replicate specific locomotion patterns. Our proposal involves a real-time implementation of a biomimetic SNN on FPGA, utilizing biomimetic models for neurons, synaptic receptors and synaptic plasticity. The system, integrated into a snake-like mobile robot where the neuronal activity is responsible for its locomotion, offers a versatile platform to study spinal cord injuries. Lastly, we present a preliminary closed-loop experiment involving bidirectional interaction between the artificial neural network and biological neuronal cells, paving the way for bio-hybrid robots and insights into neural population functioning.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 4","pages":"479 - 485"},"PeriodicalIF":0.8,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparative study of linear and nonlinear regression models for blood glucose estimation based on near-infrared facial images from 760 to 1650 nm wavelength","authors":"Mayuko Nakagawa, Kosuke Oiwa, Yasushi Nanai, Kent Nagumo, Akio Nozawa","doi":"10.1007/s10015-024-00961-8","DOIUrl":"10.1007/s10015-024-00961-8","url":null,"abstract":"<div><p>We have attempted to estimate blood glucose levels based on facial images measured in the near-infrared band, which is highly biopermeable, to establish a remote minimally invasive blood glucose measurement method. We measured facial images in the near-infrared wavelength range of 760–1650 nm, and constructed a general model for blood glucose level estimation by linear regression using the weights of spatial features of the measured facial images as explanatory variables. The results showed that the accuracy values of blood glucose estimation in the generalization performance evaluation were 43.02 mg/dL for NIR-I (760–1100 nm) and 43.61 mg/dL for NIR-II (1050–1650 nm) in the RMSE of the general model. Since biological information is nonlinear, it is necessary to explore suitable modeling methods for blood glucose estimation, including not only linear regression but also nonlinear regression. The purpose of this study is to explore suitable regression methods among linear and nonlinear regression methods to construct a blood glucose estimation model based on facial images with wavelengths from 760 to 1650 nm. The results showed that model using Random Forest had the best estimation accuracy with an RMSE of 36.02 mg/dL in NIR-I and the MR model had the best estimation accuracy with RMSE of 36.70 mg/dL in NIR-II under the current number of subjects and measurement data points. The independent components selected for the model have spatial features considered to be simply individual differences that are not related to blood glucose variation.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 4","pages":"501 - 509"},"PeriodicalIF":0.8,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142519068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}