{"title":"Multi-objective optimization of flight schedules to maximize constraint tolerance by local search and archive mechanisms","authors":"Tomoki Ishizuka, Akinori Murata, Hiroyuki Sato, Keiki Takadama","doi":"10.1007/s10015-025-01021-5","DOIUrl":"10.1007/s10015-025-01021-5","url":null,"abstract":"<div><p>To introduce the concept of the “constraint tolerance” (i.e., a feasibility of solutions) in the flight scheduling problem, this paper proposes the optimization method that can find the feasible flight schedules by optimizing the original objective function while maximizing the constraint tolerance as much as possible. The proposed method further is improved by integrating it with the local search and archive mechanisms to obtain a wide range of Pareto-optimal solutions with a high constraint tolerance. A comparison between the proposed method and the conventional methods with or without adding a new objective function to maximize the constraint tolerance shows the statistical superiority of the proposed method.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"289 - 302"},"PeriodicalIF":0.8,"publicationDate":"2025-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stable dynamic patterns generated by retrograde model","authors":"Mari Nakamura","doi":"10.1007/s10015-025-01017-1","DOIUrl":"10.1007/s10015-025-01017-1","url":null,"abstract":"<div><p>A heterogeneous boid is a multi-agent system comprised of several types of agents that communicate locally. It forms diverse patterns of agent groups through various interactions. With appropriately tuned interactions, it forms stable patterns of a unified cluster with symmetrical structures that reflect local interactions. This ensures that these patterns remain stable, regardless of the number of agents (i.e., scalability). Prior research introduced the retrograde model, where two agent types exhibited reverse movement while a third type formed a unified cluster. By tuning the interaction, this model formed stable dynamic patterns. With a large number of agents, even under appropriate interactions, long-lasting metastable states emerge, making it difficult to distinguish them from stable patterns. In this study, by focusing on large-scale structures (cluster shape and agent flow), we reclassified three stable dynamic patterns formed by the retrograde model, removing the metastable states. We identify a new dynamic stable pattern, named as an irregular-oscillating pattern, by focusing on a cluster of specific shapes.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"236 - 244"},"PeriodicalIF":0.8,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards flexible swarms: comparison of flocking models with varying complexity","authors":"Lauritz Keysberg, Naoki Wakamiya","doi":"10.1007/s10015-025-01016-2","DOIUrl":"10.1007/s10015-025-01016-2","url":null,"abstract":"<div><p>One remarkable feat of biological swarms is their ability to work under very different environmental circumstances and disturbances. They exhibit a flexible kind of robustness, which accommodates external events without staying on rigid positions. Based on the observation that conventionally robust flocking models can be very complex and use information unavailable to biological swarm, we undertook a wide investigation into the properties of existing flocking models such as Boid, Couzin, Vicsek, and Cucker–Smale. That is, to see if a similar “natural” flexibility could be observed in flocking models with lower complexity. We established a toolset of three metrics which allows for a comprehensive evaluation of different flocking models. These metrics measure general model performance, robustness under noise, as well as a naive complexity of the model itself. Our results show a general trend for divergence between performance and robustness. The most robust models had a medium–high complexity. While our results show no clear relation between robustness and low complexity, we discuss examples for robust behavior with simple rules.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"219 - 226"},"PeriodicalIF":0.8,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-025-01016-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reinforcement learning-based autonomous driving control for efficient road utilization in lane-less environments","authors":"Mao Tobisawa, Kenji Matsuda, Tenta Suzuki, Tomohiro Harada, Junya Hoshino, Yuki Itoh, Kaito Kumagae, Johei Matsuoka, Kiyohiko Hattori","doi":"10.1007/s10015-025-01013-5","DOIUrl":"10.1007/s10015-025-01013-5","url":null,"abstract":"<div><p>In recent years, research on autonomous driving using reinforcement learning has been attracting attention. Much of the current research focuses on simply replacing human driving with autonomous driving. Compared to conventional human-driven vehicles, autonomous vehicles can utilize a wide variety of sensor measurements and share information with nearby vehicles through vehicle-to-vehicle communication for driving control. By actively utilizing these capabilities, we can consider overall optimal control through coordination of groups of autonomous vehicles, which is completely different from human driving control. One example is adaptive vehicle control in an environment that does not assume lane separation or directional separation (Single Carriageway Environment). In this study, we construct a simulation environment and focus on the efficient use of a Single Carriageway Environment, aiming to develop driving control strategies using reinforcement learning. In an environment with a road width equivalent to four lanes, without lane or directional separation, we acquire adaptive vehicle control through reinforcement learning using information obtained from sensors and vehicle-to-vehicle communication. To verify the effectiveness of the proposed method, we construct two types of environments: a Single Carriageway Environment and a conventional road environment with directional separation (Dual Carriageway Environment). We evaluate road utilization effectiveness by measuring the number of vehicles passing through and the average number of vehicles present on the road. The result of the evaluation shows that, in the Single Carriageway Environment, our method has adapted to road congestion and was seen to effectively utilize the available road space. Furthermore, both the number of vehicles passing through and the average number of vehicles present have also improved.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"276 - 288"},"PeriodicalIF":0.8,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of smart navigation robot for the visually impaired","authors":"Jin Yien Lee, Taiga Eguchi, Wen Liang Yeoh, Hiroshi Okumura, Osamu Fukuda","doi":"10.1007/s10015-025-01012-6","DOIUrl":"10.1007/s10015-025-01012-6","url":null,"abstract":"<div><p>Individuals with visual impairments often rely on assistive tools such as white canes and guide dogs to navigate their environments. While these tools provide a certain level of support, their effectiveness is frequently constrained in complex or dynamically changing environments, even with extensive user training. To address these limitations, we have developed a smart navigation robot that integrates artificial intelligence for object detection, offering a viable alternative to traditional assistive tools. The robot is designed to provide real-time assistance through auditory alerts, all while allowing the user to maintain full control over the robot’s direction according to their intentions. The robot’s effectiveness was evaluated through an experimental study in which participants navigated diverse environments using both the smart navigation robot and a white cane. Participant perceptions of the robot’s usability, reliability, safety, and interaction quality were evaluated using the Godspeed Questionnaire Series. The comparative analysis revealed that the smart navigation robot outperformed the white cane, particularly in dynamic scenarios. These findings suggest that the robot has the potential to substantially improve the quality of life and independence of individuals with visual impairments, offering a greater degree of freedom than was previously attainable.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"265 - 275"},"PeriodicalIF":0.8,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Power-law distributions in an online video-sharing system and its long-term dynamics","authors":"Kiminori Ito, Takashi Shimada","doi":"10.1007/s10015-025-01007-3","DOIUrl":"10.1007/s10015-025-01007-3","url":null,"abstract":"<div><p>We study the data of Japanese video-sharing platform, Niconico, which contains 21 million videos. From our analysis, the rank size distribution of video views is found to exhibit a crossover from a power law with an exponent around <span>(-0.5)</span> for the top <span>(approx 10^5)</span> movies to another power low with exponent around <span>(-1)</span> for the movies in the following ranks. The probability density function of video views for the bottom <span>(90%)</span> movies is well fitted by log-normal distribution. This implies that, while videos in the top rank regime follow a different dynamics which yields the power law, videos in the middle and low rank regime seem to be evolving according to a random multiplicative process. Furthermore, we observe temporal relaxation process of video views for 3 years. Temporal relaxation process of video views is grouped by the size of the number of video views, and averaged within each size group. Interestingly, the daily video views universally show power-law relaxation in all view size, from the top total view group (<span>(10^6-10^7)</span>) to the bottom group (<span>(approx 10^2)</span>). This indicates the existence of memory processes longer than the exponential function, which are universally independent of video size.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"325 - 331"},"PeriodicalIF":0.8,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-025-01007-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A TDA-based performance analysis for neural networks with low-bit weights","authors":"Yugo Ogio, Naoki Tsubone, Yuki Minami, Masato Ishikawa","doi":"10.1007/s10015-025-01005-5","DOIUrl":"10.1007/s10015-025-01005-5","url":null,"abstract":"<div><p>Advances in neural network (NN) models and learning methods have resulted in breakthroughs in various fields. A larger NN model is more difficult to install on a computer with limited computing resources. One method for compressing NN models is to quantize the weights, in which the connection weights of the NNs are approximated with low-bit precision. The existing quantization methods for NN models can be categorized into two approaches: quantization-aware training (QAT) and post-training quantization (PTQ). In this study, we focused on the performance degradation of NN models using PTQ. This paper proposes a method for visually evaluating the performance of quantized NNs using topological data analysis (TDA). Subjecting the structure of NNs to TDA allows the performance of quantized NNs to be assessed without experiments or simulations. We developed a TDA-based evaluation method for NNs with low-bit weights by referring to previous research on a TDA-based evaluation method for NNs with high-bit weights. We also tested the TDA-based method using the MNIST dataset. Finally, we compared the performance of the quantized NNs generated by static and dynamic quantization through a visual demonstration.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"332 - 341"},"PeriodicalIF":0.8,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint pairwise learning and masked language models for neural machine translation of English","authors":"Shuhan Yang, Qun Yang","doi":"10.1007/s10015-025-01008-2","DOIUrl":"10.1007/s10015-025-01008-2","url":null,"abstract":"<div><p>The translation activity of language is a link and bridge for the integration of politics, economy, and culture in various countries. However, manual translation requires high quality of professional translators and takes a long time. The study attempts to introduce dual learning on the basis of traditional neural machine translation models. The improved neural machine translation model includes decoding of the source language and target language. With the help of the source language encoder, forward translation, backward backtranslation, and parallel decoding can be achieved; At the same time, adversarial training is carried out using a corpus containing noise to enhance the robustness of the model, enriching the technical and theoretical knowledge of existing neural machine translation models. The test results show that compared with the training speed of the baseline model, the training speed of the constructed model is 115 K words/s and the decoding speed is 2647 K words/s, which is 7.65 times faster than the decoding speed, and the translation quality loss is within the acceptable range. The mean bilingual evaluation score for the “two-step” training method was 16.51, an increase of 3.64 points from the lowest score, and the K-nearest-neighbor algorithm and the changing-character attack ensured the semantic integrity of noisy source language utterances to a greater extent. The translation quality of the changing character method outperformed that of the unrestricted noise attack method, with the highest bilingual evaluation study score value improving by 3.34 points and improving the robustness of the model. The translation model constructed by the study has been improved in terms of training speed and robustness performance, and is of practical use in many translation domains.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"342 - 353"},"PeriodicalIF":0.8,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance evaluation of ORB-SLAM3 with quantized images","authors":"Siyuan Tao, Yuki Minami, Masato Ishikawa","doi":"10.1007/s10015-025-01006-4","DOIUrl":"10.1007/s10015-025-01006-4","url":null,"abstract":"<div><p>Visual simultaneous localization and mapping (SLAM) is a critical technology for robots to perform high-precision navigation, increasing the focus among researchers to improve its accuracy. However, improvements in SLAM accuracy always come at the cost of an increased memory footprint, which limits the long-term operation of devices that operate under constrained hardware resources. Application of quantization methods is proposed as a promising solution to this problem. Since quantization can result in performance degradation, it is crucial to quantitatively evaluate the trade-off between potential degradation and memory savings to assess its practicality for visual SLAM. This paper introduces a mechanism to evaluate the influence of a quantization method on visual SLAM, and applies it to assess the impact of three different quantization methods on ORB-SLAM3. Specifically, we examine two static quantization methods and a dynamic quantization method called error diffusion, which can pseudo-preserve image shading information. The paper contributes to the conclusion that error diffusion, with controlled weight parameters in the error diffusion filter, can suppress degradation and reduce the memory footprint, demonstrating its effectiveness in dynamic environments.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"354 - 363"},"PeriodicalIF":0.8,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of human motion detection for non-verbal collaborative robot communication cue","authors":"Wendy Cahya Kurniawan, Yeoh Wen Liang, Hiroshi Okumura, Osamu Fukuda","doi":"10.1007/s10015-024-01000-2","DOIUrl":"10.1007/s10015-024-01000-2","url":null,"abstract":"<div><p>The integration of modern manufacturing systems has promised increased flexibility, productivity, and efficiency. In such an environment, collaboration between humans and robots in a shared workspace is essential to effectively accomplish shared tasks. Strong communication among partners is essential for collaborative efficiency. This research investigates an approach to non-verbal communication cues. The system focuses on integrating human motion detection with vision sensors. This method addresses the bias human action detection in frames and enhances the accuracy of perception as information about human activities to the robot. By interpreting spatial and temporal data, the system detects human movements through sequences of human activity frames while working together. The training and validation results confirm that the approach achieves an accuracy of 91%. The sequential testing performance showed an average detection of 83%. This research not only emphasizes the importance of advanced communication in human–robot collaboration, but also effectively promotes future developments in collaborative robotics.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"12 - 20"},"PeriodicalIF":0.8,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}