{"title":"Modeling task immersion based on goal activation mechanism","authors":"Kazuma Nagashima, Jumpei Nishikawa, Junya Morita","doi":"10.1007/s10015-024-00990-3","DOIUrl":"10.1007/s10015-024-00990-3","url":null,"abstract":"<div><p>Immersion in a task is a pre-requisite for creativity. However, excessive arousal in a single task has drawbacks, such as overlooking events outside of the task. To examine such a negative aspect, this study constructs a computational model of arousal dynamics where the excessively increased arousal makes the task transition difficult. The model was developed using functions integrated into the cognitive architecture Adaptive Control of Thought-Rational (ACT-R). Under the framework, arousal is treated as a coefficient affecting the overall activation level in the model. In our simulations, we set up two conditions demanding low and high arousal, trying to replicate corresponding human experiments. In each simulation condition, two sets of ACT-R parameters were assumed from different interpretations of the human experimental settings. The results showed consistency of behavior between humans and models both in the two different simulation settings. This result suggests the validity of our assumptions and has implications of controlling arousal in our daily life.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"72 - 87"},"PeriodicalIF":0.8,"publicationDate":"2024-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of crowd counting system based on improved CSRNet","authors":"Xiaochuan Tian, Hironori Hiraishi","doi":"10.1007/s10015-024-00993-0","DOIUrl":"10.1007/s10015-024-00993-0","url":null,"abstract":"<div><p>An advanced crowd counting algorithm based on CSRNet has been proposed in this study to improve the long training and convergence times. In this regard, three points were changed from the original CSRNet: (i) The first 12 layers in VGG19 were adopted in the front-end to enhance the capacity of the extracting features. (ii) The dilated convolutional network in the back-end was replaced with the standard convolutional network to speed up the processing time. (iii) Dense connection was applied in the back-end to reuse the output of the convolutional layer and achieve faster convergence. ShanghaiTech dataset was used to verify the improved CSRNet. In the case of high-density images, the accuracy was observed to be very close to the original CSRNet. Moreover, the average training time per sample was three times faster and average testing time per image was six times faster. In the case of low-density images, the accuracy was not close to that of the original CSRNet. However, the training time was 10 times faster and the testing time was six times faster. However, by dividing the image, the count number came close to the real count. The experimental results obtained from this study show that the improved CSRNet performs well. Although it is slightly less accurate than the original CSRNet, its processing time is much faster since it does not use dilated convolution. This indicates that it is more suitable for the actual needs of real-time detection. A system with improved CSRNet for counting people in real time has also been designed in this study.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"3 - 11"},"PeriodicalIF":0.8,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Muscle displacement and force related to walking by dynamics studies of musculoskeletal humanoid robot","authors":"Kentaro Yamazaki, Tatsumi Goto, Yugo Kokubun, Minami Kaneko, Fumio Uchikoba","doi":"10.1007/s10015-024-00986-z","DOIUrl":"10.1007/s10015-024-00986-z","url":null,"abstract":"<div><p>Conventional bipedal robots are mainly controlled by motors using central processing units (CPUs) and software, and they are being developed with control methods and mechanisms that are different from those used by humans. Humans generate basic movement patterns using a central pattern generator (CPG) localized in the spinal cord and create complex and efficient movements through muscle synergies that coordinate multiple muscles. For a robot to mimic the human musculoskeletal structure and reproduce walking movements, muscle parameters are required. In this paper, inverse dynamics analysis is used to determine the muscle displacements and forces required for walking in a musculoskeletal humanoid model, and forward dynamics analysis is used to investigate these values.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"88 - 97"},"PeriodicalIF":0.8,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of time-series point cloud data changes and automatic structure recognition system using Unreal Engine","authors":"Toru Kato, Hiroki Takahashi, Meguru Yamashita, Akio Doi, Takashi Imabuchi","doi":"10.1007/s10015-024-00983-2","DOIUrl":"10.1007/s10015-024-00983-2","url":null,"abstract":"<div><p>We have developed a point cloud processing system within the Unreal Engine to analyze changes in large time-series point cloud data collected by laser scanners and extract structured information. Currently, human interaction is required to create CAD data associated with the time-series point cloud data. The Unreal Engine, known for its 3D visualization capabilities, was chosen due to its suitability for data visualization and automation. Our system features a user interface that automates update procedures with a single button press, allowing for efficient evaluation of the interface’s effectiveness. The system effectively visualizes structural changes by extracting differences between pre- and post-change data, recognizing shape variations, and meshing the data. The difference extraction involves isolating only the added or deleted point clouds between the two datasets using the K-D tree method. Subsequent shape recognition utilizes pre-prepared training data associated with pipes and tanks, improving accuracy through classification into nine types and leveraging PointNet + + for deep learning recognition. Meshing of the shape-recognized point clouds, particularly those to be added, employs the ball pivoting algorithm (BPA), which was proven effective. Finally, the updated structural data are visualized by color-coding added and deleted data in red and blue, respectively, within the Unreal Engine. Despite increased processing time with a higher number of point cloud data, down sampling prior to difference extraction significantly reduces the automatic update time, enhancing overall efficiency.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"126 - 135"},"PeriodicalIF":0.8,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A 3D interactive scene construction method for interior design based on virtual reality","authors":"Yafei Fan, Lijuan Liang","doi":"10.1007/s10015-024-00985-0","DOIUrl":"10.1007/s10015-024-00985-0","url":null,"abstract":"<div><p>The demand for data information in indoor scenes has increased. However, the indoor scene model construction is relatively complex. Meanwhile, there are many measurement and positional deviations in the current scene. Therefore, virtual reality technology and deep learning algorithms are used to build indoor scenes. The deep neural network and multi-point perspective imaging algorithm are used to analyze the image pixels of the scene, reduce the noise in current scene image recognition, and achieve the three-dimensional model construction of indoor scenes. The research results indicated that the new method improved the accuracy of indoor 3D scenes by eliminating noise in 3D scene data and constructing image data. The accuracy of the new method for item recognition was above 93%. Simultaneously, it can complete the construction of 3D scenes. The accuracy value of the new method was 3.00% higher than that of the CNN algorithm and 4.00% higher than that of the SVO algorithm. The error value was stable within the range of 0.2–0.3. At the same time, the loss function value of the algorithm used in this study was relatively small. The algorithm performance is more stable. From this, the new method model can accurately construct scenes, which has certain research value for indoor 3D scene construction.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"173 - 183"},"PeriodicalIF":0.8,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A preliminary study to assess the brain waves during walking: artifact elimination using soft dynamic time warping","authors":"Teng Limin, Shuntaro Hatori, Shunsuke Fukushi, Xing Yi, Kota Chiba, Yoritaka Akimoto, Takashi Yamaguchi, Yuta Nishiyama, Shusaku Nomura, E. A. Chayani Dilrukshi","doi":"10.1007/s10015-024-00981-4","DOIUrl":"10.1007/s10015-024-00981-4","url":null,"abstract":"<div><p>Existing electroencephalography (EEG) studies predominantly involve participants in stationary positions, which presents challenges in accurately capturing EEG data during physical activities due to motion-induced noise and artifacts. This study aims to assess and validate the efficacy of the Soft Dynamic Time Warping (Soft-DTW) clustering method for analyzing EEG data collected during physical activity, focusing on an oddball auditory task performed while walking. Employing a mobile active bio-amplifier, the study captures brain activity and assesses auditory event-related potentials (ERPs) under dynamic conditions. The comparative performance of five clustering techniques, k-shape, kernels, k-means, Dynamic Time Warping, and Soft-DTW, in terms of their effectiveness in artifact reduction, was analyzed. Results indicated a significant difference between target and non-target auditory stimuli, with the target stimuli exhibiting a positive (positive) potential, although of smaller magnitude. This outcome suggests that, despite significant artifact interference from walking, Soft-DTW facilitates extracting differences in cognitive processes for the oddball task from the EEG data.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"136 - 142"},"PeriodicalIF":0.8,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of an artificial spinal cord circuit for a musculoskeletal humanoid robot mimicking the neural network involved in human gait control","authors":"Tatsumi Goto, Kentaro Yamazaki, Yugo Kokubun, Ontatsu Haku, Ginjiro Takashi, Minami Kaneko, Fumio Uchikoba","doi":"10.1007/s10015-024-00980-5","DOIUrl":"10.1007/s10015-024-00980-5","url":null,"abstract":"<div><p>Artificial neural networks, which mimic the neural networks of living organisms, are being applied as advanced information processing systems in various fields such as robotics. Conventional artificial neural networks use CPUs and software programs, but huge numerical computations are required to imitate a large-scale neural network. On the other hand, hardware artificial neural networks have been proposed. Hardware models neurons and synapses using analog electronic circuits, and thus can mimic the neural signals generated by neural networks without the need for numerical calculations. We have been developing a hardware artificial neural network mimicking the neural network in the human brainstem and spinal cord that is involved in gait control, and applying it to a musculoskeletal humanoid robot that mimics the human musculature and skeletal structure. In this paper, we propose an artificial spinal cord circuit for gait control of a musculoskeletal humanoid robot. Focusing on the movement of stepping over an obstacle, we confirmed through circuit simulations that the artificial spinal cord circuit can generate stepping-over patterns arbitrarily while walking and running.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"51 - 62"},"PeriodicalIF":0.8,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating how psychological senses and physical motions are affected by avatar shapes in a non-immersive environment","authors":"Yuki Kida, Tetsuro Ogi","doi":"10.1007/s10015-024-00979-y","DOIUrl":"10.1007/s10015-024-00979-y","url":null,"abstract":"<div><p>With the development of virtual reality technology, the use of avatars is attracting increasing attention. Recently, the effects of various avatars in immersive virtual reality environments on users' psychological senses and behavior, such as the sense of body ownership, sense of agency, the Proteus effect, etc., have been reported and actively studied. However, the effects of using various avatars in a non-immersive environment on users' psychological senses and behavior have not yet been fully examined. In this study, we examined how avatar shapes affect the user's psychological senses and physical motions in a non-immersive environment using a penguin avatar and a smoke avatar, with each avatar having a different shape and degrees of freedom and comparing them to a human avatar. Specifically, experiments in which whole-body physical motions were performed were conducted using these three avatars, subjective psychological senses were evaluated through questionnaires, and an objective evaluation was conducted through body-tracking data. The results suggested that the avatar shapes have an effect such that the user's body motion changes unconsciously in a non-immersive environment.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"165 - 172"},"PeriodicalIF":0.8,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experimental study of the flow structure around the oral arms of a jellyfish-inspired pump mechanism","authors":"Poon Manakijsirisuthi, Kazunori Hosotani, Ryoji Oya","doi":"10.1007/s10015-024-00978-z","DOIUrl":"10.1007/s10015-024-00978-z","url":null,"abstract":"<div><p>To mitigate microplastic and suspended solid debris problems, various underwater debris-collecting devices have been proposed; however, due to concerns regarding blockage in these devices’ suction pumps, simple-structured pumps with high robustness are more suitable for long-term operation. Thus, we previously proposed a debris-capturing pump mechanism inspired by the jellyfish of the <i>Rhizostomeae</i> order’s simple anatomy, focusing on the flow around the oral arms, which is expected to greatly affect debris-collecting performance. In the current study, the vertically integrated two-dimensional jellyfish-inspired pump’s bell material and the installment angle of the rectifier plates mimicking the oral arms were varied across four configurations, and the flow fields generated by the pump with their governing dominant flow structures were investigated using particle image velocimetry (PIV) and proper orthogonal decomposition (POD) to evaluate the effect of both variables on the flow structure. Experimental results suggest that both variables affect the flow structure and reverse flow rate significantly. By increasing the bell’s elastic modulus and installing the plates at a moderate angle, the reverse flow in the bell-opening motion can be suppressed.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"107 - 117"},"PeriodicalIF":0.8,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Persistent surveillance by heterogeneous multi-agents using mutual information based on observation capability","authors":"Shohei Kobayashi, Kazuho Kobayashi, Takehiro Higuchi","doi":"10.1007/s10015-024-00976-1","DOIUrl":"10.1007/s10015-024-00976-1","url":null,"abstract":"<div><p>Using many agents with different characteristics is more effective than using a homogeneous agent to observe a large environment persistently. This study focuses on the heterogeneity of agents’ observation capabilities, such as sensor resolution, by representing these differences through probabilistic observation. This representation allows agents to compute mutual information when selecting surveillance areas and move to where they can obtain the most information from their observations. In addition, we introduce confidence decay for three or more states, a strategy to encourage agents to revisit locations that have not been observed for an extended period of time. Confidence decay represents a gradual decrease in the estimates’ reliability since the state may have changed during the unobserved period. This strategy increases the mutual information of locations that have not been observed for a long time so that the agents will move toward them. Simulations in a changing environment show that the proposed method enables heterogeneous multi-agents to perform persistent surveillance according to their observation capabilities. It also outperforms the existing partition and sweep method in a quantitative comparison of observation accuracy.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"118 - 125"},"PeriodicalIF":0.8,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}