{"title":"The impact of intelligent robot service failures on customer responses --a perspective based on mind perception theory.","authors":"Mengting Gong, Aimei Li, Junwei Zhang","doi":"10.3389/frobt.2025.1581083","DOIUrl":"10.3389/frobt.2025.1581083","url":null,"abstract":"<p><p>As intelligent robots are widely applied in people's work and daily life, intelligent robot service failures have drawn more attention from academics and practitioners. Under the scenarios of intelligent robot service failures, most existing studies focus on service providers' remedies for the failures and customers' psychological responses to such failures. However, few have systematically explored the impacts of intelligent robot service failures on customers and their internal psychological mechanisms. This paper adopts the framework of mind perception theory to systematically categorize the types of intelligent robot service failures and explores their impact on customer responses from the dimensions of agency and experience. By constructing a theoretical framework to analyze the effects of intelligent robot services on customers, it provides valuable theoretical insights for scholars in the field of intelligent marketing and sheds light on the psychological mechanisms of customers under intelligent robot service failure scenarios.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1581083"},"PeriodicalIF":2.9,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12256243/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144638431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging learned monocular depth prediction for pose estimation and mapping on unmanned underwater vehicles.","authors":"Marco Job, David Botta, Victor Reijgwart, Luca Ebner, Andrej Studer, Roland Siegwart, Eleni Kelasidi","doi":"10.3389/frobt.2025.1609765","DOIUrl":"https://doi.org/10.3389/frobt.2025.1609765","url":null,"abstract":"<p><p>This paper presents a general framework that integrates visual and acoustic sensor data to enhance localization and mapping in complex, highly dynamic underwater environments, with a particular focus on fish farming. The pipeline enables net-relative pose estimation for Unmanned Underwater Vehicles (UUVs) and depth prediction within net pens solely from visual data by combining deep learning-based monocular depth prediction with sparse depth priors derived from a classical Fast Fourier Transform (FFT)-based method. We further introduce a method to estimate a UUV's global pose by fusing these net-relative estimates with acoustic measurements, and demonstrate how the predicted depth images can be integrated into the wavemap mapping framework to generate detailed 3D maps in real-time. Extensive evaluations on datasets collected in industrial-scale fish farms confirm that the presented framework can be used to accurately estimate a UUV's net-relative and global position in real-time, and provide 3D maps suitable for autonomous navigation and inspection.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1609765"},"PeriodicalIF":2.9,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12240768/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144609964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SuperTac - tactile data super-resolution via dimensionality reduction.","authors":"Neel Patel, Rwik Rana, Deepesh Kumar, Nitish V Thakor","doi":"10.3389/frobt.2025.1552922","DOIUrl":"10.3389/frobt.2025.1552922","url":null,"abstract":"<p><p>The advancement of tactile sensing in robotics and prosthetics is constrained by the trade-off between spatial and temporal resolution in artificial tactile sensors. To address this limitation, we propose SuperTac, a novel tactile super-resolution framework that enhances tactile perception beyond the sensor's inherent resolution. Unlike existing approaches, SuperTac combines dimensionality reduction and advanced upsampling to deliver high-resolution tactile information without compromising the performance. Drawing inspiration from the spatiotemporal processing of mechanoreceptors in human tactile systems, SuperTac bridges the gap between sensor limitations and practical applications. In this study, an in-house-built active robotic finger system equipped with a 4 × 4 tactile sensor array was used to palpate textured surfaces. The system, comprising a tactile sensor array mounted on a spring-loaded robotic finger connected to a 3D printer nozzle for precise spatial control, generated spatiotemporal tactile maps. These maps were processed by SuperTac, which integrates a Variational Autoencoder for dimensionality reduction and Residual-In-Residual Blocks (RIRB) for high-quality upsampling. The framework produces super-resolved tactile images (16 × 16), achieving a fourfold improvement in spatial resolution while maintaining computational efficiency for real-time use. Experimental results demonstrate that texture classification accuracy improves by 17% when using super-resolved tactile data compared to raw sensor data. This significant enhancement in classification accuracy highlights the potential of SuperTac for applications in robotic manipulation, object recognition, and haptic exploration. By enabling robots to perceive and interpret high-resolution tactile data, SuperTac marks a step toward bridging the gap between human and robotic tactile capabilities, advancing robotic perception in real-world scenarios.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1552922"},"PeriodicalIF":2.9,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12240741/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144609965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stability and trajectory tracking of four- wheel steering trackless auxiliary transport robot via PID control.","authors":"Mingrui Hao, Yueqi Bi, Jie Ren, Lisen Ma, Jiaran Li, Sihai Zhao, Miao Wu","doi":"10.3389/frobt.2025.1617376","DOIUrl":"https://doi.org/10.3389/frobt.2025.1617376","url":null,"abstract":"<p><p>In the complex working environment of underground coal mines, narrow road conditions and deviation in the driving path of autonomous trackless auxiliary transport robots can easily lead to collisions with walls or obstacles. This issue can be effectively solved by a four-wheel steering system, as it can reduce the turning radius of the robot at low speeds and improve its maneuverability at high speeds. Thus, a linear two-degree-of-freedom dynamics model of trackless auxiliary transport robot is established and the steady-state lateral critical speed of 16.6 km/h is obtained. Then a four wheel steering PID trajectory tracking strategy were constructed. Experiments on different steering modes at low and high speeds, which include stepped steering angles and circular path tracking, for the front-wheel steering mode and four-wheel steering mode of the robot are conducted under loaded conditions. The experimental results show that in the low-speed 10 km/h step steering angle input test, compared with the front-wheel steering mode, the turning radius of the robot is reduced by 32.2%, which ensures it easier to pass through narrow tunnels. Under the conditions of a 40 km/h high-speed step steering angle input test, the handling stability has been improved. The results of the circular trajectory tracking test show that at low speeds (10 km/h), the average radius error of the robot is 0.3%, while the radius error of the front-wheel steering robot reaches 2.12%. At high speeds (40 km/h), the average radius error is 2.4%, while the radius error of front-wheel steering mode is 8.74%. The robot maintains good track tracking ability, reducing the risk of collision with tunnel walls and improving robot operation safety.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1617376"},"PeriodicalIF":2.9,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12237676/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144601940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A systematic survey: role of deep learning-based image anomaly detection in industrial inspection contexts.","authors":"Vinita Shukla, Amit Shukla, Surya Prakash S K, Shraddha Shukla","doi":"10.3389/frobt.2025.1554196","DOIUrl":"10.3389/frobt.2025.1554196","url":null,"abstract":"<p><p>Industrial automation is rapidly evolving, encompassing tasks from initial assembly to final product quality inspection. Accurate anomaly detection is crucial for ensuring the reliability and robustness of automated systems. The intelligence of an industrial automation system is directly linked to its ability to detect and rectify abnormalities, thereby maintaining optimal performance. To advance intelligent manufacturing, sophisticated methods for high-quality process inspection are indispensable. This paper presents a systematic review of existing deep learning methodologies specifically designed for image anomaly detection in the context of industrial manufacturing. Through a comprehensive comparison, traditional techniques are evaluated against state-of-the-art advancements in deep learning-based anomaly detection methodologies, including supervised, unsupervised, and semi-supervised learning methods. Addressing inherent challenges such as real-time processing constraints and imbalanced datasets, this review offers a systematic analysis and mitigation strategies. Additionally, we explore popular anomaly detection datasets for surface defect detection and industrial anomaly detection, along with a critical examination of common evaluation metrics used in image anomaly detection. This review includes an analysis of the performance of current anomaly detection methods on various datasets, elucidating strengths and limitations across different scenarios. Moreover, we delve into the domain of drone-based, manipulator-based and AGV-based anomaly detections using deep learning techniques, highlighting the innovative applications of these methodologies. Lastly, the paper offers scholarly rigor and foresight by addressing emerging challenges and charting a course for future research opportunities, providing valuable insights to researchers in the field of deep learning-based surface defect detection and industrial image anomaly detection.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1554196"},"PeriodicalIF":2.9,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12230580/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144585276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Geometric line-of-sight guidance law with exponential switching sliding mode control for marine vehicles' path following.","authors":"Chengren Yuan, Changgeng Shuai, Zhanshuo Zhang, Buyun Li, Yuqiang Cheng, Jianguo Ma","doi":"10.3389/frobt.2025.1598982","DOIUrl":"10.3389/frobt.2025.1598982","url":null,"abstract":"<p><p>Marine vehicle guidance and control technology serves as the core support for advancing marine development and enabling scientific exploration. Its accuracy, autonomy, and environmental adaptability directly determine a vehicle's mission effectiveness in complex marine environments. This paper explores path following for marine vehicles in the horizontal plane. To tackle the limitation of a fixed look-ahead distance, we develop a novel geometric line-of-sight guidance law. It adapts to diverse compound paths by dynamically adjusting according to cross-track errors and local path curvature. Then, to enhance control performance, we present an improved exponential switching law for sliding mode control, enabling rapid convergence, disturbance rejection, and chatter reduction. Additionally, integral sliding mode control is integrated to stabilize yaw angular velocity, ensuring the system's global asymptotic stability. Through a series of numerical simulations, the effectiveness, robustness, and adaptability of our proposed methods are verified.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1598982"},"PeriodicalIF":2.9,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12229861/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144585277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Translating human information into robot tasks: action sequence recognition and robot control based on human motions.","authors":"Taichi Obinata, Kazutomo Baba, Akira Uehara, Hiroaki Kawamoto, Yoshiyuki Sankai","doi":"10.3389/frobt.2025.1462833","DOIUrl":"10.3389/frobt.2025.1462833","url":null,"abstract":"<p><p>Long-term use and highly reliable batteries are essential for wearable cyborgs including Hybrid Assistive Limb and wearable vital sensing devices. Consequently, there is ongoing research and development aimed at creating safer next-generation batteries. Researchers, leveraging advanced specialized knowledge and skills, bring products to completion through trial-and-error processes that involve modifying materials, shapes, work protocols, and procedures. When robots can undertake the tedious, repetitive, and attention-demanding tasks currently performed by researchers within facility environments, it will reduce the workload on researchers and ensure reproducibility. In this study, aiming to reduce the workload on researchers and ensure reproducibility in trial-and-error tasks, we proposed and developed a system that collects human motion data, recognizes action sequences, and transfers both physical information (including skeletal coordinates) and task information to a robot. This enables the robot to perform sequential tasks that are traditionally performed by humans. The proposed system employs a non-contact method to acquire three-dimensional skeletal information over time, allowing for quantitative analysis without interfering with sequential tasks. In addition, we developed an action sequence recognition model based on skeletal information and object detection results, which operated independent of background information. This model can adapt to changes in work processes and environments. By translating the human information including the physical and semantic information of a sequential task performed by humans into a robot, the robot can perform the same task. An experiment was conducted to verify this capability using the proposed system. The proposed action sequence recognition method demonstrated high accuracy in recognizing human-performed tasks with an average Edit score of 95.39 and an average F1@10 score of 0.951. In two out of the four trials, the robot adapted to changes in work processes without misrecognizing action sequences and seamlessly executed the sequential task performed by the human. In conclusion, we confirmed the feasibility of using the proposed system.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1462833"},"PeriodicalIF":2.9,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12229835/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144585206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulation theory of mind for heterogeneous human-robot teams.","authors":"Monica Nicolescu, Janelle Blankenburg, Bashira Akter Anima, Mariya Zagainova, Pourya Hoseini, Mircea Nicolescu, David Feil-Seifer","doi":"10.3389/frobt.2025.1533054","DOIUrl":"https://doi.org/10.3389/frobt.2025.1533054","url":null,"abstract":"<p><p>This paper focuses on the problem of collaborative task execution by teams comprising of people and multiple heterogeneous robots. In particular, the problem is motivated by the need for the team members to dynamically coordinate their execution, in order to avoid overlapping actions (i.e. multiple team members working on the same part of the task) and to ensure a correct execution of the task. This paper expands on our own prior work on collaborative task execution by single human-robot and single robot-robot teams, by taking an approach inspired by simulation Theory of Mind (ToM) to develop a real-time distributed architecture that enables collaborative execution of tasks with hierarchical representations and multiple types of execution constraints by teams of people and multiple robots with variable heterogeneity. First, the architecture presents a novel approach for concurrent coordination of task execution with both human and robot teammates. Second, a novel pipeline is developed in order to handle automatic grasping of objects with unknown initial locations. Furthermore, the architecture relies on a novel continuous-valued metric which accounts for a robot's capability to perform tasks during the dynamic, on-line task allocation process. To assess the proposed approach, the architecture is validated with: 1) a heterogeneous team of two humanoid robots and 2) a heterogeneous team of one human and two humanoid robots, performing a household task in different environmental conditions. The results support the proposed approach, as different environmental conditions result in different and continuously changing values for the robots' task execution abilities. Thus, the proposed architecture enables adaptive, real-time collaborative task execution through dynamic task allocation by a heterogeneous human-robot team, for tasks with hierarchical representations and multiple types of constraints.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1533054"},"PeriodicalIF":2.9,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12209716/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144545476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards applied swarm robotics: current limitations and enablers.","authors":"Miquel Kegeleirs, Mauro Birattari","doi":"10.3389/frobt.2025.1607978","DOIUrl":"10.3389/frobt.2025.1607978","url":null,"abstract":"<p><p>Swarm robotics addresses the design, deployment, and analysis of large groups of robots that collaborate to perform tasks in a decentralized manner. Research in this field has predominantly relied on simulations or small-scale robots with limited sensing, actuation, and computational capabilities. Consequently, despite significant advancements, swarm robotics has yet to see widespread commercial or industrial application. A major barrier to practical deployment is the lack of affordable, modern, and robust platforms suitable for real-world scenarios. Moreover, a narrow definition of what swarm robotics should be has restricted the scope of potential applications. In this paper, we argue that the development of more advanced robotic platforms-incorporating state-of-the-art technologies such as SLAM, computer vision, and reliable communication systems-and the adoption of a broader interpretation of swarm robotics could significantly expand its range of applicability. This would enable robot swarms to tackle a wider variety of real-world tasks and integrate more effectively with existing systems, ultimately paving the way for successful deployment.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1607978"},"PeriodicalIF":2.9,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12202227/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144530417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Metric scale non-fixed obstacles distance estimation using a 3D map and a monocular camera.","authors":"Daijiro Higashi, Naoki Fukuta, Tsuyoshi Tasaki","doi":"10.3389/frobt.2025.1560342","DOIUrl":"10.3389/frobt.2025.1560342","url":null,"abstract":"<p><p>Obstacle avoidance is important for autonomous driving. Metric scale obstacle detection using a monocular camera for obstacle avoidance has been studied. In this study, metric scale obstacle detection means detecting obstacles and measuring the distance to them with a metric scale. We have already developed PMOD-Net, which realizes metric scale obstacle detection by using a monocular camera and a 3D map for autonomous driving. However, PMOD-Net's distance error of non-fixed obstacles that do not exist on the 3D map is large. Accordingly, this study deals with the problem of improving distance estimation of non-fixed obstacles for obstacle avoidance. To solve the problem, we focused on the fact that PMOD-Net simultaneously performed object detection and distance estimation. We have developed a new loss function called \"DifSeg.\" DifSeg is calculated from the distance estimation results on the non-fixed obstacle region, which is defined based on the object detection results. Therefore, DifSeg makes PMOD-Net focus on non-fixed obstacles during training. We evaluated the effect of DifSeg by using CARLA simulator, KITTI, and an original indoor dataset. The evaluation results showed that the distance estimation accuracy was improved on all datasets. Especially in the case of KITTI, the distance estimation error of our method was 2.42 m, which was 2.14 m less than that of the latest monocular depth estimation method.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1560342"},"PeriodicalIF":2.9,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12198967/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144508851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}