Latest Articles from IEEE Robotics and Automation Letters

Robotic Assembly of Deformable Linear Objects via Curriculum Reinforcement Learning
IF 4.6 | CAS Q2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date: 2025-03-21 DOI: 10.1109/LRA.2025.3553676
Kai Wu;Rongkang Chen;Qi Chen;Weihua Li
{"title":"Robotic Assembly of Deformable Linear Objects via Curriculum Reinforcement Learning","authors":"Kai Wu;Rongkang Chen;Qi Chen;Weihua Li","doi":"10.1109/LRA.2025.3553676","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553676","url":null,"abstract":"The automated assembly of flexible objects presents significant challenges. Although significant progress has been made in the assembly of rigid objects, the methods used for rigid objects cannot be directly applied to flexible objects due to their infinite degrees of freedom. This study proposes a reinforcement learning (RL) based method for deformable cable insertion tasks executed with a universal 2-finger gripper. Firstly, a vision-based detection method is employed to monitor the cable's state in real time, while a state classifier is introduced to provide real-time reward feedback for RL training. Secondly, an adaptive curriculum learning (CL) method is proposed to adjust the initial degree of cable bending through the success rate in the training process, allowing the RL agent to learn progressively from easier to more difficult tasks. The validation experiments were conducted on a type-C cable insertion task, where the robot grips the cable portion of the electrical connector. The results indicate that our method is capable of adapting to various degrees of cable bending, successfully handling cable configurations bent up to a maximum of 40° from its straight, unbent state, with an assembly success rate of over 90%.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4770-4777"},"PeriodicalIF":4.6,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SOUS VIDE: Cooking Visual Drone Navigation Policies in a Gaussian Splatting Vacuum
IF 4.6 | CAS Q2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date: 2025-03-21 DOI: 10.1109/LRA.2025.3553785
JunEn Low;Maximilian Adang;Javier Yu;Keiko Nagami;Mac Schwager
{"title":"SOUS VIDE: Cooking Visual Drone Navigation Policies in a Gaussian Splatting Vacuum","authors":"JunEn Low;Maximilian Adang;Javier Yu;Keiko Nagami;Mac Schwager","doi":"10.1109/LRA.2025.3553785","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553785","url":null,"abstract":"We propose a new simulator, training approach, and policy architecture, collectively called SOUS VIDE, for end-to-end visual drone navigation. Our trained policies exhibit zero-shot sim-to-real transfer with robust real-world performance using only onboard perception and computation. Our simulator, called FiGS, couples a computationally simple drone dynamics model with a high visual fidelity Gaussian Splatting scene reconstruction. FiGS can quickly simulate drone flights producing photorealistic images at up to 130 fps. We use FiGS to collect 100 k–300 k image/state-action pairs from an expert MPC with privileged state and dynamics information, randomized over dynamics parameters and spatial disturbances. We then distill this expert MPC into an end-to-end visuomotor policy with a lightweight neural architecture, called SV-Net. SV-Net processes color image, optical flow and IMU data streams into low-level thrust and body rate commands at 20 Hz onboard a drone. Crucially, SV-Net includes a learned module for low-level control that adapts at runtime to variations in drone dynamics. In a campaign of 105 hardware experiments, we show SOUS VIDE policies to be robust to 30% mass variations, 40 m/s wind gusts, 60% changes in ambient brightness, shifting or removing objects from the scene, and people moving aggressively through the drone's visual field.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"5122-5129"},"PeriodicalIF":4.6,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143856199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DEFORM: Adaptive Formation Reconfiguration of Multi-Robot Systems in Confined Environments
IF 4.6 | CAS Q2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date: 2025-03-20 DOI: 10.1109/LRA.2025.3552998
Jin Li;Yang Xu;Xiufang Shi;Liang Li
{"title":"DEFORM: Adaptive Formation Reconfiguration of Multi-Robot Systems in Confined Environments","authors":"Jin Li;Yang Xu;Xiufang Shi;Liang Li","doi":"10.1109/LRA.2025.3552998","DOIUrl":"https://doi.org/10.1109/LRA.2025.3552998","url":null,"abstract":"Achieving desired formation patterns without collisions is rather challenging for multi-robot systems in unknown obstacle-rich and confined environments, especially in narrow corridor scenes containing large-volume obstacles. To address this, we propose an adaptive formation reconfiguration method that can dynamically switch to the optimal formation pattern based on the current obstacle distribution. Specifically, we develop a novel obstacle-free maximum passable width detection method to formulate recursive optimization problems, which can determine the currently best formation shape and refine local goals away from obstacles. Then, we design a task assignment module for the temporary leader robot and a consensus-based distributed formation controller for each robot using model predictive control to ensure rapid convergence to the suggested formation shape. In addition, we utilize the potential field approach for each robot to improve collision avoidance. Extensive Gazebo simulations and real-world experiments in confined and obstacle-rich scenes verify the efficient formation convergence of our methods compared to the previous methods.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4706-4713"},"PeriodicalIF":4.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SLC$^{2}$-SLAM: Semantic-Guided Loop Closure Using Shared Latent Code for NeRF SLAM
IF 4.6 | CAS Q2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date: 2025-03-20 DOI: 10.1109/LRA.2025.3553352
Yuhang Ming;Di Ma;Weichen Dai;Han Yang;Rui Fan;Guofeng Zhang;Wanzeng Kong
{"title":"SLC$^{2}$-SLAM: Semantic-Guided Loop Closure Using Shared Latent Code for NeRF SLAM","authors":"Yuhang Ming;Di Ma;Weichen Dai;Han Yang;Rui Fan;Guofeng Zhang;Wanzeng Kong","doi":"10.1109/LRA.2025.3553352","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553352","url":null,"abstract":"Targeting the notorious cumulative drift errors in NeRF SLAM, we propose a Semantic-guided Loop Closure using Shared Latent Code, dubbed SLC<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>-SLAM. We argue that latent codes stored in many NeRF SLAM systems are not fully exploited, as they are only used for better reconstruction. In this letter, we propose a simple yet effective way to detect potential loops using the same latent codes as local features. To further improve the loop detection performance, we use the semantic information, which are also decoded from the same latent codes to guide the aggregation of local features. Finally, with the potential loops detected, we close them with a graph optimization followed by bundle adjustment to refine both the estimated poses and the reconstructed scene. To evaluate the performance of our SLC<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>-SLAM, we conduct extensive experiments on Replica and ScanNet datasets. Our proposed semantic-guided loop closure significantly outperforms the pre-trained NetVLAD and ORB combined with Bag-of-Words, which are used in all the other NeRF SLAM with loop closure. As a result, our SLC<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>-SLAM also demonstrated better tracking and reconstruction performance, especially in larger scenes with more loops, like ScanNet.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4978-4985"},"PeriodicalIF":4.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143817889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning Variable Whole-Body Control for Agile Aerial Manipulation in Strong Winds
IF 4.6 | CAS Q2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date: 2025-03-20 DOI: 10.1109/LRA.2025.3553354
Ying Wu;Zida Zhou;Mingxin Wei;Lijie Xie;Renming Liu;Hui Cheng
{"title":"Learning Variable Whole-Body Control for Agile Aerial Manipulation in Strong Winds","authors":"Ying Wu;Zida Zhou;Mingxin Wei;Lijie Xie;Renming Liu;Hui Cheng","doi":"10.1109/LRA.2025.3553354","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553354","url":null,"abstract":"Aerial manipulation provides an effective alternative to human labor in high-risk outdoor situations. Complex and variable environments demand the system to respond quickly with minimal latency to external disturbances. To address this challenge, we propose a learning-based variable whole-body model predictive controller designed to improve the adaptability and agility of the system through robotic arm-assisted motion. Given the limited onboard computing power, this low-level whole-body model predictive controller enhances computational efficiency without sacrificing accuracy by linearizing the highly coupled dynamics model and updating the linearized parameters in real-time. By incorporating updates of the disturbance values predicted by the Gaussian process into the linear model, the whole-body controller can swiftly react to perturbations. Additionally, it can employ robotic arm motions to perform agile maneuvers and counter disturbances, rather than merely adjusting the quadrotor's rotational movements. To further enhance agility and robustness, we train a high-level policy search using episode-based policy search and gradient descent techniques. For specific tasks and scenarios, this policy search can train a deep neural network to identify optimal decision variables that account for various wind disturbances for the low-level controller. We have carried out disturbance rejection and flip experiments on the aerial manipulation system in the wind tunnel, which demonstrate that the controller can operate stably and effectively under strong disturbance.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4794-4801"},"PeriodicalIF":4.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Online System Identification Algorithm for Spherical Robot Using the Koopman Theory
IF 4.6 | CAS Q2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date: 2025-03-20 DOI: 10.1109/LRA.2025.3552997
Xiaoqing Guan;You Wang;Xiaomeng Kang;Wei Yao;Jin Zhang;Guang Li
{"title":"An Online System Identification Algorithm for Spherical Robot Using the Koopman Theory","authors":"Xiaoqing Guan;You Wang;Xiaomeng Kang;Wei Yao;Jin Zhang;Guang Li","doi":"10.1109/LRA.2025.3552997","DOIUrl":"https://doi.org/10.1109/LRA.2025.3552997","url":null,"abstract":"This letter proposes a novel linear online identification framework for the spherical robot to address the modeling difficulties posed by nonlinearity and time-varying characteristics. Firstly, the Koopman theory is applied to the spherical robot to build a linear model to approximate the nonlinearity. After selecting a set of observables based on spherical robots' dynamics, the model is identified from data. Then, a new online system identification algorithm based on the Kalman filter and noise estimation was presented, which updates the model in real time to track the time-varying system characteristics. Experiments demonstrate that the Koopman-based linear model meets the precision criteria for long-term forecasting of the nonlinear spherical robot. Through the Kalman filter online identification algorithm, the model parameters can achieve stable convergence, accurately tracking the system changes caused by elements like terrain, load, and actuators. The performance, robustness, and application potential of our algorithm significantly exceed those of traditional methods. After providing a foundation for the spherical robot's adaptive control, this study is also a useful reference for other robots' and nonlinear systems' online identification.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4644-4651"},"PeriodicalIF":4.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Optimizing a Convex Cover of Collision-Free Space for Trajectory Generation
IF 4.6 | CAS Q2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date: 2025-03-20 DOI: 10.1109/LRA.2025.3553416
Yuwei Wu;Igor Spasojevic;Pratik Chaudhari;Vijay Kumar
{"title":"Towards Optimizing a Convex Cover of Collision-Free Space for Trajectory Generation","authors":"Yuwei Wu;Igor Spasojevic;Pratik Chaudhari;Vijay Kumar","doi":"10.1109/LRA.2025.3553416","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553416","url":null,"abstract":"We propose an online iterative algorithm to optimize a convex cover to under-approximate the free space for autonomous navigation to delineate Safe Flight Corridors (SFC). The convex cover consists of a set of polytopes such that the union of the polytopes represents obstacle-free space, allowing us to find trajectories for robots that lie within the convex cover. In order to find the SFC that facilitates trajectory optimization, we iteratively find overlapping polytopes of maximum volumes that include specified waypoints initialized by a geometric or kinematic planner. Constraints at waypoints appear in two alternating stages of a joint optimization problem, which is solved by a novel heuristic-based iterative algorithm with partially distributed variables. We validate the effectiveness of our proposed algorithm using a range of parameterized environments and show its applications for two-stage motion planning.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4762-4769"},"PeriodicalIF":4.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Neuromorphic Attitude Estimation and Control
IF 4.6 | CAS Q2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date: 2025-03-20 DOI: 10.1109/LRA.2025.3553418
S. Stroobants;C. De Wagter;G. C. H. E. de Croon
{"title":"Neuromorphic Attitude Estimation and Control","authors":"S. Stroobants;C. De Wagter;G. C. H. E. de Croon","doi":"10.1109/LRA.2025.3553418","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553418","url":null,"abstract":"The real-world application of small drones is mostly hampered by energy limitations. Neuromorphic computing promises extremely energy-efficient AI for autonomous flight but is still challenging to train and deploy on real robots. To reap the maximal benefits from neuromorphic computing, it is necessary to perform all autonomy functions end-to-end on a single neuromorphic chip, from low-level attitude control to high-level navigation. This research presents the first neuromorphic control system using a spiking neural network (SNN) to effectively map a drone's raw sensory input directly to motor commands. We apply this method to low-level attitude estimation and control for a quadrotor, deploying the SNN on a tiny Crazyflie. We propose a modular SNN, separately training and then merging estimation and control sub-networks. The SNN is trained with imitation learning, using a flight dataset of sensory-motor pairs. Post-training, the network is deployed on the Crazyflie, issuing control commands from sensor inputs at 500 Hz. Furthermore, for the training procedure we augmented training data by flying a controller with additional excitation and time-shifting the target data to enhance the predictive capabilities of the SNN. On the real drone, the perception-to-control SNN tracks attitude commands with an average error of 3.0 degrees, compared to 2.7 degrees for the regular flight stack. We also show the benefits of the proposed learning modifications for reducing the average tracking error and reducing oscillations. Our work shows the feasibility of performing neuromorphic end-to-end control, laying the basis for highly energy-efficient and low-latency neuromorphic autopilots.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4858-4865"},"PeriodicalIF":4.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143792846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semantically Safe Robot Manipulation: From Semantic Scene Understanding to Motion Safeguards
IF 4.6 | CAS Q2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date: 2025-03-20 DOI: 10.1109/LRA.2025.3553046
Lukas Brunke;Yanni Zhang;Ralf Römer;Jack Naimer;Nikola Staykov;Siqi Zhou;Angela P. Schoellig
{"title":"Semantically Safe Robot Manipulation: From Semantic Scene Understanding to Motion Safeguards","authors":"Lukas Brunke;Yanni Zhang;Ralf Römer;Jack Naimer;Nikola Staykov;Siqi Zhou;Angela P. Schoellig","doi":"10.1109/LRA.2025.3553046","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553046","url":null,"abstract":"Ensuring safe interactions in human-centric environments requires robots to understand and adhere to constraints recognized by humans as “common sense” (e.g., “<italic>moving a cup of water above a laptop is unsafe as the water may spill</i>” or “<italic>rotating a cup of water is unsafe as it can lead to pouring its content</i>”). Recent advances in computer vision and machine learning have enabled robots to acquire a semantic understanding of and reason about their operating environments. While extensive literature on safe robot decision-making exists, semantic understanding is rarely integrated into these formulations. In this work, we propose a semantic safety filter framework to certify robot inputs with respect to semantically defined constraints (e.g., unsafe spatial relationships, behaviors, and poses) and geometrically defined constraints (e.g., environment-collision and self-collision constraints). In our proposed approach, given perception inputs, we build a semantic map of the 3D environment and leverage the contextual reasoning capabilities of large language models to infer semantically unsafe conditions. These semantically unsafe conditions are then mapped to safe actions through a control barrier certification formulation. We demonstrate the proposed semantic safety filter in teleoperated manipulation tasks and with learned diffusion policies applied in a real-world kitchen environment that further showcases its effectiveness in addressing practical semantic safety constraints. Together, these experiments highlight our approach's capability to integrate semantics into safety certification, enabling safe robot operation beyond traditional collision avoidance.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4810-4817"},"PeriodicalIF":4.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10933541","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AutoSelecter: Efficient Synthetic Nighttime Images Make Object Detector Stronger
IF 4.6 | CAS Q2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date: 2025-03-20 DOI: 10.1109/LRA.2025.3552996
Meng Chao;Wang Mengjie;Shi Wenxiu;Zhu Huiping;Zhang Song;Zhang Rui;Yang Ming
{"title":"AutoSelecter: Efficient Synthetic Nighttime Images Make Object Detector Stronger","authors":"Meng Chao;Wang Mengjie;Shi Wenxiu;Zhu Huiping;Zhang Song;Zhang Rui;Yang Ming","doi":"10.1109/LRA.2025.3552996","DOIUrl":"https://doi.org/10.1109/LRA.2025.3552996","url":null,"abstract":"Object detection has achieved significant advancements despite the challenges posed by adverse conditions like low-light nighttime environments, where annotated data is not only scarce but also challenging to accurately label. Instead of designing special network, we focus on the creation and efficient utilization of synthetic data to address the problem. We generate synthetic data by employing an enhanced generative model that adeptly transforms daytime images into low-light nighttime ones. Furthermore, we introduce a data selection scheme, named AutoSelecter, which can be flexibly integrated into the training process of object detector, ensuring the selection of the most effective synthetic data. By efficiently utilizing synthetic data, our strategy achieves an average improvement of 5.2% and 6.1% in AP<inline-formula><tex-math>$_{50}$</tex-math></inline-formula> on the nighttime datasets of BDD100k and Waymo, respectively, for the YOLOv7, YOLOv8, and RT-DETR object detectors. We have insightfully discovered numerous missed and mislabeled annotations in manually annotated low-light nighttime datasets, which can significantly interfere with the accuracy of evaluation results during nighttime. Consequently, we also provide a manually annotated and more accurate dataset BDD100kValNight+ for better evaluation. On this refined dataset, our strategy achieves an average improvement of 5.1% in AP<inline-formula><tex-math>$_{50}$</tex-math></inline-formula> on the three detectors.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4660-4665"},"PeriodicalIF":4.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0