Event-driven figure-ground organisation model for the humanoid robot iCub
Giulia D’Angelo, Simone Voto, Massimiliano Iacono, Arren Glover, Ernst Niebur, Chiara Bartolozzi
Nature Communications (2025). Published 2025-02-22. DOI: 10.1038/s41467-025-56904-9
Abstract
Figure-ground organisation is a perceptual grouping mechanism for detecting objects and boundaries, essential for an agent interacting with the environment. Current figure-ground segmentation methods rely on classical computer vision or deep learning, requiring extensive computational resources, especially during training. Inspired by the primate visual system, we developed a bio-inspired perception system for the neuromorphic robot iCub. The model uses a hierarchical, biologically plausible architecture and event-driven vision to distinguish foreground objects from the background. Unlike classical approaches, event-driven cameras reduce data redundancy and computation. The system has been qualitatively and quantitatively assessed in simulations and with event-driven cameras on iCub in various scenarios. It successfully segments items in diverse real-world settings, showing comparable results to its frame-based version on simple stimuli and the Berkeley Segmentation dataset. This model enhances hybrid systems, complementing conventional deep learning models by processing only relevant data in Regions of Interest (ROI), enabling low-latency autonomous robotic applications.
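The abstract describes restricting dense processing to Regions of Interest (ROI) derived from an event stream. The sketch below is a minimal illustration of that general idea, not the authors' model: it accumulates recent camera events into an activity map and crops a square ROI around the activity-weighted centroid. All names (Event, accumulate_events, extract_roi) and the sensor resolution are hypothetical choices for the example.

```python
# Minimal illustrative sketch (not the paper's implementation): accumulate
# event-camera events over a short time window and extract an ROI around
# the most active image region, so a downstream model only processes
# the relevant pixels.

from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds
    polarity: int   # +1 brightness increase, -1 decrease

def accumulate_events(events, height, width, window=0.03):
    """Build a per-pixel activity map from events inside the last `window` seconds."""
    activity = np.zeros((height, width), dtype=np.float32)
    if not events:
        return activity
    t_end = events[-1].t
    for ev in events:
        if t_end - ev.t <= window:
            activity[ev.y, ev.x] += 1.0
    return activity

def extract_roi(activity, half_size=32):
    """Return a square ROI (x0, y0, x1, y1) centred on the activity-weighted centroid."""
    total = activity.sum()
    if total == 0:
        return None
    ys, xs = np.indices(activity.shape)
    cy = int((ys * activity).sum() / total)
    cx = int((xs * activity).sum() / total)
    y0, x0 = max(cy - half_size, 0), max(cx - half_size, 0)
    y1 = min(cy + half_size, activity.shape[0])
    x1 = min(cx + half_size, activity.shape[1])
    return (x0, y0, x1, y1)

if __name__ == "__main__":
    # Synthetic burst of events clustered around one object-like region.
    rng = np.random.default_rng(0)
    events = [Event(int(rng.integers(100, 140)), int(rng.integers(80, 120)),
                    t=i * 1e-4, polarity=1) for i in range(500)]
    act = accumulate_events(events, height=240, width=304)
    print("ROI (x0, y0, x1, y1):", extract_roi(act))
```

In a hybrid pipeline of the kind the abstract mentions, such an ROI could be handed to a conventional deep learning model, keeping expensive dense computation limited to the foreground candidate rather than the full frame.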
About the journal
Nature Communications, an open-access journal, publishes high-quality research spanning all areas of the natural sciences. Papers featured in the journal showcase significant advances relevant to specialists in each respective field. With a 2-year impact factor of 16.6 (2022) and a median time of 8 days from submission to the first editorial decision, Nature Communications is committed to rapid dissemination of research findings. As a multidisciplinary journal, it welcomes contributions from biological, health, physical, chemical, Earth, social, mathematical, applied, and engineering sciences, aiming to highlight important breakthroughs within each domain.