Navigational outcomes with a depth-based vision processing algorithm in a second-generation suprachoroidal retinal prosthesis

Lauren Moussallem, Lisa Lombardi, Myra Beth McGuinness, Maria Kolic, Elizabeth K Baglin, Rui Jin, Nariman Habili, Jessica Kvansakul, Samuel A Titchener, Carla J Abbott, Janine G Walker, Penelope J Allen, Matthew A Petoe, Nick Barnes

Journal of Neural Engineering, published 2025-04-02. DOI: 10.1088/1741-2552/adc83a
Abstract
Objective
To evaluate the effectiveness of a novel depth-based vision processing (VP) method, Local Background Enclosure (LBE), in comparison to the comprehensive VP method, Lanczos2 (L2), in suprachoroidal retinal prosthesis implant recipients during navigational tasks in laboratory and real-world settings.
Approach
Four participants were acclimatized to both VP methods. Participants were asked to detect and navigate past five of eight possible obstacles in a white corridor across 20-30 trials. Randomized obstacles included black or white mannequins, black or white overhanging boxes, black or white bins, and black or white stationary boxes. The same four participants underwent trials at three different real-world urban locations using both VP methods (randomized order). They were tasked with navigating a complex, dynamic, pre-determined scene while detecting, verbally identifying, and avoiding obstacles in their path.
Main results
The indoor obstacle course showed that the LBE method (63.6 ± 10.7%, mean ± SD) performed significantly better than L2 (48.5 ± 11.2%) for detection of obstacles (p<0.001, Mack-Skillings test). The real-world assessment showed that, of the objects detected, 50.2% (138/275) were correctly identified using LBE and 41.7% (138/331) using L2, corresponding to a risk difference of 8 percentage points (p=0.081).
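As a rough arithmetic check, the real-world identification rates and their difference can be recomputed directly from the counts reported above (a minimal sketch; the raw difference comes to roughly 8.5 percentage points, which the abstract reports rounded to 8):

```python
# Recompute the real-world identification proportions from the reported counts.
# Counts taken from the abstract: 138/275 for LBE, 138/331 for L2.

def proportion(correct: int, detected: int) -> float:
    """Proportion of detected objects that were correctly identified."""
    return correct / detected

lbe = proportion(138, 275)  # ~0.502 (50.2%)
l2 = proportion(138, 331)   # ~0.417 (41.7%)

# Risk difference between the two VP methods, in percentage points.
risk_difference_pp = (lbe - l2) * 100

print(f"LBE: {lbe:.1%}, L2: {l2:.1%}, difference: {risk_difference_pp:.1f} pp")
```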
Significance
Real-world navigational outcomes can be improved using an enhanced vision processing algorithm that provides depth-based visual cues (ClinicalTrials.gov NCT05158049).