A hardware accelerated multilevel visual classifier for embedded visual-assist systems
M. Cotter, Siddharth Advani, J. Sampson, K. Irick, N. Vijaykrishnan
2014 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), published 2014-11-03
DOI: 10.1109/ICCAD.2014.7001338 (https://doi.org/10.1109/ICCAD.2014.7001338)
Citations: 8
Abstract
Embedded visual-assist systems are emerging as increasingly viable tools for aiding visually impaired persons in their day-to-day activities. Novel wearable devices with imaging capabilities will be uniquely positioned to assist the visually impaired in activities such as grocery shopping. However, supporting such time-sensitive applications on embedded platforms requires an intelligent trade-off between accuracy and computational efficiency. To maximize their utility in real-world scenarios, visual classifiers often need to recognize objects within large sets of object classes that are both diverse and deep. In a grocery market, simultaneously recognizing the appearance of people, shopping carts, and pasta is an example of a common diverse object-classification task. Moreover, a useful visual-aid system would need deep classification capability to distinguish among the many styles and brands of pasta in order to direct attention to a particular box. Exemplar Support Vector Machines (ESVMs) provide a means of achieving this specificity, but they are resource intensive because computation grows rapidly with the number of classes to be recognized. To maintain scalability without sacrificing accuracy, we examine the use of a biologically inspired classifier (HMAX) as a front-end filter that narrows the set of ESVMs to be evaluated. We show that a hierarchical classifier combining HMAX and ESVM performs better than either of the two individually. We achieve a 12% improvement in accuracy over HMAX and a 4% improvement over ESVM while reducing the computational overhead of evaluating all possible exemplars.
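The abstract describes a coarse-to-fine cascade: a cheap front-end classifier (HMAX) narrows the candidate classes so that only the exemplar SVMs belonging to those classes need to be scored. The sketch below illustrates that control flow only; it is not the paper's implementation, and all names (hierarchical_classify, coarse_classifier, exemplar_svms, top_k) are assumed placeholders for this illustration.

```python
import numpy as np


def hierarchical_classify(feature_vec, coarse_classifier, exemplar_svms, top_k=3):
    """Two-stage cascade sketch: coarse filter first, exemplar SVMs second.

    coarse_classifier: callable mapping a feature vector to an array of
        per-class scores (stand-in for the HMAX front end).
    exemplar_svms: dict mapping class index -> list of (weight_vec, bias)
        pairs, one per exemplar (stand-in for per-exemplar linear SVMs).
    top_k: how many coarse candidates are passed to the expensive stage.
    """
    # Stage 1: the front-end filter keeps only the top-k candidate classes,
    # so most exemplar SVMs are never evaluated.
    scores = coarse_classifier(feature_vec)
    candidates = np.argsort(scores)[::-1][:top_k]

    # Stage 2: score exemplar SVMs only for the shortlisted classes.
    best_cls, best_exemplar, best_margin = None, None, -np.inf
    for cls in candidates:
        for idx, (w, b) in enumerate(exemplar_svms[cls]):
            margin = float(np.dot(w, feature_vec) + b)
            if margin > best_margin:
                best_cls, best_exemplar, best_margin = cls, idx, margin
    return best_cls, best_exemplar, best_margin
```

With this structure, the cost of the exemplar stage scales with top_k rather than with the total number of exemplars, which is the scalability argument the abstract makes for combining HMAX with ESVMs.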