{"title":"通过时间模拟视觉:运动图像视觉皮层的分层稀疏模型","authors":"A. Galbraith, S. Brumby, R. Chartrand","doi":"10.1109/AIPR.2012.6528200","DOIUrl":null,"url":null,"abstract":"Efficient pattern recognition in motion imagery has become a growing challenge as the number of video sources proliferates worldwide. Historically, automated analysis of motion imagery, such as object detection, classification and tracking, has been accomplished using hand-designed feature detectors. Though useful, these feature detectors are not easily extended to new data sets or new target categories since they are often task specific, and typically require substantial effort to design. Rather than hand-designing filters, recent advances in the field of image processing have resulted in a theoretical framework of sparse, hierarchical, learned representations that can describe video data of natural scenes at many spatial and temporal scales and many levels of object complexity. These sparse, hierarchical models learn the information content of imagery and video from the data itself and lead to state-of-the-art performance and more efficient processing. Processing efficiency is important as it allows scaling up of research to work with dataset sizes and numbers of categories approaching real-world conditions. We now describe recent work at Los Alamos National Laboratory developing hierarchical sparse learning computer vision models that can process high definition color video in real time. We present preliminary results extending our prior work on object classification in still imagery [1] to discovery of useful features at different time scales in motion imagery for detection, classification and tracking of objects.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Simulating vision through time: Hierarchical, sparse models of visual cortex for motion imagery\",\"authors\":\"A. Galbraith, S. Brumby, R. Chartrand\",\"doi\":\"10.1109/AIPR.2012.6528200\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Efficient pattern recognition in motion imagery has become a growing challenge as the number of video sources proliferates worldwide. Historically, automated analysis of motion imagery, such as object detection, classification and tracking, has been accomplished using hand-designed feature detectors. Though useful, these feature detectors are not easily extended to new data sets or new target categories since they are often task specific, and typically require substantial effort to design. Rather than hand-designing filters, recent advances in the field of image processing have resulted in a theoretical framework of sparse, hierarchical, learned representations that can describe video data of natural scenes at many spatial and temporal scales and many levels of object complexity. These sparse, hierarchical models learn the information content of imagery and video from the data itself and lead to state-of-the-art performance and more efficient processing. Processing efficiency is important as it allows scaling up of research to work with dataset sizes and numbers of categories approaching real-world conditions. 
We now describe recent work at Los Alamos National Laboratory developing hierarchical sparse learning computer vision models that can process high definition color video in real time. We present preliminary results extending our prior work on object classification in still imagery [1] to discovery of useful features at different time scales in motion imagery for detection, classification and tracking of objects.\",\"PeriodicalId\":406942,\"journal\":{\"name\":\"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-10-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIPR.2012.6528200\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIPR.2012.6528200","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Simulating vision through time: Hierarchical, sparse models of visual cortex for motion imagery
Efficient pattern recognition in motion imagery has become a growing challenge as the number of video sources proliferates worldwide. Historically, automated analysis of motion imagery, such as object detection, classification, and tracking, has been accomplished with hand-designed feature detectors. Though useful, these detectors do not extend easily to new data sets or new target categories, since they are often task-specific and typically require substantial design effort. Rather than relying on hand-designed filters, recent advances in image processing have produced a theoretical framework of sparse, hierarchical, learned representations that can describe video of natural scenes at many spatial and temporal scales and at many levels of object complexity. These sparse, hierarchical models learn the information content of imagery and video from the data itself, yielding state-of-the-art performance and more efficient processing. Processing efficiency is important because it allows research to scale to dataset sizes and numbers of categories approaching real-world conditions. We describe recent work at Los Alamos National Laboratory developing hierarchical sparse learning computer vision models that can process high-definition color video in real time. We present preliminary results extending our prior work on object classification in still imagery [1] to the discovery of useful features at different time scales in motion imagery, for detection, classification, and tracking of objects.
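The building block of such models is sparse coding: each image or video patch is represented by a small number of active elements from an overcomplete dictionary, and a hierarchy is formed by stacking such codes across spatial and temporal scales. As a rough illustration only, and not the LANL implementation described in the paper, the sketch below computes sparse codes for toy patches against a fixed random dictionary using ISTA (iterative soft-thresholding); the function name, dictionary, and parameters are illustrative assumptions.

# Minimal sketch (illustrative, not the authors' code): sparse coding of
# patches with a fixed random dictionary, solved by ISTA.
import numpy as np

def ista_sparse_code(X, D, lam=0.1, n_iter=200):
    """Approximately solve min_A 0.5*||X - D A||_F^2 + lam*||A||_1 via ISTA.

    X: (n_pixels, n_patches) data matrix
    D: (n_pixels, n_atoms) dictionary with unit-norm columns
    Returns A: (n_atoms, n_patches) sparse coefficient matrix.
    """
    # Step size from the Lipschitz constant of the smooth (quadratic) term.
    L = np.linalg.norm(D, 2) ** 2
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - X)                                 # gradient step
        Z = A - grad / L
        A = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)    # soft threshold
    return A

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))       # 8x8 patches, 4x-overcomplete dictionary
    D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
    X = rng.standard_normal((64, 10))        # ten toy "patches"
    A = ista_sparse_code(X, D, lam=0.2)
    print("nonzero coefficients per patch:", (np.abs(A) > 1e-8).sum(axis=0))

In practice the dictionary is itself learned from the data (for example by alternating sparse coding with dictionary updates), and convolutional, multi-layer variants of the same idea underlie the hierarchical models discussed in the paper.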