Khaled Aboumerhi, R. Etienne-Cummings, Jonah P. Sengupta, J. Rattray
{"title":"手工制作和学习的时空过滤器来通知和跟踪视觉显著性","authors":"Khaled Aboumerhi, R. Etienne-Cummings, Jonah P. Sengupta, J. Rattray","doi":"10.1145/3461598.3461605","DOIUrl":null,"url":null,"abstract":"This paper describes an event-tracking algorithm based on an unsupervised learning method to follow salient features. By learning spatiotemporal filters using computationally inexpensive distance metrics such as determinant comparisons, we show that salient features are captured by the learned activation prototypes, known as spatiotemporal templates. First, we discuss previous hand-crafted filter methods to capture spike-based data. While spatial and temporal filters are easily crafted for obvious features, hand-crafted filters are not robust and exhaustive templates for detecting events that may not be so obvious. It becomes clear that learning filters is a more diverse, rectifying method in identifying important features while remaining independent from human observations. We then show how spatiotemporal filters are learned through a series of prototype clustering. In order to handle information over time, we propose a series of decision trees in the form of a random forest inspired by lifelong learning. Finally, we conclude promising results on feature tracking, as well as the need for a ground-truth spike-based data-set to validate saliency algorithms.","PeriodicalId":408426,"journal":{"name":"Proceedings of the 2021 5th International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence","volume":"636 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Hand-Crafted and Learned Spatiotemporal Filters to Inform and Track Visual Saliency\",\"authors\":\"Khaled Aboumerhi, R. Etienne-Cummings, Jonah P. Sengupta, J. Rattray\",\"doi\":\"10.1145/3461598.3461605\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper describes an event-tracking algorithm based on an unsupervised learning method to follow salient features. By learning spatiotemporal filters using computationally inexpensive distance metrics such as determinant comparisons, we show that salient features are captured by the learned activation prototypes, known as spatiotemporal templates. First, we discuss previous hand-crafted filter methods to capture spike-based data. While spatial and temporal filters are easily crafted for obvious features, hand-crafted filters are not robust and exhaustive templates for detecting events that may not be so obvious. It becomes clear that learning filters is a more diverse, rectifying method in identifying important features while remaining independent from human observations. We then show how spatiotemporal filters are learned through a series of prototype clustering. In order to handle information over time, we propose a series of decision trees in the form of a random forest inspired by lifelong learning. 
Finally, we conclude promising results on feature tracking, as well as the need for a ground-truth spike-based data-set to validate saliency algorithms.\",\"PeriodicalId\":408426,\"journal\":{\"name\":\"Proceedings of the 2021 5th International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence\",\"volume\":\"636 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-04-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2021 5th International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3461598.3461605\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 5th International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3461598.3461605","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Hand-Crafted and Learned Spatiotemporal Filters to Inform and Track Visual Saliency
This paper describes an event-tracking algorithm based on an unsupervised learning method for following salient features. By learning spatiotemporal filters with computationally inexpensive distance metrics such as determinant comparisons, we show that salient features are captured by the learned activation prototypes, known as spatiotemporal templates. First, we discuss previous hand-crafted filtering methods for capturing spike-based data. While spatial and temporal filters are easily crafted for obvious features, hand-crafted filters do not provide robust or exhaustive templates for detecting events that are less obvious. It becomes clear that learning filters is a more diverse and generalizable way of identifying important features while remaining independent of human observation. We then show how spatiotemporal filters are learned through a series of prototype-clustering steps. To handle information over time, we propose a series of decision trees in the form of a random forest, inspired by lifelong learning. Finally, we conclude with promising results on feature tracking, as well as the need for a ground-truth spike-based dataset to validate saliency algorithms.
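The abstract names the ingredients of the filter-learning step (prototype clustering over spatiotemporal event patches, with determinant comparisons as a cheap distance) but not the exact update rule. The following is a minimal sketch of one possible reading, assuming events have already been accumulated into small square spatiotemporal patches; the patch size, match threshold, learning rate, and the helpers det_distance and update_prototypes are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

# Hypothetical parameters, chosen only for illustration; the paper does not
# specify patch size, match threshold, or prototype learning rate.
PATCH_SIZE = 5
MATCH_THRESHOLD = 0.1
LEARNING_RATE = 0.05

def det_distance(patch, prototype):
    """Cheap distance between two square patches via determinant comparison.

    The abstract only says "determinant comparisons"; the absolute difference
    of determinants used here is one possible reading, not the paper's formula.
    """
    return abs(np.linalg.det(patch) - np.linalg.det(prototype))

def update_prototypes(patch, prototypes):
    """Assign a spatiotemporal patch to its nearest prototype, or spawn a new
    one, and nudge the winning prototype toward the patch (running average)."""
    if not prototypes:
        prototypes.append(patch.copy())
        return prototypes
    distances = [det_distance(patch, p) for p in prototypes]
    best = int(np.argmin(distances))
    if distances[best] < MATCH_THRESHOLD:
        # Matched an existing spatiotemporal template: refine it.
        prototypes[best] += LEARNING_RATE * (patch - prototypes[best])
    else:
        # No close match: treat the patch as a new spatiotemporal template.
        prototypes.append(patch.copy())
    return prototypes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prototypes = []
    # Stand-in for patches extracted around incoming events from a sensor.
    for _ in range(200):
        patch = rng.random((PATCH_SIZE, PATCH_SIZE))
        prototypes = update_prototypes(patch, prototypes)
    print(f"learned {len(prototypes)} spatiotemporal templates")
```

One reason a determinant-style comparison is attractive in this setting is that it collapses each patch to a single scalar, so once the determinant of an incoming patch is computed, comparing it against a bank of prototypes (whose determinants can be cached) costs only a scalar difference per prototype.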