{"title":"Lifelogging as a Memory Prosthetic","authors":"A. Smeaton","doi":"10.1145/3463948.3469271","DOIUrl":null,"url":null,"abstract":"Since computers were first used to address the challenge of managing information rather than performing calculations computing arithmetic values, or even before that since the time that MEMEX was designed by Vannevar Bush in the 1940s, we have been building systems that help people like us to find information accurately and quickly. These systems have grown to be technological marvels, discovering and indexing information almost as soon as it appears available online and making it available to billions of people for searching and delivery within fractions of a second, and across a range of devices. Yet it is well known that half the time people are actually searching for things that they once knew but have since forgotten, or can't remember where they found that information first time around, and need to re-find it. As our science of information seeking and information discovery has progressed, we rarely ask why people forgot those things in the first place. If we were allowed to jump back in time say 50 years, and to re-start the development of information retrieval as a technology then perhaps we would be build systems that help us to remember and to learn, rather than trying to plug the gap and find information for us when we forget. In separate but parallel and sometimes overlapping developments, the analysis and indexing of visual information -- images and video -- has also made spectacular progress mostly within the last decade. Using automated processes we can detect and track objects, we can describe visual content as tags or even as text captions, we can now generate realistic high quality visual content using machine learning and we can compute high-level abstract features of visual content like salience, aesthetics, and even memorability. One of the areas where information management/retrieval with its 50 years of technological progress meets computer vision with its recent decade of spectacular development is in lifelogging. At this intersection we can apply computer vision techniques to analyse and index visual lifelogs generated from wearable cameras, for example, in order to support lifelog search and browsing tasks. But we should ask ourselves whether this really is the right way for us to use our lifelogs. Memory is one of the core features that make us what we are yet it is fragile and only partly understood. We have no real control over what we remember and what we forget and when we really do need to remember something that could be important then we make ham-fisted efforts to consciously over-ride our natural tendency to forget. We do this, for example, rehearsing and replaying information, building on the Ebbinghaus principle of repeated conscious reviewing to overcome transience which is the general deterioration of memory over time. In this presentation I will probe deeper into memory, recall, recognition, memorability and memory triggers and how our lifelogs could really act as memory prosthetics, visual triggers for our own natural memory. This will allow us to ask whether the lifelog challenges that we build and run in events such as this Annual Lifelog Search Challenge meeting are appropriately framed and whether they are taking us in the direction where lifelogs are genuinely useful to a wide population rather than to a niche set of people. 
Finally I will address the frightening scenario where everything for us is potentially remembered and whether or not we want that to actually happen.","PeriodicalId":150532,"journal":{"name":"Proceedings of the 4th Annual on Lifelog Search Challenge","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 4th Annual on Lifelog Search Challenge","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3463948.3469271","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Since computers were first used to address the challenge of managing information rather than merely performing arithmetic calculations, or even earlier, since Vannevar Bush designed the MEMEX in the 1940s, we have been building systems that help people find information accurately and quickly. These systems have grown into technological marvels, discovering and indexing information almost as soon as it appears online and making it searchable by billions of people, with results delivered within fractions of a second across a range of devices. Yet it is well known that half the time people are actually searching for things they once knew but have since forgotten, or cannot remember where they first found the information and so need to re-find it. As our science of information seeking and information discovery has progressed, we have rarely asked why people forget those things in the first place. If we could jump back in time, say 50 years, and restart the development of information retrieval as a technology, then perhaps we would build systems that help us to remember and to learn, rather than systems that plug the gap and find information for us when we forget.

In separate but parallel, and sometimes overlapping, developments, the analysis and indexing of visual information -- images and video -- has also made spectacular progress, mostly within the last decade. Using automated processes we can detect and track objects, describe visual content with tags or even text captions, generate realistic, high-quality visual content using machine learning, and compute high-level abstract features of visual content such as salience, aesthetics, and even memorability. One area where information management and retrieval, with its 50 years of technological progress, meets computer vision, with its recent decade of spectacular development, is lifelogging. At this intersection we can apply computer vision techniques to analyse and index visual lifelogs generated from wearable cameras, for example, in order to support lifelog search and browsing tasks. But we should ask ourselves whether this really is the right way to use our lifelogs.

Memory is one of the core features that make us what we are, yet it is fragile and only partly understood. We have no real control over what we remember and what we forget, and when we really do need to remember something important, we make ham-fisted efforts to consciously override our natural tendency to forget. We do this, for example, by rehearsing and replaying information, building on the Ebbinghaus principle of repeated conscious review to overcome transience, the general deterioration of memory over time.

In this presentation I will probe deeper into memory, recall, recognition, memorability and memory triggers, and into how our lifelogs could really act as memory prosthetics: visual triggers for our own natural memory. This will allow us to ask whether the lifelog challenges we build and run in events such as this annual Lifelog Search Challenge are appropriately framed, and whether they are taking us in a direction where lifelogs are genuinely useful to a wide population rather than to a niche set of people. Finally, I will address the frightening scenario in which everything we experience is potentially remembered, and ask whether or not we want that to happen.
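To make the intersection of retrieval and computer vision concrete, here is a minimal sketch of how a visual lifelog might be indexed for search: each wearable-camera image is described by tags and stored in an inverted index. The `tag_image` function is a hypothetical placeholder for a real object detector or captioning model, and nothing here reflects the design of any particular LSC system.

```python
from collections import defaultdict

def tag_image(image_path: str) -> list[str]:
    """Placeholder for a computer-vision tagger (e.g. an object detector
    or captioning model); returns descriptive tags for one lifelog image."""
    # A real system would run a trained model here; these tags are canned.
    return ["office", "laptop", "coffee"]

def build_index(image_paths: list[str]) -> dict[str, set[str]]:
    """Build an inverted index mapping each tag to the images containing it."""
    index: dict[str, set[str]] = defaultdict(set)
    for path in image_paths:
        for tag in tag_image(path):
            index[tag].add(path)
    return index

def search(index: dict[str, set[str]], query_tags: list[str]) -> set[str]:
    """Return the images whose tag sets contain every query term."""
    hits = [index.get(tag, set()) for tag in query_tags]
    return set.intersection(*hits) if hits else set()

if __name__ == "__main__":
    idx = build_index(["day1/0800.jpg", "day1/1230.jpg"])
    print(search(idx, ["coffee", "laptop"]))  # both images match the canned tags
```

In a real lifelog retrieval system the tags would come from trained vision models, and the index would also carry timestamps, locations and other sensor metadata to support the browsing tasks the abstract mentions.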
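The transience described above is often modelled with the Ebbinghaus forgetting curve, R = exp(-t/S), where retention R decays over elapsed time t at a rate set by the stability S of the memory trace. The sketch below illustrates how repeated conscious review counteracts that decay; the exponential form is the standard textbook model, but the parameter values and the stability-doubling rule per review are assumptions for illustration only.

```python
import math

def retention(hours_elapsed: float, stability: float) -> float:
    """Exponential forgetting curve, R = exp(-t / S): retention R after
    t hours for a memory trace with stability S (in hours)."""
    return math.exp(-hours_elapsed / stability)

def retention_after_reviews(total_hours: float, stability: float,
                            review_times: list[float], boost: float = 2.0) -> float:
    """Retention at `total_hours` when the memory is consciously reviewed
    at each time in `review_times`; every review restarts the decay clock
    and multiplies stability by `boost` (the assumed spacing effect)."""
    last_review = 0.0
    for t in sorted(review_times):
        stability *= boost  # each rehearsal strengthens the trace
        last_review = t
    return retention(total_hours - last_review, stability)

if __name__ == "__main__":
    # Retention after one week (168 h) with an initial stability of 24 h:
    print(f"no reviews:        {retention(168, 24):.3f}")                               # ~0.001
    print(f"reviews day 1/3/5: {retention_after_reviews(168, 24, [24, 72, 120]):.3f}")  # ~0.779
```

This is the intuition behind spaced-repetition schemes: each well-timed review flattens the curve, so the lifelog-as-memory-prosthetic question becomes when and what to show the user as a memory trigger, not just what to index.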