Lifelogging as a Memory Prosthetic

A. Smeaton
Published in: Proceedings of the 4th Annual on Lifelog Search Challenge, 2021-08-21
DOI: 10.1145/3463948.3469271 (https://doi.org/10.1145/3463948.3469271)
Cited by: 1

Abstract

Since computers were first used to address the challenge of managing information rather than computing arithmetic values, or even before that, since the MEMEX was designed by Vannevar Bush in the 1940s, we have been building systems that help people like us to find information accurately and quickly. These systems have grown to be technological marvels, discovering and indexing information almost as soon as it appears online and making it available to billions of people for search and delivery within fractions of a second, across a range of devices. Yet it is well known that half the time people are actually searching for things that they once knew but have since forgotten, or cannot remember where they found that information the first time around, and need to re-find it. As our science of information seeking and information discovery has progressed, we rarely ask why people forgot those things in the first place. If we were allowed to jump back in time, say 50 years, and to re-start the development of information retrieval as a technology, then perhaps we would build systems that help us to remember and to learn, rather than trying to plug the gap and find information for us when we forget. In separate but parallel and sometimes overlapping developments, the analysis and indexing of visual information -- images and video -- has also made spectacular progress, mostly within the last decade. Using automated processes we can detect and track objects, describe visual content as tags or even as text captions, generate realistic, high-quality visual content using machine learning, and compute high-level abstract features of visual content such as salience, aesthetics, and even memorability. One of the areas where information management and retrieval, with its 50 years of technological progress, meets computer vision, with its recent decade of spectacular development, is lifelogging.
At this intersection we can apply computer vision techniques to analyse and index visual lifelogs generated from wearable cameras, for example, in order to support lifelog search and browsing tasks. But we should ask ourselves whether this really is the right way for us to use our lifelogs. Memory is one of the core features that make us what we are, yet it is fragile and only partly understood. We have no real control over what we remember and what we forget, and when we really do need to remember something that could be important, we make ham-fisted efforts to consciously override our natural tendency to forget. We do this, for example, by rehearsing and replaying information, building on the Ebbinghaus principle of repeated conscious reviewing to overcome transience, the general deterioration of memory over time. In this presentation I will probe deeper into memory, recall, recognition, memorability, and memory triggers, and how our lifelogs could really act as memory prosthetics: visual triggers for our own natural memory. This will allow us to ask whether the lifelog challenges that we build and run in events such as this annual Lifelog Search Challenge meeting are appropriately framed, and whether they are taking us in a direction where lifelogs are genuinely useful to a wide population rather than to a niche set of people. Finally, I will address the frightening scenario where everything for us is potentially remembered, and ask whether or not we want that to actually happen.
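The Ebbinghaus principle mentioned above can be made concrete with a small sketch. The exponential form R = e^(-t/S) is a common textbook model of the forgetting curve, and the idea that each conscious review strengthens memory can be modelled (as an assumption, not anything from this abstract) by multiplying the stability parameter S after every review:

```python
import math

def retention(t_hours: float, stability: float) -> float:
    """Ebbinghaus-style exponential forgetting curve: R = e^(-t/S).

    t_hours:   time elapsed since the memory was formed or last reviewed.
    stability: a hypothetical strength parameter S; larger S means
               slower forgetting.
    """
    return math.exp(-t_hours / stability)

def review_schedule(days: int, boost: float = 2.0, stability: float = 24.0):
    """Sketch of repeated conscious reviewing: each daily review
    multiplies the stability by `boost`, so the retention measured
    24 hours after each review rises day by day.
    """
    history = []
    for _ in range(days):
        r = retention(24.0, stability)  # retention 24h after the last review
        history.append(round(r, 3))
        stability *= boost              # reviewing strengthens the memory
    return history
```

With the default parameters, `review_schedule(3)` yields a strictly increasing sequence of retention values, illustrating how repetition counteracts transience; the `boost` and `stability` values are illustrative placeholders, not empirical constants.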