The Three A's of Wearable and Ubiquitous Computing: Activity, Affect, and Attention

Kristof Van Laerhoven
{"title":"The Three A's of Wearable and Ubiquitous Computing: Activity, Affect, and Attention","authors":"Kristof Van Laerhoven","doi":"10.3389/fcomp.2021.691622","DOIUrl":null,"url":null,"abstract":"A long lasting challenge in wearable and ubiquitous computing has been to bridge the interaction gap between the users and their manifold computers. How can we as humans easily perceive and interpret contextual information? Noticing whether someone is bored, stressed, busy, or fascinated in face-to-face interactions, is still largely unsolved for computers in everyday life. The first message of this article is that much of the research of the past decades aiming to alleviate this context gap between computers and their users, has clustered into three fields. The aim is to model human users in different observable categories (alphabetically ordered): Activity, Affect, and Attention. A second important point to make is that the research fields aiming for machine recognition of these three A’s, thus far have had only a limited amount of overlap, but are bound to converge in terms of methodology and from a systems perspective. A final point then concludes with the following call to action: A consequence of such a possible merger between the three A’s is the need for a more consolidated way of performing solid, reproducible research studies. These fields can learn from each other’s best practices, and their interaction can both lead to the creation of overarching benchmarks, as well as establish common data pipelines. The opportunities are plenty. As early as 1960, J. C. R. Licklider regarded the symbiosis between human and machine as a flourishing field of research to come: “A multidisciplinary study group, examining future research and development problems of the Air Force, estimated that it would be 1980 before developments in artificial intelligence make it possible for machines alone to do much thinking or problem solving of military significance. That would leave, say, 5 years to develop mancomputer symbiosis and 15 years to use it. The 15 may be 10 or 500, but those years should be intellectually the most creative and exciting in the history of mankind.” (Licklider, 1960). Advances in Machine Learning, Deep Learning and Sensors Research have shown in the past years that computers have mastered many problem domains. Computers have improved immensely in tasks such as spotting objects from camera footage, or inferring our vital signs from miniature sensors placed on our skins. Keeping track of what the system’s user is doing (Activity), how they are feeling (Affect), and what they are focusing on (Attention), has proven a much more difficult task. There is no sensor that directly can measure even one of these A’s, and there are thus far no models for them to facilitate their machine recognition. This makes the three A’s an ideal “holy grail” to aim for, likely for the upcoming decade. The automatic detection of a user’s Activity, Affect, and Attention is on one hand more specific than the similar research field of context awareness (Schmidt et al., 1999), yet challenging and well-defined enough to spur (and require) multi-disciplinary and high-quality research. As Figure 1 shows, the ultimate goal here is to achieve a more descriptive and accurate model of the computer’s user, as sensed through wearable or ubiquitous technology. Activity. 
The research field of wearable activity recognition has grown tremendously in the last 2 decades and can be categorized in three overlapping stages (Figure 2). The initial research studies focused predominantly on proving feasibility of using certain wearable sensors to automatically detect an activity, at first basic activities such as “walking” or “climbing stairs”, later moving to more Edited and reviewed by: Kaleem Siddiqi, McGill University, Montreal, Canada","PeriodicalId":305963,"journal":{"name":"Frontiers Comput. Sci.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers Comput. Sci.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fcomp.2021.691622","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

A long-standing challenge in wearable and ubiquitous computing has been to bridge the interaction gap between users and their manifold computers. How can we as humans easily perceive and interpret contextual information? Noticing whether someone is bored, stressed, busy, or fascinated in face-to-face interactions is still largely unsolved for computers in everyday life. The first message of this article is that much of the research of the past decades aiming to alleviate this context gap between computers and their users has clustered into three fields. The aim is to model human users in different observable categories (alphabetically ordered): Activity, Affect, and Attention. A second important point is that the research fields aiming for machine recognition of these three A’s have thus far had only a limited amount of overlap, but are bound to converge in terms of methodology and from a systems perspective. A final point then concludes with the following call to action: a consequence of such a possible merger between the three A’s is the need for a more consolidated way of performing solid, reproducible research studies. These fields can learn from each other’s best practices, and their interaction can both lead to the creation of overarching benchmarks and establish common data pipelines. The opportunities are plentiful. As early as 1960, J. C. R. Licklider regarded the symbiosis between human and machine as a flourishing field of research to come: “A multidisciplinary study group, examining future research and development problems of the Air Force, estimated that it would be 1980 before developments in artificial intelligence make it possible for machines alone to do much thinking or problem solving of military significance. That would leave, say, five years to develop man-computer symbiosis and 15 years to use it. The 15 may be 10 or 500, but those years should be intellectually the most creative and exciting in the history of mankind.” (Licklider, 1960). Advances in machine learning, deep learning, and sensor research have shown in the past years that computers have mastered many problem domains. Computers have improved immensely in tasks such as spotting objects in camera footage, or inferring our vital signs from miniature sensors placed on our skin. Keeping track of what the system’s user is doing (Activity), how they are feeling (Affect), and what they are focusing on (Attention) has proven a much more difficult task. There is no sensor that can directly measure even one of these A’s, and thus far there are no models of them that facilitate their machine recognition. This makes the three A’s an ideal “holy grail” to aim for, likely for the upcoming decade. The automatic detection of a user’s Activity, Affect, and Attention is on the one hand more specific than the related research field of context awareness (Schmidt et al., 1999), yet challenging and well-defined enough to spur (and require) multi-disciplinary, high-quality research. As Figure 1 shows, the ultimate goal here is to achieve a more descriptive and accurate model of the computer’s user, as sensed through wearable or ubiquitous technology.

Activity

The research field of wearable activity recognition has grown tremendously in the last two decades and can be categorized into three overlapping stages (Figure 2). The initial research studies focused predominantly on proving the feasibility of using certain wearable sensors to automatically detect an activity, at first basic activities such as “walking” or “climbing stairs”, later moving to more …
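To make that first stage concrete, the sketch below illustrates one minimal form such a wearable activity-recognition pipeline could take (not the article’s own method): tri-axial accelerometer signals are segmented into overlapping windows, summarized by simple statistical features, and classified with an off-the-shelf model. NumPy and scikit-learn are assumed as dependencies, and synthetic signals stand in for real sensor recordings.

```python
# Illustrative sketch of a basic wearable activity-recognition pipeline:
# sliding windows over tri-axial accelerometer data -> statistical features
# -> off-the-shelf classifier. Synthetic signals stand in for real recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

FS = 50          # assumed sampling rate in Hz
WINDOW = 2 * FS  # 2-second windows
STEP = FS        # 50% overlap between consecutive windows

def synthetic_accel(activity: str, seconds: int = 60) -> np.ndarray:
    """Generate a crude stand-in for a tri-axial accelerometer recording."""
    t = np.arange(seconds * FS) / FS
    rng = np.random.default_rng(0 if activity == "walking" else 1)
    if activity == "walking":          # ~2 Hz gait-like oscillation
        base = 1.0 * np.sin(2 * np.pi * 2.0 * t)
    else:                              # "climbing_stairs": slower, larger swings
        base = 1.8 * np.sin(2 * np.pi * 1.2 * t)
    noise = rng.normal(scale=0.2, size=(t.size, 3))
    return base[:, None] + noise       # shape: (samples, 3 axes)

def window_features(signal: np.ndarray) -> np.ndarray:
    """Slice into overlapping windows and compute mean/std/energy per axis."""
    feats = []
    for start in range(0, signal.shape[0] - WINDOW + 1, STEP):
        w = signal[start:start + WINDOW]
        feats.append(np.concatenate([w.mean(0), w.std(0), (w ** 2).mean(0)]))
    return np.array(feats)

# Build a small labelled dataset from the two synthetic activities.
X, y = [], []
for label, activity in enumerate(["walking", "climbing_stairs"]):
    f = window_features(synthetic_accel(activity))
    X.append(f)
    y.append(np.full(len(f), label))
X, y = np.vstack(X), np.concatenate(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)
clf = RandomForestClassifier(n_estimators=50, random_state=42).fit(X_train, y_train)
print("window-level accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Note that the random window-level split used here for brevity is known to be optimistic; studies in this field typically evaluate with subject-wise splits such as leave-one-subject-out, which ties back to the article’s call for consolidated, reproducible evaluation practices.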