Coaching through smart objects
Chris Baber, A. Khattab, J. Hermsdörfer, A. Wing, M. Russell
Proceedings of the 11th EAI International Conference on Pervasive Computing Technologies for Healthcare, 2017. DOI: 10.1145/3154862.3154938
Citations: 1
Abstract
We explore the ways in which smart objects can be used to cue actions as part of coaching for Activities of Daily Living (ADL) following brain damage or injury, such as might arise after a stroke. In this approach, appropriate actions are cued for a given context. The context is defined by the intention of the user, the state of the objects, and the tasks for which these objects can be used. This requires the objects to be instrumented so that they can recognize the actions that users perform. To provide appropriate cues, the objects also need to be able to display information to users, e.g., by changing their physical appearance or by providing auditory output. We discuss the ways in which information can be displayed to cue user action.
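To make the idea concrete, the sketch below shows one possible way a smart object could combine a user's intention, its own sensed state, and a recognized action into a context, and then select a cue to display. The class names, rule table, and cue strings (e.g., `SmartKettle`, `make_tea`) are illustrative assumptions for this example, not the authors' implementation.

```python
# Hypothetical sketch: a smart object that recognizes a user action, combines it
# with the user's intention and its own state (the "context"), and selects a cue.
# All names and rules here are assumptions made for illustration.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Context:
    intention: str                      # task the user intends, e.g. "make_tea"
    object_state: str                   # sensed state, e.g. "empty", "filled", "boiled"
    last_action: Optional[str] = None   # action recognized by the object's sensors


class SmartKettle:
    """Illustrative smart object that cues the next step of making tea."""

    # Simple rule table: (intention, object state) -> (next action, cue display)
    RULES = {
        ("make_tea", "empty"):  ("fill_with_water", "light: ring around lid glows blue"),
        ("make_tea", "filled"): ("switch_on",       "light: power button blinks green"),
        ("make_tea", "boiled"): ("pour_into_cup",   "audio: short chime + spout light"),
    }

    def recognise_action(self, sensor_reading: str) -> str:
        """Stand-in for on-object action recognition (e.g. from load or motion sensors)."""
        return sensor_reading  # assume a classifier has already labelled the action

    def cue(self, context: Context) -> str:
        """Return the cue to display for the current context, if one is defined."""
        rule = self.RULES.get((context.intention, context.object_state))
        if rule is None:
            return "no cue: context not covered by the coaching rules"
        next_action, display = rule
        return f"cue '{next_action}' via {display}"


if __name__ == "__main__":
    kettle = SmartKettle()
    ctx = Context(intention="make_tea", object_state="empty")
    ctx.last_action = kettle.recognise_action("picked_up")
    print(kettle.cue(ctx))  # -> cue 'fill_with_water' via light: ring around lid glows blue
```

The point of the sketch is the structure rather than the specific rules: recognition of user actions updates the context, and the cue is chosen from the combination of intention, object state, and task, with the display modality (visual change or auditory output) attached to each rule.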