eat2pic: An Eating-Painting Interactive System to Nudge Users into Making Healthier Diet Choices
Yugo Nakamura, Rei Nakaoka, Yuki Matsuda, K. Yasumoto
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 24:1-24:23, 2023. DOI: 10.1145/3580784
Abstract
Fig. 1. By transforming eating into a task of progressively coloring a landscape projected onto a screen, the eat2pic system encourages users to eat more slowly and maintain a healthy, balanced diet. The system consists of a calm sensing component based on a sensor-equipped chopstick (A) and visual feedback components using two types of digital canvases (C, E). The colors of the foods the user eats are painted onto parts of a landscape shown on the two canvases, which depict a single meal and a week of meals as automatically generated digital paintings. The one-meal eat2pic (B, C) guides the user's behavior within a single meal through real-time feedback, whereas the one-week eat2pic (D, E) guides food choices and eating behaviors through longer-term feedback accumulated over a full week.
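To make this feedback loop concrete, the sketch below shows how per-bite sensing events from the chopstick could drive both canvases: each detected food contributes its color to the next uncolored segment of the landscape. This is a minimal illustration under assumed names (FOOD_COLORS, LandscapeCanvas, on_bite_detected) and an assumed food-to-color mapping; it is not the authors' implementation.

```python
# Hypothetical sketch of eat2pic-style feedback; names and the
# food-to-color mapping are illustrative assumptions, not the paper's code.
from dataclasses import dataclass, field

# Assumed mapping from a recognized food category to a display color.
FOOD_COLORS = {
    "tomato": "#c0392b",
    "spinach": "#1e8449",
    "rice": "#f7f1e3",
    "salmon": "#e58e73",
}

@dataclass
class LandscapeCanvas:
    """A digital canvas whose landscape segments are colored as foods are eaten."""
    n_segments: int
    colored: list = field(default_factory=list)  # (segment_index, color) pairs

    def paint_next_segment(self, color: str) -> None:
        # Color the next uncolored part of the landscape with the food's color.
        if len(self.colored) < self.n_segments:
            self.colored.append((len(self.colored), color))

    def progress(self) -> float:
        # Fraction of the painting completed so far.
        return len(self.colored) / self.n_segments


def on_bite_detected(food: str, meal_canvas: LandscapeCanvas,
                     week_canvas: LandscapeCanvas) -> None:
    """Handle a bite of `food` reported by the sensor-equipped chopstick.

    The one-meal canvas gives immediate, per-bite feedback; the one-week
    canvas accumulates the same events across many meals.
    """
    color = FOOD_COLORS.get(food, "#aaaaaa")  # unknown foods rendered gray
    meal_canvas.paint_next_segment(color)
    week_canvas.paint_next_segment(color)


if __name__ == "__main__":
    meal = LandscapeCanvas(n_segments=20)    # one meal fills a small scene
    week = LandscapeCanvas(n_segments=300)   # a week of meals fills a larger one
    for bite in ["rice", "salmon", "spinach", "tomato"]:
        on_bite_detected(bite, meal, week)
    print(f"Meal painting {meal.progress():.0%} complete")
```

In this reading, the two canvases differ only in how many eating events they accumulate, which matches the caption's distinction between real-time, single-meal feedback and week-long feedback.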