A Dataset of Food Intake Activities Using Sensors with Heterogeneous Privacy Sensitivity Levels
Yi-Hung Wu, Hsin-Che Chiang, S. Shirmohammadi, Cheng-Hsin Hsu
Proceedings of the 14th Conference on ACM Multimedia Systems, June 7, 2023. DOI: 10.1145/3587819.3592553
Human activity recognition, which involves recognizing human activities from sensor data, has drawn considerable interest from researchers and practitioners with the advent of smart homes, smart cities, and smart systems. Existing studies on activity recognition mostly concentrate on coarse-grained activities such as walking and jumping, while fine-grained activities such as eating and drinking are understudied because they are harder to recognize. As a result, food intake activity recognition in particular remains under-investigated in the literature despite its importance for human health and well-being, including telehealth and diet management. To determine sensors' practical recognition accuracy, preferably with the least amount of privacy intrusion, a dataset of food intake activities captured by sensors with varying degrees of privacy sensitivity is required. In this study, we built such a dataset by recording fine-grained food intake activities using sensors of heterogeneous privacy sensitivity levels, namely a mmWave radar, an RGB camera, and a depth camera. Solutions for recognizing food intake activities can be developed using this dataset, which may provide a more comprehensive picture of the accuracy and privacy trade-offs involved with heterogeneous sensors.
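The abstract frames the dataset as a means to compare recognition accuracy against privacy intrusiveness across the three sensor modalities. As a rough, hedged illustration of that kind of comparison, the Python sketch below trains one classifier per modality and reports its accuracy; the loader, feature dimensions, and activity labels are placeholder assumptions for illustration only, not the authors' released data format or recognition method.

```python
# Hypothetical sketch: per-modality activity recognition to compare accuracy
# across the sensors named in the abstract (mmWave radar, depth camera, RGB
# camera). Dataset layout, feature sizes, and labels are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
ACTIVITIES = ["eating", "drinking", "chewing", "idle"]  # assumed label set

def load_modality(name: str, n_samples: int = 400, n_features: int = 64):
    """Placeholder loader returning random features and labels.

    In practice this would read per-frame features extracted from the
    corresponding sensor recordings (e.g., radar point-cloud statistics or
    RGB/depth pose keypoints)."""
    X = rng.normal(size=(n_samples, n_features))
    y = rng.integers(0, len(ACTIVITIES), size=n_samples)
    return X, y

# Train one classifier per modality, mirroring the accuracy-vs-privacy
# comparison the dataset is meant to enable.
for modality in ["mmwave_radar", "depth_camera", "rgb_camera"]:
    X, y = load_modality(modality)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(modality, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

With real features in place of the random placeholders, the per-modality accuracies would indicate how much recognition performance a less privacy-intrusive sensor (such as the mmWave radar) gives up relative to the cameras.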