EAT – The ICMI 2018 Eating Analysis and Tracking Challenge

Simone Hantke, Maximilian Schmitt, Panagiotis Tzirakis, Björn Schuller
DOI: 10.1145/3242969.3243681
Published in: Proceedings of the 20th ACM International Conference on Multimodal Interaction
Publication date: 2018-10-02
Citations: 14

Abstract

The multimodal recognition of eating condition, i.e., whether a person is eating and, if so, which type of food, is a new research domain in the area of speech and video processing with many promising applications for future multimodal interfaces, such as adapting speech recognition or lip-reading systems to different eating conditions. We herein describe the ICMI 2018 Eating Analysis and Tracking (EAT) Challenge, which addresses, for the first time in a research competition under well-defined conditions, new classification tasks in the area of user data analysis, namely the audio-visual classification of user eating conditions. We define three Sub-Challenges based on classification tasks in which participants are encouraged to use the speech and/or video recordings of the audio-visual iHEARu-EAT database. In this paper, we describe the dataset, the Sub-Challenges and their conditions, as well as the baseline feature extraction and performance measures as provided to the participants.
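The abstract does not spell out the performance measure, but audio-visual classification challenges of this kind are commonly scored with unweighted average recall (UAR), which weights every class equally and therefore is not inflated by class imbalance. As a hypothetical illustration (the class labels below are invented, not taken from the paper), a minimal UAR computation can be sketched as:

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """Mean of the per-class recalls: each class contributes equally,
    regardless of how many samples it has."""
    correct = defaultdict(int)  # per-class count of correct predictions
    total = defaultdict(int)    # per-class count of reference samples
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

# Hypothetical imbalanced two-class case ("eating" vs. "not eating"):
y_true = ["eat", "eat", "eat", "no", "no", "no", "no", "no"]
y_pred = ["eat", "eat", "no",  "no", "no", "no", "no", "no"]
# Recall("eat") = 2/3, Recall("no") = 5/5, so UAR = (2/3 + 1) / 2
print(round(unweighted_average_recall(y_true, y_pred), 3))  # → 0.833
```

Plain accuracy on the same example would be 7/8 = 0.875; UAR is lower because the minority "eat" class, missed once out of three samples, counts as much as the well-recognised majority class.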