Towards the development of an automated robotic storyteller: comparing approaches for emotional story annotation for non-verbal expression via body language

IF 2.2 · CAS Tier 3 (Computer Science) · JCR Q3 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Sophia C. Steinhaeusser, Albin Zehe, Peggy Schnetter, Andreas Hotho, Birgit Lugrin
{"title":"Towards the development of an automated robotic storyteller: comparing approaches for emotional story annotation for non-verbal expression via body language","authors":"Sophia C. Steinhaeusser, Albin Zehe, Peggy Schnetter, Andreas Hotho, Birgit Lugrin","doi":"10.1007/s12193-024-00429-w","DOIUrl":null,"url":null,"abstract":"<p>Storytelling is a long-established tradition and listening to stories is still a popular leisure activity. Caused by technization, storytelling media expands, e.g., to social robots acting as multi-modal storytellers, using different multimodal behaviours such as facial expressions or body postures. With the overarching goal to automate robotic storytelling, we have been annotating stories with emotion labels which the robot can use to automatically adapt its behavior. With it, three different approaches are compared in two studies in this paper: 1) manual labels by human annotators (MA), 2) software-based word-sensitive annotation using the Linguistic Inquiry and Word Count program (LIWC), and 3) a machine learning based approach (ML). In an online study showing videos of a storytelling robot, the annotations were validated, with LIWC and MA achieving the best, and ML the worst results. In a laboratory user study, the three versions of the story were compared regarding transportation and cognitive absorption, revealing no significant differences but a positive trend towards MA. On this empirical basis, the <i>Automated Robotic Storyteller</i> was implemented using manual annotations. Future iterations should include other robots and modalities, fewer emotion labels and their probabilities.</p>","PeriodicalId":17529,"journal":{"name":"Journal on Multimodal User Interfaces","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal on Multimodal User Interfaces","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s12193-024-00429-w","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Storytelling is a long-established tradition, and listening to stories is still a popular leisure activity. Driven by advancing technology, storytelling media are expanding, e.g., to social robots acting as multimodal storytellers that use different behaviours such as facial expressions or body postures. With the overarching goal of automating robotic storytelling, we have been annotating stories with emotion labels which the robot can use to automatically adapt its behaviour. To this end, three different approaches are compared in two studies in this paper: 1) manual labels by human annotators (MA), 2) software-based word-sensitive annotation using the Linguistic Inquiry and Word Count program (LIWC), and 3) a machine-learning-based approach (ML). In an online study showing videos of a storytelling robot, the annotations were validated, with LIWC and MA achieving the best, and ML the worst, results. In a laboratory user study, the three versions of the story were compared regarding transportation and cognitive absorption, revealing no significant differences but a positive trend towards MA. On this empirical basis, the Automated Robotic Storyteller was implemented using manual annotations. Future iterations should include other robots and modalities, as well as fewer emotion labels and their probabilities.
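To make the word-sensitive annotation idea concrete, the sketch below shows a minimal dictionary-lookup annotator in Python. It is not the authors' pipeline and does not use the actual LIWC dictionaries (which are proprietary); the mini-lexicon, emotion labels, and function names are invented purely to illustrate how per-sentence emotion labels could be derived and then handed to a robot's behaviour mapping.

```python
# Illustrative sketch only (assumption: a simple lexicon-count heuristic),
# not the LIWC software or the annotation scheme used in the paper.
from collections import Counter

# Hypothetical mini-lexicon mapping words to emotion labels.
EMOTION_LEXICON = {
    "happy": "joy", "smile": "joy", "laughed": "joy",
    "afraid": "fear", "dark": "fear", "trembled": "fear",
    "cried": "sadness", "alone": "sadness", "lost": "sadness",
}

def annotate_sentence(sentence: str, default: str = "neutral") -> str:
    """Return the most frequent lexicon emotion in a sentence, else a default."""
    tokens = [t.strip(".,!?;:").lower() for t in sentence.split()]
    counts = Counter(EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON)
    return counts.most_common(1)[0][0] if counts else default

def annotate_story(story: list[str]) -> list[tuple[str, str]]:
    """Pair each sentence with an emotion label a robot could map to gestures."""
    return [(s, annotate_sentence(s)) for s in story]

if __name__ == "__main__":
    story = [
        "The child smiled and laughed in the sunshine.",
        "At night the forest grew dark and she trembled.",
        "She walked on without a word.",
    ]
    for sentence, emotion in annotate_story(story):
        print(f"{emotion:>8}: {sentence}")
```

A manual-annotation (MA) or machine-learning (ML) pipeline would replace the lexicon lookup with human labels or a trained classifier, respectively, while keeping the same sentence-to-label interface for the robot's non-verbal behaviour.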


Source journal
Journal on Multimodal User Interfaces (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS)
CiteScore: 6.90
Self-citation rate: 3.40%
Articles published: 12
Review time: >12 weeks
Aims and scope: The Journal on Multimodal User Interfaces publishes work in the design, implementation and evaluation of multimodal interfaces. Research in the domain of multimodal interaction is by its very essence a multidisciplinary area involving several fields, including signal processing, human-machine interaction, computer science, cognitive science and ergonomics. The journal focuses on multimodal interfaces involving advanced modalities, several modalities and their fusion, user-centric design, usability and architectural considerations. Use cases and descriptions of specific application areas are welcome, including, for example, e-learning, assistance, serious games, affective and social computing, and interaction with avatars and robots.