{"title":"Investigating the effect of various types of audio reinforcement on memory retention","authors":"Parisa Supitayakul, Zeynep Yücel, Misato Nose, Akito Monden","doi":"10.1109/IIAIAAI55812.2022.00057","DOIUrl":null,"url":null,"abstract":"Most e-learning systems deliver solely visual information, even though they boast a huge potential for supporting the learners using various other capabilities (e.g. camera, speakers) of the hosting platform (i.e. computer, smart phone etc.). In this study, we focus deploying one such potential, namely audio stimuli (informative and non-informative), for supporting rote learning of different types of learning material (i.e. easy verbal, hard verbal and numerical). Our results indicate that audio stimuli do not provide a significant benefit for studying easy verbal content, but there is a big room for improvement concerning other content types (hard verbal and numerical). Interestingly, despite the general implications of dual-coding theory, human-readout of hard verbal contents is observed not to provide any significant improvement over visual-only stimuli. However, to our surprise, non-informative audio stimuli (i.e. bell sound) are observed to provide an improvement, whereas numerical content is observed to benefit in a similar way from informative and non-informative audio. 
Based on these results, in the future we aim developing an automatic learning support system, which triggers the appropriate audio stimuli, taking in consideration the type of content.","PeriodicalId":156230,"journal":{"name":"2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI)","volume":"253 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IIAIAAI55812.2022.00057","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Most e-learning systems deliver solely visual information, even though they hold considerable potential for supporting learners through the other capabilities (e.g. camera, speakers) of the hosting platform (e.g. a computer or smartphone). In this study, we focus on deploying one such capability, namely audio stimuli (informative and non-informative), to support rote learning of different types of learning material (easy verbal, hard verbal, and numerical). Our results indicate that audio stimuli do not provide a significant benefit for studying easy verbal content, but that there is considerable room for improvement for the other content types (hard verbal and numerical). Interestingly, despite the general implications of dual-coding theory, a human read-out of hard verbal content is not observed to provide any significant improvement over visual-only stimuli. To our surprise, however, a non-informative audio stimulus (a bell sound) is observed to provide an improvement, whereas numerical content is observed to benefit similarly from informative and non-informative audio. Based on these results, we aim to develop an automatic learning support system that triggers the appropriate audio stimulus, taking the type of content into consideration.
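The envisioned support system is only sketched in the abstract. As an illustration, the content-to-stimulus mapping implied by the reported findings might look like the following (a minimal sketch; all names and the defaulting choice for numerical content are assumptions, not part of the paper):

```python
from enum import Enum, auto

class ContentType(Enum):
    EASY_VERBAL = auto()
    HARD_VERBAL = auto()
    NUMERICAL = auto()

class AudioStimulus(Enum):
    NONE = auto()              # visual-only presentation
    NON_INFORMATIVE = auto()   # e.g. a bell sound
    INFORMATIVE = auto()       # e.g. a human read-out of the content

def select_stimulus(content: ContentType) -> AudioStimulus:
    """Map content type to an audio stimulus per the reported findings:
    - easy verbal: audio gave no significant benefit -> no stimulus
    - hard verbal: only the non-informative stimulus (bell) helped
    - numerical: informative and non-informative helped similarly,
      so we default (an assumption) to the simpler non-informative one.
    """
    if content is ContentType.EASY_VERBAL:
        return AudioStimulus.NONE
    # Both remaining content types benefited from non-informative audio.
    return AudioStimulus.NON_INFORMATIVE
```

An e-learning front end could call `select_stimulus` whenever a new item is shown and play (or skip) the corresponding sound; how content type is detected is left open by the abstract.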