{"title":"Distinguishing Intentional Actions from Accidental Actions","authors":"K. Harui, N. Oka, Y. Yamada","doi":"10.1109/DEVLRN.2005.1490972","DOIUrl":null,"url":null,"abstract":"Summary form only given. Although even human infants have the ability to recognize intention by Meltzoff (1995) and Tomasello (1997), its engineering realization has not been established yet. It is important to realize a man-machine interface which can adapt naturally to human by guessing whether the behavior of human is intentional or accidental. Various information, for example, voice, facial expression, and gesture can be used to distinguish whether a behavior is intentional or not, we however pay attention to the prosody and the timing of utterances in this study, because when one did an accidental movement, we think that he tends to utter words, e.g. `oops', in a characteristic fashion unintentionally. In this study, a video game was built in which one can play an agent with a ball and recorded the interaction between a subject and the agent. Then, a system was built using a decision tree by Quinlan (1996) that learns to distinguish intentional actions of subjects from accidental ones, and analyzed the precision of the trees. Continuous inputs for C4.5 algorithm, and discretized inputs at regular intervals for ID3 algorithm were used. The difference in inputs is the cause of the difference in the precision in table I","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEVLRN.2005.1490972","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Summary form only given. Although even human infants have the ability to recognize intention, as shown by Meltzoff (1995) and Tomasello (1997), an engineering realization of this ability has not yet been established. It is important to realize a man-machine interface that adapts naturally to humans by guessing whether a person's behavior is intentional or accidental. Various kinds of information, for example voice, facial expression, and gesture, can be used to distinguish whether a behavior is intentional; in this study, however, we focus on the prosody and timing of utterances, because when a person makes an accidental movement, we expect them to unintentionally utter words such as "oops" in a characteristic fashion. In this study, we built a video game in which a subject plays with an agent using a ball, and recorded the interaction between the subject and the agent. We then built a system that learns to distinguish subjects' intentional actions from accidental ones using decision trees (Quinlan, 1996), and analyzed the precision of the trees. Continuous inputs were used for the C4.5 algorithm, and inputs discretized at regular intervals were used for the ID3 algorithm. This difference in inputs accounts for the difference in precision shown in Table I.
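The abstract contrasts a decision tree trained on continuous inputs (C4.5) with one trained on inputs discretized at regular intervals (ID3). The sketch below illustrates that comparison only in outline: it uses scikit-learn's CART-based DecisionTreeClassifier as a stand-in for both C4.5 and ID3, and the feature names (pitch, intensity, utterance timing), the synthetic data, and the bin count are assumptions for illustration, not the authors' actual setup or results.

```python
# Minimal sketch, assuming hypothetical prosody/timing features:
# compare a tree on continuous inputs ("C4.5-like") with a tree on
# inputs discretized into equal-width bins ("ID3-like").
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical features per utterance: [pitch_mean_hz, intensity_db, utterance_delay_s]
X = rng.normal(loc=[180.0, 60.0, 0.8], scale=[40.0, 8.0, 0.4], size=(200, 3))
# Hypothetical labels: 1 = accidental action (an "oops"-like utterance), 0 = intentional.
# Toy labeling rule for the sketch: a short utterance delay counts as accidental.
y = (X[:, 2] < 0.6).astype(int)

# "C4.5-like" setting: train directly on the continuous inputs.
tree_continuous = DecisionTreeClassifier(criterion="entropy", random_state=0)
acc_continuous = cross_val_score(tree_continuous, X, y, cv=5).mean()

# "ID3-like" setting: discretize the inputs at regular intervals first.
discretizer = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform")
X_binned = discretizer.fit_transform(X)
tree_discrete = DecisionTreeClassifier(criterion="entropy", random_state=0)
acc_discrete = cross_val_score(tree_discrete, X_binned, y, cv=5).mean()

print(f"continuous-input accuracy : {acc_continuous:.3f}")
print(f"discretized-input accuracy: {acc_discrete:.3f}")
```

Running a comparison like this shows how the granularity of the inputs alone can shift a tree's accuracy, which is the effect the abstract attributes to the precision difference in its Table I.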