{"title":"利用注意焦点从多模态输入获取语言论证结构","authors":"G. Satish, A. Mukerjee","doi":"10.1109/DEVLRN.2008.4640803","DOIUrl":null,"url":null,"abstract":"This work is premised on three assumptions: that the semantics of certain actions may be learned prior to language, that objects in attentive focus are likely to indicate the arguments participating in that action, and that knowing such arguments helps align linguistic attention on the relevant predicate (verb). Using a computational model of dynamic attention, we present an algorithm that clusters visual events into action classes in an unsupervised manner using the Merge Neural Gas algorithm. With few clusters, the model correlates to coarse concepts such as come-closer, but with a finer granularity, it reveals hierarchical substructure such as come-closer-one-object-static and come-closer-both-moving. That the argument ordering is non-commutative is discovered for actions such as chase or come-closer-one-object-static. Knowing the arguments, and given that noun-referent mappings that are easily learned, language learning can now be constrained by considering only linguistic expressions and actions that refer to the objects in perceptual focus. We learn action schemas for linguistic units like ldquomoving towardsrdquo or ldquochaserdquo, and validate our results by producing output commentaries for 3D video.","PeriodicalId":366099,"journal":{"name":"2008 7th IEEE International Conference on Development and Learning","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":"{\"title\":\"Acquiring linguistic argument structure from multimodal input using attentive focus\",\"authors\":\"G. Satish, A. Mukerjee\",\"doi\":\"10.1109/DEVLRN.2008.4640803\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work is premised on three assumptions: that the semantics of certain actions may be learned prior to language, that objects in attentive focus are likely to indicate the arguments participating in that action, and that knowing such arguments helps align linguistic attention on the relevant predicate (verb). Using a computational model of dynamic attention, we present an algorithm that clusters visual events into action classes in an unsupervised manner using the Merge Neural Gas algorithm. With few clusters, the model correlates to coarse concepts such as come-closer, but with a finer granularity, it reveals hierarchical substructure such as come-closer-one-object-static and come-closer-both-moving. That the argument ordering is non-commutative is discovered for actions such as chase or come-closer-one-object-static. Knowing the arguments, and given that noun-referent mappings that are easily learned, language learning can now be constrained by considering only linguistic expressions and actions that refer to the objects in perceptual focus. 
We learn action schemas for linguistic units like ldquomoving towardsrdquo or ldquochaserdquo, and validate our results by producing output commentaries for 3D video.\",\"PeriodicalId\":366099,\"journal\":{\"name\":\"2008 7th IEEE International Conference on Development and Learning\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2008-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"12\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2008 7th IEEE International Conference on Development and Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DEVLRN.2008.4640803\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 7th IEEE International Conference on Development and Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEVLRN.2008.4640803","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Acquiring linguistic argument structure from multimodal input using attentive focus
This work is premised on three assumptions: that the semantics of certain actions may be learned prior to language, that objects in attentive focus are likely to indicate the arguments participating in that action, and that knowing such arguments helps align linguistic attention on the relevant predicate (verb). Using a computational model of dynamic attention, we present an algorithm that clusters visual events into action classes in an unsupervised manner using the Merge Neural Gas algorithm. With few clusters, the model corresponds to coarse concepts such as come-closer, but at a finer granularity it reveals hierarchical substructure such as come-closer-one-object-static and come-closer-both-moving. For actions such as chase or come-closer-one-object-static, we discover that the argument ordering is non-commutative. Knowing the arguments, and given that noun-referent mappings are easily learned, language learning can now be constrained by considering only linguistic expressions and actions that refer to the objects in perceptual focus. We learn action schemas for linguistic units like “moving towards” or “chase”, and validate our results by producing output commentaries for 3D video.
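To make the clustering step concrete, here is a minimal sketch of Merge Neural Gas in the Strickert–Hammer merge-context formulation, applied to toy event sequences. The per-frame features (inter-object distance and relative speed), the hyperparameters, and the toy "come-closer" vs. "chase" events are illustrative assumptions for this sketch, not the paper's actual setup.

```python
import numpy as np

def merge_neural_gas(sequences, n_units=8, alpha=0.5, beta=0.75,
                     eps=0.05, lam=2.0, epochs=20, seed=0):
    """Minimal Merge Neural Gas sketch (after Strickert & Hammer, 2005).

    Each unit holds a weight vector (matching the current input) and a
    context vector (matching a recursively merged summary of the past),
    so sequences with different temporal dynamics win different units.
    """
    rng = np.random.default_rng(seed)
    dim = sequences[0].shape[1]
    W = rng.normal(scale=0.1, size=(n_units, dim))  # input weights
    C = np.zeros((n_units, dim))                    # context weights

    for _ in range(epochs):
        for seq in sequences:
            ctx = np.zeros(dim)  # global context descriptor, reset per event
            prev = None
            for x in seq:
                if prev is not None:
                    # merge the previous winner's weight and context
                    ctx = (1 - beta) * W[prev] + beta * C[prev]
                # blended distance over input match and context match
                d = (1 - alpha) * ((W - x) ** 2).sum(1) \
                    + alpha * ((C - ctx) ** 2).sum(1)
                ranks = np.argsort(np.argsort(d))   # 0 = best-matching unit
                h = np.exp(-ranks / lam)[:, None]   # neural-gas neighborhood
                W += eps * h * (x - W)
                C += eps * h * (ctx - C)
                prev = int(np.argmin(d))
    return W, C

def winners(seq, W, C, alpha=0.5, beta=0.75):
    """Label each frame of a sequence with its best-matching unit."""
    ctx, prev, out = np.zeros(W.shape[1]), None, []
    for x in seq:
        if prev is not None:
            ctx = (1 - beta) * W[prev] + beta * C[prev]
        d = (1 - alpha) * ((W - x) ** 2).sum(1) \
            + alpha * ((C - ctx) ** 2).sum(1)
        prev = int(np.argmin(d))
        out.append(prev)
    return out

# Toy events: "come-closer" (distance shrinks) vs. "chase" (distance
# roughly constant while both objects move). Features per frame:
# [inter-object distance, relative speed] -- hypothetical choices.
t = np.linspace(0, 1, 30)
come_closer = np.stack([1 - t, -np.ones_like(t)], axis=1)
chase = np.stack([0.5 * np.ones_like(t), np.zeros_like(t)], axis=1)
W, C = merge_neural_gas([come_closer, chase], n_units=4)
print(winners(come_closer, W, C)[-5:], winners(chase, W, C)[-5:])
```

With enough units the two toy events should settle on disjoint winners; increasing n_units is what would let a coarse cluster such as come-closer split into finer subclasses like come-closer-one-object-static, in the spirit of the hierarchical substructure the abstract describes.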