Emergent Languages from Pretrained Embeddings Characterize Latent Concepts in Dynamic Imagery

James R. Kubricht, A. Santamaría-Pang, Chinmaya Devaraj, Aritra Chowdhury, P. Tu

Int. J. Semantic Comput., September 2020. DOI: 10.1142/s1793351x20400140
Recent unsupervised learning approaches have explored the feasibility of semantic analysis and interpretation of imagery using Emergent Language (EL) models. As EL models require some form of numerical embedding as input, it remains unclear which type of embedding is needed for the EL to properly capture key semantic concepts in a given domain. In this paper, we compare unsupervised and supervised approaches for generating embeddings across two experiments. In Experiment 1, data are produced using a single-agent simulator. In each episode, a goal-driven agent attempts to accomplish a number of tasks in a synthetic cityscape environment which includes houses, banks, theaters and restaurants. In Experiment 2, a comparatively smaller dataset is produced in which one or more objects demonstrate various types of physical motion in a 3D simulator environment. We investigate whether EL models generated from embeddings of raw pixel data produce expressions that capture key latent concepts (i.e. an agent’s motivations or physical motion types) in each environment. Our initial experiments show that the supervised learning approaches yield embeddings and EL descriptions that capture meaningful concepts from raw pixel inputs. In contrast, embeddings from an unsupervised learning approach result in greater ambiguity with respect to latent concepts.
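To make the comparison concrete, below is a minimal sketch of the two embedding routes the abstract describes: a supervised encoder taken from a label-pretrained network, and the encoder half of an unsupervised autoencoder trained only on raw pixels, each feeding a discrete-message "sender". This is an illustration, not the authors' pipeline: PyTorch and torchvision are assumed, and `frames`, `AutoEncoder`, and `Sender` are hypothetical stand-ins for the paper's simulator data and EL model.

```python
# Minimal sketch (not the authors' implementation) contrasting the two
# embedding routes compared in the paper. Assumes PyTorch + torchvision;
# `frames`, `AutoEncoder`, and `Sender` are illustrative stand-ins.
import torch
import torch.nn as nn
from torchvision import models

# Supervised route: features from a label-pretrained network
# (ImageNet ResNet-18 with its classification head removed).
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
supervised_encoder = nn.Sequential(*list(resnet.children())[:-1])

# Unsupervised route: the encoder half of a convolutional autoencoder;
# its pixel-reconstruction pretraining is omitted here for brevity.
class AutoEncoder(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.encoder(x)

# An EL "sender" maps an embedding to a discrete symbol: the raw
# material of an emergent language (one-symbol messages for brevity).
class Sender(nn.Module):
    def __init__(self, embed_dim=512, vocab_size=16):
        super().__init__()
        self.to_logits = nn.Linear(embed_dim, vocab_size)

    def forward(self, embedding):
        logits = self.to_logits(embedding.flatten(1))
        return torch.distributions.Categorical(logits=logits).sample()

frames = torch.randn(4, 3, 224, 224)        # stand-in for simulator frames
with torch.no_grad():
    sup = supervised_encoder(frames)        # shape (4, 512, 1, 1)
    unsup = AutoEncoder()(frames)           # shape (4, 512)
    print(Sender()(sup), Sender()(unsup))   # one discrete symbol per frame
```

In a full EL setup, the sender's messages would be consumed by a receiver and both trained end-to-end on a referential game; the abstract's finding is that the supervised route yields embeddings and descriptions that track the latent concepts more cleanly than the unsupervised one.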