Generating a variety of expressions from visual information and user-designated viewpoints

Yasuhiro Noguchi, M. Kondo, Satoru Kogure, Tatsuhiro Konishi, Y. Itoh, A. Takagi, H. Asoh, I. Kobayashi

DOI: 10.1109/ICAWST.2013.6765459 · Published: 2013-11-01 · Pages: 322-328
This paper reports the development and evaluation of a natural language generation system that produces a variety of language expressions from visual information captured by a CCD camera. A distinctive feature of the system is that it generates varied expressions by combining different syntactic structures with different vocabulary sets, while managing the generation process according to user-designated viewpoints. The system converts the visual information into a concept dependency structure using the semantic representation framework proposed by Takagi and Itoh. It then transforms this structure and segments it into a set of words, deriving a word dependency structure that is subsequently arranged into a sentence. Transforming the concept dependency structure and varying the word segmentation allow the system to generate a variety of sentences from the same visual information. In this paper, we apply user-designated viewpoints to scenes containing more than one object, and we design viewpoint parameters that enable the system to manage the generation process and produce a variety of expressions. An evaluation confirmed that the system generates distinct variations according to the parameter values set by the user, including expressions referring to attribute values of the objects in the scene and relative expressions denoting relations between the targeted object and other objects.
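To make the described pipeline concrete, the following is a minimal Python sketch of the generation flow: a concept dependency structure is pruned according to user-designated viewpoint parameters, flattened into a word sequence, and arranged into a sentence. All names here (ConceptNode, Viewpoint, transform, to_words, generate) are illustrative assumptions for this sketch, not the authors' implementation; the relation marker and the final word arrangement are heavily simplified.

# Hypothetical sketch of the generation pipeline described in the abstract.
# Class and function names are illustrative assumptions, not the authors' API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConceptNode:
    concept: str                                   # e.g. "ball", "red", "left-of"
    children: List["ConceptNode"] = field(default_factory=list)

@dataclass
class Viewpoint:
    target: str                                    # object the user focuses on
    use_attributes: bool = True                    # mention attribute values (color, size, ...)
    use_relations: bool = False                    # mention relations to other objects

def transform(root: ConceptNode, vp: Viewpoint) -> ConceptNode:
    """Prune the concept dependency structure according to the viewpoint:
    keep attribute and/or relation subtrees as the parameters dictate."""
    kept = []
    for child in root.children:
        if child.concept.endswith("-of"):          # crude relation marker (assumption)
            if vp.use_relations:
                kept.append(transform(child, vp))
        elif vp.use_attributes:
            kept.append(transform(child, vp))
    return ConceptNode(root.concept, kept)

def to_words(root: ConceptNode) -> List[str]:
    """Flatten the transformed structure into a dependent-first word list.
    Real word segmentation would choose among vocabulary sets; this walk
    stands in for that step."""
    words = [w for c in root.children for w in to_words(c)]
    return words + [root.concept]

def generate(root: ConceptNode, vp: Viewpoint) -> str:
    """End-to-end: prune by viewpoint, then arrange words into a sentence."""
    return " ".join(to_words(transform(root, vp))) + "."

# Example scene: a red ball to the left of a box.
scene = ConceptNode("ball", [
    ConceptNode("red"),
    ConceptNode("left-of", [ConceptNode("box")]),
])
print(generate(scene, Viewpoint(target="ball")))                      # "red ball."
print(generate(scene, Viewpoint(target="ball", use_relations=True)))  # "red box left-of ball."

Varying the Viewpoint parameters yields different expressions from the same scene, mirroring the evaluated behavior: attribute-only expressions versus relative expressions that mention other objects. Proper sentence arrangement (word order, function words) is the part this sketch simplifies most.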