{"title":"A Dynamic Illustration Approach For Arabic Text","authors":"Jezia Zakraoui, J. Jaam","doi":"10.1109/GCC45510.2019.1570512466","DOIUrl":null,"url":null,"abstract":"In this paper, we present an approach to dynamically transform simple Modern Standard Arabic children’s stories scripts to the best representative images that can illustrate efficiently the meaning of words and word senses. We connect formally multiple datasets involved in our framework. We then apply several techniques to find the images and associate them with related word senses. First, we apply natural language processing techniques to analyze the text in stories and we build a semantic representation of main characters and events in each paragraph. Second, we apply various query formulation techniques as a brief scenario to enhance image web search which showed better accuracy as per preliminary results. Third, most significant queries are chosen to retrieve a list of possible candidate images from our multimedia database and search engines (i.e., Google and Bing). Instructors can then select and validate the ranked contextual images to compose the final visualization for each paragraph.","PeriodicalId":352754,"journal":{"name":"2019 IEEE 10th GCC Conference & Exhibition (GCC)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 10th GCC Conference & Exhibition (GCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GCC45510.2019.1570512466","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In this paper, we present an approach to dynamically transform simple Modern Standard Arabic children's story scripts into the most representative images, so that the meaning of words and word senses is illustrated efficiently. We formally connect the multiple datasets involved in our framework and then apply several techniques to find images and associate them with the related word senses. First, we apply natural language processing techniques to analyze the story text and build a semantic representation of the main characters and events in each paragraph. Second, we apply various query formulation techniques, framed as a brief scenario, to enhance image web search; preliminary results show that this improves accuracy. Third, the most significant queries are chosen to retrieve a list of candidate images from our multimedia database and from search engines (i.e., Google and Bing). Instructors can then select and validate the ranked contextual images to compose the final visualization for each paragraph.
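The abstract describes a three-stage pipeline (text analysis, query formulation, image retrieval and ranking with instructor validation). The following minimal Python sketch only illustrates that flow; it is not the authors' implementation, and every name in it (SemanticFrame, analyze_paragraph, formulate_queries, search_images, illustrate) is a hypothetical placeholder.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SemanticFrame:
    """Hypothetical semantic representation of one paragraph:
    the main characters and events extracted by NLP analysis."""
    characters: List[str]
    events: List[str]

def analyze_paragraph(paragraph: str) -> SemanticFrame:
    """Placeholder for the NLP step (analysis of Modern Standard
    Arabic text, character and event extraction)."""
    raise NotImplementedError("plug in an Arabic NLP pipeline here")

def formulate_queries(frame: SemanticFrame) -> List[str]:
    """Placeholder for query formulation: combine characters and
    events into short scenario-like queries for image web search."""
    return [f"{c} {e}" for c in frame.characters for e in frame.events]

def search_images(query: str,
                  sources: Tuple[str, ...] = ("local_db", "google", "bing")
                  ) -> List[str]:
    """Placeholder for retrieval of candidate image URLs from the
    multimedia database and external search engines."""
    raise NotImplementedError("plug in database / search-engine clients here")

def illustrate(paragraph: str, top_k: int = 5) -> List[str]:
    """End-to-end flow as described in the abstract: analyze the text,
    build queries, retrieve candidates, and return a ranked shortlist
    for the instructor to select from and validate."""
    frame = analyze_paragraph(paragraph)
    candidates: List[str] = []
    for query in formulate_queries(frame):
        candidates.extend(search_images(query))
    # The abstract does not specify the ranking criterion; deduplication
    # plus truncation stands in for it here.
    return list(dict.fromkeys(candidates))[:top_k]
```

In this sketch the instructor-in-the-loop step is represented only by returning a ranked shortlist; how queries are scored and how images are ranked are left unspecified, matching the level of detail given in the abstract.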