{"title":"面向主题的自动补全模型:紧固自动补全系统的方法","authors":"S. Prisca, M. Dînsoreanu, C. Lemnaru","doi":"10.5220/0005597502410248","DOIUrl":null,"url":null,"abstract":"In this paper we propose an autocompletion approach suitable for mobile devices that aims to reduce the overall data model size and to speed up query processing while not employing any language specific processing. The approach relies on topic information from input documents to split the data models based on topics and index them in a way that allows fast identification through their corresponding topic. Doing so, the size of the data model used for prediction is decreased to almost one fifth of the size of a model that contains all topics, and the query processing becomes two times faster, while maintaining the same precision obtained by employing a model that contains all topics.","PeriodicalId":102743,"journal":{"name":"2015 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Topic oriented auto-completion models: Approaches towards fastening auto-completion systems\",\"authors\":\"S. Prisca, M. Dînsoreanu, C. Lemnaru\",\"doi\":\"10.5220/0005597502410248\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper we propose an autocompletion approach suitable for mobile devices that aims to reduce the overall data model size and to speed up query processing while not employing any language specific processing. The approach relies on topic information from input documents to split the data models based on topics and index them in a way that allows fast identification through their corresponding topic. 
Doing so, the size of the data model used for prediction is decreased to almost one fifth of the size of a model that contains all topics, and the query processing becomes two times faster, while maintaining the same precision obtained by employing a model that contains all topics.\",\"PeriodicalId\":102743,\"journal\":{\"name\":\"2015 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K)\",\"volume\":\"82 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-11-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5220/0005597502410248\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5220/0005597502410248","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Topic oriented auto-completion models: Approaches towards fastening auto-completion systems
Abstract: In this paper we propose an autocompletion approach suitable for mobile devices that aims to reduce the overall data-model size and to speed up query processing without employing any language-specific processing. The approach relies on topic information from the input documents to split the data models by topic and to index them so that the model for a given topic can be identified quickly. As a result, the data model used for prediction shrinks to roughly one fifth of the size of a model containing all topics, and query processing becomes twice as fast, while precision remains the same as with the all-topics model.
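The core idea in the abstract (maintain one small prediction model per topic and route each query to the matching model instead of searching one large model over all topics) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the class and method names, and the simple count-based prefix model, are assumptions.

```python
from collections import defaultdict

class TopicAutocomplete:
    """Sketch of topic-split autocompletion: one small prefix model per
    topic, selected at query time, instead of a single all-topics model.
    (Hypothetical structure; the paper does not specify the model type.)"""

    def __init__(self):
        # topic -> {prefix -> {completion: frequency}}
        self.models = defaultdict(lambda: defaultdict(dict))

    def index(self, topic, words):
        """Add words from a document with a known topic to that topic's model."""
        model = self.models[topic]
        for word in words:
            for i in range(1, len(word)):
                counts = model[word[:i]]
                counts[word] = counts.get(word, 0) + 1

    def complete(self, topic, prefix, k=3):
        """Return the top-k completions, consulting only the active
        topic's (much smaller) model -- the source of the reported
        size and speed gains."""
        counts = self.models[topic].get(prefix, {})
        return [w for w, _ in sorted(counts.items(), key=lambda kv: -kv[1])[:k]]

ac = TopicAutocomplete()
ac.index("sports", ["goal", "goalkeeper", "game"])
ac.index("finance", ["gold", "growth"])
print(ac.complete("sports", "go"))   # only sports-topic completions are considered
```

Because each query touches only one topic's index, both memory footprint per lookup and search time scale with the topic's vocabulary rather than the whole corpus, which is consistent with the roughly one-fifth model size and two-fold speedup the abstract reports.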