Rafael Berlanga Llavori, Antonio Jimeno-Yepes, María Pérez, Indira Lanza-Cruz
{"title":"大型知识资源的粗粒度语义表征","authors":"Rafael Berlanga Llavori, Antonio Jimeno-Yepes, María Pérez, Indira Lanza-Cruz","doi":"10.1145/3230599.3230616","DOIUrl":null,"url":null,"abstract":"This work presents an experimental study about the automatic assignment of semantic groups to concepts of large knowledge resources (KR) such as DBpedia1 or BabelNet2. Our proposal combines a simple lexico-statistical method for hypernym extraction combined with document and word embeddings extracted from Wikipedia. Results are encouraging and open new directions for improving other tasks related to large KR management like debugging and semantic annotation.","PeriodicalId":448209,"journal":{"name":"Proceedings of the 5th Spanish Conference on Information Retrieval","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Coarse-grained Semantic Characterization of Large Knowledge Resources\",\"authors\":\"Rafael Berlanga Llavori, Antonio Jimeno-Yepes, María Pérez, Indira Lanza-Cruz\",\"doi\":\"10.1145/3230599.3230616\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work presents an experimental study about the automatic assignment of semantic groups to concepts of large knowledge resources (KR) such as DBpedia1 or BabelNet2. Our proposal combines a simple lexico-statistical method for hypernym extraction combined with document and word embeddings extracted from Wikipedia. 
Results are encouraging and open new directions for improving other tasks related to large KR management like debugging and semantic annotation.\",\"PeriodicalId\":448209,\"journal\":{\"name\":\"Proceedings of the 5th Spanish Conference on Information Retrieval\",\"volume\":\"33 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-06-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 5th Spanish Conference on Information Retrieval\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3230599.3230616\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 5th Spanish Conference on Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3230599.3230616","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Coarse-grained Semantic Characterization of Large Knowledge Resources
This work presents an experimental study of the automatic assignment of semantic groups to the concepts of large knowledge resources (KR) such as DBpedia or BabelNet. Our proposal combines a simple lexico-statistical method for hypernym extraction with document and word embeddings extracted from Wikipedia. Results are encouraging and open new directions for improving other tasks related to large KR management, such as debugging and semantic annotation.
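The abstract does not spell out how semantic groups are assigned, but a common embedding-based formulation of this task is nearest-centroid assignment: represent each coarse semantic group by the centroid of the embeddings of a few seed concepts, then assign every concept to the group whose centroid is closest in cosine similarity. The sketch below illustrates this idea with toy vectors; it is a minimal illustration under that assumption, not the authors' implementation, and the group names, seed vectors, and 4-dimensional embeddings are invented for the example (real embeddings would be learned from Wikipedia text).

```python
# Hedged sketch (not the paper's method): assign a coarse semantic group
# to a concept by nearest-centroid cosine similarity over embeddings.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-d "embeddings"; in practice these would be document/word
# embeddings extracted from Wikipedia, as the paper describes.
concept_vecs = {
    "aspirin": np.array([0.9, 0.1, 0.0, 0.1]),
    "Paris":   np.array([0.1, 0.9, 0.1, 0.0]),
}

# Each semantic group is represented by a centroid vector, e.g. the mean
# of a handful of seed concepts known to belong to the group (hypothetical
# group labels for illustration).
group_centroids = {
    "CHEMICAL": np.array([0.8, 0.2, 0.1, 0.1]),
    "LOCATION": np.array([0.1, 0.8, 0.2, 0.0]),
}

def assign_group(concept: str) -> str:
    """Return the group whose centroid is most cosine-similar."""
    v = concept_vecs[concept]
    return max(group_centroids, key=lambda g: cosine(v, group_centroids[g]))

print(assign_group("aspirin"))  # CHEMICAL
print(assign_group("Paris"))    # LOCATION
```

In a realistic setting, the lexico-statistical hypernym extraction step mentioned in the abstract could supply the seed concepts per group, and the embedding similarity would then propagate group labels to the remaining concepts of the resource.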