{"title":"使用MapReduce的web级实体注释","authors":"Shashank Gupta, Varun Chandramouli, Soumen Chakrabarti","doi":"10.1109/HiPC.2013.6799137","DOIUrl":null,"url":null,"abstract":"Cloud computing frameworks such as map-reduce (MR) are widely used in the context of log mining, inverted indexing, and scientific data analysis. Here we address the new and important task of annotating token spans in billions of Web pages that mention named entities from a large entity catalog such as Wikipedia or Freebase. The key step in annotation is disambiguation: given the token Albert, use its mention context to determine which Albert is being mentioned. Disambiguation requires holding in RAM a machine-learnt statistical model for each mention phrase. In earlier work with only two million entities, we could fit all models in RAM, and stream rapidly through the corpus from disk. However, as the catalog grows to hundreds of millions of entities, this simple solution is no longer feasible. Simple adaptations like caching and evicting models online, or making multiple passes over the corpus while holding a fraction of models in RAM, showed unacceptable performance. Then we attempted to write a standard Hadoop MR application, but this hit a serious load skew problem (82.12% idle CPU). Skew in MR application seems widespread. Many skew mitigation approaches have been proposed recently. We tried SkewTune, which showed only modest improvement. We realized that reduce key splitting was essential, and designed simple but effective application-specific load estimation and key-splitting methods. A precise performance model was first created, which led to an objective function that we optimized heuristically. The resulting schedule was executed on Hadoop MR. This approach led to large benefits: our final annotator was 5.4× faster than standard Hadoop MR, and 5.2× faster than even SkewTune. Idle time was reduced to 3%. Although fine-tuned to our application, our technique may be of independent interest.","PeriodicalId":206307,"journal":{"name":"20th Annual International Conference on High Performance Computing","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Web-scale entity annotation using MapReduce\",\"authors\":\"Shashank Gupta, Varun Chandramouli, Soumen Chakrabarti\",\"doi\":\"10.1109/HiPC.2013.6799137\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Cloud computing frameworks such as map-reduce (MR) are widely used in the context of log mining, inverted indexing, and scientific data analysis. Here we address the new and important task of annotating token spans in billions of Web pages that mention named entities from a large entity catalog such as Wikipedia or Freebase. The key step in annotation is disambiguation: given the token Albert, use its mention context to determine which Albert is being mentioned. Disambiguation requires holding in RAM a machine-learnt statistical model for each mention phrase. In earlier work with only two million entities, we could fit all models in RAM, and stream rapidly through the corpus from disk. However, as the catalog grows to hundreds of millions of entities, this simple solution is no longer feasible. Simple adaptations like caching and evicting models online, or making multiple passes over the corpus while holding a fraction of models in RAM, showed unacceptable performance. 
Cloud computing frameworks such as MapReduce (MR) are widely used for log mining, inverted indexing, and scientific data analysis. Here we address the new and important task of annotating token spans in billions of Web pages that mention named entities from a large entity catalog such as Wikipedia or Freebase. The key step in annotation is disambiguation: given the token Albert, use its mention context to determine which Albert is being mentioned. Disambiguation requires holding in RAM a machine-learnt statistical model for each mention phrase. In earlier work with only two million entities, we could fit all models in RAM and stream rapidly through the corpus from disk. However, as the catalog grows to hundreds of millions of entities, this simple solution is no longer feasible. Simple adaptations, such as caching and evicting models online, or making multiple passes over the corpus while holding a fraction of the models in RAM, showed unacceptable performance. We then attempted to write a standard Hadoop MR application, but it hit a serious load-skew problem (82.12% idle CPU). Skew in MR applications appears to be widespread, and many skew-mitigation approaches have been proposed recently. We tried SkewTune, which showed only modest improvement. We realized that splitting reduce keys was essential, and designed simple but effective application-specific load-estimation and key-splitting methods. We first built a precise performance model, which yielded an objective function that we optimized heuristically; the resulting schedule was then executed on Hadoop MR. This approach brought large benefits: our final annotator was 5.4× faster than standard Hadoop MR and 5.2× faster than SkewTune. Idle time fell to 3%. Although fine-tuned to our application, our technique may be of independent interest.
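
The abstract names key splitting but does not show how it is done. As a minimal illustrative sketch (not the authors' code), assuming mention occurrences arrive as tab-separated (phrase, context) lines and that an offline load estimator assigns each phrase a split factor, a Hadoop mapper could salt hot reduce keys so that one very frequent mention phrase is spread over several reducers. SaltedMentionMapper, splitFactor, and the input format below are all hypothetical:

import java.io.IOException;
import java.util.Random;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/**
 * Illustrative sketch: salt hot reduce keys so a frequent mention phrase
 * is scattered over several reducers. The split factor per phrase would
 * come from an offline load estimate; a stand-in lookup is used here.
 */
public class SaltedMentionMapper
        extends Mapper<LongWritable, Text, Text, Text> {

    private final Random random = new Random();

    /** Hypothetical: number of splits the load estimator assigned to a
     *  phrase (1 for the vast majority of phrases). */
    private int splitFactor(String phrase) {
        return phrase.equals("albert") ? 8 : 1;  // stand-in estimate
    }

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Assume each input line is "<mentionPhrase>\t<contextSnippet>".
        String[] parts = line.toString().split("\t", 2);
        if (parts.length != 2) return;
        String phrase = parts[0];
        int k = splitFactor(phrase);
        // Append a random salt in [0, k); the default hash partitioner then
        // spreads a hot phrase over up to k reducers, each of which loads
        // that phrase's disambiguation model once and scores its share.
        String saltedKey = phrase + "#" + random.nextInt(k);
        context.write(new Text(saltedKey), new Text(parts[1]));
    }
}

The trade-off this creates is that each replica of a split key must load the phrase's model independently, so splitting only pays off for phrases whose per-mention work dominates the model-loading cost.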
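The abstract also does not state the performance model or objective function. One plausible shape for such an objective, under assumed notation not taken from the paper, is a makespan minimization: let $\ell_p$ be the model-loading cost for mention phrase $p$, $m_p$ its mention count, $t_p$ the per-mention scoring time, and $x_{pr} \in \{0,1\}$ indicate that a share of $p$ is assigned to reducer $r$. Then a schedule could aim for

$$ \min_{x}\; \max_{r} \sum_{p} x_{pr}\Bigl(\ell_p + \frac{m_p\, t_p}{\sum_{r'} x_{pr'}}\Bigr) $$

Replicating a phrase over more reducers shrinks each reducer's share of the mention work $m_p t_p$ but pays the load cost $\ell_p$ once per replica; balancing that trade-off across all phrases is exactly the kind of problem a heuristic optimizer over reducer assignments would handle.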