M. Kikuchi, Kento Kawakami, Kazuho Watanabe, Mitsuo Yoshida, Kyoji Umemura
{"title":"高频到零频N-grams的统一似然比估计","authors":"M. Kikuchi, Kento Kawakami, Kazuho Watanabe, Mitsuo Yoshida, Kyoji Umemura","doi":"10.1587/transfun.2020EAP1088","DOIUrl":null,"url":null,"abstract":"Likelihood ratios (LRs), which are commonly used for probabilistic data processing, are often estimated based on the frequency counts of individual elements obtained from samples. In natural language processing, an element can be a continuous sequence of N items, called an N -gram, in which each item is a word, letter, etc. In this paper, we attempt to estimate LRs based on N -gram frequency information. A naive estimation approach that uses only N -gram frequencies is sensitive to low-frequency (rare) N -grams and not applicable to zero-frequency (unobserved) N -grams; these are known as the lowand zero-frequency problems, respectively. To address these problems, we propose a method for decomposing N -grams into item units and then applying their frequencies along with the original N -gram frequencies. Our method can obtain the estimates of unobserved N -grams by using the unit frequencies. Although using only unit frequencies ignores dependencies between items, our method takes advantage of the fact that certain items often co-occur in practice and therefore maintains their dependencies by using the relevant N -gram frequencies. We also introduce a regularization to achieve robust estimation for rare N -grams. Our experimental results demonstrate that our method is effective at solving both problems and can effectively control dependencies. key words: Likelihood ratio, the low-frequency problem, the zero-frequency problem, uLSIF.","PeriodicalId":348826,"journal":{"name":"IEICE Trans. Fundam. Electron. Commun. Comput. 
Sci.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unified Likelihood Ratio Estimation for High- to Zero-frequency N-grams\",\"authors\":\"M. Kikuchi, Kento Kawakami, Kazuho Watanabe, Mitsuo Yoshida, Kyoji Umemura\",\"doi\":\"10.1587/transfun.2020EAP1088\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Likelihood ratios (LRs), which are commonly used for probabilistic data processing, are often estimated based on the frequency counts of individual elements obtained from samples. In natural language processing, an element can be a continuous sequence of N items, called an N -gram, in which each item is a word, letter, etc. In this paper, we attempt to estimate LRs based on N -gram frequency information. A naive estimation approach that uses only N -gram frequencies is sensitive to low-frequency (rare) N -grams and not applicable to zero-frequency (unobserved) N -grams; these are known as the lowand zero-frequency problems, respectively. To address these problems, we propose a method for decomposing N -grams into item units and then applying their frequencies along with the original N -gram frequencies. Our method can obtain the estimates of unobserved N -grams by using the unit frequencies. Although using only unit frequencies ignores dependencies between items, our method takes advantage of the fact that certain items often co-occur in practice and therefore maintains their dependencies by using the relevant N -gram frequencies. We also introduce a regularization to achieve robust estimation for rare N -grams. Our experimental results demonstrate that our method is effective at solving both problems and can effectively control dependencies. key words: Likelihood ratio, the low-frequency problem, the zero-frequency problem, uLSIF.\",\"PeriodicalId\":348826,\"journal\":{\"name\":\"IEICE Trans. 
Fundam. Electron. Commun. Comput. Sci.\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEICE Trans. Fundam. Electron. Commun. Comput. Sci.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1587/transfun.2020EAP1088\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEICE Trans. Fundam. Electron. Commun. Comput. Sci.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1587/transfun.2020EAP1088","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Unified Likelihood Ratio Estimation for High- to Zero-frequency N-grams
Likelihood ratios (LRs), which are commonly used for probabilistic data processing, are often estimated from the frequency counts of individual elements obtained from samples. In natural language processing, an element can be a contiguous sequence of N items, called an N-gram, in which each item is a word, letter, etc. In this paper, we attempt to estimate LRs based on N-gram frequency information. A naive estimation approach that uses only N-gram frequencies is sensitive to low-frequency (rare) N-grams and not applicable to zero-frequency (unobserved) N-grams; these are known as the low- and zero-frequency problems, respectively. To address these problems, we propose a method for decomposing N-grams into item units and then applying their frequencies along with the original N-gram frequencies. Our method can obtain estimates for unobserved N-grams by using the unit frequencies. Although using only unit frequencies ignores dependencies between items, our method takes advantage of the fact that certain items often co-occur in practice and therefore maintains their dependencies by using the relevant N-gram frequencies. We also introduce a regularization to achieve robust estimation for rare N-grams. Our experimental results demonstrate that our method is effective at solving both problems and can effectively control dependencies. Key words: likelihood ratio, the low-frequency problem, the zero-frequency problem, uLSIF.
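To make the two failure modes concrete, the following is a minimal illustrative sketch (not the paper's unified estimator): a naive frequency-ratio LR that returns 0 or infinity for N-grams unobserved in one corpus, and a unit-decomposition fallback that factors an N-gram into per-item (unit) ratios under an independence assumption, with add-one smoothing standing in for the paper's uLSIF-style regularization. The toy corpora and function names here are hypothetical.

```python
from collections import Counter


def ngrams(tokens, n):
    """All contiguous length-n sequences (N-grams) in a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


# Toy corpora (illustrative only).
corpus_a = "the cat sat on the mat".split()
corpus_b = "the dog sat on the log".split()

N = 2
grams_a, grams_b = Counter(ngrams(corpus_a, N)), Counter(ngrams(corpus_b, N))
units_a, units_b = Counter(corpus_a), Counter(corpus_b)


def naive_lr(gram):
    """Frequency-ratio estimate p_a(gram) / p_b(gram).

    Degenerates for zero-frequency N-grams: 0 when unseen in corpus A,
    infinity when unseen in corpus B.
    """
    p_a = grams_a[gram] / sum(grams_a.values())
    p_b = grams_b[gram] / sum(grams_b.values())
    return p_a / p_b if p_b > 0 else float("inf")


def unit_lr(gram):
    """Independence-based fallback: product of smoothed per-item ratios.

    Always finite and positive, even for unobserved N-grams, but it
    ignores dependencies between the items in the N-gram.
    """
    lr = 1.0
    for item in gram:
        # Add-one smoothing over each corpus's vocabulary.
        p_a = (units_a[item] + 1) / (sum(units_a.values()) + len(units_a))
        p_b = (units_b[item] + 1) / (sum(units_b.values()) + len(units_b))
        lr *= p_a / p_b
    return lr
```

For example, `naive_lr(("the", "dog"))` collapses to 0 because the bigram never occurs in corpus A, while `unit_lr(("the", "dog"))` stays finite and positive; the paper's contribution is to combine both sources of frequency information in one estimator rather than choosing either extreme.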