{"title":"Preprocessing text to improve compression ratios","authors":"H. Kruse, A. Mukherjee","doi":"10.1109/DCC.1998.672295","DOIUrl":null,"url":null,"abstract":"Summary form only given. We discuss the use of a text preprocessing algorithm that can improve the compression ratio of standard data compression algorithms, in particular 'bzip2', when used on text files, by up to 20%. The text preprocessing algorithm uses a static dictionary of the English language that is kept separately from the compressed file. The method in which the dictionary is used by the algorithm to transform the text is based on earlier work of Holger Kruse, Amar Mukherjee (see Proc. Data Comp. Conf., IEEE Comp. Society Press, p.447, 1997). The idea is to replace each word in the input text by a character sequence which encodes the position of the original word in the dictionary. The character sequences used for this encoding are chosen carefully in such a way that specific back-end compression algorithms can often compress these sequences more easily than the original words, increasing the overall compression ratio for the input text. In addition to the original method, this paper describes a variation of the method specifically for the 'bzip2' data compression algorithm. The new method yields an improvements in compression ratio of up to 20% over bzip2. We also describe methods how our algorithm can be used on wide area networks such as the Internet, and in particular how dictionaries can automatically be synchronized and kept up to date in a distributed environment, by using the existing system of URLs, caching and document types, and applying it to dictionaries and text files.","PeriodicalId":191890,"journal":{"name":"Proceedings DCC '98 Data Compression Conference (Cat. 
No.98TB100225)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1998-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"46","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DCC.1998.672295","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 46
Abstract
Summary form only given. We discuss the use of a text preprocessing algorithm that can improve the compression ratio of standard data compression algorithms, in particular 'bzip2', when used on text files, by up to 20%. The text preprocessing algorithm uses a static dictionary of the English language that is kept separately from the compressed file. The method by which the algorithm uses the dictionary to transform the text is based on earlier work by Holger Kruse and Amar Mukherjee (see Proc. Data Comp. Conf., IEEE Comp. Society Press, p.447, 1997). The idea is to replace each word in the input text with a character sequence that encodes the position of the original word in the dictionary. The character sequences used for this encoding are chosen carefully so that specific back-end compression algorithms can often compress these sequences more easily than the original words, increasing the overall compression ratio for the input text. In addition to the original method, this paper describes a variation of the method tailored specifically to the 'bzip2' data compression algorithm. The new method yields an improvement in compression ratio of up to 20% over bzip2 alone. We also describe how our algorithm can be used on wide area networks such as the Internet, and in particular how dictionaries can automatically be synchronized and kept up to date in a distributed environment, by using the existing system of URLs, caching and document types, and applying it to dictionaries and text files.
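The core transform described above can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' actual encoding: the dictionary, the escape marker, and the base-26 letter encoding of the index are all assumptions chosen only to show the idea of mapping words to a small, repetitive token alphabet that a back-end compressor such as bzip2 may handle better than raw text.

```python
# Hypothetical sketch of dictionary-based text preprocessing.
# Each word found in a shared static dictionary is replaced by an
# escape-marked token encoding its dictionary index; the transform is
# reversible given the same dictionary on the receiving side.

import re

# Assumed tiny stand-in for the shared static English dictionary.
DICTIONARY = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]
INDEX = {w: i for i, w in enumerate(DICTIONARY)}

ESCAPE = "*"  # assumed marker separating encoded tokens from literal words


def encode_index(i: int) -> str:
    # Encode the dictionary index as lowercase letters (base 26), so that
    # every token is drawn from a small, highly repetitive alphabet.
    s = ""
    while True:
        s = chr(ord("a") + i % 26) + s
        i //= 26
        if i == 0:
            return ESCAPE + s


def transform(text: str) -> str:
    # Replace each dictionary word with its encoded token; words not in
    # the dictionary pass through unchanged.
    def repl(m: re.Match) -> str:
        w = m.group(0)
        return encode_index(INDEX[w]) if w in INDEX else w

    return re.sub(r"[A-Za-z]+", repl, text)


def inverse(text: str) -> str:
    # Recover the original text by decoding each escape-marked token
    # back to the word at that dictionary index.
    def repl(m: re.Match) -> str:
        i = 0
        for c in m.group(1):
            i = i * 26 + (ord(c) - ord("a"))
        return DICTIONARY[i]

    return re.sub(re.escape(ESCAPE) + r"([a-z]+)", repl, text)
```

As a simplification, this sketch does not guard against the escape marker occurring in the input text; a real implementation would need an escaping rule for literal occurrences, and would choose the token alphabet to suit the statistics of the specific back-end compressor.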