{"title":"Lossless Compression of Hyperspectral Imagery","authors":"Raffaele Pizzolante","doi":"10.1109/CCP.2011.31","DOIUrl":"https://doi.org/10.1109/CCP.2011.31","url":null,"abstract":"In this paper we review the Spectral oriented Least SQuares (SLSQ) algorithm : an efficient and low complexity algorithm for Hyper spectral Image loss less compression, presented in [2]. Subsequently, we consider two important measures : Pearson's Correlation and Bhattacharyya distance and describe a band ordering approach based on this distances. Finally, we report experimental results achieved with a Java-based implementation of SLSQ on data cubes acquired by NASA JPL's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123078323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating New Cluster Setup on 10Gbit/s Network to Support the SuperB Computing Model","authors":"D. D. Prete, S. Pardi, G. Russo","doi":"10.1109/CCP.2011.33","DOIUrl":"https://doi.org/10.1109/CCP.2011.33","url":null,"abstract":"The new era of particle physics poses strong constraints on computing and storage availability for data analysis and data distribution. The SuperB project plans to produce and analyzes bulk of dataset two times bigger than the actual HEP experiment. In this scenario one of the main issues is to create a new cluster setup, able to scale for the next ten years and to take advantage from the new fabric technologies, included multicore and graphic programming units (GPUs). In this paper we propose a new site-wide cluster setup for Tier1 computer facilities, aimed to integrate storage and computing resources through a mix of high density storage solutions, cluster file system and Nx10Gbit/s network interfaces. The main idea is overcome the bottleneck due to the storage-computing decoupling through a scalable model composed by nodes with many cores and several disks in JBOD configuration. Preliminary tests made on 10Gbit/s cluster with a real SuperB use case, show the validity of our approach.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125806623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CoTracks: A New Lossy Compression Schema for Tracking Logs Data Based on Multiparametric Segmentation","authors":"W. Balzano, M. D. Sorbo","doi":"10.1109/CCP.2011.37","DOIUrl":"https://doi.org/10.1109/CCP.2011.37","url":null,"abstract":"A massive diffusion of positioning devices and services, transmitting and producing spatio-temporal data, raised space complexity problems and pulled the research focus toward efficient and specific algorithms to compress these huge amount of stored or flowing data. Co Tracks algorithm has been projected for a lossy compression of GPS data, exploiting analogies between all their spatio-temporal features. The original contribution of this algorithm is the consideration of the altitude of the track, an elaboration of 3D data and a dynamic vision of the moving point, because the speed, tightly linked to the time, is supposed to be one of the significant parameters in the uniformity search. Minimum Bounding Box has been the tool employed to group data points and to generate the key points of the approximated trajectory. The compression ratio, resulting also after a further Huffman coding, appears attractively high, suggesting new interesting developments of this new technique.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114504084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Straight-Line Programs: A Practical Test","authors":"I. Burmistrov, Lesha Khvorost","doi":"10.1109/CCP.2011.8","DOIUrl":"https://doi.org/10.1109/CCP.2011.8","url":null,"abstract":"We present an improvement of Rytter's algorithm that constructs a straight-line program for a given text and show that the improved algorithm is optimal in the worst case with respect to the number of AVL-tree rotations. Also we compare Rytter's and ours algorithms on various data sets and provide a comparative analysis of compression ratio achieved by these algorithms, by LZ77 and by LZW.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126666876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Overload Control through Multiprocessor Load Sharing in ATCA Architecture","authors":"S. Montagna, M. Pignolo","doi":"10.1109/CCP.2011.13","DOIUrl":"https://doi.org/10.1109/CCP.2011.13","url":null,"abstract":"This work will deal with overload control schemes within ATCA modules achieving IMS functionalities and exploiting the cooperation between processors. A performance evaluation will be carried out on two algorithms aimed at optimizing multiple processors workload within ATCA boards performing incoming traffic control. The driving policy of the first algorithm consists in a continuous estimation of the mean processors workload, while the gear of the other algorithm is a load balancing following a queue estimation. The Key Performance Indicator will be represented by the throughput, i.e. the number of sessions managed within a fixed time period.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124088201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cache Friendly Burrows-Wheeler Inversion","authors":"Juha Kärkkäinen, S. Puglisi","doi":"10.1109/CCP.2011.15","DOIUrl":"https://doi.org/10.1109/CCP.2011.15","url":null,"abstract":"The Burrows-Wheeler transform permutes the symbols of a string such that the permuted string can be compressed effectively with fast, simple techniques. Inversion of the transform is a bottleneck in practice. Inversion takes linear time, but, for each symbol decoded, folklore says that a random access into the transformed string (and so a CPU cache-miss) is necessary. In this paper we show how to mitigate cache misses and so speed inversion. Our main idea is to modify the standard inversion algorithm to detect and record repeated sub strings in the original string as it is recovered. Subsequent occurrences of these repetitions are then copied in a cache friendly way from the already recovered portion of the string, short cutting a series of random accesses by the standard inversion algorithm. We show experimentally that this approach leads to faster runtimes in general, and can drastically reduce inversion time for highly repetitive data.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125845245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cataloga: A Software for Semantic-Based Terminological Data Mining","authors":"A. Elia, Mario Monteleone, Alberto Postiglione","doi":"10.1109/CCP.2011.42","DOIUrl":"https://doi.org/10.1109/CCP.2011.42","url":null,"abstract":"This paper is focused on Catalog a, a software package based on Lexicon-Grammar theoretical and practical analytical framework and embedding a ling ware module built on compressed terminological electronic dictionaries. We will here show how Catalog a can be used to achieve efficient data mining and information retrieval by means of lexical ontology associated to terminology-based automatic textual analysis. Also, we will show how accurate data compression is necessary to build efficient textual analysis software. Therefore, we will here discuss the creation and functioning of a software for semantic-based terminological data mining, in which a crucial role is played by Italian simple and compound-word electronic dictionaries. Lexicon-Grammar is one of the most profitable and consistent methods for natural language formalization and automatic textual analysis it was set up by French linguist Maurice Gross during the '60s, and subsequently developed for and applied to Italian by Annibale Elia, Emilio D'Agostino and Maurizio Martin Elli. Basically, Lexicon-Grammar establishes morph syntactic and statistical sets of analytic rules to read and parse large textual corpora. The analytical procedure here described will prove itself appropriate for any type of digitalized text, and will represent a relevant support for the building and implementing of Semantic Web (SW) interactive platforms.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123407640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining Non-stationary Prediction, Optimization and Mixing for Data Compression","authors":"Christopher Mattern","doi":"10.1109/CCP.2011.22","DOIUrl":"https://doi.org/10.1109/CCP.2011.22","url":null,"abstract":"In this paper an approach to modelling nonstationary binary sequences, i.e., predicting the probability of upcoming symbols, is presented. After studying the prediction model we evaluate its performance in two non-artificial test cases. First the model is compared to the Laplace and Krichevsky-Trofimov estimators. Secondly a statistical ensemble model for compressing Burrows-Wheeler-Transform output is worked out and evaluated. A systematic approach to the parameter optimization of an individual model and the ensemble model is stated.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"2011 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121792579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Patch for Squashfs to Improve the Compressed Files Contents Search: HSFS","authors":"N. Corriero","doi":"10.1109/CCP.2011.34","DOIUrl":"https://doi.org/10.1109/CCP.2011.34","url":null,"abstract":"Squash FS is a Linux compress file system. Hixosfs is a file system to improve file content search by using metadata information's. In this paper we propose to use Hixosfs idea in Squash FS context by creating a new file system HSFS. HSFS is a compress Linux file system to store metadata within nodes. We compare our idea with other common solutions. We test our idea with DICOM file used to store medical images.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121068281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Approach to QoS Monitoring in the Cloud","authors":"L. Romano, D. Mari, Zbigniew Jerzak, C. Fetzer","doi":"10.1109/CCP.2011.49","DOIUrl":"https://doi.org/10.1109/CCP.2011.49","url":null,"abstract":"The availability of a dependable (i.e. reliable and timely) QoS monitoring facility is key for the real take up of cloud computing, since - by allowing organizations to receive the full value of cloud computing services - it would increase the level of trust they would place in this emerging technology. In this paper, we present a dependable QoS monitoring facility which relies on the \"as a Service\" paradigm, and can thus be made available to virtually all cloud users in a seamless way. Such a facility is called QoS-MONaaS, which stands for \"Quality of Service MONitoring as a Service\". Details are given about the internal design, current implementation, and experimental validation of the service.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126627524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}