Latest Papers: 2011 First International Conference on Data Compression, Communications and Processing

Lossless Compression of Hyperspectral Imagery
Raffaele Pizzolante
DOI: 10.1109/CCP.2011.31
Abstract: In this paper we review the Spectral oriented Least SQuares (SLSQ) algorithm, an efficient, low-complexity algorithm for lossless hyperspectral image compression, presented in [2]. Subsequently, we consider two important measures, Pearson's correlation and the Bhattacharyya distance, and describe a band-ordering approach based on these distances. Finally, we report experimental results achieved with a Java-based implementation of SLSQ on data cubes acquired by NASA JPL's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).
Citations: 7
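The band-ordering idea in this abstract — arranging spectrally similar bands next to each other so each band can be predicted from a highly correlated neighbour — can be illustrated with a small sketch. This is an assumption-laden illustration, not the paper's SLSQ implementation: `greedy_band_order` is a hypothetical helper that builds a simple greedy chain on Pearson correlation.

```python
import numpy as np

def pearson(a, b):
    # Pearson correlation between two flattened bands.
    a = a.ravel().astype(float) - a.mean()
    b = b.ravel().astype(float) - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def greedy_band_order(cube):
    # cube: array of shape (bands, rows, cols). Start from band 0 and
    # repeatedly append the unused band most correlated with the last
    # selected one, so neighbouring bands in the order are similar.
    n = cube.shape[0]
    order = [0]
    remaining = set(range(1, n))
    while remaining:
        last = order[-1]
        best = max(remaining, key=lambda j: pearson(cube[last], cube[j]))
        order.append(best)
        remaining.remove(best)
    return order
```

A real band-ordering scheme would use the correlations (or Bhattacharyya distances) globally, e.g. via a spanning tree, rather than this nearest-neighbour chain.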
Evaluating New Cluster Setup on 10Gbit/s Network to Support the SuperB Computing Model
D. D. Prete, S. Pardi, G. Russo
DOI: 10.1109/CCP.2011.33
Abstract: The new era of particle physics poses strong constraints on computing and storage availability for data analysis and data distribution. The SuperB project plans to produce and analyze datasets two times bigger than those of current HEP experiments. In this scenario, one of the main issues is to create a new cluster setup able to scale for the next ten years and to take advantage of new fabric technologies, including multicore processors and graphics processing units (GPUs). In this paper we propose a new site-wide cluster setup for Tier1 computing facilities, aimed at integrating storage and computing resources through a mix of high-density storage solutions, cluster file systems, and Nx10Gbit/s network interfaces. The main idea is to overcome the bottleneck due to storage-computing decoupling through a scalable model composed of nodes with many cores and several disks in JBOD configuration. Preliminary tests made on a 10Gbit/s cluster with a real SuperB use case show the validity of our approach.
Citations: 3
CoTracks: A New Lossy Compression Schema for Tracking Logs Data Based on Multiparametric Segmentation
W. Balzano, M. D. Sorbo
DOI: 10.1109/CCP.2011.37
Abstract: The massive diffusion of positioning devices and services, transmitting and producing spatio-temporal data, has raised space-complexity problems and pulled the research focus toward efficient, specific algorithms to compress these huge amounts of stored or flowing data. The CoTracks algorithm has been designed for lossy compression of GPS data, exploiting analogies among all their spatio-temporal features. The original contributions of this algorithm are the consideration of the altitude of the track, the elaboration of 3D data, and a dynamic view of the moving point, since speed, tightly linked to time, is taken to be one of the significant parameters in the uniformity search. The Minimum Bounding Box is the tool employed to group data points and to generate the key points of the approximated trajectory. The compression ratio, also after a further Huffman coding, appears attractively high, suggesting interesting new developments of this technique.
Citations: 7
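The core grouping step described above — collecting consecutive samples into a Minimum Bounding Box and keeping only key points of the approximated trajectory — can be sketched in a simplified form. This is not the paper's CoTracks algorithm (which also weighs speed and other parameters); `compress_track` and `tol` are hypothetical names for a bare bounding-box segmentation:

```python
def compress_track(points, tol):
    # points: list of (x, y, z) samples along a track.
    # Grow a 3D bounding box over consecutive points; when the box
    # would exceed `tol` in any dimension, close the segment and
    # start a new one, keeping its first point as a key point.
    if not points:
        return []
    keys = [points[0]]
    lo, hi = list(points[0]), list(points[0])
    for p in points[1:]:
        new_lo = [min(l, c) for l, c in zip(lo, p)]
        new_hi = [max(h, c) for h, c in zip(hi, p)]
        if any(h - l > tol for l, h in zip(new_lo, new_hi)):
            keys.append(p)                 # new segment starts at p
            lo, hi = list(p), list(p)
        else:
            lo, hi = new_lo, new_hi
    return keys
```

The retained key points could then be entropy coded (e.g. with Huffman coding, as the abstract mentions) for a further size reduction.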
Straight-Line Programs: A Practical Test
I. Burmistrov, Lesha Khvorost
DOI: 10.1109/CCP.2011.8
Abstract: We present an improvement of Rytter's algorithm that constructs a straight-line program for a given text and show that the improved algorithm is optimal in the worst case with respect to the number of AVL-tree rotations. We also compare Rytter's algorithm and ours on various data sets and provide a comparative analysis of the compression ratios achieved by these algorithms, by LZ77, and by LZW.
Citations: 3
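For readers unfamiliar with the object being constructed: a straight-line program (SLP) is a context-free grammar that derives exactly one string, with each nonterminal expanding either to a terminal or to a pair of symbols. The sketch below shows only decompression (expanding an SLP back into its text), not Rytter's construction; the dictionary encoding is an assumed representation.

```python
def expand(slp, symbol, memo=None):
    # slp maps each symbol to either a terminal character (str) or a
    # pair of symbols (the two children of its production rule).
    # Memoization keeps the expansion linear in the output size.
    if memo is None:
        memo = {}
    if symbol in memo:
        return memo[symbol]
    rhs = slp[symbol]
    if isinstance(rhs, str):
        out = rhs
    else:
        left, right = rhs
        out = expand(slp, left, memo) + expand(slp, right, memo)
    memo[symbol] = out
    return out
```

Because rules are shared, an SLP with few rules can derive an exponentially longer string, which is what makes SLPs interesting as a compression format.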
Overload Control through Multiprocessor Load Sharing in ATCA Architecture
S. Montagna, M. Pignolo
DOI: 10.1109/CCP.2011.13
Abstract: This work deals with overload control schemes within ATCA modules providing IMS functionality and exploiting cooperation between processors. A performance evaluation is carried out on two algorithms aimed at optimizing the workload of multiple processors within ATCA boards performing incoming traffic control. The driving policy of the first algorithm is a continuous estimation of the mean processor workload, while the second algorithm performs load balancing based on a queue estimation. The key performance indicator is throughput, i.e., the number of sessions managed within a fixed time period.
Citations: 0
Cache Friendly Burrows-Wheeler Inversion
Juha Kärkkäinen, S. Puglisi
DOI: 10.1109/CCP.2011.15
Abstract: The Burrows-Wheeler transform permutes the symbols of a string such that the permuted string can be compressed effectively with fast, simple techniques. Inversion of the transform is a bottleneck in practice. Inversion takes linear time, but, for each symbol decoded, folklore says that a random access into the transformed string (and so a CPU cache miss) is necessary. In this paper we show how to mitigate cache misses and so speed up inversion. Our main idea is to modify the standard inversion algorithm to detect and record repeated substrings in the original string as it is recovered. Subsequent occurrences of these repetitions are then copied in a cache-friendly way from the already recovered portion of the string, shortcutting a series of random accesses by the standard inversion algorithm. We show experimentally that this approach leads to faster runtimes in general, and can drastically reduce inversion time for highly repetitive data.
Citations: 2
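The "one random access per decoded symbol" that this paper attacks comes from the LF-mapping step of standard inversion, which a sketch makes concrete. This is the textbook algorithm, not the authors' cache-friendly variant; it assumes the original string ended with a unique sentinel '$':

```python
def bwt_inverse(bwt):
    # Invert a Burrows-Wheeler transform via the classic LF mapping.
    n = len(bwt)
    # occ[i]: occurrences of bwt[i] among bwt[:i] (rank of that symbol).
    counts, occ = {}, [0] * n
    for i, c in enumerate(bwt):
        occ[i] = counts.get(c, 0)
        counts[c] = occ[i] + 1
    # first[c]: row where symbol c's run starts in the sorted first column.
    first, total = {}, 0
    for c in sorted(counts):
        first[c] = total
        total += counts[c]
    out, row = [], 0                    # row 0 starts with the sentinel '$'
    for _ in range(n):
        c = bwt[row]
        out.append(c)                   # symbols are produced in reverse order
        row = first[c] + occ[row]       # LF step: the random access per symbol
    text = ''.join(reversed(out))
    return text[1:]                     # drop the leading sentinel
```

Each `row = first[c] + occ[row]` jump lands at an essentially arbitrary position of the transformed string, which is exactly the cache-miss pattern the paper's repetition-copying technique shortcuts.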
Cataloga: A Software for Semantic-Based Terminological Data Mining
A. Elia, Mario Monteleone, Alberto Postiglione
DOI: 10.1109/CCP.2011.42
Abstract: This paper focuses on Cataloga, a software package based on the Lexicon-Grammar theoretical and practical analytical framework and embedding a lingware module built on compressed terminological electronic dictionaries. We show how Cataloga can be used to achieve efficient data mining and information retrieval by means of a lexical ontology associated with terminology-based automatic textual analysis. We also show how accurate data compression is necessary to build efficient textual-analysis software. We therefore discuss the creation and functioning of a software package for semantic-based terminological data mining, in which a crucial role is played by Italian simple-word and compound-word electronic dictionaries. Lexicon-Grammar is one of the most profitable and consistent methods for natural language formalization and automatic textual analysis; it was set up by the French linguist Maurice Gross during the 1960s, and subsequently developed for and applied to Italian by Annibale Elia, Emilio D'Agostino, and Maurizio Martinelli. Basically, Lexicon-Grammar establishes morphosyntactic and statistical sets of analytic rules to read and parse large textual corpora. The analytical procedure described here will prove appropriate for any type of digitized text, and will represent relevant support for building and implementing Semantic Web (SW) interactive platforms.
Citations: 2
Wireless Connectivity for Remote Objective Monitoring of Bio-signals
A. Aristama, W. Almuhtadi
DOI: 10.1109/CCP.2011.27
Abstract: In the Remote Objective Monitoring of Bio-Signals (ROMOBS) project, an automated near-real-time remote health-monitoring device is being developed. The goal of this device is to measure blood-flow parameters (systolic/diastolic blood pressure, heart rate, etc.), report the measurement results to a medical centre, and get the response back to the outpatient, all in an autonomous fashion. The objective of this paper is to develop a communication protocol that enables the measurement device to be efficiently and constantly connected to a server the medical staff works on. Steps toward completing this goal include determining a network scheme that would effectively do the job while maintaining a low level of complexity and complying with the requirements set by the project. The result is a hybrid Bluetooth/cellular wireless system that emerges as the primary choice of connectivity medium, with an application that sits on a Bluetooth- and Java-enabled cell phone as the data carrier. This paper discusses the development progress, the technologies involved, and the creation process of an interactive and user-friendly ROMOBS application.
Citations: 0
Backwards Search in Context Bound Text Transformations
M. Petri, G. Navarro, J. Culpepper, S. Puglisi
DOI: 10.1109/CCP.2011.18
Abstract: The Burrows-Wheeler Transform (bwt) is the basis for many of the most effective compression and self-indexing methods used today. A key to the versatility of the bwt is the ability to search for patterns directly in the transformed text. A backwards search for a pattern P can be performed on a transformed text by iteratively determining the range of suffixes that match P. The search can be further enhanced by constructing a wavelet tree over the output of the bwt in order to emulate a suffix array. In this paper, we investigate new search algorithms derived from a variation of the bwt in which rotations are only sorted to a depth k, commonly referred to as a context-bound transform. Interestingly, this bwt variant can be used to mimic a k-gram index, which is used in a variety of applications that need to efficiently return occurrences in text-position order. We present the first backwards-search algorithms on the k-bwt, and show how to construct a self-index with many of the attractive properties of a k-gram index.
Citations: 4
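The backward-search baseline that this paper extends to the k-bwt iteratively narrows a row range, one pattern symbol at a time, from last to first. The sketch below is the standard full-bwt algorithm (not the k-bwt variant the paper introduces), with `rank` computed naively where a wavelet tree would normally provide it:

```python
from collections import Counter

def backward_search(bwt, pattern):
    # Count occurrences of `pattern` via backward search on the BWT.
    # C[c]: number of symbols in the text strictly smaller than c.
    counts = Counter(bwt)
    C, total = {}, 0
    for c in sorted(counts):
        C[c] = total
        total += counts[c]

    def rank(c, i):
        # Occurrences of c in bwt[:i]. O(n) here; a wavelet tree over
        # the bwt answers this in O(log sigma) time.
        return bwt[:i].count(c)

    lo, hi = 0, len(bwt)                  # current range of matching rows
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + rank(c, lo)
        hi = C[c] + rank(c, hi)
        if lo >= hi:
            return 0
    return hi - lo                        # number of occurrences
```

With context-bound sorting (the k-bwt), rows sharing a k-symbol prefix are no longer fully sorted, which is why searching beyond depth k requires the new machinery the paper develops.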
Verification of a Batch of Bad Signatures by Using the Matrix-Detection Algorithm
Yi-Li Huang, Chu-Hsing Lin, Fang-Yie Leu
DOI: 10.1109/CCP.2011.46
Abstract: Batch verification is a method devised to verify multiple signatures simultaneously as a whole. In the literature, some conventional batch verification schemes cannot effectively and efficiently identify bad signatures. The Small Exponent test, a popular batch verification method, has its own problems; e.g., after a test, bad signatures may still escape with some probability. In this paper, we propose a batch verification approach, called the Matrix-Detection Algorithm (MDA for short), with which all bad signatures can be identified when a batch contains fewer than four bad signatures or an odd number of bad signatures. Given 1024 signatures with 4 bad signatures, the maximum escape probability pmax of the MDA is 5.3×10^-5, and pmax decreases as the number of digital signatures or bad signatures increases. Analytic results show that the MDA is more secure and efficient than the Small Exponent test.
Citations: 3
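The general shape of a matrix-style detection scheme can be illustrated without any cryptography. The sketch below is NOT the paper's MDA: it shows only the generic row/column group-testing idea, where items are laid out in a grid, each row and column is batch-verified, and suspects sit at the intersections of failing rows and columns. `batch_ok` is a hypothetical stand-in for a real batch verification test (e.g. a Small Exponent test over a signature group):

```python
import math

def matrix_detect(items, batch_ok):
    # Lay items out in a near-square grid, batch-verify every row and
    # every column, and flag items at failing row/column intersections.
    n = len(items)
    cols = math.ceil(math.sqrt(n)) or 1
    rows = math.ceil(n / cols) if n else 0
    grid = [items[r * cols:(r + 1) * cols] for r in range(rows)]
    bad_rows = [r for r, row in enumerate(grid) if not batch_ok(row)]
    bad_cols = [c for c in range(cols)
                if not batch_ok([grid[r][c] for r in range(rows)
                                 if c < len(grid[r])])]
    return [grid[r][c] for r in bad_rows for c in bad_cols
            if c < len(grid[r])]
```

With a single bad item this pinpointing is exact; with several bad items, intersections can produce false positives, which is consistent with the abstract's restriction to fewer than four (or an odd number of) bad signatures for exact identification.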