{"title":"Finding the best mismatched detector for channel coding and hypothesis testing","authors":"E. Abbe, M. Médard, Sean P. Meyn, Lizhong Zheng","doi":"10.1109/ITA.2007.4357593","DOIUrl":"https://doi.org/10.1109/ITA.2007.4357593","url":null,"abstract":"The mismatched-channel formulation is generalized to obtain simplified algorithms for computation of capacity bounds and improved signal constellation designs. The following issues are addressed: (i) For a given finite-dimensional family of linear detectors, how can we compute the best in this class to maximize the reliably received rate? That is, what is the best mismatched detector in a given class? (ii) For computation of the best detector, a new algorithm is proposed based on a stochastic approximation implementation of the Newton-Raphson method. (iii) The geometric setting provides a unified treatment of channel coding and robust/adaptive hypothesis testing.","PeriodicalId":439952,"journal":{"name":"2007 Information Theory and Applications Workshop","volume":"7 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123475367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Applications of the Golden Code","authors":"E. Viterbo, Y. Hong","doi":"10.1109/ITA.2007.4357609","DOIUrl":"https://doi.org/10.1109/ITA.2007.4357609","url":null,"abstract":"The Golden code is a full rate, full diversity 2x2 linear dispersion space-time block code (STBC) that was constructed using cyclic division algebras. The underlying algebraic structure provides the exceptional properties of the Golden code: cubic shaping and non-vanishing minimum determinant. In this paper, we first give a basic introduction to the Golden code. We discuss how to use the Golden code in a practical concatenated coding scheme for 2x2 MIMO systems based on OFDM, such as the ones proposed for high rate indoor wireless LAN communications (e.g., 802.11n). The proposed bandwidth-efficient concatenated scheme uses the Golden code as an inner multidimensional modulation and a trellis code as outer code. Lattice set partitioning is designed in order to increase the minimum determinant. A general framework for code construction and optimization is developed. It is shown that this golden space-time trellis coded modulation scheme can provide excellent performance for high rate applications.","PeriodicalId":439952,"journal":{"name":"2007 Information Theory and Applications Workshop","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117047230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Universal Noiseless Compression for Noisy Data","authors":"G. I. Shamir, T. Tjalkens, Frans M. J. Willems","doi":"10.1109/ITA.2007.4357603","DOIUrl":"https://doi.org/10.1109/ITA.2007.4357603","url":null,"abstract":"We study universal compression for discrete data sequences that were corrupted by noise. We show that while, as expected, there exist many cases in which the entropy of these sequences increases from that of the original data, somewhat surprisingly and counter-intuitively, the universal coding redundancy of such sequences cannot increase compared to that of the original data. We derive conditions that guarantee that this redundancy does not decrease asymptotically (in first order) from the original sequence redundancy in the stationary memoryless case. We then provide bounds on the redundancy for coding finite-length (large) noisy blocks generated by stationary memoryless sources and corrupted by some specific memoryless channels. Finally, we propose a sequential probability estimation method that can be used to compress binary data corrupted by some noisy channel. While there is much benefit in using this method to compress short blocks of noise-corrupted data, the new method is more general and allows sequential compression of binary sequences for which the probability of a bit is known to be limited within any given interval (not necessarily between 0 and 1). Additionally, this method has many other applications, including prediction, sequential channel estimation, and others.","PeriodicalId":439952,"journal":{"name":"2007 Information Theory and Applications Workshop","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129796518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Challenges of Intrusion Detection Compression Technology","authors":"K. Han, J. Kieffer","doi":"10.1109/ITA.2007.4357581","DOIUrl":"https://doi.org/10.1109/ITA.2007.4357581","url":null,"abstract":"A database management system (DBMS) controls and manages data to eliminate data redundancy and to ensure integrity, consistency, and availability of the data, among other features. Even though DBMS vendors continue to offer greater automation and simplicity in managing databases, the need for specialized intrusion detection database compression technology has not yet been addressed. Our research focuses on developing such technology. The focus is not only on compression but also on database management through planning and best practice adoption to improve operational efficiency, and to provide lower costs, privacy, and security. The focus in this summary is on the compression part of the DBMS for intrusion detection. We present a methodology employing grammar-based and large-alphabet compression techniques, which involves the generation of multiple dictionaries for compressing clustered subfiles of a very large data file. One of the dictionaries is a common dictionary which models features common to the subfiles. In addition, non-common features of each subfile are modeled via an auxiliary dictionary. Each clustered subfile is compressed using the augmented dictionary consisting of the common dictionary together with the auxiliary dictionary for that subfile.","PeriodicalId":439952,"journal":{"name":"2007 Information Theory and Applications Workshop","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129927496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Network Coding for Bidirected Networks","authors":"A. Sprintson, S. Rouayheb, C. Georghiades","doi":"10.1109/ITA.2007.4357606","DOIUrl":"https://doi.org/10.1109/ITA.2007.4357606","url":null,"abstract":"We consider the problem of finding a linear network code that guarantees an instantaneous recovery from edge failures in communication networks. With instantaneous recovery, lost data can be recovered at the destination without the need for path re-routing or packet re-transmission. We focus on a special class of bidirected networks. In such networks, for each edge there exists a corresponding edge in the reverse direction of equal capacity. We assume that at most one pair of bidirected edges can fail at any time. For unicast connections, we establish an upper bound of O(2^{2h}) on the minimum required field size and present an algorithm that constructs a linear network code over GF(2^{2h}). For multicast connections, we show that the minimum required field size is bounded by O(t · 2^{2h}), where t is the number of terminals. We also discuss link- and flow-cyclic bidirected coding networks with instantaneous recovery.","PeriodicalId":439952,"journal":{"name":"2007 Information Theory and Applications Workshop","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125756784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Much Bandwidth Can Attack Bots Commandeer?","authors":"M. Greenwald, S. Khanna, S. Venkatesh","doi":"10.1109/ITA.2007.4357579","DOIUrl":"https://doi.org/10.1109/ITA.2007.4357579","url":null,"abstract":"In a shared channel model for Internet links, bandwidth is shared by principled users who abide by communal principles for sharing and using bandwidth and unprincipled scofflaws who seek to commandeer as much of the bandwidth as possible to effect disruptions such as spam and DoS attacks. Attacks are magnified by the spread of bots that surreptitiously take over the functioning of legitimate users. In such settings, the natural filtering by router policies at ingress nodes and the rate of growth of link capacities towards the backbone play key roles in determining what fraction of the bandwidth is eventually commandeered. These considerations are presented in detail for a tree topology with users scattered at the leaves and with varying link capacity assignments and idealised router policies.","PeriodicalId":439952,"journal":{"name":"2007 Information Theory and Applications Workshop","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116763426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Pricing of Spectrum in Secondary Markets","authors":"A. Al Daoud, M. Alanyali, D. Starobinski","doi":"10.1109/ITA.2007.4357554","DOIUrl":"https://doi.org/10.1109/ITA.2007.4357554","url":null,"abstract":"The optimal price of spectrum in secondary markets is studied. We consider a primary license holder who aims to lease the right to provide service in a given subset of its coverage area. Such a transaction has two contrasting economic implications for the seller: on the one hand, the seller obtains revenue from the price exercised for the region. On the other hand, the seller incurs a cost due to (i) reduced spatial coverage of its network and (ii) possible interference from the leased region into the retained portion of its network. We formulate an optimization problem with the objective of profit maximization, and characterize its solutions based on a reduced load approximation. The form of the optimal price suggests charging each admitted call in proportion to the attendant revenue loss due to the generated interference.","PeriodicalId":439952,"journal":{"name":"2007 Information Theory and Applications Workshop","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125874147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Efficient Method for Large-Scale l1-Regularized Convex Loss Minimization","authors":"Kwangmoo Koh, Seung-Jean Kim, Stephen Boyd","doi":"10.1109/ITA.2007.4357584","DOIUrl":"https://doi.org/10.1109/ITA.2007.4357584","url":null,"abstract":"Convex loss minimization with ℓ1 regularization has been proposed as a promising method for feature selection in classification (e.g., ℓ1-regularized logistic regression) and regression (e.g., ℓ1-regularized least squares). In this paper we describe an efficient interior-point method for solving large-scale ℓ1-regularized convex loss minimization problems that uses a preconditioned conjugate gradient method to compute the search step. The method can solve very large problems. For example, the method can solve an ℓ1-regularized logistic regression problem with a million features and examples (e.g., the 20 Newsgroups data set) in a few minutes, on a PC.","PeriodicalId":439952,"journal":{"name":"2007 Information Theory and Applications Workshop","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126421430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Density Evolution for GF(q) LDPC Codes Via Simplified Message-passing Sets","authors":"B. Kurkoski, K. Yamaguchi, K. Kobayashi","doi":"10.1109/ITA.2007.4357586","DOIUrl":"https://doi.org/10.1109/ITA.2007.4357586","url":null,"abstract":"A message-passing decoder for GF(q) low-density parity-check codes is defined, which uses discrete messages from a subset of all possible binary vectors of length q. The proposed algorithm is a generalization to GF(q) of Richardson and Urbanke's decoding \"Algorithm E\" for binary codes. Density evolution requires a mapping between the probability distribution spaces for the channel, variable and check messages, and under the proposed algorithm, exact density evolution is possible. Symmetries in the message densities permit reduction in the size of the probability distribution space. Noise thresholds are obtained for LDPC codes on discrete memoryless channels, and as with Algorithm E, are remarkably close to noise thresholds under more complex belief propagation decoding.","PeriodicalId":439952,"journal":{"name":"2007 Information Theory and Applications Workshop","volume":"2012 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129577738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generalized Stopping Sets and Stopping Redundancy","authors":"K. Abdel-Ghaffar, J. Weber","doi":"10.1109/ITA.2007.4357610","DOIUrl":"https://doi.org/10.1109/ITA.2007.4357610","url":null,"abstract":"Iterative decoding for linear block codes over erasure channels may be much simpler than optimal decoding, but its performance is usually not as good. Here, we present a general iterative decoding technique that gives a more refined trade-off between complexity and performance. In each iteration, a system of equations is solved. In case the maximum number of equations to be solved is just one, the general iterative decoder reduces to the well-known iterative decoder. On the other hand, if the maximum number is set to the redundancy of the code, the general iterative decoder gives the same performance as the optimal decoder. Varying the maximum number of equations to be solved in each iteration between these two extremes allows for a better match, in terms of performance and complexity, to the system specifications. Stopping sets and stopping redundancy are important concepts in the analysis of the performance and complexity of iterative decoders on the erasure channel. In consequence of the new generalized decoding procedure, the notions of stopping sets and stopping redundancy are generalized as well. Basic properties and examples of both generalized stopping sets and generalized stopping redundancy are presented in this paper.","PeriodicalId":439952,"journal":{"name":"2007 Information Theory and Applications Workshop","volume":"262 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115492089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}