2020 Information Theory and Applications Workshop (ITA): Latest Publications

A Brain-Inspired Framework for Evolutionary Artificial General Intelligence
2020 Information Theory and Applications Workshop (ITA) · Pub Date: 2020-02-02 · DOI: 10.1109/ITA50056.2020.9245000
Mohammad Nadji-Tehrani, A. Eslami
Abstract: From the medical field to agriculture and from energy to transportation, every industry is going through a revolution by embracing artificial intelligence (AI); nevertheless, AI is still in its infancy. Inspired by the evolution of the human brain, this paper demonstrates a novel method and framework for synthesizing an artificial brain with cognitive abilities by taking advantage of the same process responsible for the growth of the biological brain, called "neuroembryogenesis." The framework shares key behavioral aspects of the biological brain, such as spiking neurons, neuroplasticity, neuronal pruning, and excitatory and inhibitory interactions between neurons, which together make it capable of learning and memorizing. A highlight of the proposed design is its potential to improve itself incrementally over generations, based on system performance, using genetic algorithms. A proof of concept at the end of the paper demonstrates how a simplified implementation of the human visual cortex using the proposed framework is capable of character recognition. The framework is open source, and the code is shared with the scientific community at www.feagi.org.
Citations: 2
Applications of Online Nonnegative Matrix Factorization to Image and Time-Series Data
2020 Information Theory and Applications Workshop (ITA) · Pub Date: 2020-02-02 · DOI: 10.1109/ITA50056.2020.9245004
Hanbaek Lyu, G. Menz, D. Needell, Christopher Strohmeier
Abstract: Online nonnegative matrix factorization (ONMF) is a matrix factorization technique for the online setting, in which data arrive in a streaming fashion and the matrix factors are updated with each arrival. This enables factor analysis to be performed concurrently with the arrival of new data samples. In this article, we demonstrate how online nonnegative matrix factorization algorithms can be used to learn joint dictionary atoms from an ensemble of correlated data sets, and we propose a temporal dictionary learning scheme for time-series data based on ONMF algorithms. We demonstrate our dictionary learning technique in the application contexts of historical temperature data, video frames, and color images.
Citations: 4
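The streaming setting the abstract describes — factors updated as each batch of columns arrives — can be sketched as follows. This is a minimal illustration using standard multiplicative NMF updates applied per batch, under invented function and parameter names; it is not the authors' exact ONMF algorithm.

```python
import numpy as np

def online_nmf(stream, n_atoms, n_iter_inner=30, rng=None):
    """Learn a nonnegative dictionary W from a stream of data batches.

    For each arriving batch X (d x b), fit nonnegative codes H and take
    multiplicative update steps on W using only the current batch, so the
    dictionary evolves as data stream in.
    """
    rng = np.random.default_rng(rng)
    W = None
    eps = 1e-10
    for X in stream:
        if W is None:
            W = rng.random((X.shape[0], n_atoms)) + eps
        H = rng.random((n_atoms, X.shape[1])) + eps
        for _ in range(n_iter_inner):
            # classic multiplicative updates, restricted to the current batch
            H *= (W.T @ X) / (W.T @ W @ H + eps)
            W *= (X @ H.T) / (W @ H @ H.T + eps)
        W /= np.maximum(W.sum(axis=0, keepdims=True), eps)  # normalize atoms
    return W
```

Because each batch is discarded after its update, memory stays constant no matter how long the stream runs.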
Residual Based Sampling for Online Low Rank Approximation
2020 Information Theory and Applications Workshop (ITA) · Pub Date: 2020-02-02 · DOI: 10.1109/ITA50056.2020.9244974
Aditya Bhaskara, Silvio Lattanzi, Sergei Vassilvitskii, Morteza Zadimoghaddam
Abstract: We propose online algorithms for Column Subset Selection (CSS) and Principal Component Analysis (PCA), two methods widely employed for data analysis, summarization, and visualization. Given a data matrix A that is revealed one column at a time, the online CSS problem asks to keep a small set of columns, S, that best approximates the space spanned by the columns of A. As each column arrives, the algorithm must irrevocably decide whether to add it to S or to ignore it. In the online PCA problem, the goal is to output a projection of each column onto a low-dimensional subspace; in other words, the algorithm must provide an embedding for each column as it arrives, which cannot be changed as new columns arrive. While both of these problems have been studied in the online setting, only additive approximations were known prior to our work. The core of our approach is an adaptive sampling technique that gives a practical and efficient algorithm for both problems. We prove that by sampling columns using their "residual norm" (i.e., their norm orthogonal to directions sampled so far), we obtain a significantly better dependence between the number of columns sampled and the desired approximation error. We further show how to combine our algorithm "in series" with prior algorithms; in particular, using the results of Boutsidis et al. [5] and Frieze et al. [15], which have additive guarantees, we show how to improve the bounds on the error of our algorithm.
Citations: 0
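The core "residual norm" sampling rule is easy to state in code. The sketch below is an offline illustration of that rule — sample a column with probability proportional to its squared norm orthogonal to the directions already chosen — whereas the paper's algorithm makes these choices irrevocably as columns arrive; all names here are invented for the sketch.

```python
import numpy as np

def residual_norm_sampling(A, k, rng=None):
    """Pick k columns of A, each round sampling column j with probability
    proportional to the squared norm of its residual against the span of
    the columns chosen so far."""
    rng = np.random.default_rng(rng)
    R = A.astype(float).copy()  # residuals; starts as the full matrix
    chosen = []
    for _ in range(k):
        norms = np.sum(R * R, axis=0)
        total = norms.sum()
        if total <= 1e-12:      # matrix already fully captured
            break
        j = rng.choice(A.shape[1], p=norms / total)
        chosen.append(j)
        # project every column orthogonal to the newly chosen direction
        q = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(q, q @ R)
    return chosen
```

After a column is chosen, its own residual becomes zero, so the sampler naturally shifts probability mass toward directions not yet covered.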
An Interference-Resilient Relay Beamforming Scheme Inspired by Back-Propagation Algorithm
2020 Information Theory and Applications Workshop (ITA) · Pub Date: 2020-02-02 · DOI: 10.1109/ITA50056.2020.9245001
Rui Wang, Yinglin Jiang
Abstract: A relay node can be used to extend the range and improve the service quality of a communication link, but not when it is subject to interference. In this paper, we consider a relay network consisting of one source, one destination, and multiple relay nodes, and draw an analogy between the relay network and a three-layer artificial neural network (ANN). Inspired by the classic back-propagation (BP) algorithm for the ANN, we develop an interference-resilient algorithm that optimizes the beamforming-and-forwarding weights of the relay nodes so that interference is canceled at the destination. The proposed algorithm requires no channel state information (CSI) and no data exchange between the relay nodes; it requires only that the source transmit training sequences on the forward channel (source to relays) and that the destination transmit error sequences on the backward channel (destination to relays). Simulation results verify the effectiveness of the proposed scheme in an interference environment.
Citations: 2
On-the-fly Uplink Training and Pilot Code Design for Massive MIMO Cellular Networks
2020 Information Theory and Applications Workshop (ITA) · Pub Date: 2020-02-02 · DOI: 10.1109/ITA50056.2020.9244985
Chenwei Wang, Zekun Zhang, H. Papadopoulos
Abstract: We investigate non-orthogonal uplink pilot designs for improving the area spectral efficiency in the downlink of TDD reciprocity-based massive MIMO cellular networks. In particular, we develop a class of pilot designs that are locally orthogonal within each cell while maintaining low inner products between codes in different cells. Using channel estimates obtained from observations of these codes, each cell independently serves its locally active users with MU-MIMO transmission that is also designed to mitigate interference to a subset of "strongly interfered" out-of-cell users. As our analysis shows, cellular operation based on the proposed codes yields improvements over conventional operation in user-rate CDFs, cell throughput, and cell-edge throughput.
Citations: 1
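The "locally orthogonal within each cell, low inner products across cells" property can be illustrated with a generic construction: give each cell the columns of a unitary DFT matrix (exact within-cell orthogonality) scrambled by a cell-specific random phase sequence (typically small cross-cell correlation). This is an assumed, illustrative construction, not the paper's specific pilot design.

```python
import numpy as np

def local_orthogonal_pilots(n_cells, n_users, length, rng=None):
    """Per-cell pilot codebooks: orthonormal within a cell, with randomized
    phases decorrelating codebooks of different cells."""
    rng = np.random.default_rng(rng)
    F = np.fft.fft(np.eye(length)) / np.sqrt(length)  # unitary DFT matrix
    books = []
    for _ in range(n_cells):
        # unit-modulus scrambling preserves within-cell orthogonality
        phases = np.exp(2j * np.pi * rng.random(length))
        books.append((phases[:, None] * F)[:, :n_users])
    return books
```

Multiplying a unitary matrix by a diagonal unit-modulus matrix keeps its columns orthonormal, which is why each cell's codebook stays exactly orthogonal while cross-cell inner products behave like random correlations.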
Limits of Detecting Text Generated by Large-Scale Language Models
2020 Information Theory and Applications Workshop (ITA) · Pub Date: 2020-02-02 · DOI: 10.1109/ITA50056.2020.9245012
L. Varshney, N. Keskar, R. Socher
Abstract: Some consider large-scale language models that can generate long and coherent pieces of text dangerous, since they may be used in misinformation campaigns. Here we formulate large-scale language model output detection as a hypothesis testing problem: classify text as genuine or generated. We show that error exponents for particular language models are bounded in terms of their perplexity, a standard measure of language-generation performance. Under the assumption that human language is stationary and ergodic, the formulation is extended from specific language models to maximum-likelihood language models among the class of k-order Markov approximations, and error probabilities are characterized. Some discussion of incorporating semantic side information is also given.
Citations: 13
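The quantities the abstract relies on — perplexity and k-order Markov approximations — can be made concrete with a toy character-level model. The sketch below trains an add-alpha-smoothed k-order Markov model and scores text by perplexity, the statistic that the hypothesis test thresholds; the function names, the 27-symbol vocabulary, and the smoothing choice are assumptions of this illustration, not the paper's setup.

```python
import math
from collections import Counter

def train(corpus, k=2):
    """Count (k-character context, next character) transitions."""
    mc, cc = Counter(), Counter()
    for i in range(k, len(corpus)):
        mc[(corpus[i - k:i], corpus[i])] += 1
        cc[corpus[i - k:i]] += 1
    return mc, cc

def char_perplexity(text, mc, cc, k=2, vocab_size=27, alpha=1.0):
    """Perplexity of `text` under the smoothed k-order Markov model.
    Text close to the model scores low; off-model text scores high."""
    log_prob, n = 0.0, 0
    for i in range(k, len(text)):
        ctx, ch = text[i - k:i], text[i]
        p = (mc[(ctx, ch)] + alpha) / (cc[ctx] + alpha * vocab_size)
        log_prob += math.log(p)
        n += 1
    return math.exp(-log_prob / max(n, 1))
```

A detector would compare such a score against a threshold; the paper's point is that the achievable error exponents of any such test are bounded by the generator's perplexity.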
Non-Negative Matrix Factorization via Low-Rank Stochastic Manifold Optimization
2020 Information Theory and Applications Workshop (ITA) · Pub Date: 2020-02-02 · DOI: 10.1109/ITA50056.2020.9244937
Ahmed Douik, B. Hassibi
Abstract: Several real-world applications, notably in non-negative matrix factorization, graph-based clustering, and machine learning, require solving a convex optimization problem over the set of stochastic and doubly stochastic matrices. A common feature of these problems is that the optimal solution is generally a low-rank matrix. This paper suggests reformulating the problem by taking advantage of the low-rank factorization X = UV^T and develops a Riemannian optimization framework for solving optimization problems on the set of low-rank stochastic and doubly stochastic matrices. In particular, the paper introduces and studies the geometry of the low-rank stochastic multinomial manifold and the doubly stochastic manifold in order to derive first-order optimization algorithms. Being carefully designed and of lower dimension than the original problem, the proposed Riemannian optimization framework presents a clear complexity advantage, as attested by numerical experiments on real-world and synthetic data for non-negative matrix factorization (NMF) applications. The proposed algorithm is shown to outperform state-of-the-art NMF methods in terms of running time.
Citations: 0
Efficient Matrix Multiplication: The Sparse Power-of-2 Factorization
2020 Information Theory and Applications Workshop (ITA) · Pub Date: 2020-02-02 · DOI: 10.1109/ITA50056.2020.9244952
R. Müller, Bernhard Gäde, Ali Bereyhi
Abstract: We present an algorithm to reduce the computational effort of multiplying a given matrix with an unknown column vector. The algorithm decomposes the given matrix into a product of matrices whose entries are either zero or integer powers of two, utilizing the principles of sparse recovery. While classical low-resolution quantization achieves an accuracy of 6 dB per bit, our method can achieve many times more than that for large matrices. Numerical and analytical evidence suggests that the improvement actually grows unboundedly with matrix size. Due to sparsity, the algorithm even allows for quantization levels below 1 bit per matrix entry while achieving highly accurate approximations for large matrices. Applications include, but are not limited to, neural networks, as well as fully digital beamforming for massive MIMO and millimeter-wave applications.
Citations: 6
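The building block of such a decomposition — a sparse matrix whose nonzero entries are signed integer powers of two — can be illustrated with a single-factor toy: round each kept entry to the nearest power of two and zero the smallest-magnitude entries. This is only the per-factor quantization step under assumed names; the paper's gains come from factoring the matrix into a product of several such matrices, which this sketch does not do.

```python
import numpy as np

def power_of_2_quantize(A, keep_frac=0.5):
    """Approximate each kept entry of A by sign(a) * 2^round(log2|a|),
    zeroing the (1 - keep_frac) fraction of smallest-magnitude entries.
    Multiplying by the result needs only shifts, sign flips, and adds."""
    Q = np.zeros_like(A, dtype=float)
    thresh = np.quantile(np.abs(A), 1.0 - keep_frac) if keep_frac < 1.0 else 0.0
    nz = (np.abs(A) >= thresh) & (A != 0)
    exps = np.round(np.log2(np.abs(A[nz])))   # nearest integer exponent
    Q[nz] = np.sign(A[nz]) * np.exp2(exps)
    return Q
```

Rounding the exponent means every kept entry is within a multiplicative factor of 2^(1/2) of the original, i.e. a bounded relative error per entry even before any product factors are added.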
On Marton's Achievable Region: Local Tensorization for Product Channels with a Binary Component
2020 Information Theory and Applications Workshop (ITA) · Pub Date: 2020-02-02 · DOI: 10.1109/ITA50056.2020.9244997
Chandra Nair
Abstract: We show that Marton's achievable rate region for product broadcast channels with one binary component satisfies a property called local tensorization. If a corresponding global tensorization property held in the same setting, this would be equivalent to showing the optimality of Marton's achievable region for any two-receiver broadcast channel with binary inputs.
Citations: 2
Identifying unpredictable test examples with worst-case guarantees
2020 Information Theory and Applications Workshop (ITA) · Pub Date: 2020-02-02 · DOI: 10.1109/ITA50056.2020.9244996
S. Goldwasser, A. Kalai, Y. Kalai, Omar Montasser
Abstract: Often, whether for adversarial or natural reasons, the distributions of test and training data differ. We give an algorithm that, given sets of training and test examples, identifies regions of test examples that cannot be predicted with low error; these regions are flagged and, equivalently, omitted from classification. Assuming only that labels are consistent with a family of classifiers of low VC dimension, the algorithm is shown to make few misclassification errors and few errors of omission in both adversarial and covariate-shift settings. Previous models of learning with different training and test distributions required assumptions connecting the two.
Citations: 0