Asian Conference on Machine Learning: Latest Publications

Learning Practical Communication Strategies in Cooperative Multi-Agent Reinforcement Learning
Asian Conference on Machine Learning Pub Date: 2022-09-02 DOI: 10.48550/arXiv.2209.01288
Diyi Hu, Chi Zhang, V. Prasanna, Bhaskar Krishnamachari
Abstract: In multi-agent reinforcement learning, communication is critical to encourage cooperation among agents. Communication in realistic wireless networks can be highly unreliable due to network conditions that vary with agents' mobility and due to stochasticity in the transmission process. We propose a framework to learn practical communication strategies by addressing three fundamental questions: (1) When: agents learn the timing of communication based not only on message importance but also on wireless channel conditions. (2) What: agents augment message contents with wireless network measurements to better select the game and communication actions. (3) How: agents use a novel neural message encoder that preserves all information from received messages, regardless of the number and order of messages. Simulating standard benchmarks under realistic wireless network settings, we show significant improvements in game performance, convergence speed, and communication efficiency compared with the state of the art.
Citations: 0
Sliced Wasserstein Variational Inference
Asian Conference on Machine Learning Pub Date: 2022-07-26 DOI: 10.48550/arXiv.2207.13177
Mingxuan Yi, Song Liu
Abstract: Variational inference approximates an unnormalized distribution via the minimization of Kullback-Leibler (KL) divergence. Although this divergence is efficient to compute and has been widely used in applications, it has some undesirable properties. For example, it is not a proper metric: it is non-symmetric and does not satisfy the triangle inequality. On the other hand, optimal transport distances have recently shown some advantages over KL divergence. Building on these advantages, we propose a new variational inference method that minimizes the sliced Wasserstein distance, a valid metric arising from optimal transport. This sliced Wasserstein distance can be approximated simply by running MCMC, without solving any optimization problem. Our approximation also does not require a tractable density function for the variational distributions, so approximating families can be amortized by generators such as neural networks. Furthermore, we provide an analysis of the theoretical properties of our method. Experiments on synthetic and real data illustrate the performance of the proposed method.
Citations: 10
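The core quantity in this paper, the sliced Wasserstein distance, can be estimated from samples alone by averaging one-dimensional Wasserstein distances over random projections. Below is a minimal NumPy sketch of that Monte Carlo estimator; function and parameter names are my own, and the paper's MCMC-driven variational scheme is not reproduced here:

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, seed=0):
    """Monte Carlo estimate of the sliced Wasserstein-2 distance.

    x, y: (n, d) sample arrays of equal size from the two distributions.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Random unit directions on the (d-1)-sphere.
    dirs = rng.normal(size=(n_projections, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    total = 0.0
    for u in dirs:
        # In 1-D, the Wasserstein-2 distance between equal-size empirical
        # measures is the root mean squared gap between sorted projections.
        px, py = np.sort(x @ u), np.sort(y @ u)
        total += np.mean((px - py) ** 2)
    return np.sqrt(total / n_projections)
```

Because each slice only needs a sort, the estimator sidesteps the optimization that a full d-dimensional Wasserstein distance would require, which is the tractability the paper exploits.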
Domain Alignment Meets Fully Test-Time Adaptation
Asian Conference on Machine Learning Pub Date: 2022-07-09 DOI: 10.48550/arXiv.2207.04185
Kowshik Thopalli, P. Turaga, Jayaraman J. Thiagarajan
Abstract: A foundational requirement of a deployed ML model is to generalize to data drawn from a testing distribution that differs from training. A popular solution to this problem is to adapt a pre-trained model to novel domains using only unlabeled data. In this paper, we focus on a challenging variant of this problem where access to the original source data is restricted. While fully test-time adaptation (FTTA) and unsupervised domain adaptation (UDA) are closely related, advances in UDA are not readily applicable to FTTA, since most UDA methods require access to the source data. Hence, we propose a new approach, CATTAn, that bridges UDA and FTTA by relaxing the need to access the entire source data through a novel deep subspace alignment strategy. With the minimal overhead of storing a subspace basis set for the source data, CATTAn enables unsupervised alignment between source and target data during adaptation. Through extensive experimental evaluation on multiple 2D and 3D vision benchmarks (ImageNet-C, Office-31, OfficeHome, DomainNet, PointDA-10) and model architectures, we demonstrate significant gains in FTTA performance. Furthermore, we make a number of crucial findings on the utility of the alignment objective even with inherently robust models, pre-trained ViT representations, and under low sample availability in the target domain.
Citations: 2
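CATTAn's deep subspace alignment is built into a network, but the underlying idea, storing a compact basis for the source features and linearly aligning target features to it, can be illustrated with classic SVD-based subspace alignment. This is a hedged stand-in, not the paper's method; the function names and the choice of k are illustrative:

```python
import numpy as np

def subspace_basis(feats, k=10):
    # Top-k right singular vectors of the centred features: the lightweight
    # "basis set" kept around instead of the raw source data.
    _, _, vt = np.linalg.svd(feats - feats.mean(axis=0), full_matrices=False)
    return vt[:k]                      # (k, d), orthonormal rows

def align_to_source(target_feats, src_basis, tgt_basis):
    # Classic subspace alignment: express target features in the target
    # basis, rotate into the source basis, then map back to feature space.
    m = tgt_basis @ src_basis.T        # (k, k) alignment matrix
    return (target_feats @ tgt_basis.T) @ m @ src_basis
```

The storage cost is only the (k, d) basis, which is why this style of alignment is compatible with the source-free constraint the paper targets.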
AS-IntroVAE: Adversarial Similarity Distance Makes Robust IntroVAE
Asian Conference on Machine Learning Pub Date: 2022-06-28 DOI: 10.48550/arXiv.2206.13903
Chang-Tien Lu, Shen Zheng, Zirui Wang, O. Dib, Gaurav Gupta
Abstract: Recently, introspective models like IntroVAE and S-IntroVAE have excelled in image generation and reconstruction tasks. The principal characteristic of introspective models is the adversarial learning of the VAE, where the encoder attempts to distinguish between real and fake (i.e., synthesized) images. However, due to the lack of an effective metric for the difference between real and fake images, posterior collapse and the vanishing gradient problem persist, reducing the fidelity of the synthesized images. In this paper, we propose a new variation of IntroVAE called Adversarial Similarity Distance Introspective Variational Autoencoder (AS-IntroVAE). We theoretically analyze the vanishing gradient problem and construct a new Adversarial Similarity Distance (AS-Distance) using the 2-Wasserstein distance and the kernel trick. With weight annealing on AS-Distance and KL divergence, AS-IntroVAE is able to generate stable and high-quality images. The posterior collapse problem is addressed by making per-batch attempts to transform the image so that it better fits the prior distribution in the latent space. Compared with the per-image approach, this strategy fosters more diverse distributions in the latent space, allowing our model to produce images of great diversity. Comprehensive experiments on benchmark datasets demonstrate the effectiveness of AS-IntroVAE on image generation and reconstruction tasks.
Citations: 3
FLVoogd: Robust And Privacy Preserving Federated Learning
Asian Conference on Machine Learning Pub Date: 2022-06-24 DOI: 10.48550/arXiv.2207.00428
Yuhang Tian, Rui Wang, Yan Qiao, E. Panaousis, K. Liang
Abstract: In this work, we propose FLVoogd, an updated federated learning method in which servers and clients collaboratively eliminate Byzantine attacks while preserving privacy. In particular, servers use automatic Density-based Spatial Clustering of Applications with Noise (DBSCAN) combined with S2PC to cluster the benign majority without acquiring sensitive personal information. Meanwhile, clients build dual models and perform test-based distance controlling to adjust their local models toward the global one to achieve personalization. Our framework is automatic and adaptive: servers and clients do not need to tune parameters during training. In addition, our framework leverages Secure Multi-party Computation (SMPC) operations, including multiplication, addition, and comparison; costly operations, like division and square root, are not required. Evaluations are carried out on conventional image-classification datasets. The results show that FLVoogd can effectively reject malicious uploads in most scenarios while avoiding data leakage from the server side.
Citations: 2
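The server-side filtering step, clustering the client updates and keeping the dense benign majority, can be sketched without the paper's secure-computation machinery. Below is a tiny DBSCAN-flavoured filter in plain NumPy; eps, min_pts, and the one-hop reachability shortcut are illustrative simplifications, whereas FLVoogd runs full DBSCAN under S2PC:

```python
import numpy as np

def keep_dense_majority(updates, eps=1.0, min_pts=3):
    """Return indices of client updates that sit in a dense cluster.

    updates: (n_clients, d) array of flattened model updates.
    """
    dist = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=-1)
    # Core points have at least min_pts neighbours within eps (self included).
    core = (dist < eps).sum(axis=1) >= min_pts
    if not core.any():
        return np.array([], dtype=int)
    # Keep anything within eps of a core point; isolated Byzantine updates
    # fall outside every dense neighbourhood and are dropped.
    benign = (dist[core] < eps).any(axis=0)
    return np.where(benign)[0]
```

Note that the operations involved are distances, comparisons, and sums, consistent with the paper's point that the pipeline avoids costly SMPC primitives like division and square root (the norm here could be replaced by squared distances against eps squared).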
An online semi-definite programming with a generalised log-determinant regularizer and its applications
Asian Conference on Machine Learning Pub Date: 2022-03-25 DOI: 10.3390/math10071055
Yaxiong Liu, Ken-ichiro Moridomi, Kohei Hatano, Eiji Takimoto
Abstract: We consider a variant of the online semi-definite programming problem (OSDP). Specifically, in our problem the decision space is a set of positive semi-definite matrices constrained by two norms in parallel: the L∞ norm on the diagonal entries, and the Γ-trace norm, a generalized trace norm with a positive definite matrix Γ. Our setting recovers the original one when Γ is an identity matrix. To solve this problem, we design a follow-the-regularized-leader algorithm with a Γ-dependent regularizer, which also generalizes the log-determinant function. We then focus on online binary matrix completion (OBMC) with side information and online similarity prediction with side information. By reducing to the OSDP framework and applying our proposed algorithm, we remove the logarithmic factors in the previous mistake bounds for these two problems. In particular, for OBMC our bound is optimal. Furthermore, our result implies a better offline generalization bound for the algorithm, similar to those of SVMs with the best kernel, if the side information is available in advance.
Citations: 1
Detecting Accounting Frauds in Publicly Traded U.S. Firms: A Machine Learning Approach
Asian Conference on Machine Learning Pub Date: 2020-03-01 DOI: 10.2139/SSRN.2670703
Bin Li, Julia Yu, Jie Zhang, B. Ke
Abstract: This paper studies how machine learning techniques can facilitate the detection of accounting fraud in publicly traded U.S. firms. Existing studies often mimic human experts and employ the financial or nonfinancial
Citations: 17
Active Change-Point Detection
Asian Conference on Machine Learning Pub Date: 2019-10-15 DOI: 10.1527/tjsai.35-5_e-ja10
S. Hayashi, Yoshinobu Kawahara, H. Kashima
Abstract: We introduce Active Change-Point Detection (ACPD), a novel active learning problem for efficient change-point detection in situations where the cost of data acquisition is expensive. At each round of ACPD, the task is to adaptively determine the next input in order to detect the change point in a black-box, expensive-to-evaluate function with as few evaluations as possible. We propose a novel framework that generalizes to different types of data and change points by utilizing an existing change-point detection method to compute change scores and a Bayesian optimization method to determine the next input. We demonstrate the efficiency of our proposed framework in different settings of datasets and change points, using synthetic data and real-world data, such as material science data and seafloor depth data.
Citations: 4
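The ACPD loop alternates between scoring candidate change points from the observations gathered so far and choosing the next input to evaluate. The toy sketch below uses a mean-difference change score and a largest-gap acquisition rule as a crude stand-in for the paper's pluggable change-score and Bayesian-optimization components; all names are illustrative:

```python
import numpy as np

def change_score(xs, ys, t):
    # Score candidate change point t: mean shift between the two sides.
    left, right = ys[xs < t], ys[xs >= t]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    return abs(left.mean() - right.mean())

def next_query(xs, lo, hi, n_grid=200):
    # Acquisition stand-in: query the input farthest from anything observed
    # so far (the paper would use Bayesian optimization on the scores here).
    grid = np.linspace(lo, hi, n_grid)
    gap = np.abs(grid[:, None] - xs[None, :]).min(axis=1)
    return grid[np.argmax(gap)]

def active_cpd(f, lo, hi, n_rounds=15):
    # f is the black-box, expensive-to-evaluate function.
    xs, ys = np.array([lo, hi]), np.array([f(lo), f(hi)])
    for _ in range(n_rounds):
        x = next_query(xs, lo, hi)
        xs, ys = np.append(xs, x), np.append(ys, f(x))
    grid = np.linspace(lo, hi, 200)
    return grid[int(np.argmax([change_score(xs, ys, t) for t in grid]))]
```

The framework's modularity is visible even in this sketch: swapping `change_score` or `next_query` changes the detector without touching the outer loop, which is how the paper adapts to different data and change-point types.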
Inverse Visual Question Answering with Multi-Level Attentions
Asian Conference on Machine Learning Pub Date: 2019-09-17 DOI: 10.22215/etd/2019-13929
Yaser Alwatter, Yuhong Guo
Abstract: In this paper, we propose a novel deep multi-level attention model to address inverse visual question answering. The proposed model generates regional visual and semantic features at the object level and then enhances them with the answer cue by using attention mechanisms. Two levels of multiple attentions are employed in the model: dual attention at the partial question encoding step, and dynamic attention at the next question word generation step. We evaluate the proposed model on the VQA V1 dataset. It demonstrates state-of-the-art performance in terms of multiple commonly used metrics.
Citations: 1
Query Selection via Weighted Entropy in Graph-Based Semi-supervised Classification
Asian Conference on Machine Learning Pub Date: 2009-11-03 DOI: 10.1007/978-3-642-05224-8_22
Krikamol Muandet, S. Marukatat, C. Nattee
Citations: 4