Asian Conference on Machine Learning: Latest Publications

RoLNiP: Robust Learning Using Noisy Pairwise Comparisons
Asian Conference on Machine Learning Pub Date: 2023-03-04 DOI: 10.48550/arXiv.2303.02341
Samartha S Maheshwara, Naresh Manwani
Abstract: This paper presents a robust approach for learning from noisy pairwise comparisons. We propose sufficient conditions on the loss function under which the risk minimization framework becomes robust to noise in pairwise similar/dissimilar data. Our approach does not require knowledge of the noise rate in the uniform noise case. In the case of conditional noise, the proposed method depends on the noise rates, and for such cases we offer a provably correct approach for estimating them. Thus, we propose an end-to-end approach to learning robust classifiers in this setting. We experimentally show that the proposed approach, RoLNiP, outperforms robust state-of-the-art methods for learning with noisy pairwise comparisons.
Citations: 0
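Editor's note: to make the noise-robust risk-minimization idea concrete, here is a minimal sketch of the classical unbiased loss correction for pairwise similar/dissimilar labels flipped with a known symmetric rate. It illustrates the general technique, not RoLNiP's specific loss conditions or noise-rate estimator; the logistic loss and the toy data are assumptions for the example.

```python
import numpy as np

def corrected_pairwise_loss(margin, y, rho):
    """Unbiased correction for pairwise labels y in {+1, -1} (similar/dissimilar)
    flipped with symmetric noise rate rho < 0.5:
        ell_tilde(m, y) = ((1 - rho) * ell(m, y) - rho * ell(m, -y)) / (1 - 2 * rho)
    shown here with the logistic loss ell(m, y) = log(1 + exp(-y * m))."""
    logistic = lambda m: np.log1p(np.exp(-m))
    return ((1 - rho) * logistic(y * margin) - rho * logistic(-y * margin)) / (1 - 2 * rho)

# Toy usage: predicted similarity margins for pairs, with noisy pairwise labels.
rng = np.random.default_rng(0)
margins = rng.normal(size=8)            # f(x_i, x_j): predicted similarity margins
clean = np.sign(rng.normal(size=8))     # true pairwise labels in {+1, -1}
rho = 0.2                               # assumed uniform flip rate
noisy = np.where(rng.random(8) < rho, -clean, clean)
print(corrected_pairwise_loss(margins, noisy, rho).mean())
```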
AIIR-MIX: Multi-Agent Reinforcement Learning Meets Attention Individual Intrinsic Reward Mixing Network
Asian Conference on Machine Learning Pub Date: 2023-02-19 DOI: 10.48550/arXiv.2302.09531
Wei Li, Weiyan Liu, Shitong Shao, Shiyi Huang
Abstract: Deducing the contribution of each agent and assigning the corresponding reward to it is a crucial problem in cooperative Multi-Agent Reinforcement Learning (MARL). Previous studies try to resolve the issue by designing an intrinsic reward function, but the intrinsic reward is simply added to the environment reward, which leaves the performance of these MARL frameworks unsatisfactory. We propose a novel method named Attention Individual Intrinsic Reward Mixing Network (AIIR-MIX), whose contributions are as follows: (a) we construct a novel intrinsic reward network based on the attention mechanism to make teamwork more effective; (b) we propose a mixing network that combines intrinsic and extrinsic rewards non-linearly and dynamically in response to changing conditions of the environment. We compare AIIR-MIX with many state-of-the-art (SOTA) MARL methods on battle games in StarCraft II, and the results demonstrate that AIIR-MIX performs admirably, beating the current advanced methods on average test win rate. To validate the effectiveness of AIIR-MIX, we conduct additional ablation studies, which show that AIIR-MIX can dynamically assign each agent a real-time intrinsic reward in accordance with its actual contribution.
Citations: 0
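Editor's note: a minimal PyTorch sketch of the two ingredients the abstract names, an attention module producing per-agent intrinsic rewards and a small network mixing intrinsic and extrinsic rewards non-linearly. Module names, layer sizes, and the state conditioning are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class IntrinsicRewardAttention(nn.Module):
    """Per-agent intrinsic rewards via attention over agent features (illustrative)."""
    def __init__(self, feat_dim, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, agent_feats):                  # (batch, n_agents, feat_dim)
        ctx, _ = self.attn(agent_feats, agent_feats, agent_feats)
        return self.head(ctx).squeeze(-1)            # (batch, n_agents) intrinsic rewards

class RewardMixer(nn.Module):
    """Non-linear, state-conditioned combination of intrinsic and extrinsic rewards."""
    def __init__(self, state_dim, n_agents, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_agents + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_agents))

    def forward(self, state, r_intrinsic, r_extrinsic):
        x = torch.cat([state, r_intrinsic, r_extrinsic], dim=-1)
        return self.net(x)                           # shaped per-agent rewards

# Toy forward pass.
B, N, F, S = 2, 3, 16, 8
feats, state = torch.randn(B, N, F), torch.randn(B, S)
r_ext = torch.randn(B, 1)                            # shared team reward from the environment
r_int = IntrinsicRewardAttention(F)(feats)
print(RewardMixer(S, N)(state, r_int, r_ext).shape)  # torch.Size([2, 3])
```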
On the Interpretability of Attention Networks
Asian Conference on Machine Learning Pub Date: 2022-12-30 DOI: 10.48550/arXiv.2212.14776
L. N. Pandey, Rahul Vashisht, H. G. Ramaswamy
Abstract: Attention mechanisms form a core component of several successful deep learning architectures and are based on one key idea: "The output depends only on a small (but unknown) segment of the input." In several practical applications like image captioning and language translation, this is mostly true. In trained models with an attention mechanism, the outputs of an intermediate module that encodes the segment of input responsible for the output are often used as a way to peek into the "reasoning" of the network. We make such a notion more precise for a variant of the classification problem that we term selective dependence classification (SDC) when used with attention model architectures. Under such a setting, we demonstrate various error modes where an attention model can be accurate but fail to be interpretable, and show that such models do occur as a result of training. We illustrate various situations that can accentuate and mitigate this behaviour. Finally, we use our objective definition of interpretability for SDC tasks to evaluate a few attention model learning algorithms designed to encourage sparsity and demonstrate that these algorithms help improve interpretability.
Citations: 2
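Editor's note: the interpretability notion for selective dependence classification is, roughly, whether the attention actually concentrates on the input segment that determines the label. The sketch below probes that with a simple "attention mass on the relevant segment" score over made-up tensors; shapes and data are assumptions for illustration, not the paper's formal definition.

```python
import numpy as np

def attention_mass_on_relevant(attn_weights, relevant_mask):
    """Fraction of attention mass on the segment that truly determines the label.

    attn_weights:  (n_examples, n_segments), rows sum to 1.
    relevant_mask: (n_examples, n_segments), 1 marks the ground-truth relevant segment.
    """
    return (attn_weights * relevant_mask).sum(axis=1)

rng = np.random.default_rng(0)
attn = rng.dirichlet(np.ones(5), size=100)          # toy attention over 5 input segments
mask = np.eye(5)[rng.integers(0, 5, size=100)]      # one relevant segment per example
mass = attention_mass_on_relevant(attn, mask)
# A model can be accurate while this score stays low, which is exactly the
# "accurate but not interpretable" failure mode the paper studies.
print("mean mass on relevant segment:", mass.mean())
```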
Evaluating the Perceived Safety of Urban City via Maximum Entropy Deep Inverse Reinforcement Learning
Asian Conference on Machine Learning Pub Date: 2022-11-19 DOI: 10.48550/arXiv.2211.10660
Yaxuan Wang, Zhixin Zeng, Qijun Zhao
Abstract: Inspired by expert evaluation policies for urban perception, we propose a novel inverse reinforcement learning (IRL) based framework for predicting urban safety and recovering the corresponding reward function. We also present a scalable state-representation method to model the prediction problem as a Markov decision process (MDP) and use reinforcement learning (RL) to solve it. Additionally, we built a crowdsourced dataset called SmallCity to conduct the research. To the best of our knowledge, this is the first time the IRL approach has been introduced to the urban safety perception and planning field to help experts quantitatively analyze perceptual features, and our results show that IRL has promising prospects in this area. We will later open-source the crowdsourcing data collection site and the model proposed in this paper.
Citations: 0
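Editor's note: a compact sketch of the maximum-entropy deep IRL reward update the title refers to. The gradient of the MaxEnt objective with respect to the reward parameters is the gap between the expert's state visitations and the current policy's, so a reward network can be trained by pushing expert states up and policy-visited states down. The network, feature size, and the assumption that policy visitation samples are already available (the inner RL/soft-value-iteration loop is omitted) are illustrative; the paper's MDP over street scenes is not reproduced.

```python
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    def __init__(self, feat_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

def maxent_irl_step(reward_net, opt, expert_states, policy_states):
    # Raise reward on expert states, lower it on states the current policy visits.
    loss = reward_net(policy_states).mean() - reward_net(expert_states).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

feat_dim = 10
net = RewardNet(feat_dim)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
expert = torch.randn(64, feat_dim)   # state features along expert (high perceived safety) trajectories
policy = torch.randn(64, feat_dim)   # state features visited by the current learned policy
print(maxent_irl_step(net, opt, expert, policy))
```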
One Gradient Frank-Wolfe for Decentralized Online Convex and Submodular Optimization
Asian Conference on Machine Learning Pub Date: 2022-10-30 DOI: 10.48550/arXiv.2210.16790
T. Nguyen, K. Nguyen, D. Trystram
Abstract: Decentralized learning has been studied intensively in recent years, motivated by its wide applications in the context of federated learning. The majority of previous research focuses on the offline setting, in which the objective function is static. However, the offline setting becomes unrealistic in numerous machine learning applications that witness the change of massive data. In this paper, we propose decentralized online algorithms for convex and continuous DR-submodular optimization, two classes of functions that are present in a variety of machine learning problems. Our algorithms achieve performance guarantees comparable to those in the centralized offline setting. Moreover, on average, each participant performs only a single gradient computation per time step. Subsequently, we extend our algorithms to the bandit setting. Finally, we illustrate the competitive performance of our algorithms in real-world experiments.
Citations: 0
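Editor's note: a minimal sketch of one decentralized Frank-Wolfe round over the probability simplex, to illustrate the "single gradient per participant per step" template: each node mixes neighbours' iterates with a doubly-stochastic gossip matrix, evaluates one local gradient, and moves toward the vertex returned by the linear minimization oracle. The quadratic losses, gossip matrix, and step size are assumptions; this is the generic template, not the paper's exact algorithm or its regret analysis.

```python
import numpy as np

def lmo_simplex(grad):
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0             # best simplex vertex for a linear objective
    return v

def decentralized_fw_round(X, W, grads, step):
    """X: (n_nodes, dim) local iterates; W: (n_nodes, n_nodes) doubly-stochastic gossip weights."""
    X_mixed = W @ X                       # one round of neighbour averaging
    X_next = np.empty_like(X)
    for i in range(X.shape[0]):
        g = grads[i](X_mixed[i])          # the single gradient evaluation of node i
        v = lmo_simplex(g)
        X_next[i] = (1 - step) * X_mixed[i] + step * v
    return X_next

n, d = 4, 5
rng = np.random.default_rng(0)
targets = rng.dirichlet(np.ones(d), size=n)
grads = [lambda x, t=t: x - t for t in targets]   # gradient of the local loss 0.5*||x - t||^2
W = np.full((n, n), 1.0 / n)                      # complete-graph gossip weights
X = np.full((n, d), 1.0 / d)                      # start at the simplex centre
for t in range(1, 51):
    X = decentralized_fw_round(X, W, grads, step=2.0 / (t + 2))
print(np.round(X.mean(axis=0), 3))                # iterates drift toward a consensus point with low average loss
```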
Cross-Scale Context Extracted Hashing for Fine-Grained Image Binary Encoding
Asian Conference on Machine Learning Pub Date: 2022-10-14 DOI: 10.48550/arXiv.2210.07572
Xuetong Xue, Jiaying Shi, Xinxue He, Sheng Xu, Zhaoming Pan
Abstract: Deep hashing has been widely applied to large-scale image retrieval tasks, owing to the efficient computation and low storage cost of encoding high-dimensional image data into binary codes. Since binary codes do not contain as much information as float features, the essence of binary encoding is preserving the main context to guarantee retrieval quality. However, existing hashing methods have great limitations in suppressing redundant background information and in accurately encoding from Euclidean space to Hamming space with a simple sign function. To solve these problems, a Cross-Scale Context Extracted Hashing Network (CSCE-Net) is proposed in this paper. First, we design a two-branch framework to capture fine-grained local information while maintaining high-level global semantic information. In addition, an Attention-guided Information Extraction (AIE) module is introduced between the two branches, which suppresses areas of low context information in cooperation with global sliding windows. Unlike previous methods, our CSCE-Net learns a content-related Dynamic Sign Function (DSF) to replace the original simple sign function. The proposed CSCE-Net is therefore context-sensitive and able to perform accurate image binary encoding. We further demonstrate that CSCE-Net is superior to existing hashing methods, improving retrieval performance on standard benchmarks.
Citations: 1
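Editor's note: one way to picture the abstract's "content-related Dynamic Sign Function" is a soft sign whose steepness is predicted from the feature itself, so quantization adapts to the input while staying differentiable during training. The module below is an illustrative reading of that idea, not the exact DSF in CSCE-Net; the scale network and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class DynamicSign(nn.Module):
    """Content-conditioned soft sign for hashing (illustrative stand-in for a learned DSF).

    A plain sign() kills gradients and ignores content; here a small network predicts a
    positive per-dimension steepness from the feature, tanh(alpha * x) is used during
    training, and a hard sign() is applied only at retrieval time."""
    def __init__(self, dim):
        super().__init__()
        self.scale = nn.Sequential(nn.Linear(dim, dim), nn.Softplus())

    def forward(self, x):
        alpha = self.scale(x) + 1e-3          # content-dependent, strictly positive steepness
        return torch.tanh(alpha * x)

    @torch.no_grad()
    def binarize(self, x):
        return torch.sign(self.forward(x))    # binary codes at inference

feats = torch.randn(4, 64)                    # continuous image features
dsf = DynamicSign(64)
soft_codes = dsf(feats)                       # differentiable codes for the training losses
codes = dsf.binarize(feats)                   # binary codes for retrieval
print(soft_codes.shape, codes.unique())
```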
Semantic Cross Attention for Few-shot Learning
Asian Conference on Machine Learning Pub Date: 2022-10-12 DOI: 10.48550/arXiv.2210.06311
Bin Xiao, Chien Liu, W. Hsaio
Abstract: Few-shot learning (FSL) has attracted considerable attention recently. Among existing approaches, metric-based methods aim to train an embedding network that pulls similar samples close while pushing dissimilar samples as far apart as possible, and they achieve promising results. FSL is characterized by using only a few images to train a model that can generalize to novel classes in image classification, but this setting makes it difficult to learn visual features that can identify variations in the images' appearance. Model training is likely to move in the wrong direction, as images in an identical semantic class may have dissimilar appearances, whereas images in different semantic classes may share a similar appearance. We argue that FSL can benefit from additional semantic features to learn discriminative feature representations. Thus, this study proposes a multi-task learning approach that views the semantic features of label text as an auxiliary task to help boost the performance of the FSL task. Our proposed model uses word-embedding representations as semantic features to help train the embedding network, and a semantic cross-attention module to bridge the semantic features into the typical visual modality. The proposed approach is simple, but produces excellent results. We apply it to two previous metric-based FSL methods, both of which substantially improve in performance. The source code for our model is accessible from GitHub.
Citations: 1
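Editor's note: a minimal sketch of a cross-attention bridge of the kind the abstract describes: visual feature tokens attend to label-text word embeddings, and the attended semantic context is fused back into the visual features with a residual connection. Dimensions, the projection, and the fusion rule are assumptions for illustration, not the paper's exact module.

```python
import torch
import torch.nn as nn

class SemanticCrossAttention(nn.Module):
    """Bridge word-embedding semantics into visual features (illustrative sketch)."""
    def __init__(self, vis_dim, txt_dim, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(txt_dim, vis_dim)
        self.attn = nn.MultiheadAttention(vis_dim, n_heads, batch_first=True)

    def forward(self, vis_tokens, word_emb):
        # vis_tokens: (batch, n_patches, vis_dim); word_emb: (batch, n_words, txt_dim)
        sem = self.proj(word_emb)
        ctx, _ = self.attn(query=vis_tokens, key=sem, value=sem)
        return vis_tokens + ctx                    # residual fusion of semantic context

vis = torch.randn(5, 49, 128)                      # e.g. a 7x7 feature map flattened to tokens
txt = torch.randn(5, 3, 300)                       # e.g. word embeddings of the class-label text
out = SemanticCrossAttention(128, 300)(vis, txt)
print(out.shape)                                   # torch.Size([5, 49, 128])
```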
Pose Guided Human Image Synthesis with Partially Decoupled GAN
Asian Conference on Machine Learning Pub Date: 2022-10-07 DOI: 10.48550/arXiv.2210.03627
Jianguo Wu, Jianzong Wang, Shijing Si, Xiaoyang Qu, Jing Xiao
Abstract: Pose Guided Human Image Synthesis (PGHIS) is the challenging task of transforming a human image from a reference pose to a target pose while preserving its style. Most existing methods encode the texture of the whole reference human image into a latent space and then use a decoder to synthesize the image texture in the target pose. However, it is difficult to recover the detailed texture of the whole human image this way. To alleviate this problem, we propose a method that decouples the human body into several parts (e.g., hair, face, hands, feet) and then uses each of these parts to guide the synthesis of a realistic image of the person, preserving the detailed information of the generated images. In addition, we design a multi-head attention-based module for PGHIS. Because most convolutional neural network-based methods have difficulty modeling long-range dependency due to the convolutional operation, the long-range modeling capability of the attention mechanism is better suited than convolutional neural networks to the pose transfer task, especially for sharp pose deformation. Extensive experiments on the Market-1501 and DeepFashion datasets reveal that our method outperforms other existing state-of-the-art methods on almost all qualitative and quantitative metrics.
Citations: 2
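Editor's note: a small sketch of the part-decoupling idea in isolation: each body-part crop gets its own encoder, and the per-part embeddings are fused with multi-head attention before being handed to a pose-transfer decoder. The parts, sizes, encoders, and fusion are assumptions for illustration; the full generator, discriminator, and pose conditioning of the paper's GAN are not reproduced.

```python
import torch
import torch.nn as nn

class PartDecoupledEncoder(nn.Module):
    """Encode body parts separately, then fuse them with multi-head attention (illustrative)."""
    def __init__(self, n_parts=4, emb=128):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb))
            for _ in range(n_parts))
        self.fuse = nn.MultiheadAttention(emb, num_heads=4, batch_first=True)

    def forward(self, part_crops):                   # list of (batch, 3, H, W) part crops
        tokens = torch.stack([enc(x) for enc, x in zip(self.encoders, part_crops)], dim=1)
        fused, _ = self.fuse(tokens, tokens, tokens)
        return fused                                 # (batch, n_parts, emb) for a downstream decoder

crops = [torch.randn(2, 3, 32, 32) for _ in range(4)]   # toy crops of 4 body parts (hair, face, ...)
print(PartDecoupledEncoder()(crops).shape)               # torch.Size([2, 4, 128])
```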
Efficient Deep Clustering of Human Activities and How to Improve Evaluation
Asian Conference on Machine Learning Pub Date: 2022-09-17 DOI: 10.48550/arXiv.2209.08335
Louis Mahon, Thomas Lukasiewicz
Abstract: There has been much recent research on human activity recognition (HAR), due to the proliferation of wearable sensors in watches and phones and to advances in deep learning methods, which avoid the need to manually extract features from raw sensor signals. A significant disadvantage of deep learning applied to HAR is the need for manually labelled training data, which is especially difficult to obtain for HAR datasets. Progress is starting to be made in the unsupervised setting, in the form of deep HAR clustering models, which can assign labels to data without having been given any labels to train on, but there are problems with evaluating deep HAR clustering models, which makes assessing the field and devising new methods difficult. In this paper, we highlight several distinct problems with how deep HAR clustering models are evaluated, describing these problems in detail and conducting careful experiments to explicate the effect they can have on results. We then discuss solutions to these problems and suggest standard evaluation settings for future deep HAR clustering models. Additionally, we present a new deep clustering model for HAR. When tested under our proposed settings, our model performs better than (or on par with) existing models, while also being more efficient and better able to scale to more complex datasets by avoiding the need for an autoencoder.
Citations: 0
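Editor's note: one baseline fact behind any discussion of deep clustering evaluation is that cluster indices carry no inherent correspondence to class labels, so accuracy must be computed after an optimal matching. The snippet below shows the standard Hungarian-matching cluster accuracy; it illustrates the kind of metric under debate, not the paper's specific proposed evaluation protocol.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Accuracy after optimally matching cluster ids to class labels (Hungarian method)."""
    n = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n, n), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                           # contingency table: cluster p vs class t
    rows, cols = linear_sum_assignment(-cost)     # maximize matched counts
    return cost[rows, cols].sum() / len(y_true)

# Toy check: a relabelled-but-perfect clustering scores 1.0.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([2, 2, 0, 0, 1, 1])
print(clustering_accuracy(y_true, y_pred))
```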
On PAC Learning Halfspaces in Non-interactive Local Privacy Model with Public Unlabeled Data
Asian Conference on Machine Learning Pub Date: 2022-09-17 DOI: 10.48550/arXiv.2209.08319
Jinyan Su, Jinhui Xu, Di Wang
Abstract: In this paper, we study the problem of PAC learning halfspaces in the non-interactive local differential privacy (NLDP) model. To breach the barrier of exponential sample complexity, previous results studied a relaxed setting where the server has access to some additional public but unlabeled data. We continue in this direction. Specifically, we consider the problem under the standard setting instead of the large-margin setting studied before. Under different mild assumptions on the underlying data distribution, we propose two approaches, based on the Massart noise model and on self-supervised learning, and show that it is possible to achieve sample complexities that are only linear in the dimension and polynomial in other terms for both private and public data, which significantly improves on previous results. Our methods could also be used for other private PAC learning problems.
Citations: 2
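Editor's note: "non-interactive local" here means each user privatizes their own record once, before any learning happens, and the server never sends queries back. To make the setting concrete, below is the classical randomized-response mechanism on binary labels, which satisfies epsilon-local differential privacy; it only illustrates the privacy model, and the paper's halfspace learners built on Massart noise and self-supervision are not reproduced.

```python
import numpy as np

def randomized_response(labels, eps, rng):
    """epsilon-locally-DP release of binary labels in {-1, +1}: each user keeps their
    label with probability e^eps / (e^eps + 1) and flips it otherwise, independently
    and non-interactively, before sending anything to the server."""
    keep_prob = np.exp(eps) / (np.exp(eps) + 1.0)
    keep = rng.random(len(labels)) < keep_prob
    return np.where(keep, labels, -labels)

rng = np.random.default_rng(0)
labels = np.sign(rng.normal(size=1000)).astype(int)
noisy = randomized_response(labels, eps=1.0, rng=rng)
# The flip rate is known from eps, so the server can debias aggregate statistics,
# e.g. an unbiased estimate of the true mean label:
p = np.exp(1.0) / (np.exp(1.0) + 1.0)
print(noisy.mean() / (2 * p - 1), labels.mean())
```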