International Joint Conference on Artificial Intelligence: Latest Publications

Character As Pixels: A Controllable Prompt Adversarial Attacking Framework for Black-Box Text Guided Image Generation Models
International Joint Conference on Artificial Intelligence · Pub Date: 2023-08-01 · DOI: 10.24963/ijcai.2023/109
Ziyi Kou, Shichao Pei, Yijun Tian, Xiangliang Zhang
Abstract: In this paper, we study a controllable prompt adversarial attacking problem for text-guided image generation (Text2Image) models in the black-box scenario, where the goal is to attack specific visual subjects (e.g., changing a brown dog to white) in a generated image by slightly, if not imperceptibly, perturbing the characters of the driving prompt (e.g., "brown" to "br0wn"). Our study is motivated by the limitations of current Text2Image attacking approaches, which still rely on manual trials to create adversarial prompts. To address these limitations, we develop CharGrad, a character-level gradient-based attacking framework that replaces specific characters of a prompt with pixel-level similar ones by interactively learning the perturbation direction for the prompt and updating the attacking examiner for the generated image, based on a novel proxy perturbation representation for characters. We evaluate CharGrad using texts from two public image captioning datasets. Results demonstrate that CharGrad outperforms existing text adversarial attacking approaches at attacking various subjects of images generated by black-box Text2Image models, in a more effective and efficient way and with less perturbation of the prompts' characters.
Citations: 1
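The kind of character-level, pixel-similar substitution CharGrad searches for can be illustrated with a toy homoglyph map. This sketch is not the paper's framework (CharGrad learns perturbation directions from gradients via a proxy character representation); the homoglyph table and function names below are hypothetical and only show what a perturbed prompt looks like.

```python
# Hypothetical table of pixel-similar ("homoglyph") character substitutions.
HOMOGLYPHS = {"o": "0", "l": "1", "e": "3", "a": "@", "s": "5"}

def perturb_prompt(prompt: str, targets: set, budget: int = 1) -> str:
    """Replace up to `budget` characters inside target words with look-alikes."""
    words = prompt.split()
    edits = 0
    for i, word in enumerate(words):
        if word in targets and edits < budget:
            for j, ch in enumerate(word):
                if ch in HOMOGLYPHS:
                    # Swap the first substitutable character and move on.
                    words[i] = word[:j] + HOMOGLYPHS[ch] + word[j + 1:]
                    edits += 1
                    break
    return " ".join(words)

print(perturb_prompt("a brown dog on grass", {"brown"}))  # a br0wn dog on grass
```

CharGrad's contribution is choosing *which* substitutions actually change the targeted subject in the generated image; a fixed table like this has no such guarantee.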
Counting and Sampling Models in First-Order Logic
International Joint Conference on Artificial Intelligence · Pub Date: 2023-08-01 · DOI: 10.24963/ijcai.2023/801
Ondřej Kuželka
Abstract: First-order model counting (FOMC) is the task of counting the models of a first-order logic sentence over a given set of domain elements. Its weighted variant, WFOMC, generalizes FOMC by assigning weights to the models and has many applications in statistical relational learning. More than ten years of research by various authors has led to the identification of non-trivial classes of WFOMC problems that can be solved in time polynomial in the number of domain elements. In this paper, we describe recent work on WFOMC and the related problem of weighted first-order model sampling (WFOMS). We also discuss possible applications of WFOMC and WFOMS within statistical relational learning and beyond, e.g., the automated solving of problems from enumerative combinatorics and elementary probability theory. Finally, we mention research problems that still need to be tackled to make applications of these methods truly practical on a broader scale.
Citations: 0
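FOMC is easy to state concretely: fix a sentence and a domain size, then count the satisfying interpretations. Below is a brute-force sketch for the sentence ∀x∀y. E(x,y) → E(y,x) (symmetric binary relations); it is tractable only for tiny domains, which is exactly why the polynomial-time lifted algorithms surveyed here matter. The function name is ours, not from the paper.

```python
from itertools import product

def count_symmetric_relations(n: int) -> int:
    """Brute-force FOMC: count interpretations of a binary relation E over a
    domain of size n satisfying  forall x, y: E(x, y) -> E(y, x)."""
    domain = range(n)
    pairs = [(x, y) for x in domain for y in domain]
    count = 0
    # Enumerate all 2**(n*n) truth assignments to the atoms E(x, y).
    for bits in product([False, True], repeat=len(pairs)):
        E = dict(zip(pairs, bits))
        if all(not E[(x, y)] or E[(y, x)] for x in domain for y in domain):
            count += 1
    return count

# Agrees with the closed form 2**(n + n*(n-1)//2): each loop E(x,x) is free,
# and each unordered pair {x,y} contributes one free choice.
print(count_symmetric_relations(3))  # 64
```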
Finding an ϵ-Close Minimal Variation of Parameters in Bayesian Networks
International Joint Conference on Artificial Intelligence · Pub Date: 2023-08-01 · DOI: 10.24963/ijcai.2023/635
Bahar Salmani, J. Katoen
Abstract: This paper addresses the ε-close parameter tuning problem for Bayesian networks (BNs): find a minimal ε-close amendment of probability entries in a given set of (rows in) conditional probability tables (CPTs) that makes a given quantitative constraint on the BN valid. Based on state-of-the-art "region verification" techniques for parametric Markov chains, we propose an algorithm whose capabilities go beyond any existing techniques. Our experiments show that ε-close tuning of large BN benchmarks with up to eight parameters is feasible. In particular, by allowing (i) varied parameters in multiple CPTs and (ii) inter-CPT parameter dependencies, we treat subclasses of parametric BNs that have received scant attention so far.
Citations: 0
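On a toy one-parameter instance, the tuning problem collapses to solving a linear equation. The sketch below treats the chain A → B, where only θ = P(B|¬A) may vary; it is purely illustrative (the paper's algorithm handles many parameters across multiple CPTs with dependencies via region verification, which this does not show).

```python
def minimal_theta(p_a, p_b_given_a, p_b_given_not_a, target):
    """Toy one-parameter tuning for the chain A -> B.
    P(B) = p_a * P(B|A) + (1 - p_a) * theta is linear in theta = P(B|not A),
    so the closest theta satisfying P(B) >= target is found by solving once.
    Returns (new theta, distance from the original value)."""
    current = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
    if current >= target:
        return p_b_given_not_a, 0.0  # constraint already holds; no change needed
    theta = (target - p_a * p_b_given_a) / (1 - p_a)
    return theta, abs(theta - p_b_given_not_a)

# With P(A)=0.2, P(B|A)=0.9, theta=0.1, requiring P(B) >= 0.3 forces
# theta up to 0.15, a minimal amendment of 0.05.
theta, eps = minimal_theta(0.2, 0.9, 0.1, 0.3)
print(round(theta, 6), round(eps, 6))  # 0.15 0.05
```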
What Lies beyond the Pareto Front? A Survey on Decision-Support Methods for Multi-Objective Optimization
International Joint Conference on Artificial Intelligence · Pub Date: 2023-08-01 · DOI: 10.24963/ijcai.2023/755
Zuzanna Osika, J. Z. Salazar, Diederik M. Roijers, F. Oliehoek, P. Murukannaiah
Abstract: We present a review that unifies decision-support methods for exploring the solutions produced by multi-objective optimization (MOO) algorithms. As MOO is applied to solve diverse problems, approaches for analyzing the trade-offs offered by these algorithms are scattered across fields. We provide an overview of current advances on this topic, including methods for visualization, mining the solution set, and uncertainty exploration, as well as emerging research directions, including interactivity, explainability, and support for ethical aspects. We synthesize these methods, drawing from different fields of research, to enable the building of a unified, application-independent approach. Our goals are to lower the entry barrier for researchers and practitioners using MOO algorithms and to provide novel research directions.
Citations: 0
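The survey concerns what a decision-maker does *after* the front is computed, but the front itself is the shared starting point of all these methods. As a point of reference, extracting the non-dominated set from a finite solution set takes a few lines (all objectives minimized; the solution tuples are a hypothetical example):

```python
def dominates(q, p):
    """q dominates p if q is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

def pareto_front(points):
    """Keep exactly the non-dominated points, preserving input order."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

solutions = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]  # hypothetical (cost, risk)
print(pareto_front(solutions))  # [(1, 5), (2, 3), (4, 1)]
```

The decision-support methods the survey covers (visualization, mining, uncertainty exploration) all operate on a set like the one this returns.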
Contrastive Learning and Reward Smoothing for Deep Portfolio Management
International Joint Conference on Artificial Intelligence · Pub Date: 2023-08-01 · DOI: 10.24963/ijcai.2023/441
Yun-Hsuan Lien, Yuan-kui Li, Yu-Shuen Wang
Abstract: In this study, we used reinforcement learning (RL) models to invest in assets in order to earn returns. The models were trained to interact with a simulated environment based on historical market data and to learn trading strategies. However, training deep neural networks on the returns of each period can be challenging due to the unpredictability of financial markets, so policies learned from training data may not be effective when tested in real-world situations. To address this issue, we incorporated contrastive learning and reward smoothing into our training process. Contrastive learning allows the RL models to recognize patterns in asset states that may indicate future price movements. Reward smoothing, on the other hand, serves as a regularization technique that prevents the models from seeking immediate but uncertain profits. We tested our method against various traditional financial techniques and other deep RL methods and found it to be effective in both the U.S. stock market and the cryptocurrency market. Our source code is available at https://github.com/sophialien/FinTech-DPM.
Citations: 0
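The abstract does not spell out the smoothing operator, so the following is only one generic form of reward smoothing, an exponential moving average that damps single-period spikes so the agent is not rewarded for immediate but uncertain profits; the paper's exact formulation may differ.

```python
def smooth_rewards(rewards, alpha=0.3):
    """Exponential moving average over per-period returns.
    A spiky one-period profit is spread over later steps, so a policy that
    chases it earns less immediate credit (a regularization effect)."""
    smoothed, ema = [], 0.0
    for r in rewards:
        ema = alpha * r + (1 - alpha) * ema
        smoothed.append(ema)
    return smoothed

# A single lucky period (return 1.0 followed by nothing) is damped:
print(smooth_rewards([1.0, 0.0, 0.0], alpha=0.5))  # [0.5, 0.25, 0.125]
```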
A New ANN-SNN Conversion Method with High Accuracy, Low Latency and Good Robustness
International Joint Conference on Artificial Intelligence · Pub Date: 2023-08-01 · DOI: 10.24963/ijcai.2023/342
Bingsen Wang, Jian Cao, Jue Chen, Shuo Feng, Yuan Wang
Abstract: Due to their low energy consumption, high robustness, and fast inference speed, Spiking Neural Networks (SNNs), with good biological interpretability and the potential to be deployed on neuromorphic hardware, are regarded as the third generation of Artificial Neural Networks (ANNs). Despite these advantages, the biggest challenge facing spiking neural networks is training difficulty, caused by the non-differentiability of spike signals. ANN-SNN conversion is an effective method that sidesteps this difficulty by converting the parameters of a trained ANN to those of an SNN through a specific algorithm. However, ANN-SNN conversion also suffers from accuracy degradation and long inference times. In this paper, we reanalyze the relationship between the Integrate-and-Fire (IF) neuron model and the ReLU activation function, propose StepReLU, an activation function better suited to SNNs under membrane-potential encoding, and use it to train ANNs. We then convert these ANNs to SNNs with extremely small conversion error and introduce a leakage mechanism to obtain the final models, which have high accuracy, low latency, and good robustness, and achieve state-of-the-art performance on various datasets such as CIFAR and ImageNet.
Citations: 0
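The IF-neuron/ReLU relationship that conversion methods rest on can be seen in simulation: a non-leaky IF neuron driven by a constant input fires at a rate approximating ReLU of that input, quantized by the simulation length. This sketch shows only that baseline link, not the paper's StepReLU activation or leakage mechanism.

```python
def if_neuron_rate(x, T=100, v_th=1.0):
    """Drive a non-leaky Integrate-and-Fire neuron with constant input x for T
    steps, using soft reset (subtract the threshold on each spike).
    The firing rate approximates ReLU(x), quantized to multiples of 1/T."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x          # integrate the input current
        if v >= v_th:   # fire and soft-reset
            v -= v_th
            spikes += 1
    return spikes / T

print(if_neuron_rate(0.375))  # 0.37 -- close to ReLU(0.375), off by quantization
print(if_neuron_rate(-0.5))   # 0.0  -- negative inputs never fire, like ReLU
```

The quantization gap (0.375 vs. 0.37 here) is one source of the conversion error that StepReLU is designed to shrink.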
Generalization Bounds for Adversarial Metric Learning
International Joint Conference on Artificial Intelligence · Pub Date: 2023-08-01 · DOI: 10.24963/ijcai.2023/489
Wen Wen, Han Li, H. Chen, Rui Wu, Lingjuan Wu, Liangxuan Zhu
Abstract: Recently, adversarial metric learning has been proposed to enhance the robustness of a learned distance metric against adversarial perturbations. Despite rapid progress in validating its effectiveness empirically, theoretical guarantees on adversarial robustness and generalization are far less understood. To fill this gap, this paper focuses on unveiling the generalization properties of adversarial metric learning by developing uniform convergence analysis techniques. Based on a capacity estimation via covering numbers, we establish the first high-probability generalization bounds of order O(n^{-1/2}) for adversarial metric learning with pairwise perturbations and general losses, where n is the number of training samples. Moreover, we obtain refined generalization bounds of order O(n^{-1}) for smooth losses by using local Rademacher complexity, which is faster than previous results for adversarial pairwise learning, e.g., adversarial bipartite ranking. Experimental evaluation on real-world datasets validates our theoretical findings.
Citations: 0
Learning to Binarize Continuous Features for Neuro-Rule Networks
International Joint Conference on Artificial Intelligence · Pub Date: 2023-08-01 · DOI: 10.24963/ijcai.2023/510
Wei Zhang, Y. Liu, Zhuo Wang, Jianyong Wang
Abstract: Neuro-Rule Networks (NRNs) have emerged as a promising neuro-symbolic method, thanks to their ability to equate fully-connected neural networks with logic rules. To support learning logic rules consisting of Boolean variables, input features must be converted into binary representations. Unlike discrete features, which can be directly transformed by one-hot encodings, continuous features need to be binarized based on numerical intervals. Existing studies usually select the interval bounds based on empirical strategies (e.g., equal-width intervals). However, this is not optimal, since the bounds are fixed and cannot be adapted to the ultimate training target. In this paper, we propose AutoInt, an approach that automatically binarizes continuous features and enables the intervals to be optimized with NRNs in an end-to-end fashion. Specifically, AutoInt selects an interval for a given continuous feature in a soft manner, enabling a differentiable learning procedure for the interval-related parameters. Moreover, it introduces an additional soft k-means clustering loss to make the interval centres approach the original feature-value distribution, thus reducing the risk of overfitting the intervals. We conduct comprehensive experiments on public datasets and demonstrate the effectiveness of AutoInt in boosting the performance of NRNs.
Citations: 0
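A soft, differentiable interval assignment can be sketched as a softmax over negative distances to interval centres. This is a hypothetical simplification of what AutoInt optimizes, showing only why the assignment remains differentiable in the centres; the paper's exact parameterization and its soft k-means loss are not reproduced here.

```python
import math

def soft_binarize(x, centers, temperature=0.1):
    """Softly assign a continuous value x to intervals: softmax over negative
    distances to the interval centres. Because every output varies smoothly
    with the centres, gradients can flow to them during end-to-end training."""
    logits = [-abs(x - c) / temperature for c in centers]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = soft_binarize(0.42, centers=[0.1, 0.5, 0.9])
print(max(range(len(probs)), key=probs.__getitem__))  # 1 -- the closest centre wins
```

Lowering `temperature` sharpens the assignment toward a hard, one-hot binarization; raising it softens it, trading crispness for smoother gradients.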
Vision Language Navigation with Knowledge-driven Environmental Dreamer
International Joint Conference on Artificial Intelligence · Pub Date: 2023-08-01 · DOI: 10.24963/ijcai.2023/204
Fengda Zhu, Vincent CS Lee, Xiaojun Chang, Xiaodan Liang
Abstract: Vision-language navigation (VLN) requires an agent to perceive visual observations in a house scene and navigate step by step following a natural language instruction. Due to the high cost of data annotation and collection, current VLN datasets provide limited instruction-trajectory samples. Learning vision-language alignment for VLN from limited data is challenging, since visual observations and language instructions are both complex and diverse. Previous works generate augmented data only from original scenes and fail to generate samples from unseen scenes, which limits the generalization ability of the navigation agent. In this paper, we introduce the Knowledge-driven Environmental Dreamer (KED), a method that leverages knowledge of the embodied environment to generate unseen scenes for a navigation agent to learn from. Generating an unseen environment with texture consistency and structure consistency is challenging; to address this, we incorporate three knowledge-driven regularization objectives into KED and adopt a reweighting mechanism for self-adaptive optimization. KED generates unseen embodied environments without extra annotations, and we use it to generate 270 houses and 500K instruction-trajectory pairs. A navigation agent trained with KED outperforms state-of-the-art methods on various VLN benchmarks, such as R2R, R4R, and RxR. Both qualitative and quantitative experiments show that KED produces high-quality augmentation data with texture consistency and structure consistency.
Citations: 0
Measuring a Priori Voting Power in Liquid Democracy
International Joint Conference on Artificial Intelligence · Pub Date: 2023-08-01 · DOI: 10.24963/ijcai.2023/290
Rachael Colley, Théo Delemazure, Hugo Gilbert
Abstract: We introduce new power indices to measure the a priori voting power of voters in liquid democracy elections where an underlying network restricts delegations. We argue that our power indices are natural extensions of the standard Penrose-Banzhaf index for simple voting games. We show that computing the criticality of a voter is #P-hard, even in weighted games with weights polynomially bounded in the size of the instance. However, for specific settings, such as when the underlying network is a bipartite or complete graph, recursive formulas can compute these indices for weighted voting games in pseudo-polynomial time. We highlight their theoretical properties and provide numerical results to illustrate how restricting the possible delegations can alter voters' voting power.
Citations: 0
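The standard Penrose-Banzhaf index that these new indices extend counts, for each voter, the coalitions of other voters that are losing but become winning once that voter joins. A brute-force sketch for a small weighted voting game (exponential in the number of voters, consistent with the #P-hardness of criticality noted above):

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Penrose-Banzhaf index of a weighted voting game:
    swings[i] counts coalitions of the other voters whose weight is below the
    quota but reaches it once voter i joins; normalize over all voters."""
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for coalition in combinations(others, r):
                w = sum(weights[j] for j in coalition)
                if w < quota <= w + weights[i]:
                    swings[i] += 1
    total = sum(swings)
    return [s / total for s in swings]

# In the classic [51; 49, 49, 2] game, the weight-2 voter is exactly as
# critical as each weight-49 voter: any two voters together meet the quota.
print(banzhaf([49, 49, 2], 51))
```

The paper's setting adds a delegation network on top of such games, which is what makes criticality #P-hard in general.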