International Conference on Agents and Artificial Intelligence: Latest Publications

Probabilistic Model Checking of Stochastic Reinforcement Learning Policies
International Conference on Agents and Artificial Intelligence | Pub Date: 2024-03-27 | DOI: 10.5220/0012357700003636
Dennis Gross, Helge Spieker
Abstract: We introduce a method to verify stochastic reinforcement learning (RL) policies. This approach is compatible with any RL algorithm as long as the algorithm and its corresponding environment collectively adhere to the Markov property. In this setting, the future state of the environment should depend solely on its current state and the action executed, independent of any previous states or actions. Our method integrates a verification technique, referred to as model checking, with RL, leveraging a Markov decision process, a trained RL policy, and a probabilistic computation tree logic (PCTL) formula to build a formal model that can be subsequently verified via the model checker Storm. We demonstrate our method's applicability across multiple benchmarks, comparing it to baseline methods called deterministic safety estimates and naive monolithic model checking. Our results show that our method is suited to verify stochastic RL policies. (Pages 438-445)
Citations: 0
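The core quantity a PCTL model checker such as Storm evaluates for a reachability property like P=? [F "goal"] can be sketched with plain value iteration on the Markov chain induced by a stochastic policy. The tiny MDP, policy, and state names below are invented for illustration; this is not the paper's tooling, only the underlying computation.

```python
# MDP transitions: state -> action -> list of (next_state, probability).
# "goal" and "trap" are absorbing states (invented for this sketch).
mdp = {
    "s0": {"a": [("goal", 0.8), ("trap", 0.2)],
           "b": [("s0", 0.5), ("goal", 0.5)]},
}
# Stochastic policy: state -> action -> probability of choosing it.
policy = {"s0": {"a": 0.6, "b": 0.4}}

def reach_prob(mdp, policy, target, absorbing, iters=1000):
    """Value iteration for P(F target) in the induced Markov chain."""
    v = {s: 0.0 for s in list(mdp) + list(absorbing) + [target]}
    v[target] = 1.0
    for _ in range(iters):
        for s, acts in mdp.items():
            # Expected successor value, weighted by policy and dynamics.
            v[s] = sum(policy[s][a] * sum(p * v[t] for t, p in succ)
                       for a, succ in acts.items())
    return v

v = reach_prob(mdp, policy, "goal", {"trap"})
# v["s0"] solves v = 0.6*0.8 + 0.4*(0.5*v + 0.5), i.e. v = 0.85
```

A model checker would additionally compare this probability against the bound in the PCTL formula; the iteration here is the "naive monolithic" part of that pipeline.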
Enhancing Manufacturing Quality Prediction Models Through the Integration of Explainability Methods
International Conference on Agents and Artificial Intelligence | Pub Date: 2024-03-27 | DOI: 10.5220/0012417800003636
Dennis Gross, Helge Spieker, Arnaud Gotlieb, Ricardo Knoblauch
Abstract: This research presents a method that utilizes explainability techniques to amplify the performance of machine learning (ML) models in forecasting the quality of milling processes, as demonstrated in this paper through a manufacturing use case. The methodology entails the initial training of ML models, followed by a fine-tuning phase where irrelevant features identified through explainability methods are eliminated. This procedural refinement results in performance enhancements, paving the way for potential reductions in manufacturing costs and a better understanding of the trained ML models. This study highlights the usefulness of explainability techniques in both explaining and optimizing predictive models in the manufacturing realm. (Pages 898-905)
Citations: 0
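The loop the abstract describes (score features with an explainability method, then drop the irrelevant ones before retraining) can be sketched with permutation importance. The toy data, stand-in model, and threshold below are all invented; any trained regressor and error metric could be substituted.

```python
import random

random.seed(0)

# Toy data: the target depends on features 0 and 1; feature 2 is noise.
X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [2 * x[0] - 1 * x[1] for x in X]

def model(x):
    # Stand-in for a trained quality-prediction model.
    return 2 * x[0] - 1 * x[1]

def mse(X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(X, y, feature):
    """Error increase when one feature column is shuffled."""
    col = [x[feature] for x in X]
    random.shuffle(col)
    Xp = [x[:feature] + [c] + x[feature + 1:] for x, c in zip(X, col)]
    return mse(Xp, y) - mse(X, y)

importances = [permutation_importance(X, y, f) for f in range(3)]
# Keep only features whose shuffling measurably hurts the model.
keep = [f for f, imp in enumerate(importances) if imp > 1e-6]
```

Here the noise feature gets an importance of zero and is pruned, mirroring the paper's fine-tuning phase; in practice a library routine such as scikit-learn's `permutation_importance` would be used.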
DeepTraderX: Challenging Conventional Trading Strategies with Deep Learning in Multi-Threaded Market Simulations
International Conference on Agents and Artificial Intelligence | Pub Date: 2024-02-06 | DOI: 10.5220/0000183700003636
Armand Mihai Cismaru
Abstract: In this paper, we introduce DeepTraderX (DTX), a simple Deep Learning-based trader, and present results that demonstrate its performance in a multi-threaded market simulation. Over a total of about 500 simulated market days, DTX has learned solely by watching the prices that other strategies produce. By doing this, it has successfully created a mapping from market data to quotes, either bid or ask orders, to place for an asset. Trained on historical Level-2 market data, i.e., the Limit Order Book (LOB) for specific tradable assets, DTX processes the market state $S$ at each timestep $T$ to determine a price $P$ for market orders. The market data used in both training and testing was generated from unique market schedules based on real historic stock market data. DTX was tested extensively against the best strategies in the literature, with its results validated by statistical analysis. Our findings underscore DTX's capability to rival, and in many instances surpass, the performance of public-domain traders, including those that outclass human traders, emphasising the efficiency of simple models, which is required to succeed in intricate multi-threaded simulations. This highlights the potential of leveraging "black-box" Deep Learning systems to create more efficient financial markets. (Pages 412-421)
Citations: 0
DGDNN: Decoupled Graph Diffusion Neural Network for Stock Movement Prediction
International Conference on Agents and Artificial Intelligence | Pub Date: 2024-01-03 | DOI: 10.5220/0012406400003636
Zinuo You, Zijian Shi, Hongbo Bo, John Cartlidge, Li Zhang, Yan Ge
Abstract: Forecasting future stock trends remains challenging for academia and industry due to stochastic inter-stock dynamics and hierarchical intra-stock dynamics influencing stock prices. In recent years, graph neural networks have achieved remarkable performance in this problem by formulating multiple stocks as graph-structured data. However, most of these approaches rely on artificially defined factors to construct static stock graphs, which fail to capture the intrinsic interdependencies between stocks that rapidly evolve. In addition, these methods often ignore the hierarchical features of the stocks and lose distinctive information within. In this work, we propose a novel graph learning approach implemented without expert knowledge to address these issues. First, our approach automatically constructs dynamic stock graphs by entropy-driven edge generation from a signal processing perspective. Then, we further learn task-optimal dependencies between stocks via a generalized graph diffusion process on constructed stock graphs. Last, a decoupled representation learning scheme is adopted to capture distinctive hierarchical intra-stock features. Experimental results demonstrate substantial improvements over state-of-the-art baselines on real-world datasets. Moreover, the ablation study and sensitivity study further illustrate the effectiveness of the proposed method in modeling the time-evolving inter-stock and intra-stock dynamics. (Pages 431-442)
Citations: 1
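One plausible reading of entropy-driven edge generation is to connect two stocks when their discretized return sequences share high normalized mutual information. This is an illustrative sketch only, not the paper's exact construction; the tickers, move sequences, and threshold are invented, and the threshold is set high because very short toy sequences inflate mutual information.

```python
import math
from collections import Counter

def entropy(seq):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

def mutual_information(a, b):
    # I(A;B) = H(A) + H(B) - H(A,B)
    return entropy(a) + entropy(b) - entropy(list(zip(a, b)))

def build_edges(series, threshold):
    """Undirected edges between series with normalized MI >= threshold."""
    names = list(series)
    edges = set()
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            denom = max(entropy(series[u]), entropy(series[v])) or 1.0
            if mutual_information(series[u], series[v]) / denom >= threshold:
                edges.add((u, v))
    return edges

# Discretized daily moves: U(p), D(own), F(lat) -- invented toy data.
series = {
    "AAA": "UUDDUFUD",
    "BBB": "UUDDUFUD",   # moves in lockstep with AAA -> strong edge
    "CCC": "DFUUDDFU",   # different pattern
}
edges = build_edges({k: list(v) for k, v in series.items()}, threshold=0.9)
```

Recomputing these edges per time window yields a dynamic graph without any expert-defined factors, which is the property the abstract emphasizes.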
Logic of Awareness in Agent's Reasoning
International Conference on Agents and Artificial Intelligence | Pub Date: 2023-09-17 | DOI: 10.5220/0011630300003393
Yudai Kubono, Teeradaj Racharak, S. Tojo
Abstract: The aim of this study is to formally express awareness for modeling practical agent communication. The notion of awareness has been proposed as a set of propositions for each agent, to which the agent pays attention, and has contributed to avoiding logical omniscience. However, when an agent guesses another agent's knowledge states, what matters are not propositions but accessible possible worlds. Therefore, we introduce a partition of possible worlds connected to awareness, that is, an equivalence relation, to denote indistinguishable worlds. Our logic is called Awareness Logic with Partition (ALP). In this paper, we first show a running example to illustrate a practical social game. Thereafter, we introduce the syntax and Kripke semantics of the logic and prove its completeness. Finally, we outline an idea to incorporate some epistemic actions with dynamic operators that change the state of awareness.
Citations: 0
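The semantic core (knowledge evaluated over a partition of indistinguishable worlds) can be sketched directly. This is plain partition-based epistemic logic, not the paper's ALP calculus with awareness operators; the worlds, agents, and valuation below are invented.

```python
valuation = {               # which atomic propositions hold at which world
    "w1": {"p"}, "w2": {"p"}, "w3": set(),
}
partition = {               # agent -> partition into indistinguishable cells
    "alice": [{"w1", "w2"}, {"w3"}],
    "bob":   [{"w1", "w2", "w3"}],
}

def cell(agent, world):
    """The equivalence class of worlds the agent cannot tell apart."""
    return next(c for c in partition[agent] if world in c)

def knows(agent, atom, world):
    """K_i(atom): atom holds at every world indistinguishable from here."""
    return all(atom in valuation[w] for w in cell(agent, world))
```

At w1, alice knows p (p holds throughout her cell {w1, w2}), while bob does not (his single coarse cell includes w3, where p fails); refining a partition is one way to picture an increase in awareness.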
A Robust Adaptive Workload Orchestration in Pure Edge Computing
International Conference on Agents and Artificial Intelligence | Pub Date: 2023-08-15 | DOI: 10.5220/0011782500003393
Zahra Safavifar, Charafeddine Mechalikh, F. Golpayegani
Abstract: Pure Edge computing (PEC) aims to bring cloud applications and services to the edge of the network to support the growing user demand for time-sensitive applications and data-driven computing. However, mobility and the limited computational capacity of edge devices pose challenges in supporting some urgent and computationally intensive tasks with strict response time demands. If the execution results of these tasks exceed the deadline, they become worthless and can cause severe safety issues. Therefore, it is essential to ensure that edge nodes complete as many latency-sensitive tasks as possible. In this paper, we propose a Robust Adaptive Workload Orchestration (R-AdWOrch) model to minimize deadline misses and data loss by using priority definition and a reallocation strategy. The results show that R-AdWOrch can minimize deadline misses of urgent tasks while minimizing the data loss of lower priority tasks under all conditions.
Citations: 0
Algorithmic Eta-reduction in Type-theory of Acyclic Recursion
International Conference on Agents and Artificial Intelligence | Pub Date: 2023-07-18 | DOI: 10.5220/0009182410031010
Roussanka Loukanova
Abstract: We investigate the applicability of the classic eta-conversion in the type-theory of acyclic algorithms. While denotationally valid, classic eta-conversion is not algorithmically valid in the type theory of algorithms, with the exception of a few limited cases. The paper shows how the restricted, algorithmic eta-rule can recover algorithmic eta-conversion in the reduction calculi of type-theory of algorithms.
Citations: 1
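For intuition, classic eta-reduction on untyped lambda terms looks as follows; the paper's contribution concerns when such a step remains algorithmically valid in the type-theory of acyclic recursion, which this unrestricted sketch does not capture. The term encoding is an invented convenience.

```python
# Terms: ("var", name) | ("app", fn, arg) | ("lam", name, body)

def free_vars(t):
    tag = t[0]
    if tag == "var":
        return {t[1]}
    if tag == "app":
        return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}              # "lam": bind t[1]

def eta_reduce(t):
    """Rewrite (lam x. f x) to f when x is not free in f."""
    if t[0] == "lam":
        x, body = t[1], eta_reduce(t[2])
        if body[0] == "app" and body[2] == ("var", x) \
                and x not in free_vars(body[1]):
            return body[1]                       # eta step applies
        return ("lam", x, body)
    if t[0] == "app":
        return ("app", eta_reduce(t[1]), eta_reduce(t[2]))
    return t

# lam x. f x  reduces to  f;  lam y. y y  is left untouched.
term = ("lam", "x", ("app", ("var", "f"), ("var", "x")))
```

Both sides denote the same function, which is the "denotationally valid" half; the paper's point is that the two sides need not describe the same algorithm.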
Challenges in Domain-Specific Abstractive Summarization and How to Overcome Them
International Conference on Agents and Artificial Intelligence | Pub Date: 2023-07-03 | DOI: 10.5220/0011744500003393
Anum Afzal, Juraj Vladika, Daniel Braun, Florian Matthes
Abstract: Large Language Models work quite well with general-purpose data and many tasks in Natural Language Processing. However, they show several limitations when used for a task such as domain-specific abstractive text summarization. This paper identifies three of those limitations as research problems in the context of abstractive text summarization: 1) quadratic complexity of transformer-based models with respect to the input text length; 2) model hallucination, a model's tendency to generate factually incorrect text; and 3) domain shift, which happens when the distributions of the model's training and test corpora are not the same. Along with a discussion of the open research questions, this paper also provides an assessment of existing state-of-the-art techniques relevant to domain-specific text summarization to address the research gaps.
Citations: 1
German BERT Model for Legal Named Entity Recognition
International Conference on Agents and Artificial Intelligence | Pub Date: 2023-03-07 | DOI: 10.5220/0011749400003393
Harsh Darji, Jelena Mitrović, M. Granitzer
Abstract: The use of BERT, one of the most popular language models, has led to improvements in many Natural Language Processing (NLP) tasks. One such task is Named Entity Recognition (NER), i.e., the automatic identification of named entities such as locations, persons, organizations, etc. from a given text. It is also an important base step for many NLP tasks such as information extraction and argumentation mining. Even though there is much research on NER using BERT and other popular language models, the same is not explored in detail when it comes to Legal NLP or Legal Tech. Legal NLP applies various NLP techniques such as sentence similarity or NER specifically to legal data. There are only a handful of models for NER tasks using BERT language models, and none of these are aimed at legal documents in German. In this paper, we fine-tune a popular BERT language model trained on German data (German BERT) on a Legal Entity Recognition (LER) dataset. To make sure our model is not overfitting, we performed a stratified 10-fold cross-validation. The results we achieve by fine-tuning German BERT on the LER dataset outperform the BiLSTM-CRF+ model used by the authors of the same LER dataset. Finally, we make the model openly available via HuggingFace.
Citations: 4
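The stratified k-fold protocol mentioned in the abstract can be sketched as follows: every fold receives a proportional share of each label, so per-fold label distributions match the full dataset. The labels and counts are invented placeholders; in practice a library routine such as scikit-learn's `StratifiedKFold` would be used.

```python
from collections import defaultdict

def stratified_kfold(labels, k):
    """Return k folds (lists of indices), stratified by label."""
    by_label = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_label[lab].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_label.values():
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)   # deal each label out round-robin
    return folds

# Invented toy label distribution: 50% PER, 30% ORG, 20% LOC.
labels = ["PER"] * 50 + ["ORG"] * 30 + ["LOC"] * 20
folds = stratified_kfold(labels, 10)
```

Each of the 10 folds then holds 5 PER, 3 ORG, and 2 LOC examples, so validation scores are comparable across folds even for rare entity types.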
Shrinking the Inductive Programming Search Space with Instruction Subsets
International Conference on Agents and Artificial Intelligence | Pub Date: 2023-02-10 | DOI: 10.48550/arXiv.2302.05226
Edward McDaid, S. McDaid
Abstract: Inductive programming frequently relies on some form of search in order to identify candidate solutions. However, the size of the search space limits the use of inductive programming to the production of relatively small programs. If we could somehow correctly predict the subset of instructions required for a given problem, then inductive programming would be more tractable. We will show that this can be achieved in a high percentage of cases. This paper presents a novel model of programming language instruction co-occurrence that was built to support search space partitioning in the Zoea distributed inductive programming system. This consists of a collection of intersecting instruction subsets derived from a large sample of open source code. Using the approach, different parts of the search space can be explored in parallel. The number of subsets required does not grow linearly with the quantity of code used to produce them, and a manageable number of subsets is sufficient to cover a high percentage of unseen code. This approach also significantly reduces the overall size of the search space, often by many orders of magnitude.
Citations: 1
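The covering idea can be sketched in a few lines: an unseen program is "covered" when all of its instructions fall inside at least one mined subset, so the search for it can be confined to that subset's instructions. The corpus contents and instruction names below are invented; the real model is mined from a large open-source sample.

```python
# Intersecting instruction subsets mined from a (toy, invented) corpus.
corpus = [
    {"add", "sub", "mov"},
    {"add", "mov", "cmp", "jmp"},
    {"load", "store", "mov"},
]

def covered(program_instrs, subsets):
    """True if some mined subset contains every instruction used."""
    return any(program_instrs <= s for s in subsets)

def candidate_subsets(program_instrs, subsets):
    """The subsets a search worker could be restricted to, in parallel."""
    return [s for s in subsets if program_instrs <= s]
```

A program using only `add` and `mov` is covered by two subsets, each far smaller than the full instruction set, which is where the orders-of-magnitude reduction in search space comes from.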