Latest publications in International journal of smart computing and artificial intelligence

An Extension of Particle Swarm Optimization to Identify Multiple Peaks using Re-diversification in Static and Dynamic Environments
International journal of smart computing and artificial intelligence Pub Date: 2023-01-01 DOI: 10.52731/ijscai.v7.i2.793
Stephen Raharja, Toshiharu Sugawara
Abstract: We propose an extension of the particle swarm optimization (PSO) algorithm in which each particle internally stores multiple global optima so that multiple (top-k) peaks can be identified in static and dynamic environments. We then applied this technique to search-and-rescue problems, in which potential survivors must be rescued urgently in life-threatening disaster scenarios. With the rapid development of robotics and computer technology, aerial drones can be programmed to implement search algorithms that locate potential survivors and relay their positions to rescue teams. We model the environment of a disaster area with potential survivors using randomized bivariate normal distributions. We extended the Clerk-Kennedy PSO algorithm into top-k PSO by treating individual drones as particles, where each particle remembers a set of global optima to identify the top-k peaks. We evaluated our proposed algorithm in static and dynamic environments by comparing it with several other algorithms, including the canonical PSO, Clerk-Kennedy PSO, and NichePSO. The experimental results show that the proposed algorithm identified the top-k peaks (optima) with a higher success rate than the baseline methods, although the rate gradually decreased with increasing movement speed of the peaks in dynamic environments.
Citations: 0
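To make the idea in this abstract concrete, here is a minimal, hedged sketch of a top-k PSO variant: particles share a small archive of the k best, mutually separated positions instead of a single global best, and the swarm is periodically re-scattered (re-diversified) to keep exploring. All names and parameter choices (top_k_pso, min_separation, rediversify_every, the inertia and acceleration coefficients) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def top_k_pso(objective, dim, k=3, n_particles=30, iters=200,
              bounds=(-10.0, 10.0), w=0.729, c1=1.49, c2=1.49,
              rediversify_every=50, min_separation=1.0):
    """Sketch of a PSO variant that tracks the k best, well-separated peaks."""
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    top_k = []  # list of (value, position) for the k best distinct peaks found

    def update_top_k(value, position):
        # Keep at most k optima that are far enough apart to count as distinct peaks.
        for i, (v, p) in enumerate(top_k):
            if np.linalg.norm(p - position) < min_separation:
                if value > v:
                    top_k[i] = (value, position.copy())
                return
        top_k.append((value, position.copy()))
        top_k.sort(key=lambda t: t[0], reverse=True)
        del top_k[k:]

    for t in range(iters):
        for i in range(n_particles):
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i].copy()
            update_top_k(val, pos[i])
        # Each particle is attracted to the nearest stored peak rather than one global best.
        for i in range(n_particles):
            target = min((p for _, p in top_k),
                         key=lambda p: np.linalg.norm(p - pos[i]))
            r1, r2 = np.random.rand(dim), np.random.rand(dim)
            vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (target - pos[i])
            pos[i] = np.clip(pos[i] + vel[i], lo, hi)
        # Periodic re-diversification: re-scatter particles to keep finding new peaks.
        if (t + 1) % rediversify_every == 0:
            pos = np.random.uniform(lo, hi, (n_particles, dim))
            vel[:] = 0.0
    return top_k

# Example: a two-peak Gaussian mixture, loosely mirroring the disaster-area model.
def demo_objective(x):
    return np.exp(-np.sum((x - 3.0) ** 2)) + 0.8 * np.exp(-np.sum((x + 3.0) ** 2))

peaks = top_k_pso(demo_objective, dim=2, k=2)
```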
Improving Multi-Agent Reinforcement Learning for Beer Game by Reward Design Based on Payment Mechanism
International journal of smart computing and artificial intelligence Pub Date: 2023-01-01 DOI: 10.52731/ijscai.v7.i2.789
Masaaki Hori, Toshihiro Matsui
Abstract: Supply chain management aims to maximize profits among supply chain partners by managing the flow of information and products. Multiagent reinforcement learning, an active topic in artificial intelligence research, has been applied to supply chain management. The beer game is a well-known example problem in supply chain management and has also been studied as a cooperation problem in multiagent systems. In a previous study, SRDQN, a solution method based on deep reinforcement learning and reward shaping, was applied to the beer game: introducing a single reinforcement learning agent with SRDQN as a participant in the game reduced the cost of beer inventory. However, that study did not address multiagent reinforcement learning because of the difficulty of cooperation among agents. To address the multiagent case, we apply RDPM, a reward shaping technique based on mechanism design, to SRDQN and improve cooperative policies in multiagent reinforcement learning. Furthermore, we propose two reward design methods that modify the state value function designs in RDPM to handle various consumer demands for beer in the supply chain. We then empirically evaluate the effectiveness of the proposed approaches.
Citations: 0
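As a rough illustration of the reward-design idea, the sketch below shapes each agent's reward in a beer-game-style chain with a payment term that charges it for the cost borne by the other stages, aligning local incentives with the total supply-chain cost. The holding-plus-backlog cost model is the standard beer-game cost; the payment term is a simplified stand-in for the RDPM mechanism named in the abstract rather than a reproduction of it, and all constants and function names are assumptions.

```python
HOLDING_COST = 0.5   # assumed cost per unit of on-hand inventory per period
BACKLOG_COST = 1.0   # assumed cost per unit of unmet (backlogged) demand per period

def stage_cost(inventory: float, backlog: float) -> float:
    """Per-period cost of one supply-chain stage: holding cost plus backlog penalty."""
    return HOLDING_COST * max(inventory, 0.0) + BACKLOG_COST * max(backlog, 0.0)

def shaped_rewards(inventories: list[float], backlogs: list[float]) -> list[float]:
    """One shaped reward per agent for the current period.

    Each agent pays its own stage cost plus a 'payment' equal to the cost it
    leaves with the other stages, so maximizing its own shaped reward is
    aligned with minimizing the total supply-chain cost.
    """
    costs = [stage_cost(i, b) for i, b in zip(inventories, backlogs)]
    total = sum(costs)
    return [-(own + (total - own)) for own in costs]

# Example: a 4-stage chain (retailer, wholesaler, distributor, factory).
print(shaped_rewards(inventories=[4, 0, 10, 2], backlogs=[0, 6, 0, 1]))
```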
Improving Abstractive Summarization by Transfer Learning with Adaptive Document Selection
International journal of smart computing and artificial intelligence Pub Date: 2023-01-01 DOI: 10.52731/ijscai.v7.i2.701
Masato Shirai, Kei Wakabayashi
Abstract: Abstractive document summarization based on neural networks is a promising approach to generating flexible summaries but requires a large amount of training data. While transfer learning can address this issue, there is a potential concern about the negative transfer effect, which degrades performance when training documents irrelevant to the target domain are used; this effect has not been explicitly explored in document summarization tasks. In this paper, we propose a method that selects training documents from the source domain that are expected to be useful for the target summarization. The proposed method is based on the similarity of word distributions between each source document and a set of target documents. We further propose an adaptive approach that builds a custom-made summarization model for each test document by selecting source documents similar to that test document. In the experiments, we confirmed that negative transfer does in fact occur in document summarization tasks. Additionally, we show that the proposed method effectively avoids the negative transfer issue and improves summarization performance.
Citations: 0
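A minimal sketch of the document-selection step described in this abstract: each source-domain document is scored by the similarity of its word distribution to the test document, and only the most similar ones are kept as transfer training data for that document's custom summarizer. Cosine similarity over unigram counts is an illustrative choice; the paper's exact similarity measure and the downstream summarizer fine-tuning are not shown, and all function names are assumptions.

```python
from collections import Counter
import math

def word_distribution(text: str) -> Counter:
    """Unigram counts as a crude stand-in for a document's word distribution."""
    return Counter(text.lower().split())

def cosine_similarity(p: Counter, q: Counter) -> float:
    dot = sum(p[w] * q[w] for w in p.keys() & q.keys())
    norm_p = math.sqrt(sum(c * c for c in p.values()))
    norm_q = math.sqrt(sum(c * c for c in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def select_source_documents(source_docs: list[str], test_doc: str, top_n: int = 2) -> list[str]:
    """Keep the source documents whose word distributions are closest to the test document,
    so the fine-tuning data for its custom summarization model stays on-topic."""
    target = word_distribution(test_doc)
    ranked = sorted(source_docs,
                    key=lambda d: cosine_similarity(word_distribution(d), target),
                    reverse=True)
    return ranked[:top_n]

# Example usage with toy documents.
sources = [
    "quarterly earnings rose as the company reported record revenue",
    "the striker scored twice in the final minutes of the match",
    "investors reacted to the revenue forecast and profit margins",
]
test = "the firm announced revenue growth and improved profit guidance"
print(select_source_documents(sources, test))
```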