International Journal on Artificial Intelligence Tools: Latest Publications

Recommender System Based on Unsupervised Clustering and Supervised Deep Learning
IF 1.1 · CAS Tier 4 · Computer Science
International Journal on Artificial Intelligence Tools · Pub Date: 2024-05-17 · DOI: 10.1142/s0218213024500167
Dhiraj Khurana, D. Sahni, Yogesh Kumar
{"title":"Recommender System Based on Unsupervised Clustering and Supervised Deep Learning","authors":"Dhiraj Khurana, D. Sahni, Yogesh Kumar","doi":"10.1142/s0218213024500167","DOIUrl":"https://doi.org/10.1142/s0218213024500167","url":null,"abstract":"","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140962317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards a Hybrid Approach Combining Deep Learning and Case-Based Reasoning for Phishing Email Detection
IF 1.1 · CAS Tier 4 · Computer Science
International Journal on Artificial Intelligence Tools · Pub Date: 2024-05-10 · DOI: 10.1142/s0218213024500155
Mohamed Abdelkarim Remmide, Fatima Boumahdi, Narhimène Boustia
{"title":"Towards a Hybrid Approach Combining Deep Learning and Case-Based Reasoning for Phishing Email Detection","authors":"Mohamed Abdelkarim Remmide, Fatima Boumahdi, Narhimène Boustia","doi":"10.1142/s0218213024500155","DOIUrl":"https://doi.org/10.1142/s0218213024500155","url":null,"abstract":"","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2024-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140992941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Assessing and Addressing Model Trustworthiness Trade-offs in Trauma Triage
IF 1.1 · CAS Tier 4 · Computer Science
International Journal on Artificial Intelligence Tools · Pub Date: 2024-04-25 · DOI: 10.1142/s0218213024600078
Douglas Talbert, Katherine L. Phillips, Katherine E. Brown, Steve Talbert
{"title":"Assessing and Addressing Model Trustworthiness Trade-offs in Trauma Triage","authors":"Douglas Talbert, Katherine L. Phillips, Katherine E. Brown, Steve Talbert","doi":"10.1142/s0218213024600078","DOIUrl":"https://doi.org/10.1142/s0218213024600078","url":null,"abstract":"Trauma triage occurs in suboptimal environments for making consequential decisions. Published triage studies demonstrate the extremes of the complexity/accuracy trade-off, either studying simple models with poor accuracy or very complex models with accuracies nearing published goals. Using a Level I Trauma Center’s registry cases (n = 50 644), this study describes, uses, and derives observations from a methodology to more thoroughly examine this trade-off. This or similar methods can provide the insight needed for practitioners to balance understandability with accuracy. Additionally, this study incorporates an evaluation of group-based fairness into this trade-off analysis to provide an additional dimension of insight into model selection. Lastly, this paper proposes and analyzes a multi-model approach to mitigating trust-related trade-offs. The experiments allow us to draw several conclusions regarding the machine learning models in the domain of trauma triage and demonstrate the value of our trade-off analysis to provide insight into choices regarding model complexity, model accuracy, and model fairness.","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140657010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
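The methodology described above pairs each candidate model's accuracy with its complexity (as an interpretability proxy) and a group-based fairness measure. Below is a minimal, hypothetical sketch of that kind of sweep; the trauma registry is not public, so synthetic data, invented features, and a demographic-parity gap stand in for the paper's actual data and metrics.

```python
# Hypothetical sketch: sweep model complexity and report accuracy alongside a
# group-fairness gap, the two extra axes the trade-off analysis above adds to
# plain accuracy. All data and feature names are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in for triage records: 6 features, a protected group, a label.
n = 5_000
X = rng.normal(size=(n, 6))
group = rng.binomial(1, 0.3, size=n)
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * group
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

def parity_gap(preds, groups):
    """Demographic-parity gap: |P(pred=1 | g=0) - P(pred=1 | g=1)|."""
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

# Deeper trees are more accurate but harder to explain, and the fairness gap
# need not shrink as accuracy grows; the sweep surfaces that three-way tension.
for depth in (1, 3, 7, None):
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    preds = clf.predict(X_te)
    print(f"depth={depth}: acc={clf.score(X_te, y_te):.3f}, "
          f"parity gap={parity_gap(preds, g_te):.3f}")
```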
Reliable Estimation of Causal Effects Using Predictive Models
IF 1.1 · CAS Tier 4 · Computer Science
International Journal on Artificial Intelligence Tools · Pub Date: 2024-04-25 · DOI: 10.1142/s0218213024600066
Mahdi Hadj Ali, Yann Le Biannic, Pierre-Henri Wuillemin
{"title":"Reliable Estimation of Causal Effects Using Predictive Models","authors":"Mahdi Hadj Ali, Yann Le Biannic, Pierre-Henri Wuillemin","doi":"10.1142/s0218213024600066","DOIUrl":"https://doi.org/10.1142/s0218213024600066","url":null,"abstract":"In recent years, machine learning algorithms have been widely adopted across many fields due to their efficiency and versatility. However, the complexity of predictive models has led to a lack of interpretability in automatic decision-making. Recent works have improved general interpretability by estimating the contributions of input features to the predictions of a pre-trained model. Drawing on these improvements, practitioners seek to gain causal insights into the underlying data-generating mechanisms. To this end, works have attempted to integrate causal knowledge into interpretability, as non-causal techniques can lead to paradoxical explanations. In this paper, we argue that each question about a causal effect requires its own reasoning and that relying on an initial predictive model trained on an arbitrary set of variables may result in quantification problems when estimating all possible effects. As an alternative, we advocate for a query-driven methodology that addresses each causal question separately. Assuming that the causal structure relating the variables is known, we propose to employ the tools of causal inference to quantify a particular effect as a formula involving observable probabilities. We then derive conditions on the selection of variables to train a predictive model that is tailored for the causal question of interest. Finally, we identify suitable eXplainable AI (XAI) techniques to estimate causal effects from the model predictions. Furthermore, we introduce a novel method for estimating direct effects through intervention on causal mechanisms.","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140656680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
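The abstract's central move, reducing a causal query to a formula over observable probabilities and then training a predictive model tailored to that query, is well illustrated by the classic backdoor adjustment (g-formula), P(Y=1 | do(X=x)) = Σ_z P(Y=1 | X=x, Z=z) P(Z=z). The sketch below is a generic standardization estimate under an assumed known confounder Z, not the authors' exact procedure; the data is synthetic.

```python
# Hypothetical sketch: backdoor adjustment (g-formula) with a predictive model.
# P(Y=1 | do(X=x)) = sum_z P(Y=1 | X=x, Z=z) P(Z=z), estimated by averaging the
# model's predictions over the empirical distribution of the confounder Z.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: Z confounds both the treatment X and the outcome Y.
n = 10_000
Z = rng.binomial(1, 0.4, size=n)
X = rng.binomial(1, 0.2 + 0.5 * Z)            # treatment depends on Z
Y = rng.binomial(1, 0.1 + 0.3 * X + 0.4 * Z)  # outcome depends on X and Z

# Train the predictive model on exactly the variables the formula requires.
model = LogisticRegression().fit(np.column_stack([X, Z]), Y)

def p_y_do(x: int) -> float:
    """Estimate P(Y=1 | do(X=x)) by standardizing over the observed Z."""
    features = np.column_stack([np.full(n, x), Z])
    return model.predict_proba(features)[:, 1].mean()

ate = p_y_do(1) - p_y_do(0)
naive = Y[X == 1].mean() - Y[X == 0].mean()
print(f"adjusted ATE ~ {ate:.3f}  vs  naive difference ~ {naive:.3f}")
```

Training on exactly (X, Z) mirrors the abstract's point that variable selection should be driven by the causal question rather than inherited from an arbitrary pre-trained model.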
Fairness for Deep Learning Predictions Using Bias Parity Score Based Loss Function Regularization
IF 1.1 · CAS Tier 4 · Computer Science
International Journal on Artificial Intelligence Tools · Pub Date: 2024-04-25 · DOI: 10.1142/s0218213024600030
Bhanu Jain, Manfred Huber, R. Elmasri
{"title":"Fairness for Deep Learning Predictions Using Bias Parity Score Based Loss Function Regularization","authors":"Bhanu Jain, Manfred Huber, R. Elmasri","doi":"10.1142/s0218213024600030","DOIUrl":"https://doi.org/10.1142/s0218213024600030","url":null,"abstract":"Rising acceptance of machine learning driven decision support systems underscores the need for ensuring fairness for all stakeholders. This work proposes a novel approach to increase a Neural Network model’s fairness during the training phase. We offer a frame-work to create a family of diverse fairness enhancing regularization components that can be used in tandem with the widely accepted binary-cross-entropy based accuracy loss. We use Bias Parity Score (BPS), a metric that quantifies model bias with a single value, to build loss functions pertaining to different statistical measures — even for those that may not be developed yet. We analyze behavior and impact of the newly minted regularization components on bias. We explore their impact in the realm of recidivism and census-based adult income prediction. The results illustrate that apt fairness loss functions can mitigate bias without forsaking accuracy even for imbalanced datasets.","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140656090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
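A minimal, hypothetical PyTorch sketch of a BPS-style regularizer in the spirit of the abstract: BPS is treated as a min/max ratio of a differentiable statistic (here, mean predicted positive rate) across two groups, and 1 - BPS is added to the standard binary cross-entropy loss. The paper's exact BPS formulation, the choice of statistic, and the weight `lam` are assumptions, and the sketch assumes both groups appear in every batch.

```python
# Hypothetical sketch of a fairness-regularized loss in the spirit of the
# abstract: binary cross-entropy plus a penalty for group disparity in a
# differentiable statistic (mean predicted positive rate).
import torch

def bps_penalty(probs: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
    """1 - BPS, where BPS is the min/max ratio of a statistic across groups.

    probs: predicted probabilities, shape (batch,).
    group: binary protected-attribute indicator, shape (batch,).
    Assumes both groups are present in the batch.
    """
    eps = 1e-8
    stat_a = probs[group == 0].mean()
    stat_b = probs[group == 1].mean()
    bps = torch.minimum(stat_a, stat_b) / (torch.maximum(stat_a, stat_b) + eps)
    return 1.0 - bps  # 0 when the two groups' statistics match exactly

def fair_loss(probs, labels, group, lam: float = 1.0):
    bce = torch.nn.functional.binary_cross_entropy(probs, labels)
    return bce + lam * bps_penalty(probs, group)

# Usage with any model that outputs probabilities:
probs = torch.sigmoid(torch.randn(64, requires_grad=True))
labels = torch.randint(0, 2, (64,)).float()
group = torch.randint(0, 2, (64,))
loss = fair_loss(probs, labels, group)
loss.backward()
```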
Effects of Explanation Types on User Satisfaction and Performance in Human-agent Teams
IF 1.1 · CAS Tier 4 · Computer Science
International Journal on Artificial Intelligence Tools · Pub Date: 2024-04-25 · DOI: 10.1142/s0218213024600042
Bryan Lavender, Sami Abuhaimed, Sandip Sen
{"title":"Effects of Explanation Types on User Satisfaction and Performance in Human-agent Teams","authors":"Bryan Lavender, Sami Abuhaimed, Sandip Sen","doi":"10.1142/s0218213024600042","DOIUrl":"https://doi.org/10.1142/s0218213024600042","url":null,"abstract":"Automated agents, with rapidly increasing capabilities and ease of deployment, will assume more key and decisive roles in our societies. We will encounter and work together with such agents in diverse domains and even in peer roles. To be trusted and for seamless coordination, these agents would be expected and required to explain their decision making, behaviors, and recommendations. We are interested in developing mechanisms that can be used by human-agent teams to maximally leverage relative strengths of human and automated reasoners. We are interested in ad hoc teams in which team members start to collaborate, often to respond to emergencies or short-term opportunities, without significant prior knowledge about each other. In this study, we use virtual ad hoc teams, consisting of a human and an agent, collaborating over a few episodes where each episode requires them to complete a set of tasks chosen from available task types. Team members are initially unaware of the capabilities of their partners for the available task types, and the agent task allocator must adapt the allocation process to maximize team performance. It is important in collaborative teams of humans and agents to establish user confidence and satisfaction, as well as to produce effective team performance. Explanations can increase user trust in agent team members and in team decisions. The focus of this paper is on analyzing how explanations of task allocation decisions can influence both user performance and the human workers’ perspective, including factors such as motivation and satisfaction. We evaluate different types of explanation, such as positive, strength-based explanations and negative, weakness-based explanations, to understand (a) how satisfaction and performance are improved when explanations are presented, and (b) how factors such as confidence, understandability, motivation, and explanatory power correlate with satisfaction and performance. We run experiments on the CHATboard platform that allows virtual collaboration over multiple episodes of task assignments, with MTurk workers. We present our analysis of the results and conclusions related to our research hypotheses.","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140658925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
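As a toy, hypothetical sketch of the moving parts described above: an allocator that maintains running capability estimates per team member and task type, assigns each task greedily, and phrases its justification either positively (strength-based) or negatively (weakness-based). Every name and mechanism below is invented; it is not the CHATboard platform's actual protocol.

```python
# Hypothetical sketch: capability-tracking task allocation with two explanation
# styles (strength-based vs. weakness-based), the contrast studied above.
from collections import defaultdict

class Allocator:
    def __init__(self):
        # capability[member][task_type] -> list of observed scores
        self.capability = defaultdict(lambda: defaultdict(list))

    def observe(self, member, task_type, score):
        self.capability[member][task_type].append(score)

    def estimate(self, member, task_type, prior=0.5):
        scores = self.capability[member][task_type]
        return sum(scores) / len(scores) if scores else prior

    def allocate(self, task_type, members, style="strength"):
        best = max(members, key=lambda m: self.estimate(m, task_type))
        worst = min(members, key=lambda m: self.estimate(m, task_type))
        if style == "strength":  # positive, strength-based explanation
            why = (f"{best} gets '{task_type}': estimated skill "
                   f"{self.estimate(best, task_type):.2f} is the team's highest.")
        else:                    # negative, weakness-based explanation
            why = (f"{best} gets '{task_type}': {worst}'s estimated skill "
                   f"{self.estimate(worst, task_type):.2f} is the team's lowest.")
        return best, why

alloc = Allocator()
alloc.observe("human", "image_tagging", 0.9)
alloc.observe("agent", "image_tagging", 0.6)
print(alloc.allocate("image_tagging", ["human", "agent"], style="strength")[1])
```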
On Bounding the Behavior of Neurons
IF 1.1 · CAS Tier 4 · Computer Science
International Journal on Artificial Intelligence Tools · Pub Date: 2024-04-25 · DOI: 10.1142/s0218213024600029
Richard Borowski, Arthur Choi
{"title":"On Bounding the Behavior of Neurons","authors":"Richard Borowski, Arthur Choi","doi":"10.1142/s0218213024600029","DOIUrl":"https://doi.org/10.1142/s0218213024600029","url":null,"abstract":"A neuron with binary inputs and a binary output represents a Boolean function. Our goal is to extract this Boolean function into a tractable representation that will facilitate the explanation and formal verification of a neuron’s behavior. Unfortunately, extracting a neuron’s Boolean function is in general an NP-hard problem. However, it was recently shown that prime implicants of this Boolean function can be enumerated efficiently, with only polynomial time delay. Building on this result, we first propose a best-first search algorithm that is able to incrementally tighten the inner and outer bounds of a neuron’s Boolean function. Second, we show that these bounds correspond to truncated prime-implicant covers of the Boolean function. Next, we show how these bounds can be propagated in an elementary class of neural networks. Finally, we provide case studies that highlight our ability to bound the behavior of neurons.","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140655510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
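To make the setting concrete: a neuron with binary inputs, weights w, and bias b outputs 1 exactly when w·x + b ≥ 0 (i.e., sigmoid ≥ 0.5), so a partial assignment is an implicant iff the pre-activation stays non-negative even when every unassigned input is worst-cased, and a prime implicant is an implicant from which no literal can be dropped. The brute-force sketch below only illustrates these definitions for small input dimension; it is not the paper's polynomial-delay enumeration or best-first bounding algorithm.

```python
# Hypothetical sketch: the Boolean function of a binary-input threshold neuron,
# with a brute-force check for (prime) implicants. Illustration only; feasible
# solely for small numbers of inputs.
from itertools import combinations, product

def make_implicant_test(w, b):
    """Return a test: does partial assignment `term` force the neuron to 1?

    `term` maps input index -> 0/1; each unassigned input is worst-cased,
    contributing min(0, w_i) to the pre-activation.
    """
    def is_implicant(term):
        total = b
        for i, wi in enumerate(w):
            total += wi * term[i] if i in term else min(0.0, wi)
        return total >= 0  # sigmoid(total) >= 0.5  <=>  total >= 0
    return is_implicant

def prime_implicants(w, b):
    is_implicant = make_implicant_test(w, b)
    primes = []
    for size in range(len(w) + 1):
        for idxs in combinations(range(len(w)), size):
            for vals in product((0, 1), repeat=size):
                term = dict(zip(idxs, vals))
                # Prime: an implicant where dropping any literal breaks it.
                if is_implicant(term) and all(
                    not is_implicant({i: v for i, v in term.items() if i != j})
                    for j in term
                ):
                    primes.append(term)
    return primes

# Example neuron: output 1 iff 2*x0 - 1*x1 + 1*x2 - 0.5 >= 0.
# Prints [{0: 1}, {1: 0, 2: 1}]: "x0" and "not-x1 and x2" are prime implicants.
print(prime_implicants([2.0, -1.0, 1.0], -0.5))
```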
Predictive Policing: A Fairness-aware Approach
IF 1.1 · CAS Tier 4 · Computer Science
International Journal on Artificial Intelligence Tools · Pub Date: 2024-04-25 · DOI: 10.1142/s0218213024600054
Ava Downey, Sheikh Rabiul Islam, Md Kamruzzman Sarker
{"title":"Predictive Policing: A Fairness-aware Approach","authors":"Ava Downey, Sheikh Rabiul Islam, Md Kamruzzman Sarker","doi":"10.1142/s0218213024600054","DOIUrl":"https://doi.org/10.1142/s0218213024600054","url":null,"abstract":"As Artificial Intelligence (AI) systems become increasingly embedded in our daily lives, it is of utmost importance to ensure that they are both fair and reliable. Regrettably, this is not always the case for predictive policing systems, as evidence shows biases based on age, race, and sex, leading to wrongful identifications of individuals as potential criminals. Given the existing criticism of the system’s unjust treatment of minority groups, it becomes essential to address and mitigate this concerning trend. This study delved into the infusion of domain knowledge in the predictive policing system, aiming to minimize prevailing fairness issues. The experimental results indicate a considerable increase in fairness across all metrics for all protected classes, thus fostering greater trust in the predictive policing system by reducing the unfair treatment of individuals.","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140654850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
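The abstract reports fairness gains "across all metrics for all protected classes" without naming the metrics. A hedged sketch of the per-group rates such an evaluation typically tabulates, positive-prediction rate (demographic parity) and true-positive rate (equal opportunity), follows; the predictions and groups below are invented.

```python
# Hypothetical sketch: per-group fairness report of the kind such an evaluation
# would tabulate. The paper's exact metrics are not specified here.
import numpy as np

def group_rates(y_true, y_pred, groups):
    """Per-group positive-prediction rate and true-positive rate."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        pos_rate = y_pred[mask].mean()          # demographic-parity statistic
        tp_mask = mask & (y_true == 1)
        tpr = y_pred[tp_mask].mean() if tp_mask.any() else float("nan")
        report[g] = {"positive_rate": pos_rate, "tpr": tpr}
    return report

# Toy usage with invented predictions and one binary protected attribute:
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
for g, stats in group_rates(y_true, y_pred, groups).items():
    print(f"group {g}: {stats}")
```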
Advances in Explainable, Fair, and Trustworthy AI
IF 1.1 · CAS Tier 4 · Computer Science
International Journal on Artificial Intelligence Tools · Pub Date: 2024-04-22 · DOI: 10.1142/s0218213024030015
Sheikh Rabiul Islam, Ingrid Russell, William Eberle, Douglas Talbert, Md Golam Moula Mehedi Hasan
{"title":"Advances in Explainable, Fair, and Trustworthy AI","authors":"Sheikh Rabiul Islam, Ingrid Russell, William Eberle, Douglas Talbert, Md Golam Moula Mehedi Hasan","doi":"10.1142/s0218213024030015","DOIUrl":"https://doi.org/10.1142/s0218213024030015","url":null,"abstract":"This special issue encapsulates the multifaceted landscape of contemporary challenges and innovations in Artificial Intelligence (AI) and Machine Learning (ML), with a particular focus on issues related to explainability, fairness, and trustworthiness. The exploration begins with the computational intricacies of understanding and explaining the behavior of binary neurons within neural networks. Simultaneously, ethical dimensions in AI are scrutinized, emphasizing the nuanced considerations required in defining autonomous ethical agents. The pursuit of fairness is exemplified through frameworks and methodologies in machine learning, addressing biases and promoting trust, particularly in predictive policing systems. Human-agent interaction dynamics are elucidated, revealing the nuanced relationship between task allocation, performance, and user satisfaction. The imperative of interpretability in complex predictive models is highlighted, emphasizing a query-driven methodology. Lastly, in the context of trauma triage, the study underscores the delicate trade-off between model accuracy and practitioner-friendly interpretability, introducing innovative strategies to address biases and trust-related metrics.","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140674229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Winners of Nikolaos Bourbakis Award for 2023
IF 1.1 · CAS Tier 4 · Computer Science
International Journal on Artificial Intelligence Tools · Pub Date: 2024-04-22 · DOI: 10.1142/s0218213024820013
{"title":"Winners of Nikolaos Bourbakis Award for 2023","authors":"","doi":"10.1142/s0218213024820013","DOIUrl":"https://doi.org/10.1142/s0218213024820013","url":null,"abstract":"","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140675055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0