International Conference of the Italian Association for Artificial Intelligence: Latest Publications

Election Manipulation in Social Networks with Single-Peaked Agents
International Conference of the Italian Association for Artificial Intelligence. Pub Date: 2023-08-21. DOI: 10.48550/arXiv.2308.10845
V. Auletta, Francesco Carbone, Diodato Ferraioli

Abstract: Several elections held in recent years have been characterized by attempts to manipulate their outcome through the diffusion of fake or malicious news over social networks. This problem has been recognized as a critical issue for the robustness of our democracies, and analyzing and understanding how such manipulations may occur is crucial to designing effective countermeasures. Many studies have observed that designing an optimal manipulation is, in general, a computationally hard task. Nevertheless, the literature on bribery in voting and election manipulation has frequently observed that most hardness results melt away when one focuses on the setting of (nearly) single-peaked agents, i.e., when each voter has a preferred candidate (usually, the one closest to her own belief) and her preference for the remaining candidates decreases with the distance between the candidate's position and her belief. Unfortunately, no such analysis had been done for election manipulations run on social networks. In this work, we try to close this gap: specifically, we consider a setting for election manipulation that naturally gives rise to (nearly) single-peaked preferences, and we evaluate the complexity of the election manipulation problem in this setting. While most of the hardness and approximation results still hold, we show that single-peaked preferences allow the design of simple, efficient, and effective heuristics for election manipulation.

Citations: 0
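The abstract's definition of single-peaked agents — a voter prefers the candidate closest to her belief, with preference falling off with distance — can be sketched in a few lines. The candidate positions and voter beliefs below are illustrative values, not taken from the paper:

```python
# Sketch: deriving single-peaked preference orders from spatial beliefs.
# Positions are hypothetical points on a left-right axis in [0, 1].

def preference_order(belief, candidates):
    """Rank candidates by increasing distance from the voter's belief."""
    return sorted(candidates, key=lambda c: abs(c - belief))

candidates = [0.1, 0.4, 0.6, 0.9]   # candidate positions on the axis
voters = [0.0, 0.5, 1.0]            # each voter's own belief

rankings = [preference_order(v, candidates) for v in voters]
# Each ranking is single-peaked w.r.t. the candidate axis: preference
# decreases monotonically on each side of the voter's favourite candidate.
```

A manipulation heuristic can exploit this structure, since moving a voter's belief shifts her whole ranking in a predictable way.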
Unraveling ChatGPT: A Critical Analysis of AI-Generated Goal-Oriented Dialogues and Annotations
International Conference of the Italian Association for Artificial Intelligence. Pub Date: 2023-05-23. DOI: 10.48550/arXiv.2305.14556
Tiziano Labruna, Sofia Brenna, Andrea Zaninello, B. Magnini

Abstract: Large pre-trained language models have exhibited unprecedented capabilities in producing high-quality text via prompting techniques. This opens new possibilities for data collection and annotation, particularly in situations where such data is scarce, complex to gather, expensive, or even sensitive. In this paper, we explore the potential of these models to generate and annotate goal-oriented dialogues, and we conduct an in-depth analysis of their quality. Our experiments employ ChatGPT and encompass three categories of goal-oriented dialogues (task-oriented, collaborative, and explanatory), two generation modes (interactive and one-shot), and two languages (English and Italian). Based on extensive human evaluations, we demonstrate that the quality of the generated dialogues and annotations is on par with those produced by humans.

Citations: 5
Knowledge Acquisition and Completion for Long-Term Human-Robot Interactions using Knowledge Graph Embedding
International Conference of the Italian Association for Artificial Intelligence. Pub Date: 2023-01-17. DOI: 10.48550/arXiv.2301.06834
E. Bartoli, F. Argenziano, V. Suriani, D. Nardi

Abstract: In Human-Robot Interaction (HRI) systems, a challenging task is sharing the representation of the operational environment between users and robots, fusing symbolic knowledge with perceptions. With existing HRI pipelines, users can teach robots concepts to increase their knowledge base. Unfortunately, the data coming from users is usually not dense enough to build a consistent representation. Furthermore, existing approaches are not able to incrementally build up their knowledge base, which is essential when robots have to deal with dynamic contexts. To this end, we propose an architecture that gathers data from users and environments over long runs of continual learning. We adopt Knowledge Graph Embedding techniques to generalize the acquired information, with the goal of incrementally extending the robot's inner representation of the environment. We evaluate the performance of the overall continual learning architecture by measuring the robot's ability to learn entities and relations coming from unknown contexts through a series of incremental learning sessions.

Citations: 1
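Knowledge graph embedding scores how plausible a missing fact is, which is what "completion" means here. As a minimal sketch — the paper does not name a specific KGE model, so TransE-style translation scoring and the toy robot facts below are assumptions for illustration only:

```python
# TransE-style scoring sketch: a triple (h, r, t) is plausible when the
# head embedding translated by the relation lands near the tail embedding.

def score(h, r, t):
    """Euclidean distance ||h + r - t||: smaller means more plausible."""
    return sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Toy 2-d embeddings (hypothetical entities a home robot might learn).
emb = {
    "cup":        [1.0, 0.0],
    "kitchen":    [1.0, 1.0],
    "garage":     [0.0, 0.0],
    "located_in": [0.0, 1.0],
}

# Completion query: rank candidate tails for (cup, located_in, ?).
candidates = ["kitchen", "garage"]
best = min(candidates, key=lambda t: score(emb["cup"], emb["located_in"], emb[t]))
```

In an incremental setting, new entities from a learning session get fresh embeddings that are trained against the existing ones.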
Combining Contrastive Learning and Knowledge Graph Embeddings to develop medical word embeddings for the Italian language
International Conference of the Italian Association for Artificial Intelligence. Pub Date: 2022-11-09. DOI: 10.48550/arXiv.2211.05035
Denys Amore Bondarenko, Roger Ferrod, Luigi Di Caro

Abstract: Word embeddings play a significant role in today's Natural Language Processing tasks and applications. While pre-trained models may be employed directly and integrated into existing pipelines, they are often fine-tuned to better fit specific languages or domains. In this paper, we attempt to improve the embeddings available for the under-covered niche of the Italian medical domain by combining Contrastive Learning (CL) and Knowledge Graph Embedding (KGE). The main objective is to improve the accuracy of semantic similarity between medical terms, which is also used as the evaluation task. Since the Italian language lacks medical texts and controlled vocabularies, we developed a specific solution that combines pre-existing CL methods (multi-similarity loss, contextualization, dynamic sampling) with the integration of KGEs, creating a new variant of the loss. Although the obtained results do not outperform the state of the art, represented by multilingual models, they are encouraging: they provide a significant leap in performance compared to the starting model while using a significantly smaller amount of data.

Citations: 0
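The contrastive idea underlying the loss — pull embeddings of synonymous terms together, push unrelated ones apart — can be sketched with a simple pairwise objective. The margin and vectors are illustrative; the paper's actual loss (a multi-similarity variant integrating KGEs) is considerably more elaborate:

```python
# Sketch of a pairwise contrastive objective over term embeddings.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(u, v, similar, margin=0.5):
    s = cosine(u, v)
    if similar:                      # positive pair: maximise similarity
        return 1.0 - s
    return max(0.0, s - margin)      # negative pair: push below the margin

# Hypothetical embeddings of a synonym pair and an unrelated pair.
pos = contrastive_loss([1, 0], [1, 0.1], similar=True)
neg = contrastive_loss([1, 0], [0, 1], similar=False)
```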
Verifying a stochastic model for the spread of a SARS-CoV-2-like infection: opportunities and limitations
International Conference of the Italian Association for Artificial Intelligence. Pub Date: 2022-10-31. DOI: 10.48550/arXiv.2211.00605
Marco Roveri, Franc Ivankovic, L. Palopoli, D. Fontanelli

Abstract: There is growing interest in modeling and analyzing the spread of diseases like the SARS-CoV-2 infection using stochastic models. These models are typically analyzed quantitatively; they are rarely subjected to validation using formal verification approaches, nor do they leverage the policy synthesis and analysis techniques developed in formal verification. In this paper, we take a Markovian stochastic model for the spread of a SARS-CoV-2-like infection, in which a state represents the number of subjects in each health condition. The model accounts for the different parameters that may affect the spread of the disease and exposes the decision variables that can be used to control it. We show that modeling the problem within state-of-the-art model checkers is feasible and opens several opportunities. However, there are severe limitations due to (i) the expressivity of the existing stochastic model checkers on one side, and (ii) the size of the resulting Markovian model, even for small population sizes, on the other.

Citations: 1
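A state counting subjects per health condition can be simulated with a toy discrete-time stochastic compartment step. The rates, population size, and the plain SIR structure below are illustrative assumptions; the paper's model has more compartments and parameters. Note also why the state space explodes: with three compartments and population n, there are already (n+1)(n+2)/2 reachable states.

```python
# Toy discrete-time stochastic SIR step over a counted population.
import random

def sir_step(s, i, r, beta=0.3, gamma=0.1, rng=None):
    """One step: each susceptible is infected w.p. beta*i/n, each
    infected recovers w.p. gamma (illustrative rates)."""
    rng = rng or random.Random(0)
    n = s + i + r
    new_inf = sum(rng.random() < beta * i / n for _ in range(s))
    new_rec = sum(rng.random() < gamma for _ in range(i))
    return s - new_inf, i + new_inf - new_rec, r + new_rec

state = (95, 5, 0)            # (susceptible, infected, recovered)
for _ in range(10):
    state = sir_step(*state)  # population count is invariant
```

A probabilistic model checker would instead enumerate all such states and transition probabilities symbolically, which is exactly where the size limitation noted in the abstract bites.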
Deep learning for ECoG brain-computer interface: end-to-end vs. hand-crafted features
International Conference of the Italian Association for Artificial Intelligence. Pub Date: 2022-10-05. DOI: 10.48550/arXiv.2210.02544
Maciej Śliwowski, Matthieu Martin, A. Souloumiac, P. Blanchart, T. Aksenova

Abstract: Deep learning (DL) models have become commonly used in brain signal processing. However, the performance gain of end-to-end DL models over conventional ML approaches is often statistically significant but moderate, and typically comes at the cost of increased computational load and deteriorated explainability. The core idea behind deep learning approaches is scaling performance with bigger datasets. However, brain signals are temporal data with a low signal-to-noise ratio, uncertain labels, and nonstationarity over time. These factors may influence the training process and slow down the models' performance improvement, and their influence may differ between an end-to-end DL model and one using hand-crafted features. Addressing a comparison not studied before, this paper compares models that use the raw ECoG signal with models that use time-frequency features for BCI motor imagery decoding, and investigates whether the current dataset size is a stronger limitation for either kind of model. Finally, the learned filters are compared to identify differences between hand-crafted features and features optimized with backpropagation. To compare the effectiveness of both strategies, we used a multilayer perceptron and a mix of convolutional and LSTM layers that had already proved effective for this task. The analysis was performed on a long-term clinical trial database (almost 600 minutes of recordings) of a tetraplegic patient executing motor imagery tasks for 3D hand translation. For the given dataset, the results showed that end-to-end training may not be significantly better than the hand-crafted-features-based model. The performance gap shrinks with bigger datasets, but considering the increased computational load, end-to-end training may not be profitable for this application.

Citations: 2
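The raw-signal vs. hand-crafted-features split can be made concrete with a toy windowing step. The per-window power feature below is a deliberately crude stand-in for the time-frequency features the paper uses; the signal values are invented:

```python
# Sketch: the same windowed signal feeds either pipeline.

def window(signal, size, step):
    """Split a 1-d signal into fixed-size windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def power_feature(win):
    """Mean signal power of one window (stand-in for time-frequency features)."""
    return sum(x * x for x in win) / len(win)

raw = [0.0, 1.0, 0.0, -1.0, 0.0, 2.0, 0.0, -2.0]   # toy ECoG trace
windows = window(raw, size=4, step=4)

features = [power_feature(w) for w in windows]  # hand-crafted pipeline input
# The end-to-end pipeline would instead feed `windows` directly to the model,
# leaving feature extraction to the learned convolutional filters.
```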
Neural Networks Reduction via Lumping
International Conference of the Italian Association for Artificial Intelligence. Pub Date: 2022-09-15. DOI: 10.48550/arXiv.2209.07475
Dalila Ressi, R. Romanello, S. Rossi, C. Piazza

Abstract: The increasing size of recently proposed neural networks makes it hard to implement them on embedded devices, where memory, battery, and computational power are a non-trivial bottleneck. For this reason, the network compression literature has been thriving in recent years, and a large number of solutions have been published to reduce both the number of operations and the parameters involved in the models. Unfortunately, most of these reduction techniques are heuristic methods and usually require at least one re-training step to recover accuracy. The need for model reduction procedures is also well known in the fields of Verification and Performance Evaluation, where large efforts have been devoted to defining quotients that preserve the observable underlying behaviour. In this paper, we try to bridge the gap between the most popular and effective network reduction strategies and formal notions, such as lumpability, introduced for the verification and evaluation of Markov Chains. Elaborating on lumpability, we propose a pruning approach that reduces the number of neurons in a network without using any data or fine-tuning, while exactly preserving the network's behaviour. Relaxing the constraints of the exact quotienting method, we can give a formal explanation of some of the most common reduction techniques.

Citations: 0
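The flavour of a data-free, behaviour-preserving reduction can be shown with the simplest lumpable case: two neurons with identical incoming weights and bias always fire identically, so they can be merged by summing their outgoing weights. This is only the most basic instance of the idea; the paper's lumpability notion is more general:

```python
# Sketch: exact lumping of duplicate neurons in one dense layer.

def lump_layer(w_in, b, w_out):
    """w_in[j]: incoming weights of neuron j; b[j]: its bias;
    w_out[j]: its outgoing weights. Merges neurons with equal (w_in, b)."""
    groups = {}
    for j, key in enumerate((tuple(w), bias) for w, bias in zip(w_in, b)):
        groups.setdefault(key, []).append(j)
    new_in, new_b, new_out = [], [], []
    for (w, bias), members in groups.items():
        new_in.append(list(w))
        new_b.append(bias)
        # Summing outgoing weights preserves every downstream pre-activation,
        # because the merged neurons' activations were always identical.
        new_out.append([sum(w_out[j][k] for j in members)
                        for k in range(len(w_out[0]))])
    return new_in, new_b, new_out

w_in  = [[1.0, 2.0], [1.0, 2.0], [0.5, 0.0]]   # neurons 0 and 1 are duplicates
b     = [0.1, 0.1, 0.0]
w_out = [[1.0], [2.0], [3.0]]
new_in, new_b, new_out = lump_layer(w_in, b, w_out)
```

No data or fine-tuning is needed: the reduced layer computes exactly the same function as the original.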
Option Discovery for Autonomous Generation of Symbolic Knowledge
International Conference of the Italian Association for Artificial Intelligence. Pub Date: 2022-06-03. DOI: 10.48550/arXiv.2206.01815
Gabriele Sartor, Davide Zollo, M. C. Mayer, A. Oddi, R. Rasconi, V. Santucci

Abstract: In this work we present an empirical study demonstrating that an artificial agent can be developed that is capable of autonomously exploring an experimental scenario. During exploration, the agent discovers and learns interesting options that allow it to interact with the environment without any pre-assigned goal, then abstracts and re-uses the acquired knowledge to solve tasks assigned ex post. We test the system in the Treasure Game domain described in the recent literature, and we empirically demonstrate that the discovered options can be abstracted into a probabilistic symbolic planning model (using the PPDDL language), which allows the agent to generate symbolic plans to achieve extrinsic goals.

Citations: 0
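An "option" in this literature is a temporally extended action given by an initiation set, a policy, and a termination condition. The sketch below uses that standard triple; the states and option name are invented and do not reproduce the Treasure Game specifics:

```python
# Sketch: the (initiation set, policy, termination) triple of an option.

class Option:
    def __init__(self, name, initiation, policy, termination):
        self.name = name
        self.initiation = initiation    # states where the option may start
        self.policy = policy            # state -> primitive action
        self.termination = termination  # states where the option stops

    def applicable(self, state):
        return state in self.initiation

# Hypothetical discovered option in a grid-like game.
go_to_key = Option(
    name="go_to_key",
    initiation={"start_room"},
    policy={"start_room": "move_right"},
    termination={"key_room"},
)
```

Abstraction then turns each option into a PPDDL-style operator whose precondition summarises the initiation set and whose probabilistic effects summarise where the option tends to terminate.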
Knowledge Enhanced Neural Networks for relational domains
International Conference of the Italian Association for Artificial Intelligence. Pub Date: 2022-05-31. DOI: 10.48550/arXiv.2205.15762
Alessandro Daniele, L. Serafini

Abstract: In the recent past, there has been growing interest in Neural-Symbolic Integration frameworks, i.e., hybrid systems that integrate connectionist and symbolic approaches to obtain the best of both worlds. In this work we focus on a specific method, KENN (Knowledge Enhanced Neural Networks), a Neural-Symbolic architecture that injects prior logical knowledge into a neural network by adding, on top of it, a residual layer that modifies the initial predictions according to the knowledge. Among the advantages of this strategy is the inclusion of clause weights: learnable parameters that represent the strength of the clauses, meaning that the model can learn the impact of each rule on the final predictions. As a special case, if the training data contradicts a constraint, KENN learns to ignore it, making the system robust to the presence of wrong knowledge. In this paper, we propose an extension of KENN for relational data. One of the main advantages of KENN is its scalability, thanks to a flexible treatment of the dependencies between rules, obtained by stacking multiple logical layers. We show the efficacy of this strategy experimentally: KENN is capable of improving the performance of the underlying neural network, obtaining better or comparable accuracy with respect to two other related methods that combine learning with logic, while requiring significantly less time for learning.

Citations: 3
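The residual "knowledge enhancement" idea can be sketched as a layer that nudges pre-activations toward satisfying a clause, scaled by a learnable clause weight. The boost rule below (raise only the literal closest to being true) is a simplified stand-in for KENN's actual t-conorm-based boost function, and the predicates are invented:

```python
# Simplified sketch of a clause-enhancement residual layer.
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def enhance(preactivations, clause_literals, clause_weight):
    """For clause 'l1 or l2 or ...', raise the pre-activation of the
    literal closest to being true, scaled by the clause's weight."""
    z = dict(preactivations)
    best = max(clause_literals, key=lambda lit: z[lit])
    z[best] += clause_weight   # learnable parameter = strength of the rule
    return z

# Clause (hypothetical): Smokes(a) or Cancer(a).
z0 = {"Smokes(a)": -1.0, "Cancer(a)": -2.0}
z1 = enhance(z0, ["Smokes(a)", "Cancer(a)"], clause_weight=1.5)

p_before = sigmoid(z0["Smokes(a)"])
p_after = sigmoid(z1["Smokes(a)"])   # prediction moved toward the clause
```

A clause weight learned to be near zero would leave predictions untouched, which is how contradicted knowledge gets ignored.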
Domino Saliency Metrics: Improving Existing Channel Saliency Metrics with Structural Information
International Conference of the Italian Association for Artificial Intelligence. Pub Date: 2022-05-04. DOI: 10.48550/arXiv.2205.02131
Kaveena Persand, Andrew Anderson, David Gregg

Abstract: Channel pruning is used to reduce the number of weights in a Convolutional Neural Network (CNN). It removes slices of the weight tensor so that the convolution layers remain dense. The removal of these weight slices from a single layer causes a mismatch in the number of feature maps between layers of the network. A simple solution is to force the number of feature maps between layers to match by removing weight slices from subsequent layers as well. This additional constraint becomes more apparent in DNNs with branches, where multiple channels need to be pruned together to keep the network dense. Popular pruning saliency metrics do not factor in the structural dependencies that arise in DNNs with branches. We propose Domino metrics (built on existing channel saliency metrics) to reflect these structural constraints. We test Domino saliency metrics against baseline channel saliency metrics on multiple networks with branches. Domino saliency metrics improved pruning rates on most tested networks, by up to 25% for AlexNet on CIFAR-10.

Citations: 0
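The structural constraint can be sketched by scoring channel *groups* rather than single channels: channels tied together by a branch (e.g. a skip connection) are pruned jointly, so their saliencies are combined. Summing per-channel saliencies is one plausible combination here; the grouping and toy values are illustrative, not the paper's exact construction:

```python
# Sketch: joint saliency for channel groups that must be pruned together.

def l1_saliency(channel_weights):
    """Baseline per-channel saliency: L1 norm of the channel's weights."""
    return sum(abs(w) for w in channel_weights)

def joint_saliency(groups, saliencies):
    """groups: tuples of channel ids forced to be pruned jointly."""
    return {g: sum(saliencies[c] for c in g) for g in groups}

# Toy per-channel saliencies in two layers joined by a skip connection.
saliencies = {("layer1", 0): 0.2, ("layer1", 1): 1.5,
              ("layer2", 0): 0.9, ("layer2", 1): 0.1}
groups = [(("layer1", 0), ("layer2", 0)),
          (("layer1", 1), ("layer2", 1))]

joint = joint_saliency(groups, saliencies)
prune_first = min(joint, key=joint.get)   # least salient joint group
```

Scoring channels independently could pick ("layer1", 0) and ("layer2", 1) separately, even though neither can be removed alone without breaking the branch; the group view avoids that.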