{"title":"Detecting DoS and DDoS Attacks through Sparse U-Net-like Autoencoders","authors":"Nunzio Cassavia, Francesco Folino, M. Guarascio","doi":"10.1109/ICTAI56018.2022.00203","DOIUrl":"https://doi.org/10.1109/ICTAI56018.2022.00203","url":null,"abstract":"In the last few years, we have experienced exponential growth in the number of cyber-attacks performed against companies and organizations. In particular, because of their ability to mask themselves as legitimate traffic, DoS and DDoS have become two of the most common kinds of attacks on computer networks. Modern Intrusion Detection Systems (IDSs) represent a precious tool to mitigate the risk of unauthorized network access, as they allow for accurately discriminating between benign and malicious traffic. Among the plethora of approaches proposed in the literature for detecting network intrusions, Deep Learning (DL)-based IDSs have proven to be an effective solution because of their ability to analyze low-level data (e.g., flow and packet traffic) directly. However, many current solutions require large amounts of labeled data to yield reliable models. Unfortunately, in real scenarios, only small portions of the data carry label information, due to the cost of manual labeling conducted by human experts. Labels can even be completely missing for some reason (e.g., privacy concerns). To cope with the lack of labeled data, we propose an unsupervised DL-based intrusion detection methodology, combining an ad-hoc preprocessing procedure on input data with a sparse U-Net-like autoencoder architecture. 
The experimentation on an IDS benchmark dataset substantiates our approach's ability to recognize malicious behaviors correctly.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129949288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is it a bug or a feature? Identifying software bugs using graph attention networks","authors":"Nikos Kanakaris, Ilias Siachos, N. Karacapilidis","doi":"10.1109/ICTAI56018.2022.00215","DOIUrl":"https://doi.org/10.1109/ICTAI56018.2022.00215","url":null,"abstract":"This paper proposes a novel approach for identifying software bugs by building on a meaningful combination of word embeddings, graph-based text representations and graph attention networks. Existing approaches aim to advance each of the above components individually, without considering an integrative approach. As a result, they ignore information that is related to either the structure of a given text or an individual word of the text. Instead, our approach seamlessly incorporates both semantic and structural characteristics into a graph, which are then fed to a graph attention network in order to classify GitHub issues as bugs or features. Our experimental results demonstrate a significant improvement in terms of accuracy, precision and recall of the proposed approach compared to a list of classical and graph-based machine learning models. The dataset for the experiments reported in this paper has been retrieved from the kaggle.com platform and concerns GitHub issues with short-text attributes.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128756094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FoCA: Failure-oriented Class Augmentation for Robust Image Classification","authors":"M. K. Ahuja, Sahil Sahil, Helge Spieker","doi":"10.1109/ICTAI56018.2022.00144","DOIUrl":"https://doi.org/10.1109/ICTAI56018.2022.00144","url":null,"abstract":"Image classification with classes of varying difficulty can cause performance disparity in deep learning models and reduce the overall performance and reliability of the predictions. In this paper, we introduce a failure-oriented class augmentation (FoCA) technique to address the problem of imbalanced performance in image classification, where the trained model has performance deficits in some of the dataset's classes. By employing Generative Adversarial Networks (GANs) to augment these deficit classes, we finetune the model towards a balanced performance among the different classes and an overall better performance on the whole dataset. Unlike earlier works, our method focuses during training on those classes with the lowest accuracy after the initial training phase. Only these classes are augmented, which boosts their accuracy and leads to better overall performance. FoCA is designed to be used with a light-weight GAN method to make GAN-based augmentation viable and effective, even for datasets with only a few images per class, while simultaneously requiring less computation than other, more complex GAN methods. Our implementation of FoCA combines this light-weight GAN method for class-wise data augmentation with state-of-the-art deep neural network techniques for training. 
Experiments show an overall improvement from FoCA with competitive or better accuracy than the previous state-of-the-art on five datasets with different sizes and image resolutions.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128620947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Arg-XAI: a Tool for Explaining Machine Learning Results","authors":"Stefano Bistarelli, Alessio Mancinelli, Francesco Santini, Carlo Taticchi","doi":"10.1109/ICTAI56018.2022.00037","DOIUrl":"https://doi.org/10.1109/ICTAI56018.2022.00037","url":null,"abstract":"The requirement of explainability is gaining more and more importance in Artificial Intelligence applications based on Machine Learning techniques, especially in those contexts where critical decisions are entrusted to software systems (think, for example, of financial and medical consultancy). In this paper, we propose an Argumentation-based methodology for explaining the results predicted by Machine Learning models. Argumentation provides frameworks that can be used to represent and analyse logical relations between pieces of information, serving as a basis for constructing human tailored rational explanations to a given problem. In particular, we use extension-based semantics to find the rationale behind a class prediction.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129246182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Probabilistic Matrix Completion","authors":"Xuan Li, Li Zhang","doi":"10.1109/ICTAI56018.2022.00206","DOIUrl":"https://doi.org/10.1109/ICTAI56018.2022.00206","url":null,"abstract":"Collaborative filtering (CF) is typically a matrix completion (MC) problem where the unknown values of the rating matrix are predicted by finding similar rating patterns based on the given entries. The most common paradigm of MC is to factorize the rating matrix into two low-rank matrices. The basic matrix factorization (MF) and its extensions, i.e. conventional MF-based models, have achieved great success in the past, and recently models based on deep learning have become popular. However, some recent works have pointed out that many newly proposed methods are outperformed by conventional MF-based models, which demonstrates the simplicity and effectiveness of the basic MF and its extensions. Observing that the basic MF cannot be formulated within Probabilistic Matrix Factorization (PMF), this paper proposes a new model called Probabilistic Matrix Completion (PMC), which can interpret the basic MF from a probabilistic perspective. Unlike PMF, which samples each latent vector for each row in the rating matrix indiscriminately, PMC considers the different sampling frequencies of rows (and columns) and computes the prior distribution based on the observed entries. To further demonstrate the difference between PMF and PMC, we incorporate geometric structure into PMC and finally obtain a model named GPMC that can outperform various state-of-the-art CF methods in terms of rating prediction. 
We validate our claims on six real-world datasets.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129886654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-branch selection fusion fine-grained classification algorithm based on coordinate attention localization","authors":"Feng Zhang, Gaocai Wang","doi":"10.1109/ICTAI56018.2022.00024","DOIUrl":"https://doi.org/10.1109/ICTAI56018.2022.00024","url":null,"abstract":"Object localization has been the focus of research in FGVC (Fine-Grained Visual Categorization). With the aim of improving the accuracy and precision of object localization in multi-branch networks, as well as the robustness and universality of object localization methods, our study mainly focuses on how to combine coordinate attention and feature activation maps for object localization. The model in this paper is a three-branch model comprising a raw branch, an object branch, and a part branch. The images are fed directly into the raw branch. CAOLM is used to localize and crop objects in the image to generate the input for the object branch. APPM is used to propose part regions at different scales. The three classes of input images undergo end-to-end weakly supervised learning through different branches of the network. The model expands the receptive field to capture multi-scale features by SB-ASPP. It fuses the feature maps obtained from the raw branch and the object branch with SBBlock, and the complete features of the raw branch are used to supplement the missing information of the object branch. Extensive experimental results on the CUB-200-2011, FGVC-Aircraft and Stanford Cars datasets show that our method has the best classification performance on FGVC-Aircraft and also has competitive performance on the other datasets. 
A small number of parameters and fast inference speed are further advantages of our model.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130532754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Building Height Restoration Method of Remote Sensing Images based on Faster RCNN","authors":"Biao Li, Xucan Chen, Zuo Lin","doi":"10.1109/ICTAI56018.2022.00146","DOIUrl":"https://doi.org/10.1109/ICTAI56018.2022.00146","url":null,"abstract":"To accurately obtain building height information from a single remote sensing image, we propose a height restoration method that is mainly composed of two parts: building shadow rotation detection and building height calculation. The first part adds a skip connection structure and rotated branches to Faster RCNN to achieve rotated shadow detection. The second part uses the imaging date and geographic latitude to restore building height based on the geometric relationship between the building and its shadow. Experiments show that the accuracy of height restoration is 95.04%. Compared with the state-of-the-art method, our method offers simpler implementation, lower data requirements, faster speed, and higher accuracy.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126311852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved Faster-RCNN Based Biomarkers Detection in Retinal Optical Coherence Tomography Images","authors":"Xiaoming Liu, Kejie Zhou, Man Wang, Ying Zhang","doi":"10.1109/ICTAI56018.2022.00166","DOIUrl":"https://doi.org/10.1109/ICTAI56018.2022.00166","url":null,"abstract":"Optical coherence tomography (OCT) is an important ophthalmic imaging technique, which can generate high-resolution anatomical images and plays an important role in the detection of retinal biomarkers. However, the appearance of retinal biomarkers is complex: some biomarkers differ greatly among categories, while many features are similar. In addition, the boundaries of retinal biomarkers are often indistinguishable from the background. In this study, we propose a self-supervised contrastive boundary consistency network (SCB-Net) to detect retinal biomarkers in OCT images. A self-supervised contrastive classification module is proposed to improve the network's ability to discriminate between different categories of retinal biomarkers. Furthermore, a boundary consistency term is added on top of the original regressor to jointly constrain boundary localization, bringing the biomarker boundaries located by the network closer to the ground truth. 
The experimental results on a local dataset show that our proposed SCB-Net method achieves good detection performance compared with other detection methods.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126342800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Optimal Fair Scoring Systems for Multi-Class Classification","authors":"Julien Rouzot, Julien Ferry, Marie-José Huguet","doi":"10.1109/ICTAI56018.2022.00036","DOIUrl":"https://doi.org/10.1109/ICTAI56018.2022.00036","url":null,"abstract":"Machine Learning models are increasingly used for decision making, in particular in high-stakes applications such as credit scoring, medicine or recidivism prediction. However, there are growing concerns about these models with respect to their lack of interpretability and the undesirable biases they can generate or reproduce. While the concepts of interpretability and fairness have been extensively studied by the scientific community in recent years, few works have tackled the general multi-class classification problem under fairness constraints, and none of them proposes to generate fair and interpretable models for multi-class classification. In this paper, we use Mixed-Integer Linear Programming (MILP) techniques to produce inherently interpretable scoring systems under sparsity and fairness constraints, for the general multi-class classification setup. Our work generalizes the SLIM (Supersparse Linear Integer Models) framework that was proposed by Rudin and Ustun to learn optimal scoring systems for binary classification. 
The use of MILP techniques allows for easy integration of diverse operational constraints (such as, but not restricted to, fairness or sparsity), as well as for building certifiably optimal models (or sub-optimal models with a bounded optimality gap).","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130280639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Speech Signal Analysis of Autistic Children Based on Time-Frequency Domain Distinguishing Feature Extraction","authors":"Le Chen, Chao Zhang, Xiangping Gao","doi":"10.1109/ICTAI56018.2022.00164","DOIUrl":"https://doi.org/10.1109/ICTAI56018.2022.00164","url":null,"abstract":"With the rising incidence of Autism Spectrum Disorder (ASD), a new screening method capable of diagnosing ASD more accurately and conveniently is urgently needed. Unlike traditional methods based on rating scales, electroencephalogram (EEG), or eye movement, the acoustic analysis based method has inherent advantages in data collection and in the rich set of algorithms that can be employed for speech processing. In this paper, three methods are compared for constructing acoustic features based on time-frequency independent component analysis (TF-ICA): (1) extracting and combining the rows of the unmixing matrix of each frequency point to build the feature vector; (2) using the separation results of each frequency point as time-frequency features; (3) extracting time-domain features from the outputs of TF-ICA. Finally, the features are compared by a deep learning classifier on an ASD speech dataset. The experimental results show that method 1 obtained the highest recognition rate of 98.51%.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"223 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134316374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}