2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA) — Latest Publications

Size Does Matter: Overcoming Limitations during Training when using a Feature Pyramid Network
2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA) Pub Date: 2021-12-01 DOI: 10.1109/ICMLA52953.2021.00249
Fabian Fallas-Moya, Manfred Gonzalez-Hernandez, Amir Sadovnik
Abstract: State-of-the-art object detectors must be trained on a wide variety of data to perform well on real-world problems; training-data diversity is essential for good generalization. However, some scenarios impose limitations on the training data. In one, the objects in the test set differ in size (a discrepancy) from the objects seen during training. In another, high-resolution images have dimensions the model does not support. To address these problems, we propose a novel pipeline that handles high-resolution images by cropping the original image into sub-images and reassembling them at the end. For the discrepancy in object sizes, we propose two techniques based on scaling the image up or down to obtain acceptable performance. In addition, we use information from the Feature Pyramid Network to remove false positives. Our proposed methods outperform state-of-the-art data-augmentation policies, and our models generalize to different object sizes even when only limited data is provided.
Pages: 1553-1560 · Citations: 0
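The crop-and-reassemble step described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): split an image into fixed-size tiles, run the detector on each tile, then place the tiles back at their original offsets. The tile size and zero-padding strategy here are assumptions for illustration.

```python
import numpy as np

def tile_image(img, tile):
    """Split a (H, W) array into tile x tile crops, zero-padding the borders."""
    h, w = img.shape[:2]
    ph, pw = (-h) % tile, (-w) % tile  # padding needed to reach a multiple of tile
    padded = np.pad(img, [(0, ph), (0, pw)] + [(0, 0)] * (img.ndim - 2))
    crops = {}
    for y in range(0, h + ph, tile):
        for x in range(0, w + pw, tile):
            crops[(y, x)] = padded[y:y + tile, x:x + tile]
    return crops, padded.shape

def untile_image(crops, padded_shape, orig_shape):
    """Place crops back at their recorded offsets and trim the padding."""
    out = np.zeros(padded_shape, dtype=next(iter(crops.values())).dtype)
    for (y, x), c in crops.items():
        out[y:y + c.shape[0], x:x + c.shape[1]] = c
    return out[:orig_shape[0], :orig_shape[1]]

# Round-trip check: tiling then untiling restores the original image.
img = np.arange(10 * 7).reshape(10, 7)
crops, pshape = tile_image(img, 4)
restored = untile_image(crops, pshape, img.shape)
```

In a real pipeline the per-tile detections (not the pixels) would be shifted by each tile's `(y, x)` offset before merging, but the bookkeeping is the same.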
Financial Time Series Forecasting Enriched with Textual Information
2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA) Pub Date: 2021-12-01 DOI: 10.1109/ICMLA52953.2021.00066
Lord Flaubert Steve Ataucuri Cruz, D. F. Silva
Abstract: The ability to extract knowledge and forecast stock trends is crucial for mitigating investors' risks and uncertainties in the market. Stock trends are affected by non-linearity, complexity, noise, and, especially, the surrounding news. External factors such as daily news have become one of investors' primary resources for deciding whether to buy or sell assets. However, this kind of information appears very quickly: thousands of news items are generated by different web sources, and analyzing them takes so long that late decisions cause significant losses for investors. Although recent contextual language models have transformed natural language processing, models that make predictions using news that influences stock values still face barriers such as unlabeled data and class imbalance. This paper proposes a hybrid methodology that enriches time series forecasting with textual knowledge extracted from sites without a widely annotated corpus. We show that the proposed method can improve forecasting through an empirical evaluation on Bitcoin price prediction.
Pages: 385-390 · Citations: 1
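The core hybrid idea — augmenting a price series with a text-derived signal — can be sketched with a toy linear model. The lag of 1, the single per-day sentiment score, and the least-squares fit are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def fit_price_plus_text(prices, sentiment, lag=1):
    """Least-squares fit of the next price from `lag` past prices plus a
    daily text-sentiment score (e.g. averaged over that day's news)."""
    X, y = [], []
    for t in range(lag, len(prices)):
        X.append(list(prices[t - lag:t]) + [sentiment[t - 1], 1.0])  # lags + text + bias
        y.append(prices[t])
    coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return coef

# Synthetic series where price moves with the previous day's sentiment:
# price[t] = 1.0 * price[t-1] + 2.0 * sent[t-1] + small noise.
rng = np.random.default_rng(0)
sent = rng.uniform(-1, 1, 200)
prices = np.zeros(200)
for t in range(1, 200):
    prices[t] = prices[t - 1] + 2.0 * sent[t - 1] + rng.normal(0, 0.01)

coef = fit_price_plus_text(prices, sent)  # recovers ~[1.0, 2.0, 0.0]
```

When the text signal genuinely carries information (as in this synthetic setup), the fitted sentiment coefficient is non-zero and forecasts improve over a prices-only model — the effect the paper evaluates empirically on Bitcoin.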
Retrieval Enhanced Ensemble Model Framework For Rumor Detection On Micro-blogging Platforms
2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA) Pub Date: 2021-12-01 DOI: 10.1109/ICMLA52953.2021.00042
Rishab Sharma, F. H. Fard, Apurva Narayan
Abstract: Automatic rumor detection is the task of finding rumors on social networks. Previous techniques leveraged the propagation structure of tweets to detect rumors, making tweet propagation necessary for detection; current text-based works, however, provide sub-optimal results compared to propagation-based techniques. This work presents a retrieval-based framework that retrieves similar tweets from the given training set and chooses the best model from an ensemble of models to predict the test tweet's label. Our proposed framework is based on transformer-based pre-trained models (PTMs). Experiments on two public data sets used in previous works show that our framework can detect tweets with accuracy equivalent to propagation-based techniques. The primary advantage of this work is early rumor detection: the proposed framework can detect rumors within minutes, whereas propagation-based works require a significant amount of tweet propagation, which can take hours before rumors can be detected.
Pages: 227-232 · Citations: 1
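The retrieval step can be sketched with bag-of-words cosine similarity over the training set; the paper itself builds on transformer-based pre-trained models, so the vectorizer, the k=2 cutoff, and the majority vote below are simplifying assumptions.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words term counts for a tweet."""
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(train, query, k=3):
    """Return the k training (text, label) pairs most similar to the query tweet."""
    return sorted(train, key=lambda ex: cosine(bow(ex[0]), bow(query)), reverse=True)[:k]

train = [
    ("breaking miracle cure found share now", "rumor"),
    ("official report confirms quarterly results", "non-rumor"),
    ("share now before they delete this cure", "rumor"),
    ("city council publishes meeting minutes", "non-rumor"),
]
neighbors = retrieve(train, "miracle cure share now", k=2)
majority = Counter(label for _, label in neighbors).most_common(1)[0][0]
```

Because only the text of the incoming tweet is needed — no propagation tree — this kind of lookup is available within minutes of posting, which is the early-detection advantage the abstract highlights.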
A Transformer-based Approach for Translating Natural Language to Bash Commands
2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA) Pub Date: 2021-12-01 DOI: 10.1109/ICMLA52953.2021.00202
Quchen Fu, Zhongwei Teng, Jules White, D. Schmidt
Abstract: This paper explores the translation of natural language into Bash commands, which developers commonly use to accomplish command-line tasks in a terminal. In our approach, a terminal takes a command as a sentence in plain English and translates it into the corresponding string of Bash commands. The paper analyzes the performance of several architectures on this translation problem using data from the NLC2CMD competition at the NeurIPS 2020 conference. The approach presented in this paper is the best-performing architecture on this problem to date and improves the state-of-the-art accuracy on this translation task from 13.8% to 53.2%.
Pages: 1245-1248 · Citations: 8
On Learning Probabilistic Partial Lexicographic Preference Trees
2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA) Pub Date: 2021-12-01 DOI: 10.1109/ICMLA52953.2021.00051
Xudong Liu
Abstract: Proposed by Liu and Truszczynski [1], partial lexicographic preference trees (PLP-trees, for short) are intuitive and predictive data structures used to model qualitative user preferences over combinatorial domains. In this work, we introduce uncertainty into PLP-trees to propose probabilistic partial lexicographic preference trees, or PPLP-trees. We define this formalism, in which uncertainty appears in the probability distributions over both the selection of the next most important feature throughout the model and the preferred value in the domain of each feature. We then define the semantics of PPLP-trees in terms of the probability that one object is strictly preferred over another, the probability that two objects are equivalent, and the probability that an object is optimal. We show that these probabilities can be computed in time polynomial in the size of the tree. To this end, we study the problem of passively learning PPLP-trees from user examples and demonstrate our learning algorithm, a polynomial-time greedy heuristic bounded by a branching factor throughout the construction of the tree.
Pages: 286-291 · Citations: 0
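The semantics can be made concrete on a tiny example: draw an importance ordering over features by sequential weighted sampling, let the first feature where two objects differ decide, and weight the winner by that feature's value-preference probability. Note this brute-force enumeration over orderings is exponential and is only for illustration — the paper's contribution includes a polynomial-time computation on the tree. The feature names and probabilities below are invented.

```python
from itertools import permutations

def ordering_prob(order, weights):
    """Probability of drawing features in this order by sequential weighted sampling."""
    p, remaining = 1.0, dict(weights)
    for f in order:
        p *= remaining[f] / sum(remaining.values())
        del remaining[f]
    return p

def prob_strictly_preferred(a, b, weights, prefer_one):
    """P(a > b): for each importance ordering, the first feature where a and b
    differ decides; a wins there with prob prefer_one[f] when a[f] == 1."""
    total = 0.0
    for order in permutations(weights):
        decisive = next((f for f in order if a[f] != b[f]), None)
        if decisive is None:
            continue  # a and b agree on every feature: never strictly preferred
        p_win = prefer_one[decisive] if a[decisive] == 1 else 1 - prefer_one[decisive]
        total += ordering_prob(order, weights) * p_win
    return total

weights = {"brand": 2.0, "color": 1.0}        # brand is picked first with prob 2/3
prefer_one = {"brand": 0.9, "color": 0.5}     # P(value 1 preferred) per feature
a, b = {"brand": 1, "color": 0}, {"brand": 0, "color": 1}
p_ab = prob_strictly_preferred(a, b, weights, prefer_one)
p_ba = prob_strictly_preferred(b, a, weights, prefer_one)
```

Since `a` and `b` differ on every feature, ties are impossible and the two strict-preference probabilities sum to 1, matching the intended semantics.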
Using Generative Adversarial Networks and Non-Roadside Video Data to Generate Pedestrian Crossing Scenarios
2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA) Pub Date: 2021-12-01 DOI: 10.1109/ICMLA52953.2021.00228
James Spooner, V. Palade, A. Daneshkhah, S. Kanarachos
Abstract: As fully autonomous driving is introduced on our roads, the safety of vulnerable road users is of the greatest importance. Available real-world data is limited and often lacks the variety required to ensure the safe deployment of new technologies. This paper builds on a novel generation method, known as the Ped-Cross GAN, to generate pedestrian crossing scenarios for autonomous vehicle testing. While our previously developed Pedestrian Scenario dataset [1] is extremely detailed, some labels in the dataset are severely imbalanced. In this paper, augmented non-roadside data is used to improve the generation results for pedestrians running at the roadside, increasing the classification accuracy from 20.95% to 82.56% while increasing the training data by only 30%. This proves that researchers can generate rare, edge-case scenarios with the Ped-Cross GAN by supplementing available data with additional non-roadside data, allowing for adequate testing and greater test coverage when evaluating the performance of autonomous vehicles in pedestrian crossing scenarios. Ultimately, this will lead to fewer pedestrian casualties on our roads.
Pages: 1413-1420 · Citations: 0
ConfusionTree-Pattern: A Hierarchical Design for an Efficient and Performant Multi-Class Pattern
2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA) Pub Date: 2021-12-01 DOI: 10.1109/ICMLA52953.2021.00125
M. F. Adesso, Nicola Wolpert, E. Schömer
Abstract: Developing neural networks for supervised multi-class classification has become important in both theory and practice. An essential point is the design of the underlying network. Besides single-network approaches, there are several multi-class patterns that decompose a classification problem into multiple sub-problems and derive systems of neural networks. We show that existing multi-class patterns can be improved by a new and simple labeling scheme for training the sub-problems. We efficiently derive a class hierarchy that is optimized for our labeling scheme and, unlike most existing works, has no schematic restrictions. Based on this, we introduce a hierarchical multi-class pattern, called the ConfusionTree-pattern, which reaches high classification accuracies. Our experiments show that our multi-class ConfusionTree-pattern achieves state-of-the-art results in both performance and efficiency.
Pages: 754-759 · Citations: 0
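The abstract does not spell out how the class hierarchy is derived, so the following sketch is purely illustrative of the general idea its name suggests: build a binary class tree by agglomeratively merging the clusters that a baseline classifier confuses most, so that hard distinctions are pushed deep into the tree. The confusion matrix and class names are invented.

```python
import numpy as np

def build_hierarchy(conf, labels):
    """Repeatedly merge the two clusters with the highest summed pairwise
    confusion until one tree remains. Returns a nested-tuple hierarchy."""
    nodes = [(i,) for i in range(len(labels))]  # clusters of class indices
    trees = list(labels)                        # growing subtrees
    C = conf + conf.T                           # symmetrize confusion counts
    while len(nodes) > 1:
        best, pair = -1.0, None
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                score = sum(C[a][b] for a in nodes[i] for b in nodes[j])
                if score > best:
                    best, pair = score, (i, j)
        i, j = pair
        nodes[i], trees[i] = nodes[i] + nodes[j], (trees[i], trees[j])
        del nodes[j], trees[j]
    return trees[0]

# Toy confusion matrix: cats/dogs are confused with each other, cars/trucks likewise.
conf = np.array([
    [50, 9, 1, 0],   # cat
    [8, 50, 0, 2],   # dog
    [1, 0, 50, 10],  # car
    [0, 1, 9, 50],   # truck
])
tree = build_hierarchy(conf, ["cat", "dog", "car", "truck"])
```

On this toy input the animals and the vehicles end up as sibling subtrees, so the top-level classifier only has to separate the two easy super-groups.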
Homology Preserving Graph Compression
2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA) Pub Date: 2021-12-01 DOI: 10.1109/ICMLA52953.2021.00153
M. E. Aktas, Thu Nguyen, Esra Akbas
Abstract: Recently, topological data analysis (TDA), which studies the shape of data by extracting its topological features, has become popular in applied network science. Although recent methods show promising performance on various applications, the enormous size of real-world networks makes existing TDA solutions for graph mining problems hard to apply due to their high computation and space costs. This paper presents a graph compression method that reduces the size of a graph while preserving homology and persistent homology, which are popular tools in TDA. Experimental studies on real-world large-scale graphs validate the efficiency of the proposed compression method.
Pages: 930-935 · Citations: 1
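The abstract does not describe the specific compression operations, but the idea can be illustrated with one operation that is well known to be safe: removing a degree-1 vertex together with its edge (an elementary collapse) preserves a graph's homology, since for graphs b0 is the number of connected components and b1 = E − V + C counts independent cycles, and the collapse removes exactly one vertex and one edge from a single component.

```python
from collections import defaultdict

def prune_leaves(edges, vertices):
    """Iteratively remove degree-1 vertices (one per pass, an elementary
    collapse each time), which leaves b0 and b1 unchanged."""
    edges, vertices = {frozenset(e) for e in edges}, set(vertices)
    changed = True
    while changed:
        changed = False
        deg = defaultdict(int)
        for e in edges:
            for v in e:
                deg[v] += 1
        for v in list(vertices):
            if deg[v] == 1:
                edges = {e for e in edges if v not in e}
                vertices.discard(v)
                changed = True
                break  # recompute degrees before the next removal
    return vertices, edges

def betti(vertices, edges):
    """b0 = connected components (union-find), b1 = E - V + b0."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in edges:
        a, b = tuple(e)
        parent[find(a)] = find(b)
    b0 = len({find(v) for v in vertices})
    return b0, len(edges) - len(vertices) + b0

# Triangle with a pendant path: the path collapses away, the cycle survives.
V = {1, 2, 3, 4, 5}
E = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5)]
before = betti(V, {frozenset(e) for e in E})
V2, E2 = prune_leaves(E, V)
after = betti(V2, E2)
```

The paper's method is presumably more aggressive than leaf pruning (and also preserves persistent homology), but this shows the compression-with-invariants principle on a minimal case.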
Emotion Recognition and Sentiment Classification using BERT with Data Augmentation and Emotion Lexicon Enrichment
2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA) Pub Date: 2021-12-01 DOI: 10.1109/ICMLA52953.2021.00037
Vishwa Sai Kodiyala, Robert E. Mercer
Abstract: The emergence of social networking sites has paved the way for researchers to collect and analyze massive volumes of data. Twitter, one of the leading micro-blogging sites worldwide, provides an excellent opportunity for its users to express their states of mind via short text messages known as tweets. Much research has focused on identifying the emotions and sentiments conveyed in tweets. We propose a BERT model fine-tuned for the emotion recognition and sentiment classification tasks and show that it outperforms previous models on standard datasets. We also explore the effectiveness of data augmentation and data enrichment for these tasks.
Pages: 191-198 · Citations: 2
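The abstract does not detail the enrichment mechanism, but one common form of emotion-lexicon enrichment is to append explicit emotion tags for lexicon words found in the tweet, so the signal is visible to the classifier in its input text. The tiny lexicon below is an invented stand-in for a real resource such as the NRC emotion lexicon.

```python
EMOTION_LEXICON = {  # toy stand-in for a real emotion lexicon
    "furious": "anger", "thrilled": "joy", "dreadful": "fear", "gloomy": "sadness",
}

def enrich(tweet):
    """Append an emotion tag for every lexicon word found, so the model's
    input carries the lexicon signal explicitly."""
    tags = [f"[{EMOTION_LEXICON[w]}]" for w in tweet.lower().split() if w in EMOTION_LEXICON]
    return tweet if not tags else tweet + " " + " ".join(tags)

enriched = enrich("I am thrilled about the gloomy weather")
```

The enriched string (here ending in `[joy] [sadness]`) would then be fed to the fine-tuned classifier in place of the raw tweet.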
Incremental Learning Vector Auto Regression for Forecasting with Edge Devices
2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA) Pub Date: 2021-12-01 DOI: 10.1109/ICMLA52953.2021.00188
Venkata Pesala, T. Paul, Ken Ueno, H. P. Bugata, Ankit Kesarwani
Abstract: Time-series data is commonly forecast in a cloud server environment by collecting all the data at the server side and building a forecasting model there. However, this may be inefficient for time-critical forecasting, control, and decision-making due to high latency, bandwidth, and network connectivity issues. Edge devices can instead make quick forecasts in real time, but their limited computing resources and processing power prevent them from handling huge volumes of multivariate time-series data. It is therefore desirable to develop an algorithm that trains and updates a forecasting model incrementally from small chunks of multivariate time-series data, without sacrificing forecasting accuracy, so that both training and inference can run on the edge device itself. In this context, we propose a new forecasting method called Incremental Learning Vector Auto Regression (ILVAR). It minimizes the variance difference between actual and forecast values as each new chunk of time-series data arrives, thereby updating the forecasting model incrementally. To show the effectiveness of the proposed method, experiments were performed on 11 publicly available datasets from diverse domains using a Raspberry Pi 2 as the edge device, evaluated on five metrics (MAPE, RMSE, R^2 score, computation time, and memory consumption) for 1-step- and 24-step-ahead forecasting tasks. Performance was compared with state-of-the-art methods: Vector Auto Regression (VAR), Incremental Learning Extreme Learning Machine (ILELM), and Incremental Learning Long Short-Term Memory (ILLSTM). These experimental results suggest that the proposed method outperforms existing methods and achieves the desired performance for forecasting with edge devices.
Pages: 1153-1159 · Citations: 4
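One way to make a VAR model chunk-incremental — and therefore edge-friendly — is to accumulate the normal-equation statistics (XᵀX, XᵀY) as each chunk arrives, so past raw data never needs to be stored. This is a generic sketch of the idea, not ILVAR's actual update rule, which the abstract does not specify.

```python
import numpy as np

class IncrementalVAR:
    """VAR(p) fit by accumulating normal-equation statistics chunk by chunk.
    Each update touches only the newest chunk; memory is O(d^2), not O(T)."""
    def __init__(self, n_series, p=1):
        d = n_series * p + 1                      # lagged values + bias
        self.p, self.n = p, n_series
        self.xtx = np.zeros((d, d))
        self.xty = np.zeros((d, n_series))
        self.coef = np.zeros((d, n_series))

    def partial_fit(self, chunk):
        """chunk: (T, n_series) array; include p rows of overlap with the
        previous chunk so the first target in the chunk has its lags."""
        X, Y = [], []
        for t in range(self.p, len(chunk)):
            X.append(np.concatenate([chunk[t - self.p:t].ravel(), [1.0]]))
            Y.append(chunk[t])
        X, Y = np.array(X), np.array(Y)
        self.xtx += X.T @ X
        self.xty += X.T @ Y
        self.coef = np.linalg.lstsq(self.xtx, self.xty, rcond=None)[0]

    def forecast(self, recent):
        """One-step-ahead forecast from the last p observations."""
        x = np.concatenate([np.asarray(recent)[-self.p:].ravel(), [1.0]])
        return x @ self.coef

# Synthetic VAR(1) process y[t] = A @ y[t-1] + noise, fed in chunks of 100.
rng = np.random.default_rng(1)
A = np.array([[0.5, 0.1], [0.0, 0.4]])
y = np.zeros((600, 2))
for t in range(1, 600):
    y[t] = y[t - 1] @ A.T + rng.normal(0, 0.1, 2)

model = IncrementalVAR(n_series=2, p=1)
for s in range(0, 600, 100):
    model.partial_fit(y[max(0, s - 1):s + 100])  # one row of overlap for the lag
```

Because XᵀX and XᵀY sum over chunks, the incremental fit is exactly the batch least-squares fit on all data seen so far — a useful baseline property when comparing against methods like ILVAR on a device such as a Raspberry Pi.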