{"title":"Effective Discovery of Meaningful Outlier Relationships","authors":"Aline Bessa, J. Freire, T. Dasu, D. Srivastava","doi":"10.1145/3385192","DOIUrl":"https://doi.org/10.1145/3385192","url":null,"abstract":"We propose Predictable Outliers in Data-trendS (PODS), a method that, given a collection of temporal datasets, derives data-driven explanations for outliers by identifying meaningful relationships between them. First, we formalize the notion of meaningfulness, which so far has been informally framed in terms of explainability. Next, since outliers are rare and it is difficult to determine whether their relationships are meaningful, we develop a new criterion that does so by checking if these relationships could have been predicted from non-outliers, i.e., whether we could see the outlier relationships coming. Finally, searching for meaningful outlier relationships between every pair of datasets in a large data collection is computationally infeasible. To address that, we propose an indexing strategy that prunes irrelevant comparisons across datasets, making the approach scalable. We present the results of an experimental evaluation using real datasets and different baselines, which demonstrates the effectiveness, robustness, and scalability of our approach.","PeriodicalId":93404,"journal":{"name":"ACM/IMS transactions on data science","volume":"39 1","pages":"1 - 33"},"PeriodicalIF":0.0,"publicationDate":"2019-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73996633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multifaceted Analysis of Fine-Tuning in a Deep Model for Visual Recognition","authors":"Xiangyang Li, Luis Herranz, Shuqiang Jiang","doi":"10.1145/3319500","DOIUrl":"https://doi.org/10.1145/3319500","url":null,"abstract":"In recent years, convolutional neural networks (CNNs) have achieved impressive performance for various visual recognition scenarios. CNNs trained on large labeled datasets not only obtain significant performance on most challenging benchmarks but also provide powerful representations, which can be used for a wide range of other tasks. However, the requirement of massive amounts of data to train deep neural networks is a major drawback of these models, as the data available are usually limited or imbalanced. Fine-tuning is an effective way to transfer knowledge learned in a source dataset to a target task. In this article, we introduce and systematically investigate several factors that influence the performance of fine-tuning for visual recognition. These factors include parameters for the retraining procedure (e.g., the initial learning rate of fine-tuning), the distribution of the source and target data (e.g., the number of categories in the source dataset, the distance between the source and target datasets), and so on. We quantitatively and qualitatively analyze these factors, evaluate their influence, and present many empirical observations. The results reveal insights into what fine-tuning changes CNN parameters and provide useful and evidence-backed intuition about how to implement fine-tuning for computer vision tasks.","PeriodicalId":93404,"journal":{"name":"ACM/IMS transactions on data science","volume":"68 1","pages":"1 - 22"},"PeriodicalIF":0.0,"publicationDate":"2019-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90558920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural Abstractive Text Summarization with Sequence-to-Sequence Models","authors":"Tian Shi, Yaser Keneshloo, Naren Ramakrishnan, C. Reddy","doi":"10.1145/3419106","DOIUrl":"https://doi.org/10.1145/3419106","url":null,"abstract":"In the past few years, neural abstractive text summarization with sequence-to-sequence (seq2seq) models have gained a lot of popularity. Many interesting techniques have been proposed to improve seq2seq models, making them capable of handling different challenges, such as saliency, fluency and human readability, and generate high-quality summaries. Generally speaking, most of these techniques differ in one of these three categories: network structure, parameter inference, and decoding/generation. There are also other concerns, such as efficiency and parallelism for training a model. In this article, we provide a comprehensive literature survey on different seq2seq models for abstractive text summarization from the viewpoint of network structures, training strategies, and summary generation algorithms. Several models were first proposed for language modeling and generation tasks, such as machine translation, and later applied to abstractive text summarization. Hence, we also provide a brief review of these models. As part of this survey, we also develop an open source library, namely, Neural Abstractive Text Summarizer (NATS) toolkit, for the abstractive text summarization. An extensive set of experiments have been conducted on the widely used CNN/Daily Mail dataset to examine the effectiveness of several different neural network components. Finally, we benchmark two models implemented in NATS on the two recently released datasets, namely, Newsroom and Bytecup.","PeriodicalId":93404,"journal":{"name":"ACM/IMS transactions on data science","volume":"1197 1","pages":"1 - 37"},"PeriodicalIF":0.0,"publicationDate":"2018-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85273720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"test paper to update gold OA language","authors":"","doi":"10.1145/3396300","DOIUrl":"https://doi.org/10.1145/3396300","url":null,"abstract":"","PeriodicalId":93404,"journal":{"name":"ACM/IMS transactions on data science","volume":"48 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87566787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}