ACM/IMS Transactions on Data Science — Latest Articles

Effective Discovery of Meaningful Outlier Relationships
ACM/IMS Transactions on Data Science. Pub Date: 2019-10-19. DOI: 10.1145/3385192. Pages: 1-33
Aline Bessa, J. Freire, T. Dasu, D. Srivastava
Abstract: We propose Predictable Outliers in Data-trendS (PODS), a method that, given a collection of temporal datasets, derives data-driven explanations for outliers by identifying meaningful relationships between them. First, we formalize the notion of meaningfulness, which so far has been framed only informally in terms of explainability. Next, since outliers are rare and it is difficult to determine whether their relationships are meaningful, we develop a new criterion that does so by checking whether these relationships could have been predicted from non-outliers, i.e., whether we could have seen the outlier relationships coming. Finally, searching for meaningful outlier relationships between every pair of datasets in a large data collection is computationally infeasible. To address that, we propose an indexing strategy that prunes irrelevant comparisons across datasets, making the approach scalable. We present the results of an experimental evaluation using real datasets and different baselines, which demonstrates the effectiveness, robustness, and scalability of our approach.
Citations: 4
Multifaceted Analysis of Fine-Tuning in a Deep Model for Visual Recognition
ACM/IMS Transactions on Data Science. Pub Date: 2019-07-11. DOI: 10.1145/3319500. Pages: 1-22
Xiangyang Li, Luis Herranz, Shuqiang Jiang
Abstract: In recent years, convolutional neural networks (CNNs) have achieved impressive performance in various visual recognition scenarios. CNNs trained on large labeled datasets not only obtain significant performance on the most challenging benchmarks but also provide powerful representations that can be used for a wide range of other tasks. However, the requirement of massive amounts of data to train deep neural networks is a major drawback of these models, as the available data are usually limited or imbalanced. Fine-tuning is an effective way to transfer knowledge learned on a source dataset to a target task. In this article, we introduce and systematically investigate several factors that influence the performance of fine-tuning for visual recognition. These factors include parameters of the retraining procedure (e.g., the initial learning rate of fine-tuning) and the distribution of the source and target data (e.g., the number of categories in the source dataset, the distance between the source and target datasets), among others. We quantitatively and qualitatively analyze these factors, evaluate their influence, and present many empirical observations. The results reveal insights into how fine-tuning changes CNN parameters and provide useful, evidence-backed intuition about how to implement fine-tuning for computer vision tasks.
Citations: 1
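One factor the article varies is the initial learning rate of fine-tuning, which in practice is often set lower for transferred layers than for the freshly initialized classifier head. A framework-agnostic sketch of per-layer learning rates in a single SGD step (layer names and numbers are invented; real fine-tuning would use a deep-learning framework's parameter groups):

```python
def sgd_step(params, grads, lrs):
    """One SGD update where each named layer gets its own learning
    rate: small for pretrained layers, so transferred features change
    slowly, and larger for the newly added classifier head."""
    return {name: [w - lrs[name] * g for w, g in zip(params[name], grads[name])]
            for name in params}

# Toy weights and gradients for two layers (assumed values).
params = {"pretrained_conv": [1.0, -0.5], "new_classifier": [0.2, 0.3]}
grads  = {"pretrained_conv": [0.1, 0.1],  "new_classifier": [0.5, -0.5]}
lrs    = {"pretrained_conv": 1e-3, "new_classifier": 1e-1}
updated = sgd_step(params, grads, lrs)
print(updated)
```

The pretrained weights move by only 0.0001 per step while the classifier moves by 0.05, which is the usual rationale for layer-wise learning rates when fine-tuning.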
Neural Abstractive Text Summarization with Sequence-to-Sequence Models
ACM/IMS Transactions on Data Science. Pub Date: 2018-12-05. DOI: 10.1145/3419106. Pages: 1-37
Tian Shi, Yaser Keneshloo, Naren Ramakrishnan, C. Reddy
Abstract: In the past few years, neural abstractive text summarization with sequence-to-sequence (seq2seq) models has gained a lot of popularity. Many interesting techniques have been proposed to improve seq2seq models, making them capable of handling challenges such as saliency, fluency, and human readability, and of generating high-quality summaries. Generally speaking, most of these techniques differ in one of three categories: network structure, parameter inference, and decoding/generation. There are also other concerns, such as efficiency and parallelism in training a model. In this article, we provide a comprehensive literature survey of seq2seq models for abstractive text summarization from the viewpoints of network structure, training strategy, and summary-generation algorithm. Several models were first proposed for language modeling and generation tasks, such as machine translation, and later applied to abstractive text summarization; hence, we also provide a brief review of these models. As part of this survey, we develop an open-source library, the Neural Abstractive Text Summarizer (NATS) toolkit, for abstractive text summarization. An extensive set of experiments has been conducted on the widely used CNN/Daily Mail dataset to examine the effectiveness of several different neural network components. Finally, we benchmark two models implemented in NATS on two recently released datasets, Newsroom and Bytecup.
Citations: 166
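The survey's third axis, decoding/generation, centers on algorithms such as greedy decoding and beam search. A minimal greedy-decoding sketch over a toy next-token scorer — the scorer, vocabulary, and function names are invented for illustration; an actual seq2seq model would supply the scores:

```python
def greedy_decode(score_next, start="<s>", end="</s>", max_len=10):
    """Greedy decoding: at each step, append the single highest-scoring
    next token until the end token (or a length cap) is reached. Beam
    search generalizes this by keeping the top-k partial summaries."""
    tokens = [start]
    while len(tokens) < max_len:
        scores = score_next(tokens)      # maps candidate token -> score
        best = max(scores, key=scores.get)
        if best == end:
            break
        tokens.append(best)
    return tokens[1:]

# Toy "model": a fixed table mapping the last token to next-token scores.
table = {
    "<s>":           {"neural": 0.9, "the": 0.1},
    "neural":        {"summarization": 0.8, "</s>": 0.2},
    "summarization": {"</s>": 0.7, "works": 0.3},
}
summary = greedy_decode(lambda toks: table[toks[-1]])
print(summary)  # ['neural', 'summarization']
```

Greedy decoding is fast but can commit early to a locally best token; that trade-off is why beam search and other generation strategies feature in the survey's comparison.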
test paper to update gold OA language
ACM/IMS Transactions on Data Science. Pub Date: 1900-01-01. DOI: 10.1145/3396300
Citations: 0