Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval: latest publications

Passage retrieval vs. document retrieval for factoid question answering
C. Clarke, E. Terra
{"title":"Passage retrieval vs. document retrieval for factoid question answering","authors":"C. Clarke, E. Terra","doi":"10.1145/860435.860534","DOIUrl":"https://doi.org/10.1145/860435.860534","url":null,"abstract":"Question answering (QA) systems often contain an information retrieval subsystem that identifies documents or passages where the answer to a question might appear [1–3, 5, 6, 10]. The QA system generates queries from the questions and submits them to the IR subsystem. The IR subsystem returns the top-ranked documents or passages, and the QA system selects the answers from them. In many QA systems, the IR component retrieves entire documents. Then, in a post-retrieval step, the system scans the retrieved documents and locates groups of sentences that contain most or all of the question keywords [3,10, and others]. These sentences are subjected to further analysis to select the answer. In other QA systems, a passage-retrieval technique is employed to directly identify locations within the document collection where the answer might be found, avoiding the post-retrieval step [1, 2, 5, 6, and others]. In this context, a “relevant” document or passage is one that contains an answer. We utilize this notion of relevance to evaluate an IR subsystem in isolation from the rest of its QA system by applying standard measures of IR effectiveness. By restricting our evaluation to a single subsystem we hope to gain experience that is applicable to QA systems beyond our own. An assumption inherent in this approach is that improved precision in the IR subsystem will translate to improved performance of the QA system as a whole. This assumption holds for our own system, and should (at least) hold for any system that exploits redundancy—that takes advantage of the observation that answers tend to occur in more than one retrieved passage [1, 2, 5]. In this paper we compare a successful passage-retrieval method [1, 5] with a well-known and effective documentretrieval method: Okapi BM25 [7]. Our goal is to examine","PeriodicalId":209809,"journal":{"name":"Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121252637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 54
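As a point of reference for the Okapi BM25 baseline named in the abstract, below is a minimal BM25 scoring sketch in Python. The toy corpus, whitespace tokenizer, and the parameter values k1 = 1.2 and b = 0.75 are illustrative assumptions only, not the collection or settings evaluated in the paper.

```python
import math
from collections import Counter

# Minimal Okapi BM25 scorer over a toy corpus (illustrative only).
K1, B = 1.2, 0.75  # commonly used default parameters, assumed here

docs = [
    "the capital of canada is ottawa",
    "toronto hosted the sigir conference in 2003",
    "ottawa is located on the ottawa river",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)
avgdl = sum(len(d) for d in tokenized) / N
df = Counter(t for d in tokenized for t in set(d))  # document frequency per term

def idf(term: str) -> float:
    # Smoothed, non-negative BM25 idf.
    n = df.get(term, 0)
    return math.log((N - n + 0.5) / (n + 0.5) + 1.0)

def bm25(query: str, doc_tokens: list[str]) -> float:
    tf = Counter(doc_tokens)
    dl = len(doc_tokens)
    score = 0.0
    for term in query.split():
        f = tf.get(term, 0)
        score += idf(term) * f * (K1 + 1) / (f + K1 * (1 - B + B * dl / avgdl))
    return score

query = "what is the capital of canada"
for i in sorted(range(N), key=lambda i: bm25(query, tokenized[i]), reverse=True):
    print(f"{bm25(query, tokenized[i]):6.3f}  {docs[i]}")
```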
Modeling annotated data
D. Blei, Michael I. Jordan
{"title":"Modeling annotated data","authors":"D. Blei, Michael I. Jordan","doi":"10.1145/860435.860460","DOIUrl":"https://doi.org/10.1145/860435.860460","url":null,"abstract":"We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval.","PeriodicalId":209809,"journal":{"name":"Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126166639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1250
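The correspondence LDA model summarized above can be pictured through its generative process: per-image topic proportions are drawn from a Dirichlet, each image region draws a topic and a Gaussian feature vector, and each caption word first picks one of the regions and then emits a word from that region's topic. The forward-sampling sketch below illustrates this process; the model sizes, priors, and randomly initialized parameters are toy assumptions, not the fitted Corel model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model sizes (assumptions, not the Corel setup from the paper).
K, V, D_FEAT = 4, 50, 8          # topics, caption vocabulary size, region feature dim
ALPHA = np.full(K, 0.5)          # Dirichlet prior on per-image topic proportions
topic_means = rng.normal(size=(K, D_FEAT))       # Gaussian mean of region features per topic
topic_word = rng.dirichlet(np.ones(V), size=K)   # multinomial over caption words per topic

def sample_image_caption(n_regions=5, n_words=4):
    """Forward-sample one (regions, caption) pair from a corr-LDA-style model."""
    theta = rng.dirichlet(ALPHA)                        # per-image topic proportions
    z = rng.choice(K, size=n_regions, p=theta)          # topic of each image region
    regions = rng.normal(loc=topic_means[z], scale=1.0) # Gaussian region features
    words = []
    for _ in range(n_words):
        y = rng.integers(n_regions)                        # caption word picks a region...
        words.append(rng.choice(V, p=topic_word[z[y]]))    # ...and emits a word from its topic
    return regions, words

regions, words = sample_image_caption()
print("region features shape:", regions.shape)
print("caption word ids:", words)
```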
Word sense disambiguation in information retrieval revisited
Christopher Stokoe, M. Oakes, J. Tait
{"title":"Word sense disambiguation in information retrieval revisited","authors":"Christopher Stokoe, M. Oakes, J. Tait","doi":"10.1145/860435.860466","DOIUrl":"https://doi.org/10.1145/860435.860466","url":null,"abstract":"Word sense ambiguity is recognized as having a detrimental effect on the precision of information retrieval systems in general and web search systems in particular, due to the sparse nature of the queries involved. Despite continued research into the application of automated word sense disambiguation, the question remains as to whether less than 90% accurate automated word sense disambiguation can lead to improvements in retrieval effectiveness. In this study we explore the development and subsequent evaluation of a statistical word sense disambiguation system which demonstrates increased precision from a sense based vector space retrieval model over traditional TF*IDF techniques.","PeriodicalId":209809,"journal":{"name":"Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125161855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 223
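One way to picture the sense-based vector space model discussed above is to rewrite ambiguous tokens as word_sense tokens before indexing, so that different senses occupy different TF*IDF dimensions. The sketch below does this with a hand-made stub disambiguator; the stub rule, the tiny corpus, and the underscore sense-tagging convention are illustrative assumptions, whereas the paper trains a statistical WSD system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stub word-sense disambiguator: a hand-made rule keyed on a nearby context clue.
# Purely illustrative; the paper uses a trained statistical WSD component instead.
def disambiguate(tokens):
    tagged = []
    for i, tok in enumerate(tokens):
        if tok == "bank":
            context = " ".join(tokens[max(0, i - 3): i + 4])
            tagged.append("bank_river" if "river" in context else "bank_finance")
        else:
            tagged.append(tok)
    return tagged

docs = [
    "the bank raised interest rates",
    "they walked along the river bank",
]
query = "fishing spot near the river bank"

def sense_docs(texts):
    return [" ".join(disambiguate(t.split())) for t in texts]

for label, corpus, q in [("plain tf-idf", docs, query),
                         ("sense-tagged tf-idf", sense_docs(docs), sense_docs([query])[0])]:
    vec = TfidfVectorizer()
    doc_mat = vec.fit_transform(corpus)
    sims = cosine_similarity(vec.transform([q]), doc_mat)[0]
    print(label, [round(float(s), 3) for s in sims])
```

With the sense tags in place, the query's "bank" only matches the river sense, so the finance document's score drops while the river document's score is preserved.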
Query length in interactive information retrieval
N. Belkin, D. Kelly, G. Kim, Ja-Young Kim, Hyuk-Jin Lee, G. Muresan, Muh-Chyun Tang, Xiaojun Yuan, Colleen Cool
{"title":"Query length in interactive information retrieval","authors":"N. Belkin, D. Kelly, G. Kim, Ja-Young Kim, Hyuk-Jin Lee, G. Muresan, Muh-Chyun Tang, Xiaojun Yuan, Colleen Cool","doi":"10.1145/860435.860474","DOIUrl":"https://doi.org/10.1145/860435.860474","url":null,"abstract":"Query length in best-match information retrieval (IR) systems is well known to be positively related to effectiveness in the IR task, when measured in experimental, non-interactive environments. However, in operational, interactive IR systems, query length is quite typically very short, on the order of two to three words. We report on a study which tested the effectiveness of a particular query elicitation technique in increasing initial searcher query length, and which tested the effectiveness of queries elicited using this technique, and the relationship in general between query length and search effectiveness in interactive IR. Results show that the specific technique results in longer queries than a standard query elicitation technique, that this technique is indeed usable, that the technique results in increased user satisfaction with the search, and that query length is positively correlated with user satisfaction with the search.","PeriodicalId":209809,"journal":{"name":"Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130470968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 161
Transliteration of proper names in cross-language applications
Paola Virga, S. Khudanpur
{"title":"Transliteration of proper names in cross-language applications","authors":"Paola Virga, S. Khudanpur","doi":"10.1145/860435.860503","DOIUrl":"https://doi.org/10.1145/860435.860503","url":null,"abstract":"Translation of proper names is generally recognized as a significant problem in many multi-lingual text and speech processing applications. Even when large bilingual lexicons used for machine translation (MT) and cross-lingual information retrieval (CLIR) provide significant coverage of the words encountered in the text, a significant portion of the tokens not covered by such lexicons are proper names (cf e.g. [3]). For CLIR applications in particular, proper names and technical terms are particularly important, as they carry some of the more distinctive information in a query. In IR systems where users provide very short queries (e.g. 2-3 words), their importance grows even further. Proper names are amenable to a speech-inspired translation approach. When writing a foreign name in ones native language, one tries to preserve the way it sounds. i.e. one uses an orthographic representation which, when “read aloud” by a native speaker of the language sounds as it would when spoken by a speaker of the foreign language — a process referred to as transliteration. If mechanisms were available (a) to render, say, an English name in its phonemic form, and (b) to convert this phonemic string into the orthography of, say, Mandarin Chinese, then one would have a mechanism for transliterating English names using Chinese characters. The first part has been addressed extensively in the automatic textto-speech synthesis literature. This paper describes a statistical approach for the second part. Several techniques have been proposed in the recent past for name transliteration. Finite state transducers that implement transformation rules for back-transliteration from Japanese to English are described in [2], and extended to Arabic in [5]. In both cases, the goal is to recognize words in Japanese or Arabic text which happen to be transliterations of English names. The strongly phonetic orthography of Korean is exploited in [1] to obtain good transliteration using relatively simple HMM-based models. A set of handcrafted rules for locally editing the phonemic spelling of an English name to conform to Mandarin syllabification is provided to a transformation-based learning algorithm in [4], which then learns how to convert an English phoneme sequence to a Mandarin syllable sequence. We describe here a fully data driven counterpart to the technique of [4] for English-to-Mandarin name transliteration. In addition to intrinsic evaluation, we test our transliteration system extrinsically for cross-lingual spoken document retrieval by usThis research was partially supported by DARPA via Grant No N66001-00-2-8910 and ONR via Grant No N00014-01-1-0685. Copyright is held by the author/owner. SIGIR’03, July 28–August 1, 2003, Toronto, Canada. ACM 1-58113-646-3/03/0007. 
ing English text queries to retrieve Mandarin audio from the Topic Detection and Tracking (TDT) corpus.","PeriodicalId":209809,"journal":{"name":"Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133179889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 46
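The core step described above, converting an English phoneme sequence into a Mandarin syllable sequence, can be illustrated with a toy noisy-channel scorer: each phoneme group is mapped to candidate Pinyin syllables by a channel table, and the best sequence is chosen under a syllable language model. The probability tables, the phoneme grouping, and the example name are invented for illustration; the paper learns its translation model from data rather than hand-specifying it.

```python
import itertools

# Toy phoneme-to-Pinyin channel model for the name "Tony" (/T OW N IY/).
# All probabilities below are invented for illustration; the paper estimates
# such mappings from parallel name data rather than specifying them by hand.
channel = {
    ("T", "OW"): {"tuo": 0.6, "tou": 0.4},
    ("N", "IY"): {"ni": 0.9, "nei": 0.1},
}
syllable_lm = {"tuo": 0.02, "tou": 0.01, "ni": 0.05, "nei": 0.01}  # toy unigram LM

def best_transliteration(phoneme_groups):
    """Pick the Mandarin syllable sequence maximizing channel * language-model score."""
    candidates = [list(channel[g].items()) for g in phoneme_groups]
    best, best_score = None, 0.0
    for combo in itertools.product(*candidates):
        score = 1.0
        for syl, p_channel in combo:
            score *= p_channel * syllable_lm[syl]
        if score > best_score:
            best, best_score = [s for s, _ in combo], score
    return best, best_score

syllables, score = best_transliteration([("T", "OW"), ("N", "IY")])
print(" ".join(syllables), f"(score={score:.2e})")
```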
Text categorization by boosting automatically extracted concepts
Lijuan Cai, Thomas Hofmann
{"title":"Text categorization by boosting automatically extracted concepts","authors":"Lijuan Cai, Thomas Hofmann","doi":"10.1145/860435.860470","DOIUrl":"https://doi.org/10.1145/860435.860470","url":null,"abstract":"Term-based representations of documents have found wide-spread use in information retrieval. However, one of the main shortcomings of such methods is that they largely disregard lexical semantics and, as a consequence, are not sufficiently robust with respect to variations in word usage.In this paper we investigate the use of concept-based document representations to supplement word- or phrase-based features. The utilized concepts are automatically extracted from documents via probabilistic latent semantic analysis. We propose to use AdaBoost to optimally combine weak hypotheses based on both types of features. Experimental results on standard benchmarks confirm the validity of our approach, showing that AdaBoost achieves consistent improvements by including additional semantic features in the learned ensemble.","PeriodicalId":209809,"journal":{"name":"Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval","volume":"312 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131032525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 144
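The combination described above, boosting weak hypotheses over both term features and automatically extracted concept features, can be sketched with off-the-shelf components. In the sketch below, TruncatedSVD is deliberately substituted for the probabilistic latent semantic analysis used in the paper (scikit-learn ships no pLSA), and the toy corpus, labels, and dimensions are assumptions.

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import AdaBoostClassifier

# Toy two-class corpus (assumption; the paper evaluates on standard benchmarks).
texts = [
    "the team won the match in overtime",
    "the striker scored a late goal",
    "the central bank cut interest rates",
    "markets rallied after the earnings report",
]
labels = np.array([0, 0, 1, 1])  # 0 = sports, 1 = finance

# Term features.
vec = TfidfVectorizer()
X_terms = vec.fit_transform(texts)

# "Concept" features: TruncatedSVD stands in here for the probabilistic latent
# semantic analysis the paper uses to extract concepts.
svd = TruncatedSVD(n_components=2, random_state=0)
X_concepts = csr_matrix(svd.fit_transform(X_terms))

# AdaBoost (default decision-stump base learner) over combined term + concept features.
X = hstack([X_terms, X_concepts]).tocsr()
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X))
```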
Document-self expansion for text categorization
Yuen-Hsien Tseng, Da-Wei Juang
{"title":"Document-self expansion for text categorization","authors":"Yuen-Hsien Tseng, Da-Wei Juang","doi":"10.1145/860435.860520","DOIUrl":"https://doi.org/10.1145/860435.860520","url":null,"abstract":"Approaches to increase training examples to hopefully improve classification effectiveness are proposed in this work. The approaches were verified by use of two Chinese collections classified by two top-performing classifiers.","PeriodicalId":209809,"journal":{"name":"Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121755544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Structured use of external knowledge for event-based open domain question answering
G. Yang, Tat-Seng Chua, Shuguang Wang, Chun-Keat Koh
{"title":"Structured use of external knowledge for event-based open domain question answering","authors":"G. Yang, Tat-Seng Chua, Shuguang Wang, Chun-Keat Koh","doi":"10.1145/860435.860444","DOIUrl":"https://doi.org/10.1145/860435.860444","url":null,"abstract":"One of the major problems in question answering (QA) is that the queries are either too brief or often do not contain most relevant terms in the target corpus. In order to overcome this problem, our earlier work integrates external knowledge extracted from the Web and WordNet to perform Event-based QA on the TREC-11 task. This paper extends our approach to perform event-based QA by uncovering the structure within the external knowledge. The knowledge structure loosely models different facets of QA events, and is used in conjunction with successive constraint relaxation algorithm to achieve effective QA. Our results obtained on TREC-11 QA corpus indicate that the new approach is more effective and able to attain a confidence-weighted score of above 80%.","PeriodicalId":209809,"journal":{"name":"Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131453757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 117
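The successive constraint relaxation mentioned above amounts to a control loop: start with all constraints derived from the question and the external knowledge, and drop the least important ones until retrieval returns enough candidates. The sketch below shows only that loop over a stubbed boolean-AND search; the importance ordering, document set, and hit threshold are assumptions, not the paper's event structure or retrieval engine.

```python
# Illustrative successive-constraint-relaxation loop. The boolean-AND search and
# the importance ordering of constraints are stand-ins, not the paper's system.
docs = [
    "the tornado struck oklahoma in may 1999 killing dozens",
    "a tornado warning was issued for texas",
    "severe storms hit the midwest in 1999",
]

def search(constraints):
    """Return documents containing every constraint term (boolean AND)."""
    return [d for d in docs if all(c in d for c in constraints)]

def relax_and_search(constraints, min_hits=1):
    """Drop the least important constraint (last in the list) until enough hits."""
    while constraints:
        hits = search(constraints)
        if len(hits) >= min_hits:
            return constraints, hits
        constraints = constraints[:-1]   # relax: drop the lowest-priority constraint
    return [], docs                      # fully relaxed: fall back to everything

# Constraints ordered from most to least important (assumed ordering).
kept, hits = relax_and_search(["tornado", "1999", "oklahoma", "casualties"])
print("kept constraints:", kept)
print("hits:", hits)
```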
Topic hierarchy generation via linear discriminant projection
Tao Li, Shenghuo Zhu, M. Ogihara
{"title":"Topic hierarchy generation via linear discriminant projection","authors":"Tao Li, Shenghuo Zhu, M. Ogihara","doi":"10.1145/860435.860531","DOIUrl":"https://doi.org/10.1145/860435.860531","url":null,"abstract":"Text categorization has been receiving more and more attention with the ever-increasing growth of the on-line information. Automated text categorization is generally a supervised learning problem, defined as the problem of assigning pre-defined category labels to new documents based on the likelihood suggested by labeled documents. Most studies in the area have been focused on flat classification, where the predefined categories are treated individually and separately [5]. As the available information increases, when the number of categories grows significantly large, it will become much more difficult to browse and search categories. The most successful paradigm for organizing this mass of information and making it compressible is by categorizing documents according to their topics where the topics are organized in a hierarchy of increasing specificity [3]. Hierarchical structures identify the relationships of dependence between the categories and provides a valuable information source for many problems. Recently several researchers have investigated the use of hierarchies for text classification and obtained promising results [1, 4]. However, little has been done to explore the approaches to automatically generate topic hierarchies. Most of the reported techniques have been conducted on existential hierarchically structured corpora. The aim of automatic hierarchy generation has several motivations. First, manually building hierarchies is an expensive task since it requires domain experts to evaluate the documents’ relevance to the topics. Second, existing hierarchies are optimized for human use based on “human semantics”, but not necessarily for classifier use. Automatic generated hierarchies can be incorporated into various classification methods","PeriodicalId":209809,"journal":{"name":"Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127994597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
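Read at a high level, the pipeline implied by the title is: project labeled documents with a linear discriminant projection, then build a hierarchy over the categories in the projected space. The sketch below follows that reading by agglomeratively clustering class centroids after the projection; the toy corpus, Ward linkage, and the centroid-based clustering step are assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy labeled corpus with three flat categories (assumption; the paper uses
# larger benchmark collections).
texts = [
    "the striker scored a goal in the final minute",      # sports
    "the team won the championship game",                 # sports
    "the bank raised interest rates again",               # finance
    "stocks fell after the earnings report",              # finance
    "the new processor doubles transistor count",         # tech
    "the phone ships with a faster chipset",              # tech
]
labels = np.array([0, 0, 1, 1, 2, 2])
names = ["sports", "finance", "tech"]

# 1. Linear discriminant projection of documents (at most n_classes - 1 dimensions).
X = TfidfVectorizer().fit_transform(texts).toarray()
proj = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, labels)

# 2. Class centroids in the projected space.
centroids = np.vstack([proj[labels == c].mean(axis=0) for c in range(3)])

# 3. Agglomerative clustering of the centroids yields a topic hierarchy.
Z = linkage(centroids, method="ward")
print(dendrogram(Z, labels=names, no_plot=True)["ivl"])  # leaf order of the hierarchy
```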
Collaborative filtering via gaussian probabilistic latent semantic analysis
Thomas Hofmann
{"title":"Collaborative filtering via gaussian probabilistic latent semantic analysis","authors":"Thomas Hofmann","doi":"10.1145/860435.860483","DOIUrl":"https://doi.org/10.1145/860435.860483","url":null,"abstract":"Collaborative filtering aims at learning predictive models of user preferences, interests or behavior from community data, i.e. a database of available user preferences. In this paper, we describe a new model-based algorithm designed for this task, which is based on a generalization of probabilistic latent semantic analysis to continuous-valued response variables. More specifically, we assume that the observed user ratings can be modeled as a mixture of user communities or interest groups, where users may participate probabilistically in one or more groups. Each community is characterized by a Gaussian distribution on the normalized ratings for each item. The normalization of ratings is performed in a user-specific manner to account for variations in absolute shift and variance of ratings. Experiments on the EachMovie data set show that the proposed approach compares favorably with other collaborative filtering techniques.","PeriodicalId":209809,"journal":{"name":"Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121242397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 449
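The model summarized above treats each user as a probabilistic mixture over latent communities, with each community placing a Gaussian over the (user-normalized) rating of every item. Below is a compact EM sketch of that idea on random data; the pre-normalized ratings, the fixed shared variance, and the number of communities are simplifying assumptions, not Hofmann's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy (user, item, rating) triples; ratings are assumed already z-score
# normalized per user, a simplification of the paper's normalization scheme.
n_users, n_items, K = 6, 4, 2
triples = [(u, i, rng.normal()) for u in range(n_users) for i in range(n_items)]

# Parameters: per-user community mixture, per-(community, item) Gaussian mean.
pz_u = rng.dirichlet(np.ones(K), size=n_users)      # p(z | u)
mu = rng.normal(size=(K, n_items))                  # mean rating of item i in community z
SIGMA = 1.0                                         # fixed variance (simplifying assumption)

def gauss(v, m):
    return np.exp(-0.5 * ((v - m) / SIGMA) ** 2) / (SIGMA * np.sqrt(2 * np.pi))

for _ in range(30):                                  # EM iterations
    # E-step: responsibility of each community for each observed rating.
    resp = np.zeros((len(triples), K))
    for n, (u, i, v) in enumerate(triples):
        w = pz_u[u] * gauss(v, mu[:, i])
        resp[n] = w / w.sum()
    # M-step: re-estimate user mixtures and community/item means.
    pz_u = np.zeros((n_users, K))
    num = np.zeros((K, n_items)); den = np.zeros((K, n_items))
    for n, (u, i, v) in enumerate(triples):
        pz_u[u] += resp[n]
        num[:, i] += resp[n] * v
        den[:, i] += resp[n]
    pz_u /= pz_u.sum(axis=1, keepdims=True)
    mu = num / np.maximum(den, 1e-12)

# Predicted rating for a user/item pair: expectation over the user's communities.
u, i = 0, 2
print("predicted normalized rating:", float(pz_u[u] @ mu[:, i]))
```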