arXiv - CS - Information Retrieval: Latest Publications

HierLLM: Hierarchical Large Language Model for Question Recommendation
arXiv - CS - Information Retrieval Pub Date: 2024-09-10 DOI: arxiv-2409.06177
Yuxuan Liu, Haipeng Liu, Ting Long
Question recommendation is the task of sequentially recommending questions to students to enhance their learning efficiency. Given a student's learning history and learning target, a question recommender is supposed to select the question that will bring the most improvement. Previous methods typically model question recommendation as a sequential decision-making problem, estimating the student's learning state from the learning history and feeding the learning state together with the learning target to a neural network that selects the recommended question from a question set. However, these methods face two challenges: (1) in the cold-start scenario no learning history is available, which leads the recommender to generate inappropriate recommendations; (2) the question set is very large, which makes it difficult for the recommender to select the best question precisely. To address these challenges, we propose the hierarchical large language model for question recommendation (HierLLM), an LLM-based hierarchical structure. The LLM-based structure enables HierLLM to tackle the cold-start issue with the strong reasoning abilities of LLMs. The hierarchical structure exploits the fact that the number of concepts is significantly smaller than the number of questions, narrowing the range of selectable questions by first identifying the relevant concept for the question to recommend, and then selecting the recommended question based on that concept. This hierarchical structure reduces the difficulty of the recommendation. To investigate the performance of HierLLM, we conduct extensive experiments, and the results demonstrate its outstanding performance.

Citations: 0
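The hierarchical narrowing described above — pick a concept first, then a question within that concept — can be sketched in a few lines. Everything here (the score lookups standing in for the paper's LLM-based modules, the toy concept/question data) is invented for illustration:

```python
# Hypothetical sketch of two-stage hierarchical selection: choose a
# concept, then search only the questions under that concept.

def recommend(concepts, questions_by_concept, score_concept, score_question):
    """Narrow the candidate set by choosing a concept before a question."""
    best_concept = max(concepts, key=score_concept)
    candidates = questions_by_concept[best_concept]
    return max(candidates, key=score_question)

# Toy data: scores are precomputed lookups instead of LLM calls.
concepts = ["algebra", "geometry"]
questions = {"algebra": ["q1", "q2"], "geometry": ["q3"]}
c_scores = {"algebra": 0.9, "geometry": 0.4}
q_scores = {"q1": 0.2, "q2": 0.8, "q3": 0.5}

print(recommend(concepts, questions, c_scores.get, q_scores.get))  # q2
```

Only questions under the winning concept are scored, which is how the hierarchy shrinks the selection problem from the full question set to one concept's subset.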
What makes a good concept anyway?
arXiv - CS - Information Retrieval Pub Date: 2024-09-10 DOI: arxiv-2409.06150
Naren Khatwani, James Geller
A good medical ontology is expected to cover its domain completely and correctly. On the other hand, large ontologies are hard to build, hard to understand, and hard to maintain. Thus, adding new concepts (often multi-word concepts) to an existing ontology must be done judiciously. Only "good" concepts should be added; however, it is difficult to define what makes a concept good. In this research, we propose a metric to measure the goodness of a concept. We identified factors that appear to influence goodness judgments of medical experts and combined them into a single metric. These factors include concept name length (in words), concept occurrence frequency in the medical literature, and syntactic categories of component words. As an added factor, we used the simplicity of a term after mapping it into a specific foreign language. We performed Bayesian optimization of factor weights to achieve maximum agreement between the metric and three medical experts. The results showed that our metric had a 50.67% overall agreement with the experts, as measured by Krippendorff's alpha.

Citations: 0
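The metric described is a weighted combination of factor scores, with the weights tuned (via Bayesian optimization in the paper) to maximize agreement with experts. A sketch with invented factor values and fixed illustrative weights:

```python
# Hypothetical sketch: combine the abstract's factors (name length,
# literature frequency, syntactic category, foreign-language simplicity)
# into one goodness score. Factor values and weights are illustrative,
# not the paper's learned values.

def goodness(factors, weights):
    """Weighted sum of normalized factor scores, each in [0, 1]."""
    return sum(weights[k] * factors[k] for k in weights)

factors = {"name_length": 0.5, "frequency": 0.8,
           "syntax": 1.0, "simplicity": 0.6}
weights = {"name_length": 0.1, "frequency": 0.4,
           "syntax": 0.3, "simplicity": 0.2}

print(round(goodness(factors, weights), 2))  # 0.79
```

In the paper, the weights themselves are the optimization variables; here they are frozen so the combination step is visible on its own.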
Operational Advice for Dense and Sparse Retrievers: HNSW, Flat, or Inverted Indexes?
arXiv - CS - Information Retrieval Pub Date: 2024-09-10 DOI: arxiv-2409.06464
Jimmy Lin
Practitioners working on dense retrieval today face a bewildering number of choices. Beyond selecting the embedding model, another consequential choice is the actual implementation of nearest-neighbor vector search. While best practices recommend HNSW indexes, flat vector indexes with brute-force search represent another viable option, particularly for smaller corpora and for rapid prototyping. In this paper, we provide experimental results on the BEIR dataset using the open-source Lucene search library that explicate the tradeoffs between HNSW and flat indexes (including quantized variants) from the perspectives of indexing time, query evaluation performance, and retrieval quality. With additional comparisons between dense and sparse retrievers, our results provide guidance for today's search practitioner in understanding the design space of dense and sparse retrievers. To our knowledge, we are the first to provide operational advice supported by empirical experiments in this regard.

Citations: 0
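For intuition on the tradeoff: a flat index amounts to brute-force scoring of the query against every vector, which is exact but linear in corpus size, whereas HNSW trades exactness for sub-linear search. A pure-Python cosine-similarity sketch of the exact-search baseline (Lucene's actual implementation differs):

```python
# Minimal sketch of a "flat" vector index: exhaustively score the query
# against every stored vector and return the top-k. Exact, simple, and
# fine for small corpora or prototyping, as the abstract notes.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def flat_search(query, index, k=2):
    """Brute-force top-k over all (doc_id, vector) pairs."""
    scored = [(cosine(query, vec), doc) for doc, vec in index]
    scored.sort(reverse=True)
    return [doc for _, doc in scored[:k]]

index = [("d1", [1.0, 0.0]), ("d2", [0.6, 0.8]), ("d3", [0.0, 1.0])]
print(flat_search([1.0, 0.1], index, k=2))  # ['d1', 'd2']
```

An HNSW index would answer the same query by greedily walking a layered proximity graph instead of touching every vector, at the cost of build time and possible misses.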
Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems
arXiv - CS - Information Retrieval Pub Date: 2024-09-10 DOI: arxiv-2409.06916
Yongsu Ahn, Quinn K Wolter, Jonilyn Dick, Janet Dick, Yu-Ru Lin
Recommender systems have become integral to digital experiences, shaping user interactions and preferences across various platforms. Despite their widespread use, these systems often suffer from algorithmic biases that can lead to unfair and unsatisfactory user experiences. This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems. By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration, stereotypes, and filter bubbles affect their recommendations. Informed by in-depth user interviews, this tool benefits both general users and researchers by increasing transparency and offering personalized impact assessments, ultimately fostering a better understanding of algorithmic biases and contributing to more equitable recommendation outcomes. This work provides valuable insights for future research and practical applications in mitigating bias and enhancing fairness in machine learning algorithms.

Citations: 0
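Of the harms listed, miscalibration has a simple quantitative reading: the gap between the category distribution of a user's history and that of their recommendations. A minimal sketch using total variation distance (the tool's actual metric is not stated in the abstract):

```python
# Hypothetical miscalibration measure: total variation distance between
# a user's historical genre distribution and the genre distribution of
# their recommendation list. The genre data below is invented.

def miscalibration(profile, recs):
    """Total variation distance between two category distributions."""
    cats = set(profile) | set(recs)
    return 0.5 * sum(abs(profile.get(c, 0.0) - recs.get(c, 0.0))
                     for c in cats)

history = {"drama": 0.6, "comedy": 0.4}
recommended = {"drama": 0.9, "comedy": 0.1}
print(round(miscalibration(history, recommended), 2))  # 0.3
```

A counterfactual module could then re-run this measure after hypothetically removing or adding interactions, showing the user how the gap would change.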
Enhancing Sequential Recommendations through Multi-Perspective Reflections and Iteration
arXiv - CS - Information Retrieval Pub Date: 2024-09-10 DOI: arxiv-2409.06377
Weicong Qin, Yi Xu, Weijie Yu, Chenglei Shen, Xiao Zhang, Ming He, Jianping Fan, Jun Xu
Sequential recommendation (SeqRec) aims to predict the next item a user will interact with by understanding user intentions and leveraging collaborative filtering information. Large language models (LLMs) have shown great promise in recommendation tasks through prompt-based, fixed reflection libraries, and fine-tuning techniques. However, these methods face challenges, including lack of supervision, inability to optimize reflection sources, inflexibility to diverse user needs, and high computational costs. Despite promising results, current studies primarily focus on reflections of users' explicit preferences (e.g., item titles) while neglecting implicit preferences (e.g., brands) and collaborative filtering information. This oversight hinders the capture of preference shifts and dynamic user behaviors. Additionally, existing approaches lack mechanisms for reflection evaluation and iteration, often leading to suboptimal recommendations. To address these issues, we propose the Mixture of REflectors (MoRE) framework, designed to model and learn dynamic user preferences in SeqRec. Specifically, MoRE introduces three reflectors for generating LLM-based reflections on explicit preferences, implicit preferences, and collaborative signals. Each reflector incorporates a self-improving strategy, termed refining-and-iteration, to evaluate and iteratively update reflections. Furthermore, a meta-reflector employs a contextual bandit algorithm to select the most suitable expert and corresponding reflections for each user's recommendation, effectively capturing dynamic preferences. Extensive experiments on three real-world datasets demonstrate that MoRE consistently outperforms state-of-the-art methods, requiring less training time and GPU memory than other LLM-based approaches in SeqRec.

Citations: 0
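The meta-reflector's job — choosing among the three reflectors per recommendation — is a bandit problem. A minimal non-contextual UCB1 sketch with invented, deterministic rewards (MoRE's actual contextual bandit is richer than this):

```python
# Toy UCB1 bandit over three "reflector" arms. Rewards are a fixed
# invented payoff per arm; a real system would observe recommendation
# feedback, and MoRE conditions the choice on user context.
import math

def ucb_pick(counts, rewards, t):
    """Pick the arm with the highest mean reward plus exploration bonus."""
    def score(arm):
        if counts[arm] == 0:
            return float("inf")  # try every arm at least once
        mean = rewards[arm] / counts[arm]
        return mean + math.sqrt(2 * math.log(t) / counts[arm])
    return max(counts, key=score)

arms = ["explicit", "implicit", "collaborative"]
counts = {a: 0 for a in arms}
rewards = {a: 0.0 for a in arms}
true_payoff = {"explicit": 0.3, "implicit": 0.7, "collaborative": 0.5}

for t in range(1, 201):
    arm = ucb_pick(counts, rewards, t)
    counts[arm] += 1
    rewards[arm] += true_payoff[arm]  # deterministic toy reward

print(max(counts, key=counts.get))  # the best arm dominates the pulls
```

Over time the bandit concentrates its pulls on the reflector whose reflections yield the highest observed reward, while still occasionally revisiting the others — the mechanism by which the meta-reflector tracks shifting preferences.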
DV-FSR: A Dual-View Target Attack Framework for Federated Sequential Recommendation
arXiv - CS - Information Retrieval Pub Date: 2024-09-10 DOI: arxiv-2409.07500
Qitao Qin, Yucong Luo, Mingyue Cheng, Qingyang Mao, Chenyi Lei
Federated recommendation (FedRec) preserves user privacy by enabling decentralized training of personalized models, but this architecture is inherently vulnerable to adversarial attacks. Significant research has been conducted on targeted attacks in FedRec systems, motivated by commercial and social influence considerations. However, much of this work has largely overlooked the differential robustness of recommendation models. Moreover, our empirical findings indicate that existing targeted attack methods achieve only limited effectiveness in Federated Sequential Recommendation (FSR) tasks. Driven by these observations, we focus on investigating targeted attacks in FSR and propose a novel dual-view attack framework, named DV-FSR. This attack method uniquely combines a sampling-based explicit strategy with a contrastive learning-based implicit gradient strategy to orchestrate a coordinated attack. Additionally, we introduce a specific defense mechanism tailored for targeted attacks in FSR, aiming to evaluate the mitigation effects of the attack method we propose. Extensive experiments validate the effectiveness of our proposed approach on representative sequential models.

Citations: 0
Critical Features Tracking on Triangulated Irregular Networks by a Scale-Space Method
arXiv - CS - Information Retrieval Pub Date: 2024-09-10 DOI: arxiv-2409.06638
Haoan Feng, Yunting Song, Leila De Floriani
The scale-space method is a well-established framework that constructs a hierarchical representation of an input signal and facilitates coarse-to-fine visual reasoning. Considering the terrain elevation function as the input signal, the scale-space method can identify and track significant topographic features across different scales. The number of scales across which a feature persists, called its life span, indicates the importance of that feature. In this way, important topographic features of a landscape can be selected, which is useful for many applications, including cartography, nautical charting, and land-use planning. The scale-space methods developed for terrain data use gridded Digital Elevation Models (DEMs) to represent the terrain. However, gridded DEMs lack the flexibility to adapt to the irregular distribution of input data and the varied topological complexity of different regions. Instead, Triangulated Irregular Networks (TINs) can be generated directly from irregularly distributed point clouds and accurately preserve important features. In this work, we introduce a novel scale-space analysis pipeline for TINs, addressing the multiple challenges of extending grid-based scale-space methods to TINs. Our pipeline can efficiently identify and track topologically important features on TINs. Moreover, it can analyze terrains with irregular boundaries, which pose challenges for grid-based methods. Comprehensive experiments show that, compared to grid-based methods, our TIN-based pipeline is more efficient, more accurate, and more robust to changes in resolution.

Citations: 0
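The "life span" idea can be illustrated on a 1-D elevation profile: each smoothing pass is one coarser scale, and a peak's life span is the number of scales at which a local maximum still exists near its position. This toy sketch (moving-average smoothing, tolerance-based matching) is not the paper's TIN-based method:

```python
# Toy 1-D scale-space: repeatedly smooth a profile and count how many
# scales each original peak survives. Broad, significant features
# outlive small noise bumps.

def smooth(signal):
    """One scale step: 3-point moving average with clamped ends."""
    n = len(signal)
    return [sum(signal[max(0, i - 1):min(n, i + 2)]) /
            len(signal[max(0, i - 1):min(n, i + 2)]) for i in range(n)]

def peaks(signal):
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]]

def life_spans(signal, n_scales=5, tol=1):
    spans = {p: 1 for p in peaks(signal)}
    for _ in range(n_scales):
        signal = smooth(signal)
        surviving = peaks(signal)
        for p in spans:
            if any(abs(p - q) <= tol for q in surviving):
                spans[p] += 1
    return spans

# A tall broad peak at index 4 and a small noise bump at index 8.
elevation = [0, 1, 2, 4, 9, 4, 2, 1, 1.5, 1, 0]
print(life_spans(elevation))  # the broad peak outlives the noise bump
```

On a TIN the same idea applies, but smoothing and feature tracking must operate on an irregular triangulation rather than on evenly spaced samples, which is exactly the difficulty the paper addresses.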
Rs4rs: Semantically Find Recent Publications from Top Recommendation System-Related Venues
arXiv - CS - Information Retrieval Pub Date: 2024-09-09 DOI: arxiv-2409.05570
Tri Kurniawan Wijaya, Edoardo D'Amico, Gabor Fodor, Manuel V. Loureiro
Rs4rs is a web application designed to perform semantic search on recent papers from top conferences and journals related to Recommender Systems. Current scholarly search engine tools like Google Scholar, Semantic Scholar, and ResearchGate often yield broad results that fail to target the most relevant high-quality publications. Moreover, manually visiting individual conference and journal websites is a time-consuming process that primarily supports only syntactic searches. Rs4rs addresses these issues by providing a user-friendly platform where researchers can input their topic of interest and receive a list of recent, relevant papers from top Recommender Systems venues. Utilizing semantic search techniques, Rs4rs ensures that the search results are not only precise and relevant but also comprehensive, capturing papers regardless of variations in wording. This tool significantly enhances research efficiency and accuracy, thereby benefiting the research community and the public by facilitating access to high-quality, pertinent academic resources in the field of Recommender Systems. Rs4rs is available at https://rs4rs.com.

Citations: 0
Enhancing Graph Contrastive Learning with Reliable and Informative Augmentation for Recommendation
arXiv - CS - Information Retrieval Pub Date: 2024-09-09 DOI: arxiv-2409.05633
Bowen Zheng, Junjie Zhang, Hongyu Lu, Yu Chen, Ming Chen, Wayne Xin Zhao, Ji-Rong Wen
Graph neural networks (GNNs) have been a powerful approach in collaborative filtering (CF) due to their ability to model high-order user-item relationships. Recently, to alleviate data sparsity and enhance representation learning, many efforts have been made to integrate contrastive learning (CL) with GNNs. Despite the promising improvements, the contrastive view generation based on structure and representation perturbations in existing methods potentially disrupts the collaborative information in contrastive views, resulting in limited effectiveness of positive alignment. To overcome this issue, we propose CoGCL, a novel framework that aims to enhance graph contrastive learning by constructing contrastive views with stronger collaborative information via discrete codes. The core idea is to map users and items into discrete codes rich in collaborative information for reliable and informative contrastive view generation. To this end, we initially introduce a multi-level vector quantizer in an end-to-end manner to quantize user and item representations into discrete codes. Based on these discrete codes, we enhance the collaborative information of contrastive views by considering neighborhood structure and semantic relevance respectively. For neighborhood structure, we propose virtual neighbor augmentation by treating discrete codes as virtual neighbors, which expands an observed user-item interaction into multiple edges involving discrete codes. Regarding semantic relevance, we identify similar users/items based on shared discrete codes and interaction targets to generate the semantically relevant view. Through these strategies, we construct contrastive views with stronger collaborative information and develop a triple-view graph contrastive learning approach. Extensive experiments on four public datasets demonstrate the effectiveness of our proposed approach.

Citations: 0
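The core step in CoGCL — mapping a continuous user/item embedding to a discrete code — is nearest-neighbor assignment against a codebook. A one-level sketch (the paper trains a multi-level quantizer end to end; this codebook is invented):

```python
# Hypothetical one-level vector quantizer: assign an embedding to the
# index of its nearest codebook entry. Users/items sharing a code act
# as "virtual neighbors" when building contrastive views.

def quantize(vec, codebook):
    """Return the index of the nearest codebook entry (squared L2)."""
    def sq_dist(entry):
        return sum((a - b) ** 2 for a, b in zip(vec, entry))
    return min(range(len(codebook)), key=lambda i: sq_dist(codebook[i]))

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
user_embedding = [0.9, 0.2]
print(quantize(user_embedding, codebook))  # 1
```

A multi-level quantizer applies this step repeatedly (e.g., quantizing the residual at each level), yielding a short code sequence per user or item.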
NLLB-E5: A Scalable Multilingual Retrieval Model
arXiv - CS - Information Retrieval Pub Date: 2024-09-09 DOI: arxiv-2409.05401
Arkadeep Acharya, Rudra Murthy, Vishwajeet Kumar, Jaydeep Sen
Despite significant progress in multilingual information retrieval, the lack of models capable of effectively supporting multiple languages, particularly low-resource ones like Indic languages, remains a critical challenge. This paper presents NLLB-E5: A Scalable Multilingual Retrieval Model. NLLB-E5 leverages the built-in multilingual capabilities of the NLLB encoder for translation tasks. It proposes a distillation approach from the multilingual retriever E5 to provide a zero-shot retrieval approach handling multiple languages, including all major Indic languages, without requiring multilingual training data. We evaluate the model on a comprehensive suite of existing benchmarks, including Hindi-BEIR, highlighting its robust performance across diverse languages and tasks. Our findings uncover task- and domain-specific challenges, providing valuable insights into retrieval performance, especially for low-resource languages. NLLB-E5 addresses the urgent need for an inclusive, scalable, and language-agnostic text retrieval model, advancing the field of multilingual information access and promoting digital inclusivity for millions of users globally.

Citations: 0
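A common recipe for this kind of retriever distillation is to minimize the distance between student and teacher embeddings of parallel text, so the student inherits the teacher's embedding space across languages. A minimal sketch of such a loss (the abstract does not specify the exact training objective, and the embedding values below are invented):

```python
# Hypothetical embedding-distillation loss: mean squared error between
# the student's embedding of a sentence (possibly in another language)
# and the teacher's embedding of its English parallel.

def mse_distill_loss(student_emb, teacher_emb):
    """Mean squared error between student and teacher embeddings."""
    n = len(teacher_emb)
    return sum((s - t) ** 2 for s, t in zip(student_emb, teacher_emb)) / n

teacher = [0.2, 0.4, 0.4]   # e.g. teacher embedding of an English sentence
student = [0.1, 0.5, 0.4]   # student embedding of a parallel sentence
print(round(mse_distill_loss(student, teacher), 4))  # 0.0067
```

Because the loss only requires parallel sentences and a frozen teacher, the student can cover languages the teacher never saw as retrieval training data, which matches the zero-shot claim in the abstract.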