{"title":"图书馆搜索引擎的性能评估框架","authors":"M. S. Pera, Yiu-Kai Ng","doi":"10.1109/SITIS.2010.56","DOIUrl":null,"url":null,"abstract":"Libraries offer valuable resources to library patrons. Unfortunately, formulating library queries that match the rigid keywords chosen by the Library of Congress in library records to retrieve relevant results can be difficult. In solving this problem, we have developed a library search engine, called EnLibS, which allows library patrons to post a query Q with commonly-used words and ranks the retrieved library records according to their degrees of resemblance with Q. To evaluate the performance of EnLibS, it is imperative to conduct a thorough assessment. However, this performance evaluation cannot be conducted due to the lack of benchmark datasets and standardized metrics. To address this issue, in this paper we introduce an evaluation framework which (i) statistically determines the size of a test dataset, (ii) includes a controlled experiment that employs technically-sound approaches for calculating the ideal number of appraisers and queries to be used in the experiment, and (iii) establishes standard metrics for evaluating a library search engine. The proposed evaluation model can be applied to assess the performance of library search engines in (i) reducing the number of keyword queries that retrieve no results, (ii) obtaining high precision in retrieving and accurately ranking relevant library records, and (iii) achieving an acceptable query processing time. We present a case study in which we apply the proposed evaluation framework on the library search engine at Brigham Young University and EnLibS to assess, compare, and contrast their performance.","PeriodicalId":128396,"journal":{"name":"2010 Sixth International Conference on Signal-Image Technology and Internet Based Systems","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Performance Evaluation Framework for Library Search Engines\",\"authors\":\"M. S. Pera, Yiu-Kai Ng\",\"doi\":\"10.1109/SITIS.2010.56\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Libraries offer valuable resources to library patrons. Unfortunately, formulating library queries that match the rigid keywords chosen by the Library of Congress in library records to retrieve relevant results can be difficult. In solving this problem, we have developed a library search engine, called EnLibS, which allows library patrons to post a query Q with commonly-used words and ranks the retrieved library records according to their degrees of resemblance with Q. To evaluate the performance of EnLibS, it is imperative to conduct a thorough assessment. However, this performance evaluation cannot be conducted due to the lack of benchmark datasets and standardized metrics. To address this issue, in this paper we introduce an evaluation framework which (i) statistically determines the size of a test dataset, (ii) includes a controlled experiment that employs technically-sound approaches for calculating the ideal number of appraisers and queries to be used in the experiment, and (iii) establishes standard metrics for evaluating a library search engine. 
The proposed evaluation model can be applied to assess the performance of library search engines in (i) reducing the number of keyword queries that retrieve no results, (ii) obtaining high precision in retrieving and accurately ranking relevant library records, and (iii) achieving an acceptable query processing time. We present a case study in which we apply the proposed evaluation framework on the library search engine at Brigham Young University and EnLibS to assess, compare, and contrast their performance.\",\"PeriodicalId\":128396,\"journal\":{\"name\":\"2010 Sixth International Conference on Signal-Image Technology and Internet Based Systems\",\"volume\":\"22 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-12-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2010 Sixth International Conference on Signal-Image Technology and Internet Based Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SITIS.2010.56\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 Sixth International Conference on Signal-Image Technology and Internet Based Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SITIS.2010.56","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Performance Evaluation Framework for Library Search Engines
Libraries offer valuable resources to library patrons. Unfortunately, formulating library queries that match the rigid keywords chosen by the Library of Congress for library records can make retrieving relevant results difficult. To address this problem, we have developed a library search engine, called EnLibS, which allows library patrons to pose a query Q using commonly used words and ranks the retrieved library records according to their degree of resemblance to Q. Evaluating the performance of EnLibS requires a thorough assessment; however, such an evaluation is hindered by the lack of benchmark datasets and standardized metrics. To address this issue, in this paper we introduce an evaluation framework which (i) statistically determines the size of a test dataset, (ii) includes a controlled experiment that employs technically sound approaches for calculating the ideal number of appraisers and queries to be used in the experiment, and (iii) establishes standard metrics for evaluating a library search engine. The proposed evaluation model can be applied to assess the performance of library search engines in (i) reducing the number of keyword queries that retrieve no results, (ii) obtaining high precision in retrieving and accurately ranking relevant library records, and (iii) achieving an acceptable query processing time. We present a case study in which we apply the proposed evaluation framework to the library search engine at Brigham Young University and to EnLibS to assess, compare, and contrast their performance.
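
The abstract describes two technical ingredients of the framework: a statistical determination of the test-dataset size, and standard metrics for zero-result queries, retrieval precision, and query processing time. The sketch below is a minimal illustration of what such components might look like; it is not the authors' implementation. The exact formulas are not stated in the abstract, so the dataset-sizing step assumes the standard Cochran sample-size formula as a stand-in, and all function names, the toy catalog, and the toy search function are hypothetical.

```python
"""Illustrative sketch (assumptions noted inline), not the paper's code."""
import math
import time
from typing import Callable, List, Sequence, Set


def sample_size(population: int, confidence_z: float = 1.96,
                margin_of_error: float = 0.05, proportion: float = 0.5) -> int:
    """Statistically size a test dataset. Assumes Cochran's formula with a
    finite-population correction; the paper's exact method may differ."""
    n0 = (confidence_z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)


def zero_result_rate(queries: Sequence[str],
                     search: Callable[[str], List[str]]) -> float:
    """Fraction of keyword queries for which the engine retrieves no records."""
    empty = sum(1 for q in queries if not search(q))
    return empty / len(queries)


def precision_at_k(retrieved: Sequence[str], relevant: Set[str], k: int = 10) -> float:
    """Precision of the top-k retrieved library records, judged against
    appraiser-labeled relevant records."""
    top_k = retrieved[:k]
    return sum(1 for rec in top_k if rec in relevant) / max(len(top_k), 1)


def average_query_time(queries: Sequence[str],
                       search: Callable[[str], List[str]]) -> float:
    """Mean wall-clock processing time per query, in seconds."""
    start = time.perf_counter()
    for q in queries:
        search(q)
    return (time.perf_counter() - start) / len(queries)


if __name__ == "__main__":
    # Hypothetical usage: a toy in-memory "search engine" over record titles.
    catalog = ["data mining handbook", "library science primer", "signal processing"]
    toy_search = lambda q: [r for r in catalog if q.lower() in r]

    test_queries = ["library", "quantum computing", "signal"]
    print("suggested test-set size:", sample_size(population=100_000))
    print("zero-result rate:", zero_result_rate(test_queries, toy_search))
    print("P@10 for 'library':",
          precision_at_k(toy_search("library"), relevant={"library science primer"}))
    print("avg query time (s):", average_query_time(test_queries, toy_search))
```

In practice, the relevance judgments would come from the appraisers recruited for the controlled experiment, and the same metric functions would be run against both engines under comparison (e.g., the Brigham Young University library search engine and EnLibS) on the statistically sized query set.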