{"title":"衡量元搜索引擎获取信息的性能:基于精确度指标的探索性研究","authors":"Raj Kumar Bhardwaj, Ritesh Kumar, Mohammad Nazim","doi":"10.1108/pmm-09-2023-0028","DOIUrl":null,"url":null,"abstract":"<h3>Purpose</h3>\n<p>This paper evaluates the precision of four metasearch engines (MSEs) – DuckDuckGo, Dogpile, Metacrawler and Startpage, to determine which metasearch engine exhibits the highest level of precision and to identify the metasearch engine that is most likely to return the most relevant search results.</p><!--/ Abstract__block -->\n<h3>Design/methodology/approach</h3>\n<p>The research is divided into two parts: the first phase involves four queries categorized into two segments (4-Q-2-S), while the second phase includes six queries divided into three segments (6-Q-3-S). These queries vary in complexity, falling into three types: simple, phrase and complex. The precision, average precision and the presence of duplicates across all the evaluated metasearch engines are determined.</p><!--/ Abstract__block -->\n<h3>Findings</h3>\n<p>The study clearly demonstrated that Startpage returned the most relevant results and achieved the highest precision (0.98) among the four MSEs. Conversely, DuckDuckGo exhibited consistent performance across both phases of the study.</p><!--/ Abstract__block -->\n<h3>Research limitations/implications</h3>\n<p>The study only evaluated four metasearch engines, which may not be representative of all available metasearch engines. Additionally, a limited number of queries were used, which may not be sufficient to generalize the findings to all types of queries.</p><!--/ Abstract__block -->\n<h3>Practical implications</h3>\n<p>The findings of this study can be valuable for accreditation agencies in managing duplicates, improving their search capabilities and obtaining more relevant and precise results. These findings can also assist users in selecting the best metasearch engine based on precision rather than interface.</p><!--/ Abstract__block -->\n<h3>Originality/value</h3>\n<p>The study is the first of its kind which evaluates the four metasearch engines. No similar study has been conducted in the past to measure the performance of metasearch engines.</p><!--/ Abstract__block -->","PeriodicalId":44583,"journal":{"name":"Performance Measurement and Metrics","volume":null,"pages":null},"PeriodicalIF":1.8000,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Measuring performance of metasearch engines to access information: an exploratory study based on precision metrics\",\"authors\":\"Raj Kumar Bhardwaj, Ritesh Kumar, Mohammad Nazim\",\"doi\":\"10.1108/pmm-09-2023-0028\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<h3>Purpose</h3>\\n<p>This paper evaluates the precision of four metasearch engines (MSEs) – DuckDuckGo, Dogpile, Metacrawler and Startpage, to determine which metasearch engine exhibits the highest level of precision and to identify the metasearch engine that is most likely to return the most relevant search results.</p><!--/ Abstract__block -->\\n<h3>Design/methodology/approach</h3>\\n<p>The research is divided into two parts: the first phase involves four queries categorized into two segments (4-Q-2-S), while the second phase includes six queries divided into three segments (6-Q-3-S). These queries vary in complexity, falling into three types: simple, phrase and complex. 
The precision, average precision and the presence of duplicates across all the evaluated metasearch engines are determined.</p><!--/ Abstract__block -->\\n<h3>Findings</h3>\\n<p>The study clearly demonstrated that Startpage returned the most relevant results and achieved the highest precision (0.98) among the four MSEs. Conversely, DuckDuckGo exhibited consistent performance across both phases of the study.</p><!--/ Abstract__block -->\\n<h3>Research limitations/implications</h3>\\n<p>The study only evaluated four metasearch engines, which may not be representative of all available metasearch engines. Additionally, a limited number of queries were used, which may not be sufficient to generalize the findings to all types of queries.</p><!--/ Abstract__block -->\\n<h3>Practical implications</h3>\\n<p>The findings of this study can be valuable for accreditation agencies in managing duplicates, improving their search capabilities and obtaining more relevant and precise results. These findings can also assist users in selecting the best metasearch engine based on precision rather than interface.</p><!--/ Abstract__block -->\\n<h3>Originality/value</h3>\\n<p>The study is the first of its kind which evaluates the four metasearch engines. No similar study has been conducted in the past to measure the performance of metasearch engines.</p><!--/ Abstract__block -->\",\"PeriodicalId\":44583,\"journal\":{\"name\":\"Performance Measurement and Metrics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2024-03-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Performance Measurement and Metrics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1108/pmm-09-2023-0028\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"INFORMATION SCIENCE & LIBRARY SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Performance Measurement and Metrics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/pmm-09-2023-0028","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Measuring performance of metasearch engines to access information: an exploratory study based on precision metrics
Purpose
This paper evaluates the precision of four metasearch engines (MSEs) – DuckDuckGo, Dogpile, Metacrawler and Startpage – to determine which exhibits the highest precision and is therefore most likely to return the most relevant search results.
Design/methodology/approach
The research is divided into two phases: the first involves four queries grouped into two segments (4-Q-2-S), while the second includes six queries divided into three segments (6-Q-3-S). The queries vary in complexity and fall into three types: simple, phrase and complex. Precision, average precision and the presence of duplicates are determined for each of the evaluated metasearch engines.
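The abstract does not spell out how these scores are computed, so the following is only a minimal illustrative sketch. It assumes that a query's precision is the share of its retrieved results judged relevant, that an engine's average precision is the mean of those per-query scores (not the rank-based Average Precision used elsewhere in IR), and that duplicates are repeated URLs within a result list; the function names, relevance judgements and cutoff are hypothetical.

```python
# Illustrative sketch only; the paper's exact scoring procedure is not given in the abstract.
from collections import Counter

def precision(judged: list[bool]) -> float:
    """Fraction of retrieved results judged relevant for one query."""
    return sum(judged) / len(judged) if judged else 0.0

def average_precision_over_queries(per_query_judgements: list[list[bool]]) -> float:
    """Mean of per-query precision values for one metasearch engine.
    Note: this is a simple mean, not the rank-based AP metric."""
    scores = [precision(j) for j in per_query_judgements]
    return sum(scores) / len(scores) if scores else 0.0

def duplicate_count(urls: list[str]) -> int:
    """Number of results repeating an already-seen URL in one result list."""
    counts = Counter(urls)
    return sum(n - 1 for n in counts.values() if n > 1)

# Hypothetical example: two queries, top-5 relevance judgements each.
engine_judgements = [
    [True, True, False, True, True],   # query 1: 4/5 relevant
    [True, False, True, True, False],  # query 2: 3/5 relevant
]
print(round(average_precision_over_queries(engine_judgements), 2))  # 0.7
```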
Findings
The study clearly demonstrated that Startpage returned the most relevant results and achieved the highest precision (0.98) among the four MSEs, while DuckDuckGo exhibited consistent performance across both phases of the study.
Research limitations/implications
The study only evaluated four metasearch engines, which may not be representative of all available metasearch engines. Additionally, a limited number of queries were used, which may not be sufficient to generalize the findings to all types of queries.
Practical implications
The findings of this study can be valuable for accreditation agencies in managing duplicates, improving their search capabilities and obtaining more relevant and precise results. These findings can also assist users in selecting the best metasearch engine based on precision rather than interface.
Originality/value
The study is the first of its kind to evaluate these four metasearch engines. No similar study has previously been conducted to measure the performance of metasearch engines.
Journal overview:
■ Quantitative and qualitative analysis
■ Benchmarking
■ The measurement and role of information in enhancing organizational effectiveness
■ Quality techniques and quality improvement
■ Training and education
■ Methods for performance measurement and metrics
■ Standard assessment tools
■ Using emerging technologies
■ Setting standards of service quality