Cooperative crawling

M. Buzzi
{"title":"Cooperative crawling","authors":"M. Buzzi","doi":"10.1109/LAWEB.2003.1250300","DOIUrl":null,"url":null,"abstract":"Web crawler design presents many different challenges: architecture, strategies, performance and more. One of the most important research topics concerns improving the selection of \"interesting\" Web pages (for the user), according to importance metrics. Another relevant point is content freshness, i.e. maintaining freshness and consistency of temporary stored copies. For this, the crawler periodically repeats its activity going over stored contents (recrawling process). We propose a scheme to permit a crawler to acquire information about the global state of a Website before the crawling process takes place. This scheme requires Web server cooperation in order to collect and publish information on its content, useful for enabling a crawler to tune its visit strategy. If this information is unavailable or not updated the crawler still acts in the usual manner. In this sense the proposed scheme is not invasive and is independent from any crawling strategy and architecture.","PeriodicalId":376743,"journal":{"name":"Proceedings of the IEEE/LEOS 3rd International Conference on Numerical Simulation of Semiconductor Optoelectronic Devices (IEEE Cat. No.03EX726)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the IEEE/LEOS 3rd International Conference on Numerical Simulation of Semiconductor Optoelectronic Devices (IEEE Cat. No.03EX726)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/LAWEB.2003.1250300","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 14

Abstract

Web crawler design presents many different challenges: architecture, strategies, performance and more. One of the most important research topics concerns improving the selection of "interesting" Web pages (for the user) according to importance metrics. Another relevant point is content freshness, i.e., maintaining the freshness and consistency of temporarily stored copies. To this end, the crawler periodically repeats its activity over the stored contents (the recrawling process). We propose a scheme that permits a crawler to acquire information about the global state of a Website before the crawling process takes place. This scheme requires Web server cooperation in order to collect and publish information on its content, enabling a crawler to tune its visit strategy. If this information is unavailable or not updated, the crawler still acts in the usual manner. In this sense the proposed scheme is not invasive and is independent of any crawling strategy and architecture.
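The abstract leaves the server's publication mechanism unspecified, so the following Python sketch only illustrates the behaviour it describes: the crawler attempts to fetch a hypothetical server-published site-state manifest and uses it to restrict recrawling to changed pages, falling back to conventional recrawling when the manifest is missing or unparsable. The manifest path, JSON format, and function names below are assumptions for illustration, not the paper's actual proposal.

```python
import json
import urllib.error
import urllib.request

# Hypothetical well-known location of the cooperative site-state manifest;
# the abstract does not specify a concrete file name or format.
MANIFEST_PATH = "/site-state.json"


def fetch_site_state(base_url):
    """Try to fetch the server-published site summary.

    Returns a dict mapping URL -> last-modified timestamp, or None if the
    server does not cooperate (manifest missing, unreachable, or malformed).
    """
    try:
        with urllib.request.urlopen(base_url + MANIFEST_PATH, timeout=10) as resp:
            return json.load(resp)
    except (urllib.error.URLError, ValueError):
        return None


def plan_recrawl(base_url, local_copies):
    """Decide which stored pages to revisit.

    local_copies: dict mapping URL -> timestamp of the locally stored copy.
    """
    site_state = fetch_site_state(base_url)
    if site_state is None:
        # No cooperation: recrawl everything, as a conventional crawler would.
        return list(local_copies)
    # Cooperation available: revisit only pages whose published timestamp is
    # newer than our copy; pages absent from the manifest are recrawled too.
    return [
        url
        for url, stored_ts in local_copies.items()
        if site_state.get(url, float("inf")) > stored_ts
    ]
```

The key property stated in the abstract is preserved here: a non-cooperating or out-of-date server simply triggers the crawler's usual behaviour, so the scheme stays non-invasive and independent of the underlying crawling strategy.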