Cooperative crawling
M. Buzzi
DOI: 10.1109/LAWEB.2003.1250300
Published: 2003-11-10
Citations: 14
Abstract
Web crawler design presents many different challenges: architecture, crawling strategies, performance, and more. One of the most important research topics is improving the selection of Web pages that are "interesting" to the user, according to importance metrics. Another relevant issue is content freshness, i.e., maintaining the freshness and consistency of temporarily stored copies. To this end, the crawler periodically revisits the stored content (the recrawling process). We propose a scheme that permits a crawler to acquire information about the global state of a Web site before the crawling process takes place. The scheme requires Web server cooperation: the server collects and publishes information about its content, which a crawler can use to tune its visit strategy. If this information is unavailable or out of date, the crawler simply acts in the usual manner. In this sense the proposed scheme is non-invasive and is independent of any particular crawling strategy and architecture.
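The abstract describes two behaviors: when the server publishes up-to-date information about its content, the crawler uses it to tune its recrawl; otherwise, the crawler falls back to its usual behavior. The following is a minimal sketch of that fallback logic, assuming (as an illustration, not the paper's actual format) that the server's published information is a mapping from URL to last-modification time, and that the crawler keeps the time at which each local copy was stored:

```python
def pages_to_recrawl(server_listing, local_cache):
    """Select which cached pages are worth recrawling.

    server_listing: dict mapping URL -> last-modified time published by
        a cooperating server, or None if the listing is unavailable.
    local_cache: dict mapping URL -> time the local copy was stored.

    If the server listing is unavailable, the crawler acts in the usual
    manner, here modeled as recrawling every cached page.
    """
    if server_listing is None:
        # No cooperation available: fall back to the usual full recrawl.
        return sorted(local_cache)
    # Cooperation available: recrawl only pages the server reports as
    # modified after our copy was stored.
    return sorted(
        url
        for url, cached_at in local_cache.items()
        if url in server_listing and server_listing[url] > cached_at
    )
```

For example, with a listing reporting `/a` modified at time 200 and `/b` at time 100, and local copies of both stored at time 150, only `/a` is selected; with no listing at all, both are recrawled. The point of the scheme is exactly this asymmetry: the cooperative path saves requests, while the non-cooperative path degrades gracefully to ordinary crawling.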