{"title":"A novel approach for avoiding overload in the web crawling","authors":"Lavanya Pamulaparty, C. V. Rao, M. S. Rao","doi":"10.1109/ICHPCA.2014.7045332","DOIUrl":null,"url":null,"abstract":"In comparison to human searchers, crawlers are capable of retrieving data in greater depth and more quickly. As a result, it can face crippling impact on the performance of the site. Inessential to say, if a particular crawler is carrying out numerous requests per second and/or, Therefore, in a scenario involving multiple crawlers, it would be difficult for a server to handle requests in case each crawler downloads large files and/or performs numerous requests/second. So to avoid the overloading in the retrieving data in the proposed system a sequence flow protocol(Leakey Bucket) is which retrieves the data in the sequence order and also proposed mobbing unrestricted router (MUR)(CFR) which maintains three phases inbound, peak and outbound. If the inbound flow is less than the peak it allows the crawling, and at the same time if the inbound is greater than or equal to height the MUR protocol stops the crawling and request for the server to extend network resource. Is used for retrieving the data in sequential order.","PeriodicalId":197528,"journal":{"name":"2014 International Conference on High Performance Computing and Applications (ICHPCA)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 International Conference on High Performance Computing and Applications (ICHPCA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICHPCA.2014.7045332","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Compared with human searchers, crawlers can retrieve data in greater depth and far more quickly, and as a result they can have a crippling impact on a site's performance. Needless to say, if a single crawler issues numerous requests per second and/or downloads large files, a server struggles to keep up; in a scenario involving multiple crawlers, each downloading large files and/or issuing numerous requests per second, the server can hardly handle the load. To avoid overload during data retrieval, the proposed system employs a sequence-flow protocol (leaky bucket) that retrieves data in sequential order, together with a proposed mobbing unrestricted router (MUR), which maintains three phases: inbound, peak, and outbound. If the inbound flow is less than the peak, crawling is allowed; if the inbound flow is greater than or equal to the peak, the MUR protocol stops the crawling and requests the server to extend its network resources.
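The abstract does not include an implementation, so the following is only a minimal sketch of the two mechanisms it describes: a leaky bucket that releases queued requests in arrival order at a fixed rate, and an MUR-style gate that compares inbound flow against a peak threshold. All class, function, and threshold names here (`LeakyBucket`, `mur_decision`, `capacity`, `leak_per_tick`) are assumptions, not terms from the paper.

```python
class LeakyBucket:
    """Queue crawl requests and release them at a fixed rate,
    preserving arrival (sequence) order."""

    def __init__(self, capacity: int, leak_per_tick: int):
        self.capacity = capacity            # maximum queued requests
        self.leak_per_tick = leak_per_tick  # requests released per tick
        self.queue: list[str] = []

    def add(self, request: str) -> bool:
        """Accept a request only while the bucket has room."""
        if len(self.queue) < self.capacity:
            self.queue.append(request)
            return True
        return False                        # overflow: request rejected

    def leak(self) -> list[str]:
        """Release the next batch of requests in sequence order."""
        released = self.queue[:self.leak_per_tick]
        del self.queue[:self.leak_per_tick]
        return released


def mur_decision(inbound: int, peak: int) -> str:
    """MUR-style gate (as described in the abstract): allow crawling
    while inbound flow is below the peak; otherwise stop crawling and
    ask the server to extend network resources."""
    if inbound < peak:
        return "allow-crawl"
    return "stop-and-request-resources"
```

As a usage sketch, a crawler scheduler would push outgoing requests through `add`, drain them with `leak` once per tick, and consult `mur_decision` with the current inbound flow before each crawl cycle.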