Review of web crawlers

S. R. Sreeja, S. Chaudhari
{"title":"Review of web crawlers","authors":"S. R. Sreeja, S. Chaudhari","doi":"10.1504/IJKWI.2014.065035","DOIUrl":null,"url":null,"abstract":"The web is a repository of large amount of data. Information available in the web is organised in the form of pages. Due to the presence of unlimited amount of information, searching and finding out appropriate information from the web is a task which needs expertise. Web crawlers are programmes that assist search engines by automating the task of visiting web pages and downloading their contents. They also help in ranking the downloaded web pages. Thus, the search engines can produce a list of web pages ordered by their relevance and can display this list as a result of the search. Crawling also helps to validate web pages, analyse them, notify about page-updation, visualise web pages and sometimes for collecting e-mail addresses for spam purposes. They can be of different types, each one using different strategies and techniques to crawl web pages. This paper presents a review of various types of web crawlers.","PeriodicalId":113936,"journal":{"name":"Int. J. Knowl. Web Intell.","volume":"52 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Knowl. Web Intell.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1504/IJKWI.2014.065035","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

The web is a repository of a vast amount of data. Information on the web is organised in the form of pages. Because of this virtually unlimited amount of information, searching for and finding appropriate information on the web is a task that requires expertise. Web crawlers are programmes that assist search engines by automating the task of visiting web pages and downloading their contents. They also help in ranking the downloaded web pages, so that a search engine can produce a list of web pages ordered by relevance and display this list as the result of a search. Crawling also helps to validate web pages, analyse them, notify users of page updates and visualise web pages, and it is sometimes used to collect e-mail addresses for spamming purposes. Crawlers can be of different types, each using different strategies and techniques to crawl web pages. This paper presents a review of the various types of web crawlers.
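The paper surveys crawler types rather than prescribing one implementation. As a rough illustration of the fetch-parse-enqueue loop the abstract describes, here is a minimal breadth-first crawler sketch in Python; the seed URL, the max_pages limit, and the use of the requests and beautifulsoup4 libraries are illustrative assumptions, not details from the paper.

# Minimal breadth-first crawler sketch (illustrative; not the paper's method).
# Assumes the third-party packages `requests` and `beautifulsoup4` are installed.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(seed_url, max_pages=20):
    """Visit pages breadth-first starting from seed_url; return {url: html}."""
    seen = {seed_url}          # URLs already discovered
    queue = deque([seed_url])  # frontier of URLs to visit
    pages = {}                 # downloaded page contents
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            response = requests.get(url, timeout=5)
            response.raise_for_status()
        except requests.RequestException:
            continue  # skip unreachable or erroring pages
        pages[url] = response.text
        # Extract outgoing links and enqueue unseen http(s) URLs.
        soup = BeautifulSoup(response.text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            if urlparse(link).scheme in ("http", "https") and link not in seen:
                seen.add(link)
                queue.append(link)
    return pages

if __name__ == "__main__":
    for url in crawl("https://example.com"):
        print(url)

A production crawler would additionally respect robots.txt, rate-limit its requests and deduplicate by canonical URL; this sketch omits such politeness policies for brevity.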