Applying Web Crawler Technologies for Compiling Parallel Corpora as one Stage of Natural Language Processing

Nilufar Abdurakhmonova, Ismailov Alisher, Guli Toirova
{"title":"Applying Web Crawler Technologies for Compiling Parallel Corpora as one Stage of Natural Language Processing","authors":"Nilufar Abdurakhmonovaa, Ismailov Alisher, Guli Toirovaa","doi":"10.1109/UBMK55850.2022.9919521","DOIUrl":null,"url":null,"abstract":"over the past decade, the amount of information on the internet has increased. A large amount of unstructured data, referred to as big data on the web, has been created. Finding and extracting data on the internet is called information retrieval. In the search for information, there are web crawler tools, which are a program that scans information on the internet and downloads web documents automatically. Search robot applications can be used in various fields, such as news, finance, medicine, etc. In this article, we will discuss the basic principle and characteristics of search engines as an example to build parallel corpora, as well as the classification of modern popular crawlers, strategies and current applications of crawlers. Finally, we will end this article with a discussion of future directions for research on crawlers.","PeriodicalId":417604,"journal":{"name":"2022 7th International Conference on Computer Science and Engineering (UBMK)","volume":"1214 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 7th International Conference on Computer Science and Engineering (UBMK)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/UBMK55850.2022.9919521","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Over the past decade, the amount of information on the internet has grown substantially, producing a large volume of unstructured data commonly referred to as big data. Finding and extracting data from the web is called information retrieval. Among information-retrieval tools are web crawlers: programs that automatically scan the web and download documents. Crawler applications are used in many fields, such as news, finance, and medicine. In this article, we discuss the basic principles and characteristics of web crawlers, using the construction of parallel corpora as an example, along with a classification of popular modern crawlers, crawling strategies, and their current applications. We conclude with a discussion of future research directions for crawlers.
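The crawling process the abstract describes (scanning pages and downloading documents automatically) can be sketched as a breadth-first traversal over a link frontier. This is a minimal illustrative sketch, not the authors' implementation; the `fetch` callable, the example URLs, and the same-domain scoping rule are all assumptions made for the example (injecting `fetch` keeps the sketch testable without network access).

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse


class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags in an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, fetch, max_pages=100):
    """Breadth-first crawl starting from seed_url.

    `fetch` is a callable mapping a URL to an HTML string
    (hypothetical; in practice it would perform an HTTP request).
    Returns the list of URLs visited, in crawl order.
    """
    frontier = deque([seed_url])
    seen = {seed_url}
    visited = []
    seed_domain = urlparse(seed_url).netloc
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        try:
            html = fetch(url)
        except Exception:
            continue  # skip unreachable pages
        visited.append(url)
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            # Stay within the seed's domain -- a simple scoping heuristic.
            if urlparse(absolute).netloc == seed_domain and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return visited
```

For parallel-corpus construction, such a crawl would typically be scoped to a multilingual site so that language-variant pages (e.g. `/en/...` and `/uz/...`) of the same document are collected together and can later be aligned.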