Improving Website Hyperlink Structure Using Server Logs.

Ashwin Paranjape, Robert West, Leila Zia, Jure Leskovec
DOI: 10.1145/2835776.2835832
Venue: Proceedings of the ... International Conference on Web Search & Data Mining (WSDM), vol. 2016, pp. 615–624
Published: 2016-02-01
Citations: 39

Abstract


Good websites should be easy to navigate via hyperlinks, yet maintaining a high-quality link structure is difficult. Identifying pairs of pages that should be linked may be hard for human editors, especially if the site is large and changes frequently. Further, given a set of useful link candidates, the task of incorporating them into the site can be expensive, since it typically involves humans editing pages. In the light of these challenges, it is desirable to develop data-driven methods for automating the link placement task. Here we develop an approach for automatically finding useful hyperlinks to add to a website. We show that passively collected server logs, beyond telling us which existing links are useful, also contain implicit signals indicating which nonexistent links would be useful if they were to be introduced. We leverage these signals to model the future usefulness of yet nonexistent links. Based on our model, we define the problem of link placement under budget constraints and propose an efficient algorithm for solving it. We demonstrate the effectiveness of our approach by evaluating it on Wikipedia, a large website for which we have access to both server logs (used for finding useful new links) and the complete revision history (containing a ground truth of new links). As our method is based exclusively on standard server logs, it may also be applied to any other website, as we show with the example of the biomedical research site Simtk.
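The abstract defines link placement under budget constraints but does not spell out the algorithm. As a rough illustration of what such a formulation might look like, the sketch below greedily selects up to a budget of candidate links by estimated click value, with a multiplicative per-page discount standing in for diminishing returns when many links are added to the same source page. The candidate scores, the discount factor, and the function itself are illustrative assumptions, not the paper's published model or algorithm.

```python
# Hypothetical sketch: greedy link placement under a global budget.
# Candidate links carry an estimated click value (e.g. derived from
# server-log signals such as indirect navigation paths). The discount
# models diminishing returns per source page and is an assumption.

from collections import defaultdict
import heapq


def place_links(candidates, budget, discount=0.5):
    """Greedily pick up to `budget` links maximizing discounted value.

    candidates: dict mapping (source, target) -> estimated click value
    discount:   penalty applied to a source page's remaining candidates
                each time one of that page's links is chosen
    """
    # Max-heap of (-key, base_value, source, target); entries are
    # lazily re-scored on pop when their source page gained links.
    heap = [(-v, v, s, t) for (s, t), v in candidates.items()]
    heapq.heapify(heap)
    picks_per_source = defaultdict(int)
    chosen = []
    while heap and len(chosen) < budget:
        neg_key, base, s, t = heapq.heappop(heap)
        # Recompute the true discounted value for this candidate.
        v = base * (discount ** picks_per_source[s])
        if heap and v < -heap[0][0]:
            # Stale entry: push back with the corrected score.
            heapq.heappush(heap, (-v, base, s, t))
            continue
        chosen.append((s, t))
        picks_per_source[s] += 1
    return chosen
```

For example, with candidates `{("A","X"): 10, ("A","Y"): 9, ("B","Z"): 6}` and a budget of 2, the discount makes the second link on page A worth 4.5, so the greedy picks `("A","X")` then `("B","Z")`.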
