Retrospective Search: Exploration and Ambition on Uncharted Terrain

Can Urgun, Leeat Yariv
Proceedings of the 22nd ACM Conference on Economics and Computation
Published: 2020-12-01
DOI: 10.2139/ssrn.3744458
Citations: 7

Abstract

The search for good outcomes, be it government policies, technological breakthroughs, or a lasting purchase, takes time and effort. In this paper, we consider a continuous-time search setting. Discoveries beget discoveries, and observations are correlated over time, which we model using a Brownian motion. A searching agent makes two critical decisions: how ambitiously or broadly to search at any point, and when to cease search. Once search stops, the agent is rewarded for the best outcome observed throughout her search. We call this search process retrospective search. We fully characterize the optimal search policy. The stopping boundary takes a simple form: the agent terminates search as soon as search outcomes fall a certain fixed distance below the best-observed outcome; that fixed distance is termed the drawdown size. Search scope is chosen to minimize the expected discounted costs before either a new best outcome is observed or search is terminated. The optimal search scope is a U-shaped function of the difference between the best outcome and the current outcome; the scope is smallest when the difference is half the optimal drawdown size. Both the expected best outcome and the expected discounted costs are increasing in drawdown size, and the optimal drawdown size strikes a balance between the two, given the U-shaped optimal scopes. The optimal policy exhibits natural comparative statics that we explore. We also show the special features that emerge from contracting with a retrospective searcher.
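The drawdown stopping rule described above lends itself to a short simulation. The sketch below is illustrative only: all parameter values (drawdown size, drift, volatility, step size) are assumptions, and the search scope is held fixed, whereas in the paper the optimal scope varies with the gap between the best and current outcomes. It discretizes the Brownian search path, tracks the running maximum, and terminates once the current outcome falls the drawdown size below the best outcome observed.

```python
import random

def simulate_retrospective_search(drawdown=1.0, drift=0.05, vol=0.3,
                                  dt=0.01, max_steps=100_000, seed=0):
    """Simulate a discretized Brownian search path and stop at the
    drawdown boundary: terminate as soon as the current outcome falls
    `drawdown` below the best outcome observed so far.

    Returns (best_outcome, steps_taken). Parameter values are
    illustrative; the model in the paper is in continuous time with
    an endogenous, state-dependent search scope.
    """
    rng = random.Random(seed)
    x = 0.0     # current search outcome
    best = 0.0  # running maximum: the reward under retrospective search
    for step in range(1, max_steps + 1):
        # Euler step of arithmetic Brownian motion with fixed scope (vol)
        x += drift * dt + vol * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        best = max(best, x)
        if best - x >= drawdown:  # drawdown stopping rule
            return best, step
    return best, max_steps       # failsafe cap on the simulation

best, steps = simulate_retrospective_search()
```

At termination the agent collects `best`, the running maximum, rather than the current outcome `x`; this is what distinguishes retrospective search from settings where only the final observation is rewarded.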