Operations Research: Latest Articles

Adaptive Lagrangian Policies for a Multiwarehouse, Multistore Inventory System with Lost Sales
IF 2.7, CAS Zone 3 (Management)
Operations Research Pub Date: 2024-04-16 DOI: 10.1287/opre.2022.0668
Xiuli Chao, Stefanus Jasin, Sentao Miao
Abstract: We consider the inventory control problem of a multiwarehouse, multistore system over a time horizon in which the warehouses receive no external replenishment. This problem is prevalent in retail settings, and it is referred to in the work of [Jackson PL (1988) Stock allocation in a two-echelon distribution system or "what to do until your ship comes in." Management Sci. 34(7):880–895] as the problem of "what to do until your (external) shipment comes in." The warehouses are stocked with initial inventories, and the stores are dynamically replenished from the warehouses in each period of the planning horizon. Excess demand in each period at a store is lost. The optimal policy for this problem is complex and state dependent, and because of the curse of dimensionality, computing the optimal policy using standard dynamic programming is numerically intractable. Static Lagrangian base-stock (LaBS) policies have been developed for this problem [Miao S, Jasin S, Chao X (2022) Asymptotically optimal Lagrangian policies for one-warehouse multi-store system with lost sales. Oper. Res. 70(1):141–159] and shown to be asymptotically optimal. In this paper, we develop adaptive policies that dynamically adjust the control parameters of a vanilla static LaBS policy using realized historical demands. We show, both theoretically and numerically, that adaptive policies significantly improve the performance of the LaBS policy, with the magnitude of improvement characterized by the number of policy adjustments. In particular, when the number of adjustments is logarithmic in the length of the time horizon, the policy is rate optimal in the sense that the rate of the loss (in terms of its dependence on the length of the time horizon) matches that of the theoretical lower bound. Among other insights, our results also highlight the benefit of incorporating the "pooling effect" in designing a dynamic adjustment scheme.

Supplemental Material: The online appendix is available at https://doi.org/10.1287/opre.2022.0668.
Citations: 0
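The base-stock idea underlying the LaBS policy can be illustrated with a toy single-store simulation. This is a generic sketch of an order-up-to policy with lost sales, not the paper's multiwarehouse Lagrangian policy; the cost parameters and demand distribution are illustrative assumptions.

```python
import random

def simulate_base_stock(base_stock, demands, unit_holding=1.0, unit_lost=5.0):
    """Simulate a single-location order-up-to (base-stock) policy with lost sales.

    Each period: replenish inventory up to `base_stock` (zero lead time),
    observe demand, satisfy what we can; unmet demand is lost, not backlogged.
    Returns the total holding + lost-sales cost over the horizon.
    """
    inventory, cost = 0, 0.0
    for d in demands:
        inventory = base_stock              # order up to the base-stock level
        sold = min(inventory, d)
        lost = d - sold
        inventory -= sold
        cost += unit_holding * inventory + unit_lost * lost
    return cost

random.seed(0)
demands = [random.randint(0, 10) for _ in range(1000)]
# Crude search over candidate base-stock levels (a static policy; the paper's
# adaptive policies would instead adjust parameters from realized demands).
best = min(range(12), key=lambda s: simulate_base_stock(s, demands))
print("best static base-stock level:", best)
```

With a high lost-sales penalty relative to holding cost, the cost-minimizing level sits near the upper end of the demand range, which is the intuition behind stocking up when excess demand is lost.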
Model-Based Reinforcement Learning for Offline Zero-Sum Markov Games
Operations Research Pub Date: 2024-04-02 DOI: 10.1287/opre.2022.0342
Yuling Yan, Gen Li, Yuxin Chen, Jianqing Fan
Abstract: This paper makes progress toward learning Nash equilibria in two-player, zero-sum Markov games from offline data. Specifically, consider a γ-discounted, infinite-horizon Markov game with S states, in which the max-player has A actions and the min-player has B actions. We propose a pessimistic model-based algorithm with Bernstein-style lower confidence bounds (called value iteration with lower confidence bounds for zero-sum Markov games) that provably finds an ε-approximate Nash equilibrium with a sample complexity no larger than C⋆_clipped · S(A + B) / ((1 − γ)³ ε²) (up to some log factor). Here, C⋆_clipped is a unilateral clipped concentrability coefficient that reflects the coverage and distribution shift of the available data (vis-à-vis the target data), and the target accuracy ε can be any value within (0, 1/(1 − γ)]. Our sample complexity bound strengthens prior art by a factor of min{A, B}, achieving minimax optimality for a broad regime of interest. An appealing feature of our result lies in its algorithmic simplicity, which reveals that variance reduction and sample splitting are unnecessary for achieving sample optimality.

Funding: Y. Yan is supported in part by the Charlotte Elizabeth Procter Honorific Fellowship from Princeton University and the Norbert Wiener Postdoctoral Fellowship from MIT. Y. Chen is supported in part by the Alfred P. Sloan Research Fellowship, the Google Research Scholar Award, the Air Force Office of Scientific Research [Grant FA9550-22-1-0198], the Office of Naval Research [Grant N00014-22-1-2354], and the National Science Foundation [Grants CCF-2221009, CCF-1907661, IIS-2218713, DMS-2014279, and IIS-2218773]. J. Fan is supported in part by the National Science Foundation [Grants DMS-1712591, DMS-2052926, DMS-2053832, and DMS-2210833] and the Office of Naval Research.
Citations: 0
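Rendered in standard order notation, the sample-complexity bound stated in the abstract reads:

```latex
% Sample complexity of finding an ε-approximate Nash equilibrium
% (the tilde absorbs the logarithmic factor mentioned in the abstract):
\[
  \widetilde{O}\!\left(
    \frac{C^{\star}_{\mathsf{clipped}}\, S\,(A + B)}
         {(1-\gamma)^{3}\,\varepsilon^{2}}
  \right),
  \qquad
  \varepsilon \in \Bigl(0,\ \tfrac{1}{1-\gamma}\Bigr].
\]
```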
Projected Inventory-Level Policies for Lost Sales Inventory Systems: Asymptotic Optimality in Two Regimes
Operations Research Pub Date: 2024-04-01 DOI: 10.1287/opre.2021.0032
Willem van Jaarsveld, Joachim Arts
Abstract: Operations Research, Ahead of Print.
Citations: 0
Assigning and Scheduling Generalized Malleable Jobs Under Subadditive or Submodular Processing Speeds
Operations Research Pub Date: 2024-03-28 DOI: 10.1287/opre.2022.0168
Dimitris Fotakis, Jannik Matuschke, Orestis Papadigenopoulos
Abstract: Malleable scheduling is a model that captures the possibility of parallelization to expedite the completion of time-critical tasks. A malleable job can be allocated and processed simultaneously on multiple machines, occupying the same time interval on all of these machines. We study a general version of this setting, in which the functions determining the joint processing speed of machines for a given job follow different discrete concavity assumptions (subadditivity, fractional subadditivity, submodularity, and matroid ranks). We show that under these assumptions, the problem of scheduling malleable jobs at minimum makespan can be approximated by a considerably simpler assignment problem. Moreover, we provide efficient approximation algorithms for both the scheduling and the assignment problem, with increasingly stronger guarantees for increasingly stronger concavity assumptions, including a logarithmic approximation factor for the case of submodular processing speeds and a constant approximation factor when processing speeds are determined by matroid rank functions. Computational experiments indicate that our algorithms outperform the theoretical worst-case guarantees.

Funding: D. Fotakis received financial support from the Hellenic Foundation for Research and Innovation (H.F.R.I.) ["First Call for H.F.R.I. Research Projects to Support Faculty Members and Researchers and the Procurement of High-Cost Research Equipment Grant," Project BALSAM, HFRI-FM17-1424]. J. Matuschke received financial support from the Fonds Wetenschappelijk Onderzoek-Vlaanderen [Research Project G072520N "Optimization and Analytics for Stochastic and Robust Project Scheduling"]. O. Papadigenopoulos received financial support from the National Science Foundation Institute for Machine Learning [Award 2019844].

Supplemental Material: The online appendix is available at https://doi.org/10.1287/opre.2022.0168.
Citations: 0
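For contrast with the malleable setting studied in the paper, the classic non-malleable makespan problem (each job runs on exactly one machine) already admits a simple greedy heuristic. The sketch below is the standard longest-processing-time (LPT) rule on identical machines, a textbook baseline and not the paper's algorithm:

```python
import heapq

def lpt_makespan(jobs, m):
    """Longest-Processing-Time-first list scheduling on m identical machines.

    Sort jobs by decreasing processing time, then always assign the next
    job to the currently least-loaded machine. This is Graham's classic
    4/3-approximation for minimum makespan.
    """
    loads = [0.0] * m                     # min-heap of machine loads
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        lightest = heapq.heappop(loads)   # least-loaded machine so far
        heapq.heappush(loads, lightest + p)
    return max(loads)

print(lpt_makespan([7, 5, 4, 3, 3, 2], 2))   # total work 24 splits evenly: 12.0
```

The malleable generalization in the paper is harder precisely because a job may occupy several machines over the same interval, with joint speed governed by a subadditive or submodular function rather than a single processing time.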
Technical Note—Production Management with General Demands and Lost Sales
Operations Research Pub Date: 2024-03-28 DOI: 10.1287/opre.2022.0191
Jinhui Han, Xiaolong Li, Suresh P. Sethi, Chi Chung Siu, S. Yam
Abstract: Analyzing Production-Inventory Systems with General Demand: Cost Minimization and Risk Analytics. Frequent production rate changes are prohibitive because of high setup costs or setup times in producing items such as sugar, glass, computer displays, and cell-free proteins. Thus, constant production rates are deployed for producing these items even when their demands are random. In "Production Management with General Demands and Lost Sales," Han, Li, Sethi, Siu, and Yam obtain the optimal constant production rate for a production-inventory system with Lévy demand under long-run average and expected discounted cost objectives, explicitly in some cases and numerically in general with a Fourier-cosine scheme they develop. This scheme can also help in computing risk analytics of the inventory system, such as the stockout probability and the expected shortfall. These measures are particularly significant for assessing supply resilience, especially for emergency products or services such as medicines and healthcare equipment. The study's analytical and numerical findings contribute to enhancing efficiency and decision making in production management.
Citations: 0
Optimal Auction Design with Deferred Inspection and Reward
Operations Research Pub Date: 2024-03-28 DOI: 10.1287/opre.2020.0651
Saeed Alaei, Alexandre Belloni, Ali Makhdoumi, Azarakhsh Malekian
Abstract: Consider a mechanism run by an auctioneer who can use both payment and inspection instruments to incentivize agents. The timeline of events is as follows. Based on a prespecified allocation rule and the reported values of the agents, the auctioneer allocates the item and secures the reported values as deposits. The auctioneer then inspects the values of the agents and, using a prespecified reward rule, rewards those who have reported truthfully. Using techniques from convex analysis and the calculus of variations, we fully characterize the optimal mechanism for a single agent under any distribution of values. Using Border's theorem and duality, we find conditions under which our characterization extends to multiple agents. Interestingly, the optimal allocation function, unlike in classic settings without inspection, is not a threshold strategy and is instead an increasing and continuous function of the types. We also present an implementation of our optimal auction and show that it achieves higher revenue than auctions in classic settings without inspection. This is because inspection enables the auctioneer to charge payments closer to the agents' true values without creating incentives for them to deviate to lower types.

Supplemental Material: The online appendix is available at https://doi.org/10.1287/opre.2020.0651.
Citations: 0
To Interfere or Not To Interfere: Information Revelation and Price-Setting Incentives in a Multiagent Learning Environment
Operations Research Pub Date: 2024-03-27 DOI: 10.1287/opre.2023.0363
John R. Birge, Hongfan (Kevin) Chen, N. Bora Keskin, Amy Ward
Abstract: Operations Research, Ahead of Print.
Citations: 0
Slowly Varying Regression Under Sparsity
Operations Research Pub Date: 2024-03-27 DOI: 10.1287/opre.2022.0330
Dimitris Bertsimas, Vassilis Digalakis, Michael Lingzhi Li, Omar Skali Lami
Abstract: We introduce the framework of slowly varying regression under sparsity, which allows sparse regression models to vary slowly and sparsely. We formulate the problem of parameter estimation as a mixed-integer optimization problem and demonstrate that it can be reformulated exactly as a binary convex optimization problem through a novel relaxation. The relaxation utilizes a new equality on Moore-Penrose inverses that convexifies the nonconvex objective function while coinciding with the original objective on all feasible binary points. This allows us to solve the problem significantly more efficiently, and to provable optimality, using a cutting-plane-type algorithm. We develop a highly optimized implementation of this algorithm, which substantially improves upon the asymptotic computational complexity of a straightforward implementation. We further develop a fast heuristic method that is guaranteed to produce a feasible solution and, as we empirically illustrate, generates high-quality warm-start solutions for the binary optimization problem. To tune the framework's hyperparameters, we propose a practical procedure relying on binary search that, under certain assumptions, is guaranteed to recover the true model parameters. We show, on both synthetic and real-world data sets, that the resulting algorithm outperforms competing formulations in comparable times across a variety of metrics, including estimation accuracy, predictive power, and computational time, and is highly scalable, enabling us to train models with tens of thousands of parameters.

Supplemental Material: The online appendix is available at https://doi.org/10.1287/opre.2022.0330.
Citations: 0
Outcome-Driven Dynamic Refugee Assignment with Allocation Balancing
Operations Research Pub Date: 2024-03-25 DOI: 10.1287/opre.2022.0445
Kirk Bansak, Elisabeth Paulson
Abstract: This study proposes two new dynamic assignment algorithms to match refugees and asylum seekers to geographic localities within a host country. The first, currently implemented in a multiyear randomized controlled trial in Switzerland, seeks to maximize the average predicted employment level (or any measured outcome of interest) of refugees through a minimum-discord online assignment algorithm. The performance of this algorithm is tested on real refugee resettlement data from both the United States and Switzerland, where we find that it achieves near-optimal expected employment compared with the hindsight-optimal solution and improves upon the status quo procedure by 40%–50%. However, pure outcome maximization can result in a periodically imbalanced allocation to the localities over time, leading to implementation difficulties and an undesirable workflow for resettlement resources and agents. To address these problems, the second algorithm balances the goal of improving refugee outcomes with the desire for an even allocation over time. We find that this algorithm can achieve near-perfect balance over time with only a small loss in expected employment compared with the employment-maximizing algorithm. In addition, the allocation-balancing algorithm offers a number of ancillary benefits compared with pure outcome maximization, including robustness to unknown arrival flows and greater exploration.

Funding: Financial support from the Charles Koch Foundation, Stanford Impact Labs, the Rockefeller Foundation, Google.org, Schmidt Futures, the Stanford Institute for Human-Centered Artificial Intelligence, and Stanford University is gratefully acknowledged.

Supplemental Material: The online appendix is available at https://doi.org/10.1287/opre.2022.0445.
Citations: 0
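The tension between outcome maximization and allocation balance described in the abstract can be made concrete with a toy online assignment rule. This is a hypothetical sketch, not the authors' algorithm: the locality names, scores, capacities, and the penalty weight are all illustrative assumptions.

```python
def assign_online(arrivals, capacities, balance_weight=0.0):
    """Greedily assign each arrival to the locality with the best score,
    penalized by how full that locality already is relative to its capacity.

    arrivals:   list of dicts mapping locality -> predicted outcome score.
    capacities: dict mapping locality -> maximum number of placements.
    With balance_weight=0 this is pure outcome maximization; larger weights
    trade predicted outcomes for a more even allocation over time.
    """
    counts = {loc: 0 for loc in capacities}
    choices = []
    for scores in arrivals:
        def adjusted(loc):
            fill = counts[loc] / capacities[loc]      # current fill ratio
            return scores[loc] - balance_weight * fill
        feasible = [loc for loc in capacities if counts[loc] < capacities[loc]]
        best = max(feasible, key=adjusted)
        counts[best] += 1
        choices.append(best)
    return choices

arrivals = [{"A": 0.9, "B": 0.5}, {"A": 0.8, "B": 0.6}, {"A": 0.7, "B": 0.65}]
print(assign_online(arrivals, {"A": 2, "B": 2}, balance_weight=0.0))  # ['A', 'A', 'B']
print(assign_online(arrivals, {"A": 2, "B": 2}, balance_weight=0.5))  # ['A', 'B', 'A']
```

With no penalty, the high-scoring locality fills up first and later arrivals are forced into the remainder; with a balancing penalty, placements alternate, which mirrors the abstract's point that near-perfect balance can be had at a small cost in predicted outcomes.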
A Random Consideration Set Model for Demand Estimation, Assortment Optimization, and Pricing
Operations Research Pub Date: 2024-03-25 DOI: 10.1287/opre.2019.0333
Guillermo Gallego, Anran Li
Abstract: Operations Research, Ahead of Print.
Citations: 0