Empirically grounding analytics (EGA) research in the Journal of Operations Management

IF 6.5 | CAS Zone 2, Management | JCR Q1, MANAGEMENT
Suzanne de Treville, Tyson R. Browning, Rogelio Oliva
DOI: 10.1002/joom.1242
Journal of Operations Management, 69(2), 337–348
Published: 2023-03-14 (Journal Article)
Full text: https://onlinelibrary.wiley.com/doi/10.1002/joom.1242
Citations: 1

Abstract

Empirically grounding analytics (EGA) is an area of research that emerges at the intersection of empirical and analytical research. By “empirically grounding,” we mean both the empirical justification of model assumptions and parameters and the empirical assessment of model results and insights. EGA is a critical but largely missing aspect of operations management (OM) research. Spearman and Hopp (2021, p. 805) stated that “since empirical testing and refutation of operations models is not an accepted practice in the IE/OM research community, we are unlikely to leverage these to their full potential.” They named several “examples of overly simplistic building blocks leading to questionable representations of complex systems” (p. 805) and suggested that research using analytical tools like closed queuing network models and the Poisson model of demand processes could incorporate empirical experiments to improve understanding of where they do and do not fit reality, highlighting “the importance of making empirical tests of modeling assumptions, both to ensure the validity of the model for its proposed purpose and to identify opportunities for improving or extending our modeling capabilities. The fact that very few IE/OM papers make such empirical tests is an obstacle to progress in our field” (p. 808). They concluded that “Editors should push authors to compare mathematical models with empirical data. Showing that a result holds in one case but not another adds nuance and practicality to research results. It also provides stimulus for research progress” (p. 814). These arguments remind us of Little's (1970) observation that many potentially useful analytical models are not widely adopted in practice. Thus, EGA research can help to close two major gaps between (1) the empirical and analytical subdivisions in the OM field and (2) scholarly output and practical relevance.
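The kind of empirical test of a modeling assumption that Spearman and Hopp call for can be quite lightweight. As a minimal sketch (the data here are simulated, not from any study cited above): a Poisson demand process implies a variance-to-mean ratio near 1, so computing that ratio on observed demand counts is a quick first check for overdispersion before building a queuing or inventory model on the Poisson assumption.

```python
import numpy as np

def dispersion_index(demand):
    """Variance-to-mean ratio of count data.
    Near 1.0 is consistent with a Poisson process; values well
    above 1 indicate overdispersion (the Poisson assumption fails)."""
    demand = np.asarray(demand, dtype=float)
    return demand.var(ddof=1) / demand.mean()

# Simulated daily demand series standing in for field data (hypothetical).
rng = np.random.default_rng(42)
poisson_demand = rng.poisson(lam=20, size=1000)              # Poisson-like demand
lumpy_demand = rng.negative_binomial(n=5, p=0.2, size=1000)  # overdispersed demand

print(dispersion_index(poisson_demand))  # close to 1: Poisson model plausible
print(dispersion_index(lumpy_demand))    # well above 1: Poisson model misfits
```

A check like this does not replace the fuller empirical grounding the editorial describes, but it illustrates how cheaply a model's distributional assumption can be confronted with data.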

As a journal focused on empirical research, the Journal of Operations Management (JOM) seeks to encourage EGA submissions and publications, but doing so requires our community of authors, reviewers, and editors to share an understanding of the expectations. While such contributions have been encouraged for some time in the verbiage on the JOM website, a more formal effort to draw out examples of EGA research was driven by an editorial call (Browning & de Treville, 2018), and we have since had many discussions, panels, webinars, and workshops to continue to develop and communicate the expectations. This editorial represents another step in that development.

In a general sense, an EGA paper combines mathematical, stochastic, and/or economic modeling insights with empirical data. Modeling captures non-linearities and elements of distributions and allows these parameters to be incorporated into decision making, whereas empirical research transforms observations into knowledge. Analytical models are evaluated in terms of their results and insights, which might prompt further extensions to or modifications of the model, including new or different inputs and recalibrations. Most modeling papers stop there because the primary contribution is the analytical model. Although some realism is required, it falls short of empirical grounding, and a gap is often left between the model's insights and what implementation in practice will entail. Filling this gap by empirically grounding an analytic model creates knowledge by linking analytical insights to what has been observed using empirical methods (such as case studies, action research, field experiments, interviews, analysis of secondary data, etc.) to establish a theoretically and empirically relevant research question. Moreover, since analytical models tend to make many simplifying assumptions, EGA research can help tease out where these assumptions are valid and where they excessively bias results.

Figure 1 situates two kinds of EGA research relative to traditional analytical models. Typically, publications with analytical models focus on the center of the figure: the model details and the insights derived from it. The left side of the figure refers to the empirical grounding of the model, that is, whether there is empirical evidence to justify the model's assumptions, parameters, and specific calibrations. The right side of the figure refers to empirical evidence of the impact of the model, that is, whether the model fits the problem situation, can be used in real time, and provides useful output.

The concerns expressed above by Spearman and Hopp stem from the expectation that a single paper will present both the model and the empirical testing. This expectation leads to the situation in which empirical testing serves only to demonstrate the model in action, rather than preparing the way for the insights encapsulated in the model to be deployed in practice. Given the lack of openness (among some) to publishing further empirical testing, the model may be accepted by the research community based on its analytical strength—but the first question anyone from practice will ask is, “Who else has used this, and what were the results?” JOM is interested in papers that address questions related to both empirical sides of the development and use of analytical models—their grounding and their impact—that is, either side of Figure 1. Are data available for model parameters? How well do the results work in a variety of real situations? Are the results practically implementable? Are they useful to practitioners? Will managers actually use them? Figure 1 thus highlights important but often undervalued elements encountered in empirically grounding insights from analytical models. Both sides of Figure 1 require a significant amount of empirical research—and it is empirical work on either side of Figure 1 that is the primary contribution of an EGA paper in JOM. It is usually expecting too much of a single paper for it to address both sides of Figure 1 sufficiently.

On the left side of the figure, analytical models are linked to data and observations of reality: Their assumptions, parameters, and calibration should bear resemblance to a real situation. Here, an empirical contribution focuses on the empirical discovery of a new regularity (new assumption) that leads to the development or revision of analytical models to exploit that new-found regularity. Contributions on the left side of Figure 1 represent the “heavy lifting” of empirically grounding models, transforming mathematical insights into a form that permits measurement and application, and making existing mathematical and modeling insights available to address an observed problem. Finding, collecting, preparing, and analyzing data requires a substantial amount of work—especially when it is impossible to obtain data from the company or situation on which a model was developed. Key parameter values may be unobservable and require estimation from available data. Also, the assumptions that made the model tractable may not hold in key contexts: Empirical research needs to address this tradeoff between parsimony and accuracy. At JOM we want the value of such research to be recognized.
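The point that key parameter values may be unobservable and require estimation from available data can be illustrated with a small sketch. Here the underlying demand distribution is assumed (hypothetically) to be negative binomial, and its two parameters are recovered from observed counts by the method of moments; the data are simulated stand-ins for field observations.

```python
import numpy as np

def fit_negbin_moments(demand):
    """Method-of-moments estimates of negative binomial parameters (n, p)
    from count data. Requires overdispersion (variance > mean), which is
    what motivates the negative binomial over the Poisson in the first place."""
    demand = np.asarray(demand, dtype=float)
    m, v = demand.mean(), demand.var(ddof=1)
    if v <= m:
        raise ValueError("Data are not overdispersed; negative binomial misfits.")
    p = m / v            # moment condition: mean/variance
    n = m * p / (1 - p)  # moment condition: mean = n(1-p)/p
    return n, p

# Simulated observations standing in for available field data (hypothetical).
rng = np.random.default_rng(0)
sample = rng.negative_binomial(n=5, p=0.2, size=5000)
n_hat, p_hat = fit_negbin_moments(sample)
print(n_hat, p_hat)  # estimates should be near the generating values (5, 0.2)
```

In a real EGA study the estimation step would of course confront messier data (censoring by stock-outs, nonstationarity), which is exactly the "heavy lifting" the editorial describes.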

Contributions on the right side of Figure 1 assess an existing model's performance in real contexts and address emerging issues. Experiments, field tests, and intervention-based research methods are likely candidates for this type of EGA research. These contributions typically build on the empirical insights from the left side of Figure 1 and the insights/results of prior analytical models, but they add the new knowledge created when the effect on decision making of the nonlinearities captured by analytical models is observed empirically. We classify these contributions as EGA as well, although one could also consider them as “analytically grounded empirics.”

Engaging in either side of Figure 1 will trigger an improvement process where the model is revised based on new assumptions or the availability of new data, and/or its effectiveness (usefulness) and efficiency are increased in the real-world context. This will require a toggling back and forth between inductive reasoning to capture the new empirical evidence, deductive reasoning through the analytical model, and abductive reasoning (Josephson & Josephson, 1994) to reconcile the emerging insights and empirical regularities. The surprising and unexpected results that trigger the abduction logic indicate that both the model and its empirical grounding matter to creating actionable knowledge. Creating space for abduction is one of the reasons why successful EGA contributions are more likely to come from the sides than the center of Figure 1. Again, JOM encourages papers that tackle either side of Figure 1 and empirically motivate a significant revision to existing models (see examples below).

The above-described empirical grounding is often replaced in the modeling literature (where the focus is the model formulation and insights) by either stylized assumptions (explicit simplifications that still capture key elements of the problem situation) or artificial (simulated) data to assess the model performance. Table 1 identifies four types of modeling efforts, depending on the source of assumptions and data for assessing model performance (cf. the left and right sides of Figure 1), together with the key contribution of each type of study (the italicized terms in each cell). The upper-left quadrant (a stylized model tested with artificial data) is common where an analytical insight is a paper's primary contribution. Empirical grounding can take the form of either moving to empirical data applied in an actual situation (lower-left quadrant) or observing areas in practice where the model requires extension (upper-right quadrant).

An effective EGA process will encourage moving across the quadrants in Table 1: Progress made in any quadrant can open new doors in adjacent quadrants. As we gain fluency in managing the EGA research process, it will become easier to take analytical insights into the field, transforming them into effective interventions through a multi-stage process that links analytical and empirical publication outlets. All else being equal, JOM is more interested in empirical studies that assess the effectiveness and usefulness of a model (despite its simplifying assumptions), that is, the right side of Figure 1. However, it should be noted that items in the lower half of the table—referring to analytical models that are fit to empirical data but provide no insight into how implementation of the model increased knowledge, understanding, or improvements to the model—typically do not qualify as EGA even though the research is carried out in a real context. The contribution from EGA papers (in JOM) must be foremost empirical—even if some of the insights arise from the analytical model—but the use of the data must translate into model improvements that further improve the results derived from the model. This strategy, however, should not be confused with intervention-based research (IBR), where the outcome of the intervention is to improve existing theories or develop new theoretical insights as a result of the engagement with the problem situation (Chandrasekaran et al., 2020; Oliva, 2019).

Tables 2 and 3 summarize aspects of several EGA papers that we will discuss further in this section. We begin with some example papers (in Table 2) that fit best on the left side of Figure 1, followed by papers (in Table 3) that fit best on the right side. Most of these papers have been published in JOM and exemplify the new space that we are seeking to develop, in which empirical work is done to improve the usability of a model.

EGA papers in JOM must have an empirical focus. The analytical insights to be explored empirically are likely to emerge from a model, or models, that have already been published elsewhere. The JOM paper would be evaluated primarily in terms of its empirical contribution rather than its modeling insights. To make this clear in a manuscript, we often advise authors to summarize the model in an appendix (citing its original publication, of course). When the development of an analytical model takes center stage in a paper, that is a sign that it is probably not a good fit for JOM (because the focus of the paper is on the center of Figure 1 rather than on either side of it).

How much empirical grounding is enough? No paper will ever be able to do this completely; it is a matter of degree. Whether the degree is sufficient is a question of warrant (Ketokivi & Mantere, 2021), and whether it is significant is largely subjective (more on this below). How much does the grounding add new insight or change the understanding? A manuscript must provide sufficient warrant for its claims of appropriate grounding and the significance of the new insights, often by showing how and why a model's assumptions, calibrations, factors, and/or results should be significantly different. It is incumbent upon authors to convince reviewers that grounding is sufficient and leads to something significant.

The requisite empirical grounding can be achieved by a variety of methods, both qualitative and quantitative. Model parameterization should similarly be grounded in empirical data, and assumptions that the model makes must be empirically reasonable. As with all research published in JOM, authors must seek a sense of generality, not just focus on a single instance of a problem. We encourage authors to make use of publicly available data in generating empirical insights from the application of the analytical model, while noting that reviewers are not always accustomed to this use of publicly available data: Authors should be prepared to carefully explain what they are doing and why their data set provides warrant for empirical grounding.

The other, usual expectations of a JOM paper also apply. For one, the paper should contribute to OM theory. This contribution distinguishes a JOM EGA paper from an article published in a journal such as the INFORMS Journal of Applied Analytics (formerly called Interfaces), wherein articles are oriented toward practitioners and designed to illustrate the use of analytical models in practice. An EGA contribution in JOM brings new knowledge and understanding, occupying a different space than practitioner-oriented usage guides and mere examples of model deployment and application. As with other types of papers in JOM, the paper's contribution must also be sufficiently significant rather than marginal. This criterion is admittedly subjective, with each reviewer bringing their own perspective on the size of a paper's contribution. As a general OM journal, JOM expects contributions to be generalizable rather than specifically applicable only to niche areas. Other author guidelines apply, including the maximum 40-page manuscript length guideline.

JOM is announcing an open call for papers for a special issue on EGA. This call will mention further example papers from other journals. We expect this special issue to provide opportunities to develop and exhibit what JOM expects from EGA papers.


Source journal
Journal of Operations Management (Management Science: Operations Research & Management Science)
CiteScore: 11.00
Self-citation rate: 15.40%
Articles published: 62
Review time: 24 months
期刊介绍: The Journal of Operations Management (JOM) is a leading academic publication dedicated to advancing the field of operations management (OM) through rigorous and original research. The journal's primary audience is the academic community, although it also values contributions that attract the interest of practitioners. However, it does not publish articles that are primarily aimed at practitioners, as academic relevance is a fundamental requirement. JOM focuses on the management aspects of various types of operations, including manufacturing, service, and supply chain operations. The journal's scope is broad, covering both profit-oriented and non-profit organizations. The core criterion for publication is that the research question must be centered around operations management, rather than merely using operations as a context. For instance, a study on charismatic leadership in a manufacturing setting would only be within JOM's scope if it directly relates to the management of operations; the mere setting of the study is not enough. Published papers in JOM are expected to address real-world operational questions and challenges. While not all research must be driven by practical concerns, there must be a credible link to practice that is considered from the outset of the research, not as an afterthought. Authors are cautioned against assuming that academic knowledge can be easily translated into practical applications without proper justification. JOM's articles are abstracted and indexed by several prestigious databases and services, including Engineering Information, Inc.; Executive Sciences Institute; INSPEC; International Abstracts in Operations Research; Cambridge Scientific Abstracts; SciSearch/Science Citation Index; CompuMath Citation Index; Current Contents/Engineering, Computing & Technology; Information Access Company; and Social Sciences Citation Index. 
This ensures that the journal's research is widely accessible and recognized within the academic and professional communities.