{"title":"研究贡献的有意义的理论路径","authors":"Elliot Bendoly, Rogelio Oliva","doi":"10.1002/joom.1348","DOIUrl":null,"url":null,"abstract":"<p>Across fields of scholarship, ever since scholarship has existed, there have been numerous discussions opining on what theory is, why it is useful and how best to craft theoretical arguments and frameworks. Every few years, a new discussion particularly relevant to a domain of study emerges. Often the intention of such discussions is to reiterate critical points made in the past as still applicable. In other instances, the discussions attempt to recast and reshape perspectives on theory. Both reiteration and alternate perspectives can prove valuable, as new scholars enter the field and as priorities for journals, editors and review teams evolve.</p><p>These points are also of interest to contemporary discussions at the <i>Journal of Operations Management (JOM)</i>. As an outlet long regarded for impactful empirical work in the field, we have long been interested in the appropriate use of theory and have also had a long history of intervening in our field to re-emphasize the ‘what’, ‘why’ and ‘how’ of meaningful theoretical structures and argumentation. As editors of the journal, we believe it is valuable to reiterate what is well-accepted regarding the role and nature of effective theory in research, whether we are discussing grand theories, theoretical frameworks, mid-range theory or theoretical arguments for specific mechanisms. However, we also strongly believe that it is critically valuable to outline how theoretical contributions may differ, while still offering considerable value to a research effort and the field.</p><p>What is core to the substantive nature of theoretical contributions, of course, must be driven by priorities regarding its role; just as the selection of empirical methods must be driven by the claims emerging from theoretical arguments (even nascent ones), and insights for future scholars driven by observation and analysis. By outlining contemporary priorities that define meaningful theory we are in a far better position to simultaneously expand perspectives on how theoretical contributions can be made, as well as challenge or dispel some often difficult-to-justify criticisms that scholars (authors, reviewers and editors) confront regarding what is ‘good’ theory.</p><p>According to Fried (<span>2020</span>), this “statistical equivalency” is one of the fundamental reasons that we cannot escape the need for well-reasoned theoretical arguments, designed to help us make sense of highly complex settings, in which a wealth of observed signals is accompanied by a wealth of unobserved signals. It is exactly when phenomena are <i>not</i> straightforward and mechanisms are <i>not</i> obvious, where sensemaking, and associated deliberate research inquiry, is critical.</p><p>In the same vein, a ‘complete theory’, akin to a physical law, doesn't present much of a motivator for research—if there is no uncertainty regarding cause and effect, there is little reason to expect that an inquiry into such phenomena would be of interest to a research community. Fortunately, in the domains that are studied in management, we seldom come close to complete theories. Occasionally we find enough evidence to corroborate what we might refer to as grand theories and associated frameworks. 
More often, we observe, or perceive, phenomena that exhibit patterns (either across a body of literature or direct observations in the field) that inspire us to question whether such patterns are repeatable. Indeed, theories are never finished products but rather exist along a continuum of sensemaking from vague hunches to detailed accounts of causal mechanism (Mohr <span>1982</span>; Weick <span>1989</span>), where the initial phases of theorizing often include the creation or definition of constructs and narratives to account for the observed phenomenon.</p><p>With the rise of replication discussions so prominent today, it would be a mistake to forget that methods are merely a means to an end, that they are bound to be imperfectly replicable in observations and analyses they yield. The most critical aspect of replication comes down to whether we can reinforce existing understanding, or whether such attempts at sensemaking require modification, qualification or replacement. That should be the primacy of replication interest for research communities; with a possible exception for communities focused on methodological contributions. Similarly, researchers certainly must be permitted to demonstrate thought that aligns with (replicates) existing theoretical arguments, based on the identification of repeated insights from whatever source, just as they must be permitted to deviate from such arguments if the patterns they encounter do not align. In the complex contexts that characterize management research domains, it is not helpful to expect scholars to identify universal laws, nor is it appropriate to bind them to recognizing or aligning with claims that others have made to that end.</p><p>Furthermore, it should be noted that not all theoretical arguments (hypotheses or propositions) are created equal. There are potential explanations that are clearly better than others. How do we assess the quality of a potential explanation? Bunge (<span>1967</span>), articulates the desired attributes of well-formulated scientific hypotheses as (1) logically sound, (2) grounded in previous knowledge, and (3) empirically testable. We believe that the quality of a conjecture can be judged by the extent to which it fulfills these criteria.\n <sup>1</sup>\n Thus, while two alternative explanations might be equally capable of explaining the data, we can easily assess which has more scientific credibility based on those criteria, for example, ‘a hard object hit and broke the glass’ versus ‘a soft object hit and broke the glass.’</p><p>If we accept the three points listed above as fundamental to the value and role of theory and the desirable attributes of claims, it is also clear, based on our experience with the editorial process, that certain misconceptions regarding what makes “good theory” continue to exist. We outline a few of these fallacies here, along with why they must be deemed to be fundamentally flawed.</p><p>In recognizing what is truly important when it comes to theory, and pushing aside concerns that are not ‘real’ concerns, we can now focus on the fruitful pathways available to authors as their embark on theoretical considerations in their work, and as reviewers and editors approach efforts to further develop such work. 
Figure 1 presents a generalization of two paths available to authors as they leverage observations and theory to build meaningful contributions to the field.</p><p>The common path (Path A) that flows from left to right in Figure 1, often beginning with a more academic-literature inspired motivation, tends to have many recognizable attributes including a front end dominant theoretical positioning and a largely deductive approach to conclusions, albeit benefiting from at least some posteriori theoretical discussion (while avoiding HARK-ing, which we will return to). This research is normally motivated by the identification of research gaps made apparent by reviews of extant bodies of knowledge, leading through grounded argumentation to formal hypothesis testing. While this is, by far, the most common type of submission to <i>JOM</i>, this is clearly not the only approach scholars can and have taken in developing contributions.</p><p>An alternate path (Path B) draws inspiration and motivation predominantly from empirical observations, proceeding largely from right to left in the top of Figure 1. The observation of empirical regularities, which have not yet been fully rationalized by extant research, or the observation of phenomena that contradict existing theories, lead the scholarly effort down the path of “how can we explain what we are seeing?”, rather than “what do we expect to see, given our explanations?”\n <sup>2</sup>\n The outcome of this process does not need to be fully articulated theoretical statements. Rather, it can be tentative definitions of constructs and exploratory language to describe the observed phenomena. This approach, by its very nature, also provides an organic lead into abductive sensemaking, where we are creating theoretical arguments to explain precisely how observations fit into a broader phenomenon in ways that have not been previously articulated. In doing so, we are implicitly anticipating future observations in specific contexts, rather than using existing observations to support theoretical arguments. That is, the claims of such sensemaking arguments often take the form of propositions with the hope that they are eventually followed up by subsequent empirical efforts, utilizing alternate sources of evidence in support of deductive inquiry as well. This can come in the form of separate follow-on studies or a well-crafted multi-method effort. Nevertheless, the process of creating constructs and narratives to describe phenomena and the abductive articulation of theoretical arguments that match the criteria outlined in section 1 are as much as a contribution as the later empirical testing of those propositions.</p><p>How are these paths related to research approaches that we see across our corpus of research at <i>JOM</i>, from largely data-crunching for validation to eliciting real-world responses, to engaging with the real-world in developing theory? Any of these could potentially involve a heavier theory back end (posteriori theorization), with theory motivating approaches at stages of execution and certainly lending motivation, to some minimal degree, at the front end as well. 
Figure 2 presents the processes through which we see theory being inspired by, and opening the door to, a range of empirical tactics that make use of data from real-world processes—the domain of <i>JOM</i> inquiries—to develop or improve theories about those processes and how they should be managed.</p><p>One way of being empirical involves efforts to <i>observe</i> (access, document, and assess) the real-world processes and reflect on the potential causes for the observed regularities (top arc of Figure 2). If the observed regularities are not explained by existing theory or they constitute anomalies from what is expected from the theory, we need to propose potential constructs, language, and explanations; this is pathway B in Figure 1 and is characterized by the abductive process described above. Alternatively, if these observations, even if not inspired by theoretical predictions, do match existing theories and explanations, we can inductively gain confidence on the existing theory from the probabilistic encounters of specific instances.</p><p>A second way of being empirical is to <i>test</i> theoretically derived claims. Ideally, this takes place through experimentation: laboratory experiments attempt to maximize the control and the precision in measurement of variables, while filed experiments maximize the realism and generalizability of the findings (McGrarth <span>1982</span>). Given the high risk and cost of field experiments, efforts to scrutinize design early are clearly of benefit to all parties; hence the recent Registered Reports Review (3R) initiative put in place at <i>JOM</i> (Abdulla, Escamilla, and Oliva <span>2024</span>). Clearly, randomized controlled trials are not always possible and quasi-experimental designs (Shadish, Cook, and Campbell <span>2001</span>) or natural experiments (where the treatment is applied ‘haphazardly’ to some units but denied to others) are valid ways to either refute the claims or, if not rejected, increase their validity. An alternative way to test theoretically-derived claims is to rely on non-experimental data—either explicitly gathered for the study (primary data) or repurposed from other data gathering efforts (secondary data)—and establish causal claims through statistical estimation procedures (Cunningham <span>2021</span>; Pearl and Mackenzie <span>2018</span>). These approaches follow a Path A strategy and correspond to the loops in Figure 2 through “test claims”; one passing through the real-world process reflecting the treatment needed for quasi/experimental work, and the other emblematic of the fact that all observation and data acquisition is guided by the theoretical claims that are being tested.</p><p>A third way of being empirical is to <i>intervene</i> using theory to guide improvements in real-world processes; that is, use the theory to provide solutions. While <i>JOM</i> has explicit editorial policies not to focus on solutions as contributions (JOM <span>2004</span>)\n <sup>3</sup>\n , there is ample potential to learn about the relevance and usefulness of a theory when attempting to use it to control or improve a problem situation. The recent creation of the Intervention-based Research (IBR) department in <i>JOM</i> has opened the path to use interventions to test and develop theory within the context of a situation where the researcher engages with practitioners as an agent of change in the problem situation (Oliva <span>2019</span>). 
The fact that the intervention might require immediate changes to the implementation strategy and that outcomes are not often what was predicted by the theory, create the opportunity to document new data from the real word processes that could lead to modifications to the theory originally used to guide the intervention. As such, intervention-based research uses Path A to design the intervention (deductively from an existing theory) but leverages the data from the intervention to abductively derive insights for theory (Path B); see the loops created through “adapt” in Figure 2.</p><p>Regardless of the chosen empirical strategy (observation, testing, and intervention), the role of theoretical argumentation, both a priori and posteriori, with different degrees of emphasis depending on the chosen path, is fundamental regardless of what we do. It precedes specific actions, but also clearly emerges from others. Its role and placement are contingent on what is being accomplished, but we can't accomplish much of anything without it. At the end of the day, in a scientific endeavor, the criteria to assess the contribution of an empirical study is its contribution to theory. If the process is inductive/abductive and we are only making sense of unexplained regularities or anomalies, clearly the articulation of a new theory that can be subsequently tested is enough of a contribution. However, if the purpose of the study is to test existing theory (whether with secondary data or through experiments and intervention) then placing the findings from the study in the proper context—for example, how theories need to be updated? What are new research questions that are triggered be these results?—is a requirement for the contribution to be meaningful.</p><p>What does all this mean for reviewers and editors?</p><p>As we have affirmed in the <i>JOM</i> editorial team guidelines, all reviewers and their associated reviews are required to be developmental (see https://www.jom-hub.com/editorial-team). This is not wish, it is a mandate. It is also not merely wordplay. Developmental reviews have very specific properties. They identify weakness of papers but make deliberate efforts to help authors shore up those weaknesses. The role of reviewers, at <i>JOM</i>, is not that of ‘gatekeeper’. Their primary role is not to provide an up or down vote. Their primary role is that of providing substantial commentary and guidance. Reviews should never merely state generic grievances without options for redress where that exits.</p><p>Furthermore, with specific regard to theory, a review should never merely state generic disdain for a paper's theoretical elements. Reviews should also not fall victim to the fallacies posed in Section 2, such as a general failure to sufficiently reference extant theory, or comprehensively articulate mechanisms. If a relevant theory exists for use as analogy or comparison, and a reviewer is familiar with that theoretical reference, it is the job of the reviewer to be explicit in guiding the authors towards the consideration of that work. If a mechanism exists that the reviewer feels the authors should describe, it is incumbent on the review to be explicit regarding precisely what that mechanism might be. 
If as a reviewer you feel ‘something is missing, but can't say what’… Don't include that sentiment in your review as such a statement clearly doesn't serve to help develop a paper.</p><p>There are also, certainly, boundaries on the kind of guidance reviews and editors should give regarding theory. For example, reviewers and editors should not create “HARK-ing traps” for authors. That is, it is inappropriate for reviewers to request an author team to develop theoretical arguments to be positioned a priori, if the motivation for such is based on results emerging from the existing analysis demonstrated in the manuscript. While some authors may recognize such recommendations as overtly problematic, some may not and still others may feel it is the only way to get through the review process successfully. To be clear, such action on the part of reviewers or editors is inappropriate. Reviewers should help authors strengthen arguments that they have used to motivate their methods and analysis. It is also fully acceptable to position ‘new’ arguments posteriori (within Discussion sections) in the interest of future research. In both instances, reviewers are obliged to be developmentally constructive in this regard, offering specific recommendations rather than general requests for ‘more’. However, suggesting that unexpected findings brought forward by the analysis be accounted for by the addition of new front end theoretical arguments (as if they existed a priori) is not an acceptable path for reviewers to go down.</p><p>Furthermore, reviewers and editors need to be fully appreciative of the very real possibility that incredibly strong contributions can take on a structure that originates not from an identification of a research-literature gap, but rather from direct observation. If we are to encourage researchers at <i>JOM</i> and other journals to engage with practice, we must imagine that some of that engagement is going to lead to the recognition of regularities and anomalies that have not yet been explained, and that such observations are at least as important (if not more so) than inspirations drawn predominantly from extant published work. We must be open to these highly abductive paths taken by authors, while still expecting authors to fulfill what is required in the form of thoughtful sensemaking that all for impactful theoretical contributions.</p>","PeriodicalId":51097,"journal":{"name":"Journal of Operations Management","volume":"71 1","pages":"4-10"},"PeriodicalIF":6.5000,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/joom.1348","citationCount":"0","resultStr":"{\"title\":\"Meaningful Theoretical Pathways for Research Contributions\",\"authors\":\"Elliot Bendoly, Rogelio Oliva\",\"doi\":\"10.1002/joom.1348\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Across fields of scholarship, ever since scholarship has existed, there have been numerous discussions opining on what theory is, why it is useful and how best to craft theoretical arguments and frameworks. Every few years, a new discussion particularly relevant to a domain of study emerges. Often the intention of such discussions is to reiterate critical points made in the past as still applicable. In other instances, the discussions attempt to recast and reshape perspectives on theory. 
Both reiteration and alternate perspectives can prove valuable, as new scholars enter the field and as priorities for journals, editors and review teams evolve.</p><p>These points are also of interest to contemporary discussions at the <i>Journal of Operations Management (JOM)</i>. As an outlet long regarded for impactful empirical work in the field, we have long been interested in the appropriate use of theory and have also had a long history of intervening in our field to re-emphasize the ‘what’, ‘why’ and ‘how’ of meaningful theoretical structures and argumentation. As editors of the journal, we believe it is valuable to reiterate what is well-accepted regarding the role and nature of effective theory in research, whether we are discussing grand theories, theoretical frameworks, mid-range theory or theoretical arguments for specific mechanisms. However, we also strongly believe that it is critically valuable to outline how theoretical contributions may differ, while still offering considerable value to a research effort and the field.</p><p>What is core to the substantive nature of theoretical contributions, of course, must be driven by priorities regarding its role; just as the selection of empirical methods must be driven by the claims emerging from theoretical arguments (even nascent ones), and insights for future scholars driven by observation and analysis. By outlining contemporary priorities that define meaningful theory we are in a far better position to simultaneously expand perspectives on how theoretical contributions can be made, as well as challenge or dispel some often difficult-to-justify criticisms that scholars (authors, reviewers and editors) confront regarding what is ‘good’ theory.</p><p>According to Fried (<span>2020</span>), this “statistical equivalency” is one of the fundamental reasons that we cannot escape the need for well-reasoned theoretical arguments, designed to help us make sense of highly complex settings, in which a wealth of observed signals is accompanied by a wealth of unobserved signals. It is exactly when phenomena are <i>not</i> straightforward and mechanisms are <i>not</i> obvious, where sensemaking, and associated deliberate research inquiry, is critical.</p><p>In the same vein, a ‘complete theory’, akin to a physical law, doesn't present much of a motivator for research—if there is no uncertainty regarding cause and effect, there is little reason to expect that an inquiry into such phenomena would be of interest to a research community. Fortunately, in the domains that are studied in management, we seldom come close to complete theories. Occasionally we find enough evidence to corroborate what we might refer to as grand theories and associated frameworks. More often, we observe, or perceive, phenomena that exhibit patterns (either across a body of literature or direct observations in the field) that inspire us to question whether such patterns are repeatable. Indeed, theories are never finished products but rather exist along a continuum of sensemaking from vague hunches to detailed accounts of causal mechanism (Mohr <span>1982</span>; Weick <span>1989</span>), where the initial phases of theorizing often include the creation or definition of constructs and narratives to account for the observed phenomenon.</p><p>With the rise of replication discussions so prominent today, it would be a mistake to forget that methods are merely a means to an end, that they are bound to be imperfectly replicable in observations and analyses they yield. 
The most critical aspect of replication comes down to whether we can reinforce existing understanding, or whether such attempts at sensemaking require modification, qualification or replacement. That should be the primacy of replication interest for research communities; with a possible exception for communities focused on methodological contributions. Similarly, researchers certainly must be permitted to demonstrate thought that aligns with (replicates) existing theoretical arguments, based on the identification of repeated insights from whatever source, just as they must be permitted to deviate from such arguments if the patterns they encounter do not align. In the complex contexts that characterize management research domains, it is not helpful to expect scholars to identify universal laws, nor is it appropriate to bind them to recognizing or aligning with claims that others have made to that end.</p><p>Furthermore, it should be noted that not all theoretical arguments (hypotheses or propositions) are created equal. There are potential explanations that are clearly better than others. How do we assess the quality of a potential explanation? Bunge (<span>1967</span>), articulates the desired attributes of well-formulated scientific hypotheses as (1) logically sound, (2) grounded in previous knowledge, and (3) empirically testable. We believe that the quality of a conjecture can be judged by the extent to which it fulfills these criteria.\\n <sup>1</sup>\\n Thus, while two alternative explanations might be equally capable of explaining the data, we can easily assess which has more scientific credibility based on those criteria, for example, ‘a hard object hit and broke the glass’ versus ‘a soft object hit and broke the glass.’</p><p>If we accept the three points listed above as fundamental to the value and role of theory and the desirable attributes of claims, it is also clear, based on our experience with the editorial process, that certain misconceptions regarding what makes “good theory” continue to exist. We outline a few of these fallacies here, along with why they must be deemed to be fundamentally flawed.</p><p>In recognizing what is truly important when it comes to theory, and pushing aside concerns that are not ‘real’ concerns, we can now focus on the fruitful pathways available to authors as their embark on theoretical considerations in their work, and as reviewers and editors approach efforts to further develop such work. Figure 1 presents a generalization of two paths available to authors as they leverage observations and theory to build meaningful contributions to the field.</p><p>The common path (Path A) that flows from left to right in Figure 1, often beginning with a more academic-literature inspired motivation, tends to have many recognizable attributes including a front end dominant theoretical positioning and a largely deductive approach to conclusions, albeit benefiting from at least some posteriori theoretical discussion (while avoiding HARK-ing, which we will return to). This research is normally motivated by the identification of research gaps made apparent by reviews of extant bodies of knowledge, leading through grounded argumentation to formal hypothesis testing. 
While this is, by far, the most common type of submission to <i>JOM</i>, this is clearly not the only approach scholars can and have taken in developing contributions.</p><p>An alternate path (Path B) draws inspiration and motivation predominantly from empirical observations, proceeding largely from right to left in the top of Figure 1. The observation of empirical regularities, which have not yet been fully rationalized by extant research, or the observation of phenomena that contradict existing theories, lead the scholarly effort down the path of “how can we explain what we are seeing?”, rather than “what do we expect to see, given our explanations?”\\n <sup>2</sup>\\n The outcome of this process does not need to be fully articulated theoretical statements. Rather, it can be tentative definitions of constructs and exploratory language to describe the observed phenomena. This approach, by its very nature, also provides an organic lead into abductive sensemaking, where we are creating theoretical arguments to explain precisely how observations fit into a broader phenomenon in ways that have not been previously articulated. In doing so, we are implicitly anticipating future observations in specific contexts, rather than using existing observations to support theoretical arguments. That is, the claims of such sensemaking arguments often take the form of propositions with the hope that they are eventually followed up by subsequent empirical efforts, utilizing alternate sources of evidence in support of deductive inquiry as well. This can come in the form of separate follow-on studies or a well-crafted multi-method effort. Nevertheless, the process of creating constructs and narratives to describe phenomena and the abductive articulation of theoretical arguments that match the criteria outlined in section 1 are as much as a contribution as the later empirical testing of those propositions.</p><p>How are these paths related to research approaches that we see across our corpus of research at <i>JOM</i>, from largely data-crunching for validation to eliciting real-world responses, to engaging with the real-world in developing theory? Any of these could potentially involve a heavier theory back end (posteriori theorization), with theory motivating approaches at stages of execution and certainly lending motivation, to some minimal degree, at the front end as well. Figure 2 presents the processes through which we see theory being inspired by, and opening the door to, a range of empirical tactics that make use of data from real-world processes—the domain of <i>JOM</i> inquiries—to develop or improve theories about those processes and how they should be managed.</p><p>One way of being empirical involves efforts to <i>observe</i> (access, document, and assess) the real-world processes and reflect on the potential causes for the observed regularities (top arc of Figure 2). If the observed regularities are not explained by existing theory or they constitute anomalies from what is expected from the theory, we need to propose potential constructs, language, and explanations; this is pathway B in Figure 1 and is characterized by the abductive process described above. Alternatively, if these observations, even if not inspired by theoretical predictions, do match existing theories and explanations, we can inductively gain confidence on the existing theory from the probabilistic encounters of specific instances.</p><p>A second way of being empirical is to <i>test</i> theoretically derived claims. 
Ideally, this takes place through experimentation: laboratory experiments attempt to maximize the control and the precision in measurement of variables, while filed experiments maximize the realism and generalizability of the findings (McGrarth <span>1982</span>). Given the high risk and cost of field experiments, efforts to scrutinize design early are clearly of benefit to all parties; hence the recent Registered Reports Review (3R) initiative put in place at <i>JOM</i> (Abdulla, Escamilla, and Oliva <span>2024</span>). Clearly, randomized controlled trials are not always possible and quasi-experimental designs (Shadish, Cook, and Campbell <span>2001</span>) or natural experiments (where the treatment is applied ‘haphazardly’ to some units but denied to others) are valid ways to either refute the claims or, if not rejected, increase their validity. An alternative way to test theoretically-derived claims is to rely on non-experimental data—either explicitly gathered for the study (primary data) or repurposed from other data gathering efforts (secondary data)—and establish causal claims through statistical estimation procedures (Cunningham <span>2021</span>; Pearl and Mackenzie <span>2018</span>). These approaches follow a Path A strategy and correspond to the loops in Figure 2 through “test claims”; one passing through the real-world process reflecting the treatment needed for quasi/experimental work, and the other emblematic of the fact that all observation and data acquisition is guided by the theoretical claims that are being tested.</p><p>A third way of being empirical is to <i>intervene</i> using theory to guide improvements in real-world processes; that is, use the theory to provide solutions. While <i>JOM</i> has explicit editorial policies not to focus on solutions as contributions (JOM <span>2004</span>)\\n <sup>3</sup>\\n , there is ample potential to learn about the relevance and usefulness of a theory when attempting to use it to control or improve a problem situation. The recent creation of the Intervention-based Research (IBR) department in <i>JOM</i> has opened the path to use interventions to test and develop theory within the context of a situation where the researcher engages with practitioners as an agent of change in the problem situation (Oliva <span>2019</span>). The fact that the intervention might require immediate changes to the implementation strategy and that outcomes are not often what was predicted by the theory, create the opportunity to document new data from the real word processes that could lead to modifications to the theory originally used to guide the intervention. As such, intervention-based research uses Path A to design the intervention (deductively from an existing theory) but leverages the data from the intervention to abductively derive insights for theory (Path B); see the loops created through “adapt” in Figure 2.</p><p>Regardless of the chosen empirical strategy (observation, testing, and intervention), the role of theoretical argumentation, both a priori and posteriori, with different degrees of emphasis depending on the chosen path, is fundamental regardless of what we do. It precedes specific actions, but also clearly emerges from others. Its role and placement are contingent on what is being accomplished, but we can't accomplish much of anything without it. At the end of the day, in a scientific endeavor, the criteria to assess the contribution of an empirical study is its contribution to theory. 
If the process is inductive/abductive and we are only making sense of unexplained regularities or anomalies, clearly the articulation of a new theory that can be subsequently tested is enough of a contribution. However, if the purpose of the study is to test existing theory (whether with secondary data or through experiments and intervention) then placing the findings from the study in the proper context—for example, how theories need to be updated? What are new research questions that are triggered be these results?—is a requirement for the contribution to be meaningful.</p><p>What does all this mean for reviewers and editors?</p><p>As we have affirmed in the <i>JOM</i> editorial team guidelines, all reviewers and their associated reviews are required to be developmental (see https://www.jom-hub.com/editorial-team). This is not wish, it is a mandate. It is also not merely wordplay. Developmental reviews have very specific properties. They identify weakness of papers but make deliberate efforts to help authors shore up those weaknesses. The role of reviewers, at <i>JOM</i>, is not that of ‘gatekeeper’. Their primary role is not to provide an up or down vote. Their primary role is that of providing substantial commentary and guidance. Reviews should never merely state generic grievances without options for redress where that exits.</p><p>Furthermore, with specific regard to theory, a review should never merely state generic disdain for a paper's theoretical elements. Reviews should also not fall victim to the fallacies posed in Section 2, such as a general failure to sufficiently reference extant theory, or comprehensively articulate mechanisms. If a relevant theory exists for use as analogy or comparison, and a reviewer is familiar with that theoretical reference, it is the job of the reviewer to be explicit in guiding the authors towards the consideration of that work. If a mechanism exists that the reviewer feels the authors should describe, it is incumbent on the review to be explicit regarding precisely what that mechanism might be. If as a reviewer you feel ‘something is missing, but can't say what’… Don't include that sentiment in your review as such a statement clearly doesn't serve to help develop a paper.</p><p>There are also, certainly, boundaries on the kind of guidance reviews and editors should give regarding theory. For example, reviewers and editors should not create “HARK-ing traps” for authors. That is, it is inappropriate for reviewers to request an author team to develop theoretical arguments to be positioned a priori, if the motivation for such is based on results emerging from the existing analysis demonstrated in the manuscript. While some authors may recognize such recommendations as overtly problematic, some may not and still others may feel it is the only way to get through the review process successfully. To be clear, such action on the part of reviewers or editors is inappropriate. Reviewers should help authors strengthen arguments that they have used to motivate their methods and analysis. It is also fully acceptable to position ‘new’ arguments posteriori (within Discussion sections) in the interest of future research. In both instances, reviewers are obliged to be developmentally constructive in this regard, offering specific recommendations rather than general requests for ‘more’. 
However, suggesting that unexpected findings brought forward by the analysis be accounted for by the addition of new front end theoretical arguments (as if they existed a priori) is not an acceptable path for reviewers to go down.</p><p>Furthermore, reviewers and editors need to be fully appreciative of the very real possibility that incredibly strong contributions can take on a structure that originates not from an identification of a research-literature gap, but rather from direct observation. If we are to encourage researchers at <i>JOM</i> and other journals to engage with practice, we must imagine that some of that engagement is going to lead to the recognition of regularities and anomalies that have not yet been explained, and that such observations are at least as important (if not more so) than inspirations drawn predominantly from extant published work. We must be open to these highly abductive paths taken by authors, while still expecting authors to fulfill what is required in the form of thoughtful sensemaking that all for impactful theoretical contributions.</p>\",\"PeriodicalId\":51097,\"journal\":{\"name\":\"Journal of Operations Management\",\"volume\":\"71 1\",\"pages\":\"4-10\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2025-01-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/joom.1348\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Operations Management\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/joom.1348\",\"RegionNum\":2,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MANAGEMENT\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Operations Management","FirstCategoryId":"91","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/joom.1348","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MANAGEMENT","Score":null,"Total":0}
引用次数: 0
摘要
有一些潜在的解释显然比其他的更好。我们如何评估一个潜在解释的质量?邦格(1967)阐明了精心制定的科学假设的期望属性:(1)逻辑上合理,(2)基于先前的知识,(3)经验上可测试。我们认为,一个猜想的质量可以通过它满足这些标准的程度来判断。因此,虽然两种不同的解释可能同样能够解释数据,但我们可以根据这些标准很容易地评估哪一种更具有科学可信度,例如,“硬物撞击并打破了玻璃”与“软物撞击并打破了玻璃”。“如果我们接受上述三点作为理论的价值和作用以及主张的可取属性的基础,那么根据我们在编辑过程中的经验,很明显,关于什么是‘好理论’的某些误解仍然存在。”我们在这里概述了其中一些谬论,以及为什么它们必须被认为是有根本缺陷的。认识到什么是真正重要的,当涉及到理论的时候,把那些不是“真正”关注的问题放在一边,我们现在可以把重点放在作者在他们的工作中开始理论考虑时可用的富有成效的途径上,以及当审稿人和编辑努力进一步发展这些工作时。图1展示了作者在利用观察和理论为该领域做出有意义贡献时可用的两条路径的概括。图1中从左向右流动的共同路径(路径A),通常以更受学术文献启发的动机开始,往往具有许多可识别的属性,包括前端主导的理论定位和很大程度上的演绎方法得出结论,尽管至少受益于一些后先验的理论讨论(同时避免了我们将回到的HARK-ing)。这项研究的动机通常是通过对现有知识体系的回顾,识别出明显的研究差距,从而通过有根据的论证进行正式的假设检验。虽然到目前为止,这是向JOM提交的最常见的类型,但这显然不是学者们在开发贡献时可以采取的唯一方法。另一条路径(路径B)主要从经验观察中获得灵感和动机,在图1的顶部从右到左进行。对尚未被现有研究完全合理化的经验规律的观察,或者对与现有理论相矛盾的现象的观察,将学术努力引向“我们如何解释我们所看到的?”,而不是“给出我们的解释,我们期望看到什么?”这一过程的结果不需要是完整的理论陈述。相反,它可以是构造的试探性定义和描述观察到的现象的探索性语言。这种方法,就其本质而言,也为溯因性语义提供了一个有机的引导,在这个过程中,我们正在创建理论论据,以精确地解释观察是如何以以前没有阐明的方式适应更广泛的现象的。在这样做的过程中,我们隐含地预测未来在特定背景下的观察结果,而不是用现有的观察结果来支持理论论点。也就是说,这种意义论证的主张通常采用命题的形式,希望它们最终能被后续的经验努力所跟进,同时利用替代的证据来源来支持演绎探究。这可以以单独的后续研究或精心设计的多方法努力的形式出现。然而,创造结构和叙述来描述现象的过程,以及与第1节中概述的标准相匹配的理论论证的溯因性表达,与后来对这些命题的经验检验一样多。这些路径与我们在JOM的研究语料库中看到的研究方法有什么关系?从验证的大量数据处理到引出现实世界的反应,再到在发展理论时与现实世界接触?其中任何一种都可能涉及到较重的理论后端(后推论),在执行阶段使用理论激励方法,并且在一定程度上在前端也提供动机。图2展示了一些过程,通过这些过程,我们可以看到理论受到了一系列经验策略的启发,这些策略利用了来自现实世界过程(JOM查询领域)的数据,从而开发或改进了关于这些过程以及应该如何管理这些过程的理论。 经验主义的一种方式包括努力观察(获取、记录和评估)现实世界的过程,并反思观察到的规律的潜在原因(图2的顶部弧线)。如果观察到的规律不能被现有理论解释,或者它们构成了与理论预期的异常,我们需要提出潜在的结构、语言和解释;这是图1中的路径B,其特征是上述的外展过程。或者,如果这些观察结果,即使没有受到理论预测的启发,确实与现有的理论和解释相匹配,我们可以从特定实例的概率遭遇中归纳地获得对现有理论的信心。经验主义的第二种方法是检验理论推导的主张。理想情况下,这是通过实验来实现的:实验室实验试图最大限度地控制和测量变量的精度,而现场实验则最大限度地提高了研究结果的现实性和普遍性(McGrarth 1982)。考虑到现场实验的高风险和高成本,早期仔细审查设计的努力显然对各方都有好处;因此,JOM (Abdulla, Escamilla, and Oliva 2024)最近实施了注册报告审查(3R)计划。显然,随机对照试验并不总是可能的,准实验设计(Shadish, Cook, and Campbell 2001)或自然实验(在某些单位“随意”应用治疗,但在其他单位被拒绝)是反驳主张的有效方法,或者,如果不被拒绝,则增加其有效性。检验理论推导结论的另一种方法是依赖非实验数据——要么是为研究明确收集的数据(主要数据),要么是从其他数据收集工作中重新利用的数据(次要数据)——并通过统计估计程序建立因果关系(Cunningham 2021;Pearl and Mackenzie 2018)。这些方法遵循路径a策略,并通过“测试声明”对应图2中的循环;一个通过现实世界的过程,反映了准/实验工作所需的处理,另一个象征着这样一个事实,即所有的观察和数据采集都是由正在测试的理论主张指导的。经验主义的第三种方式是用理论来指导现实世界过程的改进;也就是说,用理论提供解决方案。虽然JOM有明确的编辑政策,不把解决方案作为贡献来关注(JOM 2004) 3,但当试图使用理论来控制或改善问题情况时,有充分的潜力来了解理论的相关性和有用性。最近在JOM建立的基于干预的研究(IBR)部门开辟了一条道路,在研究人员作为问题情况变化的推动者与从业者接触的情况下,使用干预措施来测试和发展理论(Oliva 2019)。事实上,干预可能需要立即改变实施策略,而结果往往与理论所预测的不同,这为记录来自真实世界过程的新数据创造了机会,这些数据可能导致对最初用于指导干预的理论的修改。因此,基于干预的研究使用路径A来设计干预(从现有理论推导),但利用来自干预的数据来溯因性地获得理论见解(路径B);请参见图2中通过“adapt”创建的循环。无论选择何种经验策略(观察、检验和干预),理论论证的作用,无论是先验的还是后验的,根据所选择的路径有不同程度的强调,都是基本的,无论我们做什么。它先于具体的行动,但也明显地从其他行动中产生。它的作用和位置取决于要完成的任务,但没有它,我们无法完成任何事情。在一天结束的时候,在科学的努力中,评估实证研究贡献的标准是它对理论的贡献。如果这个过程是归纳/溯因的,而我们只是在理解无法解释的规律或异常,显然,提出一个可以随后检验的新理论就足够了。然而,如果研究的目的是测试现有的理论(无论是通过二手数据还是通过实验和干预),那么将研究结果置于适当的背景下-例如,理论需要如何更新?这些结果引发了哪些新的研究问题?——是贡献要有意义的必要条件。这一切对审稿人和编辑意味着什么?正如我们在JOM编辑团队指南中所确认的那样,所有审稿人及其相关的审稿都必须是发展性的(参见https://www.jom-hub)。 
com/editorial-team)。这不是愿望,这是命令。这也不仅仅是文字游戏。发展性评论具有非常具体的特性。他们会发现论文的弱点,但会刻意帮助作者弥补这些弱点。在JOM,审稿人的角色并不是“看门人”。他们的主要作用不是提供赞成或反对的投票。他们的主要作用是提供实质性的评论和指导。审查不应该仅仅陈述一般性的不满,而没有补救的选择。此外,在具体的理论方面,评论不应该仅仅表示对论文理论元素的普遍蔑视。评论也不应该成为第2节中提出的谬论的受害者,例如普遍未能充分参考现有理论,或全面阐明机制。如果一个相关的理论存在,可以用作类比或比较,并且审稿人熟悉该理论参考,那么审稿人的工作就是明确地指导作者考虑该工作。如果审稿人认为作者应该描述的机制存在,那么审稿人就有责任明确指出该机制可能是什么。如果作为一名审稿人,你觉得“缺少了什么,但不能说什么”,那么不要在你的审稿中包含这种情绪,因为这样的陈述显然对论文的发展没有帮助。当然,审稿和编辑在理论方面应该给出什么样的指导也是有界限的。例如,审稿人和编辑不应该为作者制造“听音陷阱”。也就是说,如果这样做的动机是基于从手稿中展示的现有分析中出现的结果,审稿人要求作者团队发展理论论点来定位先验是不合适的。虽然有些作者可能会意识到这样的建议有明显的问题,但有些人可能不会,还有一些人可能会觉得这是成功通过审查过程的唯一途径。需要明确的是,审稿人或编辑的这种行为是不合适的。审稿人应该帮助作者加强他们用来激励他们的方法和分析的论据。为了未来研究的利益,在事后(在讨论部分)定位“新”论点也是完全可以接受的。在这两种情况下,审稿人都有义务在这方面具有发展建设性,提供具体的建议,而不是笼统地要求“更多”。然而,建议通过添加新的前端理论论据(就好像它们是先验存在的)来解释分析提出的意外发现,这对审稿人来说是不可接受的。此外,审稿人和编辑需要充分认识到一种非常现实的可能性,即令人难以置信的强大贡献可以采用一种结构,这种结构不是源于对研究文献差距的识别,而是源于直接观察。如果我们要鼓励JOM和其他期刊的研究人员参与实践,我们必须想象其中的一些参与将导致对尚未解释的规律和异常的认识,并且这些观察至少与主要从现有已发表的工作中获得的灵感一样重要(如果不是更重要的话)。我们必须对作者所采取的这些高度诱拐的路径持开放态度,同时仍然期望作者以深思熟虑的意义表达的形式满足要求,所有这些都是为了有影响力的理论贡献。
Meaningful Theoretical Pathways for Research Contributions
Across fields of scholarship, ever since scholarship has existed, there have been numerous discussions opining on what theory is, why it is useful and how best to craft theoretical arguments and frameworks. Every few years, a new discussion particularly relevant to a domain of study emerges. Often the intention of such discussions is to reiterate critical points made in the past as still applicable. In other instances, the discussions attempt to recast and reshape perspectives on theory. Both reiteration and alternate perspectives can prove valuable, as new scholars enter the field and as priorities for journals, editors and review teams evolve.
These points are also of interest to contemporary discussions at the Journal of Operations Management (JOM). As an outlet long regarded for impactful empirical work in the field, we have long been interested in the appropriate use of theory and have also had a long history of intervening in our field to re-emphasize the ‘what’, ‘why’ and ‘how’ of meaningful theoretical structures and argumentation. As editors of the journal, we believe it is valuable to reiterate what is well-accepted regarding the role and nature of effective theory in research, whether we are discussing grand theories, theoretical frameworks, mid-range theory or theoretical arguments for specific mechanisms. However, we also strongly believe that it is critically valuable to outline how theoretical contributions may differ, while still offering considerable value to a research effort and the field.
What is core to the substantive nature of theoretical contributions, of course, must be driven by priorities regarding its role; just as the selection of empirical methods must be driven by the claims emerging from theoretical arguments (even nascent ones), and insights for future scholars driven by observation and analysis. By outlining contemporary priorities that define meaningful theory we are in a far better position to simultaneously expand perspectives on how theoretical contributions can be made, as well as challenge or dispel some often difficult-to-justify criticisms that scholars (authors, reviewers and editors) confront regarding what is ‘good’ theory.
According to Fried (2020), this “statistical equivalency” is one of the fundamental reasons that we cannot escape the need for well-reasoned theoretical arguments, designed to help us make sense of highly complex settings, in which a wealth of observed signals is accompanied by a wealth of unobserved signals. It is exactly when phenomena are not straightforward and mechanisms are not obvious that sensemaking, and associated deliberate research inquiry, is critical.
In the same vein, a ‘complete theory’, akin to a physical law, doesn't present much of a motivator for research—if there is no uncertainty regarding cause and effect, there is little reason to expect that an inquiry into such phenomena would be of interest to a research community. Fortunately, in the domains that are studied in management, we seldom come close to complete theories. Occasionally we find enough evidence to corroborate what we might refer to as grand theories and associated frameworks. More often, we observe, or perceive, phenomena that exhibit patterns (either across a body of literature or direct observations in the field) that inspire us to question whether such patterns are repeatable. Indeed, theories are never finished products but rather exist along a continuum of sensemaking from vague hunches to detailed accounts of causal mechanism (Mohr 1982; Weick 1989), where the initial phases of theorizing often include the creation or definition of constructs and narratives to account for the observed phenomenon.
With replication discussions so prominent today, it would be a mistake to forget that methods are merely a means to an end, and that they are bound to be imperfectly replicable in the observations and analyses they yield. The most critical aspect of replication comes down to whether we can reinforce existing understanding, or whether such attempts at sensemaking require modification, qualification or replacement. That should be the primary focus of replication interest for research communities, with a possible exception for communities focused on methodological contributions. Similarly, researchers certainly must be permitted to demonstrate thought that aligns with (replicates) existing theoretical arguments, based on the identification of repeated insights from whatever source, just as they must be permitted to deviate from such arguments if the patterns they encounter do not align. In the complex contexts that characterize management research domains, it is not helpful to expect scholars to identify universal laws, nor is it appropriate to bind them to recognizing or aligning with claims that others have made to that end.
Furthermore, it should be noted that not all theoretical arguments (hypotheses or propositions) are created equal. There are potential explanations that are clearly better than others. How do we assess the quality of a potential explanation? Bunge (1967) articulates the desired attributes of well-formulated scientific hypotheses as (1) logically sound, (2) grounded in previous knowledge, and (3) empirically testable. We believe that the quality of a conjecture can be judged by the extent to which it fulfills these criteria.¹ Thus, while two alternative explanations might be equally capable of explaining the data, we can easily assess which has more scientific credibility based on those criteria, for example, ‘a hard object hit and broke the glass’ versus ‘a soft object hit and broke the glass.’
If we accept the three points listed above as fundamental to the value and role of theory and the desirable attributes of claims, it is also clear, based on our experience with the editorial process, that certain misconceptions regarding what makes “good theory” continue to exist. We outline a few of these fallacies here, along with why they must be deemed to be fundamentally flawed.
In recognizing what is truly important when it comes to theory, and pushing aside concerns that are not ‘real’ concerns, we can now focus on the fruitful pathways available to authors as they embark on theoretical considerations in their work, and as reviewers and editors approach efforts to further develop such work. Figure 1 presents a generalization of two paths available to authors as they leverage observations and theory to build meaningful contributions to the field.
The common path (Path A) that flows from left to right in Figure 1, often beginning with a more academic-literature inspired motivation, tends to have many recognizable attributes including a front end dominant theoretical positioning and a largely deductive approach to conclusions, albeit benefiting from at least some posteriori theoretical discussion (while avoiding HARK-ing, which we will return to). This research is normally motivated by the identification of research gaps made apparent by reviews of extant bodies of knowledge, leading through grounded argumentation to formal hypothesis testing. While this is, by far, the most common type of submission to JOM, this is clearly not the only approach scholars can and have taken in developing contributions.
An alternate path (Path B) draws inspiration and motivation predominantly from empirical observations, proceeding largely from right to left in the top of Figure 1. The observation of empirical regularities, which have not yet been fully rationalized by extant research, or the observation of phenomena that contradict existing theories, leads the scholarly effort down the path of “how can we explain what we are seeing?”, rather than “what do we expect to see, given our explanations?”² The outcome of this process does not need to be fully articulated theoretical statements. Rather, it can be tentative definitions of constructs and exploratory language to describe the observed phenomena. This approach, by its very nature, also provides an organic lead into abductive sensemaking, where we are creating theoretical arguments to explain precisely how observations fit into a broader phenomenon in ways that have not been previously articulated. In doing so, we are implicitly anticipating future observations in specific contexts, rather than using existing observations to support theoretical arguments. That is, the claims of such sensemaking arguments often take the form of propositions with the hope that they are eventually followed up by subsequent empirical efforts, utilizing alternate sources of evidence in support of deductive inquiry as well. This can come in the form of separate follow-on studies or a well-crafted multi-method effort. Nevertheless, the process of creating constructs and narratives to describe phenomena and the abductive articulation of theoretical arguments that match the criteria outlined in Section 1 are as much a contribution as the later empirical testing of those propositions.
How are these paths related to research approaches that we see across our corpus of research at JOM, from largely data-crunching for validation, to eliciting real-world responses, to engaging with the real world in developing theory? Any of these could potentially involve a heavier theory back end (posteriori theorization), with theory motivating approaches at stages of execution and certainly lending motivation, to some minimal degree, at the front end as well. Figure 2 presents the processes through which we see theory being inspired by, and opening the door to, a range of empirical tactics that make use of data from real-world processes—the domain of JOM inquiries—to develop or improve theories about those processes and how they should be managed.
One way of being empirical involves efforts to observe (access, document, and assess) the real-world processes and reflect on the potential causes for the observed regularities (top arc of Figure 2). If the observed regularities are not explained by existing theory, or they constitute anomalies relative to what the theory predicts, we need to propose potential constructs, language, and explanations; this is Path B in Figure 1 and is characterized by the abductive process described above. Alternatively, if these observations, even if not inspired by theoretical predictions, do match existing theories and explanations, we can inductively gain confidence in the existing theory from the probabilistic encounters of specific instances.
A second way of being empirical is to test theoretically derived claims. Ideally, this takes place through experimentation: laboratory experiments attempt to maximize the control and the precision in measurement of variables, while field experiments maximize the realism and generalizability of the findings (McGrath 1982). Given the high risk and cost of field experiments, efforts to scrutinize design early are clearly of benefit to all parties; hence the recent Registered Reports Review (3R) initiative put in place at JOM (Abdulla, Escamilla, and Oliva 2024). Clearly, randomized controlled trials are not always possible, and quasi-experimental designs (Shadish, Cook, and Campbell 2001) or natural experiments (where the treatment is applied ‘haphazardly’ to some units but denied to others) are valid ways to either refute the claims or, if not rejected, increase their validity. An alternative way to test theoretically derived claims is to rely on non-experimental data—either explicitly gathered for the study (primary data) or repurposed from other data gathering efforts (secondary data)—and establish causal claims through statistical estimation procedures (Cunningham 2021; Pearl and Mackenzie 2018). These approaches follow a Path A strategy and correspond to the loops in Figure 2 through “test claims”; one passing through the real-world process, reflecting the treatment needed for quasi-experimental and experimental work, and the other emblematic of the fact that all observation and data acquisition is guided by the theoretical claims that are being tested.
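To make the last point concrete, the following is a minimal, purely illustrative sketch of what “establishing causal claims through statistical estimation procedures” on non-experimental data can look like. The simulated dataset, the variable names (e.g., a hypothetical "throughput" outcome and "treated" indicator), and the choice of a two-period difference-in-differences specification are assumptions made here for illustration only; they are not a method prescribed by the editorial or by the cited sources.

```python
# Illustrative sketch (assumed setting, simulated data): estimating a treatment
# effect from non-experimental panel-style data with a difference-in-differences design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_units = 500

# Simulate two periods per unit; 'treated' units experience a hypothetical
# process change (e.g., a new scheduling policy) only in the post period.
units = pd.DataFrame({
    "unit": np.arange(n_units),
    "treated": rng.integers(0, 2, n_units),   # assumed treatment indicator
    "baseline": rng.normal(50, 5, n_units),   # unit-level heterogeneity
})
panel = units.loc[units.index.repeat(2)].reset_index(drop=True)
panel["post"] = np.tile([0, 1], n_units)

true_effect = 3.0  # known only because the data are simulated
panel["throughput"] = (
    panel["baseline"]
    + 1.5 * panel["post"]                              # common time trend
    + true_effect * panel["treated"] * panel["post"]   # effect of the change
    + rng.normal(0, 2, len(panel))
)

# The coefficient on treated:post is the difference-in-differences estimate;
# its causal credibility rests on the (untestable) parallel-trends assumption.
model = smf.ols("throughput ~ treated * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["unit"]}
)
print(model.summary().tables[1])
```

The point of the sketch is not the particular estimator but the design logic the editorial describes: the estimation procedure only supports a causal claim under explicitly stated identifying assumptions, which is exactly where the theoretical argument does its work.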
A third way of being empirical is to intervene using theory to guide improvements in real-world processes; that is, use the theory to provide solutions. While JOM has explicit editorial policies not to focus on solutions as contributions (JOM 2004)³, there is ample potential to learn about the relevance and usefulness of a theory when attempting to use it to control or improve a problem situation. The recent creation of the Intervention-based Research (IBR) department in JOM has opened the path to use interventions to test and develop theory within the context of a situation where the researcher engages with practitioners as an agent of change in the problem situation (Oliva 2019). The fact that the intervention might require immediate changes to the implementation strategy, and that outcomes are often not what the theory predicted, creates the opportunity to document new data from the real-world processes that could lead to modifications of the theory originally used to guide the intervention. As such, intervention-based research uses Path A to design the intervention (deductively from an existing theory) but leverages the data from the intervention to abductively derive insights for theory (Path B); see the loops created through “adapt” in Figure 2.
Regardless of the chosen empirical strategy (observation, testing, or intervention), the role of theoretical argumentation, both a priori and posteriori, with different degrees of emphasis depending on the chosen path, is fundamental to what we do. It precedes specific actions, but also clearly emerges from others. Its role and placement are contingent on what is being accomplished, but we can't accomplish much of anything without it. At the end of the day, in a scientific endeavor, the criterion for assessing the contribution of an empirical study is its contribution to theory. If the process is inductive/abductive and we are only making sense of unexplained regularities or anomalies, clearly the articulation of a new theory that can be subsequently tested is enough of a contribution. However, if the purpose of the study is to test existing theory (whether with secondary data or through experiments and intervention), then placing the findings from the study in the proper context—for example, how do theories need to be updated? What new research questions are triggered by these results?—is a requirement for the contribution to be meaningful.
What does all this mean for reviewers and editors?
As we have affirmed in the JOM editorial team guidelines, all reviewers and their associated reviews are required to be developmental (see https://www.jom-hub.com/editorial-team). This is not a wish; it is a mandate. It is also not merely wordplay. Developmental reviews have very specific properties. They identify weaknesses of papers but make deliberate efforts to help authors shore up those weaknesses. The role of reviewers, at JOM, is not that of ‘gatekeeper’. Their primary role is not to provide an up or down vote. Their primary role is that of providing substantial commentary and guidance. Reviews should never merely state generic grievances without options for redress where such redress exists.
Furthermore, with specific regard to theory, a review should never merely state generic disdain for a paper's theoretical elements. Reviews should also not fall victim to the fallacies posed in Section 2, such as claiming a general failure to sufficiently reference extant theory or to comprehensively articulate mechanisms. If a relevant theory exists for use as analogy or comparison, and a reviewer is familiar with that theoretical reference, it is the job of the reviewer to be explicit in guiding the authors towards the consideration of that work. If a mechanism exists that the reviewer feels the authors should describe, it is incumbent on the reviewer to be explicit regarding precisely what that mechanism might be. If as a reviewer you feel ‘something is missing, but can't say what’… do not include that sentiment in your review, as such a statement clearly doesn't serve to help develop a paper.
There are also, certainly, boundaries on the kind of guidance reviews and editors should give regarding theory. For example, reviewers and editors should not create “HARK-ing traps” for authors. That is, it is inappropriate for reviewers to request an author team to develop theoretical arguments to be positioned a priori, if the motivation for such is based on results emerging from the existing analysis demonstrated in the manuscript. While some authors may recognize such recommendations as overtly problematic, some may not and still others may feel it is the only way to get through the review process successfully. To be clear, such action on the part of reviewers or editors is inappropriate. Reviewers should help authors strengthen arguments that they have used to motivate their methods and analysis. It is also fully acceptable to position ‘new’ arguments posteriori (within Discussion sections) in the interest of future research. In both instances, reviewers are obliged to be developmentally constructive in this regard, offering specific recommendations rather than general requests for ‘more’. However, suggesting that unexpected findings brought forward by the analysis be accounted for by the addition of new front end theoretical arguments (as if they existed a priori) is not an acceptable path for reviewers to go down.
Furthermore, reviewers and editors need to be fully appreciative of the very real possibility that incredibly strong contributions can take on a structure that originates not from an identification of a research-literature gap, but rather from direct observation. If we are to encourage researchers at JOM and other journals to engage with practice, we must imagine that some of that engagement is going to lead to the recognition of regularities and anomalies that have not yet been explained, and that such observations are at least as important as (if not more important than) inspirations drawn predominantly from extant published work. We must be open to these highly abductive paths taken by authors, while still expecting authors to fulfill what is required in the form of thoughtful sensemaking that allows for impactful theoretical contributions.
Journal Introduction:
The Journal of Operations Management (JOM) is a leading academic publication dedicated to advancing the field of operations management (OM) through rigorous and original research. The journal's primary audience is the academic community, although it also values contributions that attract the interest of practitioners. However, it does not publish articles that are primarily aimed at practitioners, as academic relevance is a fundamental requirement.
JOM focuses on the management aspects of various types of operations, including manufacturing, service, and supply chain operations. The journal's scope is broad, covering both profit-oriented and non-profit organizations. The core criterion for publication is that the research question must be centered around operations management, rather than merely using operations as a context. For instance, a study on charismatic leadership in a manufacturing setting would only be within JOM's scope if it directly relates to the management of operations; the mere setting of the study is not enough.
Published papers in JOM are expected to address real-world operational questions and challenges. While not all research must be driven by practical concerns, there must be a credible link to practice that is considered from the outset of the research, not as an afterthought. Authors are cautioned against assuming that academic knowledge can be easily translated into practical applications without proper justification.
JOM's articles are abstracted and indexed by several prestigious databases and services, including Engineering Information, Inc.; Executive Sciences Institute; INSPEC; International Abstracts in Operations Research; Cambridge Scientific Abstracts; SciSearch/Science Citation Index; CompuMath Citation Index; Current Contents/Engineering, Computing & Technology; Information Access Company; and Social Sciences Citation Index. This ensures that the journal's research is widely accessible and recognized within the academic and professional communities.