{"title":"Statistically significant? Let us recognize that estimates of tested effects are uncertain","authors":"A. Bultez, C. Derbaix, Jean-Luc Herrmann","doi":"10.1177/20515707211040743","DOIUrl":null,"url":null,"abstract":"Haven’t all of us dreamt of concluding that our results be statistically significant, that is, characterized by a p-value lying below an arbitrary threshold, most often 5 % ? In this article, we, first, deplore that p has been largely misunderstood, and that its misinterpretation has entailed a fallacious dichotomization and an understatement of the uncertainty prevailing about the effect tested. Next, we introduce and explain a brand-new – direct – measure of the plausibility of the effect under study. Then, we illustrate the relevance of this indicator by revisiting a recently published marketing research case. We also insist on the necessity to contextualize it, using complementary credibility intervals graphically contrasted. Beyond making researchers aware of the exact meaning of test-related probabilities, the delineated approach invites them to formulate their inferences with prudence and modesty acknowledging how uncertain these are.","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/20515707211040743","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Haven’t all of us dreamt of concluding that our results are statistically significant, that is, characterized by a p-value lying below an arbitrary threshold, most often 5%? In this article, we first deplore that p has been largely misunderstood, and that its misinterpretation has entailed a fallacious dichotomization and an understatement of the uncertainty prevailing about the effect tested. Next, we introduce and explain a brand-new, direct measure of the plausibility of the effect under study. Then, we illustrate the relevance of this indicator by revisiting a recently published marketing research case. We also insist on the necessity of contextualizing it, using complementary credibility intervals contrasted graphically. Beyond making researchers aware of the exact meaning of test-related probabilities, the delineated approach invites them to formulate their inferences with prudence and modesty, acknowledging how uncertain they are.
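To make the contrast concrete, the sketch below sets a classical p-value alongside a direct, plausibility-style summary of an effect. It is not the authors' specific indicator (which the abstract does not spell out); it assumes a simple two-group comparison with a normal model and a flat prior, so the posterior for the effect is approximately Normal(estimate, SE²), and "plausibility" is read as the posterior probability that the effect is positive, reported together with a 95% credibility interval. The data are simulated for illustration only.

```python
# Minimal sketch: p-value vs. a direct plausibility-style summary of an effect.
# NOT the authors' exact indicator; assumes a flat prior and a normal
# approximation, so the posterior for the effect is Normal(estimate, SE^2).

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical two-group marketing experiment (simulated, not from the paper)
control = rng.normal(loc=5.0, scale=2.0, size=50)
treated = rng.normal(loc=5.8, scale=2.0, size=50)

# Classical test: p-value under the null hypothesis of no difference
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=True)

# Effect estimate and its standard error
effect = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated)
             + control.var(ddof=1) / len(control))

# Direct summaries of the effect (flat prior, normal approximation):
# probability the effect is positive, and a 95% credibility interval
prob_positive = 1 - stats.norm.cdf(0, loc=effect, scale=se)
ci_low, ci_high = stats.norm.interval(0.95, loc=effect, scale=se)

print(f"p-value (two-sided)              : {p_value:.4f}")
print(f"Estimated effect                 : {effect:.3f} (SE {se:.3f})")
print(f"P(effect > 0 | data), flat prior : {prob_positive:.3f}")
print(f"95% credibility interval         : [{ci_low:.3f}, {ci_high:.3f}]")
```

Reporting the estimate with its credibility interval, rather than only a significant/non-significant verdict, keeps the uncertainty about the tested effect visible, which is the stance the abstract advocates.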