P-Value Precision and Reproducibility.
Dennis D Boos, Leonard A Stefanski
The American Statistician, 65(4), 213-221, 2011. DOI: 10.1198/tas.2011.10129
Abstract: P-values are useful statistical measures of evidence against a null hypothesis. In contrast to other statistical estimates, however, their sample-to-sample variability is usually not considered or estimated, and therefore not fully appreciated. Via a systematic study of log-scale p-value standard errors, bootstrap prediction bounds, and reproducibility probabilities for future replicate p-values, we show that p-values exhibit surprisingly large variability in typical data situations. In addition to providing context to discussions about the failure of statistical results to replicate, our findings shed light on the relative value of exact p-values vis-à-vis approximate p-values, and indicate that the use of *, **, and *** to denote levels .05, .01, and .001 of statistical significance in subject-matter journals is about the right level of precision for reporting p-values when judged by widely accepted rules for rounding statistical estimates.
{"title":"A Note on Comparing the Power of Test Statistics at Low Significance Levels.","authors":"Nathan Morris, Robert Elston","doi":"10.1198/tast.2011.10117","DOIUrl":"https://doi.org/10.1198/tast.2011.10117","url":null,"abstract":"<p><p>It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the <i>relative</i> performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as <i>α</i> = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level <i>α</i> = 5 × 10<sup>-8</sup>, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.</p>","PeriodicalId":50801,"journal":{"name":"American Statistician","volume":"65 3","pages":""},"PeriodicalIF":1.8,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1198/tast.2011.10117","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"31964197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Consistency of Normal Distribution Based Pseudo Maximum Likelihood Estimates When Data Are Missing at Random.","authors":"Ke-Hai Yuan, Peter M Bentler","doi":"10.1198/tast.2010.09203","DOIUrl":"https://doi.org/10.1198/tast.2010.09203","url":null,"abstract":"<p><p>This paper shows that, when variables with missing values are linearly related to observed variables, the normal-distribution-based pseudo MLEs are still consistent. The population distribution may be unknown while the missing data process can follow an arbitrary missing at random mechanism. Enough details are provided for the bivariate case so that readers having taken a course in statistics/probability can fully understand the development. Sufficient conditions for the consistency of the MLEs in higher dimensions are also stated, while the details are omitted.</p>","PeriodicalId":50801,"journal":{"name":"American Statistician","volume":"64 3","pages":"263-267"},"PeriodicalIF":1.8,"publicationDate":"2010-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1198/tast.2010.09203","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29568679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-linear Models for Longitudinal Data.
Jan Serroyen, Geert Molenberghs, Geert Verbeke, Marie Davidian
The American Statistician, 63(4), 378-388, 2009. DOI: 10.1198/tast.2009.07256
Abstract: While marginal models, random-effects models, and conditional models are routinely considered to be the three main modeling families for continuous and discrete repeated measures with linear and generalized linear mean structures, respectively, it is less common to consider non-linear models, let alone frame them within the above taxonomy. In the latter situation, indeed, when considered at all, the focus is often exclusively on random-effects models. In this paper, we consider all three families, exemplify their great flexibility and relative ease of use, and apply them to a simple but illustrative set of data on tree circumference growth of orange trees.
{"title":"Rating Movies and Rating the Raters Who Rate Them.","authors":"Hua Zhou, Kenneth Lange","doi":"10.1198/tast.2009.08278","DOIUrl":"https://doi.org/10.1198/tast.2009.08278","url":null,"abstract":"<p><p>The movie distribution company Netflix has generated considerable buzz in the statistics community by offering a million dollar prize for improvements to its movie rating system. Among the statisticians and computer scientists who have disclosed their techniques, the emphasis has been on machine learning approaches. This article has the modest goal of discussing a simple model for movie rating and other forms of democratic rating. Because the model involves a large number of parameters, it is nontrivial to carry out maximum likelihood estimation. Here we derive a straightforward EM algorithm from the perspective of the more general MM algorithm. The algorithm is capable of finding the global maximum on a likelihood landscape littered with inferior modes. We apply two variants of the model to a dataset from the MovieLens archive and compare their results. Our model identifies quirky raters, redefines the raw rankings, and permits imputation of missing ratings. The model is intended to stimulate discussion and development of better theory rather than to win the prize. It has the added benefit of introducing readers to some of the issues connected with analyzing high-dimensional data.</p>","PeriodicalId":50801,"journal":{"name":"American Statistician","volume":"63 4","pages":"297-307"},"PeriodicalIF":1.8,"publicationDate":"2009-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1198/tast.2009.08278","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29274882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Easy Multiplicity Control in Equivalence Testing Using Two One-sided Tests.","authors":"Carolyn Lauzon, Brian Caffo","doi":"10.1198/tast.2009.0029","DOIUrl":"https://doi.org/10.1198/tast.2009.0029","url":null,"abstract":"<p><p>Equivalence testing is growing in use in scientific research outside of its traditional role in the drug approval process. Largely due to its ease of use and recommendation from the United States Food and Drug Administration guidance, the most common statistical method for testing equivalence is the two one-sided tests procedure (TOST). Like classical point-null hypothesis testing, TOST is subject to multiplicity concerns as more comparisons are made. In this manuscript, a condition that bounds the family-wise error rate using TOST is given. This condition then leads to a simple solution for controlling the family-wise error rate. Specifically, we demonstrate that if all pair-wise comparisons of k independent groups are being evaluated for equivalence, then simply scaling the nominal Type I error rate down by (k - 1) is sufficient to maintain the family-wise error rate at the desired value or less. The resulting rule is much less conservative than the equally simple Bonferroni correction. An example of equivalence testing in a non drug-development setting is given.</p>","PeriodicalId":50801,"journal":{"name":"American Statistician","volume":"63 2","pages":"147-154"},"PeriodicalIF":1.8,"publicationDate":"2009-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1198/tast.2009.0029","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"28625984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Assessment of Monte Carlo Error in Simulation-Based Statistical Analyses.
Elizabeth Koehler, Elizabeth Brown, Sebastien J-P A Haneuse
The American Statistician, pp. 155-162, 2009. DOI: 10.1198/tast.2009.0030
Abstract: Statistical experiments, more commonly referred to as Monte Carlo or simulation studies, are used to study the behavior of statistical methods and measures under controlled situations. Whereas recent computing and methodological advances have permitted increased efficiency in the simulation process, known as variance reduction, such experiments remain limited by their finite nature and hence are subject to uncertainty; when a simulation is run more than once, different results are obtained. However, virtually no emphasis has been placed on reporting the uncertainty, referred to here as Monte Carlo error, associated with simulation results in the published literature, or on justifying the number of replications used. These deserve broader consideration. Here we present a series of simple and practical methods for estimating Monte Carlo error as well as determining the number of replications required to achieve a desired level of accuracy. The issues and methods are demonstrated with two simple examples, one evaluating operating characteristics of the maximum likelihood estimator for the parameters in logistic regression and the other in the context of using the bootstrap to obtain 95% confidence intervals. The results suggest that in many settings, Monte Carlo error may be more substantial than traditionally thought.
{"title":"A Fresh Look at the Discriminant Function Approach for Estimating Crude or Adjusted Odds Ratios.","authors":"Robert H Lyles, Ying Guo, Andrew N Hill","doi":"10.1198/tast.2009.08246","DOIUrl":"10.1198/tast.2009.08246","url":null,"abstract":"<p><p>Assuming a binary outcome, logistic regression is the most common approach to estimating a crude or adjusted odds ratio corresponding to a continuous predictor. We revisit a method termed the discriminant function approach, which leads to closed-form estimators and corresponding standard errors. In its most appealing application, we show that the approach suggests a multiple linear regression of the continuous predictor of interest on the outcome and other covariates, in place of the traditional logistic regression model. If standard diagnostics support the assumptions (including normality of errors) accompanying this linear regression model, the resulting estimator has demonstrable advantages over the usual maximum likelihood estimator via logistic regression. These include improvements in terms of bias and efficiency based on a minimum variance unbiased estimator of the log odds ratio, as well as the availability of an estimate when logistic regression fails to converge due to a separation of data points. Use of the discriminant function approach as described here for multivariable analysis requires less stringent assumptions than those for which it was historically criticized, and is worth considering when the adjusted odds ratio associated with a particular continuous predictor is of primary interest. Simulation and case studies illustrate these points.</p>","PeriodicalId":50801,"journal":{"name":"American Statistician","volume":"63 4","pages":""},"PeriodicalIF":1.8,"publicationDate":"2009-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3881534/pdf/nihms536814.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32012162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Flexible Frames and Control Sampling in Case-Control Studies: Weighters (Survey Statisticians) Versus Anti-Weighters (Epidemiologists).
Richard F Potthoff, Susan Halabi, Joellen M Schildkraut, Beth Newman
The American Statistician, 62(4), 307-313, 2008. DOI: 10.1198/000313008X364525
Abstract: We propose two innovations in statistical sampling for controls to enable better design of population-based case-control studies. The main innovation leads to novel solutions, without using weights, of the difficult and long-standing problem of selecting a control from persons in a household. Another advance concerns the drawing (at the outset) of the households themselves and involves random-digit dialing with atypical use of list-assisted sampling. A common element throughout is that one capitalizes on flexibility (not broadly available in usual survey settings) in choosing the frame, which specifies the population of persons from which both cases and controls come.
{"title":"Linear Transformations and the k-Means Clustering Algorithm: Applications to Clustering Curves.","authors":"Thaddeus Tarpey","doi":"10.1198/000313007X171016","DOIUrl":"https://doi.org/10.1198/000313007X171016","url":null,"abstract":"<p><p>Functional data can be clustered by plugging estimated regression coefficients from individual curves into the k-means algorithm. Clustering results can differ depending on how the curves are fit to the data. Estimating curves using different sets of basis functions corresponds to different linear transformations of the data. k-means clustering is not invariant to linear transformations of the data. The optimal linear transformation for clustering will stretch the distribution so that the primary direction of variability aligns with actual differences in the clusters. It is shown that clustering the raw data will often give results similar to clustering regression coefficients obtained using an orthogonal design matrix. Clustering functional data using an L(2) metric on function space can be achieved by clustering a suitable linear transformation of the regression coefficients. An example where depressed individuals are treated with an antidepressant is used for illustration.</p>","PeriodicalId":50801,"journal":{"name":"American Statistician","volume":"61 1","pages":"34-40"},"PeriodicalIF":1.8,"publicationDate":"2007-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1198/000313007X171016","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"26612590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}