{"title":"The distribution of power-related random variables (and their use in clinical trials)","authors":"Francesco Mariani, Fulvio De Santis, Stefania Gubbiotti","doi":"10.1007/s00362-024-01599-1","DOIUrl":"https://doi.org/10.1007/s00362-024-01599-1","url":null,"abstract":"<p>In the hybrid Bayesian-frequentist approach to hypothesis tests, the power function, i.e. the probability of rejecting the null hypothesis, is a random variable and a pre-experimental evaluation of the study is commonly carried out through the so-called probability of success (PoS). PoS is usually defined as the expected value of the random power, which is not necessarily a representative summary of the entire distribution. Here, we consider the main definitions of PoS and investigate the power-related random variables that induce them. We provide general expressions for their cumulative distribution and probability density functions, as well as closed-form expressions when the test statistic is, at least asymptotically, normal. The analysis of such distributions highlights discrepancies in the main definitions of PoS, leading us to prefer the one based on the utility function of the test. We illustrate our idea through an example and an application to clinical trials, which is a framework where PoS is commonly employed.</p>","PeriodicalId":51166,"journal":{"name":"Statistical Papers","volume":"26 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142268708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
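The random-power construction in the abstract above is easy to sketch numerically: when the effect is drawn from a design prior, the frequentist power becomes a random variable, and PoS is its expectation. The one-sided z-test, the prior N(0.3, 0.2²), and the sample size below are illustrative assumptions of ours, not the paper's setup; the quantiles show the spread that the mean alone hides.

```python
import math
import numpy as np

def norm_cdf(x):
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_z(theta, n, sigma=1.0):
    # Frequentist power of the one-sided z-test of H0: theta <= 0
    # at level alpha = 0.025 (z_{0.975} hard-coded below).
    z_alpha = 1.959963984540054
    return 1.0 - norm_cdf(z_alpha - theta * math.sqrt(n) / sigma)

# Hybrid view: theta is random (design prior), so power is a random variable.
rng = np.random.default_rng(0)
theta_draws = rng.normal(0.3, 0.2, size=20_000)   # illustrative design prior
random_power = np.array([power_z(t, n=100) for t in theta_draws])
pos = random_power.mean()                          # PoS = E[random power]
q10, q90 = np.quantile(random_power, [0.1, 0.9])   # spread the mean hides
```

With these numbers the random power ranges from near the significance level to near 1, so summarizing it by its mean alone discards most of the pre-experimental picture.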
{"title":"The cost of sequential adaptation and the lower bound for mean squared error","authors":"Sergey Tarima, Nancy Flournoy","doi":"10.1007/s00362-024-01565-x","DOIUrl":"https://doi.org/10.1007/s00362-024-01565-x","url":null,"abstract":"<p>Informative interim adaptations lead to random sample sizes. The random sample size becomes a component of the sufficient statistic, and estimation based solely on observed samples or on the likelihood function does not use all available statistical evidence. The total Fisher Information (FI) is decomposed into the design FI and a conditional-on-design FI. The FI unspent by a design’s informative interim adaptation decomposes further into a weighted linear combination of FIs conditional on stopping decisions. These components are then used to determine the new, lower mean squared error (MSE) in post-adaptation estimation, because the Cramér–Rao lower bound (1945, 1946) and its sequential version suggested by Wolfowitz (Ann Math Stat 18(2):215–230, 1947) for non-informative stopping are not applicable to post-informative-adaptation estimation. In addition, we show that the proposed lower bound on the MSE is attained by the maximum likelihood estimators in designs with informative adaptations when the data come from a one-parameter exponential family. Theoretical results are illustrated with simple normal samples collected according to a two-stage design with a possibility of early stopping.</p>","PeriodicalId":51166,"journal":{"name":"Statistical Papers","volume":"207 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142268706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
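The closing illustration (normal samples under a two-stage design with possible early stopping) can be mimicked with a small simulation. The stage sizes and stopping threshold below are arbitrary choices of ours, not the paper's; for these values the empirical MSE of the sample-mean MLE exceeds the fixed-sample Cramér–Rao bound 1/(n1+n2), which is the phenomenon the abstract addresses.

```python
import numpy as np

def two_stage_mse(theta, n1=10, n2=10, c=0.5, reps=20_000, seed=1):
    # Empirical MSE of the sample mean (the normal-mean MLE) under a
    # two-stage design: observe n1 points, stop early if |mean| > c,
    # otherwise observe n2 more and average all n1 + n2 points.
    rng = np.random.default_rng(seed)
    est = np.empty(reps)
    for i in range(reps):
        x1 = rng.normal(theta, 1.0, n1)
        if abs(x1.mean()) > c:                 # informative early stopping
            est[i] = x1.mean()
        else:
            x2 = rng.normal(theta, 1.0, n2)
            est[i] = (x1.sum() + x2.sum()) / (n1 + n2)
    return np.mean((est - theta) ** 2)

mse = two_stage_mse(0.0)   # exceeds 1/(n1+n2) = 0.05 for these settings
```

The fixed-sample Cramér–Rao bound for 20 observations is 0.05, so the excess of the simulated MSE over 0.05 is the "cost of sequential adaptation" in this toy example.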
{"title":"Nested strong orthogonal arrays","authors":"Chunwei Zheng, Wenlong Li, Jian-Feng Yang","doi":"10.1007/s00362-024-01609-2","DOIUrl":"https://doi.org/10.1007/s00362-024-01609-2","url":null,"abstract":"<p>Nested space-filling designs are popular for conducting multiple computer experiments with different levels of accuracy. Strong orthogonal arrays (SOAs) are a special type of space-filling designs which possess attractive low-dimensional stratifications. Combining these two kinds of designs, we propose a new type of design called a nested strong orthogonal array. Such a design is a special nested space-filling design that consists of two layers, i.e., the large SOA and the small SOA, where they enjoy different strengths, and the small one is nested in the large one. The proposed construction method is easy to use, capable of accommodating a larger number of columns, and the resulting designs possess better stratifications than the existing nested space-filling designs in two dimensions. The construction method is based on regular second order saturated designs and nonregular designs. Some comparisons with the existing nested space-filling designs are given to show the usefulness of the proposed designs.</p>","PeriodicalId":51166,"journal":{"name":"Statistical Papers","volume":"16 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142251181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tests for time-varying coefficient spatial autoregressive panel data model with fixed effects","authors":"Lingling Tian, Yunan Su, Chuanhua Wei","doi":"10.1007/s00362-024-01607-4","DOIUrl":"https://doi.org/10.1007/s00362-024-01607-4","url":null,"abstract":"<p>As an extension of the spatial autoregressive panel data model and the time-varying coefficient panel data model, the time-varying coefficient spatial autoregressive panel data model is useful in the analysis of spatial panel data. While research has addressed the estimation problem of this model, less attention has been given to hypothesis tests. This paper studies two tests for this semiparametric spatial panel data model: one considers the existence of the spatial lag term, and the other determines whether some time-varying coefficients are constant. We employ the profile generalized likelihood ratio test procedure to construct the corresponding test statistics, and a residual-based bootstrap procedure is used to derive the p-values of the tests. Simulations are conducted to evaluate the performance of the proposed test methods; the results show that the proposed methods have good finite-sample properties. Finally, we apply the proposed test methods to the provincial carbon emission data of China. Our findings suggest that the partially linear time-varying coefficient spatial autoregressive panel data model provides a better fit for the carbon emission data.</p>","PeriodicalId":51166,"journal":{"name":"Statistical Papers","volume":"167 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142251182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the consistency of supervised learning with missing values","authors":"Julie Josse, Jacob M. Chen, Nicolas Prost, Gaël Varoquaux, Erwan Scornet","doi":"10.1007/s00362-024-01550-4","DOIUrl":"https://doi.org/10.1007/s00362-024-01550-4","url":null,"abstract":"<p>In many application settings, data have missing entries, which makes subsequent analyses challenging. An abundant literature addresses missing values in an inferential framework, aiming at estimating parameters and their variance from incomplete tables. Here, we consider supervised-learning settings: predicting a target when missing values appear in both training and test data. We first rewrite classic missing-values results for this setting. We then show the consistency of two approaches, test-time multiple imputation and single imputation in prediction. A striking result is that the widely used method of imputing with a constant prior to learning is consistent when missing values are not informative. This contrasts with inferential settings, where mean imputation is frowned upon as it distorts the distribution of the data. The consistency of such a popular simple approach is important in practice. Finally, to contrast procedures based on imputation prior to learning with procedures that optimize the missing-value handling for prediction, we consider decision trees. Indeed, decision trees are among the few methods that can tackle empirical risk minimization with missing values, due to their ability to handle the half-discrete nature of incomplete variables. After empirically comparing different missing-values strategies in trees, we recommend using the “missing incorporated in attribute” method as it can handle both non-informative and informative missing values.</p>","PeriodicalId":51166,"journal":{"name":"Statistical Papers","volume":"15 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
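The two strategies contrasted in the abstract above can be sketched in a few lines: constant imputation before learning, and a rough stand-in for "missing incorporated in attribute". Note the caveat: true MIA routes missing values inside tree splits; appending 0/1 missingness indicators, as below, is only a common practical surrogate that likewise lets a tree split on missingness itself. Function names are ours.

```python
import numpy as np

def impute_constant(X, value=0.0):
    # Constant imputation before learning; per the abstract, this is
    # consistent for prediction when missingness is not informative.
    Xc = X.copy()
    Xc[np.isnan(Xc)] = value
    return Xc

def with_missingness_indicators(X):
    # Surrogate for "missing incorporated in attribute": append one 0/1
    # indicator column per feature, so a downstream tree can split both
    # on the imputed value and on the fact that it was missing.
    return np.hstack([impute_constant(X), np.isnan(X).astype(float)])
```

Feeding the augmented matrix to any tree learner preserves the missingness signal that plain imputation would erase, which matters precisely in the informative-missingness case the authors highlight.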
{"title":"Maximum likelihood estimation for left-truncated log-logistic distributions with a given truncation point","authors":"Markus Kreer, Ayşe Kızılersü, Jake Guscott, Lukas Christopher Schmitz, Anthony W. Thomas","doi":"10.1007/s00362-024-01603-8","DOIUrl":"https://doi.org/10.1007/s00362-024-01603-8","url":null,"abstract":"<p>For a sample <span>\(X_1, X_2,\ldots X_N\)</span> of independent identically distributed copies of a log-logistically distributed random variable <i>X</i>, maximum likelihood estimation is analysed in detail when a left-truncation point <span>\(x_L>0\)</span> is introduced. Due to scaling properties it is sufficient to investigate the case <span>\(x_L=1\)</span>. Here the corresponding maximum likelihood equations for a normalised sample (i.e. a sample divided by <span>\(x_L\)</span>) do not always possess a solution. A simple criterion guarantees the existence of a solution: let <span>\(\mathbb{E}(\cdot)\)</span> denote the expectation induced by the normalised sample, and denote by <span>\(\beta_0=\mathbb{E}(\ln X)^{-1}\)</span> the inverse of the expectation of the logarithm of the sampled random variable <i>X</i> (which is greater than <span>\(x_L=1\)</span>). If this value <span>\(\beta_0\)</span> is greater than a certain positive number <span>\(\beta_C\)</span>, then a solution of the maximum likelihood equations exists. Here the number <span>\(\beta_C\)</span> is the unique solution of the moment equation <span>\(\mathbb{E}(X^{-\beta_C})=\frac{1}{2}\)</span>. In the case of existence a profile likelihood function can be constructed, and the optimisation problem is reduced to one dimension, leading to a robust numerical algorithm. When the maximum likelihood equations do not admit a solution for certain data samples, it is shown that the Pareto distribution is the <span>\(L^1\)</span>-limit of the degenerate left-truncated log-logistic distribution, where <span>\(L^1(\mathbb{R}^+)\)</span> is the usual Banach space of functions whose absolute value is Lebesgue-integrable. A large-sample analysis showing consistency and asymptotic normality complements our analysis. Finally, two applications to real-world data are presented.</p>","PeriodicalId":51166,"journal":{"name":"Statistical Papers","volume":"4 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
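The existence criterion in the abstract above is directly checkable on data: solve the empirical moment equation E(X^(-β_C)) = 1/2 by bisection (for a normalised sample with all entries above 1, the left side is strictly decreasing in β, from 1 towards 0, so the root is unique) and compare β_0 = 1/mean(ln X) against it. This is a sketch of the criterion only, with our own function names, not the authors' estimation code.

```python
import numpy as np

def beta_c(x, lo=1e-8, hi=1e6, tol=1e-10):
    # Solve the empirical moment equation mean(x**(-beta)) = 1/2 by bisection.
    # For a normalised sample (all entries > 1) the left side decreases
    # monotonically in beta, so the root is unique.
    x = np.asarray(x, dtype=float)
    assert np.all(x > 1.0), "sample must be normalised by the truncation point"
    f = lambda b: np.mean(x ** (-b)) - 0.5
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mle_exists(x):
    # Existence criterion from the abstract: beta_0 = 1/mean(ln x) > beta_C.
    x = np.asarray(x, dtype=float)
    return 1.0 / np.mean(np.log(x)) > beta_c(x)
```

For instance, a degenerate sample of 2's gives β_C = 1 (since 2^(-1) = 1/2) and β_0 = 1/ln 2 ≈ 1.44, so the criterion is satisfied.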
{"title":"Confidence bounds for compound Poisson process","authors":"Marek Skarupski, Qinhao Wu","doi":"10.1007/s00362-024-01604-7","DOIUrl":"https://doi.org/10.1007/s00362-024-01604-7","url":null,"abstract":"<p>The compound Poisson process (CPP) is a common mathematical model for describing many phenomena in medicine, reliability theory and risk theory. However, in the case of low-frequency phenomena, we are often unable to collect a sufficiently large database to conduct analysis. In this article, we focus on methods for determining confidence intervals for the rate of the CPP when the sample size is small. Based on the properties of process parameter estimators, we propose a new method for constructing such intervals and compare it with other known approaches. In numerical simulations, we use synthetic data from several continuous and discrete distributions. The case of a CPP in which the rewards come from an exponential distribution is discussed separately. Recommendations on how to use each method to obtain a more precise confidence interval are given. All simulations were performed in R version 4.2.1.</p>","PeriodicalId":51166,"journal":{"name":"Statistical Papers","volume":"17 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
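The setting of the abstract above can be sketched as follows: simulate a compound Poisson process (a Poisson number of events, each carrying an i.i.d. reward, e.g. exponential as in the case the authors treat separately) and form the textbook large-sample Wald interval for the rate. The paper's proposed small-sample intervals are not reproduced here; the Wald interval is shown precisely because it is the kind of interval that degrades when few events are observed.

```python
import numpy as np

def simulate_cpp(lam, t, reward_sampler, rng):
    # One realisation of a compound Poisson process on [0, t]:
    # N ~ Poisson(lam * t) events, each carrying an i.i.d. reward.
    n = rng.poisson(lam * t)
    total = reward_sampler(rng, n).sum() if n > 0 else 0.0
    return n, total

def wald_ci_rate(n_events, t, z=1.96):
    # Textbook large-sample 95% interval for the Poisson rate lambda;
    # can be unreliable when n_events is small, which motivates the paper.
    lam_hat = n_events / t
    half = z * np.sqrt(lam_hat / t)
    return max(lam_hat - half, 0.0), lam_hat + half

# Exponential rewards, as in the case the abstract discusses separately.
rng = np.random.default_rng(42)
n, total = simulate_cpp(3.0, 50.0, lambda r, k: r.exponential(2.0, k), rng)
lo, hi = wald_ci_rate(n, 50.0)
```

Note the degenerate behaviour with zero observed events: the Wald interval collapses to a point at 0, one symptom of the small-sample problem the paper targets.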
{"title":"Confidence intervals for overall response rate difference in the sequential parallel comparison design","authors":"Guogen Shan, Xinlin Lu, Yahui Zhang, Samuel S. Wu","doi":"10.1007/s00362-024-01606-5","DOIUrl":"https://doi.org/10.1007/s00362-024-01606-5","url":null,"abstract":"<p>High placebo responses could significantly reduce the treatment effect in a parallel randomized trial. To combat that challenge, several approaches were developed, including the sequential parallel comparison design (SPCD), which was shown to increase the statistical power as compared to the traditional randomized trial. A linear combination of the response rate differences from the two phases of the SPCD is commonly used to measure the overall treatment effect size. The traditional approach to calculate the confidence interval for the overall rate difference is based on the delta method using the variance–covariance matrix of all outcomes. As outcomes from a multinomial distribution are correlated, we suggest utilizing a constrained variance–covariance matrix in the delta method. Having observed anti-conservative coverage of the asymptotic intervals, we further propose using importance sampling to develop accurate intervals. Simulation studies show that accurate intervals have better coverage probabilities than the others, with similar interval widths. Two real trials to treat major depressive disorder are used to illustrate the application of the proposed intervals.</p>","PeriodicalId":51166,"journal":{"name":"Statistical Papers","volume":"39 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
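The "traditional approach" the abstract above improves upon can be sketched generically: a Wald/delta-method interval for the weighted combination w·d1 + (1−w)·d2 of the two phase-wise rate differences, given their variances and covariance. This is the baseline only; the weight, the inputs, and the zero default covariance are illustrative, and neither the constrained covariance matrix nor the importance-sampling intervals the authors propose are reproduced here.

```python
import math

def spcd_overall_ci(d1, se1, d2, se2, cov=0.0, w=0.5, z=1.96):
    # Delta-method (Wald) interval for the SPCD overall effect
    # w*d1 + (1-w)*d2: a linear combination of the phase-wise response
    # rate differences, with variance from the (co)variances supplied.
    est = w * d1 + (1.0 - w) * d2
    var = (w * se1) ** 2 + ((1.0 - w) * se2) ** 2 + 2.0 * w * (1.0 - w) * cov
    half = z * math.sqrt(var)
    return est - half, est + half
```

A positive covariance between the phase estimates, as induced by the shared multinomial outcomes the abstract mentions, widens the interval relative to treating the phases as independent.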
{"title":"Bayesian and frequentist inference derived from the maximum entropy principle with applications to propagating uncertainty about statistical methods","authors":"David R. Bickel","doi":"10.1007/s00362-024-01597-3","DOIUrl":"https://doi.org/10.1007/s00362-024-01597-3","url":null,"abstract":"<p>Using statistical methods to analyze data requires considering the data set to be randomly generated from a probability distribution that is unknown but idealized according to a mathematical model consisting of constraints, assumptions about the distribution. Since the choice of such a model is up to the scientist, there is an understandable bias toward choosing models that make scientific conclusions appear more certain than they really are. There is a similar bias in the scientist’s choice of whether to use Bayesian or frequentist methods. This article provides tools to mitigate both of those biases on the basis of a principle of information theory. It is found that the same principle unifies Bayesianism with the fiducial version of frequentism. The principle arguably overcomes not only the main objections against fiducial inference but also the main Bayesian objection against the use of confidence intervals.</p>","PeriodicalId":51166,"journal":{"name":"Statistical Papers","volume":"46 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reduced bias estimation of the log odds ratio","authors":"Asma Saleh","doi":"10.1007/s00362-024-01593-7","DOIUrl":"https://doi.org/10.1007/s00362-024-01593-7","url":null,"abstract":"<p>Analysis of binary matched pairs data is problematic due to infinite maximum likelihood estimates of the log odds ratio and potentially biased estimates, especially for small samples. We propose a penalised version of the log-likelihood function based on adjusted responses which always results in a finite estimator of the log odds ratio. The probability limit of the adjusted log-likelihood estimator is derived, and it is shown that in certain settings the maximum likelihood, conditional and modified profile log-likelihood estimators drop out as special cases of the former estimator. We apply indirect inference to the adjusted log-likelihood estimator. It is shown, through a complete enumeration study, that the indirect inference estimator is competitive in terms of bias and variance in comparison to the maximum likelihood, conditional, modified profile log-likelihood and Firth’s penalised log-likelihood estimators.</p>","PeriodicalId":51166,"journal":{"name":"Statistical Papers","volume":"6 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
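The infiniteness problem in the abstract above is easy to exhibit: the conditional estimate of the log odds ratio for binary matched pairs depends only on the discordant counts n10 and n01, and log(n10/n01) blows up whenever one count is zero. Below is the classic add-1/2 correction, shown only to illustrate the problem and the simplest fix; it is not the paper's adjusted-responses penalty or its indirect-inference estimator, and the function name is ours.

```python
import math

def log_or_matched(n10, n01, adjust=0.5):
    # Conditional estimate of the log odds ratio from binary matched pairs,
    # based only on the discordant counts n10 and n01. The raw estimate
    # log(n10/n01) is infinite when a count is zero; adding `adjust` to
    # each count (the classic 1/2 correction) keeps it finite.
    return math.log((n10 + adjust) / (n01 + adjust))
```

With adjust=0 and n01=0 the raw estimate is infinite, which is exactly the failure of plain maximum likelihood the paper's penalisation is designed to avoid.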