{"title":"The Just-About-Right Pilot Sample Size to Control the Error Margin","authors":"Scholastica C. Obodo, D. Toher, Paul White","doi":"10.5539/ijsp.v12n3p1","DOIUrl":"https://doi.org/10.5539/ijsp.v12n3p1","url":null,"abstract":"In practice, the required sample size for a two-arm randomised controlled trial cannot always be determined pre-study with great accuracy. This lack of accuracy has economic, ethical and scientific implications. The sample size for a pilot study is an important consideration in helping the decision making for the sample size of a follow-on trial. Consideration of under- and over-estimation of the sample size results in the idea of a Just-About-Right (JAR) sample size. For studies involving a minimally clinical important difference (MCID) we present the pilot sample sizes to meet investigator desired JAR considerations.","PeriodicalId":89781,"journal":{"name":"International journal of statistics and probability","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43683933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Small Area Estimation of Economic Security: A Proposal","authors":"Mario Marino, Silvia Pacei","doi":"10.5539/ijsp.v12n3p8","DOIUrl":"https://doi.org/10.5539/ijsp.v12n3p8","url":null,"abstract":"The objective of this work is to propose a small area estimation strategy for an economic security indicator. In the last decade the interest for the measurement of economic security or insecurity has grown constantly, especially since the financial crisis of 2008 and the pandemic period. In this work, economic security is measures through a longitudinal indicator that compares levels of equivalized household income over time. To solve a small area estimation problem, due to possible sample sizes too low in some areas, a small area estimation strategy is suggested to obtain reliable estimates of the indicator of interest. We consider small area models specified at area level. Besides the basic Fay-Herriot area-level model, we propose to consider some longitudinal extensions, including time-specific random effects following an AR(1) process or an MA(1) process. A simulation study based on EU-SILC data shows that all the small area models considered provide a significant efficiency gain with respect to the Horvitz-Thompson estimator, especially the small area model with MA(1) specification for random effects.","PeriodicalId":89781,"journal":{"name":"International journal of statistics and probability","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42845404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reviewer Acknowledgements for International Journal of Statistics and Probability, Vol. 12, No. 2","authors":"Wendy Smith","doi":"10.5539/ijsp.v12n2p49","DOIUrl":"https://doi.org/10.5539/ijsp.v12n2p49","url":null,"abstract":"Reviewer Acknowledgements for International Journal of Statistics and Probability, Vol. 12, No. 2","PeriodicalId":89781,"journal":{"name":"International journal of statistics and probability","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47722284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Importance of Type II Error in Hypothesis Testing","authors":"I. Jiménez-Gamero, M. Analla","doi":"10.5539/ijsp.v12n2p42","DOIUrl":"https://doi.org/10.5539/ijsp.v12n2p42","url":null,"abstract":"Statistical tests of significance theoretically deal with two mutually exclusive hypotheses: the null and the alternative. However, at least in biomedical assays, only the null hypothesis is taken into account through type I error evaluation. But, basing these tests solely on type I error has two drawbacks: first, the probability limits (5%, 1% and 0.1%) arbitrarily set to the significance levels have no scientific justification. Second, acceptation of the null hypothesis is just a matter of chance, as it is mainly conditioned by the sample size due to its direct effect on the power of the test. In this sense, while the alternative hypothesis should be accepted due to its higher likelihood, the inference based on type I error alone would lead erroneously to accepting the null one. A numerical example illustrates how considering type I error alone, a same difference was declared non-significant first but turned out to significant thereafter when the sample size was increased. Therefore, the same null hypothesis was initially accepted and rejected afterwards. However when type II error was included in the test, the same decision was adopted no matter what the sample size was. This was possible through a reformulation of the alternative hypothesis. On the other hand, type II error may, in many cases have more far-reaching consequences than type I, and then should never be ignored, especially in assays dealing with human health, food, toxicity, etc.","PeriodicalId":89781,"journal":{"name":"International journal of statistics and probability","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44687615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reliability of Meta-Analysis Research Claims for Gas Stove Cooking−Childhood Respiratory Health Associations","authors":"W. Kindzierski, S. Young, John Dunn","doi":"10.5539/ijsp.v12n3p40","DOIUrl":"https://doi.org/10.5539/ijsp.v12n3p40","url":null,"abstract":"Odds ratios or p-values from individual observational studies can be combined to examine a common cause−effect research question in meta-analysis. However, reliability of individual studies used in meta-analysis should not be taken for granted as claimed cause−effect associations may not reproduce. An evaluation was undertaken on meta-analysis of base papers examining gas stove cooking (including nitrogen dioxide, NO2) and childhood asthma and wheeze associations. Numbers of hypotheses tested in 14 of 27 base papers (52%) used in meta-analysis of asthma and wheeze were counted. Test statistics used in the meta-analysis (40 odds ratios with 95% confidence limits) were converted to p-values and presented in p-value plots. The median (interquartile range) of possible numbers of hypotheses tested in the 14 base papers was 15,360 (6,336−49,152). None of the 14 base papers made mention of correcting for multiple testing, nor was any explanation offered if no multiple testing procedure was used. Given large numbers of hypotheses available, statistics drawn from base papers and used for meta-analysis are likely biased. Even so, p-value plots for gas stove−current asthma and gas stove−current wheeze associations show randomness consistent with unproven gas stove harms. The meta-analysis fails to provide reliable evidence for public health policy making on gas stove harms to children in North America. NO2 is not established as a biologically plausible explanation of a causal link with childhood asthma. Biases – multiple testing and p-hacking – cannot be ruled out as explanation for a gas stove−current asthma association claim. Selective reporting is another bias in published literature of gas stove–childhood respiratory health studies.","PeriodicalId":89781,"journal":{"name":"International journal of statistics and probability","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45208171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Olsavs: A New Algorithm For Model Selection","authors":"Nicklaus T. Hicks, Hasthika S. Rupasinghe Arachchige Don","doi":"10.5539/ijsp.v12n2p28","DOIUrl":"https://doi.org/10.5539/ijsp.v12n2p28","url":null,"abstract":"The shrinkage methods such as Lasso and Relaxed Lasso introduce some bias in order to reduce the variance of the regression coefficients in multiple linear regression models. One way to reduce bias after shrinkage of the coefficients would be to apply ordinary least squares to the subset of predictors selected by the shrinkage method used. This work extensively investigated this idea and developed a new variable selection algorithm. The authors named this technique OLSAVS (Ordinary Least Squares After Variable Selection). The OLSAVS algorithm was implemented in R. Simulations were used to illustrate that the new method is able to produce better predictions with less bias for various error distributions. The OLSAVS method was compared with a few widely used shrinkage methods in terms of their achieved test root mean square error and bias.","PeriodicalId":89781,"journal":{"name":"International journal of statistics and probability","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48835147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Heteroscedastic Analog of the Wilcoxon–Mann–Whitney Test When There Is A Covariate","authors":"R. Wilcox","doi":"10.5539/ijsp.v12n2p18","DOIUrl":"https://doi.org/10.5539/ijsp.v12n2p18","url":null,"abstract":"A basic method for comparing two independent groups is in terms of the probability that a randomly sampled observation from the first group is less than a randomly sampled observation from the second group. The Wilcoxon–Mann–Whitney test is based on an estimate of this probability, but it uses an incorrect estimate of the standard error when the distributions \u0000differ. Numerous methods have been derived that are aimed at dealing with this issue. The goal here is to suggest a method for estimating this probability, given the value of a covariate. A well-known quantile regression estimator provides a way of dealing with this issue. The paper reports simulation results on how well this method performs.","PeriodicalId":89781,"journal":{"name":"International journal of statistics and probability","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42239470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint Estimation of Binomial Proportions","authors":"K. Riggs, Stephanie Weatherford","doi":"10.5539/ijsp.v12n2p8","DOIUrl":"https://doi.org/10.5539/ijsp.v12n2p8","url":null,"abstract":"Interval estimation of a binomial proportion has had a consistent presence in the statistical literature through the years. Many interval procedures have been developed for a single proportion as well as for the difference of two proportions. However, little work has been conducted on the joint estimation of two binomial proportions. In this paper, we construct four confidence regions for two binomial proportions based on three statistics: the Wald (W), adjusted Wald (W*), score (S), and likelihood ratio (LR) statistics. Once the regions have been established, we compare their coverage probabilities and average areas for different parameter and sample size configurations. For small-to-moderate sample sizes, this paper finds that the three regions based on the W*, S, and LR statistics have good coverage properties, with the score region usually having the smallest average area. Finally, we apply these four confidence regions to some real data in veterinary science and medicine for the joint estimation of important proportions.","PeriodicalId":89781,"journal":{"name":"International journal of statistics and probability","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43101367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wavelet Estimation of a Density From Observations of Almost Periodically Correlated Process Under Positive Quadrant Dependence","authors":"Moussa Koné, V. Monsan","doi":"10.5539/ijsp.v12n2p1","DOIUrl":"https://doi.org/10.5539/ijsp.v12n2p1","url":null,"abstract":"In this paper, we construct a new wavelet estimator of density for the component of a finite mixture under positive quadrant dependence. Our sample is extracted from almost periodically correlated processes. To evaluate our estimator we will determine a convergence speed from an upper bound for the mean integrated squared error (MISE). Our result is compared to the independent case which provides an optimal convergence rate.","PeriodicalId":89781,"journal":{"name":"International journal of statistics and probability","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45485606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian Predictive Inference Under Nine Methods for Incorporating Survey Weights","authors":"Lingli Yang, B. Nandram, J. Choi","doi":"10.5539/ijsp.v12n1p33","DOIUrl":"https://doi.org/10.5539/ijsp.v12n1p33","url":null,"abstract":"Sample surveys play a significant role in obtaining reliable estimators of finite population quantities, and survey weights are used to deal with selection bias and non-response bias. The main idea of this research is to compare the performance of nine methods with differently constructed survey weights, and we can use these methods for non-probability sampling after weights are estimated (e.g. quasi-randomization). The original survey weights are calibrated to the population size. In particular, the base model does not include survey weights or design weights. We use original survey weights, adjusted survey weights, trimmed survey weights, and adjusted trimmed survey weights into pseudo-likelihood function to build unnormalized or normalized posterior distributions. In this research, we focus on binary data, which occur in many different situations. \u0000A simulation study is performed and we analyze the simulated data using average posterior mean, average posterior standard deviation, average relative bias, average posterior root mean squared error, and the coverage rate of 95% credible intervals. We also performed an application on body mass index to further understand these nine methods. The results show that methods with trimmed weights are preferred than methods with untrimmed weights, and methods with adjusted weights have higher variability than methods with unadjusted weights.","PeriodicalId":89781,"journal":{"name":"International journal of statistics and probability","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45230403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}