{"title":"Disseminating massive frequency tables by masking aggregated cell frequencies","authors":"Min-Jeong Park, Hang J. Kim, Sunghoon Kwon","doi":"10.1007/s42952-023-00248-x","DOIUrl":"https://doi.org/10.1007/s42952-023-00248-x","url":null,"abstract":"<p>We propose a confidential approach for disseminating frequency tables constructed for any combination of key variables in the given microdata, including those of hierarchical key variables. The system generates all possible frequency tables by either marginalizing or aggregating fully joint frequency tables of key variables while protecting the original cells with low frequencies through two masking steps: the small cell adjustments for joint tables followed by the proposed algorithm called information loss bounded aggregation for aggregated cells. The two-step approach is designed to control both disclosure risk and information loss by ensuring the <i>k</i>-anonymity of original cells with small frequencies while keeping the loss within a bounded limit.</p>","PeriodicalId":49992,"journal":{"name":"Journal of the Korean Statistical Society","volume":"13 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2024-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139647272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Use of ridge calibration method in predicting election results","authors":"Yohan Lim, Mingue Park","doi":"10.1007/s42952-023-00254-z","DOIUrl":"https://doi.org/10.1007/s42952-023-00254-z","url":null,"abstract":"<p>Ridge calibration is a penalized method used in survey sampling to reduce the variability of the final set of weights by relaxing the linear restrictions. We proposed a method for selecting the penalty parameter that minimizes the estimated mean squared error of the mean estimator when estimated auxiliary information is used. We showed that the proposed estimator is asymptotically equivalent to the generalized regression estimator. A simple simulation study shows that our estimator has the smaller MSE compared to the traditional calibration ones. We applied our method to predict election result using National Barometer Survey and Korea Social Integration Survey.</p>","PeriodicalId":49992,"journal":{"name":"Journal of the Korean Statistical Society","volume":"7 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139558036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Asymptotic of the number of false change points of the fused lasso signal approximator","authors":"Donghyeon Yu, Johan Lim, Won Son","doi":"10.1007/s42952-023-00250-3","DOIUrl":"https://doi.org/10.1007/s42952-023-00250-3","url":null,"abstract":"<p>It is well-known that the fused lasso signal approximator (FLSA) is inconsistent in change point detection under the presence of staircase blocks in true mean values. The existing studies focus on modifying the FLSA model to remedy this inconsistency. However, the inconsistency of the FLSA does not severely degrade the performance in change point detection if the FLSA can identify all true change points and the estimated change points set is sufficiently close to the true change points set. In this study, we investigate some asymptotic properties of the FLSA under the assumption of the noise level <span>(sigma _n = o(n log n))</span>. To be specific, we show that all the falsely segmented blocks are sub-blocks of true staircase blocks if the noise level is sufficiently low and a tuning parameter is chosen appropriately. In addition, each false change point of the optimal FLSA estimate can be associated with a vertex of a concave majorant or a convex minorant of a discrete Brownian bridge. Based on these results, we derive an asymptotic distribution of the number of false change points and provide numerical examples supporting the theoretical results.</p>","PeriodicalId":49992,"journal":{"name":"Journal of the Korean Statistical Society","volume":"10 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139499555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Large sample properties of maximum likelihood estimator using moving extremes ranked set sampling","authors":"Han Wang, Wangxue Chen, Bingjie Li","doi":"10.1007/s42952-023-00251-2","DOIUrl":"https://doi.org/10.1007/s42952-023-00251-2","url":null,"abstract":"<p>In this paper, we investigate the maximum likelihood estimator (MLE) for the parameter <span>(theta)</span> in the probability density function <span>(f(x;theta ))</span>. We specifically focus on the application of moving extremes ranked set sampling (MERSS) and analyze its properties in large samples. We establish the existence and uniqueness of the MLE for two common distributions when utilizing MERSS. Our theoretical analysis demonstrates that the MLE obtained through MERSS is, at the very least, as efficient as the MLE obtained through simple random sampling with an equivalent sample size. To substantiate these theoretical findings, we conduct numerical experiments. Furthermore, we explore the implications of imperfect ranking and provide a practical illustration by applying our approach to a real dataset.</p>","PeriodicalId":49992,"journal":{"name":"Journal of the Korean Statistical Society","volume":"83 3 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2024-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139463672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Logistic regression models for elastic shape of curves based on tangent representations","authors":"Tae-Young Heo, Joon Myoung Lee, Myung Hun Woo, Hyeongseok Lee, Min Ho Cho","doi":"10.1007/s42952-023-00252-1","DOIUrl":"https://doi.org/10.1007/s42952-023-00252-1","url":null,"abstract":"<p>Shape analysis is widely used in many application areas such as computer vision, medical and biological studies. One challenge to analyze the shape of an object in an image is its invariant property to shape-preserving transformations. To measure the distance or dissimilarity between two different shapes, we worked with the square-root velocity function (SRVF) representation and the elastic metric. Since shapes are inherently high-dimensional in a nonlinear space, we adopted a tangent space at the mean shape and a few principal components (PCs) on the linearized space. We proposed classification methods based on logistic regression using these PCs and tangent vectors with the elastic net penalty. We then compared its performance with other model-based methods for shape classification in application to shape of algae in watersheds as well as simulated data generated by the mixture of von Mises-Fisher distributions.</p>","PeriodicalId":49992,"journal":{"name":"Journal of the Korean Statistical Society","volume":"1 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139463670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Byzantine-resilient decentralized network learning","authors":"Yaohong Yang, Lei Wang","doi":"10.1007/s42952-023-00249-w","DOIUrl":"https://doi.org/10.1007/s42952-023-00249-w","url":null,"abstract":"<p>Decentralized federated learning based on fully normal nodes has drawn attention in modern statistical learning. However, due to data corruption, device malfunctioning, malicious attacks and some other unexpected behaviors, not all nodes can obey the estimation process and the existing decentralized federated learning methods may fail. An unknown number of abnormal nodes, called Byzantine nodes, arbitrarily deviate from their intended behaviors, send wrong messages to their neighbors and affect all honest nodes across the entire network through passing polluted messages. In this paper, we focus on decentralized federated learning in the presence of Byzantine attacks and then propose a unified Byzantine-resilient framework based on the network gradient descent and several robust aggregation rules. Theoretically, the convergence of the proposed algorithm is guaranteed under some weakly balanced conditions of network structure. The finite-sample performance is studied through simulations under different network topologies and various Byzantine attacks. An application to Communities and Crime Data is also presented.</p>","PeriodicalId":49992,"journal":{"name":"Journal of the Korean Statistical Society","volume":"62 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139414140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sequential online monitoring for autoregressive time series of counts","authors":"","doi":"10.1007/s42952-023-00247-y","DOIUrl":"https://doi.org/10.1007/s42952-023-00247-y","url":null,"abstract":"<h3>Abstract</h3> <p>This study considers the online monitoring problem for detecting the parameter change in time series of counts. For this task, we construct a monitoring process based on the residuals obtained from integer-valued generalized autoregressive conditional heteroscedastic (INGARCH) models. We consider this problem within a more general framework using martingale difference sequences as the monitoring problem on GARCH-type processes based on the residuals or score vectors can be viewed as a special case of the monitoring problems on martingale differences. The limiting behavior of the stopping rule is investigated in this general set-up and is applied to the INGARCH processes. To assess the performance of our method, we conduct Monte Carlo simulations. A real data analysis is also provided for illustration. Our findings in this empirical study demonstrate the validity of the proposed monitoring process.</p>","PeriodicalId":49992,"journal":{"name":"Journal of the Korean Statistical Society","volume":"50 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2024-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139079722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Return prediction by machine learning for the Korean stock market","authors":"Wonwoo Choi, Seongho Jang, Sanghee Kim, Chayoung Park, Sunyoung Park, Seongjoo Song","doi":"10.1007/s42952-023-00245-0","DOIUrl":"https://doi.org/10.1007/s42952-023-00245-0","url":null,"abstract":"<p>In this study, we aim to forecast monthly stock returns and analyze factors influencing stock prices in the Korean stock market. To find a model that maximizes the cumulative return of the portfolio of stocks with high predicted returns, we use machine learning models such as linear models, tree-based models, neural networks, and learning to rank algorithms. We employ a novel validation metric which we call the Cumulative net Return of a Portfolio with top 10% predicted return (CRP10) for tuning hyperparameters to increase the cumulative return of the selected portfolio. CRP10 tends to provide higher cumulative returns compared to out-of-sample R-squared as a validation metric with the data that we used. Our findings indicate that Light Gradient Boosting Machine (LightGBM) and Gradient Boosted Regression Trees (GBRT) demonstrate better performance than other models when we apply a single model for the entire test period. We also take the strategy of changing the model on a yearly basis by assessing the best model annually and observed that it did not outperform the approach of using a single model such as LightGBM or GBRT for the entire period.</p>","PeriodicalId":49992,"journal":{"name":"Journal of the Korean Statistical Society","volume":"65 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138816752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatially integrated estimator of finite population total by integrating data from two independent surveys using spatial information","authors":"Nobin Chandra Paul, Anil Rai, Tauqueer Ahmad, Ankur Biswas, Prachi Misra Sahoo","doi":"10.1007/s42952-023-00244-1","DOIUrl":"https://doi.org/10.1007/s42952-023-00244-1","url":null,"abstract":"<p>A major goal of survey sampling is finite population inference. In recent years, large-scale survey programs have encountered many practical challenges which include higher data collection cost, increasing non-response rate, increasing demand for disaggregated level statistics and desire for timely estimates. Data integration is a new field of research that provides a timely solution to these above-mentioned challenges by integrating data from multiple surveys. Now, it is possible to develop a framework that can efficiently combine information from several surveys to obtain more precise estimates of population parameters. In many surveys, parameters of interest are often spatial in nature, which means, the relationship between the study variable and covariates varies across all locations in the study area and this situation is referred as spatial non-stationarity. Hence, there is a need of a sampling methodology that can efficiently tackle this spatial non-stationarity problem and can be able to integrate this spatially referenced data to get more detailed information. In this study, a Geographically Weighted Spatially Integrated (GWSI) estimator of finite population total was developed by integrating data from two independent surveys using spatial information. The statistical properties of the proposed spatially integrated estimator were then evaluated empirically through a spatial simulation study. Three different spatial populations were generated having high spatial autocorrelation. The proposed spatially integrated estimator performed better than usual design-based estimator under all three populations. Furthermore, a Spatial Proportionate Bootstrap (SPB) method was developed for variance estimation of the proposed spatially integrated estimator.</p>","PeriodicalId":49992,"journal":{"name":"Journal of the Korean Statistical Society","volume":"15 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138744869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Statistical integration of allele frequencies from several organizations","authors":"Su Jin Jeong, Hyo-jung Lee, Soong Deok Lee, Su Jeong Park, Seung Hwan Lee, Jae Won Lee","doi":"10.1007/s42952-023-00243-2","DOIUrl":"https://doi.org/10.1007/s42952-023-00243-2","url":null,"abstract":"<p>Genetic evidence, especially evidence based on short tandem repeats, is of paramount importance for human identification in forensic inferences. In recent years, the identification of kinship using DNA evidence has drawn much attention in various fields. In particular, it is employed, using a criminal database, to confirm blood relations in forensics. The interpretation of the likelihood ratio when identifying an individual or a relationship depends on the allele frequencies that are used, and thus, it is crucial to obtain an accurate estimate of allele frequency. Each organization such as Supreme Prosecutors’ Office and Korean National Police Agency in Korea provides different statistical interpretations due to differing estimations of the allele frequency, which can lead to confusion in forensic identification. Therefore, it is very important to estimate allele frequency accurately, and doing so requires a certain amount of information. However, simply using a weighted average for each allele frequency may not be sufficient to determine biological independence. In this study, we propose a new statistical method for estimating allele frequency by integrating the data obtained from several organizations, and we analyze biological independence and differences in allele frequency relative to the weighted average of allele frequencies in various subgroups. Finally, our proposed method is illustrated using real data from 576 Korean individuals.</p>","PeriodicalId":49992,"journal":{"name":"Journal of the Korean Statistical Society","volume":"21 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138717017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}