{"title":"Combining multiple biomarkers linearly to minimize the Euclidean distance of the closest point on the receiver operating characteristic surface to the perfection corner in trichotomous settings.","authors":"Brian R Mosier, Leonidas E Bantis","doi":"10.1177/09622802241233768","DOIUrl":"10.1177/09622802241233768","url":null,"abstract":"<p><p>The performance of individual biomarkers in discriminating between two groups, typically the healthy and the diseased, may be limited. Thus, there is interest in developing statistical methodologies for biomarker combinations with the aim of improving upon the individual discriminatory performance. There is extensive literature referring to biomarker combinations under the two-class setting. However, the corresponding literature under a three-class setting is limited. In our study, we provide parametric and nonparametric methods that allow investigators to optimally combine biomarkers that seek to discriminate between three classes by minimizing the Euclidean distance from the receiver operating characteristic surface to the perfection corner. Using this Euclidean distance as the objective function allows for estimation of the optimal combination coefficients along with the optimal cutoff values for the combined score. An advantage of the proposed methods is that they can accommodate biomarker data from all three groups simultaneously, as opposed to a pairwise analysis such as the one implied by the three-class Youden index. We illustrate that the derived true classification rates exhibit narrower confidence intervals than those derived from the Youden-based approach under a parametric, flexible parametric, and nonparametric kernel-based framework. 
We evaluate our approaches through extensive simulations and apply them to real data sets that refer to liver cancer patients.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"647-668"},"PeriodicalIF":1.6,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11234871/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140040378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
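The objective function used in the abstract above can be made concrete with a small sketch. The following is a hypothetical nonparametric illustration, not the authors' estimator: given an already-combined score for three ordered groups, it grid-searches two ordered cutoffs and minimizes the Euclidean distance from the empirical (TCF1, TCF2, TCF3) point to the perfection corner (1, 1, 1) of the ROC surface.

```python
import itertools

def tcfs(low, mid, high, c1, c2):
    """Empirical true classification fractions for three ordered classes,
    given cutoffs c1 < c2 on a combined score."""
    tcf1 = sum(s <= c1 for s in low) / len(low)
    tcf2 = sum(c1 < s <= c2 for s in mid) / len(mid)
    tcf3 = sum(s > c2 for s in high) / len(high)
    return tcf1, tcf2, tcf3

def distance_to_perfection(low, mid, high, c1, c2):
    """Euclidean distance from (TCF1, TCF2, TCF3) to the corner (1, 1, 1)."""
    t1, t2, t3 = tcfs(low, mid, high, c1, c2)
    return ((1 - t1) ** 2 + (1 - t2) ** 2 + (1 - t3) ** 2) ** 0.5

def best_cutoffs(low, mid, high):
    """Grid search over ordered pairs of observed scores for the
    distance-minimizing cutoffs (c1, c2)."""
    grid = sorted(set(low) | set(mid) | set(high))
    return min(
        ((distance_to_perfection(low, mid, high, c1, c2), c1, c2)
         for c1, c2 in itertools.combinations(grid, 2)),
        key=lambda trip: trip[0],
    )
```

With well-separated groups the minimized distance is zero and the cutoffs fall between adjacent classes.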
{"title":"Extended excess hazard models for spatially dependent survival data.","authors":"André Victor Ribeiro Amaral, Francisco Javier Rubio, Manuela Quaresma, Francisco J Rodríguez-Cortés, Paula Moraga","doi":"10.1177/09622802241233767","DOIUrl":"10.1177/09622802241233767","url":null,"abstract":"<p><p>Relative survival represents the preferred framework for the analysis of population cancer survival data. The aim is to model the survival probability associated with cancer in the absence of information about the cause of death. Recent data linkage developments have allowed for incorporating the place of residence into the population cancer databases; however, modeling this spatial information has received little attention in the relative survival setting. We propose a flexible parametric class of spatial excess hazard models (along with inference tools), named \"Relative Survival Spatial General Hazard,\" that allows for the inclusion of fixed and spatial effects in both time-level and hazard-level components. We illustrate the performance of the proposed model using an extensive simulation study, and provide guidelines about the interplay of sample size, censoring, and model misspecification. We present a case study using real data from colon cancer patients in England. 
This case study illustrates how a spatial model can be used to identify geographical areas with low cancer survival, as well as how to summarize such a model through marginal survival quantities and spatial effects.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"681-701"},"PeriodicalIF":2.3,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140040380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
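The excess-hazard decomposition underlying relative survival can be sketched in a toy constant-hazard form (an illustration only; the model in the abstract above allows flexible, spatially varying hazards): the observed all-cause hazard is the population hazard plus the excess hazard due to cancer, so observed survival factorizes as S_obs = S_pop × S_excess.

```python
import math

def survival(hazard, t):
    """Survival under a constant hazard: S(t) = exp(-hazard * t)."""
    return math.exp(-hazard * t)

def observed_survival(pop_hazard, excess_hazard, t):
    """Additive hazards h_obs = h_pop + h_exc imply the relative-survival
    factorization S_obs(t) = S_pop(t) * S_exc(t)."""
    return survival(pop_hazard + excess_hazard, t)
```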
{"title":"Weight calibration in the joint modelling of medical cost and mortality.","authors":"Seong Hoon Yoon, Alain Vandal, Claudia Rivera-Rodriguez","doi":"10.1177/09622802241236935","DOIUrl":"10.1177/09622802241236935","url":null,"abstract":"<p><p>Joint modelling of longitudinal and time-to-event data is a method that recognizes the dependency between the two data types, and combines the two outcomes into a single model, which leads to more precise estimates. These models are applicable when individuals are followed over a period of time, generally to monitor the progression of a disease or a medical condition, and also when longitudinal covariates are available. Medical cost datasets are often also available in longitudinal scenarios, but these datasets usually arise from a complex sampling design rather than simple random sampling and such complex sampling design needs to be accounted for in the statistical analysis. Ignoring the sampling mechanism can lead to misleading conclusions. This article proposes a novel approach to the joint modelling of complex data by combining survey calibration with standard joint modelling. This is achieved by incorporating a new set of equations to calibrate the sampling weights for the survival model in a joint model setting. 
The proposed method is applied to data on anti-dementia medication costs and mortality in people with diagnosed dementia in New Zealand.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"728-742"},"PeriodicalIF":2.3,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11145918/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140040381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
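As a toy illustration of the calibration idea in the abstract above (a generic one-constraint ratio calibration, not the paper's joint-model calibration equations): sampling weights are rescaled so that a weighted sample total reproduces a known population total of an auxiliary variable.

```python
def ratio_calibrate(weights, x, population_total):
    """Rescale sampling weights so the weighted total of auxiliary
    variable x matches a known population total."""
    sample_total = sum(w * xi for w, xi in zip(weights, x))
    factor = population_total / sample_total
    return [w * factor for w in weights]
```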
{"title":"BOIN-ETC: A Bayesian optimal interval design considering efficacy and toxicity to identify the optimal dose combinations.","authors":"Tomoyuki Kakizume, Kentaro Takeda, Masataka Taguri, Satoshi Morita","doi":"10.1177/09622802241236936","DOIUrl":"10.1177/09622802241236936","url":null,"abstract":"<p><p>One of the primary objectives of a dose-finding trial for novel anti-cancer agent combination therapies, such as molecular targeted agents and immune-oncology therapies, is to identify optimal dose combinations that are tolerable and therapeutically beneficial for subjects in subsequent clinical trials. The goal differs from that of a dose-finding trial for traditional cytotoxic agents, in which the goal is to determine the maximum tolerated dose combinations. This paper proposes a new design, named the 'BOIN-ETC' design, to identify optimal dose combinations based on both efficacy and toxicity outcomes using the waterfall approach. The BOIN-ETC design is model-assisted, so it is expected to be robust and straightforward to implement in actual oncology dose-finding trials. These characteristics are quite valuable from a practical perspective. 
Simulation studies show that the BOIN-ETC design has advantages compared with the other approaches in the percentage of correct optimal dose combination selection and the average number of patients allocated to the optimal dose combinations across various realistic settings.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"716-727"},"PeriodicalIF":2.3,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140040377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
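BOIN-ETC builds on the BOIN family of interval designs. As background, the sketch below reproduces the escalation/de-escalation boundaries of the original BOIN design of Liu and Yuan, not the ETC extension itself:

```python
import math

def boin_boundaries(target, phi1=None, phi2=None):
    """Escalation/de-escalation boundaries (lambda_e, lambda_d) of the
    original BOIN design; phi1 and phi2 default to the conventional
    0.6*target and 1.4*target."""
    phi1 = 0.6 * target if phi1 is None else phi1
    phi2 = 1.4 * target if phi2 is None else phi2
    lam_e = math.log((1 - phi1) / (1 - target)) / math.log(
        target * (1 - phi1) / (phi1 * (1 - target)))
    lam_d = math.log((1 - target) / (1 - phi2)) / math.log(
        phi2 * (1 - target) / (target * (1 - phi2)))
    return lam_e, lam_d
```

For a target toxicity rate of 0.30 these evaluate to roughly 0.236 and 0.358, the familiar BOIN defaults.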
{"title":"Simultaneous inference procedures for the comparison of multiple characteristics of two survival functions.","authors":"Robin Ristl, Heiko Götte, Armin Schüler, Martin Posch, Franz König","doi":"10.1177/09622802241231497","DOIUrl":"10.1177/09622802241231497","url":null,"abstract":"<p><p>Survival time is the primary endpoint of many randomized controlled trials, and a treatment effect is typically quantified by the hazard ratio under the assumption of proportional hazards. Awareness is increasing that in many settings this assumption is a priori violated, for example, due to delayed onset of drug effect. In these cases, interpretation of the hazard ratio estimate is ambiguous and statistical inference for alternative parameters to quantify a treatment effect is warranted. We consider differences or ratios of milestone survival probabilities or quantiles, differences in restricted mean survival times, and an average hazard ratio to be of interest. Typically, more than one such parameter needs to be reported to assess possible treatment benefits, and in confirmatory trials, the according inferential procedures need to be adjusted for multiplicity. A simple Bonferroni adjustment may be too conservative because the different parameters of interest typically show considerable correlation. Hence simultaneous inference procedures that take into account the correlation are warranted. By using the counting process representation of the mentioned parameters, we show that their estimates are asymptotically multivariate normal and we provide an estimate for their covariance matrix. We propose corresponding parametric multiple testing procedures and simultaneous confidence intervals. Also, the logrank test may be included in the framework. Finite sample type I error rate and power are studied by simulation. The methods are illustrated with an example from oncology. 
A software implementation is provided in the R package nph.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"589-610"},"PeriodicalIF":2.3,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11025310/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140094617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
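Two of the alternative summary measures named in the abstract above, milestone survival probabilities and restricted mean survival time, can be computed from a Kaplan-Meier curve. This self-contained sketch is illustrative and is not the implementation in the nph package:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate as a list of (event_time, S(t)) step points;
    events is 1 for an observed event, 0 for censoring."""
    pairs = sorted(zip(times, events))
    steps, surv, at_risk = [], 1.0, len(pairs)
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = ties = 0
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]
            ties += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk
            steps.append((t, surv))
        at_risk -= ties
    return steps

def milestone_survival(times, events, t0):
    """S(t0): survival probability at a milestone time t0."""
    surv = 1.0
    for t, s in kaplan_meier(times, events):
        if t > t0:
            break
        surv = s
    return surv

def rmst(times, events, tau):
    """Restricted mean survival time: area under the KM curve on [0, tau]."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in kaplan_meier(times, events):
        if t >= tau:
            break
        area += prev_s * (t - prev_t)
        prev_t, prev_s = t, s
    return area + prev_s * (tau - prev_t)
```

With no censoring and tau at the last event time, the RMST reduces to the sample mean of the event times.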
{"title":"The augmented synthetic control method in public health and biomedical research.","authors":"Taylor Krajewski, Michael Hudgens","doi":"10.1177/09622802231224638","DOIUrl":"10.1177/09622802231224638","url":null,"abstract":"<p><p>Estimating treatment (or policy or intervention) effects on a single individual or unit has become increasingly important in health and biomedical sciences. One method to estimate these effects is the synthetic control method, which constructs a synthetic control, a weighted average of control units that best matches the treated unit's pre-treatment outcomes and other relevant covariates. The intervention's impact is then estimated by comparing the post-intervention outcomes of the treated unit and its synthetic control, which serves as a proxy for the counterfactual outcome had the treated unit not experienced the intervention. The augmented synthetic control method, a recent adaptation of the synthetic control method, relaxes some of the synthetic control method's assumptions for broader applicability. While synthetic controls have been used in a variety of fields, their use in public health and biomedical research is more recent, and newer methods such as the augmented synthetic control method are underutilized. 
This paper briefly describes the synthetic control method and its application, explains the augmented synthetic control method and its differences from the synthetic control method, and estimates the effects of an antimalarial initiative in Mozambique using both the synthetic control method and the augmented synthetic control method to highlight the advantages of using the augmented synthetic control method to analyze the impact of interventions implemented in a single region.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"376-391"},"PeriodicalIF":2.3,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10981189/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139724029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
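A minimal sketch of the synthetic control idea described above, on hypothetical data with only two control units and a plain grid search (real applications use many units, covariates, and the augmentation step):

```python
def fit_weight(treated_pre, control1_pre, control2_pre, grid=1001):
    """Convex weight w on two control units minimizing pre-treatment
    squared error between the treated unit and its synthetic control."""
    best_w, best_sse = 0.0, float("inf")
    for k in range(grid):
        w = k / (grid - 1)
        sse = sum((y - (w * a + (1 - w) * b)) ** 2
                  for y, a, b in zip(treated_pre, control1_pre, control2_pre))
        if sse < best_sse:
            best_w, best_sse = w, sse
    return best_w

def effect_estimates(treated_post, control1_post, control2_post, w):
    """Post-intervention gaps: treated outcome minus synthetic control,
    the proxy for the counterfactual without intervention."""
    return [y - (w * a + (1 - w) * b)
            for y, a, b in zip(treated_post, control1_post, control2_post)]
```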
{"title":"A diagnostic phase III/IV seamless design to investigate the diagnostic accuracy and clinical effectiveness using the example of HEDOS and HEDOS II.","authors":"Amra Pepić, Maria Stark, Tim Friede, Annette Kopp-Schneider, Silvia Calderazzo, Maria Reichert, Michael Wolf, Ulrich Wirth, Stefan Schopf, Antonia Zapf","doi":"10.1177/09622802241227951","DOIUrl":"10.1177/09622802241227951","url":null,"abstract":"<p><p>The development process of medical devices can be streamlined by combining different study phases. Here, for a diagnostic medical device, we present the combination of confirmation of diagnostic accuracy (phase III) and evaluation of clinical effectiveness regarding patient-relevant endpoints (phase IV) using a seamless design. This approach is used in the Thyroid HEmorrhage DetectOr Study (HEDOS & HEDOS II) investigating a post-operative hemorrhage detector named ISAR-M THYRO® in patients after thyroid surgery. Data from the phase III trial are reused as external controls in the control group of the phase IV trial. An unblinded interim analysis is planned between the two study stages which includes a recalculation of the sample size for the phase IV part after completion of the first stage of the seamless design. The study concept presented here is the first seamless design proposed in the field of diagnostic studies. Hence, the aim of this work is to emphasize the statistical methodology as well as feasibility of the proposed design in relation to the planning and implementation of the seamless design. Seamless designs can accelerate the overall trial duration and increase its efficiency in terms of sample size and recruitment. 
However, careful planning addressing numerous methodological and procedural challenges is necessary for successful implementation as well as agreement with regulatory bodies.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"433-448"},"PeriodicalIF":2.3,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10981198/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139703499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
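The interim sample-size recalculation mentioned above can be illustrated with a generic two-proportion formula (an illustration only, not the HEDOS recalculation rule): the phase IV sample size is recomputed by plugging the interim effect estimate into a standard normal-approximation formula.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate per-group sample size for comparing two proportions
    (normal approximation, two-sided alpha)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil(z ** 2 * variance / (p1 - p2) ** 2)
```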
{"title":"Simulation models for aggregated data meta-analysis: Evaluation of pooling effect sizes and publication biases.","authors":"Edwin R van den Heuvel, Osama Almalik, Zhuozhao Zhan","doi":"10.1177/09622802231206474","DOIUrl":"10.1177/09622802231206474","url":null,"abstract":"<p><p>Simulation studies are commonly used to evaluate the performance of newly developed meta-analysis methods. For methodology that is developed for an aggregated data meta-analysis, researchers often resort to simulation of the aggregated data directly, instead of simulating individual participant data from which the aggregated data would be calculated in reality. Clearly, distributional characteristics of the aggregated data statistics may be derived from distributional assumptions of the underlying individual data, but they are often not made explicit in publications. This article provides the distribution of the aggregated data statistics that were derived from a heteroscedastic mixed effects model for continuous individual data and a procedure for directly simulating the aggregated data statistics. We also compare our simulation approach with other simulation approaches used in literature. We describe their theoretical differences and conduct a simulation study for three meta-analysis methods: DerSimonian and Laird method for pooling aggregated study effect sizes and the Trim & Fill and precision-effect test and precision-effect estimate with standard errors method for adjustment of publication bias. We demonstrate that the choice of simulation model for aggregated data may have an impact on (the conclusions of) the performance of the meta-analysis method. We recommend the use of multiple aggregated data simulation models to investigate the sensitivity in the performance of the meta-analysis method. 
Additionally, we recommend that researchers try to make the individual participant data model explicit and derive from this model the distributional consequences of the aggregated statistics to help select appropriate aggregated data simulation models.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"359-375"},"PeriodicalIF":2.3,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140068679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
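One of the three methods evaluated above, DerSimonian and Laird pooling, is compact enough to sketch directly from aggregated data (study effect sizes and their variances). This is the standard estimator, independent of the simulation models compared in the paper:

```python
def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling of aggregated effect sizes.
    Returns (pooled effect, tau^2 heterogeneity estimate, standard error)."""
    w = [1 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = (1 / sum(w_star)) ** 0.5
    return pooled, tau2, se
```

Homogeneous studies give tau^2 = 0 (reducing to fixed-effect pooling); heterogeneous studies inflate tau^2 and widen the standard error.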
{"title":"A latent class linear mixed model for monotonic continuous processes measured with error.","authors":"Osvaldo Espin-Garcia, Lizbeth Naranjo, Ruth Fuentes-García","doi":"10.1177/09622802231225963","DOIUrl":"10.1177/09622802231225963","url":null,"abstract":"<p><p>Motivated by measurement errors in radiographic diagnosis of osteoarthritis, we propose a Bayesian approach to identify latent classes in a model with continuous response subject to a monotonic, that is, non-decreasing or non-increasing, process with measurement error. A latent class linear mixed model has been introduced to consider measurement error while the monotonic process is accounted for via truncated normal distributions. The main purpose is to classify the response trajectories through the latent classes to better describe the disease progression within homogeneous subpopulations.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"449-464"},"PeriodicalIF":2.3,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10981203/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140176566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
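A small simulation sketch of the data structure described in the abstract above, with hypothetical parameter values: a non-decreasing latent trajectory generated with truncated normal increments, observed with additive measurement error.

```python
import random

def truncated_normal(mu, sigma, lower):
    """Rejection sampling from N(mu, sigma^2) truncated to [lower, inf)."""
    while True:
        x = random.gauss(mu, sigma)
        if x >= lower:
            return x

def monotone_trajectory(n_times, drift=0.5, sigma=0.3, error_sd=0.2):
    """Latent non-decreasing process x_t with truncated normal steps,
    observed as y_t = x_t + measurement error."""
    x, latent, observed = 0.0, [], []
    for _ in range(n_times):
        x = truncated_normal(x + drift, sigma, lower=x)  # enforce x_t >= x_{t-1}
        latent.append(x)
        observed.append(x + random.gauss(0.0, error_sd))
    return latent, observed
```

The latent path is monotone by construction, while the observed path need not be, which is exactly what makes the measurement-error model necessary.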
{"title":"Partly linear single-index cure models with a nonparametric incidence link function.","authors":"Chun Yin Lee, Kin Yau Wong, Dipankar Bandyopadhyay","doi":"10.1177/09622802241227960","DOIUrl":"10.1177/09622802241227960","url":null,"abstract":"<p><p>In cancer studies, it is commonplace that a fraction of patients participating in the study are <i>cured</i>, such that not all of them will experience a recurrence, or death due to cancer. Also, it is plausible that some covariates, such as the treatment assigned to the patients or demographic characteristics, could affect both the patients' survival rates and cure/incidence rates. A common approach to accommodate these features in survival analysis is to consider a mixture cure survival model with the incidence rate modeled by a logistic regression model and latency part modeled by the Cox proportional hazards model. These modeling assumptions, though typical, restrict the structure of covariate effects on both the incidence and latency components. As a plausible recourse to attain flexibility, we study a class of semiparametric mixture cure models in this article, which incorporates two single-index functions for modeling the two regression components. A hybrid nonparametric maximum likelihood estimation method is proposed, where the cumulative baseline hazard function for uncured subjects is estimated nonparametrically, and the two single-index functions are estimated via Bernstein polynomials. Parameter estimation is carried out via a curated expectation-maximization algorithm. We also conducted a large-scale simulation study to assess the finite-sample performance of the estimator. 
The proposed methodology is illustrated via application to two cancer datasets.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"498-514"},"PeriodicalIF":1.6,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11296351/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139940818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
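The mixture cure decomposition discussed above is simple to state: population survival is S_pop(t) = pi + (1 - pi) * S_u(t), where pi is the cure probability and S_u the latency survival of the uncured. A toy version with exponential latency (not the paper's single-index model):

```python
import math

def mixture_cure_survival(t, cure_prob, latency_rate):
    """S_pop(t) = pi + (1 - pi) * S_u(t), with exponential latency
    S_u(t) = exp(-latency_rate * t) for the uncured fraction."""
    return cure_prob + (1 - cure_prob) * math.exp(-latency_rate * t)
```

The curve starts at 1 and plateaus at the cure fraction pi rather than decaying to zero.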