Statistical Evaluation of Absolute Change versus Responder Analysis in Clinical Trials
Peijin Wang, Sarah Peskoe, Rebecca Byrd, Patrick Smith, Rachel Breslin, Shein-Chung Chow
Acta Materia Medica, 2022;1(3):320-332. Published 2022-07-21. DOI: 10.15212/amm-2022-0020
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10237148/pdf/nihms-1833344.pdf
Abstract
In clinical trials, the primary analysis is often either a test of absolute/relative change in a measured outcome or a corresponding responder analysis. Though each of these tests may be reasonable, determining which test is most suitable for a particular research study remains an open question. These tests may require different sample sizes, define different clinically meaningful differences, and, most importantly, lead to different study conclusions. This paper compares a typical non-inferiority test using absolute change as the study endpoint with the corresponding responder analysis in terms of sample size requirements, statistical power, and hypothesis testing results. Numerical analysis shows that using absolute change as the endpoint generally requires a larger sample size; consequently, at the same sample size, the responder analysis has higher power. The cut-off value and non-inferiority margin are critical factors that can meaningfully influence whether the two types of endpoints yield conflicting conclusions. In particular, an extreme cut-off value is more likely to produce conflicting conclusions, although this effect diminishes as the population variance increases. An important cause of conflicting conclusions is a non-normal population distribution. To avoid conflicting results, researchers should pay careful attention to the population distribution and the choice of cut-off value.
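The comparison described in the abstract can be illustrated with a simple simulation. The sketch below is not taken from the paper; the distribution parameters, cut-off value, non-inferiority margins, and sample size are illustrative assumptions. It estimates the empirical power, at the same per-group sample size, of a one-sided non-inferiority z-test on mean absolute change and of the corresponding responder analysis obtained by dichotomizing the same outcome at a cut-off.

```python
# Minimal sketch (assumed setup, not the paper's actual design): compare the
# empirical power of a non-inferiority test on mean absolute change with the
# corresponding responder analysis at the same sample size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 100                            # per-group sample size (assumed)
mu_t, mu_c, sd = 5.0, 5.0, 8.0     # true mean change in test/control arms, common SD (assumed)
margin_abs = 2.0                   # non-inferiority margin on the absolute-change scale (assumed)
cutoff = 3.0                       # responder definition: change >= cutoff (assumed)
margin_resp = 0.10                 # non-inferiority margin on the response-rate scale (assumed)
alpha = 0.025                      # one-sided significance level
n_sim = 5000
z_crit = stats.norm.ppf(1 - alpha)

reject_abs = reject_resp = 0
for _ in range(n_sim):
    x_t = rng.normal(mu_t, sd, n)  # simulated changes, test arm
    x_c = rng.normal(mu_c, sd, n)  # simulated changes, control arm

    # Absolute-change endpoint:
    # H0: mu_t - mu_c <= -margin_abs  vs  H1: mu_t - mu_c > -margin_abs
    diff = x_t.mean() - x_c.mean()
    se = np.sqrt(x_t.var(ddof=1) / n + x_c.var(ddof=1) / n)
    reject_abs += (diff + margin_abs) / se > z_crit

    # Responder endpoint: dichotomize at the cut-off, then test
    # H0: p_t - p_c <= -margin_resp  vs  H1: p_t - p_c > -margin_resp
    p_t, p_c = (x_t >= cutoff).mean(), (x_c >= cutoff).mean()
    se_p = np.sqrt(p_t * (1 - p_t) / n + p_c * (1 - p_c) / n)
    reject_resp += (p_t - p_c + margin_resp) / se_p > z_crit

print(f"Empirical power, absolute-change endpoint: {reject_abs / n_sim:.3f}")
print(f"Empirical power, responder endpoint:       {reject_resp / n_sim:.3f}")
```

Varying the cut-off, the margins, the population variance, or the assumed (possibly non-normal) outcome distribution in such a simulation shows how the two endpoints can lead to different conclusions, which is the phenomenon the paper examines.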