Cameron Trentz, Jacklyn Engelbart, Jason Semprini, Amanda Kahl, Eric Anyimadu, John Buatti, Thomas Casavant, Mary Charlton, Guadalupe Canahuate
{"title":"利用 SEER 登记数据评估非小细胞肺癌的机器学习模型偏差和种族差异。","authors":"Cameron Trentz, Jacklyn Engelbart, Jason Semprini, Amanda Kahl, Eric Anyimadu, John Buatti, Thomas Casavant, Mary Charlton, Guadalupe Canahuate","doi":"10.1007/s10729-024-09691-6","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Despite decades of pursuing health equity, racial and ethnic disparities persist in healthcare in America. For cancer specifically, one of the leading observed disparities is worse mortality among non-Hispanic Black patients compared to non-Hispanic White patients across the cancer care continuum. These real-world disparities are reflected in the data used to inform the decisions made to alleviate such inequities. Failing to account for inherently biased data underlying these observations could intensify racial cancer disparities and lead to misguided efforts that fail to appropriately address the real causes of health inequity.</p><p><strong>Objective: </strong>Estimate the racial/ethnic bias of machine learning models in predicting two-year survival and surgery treatment recommendation for non-small cell lung cancer (NSCLC) patients.</p><p><strong>Methods: </strong>A Cox survival model, and a LOGIT model as well as three other machine learning models for predicting surgery recommendation were trained using SEER data from NSCLC patients diagnosed from 2000-2018. Models were trained with a 70/30 train/test split (both including and excluding race/ethnicity) and evaluated using performance and fairness metrics. The effects of oversampling the training data were also evaluated.</p><p><strong>Results: </strong>The survival models show disparate impact towards non-Hispanic Black patients regardless of whether race/ethnicity is used as a predictor. The models including race/ethnicity amplified the disparities observed in the data. The exclusion of race/ethnicity as a predictor in the survival and surgery recommendation models improved fairness metrics without degrading model performance. Stratified oversampling strategies reduced disparate impact while reducing the accuracy of the model.</p><p><strong>Conclusion: </strong>NSCLC disparities are complex and multifaceted. Yet, even when accounting for age and stage at diagnosis, non-Hispanic Black patients with NSCLC are less often recommended to have surgery than non-Hispanic White patients. Machine learning models amplified the racial/ethnic disparities across the cancer care continuum (which are reflected in the data used to make model decisions). Excluding race/ethnicity lowered the bias of the models but did not affect disparate impact. 
Developing analytical strategies to improve fairness would in turn improve the utility of machine learning approaches analyzing population-based cancer data.</p>","PeriodicalId":12903,"journal":{"name":"Health Care Management Science","volume":" ","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluating machine learning model bias and racial disparities in non-small cell lung cancer using SEER registry data.\",\"authors\":\"Cameron Trentz, Jacklyn Engelbart, Jason Semprini, Amanda Kahl, Eric Anyimadu, John Buatti, Thomas Casavant, Mary Charlton, Guadalupe Canahuate\",\"doi\":\"10.1007/s10729-024-09691-6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Despite decades of pursuing health equity, racial and ethnic disparities persist in healthcare in America. For cancer specifically, one of the leading observed disparities is worse mortality among non-Hispanic Black patients compared to non-Hispanic White patients across the cancer care continuum. These real-world disparities are reflected in the data used to inform the decisions made to alleviate such inequities. Failing to account for inherently biased data underlying these observations could intensify racial cancer disparities and lead to misguided efforts that fail to appropriately address the real causes of health inequity.</p><p><strong>Objective: </strong>Estimate the racial/ethnic bias of machine learning models in predicting two-year survival and surgery treatment recommendation for non-small cell lung cancer (NSCLC) patients.</p><p><strong>Methods: </strong>A Cox survival model, and a LOGIT model as well as three other machine learning models for predicting surgery recommendation were trained using SEER data from NSCLC patients diagnosed from 2000-2018. Models were trained with a 70/30 train/test split (both including and excluding race/ethnicity) and evaluated using performance and fairness metrics. The effects of oversampling the training data were also evaluated.</p><p><strong>Results: </strong>The survival models show disparate impact towards non-Hispanic Black patients regardless of whether race/ethnicity is used as a predictor. The models including race/ethnicity amplified the disparities observed in the data. The exclusion of race/ethnicity as a predictor in the survival and surgery recommendation models improved fairness metrics without degrading model performance. Stratified oversampling strategies reduced disparate impact while reducing the accuracy of the model.</p><p><strong>Conclusion: </strong>NSCLC disparities are complex and multifaceted. Yet, even when accounting for age and stage at diagnosis, non-Hispanic Black patients with NSCLC are less often recommended to have surgery than non-Hispanic White patients. Machine learning models amplified the racial/ethnic disparities across the cancer care continuum (which are reflected in the data used to make model decisions). Excluding race/ethnicity lowered the bias of the models but did not affect disparate impact. 
Developing analytical strategies to improve fairness would in turn improve the utility of machine learning approaches analyzing population-based cancer data.</p>\",\"PeriodicalId\":12903,\"journal\":{\"name\":\"Health Care Management Science\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2024-11-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Health Care Management Science\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1007/s10729-024-09691-6\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"HEALTH POLICY & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Health Care Management Science","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s10729-024-09691-6","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"HEALTH POLICY & SERVICES","Score":null,"Total":0}
Evaluating machine learning model bias and racial disparities in non-small cell lung cancer using SEER registry data.
Background: Despite decades of pursuing health equity, racial and ethnic disparities persist in healthcare in America. For cancer specifically, one of the leading observed disparities is worse mortality among non-Hispanic Black patients compared to non-Hispanic White patients across the cancer care continuum. These real-world disparities are reflected in the data used to inform the decisions made to alleviate such inequities. Failing to account for inherently biased data underlying these observations could intensify racial cancer disparities and lead to misguided efforts that fail to appropriately address the real causes of health inequity.
Objective: Estimate the racial/ethnic bias of machine learning models in predicting two-year survival and surgery treatment recommendation for non-small cell lung cancer (NSCLC) patients.
Methods: A Cox survival model for two-year survival, and a logistic regression (logit) model along with three other machine learning models for predicting surgery recommendation, were trained using SEER data from NSCLC patients diagnosed between 2000 and 2018. Models were trained with a 70/30 train/test split (both including and excluding race/ethnicity as a predictor) and evaluated using performance and fairness metrics. The effects of oversampling the training data were also evaluated.
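To make the setup concrete, the following is a minimal sketch of this kind of pipeline; it is not the authors' code, and the file name, column names (age, stage, race_ethnicity, surgery_recommended, survival_months, event), and feature set are assumptions for illustration only.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

# Hypothetical SEER NSCLC extract; the real cohort and preprocessing are not reproduced here.
df = pd.read_csv("seer_nsclc_2000_2018.csv")

# 70/30 train/test split, stratified on the surgery recommendation label.
train, test = train_test_split(
    df, test_size=0.30, random_state=0, stratify=df["surgery_recommended"])

def fit_surgery_model(feature_cols):
    """Fit a logistic regression (logit) model for surgery recommendation on a feature subset."""
    X = pd.get_dummies(train[feature_cols])  # one-hot encode categorical predictors
    model = LogisticRegression(max_iter=1000)
    model.fit(X, train["surgery_recommended"])
    return model

# Train once including race/ethnicity as a predictor and once excluding it.
logit_with_race = fit_surgery_model(["age", "stage", "race_ethnicity"])
logit_without_race = fit_surgery_model(["age", "stage"])

# Cox proportional hazards model for survival; two-year survival can be read off
# the predicted survival function at 24 months.
cph = CoxPHFitter()
cph.fit(pd.get_dummies(train[["age", "stage", "survival_months", "event"]], drop_first=True),
        duration_col="survival_months", event_col="event")

Performance metrics (e.g., accuracy, AUC) and fairness metrics would then be computed on the held-out test set for each model variant.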
Results: The survival models showed disparate impact toward non-Hispanic Black patients regardless of whether race/ethnicity was used as a predictor. The models including race/ethnicity amplified the disparities observed in the data. Excluding race/ethnicity as a predictor in the survival and surgery recommendation models improved fairness metrics without degrading model performance. Stratified oversampling strategies reduced disparate impact but also reduced model accuracy.
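As an illustration of the fairness evaluation referenced above, the sketch below computes a disparate impact ratio and applies a simple stratified oversampling scheme; the paper's exact metrics, strata, and resampling procedure may differ, and the function and column names here are assumptions.

import numpy as np
import pandas as pd

def disparate_impact(y_pred, group, unprivileged, privileged):
    """Ratio of favorable-outcome rates (e.g., predicted surgery recommendation);
    values well below 1.0 indicate disparate impact against the unprivileged group."""
    rate_unpriv = np.mean(y_pred[group == unprivileged])
    rate_priv = np.mean(y_pred[group == privileged])
    return rate_unpriv / rate_priv

def stratified_oversample(train, group_col, label_col, random_state=0):
    """Oversample every (group, label) stratum up to the size of the largest stratum
    so that minority strata are equally represented in the training data."""
    strata = [g for _, g in train.groupby([group_col, label_col])]
    n_max = max(len(g) for g in strata)
    resampled = [g.sample(n=n_max, replace=True, random_state=random_state) for g in strata]
    return pd.concat(resampled).sample(frac=1, random_state=random_state)  # shuffle rows

With these hypothetical names, disparate_impact(preds, test["race_ethnicity"], "Non-Hispanic Black", "Non-Hispanic White") would quantify how much less often surgery is predicted for non-Hispanic Black patients, and retraining on stratified_oversample(train, "race_ethnicity", "surgery_recommended") illustrates the accuracy-fairness trade-off noted in the results.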
Conclusion: NSCLC disparities are complex and multifaceted. Yet, even when accounting for age and stage at diagnosis, non-Hispanic Black patients with NSCLC are less often recommended to have surgery than non-Hispanic White patients. Machine learning models amplified the racial/ethnic disparities across the cancer care continuum that are reflected in the data used to train them. Excluding race/ethnicity lowered the bias of the models but did not affect disparate impact. Developing analytical strategies to improve fairness would in turn improve the utility of machine learning approaches for analyzing population-based cancer data.
About the journal:
Health Care Management Science publishes papers dealing with health care delivery, health care management, and health care policy. Papers should have a decision focus and make use of quantitative methods including management science, operations research, analytics, machine learning, and other emerging areas. Articles must clearly articulate the relevance and the realized or potential impact of the work. Applied research will be considered and is of particular interest if there is evidence that it was implemented or informed a decision-making process. Papers describing routine applications of known methods are discouraged.
Authors are encouraged to disclose all data and analyses thereof, and to provide computational code when appropriate.
Editorial statements for the individual departments are provided below.
Health Care Analytics
Departmental Editors:
Margrét Bjarnadóttir, University of Maryland
Nan Kong, Purdue University
With the explosion in computing power and available data, we have seen fast changes in the analytics applied in the healthcare space. The Health Care Analytics department welcomes papers applying a broad range of analytical approaches, including those rooted in machine learning, survival analysis, and complex event analysis, that allow healthcare professionals to find opportunities for improvement in health system management, patient engagement, spending, and diagnosis. We especially encourage papers that combine predictive and prescriptive analytics to improve decision making and health care outcomes.
Papers can contribute along multiple dimensions, including new methodology, novel modeling techniques, and improvements to health care demonstrated through real-world cohort studies. Methodologically focused papers must, in addition, show practical relevance. Similarly, application-focused papers should clearly demonstrate improvements over the status quo and over available approaches by applying rigorous analytics.
Health Care Operations Management
Departmental Editors:
Nilay Tanik Argon, University of North Carolina at Chapel Hill
Bob Batt, University of Wisconsin
The department invites high-quality papers on the design, control, and analysis of operations at healthcare systems. We seek papers on classical operations management issues (such as scheduling, routing, queuing, transportation, patient flow, and quality) as well as non-traditional problems driven by ever-changing healthcare practice. Empirical, experimental, and analytical (model-based) methodologies are all welcome. Papers may draw theory from across disciplines, and should provide insight into improving operations from the perspective of patients, service providers, organizations (municipal/government/industry), and/or society.
Health Care Management Science Practice
Departmental Editor:
Vikram Tiwari, Vanderbilt University Medical Center
The department seeks research from academicians and practitioners that highlights Management Science-based solutions directly relevant to the practice of healthcare. Relevance is judged by the impact on practice, as well as the degree to which researchers engaged with practitioners in understanding the problem context and in developing the solution. Validity, that is, the extent to which the results presented do or would apply in practice, is a key evaluation criterion. In addition to meeting the journal’s standards of originality and substantial contribution to knowledge creation, research that can be replicated in other organizations is encouraged. Papers describing unsuccessful applied research projects may be considered if there are generalizable learning points addressing why the project was unsuccessful.
Health Care Productivity Analysis
Departmental Editor:
Jonas Schreyögg, University of Hamburg
The department invites papers with rigorous methods and significant impact for policy and practice. Papers typically apply theory and techniques to measuring productivity in health care organizations and systems. The journal welcomes state-of-the-art parametric as well as non-parametric techniques such as data envelopment analysis, stochastic frontier analysis, or partial frontier analysis. The contribution of papers can be manifold, including new methodology, novel combinations of existing methods, or the application of existing methods to new contexts. Empirical papers should produce results generalizable beyond a selected set of health care organizations. All papers should include a section on implications for management or policy to enhance productivity.
Public Health Policy and Medical Decision Making
Departmental Editors:
Ebru Bish, University of Alabama
Julie L. Higle, University of Southern California
The department invites high quality papers that use data-driven methods to address important problems that arise in public health policy and medical decision-making domains. We welcome submissions that develop and apply mathematical and computational models in support of data-driven and model-based analyses for these problems.
The Public Health Policy and Medical Decision-Making Department is particularly interested in papers that:
Study high-impact problems involving health policy, treatment planning and design, and clinical applications;
Develop original data-driven models, including those that integrate disease modeling with screening and/or treatment guidelines;
Use model-based analyses as decision-making tools to identify optimal solutions, insights, and recommendations.
Articles must clearly articulate the relevance of the work to decision and/or policy makers and the potential impact on patients and/or society. Papers will include articulated contributions within the methodological domain, which may include modeling, analytical, or computational methodologies.
Emerging Topics
Departmental Editor:
Alec Morton, University of Strathclyde
Emerging Topics will handle papers that use innovative quantitative methods to shed light on frontier issues in healthcare management and policy. Such papers may deal with analytic challenges arising from novel health technologies or new organizational forms. Papers falling under this department may also deal with the analysis of new forms of data that are increasingly captured as health systems become more digitized.