Using Multimodal Data to Improve Precision of Inpatient Event Timelines

Gabriel Frattallone-Llado, Juyong Kim, Cheng Cheng, Diego Salazar, Smitha Edakalavan, Jeremy C Weiss

Advances in Knowledge Discovery and Data Mining: Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2024), vol. 14648, pp. 322-334. Published 2024-05-01. DOI: 10.1007/978-981-97-2238-9_25. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11228894/pdf/
Abstract
Textual data often describe events in time but frequently contain little information about their specific timing, whereas complementary structured data streams may have precise timestamps but may omit important contextual information. We investigate the problem in healthcare, where we produce clinician annotations of discharge summaries, with access to either unimodal (text) or multimodal (text and tabular) data, (i) to determine event interval timings and (ii) to train multimodal language models to locate those events in time. We find that our annotation procedures, dashboard tools, and annotations result in high-quality timestamps. Specifically, the multimodal approach produces more precise timestamping, with uncertainties of the lower bound, upper bound, and duration reduced by 42% (95% CI 34-51%), 36% (95% CI 28-44%), and 13% (95% CI 10-17%), respectively. In the classification version of our task, we find that, trained on our annotations, our multimodal BERT model outperforms a unimodal BERT model and Llama-2 encoder-decoder models, with improvements in F1 scores for the upper (10% and 61%, respectively) and lower bounds (8% and 56%, respectively). The code for the annotation tool and the BERT model is available (link).
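The abstract does not specify how the text and tabular modalities are fused, so the following is only a minimal illustrative sketch, not the authors' implementation: a late-fusion classifier that concatenates a BERT [CLS] embedding with a tabular feature vector and predicts a discretized time bound for an event mention. The class and argument names (MultimodalEventTimer, n_tabular, n_classes) are hypothetical.

```python
# Illustrative sketch only; the paper's exact architecture is not given in the
# abstract. Assumes late fusion: BERT text embedding + tabular features
# (e.g., summaries of timestamped labs/vitals) -> time-bound class logits.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultimodalEventTimer(nn.Module):
    def __init__(self, n_tabular: int, n_classes: int,
                 encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Fusion head over the concatenated text and tabular representations.
        self.head = nn.Sequential(
            nn.Linear(hidden + n_tabular, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, input_ids, attention_mask, tabular):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]          # [CLS] token embedding
        fused = torch.cat([cls, tabular], dim=-1)  # late fusion by concatenation
        return self.head(fused)                    # logits over time-bound classes

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["Patient febrile overnight; blood cultures drawn."],
                  return_tensors="pt", padding=True, truncation=True)
model = MultimodalEventTimer(n_tabular=8, n_classes=5)
tabular = torch.randn(1, 8)  # placeholder tabular feature vector
logits = model(batch["input_ids"], batch["attention_mask"], tabular)
```

A unimodal text-only baseline, as compared against in the abstract, would drop the tabular input and feed the [CLS] embedding directly to the classification head; the reported F1 gains quantify the benefit of the added tabular signal.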