Systematic evidence reviews (SERs) produced by the U.S. Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) Program use contextual questions to provide background information that frames the review topic. There is currently no standardized approach to addressing contextual questions in systematic reviews. This study explored the use of publicly available large language models (LLMs) to address contextual questions.
Using a set of 20 published and 5 not-yet-published SERs, we selected one contextual question per report and used it as a prompt to elicit answers from an LLM (ChatGPT, Bard, Claude, or Perplexity). Two independent reviewers rated the results using evaluation criteria established a priori (https://osf.io/4k3cu/), comparing the response in the SER with the LLM-generated responses. The study was guided by six research questions addressing feasibility, validity of content, validity of structure, mistakes, congruence between responses, and the incremental validity of using LLMs to address contextual questions.
Minimal prompt engineering produced relevant responses and documented the feasibility of LLM-generated answers to contextual questions. Responses differed in content and format and were not reproducible (e.g., because LLMs are updated regularly), but LLMs were able to produce articulate, clinically plausible, and well-structured responses. We detected few factual errors or contradictions and no instances of suspected bias, but citations supporting LLM-generated responses often could not be produced or verified ('confabulations'). Congruence with human-generated responses varied: LLM-generated responses provided more background on the topic, whereas SERs provided more nuanced answers to the contextual question. Results regarding incremental validity were mixed and may depend on the tool.
LLMs are potentially helpful in addressing contextual questions in systematic reviews, but human expertise remains essential for using the generated information in a meaningful way.


