Comparison of tools for assessing the methodological quality of primary and secondary studies in health technology assessment reports in Germany.

Maren Dreier, Birgit Borutta, Jona Stahmeyer, Christian Krauth, Ulla Walter
{"title":"Comparison of tools for assessing the methodological quality of primary and secondary studies in health technology assessment reports in Germany.","authors":"Maren Dreier,&nbsp;Birgit Borutta,&nbsp;Jona Stahmeyer,&nbsp;Christian Krauth,&nbsp;Ulla Walter","doi":"10.3205/hta000085","DOIUrl":null,"url":null,"abstract":"<p><strong>Unlabelled: </strong>HEALTH CARE POLICY BACKGROUND: Findings from scientific studies form the basis for evidence-based health policy decisions.</p><p><strong>Scientific background: </strong>Quality assessments to evaluate the credibility of study results are an essential part of health technology assessment reports and systematic reviews. Quality assessment tools (QAT) for assessing the study quality examine to what extent study results are systematically distorted by confounding or bias (internal validity). The tools can be divided into checklists, scales and component ratings.</p><p><strong>Research questions: </strong>What QAT are available to assess the quality of interventional studies or studies in the field of health economics, how do they differ from each other and what conclusions can be drawn from these results for quality assessments?</p><p><strong>Methods: </strong>A systematic search of relevant databases from 1988 onwards is done, supplemented by screening of the references, of the HTA reports of the German Agency for Health Technology Assessment (DAHTA) and an internet search. The selection of relevant literature, the data extraction and the quality assessment are carried out by two independent reviewers. The substantive elements of the QAT are extracted using a modified criteria list consisting of items and domains specific to randomized trials, observational studies, diagnostic studies, systematic reviews and health economic studies. Based on the number of covered items and domains, more and less comprehensive QAT are distinguished. In order to exchange experiences regarding problems in the practical application of tools, a workshop is hosted.</p><p><strong>Results: </strong>A total of eight systematic methodological reviews is identified as well as 147 QAT: 15 for systematic reviews, 80 for randomized trials, 30 for observational studies, 17 for diagnostic studies and 22 for health economic studies. The tools vary considerably with regard to the content, the performance and quality of operationalisation. Some tools do not only include the items of internal validity but also the items of quality of reporting and external validity. No tool covers all elements or domains. Design-specific generic tools are presented, which cover most of the content criteria.</p><p><strong>Discussion: </strong>The evaluation of QAT by using content criteria is difficult, because there is no scientific consensus on the necessary elements of internal validity, and not all of the generally accepted elements are based on empirical evidence. Comparing QAT with regard to contents neglects the operationalisation of the respective parameters, for which the quality and precision are important for transparency, replicability, the correct assessment and interrater reliability. QAT, which mix items on the quality of reporting and internal validity, should be avoided.</p><p><strong>Conclusions: </strong>There are different, design-specific tools available which can be preferred for quality assessment, because of its wider coverage of substantive elements of internal validity. 
To minimise the subjectivity of the assessment, tools with a detailed and precise operationalisation of the individual elements should be applied. For health economic studies, tools should be developed and complemented with instructions, which define the appropriateness of the criteria. Further research is needed to identify study characteristics that influence the internal validity of studies.</p>","PeriodicalId":89142,"journal":{"name":"GMS health technology assessment","volume":"6 ","pages":"Doc07"},"PeriodicalIF":0.0000,"publicationDate":"2010-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/d5/0a/HTA-06-07.PMC3010881.pdf","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"GMS health technology assessment","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3205/hta000085","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11

Abstract

Health care policy background: Findings from scientific studies form the basis for evidence-based health policy decisions.

Scientific background: Quality assessments to evaluate the credibility of study results are an essential part of health technology assessment reports and systematic reviews. Quality assessment tools (QAT) examine the extent to which study results are systematically distorted by confounding or bias (internal validity). The tools can be divided into checklists, scales and component ratings.

Research questions: Which QAT are available to assess the quality of interventional studies or studies in the field of health economics, how do they differ from each other, and what conclusions can be drawn from these results for quality assessments?

Methods: A systematic search of relevant databases from 1988 onwards was conducted, supplemented by screening of reference lists, of the HTA reports of the German Agency for Health Technology Assessment (DAHTA), and by an internet search. The selection of relevant literature, the data extraction and the quality assessment were carried out by two independent reviewers. The substantive elements of the QAT were extracted using a modified criteria list consisting of items and domains specific to randomized trials, observational studies, diagnostic studies, systematic reviews and health economic studies. Based on the number of covered items and domains, more and less comprehensive QAT were distinguished (an illustrative tally is sketched below). A workshop was hosted to exchange experiences regarding problems in the practical application of the tools.
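The report does not publish code, so the following is only a minimal Python sketch of how a coverage-based distinction between more and less comprehensive tools could be tallied: count how many items of a criteria list a tool addresses and compare the share against a cut-off. The item names and the 70% threshold are hypothetical and not taken from the report.

```python
# Illustrative sketch only: tally how many criteria-list items a quality
# assessment tool (QAT) covers and label it as more or less comprehensive.
# The item names and the threshold are assumptions for illustration.

CRITERIA_ITEMS = {
    "randomisation", "allocation_concealment", "blinding",
    "incomplete_outcome_data", "selective_reporting", "confounding_control",
}

def coverage(tool_items: set) -> float:
    """Share of criteria-list items addressed by a tool."""
    return len(tool_items & CRITERIA_ITEMS) / len(CRITERIA_ITEMS)

def classify(tool_items: set, threshold: float = 0.7) -> str:
    """Label a tool based on the share of items it covers."""
    return "more comprehensive" if coverage(tool_items) >= threshold else "less comprehensive"

if __name__ == "__main__":
    example_tool = {"randomisation", "blinding", "incomplete_outcome_data"}
    print(classify(example_tool))  # -> 'less comprehensive' (3 of 6 items covered)
```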

Results: A total of eight systematic methodological reviews were identified, as well as 147 QAT: 15 for systematic reviews, 80 for randomized trials, 30 for observational studies, 17 for diagnostic studies and 22 for health economic studies. The tools vary considerably with regard to content and to the performance and quality of their operationalisation. Some tools include not only items on internal validity but also items on quality of reporting and external validity. No tool covers all elements or domains. Design-specific generic tools that cover most of the content criteria are presented.

Discussion: Evaluating QAT against content criteria is difficult, because there is no scientific consensus on the necessary elements of internal validity, and not all generally accepted elements are based on empirical evidence. Comparing QAT with regard to content neglects the operationalisation of the respective items, whose quality and precision are important for transparency, replicability, correct assessment and interrater reliability. QAT that mix items on quality of reporting with items on internal validity should be avoided.
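Interrater reliability between two independent reviewers is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. The report does not state which statistic or data were used; the following minimal Python sketch simply shows how kappa is computed from two reviewers' categorical ratings, with invented example ratings.

```python
# Illustrative sketch only: Cohen's kappa for agreement between two
# independent reviewers rating the same studies (e.g. "low"/"high" risk of
# bias). The example ratings are invented, not data from the report.
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    a = ["low", "high", "low", "low", "high", "low"]
    b = ["low", "high", "high", "low", "high", "low"]
    print(round(cohens_kappa(a, b), 2))  # -> 0.67
```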

Conclusions: Different design-specific tools are available and can be preferred for quality assessment because of their wider coverage of the substantive elements of internal validity. To minimise the subjectivity of the assessment, tools with a detailed and precise operationalisation of the individual elements should be applied. For health economic studies, tools should be developed and supplemented with instructions that define the appropriateness of the criteria. Further research is needed to identify study characteristics that influence the internal validity of studies.
