How to spot the truth

Impact factor 2.6 · JCR Q2 (Physiology) · CAS Tier 4 (Medicine)
G. Drummond, M. J. Tipton
Experimental Physiology, 109(11), 1811–1814. Published 7 August 2024. DOI: 10.1113/EP092160

Abstract

‘Truth’ is under attack, more so now than ever before, and for many reasons, one of which is social media. We hear and read remarkable, often preposterous claims from many sources. These may appear in political debate, the presentation of new products, or new health-enhancing practices ranging from hot water pools to cold water swimming. They frequently claim to be ‘scientific findings’, often reported as ‘new studies have shown’ stories and underpinned by ‘expert’ opinion. They are amplified in the media until the next fad comes along.

This pervasive form of persuasion is a war of beliefs, which in many cases may contradict accepted knowledge. It is always possible, in fact likely, that some of the more absurd claims may not involve, or even be properly aware of, current scientific understanding, in which case these claims may be logical, but based on incorrect assumptions or understanding. Flat earthers have a consistent world view, which is probably logical to them; it just is not compatible with other known facts. But truth is the first casualty of war, and now more than ever, we must equip ourselves and others with the skills needed to judge how valid the information we are presented with is.

This is not as simple as it might appear. The context is all-important. Interestingly, there are far fewer exact rules, firm guidelines and precise cut-off levels for establishing the truth than people might imagine. Scientific knowledge is rarely expressed in terms of absolute validity; rather, it ‘fits’ or ‘is not inconsistent with’ what we know already, or is ‘suitable for predicting performance’. For example, we now know that space is bent by gravity; but Newton's simple straight-line approximation has taken astronauts to the moon and back (sorry, flat earthers). In addition, although statisticians use words consistently and exactly, they do not use words such as ‘population’ and ‘sample’ in the way they are used in general parlance. Nor is the logic of statistics straightforward. For example, the most commonly used significance tests assume ‘if, and only if, these random samples were drawn from a single population, then…’. Logical and consistent, yes, but not well understood, even by some scientists. In one study, trainee doctors, who should be reading this sort of material all the time, were given a simple statement using such a test. When asked to choose the correct conclusion out of four possibilities, almost half made a wrong choice (Windish et al., 2007).
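The ‘if, and only if, drawn from a single population’ logic can be made concrete with a small simulation (our illustrative sketch, not from the editorial): when two random samples genuinely come from the same population, a test at the 5% level still declares a ‘difference’ roughly 5% of the time.

```python
# Simulation sketch: how often does a two-sample test cry 'difference'
# when both samples come from the SAME population? Answer: ~5%, by design.
import math
import random

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

random.seed(42)
trials, n, hits = 2000, 30, 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]  # same population as a
    if abs(welch_t(a, b)) > 2.0:  # approximate 5% two-sided critical value
        hits += 1
print(f"false alarms: {hits / trials:.1%}")  # close to 5%
```

The critical value of 2.0 is a rough approximation for samples of this size; the point is that the 5% false-alarm rate is a property of the test itself, not evidence about any particular pair of samples.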

The truth helps you make ‘adequately correct’ decisions and act accordingly. Such decisions depend on the situation, and the risks of making a correct or incorrect decision. Uncertainty doesn't mean we know nothing, or that anything could be true: it just means you don't bet your house on an outsider.

Some years ago, a district court decided that a particular vaccine was responsible for an adverse outcome (a link that was scientifically doubtful). This triggered a disastrous decrease in child vaccinations against a whole range of diseases. Later analysis showed convincingly that the spread of the faulty conclusion was related to internet broadband access: the more broadband, the greater the decrease in vaccinations (Carrieri et al., 2019).

In another case, however, a US court rejected a manufacturer's defence that there were insufficient data to meet the usual scientific criteria for demonstrating a causal link between a drug and a serious, but rare, adverse event, and that this was why the drug had been marketed without a warning. The court was unwilling to accept this statistical threshold, preferring to heed the reports of infrequent, but important, adverse events after use of the drug, and thus awarded damages (Matrixx Initiatives, Inc. et al. v. Siracusano et al., 2011).

Here, we shall try to show the reader the processes applied in scientific evaluation, in the hope that you can apply them in your day-to-day decision-making. Facts don't speak for themselves; context is vital. An experienced scientist, who ‘knows the ropes’, is more likely to use their knowledge, experience and judgement to tease out the full story. The central question is not ‘can we be certain?’, but rather ‘can we process this information and adjust our ideas?’ Uncertainty is always present, but we may be able to be ‘confidently uncertain’.

Overall, as a result of failure to meet some of the requirements listed above, about half of published medical papers are unlikely to be true (Ioannidis, 2005). In 2023, the number of retractions of research articles internationally reached a new record of over 10,000 (Noorden, 2023), driven by an increase in sham papers and peer-review fraud. Furthermore, despite a requirement for disclosure, a great deal of government research is never released, or is delayed until interest in the topic has declined.

A recent study (Briganti et al., 2023) reviewed the papers published on the health and recovery benefits of cold-water exposure. They found 931 articles, and then carefully weeded out irrelevant studies. The authors were left with 24 papers, and in these the risk of bias was ‘high’ in 15 and ‘gave concern’ in four. Thus, only five papers had a ‘low’ risk of bias: three of these looked at cold water immersion after exercise and two at cognitive function. So, a very small percentage of the studies examined had anything really useful to say.

Watch out for percentages (Bolton, 2023). A simple change is easily understood as a percentage, but ‘scientific’ studies involving comparisons between groups can require more careful consideration. These comparisons should always trigger the question ‘percentage of what, exactly?’ The headline, ‘New drug/product/intervention cuts mortality by 50%’ sounds impressive, and attracts attention, but the reality could be less spectacular. Perhaps using the old drug, the death rate was 20 per 1000 patients, and when the new drug was first used, the rate became 10 per 1000 patients: a 50% reduction. But the absolute risk reduction in death rate was 10 per 1000, or 1%, a less impressive headline.
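The arithmetic behind ‘percentage of what, exactly?’ can be sketched in a few lines (an illustrative helper of our own, not from the editorial), using the hypothetical 20-versus-10 deaths per 1000 example above:

```python
# Minimal sketch: relative vs. absolute risk reduction for the
# hypothetical drug example (20 vs. 10 deaths per 1000 patients).

def risk_summary(deaths_old, deaths_new, per=1000):
    """Return (relative reduction, absolute reduction, number needed
    to treat) for event counts given per `per` patients."""
    old_rate = deaths_old / per
    new_rate = deaths_new / per
    arr = old_rate - new_rate   # absolute risk reduction
    rrr = arr / old_rate        # relative risk reduction (the headline)
    nnt = 1 / arr               # patients treated per death averted
    return rrr, arr, nnt

rrr, arr, nnt = risk_summary(20, 10)
print(f"relative reduction: {rrr:.0%}")      # the headline figure: 50%
print(f"absolute reduction: {arr:.1%}")      # the quieter figure: 1.0%
print(f"number needed to treat: {nnt:.0f}")  # 100 patients per death averted
```

The same 50% headline can describe a huge benefit or a tiny one; the absolute reduction and the number needed to treat reveal which.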

Also, beware of correlations. Just because two things relate to each other, for example a diet and a sense of well-being, does not mean that one causes the other. The world is full of accidental (spurious) correlations (Van Cauwenberge, 2016). One of our favourites is the high correlation between the divorce rate in Maine, USA, and the per capita consumption of margarine! Also ask the question ‘how many false positives and negatives will I get if I use this correlation to make a decision?’ (Tipton et al., 2012).
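Spurious correlations of the Maine-and-margarine kind often arise simply because two unrelated quantities both trend over time. A toy demonstration (invented numbers, not the real Maine or margarine data):

```python
# Toy demonstration: two series that merely drift downward over the
# same years correlate strongly, with no causal link between them.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)
years = range(2000, 2020)
# Made-up 'divorce rate' and 'margarine consumption': both decline slowly.
divorce = [5.0 - 0.08 * i + random.gauss(0, 0.05) for i, _ in enumerate(years)]
margarine = [8.0 - 0.30 * i + random.gauss(0, 0.20) for i, _ in enumerate(years)]
r = pearson(divorce, margarine)
print(f"r = {r:.2f}")  # high, despite no causal connection
```

A shared trend is doing all the work here; before trusting a correlation, ask whether time (or any third factor) could be driving both variables.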

For the moment at least, artificial intelligence cannot quantify uncertainty very well. Generally, AI uses stuff from ‘out there’ as if it were true. Thus, a high proportion of garbage in will give you garbage out (which increases the proportion of garbage that AI uses next time round)!

We hope that, armed with the above checklist, you can challenge and interrogate polarising information, from ‘spin’ to the outright falsehoods presented to you daily. We are at risk of being overwhelmed by an increasing number of dubious, unregulated and disparate sources. The next time you hear phrases like ‘they say this is great’ or ‘this is scientifically proven’, start by asking ‘who are they?’ and ‘which scientists, using which methods?’ Be cautious and questioning; snake oil and its vendors still exist, and they come in many guises.

M. J. Tipton conceived the work. Both authors contributed to the design of the work, acquisition, analysis, or interpretation of data for the work, drafting of the work or revising it critically for important intellectual content. They both approved the final version of the manuscript and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All persons designated as authors therefore qualify for authorship, and all those who qualify for authorship are listed.

Competing interests: none declared.

No funding was received for this work.
