How to spot the truth

G. Drummond, M. J. Tipton

Experimental Physiology 109(11), 1811–1814. Published 7 August 2024. DOI: 10.1113/EP092160
Abstract
‘Truth’ is under attack, more so now than ever before, and for many reasons, one of which is social media. We hear and read remarkable, often preposterous, claims from many sources. These may appear in political debate, in the presentation of new products, or in new health-enhancing practices ranging from hot water pools to cold water swimming. They frequently claim to be ‘scientific findings’, often reported as ‘new studies have shown’ stories and underpinned by ‘expert’ opinion. They are amplified in the media until the next fad comes along.
This pervasive form of persuasion is a war of beliefs, which in many cases contradict accepted knowledge. It is always possible, in fact likely, that some of the more absurd claims do not draw on, or are not even properly aware of, current scientific understanding; in that case the claims may be internally logical, but based on incorrect assumptions. Flat earthers have a consistent world view, which is probably logical to them; it is just not compatible with other known facts. But truth is the first casualty of war, and now more than ever we must equip ourselves and others with the skills needed to judge the validity of the information we are presented with.
This is not as simple as it might appear, and context is all-important. There are far fewer exact rules, firm guidelines and precise cut-off levels for establishing the truth than people might imagine. Scientific knowledge is rarely expressed in terms of absolute validity; rather, a finding ‘fits’ or ‘is not inconsistent with’ what we know already, or is ‘suitable for predicting performance’. For example, we now know from general relativity that gravity curves the path of light; but Newton's simpler straight-line approximation has taken astronauts to the moon and back (sorry, flat earthers). In addition, although statisticians use words consistently and exactly, they do not use words such as ‘population’ and ‘sample’ in the way they are used in general parlance. Nor is the logic of statistics straightforward. For example, the most commonly used tests of likelihood assume ‘if, and only if, these random samples were drawn from a single population, then…’. Logical and consistent, yes, but not well understood, even by some scientists. In one study, trainee doctors, who should be reading this sort of material all the time, were given a simple statement using such a test. When asked to choose the correct conclusion out of four possibilities, almost half made a wrong choice (Windish et al., 2007).
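The conditional logic of such tests can be made concrete with a small simulation. The sketch below is a hypothetical illustration (not taken from the study cited): it repeatedly draws pairs of samples that really do come from a single population and counts how often a permutation test nevertheless declares a ‘significant’ difference at the conventional 0.05 level. Every such result is, by construction, a false positive.

```python
import random

def permutation_p_value(a, b, n_perm=200, rng=None):
    """Two-sided permutation test for a difference in means.

    The null hypothesis is exactly the statisticians' phrasing:
    'these random samples were drawn from a single population'.
    """
    rng = rng or random.Random()
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # reallocate the pooled values to the two groups
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

# Both samples come from the same population, so any 'significant'
# result below is a false positive by construction.
rng = random.Random(42)
trials = 500
false_positives = sum(
    permutation_p_value([rng.gauss(0, 1) for _ in range(20)],
                        [rng.gauss(0, 1) for _ in range(20)],
                        rng=rng) < 0.05
    for _ in range(trials)
)
rate = false_positives / trials
print(f"false positive rate: {rate:.3f}")  # hovers near 0.05
```

Had the two samples been drawn from populations with different means, the same code would usually return a small p-value; the point is that the test's conclusion is conditional on the single-population assumption, which is exactly what trips up casual readers.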
The truth helps you make ‘adequately correct’ decisions and act accordingly. Such decisions depend on the situation, and the risks of making a correct or incorrect decision. Uncertainty doesn't mean we know nothing, or that anything could be true: it just means you don't bet your house on an outsider.
Some years ago, a district court decided that a particular vaccine was responsible for an adverse outcome (which was scientifically doubtful). This triggered a disastrous decrease in child vaccinations for a whole range of diseases. It also showed convincingly that the transmission of the faulty conclusion was related to internet broadband access: more broadband, greater decrease in vaccinations (Carrieri et al., 2019).
In another case, however, a US court rejected a manufacturer's defence that there were insufficient data to meet the usual scientific criteria for demonstrating a causal link between a drug and a serious, but rare, adverse event, and that this was why the drug had been marketed without a warning. The court was unwilling to accept this statistical threshold, preferring to heed the reports of infrequent, but important, adverse events after use of the drug, and thus awarded damages (Matrixx Initiatives, Inc. v. Siracusano, 2011).
Here, we shall try to show the reader the processes applied in scientific evaluation, in the hope that you can apply them in your day-to-day decision-making. Facts do not speak for themselves; context is vital. An experienced scientist, who ‘knows the ropes’, is more likely to use their knowledge, experience and judgement to tease out the full story. The central question is not ‘can we be certain?’, but rather ‘can we process this information and adjust our ideas?’ Uncertainty is always present, but we may be able to be ‘confidently uncertain’.
Overall, as a result of failure to meet some of the requirements listed above, about half of published medical papers are unlikely to be true (Ioannidis, 2005). In 2023, the number of retractions of research articles internationally reached a new record of over 10,000 (Van Noorden, 2023), owing to an increase in sham papers and peer-review fraud. Furthermore, despite a requirement for disclosure, much government research is never released, or is delayed until interest in the topic has declined.
A recent study (Briganti et al., 2023) reviewed the papers published on the health and recovery benefits of cold-water exposure. The authors found 931 articles and carefully weeded out the irrelevant studies, leaving 24 papers; in these, the risk of bias was ‘high’ in 15 and ‘gave concern’ in four. Thus, only five papers had a ‘low’ risk of bias: three looked at cold water immersion after exercise and two at cognitive function. So five of the original 931 studies, about 0.5%, had anything really useful to say.
Watch out for percentages (Bolton, 2023). A simple change is easily understood as a percentage, but ‘scientific’ studies involving comparisons between groups require more careful consideration. Such comparisons should always trigger the question ‘percentage of what, exactly?’ The headline ‘New drug/product/intervention cuts mortality by 50%’ sounds impressive and attracts attention, but the reality could be less spectacular. Perhaps with the old drug the death rate was 20 per 1000 patients, and with the new drug it became 10 per 1000 patients: a 50% relative reduction. But the absolute reduction in the death rate was 10 per 1000, or 1%, a much less impressive headline.
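The arithmetic of that example can be checked directly. The snippet below is a minimal illustration using the hypothetical figures from the text; the ‘number needed to treat’ line is our addition, a standard way of expressing the same absolute difference.

```python
def risk_summary(deaths_old, deaths_new, n=1000):
    """Relative vs absolute risk reduction for a two-group comparison."""
    risk_old = deaths_old / n  # e.g., 20 deaths per 1000 patients
    risk_new = deaths_new / n  # e.g., 10 deaths per 1000 patients
    arr = risk_old - risk_new  # absolute risk reduction
    rrr = arr / risk_old       # relative risk reduction (the headline number)
    nnt = 1 / arr              # patients treated per death averted
    return rrr, arr, nnt

rrr, arr, nnt = risk_summary(20, 10)
print(f"relative risk reduction: {rrr:.0%}")  # the 50% headline
print(f"absolute risk reduction: {arr:.1%}")  # just 1%
print(f"number needed to treat:  {nnt:.0f}")  # 100 patients per death averted
```

The same 50% headline would arise whether the rates fell from 20 to 10 per 1000 or from 200 to 100 per 1000; only the absolute figures reveal the difference.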
Also, beware of correlations. Just because two things vary together, for example a diet and a sense of well-being, does not mean that one causes the other. The world is full of accidental (spurious) correlations (Van Cauwenberge, 2016). One of our favourites is the high correlation between the divorce rate in Maine, USA, and the per capita consumption of margarine! Also ask the question ‘how many false positives and negatives will I get if I use this correlation to make a decision?’ (Tipton et al., 2012).
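Shared trends are one common source of such spurious correlations: two series that each drift over time will correlate strongly even when they are completely independent. The sketch below uses made-up numbers (not the real Maine or margarine data) to illustrate the effect.

```python
import random

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

rng = random.Random(1)
years = range(10)
# Two independent series that share nothing but a downward drift over time.
divorce_rate = [5.0 - 0.10 * t + rng.gauss(0, 0.03) for t in years]
margarine_kg = [8.0 - 0.40 * t + rng.gauss(0, 0.10) for t in years]

r = pearson_r(divorce_rate, margarine_kg)
print(f"r = {r:.2f}")  # strongly positive, despite no causal link
```

Detrending both series, or asking what mechanism could possibly connect them, quickly exposes the correlation as an artefact of the shared time trend.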
For the moment at least, artificial intelligence cannot quantify uncertainty very well. Generally, AI uses stuff from ‘out there’ as if it were true. Thus, a high proportion of garbage in will give you garbage out (which increases the proportion of garbage that AI uses next time round)!
We hope that, armed with the points above, you can challenge and interrogate the polarising information, from ‘spin’ to outright falsehood, presented to you on a daily basis. We are at risk of being overwhelmed by an increasing number of dubious, unregulated and disparate sources. The next time you hear phrases like ‘they say this is great’ or ‘this is scientifically proven’, start by asking ‘who are they?’ and ‘which scientists, using which methods?’ Be cautious and questioning; snake oil and its vendors still exist, and they come in many guises.
M. J. Tipton conceived the work. Both authors contributed to the design of the work and the acquisition, analysis or interpretation of data, and to drafting the work or revising it critically for important intellectual content. Both approved the final version of the manuscript and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All persons designated as authors qualify for authorship, and all those who qualify for authorship are listed.

Competing interests: none declared.

Funding: no funding was received for this work.