{"title":"Brain-Training Pessimism, but Applied-Memory Optimism","authors":"J. McCabe, Thomas S. Redick, R. Engle","doi":"10.1177/1529100616664716","DOIUrl":null,"url":null,"abstract":"As is convincingly demonstrated in the target article (Simons et al., 2016, this issue), despite the numerous forms of brain training that have been tested and touted in the past 15 years, there’s little to no evidence that currently existing programs produce lasting, meaningful change in the performance of cognitive tasks that differ from the trained tasks. As detailed by Simons et al., numerous methodological issues cloud the interpretation of many studies claiming successful far transfer. These limitations include small sample sizes, passive control groups, single tests of outcomes, unblinded informantand self-report measures of functioning, and hypothesisinconsistent significant effects. (However, note that, with older adults, a successful result of the intervention could be to prevent decline in the training group, such that they stay at their pretest level while the control group declines.) These issues are separate from problems related to publication bias, selective reporting of significant and nonsignificant outcomes, use of unjustified one-tailed t tests, and failure to explicitly note shared data across publications. So, considering that the literature contains such potential false-positive publications (Simmons, Nelson, & Simonsohn, 2011), it may be surprising and disheartening to many that some descriptive reviews (Chacko et al., 2013; Salthouse, 2006; Simons et al., 2016) and meta-analyses (Melby-Lervåg, Redick, & Hulme, 2016; Rapport, Orban, Kofler, & Friedman, 2013) have concluded that existing cognitive-training methods are relatively ineffective, despite their popularity and increasing market share. For example, a recent working-memory-training metaanalysis (Melby-Lervåg et al., 2016) evaluated 87 studies examining transfer to working memory, intelligence, and various educationally relevant outcomes (e.g., reading comprehension, math, word decoding). The studies varied considerably in terms of the sample composition (age; typical vs. atypical functioning) and the nature of the working-memory training (verbal, nonverbal, or both verbal and nonverbal stimuli; n-back vs. span task methodology; few vs. many training sessions). Despite the diversity in the design and administration of the training, the results were quite clear. Following training, there were reliable improvements in performance on verbal and nonverbal working-memory tasks identical or similar to the trained tasks. However, in terms of far transfer, there was no convincing evidence of improvements, especially when working-memory training was compared to an active-control condition. The meta-analysis also demonstrated that, in the working-memory-training literature, the largest nonverbal-intelligence far-transfer effects are statistically more likely to come from studies with small sample sizes and passive control groups. This finding is not particularly surprising, given other work showing that most working-memory training studies are dramatically underpowered (Bogg & Lasecki, 2015) and that underpowered studies with small sample sizes are more likely to produce inflated effect sizes (Button et al., 2013). 
In addition, small samples are predominantly the reason irregular pretest-posttest patterns have been observed in the control groups in various working-memory and video-game intervention studies (for review, see Redick, 2015; Redick & Webster, 2014). In these studies, inferential statistics and effect-size metrics provide evidence that the training “worked,” but investigation of the descriptive statistics tells a different story. Specifically, a number of studies with children and young adult samples have examined intelligence or other academic achievement outcomes before and after training. Closer inspection indicates that training “improved” intelligence or academic achievement relative to the control condition because the control group declined from pretest to posttest—that is, the training group did not significantly change from pretest to posttest. 664716 PSIXXX10.1177/1529100616664716McCabe et al.Brain-Training Pessimism research-article2016","PeriodicalId":20879,"journal":{"name":"Psychological Science in the Public Interest","volume":"17 1","pages":"187 - 191"},"PeriodicalIF":18.2000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1529100616664716","citationCount":"28","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Psychological Science in the Public Interest","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/1529100616664716","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 28
Abstract
As is convincingly demonstrated in the target article (Simons et al., 2016, this issue), despite the numerous forms of brain training that have been tested and touted in the past 15 years, there is little to no evidence that currently existing programs produce lasting, meaningful change in the performance of cognitive tasks that differ from the trained tasks. As detailed by Simons et al., numerous methodological issues cloud the interpretation of many studies claiming successful far transfer. These limitations include small sample sizes, passive control groups, single tests of outcomes, unblinded informant- and self-report measures of functioning, and hypothesis-inconsistent significant effects. (Note, however, that with older adults a successful intervention could instead prevent decline in the training group, such that trainees stay at their pretest level while the control group declines.) These issues are separate from problems related to publication bias, selective reporting of significant and nonsignificant outcomes, use of unjustified one-tailed t tests, and failure to explicitly note shared data across publications. So, considering that the literature contains such potential false-positive publications (Simmons, Nelson, & Simonsohn, 2011), it may be surprising and disheartening to many that some descriptive reviews (Chacko et al., 2013; Salthouse, 2006; Simons et al., 2016) and meta-analyses (Melby-Lervåg, Redick, & Hulme, 2016; Rapport, Orban, Kofler, & Friedman, 2013) have concluded that existing cognitive-training methods are relatively ineffective, despite their popularity and increasing market share.

For example, a recent working-memory-training meta-analysis (Melby-Lervåg et al., 2016) evaluated 87 studies examining transfer to working memory, intelligence, and various educationally relevant outcomes (e.g., reading comprehension, math, word decoding). The studies varied considerably in sample composition (age; typical vs. atypical functioning) and in the nature of the working-memory training (verbal, nonverbal, or both verbal and nonverbal stimuli; n-back vs. span-task methodology; few vs. many training sessions). Despite this diversity in the design and administration of the training, the results were quite clear. Following training, there were reliable improvements in performance on verbal and nonverbal working-memory tasks identical or similar to the trained tasks. However, in terms of far transfer, there was no convincing evidence of improvement, especially when working-memory training was compared to an active-control condition. The meta-analysis also demonstrated that, in the working-memory-training literature, the largest nonverbal-intelligence far-transfer effects are statistically more likely to come from studies with small sample sizes and passive control groups. This finding is not particularly surprising, given other work showing that most working-memory-training studies are dramatically underpowered (Bogg & Lasecki, 2015) and that underpowered studies with small sample sizes are more likely to produce inflated effect sizes (Button et al., 2013).
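To make the inflation point concrete, here is a minimal simulation sketch (not part of the original commentary; the true effect size, per-group sample sizes, and number of simulated studies are illustrative assumptions). It shows that when only a small true benefit exists, the small-sample studies that happen to reach p < .05 report much larger effects than large-sample studies do, the pattern described by Button et al. (2013).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_D = 0.2          # assumed small true benefit of training (in SD units)

def significant_effect_sizes(n_per_group, n_studies=5000):
    """Cohen's d for the simulated studies that reach p < .05 in the expected direction."""
    kept = []
    for _ in range(n_studies):
        training = rng.normal(TRUE_D, 1.0, n_per_group)
        control = rng.normal(0.0, 1.0, n_per_group)
        t, p = stats.ttest_ind(training, control)
        if p < 0.05 and t > 0:
            pooled_sd = np.sqrt((training.var(ddof=1) + control.var(ddof=1)) / 2)
            kept.append((training.mean() - control.mean()) / pooled_sd)
    return np.array(kept)

for n in (15, 150):
    d_sig = significant_effect_sizes(n)
    print(f"n = {n:3d} per group: mean 'significant' d = {d_sig.mean():.2f} (true d = {TRUE_D})")

# In a typical run, the small-sample studies that cross p < .05 report d several
# times larger than 0.2, whereas the inflation is much milder with 150 per group.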
In addition, small samples are the predominant reason that irregular pretest-posttest patterns have been observed in the control groups of various working-memory and video-game intervention studies (for a review, see Redick, 2015; Redick & Webster, 2014). In these studies, inferential statistics and effect-size metrics provide evidence that the training “worked,” but investigation of the descriptive statistics tells a different story. Specifically, a number of studies with child and young-adult samples have examined intelligence or other academic-achievement outcomes before and after training. Closer inspection indicates that training “improved” intelligence or academic achievement relative to the control condition only because the control group declined from pretest to posttest; the training group itself did not change significantly from pretest to posttest.
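A second minimal sketch, again using assumed values rather than anything reported by McCabe, Redick, and Engle, illustrates how that pattern can arise: the trained group does not change from pretest to posttest, the control group drops, and the standard between-group test on gain scores nevertheless comes out "significant."

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20                                    # assumed small per-group sample

ability = rng.normal(100, 15, 2 * n)      # stable latent ability for all participants
pre = ability + rng.normal(0, 5, 2 * n)   # pretest score = ability + measurement error
post = ability + rng.normal(0, 5, 2 * n)  # posttest generated the same way: no training benefit
post[n:] -= 8                             # the only change: the control half declines at posttest

gains = post - pre
gain_training, gain_control = gains[:n], gains[n:]

t_within, p_within = stats.ttest_rel(post[:n], pre[:n])              # did trainees improve?
t_between, p_between = stats.ttest_ind(gain_training, gain_control)  # usual "transfer" test

print(f"Training-group gain: {gain_training.mean():5.1f} (within-group p = {p_within:.3f})")
print(f"Control-group gain:  {gain_control.mean():5.1f}")
print(f"Between-group test on gains: t = {t_between:.2f}, p = {p_between:.3f}")

# In most runs the between-group comparison is "significant" even though the trained
# group's own pretest-to-posttest change is essentially zero; the apparent benefit is
# carried entirely by the control group's decline.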
Journal description:
Psychological Science in the Public Interest (PSPI) is a distinctive journal that provides in-depth and compelling reviews on issues directly relevant to the general public. Authored by expert teams with diverse perspectives, these reviews aim to evaluate the current state-of-the-science on various topics. PSPI reports have addressed issues such as questioning the validity of the Rorschach and other projective tests, examining strategies to maintain cognitive sharpness in aging brains, and highlighting concerns within the field of clinical psychology. Notably, PSPI reports are frequently featured in Scientific American Mind and covered by various major media outlets.