{"title":"Articles of Public Interest","authors":"","doi":"10.1111/acer.15343","DOIUrl":null,"url":null,"abstract":"<p>When drinking choices are perceived as “just one drink,” with each single drink representing relatively slight risk, it may ironically lead to heavier drinking and alcohol-related harms. That's the finding of a novel study exploring the decision-making process around binge drinking. A better understanding of how people think about heavy episodic drinking could inform prevention and intervention approaches and help reduce the serious negative consequences of alcohol use. Young adults are especially vulnerable to high-risk drinking and its consequences; 29% are recent binge drinkers, and 15% meet the criteria for alcohol use disorder (AUD). Research and prevention efforts commonly assume that binge drinking reflects a lack of knowledge about its harmful effects. This implies that people consciously decide to consume large amounts. Another possibility is that each drink presents its own low-stakes decision: whether to have one (or one more) drink. For the study in <i>Alcohol: Clinical & Experimental Research</i>, investigators at Cornell University examined drinking decisions through the lens of “fuzzy-trace theory,” which involves varying framings of choices that involve risk. Fuzzy processing may shape decisions about alcohol use as a series of drink-by-drink choices rather than a decision to consume, say, 5 drinks in a night.</p><p>The researchers worked with 351 college students aged 18–31; 3 in 4 were women. The participants took surveys assessing their perceived risk of one drink, of heavy drinking, and of drinking consequences, and their overall sensitivity to risk. They also provided information on their recent drinking and experiences of negative consequences (such as risky driving or embarrassment) and were screened for dangerous drinking and AUD. Each participant was asked how likely they would be at a hypothetical party to get a first drink, then a second, and so on, up to 8. The researchers used statistical analysis to explore varying perceptions of risk related to alcohol use and their associations with measures of participants' drinking behaviors: drinks consumed per week, peak blood alcohol content (BAC), binge drinking in the last month, and criteria for dangerous drinking or AUD.</p><p>Close to 1 in 4 participants reported binge drinking in the past month, and 1 in 3 met criteria for hazardous drinking. Their perceived risk of one drink strongly predicted alcohol-related decision-making when choices were made one drink at a time. Those who perceived no risk in a single drink drank more and experienced greater alcohol consequences than those who saw low risk in one drink. Participants who perceived less risk in a single drink were more likely to start and continue drinking than those who associated one drink with higher risk. This effect continued for five drinks, equivalent to the threshold for binge drinking. These participants also reported more drinks per week, higher peak BAC, and more alcohol binges, and scored higher on scales of dangerous drinking and alcohol-related harms compared to those who saw higher risk in one drink. Moderate drinkers and abstainers were more risk-averse than heavy drinkers and more likely to perceive risk in a single drink. Higher perceived risk of heavy drinking was linked to lower likelihood of accepting a fourth, fifth, sixth, seventh, or eighth drink. 
But neither this nor risk sensitivity were protective against unsafe alcohol use.</p><p>The finding that “one-drink-at-a-time” thinking predicted risky decisions about alcohol supports the relevance of fuzzy-trace theory in the context of drinking decisions. The perceived risk of one hypothetical drink predicted real-world drinking behavior and the likelihood of AUD. The risk of AUD is especially heightened for those who believe a single drink carries zero risk. Early decisions around drinking may have larger effects on drinking and alcohol outcomes than decisions about later drinks. The study calls into question the effectiveness of messages about limiting consumption, which may imply that lower amounts of alcohol are not risky. Harm reduction messaging could instead address beliefs about the presumed safety of one drink. Future research could identify additional processes that drive decisions to decline early drinks. It could also examine whether the findings are relevant to other self-regulation challenges that may also involve serial decisions about small amounts rather than a single decision about a large amount, like gambling, procrastination, and overeating.</p><p>Making decisions one drink at a time and the “just one drink” effect: A fuzzy-trace theory model of harmful drinking. B. Hayes, V. Reyna, S. Edelson. (pp. 889–902)</p><p>Certain drinking behaviors beyond just the quantity of alcohol consumed may predict the likelihood a person will experience an alcohol-induced blackout, a condition where someone is conscious and engaging with their surroundings but will be unable to remember some or any of what occurred. While in this condition, people are at higher risk for a range of consequences, including violence or sexual assault. A study published in <i>Alcohol: Clinical and Experimental Research</i> found that the speed with which college students become intoxicated, how long their intoxication levels are increasing, and their peak intoxication were each associated with experiencing an alcohol-induced blackout. Interventions targeting the drinking behaviors associated with these factors may reduce the risk of alcohol-induced blackouts.</p><p>Researchers analyzed data from an intensive longitudinal study involving 79 college sophomores and juniors who typically drank four or more drinks on a weekend day and had experienced at least one alcohol-induced blackout during the past semester. The students wore wristwatch-like devices with transdermal alcohol concentration sensors, which measured intoxication levels through the skin on 12 weekend days. The students also completed daily diaries each morning to assess their memories of the prior day.</p><p>Over the 12-day period, the sensors detected a total of 486 days of alcohol use, and students together reported 147 alcohol-induced blackouts. Seventy percent of the students experienced at least one alcohol-induced blackout, with eighty percent of female students and 70 percent of male students reporting experiencing a blackout. The students who reported at least one blackout had, on average, 2.2 alcohol-induced blackouts during the 12 days.</p><p>Days where alcohol concentration rates rose faster, days with higher peak alcohol concentration, and days with longer duration of increasing alcohol concentration each predicted alcohol-induced blackouts. The authors suggested that targeting reductions in any one of these three features may lead to reductions in one or both of the other two features. 
Strategies to reduce the speed of alcohol intoxication might include encouraging students to avoid playing drinking games and to alternate between alcoholic and nonalcoholic drinks, which would also work to reduce peak alcohol concentration. Other interventions include personalized normative feedback, an intervention to dispel false beliefs that an individual's risky behavior is normal, and encouraging people to reduce the overall time spent drinking.</p><p>Future studies should examine the effectiveness of interventions on each of these features. While the wrist-worn monitor provides more detail about intoxication than self-report, it may miss lower-intensity drinking days. The morning diaries are subject to memory and may underreport or misreport blackouts.</p><p>Alcohol-induced blackouts are a significant problem among college students, with significant consequences. Prior studies have reported that 80 percent of college student drinkers reported at least one, and an average of eight, alcohol-induced blackouts during college. More than three additional consequences, which range from embarrassment to assault, occur on nights when students experience a blackout.</p><p>Transdermal alcohol concentration features predict alcohol-induced blackouts in college students. V. Richards, S. Glenn, R. Turrisi, K. Mallett, S. Ackerman, M. Russell. (pp. 880–888)</p><p>Low-to-moderate drinking may not be protective against certain health conditions, and “safe” alcohol use guidelines may be substantially off base. These are among the implications of a review of studies that use a novel research method. For most health conditions, the evidence that any amount of drinking increases risk is strong. For some other diseases, however, traditional data analysis yields a J-curve effect. In these findings, low-to-moderate drinking coincides with the lowest disease risk, while abstainers have a slightly higher risk, and heavy drinkers have a much greater risk. That's why a limited amount of red wine, which is high in antioxidants, has been considered protective against certain types of heart disease, type 2 diabetes, dementia, and depression. The premise that alcohol in smaller amounts has health benefits, somewhat offsetting its harms at higher doses, is built into models of alcohol's individual and societal effects and costs and public health policy and drinking guidelines.</p><p>The J-curve effect is facing increasing scrutiny of biases that may be contributing to it. The key challenge for researchers is establishing causation. Alcohol studies are observational; ethical and practical barriers prevent randomized controlled trials. However, observational studies are subject to biases, especially to do with “confounders”—like socioeconomic status and other hard-to-measure factors that may influence health outcomes. Recent advances in statistical approaches, combined with increasing availability of large data sets, offer ways to use observational data that more closely resemble randomized controlled trials. For the review in <i>Alcohol: Clinical & Experimental Research</i>, investigators in Australia compared findings using more novel methodologies to findings from traditional approaches exploring alcohol's effect on long-term disease outcomes.</p><p>One of the methods the researchers focused on was Mendelian Randomization (MR). This approach relates genetically predicted exposure levels (e.g., alcohol use) to health outcomes (e.g., a specific disease). 
Genetic analysis is especially appropriate for scrutinizing J-shaped curves of alcohol's health effects. For example, the MR approach suggests that alcohol at low levels may still be protective for certain conditions, such as type 2 diabetes—but its protective effects now seem much smaller and the risk much bigger than traditional methods suggested. For cardiovascular disease, the perceived benefits of alcohol disappear. An MR evaluation of alcohol and all-cause mortality in men also found no protective effects of alcohol, a finding that contrasts with observational analysis of the same population sample.</p><p>Improving our understanding of alcohol's long-term effects is crucial for policy and practice. Even if the protective effect of drinking is confirmed to be causal for some health outcomes, it is likely small and more than offset by alcohol's harms. Discrediting the J-curve could have substantive effects on drinking guidelines. In Australia, the number of drinks per week associated with acceptable health risks could fall from 10 to 2½. Clinical guidance on alcohol risk may need to be tailored to individuals, depending on their underlying risk factors and demographic characteristics. The researchers call for greater focus on the mechanisms by which alcohol may exert some protection, since this could help identify alternatives. For example, if alcohol in low amounts modifies cardiovascular risk by reducing platelet activity, aspirin can achieve that without the risks of drinking. Potentially, any protective effects of alcohol could be reframed as proof of concept for lifestyle interventions. The researchers acknowledge that even novel approaches to exploring causality are imperfect. Triangulating multiple analytical methods with complementary strengths and weaknesses is the most promising route to understanding alcohol's long-term health effects.</p><p>Is low-level alcohol consumption really health protective? A critical review of approaches to promote causal inference and recent applications. R. Visontay, L. Mewton, M. Sunderland, C. Chapman, T. Slade. (pp. 771–780)</p><p>When exposed to stress, people with alcohol use disorder engage parts of the brain associated with both stress and addiction, which may cause them to drink or crave alcohol after a stressful experience, suggest the authors of a study published in <i>Alcohol: Clinical and Experimental Research</i>. The brain imaging study of people with alcohol use disorder also found that women's brains respond differently to stressors than men's brains, showing greater activation of the amygdala and areas of the brain related to alcohol use disorder. The findings may improve understanding of the neural mechanisms associated with alcohol use disorder, including among women, whose rates of alcohol use disorder, binge drinking, and alcohol use have increased sharply in recent years.</p><p>Stress frequently triggers drinking as well as relapse in people with alcohol use disorder. Prior research has shown that alcohol use disorder and stress cause changes to overlapping areas of the brain in a way that can inhibit a person's ability to cope with stress and lead to continued alcohol use.</p><p>For this study, researchers sought to examine how the brains of people with moderate to severe alcohol use disorder respond to acute stressors. Functional magnetic resonance imaging (fMRI), used to identify parts of the brain engaged during the performance of different tasks, examined which brain regions are activated during a stress condition. 
While undergoing the fMRI, participants were given a set of tasks in the form of math problems of varying complexity, along with negative feedback and social pressure to improve their performance.</p><p>In both men and women, exposure to the stress condition activated neurocircuits in the brain associated with stress. During the stress condition, the brains of women showed increased activation of the amygdala, which is responsible for the body's reaction to threats. There was also greater activation in women compared to men in areas of the brain responsible for emotional regulation and self-referential processing. Activation in these areas might signal, for example, participants' thinking about their performance, comparing their performance to others, and regulating their emotions related to poor performance.</p><p>Female participants reported higher levels of anxiety than the male participants prior to the scan. Male participants, however, reported greater stress following the stressor than women did and also showed less activation in areas of the brain related to self-referential processing and emotional regulation, suggesting that female participants' greater use of higher-order regulatory processing in response to the stressor may have led to their feeling less stress than men following the scan.</p><p>Twenty-five participants, 15 men and 10 women, aged between 18 and 65, with an average age of 43, were included in the study. There were no significant demographic, substance use, alcohol use, or clinical differences between men and women in the study. This study was part of a larger medication trial where some participants were taking an anti-inflammatory medication that may have affected the neural and behavioral responses to stress. Future studies might measure biological indicators of stress, such as cortisol levels.</p><p>Sex differences in neural response to an acute stressor in individuals with an alcohol use disorder. E. Grodin, D. Kirsch, M. Belnap, L. Ray. (pp. 843–854)</p>","PeriodicalId":72145,"journal":{"name":"Alcohol (Hanover, York County, Pa.)","volume":null,"pages":null},"PeriodicalIF":3.0000,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/acer.15343","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Alcohol (Hanover, York County, Pa.)","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/acer.15343","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"SUBSTANCE ABUSE","Score":null,"Total":0}
Abstract
When drinking choices are framed as “just one drink,” with each single drink appearing to carry only slight risk, the result can ironically be heavier drinking and alcohol-related harms. That's the finding of a novel study exploring the decision-making process behind binge drinking. A better understanding of how people think about heavy episodic drinking could inform prevention and intervention approaches and help reduce the serious negative consequences of alcohol use. Young adults are especially vulnerable to high-risk drinking and its consequences; 29% are recent binge drinkers, and 15% meet the criteria for alcohol use disorder (AUD). Research and prevention efforts commonly assume that binge drinking reflects a lack of knowledge about its harmful effects, implying that people consciously decide to consume large amounts. Another possibility is that each drink presents its own low-stakes decision: whether to have one (or one more) drink. For the study in Alcohol: Clinical & Experimental Research, investigators at Cornell University examined drinking decisions through the lens of “fuzzy-trace theory,” which describes how people mentally frame choices that involve risk. Fuzzy processing may shape alcohol use as a series of drink-by-drink choices rather than a single decision to consume, say, 5 drinks in a night.
The researchers worked with 351 college students aged 18–31; 3 in 4 were women. The participants took surveys assessing their perceived risk of one drink, of heavy drinking, and of drinking consequences, as well as their overall sensitivity to risk. They also provided information on their recent drinking and experiences of negative consequences (such as risky driving or embarrassment) and were screened for dangerous drinking and AUD. Each participant was asked how likely they would be, at a hypothetical party, to get a first drink, then a second, and so on, up to an eighth. The researchers used statistical analysis to explore varying perceptions of risk related to alcohol use and their associations with measures of participants' drinking behavior: drinks consumed per week, peak blood alcohol content (BAC), binge drinking in the last month, and whether participants met criteria for dangerous drinking or AUD.
Close to 1 in 4 participants reported binge drinking in the past month, and 1 in 3 met criteria for hazardous drinking. Perceived risk of one drink strongly predicted alcohol-related decision-making when choices were made one drink at a time. Those who perceived no risk in a single drink drank more and experienced more alcohol-related consequences than those who saw low risk in one drink. Participants who perceived less risk in a single drink were more likely to start and continue drinking than those who associated one drink with higher risk. This effect persisted up to five drinks, the threshold for binge drinking. These participants also reported more drinks per week, higher peak BAC, and more alcohol binges, and scored higher on scales of dangerous drinking and alcohol-related harms than those who saw higher risk in one drink. Moderate drinkers and abstainers were more risk-averse than heavy drinkers and more likely to perceive risk in a single drink. Higher perceived risk of heavy drinking was linked to lower likelihood of accepting a fourth, fifth, sixth, seventh, or eighth drink. But neither this nor risk sensitivity was protective against unsafe alcohol use.
The finding that “one-drink-at-a-time” thinking predicted risky decisions about alcohol supports the relevance of fuzzy-trace theory in the context of drinking decisions. The perceived risk of one hypothetical drink predicted real-world drinking behavior and the likelihood of AUD. The risk of AUD is especially heightened for those who believe a single drink carries zero risk. Early decisions around drinking may have larger effects on drinking and alcohol outcomes than decisions about later drinks. The study calls into question the effectiveness of messages about limiting consumption, which may imply that lower amounts of alcohol are not risky. Harm reduction messaging could instead address beliefs about the presumed safety of one drink. Future research could identify additional processes that drive decisions to decline early drinks. It could also examine whether the findings are relevant to other self-regulation challenges that may also involve serial decisions about small amounts rather than a single decision about a large amount, like gambling, procrastination, and overeating.
Making decisions one drink at a time and the “just one drink” effect: A fuzzy-trace theory model of harmful drinking. B. Hayes, V. Reyna, S. Edelson. (pp. 889–902)
Certain drinking behaviors beyond just the quantity of alcohol consumed may predict the likelihood that a person will experience an alcohol-induced blackout, a condition in which someone is conscious and engaging with their surroundings but later cannot remember some or all of what occurred. While in this condition, people are at higher risk for a range of consequences, including violence or sexual assault. A study published in Alcohol: Clinical and Experimental Research found that the speed with which college students become intoxicated, how long their intoxication levels keep rising, and their peak intoxication were each associated with experiencing an alcohol-induced blackout. Interventions targeting the drinking behaviors associated with these factors may reduce the risk of alcohol-induced blackouts.
Researchers analyzed data from an intensive longitudinal study involving 79 college sophomores and juniors who typically drank four or more drinks on a weekend day and had experienced at least one alcohol-induced blackout during the past semester. The students wore wristwatch-like devices with transdermal alcohol concentration sensors, which measured intoxication levels through the skin on 12 weekend days. The students also completed diaries each morning to assess their memory of the prior day.
Over the 12-day period, the sensors detected a total of 486 days of alcohol use, and students reported a combined 147 alcohol-induced blackouts. Seventy percent of the students experienced at least one alcohol-induced blackout, with 80 percent of female students and 70 percent of male students reporting a blackout. Students who reported at least one blackout had, on average, 2.2 alcohol-induced blackouts during the 12 days.
Days when alcohol concentration rose faster, days with higher peak alcohol concentration, and days with a longer duration of rising alcohol concentration each predicted alcohol-induced blackouts. The authors suggested that targeting reductions in any one of these three features may lead to reductions in one or both of the others. Strategies to slow the rise in intoxication might include encouraging students to avoid drinking games and to alternate between alcoholic and nonalcoholic drinks, which would also help reduce peak alcohol concentration. Other approaches include personalized normative feedback, an intervention that dispels the false belief that one's risky behavior is normal, and encouraging people to reduce the overall time spent drinking.
Future studies should examine the effectiveness of interventions targeting each of these features. While the wrist-worn monitor provides more detail about intoxication than self-report, it may miss lower-intensity drinking days. The morning diaries rely on memory and may underreport or misreport blackouts.
Alcohol-induced blackouts are a widespread problem among college students, with serious consequences. Prior studies have reported that 80 percent of college student drinkers experience at least one alcohol-induced blackout during college, with an average of eight. On nights when students experience a blackout, they report more than three additional consequences, ranging from embarrassment to assault.
Transdermal alcohol concentration features predict alcohol-induced blackouts in college students. V. Richards, S. Glenn, R. Turrisi, K. Mallett, S. Ackerman, M. Russell. (pp. 880–888)
Low-to-moderate drinking may not be protective against certain health conditions, and “safe” alcohol use guidelines may be substantially off base. These are among the implications of a review of studies that use novel research methods. For most health conditions, the evidence that any amount of drinking increases risk is strong. For some other diseases, however, traditional data analysis yields a J-curve effect: low-to-moderate drinking coincides with the lowest disease risk, abstainers have a slightly higher risk, and heavy drinkers have a much greater risk. That's why a limited amount of red wine, which is high in antioxidants, has been considered protective against certain types of heart disease, type 2 diabetes, dementia, and depression. The premise that alcohol in smaller amounts has health benefits, somewhat offsetting its harms at higher doses, is built into models of alcohol's individual and societal effects and costs, as well as into public health policy and drinking guidelines.
The J-curve effect is facing increasing scrutiny over biases that may contribute to it. The key challenge for researchers is establishing causation. Alcohol studies are observational; ethical and practical barriers prevent randomized controlled trials. Observational studies, however, are subject to biases, especially those involving “confounders,” such as socioeconomic status and other hard-to-measure factors that may influence health outcomes. Recent advances in statistical approaches, combined with the increasing availability of large data sets, offer ways of analyzing observational data that more closely approximate randomized controlled trials. For the review in Alcohol: Clinical & Experimental Research, investigators in Australia compared findings from these novel methodologies with findings from traditional approaches to studying alcohol's effects on long-term disease outcomes.
One of the methods the researchers focused on was Mendelian randomization (MR). This approach relates genetically predicted exposure levels (e.g., alcohol use) to health outcomes (e.g., a specific disease). Because genetic variants are fixed at conception and are not shaped by lifestyle confounders, this kind of analysis is especially well suited to scrutinizing J-shaped curves of alcohol's health effects. For example, the MR approach suggests that alcohol at low levels may still be protective for certain conditions, such as type 2 diabetes, but its protective effects now appear much smaller, and the associated risks much larger, than traditional methods suggested. For cardiovascular disease, the perceived benefits of alcohol disappear. An MR evaluation of alcohol and all-cause mortality in men also found no protective effect of alcohol, a finding that contrasts with observational analysis of the same population sample.
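For readers curious about the mechanics, the sketch below shows one common way an MR estimate is computed from summary statistics: each genetic variant yields a Wald ratio (its association with the outcome divided by its association with the exposure), and the ratios are combined with inverse-variance weights. The effect sizes here are hypothetical placeholders, and this is a minimal illustration of the general technique, not the specific analyses discussed in the review.

```python
# Illustrative sketch of a Mendelian randomization (MR) estimate computed from
# summary statistics. All variant effect sizes below are hypothetical.
#   beta_exp : a variant's association with the exposure (e.g., alcohol use)
#   beta_out : the same variant's association with the outcome (e.g., disease risk)
#   se_out   : standard error of the outcome association

variants = [
    # (beta_exp, beta_out, se_out) -- hypothetical numbers for three variants
    (0.12, 0.030, 0.010),
    (0.08, 0.018, 0.008),
    (0.15, 0.042, 0.012),
]

ratios, weights = [], []
for beta_exp, beta_out, se_out in variants:
    ratio = beta_out / beta_exp           # Wald ratio: causal effect implied by this variant
    se_ratio = se_out / abs(beta_exp)     # first-order approximation of its standard error
    ratios.append(ratio)
    weights.append(1.0 / se_ratio**2)     # inverse-variance weight

# Inverse-variance-weighted (IVW) combination across variants
ivw_estimate = sum(r * w for r, w in zip(ratios, weights)) / sum(weights)
ivw_se = (1.0 / sum(weights)) ** 0.5

print(f"IVW MR estimate: {ivw_estimate:.3f} (SE {ivw_se:.3f})")
```

In practice, MR analyses also test the instrumental-variable assumptions (for example, that the variants affect the outcome only through the exposure), which this sketch omits.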
Improving our understanding of alcohol's long-term effects is crucial for policy and practice. Even if the protective effect of drinking is confirmed to be causal for some health outcomes, it is likely small and more than offset by alcohol's harms. Discrediting the J-curve could have substantive effects on drinking guidelines. In Australia, the number of drinks per week associated with acceptable health risks could fall from 10 to 2½. Clinical guidance on alcohol risk may need to be tailored to individuals, depending on their underlying risk factors and demographic characteristics. The researchers call for greater focus on the mechanisms by which alcohol may exert some protection, since this could help identify alternatives. For example, if alcohol in low amounts modifies cardiovascular risk by reducing platelet activity, aspirin can achieve that without the risks of drinking. Potentially, any protective effects of alcohol could be reframed as proof of concept for lifestyle interventions. The researchers acknowledge that even novel approaches to exploring causality are imperfect. Triangulating multiple analytical methods with complementary strengths and weaknesses is the most promising route to understanding alcohol's long-term health effects.
Is low-level alcohol consumption really health protective? A critical review of approaches to promote causal inference and recent applications. R. Visontay, L. Mewton, M. Sunderland, C. Chapman, T. Slade. (pp. 771–780)
When exposed to stress, people with alcohol use disorder engage parts of the brain associated with both stress and addiction, which may cause them to drink or crave alcohol after a stressful experience, suggest the authors of a study published in Alcohol: Clinical and Experimental Research. The brain imaging study of people with alcohol use disorder also found that women's brains respond differently to stressors than men's brains, showing greater activation of the amygdala and areas of the brain related to alcohol use disorder. The findings may improve understanding of the neural mechanisms associated with alcohol use disorder, including among women, whose rates of alcohol use disorder, binge drinking, and alcohol use have increased sharply in recent years.
Stress frequently triggers drinking as well as relapse in people with alcohol use disorder. Prior research has shown that alcohol use disorder and stress cause changes to overlapping areas of the brain in a way that can inhibit a person's ability to cope with stress and lead to continued alcohol use.
For this study, researchers sought to examine how the brains of people with moderate to severe alcohol use disorder respond to acute stressors. They used functional magnetic resonance imaging (fMRI), which identifies the parts of the brain engaged during the performance of different tasks, to examine which brain regions are activated during a stress condition. While undergoing the fMRI, participants were given a set of tasks in the form of math problems of varying complexity, along with negative feedback and social pressure to improve their performance.
In both men and women, exposure to the stress condition activated neurocircuits associated with stress. During the stress condition, women's brains showed increased activation of the amygdala, which governs the body's reaction to threats. Women also showed greater activation than men in areas of the brain responsible for emotional regulation and self-referential processing. Activation in these areas might reflect, for example, participants thinking about their performance, comparing their performance to others', and regulating their emotions related to poor performance.
Female participants reported higher levels of anxiety than male participants prior to the scan. Male participants, however, reported greater stress following the stressor than women did and also showed less activation in areas of the brain related to self-referential processing and emotional regulation. This suggests that female participants' greater use of higher-order regulatory processing in response to the stressor may have left them feeling less stress than men following the scan.
Twenty-five participants, 15 men and 10 women, aged 18 to 65, with an average age of 43, were included in the study. There were no significant demographic, substance use, alcohol use, or clinical differences between the men and women in the study. The study was part of a larger medication trial in which some participants were taking an anti-inflammatory medication that may have affected their neural and behavioral responses to stress. Future studies might measure biological indicators of stress, such as cortisol levels.
Sex differences in neural response to an acute stressor in individuals with an alcohol use disorder. E. Grodin, D. Kirsch, M. Belnap, L. Ray. (pp. 843–854)