Comments on “New Project for Scientific Psychology” by Mark Solms

E. Brändas, R. Poznanski
{"title":"Comments on “New Project for Scientific Psychology” by Mark Solms","authors":"E. Brändas, R. Poznanski","doi":"10.1080/15294145.2021.1878606","DOIUrl":null,"url":null,"abstract":"In his impressive “New Project for a Scientific Psychology,” Mark Solms presents us with a revision of Freud’s original “Project for a Scientific Psychology: General Scheme.”While doing so, he conveys a neuropsychoanalytic perspective on Chalmers’ “hard problem” of consciousness. In Solms’ update, he mentions a few key notions and theories that were not available to Freud and his students, viz. (i) Shannon’s concept of information, (ii) surprisal, (iii) predictive coding, (iii) Friston’s free energy, and (iv) Panksepp’s affective neuroscience. Although the intention is not foremost to solve the hard problem, Solms states that he wants to investigate why and how consciousness arises, i.e. why and how there is something it is like to be for an organism. In other words, how and why do neurophysiological activities produce the “experience of consciousness.” The new project (Solms, 2017) implicates the hard problem, the view of dual-aspect monism, consciousness as a noncognitive, but affective function with the latter being homeostatic deviations that cognition suppresses surprise, all formulated within the free energy principle (FEP) (Friston, 2010). To achieve this program, Solms bases his theory on Friston’s model of FEP, shown to minimize surprisal as a unifying principle of brain functioning. The theoretical project sets down that prior hypotheses are supplanted by posterior ones implicating that the central organ of the human nervous system operates as a “Bayesian brain.” Yet, Solms’ dualistic view entails that the material brain does not produce the mind and therefore creates a “ghost in the machine” (Koestler, 1967; Koestler & Smythies, 1969). The use of hard-core scientific terms such as free energy, surprisal, negentropy, information, dualism, self-organization, Markov blanket, etc., prompts useful strategies and subtle and interesting interpretations. Nevertheless, their utility might lead to a confusing mix of equivocal terminology. For instance, is not minimizing the free energy and entropy a contradiction? First, let us briefly review some standard definitions from chemistry. The notion of free energy, H = U− TS, where U, T , S are the internal energy, absolute temperature, and entropy, respectively, reveals details of the direction of a chemical reaction. H is minimum and S maximum at equilibrium. However, in the present context, it is evident that FEP is a minimum principle for the surprise, which shifts the focus to S. Now surprisal is an information-theoretic concept, useful in many areas, which here assumes a somewhat ambiguous role. For instance, it was employed in chemistry by Bernstein and Levine (1972) to improve the understanding of non-equilibrium thermodynamic systems. The surprisal analysis is a way to identify and characterize systems that deviate from the state of maximum entropy due to physical constraints that prevent a situation of balancing equilibrium. Quantifying the probability of a particular event in relation to its prior probability, it is easy to identify surprisal with the lowering of entropy and negentropic gain. Extensions to feature biological phenomena and evolving cellular processes, supporting homeostasis, are straightforward. In passing, one should note that constraining the system costs energy and entropy. 
A complete formulation should maintain a steady state, equating reduction and production of entropy (Nicolis & Prigogine, 1977), showing that self-organizational life processes are commensurate with the second law. Entropy reduction and surprisal have also been extensively utilized in word processing and probabilistic language models (Venhuizen et al., 2019). Here one distinguishes between different aspects of the cognitive process, such as state-by-state surprisal versus end-state entropy reduction. It combines the theory of communication with the process of navigation through semantic space. The question is whether it is possible to estimate surprisal, the negentropic gain, without considering the cost or entropy production? Translating the hierarchical generative models of neuroscience to fit the concepts above necessarily stretches its rendering. In predictive coding, entropy breeds a “set of Bayesian beliefs” and minimizing the","PeriodicalId":39493,"journal":{"name":"Neuropsychoanalysis","volume":"22 1","pages":"47 - 49"},"PeriodicalIF":0.0000,"publicationDate":"2020-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15294145.2021.1878606","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuropsychoanalysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/15294145.2021.1878606","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Psychology","Score":null,"Total":0}
引用次数: 3

Abstract

In his impressive “New Project for a Scientific Psychology,” Mark Solms presents us with a revision of Freud’s original “Project for a Scientific Psychology: General Scheme.” While doing so, he conveys a neuropsychoanalytic perspective on Chalmers’ “hard problem” of consciousness. In Solms’ update, he mentions a few key notions and theories that were not available to Freud and his students, viz. (i) Shannon’s concept of information, (ii) surprisal, (iii) predictive coding, (iv) Friston’s free energy, and (v) Panksepp’s affective neuroscience. Although the intention is not foremost to solve the hard problem, Solms states that he wants to investigate why and how consciousness arises, i.e. why and how there is something it is like to be an organism. In other words, how and why do neurophysiological activities produce the “experience of consciousness”? The new project (Solms, 2017) implicates the hard problem, the view of dual-aspect monism, and consciousness as a noncognitive but affective function, the latter registering homeostatic deviations, with cognition acting to suppress surprise, all formulated within the free energy principle (FEP) (Friston, 2010). To achieve this program, Solms bases his theory on Friston’s model of the FEP, shown to minimize surprisal as a unifying principle of brain functioning. The theoretical project sets down that prior hypotheses are supplanted by posterior ones, implying that the central organ of the human nervous system operates as a “Bayesian brain.” Yet Solms’ dualistic view entails that the material brain does not produce the mind and therefore creates a “ghost in the machine” (Koestler, 1967; Koestler & Smythies, 1969).

The use of hard-core scientific terms such as free energy, surprisal, negentropy, information, dualism, self-organization, Markov blanket, etc., prompts useful strategies and subtle and interesting interpretations. Nevertheless, their use might lead to a confusing mix of equivocal terminology. For instance, is it not a contradiction to minimize both the free energy and the entropy? First, let us briefly review some standard definitions from chemistry. The notion of free energy, F = U − TS, where U, T, S are the internal energy, absolute temperature, and entropy, respectively, reveals the direction in which a chemical reaction proceeds: F is a minimum and S a maximum at equilibrium. However, in the present context, it is evident that the FEP is a minimum principle for the surprise, which shifts the focus to S.

Surprisal is an information-theoretic concept, useful in many areas, which here assumes a somewhat ambiguous role. For instance, it was employed in chemistry by Bernstein and Levine (1972) to improve the understanding of non-equilibrium thermodynamic systems. Surprisal analysis is a way to identify and characterize systems that deviate from the state of maximum entropy because of physical constraints that prevent them from reaching equilibrium. Since surprisal quantifies the probability of a particular event relative to its prior probability, it is easy to identify it with a lowering of entropy, i.e. a negentropic gain. Extensions to biological phenomena and evolving cellular processes supporting homeostasis are straightforward. In passing, one should note that constraining the system costs energy and entropy. A complete formulation should maintain a steady state, equating the reduction and production of entropy (Nicolis & Prigogine, 1977), showing that self-organizational life processes are commensurate with the second law.
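To see why minimizing free energy and minimizing surprise need not conflict, it may help to recall the standard variational decomposition underlying Friston (2010); the notation below (q for the approximate posterior over hidden states s, o for observations) is ours, added purely for illustration, and is not taken from Solms’ text:

\[
F[q] \;=\; \mathbb{E}_{q(s)}\bigl[\ln q(s) - \ln p(s,o)\bigr]
     \;=\; D_{\mathrm{KL}}\bigl[q(s)\,\big\|\,p(s\mid o)\bigr] \;-\; \ln p(o)
     \;\ge\; -\ln p(o).
\]

Since the Kullback-Leibler divergence is non-negative, the variational free energy upper-bounds the surprisal −ln p(o), so minimizing it minimizes surprise; despite the shared name and the formal analogy, it is not the thermodynamic quantity F = U − TS.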
Entropy reduction and surprisal have also been extensively utilized in word processing and probabilistic language models (Venhuizen et al., 2019). Here one distinguishes between different aspects of the cognitive process, such as state-by-state surprisal versus end-state entropy reduction, combining the theory of communication with the process of navigating through semantic space. The question is whether it is possible to estimate surprisal, the negentropic gain, without considering the cost, i.e. the entropy production. Translating the hierarchical generative models of neuroscience to fit the concepts above necessarily stretches their rendering. In predictive coding, entropy breeds a “set of Bayesian beliefs” and minimizing the
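As a minimal sketch of the distinction between state-by-state surprisal and end-state entropy reduction discussed by Venhuizen et al. (2019), one can compute both quantities word by word over a toy next-word distribution; the utterances and probabilities below are invented solely for illustration and are not taken from the cited work:

```python
import math

# Toy distribution over four possible two-word utterances.
# The words and probabilities are invented purely for illustration.
continuations = {
    ("the", "dog"): 0.4,
    ("the", "cat"): 0.3,
    ("a", "dog"): 0.2,
    ("a", "cat"): 0.1,
}

def entropy(dist):
    """Shannon entropy (in bits) of a normalized probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def surprisal(p):
    """Surprisal (in bits) of an event with probability p."""
    return -math.log2(p)

observed = ("the", "dog")      # the utterance actually encountered
prefix = ()
beliefs = dict(continuations)  # current distribution over complete utterances

for word in observed:
    # State-by-state surprisal: probability of the next word given the prefix.
    total = sum(beliefs.values())
    p_word = sum(p for seq, p in beliefs.items() if seq[len(prefix)] == word) / total

    # End-state entropy reduction: uncertainty over complete utterances
    # before versus after incorporating the word.
    before = {seq: p / total for seq, p in beliefs.items()}
    after = {seq: p for seq, p in beliefs.items() if seq[len(prefix)] == word}
    norm = sum(after.values())
    after = {seq: p / norm for seq, p in after.items()}

    print(f"word={word!r:7} surprisal={surprisal(p_word):.3f} bits, "
          f"entropy reduction={entropy(before) - entropy(after):.3f} bits")

    prefix += (word,)
    beliefs = after
```

The point of the sketch is simply that the two measures are computed from different distributions (the next-word distribution versus the distribution over complete interpretations) and therefore need not track one another.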