{"title":"Comments on “New Project for Scientific Psychology” by Mark Solms","authors":"E. Brändas, R. Poznanski","doi":"10.1080/15294145.2021.1878606","DOIUrl":null,"url":null,"abstract":"In his impressive “New Project for a Scientific Psychology,” Mark Solms presents us with a revision of Freud’s original “Project for a Scientific Psychology: General Scheme.”While doing so, he conveys a neuropsychoanalytic perspective on Chalmers’ “hard problem” of consciousness. In Solms’ update, he mentions a few key notions and theories that were not available to Freud and his students, viz. (i) Shannon’s concept of information, (ii) surprisal, (iii) predictive coding, (iii) Friston’s free energy, and (iv) Panksepp’s affective neuroscience. Although the intention is not foremost to solve the hard problem, Solms states that he wants to investigate why and how consciousness arises, i.e. why and how there is something it is like to be for an organism. In other words, how and why do neurophysiological activities produce the “experience of consciousness.” The new project (Solms, 2017) implicates the hard problem, the view of dual-aspect monism, consciousness as a noncognitive, but affective function with the latter being homeostatic deviations that cognition suppresses surprise, all formulated within the free energy principle (FEP) (Friston, 2010). To achieve this program, Solms bases his theory on Friston’s model of FEP, shown to minimize surprisal as a unifying principle of brain functioning. The theoretical project sets down that prior hypotheses are supplanted by posterior ones implicating that the central organ of the human nervous system operates as a “Bayesian brain.” Yet, Solms’ dualistic view entails that the material brain does not produce the mind and therefore creates a “ghost in the machine” (Koestler, 1967; Koestler & Smythies, 1969). The use of hard-core scientific terms such as free energy, surprisal, negentropy, information, dualism, self-organization, Markov blanket, etc., prompts useful strategies and subtle and interesting interpretations. Nevertheless, their utility might lead to a confusing mix of equivocal terminology. For instance, is not minimizing the free energy and entropy a contradiction? First, let us briefly review some standard definitions from chemistry. The notion of free energy, H = U− TS, where U, T , S are the internal energy, absolute temperature, and entropy, respectively, reveals details of the direction of a chemical reaction. H is minimum and S maximum at equilibrium. However, in the present context, it is evident that FEP is a minimum principle for the surprise, which shifts the focus to S. Now surprisal is an information-theoretic concept, useful in many areas, which here assumes a somewhat ambiguous role. For instance, it was employed in chemistry by Bernstein and Levine (1972) to improve the understanding of non-equilibrium thermodynamic systems. The surprisal analysis is a way to identify and characterize systems that deviate from the state of maximum entropy due to physical constraints that prevent a situation of balancing equilibrium. Quantifying the probability of a particular event in relation to its prior probability, it is easy to identify surprisal with the lowering of entropy and negentropic gain. Extensions to feature biological phenomena and evolving cellular processes, supporting homeostasis, are straightforward. In passing, one should note that constraining the system costs energy and entropy. 
A complete formulation should maintain a steady state, equating reduction and production of entropy (Nicolis & Prigogine, 1977), showing that self-organizational life processes are commensurate with the second law. Entropy reduction and surprisal have also been extensively utilized in word processing and probabilistic language models (Venhuizen et al., 2019). Here one distinguishes between different aspects of the cognitive process, such as state-by-state surprisal versus end-state entropy reduction. It combines the theory of communication with the process of navigation through semantic space. The question is whether it is possible to estimate surprisal, the negentropic gain, without considering the cost or entropy production? Translating the hierarchical generative models of neuroscience to fit the concepts above necessarily stretches its rendering. In predictive coding, entropy breeds a “set of Bayesian beliefs” and minimizing the","PeriodicalId":39493,"journal":{"name":"Neuropsychoanalysis","volume":"22 1","pages":"47 - 49"},"PeriodicalIF":0.0000,"publicationDate":"2020-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15294145.2021.1878606","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuropsychoanalysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/15294145.2021.1878606","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Psychology","Score":null,"Total":0}
Citations: 3
Abstract
In his impressive “New Project for a Scientific Psychology,” Mark Solms presents us with a revision of Freud’s original “Project for a Scientific Psychology: General Scheme.” While doing so, he conveys a neuropsychoanalytic perspective on Chalmers’ “hard problem” of consciousness. In his update, Solms mentions a few key notions and theories that were not available to Freud and his students, viz. (i) Shannon’s concept of information, (ii) surprisal, (iii) predictive coding, (iv) Friston’s free energy, and (v) Panksepp’s affective neuroscience. Although the intention is not foremost to solve the hard problem, Solms states that he wants to investigate why and how consciousness arises, i.e., why and how there is something it is like to be an organism. In other words, how and why do neurophysiological activities produce the “experience of consciousness”? The new project (Solms, 2017) engages the hard problem from the standpoint of dual-aspect monism, casting consciousness as a noncognitive but affective function: affect registers homeostatic deviations, while cognition suppresses surprise, all formulated within the free energy principle (FEP) (Friston, 2010). To carry out this program, Solms bases his theory on Friston’s model of the FEP, which takes the minimization of surprisal as a unifying principle of brain functioning. The theoretical project holds that prior hypotheses are supplanted by posterior ones, implying that the central organ of the human nervous system operates as a “Bayesian brain.” Yet Solms’ dualistic view entails that the material brain does not produce the mind, thereby creating a “ghost in the machine” (Koestler, 1967; Koestler & Smythies, 1969).

The use of hard-core scientific terms such as free energy, surprisal, negentropy, information, dualism, self-organization, Markov blanket, etc., prompts useful strategies and subtle and interesting interpretations. Nevertheless, their use might lead to a confusing mix of equivocal terminology. For instance, is there not a contradiction in minimizing the free energy and the entropy at the same time? First, let us briefly review some standard definitions from chemistry. The free energy, F = U − TS, where U, T, and S are the internal energy, absolute temperature, and entropy, respectively, reveals the direction of a chemical reaction: at equilibrium, F is a minimum and S a maximum. In the present context, however, it is evident that the FEP is a minimum principle for surprisal, which shifts the focus to S.

Now, surprisal is an information-theoretic concept, useful in many areas, which here assumes a somewhat ambiguous role. For instance, it was employed in chemistry by Bernstein and Levine (1972) to improve the understanding of non-equilibrium thermodynamic systems. Surprisal analysis is a way to identify and characterize systems that deviate from the state of maximum entropy owing to physical constraints that prevent the system from reaching equilibrium. Since surprisal quantifies the probability of a particular event relative to its prior probability, it is easy to identify surprisal with a lowering of entropy, i.e., a negentropic gain. Extensions to biological phenomena and evolving cellular processes that support homeostasis are straightforward. In passing, one should note that constraining the system costs both energy and entropy. A complete formulation should maintain a steady state, equating the reduction and production of entropy (Nicolis & Prigogine, 1977), showing that self-organizing life processes are compatible with the second law.
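As a concrete illustration of these relations, consider a minimal Python sketch (not part of the original commentary; the four-state distributions are invented for illustration). Entropy is the expected surprisal, and the entropy deficiency of Bernstein–Levine surprisal analysis measures how far physical constraints pull a system below maximum entropy, i.e., the negentropic gain:

```python
import numpy as np

def surprisal(p):
    """Shannon surprisal (self-information), in nats: I(x) = -ln p(x)."""
    return -np.log(p)

def entropy(p):
    """Shannon entropy as the expected surprisal of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    return float(np.sum(p * surprisal(p)))

def entropy_deficiency(p_obs, p_prior):
    """Entropy lowering of an observed distribution relative to a
    maximum-entropy prior, as in surprisal analysis: the Kullback-Leibler
    divergence D(p_obs || p_prior) >= 0, i.e. the negentropic gain."""
    p_obs = np.asarray(p_obs, dtype=float)
    p_prior = np.asarray(p_prior, dtype=float)
    return float(np.sum(p_obs * np.log(p_obs / p_prior)))

# Four-state system: the uniform prior is the maximum-entropy state; the
# constrained (observed) distribution is invented for illustration.
p_prior = np.array([0.25, 0.25, 0.25, 0.25])
p_obs = np.array([0.70, 0.15, 0.10, 0.05])

print(entropy(p_prior))                    # ln 4 ≈ 1.386 nats (maximum entropy)
print(entropy(p_obs))                      # ≈ 0.914 nats (constraints lower entropy)
print(entropy_deficiency(p_obs, p_prior))  # ≈ 0.472 nats of negentropic gain
```

Note that this bookkeeping records only the negentropic gain; the cost side, the entropy produced in maintaining the constraints, does not appear in it, which is precisely the point raised above.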
Entropy reduction and surprisal have also been extensively utilized in word processing and probabilistic language models (Venhuizen et al., 2019). Here one distinguishes between different aspects of the cognitive process, such as state-by-state surprisal versus end-state entropy reduction (see the sketch at the end of this section). This approach combines the theory of communication with the process of navigating a semantic space. The question is whether it is possible to estimate surprisal, the negentropic gain, without considering the cost, i.e., the entropy production. Translating the hierarchical generative models of neuroscience to fit the concepts above necessarily stretches their rendering. In predictive coding, entropy breeds a “set of Bayesian beliefs” and minimizing the […]
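To make the distinction above concrete, here is a minimal Python sketch, with invented numbers, of state-by-state surprisal versus end-state entropy reduction over a hypothetical “semantic space” of three candidate interpretations, in the spirit of Venhuizen et al. (2019). A single Bayesian word-update suffices to show that the two quantities need not coincide:

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy, in bits, of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Hypothetical "semantic space": three candidate end-state interpretations
# of an unfolding sentence; the prior and likelihoods are invented.
prior = np.array([1/3, 1/3, 1/3])        # P(interpretation) before the word
likelihood = np.array([0.7, 0.2, 0.1])   # P(word | interpretation)

word_prob = float(prior @ likelihood)    # P(word): marginal over interpretations
word_surprisal = -np.log2(word_prob)     # state-by-state surprisal of the word

posterior = prior * likelihood / word_prob               # Bayesian belief update
entropy_reduction = entropy_bits(prior) - entropy_bits(posterior)

print(f"surprisal of word: {word_surprisal:.3f} bits")   # ≈ 1.585 bits
print(f"entropy reduction: {entropy_reduction:.3f} bits")  # ≈ 0.428 bits
```

In this toy update, the word carries about 1.58 bits of surprisal yet reduces the uncertainty over end states by only about 0.43 bits, illustrating why the two measures track different aspects of the cognitive process.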