Intentionality for better communication in minimally conscious AI design

R. Poznanski, L. Cacha, V. Sbitnev, N. Iannella, S. Parida, E.J. Brandas, J.Z. Achimowicz
{"title":"Intentionality for better communication in minimally conscious AI design","authors":"R. Poznanski, L. Cacha, V. Sbitnev, N. Iannella, S. Parida, E.J. Brandas, J.Z. Achimowicz","doi":"10.56280/1600750890","DOIUrl":null,"url":null,"abstract":"Consciousness is the ability to have intentionality, which is a process that operates at various temporal scales. To qualify as conscious, an artificial device must express functionality capable of solving the Intrinsicality problem, where experienceable form or syntax gives rise to understanding 'meaning' as a noncontextual dynamic prior to language. This is suggestive of replacing the Hard Problem of consciousness to build conscious artificial intelligence (AI) Developing model emulations and exploring fundamental mechanisms of how machines understand meaning is central to the development of minimally conscious AI. It has been shown by Alemdar and colleagues [New insights into holonomic brain theory: implications for active consciousness. Journal of Multiscale Neuroscience 2 (2023), 159-168] that a framework for advancing artificial systems through understanding uncertainty derived from negentropic action to create intentional systems entails quantum-thermal fluctuations through informational channels instead of recognizing (cf., introspection) sensory cues through perceptual channels. Improving communication in conscious AI requires both software and hardware implementation. The software can be developed through the brain-machine interface of multiscale temporal processing, while hardware implementation can be done by creating energy flow using dipole-like hydrogen ion (proton) interactions in an artificial 'wetwire' protonic filament. Machine understanding can be achieved through memristors implemented in the protonic 'wetwire' filament embedded in a real-world device. This report presents a blueprint for the process, but it does not cover the algorithms or engineering aspects, which need to be conceptualized before minimally conscious AI can become operational.","PeriodicalId":473923,"journal":{"name":"Journal of Multiscale Neuroscience","volume":"19 6","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Multiscale Neuroscience","FirstCategoryId":"0","ListUrlMain":"https://doi.org/10.56280/1600750890","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Consciousness is the ability to have intentionality, a process that operates across multiple temporal scales. To qualify as conscious, an artificial device must express functionality capable of solving the Intrinsicality problem, in which experienceable form or syntax gives rise to an understanding of 'meaning' as a noncontextual dynamic prior to language. This suggests replacing the Hard Problem of consciousness as the starting point for building conscious artificial intelligence (AI). Developing model emulations and exploring the fundamental mechanisms by which machines understand meaning are central to the development of minimally conscious AI. Alemdar and colleagues [New insights into holonomic brain theory: implications for active consciousness. Journal of Multiscale Neuroscience 2 (2023), 159-168] have shown that a framework for advancing artificial systems through the understanding of uncertainty, derived from negentropic action to create intentional systems, entails quantum-thermal fluctuations through informational channels rather than the recognition (cf. introspection) of sensory cues through perceptual channels. Improving communication in conscious AI requires both software and hardware implementation. The software can be developed through a brain-machine interface based on multiscale temporal processing, while the hardware can be realized by creating energy flow through dipole-like hydrogen ion (proton) interactions in an artificial 'wetwire' protonic filament. Machine understanding can be achieved through memristors implemented in the protonic 'wetwire' filament embedded in a real-world device. This report presents a blueprint for the process; it does not cover the algorithms or engineering aspects, which must be conceptualized before minimally conscious AI can become operational.
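The abstract names memristors in the protonic 'wetwire' filament as the substrate for machine understanding but deliberately leaves the algorithms and device engineering unspecified. Purely as an illustrative sketch, and not the authors' method, the Python snippet below simulates the standard HP linear ion drift memristor model (Strukov et al., 2008) to show the history-dependent resistance that makes memristive elements attractive for such hardware; all parameter values and the drive signal are assumptions chosen only to make the behavior visible.

```python
import numpy as np

# Illustrative HP linear ion drift memristor model (Strukov et al., 2008).
# Parameter values are assumptions for demonstration only; the paper does
# not specify device parameters for the protonic 'wetwire' filament.
R_ON, R_OFF = 100.0, 16e3     # low / high resistance states (ohms)
D = 10e-9                     # device thickness (m)
MU_V = 1e-14                  # ion mobility (m^2 s^-1 V^-1)
DT = 1e-4                     # integration time step (s)

def simulate(v_drive, x0=0.5):
    """Euler-integrate the memristor state under a voltage drive.

    x is the normalized width of the doped region (0..1); the memristance
    M(x) interpolates between R_ON and R_OFF, so the device 'remembers'
    the net charge that has flowed through it.
    """
    x = x0
    currents, states = [], []
    for v in v_drive:
        m = R_ON * x + R_OFF * (1.0 - x)           # instantaneous memristance
        i = v / m                                   # Ohm's law at this instant
        window = x * (1.0 - x)                      # Joglekar window, p = 1
        x += (MU_V * R_ON / D**2) * i * window * DT
        x = min(max(x, 0.0), 1.0)                   # keep the state physical
        currents.append(i)
        states.append(x)
    return np.array(currents), np.array(states)

if __name__ == "__main__":
    t = np.arange(0.0, 2.0, DT)
    v = 1.2 * np.sin(2 * np.pi * 1.0 * t)           # 1 Hz sinusoidal drive
    i, x = simulate(v)
    # The i-v trajectory traces a pinched hysteresis loop: the same voltage
    # produces different currents depending on the device's charge history.
    print(f"final state x = {x[-1]:.3f}, final memristance = "
          f"{R_ON * x[-1] + R_OFF * (1 - x[-1]):.1f} ohms")
```

The state variable's dependence on accumulated current is the minimal sense in which such an element stores a trace of its own input history; how that property would be coupled to multiscale temporal processing in a protonic filament is left open by the paper.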