More About Fractals of Speech: Incompleteness, Wobbling Consistency and Limits to Understanding.

IF 0.6 · Q4 · Psychology, Mathematical
Eystein Glattre, Havard Glattre
Nonlinear Dynamics Psychology and Life Sciences, vol. 24, no. 4, pp. 389–402. Published 2020-10-01. Journal Article. Citations: 0.

Abstract

This article presents the geometrical-fractal text-tree model of speech and writing. Its development is part of a project whose long-term goal is to answer whether artificial intelligence and the corresponding human intelligence are fundamentally different. Text-tree models consist of word-shrubs 'glued' together by syntax. Word-shrubs are built on two principles. The first is the dictionary, or semantic, principle: every verbal meaning can be explained by the meanings of other words. The second is the initiator-generator procedure used to construct geometrical fractals. The word-shrub grows from a root-word, the first initiator, when the words of its meaning, the generators, are attached to it as branches. All generator words are then redefined as new initiators and connected to the words of their meanings, the second generators. These in turn are redefined as new initiators, each connected to its generator-meaning, and the process repeats ad infinitum. Each new layer of generators constitutes a branching level. Consistency of verbal meaning is achieved by fixing the number of branching levels of the word-shrub; wobbling consistency occurs when the speaker or writer shifts between branching levels. We develop the M-method, central to most of the results, because it allows differences in verbal meaning to be estimated numerically. One interesting property of the text-tree model is that there must exist a cloud of unexperienced meaning variants of human texts. Most interesting, perhaps, is the demonstration of what we call the lemma of incompleteness, which states that humans cannot prove beyond doubt that they understand correctly what they say and write. This lemma appears to be a distant barrier to the expansion of human understanding and is relevant to comparing human and artificial intelligence.
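The initiator-generator procedure described in the abstract can be sketched in a few lines of code. This is a minimal illustration, not the authors' implementation: the toy dictionary, the function name `grow_word_shrub`, and the nested-dict representation are all assumptions made for the example. Each word (initiator) is attached to the words of its definition (generators), those generators become the next layer of initiators, and stopping after a fixed number of iterations fixes the number of branching levels.

```python
# Illustrative sketch (hypothetical, not the paper's code): growing a
# word-shrub by the initiator-generator procedure to a fixed depth.

# Hypothetical toy dictionary mapping each word to the words of its
# definition (the dictionary/semantic principle).
TOY_DICTIONARY = {
    "dog": ["animal", "loyal"],
    "animal": ["living", "being"],
    "loyal": ["faithful"],
}

def grow_word_shrub(root, dictionary, levels):
    """Expand `root` into a nested tree with `levels` branching levels.

    At each step the current word (initiator) branches into the words
    of its definition (generators); recursion turns each generator
    into the next initiator. `levels == 0` stops the expansion, which
    is what fixes consistency of meaning in the model.
    """
    if levels == 0:
        return {root: []}
    generators = dictionary.get(root, [])  # words of the definition
    return {root: [grow_word_shrub(g, dictionary, levels - 1)
                   for g in generators]}

# Two branching levels grown from the root-word "dog".
shrub = grow_word_shrub("dog", TOY_DICTIONARY, levels=2)
```

Shifting between calls with different `levels` values while interpreting the same root-word corresponds, in this toy picture, to the wobbling consistency the abstract describes.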
