More About Fractals of Speech: Incompleteness, Wobbling Consistency and Limits to Understanding.
Eystein Glattre, Havard Glattre
Nonlinear Dynamics Psychology and Life Sciences, 24(4), 389-402. Published 2020-10-01.
Citations: 0
Abstract
This article presents the geometrical-fractal text-tree model of speech and writing, developed as part of a project whose long-term goal is to answer whether Artificial Intelligence and the corresponding human intelligence are fundamentally different. Text-tree models consist of word-shrubs 'glued' together by syntax. Word-shrubs are designed using two principles. The first is the dictionary, or semantic, principle: all verbal meanings can be explained by the meanings of other words. The second is the initiator-generator procedure used to develop geometrical fractals. The structure of the word-shrub grows from the root-word when the meaning of the root-word, the generator, is connected as a branch to the root-word, which is the first initiator. All generator words are then redefined as new initiators and connected to their meanings, the second generators. The words of these are in turn redefined as new initiators, each connected to its generator-meaning, and so on ad infinitum. Each new layer of generators represents a branching level. Consistency of verbal meaning is achieved by fixing the number of branching levels of the word-shrub; wobbling consistency occurs when a speaker or writer shifts between branching levels. We develop the M-method, important for most of the results, because it allows differences in verbal meaning to be estimated numerically. An interesting property of the text-tree model is revealed by showing that there must exist a cloud of unexperienced meaning variants of human texts. Most interesting, perhaps, is the demonstration of what we call the lemma of incompleteness, which states that humans cannot prove beyond doubt that they understand correctly what they say and write. This lemma seems to be a distant barrier to the expansion of human understanding, and is relevant to understanding human versus artificial intelligence.
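The initiator-generator procedure described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: the dictionary, word choices, and function names are hypothetical, and it assumes each word's "meaning" is simply the list of words in its definition, expanded level by level to a fixed depth.

```python
# Toy dictionary: each word's "meaning" is the list of words in its definition.
# (Illustrative entries only; a real word-shrub would use an actual dictionary.)
TOY_DICTIONARY = {
    "tree": ["plant", "trunk"],
    "plant": ["living", "organism"],
    "trunk": ["stem", "tree"],
    "living": ["alive"],
    "organism": ["living", "being"],
    "stem": ["plant", "part"],
}

def grow_word_shrub(root, depth):
    """Grow a word-shrub from `root` down to a fixed number of branching levels.

    Level 0 holds the root-word (the first initiator). At each subsequent
    level, every initiator is connected to its generator words (its dictionary
    definition), and those generators are redefined as the next level's
    initiators. Fixing `depth` fixes the number of branching levels, which is
    what the abstract calls consistency of verbal meaning.
    """
    levels = [[root]]
    initiators = [root]
    for _ in range(depth):
        generators = []
        for word in initiators:
            # Words with no entry act as leaves of the shrub.
            generators.extend(TOY_DICTIONARY.get(word, []))
        levels.append(generators)
        initiators = generators  # generators become the new initiators
    return levels

shrub = grow_word_shrub("tree", 2)
```

Note that the shrub grows without bound as `depth` increases ("repeated ad infinitum"), and that words such as "tree" reappear at deeper levels, reflecting the circularity of the dictionary principle.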