Quantifying graphemic variation via large text corpora

IF 0.6 · CAS Q3 (Literature) · LANGUAGE & LINGUISTICS
Hanna Lüschow
{"title":"通过大型文本语料库量化文字差异","authors":"Hanna Lüschow","doi":"10.1515/zfs-2021-2038","DOIUrl":null,"url":null,"abstract":"Abstract The use of some basic computer science concepts could expand the possibilities of (manual) graphematic text corpus analysis. With these it can be shown that graphematic variation decreases constantly in printed German texts from 1600 to 1900. While the variability is continuously lesser on a text-internal level, it decreases faster for the whole available writing system of individual decades. But which changes took place exactly? Which types of variation went away more quickly, which ones persisted? How do we deal with large amounts of data which cannot be processed manually anymore? Which aspects are of special importance or go missing while working with a large textual base? The use of a measurement called entropy quantifies the variability of the spellings of a given word form, lemma, text or subcorpus, with few restrictions but also less details in the results. The difference between two spellings can be measured via Damerau-Levenshtein distance. To a certain degree, automated data handling can also determine the exact changes that took place. Afterwards, these differences can be counted and ranked. As data source the German Text Archive of the Berlin-Brandenburg Academy of Sciences and Humanities is used. It offers for example orthographic normalization – which is extremely useful –, preprocessing of parts of speech and lemmatization. As opposed to many other approaches the establishment of today’s normed spellings is not seen as the aim of the developments and is therefore not the focus of the research. Instead, the differences between individual spellings are of interest. Afterwards intra- and extralinguistic factors which caused these developments should be determined. These methodological findings could subsequently be used for improving research methods in other graphematic fields of interest, e. g. 
computer-mediated communication.","PeriodicalId":43494,"journal":{"name":"Zeitschrift Fur Sprachwissenschaft","volume":"40 1","pages":"421 - 440"},"PeriodicalIF":0.6000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Quantifying graphemic variation via large text corpora\",\"authors\":\"Hanna Lüschow\",\"doi\":\"10.1515/zfs-2021-2038\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract The use of some basic computer science concepts could expand the possibilities of (manual) graphematic text corpus analysis. With these it can be shown that graphematic variation decreases constantly in printed German texts from 1600 to 1900. While the variability is continuously lesser on a text-internal level, it decreases faster for the whole available writing system of individual decades. But which changes took place exactly? Which types of variation went away more quickly, which ones persisted? How do we deal with large amounts of data which cannot be processed manually anymore? Which aspects are of special importance or go missing while working with a large textual base? The use of a measurement called entropy quantifies the variability of the spellings of a given word form, lemma, text or subcorpus, with few restrictions but also less details in the results. The difference between two spellings can be measured via Damerau-Levenshtein distance. To a certain degree, automated data handling can also determine the exact changes that took place. Afterwards, these differences can be counted and ranked. As data source the German Text Archive of the Berlin-Brandenburg Academy of Sciences and Humanities is used. It offers for example orthographic normalization – which is extremely useful –, preprocessing of parts of speech and lemmatization. 
As opposed to many other approaches the establishment of today’s normed spellings is not seen as the aim of the developments and is therefore not the focus of the research. Instead, the differences between individual spellings are of interest. Afterwards intra- and extralinguistic factors which caused these developments should be determined. These methodological findings could subsequently be used for improving research methods in other graphematic fields of interest, e. g. computer-mediated communication.\",\"PeriodicalId\":43494,\"journal\":{\"name\":\"Zeitschrift Fur Sprachwissenschaft\",\"volume\":\"40 1\",\"pages\":\"421 - 440\"},\"PeriodicalIF\":0.6000,\"publicationDate\":\"2021-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Zeitschrift Fur Sprachwissenschaft\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1515/zfs-2021-2038\",\"RegionNum\":3,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"LANGUAGE & LINGUISTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Zeitschrift Fur Sprachwissenschaft","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1515/zfs-2021-2038","RegionNum":3,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Citations: 0

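The Damerau-Levenshtein distance used to compare two spellings can likewise be sketched. The function below implements the common optimal-string-alignment variant (insertions, deletions, substitutions, and adjacent transpositions, each at cost 1); it is an illustrative sketch, not the paper's code, and the historical spelling pairs are examples chosen for the illustration.

```python
def damerau_levenshtein(a, b):
    """Optimal-string-alignment variant of Damerau-Levenshtein distance:
    minimum number of single-character insertions, deletions,
    substitutions, and adjacent transpositions turning a into b."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

# Historical vs. modern spellings (illustrative pairs):
print(damerau_levenshtein("vnnd", "und"))    # → 2 (v→u, drop one n)
print(damerau_levenshtein("Theil", "Teil"))  # → 1 (delete h)
```

Because the distance decomposes into named edit operations, tracing back through the dynamic-programming table also yields *which* changes occurred, which is what allows the differences to be counted and ranked automatically.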
Source journal: Zeitschrift für Sprachwissenschaft · CiteScore 1.10 · Self-citation rate 0.00% · Articles per year: 19 · Review time: 20 weeks
Aims and scope: The aim of the journal is to promote linguistic research by publishing high-quality contributions and thematic special issues from all fields and trends of modern linguistics. In addition to articles and reviews, the journal also features contributions to discussions on current controversies in the field as well as overview articles outlining the state of the art of relevant research paradigms. Topics: -General Linguistics -Language Typology -Language acquisition, language change and synchronic variation -Empirical linguistics: experimental and corpus-based research -Contributions to theory-building