Hannah Frederick, Haizhu Hong, Margaret Williams, Amanda West, Briana K. Wright
Title: Data Schema to Formalize Education Research & Development Using Natural Language Processing
DOI: 10.1109/SIEDS52267.2021.9483781
Published in: 2021 Systems and Information Engineering Design Symposium (SIEDS)
Publication date: 2021-04-30
Citations: 1
Abstract
Our work aims to aid in the development of an open source data schema for educational interventions by implementing natural language processing (NLP) techniques on publications within What Works Clearinghouse (WWC) and the Education Resources Information Center (ERIC). A data schema demonstrates the relationships between individual elements of interest (in this case, research in education) and collectively documents elements in a data dictionary. To facilitate the creation of this educational data schema, we first run a two-topic latent Dirichlet allocation (LDA) model on the titles and abstracts of papers that met WWC standards without reservation against those of papers that did not, separated by math and reading subdomains. We find that the distributions of allocation to these two topics suggest structural differences between WWC and non-WWC literature. We then implement Term Frequency-Inverse Document Frequency (TF-IDF) scoring to study the vocabulary within WWC titles and abstracts and determine the most relevant unigrams and bigrams currently present in WWC. Finally, we utilize an LDA model again to cluster WWC titles and abstracts into topics, or sets of words, grouped by underlying semantic similarities. We find that 11 topics are the optimal number of subtopics in WWC with an average coherence score of 0.4096 among the 39 out of 50 models that returned 11 as the optimal number of topics. Based on the TF-IDF and LDA methods presented, we can begin to identify core themes of high-quality literature that will better inform the creation of a universal data schema within education research.