{"title":"为修辞学分析建立更好的机器学习模型:使用修辞学特征集训练人工神经网络模型","authors":"Z. Majdik, James Wynn","doi":"10.1080/10572252.2022.2077452","DOIUrl":null,"url":null,"abstract":"ABSTRACT In this paper, we investigate two approaches to building artificial neural network models to compare their effectiveness for accurately classifying rhetorical structures across multiple (non-binary) classes in small textual datasets. We find that the most accurate type of model can be designed by using a custom rhetorical feature list coupled with general-language word vector representations, which outperforms models with more computing-intensive architectures.","PeriodicalId":45536,"journal":{"name":"Technical Communication Quarterly","volume":"32 1","pages":"63 - 78"},"PeriodicalIF":2.0000,"publicationDate":"2022-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Building Better Machine Learning Models for Rhetorical Analyses: The Use of Rhetorical Feature Sets for Training Artificial Neural Network Models\",\"authors\":\"Z. Majdik, James Wynn\",\"doi\":\"10.1080/10572252.2022.2077452\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT In this paper, we investigate two approaches to building artificial neural network models to compare their effectiveness for accurately classifying rhetorical structures across multiple (non-binary) classes in small textual datasets. We find that the most accurate type of model can be designed by using a custom rhetorical feature list coupled with general-language word vector representations, which outperforms models with more computing-intensive architectures.\",\"PeriodicalId\":45536,\"journal\":{\"name\":\"Technical Communication Quarterly\",\"volume\":\"32 1\",\"pages\":\"63 - 78\"},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2022-05-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Technical Communication Quarterly\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/10572252.2022.2077452\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMMUNICATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Technical Communication Quarterly","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/10572252.2022.2077452","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMMUNICATION","Score":null,"Total":0}
Building Better Machine Learning Models for Rhetorical Analyses: The Use of Rhetorical Feature Sets for Training Artificial Neural Network Models
ABSTRACT In this paper, we investigate two approaches to building artificial neural network models and compare their effectiveness at accurately classifying rhetorical structures across multiple (non-binary) classes in small textual datasets. We find that the most accurate models combine a custom rhetorical feature list with general-language word vector representations, and that this design outperforms models with more computing-intensive architectures.
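The abstract does not describe the authors' implementation, but the general approach it names, pairing a hand-built rhetorical feature list with general-language word vectors to train a small neural classifier on limited data, can be illustrated with a minimal sketch. The sketch below assumes spaCy (with the en_core_web_md vectors) and scikit-learn; the marker lists, class labels, example passages, and network size are all illustrative assumptions, not the authors' feature set or model.

```python
# Minimal sketch: combine custom rhetorical features with general-language
# word vectors, then train a small multi-class neural network classifier.
# Marker lists, labels, and architecture are hypothetical placeholders.
import numpy as np
import spacy
from sklearn.neural_network import MLPClassifier

nlp = spacy.load("en_core_web_md")  # assumed model; ships with word vectors

# Hypothetical rhetorical feature list: marker terms grouped by category.
RHETORICAL_MARKERS = {
    "hedging": {"may", "might", "suggest", "possibly", "perhaps"},
    "certainty": {"clearly", "demonstrates", "proves", "definitely"},
    "attribution": {"according", "argues", "claims", "reports"},
}

def featurize(text: str) -> np.ndarray:
    """Concatenate normalized marker frequencies with the mean word vector."""
    doc = nlp(text)
    tokens = [t.text.lower() for t in doc if t.is_alpha]
    counts = [
        sum(tok in markers for tok in tokens) / max(len(tokens), 1)
        for markers in RHETORICAL_MARKERS.values()
    ]
    return np.concatenate([np.array(counts), doc.vector])

# Tiny illustrative corpus with non-binary (three-class) labels.
texts = [
    "The data clearly demonstrates a significant effect.",
    "These findings may suggest a possible link between the variables.",
    "According to the authors, the method reports strong results.",
]
labels = ["certainty", "hedging", "attribution"]

X = np.vstack([featurize(t) for t in texts])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X, labels)
print(clf.predict([featurize("This possibly indicates a broader trend.")]))
```

In this kind of design, the small rhetorical feature vector carries the domain-specific signal while the pretrained word vectors supply general-language context, which is one plausible reading of why the abstract reports strong accuracy without a more computing-intensive architecture.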