A. A. P. Ratna, Prima Dewi Purnamasari, Nadhifa Khalisha Anandra, Dyah Lalita Luhurkinanti
{"title":"混合深度学习cnn -双向LSTM和曼哈顿距离用于日语自动简答评分:在日语研究中的用例","authors":"A. A. P. Ratna, Prima Dewi Purnamasari, Nadhifa Khalisha Anandra, Dyah Lalita Luhurkinanti","doi":"10.1145/3571662.3571666","DOIUrl":null,"url":null,"abstract":"This paper discusses the development of an Automatic Essay Grading System (SIMPLE-O) designed using hybrid CNN and Bidirectional LSTM and Manhattan Distance for Japanese language course essay grading. The most stable and best model is trained using hyperparameters with kernel sizes of 5, filters or CNN outputs of 64, a pool size of 4, Bidirectional LSTM units of 50, and a batch size of 64. The deep learning model is trained using the Adam optimizer with a learning rate of 0.001, an epoch of 25, and using an L1 regularization of 0.01. The average error obtained is 29%.","PeriodicalId":235407,"journal":{"name":"Proceedings of the 8th International Conference on Communication and Information Processing","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Hybrid Deep Learning CNN-Bidirectional LSTM and Manhattan Distance for Japanese Automated Short Answer Grading: Use case in Japanese Language Studies\",\"authors\":\"A. A. P. Ratna, Prima Dewi Purnamasari, Nadhifa Khalisha Anandra, Dyah Lalita Luhurkinanti\",\"doi\":\"10.1145/3571662.3571666\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper discusses the development of an Automatic Essay Grading System (SIMPLE-O) designed using hybrid CNN and Bidirectional LSTM and Manhattan Distance for Japanese language course essay grading. The most stable and best model is trained using hyperparameters with kernel sizes of 5, filters or CNN outputs of 64, a pool size of 4, Bidirectional LSTM units of 50, and a batch size of 64. The deep learning model is trained using the Adam optimizer with a learning rate of 0.001, an epoch of 25, and using an L1 regularization of 0.01. The average error obtained is 29%.\",\"PeriodicalId\":235407,\"journal\":{\"name\":\"Proceedings of the 8th International Conference on Communication and Information Processing\",\"volume\":\"21 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 8th International Conference on Communication and Information Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3571662.3571666\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 8th International Conference on Communication and Information Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3571662.3571666","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Hybrid Deep Learning CNN-Bidirectional LSTM and Manhattan Distance for Japanese Automated Short Answer Grading: Use case in Japanese Language Studies
This paper discusses the development of an Automatic Essay Grading System (SIMPLE-O) designed using a hybrid CNN-Bidirectional LSTM with Manhattan Distance for grading essays in a Japanese language course. The most stable and best-performing model is trained with a kernel size of 5, 64 CNN filters, a pool size of 4, 50 Bidirectional LSTM units, and a batch size of 64. The deep learning model is trained with the Adam optimizer at a learning rate of 0.001 for 25 epochs, with L1 regularization of 0.01. The average error obtained is 29%.
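As a rough illustration of the architecture the abstract describes, the sketch below wires the reported hyperparameters (kernel size 5, 64 filters, pool size 4, 50 BiLSTM units, L1 = 0.01, Adam at 0.001) into a Siamese CNN-BiLSTM encoder whose outputs are compared with a Manhattan-distance similarity. This is only a minimal sketch under assumptions: the embedding dimension, vocabulary size, sequence length, loss function, and the exponential form of the similarity are not specified in the abstract and are chosen here for illustration.

```python
# Hypothetical sketch of a Siamese CNN-BiLSTM scorer with Manhattan-distance
# similarity, using the hyperparameters reported in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, regularizers, Model

MAX_LEN, VOCAB, EMB_DIM = 100, 20000, 128   # assumed values, not from the paper

def build_encoder():
    """Shared encoder: embedding -> Conv1D -> MaxPool -> Bidirectional LSTM."""
    inp = layers.Input(shape=(MAX_LEN,))
    x = layers.Embedding(VOCAB, EMB_DIM)(inp)
    x = layers.Conv1D(filters=64, kernel_size=5, activation="relu",
                      kernel_regularizer=regularizers.l1(0.01))(x)  # kernel 5, 64 filters, L1=0.01
    x = layers.MaxPooling1D(pool_size=4)(x)                         # pool size 4
    x = layers.Bidirectional(layers.LSTM(50))(x)                    # 50 BiLSTM units
    return Model(inp, x)

encoder = build_encoder()
student_answer = layers.Input(shape=(MAX_LEN,))
reference_answer = layers.Input(shape=(MAX_LEN,))
h_student = encoder(student_answer)
h_reference = encoder(reference_answer)

# Manhattan (L1) distance between the two encodings, mapped to a (0, 1] similarity.
similarity = layers.Lambda(
    lambda t: tf.exp(-tf.reduce_sum(tf.abs(t[0] - t[1]), axis=1, keepdims=True))
)([h_student, h_reference])

model = Model([student_answer, reference_answer], similarity)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")
# model.fit([x_student, x_reference], scores, batch_size=64, epochs=25)
```

Under this reading, the grade for a student answer is the similarity between its encoding and the encoding of a reference answer, and training would fit that similarity to human-assigned scores with a batch size of 64 over 25 epochs.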