An Enhancement of Long-Short Term Memory for the Implementation of Virtual Assistant for Pamantasan ng Lungsod ng Maynila Students

Draillim Xaviery Valonzo, Jose Ramon Jasa, Mark Christopher R. Blanco, Khatalyn E. Mata, Dan Michael A. Cortez
{"title":"An Enhancement of Long-Short Term Memory for the Implementation of Virtual Assistant for Pamantasan ng Lungsod ng Maynila Students","authors":"Draillim Xaviery Valonzo, Jose Ramon Jasa, Mark Christopher R. Blanco, Khatalyn E. Mata, Dan Michael A. Cortez","doi":"10.25147/ijcsr.2017.001.1.91","DOIUrl":null,"url":null,"abstract":"Purpose – Natural Language Processing is an aspect of Artificial Intelligence that focuses on how technology can understand words, derive meaning from them, and return a meaningful and correct output. Therefore, it is used in the making of Virtual Assistants today. Training virtual assistants require long temporal dependencies and sequence-to-sequence classification. This study will be used to create a possible algorithm that will enhance the performance of a possible virtual assistant designed for PLM students and faculty members. Method – LSTM will be used to train the model to address these concerns. However. The LSTM algorithm faces the problem of slow computing speed and high computation costs. To address this the researchers implemented TensorFlow XLA to the model to optimize the computation costs in the problem. Results – Though the number of matrices exploded from 934 to 30000, the training can show slight improvement both in memory, CPU (Central Processing Unit) utilization, and time reduction. At 50 epochs, training the model with XLA has shown a time decrease of 8 minutes and can save at most 500 megabytes of memory. Conclusion – XLA has proven that it has helped the LSTM algorithm in terms of its usage in memory, utilization of CPU, and overall speed of training, especially in longer processes. Recommendations – The researchers recommend using XLA in the context of pruning and the effect of pruning paired with XLA to maximize the performance of the model. Practical Implication – This would allow a much more efficient and cost-friendly training of the model when feeding it new data to be used for virtual assistant designed for PLM students and faculty members.","PeriodicalId":33870,"journal":{"name":"International Journal of Computing Sciences Research","volume":"157 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Computing Sciences Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.25147/ijcsr.2017.001.1.91","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Purpose – Natural Language Processing is an aspect of Artificial Intelligence that focuses on how technology can understand words, derive meaning from them, and return a meaningful and correct output. It is therefore used in building today's virtual assistants. Training a virtual assistant requires modeling long temporal dependencies and performing sequence-to-sequence classification. This study develops an algorithm intended to enhance the performance of a prospective virtual assistant designed for PLM students and faculty members.

Method – LSTM (Long Short-Term Memory) was used to train the model to address these concerns. However, the LSTM algorithm suffers from slow computing speed and high computation cost. To address this, the researchers applied TensorFlow XLA (Accelerated Linear Algebra) to the model to optimize its computation cost.

Results – Although the number of matrices grew from 934 to 30,000, training with XLA showed modest improvements in memory usage, CPU (Central Processing Unit) utilization, and training time. At 50 epochs, training the model with XLA reduced training time by 8 minutes and saved up to 500 megabytes of memory.

Conclusion – XLA helped the LSTM algorithm in terms of memory usage, CPU utilization, and overall training speed, especially in longer training runs.

Recommendations – The researchers recommend studying XLA in the context of pruning, and the effect of pruning paired with XLA, to maximize the performance of the model.

Practical Implication – This would allow much more efficient and cost-friendly training of the model when feeding it new data for a virtual assistant designed for PLM students and faculty members.
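To make the Method concrete, the sketch below shows one common way to enable XLA compilation for a Keras LSTM classifier in TensorFlow. The architecture and all hyperparameters (vocabulary size, embedding width, LSTM units, number of intent classes) are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of enabling XLA for an LSTM text classifier in
# TensorFlow/Keras. All sizes below are illustrative assumptions,
# not values from the paper.
import tensorflow as tf

VOCAB_SIZE = 5000    # assumed vocabulary size of the tokenized queries
EMBED_DIM = 128      # assumed embedding width
LSTM_UNITS = 256     # assumed LSTM hidden size
NUM_CLASSES = 50     # assumed number of intent classes

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.LSTM(LSTM_UNITS),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# jit_compile=True (available in recent TensorFlow releases) asks Keras
# to compile the training and inference steps with XLA, which fuses
# kernels and can reduce per-step time and memory traffic.
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
    jit_compile=True,
)

# model.fit(x_train, y_train, epochs=50)  # e.g., 50 epochs, as in the paper
```

An alternative is the process-wide flag tf.config.optimizer.set_jit(True), which turns on XLA for all compiled TensorFlow functions rather than for a single model.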