Analyzing Sentiment with Self-Organizing Map and Long Short-Term Memory Algorithms

F. Sinaga, Sio Jurnalis Pipin, Sunaryo Winardi, Karina Mannita Tarigan, Ananda Putra Brahmana
DOI: 10.30812/matrik.v23i1.3332
Journal: MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer
Published: 2023-11-20 (Journal Article)

Abstract

This research delves into the impact of Chat Generative Pre-trained Transformer, one of the Open Artificial Intelligence Generative Pre-trained Transformer models. The model underwent extensive training on a vast corpus of internet text to gain insight into the mechanics of human language and its role in forming phrases, sentences, and paragraphs. The urgency of this inquiry arises from the emergence of Chat Generative Pre-trained Transformer, which has stirred significant debate and captured widespread attention in both research and educational circles. Since its debut in November 2022, Chat Generative Pre-trained Transformer has demonstrated substantial potential across numerous domains. However, concerns voiced on Twitter have centered on potential negative consequences, such as increased forgery and misinformation. Consequently, understanding public sentiment toward Chat Generative Pre-trained Transformer technology through sentiment analysis has become crucial. The research’s primary objective is to conduct sentiment analysis classification of Chat Generative Pre-trained Transformer regarding public opinions on Twitter in Indonesia. This goal involves quantifying and categorizing public sentiment from Twitter’s vast data pool into three clusters: positive, negative, or neutral. In the data clustering stage, the Self-Organizing Map technique is used. After the text data has been weighted and clustered, the next step uses a Long Short-Term Memory classifier to determine public sentiment toward Chat Generative Pre-trained Transformer technology. Rigorous testing demonstrated robust model performance with the optimal parameters: ReLU activation function, SOM size of 5, and 128 epochs for both the SOM and the LSTM, yielding an impressive 95.07% accuracy rate.
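The clustering stage described in the abstract can be sketched with a minimal Self-Organizing Map in plain NumPy. This is an illustrative toy, not the authors' implementation: the 5x5 grid and 128 epochs follow the reported optimal parameters, but the input data here is a random stand-in for TF-IDF-weighted tweet vectors, and the decay schedules are assumptions.

```python
import numpy as np

def train_som(data, grid=5, epochs=128, lr=0.5, seed=0):
    """Minimal Self-Organizing Map: a grid x grid sheet of weight vectors
    pulled toward the input data. The 5x5 size and 128 epochs mirror the
    paper's reported optimal parameters; everything else is a sketch."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    weights = rng.random((grid * grid, n_features))
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)], dtype=float)
    for epoch in range(epochs):
        sigma = grid / 2 * (1 - epoch / epochs) + 0.5   # shrinking neighbourhood radius
        alpha = lr * (1 - epoch / epochs) + 0.01        # decaying learning rate
        for x in data:
            # Best-matching unit: the weight vector closest to the input.
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            d = np.linalg.norm(coords - coords[bmu], axis=1)
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))    # Gaussian neighbourhood function
            weights += alpha * h[:, None] * (x - weights)
    return weights

def assign_cluster(weights, x):
    """Map a weighted text vector to the index of its best-matching unit."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

# Toy stand-in for TF-IDF-weighted tweets: three separable groups of vectors.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(c, 0.05, (10, 4)) for c in (0.1, 0.5, 0.9)])
som = train_som(data, grid=5, epochs=128)
clusters = [assign_cluster(som, x) for x in data]
print(clusters)
```

In the paper's pipeline the unit assignments would then be grouped into the three sentiment clusters (positive, negative, neutral) before the classification stage.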
Citations: 0
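The classification stage pairs the clustered, weighted text with a Long Short-Term Memory network. As a hedged illustration of the mechanism only (not the paper's architecture), the forward pass of a single LSTM cell followed by a softmax over the three sentiment classes can be written in plain NumPy; the hidden size, random weights, and toy input sequence are all assumptions for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(seq, hidden=8, seed=0):
    """Run one randomly initialised LSTM cell over a sequence of input
    vectors and return the final hidden state: the fixed-size summary a
    classifier head would then map to a sentiment label."""
    rng = np.random.default_rng(seed)
    d = seq.shape[1]
    W = rng.normal(0, 0.1, (4 * hidden, d))        # input weights for gates i, f, g, o
    U = rng.normal(0, 0.1, (4 * hidden, hidden))   # recurrent weights
    b = np.zeros(4 * hidden)
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in seq:
        z = W @ x + U @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
        c = f * c + i * np.tanh(g)                     # cell-state update
        h = o * np.tanh(c)                             # new hidden state
    return h

def classify(h, n_classes=3, seed=0):
    """Softmax head over the three clusters: positive, negative, neutral."""
    rng = np.random.default_rng(seed)
    logits = rng.normal(0, 0.1, (n_classes, h.shape[0])) @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()

seq = np.random.default_rng(2).random((5, 4))   # 5 toy token embeddings of size 4
probs = classify(lstm_forward(seq))
print(probs)
```

In practice the weights would be learned over the reported 128 training epochs rather than drawn at random, and the class probabilities compared against labelled tweets.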
