VARIANTS OF NEURAL NETWORKS: A REVIEW

IF 1.1 | Computer Science (CAS Zone 4) | JCR Q4: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
B. H. Nayef, Siti Norul Huda Sheikh Abdullah, R. Sulaiman, Z. Alyasseri
{"title":"神经网络的变体:综述","authors":"B. H. Nayef, Siti Norul Huda Sheikh Abdullah, R. Sulaiman, Z. Alyasseri","doi":"10.22452/mjcs.vol35no2.5","DOIUrl":null,"url":null,"abstract":"Machine learning (ML) techniques are part of artificial intelligence. ML involves imitating human behavior in solving different problems, such as object detection, text handwriting recognition, and image classification. Several techniques can be used in machine learning, such as Neural Networks (NN). The expansion in information technology enables researchers to collect large amounts of various data types. The challenging issue is to uncover neural network parameters suitable for object detection problems. Therefore, this paper presents a literature review of the latest proposed and developed components in neural network techniques to cope with different sizes and data types. A brief discussion is also introduced to demonstrate the different types of neural network parameters, such as activation functions, loss functions, and regularization methods. Moreover, this paper also uncovers parameter optimization methods and hyperparameters of the model, such as weight, the learning rate, and the number of iterations. From the literature, it is notable that choosing the activation function, loss function, number of neural network layers, and data size is the major factor affecting NN performance. Additionally, utilizing deep learning NN resulted in a significant improvement in model performance for a variety of issues, which became the researcher's attention.","PeriodicalId":49894,"journal":{"name":"Malaysian Journal of Computer Science","volume":" ","pages":""},"PeriodicalIF":1.1000,"publicationDate":"2022-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"VARIANTS OF NEURAL NETWORKS: A REVIEW\",\"authors\":\"B. H. Nayef, Siti Norul Huda Sheikh Abdullah, R. Sulaiman, Z. Alyasseri\",\"doi\":\"10.22452/mjcs.vol35no2.5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine learning (ML) techniques are part of artificial intelligence. ML involves imitating human behavior in solving different problems, such as object detection, text handwriting recognition, and image classification. Several techniques can be used in machine learning, such as Neural Networks (NN). The expansion in information technology enables researchers to collect large amounts of various data types. The challenging issue is to uncover neural network parameters suitable for object detection problems. Therefore, this paper presents a literature review of the latest proposed and developed components in neural network techniques to cope with different sizes and data types. A brief discussion is also introduced to demonstrate the different types of neural network parameters, such as activation functions, loss functions, and regularization methods. Moreover, this paper also uncovers parameter optimization methods and hyperparameters of the model, such as weight, the learning rate, and the number of iterations. From the literature, it is notable that choosing the activation function, loss function, number of neural network layers, and data size is the major factor affecting NN performance. 
Additionally, utilizing deep learning NN resulted in a significant improvement in model performance for a variety of issues, which became the researcher's attention.\",\"PeriodicalId\":49894,\"journal\":{\"name\":\"Malaysian Journal of Computer Science\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2022-04-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Malaysian Journal of Computer Science\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.22452/mjcs.vol35no2.5\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Malaysian Journal of Computer Science","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.22452/mjcs.vol35no2.5","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 1

Abstract

Machine learning (ML) techniques are part of artificial intelligence. ML imitates human behavior in solving different problems, such as object detection, handwritten text recognition, and image classification. Several techniques can be used in machine learning, such as neural networks (NN). The expansion of information technology enables researchers to collect large amounts of data of various types. The challenging issue is to identify neural network parameters suitable for object detection problems. Therefore, this paper presents a literature review of the most recently proposed and developed components of neural network techniques for coping with different data sizes and types. A brief discussion also covers the different types of neural network parameters, such as activation functions, loss functions, and regularization methods. Moreover, the paper examines parameter optimization methods and model hyperparameters, such as the weights, the learning rate, and the number of iterations. From the literature, it is notable that the choice of activation function, loss function, number of network layers, and data size are the major factors affecting NN performance. Additionally, deep learning NNs have yielded significant improvements in model performance across a variety of problems, which has attracted researchers' attention.
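The abstract names the design choices the review surveys: an activation function, a loss function, a regularization method, and hyperparameters such as the weights, the learning rate, and the number of iterations. As a minimal sketch (not taken from the paper; the toy data, network size, and all values are illustrative assumptions), the following NumPy training loop shows where each of these choices enters a small feed-forward network.

```python
# Minimal feed-forward network sketch: illustrates the parameter types the
# review discusses (activation, loss, regularization, learning rate, iterations).
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data (hypothetical, for illustration only).
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Hyperparameters of the kind listed in the abstract.
hidden_units = 16
learning_rate = 0.1
num_iterations = 500
l2_lambda = 1e-3  # regularization strength

# Weight initialization (itself a hyperparameter choice).
W1 = rng.normal(scale=0.5, size=(2, hidden_units))
b1 = np.zeros((1, hidden_units))
W2 = rng.normal(scale=0.5, size=(hidden_units, 1))
b2 = np.zeros((1, 1))

def relu(z):      # hidden-layer activation function
    return np.maximum(0.0, z)

def sigmoid(z):   # output activation for binary labels
    return 1.0 / (1.0 + np.exp(-z))

for it in range(num_iterations):
    # Forward pass.
    z1 = X @ W1 + b1
    a1 = relu(z1)
    z2 = a1 @ W2 + b2
    p = sigmoid(z2)

    # Loss function: binary cross-entropy plus L2 regularization.
    eps = 1e-9
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    loss += l2_lambda * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

    # Backward pass: gradients of the regularized loss.
    dz2 = (p - y) / len(X)
    dW2 = a1.T @ dz2 + 2 * l2_lambda * W2
    db2 = dz2.sum(axis=0, keepdims=True)
    da1 = dz2 @ W2.T
    dz1 = da1 * (z1 > 0)
    dW1 = X.T @ dz1 + 2 * l2_lambda * W1
    db1 = dz1.sum(axis=0, keepdims=True)

    # Gradient-descent update scaled by the learning rate.
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

    if it % 100 == 0:
        print(f"iteration {it}: loss = {loss:.4f}")
```

Swapping any one of these pieces, e.g. replacing ReLU with another activation or changing the learning rate or iteration count, is exactly the kind of parameter choice the review reports as a major factor in NN performance.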
Source journal
Malaysian Journal of Computer Science
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, THEORY & METHODS
CiteScore: 2.20
Self-citation rate: 33.30%
Articles per year: 35
Review time: 7.5 months
Journal description: The Malaysian Journal of Computer Science (ISSN 0127-9084) has been published four times a year, in January, April, July, and October, by the Faculty of Computer Science and Information Technology, University of Malaya, since 1985. Over the years, the journal has gained popularity and the number of paper submissions has increased steadily. Rigorous reviews from the referees have helped ensure that the journal's high standard is maintained. Its objectives are to promote the exchange of information and knowledge from research work and new inventions and developments in Computer Science and on the use of Information Technology towards the structuring of an information-rich society, and to assist academic staff from local and foreign universities, the business and industrial sectors, government departments, and academic institutions in publishing research results and studies in Computer Science and Information Technology through a scholarly publication. The journal is indexed and abstracted by Clarivate Analytics' Web of Science and Elsevier's Scopus.