A Comparative Study of MobileNet Architecture Optimizer for Crowd Prediction
Permana Langgeng Wicaksono Ellwid Putra, Muhammad Naufal, Erwin Yudi Hidayat
Jurnal Informatika: Jurnal Pengembangan IT, Vol. 27 No. 1, published 2023-09-17
DOI: 10.30591/jpit.v8i3.5703
Abstract
Artificial intelligence technology has grown rapidly in recent years, and convolutional neural network (CNN) technology has developed alongside it. However, because CNNs involve many computations and the optimization of numerous matrices, applying them requires suitable hardware such as GPUs or other accelerators. Transfer learning is one way around this resource barrier, and MobileNetV2 is an example of a lightweight CNN architecture well suited to it. The objective of this research is to compare the performance of the SGD and Adam optimizers on the MobileNetV2 architecture. Model training uses a learning rate of 0.0001, a batch size of 32, and binary cross-entropy as the loss function. Training runs for up to 100 epochs with early stopping at a patience of 10 epochs. The results show that models trained with either optimizer classify crowds well: the model with Adam reaches 96% accuracy, while the model with SGD reaches 95%. Even so, the SGD model performs slightly better overall, because its training behavior is more stable: its loss and accuracy graphs are more consistent and fluctuate less than those of the Adam model.
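The core of the comparison above is the difference between the two update rules. As an illustrative sketch only (plain Python on a 1-D toy loss, not the paper's MobileNetV2 training code, and with a toy learning rate rather than the paper's 0.0001), the SGD and Adam steps can be written as:

```python
# Illustrative sketch of the two optimizers compared in the abstract,
# minimizing the toy loss f(w) = (w - 3)^2. Hyperparameters here are
# illustrative defaults, not the paper's training configuration.

def sgd_step(w, g, lr=0.1):
    # Vanilla SGD: move against the gradient at a fixed rate.
    return w - lr * g

def adam_step(w, g, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: adapt the step size using running estimates of the
    # gradient's first moment (m) and second moment (v).
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g * g
    m_hat = state["m"] / (1 - b1 ** state["t"])  # bias correction
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return w - lr * m_hat / (v_hat ** 0.5 + eps)

def grad(w):
    # Gradient of the toy loss f(w) = (w - 3)^2.
    return 2 * (w - 3)

w_sgd, w_adam = 0.0, 0.0
adam_state = {"t": 0, "m": 0.0, "v": 0.0}
for _ in range(200):
    w_sgd = sgd_step(w_sgd, grad(w_sgd))
    w_adam = adam_step(w_adam, grad(w_adam), adam_state)

print(w_sgd, w_adam)  # both approach the minimum at w = 3
```

Adam's per-parameter adaptive step often converges faster early on, while SGD's fixed-rate step tends to produce smoother trajectories, which is consistent with the stability difference the paper reports in its loss and accuracy curves.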