{"title":"利用 VGG19 进行人体步态识别的迁移学习:CASIA-A 数据集","authors":"Veenu Rani, Munish Kumar","doi":"10.1007/s11042-024-20132-y","DOIUrl":null,"url":null,"abstract":"<p>Identification of individuals based on physical characteristics has recently gained popularity and falls under the category of pattern recognition. Biometric recognition has emerged as an effective strategy for preventing security breaches, as no two people share the same physical characteristics. \"Gait recognition\" specifically refers to identifying individuals based on their walking patterns. Human gait is a method of locomotion that relies on the coordination of the brain, nerves, and muscles. Traditionally, human gait analysis was performed subjectively through visual observations. However, with advancements in technology and deep learning, human gait analysis can now be conducted empirically and without the need for subject cooperation, enhancing the quality of life. Deep learning methods have demonstrated excellent performance in human gait recognition. In this article, the authors employed the VGG19 transfer learning model for human gait recognition. They used the public benchmark dataset CASIA-A for their experimental work, which contains a total of 19,139 images captured from 20 individuals. The dataset was segmented into two different patterns: 70:30 and 80:20. To optimize the performance of the proposed model, the authors considered three hyperparameters: loss, validation loss (val_loss), and accuracy rate. They reported accuracy rates of 96.9% and 97.8%, with losses of 2.71% and 2.01% for the two patterns, respectively.</p>","PeriodicalId":18770,"journal":{"name":"Multimedia Tools and Applications","volume":null,"pages":null},"PeriodicalIF":3.0000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Transfer learning for human gait recognition using VGG19: CASIA-A dataset\",\"authors\":\"Veenu Rani, Munish Kumar\",\"doi\":\"10.1007/s11042-024-20132-y\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Identification of individuals based on physical characteristics has recently gained popularity and falls under the category of pattern recognition. Biometric recognition has emerged as an effective strategy for preventing security breaches, as no two people share the same physical characteristics. \\\"Gait recognition\\\" specifically refers to identifying individuals based on their walking patterns. Human gait is a method of locomotion that relies on the coordination of the brain, nerves, and muscles. Traditionally, human gait analysis was performed subjectively through visual observations. However, with advancements in technology and deep learning, human gait analysis can now be conducted empirically and without the need for subject cooperation, enhancing the quality of life. Deep learning methods have demonstrated excellent performance in human gait recognition. In this article, the authors employed the VGG19 transfer learning model for human gait recognition. They used the public benchmark dataset CASIA-A for their experimental work, which contains a total of 19,139 images captured from 20 individuals. The dataset was segmented into two different patterns: 70:30 and 80:20. To optimize the performance of the proposed model, the authors considered three hyperparameters: loss, validation loss (val_loss), and accuracy rate. 
They reported accuracy rates of 96.9% and 97.8%, with losses of 2.71% and 2.01% for the two patterns, respectively.</p>\",\"PeriodicalId\":18770,\"journal\":{\"name\":\"Multimedia Tools and Applications\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Multimedia Tools and Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s11042-024-20132-y\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Multimedia Tools and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11042-024-20132-y","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Transfer learning for human gait recognition using VGG19: CASIA-A dataset
Identification of individuals based on physical characteristics has recently gained popularity and falls under the category of pattern recognition. Biometric recognition has emerged as an effective strategy for preventing security breaches, as no two people share the same physical characteristics. "Gait recognition" specifically refers to identifying individuals based on their walking patterns. Human gait is a method of locomotion that relies on the coordination of the brain, nerves, and muscles. Traditionally, human gait analysis was performed subjectively through visual observation. With advances in technology and deep learning, however, gait analysis can now be conducted empirically and without the subject's cooperation, which can enhance quality of life. Deep learning methods have demonstrated excellent performance in human gait recognition. In this article, the authors employ a VGG19 transfer learning model for human gait recognition. Their experiments use the public benchmark dataset CASIA-A, which contains a total of 19,139 images captured from 20 individuals. The dataset was split into training and test sets under two ratios: 70:30 and 80:20. To assess the performance of the proposed model, the authors monitored three quantities: training loss, validation loss (val_loss), and accuracy rate. They report accuracy rates of 96.9% and 97.8%, with losses of 2.71% and 2.01%, for the two splits respectively.
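The abstract gives no implementation details, so the following is only a minimal sketch of what such a VGG19 transfer-learning pipeline could look like in Keras/TensorFlow. The framework choice, the classification head, the optimizer, the directory path "casia_a/", and all hyperparameter values below are illustrative assumptions, not the authors' actual setup.

Python sketch:

import tensorflow as tf
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras import layers, models

NUM_SUBJECTS = 20        # CASIA-A contains 20 individuals
IMG_SIZE = (224, 224)    # VGG19's expected input resolution

# Load VGG19 pre-trained on ImageNet and freeze its convolutional base.
base = VGG19(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

# Attach a small classification head for the 20 gait identities (assumed sizes).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_SUBJECTS, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Build train/validation sets with an 80:20 split (70:30 is analogous);
# "casia_a/" is a hypothetical directory with one sub-folder per subject.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "casia_a/", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "casia_a/", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)

# VGG19 expects ImageNet-style preprocessing of the raw pixel values.
train_ds = train_ds.map(lambda x, y: (preprocess_input(x), y))
val_ds = val_ds.map(lambda x, y: (preprocess_input(x), y))

# Training reports loss, val_loss, and accuracy per epoch, the quantities the abstract cites.
history = model.fit(train_ds, validation_data=val_ds, epochs=10)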
About the journal:
Multimedia Tools and Applications publishes original research articles on multimedia development and system support tools as well as case studies of multimedia applications. It also features experimental and survey articles. The journal is intended for academics, practitioners, scientists and engineers who are involved in multimedia system research, design and applications. All papers are peer reviewed.
Specific areas of interest include:
- Multimedia tools
- Multimedia applications
- Prototype multimedia systems and platforms