Myke D. M. Valadão, Diego A. Amoedo, Gustavo M. Torres, E. V. C. U. Mattos, Antônio M. C. Pereira, Matheus S. Uchôa, Lucas M. Torres, Victor L. G. Cavalcante, José E. B. S. Linhares, M. O. Silva, Agemilson P. Silva, Caio F. S. Cruz, Rômulo Fabrício, Ruan J. S. Belém, Thiago B. Bezerra, W. S. S. Júnior, Celso B. Carvalho
{"title":"利用ResNet实现生产线上工人装配动作的自动视频标注","authors":"Myke D. M. Valadão, Diego A. Amoedo, Gustavo M. Torres, E. V. C. U. Mattos, Antônio M. C. Pereira, Matheus S. Uchôa, Lucas M. Torres, Victor L. G. Cavalcante, José E. B. S. Linhares, M. O. Silva, Agemilson P. Silva, Caio F. S. Cruz, Rômulo Fabrício, Ruan J. S. Belém, Thiago B. Bezerra, W. S. S. Júnior, Celso B. Carvalho","doi":"10.1109/ICCE-Taiwan55306.2022.9869008","DOIUrl":null,"url":null,"abstract":"In this work, conducted by two partners, called UFAM/CETELI and, Envision (TPV Group), we present a method of automatic labeling of frames of worker's actions in factory environments using a model generated by a residual neural network. With this approach we used some manually labeled frames to training a model that provide the label of 4 classes of actions. We achieve accuracy rate over 96%, which give reliability to a supervised training of 3D dataset of actions.","PeriodicalId":164671,"journal":{"name":"2022 IEEE International Conference on Consumer Electronics - Taiwan","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Automatic Video Labeling with Assembly Actions of Workers on a Production Line Using ResNet\",\"authors\":\"Myke D. M. Valadão, Diego A. Amoedo, Gustavo M. Torres, E. V. C. U. Mattos, Antônio M. C. Pereira, Matheus S. Uchôa, Lucas M. Torres, Victor L. G. Cavalcante, José E. B. S. Linhares, M. O. Silva, Agemilson P. Silva, Caio F. S. Cruz, Rômulo Fabrício, Ruan J. S. Belém, Thiago B. Bezerra, W. S. S. Júnior, Celso B. 
Carvalho\",\"doi\":\"10.1109/ICCE-Taiwan55306.2022.9869008\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this work, conducted by two partners, called UFAM/CETELI and, Envision (TPV Group), we present a method of automatic labeling of frames of worker's actions in factory environments using a model generated by a residual neural network. With this approach we used some manually labeled frames to training a model that provide the label of 4 classes of actions. We achieve accuracy rate over 96%, which give reliability to a supervised training of 3D dataset of actions.\",\"PeriodicalId\":164671,\"journal\":{\"name\":\"2022 IEEE International Conference on Consumer Electronics - Taiwan\",\"volume\":\"28 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Consumer Electronics - Taiwan\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCE-Taiwan55306.2022.9869008\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Consumer Electronics - Taiwan","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCE-Taiwan55306.2022.9869008","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Automatic Video Labeling with Assembly Actions of Workers on a Production Line Using ResNet
In this work, conducted by two partners, UFAM/CETELI and Envision (TPV Group), we present a method for automatically labeling frames of workers' actions in factory environments using a model generated by a residual neural network. With this approach, we use a set of manually labeled frames to train a model that assigns each frame to one of 4 action classes. We achieve an accuracy rate above 96%, which lends reliability to the supervised training of a 3D dataset of actions.
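The core idea the abstract describes — a residual network that maps a video frame's features to one of 4 action classes — can be illustrated with a minimal numpy sketch. Everything here is a toy placeholder (the feature dimension, random weights, and a single residual block stand in for a full ResNet trained on real frames); the paper does not specify its actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # F(x) = W2 @ relu(W1 @ x); the skip connection adds x back,
    # which is the defining feature of a residual network (ResNet).
    return relu(x + w2 @ relu(w1 @ x))

def classify(x, w1, w2, w_head):
    # Hypothetical 4-class head on top of one residual block.
    h = residual_block(x, w1, w2)
    logits = w_head @ h
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # softmax over the 4 action classes

d = 8                                # toy feature dimension
w1 = rng.normal(scale=0.1, size=(d, d))
w2 = rng.normal(scale=0.1, size=(d, d))
w_head = rng.normal(scale=0.1, size=(4, d))

frame_features = rng.normal(size=d)  # stand-in for features of one video frame
probs = classify(frame_features, w1, w2, w_head)
label = int(np.argmax(probs))        # predicted action class for this frame
```

In the actual pipeline, the manually labeled frames would supply training targets for these weights, and the trained model would then label the remaining frames automatically.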