Experiments And Discussions On Vision Transformer (ViT) Parameters For Object Tracking

Daiki Fukushima, Tomokazu Ishikawa
{"title":"Experiments And Discussions On Vision Transformer (ViT) Parameters For Object Tracking","authors":"Daiki Fukushima, Tomokazu Ishikawa","doi":"10.1109/NicoInt55861.2022.00020","DOIUrl":null,"url":null,"abstract":"Recently, machine learning has been used to improve the accuracy of computer vision, and the latest network model, Transformer, has been widely used in the fields of natural language translation and object recognition. A feature of ViT used in the field of object recognition is that its accuracy is improved by accumulating layers of Transformers. However, the latest models of the previous study of object tracking show that the accuracy decreases as the layers of the Transformer are accumulated. Therefore, in this study, we thought that the accuracy could be improved by changing the experimental conditions while the layers of transformers are accumulated. In addition, by searching for hyperparameters in the loss function, we expect to further improve the accuracy. The experimental results show that the accuracy can be improved by 5% by adjusting the parameters of regression loss and loss on bounding box size. Also, the model used in this study has a problem that the accuracy decreased by up to 7% when the number of Transformer layers is increased. Although the accuracy improved by 2% compared to the model without adjusting the parameters when the parameters of the loss function are adjusted with the number of Transformer layers increased.","PeriodicalId":328114,"journal":{"name":"2022 Nicograph International (NicoInt)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Nicograph International (NicoInt)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NicoInt55861.2022.00020","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Recently, machine learning has been used to improve the accuracy of computer vision, and the Transformer, a recent network model, has been widely applied in natural language translation and object recognition. A characteristic of the Vision Transformer (ViT) used in object recognition is that its accuracy improves as Transformer layers are stacked. However, the latest model from a previous study on object tracking shows the opposite: its accuracy decreases as Transformer layers are stacked. In this study, we therefore hypothesized that accuracy could be improved by changing the experimental conditions while the Transformer layers are stacked. In addition, we expected that searching over the hyperparameters of the loss function would improve accuracy further. The experimental results show that accuracy improves by 5% when the weights of the regression loss and the bounding-box-size loss are adjusted. The model used in this study also has the problem that its accuracy decreases by up to 7% when the number of Transformer layers is increased; even so, with the increased layer count, adjusting the loss-function parameters improves accuracy by 2% over the model with unadjusted parameters.
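The abstract does not give the exact form of the loss being tuned, but Transformer-based trackers commonly regress bounding boxes with a weighted sum of an L1 term and a GIoU term. The sketch below is a minimal illustration of that pattern, assuming PyTorch; the weight names `lambda_l1` and `lambda_giou` are hypothetical stand-ins for the "regression loss" and "loss on bounding box size" parameters the paper adjusts, not the authors' actual hyperparameters.

```python
# A minimal sketch of a weighted tracking loss of the kind the paper tunes.
# lambda_l1 and lambda_giou are hypothetical names for the tunable weights;
# the paper's actual loss definition is not given in the abstract.
import torch


def giou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Generalized IoU loss for boxes given as (x1, y1, x2, y2)."""
    # Intersection area
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / union.clamp(min=1e-7)
    # Smallest box enclosing both prediction and target
    lt_c = torch.min(pred[:, :2], target[:, :2])
    rb_c = torch.max(pred[:, 2:], target[:, 2:])
    wh_c = (rb_c - lt_c).clamp(min=0)
    area_c = (wh_c[:, 0] * wh_c[:, 1]).clamp(min=1e-7)
    giou = iou - (area_c - union) / area_c
    return (1.0 - giou).mean()


def tracking_loss(pred, target, lambda_l1=5.0, lambda_giou=2.0):
    """Weighted sum of an L1 regression loss and a GIoU box loss."""
    l1 = torch.nn.functional.l1_loss(pred, target)
    return lambda_l1 * l1 + lambda_giou * giou_loss(pred, target)


# Example: two predicted boxes vs. their ground truths (normalized coords).
pred = torch.tensor([[0.10, 0.10, 0.50, 0.50], [0.20, 0.20, 0.60, 0.70]])
gt = torch.tensor([[0.15, 0.10, 0.50, 0.55], [0.25, 0.20, 0.65, 0.70]])
print(tracking_loss(pred, gt).item())
```

A hyperparameter search of the sort the abstract describes would then sweep `lambda_l1` and `lambda_giou` and keep the combination with the best tracking accuracy.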
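The paper's other experimental axis is the number of stacked Transformer layers. As a minimal sketch of how depth is exposed as a parameter, the snippet below builds PyTorch encoders of increasing depth; the dimensions, input shape, and surrounding tracker architecture are illustrative assumptions, not the authors' model.

```python
# A minimal sketch of sweeping Transformer encoder depth, the parameter the
# paper varies. d_model, nhead, and the dummy input are assumptions; the
# actual tracking model is not specified in the abstract.
import torch
import torch.nn as nn


def make_encoder(num_layers: int, d_model: int = 256, nhead: int = 8) -> nn.TransformerEncoder:
    """Stack num_layers identical Transformer encoder layers."""
    layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=num_layers)


# Example: run the same patch-token sequence through deeper and deeper encoders.
tokens = torch.randn(1, 64, 256)  # (batch, patch tokens, d_model)
for depth in (2, 4, 6):
    out = make_encoder(depth)(tokens)
    print(depth, out.shape)
```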