CrossFlow: Learning cost volumes for optical flow by cross-matching local and non-local image features
Ziyang Liu, Zimeng Liu, Xingming Wu, Weihai Chen, Zhong Liu, Zhengguo Li
Journal of Visual Communication and Image Representation, Volume 112, Article 104588 (2025). DOI: 10.1016/j.jvcir.2025.104588
Citations: 0
Abstract
Optical flow is the pixel-level correspondence between two consecutive video frames. The cost volume plays an important role in deep learning-based optical flow methods: it measures the matching cost, i.e., the dissimilarity, between pixels in consecutive frames, and numerous optical flow methods revolve around it. Most existing work constructs the cost volume by computing the dot product between features of the target and source images, which are generally extracted by a shared convolutional neural network (CNN). However, these methods cannot adequately address long-standing challenges such as motion blur and large displacements. In this study, we propose CrossFlow, which computes the cost volume by cross-matching local and non-local image features. The local and non-local features are extracted by a CNN and a transformer, respectively. A total of four cost volumes are then computed and fused adaptively through a softmax layer, so the final cost volume contains both high- and low-frequency information. This helps the network find the correct correspondences in images with motion blur and large displacements. Experimental results demonstrate that our optical flow estimation method outperforms the baseline (CRAFT) by 7% and 10% on the publicly available Sintel and KITTI benchmarks, respectively, confirming the effectiveness of the proposed cost volume.
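The abstract's core ideas, a dot-product cost volume and softmax fusion of four cross-matched volumes, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature maps are dummy data, and the per-volume scalar fusion weights (`logits`) are a hypothetical simplification of whatever adaptive weighting CrossFlow actually learns.

```python
import numpy as np

def cost_volume(feat1, feat2):
    """All-pairs cost volume: scaled dot product between every pixel pair.
    feat1, feat2: (H, W, C) feature maps -> (H*W, H*W) matching costs."""
    f1 = feat1.reshape(-1, feat1.shape[-1])
    f2 = feat2.reshape(-1, feat2.shape[-1])
    return f1 @ f2.T / np.sqrt(feat1.shape[-1])

def fuse_cost_volumes(volumes, logits):
    """Fuse cost volumes with softmax weights (hypothetical per-volume
    scalar logits; in practice these would be learned)."""
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    return sum(wi * v for wi, v in zip(w, volumes))

rng = np.random.default_rng(0)
H, W, C = 4, 4, 8
# stand-ins for local (CNN) and non-local (transformer) features of two frames
f1_loc, f1_non = rng.normal(size=(H, W, C)), rng.normal(size=(H, W, C))
f2_loc, f2_non = rng.normal(size=(H, W, C)), rng.normal(size=(H, W, C))

# four cross-matched volumes: local-local, local-nonlocal, nonlocal-local, nonlocal-nonlocal
vols = [cost_volume(a, b) for a in (f1_loc, f1_non) for b in (f2_loc, f2_non)]
fused = fuse_cost_volumes(vols, np.zeros(4))  # zero logits -> equal weights
print(fused.shape)  # (16, 16)
```

With zero logits the fusion reduces to a plain average of the four volumes; a learned weighting would let the network emphasize, say, the non-local pair under large displacements.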
Journal description:
The Journal of Visual Communication and Image Representation publishes papers on state-of-the-art visual communication and image representation, with emphasis on novel technologies and theoretical work in this multidisciplinary area of pure and applied research. The field of visual communication and image representation is considered in its broadest sense and covers both digital and analog aspects as well as processing and communication in biological visual systems.