A Dynamic Multi-Scale Voxel Flow Network for Video Prediction

Xiaotao Hu
Zhewei Huang
Ailin Huang
Jun Xu
Shuchang Zhou
[Paper]
[Pytorch]
[MegEngine]



Abstract

The performance of video prediction has been greatly boosted by advanced deep neural networks. However, most current methods suffer from large model sizes and require extra inputs, e.g., semantic/depth maps, for promising performance. For efficiency, in this paper we propose a Dynamic Multi-scale Voxel Flow Network (DMVFN) that achieves better video prediction performance at lower computational cost than previous methods, using only RGB images as input. The core of our DMVFN is a differentiable routing module that can effectively perceive the motion scales of video frames. Once trained, our DMVFN selects adaptive sub-networks for different inputs at the inference stage. Experiments on several benchmarks demonstrate that our DMVFN is an order of magnitude faster than Deep Voxel Flow and surpasses the state-of-the-art iterative-based OPT on generated image quality.
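To illustrate the idea of input-dependent routing described above, here is a minimal, hypothetical PyTorch sketch: a small router predicts a hard keep/skip gate for each flow-estimation sub-network, with a straight-through estimator so the gates stay differentiable during training. The names (TinyRouter, VoxelFlowBlock, DynamicNet) and layer choices are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of dynamic sub-network routing; see the official repo for the real DMVFN.
import torch
import torch.nn as nn

class TinyRouter(nn.Module):
    """Predicts a keep-probability for each of `num_blocks` sub-networks from two stacked RGB frames."""
    def __init__(self, num_blocks, in_channels=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, num_blocks)

    def forward(self, x):
        p = torch.sigmoid(self.fc(self.features(x).flatten(1)))  # (B, num_blocks) in [0, 1]
        hard = (p > 0.5).float()
        # Straight-through estimator: hard 0/1 decisions forward, soft gradients backward.
        return hard + p - p.detach()

class VoxelFlowBlock(nn.Module):
    """Placeholder for one multi-scale flow-estimation sub-network."""
    def __init__(self, channels=6):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return x + self.conv(x)

class DynamicNet(nn.Module):
    def __init__(self, num_blocks=4):
        super().__init__()
        self.router = TinyRouter(num_blocks)
        self.blocks = nn.ModuleList(VoxelFlowBlock() for _ in range(num_blocks))

    def forward(self, frames):
        gates = self.router(frames)          # one gate per sub-network
        x = frames
        for i, block in enumerate(self.blocks):
            g = gates[:, i].view(-1, 1, 1, 1)
            x = g * block(x) + (1 - g) * x   # skip the block wherever the gate is 0
        return x

# Usage: two stacked RGB frames in, a coarse "predicted" tensor out.
net = DynamicNet()
out = net(torch.randn(2, 6, 64, 64))
print(out.shape)  # torch.Size([2, 6, 64, 64])
```

At inference, skipped blocks need not be executed at all, which is where the computational savings come from.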



Experiments

DMVFN outperforms previous methods in terms of image quality, parameter count, and GFLOPs.



Ablations

(a): Average usage rate of sub-networks on videos with different motion magnitudes. "Fast": tested on Vimeo-Fast. "Medium": tested on Vimeo-Medium. "Slow": tested on Vimeo-Slow. (b): Difference between "Fast"/"Slow" and "Medium" in (a). (c): Average usage rate for different time intervals between the two input frames from Vimeo-Slow. "Int.": time interval.
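As a rough illustration of how such usage statistics could be gathered, the hypothetical snippet below (building on the DynamicNet sketch above) averages the router's hard gating decisions over a set of input clips; it is an assumption about the bookkeeping, not the paper's evaluation code.

```python
# Hypothetical sketch: estimate how often each sub-network is selected over a test set.
import torch

@torch.no_grad()
def average_usage(net, clips):
    # clips: iterable of (B, 6, H, W) tensors, each holding two stacked RGB frames
    gates = torch.cat([(net.router(x) > 0.5).float() for x in clips], dim=0)
    return gates.mean(dim=0)  # one usage rate per sub-network, in [0, 1]

clips = [torch.randn(4, 6, 64, 64) for _ in range(8)]  # stands in for a real dataloader
print(average_usage(DynamicNet(), clips))
```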



Paper and Supplementary Material

For more details and experiments, check out our paper:

Xiaotao Hu, Zhewei Huang, Ailin Huang, Jun Xu, Shuchang Zhou.
A Dynamic Multi-Scale Voxel Flow Network for Video Prediction.
CVPR 2023.
(hosted on arXiv)

[Bibtex]




This template was originally made by Phillip Isola and Richard Zhang.