Improving the Perceptual Quality of 2D Animation Interpolation

2021 
Traditional 2D animation is labor-intensive, often requiring animators to manually draw twelve illustrations per second of movement. While automatic frame interpolation may ease this burden, the artistic effects inherent to 2D animation make video synthesis particularly challenging compared to the photorealistic domain. Lower framerates result in larger displacements and occlusions; discrete perceptual elements (e.g., lines and solid-color regions) pose difficulties for texture-oriented convolutional networks; and exaggerated nonlinear movements hinder training-data collection. Previous work has tried to address these issues, but relied on unscalable methods and focused on pixel-perfect performance. In contrast, we build a scalable system more appropriately centered on perceptual quality for this artistic domain. Firstly, we propose a lightweight architecture with a simple yet effective occlusion-inpainting technique that improves convergence on perceptual metrics with fewer trainable parameters. Secondly, we design a novel auxiliary module that leverages the Euclidean distance transform to improve the preservation of key line and region structures. Thirdly, we automatically double the existing manually collected dataset for this task by quantitatively filtering out movement nonlinearities, allowing us to improve model generalization. Finally, through a user study we establish LPIPS and chamfer distance as strongly preferable to PSNR and SSIM, validating our system's emphasis on perceptual quality in the 2D animation domain.
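Two of the quantities named above are compact enough to sketch in code. The chamfer distance used in the user study can be computed directly from the same Euclidean distance transform that drives the auxiliary module. The sketch below is a minimal illustration under our own assumptions, not the authors' implementation: it assumes line art has already been extracted as boolean masks, and the function name and symmetrization scheme are ours.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt


def chamfer_distance(lines_a: np.ndarray, lines_b: np.ndarray) -> float:
    """Symmetric chamfer distance between two binary line maps.

    lines_a, lines_b: boolean HxW arrays where True marks a line pixel.
    """
    if not lines_a.any() or not lines_b.any():
        raise ValueError("both line maps must contain at least one line pixel")
    # distance_transform_edt measures the distance to the nearest zero
    # element, so inverting each mask yields, at every pixel, the Euclidean
    # distance to that map's nearest line pixel.
    dist_to_a = distance_transform_edt(~lines_a)
    dist_to_b = distance_transform_edt(~lines_b)
    # Average, over the line pixels of each map, the distance to the other
    # map's nearest line, then symmetrize the two directions.
    return 0.5 * (dist_to_b[lines_a].mean() + dist_to_a[lines_b].mean())
```

A low value means the two drawings' line structures nearly coincide, which is why the metric tracks the preservation of lines and region boundaries better than pixelwise scores like PSNR.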
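Likewise, "quantitatively filtering out movement nonlinearities" can be read as thresholding the disagreement between consecutive optical-flow fields in a frame triplet: under linear motion, the flow from frame 0 to 1 should roughly match the flow from frame 1 to 2. The heuristic below is our assumption rather than the paper's actual criterion; the thresholds rel_tol and min_mag are hypothetical, and comparing the two flows at the same pixel (rather than following the displaced point) is a deliberate simplification.

```python
import numpy as np


def is_linear_triplet(flow_01: np.ndarray, flow_12: np.ndarray,
                      rel_tol: float = 0.2, min_mag: float = 1.0) -> bool:
    """Heuristic linearity check for a frame triplet (I0, I1, I2).

    flow_01, flow_12: HxWx2 optical-flow fields for I0->I1 and I1->I2.
    Keeps triplets whose flow disagreement is small relative to the
    overall motion magnitude.
    """
    mag = np.linalg.norm(flow_01, axis=-1)
    moving = mag > min_mag              # ignore near-static pixels
    if not moving.any():
        return True                     # a static clip is trivially linear
    # Per-pixel disagreement between the two flows, compared at the same
    # pixel location (a simplification that ignores the point's motion).
    diff = np.linalg.norm(flow_01 - flow_12, axis=-1)
    nonlinearity = diff[moving].mean() / mag[moving].mean()
    return nonlinearity < rel_tol
```

Applying such a filter to raw animation footage would discard triplets with exaggerated eases and snaps, leaving pairs whose true middle frame is close to the linear midpoint that an interpolator is trained to predict.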