Tear the Image into Strips for Style Transfer

2021 
Recently, deep convolutional neural networks (DCNNs) have achieved remarkable progress in the computer vision community, including on style transfer tasks. Most methods feed the full image to the DCNN. Although high-quality results can be achieved this way, several underlying problems arise. For one, as image resolution increases, the memory footprint grows dramatically, leading to high latency and massive power consumption. Furthermore, these methods usually cannot be integrated with a commercial image signal processor (ISP), which processes the image in a line-sequential manner. To solve these problems, we propose a novel ISP-friendly deep learning-based style transfer algorithm: SequentialStyle. We introduce a brand-new line-sequential processing mode in which the image is torn into strips and each strip is processed sequentially, reducing memory demand. We further propose a Spatial-Temporal Synergistic (STS) mechanism that decouples the previously monolithic 2-D image style transfer into spatial feature processing (within a strip) and temporal correlation transmission (between strips). Experimental results show that SequentialStyle is competitive with state-of-the-art style transfer algorithms while requiring less memory, even for images at 4K resolution or higher.
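
The line-sequential idea can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch illustration, not the paper's actual network: the `StripStyleTransfer` class, its layer choices, and the pooled-state handoff are all assumptions standing in for the spatial (in-strip) processing and the temporal (between-strip) transmission described in the abstract.

```python
import torch
import torch.nn as nn


class StripStyleTransfer(nn.Module):
    """Minimal sketch of line-sequential stylization: the image is split
    into horizontal strips, each strip is stylized in turn, and a small
    carried state passes inter-strip context forward (a hypothetical
    stand-in for the STS mechanism)."""

    def __init__(self, channels=64):
        super().__init__()
        # In-strip spatial feature processing (placeholder for the spatial branch).
        self.encode = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        # Fuses the previous strip's state into the current strip (placeholder
        # for the temporal correlation transmission).
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.decode = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, image, strip_height=64):
        n, _, h, w = image.shape
        state = None
        outputs = []
        for top in range(0, h, strip_height):
            # Only one strip is resident at a time, keeping memory low.
            strip = image[:, :, top:top + strip_height, :]
            feat = torch.relu(self.encode(strip))  # spatial processing (in-strip)
            if state is not None:
                # Temporal correlation transmission: broadcast the previous
                # strip's pooled state over the current strip and fuse.
                prev = state.expand(-1, -1, feat.shape[2], feat.shape[3])
                feat = torch.relu(self.fuse(torch.cat([feat, prev], dim=1)))
            # Summarize this strip's features to hand to the next strip.
            state = feat.mean(dim=(2, 3), keepdim=True)
            outputs.append(self.decode(feat))
        return torch.cat(outputs, dim=2)  # reassemble strips along the height axis


# Usage: a 4K frame is processed strip by strip rather than as a whole.
model = StripStyleTransfer()
stylized = model(torch.randn(1, 3, 2160, 3840))
```

Note the design point the abstract emphasizes: peak activation memory scales with the strip size rather than the full image height, which is what makes the processing order compatible with a line-sequential ISP pipeline.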