|Risheng Liu||Dalian University of Technology|
|Shichao Cheng||Dalian University of Technology|
|Long Ma||School of Software Technology, Dalian University of Technology|
|Xin Fan||Dalian University of Technology|
|Zhongxuan Luo||Dalian University of Technology|
Optimizing task-related mathematical models is one of the most fundamental methodologies in the statistics and learning areas. However, generically designed schematic iterations may struggle to capture complex data distributions in real-world applications. Recently, training deep propagations (i.e., networks) has achieved promising performance on some particular tasks. Unfortunately, existing networks are often built in heuristic manners and thus lack principled interpretations and solid theoretical support. In this work, we provide a new paradigm, named Propagation and Optimization based Deep Model (PODM), to bridge the gap between these different mechanisms (i.e., model optimization and deep propagation). On the one hand, we utilize PODM as a deeply trained solver for model optimization. Different from existing network-based iterations, which often lack theoretical investigation, we provide a strict convergence analysis for PODM in challenging nonconvex and nonsmooth scenarios. On the other hand, by relaxing the model constraints and performing end-to-end training, we also develop a PODM-based strategy to integrate domain knowledge (formulated as models) and real data distributions (learned by networks), resulting in a generic ensemble framework for challenging real-world applications. Extensive experiments verify our theoretical results and demonstrate the superiority of PODM over state-of-the-art approaches.
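The idea of pairing a learned propagation with a model-based update that safeguards convergence can be sketched minimally as follows. Everything here is an illustrative assumption, not the paper's actual algorithm: the `learned_step` stand-in replaces a trained network, the ℓ1-regularized denoising model is just a concrete example problem, and the objective-based acceptance check is one simple way to keep the provable descent of the model step.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (elementwise shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def objective(x, y, lam):
    # Example model: F(x) = 0.5 * ||x - y||^2 + lam * ||x||_1.
    return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.abs(x))

def learned_step(x):
    # Hypothetical stand-in for a trained network's propagation;
    # a real system would apply a learned operator here.
    return 0.9 * x

def hybrid_solve(y, lam=0.5, step=0.5, iters=50):
    x = y.copy()
    for _ in range(iters):
        # Model-based update: one proximal-gradient (ISTA) step on F.
        model_x = soft_threshold(x - step * (x - y), step * lam)
        # Network-based proposal for the next iterate.
        net_x = learned_step(x)
        # Accept the network output only if it decreases the objective
        # below the model step; otherwise fall back to the model step,
        # so the iteration inherits its monotone descent.
        if objective(net_x, y, lam) < objective(model_x, y, lam):
            x = net_x
        else:
            x = model_x
    return x
```

For this separable example the minimizer is known in closed form (`soft_threshold(y, lam)`), which makes the safeguarded iteration easy to sanity-check; the design point is that the model step is always available as a certified fallback, so a poorly behaved learned proposal cannot break descent.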