Weighted Task Regularization for Multitask Learning

2013 
Multitask learning has proven more effective than traditional single-task learning on many real-world problems, because it transfers knowledge among tasks that may each suffer from limited labeled data. However, building a reliable multitask learning model requires nontrivial effort to model the relatedness between tasks. When the number of tasks is small, learning can suffer if outlier tasks inappropriately bias the majority. Rather than identifying and discarding such outlier tasks, we present a weighted regularized multitask learning framework, built on regularized multitask learning, that uses statistical metrics such as the Kullback-Leibler divergence to assign task weights prior to the regularization process; this robustly reduces the impact of outlier tasks and yields better learned models for all tasks. We then show that this formulation can be solved in its dual form, analogous to optimizing a standard support vector machine with varied kernels. Experiments on both a synthetic dataset and a real-world dataset from the petroleum industry show that our method outperforms existing approaches.
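
The abstract gives no formulas, so the following is a minimal sketch of the idea under stated assumptions: each task's inputs are approximated by a diagonal Gaussian, the weight of task t is exp(-KL(task_t ‖ pooled data)) so that distributionally deviant tasks are down-weighted, and the model uses the shared-plus-offset parameterization w_t = w0 + v_t of regularized multitask learning (Evgeniou and Pontil). The paper solves a hinge-loss dual like an SVM; for brevity this sketch instead uses squared loss with gradient descent in the primal. All function names (gaussian_kl, task_weights, fit_weighted_mtl) and hyperparameters are hypothetical, not the authors' exact formulation.

```python
# KL-weighted regularized multitask learning: an illustrative sketch.
import numpy as np

def gaussian_kl(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var1 / var0)
                        + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def task_weights(task_inputs, eps=1e-6):
    """Assign weight exp(-KL(task || pooled)) so outlier tasks shrink."""
    pooled = np.vstack(task_inputs)
    mu_p, var_p = pooled.mean(axis=0), pooled.var(axis=0) + eps
    kls = np.array([gaussian_kl(X.mean(axis=0), X.var(axis=0) + eps,
                                mu_p, var_p) for X in task_inputs])
    w = np.exp(-kls)
    return w / w.sum() * len(task_inputs)  # normalize to mean weight 1

def fit_weighted_mtl(task_inputs, task_targets,
                     lam0=1.0, lam=1.0, lr=0.01, n_iter=2000):
    """Minimize sum_t weight_t * MSE_t(w0 + v_t)
       + lam0*||w0||^2 + lam*sum_t ||v_t||^2 by gradient descent."""
    T, d = len(task_inputs), task_inputs[0].shape[1]
    weights = task_weights(task_inputs)
    w0, V = np.zeros(d), np.zeros((T, d))
    for _ in range(n_iter):
        g0 = 2 * lam0 * w0                           # grad of shared penalty
        for t, (X, y) in enumerate(zip(task_inputs, task_targets)):
            r = X @ (w0 + V[t]) - y                  # residuals of task t
            g = weights[t] * (2.0 / len(y)) * X.T @ r  # weighted loss grad
            g0 += g
            V[t] -= lr * (g + 2 * lam * V[t])        # per-task offset step
        w0 -= lr * g0                                # shared parameter step
    return w0, V, weights
```

A small usage example with four related tasks and one deliberately shifted outlier task; the printed weights should show the outlier receiving a near-zero weight, so it barely perturbs the shared parameters:

```python
rng = np.random.default_rng(0)
true_w = rng.normal(size=5)
tasks_X = [rng.normal(size=(50, 5)) for _ in range(4)]
tasks_y = [X @ (true_w + 0.1 * rng.normal(size=5)) for X in tasks_X]
tasks_X.append(rng.normal(loc=5.0, size=(50, 5)))   # outlier task
tasks_y.append(tasks_X[-1] @ (-true_w))
w0, V, weights = fit_weighted_mtl(tasks_X, tasks_y)
print(weights)  # outlier task gets a much smaller weight than the rest
```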