An autocovariance-based learning framework for high-dimensional functional time series

2020 
Many scientific and economic applications involve the analysis of high-dimensional functional time series, which sit at the intersection of functional time series and high-dimensional statistics and combine the challenges of infinite dimensionality, serial dependence, and non-asymptotic analysis. In this paper, we model observed functional time series that are subject to error, in the sense that each functional datum arises as the sum of two uncorrelated components: one dynamic and one white noise. Motivated by the simple fact that the autocovariance function of the observed functional time series automatically filters out the noise term, we propose an autocovariance-based three-step procedure: we first perform autocovariance-based dimension reduction, then formulate a novel autocovariance-based block regularized minimum distance (RMD) estimation framework to produce block-sparse estimates, and finally recover functional sparse estimates from them. We investigate non-asymptotic properties of the relevant estimated terms under this autocovariance-based dimension reduction framework. To provide theoretical guarantees for the second step, we also present a convergence analysis of the block RMD estimator. Finally, we illustrate the proposed autocovariance-based learning framework through applications to three sparse high-dimensional functional time series models, deriving theoretical results on the convergence properties of the associated estimators. We demonstrate via simulated and real datasets that our proposed estimators significantly outperform competing methods.
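To make the noise-filtering motivation concrete, here is a minimal sketch of the underlying fact; the notation $Y_t$, $X_t$, $\varepsilon_t$ is introduced for illustration and is not necessarily the paper's. If each observed curve decomposes as
$$Y_t(u) = X_t(u) + \varepsilon_t(u),$$
where $X_t$ is the dynamic component and $\varepsilon_t$ is white noise, uncorrelated with $X_t$ and across time, then for any lag $h \neq 0$,
$$\mathrm{Cov}\{Y_t(u),\, Y_{t+h}(v)\} = \mathrm{Cov}\{X_t(u),\, X_{t+h}(v)\},$$
so the lag-$h$ autocovariance function of the observed series coincides with that of the latent dynamic component, whereas the lag-0 covariance is inflated by the noise covariance.

A short simulation sketch of the same point, under assumed settings (a simple lag-1 dynamic component observed on a grid; all names and parameter values below are hypothetical, not taken from the paper):

import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 50  # number of curves, grid size

# Dynamic component X_t: simple lag-1 recursion with smoothed innovations
X = np.zeros((n, p))
for t in range(1, n):
    innovation = np.convolve(rng.normal(size=p), np.ones(5) / 5, mode="same")
    X[t] = 0.5 * X[t - 1] + innovation

eps = rng.normal(scale=0.5, size=(n, p))  # white-noise component
Y = X + eps                               # observed curves

def lag_autocov(Z, h):
    # Empirical lag-h autocovariance evaluated on the grid
    Zc = Z - Z.mean(axis=0)
    return Zc[:-h].T @ Zc[h:] / (Z.shape[0] - h)

# The lag-1 autocovariance of Y is close to that of the latent X ...
print(np.abs(lag_autocov(Y, 1) - lag_autocov(X, 1)).max())
# ... while the lag-0 covariance of Y is inflated by the noise variance.
print(np.abs(np.cov(Y.T) - np.cov(X.T)).max())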