Faster coreset construction for projective clustering via low rank approximation

2016 
In this work, we present a randomized coreset construction for projective clustering, which involves computing a set of $k$ closest $j$-dimensional linear (affine) subspaces for a given set of $n$ vectors in $d$ dimensions. Let $A \in R^{n\times d}$ be the input matrix; an earlier deterministic coreset construction of Feldman \textit{et al.} relied on computing the SVD of $A$. The best known algorithms for SVD require $O(nd\min\{n, d\})$ time, which may not be feasible for large values of $n$ and $d$. We present a coreset construction that projects the matrix $A$ onto a set of orthonormal vectors that closely approximate the right singular vectors of $A$. As a consequence, when the values of $k$ and $j$ are small, we obtain a significantly faster algorithm than that of Feldman \textit{et al.}, while maintaining almost the same approximation guarantee. We also benefit in terms of space and can exploit the sparsity of the input dataset. A further advantage of our approach is that the coreset can be constructed quite efficiently in a streaming setting.
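To make the projection step concrete, the following is a minimal sketch of the general idea of replacing an exact SVD with approximate right singular vectors obtained from a randomized sketch. It is an illustration only, not the paper's exact construction: the function name, the target rank `r`, the `oversample` parameter, and the Gaussian sketching matrix are assumptions chosen for the example.

```python
import numpy as np

def approximate_right_singular_vectors(A, r, oversample=10, seed=0):
    """Return an orthonormal d x r matrix whose columns approximate the
    top-r right singular vectors of A (n x d), without a full SVD.
    Generic randomized sketching sketch; not the paper's exact algorithm."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    ell = min(d, r + oversample)
    # Sketch the row space of A: S @ A has only ell rows instead of n.
    S = rng.standard_normal((ell, n)) / np.sqrt(ell)
    Y = S @ A                              # ell x d sketch of the row space
    # Orthonormalize the sketched rows to get an approximate right singular basis.
    V_hat, _ = np.linalg.qr(Y.T)           # d x ell, orthonormal columns
    return V_hat[:, :r]

# Projecting the rows of A onto these vectors gives the low-dimensional
# representation on which a coreset can then be built.
A = np.random.rand(1000, 200)
V_hat = approximate_right_singular_vectors(A, r=20)
A_proj = A @ V_hat                          # n x r projected data
```

The key point of the sketch is cost: forming `S @ A` and the QR factorization takes time roughly proportional to the number of nonzeros of $A$ times the sketch size, rather than the $O(nd\min\{n,d\})$ cost of an exact SVD, and the sketch can be maintained as rows of $A$ arrive in a stream.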