Support Vector Machines and Radon's Theorem

2020 
A support vector machine (SVM) is an algorithm that finds a hyperplane optimally separating labeled data points in $\mathbb{R}^n$ into positive and negative classes. The data points on the margin of this separating hyperplane are called support vectors. We study the possible configurations of support vectors for points in general position. In particular, we connect the possible configurations to Radon's theorem, which guarantees that any set of $n+2$ points in $\mathbb{R}^n$ can be divided into two classes (positive and negative) whose convex hulls intersect. If the positive and negative support vectors in a generic SVM configuration are projected onto the separating hyperplane, then the projected points form a Radon configuration. Further, under a particular notion of general position, we show that there are at most $n+1$ support vectors; this bound can be used to test the level of machine precision needed in a support vector machine implementation. We also show that the projections of the convex hulls of the support vectors intersect in a single Radon point and that, under a small enough perturbation, the points labeled as support vectors remain support vectors. Finally, we present computations studying the expected number of support vectors for randomly generated data.
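To make the Radon's theorem connection concrete, here is a minimal sketch (not the paper's code; it assumes generic points and uses NumPy/SciPy as an illustrative choice) that computes a Radon partition of $n+2$ points in $\mathbb{R}^n$, together with the Radon point where the two convex hulls meet:

```python
# Sketch: compute a Radon partition of n + 2 points in R^n.
# Radon's theorem guarantees such points split into two sets whose
# convex hulls intersect; their common point is a Radon point.
import numpy as np
from scipy.linalg import null_space

def radon_partition(points):
    """points: (n+2, n) array of generic points.
    Returns (positive indices, negative indices, Radon point)."""
    m, n = points.shape
    assert m == n + 2, "Radon's theorem needs exactly n + 2 points in R^n"
    # Find nonzero lambda with sum(lambda_i) = 0 and sum(lambda_i x_i) = 0:
    # a null vector of the (n+1) x (n+2) matrix stacking points^T and ones.
    A = np.vstack([points.T, np.ones(m)])
    lam = null_space(A)[:, 0]          # generically one-dimensional
    pos = np.where(lam > 0)[0]
    neg = np.where(lam < 0)[0]
    # The Radon point is the convex combination over either sign class.
    radon_point = points[pos].T @ lam[pos] / lam[pos].sum()
    return pos, neg, radon_point

rng = np.random.default_rng(0)
pts = rng.standard_normal((4, 2))      # 4 generic points in the plane
print(radon_partition(pts))
```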
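The bound of at most $n+1$ support vectors and the expected-count experiments can likewise be probed empirically. The following is a minimal sketch under stated assumptions (randomly generated separable data; scikit-learn's `SVC` with a large `C` as a stand-in for a hard-margin SVM; this is not the paper's implementation):

```python
# Sketch: count support vectors of a (near) hard-margin linear SVM
# on random separable data in R^n; generically the count is <= n + 1.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def support_vector_counts(n_dim, n_points, trials=200):
    counts = []
    for _ in range(trials):
        X = rng.standard_normal((n_points, n_dim))
        # Label by a random hyperplane, then push the classes apart
        # along its normal so the hard-margin problem is feasible.
        w = rng.standard_normal(n_dim)
        y = np.sign(X @ w)
        X += 0.5 * y[:, None] * w / np.linalg.norm(w)
        clf = SVC(kernel="linear", C=1e8).fit(X, y)  # large C ~ hard margin
        counts.append(len(clf.support_))
    return counts

counts = support_vector_counts(n_dim=3, n_points=20)
print("max  #support vectors:", max(counts))       # generically <= n_dim + 1
print("mean #support vectors:", np.mean(counts))
```

In floating point, the solver may flag a few extra near-marginal points as support vectors within its tolerance, which is exactly the machine-precision issue the abstract proposes testing with the $n+1$ bound.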