Discovering Non-Redundant K-means Clusterings In Optimal Subspaces

Authors:
Dominik Mautz (Ludwig Maximilian University of Munich)
Wei Ye (Ludwig Maximilian University of Munich)
Claudia Plant (University of Vienna)
Christian Böhm (Ludwig Maximilian University of Munich)

Introduction:

This paper studies non-redundant clustering. The authors show that non-redundant k-means-like clusterings may exist in different, arbitrarily oriented subspaces of the high-dimensional space.

Abstract:

A huge collection of objects in a high-dimensional space can often be clustered in more than one way; for instance, objects could be clustered by their shape or, alternatively, by their color. Each grouping represents a different view of the data set. The new research field of non-redundant clustering addresses this class of problems. In this paper, we follow the approach that different, non-redundant k-means-like clusterings may exist in different, arbitrarily oriented subspaces of the high-dimensional space. We assume that these subspaces (and optionally a further noise space without any cluster structure) are orthogonal to each other. This assumption enables a particularly rigorous mathematical treatment of the non-redundant clustering problem and thus a particularly efficient algorithm, which we call Nr-Kmeans (for non-redundant k-means). The superiority of our algorithm is demonstrated both theoretically and in extensive experiments.
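The core idea can be illustrated with a minimal sketch: if an orthogonal rotation of the feature space and a partition of the rotated dimensions into subspaces were already known, each subspace could be clustered independently with k-means, yielding one non-redundant labeling per view. The sketch below assumes exactly that; the rotation `V`, the dimension partition `subspace_dims`, and the helper `cluster_in_orthogonal_subspaces` are hypothetical illustrations, not the paper's algorithm, which learns the rotation, the subspace split, and the cluster assignments jointly.

```python
# Illustrative sketch only: cluster data independently in mutually orthogonal,
# arbitrarily oriented subspaces (the setting assumed by Nr-Kmeans).
# The rotation V and the subspace partition are given here; the actual
# Nr-Kmeans algorithm optimizes them together with the clusterings.
import numpy as np
from sklearn.cluster import KMeans


def cluster_in_orthogonal_subspaces(X, V, subspace_dims, ks, random_state=0):
    """Cluster the rotated data independently within each subspace.

    X             : (n, d) data matrix
    V             : (d, d) orthogonal rotation matrix (V.T @ V = identity)
    subspace_dims : list of index arrays, one per subspace, partitioning range(d)
    ks            : number of clusters per subspace
    Returns one label vector per subspace (one "view" of the data each).
    """
    X_rot = X @ V  # express the data in the rotated coordinate system
    labelings = []
    for dims, k in zip(subspace_dims, ks):
        sub = X_rot[:, dims]  # projection onto one oriented subspace
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=random_state).fit_predict(sub)
        labelings.append(labels)
    return labelings


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: two independent 2-cluster structures in orthogonal 1-D subspaces.
    view1 = np.concatenate([rng.normal(-3, 1, 100), rng.normal(3, 1, 100)])
    view2 = rng.permutation(view1)  # same structure, independent grouping
    X = np.column_stack([view1, view2])
    V = np.eye(2)  # identity rotation is enough for this toy example
    views = cluster_in_orthogonal_subspaces(
        X, V, [np.array([0]), np.array([1])], ks=[2, 2])
    print([np.bincount(v) for v in views])  # cluster sizes per view
```

Run on the toy data above, the sketch recovers two different, non-redundant two-cluster groupings of the same points, one per orthogonal subspace, which is the kind of multi-view structure the paper targets.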
