Visual-Tactile Fused Graph Learning for Object Clustering.

2021 
Object clustering has received considerable research attention in recent years. However, 1) most existing object clustering methods rely on visual information alone and ignore the important tactile modality, which inevitably degrades model performance, and 2) simply concatenating visual and tactile information in a multiview clustering method leaves the complementary information between the two modalities underexplored, since vision and touch differ substantially. To address these issues, we put forward a graph-based visual-tactile fused object clustering framework with two modules: 1) a modality-specific representation learning module (MR) and 2) a unified affinity graph learning module (MU). Specifically, MR focuses on learning modality-specific representations for visual-tactile data, where deep non-negative matrix factorization (NMF) is adopted to extract the hidden information behind each modality. Meanwhile, we employ an autoencoder-like structure to enhance the robustness of the learned representations, and two graphs to improve their compactness. Furthermore, MU mitigates the differences between vision and touch and maximizes their mutual information by adopting a disagreement-minimization scheme that guides the modality-specific representations toward a unified affinity graph. To achieve ideal clustering performance, a Laplacian rank constraint is imposed to regularize the learned graph so that it has the ideal number of connected components, removing the noisy connections that cause wrong links and allowing clustering labels to be obtained directly from the graph. Finally, we propose an efficient alternating iterative minimization updating strategy, together with a theoretical proof of framework convergence. Comprehensive experiments on five public datasets demonstrate the superiority of the proposed framework.
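The following is a minimal, illustrative sketch (not the authors' implementation) of the idea behind the Laplacian rank constraint mentioned above: when the Laplacian of an affinity graph over n samples has rank n - c, the graph has exactly c connected components, so cluster labels can be read off directly from those components. The toy affinity matrix and variable names below are hypothetical.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Toy symmetric affinity graph over n = 6 samples with two clear blocks.
S = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Graph Laplacian L = D - S; its number of (near-)zero eigenvalues equals the
# number of connected components, which is what the rank constraint enforces.
D = np.diag(S.sum(axis=1))
L = D - S
zero_eigs = int(np.sum(np.linalg.eigvalsh(L) < 1e-10))

# Cluster labels follow directly from the connected components of the graph.
n_components, labels = connected_components(csr_matrix(S), directed=False)
print(zero_eigs, n_components, labels)  # 2, 2, [0 0 0 1 1 1]
```

In the framework described in the abstract, this constraint is imposed while the unified affinity graph itself is being learned, so spurious edges are suppressed and no separate post-clustering step (such as k-means on spectral embeddings) is needed.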