Covariance based point cloud descriptors for object detection and recognition

2016 
We introduce a covariance-based feature descriptor for object classification. The descriptor is compact (low dimensionality) and computationally fast. Adding a new feature to the descriptor amounts to appending a new row and column. There is no need to tune parameters such as bin size or bin count. The descriptor is naturally discriminative and subtracts out common data features.

Processing 3D point cloud data is of primary interest in many areas of computer vision, including object grasping, robot navigation, and object recognition. The introduction of affordable RGB-D sensors has generated great interest in the computer vision community in developing efficient algorithms for point cloud processing. Previously, capturing a point cloud required expensive specialized sensors such as lasers or dedicated range imaging devices; now, range data is readily available from low-cost sensors that provide easily extractable point clouds from a depth map. An interesting challenge is then to find different objects in the point cloud, and various descriptors have been introduced to match features in a point cloud. However, cheap sensors are not designed to produce precise measurements, so the resulting data is less accurate than a point cloud captured by a laser scanner or a dedicated range finder. Although some feature descriptors have been shown to be successful in recognizing objects from point clouds, there is still room for improvement. The aim of this paper is to bring techniques from other fields, such as image processing, into 3D point cloud processing in order to improve rendering, classification, and recognition. Covariance descriptors have proven successful not only in image processing but in other domains as well; this work develops their application to 3D point cloud data.
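As a rough illustration of the mechanism described above (a sketch, not the authors' implementation), the snippet below computes a covariance descriptor from a hypothetical choice of per-point features, here 3D coordinates and surface normals, using NumPy. The fixed d x d size, the ability to add a feature by appending a row and column, and the mean subtraction that removes common data features all follow directly from the construction.

```python
import numpy as np

def covariance_descriptor(points, normals):
    """Covariance descriptor over per-point features (illustrative feature choice).

    points  : (N, 3) array of 3D coordinates
    normals : (N, 3) array of unit surface normals
    Returns a (6, 6) symmetric matrix. Adding another per-point feature
    (e.g. curvature or color) simply appends a row and a column.
    """
    # Stack the chosen per-point features into an (N, d) matrix; d = 6 here.
    feats = np.hstack([points, normals])
    # Centering subtracts out feature values common to the whole cloud,
    # so the descriptor captures only how the features co-vary.
    feats = feats - feats.mean(axis=0, keepdims=True)
    return (feats.T @ feats) / (feats.shape[0] - 1)

# Example: the descriptor stays d x d regardless of the number of points.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
nrm = rng.normal(size=(500, 3))
nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
C = covariance_descriptor(pts, nrm)   # 6 x 6 matrix
```

Note that the choice of coordinates and normals as features is an assumption for illustration; the same construction applies to any set of per-point features.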