Towards functional labeling of utility vehicle point clouds for humanoid driving

2013 
We present preliminary work on analyzing 3-D point clouds of a small utility vehicle for purposes of humanoid robot driving. The scope of this work is limited to a subset of ingress-related tasks including stepping up into the vehicle and grasping the steering wheel. First, we describe how partial point clouds are acquired from different perspectives using sensors including a stereo camera and a tilting laser range-finder. For finer detail and a larger model than one sensor view alone can capture, a Kinect Fusion [1]-like algorithm is used to integrate the stereo point clouds as the sensor head is moved around the vehicle. Second, we discuss how individual sensor views can be registered to the overall vehicle model to provide context, and present methods to estimate several geometric parameters critical to motion planning: (1) the floor height and boundaries defined by the seat and the dashboard, and (2) the steering wheel pose and dimensions. Results are compared using the different sensors, and the usefulness of the estimated quantities for motion planning is also demonstrated.
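One of the geometric parameters mentioned above, the floor height, can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm; it is a hypothetical RANSAC plane fit, using only NumPy, that recovers the height of a dominant near-horizontal plane (such as a vehicle floor) from a z-up point cloud:

```python
import numpy as np

def ransac_floor_height(points, iters=200, inlier_tol=0.02, seed=0):
    """Estimate the height of a dominant horizontal plane (e.g. a vehicle
    floor) in a z-up point cloud via a simple RANSAC plane fit.

    points: (N, 3) array of xyz coordinates.
    Returns (height, inlier_mask). Illustrative only -- not the
    estimation method used in the paper.
    """
    rng = np.random.default_rng(seed)
    best_mask, best_count = None, -1
    for _ in range(iters):
        # Hypothesize a plane from 3 random points.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (near-collinear) sample
        normal /= norm
        if abs(normal[2]) < 0.9:
            continue  # reject planes that are not near-horizontal
        # Count points within inlier_tol of the candidate plane.
        dist = np.abs((points - sample[0]) @ normal)
        mask = dist < inlier_tol
        if mask.sum() > best_count:
            best_count, best_mask = mask.sum(), mask
    # Floor height = median z of the inliers of the best plane.
    height = float(np.median(points[best_mask, 2]))
    return height, best_mask
```

A similar consensus-based fit (a circle in a plane rather than a horizontal plane) could in principle serve for the steering wheel's pose and radius.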