Machine learning-based multi-modal information perception for soft robotic hands

2020 
This paper focuses on multi-modal Information Perception (IP) for Soft Robotic Hands (SRHs) using Machine Learning (ML) algorithms. A flexible Optical Fiber-based Curvature Sensor (OFCS) is fabricated, consisting of a Light-Emitting Diode (LED), a photosensitive detector, and an optical fiber. Bending the roughened optical fiber reduces the transmitted light intensity, which reflects the curvature of the soft finger. Combining this curvature information with pressure information, multi-modal IP is performed to improve recognition accuracy. Recognition of gesture, object shape, size, and weight is implemented with multiple ML approaches, including the Supervised Learning Algorithms (SLAs) K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Logistic Regression (LR), and the unsupervised learning algorithm (un-SLA) K-Means Clustering (KMC). Moreover, Optical Sensor Information (OSI), Pressure Sensor Information (PSI), and Double-Sensor Information (DSI) are adopted to compare recognition accuracies. The experimental results demonstrate that the proposed sensors and recognition approaches are feasible and effective: the recognition accuracies obtained with the above ML algorithms and the three modes of sensor information exceed 85 percent for almost all combinations. DSI is more accurate than either single-modal sensor input, and the KNN algorithm with DSI outperforms the other combinations in recognition accuracy.
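To make the OSI/PSI/DSI comparison concrete, the sketch below shows one plausible way to feed single-modal and concatenated (double-sensor) feature sets to the four classifiers named in the abstract. It is not the authors' pipeline: the scikit-learn API, the synthetic placeholder features, and the hyperparameters (k = 5, RBF kernel, cluster count) are all assumptions made purely for illustration.

```python
# Illustrative sketch only (not the paper's code): comparing recognition
# accuracy for OSI, PSI, and DSI feature sets with KNN, SVM, LR, and KMC.
# The feature arrays are synthetic placeholders; in the paper they would be
# optical-fiber curvature readings and pressure readings from the soft fingers.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_classes = 300, 4                  # e.g. four object shapes (assumed)
optical = rng.normal(size=(n_samples, 5))      # OSI: per-finger curvature features (placeholder)
pressure = rng.normal(size=(n_samples, 5))     # PSI: per-finger pressure features (placeholder)
labels = rng.integers(n_classes, size=n_samples)

feature_sets = {
    "OSI": optical,
    "PSI": pressure,
    "DSI": np.hstack([optical, pressure]),     # double-sensor: concatenate both modalities
}

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "LR": LogisticRegression(max_iter=1000),
}

for mode, X in feature_sets.items():
    X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
    for name, clf in classifiers.items():
        acc = accuracy_score(y_test, clf.fit(X_train, y_train).predict(X_test))
        print(f"{mode} + {name}: accuracy = {acc:.2f}")
    # KMC is unsupervised: cluster the features without labels and inspect groupings.
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(X)
    print(f"{mode} + KMC cluster sizes: {np.bincount(km.labels_)}")
```

With real curvature and pressure data, the same loop would reproduce the paper's comparison: each sensor mode is evaluated with each algorithm, and the DSI concatenation gives the classifiers both modalities at once.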