
Multimodal object recognition

2019 
Object recognition traditionally uses static vision to classify objects. However, using only visual cues such as colour, shape and texture works poorly when different objects share these features, since the visual input appears almost identical. In these cases, even humans cannot identify objects without further inspection, such as assessing the object's taste, weight or other material properties. The hypothesis is that using multiple modalities can improve classification accuracy without affecting the system's performance in most cases. This innovation is realisable due to the increasing availability of novel, low-cost, non-visual sensors such as the SCiO molecular sensor. The first objective is to investigate existing computer vision techniques. The second is to develop two new modality classifiers using the SCiO sensor and a load cell. Finally, the work culminates in an algorithm that combines predictions from the implemented classifiers. Results show that combining data from the three sensors can increase object recognition accuracy to 98%, which is 18% above current near state-of-the-art methods, while reducing the number of sensor queries by 30% compared with a traditional modality fusion technique.
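The abstract does not specify the fusion algorithm itself. The sketch below is a minimal illustration of one way such a system could work, assuming three per-modality classifiers (vision, SCiO spectral, load-cell weight) whose outputs are combined by late fusion, with the non-visual sensors queried only when vision is uncertain; this confidence gating is one plausible source of the reported reduction in sensor queries. All function names, thresholds, classes and the product-rule fusion are illustrative assumptions, not the author's method.

```python
import numpy as np

# Hypothetical shared label set for all modality classifiers.
CLASSES = ["apple", "onion", "tennis_ball"]

def vision_probs(image_features):
    # Placeholder: in practice, the softmax output of a vision classifier.
    return np.array([0.45, 0.40, 0.15])

def spectral_probs(scio_reading):
    # Placeholder for a classifier trained on SCiO molecular spectra.
    return np.array([0.80, 0.15, 0.05])

def weight_probs(load_cell_grams):
    # Placeholder for a classifier trained on load-cell measurements.
    return np.array([0.60, 0.30, 0.10])

def fuse(prob_list):
    """Late fusion by normalised element-wise product of the
    per-modality class distributions (a common baseline)."""
    fused = np.prod(np.vstack(prob_list), axis=0)
    return fused / fused.sum()

def classify(image_features, scio_reading, load_cell_grams, conf_threshold=0.7):
    """Query the vision classifier first; fall back to the non-visual
    sensors only when the top vision confidence is below the threshold."""
    probs = [vision_probs(image_features)]
    if probs[0].max() < conf_threshold:
        probs.append(spectral_probs(scio_reading))    # SCiO query
        probs.append(weight_probs(load_cell_grams))   # load-cell query
    fused = fuse(probs)
    return CLASSES[int(np.argmax(fused))], fused

label, dist = classify(image_features=None, scio_reading=None, load_cell_grams=150.0)
print(label, dist)
```

In this sketch the vision prediction is ambiguous (0.45 vs 0.40), so the other two sensors are queried and the fused distribution resolves the class; when vision is already confident, the extra sensors are skipped entirely.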