Mitigating Bias in Deep Nets with Knowledge Bases: the Case of Natural Language Understanding for Robots.

2020 
In this paper, we tackle the problem of the lack of understandability of deep learning systems by integrating heterogeneous knowledge sources; specifically, we present how we used FrameNet to guide the learning of an LSTM-based semantic parser toward correct interpretations in the task of Spoken Language Understanding for robots. The problem of the explainability of Artificial Intelligence (AI) systems, i.e. their ability to explain decisions to both experts and end users, has attracted growing attention in recent years, as it affects their credibility and trustworthiness. Trusting these systems is fundamental in the context of AI-based robotic companions interacting in natural language, as the users' acceptance of the robot also relies on its ability to explain the reasons behind its actions. Following similar approaches, we first use the values of the neural attention layers employed in the semantic parser as a clue to analyze and interpret the model's behavior and reveal the intrinsic bias induced by the training data. We then show how the integration of knowledge from external resources such as FrameNet can help minimize, or at least mitigate, such bias, and consequently ensure that the model provides the correct interpretations. Our preliminary but promising results suggest that (i) attention layers can improve the model's understandability; (ii) the integration of different knowledge bases can help overcome the limitations of machine learning models; and (iii) an approach combining the strengths of both knowledge engineering and machine learning can foster the development of more transparent, understandable intelligent systems.
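To make the idea of reading attention values as an interpretability clue concrete, the following is a minimal sketch (not the authors' actual architecture): a toy bidirectional LSTM tagger with a single additive attention layer whose per-token weights are returned alongside the prediction. All names, dimensions, and the classification setup are illustrative assumptions; the only point is that the softmax attention distribution over the input tokens is exposed and can be inspected to see where the model focuses for a given spoken command.

```python
# Illustrative sketch only: a toy attentive LSTM tagger whose attention
# weights over input tokens can be read out and inspected. This is NOT
# the paper's model; architecture and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_labels, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.attn_score = nn.Linear(2 * hidden_dim, 1)       # additive attention scorer
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        states, _ = self.lstm(self.embed(token_ids))          # (batch, seq_len, 2*hidden)
        scores = self.attn_score(states).squeeze(-1)          # (batch, seq_len)
        attn = F.softmax(scores, dim=-1)                      # attention over input tokens
        context = torch.bmm(attn.unsqueeze(1), states).squeeze(1)  # weighted sum of states
        logits = self.classifier(context)                     # e.g. frame / action label
        return logits, attn                                   # attn is the inspectable signal


if __name__ == "__main__":
    # Inspect which tokens the model attends to for one (random) command.
    model = AttentiveLSTMTagger(vocab_size=1000, num_labels=10)
    tokens = torch.randint(0, 1000, (1, 6))                   # stand-in for a 6-token command
    logits, attn = model(tokens)
    print(attn)  # consistently high weight on spurious tokens hints at dataset-induced bias
```

In the spirit of the abstract, such an attention read-out is only the diagnostic half: once the weights reveal that the parser is attending to spurious cues, an external resource like FrameNet can be used to constrain or correct the predicted interpretation.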