Case-based Abductive Natural Language Inference.

2021 
Existing accounts of explanation emphasise the role of prior experience in solving new problems. However, most contemporary models for multi-hop textual inference construct explanations by considering each test case in isolation. This paradigm is known to suffer from semantic drift, which causes the construction of spurious explanations that lead to wrong conclusions. In contrast, we investigate an abductive framework for explainable multi-hop inference that adopts the retrieve-reuse-revise paradigm widely studied in case-based reasoning. Specifically, we present a novel framework that addresses and explains unseen inference problems by retrieving and adapting prior natural language explanations from similar training examples. We empirically evaluate the case-based abductive framework on downstream commonsense and scientific reasoning tasks. Our experiments demonstrate that the proposed framework can be effectively integrated with sparse and dense pre-trained encoders or downstream transformers, achieving strong performance compared to existing explainable approaches. Moreover, we study the impact of the retrieve-reuse-revise paradigm on explainability and semantic drift, showing that it boosts the quality of the constructed explanations and thereby improves downstream inference performance.
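To make the retrieve-reuse-revise loop concrete, below is a minimal illustrative sketch in Python. Every name in it (Case, encode, retrieve, reuse, revise, explain) is a hypothetical stand-in rather than the paper's implementation: retrieval uses simple token overlap in place of the sparse or dense pre-trained encoders the paper evaluates, and the revise step is a crude relevance filter standing in for the adaptation mechanism that curbs semantic drift.

```python
# Illustrative sketch only: a toy retrieve-reuse-revise loop for
# case-based explanation construction. All names are hypothetical
# and do not reflect the authors' actual code or API.
from dataclasses import dataclass


@dataclass
class Case:
    hypothesis: str
    explanation: list[str]  # supporting facts forming the explanation


def encode(text: str) -> set[str]:
    # Stand-in sparse encoder: a bag-of-words token set.
    return set(text.lower().split())


def similarity(a: set[str], b: set[str]) -> float:
    # Jaccard overlap between two token sets.
    return len(a & b) / max(len(a | b), 1)


def retrieve(case_base: list[Case], query: str, k: int = 3) -> list[Case]:
    # Retrieve: find the k training cases most similar to the unseen hypothesis.
    q = encode(query)
    ranked = sorted(case_base, key=lambda c: -similarity(q, encode(c.hypothesis)))
    return ranked[:k]


def reuse(cases: list[Case]) -> list[str]:
    # Reuse: aggregate the explanatory facts of the retrieved cases,
    # preserving order and dropping duplicates.
    facts: list[str] = []
    for case in cases:
        for fact in case.explanation:
            if fact not in facts:
                facts.append(fact)
    return facts


def revise(facts: list[str], hypothesis: str) -> list[str]:
    # Revise: keep only facts that share vocabulary with the hypothesis,
    # a crude proxy for the adaptation step that limits semantic drift.
    q = encode(hypothesis)
    return [f for f in facts if similarity(q, encode(f)) > 0.0]


def explain(case_base: list[Case], hypothesis: str) -> list[str]:
    return revise(reuse(retrieve(case_base, hypothesis)), hypothesis)
```

For example, given a small case base of hypothesis-explanation pairs, explain(case_base, "metals conduct electricity") would return the facts reused from the most similar training cases, filtered by overlap with the new hypothesis.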