Explainable Unsupervised Argument Similarity Rating with Abstract Meaning Representation and Conclusion Generation

2021 
When assessing the similarity of arguments, researchers typically use approaches that do not provide interpretable evidence or justifications for their ratings. Hence, the features that determine argument similarity remain elusive. We address this issue by introducing novel argument similarity metrics that aim at high performance and explainability. We show that Abstract Meaning Representation (AMR) graphs can be useful for representing arguments, and that novel AMR graph metrics can offer explanations for argument similarity ratings. We start from the hypothesis that similar premises often lead to similar conclusions, and extend an approach for AMR-based argument similarity rating by estimating, in addition, the similarity of conclusions that we automatically infer from the arguments used as premises. We show that AMR similarity metrics make argument similarity judgements more interpretable and may even support argument quality judgements. Our approach provides significant performance improvements over strong baselines in a fully unsupervised setting. Finally, we make first steps to address the problem of reference-less evaluation of argumentative conclusion generation.
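To illustrate the kind of graph-overlap scoring the abstract alludes to, the following is a minimal sketch (not the paper's exact metric): each AMR graph is reduced to a set of (source, relation, target) triples, and similarity is the F1 over shared triples. A full metric such as Smatch would additionally search over variable alignments; here node labels are assumed to be pre-aligned, and the toy triples are invented for illustration.

```python
def amr_triple_f1(triples_a, triples_b):
    """F1 overlap between two AMR graphs given as sets of
    (source, relation, target) triples with aligned node labels."""
    if not triples_a or not triples_b:
        return 0.0
    overlap = len(triples_a & triples_b)
    precision = overlap / len(triples_a)
    recall = overlap / len(triples_b)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy premises: "We should ban smoking" vs. "Smoking should be banned"
# (hypothetical triples, hand-written for the example)
g1 = {("ban-01", ":ARG1", "smoke-02"), ("recommend-01", ":ARG1", "ban-01")}
g2 = {("ban-01", ":ARG1", "smoke-02"), ("obligate-01", ":ARG2", "ban-01")}
print(round(amr_triple_f1(g1, g2), 2))  # → 0.5
```

Because the score is computed over explicit triples, the matched and unmatched triples themselves serve as the interpretable evidence for a given similarity rating.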