
The Problem of Coherence in Natural Language Explanations of Recommendations

Publication at Faculty of Mathematics and Physics | 2023

Abstract

Providing natural language explanations for recommendations is particularly useful from the perspective of a non-expert user. Although several methods for generating such explanations have recently been proposed, we argue that an important aspect of explanation quality has been overlooked in their experimental evaluation. Specifically, the coherence between the generated text and the predicted rating, which is a necessary condition for an explanation to be useful, is not properly captured by currently used evaluation measures. In this paper, we highlight the issue of coherence between explanations and predictions by:

1) presenting results from a manual verification of explanations generated by one of the state-of-the-art approaches;

2) proposing a method for automatic coherence evaluation (illustrated by the sketch after this list);

3) introducing a new transformer-based method that aims to produce more coherent explanations than the state-of-the-art approaches; and

4) performing an experimental evaluation which demonstrates that this method significantly improves explanation coherence without affecting the other aspects of recommendation performance.
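To make the notion of text/rating coherence concrete, the following is a minimal sketch of one way such a check could be automated. This is not the evaluation method proposed in the paper; the sentiment pipeline, the example explanations, and the ratings are all illustrative assumptions. The idea is simply that a coherent explainer should produce positive-sounding text for high predicted ratings and negative-sounding text for low ones.

```python
# Illustrative sketch only: NOT the coherence measure from the paper.
# It scores the sentiment of each generated explanation and checks how
# well that sentiment tracks the rating predicted by the recommender.
from scipy.stats import pearsonr
from transformers import pipeline  # HuggingFace transformers

# Hypothetical example data: explanations generated by a recommender,
# paired with the ratings (on a 1-5 scale) it predicted alongside them.
explanations = [
    "great sound quality and very comfortable to wear",
    "the battery died after two days, very disappointing",
    "does the job, though the build feels a bit cheap",
]
predicted_ratings = [4.5, 1.5, 3.0]

# Off-the-shelf binary sentiment classifier (labels POSITIVE/NEGATIVE).
sentiment = pipeline("sentiment-analysis")

def sentiment_score(text: str) -> float:
    """Map a sentiment prediction to a signed score in [-1, 1]."""
    result = sentiment(text)[0]
    sign = 1.0 if result["label"] == "POSITIVE" else -1.0
    return sign * result["score"]

scores = [sentiment_score(text) for text in explanations]

# If explanations are coherent with predictions, explanation sentiment
# should correlate strongly and positively with the predicted rating.
correlation, _ = pearsonr(scores, predicted_ratings)
print(f"sentiment/rating correlation: {correlation:.2f}")
```

A correlation near 1 would suggest the generated text agrees with the predicted rating, while a low or negative value would flag incoherent explanations; an aggregate evaluation could, for instance, report this statistic over a held-out test set.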