Accurate and Explainable Recommendation via Review Rationalization
- Sicheng Pan,
- Dongsheng Li,
- Hansu Gu,
- Tun Lu,
- Xufang Luo,
- Ning Gu
TheWebConf 2022 | Published by ACM
Auxiliary information, e.g., reviews, is widely adopted to improve collaborative filtering (CF) algorithms, e.g., to boost accuracy and provide explanations. However, most existing methods cannot distinguish co-appearance from causality when learning from reviews, and may therefore rely on spurious correlations rather than causal relations in recommendation, leading to poor generalization performance and unconvincing explanations. In this paper, we propose a Recommendation via Review Rationalization (R3) method consisting of 1) a rationale generator that extracts rationales from reviews to alleviate the effects of spurious correlations; 2) a rationale predictor that predicts user ratings on items from rationales only; and 3) a correlation predictor, built on both rationales and correlational features, to ensure conditional independence between spurious correlations and rating predictions given causal rationales. Extensive experiments on real-world datasets show that the proposed method achieves better generalization performance than state-of-the-art CF methods and provides causal-aware explanations even when the test data distribution changes.
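To make the three-component design more concrete, the following is a minimal, hypothetical PyTorch-style sketch of an R3-like setup: a rationale generator that soft-masks review tokens, a rationale predictor that rates items from the masked tokens only, and a correlation predictor that also sees the non-rationale residual. The class names, dimensions, and the penalty term approximating the conditional-independence constraint are illustrative assumptions, not the authors' actual architecture or training objective.

```python
# Hypothetical sketch of an R3-style pipeline; component names, shapes,
# and the loss are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class RationaleGenerator(nn.Module):
    """Scores review tokens and produces a soft rationale mask."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, review_emb):                        # (batch, tokens, dim)
        mask = torch.sigmoid(self.scorer(review_emb))     # soft mask in [0, 1]
        return review_emb * mask, mask                    # masked tokens = rationale


class RatingPredictor(nn.Module):
    """Predicts a scalar rating from pooled token representations."""
    def __init__(self, dim):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, emb):                               # (batch, tokens, dim)
        return self.head(emb.mean(dim=1)).squeeze(-1)     # (batch,)


dim = 64
generator = RationaleGenerator(dim)
rationale_predictor = RatingPredictor(dim)     # sees rationales only
correlation_predictor = RatingPredictor(dim)   # sees rationales + correlational residual

review_emb = torch.randn(8, 20, dim)           # toy batch of review token embeddings
ratings = torch.rand(8) * 5

rationale, mask = generator(review_emb)
residual = review_emb * (1 - mask)             # non-rationale (correlational) part

pred_rat = rationale_predictor(rationale)
pred_cor = correlation_predictor(rationale + residual)

# Illustrative objective: fit ratings from rationales, and penalize any extra
# predictive gain obtained from correlational features, roughly approximating
# the conditional-independence constraint described in the abstract.
mse = nn.functional.mse_loss
loss_rat = mse(pred_rat, ratings)
loss_cor = mse(pred_cor, ratings)
loss = loss_rat + torch.relu(loss_rat - loss_cor)
loss.backward()
```

The intuition behind the penalty term in this sketch: if adding correlational features lowers the rating-prediction error beyond what the rationales alone achieve, the model is still exploiting spurious signals, so that gain is penalized.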