Interpretable Rationale Augmented Charge Prediction System

Xin Jiang, Hai Ye, Zhunchen Luo, WenHan Chao, Wenjia Ma


Abstract
This paper proposes a neural-based system to address the interpretability problem in text classification, with a focus on the charge prediction task. First, we use a deep reinforcement learning method to extract rationales, i.e., short, readable, and decisive snippets of the input text. A rationale-augmented classification model is then proposed to improve prediction accuracy. The extracted rationales naturally serve as an introspective explanation of the model's predictions, enhancing the transparency of the model. Experimental results demonstrate that our system extracts readable rationales that are highly consistent with manual annotation, while remaining comparable to an attention model in prediction accuracy.
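The rationale-extraction idea sketched in the abstract can be illustrated with a toy REINFORCE-style policy that learns a binary keep/drop mask over input words. This is a minimal sketch under stated assumptions, not the paper's actual model: the word list, the hand-coded "decisive" labels, and the reward (the real system would score rationales with a downstream classifier) are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input sentence; in the paper the input is a legal case text.
words = ["the", "defendant", "stole", "a", "car", "at", "night"]
w = rng.normal(size=len(words))          # policy parameters, one per word

def select_rationale(w):
    """Sample a binary mask z: z[i] = 1 keeps word i in the rationale."""
    p = 1.0 / (1.0 + np.exp(-w))         # Bernoulli keep-probabilities
    z = (rng.random(len(w)) < p).astype(float)
    return z, p

# Toy reward: +1 per "decisive" word kept, minus a sparsity penalty that
# encourages short rationales. (Assumption: a stand-in for the classifier
# likelihood used in the real system.)
decisive = np.array([0, 1, 1, 0, 1, 0, 0], dtype=float)

def reward(z):
    return float(z @ decisive) - 0.3 * z.sum()

# REINFORCE update: nudge w toward masks that earn higher reward,
# using the score-function gradient d/dw log Bernoulli(z; p) = z - p.
lr = 0.5
for _ in range(200):
    z, p = select_rationale(w)
    w += lr * reward(z) * (z - p)

p = 1.0 / (1.0 + np.exp(-w))
rationale = [t for t, keep in zip(words, p > 0.5) if keep]
```

After training, the keep-probabilities for the reward-bearing words should dominate those of the filler words, so the sampled rationale stays short and decisive, mirroring the sparsity/decisiveness trade-off the abstract describes.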
Anthology ID:
C18-2032
Volume:
Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations
Month:
August
Year:
2018
Address:
Santa Fe, New Mexico
Venue:
COLING
Publisher:
Association for Computational Linguistics
Note:
Pages:
146–151
URL:
https://www.aclweb.org/anthology/C18-2032
PDF:
http://aclanthology.lst.uni-saarland.de/C18-2032.pdf