Automatic rubric-based content grading for clinical notes
Wen-wai Yim | Ashley Mills | Harold Chun | Teresa Hashiguchi | Justin Yew | Bryan Lu
Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)
Clinical notes provide documentation critical to medical care, as well as to billing and legal needs. Too little information degrades quality of care; too much information impedes care. Training for clinical note documentation varies widely across institutions and programs. In this work, we introduce the problem of automatic evaluation of note creation through rubric-based content grading, which has the potential to accelerate and standardize clinical note documentation training. To this end, we describe our corpus creation methods and provide simple feature-based and neural network baseline systems. We further report tagset and scaling experiments to inform readers of plausible expected performance. Our baselines show promising results, with content-point accuracy of 0.86 and a kappa value of 0.71 on the test set.
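The kappa value reported above is an agreement statistic that corrects observed accuracy for chance agreement. As a minimal illustration (not the authors' evaluation code), the following sketch computes Cohen's kappa from two label sequences, such as predicted versus gold rubric content points:

```python
from collections import Counter

def cohen_kappa(gold, pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(gold)
    # Observed agreement p_o: fraction of items where the labels match.
    p_o = sum(g == p for g, p in zip(gold, pred)) / n
    # Expected chance agreement p_e: sum over labels of the product
    # of each rater's marginal label frequencies.
    gold_freq = Counter(gold)
    pred_freq = Counter(pred)
    p_e = sum(gold_freq[k] / n * pred_freq[k] / n for k in gold_freq)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels: 1 = content point present, 0 = absent.
print(round(cohen_kappa([0, 0, 1, 1], [0, 1, 1, 1]), 3))
```

Here accuracy is 0.75 but kappa is only 0.5, showing why kappa is the more conservative of the two reported figures.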