Henghui Zhu


2020

An Ensemble Approach for Automatic Structuring of Radiology Reports
Morteza Pourreza Shahri | Amir Tahmasebi | Bingyang Ye | Henghui Zhu | Javed Aslam | Timothy Ferris
Proceedings of the 3rd Clinical Natural Language Processing Workshop

Automatic structuring of electronic medical records is in high demand for clinical workflow solutions, to facilitate the extraction, storage, and querying of patient care information. However, developing a scalable solution is extremely challenging, especially for radiology reports, as most healthcare institutes use either no template or department/institute-specific templates. Moreover, reporting style varies from one radiologist to another, as sentences are written in a telegraphic format and do not follow general English grammar rules. In this work, we present an ensemble method that consolidates the predictions of three models, each capturing different attributes of the textual information, for automatic labeling of sentences with section labels. These three models are: 1) a Focus Sentence model, capturing the context of the target sentence; 2) a Surrounding Context model, capturing the neighboring context of the target sentence; and 3) a Formatting/Layout model, aimed at learning report formatting cues. We utilize bi-directional LSTMs, followed by sentence encoders, to acquire the context. Furthermore, we define several features that incorporate the structure of reports. We compare our proposed approach against multiple baselines and state-of-the-art approaches on a proprietary dataset as well as 100 manually annotated radiology notes from the MIMIC-III dataset, which we are making publicly available. Our proposed approach significantly outperforms the other approaches, achieving 97.1% accuracy.
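
A minimal sketch of the ensembling step, assuming each of the three models emits a per-sentence probability distribution over section labels (the label set, stand-in distributions, and uniform weighting here are illustrative assumptions, not the paper's implementation):

```python
# Toy sketch of the three-model ensemble (hypothetical, not the authors' code):
# each model emits a probability distribution over section labels for a
# sentence, and the ensemble averages them before picking the top label.
import numpy as np

SECTION_LABELS = ["history", "comparison", "findings", "impression"]  # illustrative set

def ensemble_predict(p_focus, p_context, p_layout, weights=(1.0, 1.0, 1.0)):
    """Combine the per-sentence label distributions of the three models."""
    probs = np.average([p_focus, p_context, p_layout], axis=0, weights=weights)
    return SECTION_LABELS[int(np.argmax(probs))]

# Stand-in outputs for the Focus Sentence, Surrounding Context, and
# Formatting/Layout models on one sentence.
p_focus = np.array([0.1, 0.1, 0.6, 0.2])
p_context = np.array([0.1, 0.1, 0.5, 0.3])
p_layout = np.array([0.2, 0.1, 0.2, 0.5])
print(ensemble_predict(p_focus, p_context, p_layout))  # -> "findings"
```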

Margin-aware Unsupervised Domain Adaptation for Cross-lingual Text Labeling
Dejiao Zhang | Ramesh Nallapati | Henghui Zhu | Feng Nan | Cicero Nogueira dos Santos | Kathleen McKeown | Bing Xiang
Findings of the Association for Computational Linguistics: EMNLP 2020

Unsupervised domain adaptation addresses the problem of leveraging labeled data in a source domain to learn a well-performing model in a target domain where labels are unavailable. In this paper, we improve upon a recent theoretical work (Zhang et al., 2019b) and adopt the Margin Disparity Discrepancy (MDD) unsupervised domain adaptation algorithm to solve cross-lingual text labeling problems. Experiments on cross-lingual document classification and NER demonstrate that the proposed domain adaptation approach advances the state-of-the-art results by a large margin. Specifically, we improve MDD by efficiently optimizing the margin loss on the source domain via Virtual Adversarial Training (VAT). This bridges the gap between the theory and the loss function used in the original work (Zhang et al., 2019b), and thereby significantly boosts performance. Our numerical results also indicate that VAT can remarkably improve the generalization performance on both domains for various domain adaptation approaches.
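
A minimal sketch of a VAT regularizer, assuming a `model` callable that maps already-embedded inputs to class logits (a toy stand-in for how VAT smooths predictions around each input; it does not reproduce the paper's MDD margin-loss optimization):

```python
# Toy VAT regularizer (illustrative assumption: `model` maps already-embedded
# inputs of shape (batch, ..., dim) to class logits). VAT perturbs the input
# in the adversarial direction and penalizes the change in predictions.
import torch
import torch.nn.functional as F

def vat_loss(model, embeddings, eps=1.0, xi=1e-6):
    with torch.no_grad():
        p_clean = F.softmax(model(embeddings), dim=-1)  # predictions on clean input
    # Start from a random direction, refine with one power-iteration step.
    d = torch.randn_like(embeddings)
    d = xi * F.normalize(d.flatten(1), dim=1).view_as(embeddings)
    d.requires_grad_()
    kl = F.kl_div(F.log_softmax(model(embeddings + d), dim=-1), p_clean,
                  reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]
    r_adv = eps * F.normalize(grad.flatten(1), dim=1).view_as(embeddings)
    # Final smoothness penalty against the adversarial perturbation.
    logp_adv = F.log_softmax(model(embeddings + r_adv.detach()), dim=-1)
    return F.kl_div(logp_adv, p_clean, reduction="batchmean")
```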

Enhancing Clinical BERT Embedding using a Biomedical Knowledge Base
Boran Hao | Henghui Zhu | Ioannis Paschalidis
Proceedings of the 28th International Conference on Computational Linguistics

Domain knowledge is important for building Natural Language Processing (NLP) systems in low-resource settings, such as the clinical domain. In this paper, we introduce a novel joint training method that adds knowledge base information from the Unified Medical Language System (UMLS) to language model pre-training on a clinical domain corpus. We show that on three different downstream clinical NLP tasks, our pre-trained language model outperforms the corresponding model without knowledge base information as well as other state-of-the-art models. Specifically, on a natural language inference task applied to clinical texts, our knowledge-base pre-training approach improves accuracy by up to 1.7%, while on clinical named entity recognition tasks the F1-score improves by up to 1.0%. The pre-trained models are available at https://github.com/noc-lab/clinical-kb-bert.
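
A hypothetical sketch of the joint-loss shape, combining a masked-LM loss with an auxiliary head that classifies the relation between two UMLS concept encodings (the head, weighting, and inputs are illustrative assumptions, not the paper's exact objectives):

```python
# Hypothetical sketch of joint training: the usual masked-LM loss plus an
# auxiliary head scoring the relation between two UMLS concept encodings.
# The head, weighting, and inputs are illustrative assumptions.
import torch
import torch.nn as nn

class JointKBLoss(nn.Module):
    def __init__(self, hidden_size, num_relations, kb_weight=0.5):
        super().__init__()
        self.relation_head = nn.Linear(2 * hidden_size, num_relations)
        self.kb_weight = kb_weight
        self.xent = nn.CrossEntropyLoss()

    def forward(self, mlm_loss, head_repr, tail_repr, relation_labels):
        # head_repr/tail_repr: (batch, hidden) encodings of two concept mentions.
        logits = self.relation_head(torch.cat([head_repr, tail_repr], dim=-1))
        return mlm_loss + self.kb_weight * self.xent(logits, relation_labels)
```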

End-to-End Synthetic Data Generation for Domain Adaptation of Question Answering Systems
Siamak Shakeri | Cicero Nogueira dos Santos | Henghui Zhu | Patrick Ng | Feng Nan | Zhiguo Wang | Ramesh Nallapati | Bing Xiang
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

We propose an end-to-end approach for synthetic QA data generation. Our model comprises a single transformer-based encoder-decoder network that is trained end-to-end to generate both answers and questions. In a nutshell, we feed a passage to the encoder and ask the decoder to generate a question and an answer token by token. The likelihood produced during generation is used as a filtering score, which avoids the need for a separate filtering model. Our generator is trained by fine-tuning a pretrained LM using maximum likelihood estimation. The experimental results indicate significant improvements in the domain adaptation of QA models, outperforming current state-of-the-art methods.
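
An illustrative sketch of the generate-and-score step using the Hugging Face transformers API; `facebook/bart-base` is a stand-in checkpoint that would need the paper's QA fine-tuning before its decodes resembled question-answer pairs:

```python
# Illustrative generate-and-score sketch with Hugging Face transformers;
# "facebook/bart-base" is a stand-in checkpoint, not the paper's model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

def generate_qa(passage, num_beams=4):
    inputs = tokenizer(passage, return_tensors="pt", truncation=True)
    out = model.generate(**inputs, num_beams=num_beams, max_new_tokens=64,
                         output_scores=True, return_dict_in_generate=True)
    text = tokenizer.decode(out.sequences[0], skip_special_tokens=True)
    # The beam's average log-likelihood doubles as the filtering score,
    # so no separate filtering model is needed.
    return text, out.sequences_scores[0].item()
```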

2019

Neural Token Representations and Negation and Speculation Scope Detection in Biomedical and General Domain Text
Elena Sergeeva | Henghui Zhu | Amir Tahmasebi | Peter Szolovits
Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)

Since the introduction of context-aware token representation techniques such as Embeddings from Language Models (ELMo) and Bidirectional Encoder Representations from Transformers (BERT), there have been numerous reports of improved performance on a variety of natural language tasks. Nevertheless, the degree to which the resulting context-aware representations encode information about the morpho-syntactic properties of the word/token in a sentence remains unclear. In this paper, we investigate the application and impact of state-of-the-art neural token representations for automatic cue-conditional speculation and negation scope detection, coupled with independently computed morpho-syntactic information. Through this work, we establish a new state of the art for the BioScope and NegPar corpora. More importantly, we provide a thorough analysis of the interactions between the neural representations and additional features, and of cue representations for conditioning; discuss model behavior on the different datasets; and address annotation-induced biases in the learned representations.
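
A minimal sketch of cue-conditional scope detection framed as per-token tagging, with contextual token vectors concatenated with a binary cue-indicator feature (a hypothetical layout under those assumptions, not the paper's exact model):

```python
# Toy cue-conditional scope tagger (hypothetical layout, not the paper's
# model): contextual token vectors (e.g., from ELMo/BERT) are concatenated
# with a binary cue indicator and classified in- vs. out-of-scope per token.
import torch
import torch.nn as nn

class ScopeTagger(nn.Module):
    def __init__(self, repr_dim, hidden_dim=128):
        super().__init__()
        # +1 input feature for "is this token the negation/speculation cue?"
        self.lstm = nn.LSTM(repr_dim + 1, hidden_dim,
                            bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, 2)  # in-scope / out-of-scope

    def forward(self, token_reprs, cue_mask):
        # token_reprs: (batch, seq, repr_dim); cue_mask: (batch, seq) of 0/1.
        x = torch.cat([token_reprs, cue_mask.unsqueeze(-1).float()], dim=-1)
        hidden, _ = self.lstm(x)
        return self.classifier(hidden)  # per-token scope logits
```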