Chengyu Wang


2020

BiRRE: Learning Bidirectional Residual Relation Embeddings for Supervised Hypernymy Detection
Chengyu Wang | Xiaofeng He
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

The hypernymy detection task has been addressed under various frameworks. Previously, the design of unsupervised hypernymy scores was studied extensively. In contrast, supervised classifiers, especially distributional models, leverage the global contexts of terms to make predictions, but are more likely to suffer from “lexical memorization”. In this work, we revisit supervised distributional models for hypernymy detection. Rather than taking the embeddings of two terms as classification inputs, we introduce a representation learning framework named Bidirectional Residual Relation Embeddings (BiRRE). In this model, a term pair is represented by a BiRRE vector, used as the feature for hypernymy classification, which models how likely one term can be mapped to the other in the embedding space via hypernymy relations. A Latent Projection Model with Negative Regularization (LPMNR) is proposed to simulate how hypernyms and hyponyms are generated by neural language models, and to generate BiRRE vectors based on the bidirectional residuals of projections. Experiments verify that BiRRE outperforms strong baselines over various evaluation frameworks.
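A minimal sketch of the bidirectional-residual idea, assuming pretrained word vectors are available; the random projection matrices and the downstream classifier are illustrative stand-ins, not the paper's LPMNR model:

```python
import numpy as np

d = 50
rng = np.random.default_rng(0)

# Forward projection F maps a hyponym vector toward its hypernym;
# backward projection B maps a hypernym back toward its hyponym.
# (In the paper these are learned; random values keep the sketch short.)
F = rng.normal(scale=0.1, size=(d, d))
B = rng.normal(scale=0.1, size=(d, d))

def birre_vector(x, y):
    """Represent the pair (x, y) by its bidirectional projection residuals."""
    r_fwd = y - F @ x   # residual of projecting hyponym -> hypernym
    r_bwd = x - B @ y   # residual of projecting hypernym -> hyponym
    return np.concatenate([r_fwd, r_bwd])  # 2d-dimensional pair feature

x = rng.normal(size=d)          # e.g. the embedding of "dog"
y = rng.normal(size=d)          # e.g. the embedding of "animal"
features = birre_vector(x, y)   # fed to a binary hypernymy classifier
print(features.shape)           # (100,)
```

If the pair truly stands in a hypernymy relation, a well-trained forward projection leaves a small residual, so the residual pattern itself is informative for the classifier.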

Meta Fine-Tuning Neural Language Models for Multi-Domain Text Mining
Chengyu Wang | Minghui Qiu | Jun Huang | Xiaofeng He
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Pre-trained neural language models bring significant improvements to various NLP tasks when fine-tuned on task-specific training sets. During fine-tuning, the parameters are initialized directly from the pre-trained model, which ignores how the learning processes of similar NLP tasks in different domains are correlated and mutually reinforcing. In this paper, we propose an effective learning procedure named Meta Fine-Tuning (MFT), which serves as a meta-learner that solves a group of similar NLP tasks for neural language models. Instead of simply multi-task training over all the datasets, MFT learns only from typical instances of the various domains to acquire highly transferable knowledge. It further encourages the language model to encode domain-invariant representations by optimizing a series of novel domain corruption loss functions. After MFT, the model can be fine-tuned for each domain with a better parameter initialization and higher generalization ability. We implement MFT upon BERT to solve several multi-domain text mining tasks. Experimental results confirm the effectiveness of MFT and its usefulness for few-shot learning.
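A hedged sketch of the training signal: a shared encoder receives the task loss plus an auxiliary loss that discourages domain-identifiable representations. The uniform-target KL term below is an illustrative stand-in for the paper's domain corruption losses, and all module names are assumptions:

```python
import torch
import torch.nn.functional as F

def mft_step(encoder, task_head, domain_head, batch, lam=0.1):
    """One combined loss over a batch pooled from several domains."""
    x, task_labels, _domain_ids = batch  # domain ids drive the paper's
                                         # losses; unused in this sketch
    h = encoder(x)                       # shared, hopefully domain-invariant
    task_loss = F.cross_entropy(task_head(h), task_labels)
    # Push the domain classifier toward a uniform posterior, so h
    # carries as little domain identity as possible.
    dom_logits = domain_head(h)
    uniform = torch.full_like(dom_logits, 1.0 / dom_logits.size(-1))
    domain_loss = F.kl_div(dom_logits.log_softmax(dim=-1), uniform,
                           reduction="batchmean")
    return task_loss + lam * domain_loss
```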

2019

SphereRE: Distinguishing Lexical Relations with Hyperspherical Relation Embeddings
Chengyu Wang | Xiaofeng He | Aoying Zhou
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Lexical relations describe how the meanings of terms relate to each other. Typical examples include hypernymy, synonymy, meronymy, etc. Automatic distinction of lexical relations is vital for NLP applications, and is also challenging due to the lack of contextual signals to discriminate between such relations. In this work, we present a neural representation learning model to distinguish lexical relations among term pairs based on Hyperspherical Relation Embeddings (SphereRE). Rather than learning embeddings for individual terms, the model learns representations of relation triples by mapping them to a hyperspherical embedding space, where relation triples of different lexical relations are well separated. Experiments over several benchmarks confirm that SphereRE outperforms state-of-the-art methods.
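A toy illustration of the hyperspherical intuition, assuming pretrained term embeddings: each pair is mapped to the unit sphere via its normalized offset and classified by cosine similarity to per-relation prototypes. This is a stand-in for the paper's learned relation-triple representations:

```python
import numpy as np

def sphere_embed(x, y):
    """Map a term pair to the unit hypersphere via its normalized offset."""
    r = y - x
    return r / np.linalg.norm(r)

def classify(x, y, prototypes):
    """prototypes: dict mapping a relation name to a unit vector."""
    r = sphere_embed(x, y)
    # On the unit sphere, the dot product is exactly the cosine similarity.
    return max(prototypes, key=lambda name: r @ prototypes[name])
```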

2018

Exploratory Neural Relation Classification for Domain Knowledge Acquisition
Yan Fan | Chengyu Wang | Xiaofeng He
Proceedings of the 27th International Conference on Computational Linguistics

The state-of-the-art methods for relation classification are primarily based on deep neural networks. This kind of supervised learning suffers not only from limited training data, but also from the large number of low-frequency relations in specific domains. In this paper, we propose the task of exploratory relation classification for domain knowledge harvesting. The goal is to learn a classifier on pre-defined relations and discover new relations expressed in texts. A dynamically structured neural network is introduced to classify entity pairs over a continuously expanding relation set. We further propose a similarity-sensitive Chinese restaurant process to discover new relations. Experiments conducted on a large corpus show the effectiveness of our neural network, and new relations are discovered with high precision and recall.
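A sketch of a similarity-sensitive Chinese restaurant process for deciding whether an entity-pair representation joins an existing relation cluster or opens a new one; the weighting scheme is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

def ss_crp_assign(r, clusters, alpha=1.0, rng=None):
    """r: relation vector; clusters: list of (centroid, member_count)."""
    rng = rng or np.random.default_rng()
    weights = []
    for centroid, count in clusters:
        sim = (r @ centroid) / (np.linalg.norm(r) * np.linalg.norm(centroid))
        weights.append(count * max(sim, 0.0))  # favor popular AND similar tables
    weights.append(alpha)                      # chance of opening a new "table"
    p = np.array(weights) / sum(weights)
    k = rng.choice(len(weights), p=p)
    return k  # k == len(clusters) signals a newly discovered relation
```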

2017

Transductive Non-linear Learning for Chinese Hypernym Prediction
Chengyu Wang | Junchi Yan | Aoying Zhou | Xiaofeng He
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Finding the correct hypernyms for entities is essential for taxonomy learning, fine-grained entity categorization, query understanding, etc. Due to the flexibility of the Chinese language, it is challenging to identify hypernyms in Chinese accurately. Rather than extracting hypernyms from texts, in this paper, we present a transductive learning approach to establish mappings from entities to hypernyms in the embedding space directly. It combines linear and non-linear embedding projection models, with the capacity of encoding arbitrary language-specific rules. Experiments on real-world datasets illustrate that our approach outperforms previous methods for Chinese hypernym prediction.
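A minimal sketch of blending a linear and a non-linear projection from entity embeddings to hypernym embeddings; the architecture, the learned gate, and all names are illustrative assumptions rather than the paper's exact model:

```python
import torch
import torch.nn as nn

class HybridProjection(nn.Module):
    """Predict a hypernym embedding as a blend of two projections."""
    def __init__(self, d, hidden=128):
        super().__init__()
        self.linear = nn.Linear(d, d, bias=False)    # linear map: y ~ Mx
        self.mlp = nn.Sequential(nn.Linear(d, hidden), nn.Tanh(),
                                 nn.Linear(hidden, d))
        self.gate = nn.Parameter(torch.tensor(0.0))  # learned blend weight

    def forward(self, x):
        g = torch.sigmoid(self.gate)
        return g * self.linear(x) + (1 - g) * self.mlp(x)

# Training would minimise ||model(x) - y||^2 over seed (entity, hypernym)
# pairs; at test time, the embedding nearest to model(x) is the prediction.
```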

A Short Survey on Taxonomy Learning from Text Corpora: Issues, Resources and Recent Advances
Chengyu Wang | Xiaofeng He | Aoying Zhou
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

A taxonomy is a semantic hierarchy consisting of concepts linked by is-a relations. While a large number of taxonomies have been constructed from human-compiled resources (e.g., Wikipedia), learning taxonomies from text corpora has received growing interest and is essential for long-tailed and domain-specific knowledge acquisition. In this paper, we overview recent advances in taxonomy construction from free texts, reorganizing the relevant subtasks into a complete framework. We also overview resources for evaluation and discuss challenges for future research.

Learning Fine-grained Relations from Chinese User Generated Categories
Chengyu Wang | Yan Fan | Xiaofeng He | Aoying Zhou
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

User generated categories (UGCs) are short texts that reflect how people describe and organize entities, expressing rich semantic relations implicitly. While most methods for UGC relation extraction are based on pattern matching and target English text, learning relations from Chinese UGCs poses different challenges due to the flexibility of expression. In this paper, we present a weakly supervised learning framework to harvest relations from Chinese UGCs. We identify is-a relations via word embedding based projection and inference, and extract non-taxonomic relations and their category patterns via graph mining. We conduct experiments on Chinese Wikipedia and achieve high accuracy, outperforming state-of-the-art methods.
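A toy sketch of the graph-mining idea: connect category patterns (e.g., "cities of X") to the entity pairs they cover in a bipartite graph, then propagate confidence from a few seed pairs to rank patterns. This bootstrapping scheme is purely illustrative; the paper's algorithm is more elaborate:

```python
from collections import defaultdict

def rank_patterns(pattern_to_pairs, seed_pairs, rounds=3):
    """pattern_to_pairs: dict pattern -> list of (head, tail) entity pairs."""
    pat_score = {p: 0.0 for p in pattern_to_pairs}
    pair_score = defaultdict(float, {pr: 1.0 for pr in seed_pairs})
    for _ in range(rounds):
        # Patterns inherit the average confidence of the pairs they cover.
        for p, pairs in pattern_to_pairs.items():
            pat_score[p] = sum(pair_score[pr] for pr in pairs) / max(len(pairs), 1)
        # Pairs inherit the confidence of the best pattern covering them.
        for p, pairs in pattern_to_pairs.items():
            for pr in pairs:
                pair_score[pr] = max(pair_score[pr], pat_score[p])
    return sorted(pat_score, key=pat_score.get, reverse=True)
```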

2016

Chinese Hypernym-Hyponym Extraction from User Generated Categories
Chengyu Wang | Xiaofeng He
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Hypernym-hyponym (“is-a”) relations are key components in taxonomies, object hierarchies and knowledge graphs. While there is abundant research on is-a relation extraction in English, it remains challenging to accurately identify such relations from Chinese knowledge sources due to the flexibility of language expression. In this paper, we introduce a weakly supervised framework to extract Chinese is-a relations from user generated categories. It employs piecewise linear projection models trained on a Chinese taxonomy, together with an iterative learning algorithm that updates the models incrementally. A pattern-based relation selection method is proposed to prevent “semantic drift” in the learning process using bi-criteria optimization. Experimental results illustrate that the proposed approach outperforms state-of-the-art methods.
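A minimal sketch of a piecewise linear projection: hyponyms are clustered in embedding space and each cluster gets its own projection matrix, so is-a mappings can differ by semantic region. The clustering and least-squares solver are illustrative choices, not the paper's training procedure:

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_piecewise(X, Y, k=4):
    """X: hyponym vectors (n, d); Y: matching hypernym vectors (n, d)."""
    km = KMeans(n_clusters=k, n_init=10).fit(X)
    mats = []
    for c in range(k):
        idx = km.labels_ == c
        # Per-cluster least squares: M_c minimising ||X_c M_c - Y_c||_F
        M, *_ = np.linalg.lstsq(X[idx], Y[idx], rcond=None)
        mats.append(M)
    return km, mats

def project(x, km, mats):
    """Project a single hyponym vector with its cluster's matrix."""
    c = km.predict(x[None, :])[0]
    return x @ mats[c]
```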