Christopher Potts


2020

Modeling Subjective Assessments of Guilt in Newspaper Crime Narratives
Elisa Kreiss | Zijian Wang | Christopher Potts
Proceedings of the 24th Conference on Computational Natural Language Learning

Crime reporting is a prevalent form of journalism with the power to shape public perceptions and social policies. How does the language of these reports act on readers? We seek to address this question with the SuspectGuilt Corpus of annotated crime stories from English-language newspapers in the U.S. For SuspectGuilt, annotators read short crime articles and provided text-level ratings concerning the guilt of the main suspect as well as span-level annotations indicating which parts of the story they felt most influenced their ratings. SuspectGuilt thus provides a rich picture of how linguistic choices affect subjective guilt judgments. We use SuspectGuilt to train and assess predictive models, which validate the usefulness of the corpus, and we show that these models benefit from genre pretraining and joint supervision from the text-level ratings and span-level annotations. Such models might be used as tools for understanding the societal effects of crime reporting.

Pragmatic Issue-Sensitive Image Captioning
Allen Nie | Reuben Cohn-Gordon | Christopher Potts
Findings of the Association for Computational Linguistics: EMNLP 2020

Image captioning systems need to produce texts that are not only true but also relevant in that they are properly aligned with the current issues. For instance, in a newspaper article about a sports event, a caption that not only identifies the player in a picture but also comments on their ethnicity could create unwanted reader reactions. To address this, we propose Issue-Sensitive Image Captioning (ISIC). In ISIC, the captioner is given a target image and an issue, which is a set of images partitioned in a way that specifies what information is relevant. For the sports article, we could construct a partition that places images into equivalence classes based on player position. To model this task, we use an extension of the Rational Speech Acts model. Our extension is built on top of state-of-the-art pretrained neural image captioners and explicitly uses image partitions to control caption generation. In both automatic and human evaluations, we show that these models generate captions that are descriptive and issue-sensitive. Finally, we show how ISIC can complement and enrich the related task of Visual Question Answering.
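
A minimal sketch of the issue-sensitive re-ranking idea in this abstract (not the authors' implementation; the array shapes and the simple renormalization are assumptions):

    import numpy as np

    def issue_sensitive_scores(caption_probs, partition, target_idx):
        # caption_probs: (num_images, num_captions) probabilities from the base captioner
        # partition: cell id per image, encoding the issue
        # Returns re-scored caption probabilities for the target image that favor
        # captions identifying the target's cell rather than the exact image.
        listener = caption_probs / caption_probs.sum(axis=0, keepdims=True)
        in_cell = np.array([c == partition[target_idx] for c in partition])
        cell_mass = listener[in_cell].sum(axis=0)        # listener mass on the target's cell
        scores = caption_probs[target_idx] * cell_mass   # pragmatic speaker reweighting
        return scores / scores.sum()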

Data and Representation for Turkish Natural Language Inference
Emrah Budur | Rıza Özçelik | Tunga Güngör | Christopher Potts
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Large annotated datasets in NLP are overwhelmingly in English. This is an obstacle to progress in other languages. Unfortunately, obtaining new annotated resources for each task in each language would be prohibitively expensive. At the same time, commercial machine translation systems are now robust. Can we leverage these systems to translate English-language datasets automatically? In this paper, we offer a positive response for natural language inference (NLI) in Turkish. We translated two large English NLI datasets into Turkish and had a team of experts validate their translation quality and fidelity to the original labels. Using these datasets, we address core issues of representation for Turkish NLI. We find that in-language embeddings are essential and that morphological parsing can be avoided where the training set is large. Finally, we show that models trained on our machine-translated datasets are successful on human-translated evaluation sets. We share all code, models, and data publicly.

Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation
Atticus Geiger | Kyle Richardson | Christopher Potts
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP

We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic generalization tasks, and the structural evaluation methods of (3) probes and (4) interventions. To facilitate this holistic evaluation, we present Monotonicity NLI (MoNLI), a new naturalistic dataset focused on lexical entailment and negation. In our behavioral evaluations, we find that models trained on general-purpose NLI datasets fail systematically on MoNLI examples containing negation, but that MoNLI fine-tuning addresses this failure. In our structural evaluations, we look for evidence that our top-performing BERT-based model has learned to implement the monotonicity algorithm behind MoNLI. Probes yield evidence consistent with this conclusion, and our intervention experiments bolster this, showing that the causal dynamics of the model mirror the causal dynamics of this algorithm on subsets of MoNLI. This suggests that the BERT model at least partially embeds a theory of lexical entailment and negation at an algorithmic level.

Communication-based Evaluation for Natural Language Generation
Benjamin Newman | Reuben Cohn-Gordon | Christopher Potts
Proceedings of the Society for Computation in Linguistics 2020

2019

TalkDown: A Corpus for Condescension Detection in Context
Zijian Wang | Christopher Potts
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Condescending language use is caustic; it can bring dialogues to an end and bifurcate communities. Thus, systems for condescension detection could have a large positive impact. A challenge here is that condescension is often impossible to detect from isolated utterances, as it depends on the discourse and social context. To address this, we present TalkDown, a new labeled dataset of condescending linguistic acts in context. We show that extending a language-only model with representations of the discourse improves performance, and we motivate techniques for dealing with the low rates of condescension overall. We also use our model to estimate condescension rates in various online communities and relate these differences to differing community norms.

Posing Fair Generalization Tasks for Natural Language Inference
Atticus Geiger | Ignacio Cases | Lauri Karttunen | Christopher Potts
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Deep learning models for semantics are generally evaluated using naturalistic corpora. Adversarial testing methods, in which models are evaluated on new examples with known semantic properties, have begun to reveal that good performance at these naturalistic tasks can hide serious shortcomings. However, we should insist that these evaluations be fair – that the models are given data sufficient to support the requisite kinds of generalization. In this paper, we define and motivate a formal notion of fairness in this sense. We then apply these ideas to natural language inference by constructing very challenging but provably fair artificial datasets and showing that standard neural models fail to generalize in the required ways; only task-specific models that jointly compose the premise and hypothesis are able to achieve high performance, and even these models do not solve the task perfectly.

An Incremental Iterated Response Model of Pragmatics
Reuben Cohn-Gordon | Noah Goodman | Christopher Potts
Proceedings of the Society for Computation in Linguistics (SCiL) 2019

Effective Feature Representation for Clinical Text Concept Extraction
Yifeng Tao | Bruno Godefroy | Guillaume Genthial | Christopher Potts
Proceedings of the 2nd Clinical Natural Language Processing Workshop

Crucial information about the practice of healthcare is recorded only in free-form text, which creates an enormous opportunity for high-impact NLP. However, annotated healthcare datasets tend to be small and expensive to obtain, which raises the question of how to make maximally efficient use of the available data. To this end, we develop an LSTM-CRF model for combining unsupervised word representations and hand-built feature representations derived from publicly available healthcare ontologies. We show that this combined model yields superior performance on five datasets of diverse kinds of healthcare text (clinical, social, scientific, commercial). Each involves the labeling of complex, multi-word spans that pick out different healthcare concepts. We also introduce a new labeled dataset for identifying the treatment relations between drugs and diseases.
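
As a rough illustration of the combined representation described here (a sketch; the specific ontologies and feature set are assumptions), each token's input to the LSTM-CRF tagger could concatenate an unsupervised word vector with hand-built ontology indicators:

    import numpy as np

    def token_features(token, word_vectors, ontology_lexicons, dim=300):
        # word_vectors: dict mapping word -> pretrained embedding
        # ontology_lexicons: dict mapping ontology name -> set of healthcare terms
        vec = word_vectors.get(token.lower(), np.zeros(dim))
        indicators = np.array([float(token.lower() in terms)
                               for _, terms in sorted(ontology_lexicons.items())])
        return np.concatenate([vec, indicators])  # fed to the LSTM-CRF sequence labeler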

Recursive Routing Networks: Learning to Compose Modules for Language Understanding
Ignacio Cases | Clemens Rosenbaum | Matthew Riemer | Atticus Geiger | Tim Klinger | Alex Tamkin | Olivia Li | Sandhini Agarwal | Joshua D. Greene | Dan Jurafsky | Christopher Potts | Lauri Karttunen
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

We introduce Recursive Routing Networks (RRNs), which are modular, adaptable models that learn effectively in diverse environments. RRNs consist of a set of functions, typically organized into a grid, and a meta-learner decision-making component called the router. The model jointly optimizes the parameters of the functions and the meta-learner’s policy for routing inputs through those functions. RRNs can be incorporated into existing architectures in a number of ways; we explore adding them to word representation layers, recurrent network hidden layers, and classifier layers. Our evaluation task is natural language inference (NLI). Using the MultiNLI corpus, we show that an RRN’s routing decisions reflect the high-level genre structure of that corpus. To show that RRNs can learn to specialize to more fine-grained semantic distinctions, we introduce a new corpus of NLI examples involving implicative predicates, and show that the model components become fine-tuned to the inferential signatures that are characteristic of these predicates.
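
A toy sketch of the routing loop (greedy module selection for clarity; the paper's router learns a policy rather than taking an argmax, and the module set is organized into a grid):

    import numpy as np

    def recursive_route(x, modules, router, depth=3):
        # modules: list of candidate functions (stand-ins for small neural layers)
        # router: maps the current representation to scores over modules
        for _ in range(depth):
            scores = router(x)
            choice = int(np.argmax(scores))  # simplification of the learned routing policy
            x = modules[choice](x)
        return x

    # Usage with trivial stand-ins:
    modules = [lambda v: v * 2.0, lambda v: v + 1.0]
    router = lambda v: np.array([v.sum(), -v.sum()])
    print(recursive_route(np.ones(4), modules, router))  # -> [8. 8. 8. 8.]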

2018

Generating Bilingual Pragmatic Color References
Will Monroe | Jennifer Hu | Andrew Jong | Christopher Potts
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

Contextual influences on language often exhibit substantial cross-lingual regularities; for example, we are more verbose in situations that require finer distinctions. However, these regularities are sometimes obscured by semantic and syntactic differences. Using a newly-collected dataset of color reference games in Mandarin Chinese (which we release to the public), we confirm that a variety of constructions display the same sensitivity to contextual difficulty in Chinese and English. We then show that a neural speaker agent trained on bilingual data with a simple multitask learning approach displays more human-like patterns of context dependence and is more pragmatically informative than its monolingual Chinese counterpart. Moreover, this is not at the expense of language-specific semantic understanding: the resulting speaker model learns the different basic color term systems of English and Chinese (with noteworthy cross-lingual influences), and it can identify synonyms between the two languages using vector analogy operations on its output layer, despite having no exposure to parallel data.

Mittens: an Extension of GloVe for Learning Domain-Specialized Representations
Nicholas Dingwall | Christopher Potts
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

We present a simple extension of the GloVe representation learning model that begins with general-purpose representations and updates them based on data from a specialized domain. We show that the resulting representations can lead to faster learning and better results on a variety of tasks.
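
A minimal NumPy sketch of the kind of objective involved: a GloVe-style loss plus a penalty keeping the learned vectors near their general-purpose counterparts (the exact weighting in the paper may differ):

    import numpy as np

    def mittens_style_loss(W, C, bw, bc, X, R, mu=0.1, x_max=100.0, alpha=0.75):
        # W, C: word and context embedding matrices; bw, bc: bias vectors
        # X: co-occurrence counts from the specialized domain
        # R: pretrained general-purpose vectors aligned with the rows of W
        weights = np.minimum((X / x_max) ** alpha, 1.0)
        diffs = W @ C.T + bw[:, None] + bc[None, :] - np.log(np.maximum(X, 1e-12))
        glove_term = np.sum(weights * diffs ** 2)
        retrofit_term = mu * np.sum((W - R) ** 2)  # stay close to the general-purpose vectors
        return glove_term + retrofit_term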

Pragmatically Informative Image Captioning with Character-Level Inference
Reuben Cohn-Gordon | Noah Goodman | Christopher Potts
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

We combine a neural image captioner with a Rational Speech Acts (RSA) model to make a system that is pragmatically informative: its objective is to produce captions that are not merely true but also distinguish their inputs from similar images. Previous attempts to combine RSA with neural image captioning require an inference which normalizes over the entire set of possible utterances. This poses a serious problem of efficiency, previously solved by sampling a small subset of possible utterances. We instead solve this problem by implementing a version of RSA which operates at the level of characters (“a”, “b”, “c”, ...) during the unrolling of the caption. We find that the utterance-level effect of referential captions can be obtained with only character-level decisions. Finally, we introduce an automatic method for testing the performance of pragmatic speaker models, and show that our model outperforms a non-pragmatic baseline as well as a word-level RSA captioner.
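
One incremental pragmatic step, sketched at the character level (illustrative only; the paper's decoding details and normalizations are not reproduced here):

    import numpy as np

    def pragmatic_char_distribution(literal_next_char_probs, target_idx):
        # literal_next_char_probs: (num_images, vocab_size) next-character
        # distributions from the base captioner, one row per candidate image.
        # Listener: given a character, how likely is each image?
        listener = literal_next_char_probs / literal_next_char_probs.sum(axis=0, keepdims=True)
        # Pragmatic speaker: prefer characters that help the listener pick out the target.
        speaker = literal_next_char_probs[target_idx] * listener[target_idx]
        return speaker / speaker.sum()

At each decoding step the caption is extended using this re-weighted distribution, so the pragmatic reasoning never has to normalize over the full space of utterances.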

Representing Social Media Users for Sarcasm Detection
Y. Alex Kolchinski | Christopher Potts
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We explore two methods for representing authors in the context of textual sarcasm detection: a Bayesian approach that directly represents authors’ propensities to be sarcastic, and a dense embedding approach that can learn interactions between the author and the text. Using the SARC dataset of Reddit comments, we show that augmenting a bidirectional RNN with these representations improves performance; the Bayesian approach suffices in homogeneous contexts, whereas the added power of the dense embeddings proves valuable in more diverse ones.
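
The Bayesian author representation can be pictured as a smoothed per-author sarcasm rate (a sketch; the Beta prior values here are assumptions):

    def author_sarcasm_rate(n_sarcastic, n_total, alpha=1.0, beta=1.0):
        # Posterior mean of a Beta-Bernoulli model of the author's propensity
        # to be sarcastic, used alongside the bidirectional RNN's text encoding.
        return (n_sarcastic + alpha) / (n_total + alpha + beta)

    print(author_sarcasm_rate(3, 10))  # ~0.33 for an author sarcastic in 3 of 10 comments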

Retrofitting Distributional Embeddings to Knowledge Graphs with Functional Relations
Ben Lengerich | Andrew Maas | Christopher Potts
Proceedings of the 27th International Conference on Computational Linguistics

Knowledge graphs are a versatile framework to encode richly structured data relationships, but it can be challenging to combine these graphs with unstructured data. Methods for retrofitting pre-trained entity representations to the structure of a knowledge graph typically assume that entities are embedded in a connected space and that relations imply similarity. However, useful knowledge graphs often contain diverse entities and relations (with potentially disjoint underlying corpora) which do not accord with these assumptions. To overcome these limitations, we present Functional Retrofitting, a framework that generalizes current retrofitting methods by explicitly modeling pairwise relations. Our framework can directly incorporate a variety of pairwise penalty functions previously developed for knowledge graph completion. Further, it allows users to encode, learn, and extract information about relation semantics. We present both linear and neural instantiations of the framework. Functional Retrofitting significantly outperforms existing retrofitting methods on complex knowledge graphs and loses no accuracy on simpler graphs (in which relations do imply similarity). Finally, we demonstrate the utility of the framework by predicting new drug–disease treatment pairs in a large, complex health knowledge graph.
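
A toy version of the kind of objective the framework admits, with one linear map per relation (the names and weights below are assumptions, not the paper's notation):

    import numpy as np

    def functional_retrofitting_loss(Q, Q_hat, edges, relation_maps, alpha=1.0, beta=1.0):
        # Q: entity embeddings being learned; Q_hat: pretrained distributional embeddings
        # edges: iterable of (i, j, r) knowledge-graph triples
        # relation_maps: dict from relation r to a (d, d) matrix modeling that relation
        anchor = alpha * np.sum((Q - Q_hat) ** 2)            # stay near the distributional embeddings
        relational = sum(np.sum((relation_maps[r] @ Q[i] - Q[j]) ** 2)
                         for i, j, r in edges)               # per-relation pairwise penalties
        return anchor + beta * relational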

2017

Colors in Context: A Pragmatic Neural Model for Grounded Language Understanding
Will Monroe | Robert X.D. Hawkins | Noah D. Goodman | Christopher Potts
Transactions of the Association for Computational Linguistics, Volume 5

We present a model of pragmatic referring expression interpretation in a grounded communication task (identifying colors from descriptions) that draws upon predictions from two recurrent neural network classifiers, a speaker and a listener, unified by a recursive pragmatic reasoning framework. Experiments show that this combined pragmatic model interprets color descriptions more accurately than the classifiers from which it is built, and that much of this improvement results from combining the speaker and listener perspectives. We observe that pragmatic reasoning helps primarily in the hardest cases: when the model must distinguish very similar colors, or when few utterances adequately express the target color. Our findings make use of a newly-collected corpus of human utterances in color reference games, which exhibit a variety of pragmatic behaviors. We also show that the embedded speaker model reproduces many of these pragmatic behaviors.
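
A compact sketch of how a trained speaker can be turned into a pragmatic listener and blended with the trained listener (the uniform prior and the geometric blending below are assumptions, shown only to make the construction concrete):

    import numpy as np

    def pragmatic_listener(speaker_probs, listener_probs, beta=0.5):
        # speaker_probs: P_speaker(utterance | color) for each candidate color
        # listener_probs: P_listener(color | utterance) from the listener classifier
        rsa = speaker_probs / speaker_probs.sum()        # Bayes' rule with a uniform prior over colors
        blended = (rsa ** beta) * (listener_probs ** (1 - beta))
        return blended / blended.sum()

    # Example with three candidate colors:
    print(pragmatic_listener(np.array([0.6, 0.3, 0.1]), np.array([0.5, 0.4, 0.1])))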

2016

Learning to Generate Compositional Color Descriptions
Will Monroe | Noah D. Goodman | Christopher Potts
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

A Fast Unified Model for Parsing and Sentence Understanding
Samuel R. Bowman | Jon Gauthier | Abhinav Rastogi | Raghav Gupta | Christopher D. Manning | Christopher Potts
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

A large annotated corpus for learning natural language inference
Samuel R. Bowman | Gabor Angeli | Christopher Potts | Christopher D. Manning
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Recursive Neural Networks Can Learn Logical Semantics
Samuel R. Bowman | Christopher Potts | Christopher D. Manning
Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality

Text to 3D Scene Generation with Rich Lexical Grounding
Angel Chang | Will Monroe | Manolis Savva | Christopher Potts | Christopher D. Manning
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

2014

Exploiting Social Network Structure for Person-to-Person Sentiment Analysis
Robert West | Hristo S. Paskov | Jure Leskovec | Christopher Potts
Transactions of the Association for Computational Linguistics, Volume 2

Person-to-person evaluations are prevalent in all kinds of discourse and important for establishing reputations, building social bonds, and shaping public opinion. Such evaluations can be analyzed separately using signed social networks and textual sentiment analysis, but this misses the rich interactions between language and social context. To capture such interactions, we develop a model that predicts individual A’s opinion of individual B by synthesizing information from the signed social network in which A and B are embedded with sentiment analysis of the evaluative texts relating A to B. We prove that this problem is NP-hard but can be relaxed to an efficiently solvable hinge-loss Markov random field, and we show that this implementation outperforms text-only and network-only versions in two very different datasets involving community-level decision-making: the Wikipedia Requests for Adminship corpus and the Convote U.S. Congressional speech corpus.
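
One building block of the hinge-loss Markov random field mentioned above, sketched as a soft structural-balance rule over relaxed positive-opinion variables in [0, 1] (the particular rule and weight are illustrative assumptions):

    def balance_potential(s_ac, s_cb, s_ab, weight=1.0):
        # Soft encoding of "if A views C positively and C views B positively,
        # then A should view B positively": penalize how far s_ab falls below
        # the Lukasiewicz conjunction of s_ac and s_cb.
        return weight * max(0.0, s_ac + s_cb - 1.0 - s_ab)

    print(balance_potential(0.9, 0.8, 0.2))  # 0.5: this triad violates balance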

2013

Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank
Richard Socher | Alex Perelygin | Jean Wu | Jason Chuang | Christopher D. Manning | Andrew Ng | Christopher Potts
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

A computational approach to politeness with application to social factors
Cristian Danescu-Niculescu-Mizil | Moritz Sudhof | Dan Jurafsky | Jure Leskovec | Christopher Potts
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Implicatures and Nested Beliefs in Approximate Decentralized-POMDPs
Adam Vogel | Christopher Potts | Dan Jurafsky
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

The Life and Death of Discourse Entities: Identifying Singleton Mentions
Marta Recasens | Marie-Catherine de Marneffe | Christopher Potts
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Emergence of Gricean Maxims from Multi-Agent Decision Theory
Adam Vogel | Max Bodoia | Christopher Potts | Daniel Jurafsky
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2012

Did It Happen? The Pragmatic Complexity of Veridicality Assessment
Marie-Catherine de Marneffe | Christopher D. Manning | Christopher Potts
Computational Linguistics, Volume 38, Issue 2 - June 2012

2011

Learning Word Vectors for Sentiment Analysis
Andrew L. Maas | Raymond E. Daly | Peter T. Pham | Dan Huang | Andrew Y. Ng | Christopher Potts
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2010

Crowdsourcing and language studies: the new generation of linguistic data
Robert Munro | Steven Bethard | Victor Kuperman | Vicky Tzuyin Lai | Robin Melnick | Christopher Potts | Tyler Schnoebelen | Harry Tily
Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk

“Was It Good? It Was Provocative.” Learning the Meaning of Scalar Adjectives
Marie-Catherine de Marneffe | Christopher D. Manning | Christopher Potts
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

2009

Not a Simple Yes or No: Uncertainty in Indirect Answers
Marie-Catherine de Marneffe | Scott Grimm | Christopher Potts
Proceedings of the SIGDIAL 2009 Conference