Andrew Hickl


2010

Unsupervised Discovery of Collective Action Frames for Socio-Cultural Analysis
Andrew Hickl | Sanda Harabagiu
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

Multilingual Question Generation
Andrew Hickl | Arnold Jung | Ying Shi
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

2008

Scaling Answer Type Detection to Large Hierarchies
Kirk Roberts | Andrew Hickl
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

This paper describes the creation of a state-of-the-art answer type detection system capable of recognizing more than 200 different expected answer types with greater than 85% precision and recall. After describing how we constructed a new, multi-tiered answer type hierarchy from the set of entity types recognized by Language Computer Corporation’s CICEROLITE named entity recognition system, we describe how we used this hierarchy to annotate a new corpus of more than 10,000 English factoid questions. We show how an answer type detection system trained on this corpus can be used to enhance the accuracy of a state-of-the-art question-answering system (Hickl et al., 2007; Hickl et al., 2006b) by more than 7% overall.

Unsupervised Resource Creation for Textual Inference Applications
Jeremy Bensley | Andrew Hickl
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

This paper explores how a battery of unsupervised techniques can be used to create large, high-quality corpora for textual inference applications, such as systems for recognizing textual entailment (TE) and textual contradiction (TC). We show that it is possible to automatically generate sets of positive and negative instances of textual entailment and contradiction from textual corpora with greater than 90% precision. We describe how we generated more than 1 million TE pairs, and a corresponding set of 500,000 TC pairs, from the documents found in the 2 GB AQUAINT-2 newswire corpus.

Using Discourse Commitments to Recognize Textual Entailment
Andrew Hickl
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

2007

A Discourse Commitment-Based Framework for Recognizing Textual Entailment
Andrew Hickl | Jeremy Bensley
Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing

2006

Methods for Using Textual Entailment in Open-Domain Question Answering
Sanda Harabagiu | Andrew Hickl
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

FERRET: Interactive Question-Answering for Real-World Environments
Andrew Hickl | Patrick Wang | John Lehmann | Sanda Harabagiu
Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions

What in the world is a Shahab?: Wide Coverage Named Entity Recognition for Arabic
Luke Nezda | Andrew Hickl | John Lehmann | Sarmad Fayyaz
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06)

This paper describes the development of CiceroArabic, the first wide-coverage named entity recognition (NER) system for Modern Standard Arabic. Capable of classifying 18 different named entity classes with over 85% F-measure, CiceroArabic utilizes a new 800,000-word annotated Arabic newswire corpus in order to achieve high performance without the need for hand-crafted rules or morphological information. In addition to describing results from our system, we show that accurate named entity annotation for a large number of semantic classes is feasible, even for very large corpora, and we discuss new techniques designed to boost agreement and consistency among annotators over a long-term annotation effort.

Impact of Question Decomposition on the Quality of Answer Summaries
Finley Lacatusu | Andrew Hickl | Sanda Harabagiu
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06)

Generating answers to complex questions in the form of multi-document summaries requires access to question decomposition methods. In this paper we present three methods for decomposing complex questions and we evaluate their impact on the responsiveness of the answers they enable.

Using Scenario Knowledge in Automatic Question Answering
Sanda Harabagiu | Andrew Hickl
Proceedings of the Workshop on Task-Focused Summarization and Question Answering

Enhanced Interactive Question-Answering with Conditional Random Fields
Andrew Hickl | Sanda Harabagiu
Proceedings of the Interactive Question Answering Workshop at HLT-NAACL 2006

2005

Experiments with Interactive Question-Answering
Sanda Harabagiu | Andrew Hickl | John Lehmann | Dan Moldovan
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)

2004

Experiments with Interactive Question Answering in Complex Scenarios
Andrew Hickl | John Lehmann | John Williams | Sanda Harabagiu
Proceedings of the Workshop on Pragmatics of Question Answering at HLT-NAACL 2004