Anselmo Peñas

Also published as: Anselmo Penas


2017

Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics
André Martins | Anselmo Peñas
Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics

2015

Unsupervised Learning of Coherent and General Semantic Classes for Entity Aggregates
Henry Anaya-Sánchez | Anselmo Peñas
Proceedings of the 11th International Conference on Computational Semantics

2014

“One Entity per Discourse” and “One Entity per Collocation” Improve Named-Entity Disambiguation
Ander Barrena | Eneko Agirre | Bernardo Cabaleiro | Anselmo Peñas | Aitor Soroa
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2012

Temporally Anchored Relation Extraction
Guillermo Garrido | Anselmo Peñas | Bernardo Cabaleiro | Álvaro Rodrigo
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Evaluating Machine Reading Systems through Comprehension Tests
Anselmo Peñas | Eduard Hovy | Pamela Forner | Álvaro Rodrigo | Richard Sutcliffe | Corina Forascu | Caroline Sporleder
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

This paper describes a methodology for testing and evaluating the performance of Machine Reading systems through Question Answering and Reading Comprehension Tests. The methodology is being used in QA4MRE (QA for Machine Reading Evaluation), one of the labs of CLEF. The task was to answer a series of multiple-choice tests, each based on a single document. This allows complex questions to be asked while keeping evaluation simple and completely automatic. The evaluation architecture is fully multilingual: test documents, questions, and their answers are identical in all the supported languages. Background text collections are comparable collections harvested from the web for a set of predefined topics. Each test received an evaluation score between 0 and 1 using c@1. This measure encourages systems to reduce the number of incorrect answers while maintaining the number of correct ones by leaving some questions unanswered. Twelve groups participated in the task, submitting 62 runs in 3 different languages (German, English, and Romanian). All runs were monolingual; no team attempted a cross-language task. We report here the conclusions and lessons learned after the first campaign in 2011.
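The c@1 measure mentioned in this abstract (introduced in Peñas and Rodrigo's "A Simple Measure to Assess Non-response", ACL 2011, listed under 2011 below) can be sketched in a few lines of Python; the function and argument names here are illustrative, not from the papers:

```python
def c_at_1(n_correct: int, n_unanswered: int, n_total: int) -> float:
    """c@1 = (n_correct + n_unanswered * n_correct / n_total) / n_total.

    Unanswered questions are credited at the system's observed accuracy,
    so leaving a question blank scores better than answering it wrongly.
    """
    if n_total <= 0:
        raise ValueError("n_total must be positive")
    return (n_correct + n_unanswered * n_correct / n_total) / n_total


# A system with 5 correct, 3 wrong, and 2 unanswered out of 10 questions
# scores c@1 = (5 + 2 * 0.5) / 10 = 0.6, versus a plain accuracy of 0.5.
print(c_at_1(5, 2, 10))
```

Note that when every question is answered (n_unanswered = 0), c@1 reduces to ordinary accuracy.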

2011

Detecting Compositionality Using Semantic Vector Space Models Based on Syntactic Context. Shared Task System Description
Guillermo Garrido | Anselmo Peñas
Proceedings of the Workshop on Distributional Semantics and Compositionality

A Simple Measure to Assess Non-response
Anselmo Peñas | Alvaro Rodrigo
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Unsupervised Discovery of Domain-Specific Knowledge from Text
Dirk Hovy | Chunliang Zhang | Eduard Hovy | Anselmo Peñas
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2010

Semantic Enrichment of Text with Background Knowledge
Anselmo Peñas | Eduard Hovy
Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading

Filling Knowledge Gaps in Text for Machine Reading
Anselmo Peñas | Eduard Hovy
Coling 2010: Posters

GikiCLEF: Crosscultural Issues in Multilingual Information Access
Diana Santos | Luís Miguel Cabral | Corina Forascu | Pamela Forner | Fredric Gey | Katrin Lamm | Thomas Mandl | Petya Osenova | Anselmo Peñas | Álvaro Rodrigo | Julia Schulz | Yvonne Skalban | Erik Tjong Kim Sang
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

In this paper we describe GikiCLEF, the first evaluation contest that, to our knowledge, was specifically designed to expose and investigate the cultural and linguistic issues involved in structured multimedia collections and searching, and which was organized under the scope of CLEF 2009. GikiCLEF evaluated systems that answered questions that are hard for both humans and machines, over ten different Wikipedia collections, namely Bulgarian, Dutch, English, German, Italian, Norwegian (Bokmål and Nynorsk), Portuguese, Romanian, and Spanish. After a short historical introduction, we present the task, together with its motivation, and discuss how the topics were chosen. Then we provide another description from the point of view of the participants. Before disclosing their results, we introduce the SIGA management system, explaining the several tasks that were carried out behind the scenes. We then describe the GIRA resource, offered to the community for training and further evaluating systems, built from the 50 topics gathered and the solutions identified. We end the paper with a critical discussion of what was learned, advancing possible ways to reuse the data.

Evaluating Multilingual Question Answering Systems at CLEF
Pamela Forner | Danilo Giampiccolo | Bernardo Magnini | Anselmo Peñas | Álvaro Rodrigo | Richard Sutcliffe
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

The paper offers an overview of the key issues raised during the seven years’ activity of the Multilingual Question Answering Track at the Cross Language Evaluation Forum (CLEF). The general aim of the track has been to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages, while also drawing attention to a number of challenging issues for research in multilingual QA. The paper gives a brief description of how the task has evolved over the years and of the way in which the data sets have been created, also presenting a brief summary of the different types of questions developed. The document collections adopted in the competitions are sketched as well, and data about participation are provided. Moreover, the main measures used to evaluate system performance are explained, and an overall analysis of the results achieved is presented.

2007

Experiments of UNED at the Third Recognising Textual Entailment Challenge
Álvaro Rodrigo | Anselmo Peñas | Jesús Herrera | Felisa Verdejo
Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing

2006

The Multilingual Question Answering Track at CLEF
Bernardo Magnini | Danilo Giampiccolo | Lili Aunimo | Christelle Ayache | Petya Osenova | Anselmo Peñas | Maarten de Rijke | Bogdan Sacaleanu | Diana Santos | Richard Sutcliffe
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

This paper presents an overview of the Multilingual Question Answering evaluation campaigns that have been organized at CLEF (Cross Language Evaluation Forum) since 2003. Over the years, the competition has registered a steady increase in the number of participants and languages involved: from the original eight groups that participated in the 2003 QA track, the number of competitors rose to twenty-four in 2005. The performance of the systems has also steadily improved, and the average of the best performances in 2005 was 10% higher than in the previous year.

2005

QARLA: A Framework for the Evaluation of Text Summarization Systems
Enrique Amigó | Julio Gonzalo | Anselmo Peñas | Felisa Verdejo
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)

Evaluating DUC 2004 Tasks with the QARLA Framework
Enrique Amigó | Julio Gonzalo | Anselmo Peñas | Felisa Verdejo
Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization

2004

An Empirical Study of Information Synthesis Task
Enrique Amigo | Julio Gonzalo | Victor Peinado | Anselmo Peñas | Felisa Verdejo
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)

Using syntactic information to extract relevant terms for multi-document summarization
Enrique Amigó | Julio Gonzalo | Víctor Peinado | Anselmo Peñas | Felisa Verdejo
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

Word Sense Disambiguation based on term to term similarity in a context space
Javier Artiles | Anselmo Penas | Felisa Verdejo
Proceedings of SENSEVAL-3, the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text

2000

Evaluating Wordnets in Cross-language Information Retrieval: the ITEM Search Engine
Felisa Verdejo | Julio Gonzalo | Anselmo Peñas | Fernando López | David Fernández
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)

1999

Lexical ambiguity and Information Retrieval revisited
Julio Gonzalo | Anselmo Penas | Felisa Verdejo
1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora

An Open Distance Learning Web-Course for NLP in IR
Felisa Verdejo | Julio Gonzalo | Anselmo Penas
EACL 1999: Computer and Internet Supported Education in Language and Speech Technology