Eva Maria Vecchi

Also published as: Eva Vecchi


2016

Many speakers, many worlds: Interannotator variations in the quantification of feature norms
Aurélie Herbelot | Eva Maria Vecchi
Linguistic Issues in Language Technology, Volume 13, 2016

Quantification (see e.g. Peters and Westerståhl, 2006) is probably one of the most extensively studied phenomena in formal semantics. But because of the specific representation of meaning assumed by model-theoretic semantics (one where a true model of the world is a priori available), research in the area has primarily focused on one question: what is the relation of a quantifier to the truth value of a sentence? In contrast, relatively little has been said about the way the underlying model comes about, and its relation to individual speakers’ conceptual knowledge. In this paper, we make a first step in investigating how native speakers of English model relations between non-grounded sets, by observing how they quantify simple statements. We first give some motivation for our task, from both a theoretical linguistic and a computational semantic point of view (§2). We then describe our annotation setup (§3) and follow with an analysis of the produced dataset, conducting a quantitative evaluation which includes inter-annotator agreement for different classes of predicates (§4). We observe that there is significant agreement between speakers, but also noticeable variation. We posit that in set-theoretic terms, there are as many worlds as there are speakers (§5), but the overwhelming use of underspecified quantification in ordinary language covers up the individual differences that might otherwise be observed.

SLEDDED: A Proposed Dataset of Event Descriptions for Evaluating Phrase Representations
Laura Rimell | Eva Maria Vecchi
Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP

2015

Building a shared world: mapping distributional to model-theoretic semantic spaces
Aurélie Herbelot | Eva Maria Vecchi
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

From distributional semantics to feature norms: grounding semantic models in human perceptual data
Luana Fagarasan | Eva Maria Vecchi | Stephen Clark
Proceedings of the 11th International Conference on Computational Semantics

2014

Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation
Dekai Wu | Marine Carpuat | Xavier Carreras | Eva Maria Vecchi
Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation

2013

Studying the Recursive Behaviour of Adjectival Modification with Compositional Distributional Semantics
Eva Maria Vecchi | Roberto Zamparelli | Marco Baroni
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

Fish Transporters and Miracle Homes: How Compositional Distributional Semantics can Help NP Parsing
Angeliki Lazaridou | Eva Maria Vecchi | Marco Baroni
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

Proceedings of the Student Research Workshop, 51st Annual Meeting of the Association for Computational Linguistics
Anik Dey | Sebastian Krause | Ivelina Nikolova | Eva Vecchi | Steven Bethard | Preslav I. Nakov | Feiyu Xu
Proceedings of the Student Research Workshop, 51st Annual Meeting of the Association for Computational Linguistics

2012

First Order vs. Higher Order Modification in Distributional Semantics
Gemma Boleda | Eva Maria Vecchi | Miquel Cornudella | Louise McNally
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

2011

(Linear) Maps of the Impossible: Capturing Semantic Anomalies in Distributional Space
Eva Maria Vecchi | Marco Baroni | Roberto Zamparelli
Proceedings of the Workshop on Distributional Semantics and Compositionality

2008

An Infrastructure, Tools and Methodology for Evaluation of Multicultural Name Matching Systems
Keith J. Miller | Mark Arehart | Catherine Ball | John Polk | Alan Rubenstein | Kenneth Samuel | Elizabeth Schroeder | Eva Vecchi | Chris Wolf
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

This paper describes a Name Matching Evaluation Laboratory that is a joint effort across multiple projects. The lab houses our evaluation infrastructure as well as multiple name matching engines and customized analytical tools. Included is an explanation of the methodology used by the lab to carry out evaluations. This methodology is based on standard information retrieval evaluation, which requires a carefully constructed test data set. The paper describes how we created that test data set, including the “ground truth” used to score the systems’ performance. Descriptions and snapshots of the lab’s various tools are provided, as well as information on how the different tools are used throughout the evaluation process. By using this evaluation process, the lab has been able to identify strengths and weaknesses of different name matching engines. These findings have led the lab to an ongoing investigation into various techniques for combining results from multiple name matching engines to achieve optimal results, as well as into research on the more general problem of identity management and resolution.