Marianne Dabbadie

Also published as: M. Dabbadie


2006

Terminological Resources Acquisition Tools: Toward a User-oriented Evaluation Model
Widad Mustafa El Hadi | Ismail Timimi | Marianne Dabbadie | Khalid Choukri | Olivier Hamon | Yun-Chuang Chiao
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

This paper describes the CESART project, which deals with the evaluation of terminological resource acquisition tools. The objective of the project is to propose and validate an evaluation protocol that makes it possible to objectively evaluate and compare different systems across terminology applications such as terminological resource creation and semantic relation extraction. The project also aims to produce quality-controlled resources, such as domain-specific corpora and an automatic scoring tool.
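
As an illustration only (not the CESART project's actual scoring tool), the following minimal Python sketch shows the kind of automatic scoring such an evaluation protocol implies: a tool's extracted term list is compared against a reference terminology using precision, recall and F1. The normalisation step and all example terms are assumptions.

```python
# Minimal sketch of scoring an extracted term list against a gold terminology.
# Everything here (normalisation, example terms) is illustrative, not the
# project's actual evaluation tool.

def normalise(term: str) -> str:
    """Lowercase and collapse whitespace so surface variants match."""
    return " ".join(term.lower().split())

def score_term_list(candidates: list[str], gold: list[str]) -> dict[str, float]:
    """Return precision, recall and F1 of candidate terms against the gold list."""
    cand = {normalise(t) for t in candidates}
    ref = {normalise(t) for t in gold}
    true_pos = len(cand & ref)
    precision = true_pos / len(cand) if cand else 0.0
    recall = true_pos / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

if __name__ == "__main__":
    extracted = ["machine translation", "Evaluation campaign", "terminology  extraction"]
    reference = ["machine translation", "terminology extraction", "semantic relation"]
    print(score_term_list(extracted, reference))
    # e.g. {'precision': 0.666..., 'recall': 0.666..., 'f1': 0.666...}
```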

CESTA: First Conclusions of the Technolangue MT Evaluation Campaign
O. Hamon | A. Popescu-Belis | K. Choukri | M. Dabbadie | A. Hartley | W. Mustafa El Hadi | M. Rajman | I. Timimi
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

This article outlines the evaluation protocol and provides the main results of the French Evaluation Campaign for Machine Translation Systems, CESTA. Following the initial objectives and evaluation plans, the evaluation metrics are briefly described: along with fluency and adequacy assessed by human judges, a number of recently proposed automated metrics are used. Two evaluation campaigns were organized, the first one in the general domain, and the second one in the medical domain. Up to six systems translating from English into French, and two systems translating from Arabic into French, took part in the campaign. The numerical results illustrate the differences between classes of systems, and provide interesting indications about the reliability of the automated metrics for French as a target language, both by comparison to human judges and using correlations between metrics. The corpora that were produced, as well as the information about the reliability of metrics, constitute reusable resources for MT evaluation.
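
The meta-evaluation step mentioned above (checking automated metrics against human judges) can be illustrated with a short Python sketch: correlating system-level automated scores with mean human adequacy judgements. The scores below are invented placeholders, not CESTA results.

```python
# Minimal sketch of correlating an automated MT metric with human adequacy
# judgements at the system level. All scores are hypothetical placeholders.

from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

if __name__ == "__main__":
    # One score per participating system (invented values).
    metric_scores = [0.42, 0.35, 0.51, 0.47, 0.30, 0.44]   # automated metric
    human_adequacy = [3.8, 3.1, 4.2, 4.0, 2.9, 3.7]        # mean human judgement
    print(f"Pearson r = {pearson(metric_scores, human_adequacy):.3f}")
```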

2004

EVALDA-CESART Project: Terminological Resources Acquisition Tools Evaluation Campaign
Widad Mustafa El Hadi | Ismail Timimi | Marianne Dabbadie
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

CESTA: Machine Translation Evaluation Campaign [Work-in-Progress Project Report]
Widad Mustafa El Hadi | Marianne Dabbadie | Ismaïl Timimi | Martin Rajman | Philippe Langlais | Antony Hartley | Andrei Popescu-Belis
Proceedings of the Second International Workshop on Language Resources for Translation Work, Research and Training

2002

Terminological Enrichment for non-Interactive MT Evaluation
Marianne Dabbadie | Widad Mustafa El Hadi | Ismaïl Timimi
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)