Ismail Timimi

Also published as: I. Timimi, Ismaïl Timimi


2008

The INFILE Project: a Crosslingual Filtering Systems Evaluation Campaign
Romaric Besançon | Stéphane Chaudiron | Djamel Mostefa | Ismaïl Timimi | Khalid Choukri
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

The INFILE project (INformation FILtering Evaluation) is a cross-language adaptive filtering evaluation campaign, sponsored by the French National Research Agency. The campaign is organized by CEA LIST, ELDA and the University of Lille 3-GERiiCO. It has an international scope, as it is a pilot track of the CLEF 2008 campaigns. The corpus is built from a collection of about 1.4 million newswires (10 GB) in three languages, Arabic, English and French, provided by the French news agency Agence France-Presse (AFP) and selected from a 3-year period. The profiles corpus is made of 50 profiles, of which 30 concern general news and events (national and international affairs, politics, sports, etc.) and 20 concern scientific and technical subjects.

2006

Terminological Resources Acquisition Tools: Toward a User-oriented Evaluation Model
Widad Mustafa El Hadi | Ismail Timimi | Marianne Dabbadie | Khalid Choukri | Olivier Hamon | Yun-Chuang Chiao
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

This paper describes the CESART project, which deals with the evaluation of terminological resources acquisition tools. The objective of the project is to propose and validate an evaluation protocol that makes it possible to objectively evaluate and compare different systems for terminology applications such as terminological resource creation and semantic relation extraction. The project also aims to create quality-controlled resources such as domain-specific corpora, an automatic scoring tool, etc.

CESTA: First Conclusions of the Technolangue MT Evaluation Campaign
O. Hamon | A. Popescu-Belis | K. Choukri | M. Dabbadie | A. Hartley | W. Mustafa El Hadi | M. Rajman | I. Timimi
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

This article outlines the evaluation protocol and provides the main results of the French Evaluation Campaign for Machine Translation Systems, CESTA. Following the initial objectives and evaluation plans, the evaluation metrics are briefly described: along with fluency and adequacy assessed by human judges, a number of recently proposed automated metrics are used. Two evaluation campaigns were organized, the first one in the general domain, and the second one in the medical domain. Up to six systems translating from English into French, and two systems translating from Arabic into French, took part in the campaign. The numerical results illustrate the differences between classes of systems, and provide interesting indications about the reliability of the automated metrics for French as a target language, both by comparison to human judges and using correlations between metrics. The corpora that were produced, as well as the information about the reliability of metrics, constitute reusable resources for MT evaluation.

2004

EVALDA-CESART Project: Terminological Resources Acquisition Tools Evaluation Campaign
Widad Mustafa El Hadi | Ismail Timimi | Marianne Dabbadie
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

CESTA: Machine Translation Evaluation Campaign [Work-in-Progress Project Report]
Widad Mustafa El Hadi | Marianne Dabbadie | Ismaïl Timimi | Martin Rajman | Philippe Langlais | Antony Hartley | Andrei Popescu-Belis
Proceedings of the Second International Workshop on Language Resources for Translation Work, Research and Training

2002

Terminological Enrichment for non-Interactive MT Evaluation
Marianne Dabbadie | Widad Mustafa El Hadi | Ismaïl Timimi
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

2001

The ARC A3 Project: Terminology Acquisition Tools: Evaluation Method and Task
Widad Mustafa El Hadi | Ismail Timimi | Annette Beguin | Marcilio de Brito
Proceedings of the ACL 2001 Workshop on Evaluation Methodologies for Language and Dialogue Systems