Martin Rajman

Also published as: M. Rajman


2006

Archivus: A Multimodal System for Multimedia Meeting Browsing and Retrieval
Marita Ailomaa | Miroslav Melichar | Agnes Lisowska | Martin Rajman | Susan Armstrong
Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions

Extending the Wizard of Oz Methodology for Multimodal Language-enabled Systems
Martin Rajman | Marita Ailomaa | Agnes Lisowska | Miroslav Melichar | Susan Armstrong
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

In this paper we present a proposal for extending the standard Wizard of Oz experimental methodology to language-enabled multimodal systems. We first discuss how Wizard of Oz experiments involving multimodal systems differ from those involving voice-only systems. We then discuss the Extended Wizard of Oz methodology and the Wizard of Oz testing environment and protocol that we have developed, and describe an example of applying this methodology to Archivus, a multimodal system for multimedia meeting retrieval and browsing. We focus in particular on the tools that wizards need in order to perform their tasks successfully and efficiently in a multimodal context. We conclude with some general comments about the questions that need to be addressed when developing and using the Wizard of Oz methodology for testing multimodal systems.
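
Purely as an illustration of the general Wizard of Oz pattern the abstract describes, here is a minimal console relay in which a human operator picks the system's reply for each logged user turn; the class names, modalities, and canned replies are assumptions for the sketch, not the actual Archivus wizard tooling.

```python
# Minimal Wizard-of-Oz console relay (illustrative sketch only; the names,
# modalities, and canned replies are assumptions, not the Archivus tooling).
# A human "wizard" picks the system's reply for each user turn, so the
# dialogue design can be tested before the real components exist.

from dataclasses import dataclass

@dataclass
class Turn:
    modality: str  # e.g. "voice", "pointing", "keyboard"
    content: str   # the user input as transcribed or logged

def wizard_decide(turn: Turn, canned: dict) -> str:
    """Show the turn to the wizard and return the chosen system reply."""
    print(f"[user/{turn.modality}] {turn.content}")
    for key, reply in sorted(canned.items()):
        print(f"  ({key}) {reply}")
    choice = input("wizard> ").strip()
    return canned.get(choice, choice)  # free-form reply if no key matches

def run_session(turns: list) -> list:
    canned = {
        "1": "Which meeting are you interested in?",
        "2": "Here are the documents I found for that meeting.",
        "3": "Sorry, could you rephrase that?",
    }
    return [wizard_decide(t, canned) for t in turns]

if __name__ == "__main__":
    replies = run_session([Turn("voice", "show me last week's meeting")])
    print("system said:", replies)
```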

CESTA: First Conclusions of the Technolangue MT Evaluation Campaign
O. Hamon | A. Popescu-Belis | K. Choukri | M. Dabbadie | A. Hartley | W. Mustafa El Hadi | M. Rajman | I. Timimi
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

This article outlines the evaluation protocol and provides the main results of the French Evaluation Campaign for Machine Translation Systems, CESTA. Following the initial objectives and evaluation plans, the evaluation metrics are briefly described: along with fluency and adequacy assessed by human judges, a number of recently proposed automated metrics are used. Two evaluation campaigns were organized, the first one in the general domain, and the second one in the medical domain. Up to six systems translating from English into French, and two systems translating from Arabic into French, took part in the campaign. The numerical results illustrate the differences between classes of systems, and provide interesting indications about the reliability of the automated metrics for French as a target language, both by comparison to human judges and using correlations between metrics. The corpora that were produced, as well as the information about the reliability of metrics, constitute reusable resources for MT evaluation.
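
The metric-reliability analysis mentioned above comes down to correlating automated scores with human judgments at the system level. A minimal sketch of such a check, with placeholder scores rather than actual CESTA figures:

```python
# Sketch of a system-level metric-reliability check: correlate an automated
# MT score with human adequacy judgments. All values are made-up
# placeholders, not CESTA results.

from statistics import mean
from math import sqrt

def pearson(xs: list, ys: list) -> float:
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One automated score and one human judgment per participating system.
bleu_like = [0.31, 0.42, 0.27, 0.38, 0.45, 0.33]  # placeholder metric scores
adequacy  = [3.1,  3.9,  2.8,  3.6,  4.2,  3.2]   # placeholder 1-5 judgments

print(f"system-level correlation: {pearson(bleu_like, adequacy):.3f}")
```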

X-Score: Automatic Evaluation of Machine Translation Grammaticality
O. Hamon | M. Rajman
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

In this paper we report on an experiment with an automated metric used to analyse the grammaticality of machine translation output. The approach (Rajman and Hartley, 2001) is based on the distribution of linguistic information within a translated text, which is assumed to be similar between a learning corpus and the translation. The method is quite inexpensive, since it does not require any reference translation. We first describe the experimental method and the different tests we used. We then show the promising results obtained on the CESTA data, and how well they correlate with human judgments.
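
As a rough illustration of the reference-free idea behind X-Score, the sketch below compares the distribution of POS tags in a translation against the distribution estimated from a well-formed learning corpus. The tags, counts, and the cosine comparison are illustrative assumptions, not the features or scoring function the paper defines.

```python
# Crude illustration of reference-free grammaticality scoring: compare the
# distribution of linguistic categories (here, POS tags) in MT output against
# the distribution learned from a well-formed corpus. Tags, data, and the
# cosine comparison are illustrative, not the actual X-Score definition.

from collections import Counter
from math import sqrt

def tag_distribution(tags: list) -> dict:
    """Relative frequency of each POS tag in a tag sequence."""
    counts = Counter(tags)
    total = sum(counts.values())
    return {tag: n / total for tag, n in counts.items()}

def cosine(p: dict, q: dict) -> float:
    """Cosine similarity between two sparse distributions."""
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in keys)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Made-up POS sequences: a learning corpus of grammatical text vs. MT output.
corpus_tags = ["DET", "NOUN", "VERB", "DET", "ADJ", "NOUN", "PREP", "NOUN"]
output_tags = ["DET", "NOUN", "NOUN", "NOUN", "VERB", "PREP", "DET", "NOUN"]

score = cosine(tag_distribution(corpus_tags), tag_distribution(output_tags))
print(f"grammaticality proxy (higher = closer to corpus): {score:.3f}")
```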

Robust stochastic parsing: Comparing and combining two approaches for processing extra-grammatical sentences
Marita Ailomaa | Vladimír Kadlec | Martin Rajman | Jean-Cédric Chappelier
Proceedings of the 15th Nordic Conference of Computational Linguistics (NODALIDA 2005)

2004

INSPIRE: Evaluation of a Smart-Home System for Infotainment Management and Device Control
Sebastian Möller | Jan Krebber | Alexander Raake | Paula Smeele | Martin Rajman | Mirek Melichar | Vincenzo Pallotta | Gianna Tsakou | Basilis Kladis | Anestis Vovos | Jettie Hoonhout | Dietmar Schuchardt | Nikos Fakotakis | Todor Ganchev | Ilyas Potamitis
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Speech Recognition Simulation and its Application for Wizard-of-Oz Experiments
Alex Trutnev | Antoine Rozenknop | Martin Rajman
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Comparative Evaluations in the Domain of Automatic Speech Recognition
Alex Trutnev | Martin Rajman
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Automatic Keyword Extraction from Spoken Text. A Comparison of Two Lexical Resources: EDR and WordNet
Lonneke van der Plas | Vincenzo Pallotta | Martin Rajman | Hatem Ghorbel
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

CESTA: Machine Translation Evaluation Campaign [Work-in-Progress Project Report]
Widad Mustafa El Hadi | Marianne Dabbadie | Ismaïl Timimi | Martin Rajman | Philippe Langlais | Antony Hartley | Andrei Popescu-Belis
Proceedings of the Second International Workshop on Language Resources for Translation Work, Research and Training

2002

Automatic Ranking of MT Systems
Martin Rajman | Anthony Hartley
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

Evaluation of a Vector Space Similarity Measure in a Multilingual Framework
Romaric Besançon | Martin Rajman
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

2000

Development of Acoustic and Linguistic Resources for Research and Evaluation in Interactive Vocal Information Servers
Giulia Bernardis | Hervé Bourlard | Martin Rajman | Jean-Cédric Chappelier
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)