Guillaume Bernard


2018

Matics Software Suite: New Tools for Evaluation and Data Exploration
Olivier Galibert | Guillaume Bernard | Agnes Delaborde | Sabrina Lecadre | Juliette Kahn
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2016

FABIOLE, a Speech Database for Forensic Speaker Comparison
Moez Ajili | Jean-François Bonastre | Juliette Kahn | Solange Rossato | Guillaume Bernard
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

A speech database was collected to highlight the importance of the “speaker factor” in forensic voice comparison. FABIOLE was created during the FABIOLE project, funded by the French Research Agency (ANR) from 2013 to 2016. The corpus consists of more than 3,000 excerpts spoken by 130 native French male speakers. The speakers are divided into two categories: 30 target speakers, each with 100 excerpts, and 100 “impostors”, each with a single excerpt. The data were collected from 10 different French radio and television shows; each speech turn has a minimum duration of 30 s and good speech quality. The data set is mainly intended for investigating the speaker factor in forensic voice comparison and for examining unresolved issues such as the relationship between speaker characteristics and system behavior. In this paper, we present the FABIOLE database. Then, preliminary experiments are performed to evaluate the effect of the “speaker factor” and of the show on the behavior of a voice comparison system.

LNE-Visu : a tool to explore and visualize multimedia data
Guillaume Bernard | Juliette Kahn | Olivier Galibert | Rémi Regnier | Séverine Demeyer
Actes de la conférence conjointe JEP-TALN-RECITAL 2016. volume 5 : Démonstrations

LNE-Visu is a tool to explore and visualize multimedia data, created for the LNE evaluation campaigns. Three functionalities are available: exploring and selecting data, visualizing and listening to data, and applying significance tests.

2010

A Question-answer Distance Measure to Investigate QA System Progress
Guillaume Bernard | Sophie Rosset | Martine Adda-Decker | Olivier Galibert
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

The performance of question answering systems is evaluated through successive evaluation campaigns. A set of questions is given to the participating systems, which must find the correct answers in a collection of documents. The process by which the questions are created may change from one evaluation to the next, which may entail an uncontrolled shift in question difficulty. For the QAst 2009 evaluation campaign, a new procedure was adopted to build the questions. Comparing the results of the QAst 2008 and QAst 2009 evaluations, a strong performance loss could be measured in 2009 for French and English, while the Spanish systems globally made progress. The measured loss might be related to this new way of elaborating questions. The general purpose of this paper is to propose a measure to calibrate the difficulty of a question set. In particular, a reasonable measure should output higher values for 2009 than for 2008. The proposed measure relies on a distance between the critical elements of a question and those of the associated correct answer. An increase in the proposed distance measure for the French and English 2009 evaluations as compared to 2008 could be established, and this increase correlates with the previously observed degraded performances. We conclude on the potential of this evaluation criterion: such a measure is valuable both for the elaboration of new question corpora for question answering systems and as a tool to control the level of difficulty across successive evaluation campaigns.