Eetu Sjöblom


2020

Paraphrase Generation and Evaluation on Colloquial-Style Sentences
Eetu Sjöblom | Mathias Creutz | Yves Scherrer
Proceedings of the 12th Language Resources and Evaluation Conference

In this paper, we investigate paraphrase generation in the colloquial domain. We use state-of-the-art neural machine translation models trained on the Opusparcus corpus to generate paraphrases in six languages: German, English, Finnish, French, Russian, and Swedish. We perform experiments to understand how data selection and filtering for diverse paraphrase pairs affect the generated paraphrases. We compare two different model architectures, an RNN and a Transformer model, and find that the Transformer does not generally outperform the RNN. We also conduct human evaluation on five of the six languages and compare the results to the automatic evaluation metrics BLEU and the recently proposed BERTScore. The results advance our understanding of the trade-offs between the quality and novelty of generated paraphrases, which are affected by the data selection method. In addition, our comparison of the evaluation methods shows that while BLEU correlates well with human judgments at the corpus level, BERTScore outperforms BLEU in both corpus-level and sentence-level evaluation.
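As a rough illustration of the two automatic metrics compared above, the sketch below scores a candidate paraphrase against a reference with both BLEU and BERTScore. It assumes the sacrebleu and bert-score Python packages; the sentence pair is a made-up example, not an item from Opusparcus.

# Sketch: scoring a candidate paraphrase against a reference with BLEU
# (sacrebleu) and BERTScore (bert-score). The sentences are invented
# examples, not drawn from the Opusparcus corpus.
import sacrebleu
from bert_score import score

reference = "how are you doing"
candidate = "how is it going"  # meaning-preserving, but little word overlap

# Surface n-gram overlap: BLEU penalizes the lexical divergence heavily.
bleu = sacrebleu.sentence_bleu(candidate, [reference])
print(f"BLEU: {bleu.score:.1f}")

# Contextual-embedding similarity: BERTScore matches tokens in embedding
# space, so meaning-preserving rewordings can still score highly.
P, R, F1 = score([candidate], [reference], lang="en")
print(f"BERTScore F1: {F1.item():.3f}")

A pair like this shows why a metric based on contextual embeddings can track human judgments of paraphrase quality more closely at the sentence level than an n-gram metric.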

2019

Toward automatic improvement of language produced by non-native language learners
Mathias Creutz | Eetu Sjöblom
Proceedings of the 8th Workshop on NLP for Computer Assisted Language Learning

2018

Paraphrase Detection on Noisy Subtitles in Six Languages
Eetu Sjöblom | Mathias Creutz | Mikko Aulamo
Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text

We perform automatic paraphrase detection on subtitle data from the Opusparcus corpus, which comprises six European languages: German, English, Finnish, French, Russian, and Swedish. We train two types of supervised sentence embedding models: a word-averaging (WA) model and a gated recurrent averaging network (GRAN) model. We find that GRAN outperforms WA and is more robust to noisy training data. Better results are obtained with more, noisier data than with less, cleaner data. Additionally, we experiment with other datasets, but do not reach the same level of performance, owing to the domain mismatch between training and test data.
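As a minimal sketch of the word-averaging (WA) approach, the code below embeds each sentence as the mean of its word vectors and thresholds cosine similarity to make a paraphrase decision. The tiny random vocabulary and the 0.8 threshold are illustrative assumptions; the WA model in the paper instead learns its word embeddings in a supervised fashion on Opusparcus paraphrase pairs.

# Sketch of a word-averaging (WA) sentence embedding: each sentence is
# represented as the mean of its word vectors, and a paraphrase decision
# is made by thresholding cosine similarity. The random embeddings and
# the 0.8 threshold are illustrative assumptions, not the trained model.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["how", "are", "you", "doing", "is", "it", "going"]
dim = 8
embeddings = {w: rng.standard_normal(dim) for w in vocab}

def wa_embed(sentence: str) -> np.ndarray:
    """Average the word vectors of all in-vocabulary tokens."""
    vecs = [embeddings[t] for t in sentence.lower().split() if t in embeddings]
    return np.mean(vecs, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine(wa_embed("how are you doing"), wa_embed("how is it going"))
print(f"similarity: {sim:.3f}, paraphrase: {sim > 0.8}")

The GRAN model studied in the paper follows the same embed-and-compare pattern, but composes the sentence representation with a gated recurrent network rather than a plain average.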