2019
Making Fast Graph-based Algorithms with Graph Metric Embeddings
Andrey Kutuzov

Mohammad Dorgham

Oleksiy Oliynyk

Chris Biemann

Alexander Panchenko
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Graph measures, such as node distances, are inefficient to compute. We explore dense vector representations as an effective way to approximate the same information. We introduce a simple yet efficient and effective approach for learning graph embeddings. Instead of directly operating on the graph structure, our method takes structural measures of pairwise node similarities into account and learns dense node representations reflecting user-defined graph distance measures, such as the shortest path distance or distance measures that take information beyond the graph structure into account. We demonstrate a speedup of several orders of magnitude when predicting word similarity by vector operations on our embeddings, as opposed to directly computing the respective path-based measures, while outperforming various other graph embeddings on semantic similarity and word sense disambiguation tasks.
TARGER: Neural Argument Mining at Your Fingertips
Artem Chernodub

Oleksiy Oliynyk

Philipp Heidenreich

Alexander Bondarenko

Matthias Hagen

Chris Biemann

Alexander Panchenko
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations
We present TARGER, an open-source neural argument mining framework for tagging arguments in free input texts and for keyword-based retrieval of arguments from an argument-tagged, web-scale corpus. The currently available models are pre-trained on three recent argument mining datasets and enable the use of neural argument mining without any reproducibility effort on the user’s side. The open-source code ensures portability to other domains and use cases.
Learning Graph Embeddings from WordNet-based Similarity Measures
Andrey Kutuzov

Mohammad Dorgham

Oleksiy Oliynyk

Chris Biemann

Alexander Panchenko
Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)
We present path2vec, a new approach for learning graph embeddings that relies on structural measures of pairwise node similarities. The model learns representations for nodes in a dense space that approximate a given user-defined graph distance measure, such as the shortest path distance or distance measures that take information beyond the graph structure into account. Evaluation of the proposed model on semantic similarity and word sense disambiguation tasks, using various WordNet-based similarity measures, shows that our approach yields competitive results, outperforming strong graph embedding baselines. The model is computationally efficient, being orders of magnitude faster than the direct computation of graph-based distances.
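The core idea behind such models can be illustrated with a minimal sketch: learn node vectors whose dot products match precomputed pairwise similarity scores. This is an assumption-laden simplification of the path2vec objective (it omits the paper's additional regularization terms and batching); the function name and the toy graph are invented for illustration.

```python
import numpy as np

def fit_graph_embeddings(pairs, n_nodes, dim=8, lr=0.05, epochs=1000, seed=0):
    """Fit node vectors whose dot products approximate given pairwise
    similarity scores, via SGD on a squared-error loss (a simplified
    sketch of a path2vec-style objective, not the full model)."""
    rng = np.random.default_rng(seed)
    emb = rng.normal(scale=0.3, size=(n_nodes, dim))
    for _ in range(epochs):
        for u, v, sim in pairs:
            err = emb[u] @ emb[v] - sim      # prediction error for this pair
            grad_u = err * emb[v]            # gradients of 0.5 * err**2
            grad_v = err * emb[u]
            emb[u] -= lr * grad_u
            emb[v] -= lr * grad_v
    return emb

# Toy target: similarity 1 / (1 + shortest-path distance) on a path graph 0-1-2-3.
pairs = [(0, 1, 0.5), (1, 2, 0.5), (2, 3, 0.5),
         (0, 2, 1 / 3), (1, 3, 1 / 3), (0, 3, 0.25)]
emb = fit_graph_embeddings(pairs, n_nodes=4)

# After training, a single dot product stands in for a graph traversal,
# which is where the orders-of-magnitude speedup comes from:
approx = float(emb[0] @ emb[3])  # close to the target 0.25
```

The speedup claim follows directly from this setup: once trained, answering a distance query costs one O(dim) dot product instead of a path search over the graph.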