Kalin Stefanov


A Multi-party Multi-modal Dataset for Focus of Visual Attention in Human-human and Human-robot Interaction
Kalin Stefanov | Jonas Beskow
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

This paper describes a data collection setup and a newly recorded dataset. The main purpose of this dataset is to explore patterns in the focus of visual attention of humans under three different conditions: two humans involved in task-based interaction with a robot; the same two humans involved in task-based interaction where the robot is replaced by a third human; and a free three-party human interaction. The dataset consists of two parts: 6 sessions with a duration of approximately 3 hours, and 9 sessions with a duration of approximately 4.5 hours. Both parts of the dataset are rich in modalities and recorded data streams: they include the streams of three Kinect v2 devices (color, depth, infrared, body and face data), three high-quality audio streams, three high-resolution GoPro video streams, touch data for the task-based interactions, and the system state of the robot. In addition, the second part of the dataset introduces the data streams of three Tobii Pro Glasses 2 eye trackers. The language of all interactions is English, and all data streams are spatially and temporally aligned.


The Tutorbot Corpus — A Corpus for Studying Tutoring Behaviour in Multiparty Face-to-Face Spoken Dialogue
Maria Koutsombogera | Samer Al Moubayed | Bajibabu Bollepalli | Ahmed Hussen Abdelaziz | Martin Johansson | José David Aguas Lopes | Jekaterina Novikova | Catharine Oertel | Kalin Stefanov | Gül Varol
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper describes a novel experimental setup that exploits state-of-the-art capture equipment to collect a multimodally rich, game-solving, collaborative multiparty dialogue corpus. The corpus is designed to support the development of a dialogue system platform for exploring verbal and nonverbal tutoring strategies in multiparty spoken interactions. The dialogue task centers on two participants engaged in a dialogue aiming to solve a card-ordering game. The participants were paired into teams based on their degree of extraversion as determined by a personality test. A tutor sits with the participants, helping them perform the task and organizing and balancing their interaction; the tutor's behavior was assessed by the participants after each interaction. The multimodal signals, captured and auto-synchronized by different audio-visual capture technologies, together with manual annotations of the tutor's behavior, constitute the Tutorbot corpus. This corpus is exploited to build a situated model of the interaction based on the participants' temporally changing state of attention, their conversational engagement and verbal dominance, and the correlation of these with the verbal and visual feedback and conversation-regulatory actions generated by the tutor.