Multimodal Focused Interaction Dataset

Description

Recording of daily-life experiences from a first-person perspective has become more prevalent with the increasing availability of wearable cameras and sensors. This dataset was captured during development of a system for automatic detection of social interactions in such data streams, in particular focused interactions, in which co-present individuals with a mutual focus of attention interact by establishing face-to-face engagement and direct conversation. Existing public datasets of social interaction captured from a first-person perspective tend to be limited in duration, number of people appearing, and continuity and variability of the recording.

The Focused Interaction Dataset includes video acquired with a shoulder-mounted GoPro Hero 4 camera, together with inertial sensor data, GPS data, and the output of a voice activity detector. It contains 377 minutes (566,000 video frames) of continuous multimodal recording captured over 19 sessions, with 17 conversational partners in 18 different indoor and outdoor locations. The sessions include periods in which the camera wearer is engaged in focused interactions, in unfocused interactions, and in no interaction. Annotations are provided for all focused and unfocused interactions over the complete duration of the dataset, and anonymised IDs are provided for 13 of the people involved in focused interactions.

Beyond the development of social interaction analysis, the dataset may be useful for applications such as activity detection, understanding of personal locations of interest, and person association.
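As a quick sanity check on the stated totals, the frame count and duration above imply a capture rate of roughly 25 frames per second, which is a standard GoPro recording mode. A minimal sketch of that arithmetic (all values taken from the description above):

```python
# Implied frame rate from the dataset's stated totals:
# 377 minutes of recording and 566,000 video frames.
total_minutes = 377
total_frames = 566_000

fps = total_frames / (total_minutes * 60)
print(f"Implied frame rate: {fps:.1f} fps")  # ~25.0 fps
```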
Date made available: Jun 2018
Publisher: University of Dundee
Temporal coverage: 8 Sep 2016 - 3 Feb 2017
Date of data production: 2017

Cite this

Bano, S. (Creator), McKenna, S. (Creator) (Jun 2018). Multimodal Focused Interaction Dataset. University of Dundee. Readme_fidataset_(.docx). https://doi.org/10.15132/10000134