Abstract
Focused interaction occurs when co-present individuals, having a mutual focus of attention, interact by establishing face-to-face engagement and direct conversation. Face-to-face engagement is often not maintained throughout the entirety of a focused interaction. In this paper, we present an online method for automatic classification of unconstrained egocentric (first-person perspective) videos into segments having no focused interaction, focused interaction when the camera wearer is stationary and focused interaction when the camera wearer is moving. We extract features from both audio and video data streams and perform temporal segmentation by using support vector machines with linear and non-linear kernels. We provide empirical evidence that fusion of visual face track scores, camera motion profile and audio voice activity scores is an effective combination for focused interaction classification.
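The abstract describes feature-level fusion of three modality streams followed by SVM classification into three segment classes. A minimal sketch of that idea (not the authors' code; feature names, dimensions and the synthetic data are illustrative assumptions) using scikit-learn:

```python
# Sketch of the fusion-and-classify scheme described in the abstract:
# per-frame features from three modalities are concatenated and an SVM
# assigns one of three focused-interaction labels. All data is synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_frames = 300

# Hypothetical per-frame features for each modality
face_track_score = rng.random((n_frames, 1))  # visual face track confidence
camera_motion = rng.random((n_frames, 2))     # camera motion profile
voice_activity = rng.random((n_frames, 1))    # audio voice activity score

# Feature-level fusion: concatenate the modality streams
X = np.hstack([face_track_score, camera_motion, voice_activity])

# 0 = no interaction, 1 = interaction (stationary), 2 = interaction (moving)
y = rng.integers(0, 3, size=n_frames)

# Linear kernel shown; the paper also evaluates non-linear kernels
clf = SVC(kernel="linear").fit(X, y)
pred = clf.predict(X)
```

A temporal-segmentation step would then group consecutive frames sharing the same predicted label into segments; that post-processing is omitted here.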
Original language | English |
---|---|
Title of host publication | 2017 IEEE International Conference on Computer Vision Workshop (ICCVW) |
Publisher | IEEE |
Pages | 2322-2330 |
Number of pages | 9 |
ISBN (Electronic) | 9781538610343 |
ISBN (Print) | 9781538610350 |
DOIs | |
Publication status | Published - 23 Jan 2018 |
Event | IEEE International Conference on Computer Vision Workshops, Venice Convention Centre, Venice, Italy. Duration: 22 Oct 2017 → 29 Oct 2017. http://iccv2017.thecvf.com/ |
Publication series
Name | Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017 |
---|---|
Volume | 2018-January |
Conference
Conference | IEEE International Conference on Computer Vision Workshops |
---|---|
Abbreviated title | ICCV 2017 |
Country/Territory | Italy |
City | Venice |
Period | 22/10/17 → 29/10/17 |
Internet address | http://iccv2017.thecvf.com/ |
Keywords
- Cameras
- Face
- Feature extraction
- Tracking
- Visualization
- Legged locomotion
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition
- Computer Science Applications
Projects
- ACE-LP - Augmenting Communication Using Environmental Data to Drive Language Prediction (Joint with University of Cambridge)
Black, R. (Investigator), McKenna, S. (Investigator), Waller, A. (Investigator) & Zhang, J. (Investigator)
Engineering and Physical Sciences Research Council
1/02/16 → 31/01/20
Project: Research
Datasets
- Multimodal Focused Interaction Dataset
Bano, S. (Creator) & McKenna, S. (Creator), University of Dundee, Jun 2018
DOI: 10.15132/10000134, http://cvip.computing.dundee.ac.uk/datasets/focusedinteraction/
Dataset