Focused interaction occurs when co-present individuals, having a mutual focus of attention, interact by establishing face-to-face engagement and direct conversation. Face-to-face engagement is often not maintained throughout the entirety of a focused interaction. In this paper, we present an online method for automatically classifying unconstrained egocentric (first-person perspective) videos into segments with no focused interaction, focused interaction while the camera wearer is stationary, and focused interaction while the camera wearer is moving. We extract features from both the audio and video streams and perform temporal segmentation using support vector machines with linear and non-linear kernels. We provide empirical evidence that fusing visual face track scores, the camera motion profile and audio voice activity scores is an effective combination for focused interaction classification.
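The abstract describes fusing per-time-step feature streams (face track scores, camera motion profile, voice activity scores) and classifying them with an SVM. A minimal sketch of such a pipeline, using scikit-learn and synthetic data (the feature names, dimensions, and random values are illustrative assumptions, not the paper's actual implementation), might look like:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300  # number of time steps (synthetic)

# Hypothetical per-time-step scores for the three streams.
face_track = rng.random((n, 1))    # visual face track score
cam_motion = rng.random((n, 1))    # camera motion magnitude
voice_act = rng.random((n, 1))     # audio voice activity score

# Early fusion: concatenate the streams into one feature vector per step.
X = np.hstack([face_track, cam_motion, voice_act])

# Three classes: 0 = no focused interaction,
# 1 = focused interaction, wearer stationary,
# 2 = focused interaction, wearer moving. (Labels here are random.)
y = rng.integers(0, 3, size=n)

# SVM with a non-linear (RBF) kernel; swap kernel="linear" for the linear case.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
pred = clf.predict(X)  # per-time-step class labels
```

In practice the per-step predictions would then be smoothed into temporal segments; this sketch only shows the fused-feature classification step.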
|Title of host publication||2017 IEEE International Conference on Computer Vision Workshop (ICCVW)|
|Number of pages||9|
|Publication status||Published - 23 Jan 2018|
|Event||IEEE International Conference on Computer Vision Workshops - Venice Convention Centre, Venice, Italy|
Duration: 22 Oct 2017 → 29 Oct 2017
|Name||Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017|
|Conference||IEEE International Conference on Computer Vision Workshops|
|Abbreviated title||ICCV 2017|
|Period||22/10/17 → 29/10/17|
- Feature extraction
- Legged locomotion
|Title||Finding Time Together: Detection and Classification of Focused Interaction in Egocentric Video|