This paper introduces a publicly available dataset of complex activities that involve manipulative gestures. The dataset captures people preparing mixed salads and contains more than 4.5 hours of accelerometer and RGB-D video data, detailed annotations, and an evaluation protocol for comparison of activity recognition algorithms. Providing baseline results for one possible activity recognition task, this paper further investigates modality fusion methods at different stages of the recognition pipeline: (i) prior to feature extraction through accelerometer localization, (ii) at feature level via feature concatenation, and (iii) at classification level by combining classifier outputs. Empirical evaluation shows that fusing information captured by these sensor types can considerably improve recognition performance.
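The two later fusion stages named in the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration with synthetic data; the feature dimensions, classifier scores, and the equal-weight averaging rule are assumptions for demonstration, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-frame features from two modalities (dimensions assumed)
accel_feat = rng.normal(size=(100, 8))    # accelerometer features
video_feat = rng.normal(size=(100, 16))   # RGB-D video features

# (ii) Feature-level fusion: concatenate per-frame feature vectors
fused = np.concatenate([accel_feat, video_feat], axis=1)  # shape (100, 24)

# (iii) Classification-level fusion: combine per-class probability
# outputs of two independently trained classifiers (dummy scores here)
def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

scores_accel = softmax(rng.normal(size=(100, 5)))  # accelerometer classifier
scores_video = softmax(rng.normal(size=(100, 5)))  # video classifier
combined = 0.5 * scores_accel + 0.5 * scores_video  # simple average rule
labels = combined.argmax(axis=1)                    # fused per-frame labels
```

A concatenated feature vector lets a single classifier learn cross-modal correlations, whereas averaging classifier outputs keeps the modalities independent until the final decision.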
Title of host publication: UbiComp 2013 - Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing
Number of pages: 10
Publication status: Published - 1 Jan 2013
Multi-Modal Recognition of Manipulation Activities through Visual Accelerometer Tracking, Relational Histograms, and User-Adaptation
Author: Stein, S., 2014
Supervisor: McKenna, S. (Supervisor)
Student thesis: Doctoral Thesis › Doctor of Philosophy