We present a framework for markerless articulated human motion tracking in multi-view sequences. We learn motion models of common actions in a low-dimensional latent space using charting, a nonlinear dimensionality reduction tool that automatically estimates the dimension of the latent space and keeps similar poses close together within it. Tracking takes place in the latent space using an efficient modified version of particle swarm optimisation. The observed image data are multi-view silhouettes, represented by vector-quantized shape contexts. Multivariate relevance vector machines are used to learn the mapping from the action manifold to the shape space. Tracking results on walking, punching, posing and praying sequences demonstrate the accuracy and efficiency of our approach.
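The tracker searches the learned latent space with a modified particle swarm optimiser. As a rough illustration of the underlying technique only (not the paper's modified variant), a plain PSO minimising a generic objective over a low-dimensional space might look like the following sketch; `pso_minimise` and all parameter values are illustrative assumptions:

```python
import random

def pso_minimise(objective, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Standard particle swarm optimisation minimising `objective`
    over a `dim`-dimensional box. Illustrative only."""
    lo, hi = bounds
    # Initialise particle positions randomly and velocities at zero.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [objective(p) for p in pos]          # per-particle best values
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the tracking setting, the objective would score how well the pose decoded from a latent point explains the observed multi-view silhouettes; here any smooth function (e.g. a sphere function) can stand in for it.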
Number of pages: 11
Publication status: Published - 2010
Event: 21st British Machine Vision Conference, BMVC 2010, Aberystwyth, United Kingdom, 31 Aug 2010 – 3 Sep 2010