Jointly learning heterogeneous features for RGB-D activity recognition

Jian-Fang Hu, Wei-Shi Zheng, Jianhuang Lai, Jianguo Zhang

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)

394 Citations (Scopus)

Abstract

In this paper, we focus on heterogeneous feature learning for RGB-D activity recognition. Considering that features from different channels may share similar hidden structures, we propose a joint learning model that simultaneously explores the shared and feature-specific components as an instance of heterogeneous multi-task learning. In a unified framework, the proposed model is capable of: 1) jointly mining a set of subspaces with the same dimensionality to enable multi-task classifier learning, and 2) quantifying the shared and feature-specific components of the features in those subspaces. To train the joint model efficiently, a three-step iterative optimization algorithm is proposed, followed by two inference models. Extensive results on three activity datasets demonstrate the efficacy of the proposed method. In addition, a novel RGB-D activity dataset focusing on human-object interaction is collected for evaluating the proposed method; it will be made available to the community for RGB-D activity benchmarking and analysis.
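To make the idea concrete, the following is a minimal, hypothetical sketch of the decomposition the abstract describes: each feature channel (e.g. RGB and depth descriptors) is projected into a subspace of the same dimensionality, and the classifiers acting in that subspace are split into a shared component plus a channel-specific one, updated iteratively. All names, dimensions, losses, and regularisation weights below are illustrative assumptions, not the authors' exact formulation or their three-step solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two heterogeneous channels for the same N samples, one-hot labels.
N, C, d_sub = 200, 5, 16
X = {"rgb": rng.normal(size=(N, 64)), "depth": rng.normal(size=(N, 48))}
Y = np.eye(C)[rng.integers(0, C, size=N)]

P  = {k: 0.01 * rng.normal(size=(X[k].shape[1], d_sub)) for k in X}  # per-channel projections
V  = {k: np.zeros((d_sub, C)) for k in X}                            # channel-specific classifiers
U0 = np.zeros((d_sub, C))                                            # shared classifier component

lam_p, lam_0, lam_v, lr = 1e-2, 1e-3, 1e-1, 5e-3

def predict(X, P, U0, V):
    """Sum the per-channel scores Z_k (U0 + V_k), with Z_k = X_k P_k."""
    return sum(X[k] @ P[k] @ (U0 + V[k]) for k in X)

# Iterative gradient updates on each block of parameters, loosely echoing
# the alternating optimisation mentioned in the abstract.
for it in range(300):
    R = (predict(X, P, U0, V) - Y) / N              # scaled residual, (N, C)
    Z = {k: X[k] @ P[k] for k in X}                 # subspace embeddings
    # Shared classifier component.
    U0 -= lr * (sum(Z[k].T @ R for k in X) + lam_0 * U0)
    # Channel-specific components (heavier penalty pushes common structure
    # into the shared part).
    for k in X:
        V[k] -= lr * (Z[k].T @ R + lam_v * V[k])
    # Per-channel projections into the common-dimensional subspace.
    for k in X:
        P[k] -= lr * (X[k].T @ (R @ (U0 + V[k]).T) + lam_p * P[k])

print("training MSE:", float(np.mean((predict(X, P, U0, V) - Y) ** 2)))
```

The asymmetric penalties (small on U0, large on V_k) are one simple way to make the shared/specific split identifiable; the paper's actual model imposes its own structural constraints and inference procedures, which this sketch does not reproduce.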
Original language: English
Title of host publication: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher: IEEE
Pages: 5344-5352
ISBN (Electronic): 9781467369640
ISBN (Print): 9781467369657
DOIs
Publication status: Published - 15 Oct 2015
Event: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) - Boston, MA, USA
Duration: 7 Jun 2015 – 12 Jun 2015

Publication series

Name
ISSN (Print): 1063-6919
ISSN (Electronic): 1063-6919

Conference

Conference: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Period: 7/06/15 – 12/06/15
