Cross-scenario transfer person reidentification

Xiaojun Wang (Lead / Corresponding author), Wei-Shi Zheng, Xiang Li, Jianguo Zhang

Research output: Contribution to journal › Article › peer-review

61 Citations (Scopus)
571 Downloads (Pure)

Abstract

Person reidentification (Re-ID) matches images of the same person captured in disjoint camera views and at different times. To obtain a reliable similarity measurement between images, manually annotating a large amount of pairwise cross-camera-view person images is deemed necessary. However, this kind of annotation is both costly and impractical for efficiently deploying a Re-ID system to a completely new scenario, i.e., a new setting of nonoverlapping camera views between which person images are to be matched. To solve this problem, we consider utilizing existing person images captured in other scenarios to help the Re-ID system in a target (new) scenario, provided that a few samples are captured under the new scenario. More specifically, we tackle this problem by jointly learning the similarity measurements for Re-ID in different scenarios in an asymmetric way. To model the joint learning, we assume that the Re-ID models share a certain component across tasks. A distinct consideration in our multitask modeling is to extract a discriminant shared component that reduces the cross-task data overlap in the shared latent space during the joint learning, so as to enhance the target inter-class separation in the shared latent space. For this purpose, we propose to maximize the cross-task data discrepancy on the shared component during asymmetric multitask learning (MTL), along with maximizing the local inter-class variation and minimizing the local intra-class variation on all tasks. We call our proposed method constrained asymmetric multitask discriminant component analysis (cAMT-DCA). We show that cAMT-DCA can be solved in closed form by a simple eigen decomposition, avoiding the iterative learning used in most conventional MTL methods. The experimental results show that the proposed transfer model achieves a clear improvement over related nontransfer and general multitask person Re-ID models.
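
The abstract states that the objective combines scatter-style terms (local inter-/intra-class variation plus a cross-task discrepancy term) and is solved in closed form by an eigen decomposition. The sketch below is only an illustration of that general recipe under assumed, simplified scatter definitions; it is not the authors' cAMT-DCA formulation, which includes asymmetric task-specific transforms and constraints defined in the paper. The function names and the choice of discrepancy term are hypothetical.

```python
# Minimal sketch: a discriminant-component-style shared projection obtained by
# one generalized eigen decomposition. The scatter terms only mimic the
# ingredients named in the abstract; they are NOT the cAMT-DCA objective.
import numpy as np
from scipy.linalg import eigh


def class_scatters(X, y):
    """Within-class and between-class scatter matrices of features X (n x d)."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb


def shared_projection(X_src, y_src, X_tgt, y_tgt,
                      n_components=32, alpha=1.0, reg=1e-3):
    """Toy shared subspace: maximize inter-class scatter plus a cross-task
    discrepancy term, minimize intra-class scatter, via a single eigh call."""
    Sw_s, Sb_s = class_scatters(X_src, y_src)
    Sw_t, Sb_t = class_scatters(X_tgt, y_tgt)
    # Crude stand-in for "cross-task data discrepancy": scatter between the
    # two task means (an assumed choice, for illustration only).
    gap = (X_src.mean(axis=0) - X_tgt.mean(axis=0))[:, None]
    S_gap = gap @ gap.T
    S_max = Sb_s + Sb_t + alpha * S_gap                   # terms to maximize
    S_min = Sw_s + Sw_t + reg * np.eye(X_src.shape[1])    # terms to minimize
    # Closed-form solution: generalized eigenproblem S_max v = lambda * S_min v.
    evals, evecs = eigh(S_max, S_min)
    return evecs[:, ::-1][:, :n_components]               # top eigenvectors
```

The design point the abstract emphasizes carries over to this sketch: because the solution is a single eigen decomposition of fixed scatter matrices, no iterative optimization loop is needed.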
Original language: English
Pages (from-to): 1447-1460
Number of pages: 14
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 26
Issue number: 8
Early online date: 25 Jun 2015
DOIs
Publication status: Published - 5 Aug 2016

Keywords

  • Visual surveillance
  • Cross-scenario transfer
  • Person reidentification (Re-ID)
