Global-Local Temporal Saliency Action Prediction

Shaofan Lai, Wei-Shi Zheng (Lead / Corresponding author), Jian-Fang Hu, Jianguo Zhang

Research output: Contribution to journal › Article › peer-review

26 Citations (Scopus)
542 Downloads (Pure)


Action prediction on a partially observed action sequence is a very challenging task. To address this challenge, we first design a global-local distance model, where a global temporal distance compares subsequences as a whole and a local temporal distance focuses on individual segments. Our distance model introduces a temporal saliency for each segment to adapt its contribution. Finally, a global-local temporal action prediction model is formulated to jointly learn and fuse these two types of distances. Such a prediction model is capable of recognizing the action of 1) an on-going sequence and 2) a sequence with arbitrary frames missing between the beginning and the end (known as gap-filling). Our proposed model is tested and compared to related action prediction models on the BIT, UCF11 and HMDB datasets. The results demonstrate the effectiveness of our proposal. In particular, we show the benefit of our model in predicting unseen action types and its advantage in addressing the gap-filling problem compared to recently developed action prediction models.
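The abstract's global-local distance can be illustrated with a minimal sketch. This is not the paper's formulation: the feature representation, Euclidean distances, the saliency weights, and the fusion weight `alpha` are all hypothetical stand-ins (the paper learns the saliency and fusion jointly), shown only to make the global-versus-local split concrete.

```python
import numpy as np

def global_local_distance(x_segs, y_segs, saliency, alpha=0.5):
    """Illustrative global-local distance between two sequences,
    each given as an array of per-segment feature vectors.

    `saliency` weights each segment's local distance; here it is a
    fixed hypothetical vector, whereas the paper learns temporal
    saliency jointly with the prediction model.
    """
    x_segs = np.asarray(x_segs, dtype=float)
    y_segs = np.asarray(y_segs, dtype=float)
    saliency = np.asarray(saliency, dtype=float)

    # Global temporal distance: compare the subsequences as a whole
    # (here: Euclidean distance between mean-pooled segment features).
    d_global = np.linalg.norm(x_segs.mean(axis=0) - y_segs.mean(axis=0))

    # Local temporal distance: per-segment distances, each weighted
    # by that segment's temporal saliency.
    per_segment = np.linalg.norm(x_segs - y_segs, axis=1)
    d_local = float(np.dot(saliency, per_segment))

    # Fuse the two distances; alpha is a hypothetical trade-off weight.
    return alpha * d_global + (1.0 - alpha) * d_local
```

Identical sequences yield a distance of zero, and raising a segment's saliency makes mismatches in that segment dominate the local term, which is the mechanism by which each segment "adapts its contribution".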
Original language: English
Pages (from-to): 1-14
Number of pages: 14
Journal: IEEE Transactions on Image Processing
Issue number: 99
Early online date: 11 Sept 2017
Publication status: Published - 11 Sept 2017


  • Action prediction
  • Gap-filling


