Deep learning vs. traditional algorithms for saliency prediction of distorted images

Xin Zhao, Hanhe Lin, Pengfei Guo, Dietmar Saupe, Hantao Liu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Saliency has been widely studied in relation to image quality assessment (IQA). The optimal use of saliency in IQA metrics, however, is nontrivial and largely depends on whether saliency can be accurately predicted for images containing various distortions. Although tremendous progress has been made in saliency modelling, very little is known about whether and to what extent state-of-the-art methods are beneficial for saliency prediction of distorted images. In this paper, we analyse the ability of deep learning versus traditional algorithms to predict saliency, based on an IQA-aware saliency benchmark, the SIQ288 database. Building on the variations in model performance, we make recommendations for model selection in IQA applications.

Original language: English
Title of host publication: 2020 IEEE International Conference on Image Processing (ICIP)
Subtitle of host publication: Proceedings
Publisher: IEEE
Pages: 156-160
Number of pages: 5
ISBN (Electronic): 978-1-7281-6395-6
ISBN (Print): 978-1-7281-6396-3
DOIs
Publication status: Published - Oct 2020
Event: 2020 IEEE International Conference on Image Processing - Virtual, Abu Dhabi, United Arab Emirates
Duration: 25 Sep 2020 – 28 Sep 2020

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
Volume: 2020-October
ISSN (Print): 1522-4880

Conference

Conference: 2020 IEEE International Conference on Image Processing
Abbreviated title: ICIP
Country/Territory: United Arab Emirates
City: Virtual, Abu Dhabi
Period: 25/09/20 – 28/09/20

Keywords

  • distortion
  • eye tracking
  • image quality assessment
  • saliency
  • statistical analysis
