CUDAS: Distortion-Aware Saliency Benchmark

Xin Zhao, Jianxun Lou (Lead / Corresponding author), Xinbo Wu (Lead / Corresponding author), Yingying Wu, Lucie Leveque, Xiaochang Liu, Pengfei Guo, Yipeng Qin, Hanhe Lin, Dietmar Saupe, Hantao Liu

Research output: Contribution to journal › Article › peer-review


Abstract

Visual saliency prediction remains an academic challenge due to the diversity and complexity of natural scenes and the scarcity of eye-movement data recording where people look in images. In many practical applications, digital images are inevitably subject to distortions, such as those introduced by acquisition, editing, compression, or transmission. A great deal of attention has been paid to predicting the saliency of distortion-free pristine images, but little to understanding the impact of visual distortions on saliency prediction. In this paper, we first present the CUDAS database, a new distortion-aware saliency benchmark for which eye-tracking data were collected on 60 pristine images and their 540 corresponding distorted versions. We then conduct a statistical evaluation to reveal the behaviour of state-of-the-art saliency prediction models on distorted images and provide insights for building an effective model for distortion-aware saliency prediction. The new database is made publicly available to the research community.

Original language: English
Pages (from-to): 58025-58036
Number of pages: 12
Journal: IEEE Access
Volume: 11
DOIs
Publication status: Published - 6 Jun 2023

Keywords

  • Benchmark testing
  • Computational modeling
  • Databases
  • Deep learning
  • Distortion
  • Eye-tracking
  • Gaze tracking
  • Graphics processing units
  • Image quality
  • Saliency
  • Visualization

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering

