TY - UNPB
T1 - Going the Extra Mile in Face Image Quality Assessment
T2 - A Novel Database and Model
AU - Su, Shaolin
AU - Lin, Hanhe
AU - Hosu, Vlad
AU - Wiedemann, Oliver
AU - Sun, Jinqiu
AU - Zhu, Yu
AU - Liu, Hantao
AU - Zhang, Yanning
AU - Saupe, Dietmar
PY - 2022/7/11
Y1 - 2022/7/11
AB - Computer vision models for image quality assessment (IQA) predict the subjective effect of generic image degradations such as artefacts, blurs, bad exposure, or colors. The scarcity of face images in existing IQA datasets (below 10%) limits the precision of IQA required for accurately filtering low-quality face images or guiding CV models for face image processing, such as super-resolution, image enhancement, and generation. In this paper, we first introduce the largest annotated IQA database to date, containing 20,000 human faces (an order of magnitude larger than all existing rated datasets of faces) of diverse individuals in highly varied circumstances, quality levels, and distortion types. Based on this database, we further propose a novel deep learning model that re-purposes generative prior features for predicting subjective face quality. By exploiting the rich statistics encoded in well-trained generative models, we extract generative prior information from the images and use it as latent references to facilitate the blind IQA task. Experimental results demonstrate the superior prediction accuracy of the proposed model on the face IQA task.
KW - cs.CV
KW - Computer Vision and Pattern Recognition
U2 - 10.48550/arXiv.2207.04904
DO - 10.48550/arXiv.2207.04904
M3 - Preprint
BT - Going the Extra Mile in Face Image Quality Assessment
PB - arXiv
CY - Ithaca, NY
ER -