In real life, when judging a person's age from their face, we tend to look at the whole face first and then focus on important regions such as the eyes, before examining individual facial features like the nose or the mouth to reach a decision. Motivated by this, we propose a new framework for age estimation based on human face sub-regions. Each sub-network in our framework takes two input images: the global face and one vital sub-component. We then combine the predictions from the different sub-regions with a majority voting method, making full use of the complementary cues that different regions provide. We call our framework Multi-Region Network Prediction Ensemble (MRNPE) and evaluate it on two popular public datasets: MORPH Album II and the Cross-Age Celebrity Dataset (CACD). Experiments show that our method outperforms existing state-of-the-art age estimation methods by a significant margin: the Mean Absolute Error (MAE) drops from 3.03 to 2.73 years on MORPH Album II and from 4.79 to 4.40 years on CACD.
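To make the fusion step concrete, here is a minimal sketch of combining per-region age predictions by majority voting. This is an illustration only, not the paper's implementation: it assumes each sub-network outputs an integer age, and the tie-breaking rule (averaging tied ages) is a hypothetical choice.

```python
from collections import Counter

def majority_vote(region_predictions):
    """Combine integer age predictions from several region sub-networks.

    Hypothetical sketch: the most frequent predicted age wins;
    ties are broken by averaging the tied ages.
    """
    counts = Counter(region_predictions)
    ranked = counts.most_common()
    best_count = ranked[0][1]
    tied_ages = [age for age, c in ranked if c == best_count]
    return sum(tied_ages) / len(tied_ages)

# Example: three of four regions agree on age 30.
print(majority_vote([30, 30, 31, 29]))  # 30.0
```

In practice an ensemble like this is robust to one sub-region (e.g. an occluded mouth) producing an outlier, since the remaining regions outvote it.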
Publication status: Published - 2017
Event: British Machine Vision Conference (BMVC 2017), Imperial College London, London, United Kingdom
Duration: 4 Sep 2017 – 7 Sep 2017