Automatically classifying the retinal blood vessels that appear in fundus camera images into arterioles and venules can be problematic due to anatomical variation between individuals as well as differences in image quality, contrast and brightness. Using the most dominant features for the retinal vessel types in each image, rather than predefining the set of characteristic features prior to classification, may achieve better performance. In this paper, we present a novel approach to classifying retinal vessels extracted from fundus camera images which combines Orthogonal Locality Preserving Projections (OLPP) for feature extraction with a Gaussian Mixture Model with Expectation-Maximization (GMM-EM) unsupervised classifier. The classification rate with 47 features (the largest dimension tested) using OLPP was 90.56% on our own ORCADES dataset and 86.7% on the publicly available DRIVE dataset.
- Feature extraction
- Fundus images
- Orthogonal locality preserving projections (OLPP)
- Vessel classification
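The pipeline described in the abstract (dimensionality reduction of per-vessel features, followed by unsupervised two-class labelling with a GMM fitted by EM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic feature matrix is invented, and PCA stands in for OLPP purely as a readily available linear projection, since OLPP is not part of scikit-learn.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 "vessel segments" with 47 features each,
# drawn from two well-separated Gaussian clusters (arteriole-like vs
# venule-like). Real features would come from the fundus image itself.
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 47)),
    rng.normal(loc=3.0, scale=1.0, size=(100, 47)),
])

# The paper uses OLPP to project onto the dominant features; PCA is used
# here only as a placeholder linear dimensionality reduction.
X_low = PCA(n_components=10, random_state=0).fit_transform(X)

# Unsupervised two-class labelling: a two-component Gaussian Mixture Model
# fitted by Expectation-Maximization assigns each segment to a cluster.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X_low)
```

With well-separated synthetic clusters the two GMM components recover the two groups; on real vessel data the cluster-to-class mapping (which component is "arteriole") would still have to be established, e.g. from known vessel properties.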
Relan, D., Ballerini, L., Trucco, E., & MacGillivray, T. (2018). Using Orthogonal Locality Preserving Projections to Find Dominant Features for Classifying Retinal Blood Vessels. Multimedia Tools and Applications, 78(10), 12783-12803. https://doi.org/10.1007/s11042-018-6474-7