Visualizing image collections using high-entropy layout distributions

Ruixuan Wang, Stephen J. McKenna, Junwei Han, Annette A. Ward

    Research output: Contribution to journal › Article › peer-review

    8 Citations (Scopus)


    Mechanisms for visualizing image collections are essential for browsing and exploring their content. This is especially true when metadata are ineffective in retrieving items due to the sparsity or esoteric nature of text. An obvious approach is to automatically lay out sets of images in ways that reflect relationships between the items. However, dimensionality reduction methods that map from high-dimensional content-based feature distributions to low-dimensional layout spaces for visualization often result in displays in which many items are occluded whilst large regions are empty or only sparsely populated. Furthermore, such methods do not consider the shape of the region of layout space to be populated. This paper proposes a method, high-entropy layout distributions, that addresses these limitations. Layout distributions with low differential entropy are penalized. An optimization strategy is presented that finds layouts that have high differential entropy and that reflect inter-image similarities. Efficient optimization is obtained using a step-size constraint and an approximation to quadratic (Renyi) entropy. Two image archives of cultural and commercial importance are used to illustrate and evaluate the method. A comparison with related methods demonstrates its effectiveness.
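    The quadratic (Renyi) entropy mentioned above has a convenient closed-form Parzen-window estimator, which is presumably what makes it attractive as an optimization target. The following is a minimal sketch of such an estimator for a 2-D layout, assuming an isotropic Gaussian kernel with bandwidth `sigma`; the function name and parameters are illustrative, not the paper's implementation.

    ```python
    import numpy as np

    def quadratic_renyi_entropy(Y, sigma=0.1):
        """Parzen-window estimate of quadratic (Renyi) entropy of layout points Y.

        H2 = -log( (1/N^2) * sum_ij G(y_i - y_j; 2*sigma^2 I) ),
        where G is a Gaussian kernel. Convolving two Gaussian windows of
        variance sigma^2 gives one of variance 2*sigma^2, so the integral
        of the squared density estimate reduces to this pairwise sum.
        Higher H2 indicates points spread more evenly over the layout space.
        """
        N, d = Y.shape
        diff = Y[:, None, :] - Y[None, :, :]       # pairwise differences, (N, N, d)
        sq = np.sum(diff ** 2, axis=-1)            # squared Euclidean distances
        s2 = 2.0 * sigma ** 2                      # variance of the convolved kernel
        G = np.exp(-sq / (2.0 * s2)) / (2.0 * np.pi * s2) ** (d / 2)
        return -np.log(G.mean())
    ```

    A quick sanity check of the intended behaviour: a layout spread uniformly over the unit square scores a higher entropy than one with all points clumped near the origin, so penalizing low values pushes items apart and reduces occlusion.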

    Original language: English
    Pages (from-to): 803-813
    Number of pages: 11
    Journal: IEEE Transactions on Multimedia
    Issue number: 8
    Publication status: Published - Dec 2010


    • Content-based browsing
    • High-entropy layout distribution (HELD)
    • Image layouts
    • Manifold learning
    • Renyi entropy
    • Nonlinear dimensionality reduction

