Content-based image retrieval (CBIR) systems often incorporate a relevance feedback mechanism in which retrieval is adapted based on users identifying images as relevant or irrelevant. Such relevance decisions are often assumed to be category-based. However, forcing a user to decide upon the category membership of an image, even when unfamiliar with a database and irrespective of context, is restrictive. An alternative is to obtain user feedback in the form of relative similarity judgments. The ability of a user to provide meaningful feedback depends on the interface that displays retrieved images and facilitates the feedback. Similarity-based 2D layouts provide context and can enable more efficient visual search. Motivated by these observations, this study describes and evaluates an interactive image browsing and retrieval approach based on relative similarity feedback obtained from 2D image layouts. It incorporates online maximal-margin learning to adapt the image similarity metric used to perform retrieval. A user starts a session by browsing a collection of images displayed in a 2D layout. The user may select a query image perceived to be similar to an envisioned target image. A set of images similar to the query is then returned. The user can then provide relative similarity feedback and/or update the query image to obtain a new set of images. Algorithms for CBIR are often characterised empirically by simulating usage based on pre-defined, fixed category labels, deeming retrieved results relevant if they share a category label with the query. In contrast, the purpose of the system in this study is to enable browsing and retrieval without predefined categories. Therefore, evaluation is performed in a target-based setting by quantifying the efficiency with which target images are retrieved given initial queries.
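To make the adaptation step concrete, the following is a minimal sketch of one way online maximal-margin learning from relative similarity judgments can be realised: a passive-aggressive update of a bilinear similarity matrix driven by triplet feedback of the form "image `a` is more similar to query `q` than image `b`". The function name, the bilinear parameterisation, and the aggressiveness parameter `C` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pa_similarity_update(W, q, a, b, C=0.1, margin=1.0):
    """One passive-aggressive maximal-margin update of the bilinear
    similarity S_W(x, y) = x^T W y, given a relative judgment that
    image `a` should be more similar to query `q` than image `b`.
    (Hypothetical sketch; not the paper's exact algorithm.)"""
    # Hinge loss on the margin between the two similarity scores.
    loss = max(0.0, margin - q @ W @ a + q @ W @ b)
    if loss > 0.0:
        # Gradient of the violated margin term w.r.t. W is q (a - b)^T.
        V = np.outer(q, a - b)
        # Step size: smallest change to W that closes the margin,
        # capped by C to limit the effect of any single judgment.
        tau = min(C, loss / (np.linalg.norm(V) ** 2 + 1e-12))
        W = W + tau * V
    return W
```

Starting from the identity matrix (plain dot-product similarity), each user judgment nudges `W` so that the preferred image scores higher against the query, leaving `W` unchanged whenever the margin constraint is already satisfied.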