Optimal feature integration in visual search

Benjamin T. Vincent, Roland J. Baddeley, Tom Troscianko, Iain D. Gilchrist

    Research output: Contribution to journal › Article › peer-review

    33 Citations (Scopus)


    Despite embodying fundamentally different assumptions about attentional allocation, a wide range of popular models of attention include a max-of-outputs mechanism for selection. Within these models, attention is directed to the item with the most extreme value along a perceptual dimension via, for example, a winner-take-all mechanism. From the detection-theoretic approach, this MAX-observer can be optimal in specific situations; however, under distracter heterogeneity manipulations or in natural visual scenes this is not always the case. We derive a Bayesian maximum a posteriori (MAP) observer, which is optimal in both these situations. While it retains a form of the max-of-outputs mechanism, it operates on a maximum a posteriori probability dimension instead of a perceptual dimension. To test this model we investigated human visual search performance using a yes/no procedure while adding external orientation uncertainty to distracter elements. The results are much better fitted by the predictions of a MAP observer than a MAX observer. We conclude that a max-like mechanism may well underlie the allocation of visual attention, but that it operates on a probability dimension, not a perceptual dimension.
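    The contrast between the two observers can be made concrete with a small simulation. This is an illustrative sketch, not the authors' implementation; all parameter values (set size, noise levels, target signal) are hypothetical. A MAX observer responds "yes" when the most extreme perceptual value exceeds a criterion, while a MAP observer responds "yes" when the posterior odds of target presence, pooled across items, favour "present". With external distracter heterogeneity, extreme values are often distracters, so the MAP observer, which can discount them via the likelihood ratio, should outperform the MAX observer even when the latter is given its best possible criterion.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N_ITEMS = 8      # display set size (assumed)
    N_TRIALS = 20000 # trials per condition
    MU_T = 2.0       # target signal along the perceptual dimension (assumed)
    SIG_INT = 0.5    # internal (sensory) noise
    SIG_EXT = 2.0    # external distracter heterogeneity

    def simulate(present):
        # distracter responses: external orientation jitter plus internal noise
        x = rng.normal(0.0, np.hypot(SIG_INT, SIG_EXT), (N_TRIALS, N_ITEMS))
        if present:
            # on target-present trials, one item carries the signal
            x[:, 0] = rng.normal(MU_T, SIG_INT, N_TRIALS)
        return x

    def gauss(x, mu, sig):
        return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

    def map_yes(x):
        # likelihood ratio for "target present at one of N positions" vs "absent":
        # the mean of the per-item likelihood ratios; with equal priors,
        # respond "yes" when it exceeds 1
        lr_item = gauss(x, MU_T, SIG_INT) / gauss(x, 0.0, np.hypot(SIG_INT, SIG_EXT))
        return lr_item.mean(axis=1) > 1.0

    xp, xa = simulate(True), simulate(False)

    # MAP observer accuracy (hits and correct rejections, equal trial numbers)
    acc_map = 0.5 * (map_yes(xp).mean() + (~map_yes(xa)).mean())

    # MAX observer: "yes" if the largest item response exceeds a criterion;
    # sweep the criterion and keep the best, which is generous to MAX
    mp, ma = xp.max(axis=1), xa.max(axis=1)
    accs = [0.5 * ((mp > c).mean() + (ma <= c).mean())
            for c in np.linspace(-2.0, 6.0, 161)]
    acc_max = max(accs)

    print(f"MAX observer accuracy: {acc_max:.3f}")
    print(f"MAP observer accuracy: {acc_map:.3f}")
    ```

    Under homogeneous distracters (SIG_EXT near zero) the two rules make nearly identical predictions, which is why heterogeneity manipulations are the diagnostic case.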
    Original language: English
    Article number: 15
    Journal: Journal of Vision
    Issue number: 5
    Publication status: Published - 15 May 2009
