Vision and the representation of the surroundings in spatial memory

Benjamin W. Tatler, Michael F. Land

    Research output: Contribution to journal › Review article › peer-review

    80 Citations (Scopus)

    Abstract

    One of the paradoxes of vision is that the world as it appears to us and the image on the retina at any moment are not much like each other. The visual world seems to be extensive and continuous across time. However, the manner in which we sample the visual environment is neither extensive nor continuous. How does the brain reconcile these differences? Here, we consider existing evidence from both static and dynamic viewing paradigms together with the logical requirements of any representational scheme that would be able to support active behaviour. While static scene viewing paradigms favour extensive, but perhaps abstracted, memory representations, dynamic settings suggest sparser and task-selective representation. We suggest that in dynamic settings, where movement within extended environments is required to complete a task, visual input combines with egocentric and allocentric representations to allow efficient behaviour. The egocentric model serves as a coding scheme in which actions can be planned, but also offers a potential means of providing the perceptual stability that we experience.

    Original language: English
    Pages (from-to): 596-610
    Number of pages: 15
    Journal: Philosophical Transactions of the Royal Society B - Biological Sciences
    Volume: 366
    Issue number: 1564
    Publication status: Published - 27 Feb 2011
