- Wertheimer Kolloquium
Reading Scenes: How Semantic, Syntactic, and Episodic Scene Memory Guide Attention in Real-World Environments.
Prof. Dr. Melissa Le-Hoa Võ
Scene Grammar Lab, Goethe Universität Frankfurt
Date: 14 January 2015 | Time: 4 pm c.t. | Location: Campus Westend, PEG 1.G135
The sources that guide attention are manifold and interact in complex ways. Internal goals, task rules, and salient external stimuli have been shown to be among the strongest sources of attentional control. But what guides attention in complex, real-world environments?
Following Wertheimer’s Gestalt ideas, I will argue that a scene is more than the sum of its objects. That is, attention during scene viewing is controlled mainly by generic knowledge about the meaningful composition of the objects that make up a scene. Unlike arbitrary target objects placed in random arrays of distractors, objects in naturalistic scenes are arranged in a highly rule-governed manner. Thus, scene priors, i.e. expectations regarding which objects (scene semantics) belong where (scene syntax) within a scene, strongly guide attention.
Violating such semantic and syntactic scene priors elicits differential ERP responses similar to those observed in sentence processing, which may point to a common mechanism for processing meaning and structure across a wide variety of cognitive tasks. While generic scene priors tend to guide attention especially in new environments, one would assume that episodic memory (having seen this particular scene before) should take over attentional control in familiar environments.
Countering this intuition, I will present some of our latest data illustrating the differences in memory representations generated as a function of looking at versus looking for objects in scenes. Together, these data show that top-down control of attention during scene viewing is the product of complex interactions between semantic, syntactic, and episodic scene memory.