How do we predict what will happen next as a scene unfolds?
Understanding and interacting with the visual world requires more than identifying "what is where." To engage with a scene, we must understand its physical structure: how the constituent objects rest on and support one another, how much force would be required to move them, and how they will behave when they fall, roll, or collide. Work in our lab seeks to characterize people's physical inference abilities and identify the neural resources we use to interpret and predict the physical events in a scene.
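One way to make this kind of prediction concrete is to frame it as forward simulation: given an object's current state, step a simple physics model forward in time to predict what happens next. The sketch below is purely illustrative (it is not the lab's model or code, and the scenario and function names are invented for this example); it predicts where a ball rolling off a table will land.

```python
# Illustrative sketch only -- a toy forward-simulation model, not the
# lab's actual method. Predicts where a ball that rolls off a table
# will land by stepping projectile motion forward in small time steps.

G = 9.81  # gravitational acceleration, m/s^2

def predict_landing_x(x0, y0, vx, dt=0.001):
    """Simulate a ball leaving the table edge at (x0, y0) with
    horizontal speed vx until it reaches the floor (y = 0), and
    return the predicted horizontal landing position."""
    x, y, vy = x0, y0, 0.0
    while y > 0:
        vy -= G * dt   # gravity accelerates the ball downward
        x += vx * dt   # horizontal motion is unaffected
        y += vy * dt
    return x

# A ball rolling off a 1 m high table at 2 m/s lands roughly 0.9 m out.
print(predict_landing_x(0.0, 1.0, 2.0))
```

The same simulate-and-read-out scheme extends naturally to richer predictions (toppling, rolling, collisions) by swapping in a fuller physics model.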
In Fischer, Mikhael, Tenenbaum, & Kanwisher (PNAS, 2016), we uncovered a set of brain regions that are recruited when people observe physical events and predict their outcomes. This "physics engine" in the brain falls within regions previously implicated in action planning and tool use, suggesting an intimate connection between physical scene understanding and motor planning.
How do we isolate the key contents of a scene from surrounding distractions?
At a given moment, only a fraction of the vast information in a scene is crucial to the task at hand; the remainder can be distracting. A fundamental goal in scene understanding is therefore to isolate the key objects from the clutter, suppressing the influence of irrelevant input on perception. Our lab studies how the brain’s attentional system filters out distractors, and how this filtering enhances the processing of important, attended objects.
Physical properties like hardness, weight, and slipperiness are often inaccessible to direct visual inspection, and yet they play a crucial role in our interactions with everyday objects. In Guo, Courtney, & Fischer (JEP: General, 2020), we found that people can implicitly leverage their knowledge of everyday objects' physical properties to search more efficiently through a cluttered scene, providing the first evidence that physical properties can shape how we attend to the visual world.