Fascinating article by Daniel Yon in Aeon on how our brains see things based on our preformed expectations. f.sheikh
The Book of Days (1864) by the Scottish author Robert Chambers reports a curious legal case: in 1457 in the town of Lavegny, a sow and her piglets were charged and tried for the murder of a partially eaten small child. After much deliberation, the court condemned the sow to death for her part in the act, but acquitted the naive piglets, who were too young to appreciate the gravity of their crimes.
Subjecting a pig to a criminal trial seems perverse to modern eyes, since many of us believe that humans possess an awareness of actions and outcomes that separates us from other animals. While a grazing pig might not know what it is chewing, human beings are surely abreast of their actions and alert to their unfolding consequences. However, while our identities and our societies are built on this assumption of insight, psychology and neuroscience are beginning to reveal how difficult it is for our brains to monitor even our simplest interactions with the physical and social world. In the face of these obstacles, our brains rely on predictive mechanisms that align our experience with our expectations. While such alignments are often useful, they can cause our experiences to depart from objective reality – reducing the clear-cut insight that supposedly separates us from the Lavegny pigs.
One challenge that our brains face in monitoring our actions is the inherently ambiguous information they receive. We experience the world outside our heads through the veil of our sensory systems: the peripheral organs and nervous tissues that pick up and process different physical signals, such as light that hits the eyes or pressure on the skin. Though these circuits are remarkably complex, the sensory wetware of our brain possesses the weaknesses common to many biological systems: the wiring is not perfect, transmission is leaky, and the system is plagued by noise – much like how the crackle of a poorly tuned radio masks the real transmission.
But noise is not the only obstacle. Even if these circuits transmitted with perfect fidelity, our perceptual experience would still be incomplete. This is because the veil of our sensory apparatus picks up only the ‘shadows’ of objects in the outside world. To illustrate this, think about how our visual system works. When we look out on the world around us, we sample spatial patterns of light that bounce off different objects and land on the flat surface of the eye. This two-dimensional map of the world is preserved throughout the earliest parts of the visual brain, and forms the basis of what we see. But while this process is impressive, it leaves observers with the challenge of reconstructing the real three-dimensional world from the two-dimensional shadow that has been cast on its sensory surface.
Thinking about our own experience, it seems like this challenge isn’t too hard to solve. Most of us see the world in 3D. For example, when you look at your own hand, a particular 2D sensory shadow is cast on your eyes, and your brain successfully constructs a 3D image of a hand-shaped block of skin, flesh and bone. However, reconstructing a 3D object from a 2D shadow is what engineers call an ‘ill-posed problem’ – basically impossible to solve from the sampled data alone. This is because infinitely many different 3D objects could cast exactly the same shadow as the real hand. How does your brain pick out the right interpretation from all the possible contenders?
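The geometry behind this ‘ill-posed problem’ can be made concrete with a short sketch. This is an illustration of my own, not something from the article: under a simple pinhole model of the eye, a 3D point (x, y, z) projects to the 2D image coordinates (x/z, y/z), so every point along a single line of sight casts an identical ‘shadow’ – the 2D image alone cannot tell you the depth.

```python
# A minimal pinhole-projection sketch (illustrative; the function name
# and the sample points are my own choices, not from the article).

def project(point_3d):
    """Project a 3D point onto a 2D image plane at depth z = 1."""
    x, y, z = point_3d
    return (x / z, y / z)

# Three different 3D points lying on one line of sight from the eye...
near = (1.0, 2.0, 2.0)
mid = (2.0, 4.0, 4.0)
far = (5.0, 10.0, 10.0)

# ...all cast exactly the same 2D shadow, so infinitely many objects
# are consistent with the image: the inverse problem is ill-posed.
print(project(near))  # (0.5, 1.0)
print(project(mid))   # (0.5, 1.0)
print(project(far))   # (0.5, 1.0)
```

The data alone cannot distinguish `near` from `far`; something beyond the image – an expectation about what hands and objects are usually like – has to break the tie.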
The second challenge we face in effectively monitoring our actions is the problem of pace. Our sensory systems have to keep up with a rapid and continuous flow of incoming information. Rapidly perceiving these dynamic changes is important even for the simplest of movements: we will likely end up wearing our morning coffee if we can’t precisely anticipate when the cup will reach our lips. But, once again, the imperfect biological machinery we use to detect and transmit sensory signals makes it very difficult for our brains to quickly generate an accurate picture of what we’re doing. And time is not cheap: while it takes only a fraction of a second for signals to get from the eye to the brain, and fractions more to use this information to guide an ongoing action, these fractions can be the difference between a dry shirt and a wet one.
Psychologists and neuroscientists have long wondered what strategies our brains might use to overcome the problems of ambiguity and pace. There is a growing appreciation that both challenges could be overcome using prediction. The key idea here is that observers do not simply rely on the current input coming in to their sensory systems, but combine it with ‘top-down’ expectations about what the world contains.