
Predicting the future - how our brain catches up to reality

  • comdepri
  • Dec 1, 2024
  • 3 min read

Imagine this: you’re in the park, playing ultimate frisbee with your friends. Despite differing skill levels, everyone can catch it - more or less. You see the frisbee flying your way, reach out and snag it in mid-air.


This simple action is actually an amazing feat of coordination. It requires a chain of events to happen precisely at the right time - light emanating from the frisbee hits your eyes, photons are transduced into electrical signals, those signals pass through your visual system and are translated into motor commands, which are finally sent down to your muscles. For this to succeed, your hand must arrive at the right spot at the right time - not where the frisbee was a second ago.


Easier said than done - our visual system can perform very complicated image-processing tasks, but it doesn’t do them fast. Compared to a modern computer, the process is agonizingly slow - a whole 100 milliseconds pass from the moment photons reach the retina to the moment the signal exits the visual processing chain. This delay is a death sentence for any system that needs to react to fast changes in its environment to survive - be it the evolutionary environment or the high-stakes world of ultimate frisbee.


As evidenced by our survival, our brain has a way of dealing with these delays - signal extrapolation. Our visual processing chain communicates not only where objects are, but also estimates of where they’re going. Using this information, each stage in the processing chain extrapolates the object’s trajectory, effectively “fast-forwarding” the signal to account for delays. The signal sent forward along the chain is thus the object’s position as it will be by the time the signal reaches the next stage.
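To make the idea concrete, here is a minimal sketch of this kind of fast-forwarding (illustrative only, not the actual neural mechanism; all numbers are invented): a stage receives a position estimate that is already out of date, together with a velocity estimate, and extrapolates before passing the signal on.

```python
# A hypothetical processing stage: it receives a position estimate that is
# already `delay` seconds old, together with a velocity estimate, and
# "fast-forwards" the position so the signal is roughly correct on arrival.
def fast_forward(position, velocity, delay):
    return position + velocity * delay

# Invented numbers: a frisbee flying at 8 m/s, a 100 ms processing delay.
pos_at_retina = 2.0      # m, where the frisbee was when its light hit the eye
velocity_estimate = 8.0  # m/s, estimated from earlier samples
processing_delay = 0.1   # s

print(fast_forward(pos_at_retina, velocity_estimate, processing_delay))  # 2.8 m
```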


However, prediction is very difficult, especially if it's about the future. Our brain has limited access to the true nature of reality (and to the quantities of relevance here - the position and velocity of objects). Moreover, it needs to represent them on noisy biological infrastructure. Extrapolating the position in this noisy environment only amplifies the noise, creating a bias-variance tradeoff - the bigger the delay to compensate for, the more noise is introduced into the signal, and the less reliable it becomes.
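A back-of-the-envelope way to see the tradeoff (our notation here, assuming the position and velocity estimates are independent): extrapolating a position estimate with variance sigma_x^2 over a delay Delta, using a velocity estimate with variance sigma_v^2, gives

```latex
\hat{x}(t+\Delta) = \hat{x}(t) + \Delta\,\hat{v}(t),
\qquad
\operatorname{Var}\!\left[\hat{x}(t+\Delta)\right] = \sigma_x^{2} + \Delta^{2}\,\sigma_v^{2}.
```

Pushing the extrapolation interval toward the true delay removes the systematic lag (the bias), but the variance contributed by the noisy velocity estimate grows with the square of that interval.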


We model this system as a trio of noisy control mechanisms - one tracking the object’s position, one tracking its velocity, and a third tracking the extrapolated signal built from the other two. We show analytically how much noise the extrapolation adds to the system, and demonstrate the bias-variance tradeoff between accurately compensating for the delay and keeping the control system reliable.
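The result itself is analytical; purely as an illustrative toy (the trackers below are drastically simplified, and every noise level and parameter is made up), here is a simulation in the spirit of that trio, comparing a raw delayed position signal with its extrapolated counterpart:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.001         # simulation step (s)
delay = 0.1        # processing delay to compensate for (s)
true_speed = 8.0   # m/s, constant-velocity target
sigma_pos = 0.05   # assumed noise on the position tracker (m)
sigma_vel = 0.5    # assumed noise on the velocity tracker (m/s)

steps = 2000
t = np.arange(steps) * dt
true_pos = true_speed * t

# Tracker 1: noisy, delayed position estimate.
delay_steps = int(delay / dt)
obs_pos = np.roll(true_pos, delay_steps) + rng.normal(0, sigma_pos, steps)

# Tracker 2: noisy velocity estimate.
obs_vel = true_speed + rng.normal(0, sigma_vel, steps)

# Tracker 3: the extrapolated signal - fast-forward the delayed position.
extrapolated = obs_pos + obs_vel * delay

valid = slice(delay_steps, None)  # discard the initial wrap-around samples
print("bias  without extrapolation:", np.mean(true_pos[valid] - obs_pos[valid]))
print("bias  with    extrapolation:", np.mean(true_pos[valid] - extrapolated[valid]))
print("stdev without extrapolation:", np.std(true_pos[valid] - obs_pos[valid]))
print("stdev with    extrapolation:", np.std(true_pos[valid] - extrapolated[valid]))
```

With these invented numbers the extrapolated signal trades its systematic lag for extra variance, which is exactly the tradeoff described above.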


Experimentally, we explore this using the flash-lag effect - a perceptual illusion in which a moving object appears ahead of a flashed stationary one. By measuring individual participants’ lag and perceptual variability, we’ve discovered a square-root relationship between the perceptual variance and the object’s speed. Our current hypothesis is that this relationship reflects low-level noise in the velocity estimate used to extrapolate, which is not controlled for - indicating that in this bias-variance tradeoff, our brain comes down decidedly in favor of eliminating bias at the cost of increased variance.
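As an illustration of what such a fit looks like (the data below are synthetic, generated under the square-root law itself, and stand in for - not reproduce - participants’ measurements), one can fit a power law in log-log space and check whether the exponent lands near one half:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for measurements: object speeds and the perceptual
# variance at each speed, generated under a square-root law plus noise.
speeds = np.array([2.0, 4.0, 8.0, 16.0, 32.0])      # arbitrary units
variance = 0.3 * np.sqrt(speeds) * rng.lognormal(0, 0.05, speeds.size)

# Fit a power law  variance = a * speed**b  in log-log space;
# an exponent b close to 0.5 indicates the square-root relationship.
b, log_a = np.polyfit(np.log(speeds), np.log(variance), 1)
print(f"fitted exponent: {b:.2f} (a square-root law predicts 0.50)")
```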


We continue to investigate this phenomenon, the limits of the bias-reduction preference, and the possible control mechanisms used to compensate for the inherent delays in the system.


Flash lag effect experiment



To understand bird flight, we have to understand aerodynamics; only then does the structure of feathers make sense.

- David Marr
