how we generate the egocentric reference frame

scruffy

Another stellar piece of research by the Swiss.

The egocentric frame is still a mystery. We know it's processed in the hippocampus, but there it's context-dependent.

This research shows how an important part of the context is generated.

Head-position signals take only about 20 msec to arrive, whereas eye-position signals take about 140 msec.


Why?

Because head position goes directly to somatosensory cortex (S1), whereas eye position passes through multiple pre-cortical areas before it finally arrives in the temporal lobe.

This information dovetails perfectly with the hypothesis I presented in the other thread.

Our consciousness is "slightly ahead of" real time. That's why we're "aware".
 

The hidden mind is the factor which makes all of past science almost certain to be inaccurate, because the scientific process takes no account of consciousness.
 
Here's the important piece.

Consider:

[attached figure]



"Vergence" eye movements are related to depth, they are "disjunctive" which means the eyes move in opposite directions. As distinct from saccades, where both eyes move in the same direction.

In the hippocampal formation there are "place cells" and "grid cells", which together assemble a 3-D map of the external world relative to the organism.

Here's the important part: that map is taken apart again when we move our eyes. Here's how we know: vergence movements interrupt saccades, and they're handled by a different part of the brain.

Here's a graph of what an eye movement looks like when it involves both saccades and vergence:

[attached figure: eye-position traces during a combined saccade and vergence movement]



The graphs on top clearly show the vergence movement that takes place "in the middle of" a saccade.
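
Incidentally, separating those two components from raw traces is simple arithmetic: the conjugate part ("version") is the average of the two eyes' positions, and the disjunctive part ("vergence") is their difference. Here's a toy Python sketch of that decomposition; the traces and sign conventions are made up for illustration:

[CODE=python]
import numpy as np

def decompose(left_eye_deg, right_eye_deg):
    """Split binocular eye-position traces into conjugate and disjunctive parts.

    version  = (L + R) / 2  -> the conjugate (saccade-like) component
    vergence = L - R        -> the disjunctive (depth-related) component
    """
    left = np.asarray(left_eye_deg, dtype=float)
    right = np.asarray(right_eye_deg, dtype=float)
    return (left + right) / 2.0, left - right

# Toy traces (degrees): a 10-degree rightward saccade during which the
# eyes also converge by 2 degrees onto a nearer target.
t = np.linspace(0.0, 1.0, 6)
left = 10.0 * t + 1.0 * t    # saccade plus half the convergence
right = 10.0 * t - 1.0 * t   # saccade minus half the convergence

version, vergence = decompose(left, right)
print("version: ", version)   # ramps 0 -> 10: the shared saccade
print("vergence:", vergence)  # ramps 0 -> 2:  the convergence
[/CODE]

On real data, the vergence channel is where that mid-saccade interruption shows up.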


Here are the different parts of the brain that are involved:


A cursory look might lead one to conclude that the brain activation is identical.

[attached figure: brain activation map]



But it's not.

[attached figure: second brain activation map]
 
This is what the eye movement system looks like in the human brain. The sensory side is on the right, the motor side is on the left. The two sides converge on the square box in the middle labeled SC, which stands for "superior colliculus". The SC is part of the midbrain, near the cerebellum.

[attached figure: schematic of the human eye movement system]


The SC is "retinotopic": it tells the eyes where to move by targeting locations in the visual field. Below the SC, there are areas in the brainstem that map target position into the activity of the oculomotor muscles. Here's what that looks like (the right side of the diagram):

[attached figure: brainstem mapping from SC target position to eye muscle activity]
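
To make that division of labor concrete, here's a toy Python sketch of the two-stage mapping: the SC names a retinotopic displacement (WHERE), and a made-up "burst generator" turns each axis into a pulse-step innervation pattern (HOW). The gains and durations here are invented, not physiological:

[CODE=python]
import numpy as np

def pulse_step(target_deg, duration_ms=40, dt_ms=1.0, hold_gain=0.02):
    """Toy brainstem burst generator for one axis.

    The "pulse" is a brief high-rate burst that drives the eye toward the
    target; the "step" is the tonic signal that holds it there afterward.
    """
    n = int(duration_ms / dt_ms)
    pulse = np.full(n, target_deg / duration_ms)  # velocity command
    step = np.full(n, hold_gain * target_deg)     # position-holding command
    return pulse + step

# The SC specifies a target in retinotopic coordinates, e.g. 8 degrees
# right and 3 degrees up of current gaze.
sc_target = (8.0, 3.0)

# Downstream, the displacement splits into separate horizontal and
# vertical channels, each with its own burst circuitry.
horizontal_drive = pulse_step(sc_target[0])
vertical_drive = pulse_step(sc_target[1])

print(horizontal_drive[:3])  # innervation samples for horizontal muscles
print(vertical_drive[:3])    # innervation samples for vertical muscles
[/CODE]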



Returning to the top picture, we know what most of these brain areas do. FEF is the frontal eye fields, area 8 in Brodmann's brain map. But notice the box right next to it that no one ever talks about (because no one knows what it does or how it works): the box labeled DLPFC/SEF, which stands for the dorsolateral prefrontal cortex and the supplementary eye fields. Here's a bit about that:


Look in the section that says "Function". Note especially the information about the "delayed response task".

"Patients with minor DLPFC damage display disinterest in their surroundings and are deprived of spontaneity in language as well as behavior."

The DLPFC is generative. It tells us what to pay attention to, and it makes us curious and clever. And it organizes the CONTEXT for the spatial maps that are presented to the hippocampus.

Here's an example. You just took your dog for a walk, and now you're putting him in the back seat of the car. The dog's spatial map changes, it goes from the in-the-park map to the in-the-car map. Instead of sniffing trees, the dog is now looking out the open car window. "Objects" have a different context in the new map. If you show the dog a stick in the park, it becomes excited because it thinks you're going to play catch. If you show the dog a stick in the car, it becomes confused and frightened because it thinks it's about to be punished. The dog knows it can't play catch in the car. This is what "comparing two objects" means. Stick + park = good, stick + car = bad.
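
In code, that comparison is nothing more than looking up the (object, context) pair instead of the object alone. A trivial Python sketch, with table entries taken straight from the dog example:

[CODE=python]
# "Meaning" attaches to the (object, context) pair, not to the object:
# the same stick maps to different outcomes in different spatial maps.
MEANING = {
    ("stick", "park"): "good: we're going to play catch",
    ("stick", "car"):  "bad: punishment is coming",
}

def interpret(obj, context):
    # A pair with no entry means the current map has no context for it.
    return MEANING.get((obj, context), "confused: need a new map")

print(interpret("stick", "park"))   # good: we're going to play catch
print(interpret("stick", "car"))    # bad: punishment is coming
print(interpret("stick", "beach"))  # confused: need a new map
[/CODE]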
 
So - the relevant part of this, for consciousness and for the egocentric reference frame, is something called "efference copy". It works like this:

Let's say you're looking at a visual scene, and you become interested in one of its features. So, your eyes move to the feature. When that happens, the sensory brain (visual cortex) gets a new view of the scene. It's the same scene, nothing has changed in the external world, but now the retinal image has slightly shifted from where it was.

It turns out that, in this case, the visual cortex already knows what to expect before the new view of the scene arrives. The same frontal eye field neurons that command the eyes to move send an advance copy of the command to the visual cortex. This is the "efference copy", and it allows the visual system to selectively respond to the features of interest. (This is "selective attention" in action.)

There is a great deal of very complicated evidence for how and why this works, and some thick models. But the part I'd like to draw your attention to is the "expectation template". The FEF is telling the visual cortex what to expect.
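
Here's a toy Python sketch of the loop, under deliberately crude assumptions: the "world" is a 1-D array, an eye movement is a shift of the retinal window, and the expectation template is just a forward model's prediction of the post-saccadic image. The function names are mine, not anatomy:

[CODE=python]
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random(100)   # the external world (unchanging)
gaze = 20                 # current gaze position
WINDOW = 10               # size of the "retinal" sample

def retina(scene, gaze):
    return scene[gaze:gaze + WINDOW]

def fef_issue_saccade(delta):
    """Issue the motor command AND an advance copy of it (the efference copy)."""
    return delta, delta   # (command to the muscles, copy to sensory cortex)

command, efference_copy = fef_issue_saccade(delta=5)

# Forward model in "visual cortex": use the efference copy to predict the
# next retinal image BEFORE the movement finishes -- the expectation template.
expected = retina(scene, gaze + efference_copy)

gaze += command           # the eyes actually move
actual = retina(scene, gaze)

# The world didn't change, so prediction matches input: the image shift is
# attributed to self-motion, and attention can stay on the chosen feature.
print("prediction error:", float(np.abs(expected - actual).max()))  # 0.0
[/CODE]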

So now, if you map this onto a timeline, you'll notice this is a continuous behavior. Microsaccades occur about once or twice a second, and the smaller fixational movements never stop, so our frontal brain is constantly telling the sensory systems what to expect. And whereas the retina takes in something on the order of a gigabit per second, the brain actually uses only a few bits of that information to map the salient features of the visual scene.
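
The compression is easy to picture in code: a big sensory frame comes in, but only the few samples flagged as salient ever get read out. A toy sketch (the sizes here are arbitrary, not retinal measurements):

[CODE=python]
import numpy as np

rng = np.random.default_rng(1)
frame = rng.random(1_000_000)   # a big "retinal" frame of raw input

# Top-down salience: the frontal system flags a handful of locations of
# interest; everything else is never read out at all.
salient_idx = np.array([101, 20_500, 777_000])
used = frame[salient_idx]

print(f"took in {frame.size:,} samples, used {used.size}")
# -> took in 1,000,000 samples, used 3
[/CODE]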

You can consider this mechanism, for example, in reading. As our eyes move across the page, we foveate the current word of interest, but our peripheral retina is already seeing the next word and telling the FEF where to move so it can be looked at. Then the FEF issues the command to move the eyes, and at the same time it tells the visual system "get ready, I'm moving the eyes to the word CAT", so when the retina lands there, the visual system already knows to expect the letters C, A, and T in order. This is why we can read so fast, word by word instead of letter by letter.

When the visual system expects CAT and gets DOG instead, it generates a P300 brain wave, which is a whole-brain reset, because something has gone drastically wrong with the model. In this case the microsaccades stop, the visual cortex stops being selective, and the eyes start moving "around" the area of interest to establish a new reference frame. Once they do, the microsaccades begin again.
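
Putting the reading example and the reset together, here's a toy Python sketch of the two outcomes. The word-level match and the letter-by-letter fallback are illustrative assumptions, not a model of how the P300 is actually generated:

[CODE=python]
def read_word(expected, actual):
    """Match the incoming word against the FEF's expectation template."""
    if actual == expected:
        # Template confirmed: a few bits suffice, and reading sails on.
        return f"uh huh ({actual})"
    # Template violated: P300-style reset -- drop selectivity and
    # re-examine the input piece by piece to build a new reference frame.
    return f"P300! expected {expected}, re-mapping: {', '.join(actual)}"

print(read_word("CAT", "CAT"))  # uh huh (CAT)
print(read_word("CAT", "DOG"))  # P300! expected CAT, re-mapping: D, O, G
[/CODE]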

In terms of consciousness, reading comprehension is "uh huh uh huh", but the P300 is "whaaaat???". There is meta-activity that deals with the timeline in different ways. When a P300 occurs, it's the same as changing the spatial map, it's the same as moving from the park to the car. Out with the old context, in with the new.

People with damage to the DLPFC can't do this. It takes them a LONG time to establish a new context or recover an existing one. Not only is the meta-processing missing, but the expectation templates are missing as well. Reading is "different" in these people: it's slow and laborious, and they don't get interested in what they're reading. The eyes work perfectly fine, and the eye movements "seem" normal, until they engage in a delayed response task (which is what reading is: you don't get the meaning until you reach the end of the sentence, and meanwhile you have to hold the preceding words in memory).
 
