consciousness precedes real time


scruffy the troll trapper.
I'm sort of hijacking this thread to bring it back to the OP. I have been interested in consciousness as a sideline to ANNs. I would have titled the OP "Ersatz consciousness precedes real time," but I'm just being a nitpicker. Have you studied blindsight? I think the late Oliver Sacks had a few cases in his famous book. There seems to be a loss of connection between the visual cortex and whatever produces consciousness. The patient is encouraged to guess what he sees, and one patient was especially good at it; he went back to work as a CEO, guessing his way through life. A strong pitcher can get the ball to the batter in about 400 ms, which requires the ability to act ahead of conscious awareness.

Here are a few other misc. observations.

I read somewhere that the Hopfield net can be thought of as a set of matched filters, where the filter with the largest output is the winner. That is still consistent with the idea of an associative memory. In image processing, the various filters are trained with small image-fragment exemplars. In pattern recognition it can be used to clean up noisy images: the Hopfield net acts as a pre-filter, where the winning pixel area is replaced by the exemplar.

I also used translation-invariant sparse ANNs for looking at the surface of IC chips to find their angle and location for die bonding. They had to be trained in the field by an unskilled operator within a few seconds. During a manufacturing run they were able to detect the angle and location in 10 to 20 ms. The systems used a three-layer network and were sort of modeled after an extremely lobotomized version of Hubel and Wiesel's work. We sold thousands of systems.

That was done in the '90s, when the technology was far more primitive. I have no idea what industrial computer vision is like today.
 
I'm surprised you haven't shot yourself. And I don't believe you either.
You're a troll. You're not supposed to believe. You're not good at it. Stick to trolling, you're good at that.
 

consciousness precedes real time

scruffy the troll trapper.
I'm sort of hijacking this thread to bring it back to the OP. I have been interested in consciousness as a sideline to ANNs. I would have titled the OP "Ersatz consciousness precedes real time," but I'm just being a nitpicker. Have you studied blindsight?

No, not really. I'll look into it. Thanks. :)


I think the late Oliver Sacks had a few cases in his famous book. There seems to be a loss of connection between the visual cortex and whatever produces consciousness. The patient is encouraged to guess what he sees, and one patient was especially good at it; he went back to work as a CEO, guessing his way through life. A strong pitcher can get the ball to the batter in about 400 ms, which requires the ability to act ahead of conscious awareness.

Here are a few other misc. observations.

I read somewhere that the Hopfield net can be thought of as a set of matched filters, where the filter with the largest output is the winner.

In an abstract conceptual sense, yes. The individual memories tend to the corners of a hypercube.

The real power of Hopfield comes when you combine it with an ordinary feed-forward or recurrent network. That's when you get the sophisticated adaptive filtering you're alluding to.

Essentially the Hopfield portion learns the adaptation path "faster than" the filters adapt, so it's able to guide and control the filters.
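To make the matched-filter / hypercube picture concrete, here is a minimal NumPy sketch of plain Hopfield recall on its own - Hebbian storage of ±1 patterns and asynchronous updates. The sizes (N = 64, three random patterns, 20% bit flips) are made up for illustration; this is not the hybrid arrangement described above.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                                         # neurons (pattern length)
patterns = rng.choice([-1, 1], size=(3, N))    # stored memories, corners of the hypercube

# Hebbian (outer-product) weights, no self-connections
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(probe, steps=5):
    """Asynchronous updates: sweep the neurons in random order a few times."""
    s = probe.copy()
    for _ in range(steps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt one stored memory by flipping 20% of its bits, then recall it
noisy = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
noisy[flip] *= -1

out = recall(noisy)
print("overlap with stored memory:", (out @ patterns[0]) / N)   # ~1.0 if recovered
```

Recall here amounts to settling on the stored pattern (the "matched filter") with the largest overlap with the probe, and the recovered state sits at a corner of the {-1, +1}^N hypercube.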

That is still consistent with the idea of an associative memory. In image processing, the various filters are trained with small image-fragment exemplars. In pattern recognition it can be used to clean up noisy images: the Hopfield net acts as a pre-filter, where the winning pixel area is replaced by the exemplar.

Yes, that is possible. I'd go the other way, though. For instance, here is Fukushima's "Neocognitron", a translation-invariant machine used for (Japanese) handwriting recognition.

[Image: diagram of the Neocognitron's layered cascade]


Imagine a Hopfield network sitting horizontally beneath this, with the cells connecting along the long axis - so, for example, cell 1 connects at the far left, cell N connects at the far right, and the rest of the cells "tap" various points along the cascade.

You have to have "enough" connections into the cascade, because Hopfield updates asynchronously. If you have "enough", then the cascade becomes just another version of Hopfield pattern learning.
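For a rough idea of what the cascade itself does, here is a toy sketch of one Neocognitron-style S→C stage: feature-extracting S-cells followed by pooling C-cells, which is where the tolerance to small shifts comes from. The random kernel bank, the sizes, and the ReLU-style nonlinearity are placeholders for illustration, not Fukushima's actual S-cell rule or training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def s_layer(img, kernels):
    """S-cells: correlate the input with each kernel (feature extraction)."""
    kh, kw = kernels.shape[1:]
    H, W = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((len(kernels), H, W))
    for k, ker in enumerate(kernels):
        for y in range(H):
            for x in range(W):
                out[k, y, x] = max(0.0, (img[y:y+kh, x:x+kw] * ker).sum())
    return out

def c_layer(maps, pool=2):
    """C-cells: local max-pooling, giving tolerance to small translations."""
    n, H, W = maps.shape
    out = maps[:, :H - H % pool, :W - W % pool]
    return out.reshape(n, H // pool, pool, W // pool, pool).max(axis=(2, 4))

img = rng.random((16, 16))              # stand-in for a character image
k1 = rng.standard_normal((4, 3, 3))     # first bank of S-cell kernels

stage1 = c_layer(s_layer(img, k1))      # one S->C stage of the cascade
print(stage1.shape)                      # (4, 7, 7); slightly shifted inputs give nearly the same maps
```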

I also used translation-invariant sparse ANNs for looking at the surface of IC chips to find their angle and location for die bonding. They had to be trained in the field by an unskilled operator within a few seconds. During a manufacturing run they were able to detect the angle and location in 10 to 20 ms. The systems used a three-layer network and were sort of modeled after an extremely lobotomized version of Hubel and Wiesel's work.

Spatial filtering? Sounds right, angle and location. Far out.
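As a generic illustration of getting location and angle out of spatial filtering, here is a brute-force matched-filter search: correlate the image with rotated copies of a template and take the best-scoring shift and angle. This is not a reconstruction of that '90s system (which clearly had to be far leaner to hit 10-20 ms); find_pose, the 5-degree angle step, and the synthetic bar image are all invented for the example.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import correlate2d

def find_pose(image, template, angles=range(0, 360, 5)):
    """Brute-force matched filter: best (row, col, angle) over rotated templates."""
    best_score, best_pose = -np.inf, None
    for a in angles:
        t = rotate(template, a, reshape=False)
        t = t - t.mean()                         # zero-mean so flat regions score ~0
        score = correlate2d(image, t, mode="same")
        idx = np.unravel_index(np.argmax(score), score.shape)
        if score[idx] > best_score:
            best_score, best_pose = score[idx], (idx[0], idx[1], a)
    return best_pose

# Tiny synthetic check: embed a rotated template in a noisy image
rng = np.random.default_rng(2)
template = np.zeros((15, 15))
template[7, :] = 1.0                             # a bright bar through the center
image = rng.normal(0, 0.1, (64, 64))
image[20:35, 30:45] += rotate(template, 30, reshape=False)

# Expect a location near (27, 37) and an angle near 30 (or 210; a bar is symmetric)
print(find_pose(image, template))
```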

That was done in the '90s, when the technology was far more primitive. I have no idea what industrial computer vision is like today.

Today we have things like the Luma Dream Machine, which will generate video from stills. It uses transformer technology. In an industrial setting, the same technology is used to "pay attention to" the details of an assembly operation. For instance, if you have a checklist of 30 items for QA, the transformer will traverse the list and make the needed adjustments at each step, even "as" the product is being assembled.

Another version involves knowledge of "production runs" - each run may have its own idiosyncratic set of glitches, and the goal is to inform the technicians and increase production quality and efficiency, in addition to correcting or patching the problems during QC.

Cool stuff. Thanks for getting the thread back on track.
 
Your responses to my posts say otherwise. I'm in your head.
No, I'm just deciding whether to feed you or shoot you. You're disrupting the rest of the class, and the principal is busy.
 
