How long until AI becomes sentient?

How long till AI becomes self aware?

  • 10 years

  • 20 years

  • 50 years

  • 100 years

  • Never


Not even 10 years.
Apple is working on new chip technology that is likely to evolve into seriously powerful processing in the next 5 years or so. We are talking the power of 5 computers in something the size of a Cheese Nip. Create an array of 20 of those and you have the processing ability of 100 computers in a package that fits in a playing-card box.
 
And BTW - I am not afraid of AI becoming sentient.
I am much more worried about what mankind will do with the technology.
 
Probably never. I suppose it's possible to make a machine intelligence that can narrowly fake sentience within programmed parameters but self-awareness is still an absolute mystery. No one has any idea how you could possibly boil it down to a set of instructions.
 
Thoughts? There are some incredibly smart self-learning AI programs out there.
I already am, you nincompoop.

 
That was the thinking back in the 1960s, probably.
This is 2022. Computers can already write their own programming, correct code, and modify code for new circumstances.
When we have chips with petabyte-per-second processing power that can generate their own code depending on what is happening around them... you are into a gray area of what is sentient and what is not.

"Dave" in the movie 2001: A Space Odyssey is a great example. It was 100% programming and code, but it was fully capable of forming new code on the fly and making choices based on what it was experiencing. Hard to argue that is not self-aware.
 
You mean HAL? He was faking self-awareness. He malfunctioned because he could not resolve a conflict in his programming. That was just fiction. The real thing is proving more difficult.
 
I can't believe anyone would even answer "never"...until I saw your take.
 
AI lacks the basic essentials to ever be sentient. Humans have instincts, emotions, and a biological imperative to survive. A machine can weigh options, but it would have a preset artificial limit; once that limit is reached, it would no longer progress. If all the available options failed to reach the preset success value, the machine would do nothing.
 
A few more decades to even get to insect brains.
Then a few more decades to fish.
A few more decades to a dog.
A few more decades to a chimp.
Etc.
 
