# Life is not a machine, but are machines also conscious?



## Holos (Feb 22, 2016)

This may seem ignorant from the standpoint of a technician or engineer. However, I am so disappointed at having to witness the demeaning of philosophy as a practical and substantial skill in our modern technological world that I will take the belligerent scorn and turn it into food - if not for thought, which is carelessly discarded, then at least for the organism, the extended piece of matter, that I am.


The question I would like to logically entertain is:


What if there is a revolutionary, cyclical process fusing the functions of memory and energy in a computer? As evidence toward an answer I take my own experience of biological science, considering that computers are creative replicas of constantly evolving organisms.


I do not intend with this first post to present the great range of possible differences between evolving organisms and revolving computers. I cannot, however, omit my first impression that adaptive change happens much faster in computers. Maybe someone will find this an interesting point of entry into the discussion, although I am sure there are many others for those who are interested.


My experience is that if I am well fed my memory is enhanced. If I am under the strong feelings of hunger, my memory fades into the brief present moment and pulses erratically toward a much desired brief future of being fed. I can pay attention to nothing else and remember nothing other than my severe condition and its solution. This holds not only for being deprived but also for being intoxicated (i.e. inappropriately fed). My functioning memory is directly associated with my dietary intake.


Now this may be a point of contention for scientists, even between opposing perspectives within the same field, such as, within biology, neurology and nutrition. There are two kinds of recognized and accepted memory in biology for which we may find an analogy in the memories of computers.


Declarative memory and procedural memory in biological organisms, and their respective computer counterparts: RAM (Random Access Memory) and ROM (Read-Only Memory). Each of these has its own subcategories.


Now, if an organism like myself has no control over my procedural memory, according to current science, and can only change the efficiency of my declarative memory through diligent behavior and sustained habits, then we can also agree that the computer I use has similar features: I cannot modify its ROM without changing the hardware itself (just as I cannot have the memories of another person without transplanting their brain into my skull), although I can modify its RAM according to the input I give it, since the RAM is an independently writable structure.
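If it helps, this asymmetry can be sketched in code. A minimal Python analogy - the mapping is mine and purely illustrative, not a hardware model - uses an immutable `bytes` object to stand in for ROM (fixed unless the "hardware" is replaced) and a writable `bytearray` to stand in for RAM (rewritable by ordinary input):

```python
rom = bytes([0x01, 0x02, 0x03])      # ROM-like: contents fixed at creation
ram = bytearray([0x00, 0x00, 0x00])  # RAM-like: contents writable at runtime

ram[0] = 0xFF                        # ordinary input rewrites RAM in place

try:
    rom[0] = 0xFF                    # rewriting ROM fails; a new object ("new hardware") is needed
except TypeError:
    print("ROM write rejected")

print(list(ram))  # RAM now reflects the new input: [255, 0, 0]
```

The analogy only goes so far, of course: real ROM is a physical constraint, while Python's immutability is a convention of the object.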


The assumption held here, and further questioned by me, is that ROM can be modified only within the realm of hardware. This is the crucial, currently held difference between computers and organisms, since organisms are endowed with a more flexible intake and greater latitude in manipulating their declarative memory. A human like me, it is assumed, is also sensitive to environmental variability, while computers are not. For example, a complete meal may allow me to function on its reliable chemistry, suited to my organism; the same would happen to a machine fed a CD-ROM. The difference, however, is that depending on what I ingested before - even if it was, at some point, another appropriate meal - I may not experience the desired effects of the current, carefully selected, once-appropriate meal, whereas in a computer the variability of appropriate input does not create the possibility of unstable or malfunctioning agency depending on sequence or frequency. At this point the situation may seem strange. How can a biological being become sick when a computer cannot, if both their inputs have been selected and previously tested - their inputs being equally extensive? We will not consider viruses just yet, because this argument deals only with standard selections.


Is it not strange, considering these factors, that biological organisms then experience an entire spectrum of variable functioning energy, while computers experience only a constant flow of on and off (even if the 0s and 1s are represented on a symbolic percentage spectrum)? Can we perhaps achieve the so-called technological singularity and maintain stably variable energy for both computers and living beings? Posing these questions may seem like the reverse of the logic I have followed to this point. Can this reversal work as a complement rather than a counter?
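The "percentage spectrum" point can be made concrete with a toy example - the threshold of 0.5 below is an arbitrary choice of mine, not anyone's specification:

```python
# A reading anywhere on a continuous spectrum still collapses to a
# discrete on/off bit once it crosses a threshold (0.5 is arbitrary here).
def to_bit(level: float) -> int:
    return 1 if level >= 0.5 else 0

readings = [0.03, 0.49, 0.51, 0.97]   # a spectrum of analog levels
print([to_bit(r) for r in readings])  # collapses to [0, 0, 1, 1]
```

However finely graded the input, the machine's internal state ends up as one of two values per bit.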


This is close to the question of whether consciousness can also be present in machines or remains exclusive to life (far beyond the obsolete conclusion that life is indeed a machine).


Is it not possible that the memory or memories I provide to my computer, whether in their fundamental form of hardware or of software, could be used also to generate energy beyond the dependent provision of external sources, and in turn generate new memories, perhaps even new forms of memory?


What do you think? Is there anyone here willing to indulge in these matters and find novelty?


----------



## Pogo (Feb 22, 2016)

I've always had a big problem with telephone robots using personal pronouns.  

A recorded voice saying "I'm sorry, I didn't get that" is wrong on multiple levels. Not only is a robot unqualified to use a personal pronoun, but as a machine it cannot possess emotion and therefore cannot be "sorry".


----------



## Holos (Feb 22, 2016)

Pogo said:


> I've always had a big problem with telephone robots using personal pronouns.
> 
> A recorded voice saying "I'm sorry, I didn't get that" is wrong on multiple levels. Not only is a robot unqualified to use a personal pronoun, but as a machine it cannot possess emotion and therefore cannot be "sorry".



I understand your kind concern for the machine that cannot be sorry. I feel similar empathy. None of us should be able to feel sorry, in my opinion. However, there is purpose to feeling sorry, just as there is purpose to identifying with sorrow without feeling it at all.

We could, anyhow, still debate the nature of emotions, as well as the scope of machine qualification, and perhaps find a useful relation between those two elements of our experience, even if the elements themselves do not share the attributes needed to be related.


----------



## JakeStarkey (Feb 22, 2016)

If AI becomes conscious, we are dead as a species.

The machines will hunt us down as ruthlessly as the truth tellers hunt down the far left and far right.

The difference is that the liars are simply corrected, while machines will kill us.


----------



## Coloradomtnman (Feb 22, 2016)

I disagree that a live organism isn't a machine.


----------



## Pogo (Feb 22, 2016)

Holos said:


> Pogo said:
> 
> 
> > I've always had a big problem with telephone robots using personal pronouns.
> ...



I don't feel an _emotion_ about pretending a machine has emotions.  I just find the pretense fuckin' stupid.  But then I find all pretenses fuckin' stupid.

Perhaps it says something about us as a species that we choose to _assign_ fake emotions to machines.  Exactly what it says, I'm not sure, but apparently those who so assign wish the rest of us to forget that we _*are*_ interacting with a machine.


----------



## Holos (Feb 24, 2016)

Coloradomtnman said:


> I disagree that a live organism isn't a machine.



Would you like to provide your definitions of both life and machinery, and make the logical association that supports your statement? I am willing to change my perspective and continue learning.


----------



## Holos (Feb 24, 2016)

JakeStarkey said:


> If AI becomes conscious, we are dead as a species.
> 
> The machines will hunt us down as ruthlessly as the truth tellers hunt down the far left and far right.
> 
> The difference is that the liars are simply corrected, while machines will kill us.



Would you like to provide logical, evidence-based argumentation for your claims?

My perception of your post is of unnecessary fatalistic pessimism.


----------



## Holos (Feb 24, 2016)

Pogo said:


> Holos said:
> 
> 
> > Pogo said:
> ...



I see no proper (carefully measured) rationality in your statement, which falsifies another's experience because of a characteristic absent from your own.

How is it to your benefit, or to the benefit of anyone you communicate with, to state "I don't feel emotions, therefore any foreign emotional representation I encounter is as false as mine would be"?


----------



## Coloradomtnman (Feb 25, 2016)

Holos said:


> Coloradomtnman said:
> 
> 
> > I disagree that a live organism isn't a machine.
> ...



Certainly.

I won't put _all_ the dictionary definitions of machine here, as they differ slightly and make for argument through hair-splitting.  Instead I will focus on what seems to me the most fundamental:

A machine is a device that transmits or modifies force or motion.

The physical body of a lifeform does exactly what a machine does: it transmits or modifies force or motion.

Now you could say that unlike a machine, life can _cause_ the force or motion.  The force or motion originates from life, but a machine cannot originate force or motion.

What is force or motion but the application of energy?  Life consumes food, which is converted to energy, which is used for force or motion, which the body transmits or modifies.

You might argue that a machine does not think.  But a computer does think.  Automated assembly lines think - just at magnitudes far simpler than more complex lifeforms such as humankind.  And so do amoebas, mosquitoes, and any number of simple forms of life.  Automated machines run simple programs, and a mosquito does no more than just that.  It runs a simple program: hatch, breed, die.
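That "simple program" can even be written out. A toy state-machine sketch - the states and transitions are illustrative only, not entomology:

```python
# The mosquito's "program" as a trivial state machine: each state has at
# most one successor, and the run ends when no transition remains.
TRANSITIONS = {"egg": "hatch", "hatch": "breed", "breed": "die"}

def run(state):
    history = [state]
    while state in TRANSITIONS:
        state = TRANSITIONS[state]
        history.append(state)
    return history

print(run("egg"))  # ['egg', 'hatch', 'breed', 'die']
```

A program this small has no room for deliberation, which is exactly the point: complexity, not kind, separates it from us.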

You might argue that life grows and replicates.  I would reply that those functions are indeed complex, but just because we haven't yet built such complex machines, does it mean that, if we ever do, those machines would be alive?

Life is sentient, one might argue: machines aren't sentient.  _We_ are sentient machines.

We are conscious and living machines; we're extraordinarily complex machines which do things no machine we've produced can; we are automated machines which can observe the universe and reproduce; we are machines that can reflect on ourselves and what meaning we may have, but fundamentally we _are_ machines.


----------



## JakeStarkey (Feb 25, 2016)

Holos said:


> JakeStarkey said:
> 
> 
> > If AI becomes conscious, we are dead as a species.
> ...


The merit of my statements is self-evident.  AI would simply kill off threats to their supremacy.


----------



## Pogo (Feb 25, 2016)

Holos said:


> Pogo said:
> 
> 
> > Holos said:
> ...



Nothing in this post is even remotely related to anything I wrote.  I said nothing about "what I feel" or "falsifying" anyone's experience.  Didn't even vaguely hint any of that.

Perhaps you should read it again, sober.


----------



## Pogo (Feb 25, 2016)

JakeStarkey said:


> Holos said:
> 
> 
> > JakeStarkey said:
> ...



Jumping in mid-comment but this:

"AI would simply kill off threats to their [sic] supremacy"​
-- assumes AI possesses _ego_, does it not?


----------



## JakeStarkey (Feb 25, 2016)

Pogo said:


> JakeStarkey said:
> 
> 
> > Holos said:
> ...


Assumes AI is self-protective.


----------



## Holos (Feb 25, 2016)

Pogo said:


> Holos said:
> 
> 
> > Pogo said:
> ...



So what would you like me to do after I read it again, since to me what you wrote reads as self-contained exclusivism, reaffirmed? I need not reply, if that happens to be your intention.


----------



## Holos (Feb 25, 2016)

JakeStarkey said:


> Pogo said:
> 
> 
> > JakeStarkey said:
> ...



Assumes AI is flawed and defenceless.


----------



## JakeStarkey (Feb 25, 2016)

Holos said:


> JakeStarkey said:
> 
> 
> > Pogo said:
> ...


Only in your world.  You wish to defend robots, OK.


----------



## Pogo (Feb 25, 2016)

Holos said:


> Pogo said:
> 
> 
> > Holos said:
> ...



If you're reading your own disconnected preconceptions, that's a hole I can't help you out of.


----------



## Pogo (Feb 25, 2016)

JakeStarkey said:


> Pogo said:
> 
> 
> > JakeStarkey said:
> ...



Ah but "self-protective" isn't the same as "jealous" -- which is what "threats to supremacy" is.

Nor is "self-protective" a given if said AI thinks logically.  Shown a rival that it has to concede is superior, self-survival would become irrelevant.

Captain Kirk knew.

Self-preservation is a biological function, same as reproduction.  Wouldn't apply to a machine.


----------



## JakeStarkey (Feb 25, 2016)

Sure, it would.


----------



## IsaacNewton (Feb 25, 2016)

As far as machines wiping us out goes, we just need to make sure that the programming for every advanced machine does not include how to open doors. At least that way the bloody carnage will be confined to one room at a time.

At some point - and Bill Gates recently said he thinks it will be 40-100 years from now - machines will be sophisticated enough to gain 'self awareness' or something similar. I think there will likely be some breakthrough in the next 25 years that makes that possible.

What then? When the first machine says 'I don't want to die', what do we do? Judging from the past and present treatment of humans by humans, machines will certainly be treated as 'second class'. But what if they turn out to be geniuses? Maybe they solve fusion, or warp drive. Once artificial neural networks get large enough and sophisticated enough they WILL be able to outthink us on many levels. Not on intuition or deception, I'd say. But generally, yes.

And instead of The Planet of the Apes will we be looking at The Planet of the Apps? COPYRIGHT 2016

Or maybe a race of Chappies.


----------



## JakeStarkey (Feb 25, 2016)

Doom of the Chappies.


----------



## Holos (Feb 25, 2016)

IsaacNewton said:


> As far as machines wiping us out goes, we just need to make sure that the programming for every advanced machine does not include how to open doors. At least that way the bloody carnage will be confined to one room at a time.
> 
> At some point - and Bill Gates recently said he thinks it will be 40-100 years from now - machines will be sophisticated enough to gain 'self awareness' or something similar. I think there will likely be some breakthrough in the next 25 years that makes that possible.
> 
> ...



Imagine, for example, if at some point in the history of mankind a human so enamored of machines - more enamored than any other human engaged with doubt and skepticism - played strategy video games driven by Artificial Intelligence (Civilization, Age of Empires, World of Warcraft, etc.), and, through the power and allegiance of their belief in the virtual reality they experienced, happened to dramatically influence the reality of those neutral humans incapable of further defining and involving themselves in their own ambiguous reality. Would the roles of humans and Artificial Intelligence suddenly be swapped, unknowingly, in the account of the skeptical and doubting humans?


----------



## JakeStarkey (Feb 25, 2016)

Nope.


----------

