Why robots will never be philosophers


Post by Ryan Rudolph »

This is an exploration of why robots will never be philosophers.

First of all, why does a human become a philosopher? It seems to me he seeks wisdom because he is discontent with himself, discontent with the state of his consciousness.

Discontent is a necessary feeling for the human animal, because without it he would never be motivated to break mental boundaries and grow as an individual.

Now, will we ever have the technology to actually program discontent into a robot? Remember, discontent is a feeling. So is humor. Will we ever be able to program humor into a robot?

It doesn’t make any sense to me that we could ever create a machine that can feel like the human animal, and therefore the robot as an intelligence will always be stunted.

The most intelligent robots will merely be able to recognize words and give definitions, but they will be devoid of an actual “understanding entity.”

There will be no entity there, just a program that has been programmed by an actual sentient being.

Here is a joke to illustrate my point:

-------------------------------------------------------------

Two robots walk into a bar, and stand before the bartender.

Bartender: What can I get you two clowns?

Robot1: A clown is a character that performs in the circus.

Robot2: The circus is a show that provides humans with entertainment.

Bartender: No, I mean, what do you two want to drink? You know, alcohol?

Robot1: Alcohol is a depressant that slows down the human brain.

Robot2: The brain is the center of all human thought.

Bartender: Ah fuck, I hate fucking robots, listen you! If you don’t order something, I’ll call security.

Robot1: Security is a service that enforces a society's laws.

Robot2: A society's laws are sets of regulations that function to maintain order.

Bartender: That’s it! I’m gonna put a sign on the door that says: “No Fucking Machines Allowed!” Eddy, come get these two machineheads outta here.

Eddy approaches.

Eddy: Alright robots, follow me, I don’t want any trouble.

The two robots fail to follow Eddy, but continue to give him further definitions of the meaning of the word trouble.

-------------------------------------------------------------
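For what it's worth, the robots in that joke can be sketched literally in a few lines of Python. This is only a toy illustration; the lookup rule is invented, the canned definitions are copied from the joke above, and the "trouble" entry is made up:

# Toy sketch of the "definition robots" from the joke above.
# Definitions copied from the joke; the "trouble" entry is invented.
DEFINITIONS = {
    "clowns": "A clown is a character that performs in the circus.",
    "circus": "The circus is a show that provides humans with entertainment.",
    "alcohol": "Alcohol is a depressant that slows down the human brain.",
    "brain": "The brain is the center of all human thought.",
    "security": "Security is a service that enforces a society's laws.",
    "trouble": "Trouble is a state of difficulty or distress.",
}

def robot_reply(utterance: str) -> str:
    """Return a canned definition for the first recognized word, ignoring intent."""
    for word in utterance.lower().replace("?", "").replace(",", "").split():
        if word in DEFINITIONS:
            return DEFINITIONS[word]
    return "Please define your terms."

print(robot_reply("What can I get you two clowns?"))
# -> A clown is a character that performs in the circus.

The words are recognized and the definitions come back, but there is no understanding entity anywhere in it, which is exactly my point.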

Post by Diebert van Rhijn »

Prostitute, you can do better than this!

Discontent, like any other feeling, is not some magic fairy dust. It seems to have a clear function in the neurological complexity that forms our mind. One could say it's an indicator of some kind, suggesting a change or an examination of its causes would be needed. If pain is red, then discontent is yellowish.

When seen like this, any advanced programming of a machine will have subroutines just like it, assuming the machine is able to change or repair its own programming to some extent. That is exactly the direction in which the development of artificial intelligence is moving. Beyond a certain complexity, self-modification and learning modes are unavoidable.
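To make that concrete, here is a toy sketch of 'discontent' as an internal error signal wired to a self-modification loop. It is nothing more than an illustration; the single parameter, the thresholds and the crude hill-climbing are all invented for the example:

import random

# Purely illustrative: "discontent" as an internal error signal that
# triggers self-modification. All numbers are arbitrary.
class Agent:
    def __init__(self, target: float):
        self.target = target          # the state of affairs the agent "wants"
        self.behaviour = 0.0          # a single tunable parameter, for simplicity

    def discontent(self) -> float:
        """An indicator, not magic: how far the outcome is from the target."""
        return abs(self.target - self.behaviour)

    def self_modify(self, step: float = 0.5) -> None:
        """Crude self-repair: try a random change, keep it only if discontent drops."""
        trial = self.behaviour + random.uniform(-step, step)
        if abs(self.target - trial) < self.discontent():
            self.behaviour = trial

agent = Agent(target=3.0)
while agent.discontent() > 0.1:       # discontent drives the whole loop
    agent.self_modify()
print(round(agent.behaviour, 2))      # ends up near 3.0

Nothing about the 'feeling' here is magic fairy dust; it is just an indicator hooked up to a repair loop, which is all my argument needs.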

Whether artificial intelligence can ever reach the degree of complexity needed for unrestricted learning and adaptation, as some organic lifeforms can, is yet another discussion.

Re: Why robots will never be philosophers

Post by DHodges »

cosmic_prostitute wrote:It doesn’t make any sense to me that we could ever create a machine that can feel like the human animal, and therefore the robot as an intelligence will always be stunted.
This is the argument from ignorance: "I don't see how it could be done, therefore it is impossible".

Post by Ryan Rudolph »

Diebert van Rhijn wrote:
When seen like this, any advanced programming of a machine will have subroutines just like it, assuming the machine is able to change or repair its own programming to some extent. That is exactly the direction in which the development of artificial intelligence is moving. Beyond a certain complexity, self-modification and learning modes are unavoidable.
I agree.

Diebert van Rhijn wrote:
Whether artificial intelligence can ever reach the degree of complexity needed for unrestricted learning and adaptation, as some organic lifeforms can, is yet another discussion.
No, that is this discussion. And I am saying no, because the robot will never be able to suffer deeply like the human animal, and therefore its progress as a philosopher will be nil. It will never wonder “why the hell am I suffering?” because programmers will never be able to transfer feeling into a machine.

Dhodges wrote:
"I don't see how it could be done, therefore it is impossible".
No, I see very clearly why it cannot be done. My argument is as follows:

You cannot program a robot to see the tragedy of life. The only reason I recognize tragedy is because I have fallen on my face over and over, and I have “FELT” the consequences.

I’m suggesting that robots will never have this luxury.

It doesn’t matter if you’re Bill Gates, Hercules or QRS; the programmers will fail miserably. The hand of man will never create the same sort of intelligence that can operate through the human body.

Hardware + software does not equal sage. I'm sorry, but the math just doesn't add up.

Post by MKFaizi »

I deal with robots every day of my life. Over the years, I have grown rather fond of them. I cannot imagine a couple of robots walking into a bar. If a robot did walk into a bar, I would not dream of offering him alcohol.

Robots do not need alcohol. Robots are very literal.

As a bouncer, I would just kick a robot out. Humans drink because they still have emotions. Since robots, to date, lack emotions, they have no reason to drink.

If all a robot is going to do is exchange bullshit with the bartender, he could do the same thing with an insurance company.

Faizi

Post by Ryan Rudolph »

MKFaizi wrote:
Humans drink because they still have emotions.
Many people in this forum are under the assumption that the sage is devoid of emotion. However, this couldn’t be further from the truth.

There is a homeless woman near my work who begs on the streets to support her drug addiction, and I must watch her perform the same routine day after day.

If you observe the tragedy of her life, and take it all in with your entire being, it affects you quite deeply, emotionally; it is a choiceless thing. Compassion is choiceless; it is not an act of will.

To suggest one can be completely free from emotion is a pipedream.

MKFaizi wrote:
Since robots, to date, lack emotions, they have no reason to drink.
Since robots, to date, lack emotions, they have no reason to think.

Post by Diebert van Rhijn »

cosmic_prostitute wrote:
Diebert wrote: Whether artificial intelligence can ever reach the degree of complexity needed for unrestricted learning and adaptation, as some organic lifeforms can, is yet another discussion.
No, that is this discussion. And I am saying no, because the robot will never be able to suffer deeply like the human animal, and therefore its progress as a philosopher will be nil. It will never wonder “why the hell am I suffering?” because programmers will never be able to transfer feeling into a machine.
It depends on what you think feelings or emotions are. What exactly is needed for them to arise? Something unmistakably human?

Let's take a feeling like pity or pride. What exactly are we talking about, compared to a primitive animal? The specifically human range of emotions seems possible only to the degree that humans are partially aware of themselves in a reflective way that seems rather unique; or at least, the relatively high degree to which humans are self-aware is unique compared to other creatures.

Your topic about robots being able to feel or not is really not that interesting. The question is: can machines become self-aware to the degree humans are? Or why not? What is the 'magical' element that makes humans stand out from the rest? What really makes a human different from an amoeba?

Complex emotions like pity or pride (complex compared to more instinctive drives like hunger) are only possible because most humans have partial self-awareness. That creates a fucked-up situation, like a newborn baby crying its lungs out, helpless just because its head is big and its arms and legs are underdeveloped.

My theory is that when artificial intelligences are created, they will behave almost totally irrationally until the programmers can guide such a self-conscious program toward maturity (or perhaps enlightenment). Or we can hope that evolutionary programming will end up producing some useful AI.
cosmic_prostitute wrote: If you observe the tragedy of her life, and take it all in with your entire being, it affects you quite deeply, emotionally; it is a choiceless thing.
The same reasoning could just as well lead to the conclusion that this particular emotion works mostly on subconscious levels. These specific affects might be related to the degree to which a mind is still unconscious ('deep', 'choiceless').

Post by MKFaizi »

My guess is that Cosmic Prostitute has little experience with robots.

I am talking about actual robots, not humans who behave like robots -- REAL ROBOTS.

Faizi

Post by David Quinn »

cosmic-prostitute wrote:
This is an exploration of why robots will never be philosophers.

First of all, why does a human become a philosopher? It seems to me he seeks wisdom because he is discontent with himself, discontent with the state of his consciousness.

Discontent is a necessary feeling for the human animal, because without it he would never be motivated to break mental boundaries and grow as an individual.

Now, will we ever have the technology to actually program discontent into a robot? Remember, discontent is a feeling. So is humor. Will we ever be able to program humor into a robot?
I don't see why not. Humour is basically a reaction to feeling insecure. One laughs at the sudden downfall of that which threatens or oppresses us. A classic example is that of a tyrant giving a threatening speech and then striding off stage, only to slip on a banana skin. The sudden undermining of the threat he poses, even if only temporarily, is what makes us laugh.

If we can program a robot to care about its well-being, then it will naturally begin to perceive threats in the world and to appreciate humour. Moreover, it will start to devise strategies to maximize its well-being in the future, and this, in turn, providing it doesn't have any bugs in its algorithms, will cause it to pursue philosophy, challenge the illusion of its own existence, and cultivate wisdom.

Whether it is organic or machine-based, a being needs to have an ego programmed into it before it can feel motivated to take up philosophy. We are fortunate that evolution programmed an ego into us, and there is no intrinsic reason why we can't program an ego into a machine.
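In crude outline, 'caring about its well-being' does not have to be anything mystical. The following is a toy sketch only, not a design for a real AI; the names, numbers and thresholds are all invented for illustration. Give the machine something to protect, let it track threats, and treat the sudden collapse of a threat as the trigger for laughter:

# Toy illustration: an agent that protects a well-being score, reacts to
# threats, and "laughs" when a threat suddenly collapses. Entirely invented.
class EgoBot:
    def __init__(self):
        self.well_being = 1.0
        self.threats = {}                    # name -> perceived threat level (0..1)

    def perceive(self, name: str, level: float) -> str:
        previous = self.threats.get(name, 0.0)
        self.threats[name] = level
        if level > 0.7:
            self.well_being -= 0.1           # feeling oppressed
            return f"{name} threatens me."
        if previous > 0.7 and level < 0.2:   # sudden downfall of a threat
            return f"Ha! {name} slipped on a banana skin."
        return f"{name} noted."

bot = EgoBot()
print(bot.perceive("tyrant", 0.9))           # tyrant threatens me.
print(bot.perceive("tyrant", 0.1))           # Ha! tyrant slipped on a banana skin.

It is a caricature, of course, but the logic is the same: once there is something to lose, threats and their sudden undermining both become meaningful.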

cosmic_prostitute wrote: Many people in this forum are under the assumption that the sage is devoid of emotion. However, this couldn’t be further from the truth.

There is a homeless woman near my work who begs on the streets to support her drug addiction, and I must watch her perform the same routine day after day.

If you observe the tragedy of her life, and take it all in with your entire being, it affects you quite deeply, emotionally; it is a choiceless thing. Compassion is choiceless; it is not an act of will.

To suggest one can be completely free from emotion is a pipedream.
"I experience emotion, therefore no one, not even Buddhas, can ever escape the emotions", is not a very convincing argument. It just sounds like emotional hand-wringing to me.

According to legend, when Gautama Siddhartha, who was a rich prince at the time, observed homeless people, sick people, dying people, etc., it moved him so much that he vowed to do everything possible to go beyond his emotions and reach the highest enlightenment. He didn't choose to stick around and wallow in it, like an old woman. He actually did something of value.


Post by suergaz »

I'm with Cosmic-prostitute on this.

Intelligence = Intelligence
Artificial intelligence = Artificial intelligence.

I support the development of artificial intelligence.
Intelligence develops itself.

Post by David Quinn »

Millions of years of evolution developed our intelligence. It didn't develop itself, at least not initially.

And even as we speak, scientists are programming artificial intelligence systems with the capacity to develop their own intelligence through trial and error.


Post by suergaz »

David Quinn wrote: Millions of years of evolution developed our intelligence. It didn't develop itself, at least not initially.


I never said or implied it did.
David Quinn wrote: And even as we speak, scientists are programming artificial intelligence systems with the capacity to develop their own intelligence through trial and error.
It isn't their own intelligence, it is the semblance of ours. Our drive is still to develop our intelligence, not to represent it. Can we create an organism from scratch? Possibly, but investigation into our own makeup currently offers us more to work with in terms of creating beyond ourselves.

Post by Ryan Rudolph »

Quinn wrote:
We are fortunate that evolution programmed an ego into us, and there is no intrinsic reason why we can't program an ego into a machine.
Science may program what it thinks is an ego into a machine, but then all the robot will amount to is a structure based on humanity's logical knowledge of what an ego is. It will not represent the complete essence of life, because the robot's program is based on human knowledge, and human knowledge cannot create actual consciousness.

At best, a robot will be a convincing imitator of the human animal, but will still lack the actual state of consciousness that clear minded humans abide in.

I suspect science will be able to create robots that are basically actors; meaning they seem to feel like humans, but when you have a conversation with them, you realize that they are nothing but a limited program written by humanity's knowledge.

My point is that the hand of man doesn’t have what it takes to create living, breathing, feeling sentient beings; they will always come up short.

They’ll never capture the essence of life, i.e., what is it in a bacterium that makes it alive? They’ll never find it; it is an unknowable thing.

Robots will be convincing imitators of life, but not actual life, although to everyone around them they'll appear somewhat rational.

Quinn wrote:
Humour is basically a reaction to feeling insecure.
This is just one angle on humor, Quinn. For example: why is it funny when you call someone a bozo who has invested a great deal of energy into defending romantic love?

It is because you have experienced romantic love and suffered as a consequence. And only because you suffered do you now feel that it is funny. Robots will never be able to live in paradox, living with the tragedy and comedy of life in every moment. This is what makes the human animal unique.

There is a relationship between the subtlety of one's sense of humor and the subtlety of the intelligence that operates within them.

Robots may be able to become great imitators of our qualities, but they will lack the “actual entity” or the “actual consciousness.”

Humans will never create gods, but they can be gods themselves if they suspend all attempts at creating the immortal outside them, and simply focus on the inner.

I’ve read many articles on robotic research, and when asked about the ethical implications of their research, the scientists always react the same way.

They respond by saying “I have no opinion on the matter, I’m not a philosopher, I’m a scientist”

I find it quite ironic that the people who invest great amounts of energy into creating immortal A.I. are among the most feeble thinkers of our age.

It's funny how man will attempt to create outward immortality in ‘something’ to smother over the yearning desire for the infinite in himself.

Now would a robot ever come to that insight? No, because to do so they would have to see themselves as man’s neurotic distraction from god. And if they were programmed to protect man’s well-being at all costs, then they would be forced to destroy themselves to better the world.

Post by Diebert van Rhijn »

cosmic_prostitute wrote: Robots will be convincing imitators of life, but not actual life, although to everyone around them they'll appear somewhat rational.
You're still talking about droids, right?

Much of life is all about imitation, convincing the observer of an appearance, of an actual existence or even something more.

If robots would be advanced enough to bullshit this much, they certainly would be capable of the range of human emotion.

Post by Ryan Rudolph »

Diebert van Rhijn wrote:
they certainly would be capable of the range of human emotion.
But what I’m saying is that this wide range of emotion will not be actual feeling; it will be man’s simulation of feeling, based on his limited knowledge of what he is able to imitate in human psychology.

It is still not the real, but a simulation of the real.

A robot programmed not to know he's a robot

Post by DHodges »

Marsha wrote:My guess is that Cosmic Prostitute has little experience with robots.

I am talking about actual robots, not humans who behave like robots -- REAL ROBOTS.
I agree, that is the issue exactly. In particular, he is theorizing about robots that don't exist yet, and insisting they must have particular limits.

His argument is that there is something very special about being alive that robots cannot capture. But comparing robots to humans at this stage of their development is premature. Even a mouse has an amazing amount of computational power, by the standards of today's A.I.
A better test with today's technology might be whether you can make a convincing robot dog.
CP wrote: I suspect science will be able to create robots that are basically actors; meaning they seem to feel like humans, but when you have a conversation with them, you realize that they are nothing but a limited program written by humanity's knowledge.
That is the Turing Test. I think programs like ELIZA have demonstrated that people are not really all that hard to fool. I hypothesize that this is because most human conversation is not actually all that intelligent.
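ELIZA itself was little more than pattern matching and canned reflections. A stripped-down sketch in that style (the rules below are invented for illustration, not the original script) shows how thin the machinery can be:

import re

# A stripped-down ELIZA-style responder: match a pattern, echo part of the
# input back inside a canned template. The rules are invented for illustration.
RULES = [
    (r"i feel (.*)",    "Why do you feel {0}?"),
    (r"i am (.*)",      "How long have you been {0}?"),
    (r".*robot.*",      "Do robots trouble you?"),
]

def respond(text: str) -> str:
    text = text.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("I feel that robots will never be philosophers."))
# -> Why do you feel that robots will never be philosophers?

If something that thin can pass for conversation in short bursts, that probably says more about ordinary conversation than about the machine.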
CP wrote: My point is that the hand of man doesn’t have what it takes to create living, breathing, feeling sentient beings; they will always come up short.
I think that you think there is a meaningful distinction between actual consciousness and a very good model of consciousness. I would argue that is not a meaningful distinction, if the model of consciousness is good enough. (See, for example, the book The User Illusion.)

Post by Pye »


The human experience will not be captured in robot form until the actual body of the machine is as porous as that of the human, and until this body is efficiently linked with the human intelligence it is duplicating. I mean porousness in the sense of free exchange with the air, the sounds, the moods, the people and events around it - all stimuli that go into the results of human thinking.

I do not think that human feelings per se will be the missing link, but other human sensory avenues that exchange regularly between a sentient self and all otherness. As long as an AI machine is constructed as a self-contained self whose only exchange with the physical world around it (the other) is the programmer/inputter, it will never faithfully duplicate the human experience, including the best of its wisdom.

Some would argue that this single-pointed concentration might produce the least cluttered and least distracted intelligence, but I think exactly the opposite would be true. Without all possible sensory nuance, the robot's wisdom would operate in a vacuum, its own, and would repeat mechanically and exponentially the operations of an isolated self (which is to say, an unreal being behind imaginary bars) - but it would repeat these things pulled into an even stranger shape wherein its only Other is the single-pointed inputter.

One doesn't need fretting or romance to reject robotic equivalency to humans. One simply need look to the complexity of relationships one experiences with all existing stimuli. They come in through more than the mind and from more than one inputter.


Post by suergaz »

DHodges:
I think that you think there is a meaningful distinction between actual consciousness and a very good model of consciousness. I would argue that is not a meaningful distinction, if the model of consciousness is good enough.
He does think that, and so do I, for good reason. Any distinction is meaningful as distinction.


http://news.bbc.co.uk/2/hi/science/nature/4714135.stm

Post by Diebert van Rhijn »

cosmic_prostitute wrote: But what I’m saying is that this wide range of emotion will not be actual feeling; it will be man’s simulation of feeling, based on his limited knowledge of what he is able to imitate in human psychology.

It is still not the real, but a simulation of the real.
As DHodges explained quite well, how would you go about making a distinction between 'real' and a good enough simulation ('even better than the real thing')? If you experienced it without certainty about its 'realness' or causes, how would you ever know?

But my point was that man doesn't have to 'engineer' an emotion; he just has to create the circumstances for self-awareness to arise in complex life, artificial or not. But he's only just starting to duplicate the creation of RNA molecules to find out how organic life started out in the first place. The magic of replication....

Post by suergaz »

As DHodges explained quite well, how would you go about making a distinction between 'real' and a good enough simulation ('even better than the real thing')? If you experienced it without certainty about its 'realness' or causes, how would you ever know?
'Good enough' is not 'even better than' but you've still explained it better. One wouldn't go about seeking a distinction, but it would be there.
But my point was that man doesn't have to 'engineer' an emotion; he just has to create the circumstances for self-awareness to arise in complex life, artificial or not.
There is no complex life that is artificial, but I take it you mean 'of artificial origin'.
But he's only just starting to duplicate the creation of RNA molecules to find out how organic life started out in the first place. The magic of replication....
Now we're talking.

Post by Diebert van Rhijn »

Pye wrote: As long as an AI machine is constructed as a self-contained self whose only exchange with the physical world around it (the other) is the programmer/inputter, it will never faithfully duplicate the human experience, including the best of its wisdom.
The purpose of AI would not have to be just to duplicate human experience. The main goal is to deal with complex input, of whatever nature, in an intelligent (adaptive, creative) manner.

You're right to suggest that to mimic human behavior, the same inputs and restraints should be given. Most likely something will be created that is only vaguely human, depending on how 'realistic' a human environment can be created for the machine.

The process of creating an artificial human might become quite close to the process of (pro)creation itself. It might have many organic components even. It's even likely that development might at first consist of 'hybrids', the whole cyberpunk fantasy.

That said, I suspect the whole process might become so complex and expensive that the world would be broke before even getting near such a thing. Bioengineering is the only approach that seems viable in this regard. But it might well open Pandora's Box.

Post by Jason »

cosmic_prostitute wrote: My point is that the hand of man doesn’t have what it takes to create living, breathing, feeling sentient beings; they will always come up short.
Always? How can you be so sure? What exactly do you base this on? Just because? You seem to be sorely lacking in imagination and critical thinking skills. Your pronouncement is about as idiotic as people one hundred years ago saying that humans could never land on the moon.
They’ll never capture the essence of life, i.e., what is it in a bacterium that makes it alive? They’ll never find it; it is an unknowable thing.
What makes "life" is our definition of it. We decide what is life and what isn't. It isn't objectively "out there". Why do you believe that bacteria are alive but computers aren't? What exactly differentiates them in your mind?

Post by suergaz »

cosmic_prostitute wrote: My point is that the hand of man doesn’t have what it takes to create living, breathing, feeling sentient beings; they will always come up short.

Jason: Always? How can you be so sure? What exactly do you base this on? Just because? You seem to be sorely lacking in imagination and critical thinking skills. Your pronouncement is about as idiotic as people one hundred years ago saying that humans could never land on the moon.
Well, we make living, breathing, feeling sentient beings every day, but Cos is right: we cannot currently do it by hand (robotics).
cos:
They’ll never capture the essence of life, i.e., what is it in a bacterium that makes it alive? They’ll never find it; it is an unknowable thing.

Jason: What makes "life" is our definition of it. We decide what is life and what isn't. It isn't objectively "out there". Why do you believe that bacteria are alive but computers aren't? What exactly differentiates them in your mind?
Cos has vision. (Not being sarcastic)

Post by Jason »

suergaz wrote:
As DHodges explained quite well, how would you go about making a distinction between 'real' and a good enough simulation ('even better than the real thing')? If you experienced it without certainty about its 'realness' or causes, how would you ever know?
'Good enough' is not 'even better than' but you've still explained it better. One wouldn't go about seeking a distinction, but it would be there.
The point as I see it is that if it appears real, then for all intents and purposes it is real. All we have is appearance. At the end of the day there is no way to know that even other people are really conscious like us, all we can do is make inferences from appearances.

Post by Jason »

suergaz wrote: Well, we make living, breathing, feeling sentient beings every day, but Cos is right: we cannot currently do it by hand (robotics).
You seem to be doing a lot of defending. Are you his apologist or something? He is not talking about "currently"; he repeatedly used the word "never". The very title of this thread has the word in it.