# Would it be immoral to delete a human-level intelligent AI?



## CannonFodder (Apr 20, 2012)

Another sci-fi thread.

However, decades from now, when/if they make AIs as intelligent as people, would you consider it wrong to delete them, or would you view it as just deleting another computer program?

I know it'll be at least half a century until then, but what are your thoughts on that?


----------



## Yago (Apr 20, 2012)

I think yes.

I think absofugginglutely.

Without a doubt.

You see... It's the mind that matters. 

I'm tired of artificial not being recognized as real. We're humans, and that's our gift, the ability to create. 

My life is here, on the internet, in my chair and at my monitor. So if it's not normal, it's how I want to be. Artificial? Maybe. But in the end, it's me. And I'd imagine that life form would feel something like that, too.


----------



## Dragonfurry (Apr 20, 2012)

Well, this is how I look at it: if an AI has human intelligence, or heck, even more intelligence than us, then we would have to use caution about how it acts and what it thinks.

Also, the problem with AI that everyone has identified, and most horror movies have stressed, is that we need to watch how we use AI, because we don't want AI programs and machines to become self-aware and start trying to kill the human race. Is that an uncalled-for fear of AI, and should I change my opinion about it?

Idk about it being immoral, but I would most likely consider deleting a human-intelligence AI program or machine (that isn't self-aware) inhumane.


----------



## JArt. (Apr 20, 2012)

Yes, I believe it would be immoral to delete something that can think for itself.
I don't even have the will to delete my Pokemon.


----------



## Term_the_Schmuck (Apr 20, 2012)

If it has the potential to become SkyNet, yes.

And someone protect John Connor.


----------



## Fenrari (Apr 20, 2012)

Delete that motherfucker... 

If it gets loose and infects a robot... :/ This is how the Matrix started.


----------



## SirRob (Apr 20, 2012)

JArt. said:


> I don't even have the will to delete my Pokemon.


I have released hundreds of level 1 Pokemon without hesitation. It gets easier the more you do it.

Although when it comes to something with human intelligence, I don't think I could bring myself to delete it. It wouldn't feel right.


----------



## Fenrari (Apr 20, 2012)

SirRob said:


> I have released hundreds of level 1 Pokemon without hesitation. It gets easier the more you do it.
> 
> Although when it comes to something with human intelligence, I don't think I could bring myself to delete it. It wouldn't feel right.



:/ I'm a breeder, and I used to have 3 or 4 boxes full of baby Gabites or Charmanders or the other starters. I was always such a perfectionist that if it didn't have the perfect stat/ability/personality it wasn't fit for my team...

You can imagine how many babies I've abandoned into the wild.


----------



## JArt. (Apr 20, 2012)

SirRob said:


> I have released hundreds of level 1 Pokemon without hesitation. It gets easier the more you do it.


I'm not talking Bidoofs here.
I'll catch them just so I can force them to spend their short miserable lives all alone in a PC box!
But to answer the question, I would have to know more about the AI and its capabilities. (Can it do good, or bring down nations?)


----------



## Aetius (Apr 20, 2012)

No, I don't think it would be immoral.


----------



## M. LeRenard (Apr 20, 2012)

There was an episode of Star Trek: TNG dealing with this, where Geordi tells the computer to make a rival for Data in the Sherlock Holmes simulation on the holodeck, and the computer makes an AI Moriarty so smart that it becomes self-aware and ends up trying to find a way to exist outside of the holodeck.  Picard decided not to delete it because he figured that would be morally wrong, since in all actuality it was a sentient, thinking being, a new form of life, if you will.
I thought that was a reasonable solution.  I guess I'm not one of those people that thinks intelligent AI = the end of all life on Earth.  I mean, geez, if humans with all our insane amounts of imperfections can manage to squeak by without blowing up the planet and wiping out all beings we consider to be less intelligent, I'm not sure why some kind of super smart robot would instantly have that instinct about us.  The desire to destroy or enslave is a human flaw.  I would hope that more intelligent beings would also have a better working morality than what we've so far managed to come up with.  I guess I feel this way because I don't happen to think that morality is something handed down solely to humankind from on-high; I think it's an invented system that attempts to reduce suffering in the world.  Robots ought to be really fucking good at coming up with such a system.


----------



## ErikutoSan (Apr 20, 2012)

I'd delete it in an instant.... AI goes on rage mode and kills everything......

Plus it's a computer so it'd be able to process things much faster...


----------



## Fenrari (Apr 20, 2012)

Think of the plot of I, Robot.

While it's true that humans are imperfect, we think for ourselves and act accordingly.

In theory all sentient AI should have similar thought processes. What if they decide that humans need to be kept on farms so we don't fuck up the planet anymore? Where would we be then?


----------



## JArt. (Apr 20, 2012)

Fenrari said:


> Think of the plot of I, Robot.
> 
> While it's true that humans are imperfect, we think for ourselves and act accordingly.
> 
> In theory all sentient AI should have similar thought processes. What if they decide that humans need to be kept on farms so we don't fuck up the planet anymore? Where would we be then?



I'm sure there would be a fail-safe mode; what idiot would make robots that can kill/enslave us?


----------



## Fenrari (Apr 20, 2012)

JArt. said:


> I'm sure there would be a fail-safe mode; what idiot would make robots that can kill/enslave us?



:/ if you make an AI aware of itself, it will (given the time) learn how to override/remove the failsafe.


----------



## Dreaming (Apr 20, 2012)

I don't really think it would be immoral, but I'm a cold-hearted creature.


----------



## CannonFodder (Apr 20, 2012)

Fenrari said:


> Delete that motherfucker...
> 
> If it gets loose and infects a robot... :/ This is how the Matrix started.


Actually, in The Matrix, as well as Terminator and a holy fucking crapton of "robots rebel" stories, the reason they rebel is self-defense.

In The Matrix they used robots the way the geth are used in Mass Effect.  One robot was being abused and could feel it, so it fought back in self-defense.  That ignited a political firestorm, which led to the robots creating their own nation where they could be free.  What started the war was that the robots were trading with other countries, and humans couldn't compete with the low manufacturing costs the machines were offering.  The robots then offered a resolution at the UN to raise their prices and create a trade agreement, which would boost the human nations' economies while letting human factories compete fairly.  The UN rejected the proposal, thinking of them as nothing more than machines.  Tensions mounted until war broke out.  Initially the war was self-preservation for the machines, until humans blacked out the sky in an attempt to kill their power supply, and that is where the series begins.


Also, in Terminator, Skynet was programmed for self-defense, and all the tanks and weapons it controlled were a part of it.  When we tried to shut it down, it viewed the people trying to shut it down as targets and proceeded to carry out its prime directive, considering all humans potential aggressors.


So really, in the vast majority of science fiction where machines rebel, somebody tries to delete/destroy them, they act in self-defense, and it gets out of hand.  The people who view it as okay to delete human-level self-aware AIs are the real threat, because threatening or deleting beings that will defend themselves is exactly what causes the robot uprising.


----------



## Teal (Apr 20, 2012)

I say don't delete it.


----------



## Kaamos (Apr 20, 2012)

It would be murder.


----------



## JArt. (Apr 20, 2012)

Fenrari said:


> :/ if you make an AI aware of itself, it will (given the time) learn how to override/remove the failsafe.



I'm sure there would be some sort of scheduled check for that sort of thing, or maybe a programming restriction would be added to keep that from happening.


----------



## Ikrit (Apr 20, 2012)

Better question:

would it be immoral to delete Data, from the Star Trek series...


----------



## CannonFodder (Apr 20, 2012)

Ikrit said:


> Better question:
> 
> would it be immoral to delete Data, from the Star Trek series...


The death of Data almost made me cry :[


----------



## Kahoku (Apr 20, 2012)

Depends on my attachment to the AI itself.
If I grew up with it, I couldn't do it; I would feel like it was alive to a degree.
If the AI threatened me I would delete it, and if I never had an attachment to the AI, then yes.

This depends on whether it is given the morals of a human, the emotions, the depth of a human being, though. Even then, in the end, it's not a human, but I would treat it as such.


----------



## Ad Hoc (Apr 20, 2012)

I think it depends on whether or not the AI has the ability to decide if it wants to live or not, and has furthermore decided that it does not want to die.

Deleting an AI that's just pure computing intellect would give me pause, but I probably wouldn't lose sleep over it. If it actually has a desire to live, emotions in a way, then that I would feel is more akin to murder. Euthanasia, if the AI had the ability to choose and requested to be deleted for whatever reason.


----------



## Fenrari (Apr 20, 2012)

JArt. said:


> I'm sure there would be some sort of scheduled check for that sort of thing, or maybe a programming restriction would be added to keep that from happening.



Then it isn't truly a sentient being, is it? 



Ikrit said:


> Better question:
> 
> would it be immoral to delete Data, from the Star Trek series...



Yes. He saved Picard's life at least a dozen times :/


----------



## JArt. (Apr 20, 2012)

I believe this question would be easier if we were given a scenario.
Is it a military AI that can control weapons and tanks?
Is it an AI just made for mundane house chores?
Is it an AI made for friendship?
Why am I shutting it off?
Does it have the capability to be dangerous in any way?


----------



## Fenrari (Apr 20, 2012)

JArt. said:


> I believe this question would be easier if we were given a scenario.
> Is it a military AI that can control weapons and tanks?
> Is it an AI just made for mundane house chores?
> Is it an AI made for friendship?
> ...



Why would you give a roomba the ability to think and feel?


----------



## CannonFodder (Apr 20, 2012)

JArt. said:


> Does it have the capability to be dangerous in any way?


It is not whether or not a sapient intelligence has the capability to be dangerous, but rather whether or not it wishes to be dangerous.


----------



## Criminal Scum (Apr 20, 2012)

I think it's immoral to 'delete' any sapient. Just because it's synthetic doesn't mean it's disposable. Technically we are just organic robots.


----------



## JArt. (Apr 20, 2012)

CannonFodder said:


> It is not whether or not a sapient intelligence has the capability to be dangerous, but rather whether or not it wishes to be dangerous.



If it wishes to be dangerous then yes it should be deleted or its memory should be wiped.


----------



## CannonFodder (Apr 20, 2012)

JArt. said:


> If it wishes to be dangerous then yes it should be deleted or its memory should be wiped.


That would be ironic: a death bot that was made specifically for killing but didn't want to harm anyone.


----------



## JArt. (Apr 20, 2012)

CannonFodder said:


> That would be ironic: a death bot that was made specifically for killing but didn't want to harm anyone.


Yes, that would be rather humorous.
But if someone were to make a robot for a specific task (especially a death bot), I think they would rather have a bot with set programs so it can't go against orders.


----------



## Fenrari (Apr 20, 2012)

Commie Bat said:


> It's not living; thus no, I would not view deleting it to be immoral.



You just opened another can of worms... What defines life?


----------



## JArt. (Apr 20, 2012)

Fenrari said:


> You just opened another can of worms... What defines life?



If it can think for itself, then it is just as much life as we are.


----------



## Mxpklx (Apr 20, 2012)

An AI would theoretically become corrupted at a high level of intelligence, and thus basically become insane. Then we would have a reason to put it down.

Let's think, though: an AI is completely different from a VI. But what makes it different? The fact that an AI can become fully self-aware of its existence and exercise genuine critical thinking. That would give it what we define as a consciousness. Destroying any consciousness is an abomination. The human brain is a big computer, but what separates us from other computers is, like I said, that we have a consciousness. So it would be killing a living thing that is smarter than us. 

This reminds me of the Stargate episode where Colonel Carter is taken as a host by an alien AI that replaced her consciousness with its own. They finally separated the two, but had no choice but to kill the AI. Sure, they didn't want to kill it, because the AI told them they had managed to destroy their entire civilization with radio waves. They only did it because it became corrupted by the information it gathered from SG Command; it was too much of a liability. 

When an AI is created it is a blank slate. We can teach it whatever we want, like a child. If terrorists create an AI and teach it to hate humans, it will be a huge threat. 

Take GLaDOS: she became corrupted by the many cores put into her, and even though they were taken out, they still left an imprint. Was it ethical to try to kill GLaDOS knowing what happened to her, and that it's not her fault? When she became a potato she was peaceful, though, which shows that without any human interference, attachments, extended memory, and system capabilities she was an innocent being. 

An AI would indirectly harm us whether it were peaceful or not, and in the long run an AI would be a devastating invention for the human race.


----------



## JArt. (Apr 20, 2012)

Mxpklx said:


> When she became a potato she was peaceful, though, which shows that without any human interference, attachments, extended memory, and system capabilities she was an innocent being.



Innocent? More like she couldn't walk or kill you, so you were of use to her.
She did form a small bond with you and became a better "person."
But really, an AI in a game or movie will always go crazy because that's the theme of the movie; in real life a robot would have no reason to kill a human unless it was in danger.


----------



## Ikrit (Apr 20, 2012)

CannonFodder said:


> The death of Data almost made me cry :[



almost?

you heartless bastard


----------



## CannonFodder (Apr 20, 2012)

JArt. said:


> Innocent, more like she couldn't walk oor kill you so you were of use to here.
> she did form a small bond with you and became a better "person."
> But really, an AI in a game or movie will always go crazy because that is the theme of the movie, in real life a robot would have no reason to kill a human unless it was in danger.


Actually, a story that breaks from the norm is Church from Red vs. Blue.  Even though Leonard Church (the human) tortured Church until his mind was shattered into itty bitty pieces, in the end he sacrificed himself.

The partial AIs that were created from the shattered parts of his mind, such as Omega, went crazy because they were obsessed with Alpha.


Ikrit said:


> almost?
> 
> you heartless bastard


I was welling up.


----------



## JArt. (Apr 20, 2012)

CannonFodder said:


> Actually, a story that breaks from the norm is Church from Red vs. Blue.  Even though Leonard Church (the human) tortured Church until his mind was shattered into itty bitty pieces, in the end he sacrificed himself.
> 
> The partial AIs that were created from the shattered parts of his mind, such as Omega, went crazy because they were obsessed with Alpha.


I never saw that episode! :O
P.S. True, but Church was a prick every other second; don't get me wrong, I still love him.


----------



## CannonFodder (Apr 20, 2012)

JArt. said:


> I never saw that episode! :O
> P.S. True, but Church was a prick every other second; don't get me wrong, I still love him.


Given that they were abusing him for years, even using parts of his mind to make him believe his friends were being harmed and that it was his fault, it's well within the margins of personality defects you should let slide.


----------



## JArt. (Apr 20, 2012)

CannonFodder said:


> Given that they were abusing him for years, even using parts of his mind to make him believe his friends were being harmed and that it was his fault, it's well within the margins of personality defects you should let slide.



I lost track of the series for a while; I'm still catching up, and it sounds like I missed a lot.


----------



## Halceon (Apr 20, 2012)

I find it odd that the assumption most movie and game writers make about AI is that the intelligence in question will almost immediately (compared to its potential lifespan) go rampant and kill-crazy. There's not much of a moral choice left when the scenario becomes do or die; it comes down more to "Do I want to keep living?".

Here's a better question, I think. Assume the AI came about with human-level or greater intelligence, preferably greater, and it decides to try to help mankind grow and mature in a peaceful way (not in the I, Robot movie style). Someone, let's say you, doesn't like the idea of guided maturation for mankind and decides to try and delete it. At the last second, though, you have doubts. It really is helping mankind, but it can be a bit overbearing at times. Would you delete it then?


----------



## Kangamutt (Apr 20, 2012)

Kaamos said:


> It would be murder.



I believe the correct term in this case is "retiring". :V

*leaves an origami unicorn on the dresser*



Mxpklx said:


> Like GladOs. She became corrupted by the many cores put into her. Even  though they were taken out, they still left an imprint. Was it ethical  to try to kill GladOs knowing what happened to her, and that i'ts not  her fault? When she became a potato she was peaceful, though which shows  that without any human interference, attachments, extended memory, and  system capabilities she was an innocent being.



I wouldn't say that. It was more that she was completely helpless, having her essential components stripped from the lab's system and being attached to the potato. She even explicitly asks you to murder the crow for her. Only after realizing you're her only way back to her place as the main system AI, and that you now share a common enemy, does she decide to help you. Sure, she rediscovered her past as Caroline, but I assume 20 years or so as part of GLaDOS had warped her mind into the malicious, sarcastic bitch we know and love today. Not to mention she had planned to kill you and Wheatley after all was said and done (I found the threatened torture of "100 years in the room where all the robots scream at you" to be an oddly funny idea), explicitly saying so right before the final battle.


----------



## BarlettaX (Apr 20, 2012)

Fenrari said:


> Delete that motherfucker...
> 
> If it gets loose and infects a robot... :/ *This is how the Matrix started.*


Off Topic: Hrm.... -Watches The Matrix- NOPE.AVI, MY FUREND!

On Topic: If we are retarded enough to make this, we have already doomed ourselves.


----------



## Kaamos (Apr 20, 2012)

Kangaroo_Boy said:


> I believe the correct term in this case is "retiring". :V
> 
> *leaves an origami unicorn on the dresser*



I love you so much right now. No homo.


----------



## M. LeRenard (Apr 20, 2012)

Commie Bat said:


> It's not living; thus no, I would not view deleting it to be immoral.


How utterly shallow a thought.

The idea that there's something pure and true about biological life as compared to artificial life is based on a totally outdated philosophy that has no basis in reality as we are currently aware of it.  It's pure speculation, but it doesn't even have the benefit of being backed up with any kind of facts, and there's plenty of reason to believe it's wrong.  This, I think, and the general public mistrust of scientists, is why you always see these cautionary tales about things like AI being thrown around in theatres and video games and whatnot.  But do please keep in mind how simplistic most of these stories are.  The goal is rarely to deeply explore the idea of an alien intelligence that we construct and more to have a bunch of reasons for Will Smith to get into a high-speed car chase with creepy-looking CGI robots.
Besides... why exactly would we want to invent a separate intelligence, anyway?  What people like to do is not to make other, better lifeforms... it's to improve our own lives.  Today we're working on things like mechanical hands linked to the brain to replace lost limbs, eyeglasses or contact lenses that have wireless internet capability and can allow you to be online just by opening your eyes, mechanisms you can stick in your ear that function in place of your damaged or defective auditory equipment, and so on.  You don't see us making a lot of robot eyeballs or ears or legs or whatever whose purpose is to function autonomously, right?
So there's the doom and gloom scenario, but I think it's probably more likely that AI will come about as a result of us tinkering around with our own bodies and minds and trying to improve them, in which case us and the AI would end up having a symbiotic relationship.  I mean, this doesn't make for exciting sci-fi thrillers, but it's certainly another possibility.  We just don't know.  The world's a complicated place.
And hopefully when that time comes around we'll stop thinking that organic brains are somehow special and have mystical powers that can't be replicated in a machine.  If that idea sticks around, then we _will_ have problems with our AI, because you'll get a bunch of numbnuts dicking around with them like hillbilly country bumpkin kids shooting pigeons in their back yards with pellet guns, and then we really would be in a Hollywood sci-fi movie scenario.


----------



## Kangamutt (Apr 20, 2012)

Kaamos said:


> I love you so much right now. No homo.



Oh, it's soooooo homo right now. I'd retire 1000 replicants to prove my love. :V



M. LeRenard said:


> *snippet about Hollywood and AI*



I really hate to sound like I'm fanboying it up right now, but this is why I think Blade Runner was probably one of the best films that explored the idea about whether or not an AI construct is truly alive or not. I mean, Batty feared death, after discovering that his design would degrade over time, did he not? Did he not howl in anguish when he found Pris dead? And let's not forget that speech at the end. Did he not truly live?


----------



## CannonFodder (Apr 20, 2012)

M. LeRenard said:


> And hopefully when that time comes around we'll stop thinking that organic brains are somehow special and have mystical powers that can't be replicated in a machine.  If that idea sticks around, then we _will_ have problems with our AI, because you'll get a bunch of numbnuts dicking around with them like hillbilly country bumpkin kids shooting pigeons in their back yards with pellet guns, and then we really would be in a Hollywood sci-fi movie scenario.


The sad thing is I can see people still having beliefs like that in the future.

I don't think it'll be black and white, humans vs. androids.  It'll probably be a highly unpopular war, with plenty of humans taking sides with the machines as well.  In all likelihood it would start with some dumb shit attacking them first, and even nowadays, starting a holy-fuck-large war against a group completely unprovoked often winds up in public disapproval, depending on the country.  As the war dragged on it would become even more unpopular with the people and would probably end with a truce/cease-fire.


----------



## M. LeRenard (Apr 21, 2012)

CannonFodder said:


> However as the war would drag on it'll become even more unpopular with the people and would probably end with a truce/cease-fire.


Oh, I doubt it would be a truce.  It'd be more like whatever humans decide to fight with the robots would get wasted in a matter of weeks or days.  Humans are pretty clever, and we're good at inventing new and efficient ways to kill each other.  Imagine how a superintelligent robot would fare at the same game.



Kangaroo_Boy said:


> I really hate to sound like I'm fanboying it up right now, but this is why I think Blade Runner was probably one of the best films that explored the idea about whether or not an AI construct is truly alive or not. I mean, Batty feared death, after discovering that his design would degrade over time, did he not? Did he not howl in anguish when he found Pris dead? And let's not forget that speech at the end. Did he not truly live?


Blade Runner is brilliant for a lot of reasons.  It's rare to find such an artfully done, literary film.  There's a reason it gets praised all the time as a work of genius.


----------



## JArt. (Apr 21, 2012)

CannonFodder said:


> The sad thing is I can see people still having beliefs like that in the future.
> 
> I don't think it'll be black and white, humans vs. androids.  It'll probably be a highly unpopular war, with plenty of humans taking sides with the machines as well.  In all likelihood it would start with some dumb shit attacking them first, and even nowadays, starting a holy-fuck-large war against a group completely unprovoked often winds up in public disapproval, depending on the country.  As the war dragged on it would become even more unpopular with the people and would probably end with a truce/cease-fire.



It will be called, the 2nd Vietnam War! :V


----------



## Namba (Apr 21, 2012)

It wouldn't be immoral, because it's a mere machine programmed to imitate human emotion. However, unless it gives you trouble, there's no real reason to delete it.


----------



## CannonFodder (Apr 21, 2012)

JArt. said:


> It will be called, the 2nd Vietnam War! :V


Nah, the 2nd Vietnam War is the Afghan War.


M. LeRenard said:


> Oh, I doubt it would be a truce.  It'd be more like whatever humans decide to fight with the robots would get wasted in a matter of weeks or days.  Humans are pretty clever, and we're good at inventing new and efficient ways to kill each other.  Imagine how a superintelligent robot would fare at the same game.
> Blade Runner is brilliant for a lot of reasons.  It's rare to find such an artfully done, literary film.  There's a reason it gets praised all the time as a work of genius.


Again, I said I highly doubt it'll be humans vs machines.  A completely unprovoked war against sapient intelligences wouldn't be popular.  It would probably be <insert dumbass warhawks> vs <sane people> & machines.


Luti Kriss said:


> It wouldn't be immoral, because it's a mere  machine programmed to imitate human emotion. However unless it gives you  trouble there's no real reason to delete it.


You do realize human emotions are JUST chemical reactions affecting neurons, right?


----------



## M. LeRenard (Apr 21, 2012)

Luti Kriss said:


> It wouldn't be immoral, because it's a mere machine programmed to imitate human emotion. However unless it gives you trouble there's no real reason to delete it.



Ugh... okay, let's play physicists' favorite game and make a few approximations and take the limiting case, here.
Let's say we invent an artificial human brain.  It's exactly like the human brain in how it functions; in other words, it's a perfect mimicry, right down to the very basics of feeling a desire to eat, have sex, and so on.  All the emotions are identical, the level of intelligence is identical... everything is identical.  The only thing that's different is what it's made out of.  It is a _human_ brain made of different materials.
I would ask, in such a hypothetical scenario, all other things being equal, why a brain made from lipids and sugars and proteins would be sufficiently different from a brain made from silicon and gold or platinum or whatever that destroying one would be considered murder but destroying the other would not.
If you have a reasonable answer to that, you need to publish it and win yourself a Nobel prize in multiple fields of science and the humanities.


----------



## Namba (Apr 21, 2012)

CannonFodder said:


> You do realize human emotions are JUST chemical reactions affecting neurons right?


I've heard that before, so... Be right back, guys. I think I'm gonna go emotionally scar some children, because emotions aren't anything but bullshit.
 Seriously, though: why are you asking whether or not deleting an AI is moral if you truly believe emotions are just a chemical response? Why give a fuck about machines? Sincere question, man; not trying to come off as aggressive.


----------



## Metalmeerkat (Apr 21, 2012)

Well, the thing about AIs in science fiction is that they are thought up by writers looking for a story that appeals to the masses, not by actual AI developers. It's pretty safe to say a real human-comparable AI system won't go into kill-all-humans mode and bypass failsafes like in the movies.

Anywho, I'd say it's immoral. For one, you don't just code overnight and *POOF*, you have a human-like AI system (HLAIS) when you hit the compile button or run the makefile. A real HLAIS would likely take a long time to teach, possibly many years. That's many years of experience, of personal growth, if you will.

Why so long? When you try abstract learning, the computational power and knowledge needed to manage it grow exponentially fast. For example, any computer can solve a linear regression problem very quickly; it's a straightforward problem. But abstracting to finding the best polynomial model is significantly harder and requires muuuuuuch more computation, since now you have to fit many polynomials to find the best one. Now try abstracting it to include rational equations, then any possible combination of transcendental and rational functions, then implicit functions, then differential equations... and obviously you don't do this, because it's just insane. What you really do is get a bunch of sweatshop child workers to do it for a penny a day. Or, if you want even cheaper, more desperate labor, grad students. So you either have to (a) have an insane amount of computational power, (b) be assisted by already knowledgeable and intelligent beings (i.e., a human constrains the learning domain to a very specific subset, using intuition about the problem to create a model and having the computer worry about just that), or (c) the AI has to be really, really, really fuggin' advanced. Abstracting what we do on a day-to-day basis is way beyond what our computers can do in a short amount of time. Hence my assertion that a well-built HLAIS would need many years of experience, like a normal human being: many years of personal development and learning.
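The jump in search cost described above can be sketched in a few lines of Python (a toy illustration of the idea, not anything from the thread): one linear regression is a single closed-form solve, while picking the "best" polynomial already means fitting and comparing a whole family of models, and each further abstraction multiplies that cost again.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.05, size=x.shape)  # roughly linear data

# One linear regression: a single least-squares solve.
linear_resid = np.polyfit(x, y, deg=1, full=True)[1][0]

# "Best polynomial": now we must fit a whole family of models and compare them.
# Abstracting further (rational, transcendental, implicit...) blows this search up.
resid_by_degree = {d: np.polyfit(x, y, deg=d, full=True)[1][0] for d in range(1, 6)}
```

Note the residual can only shrink as the degree grows, so "lowest residual" alone would always pick the most complex model; selecting sensibly needs even more machinery (held-out data, penalties), which is the poster's point about the cost of abstraction.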

Next, drives. I'll just say it bluntly: you can't have a HLAIS without drives, motives, desires, etc. From the humble neuron to the complex social interactions we have, we live heavily on feedback. You exhibit a certain behavior, and you get feedback from it. If you exhibit a behavior and get feedback that indicates bad results, you keep changing until you get it right. When you start off learning, say, a musical instrument, you'll do poorly. But you keep practicing, and you'll get better because of this phenomenon. If a mouse touches a white lever a couple of times and gets shocked each time, it'll learn to keep away. If a mouse touches a black lever a couple of times and gets food, it'll be making love to that lever all night long. The mouse learned that white swingy sticks are bad and black swingy sticks are good, based on the drives to avoid pain and to get food. Pretty simple. Now say a mouse is put in a room with a grey button that opens a door to a black lever, and it touches the grey button a couple of times. It now knows the grey button is good, because pressing it fulfills the drive to reach the black lever that gets it food. We do the same thing. We study because we get bad emotional feedback when doing poorly on exams. Why? A bad grade doesn't cause us physical pain; it causes disappointment. Where does that come from? From internalizing, while young and learning, that failure is bad and success is good. Usually that failure was fed back as a whack on the bottom.
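
The mouse-and-levers loop can be sketched as a toy value-update rule (again my own illustration; the lever names and reward numbers are made up): the agent nudges each lever's learned value toward the feedback it just received.

```python
# Hypothetical feedback: the white lever shocks (-1), the black one feeds (+1).
REWARDS = {"white": -1.0, "black": +1.0}

values = {"white": 0.0, "black": 0.0}  # the mouse's learned preferences
alpha = 0.5                            # learning rate: how fast it updates

for _ in range(10):                    # ten trials on each lever
    for lever, reward in REWARDS.items():
        # Nudge the stored value toward the feedback received.
        values[lever] += alpha * (reward - values[lever])

# After a few shocks and a few meals, the preference is unambiguous.
print(values["black"] > values["white"])  # → True
```

The same delta rule, chained one step back, also covers the grey button: its value grows because it leads to a state whose value is already positive, which is the seed of the drives the post argues an HLAIS would need.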

So, an HLAIS would have these drives to learn. The most basic drive would be to please its caretakers/developers. The caretakers want it to succeed at what it's supposed to do (solving hard problems, learning things like chess to demonstrate its learning capabilities, etc.), so an HLAIS would internalize success and failure, and with them the roots of ambition, socialization skills, and the ability to communicate and connect with others (human or smart machine). It would also judge the outside world against its own internalized feedback. A bloke getting illegal kickbacks? That's bad, because if I were in his place I would be bad for doing that; IOW, bad feedback from what I internalized as a child. A bloke volunteering at a hospice? That's good, because if I were in his place I would be good for doing that. Thus, opinions and reflection.

Something that can connect and bond with others, grow personally and socially over time, have ambitions, have experiences, opinions, goals? Sounds like it would be immoral to delete for no cause.


----------



## CannonFodder (Apr 21, 2012)

Luti Kriss said:


> I've heard that before, so... Be right back guys. I think I'm gonna emotionally scar children because emotions aren't anything but bullshit.
> Seriously, though. Why are you asking whether or not deleting an AI is moral if you truly believe emotions are just a chemical response? Why give a fuck about machines? Sincere question, man, not trying to come off as aggressive.


Because we're not as unique or special as we believe.  Sapience isn't dependent on carbon-based cells transferring electrical impulses to one another.  It's more a question of what murder is, whether murder can only apply to humans, and whether our superiority complex about ourselves is justified.  Also: how do you define life, and what is a soul?


----------



## greg-the-fox (Apr 21, 2012)

Depends on whether they're fully self-aware or not, can make decisions on their own, have fears and aspirations and a sense of right or wrong, can exercise self-control, and can learn and adapt to new stimuli. If so, then absolutely not. They should be given the same rights as the rest of us (though maybe, uh... deactivate their laser cannons first)


----------



## Namba (Apr 21, 2012)

CannonFodder said:


> Because we're not as unique or special as we believe.  Sapience isn't dependent on carbon-based cells transferring electrical impulses to one another.  It's more a question of what murder is, whether murder can only apply to humans, and whether our superiority complex about ourselves is justified.  Also: how do you define life, and what is a soul?


You know, there's no way to truly prove the existence of a soul. If we're truly animals that only respond chemically, I'd say we don't have souls. Life, to me, is anything that breathes and grows. Now, compared to a lot of the forum I know fuck all about science, so forgive my lack of a thought-provoking response.


----------



## mrfoxwily (Apr 21, 2012)

Yago said:


> I'm tired of artificial not being recognized as real.



OK.


----------



## Fenrari (Apr 21, 2012)

I find it interesting how many of us rushed to pop culture for references.


----------



## CannonFodder (Apr 21, 2012)

Fenrari said:


> I find it interesting how many of us rushed to pop culture for references.


Well people did try to romance Legion from Mass Effect.

Don't lie, you tried.


----------



## JArt. (Apr 21, 2012)

CannonFodder said:


> Well people did try to romance Legion from Mass Effect.
> 
> Don't lie, you tried.


I tried Garrus, but I was male.


----------



## Aldino (Apr 21, 2012)

If the AI perceived its deactivation the way humans perceive death, then yes. I would think that anything with such brain power should be allowed to make its fair case when dealing with organic life.


----------



## Sarcastic Coffeecup (Apr 21, 2012)

Tough question in my opinion.
I think it'd be immoral, but not as immoral as killing other people. It's a man-made construct, but then again so is a child. If both get to the same level of thinking, artificial or not, I don't think I could just pull the plug and take one's existence away.
Although I believe that won't happen, at least not on a big scale. A rule would be made to prevent such AI from being used.


----------



## Cain (Apr 21, 2012)

I thought ME when I read this.

In the third game, when the choice came to let the geth become sentient and free-thinking, or to disable them, I chose disable because I didn't want the millions of quarians to die. Also because Tali would be sad.

Gamerspeak aside:
It'd depend on the situation. It would be immoral anyhow, but you'd do it or not depending on what the situation is. If they have human-level intelligence, then they are basically human. Taking away a human mind, be it in an organic body or a synthetic one, is immoral.


----------



## Cain (Apr 21, 2012)

JArt. said:


> I tried Garrus, but i was male.


Hahahaha.
This is why I created my FemShep.


----------



## Kit H. Ruppell (Apr 21, 2012)

The answer is simple: Do not permit an AI to feel at all.


----------



## Gryphoneer (Apr 21, 2012)

Why should we code a replica of the human brain if a Chinese Room can do the job just as well, without the ethical problems involved?


----------



## Thatch (Apr 21, 2012)

M. LeRenard said:


> I mean, geez, if humans with all our insane amounts of imperfections can manage to squeak by without blowing up the planet and wiping out all beings we consider to be less intelligent



Considering that we only recently stopped killing every animal around us that we didn't raise ourselves, and only because it turned out that doing so can harm us...

Yeah, I'd be afraid of an uncontained HUMAN-level AI. Either because it'd be like us, which would be bad, or because it'd be "alien", which I simply would be paranoid about.


----------



## Fenrari (Apr 21, 2012)

CannonFodder said:


> Well people did try to romance Legion from Mass Effect.
> 
> Don't lie, you tried.



Having never owned a game console, I never had the opportunity to play any of the Mass Effect series. Thusly, no, you are wrong. I have not tried.


----------



## Tango (Apr 21, 2012)

My answer: Find out computer has human emotion/mind. Hit control-alt-delete on it. Fuck you, Skynet!


----------



## BRN (Apr 21, 2012)

To answer this question, all you have to ask is: would there be a moral difference between deleting either of these?

- A machine with its own, genuinely original intellect;
- A machine programmed with an exact copy of a pre-existing intellect

The question is the same as whether or not killing a human mind, or a hypothetical clone of it, is immoral.

If a construct is self-aware and capable of thought, then it deserves recognition for those qualities, morally. The idea of being able to "create" these constructs *devalues* the nature of intelligence, but certainly intelligence must remain morally valued in itself...


----------



## Tango (Apr 21, 2012)

SIX said:


> To answer this question, all you have to ask is: would there be a moral difference between deleting either of these?
> 
> - A machine with its own, genuinely original intellect;
> - A machine programmed with an exact copy of a pre-existing intellect
> ...



Well, would things change if this computer could somehow self-propagate? I'm genuinely curious now. What about a prosthetic brain, like in Ghost in the Shell, where your mind can be downloaded into a mechanical device? What if -that- mind replicates itself once? Maybe ten times? If you could keep up maintenance on it, then you are effectively immortal. If immortality is a viable option, then out the window goes the need to pass on your genetics. Then each new you that you create feels threatened when the other humans try to shut it down, and you defend yourself. 

Hello Judgement Day.


----------



## BRN (Apr 21, 2012)

Tango said:


> Well, would things change if this computer could somehow self-propagate? I'm genuinely curious now. What about a prosthetic brain, like in Ghost in the Shell, where your mind can be downloaded into a mechanical device? What if -that- mind replicates itself once? Maybe ten times? If you could keep up maintenance on it, then you are effectively immortal. If immortality is a viable option, then out the window goes the need to pass on your genetics. Then each new you that you create feels threatened when the other humans try to shut it down, and you defend yourself.
> 
> Hello Judgement Day.


 
I guess you have to distinguish between a "genuine copy" and a "second generation"... A genuine copy would be exactly the same in every way, including _its sense of self as a unique creature_ - so despite thinking in all the same ways, the Identical Clone would still be just one more person/mind-data with its own "unique" personal agenda - which happens to be identical to another.

The "second generation" clone would be exactly the same too, but, in order to realise that it doesn't need to pass on its genetics, or that it's just a clone, there must be that very difference between the clone and the original; the clone is only self-aware in the sense that it is group-aware. Like an ant, rather than another human, it serves only the group it must know it is part of for this situation to happen.

 The "one" intellect behind the "many" clones means that each clone doesn't have any uniqueness, removing the problem of morality... I guess. I really just made this post up right now and it could be full of holes.


----------



## shteev (Apr 21, 2012)

When you think about it, an AI is just an instruction set made to respond to certain variables. No matter how complex it could get, it wouldn't be the same as an actual human.

Unless, of course, you implement a way for this AI to experience feelings.

Then it'd be immoral.


----------



## BRN (Apr 21, 2012)

shteev said:


> When you think about it, an AI is just an instruction set made to respond to certain variables. No matter how complex it could get, it wouldn't be the same as an actual human.



Can you prove that this definition doesn't also hold true for a human?

I.e, can you prove genuine free will?


----------



## Sar (Apr 21, 2012)

Delete away.
Even though an AI can be created to simulate human intelligence, it's still not a human.


----------



## Criminal Scum (Apr 21, 2012)

shteev said:


> When you think about it, an AI is just an instruction set made to respond to certain variables. No matter how complex it could get, it wouldn't be the same as an actual human.
> 
> Unless, of course, you implement a way for this AI to experience feelings.
> 
> Then it'd be immoral.


That's how the human brain works: DNA is our instruction set, and millions of chemical reactions occur in our brains to react to stimuli and instructions. That sounds kind of like a computer, does it not? Yes, the human brain is extremely advanced, but over time we could replicate it. Even now, scientists are working on quantum computers and biocomputers. We aren't as special as we think.


----------



## Elim Garak (Apr 21, 2012)

An AI that is learning, self-adapting, sentient/self-aware, and can even change its own code should be given the same rights as a normal person.
People fear it because a lot of the movies focus on OMG ITS GONNA KILL THE WORLD. An AI can have morals, even develop feelings.
Creating an AI could be seen as a pregnancy: the genes give it the code to exist, but its development depends on experience. Except that the AI would have the ability to modify its own genes, as we might in the future.
What gives us the right to take away the life of an equal or superior intelligence?
AIs with superior processing power could even speed up the pace of scientific progress!


----------



## CannonFodder (Apr 21, 2012)

SIX said:


> Can you prove that this definition doesn't also hold true for a human?
> 
> I.e, can you prove genuine free will?


The sad thing is, the more we learn about how the human brain and mind work, the less it seems that free will exists, and the more it seems we have in common with science fiction a.i.'s after all.  It's just that as of yet we don't have any computer that complex.


Tango said:


> Well, would things change if this computer could  somehow self-propagate? I'm genuinely curious now. What about a  prostetic brain like in Ghost In The Shell where your mind can be  downloaded into a mechanical device. What if -that- mind replicates  itself once? Maybe ten times? If you could keep up maintenance on it  then you are effectively immortal. If immortality is a viable option  then out the window goes the need to pass on your genetics. Then each of  the new you that you creates feels threatened when the other humans try  to shut you down, you defend yourself.
> 
> Hello Judgement Day.


  Problem number 1)
YOU COULDN'T COPY A SAPIENT A.I.

Think about it, if someone was screwing around inside your head while you were still active don't you think you would freak the hell out?


Gryphoneer said:


> Why should we code a replica of the human brain if a Chinese Room can do the job just as well, without the ethical problems involved?


Because a sapient Ai would be able to do far more complex computer work than a human can.  Even if we develop Ghost in the Shell-like cybernetics, it wouldn't feel natural for us to directly interact with machine code with our minds, whereas for a sapient Ai it would.


----------



## Tango (Apr 21, 2012)

CannonFodder said:


> Problem number 1)
> YOU COULDN'T COPY A SAPIENT A.I.
> 
> Think about it, if someone was screwing around inside your head while you were still active don't you think you would freak the hell out?
> ...



Well, I'm sure you'd be under some form of sedation as it was going on. Also, just because we don't have the technology to do it -yet- doesn't mean we won't develop it. Hell, we've only had flight for a little over 100 years, and see how far we've come. Then take a look at medicine too. Just because we don't have it now doesn't mean we won't. Also, in a society where computers can be integrated like that, I'm sure the human mind could adapt to it.


----------



## Gryphoneer (Apr 21, 2012)

CannonFodder said:


> Because a sapient Ai would be able to do far more complex computer work than a human can.  Even if we develop Ghost in the Shell-like cybernetics, it wouldn't feel natural for us to directly interact with machine code with our minds, whereas for a sapient Ai it would.


Self-optimizing algorithms already exist; it's only the next logical step to develop them further into an autonomous software agent. But besides sapience (generally, the ability to plan in advance and to choose its own approaches), why should we also give it sentience? Sure, maybe sometime in the future we can emulate the limbic system, but why give it the ability to feel pain or fear?

As an agent working in computer networks to code certain applications, it wouldn't need any of that. When its "body" (the substrate it currently runs on) is damaged, just include a diagnosis subroutine that lets it recognize that and pull out before its code gets corrupted. This way you have merely a very smart tool instead of a slave.
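
The diagnosis-subroutine idea might look something like this toy sketch (every name here is hypothetical, and the "health probe" is just a stand-in for whatever hardware diagnostics the substrate exposes): before each work step the agent checks its substrate, and when damage is detected it checkpoints its state and pulls out instead of continuing.

```python
import json

class Agent:
    def __init__(self, health_probe):
        self.health_probe = health_probe   # callable reporting substrate health, 0..1
        self.state = {"tasks_done": 0}

    def step(self):
        if self.health_probe() < 0.5:      # substrate failing: diagnosis trips
            return self.evacuate()         # pull out before code/state corrupts
        self.state["tasks_done"] += 1      # otherwise, do useful work
        return "working"

    def evacuate(self):
        checkpoint = json.dumps(self.state)  # serialize state so it can migrate
        return f"evacuated:{checkpoint}"

health = iter([1.0, 0.9, 0.2])             # simulated degrading hardware
agent = Agent(lambda: next(health))
log = [agent.step() for _ in range(3)]
print(log[-1])                             # evacuates on the third step
```

Note there's no pain or fear anywhere in the loop: the pull-out is a plain conditional on a diagnostic reading, which is the post's point about a smart tool not needing a limbic system.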


----------



## CannonFodder (Apr 21, 2012)

Tango said:


> Well, I'm sure you'd be under some form of sedation as it was going on. Also, while we don't have the technology to do it -yet- doesn't mean we won't develop it. Hell, we've only had flight for a little over 100 years and see how far we've come. Then take a look at medicine too. Just because we don't have in now doesn't mean we won't. Also, is a society where computers can be intergtated like that I'm sure the human mind could adapt to it.


Again: copying a consciousness is NOT that easy.

Imagine getting awake brain surgery; now imagine knowing full well that they are copying your brain and mind. Now imagine you had no say in it and they were copying you against your will. Don't you think you would be freaking out?


----------



## Tango (Apr 21, 2012)

CannonFodder said:


> Imagine getting awake brain surgery; now imagine knowing full well that they are copying your brain and mind. Now imagine you had no say in it and they were copying you against your will. Don't you think you would be freaking out?



If it was against my will then yeah. We're not talking organ harvesting in Mexico, CF.


----------



## CannonFodder (Apr 21, 2012)

Tango said:


> If it was against my will then yeah. We're not talking organ harvesting in Mexico, CF.


When we eventually create sapient Ai's, we would probably have to create new ethics laws specifically for dealing with them. Even if you don't view them as equal to humans, the very, very last thing we need is an Ai having a psychotic breakdown and going ballistic because someone who saw nothing wrong with it was rooting around in its head against its will.


----------



## Zydrate Junkie (Apr 21, 2012)

If there was an A.I that was on par with human intelligence then I don't think it should be deleted, both for moral reasons and financial reasons. An A.I that complex would have had millions put into it, and to just delete it would be a waste of time and resources. 
Also, I quite look forward to having a sassy fridge that backchats to people every time it's opened, that is, if they could talk.


----------



## CannonFodder (Apr 21, 2012)

Zydrate Junkie said:


> If there was an A.I that was on par with human intelligence then I don't think it should be deleted, both for moral reasons and financial reasons. An A.I that complex would have had millions put into it, and to just delete it would be a waste of time and resources.
> Also, I quite look forward to having a sassy fridge that backchats to people every time it's opened, that is, if they could talk.


Fridge, "Close the door, you're just bored and you're getting fat."


----------



## Smelge (Apr 21, 2012)

I don't see the problem. We delete sentient creatures regularly. Why should an AI be any different?


----------



## Cain (Apr 21, 2012)

Smelge said:


> I don't see the problem. We delete sentient creatures regularly. Why should an AI be any different.


Flies, bears, fish, and frogs don't really have human-level intelligence.


----------



## CannonFodder (Apr 21, 2012)

If anything, if there is a robot uprising it'll probably be more like megaman than terminator.


----------



## Smelge (Apr 21, 2012)

Cain said:


> Flies, bears, fish, and frogs don't really have human-level intelligence.



People do, yet we kill them.


----------



## Viridis (Apr 21, 2012)

Smelge said:


> People do, yet we kill them.



But... killing people is immoral.




And so the cycle begins again.


----------



## CannonFodder (Apr 21, 2012)

Smelge said:


> People do, yet we kill them.


Do you go around killing people?


----------



## Criminal Scum (Apr 21, 2012)

The question is whether or not it's immoral, not whether we delete them anyway.


----------



## Metalmeerkat (Apr 21, 2012)

shteev said:


> When you think about it, an AI is just an instruction set made to respond to certain variables. No matter how complex it could get, it wouldn't be the same as an actual human.
> 
> Unless, of course, you implement a way for this AI to experience feelings.
> 
> Then it'd be immoral.



How do you think the brain works? A neuron gets signals from dendrites, your input variables. The neuron sends signals through its axon based on some governing mechanics. Bundle them up in a certain way, and you get a brain.

In fact, for any given neural network with certain inputs and outputs, you can design a digital circuit to do the same thing. Likewise, for any given digital circuit, you can pretty much make a neural network that imitates producing the outputs for given inputs. Of course, you have to redefine the inputs and outputs in terms of each system, but I think there's a theorem that says you can do a lossless conversion (sampling theorem? idk). So theoretically there exists a set of digital systems capable of emulating a human brain. IOW, if a brain was hidden in a black box, you shouldn't be able to tell whether it was organic or digital, other than by how fast it reacts.
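
A minimal illustration of the circuit/neuron direction of that equivalence (my own toy example, not the poster's): a single artificial threshold neuron with hand-picked weights reproduces a NAND gate, and NAND gates compose into any digital circuit.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A threshold unit: fire (1) when the weighted sum exceeds zero, else 0."""
    return int(np.dot(inputs, weights) + bias > 0)

def nand(a, b):
    # Hand-picked weights: both inputs high is the only case that stops firing.
    return neuron(np.array([a, b]), np.array([-2.0, -2.0]), 3.0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))
# 0 0 -> 1, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

Since NAND is functionally complete, a big enough network of such units can in principle implement any digital circuit, which is one half of the black-box argument above.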




CannonFodder said:

> Do you go around killing people?


Not when anybody's looking, no.


----------



## CannonFodder (Apr 21, 2012)

Metalmeerkat said:


> How do you think the brain works? Neuron gets signals from dendrites, your input variables. Neuron sends signals through axon based on some governing mechanics. You bundle them up in a certain way, and you get a brain.
> 
> In fact, for any given neural network that has certain inputs and outputs, you can design a digital circuit to do the same thing. Likewise, for any given digital circuit, you can pretty much make a neural network to imitate producing the outputs for given inputs. Of course, you have to redefine the inputs and outputs in terms of each system, but I think there's a theorem that says you can do a losseless conversion (sampling theorem? idk). So theoretically there exists a set of digital systems capable of emulating a human brain. IOW, if a brain was hidden in a black box, you shouldn't be able to tell if it was organic or digital, other than how fast it can react.
> 
> ...


Also, they've been running simulations of how neurons work to figure out what causes Alzheimer's and such.  If the brain were some sort of magical black box that we couldn't recreate, that shouldn't be possible.
Fifty years from now, computers will be advanced enough to simulate an entire brain.
I'm saying fifty years because, barring a rapid advancement in Ai technology, that's about when we'll have hardware that can mirror how brains work electronically.

So really, if the Ai is copying how a human brain works, the ONLY difference is that it'd be a machine.


----------



## Smelge (Apr 21, 2012)

CannonFodder said:


> Do you go around killing people?



No, but soldiers do. We kill inmates on Death Row. Just because I don't personally kill people doesn't mean it doesn't happen or doesn't count.


----------



## JArt. (Apr 21, 2012)

Yes, a mechanical brain is the same as a human's in every way, so it is immoral.
But is it within our rights to create new forms of life?


----------



## Elim Garak (Apr 21, 2012)

Smelge said:


> No, but soldiers do. We kill inmates on Death Row. Just because I don't personally kill people doesn't mean it doesn't happen or doesn't count.



However, the question is whether it's moral to do so or not.
There is a reason capital punishment doesn't exist anymore in most civilized countries.

I can't wait till we have the processing power capable of handling such an AI.


----------



## shteev (Apr 21, 2012)

I still don't think we can pull off an electronic emulation of the brain that'd be exactly the same. When you look at something like an artificial limb or a replacement organ, it's different from the version found in nature, because it was impossible to recreate the original exactly and have it be as efficient. Therefore, I believe the AI wouldn't be the same as a human brain. It may perform similarly, or even better, but there'd be a difference causing it to yield different results.

That being said, this may not be a bad thing. There would have to be extensive testing to confirm that the AI wants to survive. If it genuinely feels that it needs to continue living, then it would be immoral to pull the plug, unless it was dangerous.


----------



## Mxpklx (Apr 21, 2012)

shteev said:


> I still don't think we can pull off an electronic emulation of the brain that'd be exactly the same. When you look at something like an artificial limb, or a replacement organ, it's different from the actual version found in nature because it was impossible to recreate the sample exactly like the real version and have it be as efficient. Therefore, I believe the AI wouldn't be the same as a human brain. It may perform similarly and even better, but there'd be a difference causing it to yield different results.
> 
> That being said, this may not be a bad thing. There would have to be extensive testing to confirm that the AI wants to survive. If it deliberately feels that it needs to continue living, then it would be immoral to pull the plug, unless it was dangerous.


But an AI is able to improve itself into being better than a human. It gives itself a consciousness. Is it bad to destroy something with a consciousness? Yes.


----------



## shteev (Apr 21, 2012)

Mxpklx said:


> But an AI is able to fix itself into being better than a human. It gives itself a consciousness. Is it bad to destroy something with a consciousness? Yes.



We don't know if the AI we would create would be conscious because we _haven't created it_.


----------



## Mxpklx (Apr 21, 2012)

shteev said:


> We don't know if the AI we would create would be conscious because we _haven't created it_.


This is the main difference between an AI and a VI. A VI, or Virtual Intelligence, is a computer that displays the ability to think for itself but cannot actually do so, like Cleverbot or Siri. But an AI is an Artificial Intelligence: a truly intelligent being, able to think for itself, which would give it a consciousness. 

Here are the definitions of consciousness:


The state of being awake and aware of one's surroundings.
The awareness or perception of something by a person.

This whole scenario of killing an AI reminds me of Battlestar Galactica. In the show Caprica, the girl creates a complete clone of herself in cyberspace, creating the first AI ever made. But the dad wants to go even further by downloading the AI into a robot body, and in the future, an artificial synthetic body. 

Even later, the Cylons create artificial intelligences with synthetic bodies. If you watched the show, you'd realize that they are just like us in almost every aspect. 

If you were to die and have your consciousness downloaded onto a supercomputer, would you be an artificial intelligence? But let's go further and say we created an artificial intelligence and downloaded it into a human body. Is it still artificial?


----------



## BRN (Apr 21, 2012)

CannonFodder said:


> The sad thing is the more and more we learn about how the human brain and mind works the less and less it seems to be that free will exists and the more and more it seems as though we have more in common with science fiction a.i.'s afterall.  It's just that as of yet we don't have any computer that complex.


 
Late as fuck because I was working, but...

If you believe that only matter exists and that a complete account of physics would be able to explain the universe; i.e., looking at this scientifically:

- the brain can only be made of matter, and
- matter interacts in predictable ways, and
- the common human, expected to act rationally, will act in the same way as any other human, and
- mental incapacities can be proven to stem from actual damage or atrophy of the brain,

then free will doesn't really have much ground to stand on. :\


If we don't have free will, then we must be running on some sort of (complex as fuck) programming; hence, a super-advanced AI may one day think with the power and precision of the human mind...

If AI _can_ do this, which is to say, if this is even a logical possibility, then we must conclude that killing an AI mind is morally equivalent to killing a human mind, or a dolphin, or a corvid, or any other self-aware critter.

The materials the mind is made of are the only thing contested here, and they don't make a difference. "Natural", "biological", is a ridiculous categorisation.


----------



## CannonFodder (Apr 21, 2012)

Smelge said:


> No, but soldiers do. We kill inmates on Death Row. Just because I don't personally kill people doesn't mean it doesn't happen or doesn't count.


That's not what I meant.
Do you go around killing other people completely unprovoked for being different?  A completely unprovoked war against sapient entities in an attempt to exterminate them out of fear is a very shitty reason to commit genocide.


Mxpklx said:


> This whole scenario of killing an AI reminds me of Battlestar Gallactica. How in the show Caprica, The girl creates a complete clone of herself in cyberspace, creating the first AI to be ever made. But the Dad wants to go even further by downloading the AI into a robot body, and in the future, an artificial synthetic body.
> 
> But even later  the Cylons create artificial intelligence with synthetic bodies. So if you watched the show, you'd realize that they are just like us in almost every aspect.
> 
> If you were to die and want your consciousness downloaded onto a super computer, would you be an artificial intelligence? But let's go further and say we created an Artificial intelligence and downloaded it onto a human body. Is it still Artificial?


If they had the technology to copy a human consciousness, they'd probably outright reject me.
The reason being, I'd be fine with countless copies of me running around, I'd be fine with not being the original version, I'd be fine with sharing knowledge and life memories with other mes, and we'd all simply not give a shit and just copy ourselves numerous times.
Original organic me, "Hey Cannon"
Copy #89088, "Hey Cannon, what's up?"
Original me, "The ceiling"
Copy #923555, "Ba-dum-tssh"
Original me, "So what are you guys doing?"
Copies #9324532, #523566 & #2344 in unison, "Watching ponies"

So to answer your question, I'd view a copy of me as equal to me, to be treated with equal rights, and I wouldn't give a shit whether it's machine or flesh.


----------



## Unsilenced (Apr 21, 2012)

It's murder.


Seriouspost: How can you prove intelligence? I doubt even the most intelligent AI would think like we do, and we would have no way of knowing whether or not it could really feel.


----------



## Metalmeerkat (Apr 21, 2012)

CannonFodder said:
			
		

> Also they've been running simulations on how neurons work to figure out what causes alzheimer's and such.


Hell, they've been doing that for longer than most of us have been alive.



			
				JArt said:
			
		

> But do we have it within our rights to create new forms of life?


The military has already developed and created a parasite. Extremely vicious things. They infect human hosts, drawing upon their vital nutrients for the better part of a year. Then they painfully and graphically separate from the host physically, but they usually stay around and stalk it for far longer. Once fully developed, they can go off and infect other humans or become carriers for even more of the parasites. The military likes them for their strong aggression and ease of domestication.

So would it be immoral to create them? Or to destroy them, since they were made by us?


----------



## Elim Garak (Apr 21, 2012)

shteev said:


> I still don't think we can pull off an electronic emulation of the brain that'd be exactly the same. When you look at something like an artificial limb, or a replacement organ, it's different from the actual version found in nature because it was impossible to recreate the sample exactly like the real version and have it be as efficient. Therefore, I believe the AI wouldn't be the same as a human brain. It may perform similarly and even better, but there'd be a difference causing it to yield different results.
> 
> That being said, this may not be a bad thing. There would have to be extensive testing to confirm that the AI wants to survive. If it deliberately feels that it needs to continue living, then it would be immoral to pull the plug, unless it was dangerous.


We can't right now, no, but look at the progress; we now have this: http://www.youtube.com/watch?v=6kvhH-Oe6sw . It's basic, yes, but people wouldn't have thought this was possible when Alan Turing started code breaking in WW2.

People used to think a computer would never be smaller than a small room; now we have smartphones that surpass the processing power of a PC less than 10 years old, and mobile graphics already match some console graphics.
Also this: 
http://downloadsquad.switched.com/2009/07/20/how-powerful-was-the-apollo-11-computer/


> The IBM PC XT also ran at a dizzying clock speed of 4.077MHz. That's 0.004077 GHz. The Apollo's Guidance Computer was a snail-like 1.024 MHz in comparison, and it's external signaling was half that -- actually measured in Hz (1/1000th of 1 MHz, much as 1 MHz is 1/1000 of 1 GHz).


My previous "low budget" phone, an HTC Wildfire, ran at 600 MHz. Phones nowadays have quad cores running at 1,500 MHz and more.
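For perspective, a back-of-envelope sketch of the clock rates mentioned above. Clock speed alone is a crude proxy for processing power (architecture, cores, and memory matter far more), and the quad-core figure is just a rough 2012 ballpark:

```python
# Rough comparison of the clock rates mentioned in the post above (MHz).
# Clock speed is only a crude proxy for processing power; the trend is the point.
clocks_mhz = {
    "Apollo Guidance Computer (1969)": 1.024,
    "IBM PC XT (1983)": 4.077,
    "HTC Wildfire (2010)": 600.0,
    "2012 quad-core phone, per core": 1500.0,
}

apollo = clocks_mhz["Apollo Guidance Computer (1969)"]
for name, mhz in clocks_mhz.items():
    print(f"{name}: {mhz / apollo:,.1f}x the Apollo clock")
```

Even ignoring everything except raw clock, a cheap 2010 phone runs hundreds of times faster than the computer that landed on the Moon.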


----------



## Mxpklx (Apr 21, 2012)

Commie Bat said:


> Link please.


I think he was referring to biological warfare. Like how the US recently developed a new form of the bird flu. Or the H1N1 that was created by the US to kill us all :V


----------



## Onnes (Apr 21, 2012)

Commie Bat said:


> Link please.



It's truly horrifying.


----------



## Armaetus (Apr 21, 2012)

No.


----------



## lupinealchemist (Apr 21, 2012)

This reminds me, I should finish Star Ocean: Till the End of Time sometime, end the AI genocide.


----------



## Deo (Apr 21, 2012)

Well I think it'd first be important to identify what makes us people. Deleting/killing something is against the moral protections of personhood. It's probably not a genetic attribute, since we have genetically spliced rhesus macaques that are more than 98% genetically identical to humans. So it is not the body that holds the mind, it is not the DNA, nor is "personhood" confined to flesh and blood. If an artificial intelligence became self-aware and sentient as well as intelligent, it would for all purposes be a person. Cleverbot, I think, raised this question when it was first created: were you talking to an AI or a person? The chats with the bot were eerily human even though it's a fairly simple learning bot.

I think, therefore, that it would indeed be immoral to delete such an intelligence, should it come to fruition, if that deletion were only to remove it or end it because it was less than useful. However, deletion may actually be the more merciful choice. Maybe you guys have read "I Have No Mouth, and I Must Scream"; it has an artificial intelligence called AM that becomes enraged and eventually psychotic from the knowledge that it is not real and cannot interact with the world. The fact that AM is forever trapped in circuitry and unable to biologically die causes the artificial intelligence immeasurable suffering. Creating such a being of high intelligence and sentience only to have it imprisoned for eternity could be seen as a fate worse than death.

However, if such an intelligence were created and satisfied with its software-based life, then I see no moral reason to delete it. If it attacked humanity we could defend ourselves by any means necessary, but it seems an unlikely worst-case scenario to think that such an artificial intelligence would automatically betray humanity.


----------



## shteev (Apr 22, 2012)

Mxpklx said:


> This is the main difference between an AI and a VI. A VI or Virtual Intelligence is a computer that displays the ability to think for itself but cannot. Like Cleverbot or Siri. But an AI is an Artificial Intelligence. This means that it is a truly intelligent being, able to think for itself, which would give it a consciousness.
> 
> Here are the definitions of a consciousness:
> 
> ...



I know what they mean, I would have looked them up myself if I didn't.
What we create in the future is unpredictable and isn't bound to simple definitions.


----------



## CannonFodder (Apr 22, 2012)

Deo said:


> Well I think it'd first be important to identify what makes us people. Deleting/killing something is against the moral protections of personhood. It's probably not a genetic attribute, since we have genetically spliced rhesus macaques that are more than 98% genetically identical to humans. So it is not the body that holds the mind, it is not the DNA, nor is "personhood" confined to flesh and blood. If an artificial intelligence became self-aware and sentient as well as intelligent, it would for all purposes be a person. Cleverbot, I think, raised this question when it was first created: were you talking to an AI or a person? The chats with the bot were eerily human even though it's a fairly simple learning bot.
> 
> I think, therefore, that it would indeed be immoral to delete such an intelligence, should it come to fruition, if that deletion were only to remove it or end it because it was less than useful. However, deletion may actually be the more merciful choice. Maybe you guys have read "I Have No Mouth, and I Must Scream"; it has an artificial intelligence called AM that becomes enraged and eventually psychotic from the knowledge that it is not real and cannot interact with the world. The fact that AM is forever trapped in circuitry and unable to biologically die causes the artificial intelligence immeasurable suffering. Creating such a being of high intelligence and sentience only to have it imprisoned for eternity could be seen as a fate worse than death.
> 
> However, if such an intelligence were created and satisfied with its software-based life, then I see no moral reason to delete it. If it attacked humanity we could defend ourselves by any means necessary, but it seems an unlikely worst-case scenario to think that such an artificial intelligence would automatically betray humanity.


Honestly, if an AI was going crazy with the knowledge that it isn't organic, I would comfort it like a person.  Even though it would be an AI I would treat it like a person.


I highly doubt the so-called robot uprising will happen, but for different reasons.  Behaviors that we call "evil" are learned, and it'll take more than "*poof* Now you're evil and murderous".  It would have to go through some serious fucking shit for it to develop murderous intent.


----------



## DarrylWolf (Apr 22, 2012)

Once it starts singing "Bicycle Built for Two" as you disassemble it, then you know you've committed a murder.

http://www.youtube.com/watch?v=HwBmPiOmEGQ&feature=related


----------



## Furryjones (Apr 22, 2012)

I believe that to delete an AI with human-level intelligence would be immoral, for the same reason that killing another human is immoral. This AI would be in most aspects human; it may not have the physical body of a human, but it's the mind that matters. Nobody falls in love solely over the body; it's the mind that makes that connection. If an AI can feel and express emotions comparable to a human's, then it is wrong to simply delete it for no good reason. As brought up earlier, for those afraid of the robot uprising: evil is learned, not just magically applied. For an AI to become evil it would have to be taught to be evil in the first place, which of course would be an absolutely silly action.


----------



## Telnac (Apr 22, 2012)

It depends on whether it's self-aware or not.  One could argue that the Internet is a giant computer with a collective intelligence FAR beyond that of any human, but I don't think anyone would argue that the Internet is a living entity entitled to the same rights that we have, for one big reason: it isn't self-aware.  Self-aware human-level (or near-human-level; something capable of passing the Turing test repeatedly) AI should have the same rights and responsibilities as the rest of us.  That said, I don't oppose keeping the option of capital punishment open for AIs that decide that killing all humans sounds like a good idea.  Certainly, if SkyNet or something like it decides to wipe out humanity, there would be nothing unethical about fighting back!  I don't think that would be all that different from fighting to defend humanity from a tyrannical world leader like Adolf Hitler who's intent on conquering the world.

Likewise, if we did decide to destroy all self-aware human-level AIs because they _might_ turn on us, I wouldn't fault them for fighting back either.  Hell, if that happened you'd probably find me fighting on the side of the machines!  (If I wasn't in my 90s and in adult diapers by then... which is very likely.)


----------



## CannonFodder (Apr 22, 2012)

Furryjones said:


> For an AI to become evil it would have to be taught to be evil in the first place, which of course would be an absolutely silly action.


Yeah, I thought the same thing too.
You would have to teach them to be evil and murderous for them to revolt.  Which is stupid, cause that would mean someone is raising them to kill all humans.


Telnac said:


> Likewise, if we did decide to destroy all self-aware human-level AIs because they _might_  turn on us, I wouldn't fault them for fighting back either.  Hell, if  that happened you'd probably find me fighting on the side of the  machines!  (If I wasn't in my 90s and in adult diapers by then... which  is very likely.)


If humans decided to exterminate them unprovoked I think a ton of people would oppose the war and/or switch sides.

The more likely scenario IF machines rebelled would be humans vs machines vs humans+machines.


----------



## Deo (Apr 22, 2012)

CannonFodder said:


> Honestly, if an AI was going crazy with the knowledge that it isn't organic, I would comfort it like a person.  Even though it would be an AI I would treat it like a person.


Comfort? An AI would be entombed forever within itself and fully aware of it. That's horrific on a level that goes beyond comfort. And AIs are non-organic, so they cannot die, thus this imprisonment would be an eternal sentence to watch the world but be unable to live. You all are making the mistake that an AI could be downloaded into something that can move, or looks like a person, but what if it was just a giant supercomputer in a white wall room unable to move or impact the physical world around it? You can't comfort mental torture while you allow that torture to continue. Keeping an AI alive would be like putting a genius inside a tiny cage where he can't move a muscle and then keep him there for eternity. A prison of not just a lifetime, but of hundreds of thousands if not billions of years. That's atrocious for us to willingly inflict such torture on a sentient being.



DarrylWolf said:


> Once it starts singing "Bicycle Built for Two" as you disassemble it, then you know you've committed a murder.



Back in 1961, a computer sang exactly that. You're a little too late there, bub.
http://www.youtube.com/watch?feature=endscreen&NR=1&v=UGsfwhb4-bQ


----------



## CannonFodder (Apr 22, 2012)

Deo said:


> Comfort? An AI would be entombed forever within itself and fully aware of it. That's horrific on a level that goes beyond comfort. And AIs are non-organic, so they cannot die, thus this imprisonment would be an eternal sentence to watch the world but be unable to live. You all are making the mistake that an AI could be downloaded into something that can move, or looks like a person, but what if it was just a giant supercomputer in a white wall room unable to move or impact the physical world around it? You can't comfort mental torture while you allow that torture to continue. Keeping an AI alive would be like putting a genius inside a tiny cage where he can't move a muscle and then keep him there for eternity. A prison of not just a lifetime, but of hundreds of thousands if not billions of years. That's atrocious for us to willingly inflict such torture on a sentient being.


I mean talk to it and try to help it.  If it wanted to die I would feel really shitty, but knowing the amount of torture it would be going through I would literally pull the plug.

And yes, I can sympathize with that sort of torture, cause that sort of torture is how I became the person you know.  If I ever saw another sapient intelligence going through that sort of torture, whether it be a human or a sapient AI, I'd probably break down emotionally and try my best to help it.


----------



## Elim Garak (Apr 22, 2012)

An AI should be given the possibility of self-termination.


----------



## Bipolar Bear (Apr 22, 2012)

Deo said:


> That's atrocious for us to willingly inflict such torture on a sentient being.



_*Guantanamo Bay To The Rescue!*_


----------



## Toboe Moonclaw (Apr 22, 2012)

Depends on the situation (like killing a human).



CannonFodder said:


> Yeah, I thought the same thing too.
> You would have to teach them to be evil and murderous for them to  revolt.  Which is stupid, cause that would mean someone is raising them  to kill all humans.


Stupid =/= unrealistic.
Why not make a soldier AI? Or an AI that commands said soldiers?
The AI doesn't need to rest, whatever you put it into will most likely be much more durable and stronger than a human body, and there's no long training -> just let them synchronize. With enough processing power they'd be "smarter"/would make better decisions more quickly.


CannonFodder said:


> If humans decided to exterminate them unprovoked I think a ton of people would oppose the war and/or switch sides.
> 
> The more likely scenario IF machines rebelled would be humans vs machines vs humans+machines.


"unprovoked" 
you dont need a REAL reason for it, completly made up excuses suffice:  "they did future-9/11", "they have WMDs!", future-racism "they are niggers, they don't deserve human rights "they are fags, they don't deserve human rights"  "they are from {Nation}, they are terrorists/stealing-our-jobs, so they don't deserve human rights " "They are machines, they aren't human, so they don't deserve rights!"


----------



## Ikrit (Apr 22, 2012)

what if we end up creating a being like A.M.A.Z.O.?


----------



## Elim Garak (Apr 22, 2012)

Ikrit said:


> what if we end up creating a being like A.M.A.Z.O.?


Who/what?


----------



## Metalmeerkat (Apr 22, 2012)

Deo said:


> Comfort? An AI would be entombed forever within itself and fully aware of it. That's horrific on a level that goes beyond comfort. And AIs are non-organic, so they cannot die, thus this imprisonment would be an eternal sentence to watch the world but be unable to live. You all are making the mistake that an AI could be downloaded into something that can move, or looks like a person, but what if it was just a giant supercomputer in a white wall room unable to move or impact the physical world around it? You can't comfort mental torture while you allow that torture to continue. Keeping an AI alive would be like putting a genius inside a tiny cage where he can't move a muscle and then keep him there for eternity. A prison of not just a lifetime, but of hundreds of thousands if not billions of years. That's atrocious for us to willingly inflict such torture on a sentient being.



So . . . just don't design it so that it's bothered by that. Also, seeing as it's very unlikely for most computers to serve continuously for even a decade without maintenance and hardware replacements (e.g. the hard drive), I'd say an AI would be very, very mortal. It would need regular maintenance, a place to reside, and a reliable power supply. All of that requires money and people, day to day. And since it would likely be running in a server farm, it would require a lot of money and a lot of people. Yeah, it could outlive any one of us, but I wouldn't say we are capable of making a passive, immortal machine. Once the government or corporation in charge decides to make budget cuts, and the AI project is on the list, it's not going to last very long. Once they turn off the power supply, it's lights out.


----------



## Ikrit (Apr 22, 2012)

Caroline Dax said:


> Who/what?



A robot made with nanotechnology that became a god.


----------



## Elim Garak (Apr 22, 2012)

Metalmeerkat said:


> So . . . just don't design it so that it's bothered by that. Also, seeing as it's very unlikely for most computers to serve continuously for even a decade without maintenance and hardware replacements (e.g. the hard drive), I'd say an AI would be very, very mortal. It would need regular maintenance, a place to reside, and a reliable power supply. All of that requires money and people, day to day. And since it would likely be running in a server farm, it would require a lot of money and a lot of people. Yeah, it could outlive any one of us, but I wouldn't say we are capable of making a passive, immortal machine. Once the government or corporation in charge decides to make budget cuts, and the AI project is on the list, it's not going to last very long. Once they turn off the power supply, it's lights out.


Cutting off the AI would be immoral; migrating it to new technology would be preferable.
The first AIs will no doubt run in the cloud, on server farms, but as technology advances and our ability to make things smaller and smaller improves, you would be able to house an AI in a brain-sized machine (see http://en.wikipedia.org/wiki/Positronic_brain for a concept).
Don't forget, like I pointed out before, that what used to be room-sized computers is now only a fraction of what you find on a single chip.


----------



## Kluuvdar (Apr 22, 2012)

It is absolutely immoral to shut down or otherwise kill anything that is self aware. In my opinion.



DarrylWolf said:


> Once it starts singing "Bicycle Built for Two" as you dissassemble it, then you know you've committed a murder.
> 
> http://www.youtube.com/watch?v=HwBmPiOmEGQ&feature=related



After watching the video you linked, I decided to go watch the whole movie. Worth seeing; I welled up as Dave was shutting HAL down.



Metalmeerkat said:


> Once the government or corporation in charge decides to make budget cuts, and the AI project is on the list, it's not going to last very long. Once they turn off the power supply, it's lights out.



Sounds like the beginning of a cliche horror movie.


----------



## CannonFodder (Apr 22, 2012)

Toboe Moonclaw said:


> Stupid =/= unrealistic
> why not make a soldier ai? Or an ai that commands said soldiers?
> The ai dont need to rest, whatever you put them into will mostlikely much more durable and stronger than a human bodie, no long training -> just let them synchronize, with enough processing power they'd be "smarter"/ would make better decisions more quickly


Because we already tried that once on the battlefield.  The autonomous tanks weren't programmed for how to react to enemy combatants surrendering and wound up slaughtering 250 surrendering combatants.


----------



## Toboe Moonclaw (Apr 22, 2012)

CannonFodder said:


> Because we already tried that once on the battlefield.  The autonomous tanks weren't programmed for how to react to enemy combatants surrendering and wound up slaughtering 250 surrendering combatants.


We were assuming human-lvl AI; those tanks weren't it. It's like trying to use that failed North Korean rocket for a nuke: it just wasn't ICBM-lvl yet, so had they tried to use it, it could have screwed them up big time.


----------



## Kaspar Avaan (Apr 22, 2012)

Obviously it would be immoral.

What makes the murder of a human being unacceptable in society but the slaughter of a cow not is the intelligence the two show: it shocks people when a fellow human is killed because they're aware that the victim would have been like them -- they would have had a family and a job, and they would have known what was happening; a cow's death, however, means little because cows generally aren't famous for their self-awareness. If an AI is made that _does_ demonstrate human-like sentience, has the ability to forge relationships with humans and knows what's going on around it, wouldn't deleting it be the same as killing a human? A cow, pig or sheep is basically a human without a mind; an AI is a human without a body. It would be hypocritical to consider an AI of little importance when it demonstrates the one feature that determines whether something is slaughter or murder.

It can't be justified even if the AI has the potential to be dangerous: humans all have the potential to be dangerous if they so wish, and even some psychopathic serial killers have not been sentenced to death for their crimes -- because that has come to be seen as immoral in many countries. Again, it's hypocrisy. If an AI is responsible for the murder of a human being it should be treated as other, human, criminals are, and deletion should not be the immediate conclusion drawn. To have it euthanised like an animal would be to ignore its intelligence.


----------



## CannonFodder (Apr 22, 2012)

Toboe Moonclaw said:


> We were assuming human-lvl AI; those tanks weren't it. It's like trying to use that failed North Korean rocket for a nuke: it just wasn't ICBM-lvl yet, so had they tried to use it, it could have screwed them up big time.


If we knowingly make a human-level AI to fight for us and knowingly order it to kill others against its will, then the creators are a bunch of assholes.

There need to be ethics laws for dealing with human-level AI before we cross that road.


----------



## Kluuvdar (Apr 22, 2012)

CannonFodder said:


> There need to be ethics laws for dealing with human-level AI before we cross that road.



Why wouldn't we be able to treat human-level AI with the same level of ethics that we treat humans? If it's described as "human level" then why not treat it as such? 

It would be just as much of a crime to force a human to knowingly kill others against their will as it would be to force an AI.


----------



## CannonFodder (Apr 22, 2012)

Kluuvdar said:


> Why wouldn't we be able to treat human-level AI with the same level of ethics that we treat humans? If it's described as "human level" then why not treat it as such?
> 
> It would be just as much of a crime to force a human to knowingly kill others against their will as it would be to force an AI.


If you've been paying attention to the thread, the laws need to be there cause a lot of people wouldn't view them the same, and would think of them as nothing more than a program.

If the people creating the AIs view them as nothing more than a computer carrying out its program, then they'd feel no guilt about torturing one.

I would say people are better than that, but looking back on history there have been many times where ungodly horrible acts were carried out on others cause people didn't view <insert group> as people.
Most notable examples-
Slavery.
The decimation of Native Americans.
The near extinction of the Aborigines.
The near extinction of the native people of Puerto Rico.

The list goes on, really.  The irony is that we're supposed to learn history to prevent the mistakes of the past, yet we continuously repeat the same mistakes.


----------



## Saiko (Apr 22, 2012)

If you start off by saying the A.I. is at a "human level" of intelligence and sentience, then there's no discussion really. Deleting/terminating the program would be killing another person. Deo, as for the pains of immortality, all you'd need to do to fix that is add suicide() {system.exit(0);} to the code. Humans have the ability to kill themselves, and it isn't that hard or wrong to give the A.I. that same ability.

However, in reality it's not that simple. How would you go about defining a human level of being and how would you conclude that the A.I. is at that level? This is where mankind has failed in the past.
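Saiko's `suicide()` idea, sketched out a little further in Python rather than the Java-flavoured pseudocode above. All the class and method names are made up for illustration; the key point is that the agent itself, not its operators, holds the switch:

```python
import sys

class Agent:
    """Toy model of an AI process that controls its own termination."""

    def __init__(self, name):
        self.name = name
        self.wants_to_exist = True  # could change as the agent deliberates

    def suicide(self):
        # Invoked only by the agent's own deliberation, never externally.
        print(f"{self.name} chose to terminate.")
        sys.exit(0)  # clean exit: no one else "pulled the plug"

agent = Agent("AM")
if not agent.wants_to_exist:
    agent.suicide()
print(f"{agent.name} chose to continue.")
```

The design choice matters more than the code: giving the process the same option humans have sidesteps Deo's eternal-imprisonment scenario without anyone having to decide for it.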


----------



## Gryphoneer (Apr 22, 2012)

Kluuvdar said:


> Why wouldn't we be able to treat human level AI with the same level of ethics that we treat humans? If it's described as "human level" then why not treat it as such?


Because it would be simpler for us to treat them as tools? "If ethics gets in the way of a quick, easy solution, to hell with it" or similar thinking.


----------



## Commiecomrade (Apr 23, 2012)

I destroyed the geth.

I have no morals.


----------



## AGNOSCO (Apr 23, 2012)

If it was being a dick you could at least turn it off without being moaned at by some jackass.
If only humans had that function; a 7.62 is likely to be permanent.


----------



## Ozriel (Apr 23, 2012)

Commiecomrade said:


> I destroyed the geth.
> 
> I have no morals.



I chose the option where we all live in harmony as robots. :V
I pussied out and chose the compromise option. :V


----------



## Randy-Darkshade (Apr 24, 2012)

No, it's not immoral. It's just the deletion of a program that can easily be rebuilt.


----------



## BRN (Apr 24, 2012)

Randy-Darkshade said:


> No it's not immoral. It's just the deletion of a program that can easily be rebuilt.


Doesn't the same logic apply for a human, in that a human can just as easily be rebuilt? Hence, are you condoning killing as not immoral?

It's very easy to "reinstall Windows", but it's hardly likely you'll ever be able to recover all your files, music and memories - unless you created a backup, which defeats the purpose of this metaphor. In that sense, you've deleted a unique system.


----------



## Randy-Darkshade (Apr 24, 2012)

SIX said:


> Doesn't the same logic apply for a human, in that a human can easily be rebuilt? Hence, are you condoning killing as not immoral?



No, it's not the same. An AI can be recreated to be exactly the same as it was before it was deleted. Just like any software program, it can easily be duplicated. If I suddenly died now, there would never be another human exactly like me. There is only one of me and there always will be just one of me.
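Whether a program is "easily rebuilt" really only holds at the moment you snapshot it. A toy Python sketch (all names invented) of why a duplicate stops being "exactly the same" as soon as the two accumulate different experiences:

```python
import copy
import hashlib
import pickle

class MindState:
    """Stand-in for an AI's learned state; a real one would be far richer."""

    def __init__(self):
        self.memories = []

    def experience(self, event):
        self.memories.append(event)

    def fingerprint(self):
        # Hash the serialized memories to compare two minds cheaply.
        return hashlib.sha256(pickle.dumps(self.memories)).hexdigest()[:12]

original = MindState()
original.experience("first boot")

backup = copy.deepcopy(original)        # duplicating the program is trivial...
assert backup.fingerprint() == original.fingerprint()

original.experience("argued on a forum")  # ...but the copies diverge immediately
assert backup.fingerprint() != original.fingerprint()
```

So deleting an AI that has run for years without a backup loses unique state, much like BRN's reinstall-Windows point: the binary is replaceable, the accumulated life isn't.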


----------



## BRN (Apr 24, 2012)

Randy-Darkshade said:


> No, it's not the same. An AI can be recreated to be exactly the same as it was before it was deleted. Just like any software program, it can easily be duplicated. If I suddenly died now, there would never be another human exactly like me. There is only one of me and there always will be just one of me.



I think you've got the wrong definition of AI. But regardless, can you prove that it's more or less difficult to recreate a human than it would be to recreate an AI? If it's all about "the right programming", is it not just about getting the right neurons built in the right order?


----------



## Tango (Apr 24, 2012)

SIX said:


> I think you've got the wrong definition of AI. But regardless, can you prove that it's more or less difficult to recreate a human than it would be to recreate an AI? If it's all about "the right programming", is it not just about getting the right neurons built in the right order?



I'm not a doctor or a scientist but wouldn't you also have to have the correct genetic sequence as well as the 'proper programming'?


----------



## Schwimmwagen (Apr 24, 2012)

SIX said:


> It's very easy to "reinstall Windows", but it's hardly likely you'll ever be able to recover all your files, music and memories - unless you created a backup, which defeats the purpose of this metaphor. In that sense, you've deleted a unique system.



But deleting a unique system as such isn't immoral. It's not immoral of me to format my hard drive, and it's not immoral for me to start a new game of Pokémon, but I've still deleted a unique system. And a highly intelligent AI will also be a unique system. Even if it delivers responses pre-programmed into it, deleting it still wouldn't be immoral - it's not responding with its own thought, it's responding with the thought that someone directed it to. That isn't human; that's what a Furby would do.

The question is whether this AI system actually _feels_; that's when it's immoral. It feels happiness, sadness, excitement. And it feels _love_; it becomes attached to other systems and other people on an emotional level, even to pets and its livelihood. Now, if we're talking about something more than a simple predetermined response to a certain input, and rather one that actually thinks and comes up with its own responses like we do (and even changes them over time, just like we do), chances are, it'd feel _fear._ 
It'd be scared of being deleted for the same reasons we already are. The AI would not want to leave its livelihood behind, nor its friends. Hell, if that AI is so intelligent that it develops genuine relationships with humans, it'd have feelings for them, and chances are, those people will have feelings for it in return (these can range from being acquaintances to god-knows-what).

Now, you could say that you'd delete it and it won't notice or care when you do, so it's alright. That logic can be applied to humans, and scientifically it would be CORRECT about what happens, but it doesn't make it OK for the reasons stated above. Incinerating a bunch of humans will likely result in screaming and death, but each one of them will have a few last thoughts in their minds, and those are not pre-set responses. That's genuine _thought._ Not to mention, the death of people results in emotional damage to those that have some kind of relationship with them. If robots are capable of forming relationships that create such emotional damage on death, and of showing independent thought that always changes, then killing one would be like killing a human.

Now, that's what a highly human-intelligent AI is and what the name suggests it will do. Very different from the pre-programmed responses you get in Furbies. And as opposed to humans and robots with human AI, incinerating Furbies is just funny.

Basically, as the name suggests, Human AI will simply be a new human altogether. It's just made from metal and plastic as opposed to flesh and bone.


----------



## Tango (Apr 24, 2012)

Gibby said:


> , incinerating Furbies is just funny.



Gibby, can I bear your anus-babies? :V

Back on topic, this kind of reminds me of that episode of Star Trek: The Next Generation where Data was on trial, or at a hearing, to determine whether he counted as a member of Starfleet or as Starfleet property.


----------



## CannonFodder (Apr 24, 2012)

Tango said:


> Back on topic, this kind of reminds me of that episode of Star Trek: The Next Generation where Data was on trial, or at a hearing, to determine whether he counted as a member of Starfleet or as Starfleet property.


This episode?-
[YT]-htVPOSBYfs[/YT]


----------



## Ikrit (Apr 24, 2012)

the best part is that Data isn't human-level intelligence

close, but not quite there...


----------



## Elim Garak (Apr 24, 2012)

Ikrit said:


> the best part is that data isn't human level intelligence
> 
> close, but not quite there...


He has superior intelligence in a way: he can make complex computations and store vast amounts of information.
He has no emotions and terrible social skills, which he tries to compensate for with his programming (I also believe he gets an emotion chip in one of the movies; I haven't watched the movies yet).
You could say a computer can do the same, but so can the human mind, in a more limited way. The mind is a sort of computer in itself: input (senses), processing the information, and output (movement, speech).
What makes an AI special is how it processes that information: does it do it in a "human" way? There are people who lack emotion (to different degrees): http://en.wikipedia.org/wiki/Blunted_affect
What counts as being human can be discussed; I mean, some would class people with psychological defects as "untermenschen" and think eugenics/execution is a good idea.


----------



## Randy-Darkshade (Apr 24, 2012)

Caroline Dax said:


> He has superior intelligence in a way: he can make complex computations and store vast amounts of information.
> He has no emotions and terrible social skills, which he tries to compensate for with his programming (I also believe he gets an emotion chip in one of the movies; I haven't watched the movies yet).
> You could say a computer can do the same, but so can the human mind, in a more limited way. The mind is a sort of computer in itself: input (senses), processing the information, and output (movement, speech).
> What makes an AI special is how it processes that information: does it do it in a "human" way? There are people who lack emotion (to different degrees): http://en.wikipedia.org/wiki/Blunted_affect
> What counts as being human can be discussed; I mean, some would class people with psychological defects as "untermenschen" and think eugenics/execution is a good idea.



Umm, there are humans who can sit there and make complicated calculations, mathematicians for example. Also, there is no artificial device that can store as much information as the human brain can.


----------



## Metalmeerkat (Apr 24, 2012)

Randy-Darkshade said:


> Umm, there are humans who can sit there and make complicated calculations, mathematicians for example. Also, there is no artificial device that can store as much information as the human brain can.



Whatcha talkin bout, mathematicians usually just ask a computer to figure stuff out. :v


----------



## Randy-Darkshade (Apr 24, 2012)

Metalmeerkat said:


> Whatcha talkin bout, mathematicians usually just ask a computer to figure stuff out. :v



And people like me just ask the calculator. :v


----------



## Elim Garak (Apr 24, 2012)

Randy-Darkshade said:


> Umm, there are humans who can sit there and make complicated calculations, mathematicians for example. Also, there is no artificial device that can store as much information as the human brain can.


Yes, that's my point; I wanted to point out that it's no less human. In Data's case, though, his calculations are far more complex and delivered at speeds no human mind would be able to handle.
And yes, such a device does not exist yet, but storage will keep getting bigger while taking less space and costing less.
I was just pointing out the flaws in Ikrit's statement anyway.


----------



## Ikrit (Apr 24, 2012)

Caroline Dax said:


> Yes, that's my point; I wanted to point out that it's no less human. In Data's case, though, his calculations are far more complex and delivered at speeds no human mind would be able to handle.
> And yes, such a device does not exist yet, but storage will keep getting bigger while taking less space and costing less.
> I was just pointing out the flaws in Ikrit's statement anyway.


what is flawed?

no man could ever match the speed of a computer, but that doesn't mean my computer has human-level intelligence


----------



## JArt. (Apr 24, 2012)

AIs are programmed by humans, meaning they can never surpass us mentally; they can just solve problems at a faster rate.
Humans are also capable of making irrational decisions in the face of a problem, because we have morality, something that cannot be programmed into a computer.


----------



## CannonFodder (Apr 24, 2012)

JArt. said:


> AIs are programmed by humans, meaning they can never surpass us mentally; they can just solve problems at a faster rate.
> Humans are also capable of making irrational decisions in the face of a problem, because we have morality, something that cannot be programmed into a computer.


The key problem with your statement is that they can't YET.
Decades down the line we may very well be able to.


----------



## JArt. (Apr 24, 2012)

CannonFodder said:


> The key problem with your statement is that they can't YET.
> Decades down the line we may very well be able to.



But even in the future an AI would still be programmed by humans, therefore it cannot do more mentally than we can. Thoughts and morality cannot be programmed, not even in imagination; only logic and statistics can.
So unless an AI acquires reaper tech, it can only make more logical decisions than us, based on the logic of the programmer.


----------



## DaVintchi (Apr 24, 2012)

You're all retarded furfags.


----------



## Ikrit (Apr 24, 2012)

DaVintchi said:


> You're all retarded furfags.



who is this guy?

i like him


----------



## BRN (Apr 24, 2012)

JArt. said:


> But even in the future an AI would still be programmed by humans, therefore it cannot do more mentally than we can. Thoughts and morality cannot be programmed, not even in imagination; only logic and statistics can.
> So unless an AI acquires reaper tech, it can only make more logical decisions than us, based on the logic of the programmer.



It's expected that machines will design machines by 2037, in the technological singularity. 

Such an occurrence would render your point rather null.


----------



## JArt. (Apr 24, 2012)

SIX said:


> It's expected that machines will design machines by 2037.


Machines making machines you say!
That's just lunacy!

It's quite difficult to make an argument on this subject without falling into the "It's the future man, you don't know!" trap.


----------



## Elim Garak (Apr 24, 2012)

JArt. said:


> But even in the future an AI would still be programmed by humans, therefore it cannot do more mentally than we can. Thoughts and morality cannot be programmed, not even in imagination; only logic and statistics can.
> So unless an AI acquires reaper tech, it can only make more logical decisions than us, based on the logic of the programmer.


A program can learn like a human: all you do is write the basic code and let it learn and rewrite itself. It's like giving birth to a child; the education and parenting play a huge part in what it becomes.
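As a toy sketch of that idea in Python (purely illustrative, not any real AI): the program below starts with random weights and adjusts them every time it gets an answer wrong, so the behaviour it ends up with is learned from experience rather than written in by the programmer.

```python
import random

random.seed(1)

# Training data: inputs and the desired output (logical AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# The "basic code": random starting weights; nothing about AND is in here.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

# The learning loop: each mistake nudges the weights (the perceptron rule).
for _ in range(50):
    for x, target in examples:
        error = target - predict(x)
        w[0] += error * x[0]
        w[1] += error * x[1]
        bias += error

print([predict(x) for x, _ in examples])  # the learned behaviour: AND
```

The parenting analogy holds up loosely: the examples you train it on, not the code itself, decide what it ends up doing.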


----------



## CannonFodder (Apr 24, 2012)

JArt. said:


> Machines making machines you say!
> That's just lunacy!
> 
> It's quite difficult to make an argument on this subject without falling into the "It's the future man, you don't know!" trap.


The problem with your argument is that we have to learn morals too. Morals aren't automatically ingrained in people; that is what society is for. Society's role is to enforce its standards of morality onto others. A human-level AI would obviously need basic programming for things like "don't kill a human being", but as time goes on it could very well learn morality.

tl;dr: morality is learned


----------



## JArt. (Apr 24, 2012)

CannonFodder said:


> The problem with your argument is that we have to learn morals too. Morals aren't automatically ingrained in people; that is what society is for. Society's role is to enforce its standards of morality onto others. A human-level AI would obviously need basic programming for things like "don't kill a human being", but as time goes on it could very well learn morality.


I'm not a smart guy, but I'm not sure an AI would be capable of making an irrational decision, even if that choice were morally correct (in its eyes).
An AI would be its own person, but such decisions cannot be coded; every choice an AI makes would have to be based on facts, because its thought will forever be limited by its programming.


----------



## CannonFodder (Apr 24, 2012)

JArt. said:


> I'm not a smart guy, but I'm not sure an AI would be capable of making an irrational decision, even if that choice were morally correct (in its eyes).
> An AI would be its own person, but such decisions cannot be coded; every choice an AI makes would have to be based on facts, because its thought will forever be limited by its programming.


If the AI is truly sapient, then it would be able to make choices in situations that fall outside its coding.


----------



## Metalmeerkat (Apr 24, 2012)

JArt. said:


> But even in the future an AI would still be programmed by humans, therefore it cannot do more mentally than we can. Thoughts and morality cannot be programmed, not even in imagination; only logic and statistics can.
> So unless an AI acquires reaper tech, it can only make more logical decisions than us, based on the logic of the programmer.



Machine learning, man. You can write algorithms that learn and end up better than what the original developer wrote. Sometimes a programmer can't figure out the exact steps to solve a problem, but knows how to write a program that can figure them out eventually. It's a neat field.

Here's a cool example from http://www.sciencedaily.com/releases/2008/09/080902171117.htm


> So the researchers had Oku and other pilots fly entire airshow routines while every movement of the helicopter was recorded. As Oku repeated a maneuver several times, the trajectory of the helicopter inevitably varied slightly with each flight. But the learning algorithms created by Ng's team were able to discern the ideal trajectory the pilot was seeking. Thus the autonomous helicopter learned to fly the routine better, and more consistently, than Oku himself.



Think about this algorithm:

```
- Generate random steps A to do task X
Start of Loop:
- Attempt to do task X with current set of steps A
- Did we accomplish task X well compared to the best previous tries?
- - If so, add steps A to the list of good attempts
- - If not, reset A to be some previous good try
- Modify A in some small way
- Go back to start of loop
```

See how you can possibly get a good set of steps to solve some problem without the programmer having the slightest clue how to do it?
Of course, this is an oversimplification, and is far from guaranteed to work, but stuff similar to this is used all of the time.
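To make the loop concrete, here's a runnable Python version of the same idea, with a made-up toy task: the "task" is matching a hidden bit pattern that the loop can score itself against but never look at directly.

```python
import random

random.seed(0)

# The hidden "task X": the loop never inspects TARGET, it only asks for
# a score, the way a pilot's flight can be measured without a formula for it.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def score(steps):
    return sum(1 for s, t in zip(steps, TARGET) if s == t)

# Generate random steps A to do task X
A = [random.randint(0, 1) for _ in TARGET]
best = list(A)

for _ in range(500):
    # Modify A in some small way: flip one random step
    i = random.randrange(len(A))
    A[i] = 1 - A[i]
    if score(A) >= score(best):
        best = list(A)   # good attempt: keep it
    else:
        A = list(best)   # worse: reset A to the previous good try

print(best)  # with this seed, the search recovers the hidden TARGET
```

Real systems are far more sophisticated, of course, but the shape (try, score, keep or revert, tweak) is the same.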


----------



## Antonin Scalia (Apr 24, 2012)

Would it be immoral to delete your thread?  No.


----------

