# first step for AI...



## NekoFox08 (Jul 22, 2008)

just wondering, didn't see any threads on this (only went a few pages back XD)

for those interested in the advancement of technology, what would be the first step in creating an actual AI program? (e.g., localization (knowing where you are), [mapping](http://en.wikipedia.org/wiki/Robotic_mapping) (learning what is around you), and [motion planning](http://en.wikipedia.org/wiki/Motion_planning) (figuring out how to get there), etc.) how would it be done? I'm really interested in seeing how the future looks


----------



## Pi (Jul 22, 2008)

Lisp?


----------



## NekoFox08 (Jul 22, 2008)

Pi said:


> Lisp?



heh, next is emotion?


----------



## Neybulot (Jul 22, 2008)

The first step? Learn to code.


----------



## ArielMT (Jul 22, 2008)

Learn about what programming languages are out there, first.  Creating an artificial intelligence won't be possible without a very high-level programming language.

Personally, I think Prolog is closer to that than Lisp, what with Prolog being a logical language and Lisp being a functional language, but Lisp has more history, more development, and greater platform support.

However, if the progress of computing to date is any indicator, the first widely publicized AI will be written in a Frankenstein-like hybrid procedural language like C++, C#, or whatever the C-du-jour happens to be.


----------



## Project_X (Jul 22, 2008)

Well... in the freeze tag server I mentioned before in another thread, the bots in the game talk to you in full conversation. It actually took me a while to catch on. ^^;

One of them was all like "What!? I'm not a bot!" and I said. "Yes you are!" and the bot said "Okay...you caught me. I'm a bot." Then another bot said, "Oh! Don't like bots do ya?"

and the awesomeness continued from there.


----------



## nrr (Jul 22, 2008)

ArielMT said:


> Learn about what programming languages are out there, first.  Creating an artificial intelligence won't be possible without a very high-level programming language.


Quoted for truth.



			
ArielMT said:

> Personally, I think Prolog is closer to that than Lisp, what with Prolog being a logical language and Lisp being a functional language, but Lisp has more history, more development, and greater platform support.


The big thing about Lisp is that you're given the power of macros that can write more code, and you can modify parts of the running image during runtime.

It isn't so much that there aren't Prolog implementations for certain systems or that Prolog has less history and is less mature than Lisp.



			
ArielMT said:

> However, if the progress of computing to date is any indicator, the first widely publicized AI will be written in a Frankenstein-like hybrid procedural language like C++, C#, or whatever the C-du-jour happens to be.


I'd love to see them try.



Pi said:


> Lisp?


Something like that.



NekoFox08 said:


> for those interested in advancement of technology, what would be the first step in creating an actual AI program? (via, localization (knowing where you are), mapping (learning what is around you) and motion planning (figuring out how to get there). etc.). how would it be done?


Right now, there's a metric fuckton of research being conducted in all of these areas, of which I'm pretty sure you're aware, but IIRC, no single one of them is the first step.

I think we really started to see AI proliferate into the public consciousness with video games, for the most part, given that wayfinding in RTS-genre titles uses a rudimentary motion planning algorithm to get the little units from point A to point B.

The military's largely interested in mapping and localization, which is why the DARPA Defense Sciences Office is paying out some pretty huge grants for research in sensor networks.
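The RTS wayfinding described above can be sketched as a plain breadth-first search over a grid. This is only a minimal illustration under invented assumptions (the grid encoding and the `find_path` helper are made up for this example), not how any particular game engine does it:

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search on a 2D grid: 0 = open cell, 1 = wall.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links backwards to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and step not in parent:
                parent[step] = cell
                queue.append(step)
    return None  # goal unreachable
```

Real games typically use A* with a distance heuristic rather than plain BFS, but the core idea of expanding reachable cells until the goal is found is the same.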


----------



## Eevee (Jul 22, 2008)

perl 6 has this built in


----------



## NekoFox08 (Jul 22, 2008)

well, I can see this all happening in 25 years or so... gawd, I want my own wall-e (none of that cheap RC crap) TT_TT

next step... GI >=3


----------



## Project_X (Jul 22, 2008)

Oh god...not GI, man....o.o


----------



## NekoFox08 (Jul 22, 2008)

Project_X said:


> Oh god...not GI, man....o.o



why not?

GI.... JOE!


----------



## Project_X (Jul 22, 2008)

Then it'll be like this:


----------



## NekoFox08 (Jul 22, 2008)

Project_X said:


> Then it'll be like this:



aw, how cute XD you're so paranoid

I always thought... if someone is intelligent enough to create something with artificial intelligence, they might also create some sort of fail-safe system. like, blow it the fuck up if it slaps you or something >_<

but what about this guy? watch the end parts. he's actually freaking smart enough to serve people! 0_o' also, I remember some scientist saying robots will never be able to fully, functionally run like a human being... well, I'm sure he's cowering in a corner right now T.T


----------



## Project_X (Jul 22, 2008)

NekoFox08 said:


> aw, how cute XD you're so paranoid



Oh hush.... no one must know!!! >.O


But asimo is da bomb...


----------



## nrr (Jul 22, 2008)

Eevee said:


> duke nukem forever has this built in


fixed.


----------



## NekoFox08 (Jul 22, 2008)

Project_X said:


> Oh hush.... no one must know!!! >.O
> 
> 
> But asimo is da bomb...



lol, did you see the commercial for asimo? I swear, I couldn't stop laughing xD

I almost feel sorry for asimo... wait, I do


----------



## Wontoon Kangaroo (Jul 22, 2008)

The first step for AI...is planning it out.


----------



## Project_X (Jul 22, 2008)

LOL! XD


----------



## NekoFox08 (Jul 22, 2008)

Wontoon Kangaroo said:


> The first step for AI...is planning it out.



they've been doing that for over 60 years T.T

cmon, I wanna see them technology!


----------



## ArielMT (Jul 22, 2008)

nrr said:


> ArielMT said:
> 
> 
> > However, if the progress of computing to date is any indicator, the first widely publicized AI will be written in a Frankenstein-like hybrid procedural language like C++, C#, or whatever the C-du-jour happens to be.
> ...


Microsoft is proud to introduce the first PC operating system that really is smarter than you, Windows Vista.  What.

Dear aunt, let's set so double the killer delete select all. (Oldie but goodie .)


----------



## NekoFox08 (Jul 22, 2008)

ArielMT said:


> Microsoft is proud to introduce the first PC operating system that really is smarter than you, Windows Vista.  What.
> 
> Dear aunt, let's set so double the killer delete select all. (Oldie but goodie .)



lol, I'm lost at the killer part xD

overall, I still gotta say, I'm impressed even with the flaws.


----------



## Pi (Jul 22, 2008)

ArielMT said:


> Personally, I think Prolog is closer to that than Lisp, what with Prolog being a logical language and Lisp being a functional language, but Lisp has more history, more development, and greater platform support.



Prolog is good if you're going for a logical/expert-system AI. Lisp lets you do open-ended things more easily, in part due to what nrr's mentioned.

We're still a long way off from Turing-passable AIs, though.
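One simple style of the logical/expert-system AI mentioned above is forward chaining: apply if-then rules until no new facts appear. This is a from-scratch toy sketch, not Prolog or any real library, and the rule format and fact names are invented for the example:

```python
def forward_chain(facts, rules):
    """Derive new facts from rules of the form (premises, conclusion),
    where `premises` is a tuple of facts that must all be known."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

# Toy knowledge base: facts chain through intermediate conclusions.
rules = [
    (("has_fur",), "mammal"),
    (("mammal", "eats_meat"), "carnivore"),
]
derived = forward_chain({"has_fur", "eats_meat"}, rules)
# derived now also contains "mammal" and "carnivore"
```

Prolog itself answers queries by backward chaining (working from the goal toward known facts), but the flavor of rule-based inference is the same.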


----------



## Aurali (Jul 22, 2008)

@OP: depends on what you wanna do with the AI. Artificial intelligence is a broad term, and AI in the game field is very different from AI in the robotics field.


----------



## Project_X (Jul 22, 2008)

Eli said:


> @OP: depends on what you wanna do with the AI. Artificial intelligence is a broad term, and AI in the game field is very different from AI in the robotics field.



I still like what [ESG] did with their bots best so far.....


----------



## ArielMT (Jul 22, 2008)

NekoFox08 said:


> overall, I still gotta say, I'm impressed even with the flaws.


In all seriousness, I've been insisting since it came out that those who really, truly, desperately want Vista get it on a new PC instead of trying the upgrade.


Pi said:


> Prolog is good if you're going for a logical/expert-system AI. Lisp lets you do open-ended things more easily, in part due to what nrr's mentioned.


True.  I forgot about that.  (Then again, Lisp seems to be a language I doubt I'll ever truly grok.)


----------



## Kommodore (Jul 23, 2008)

In all likelihood, the first step toward building self-aware machines will come from the successful completion of a computer that mimics the neural structure of an organic brain, i.e., quantum computers. Having electrons "talk" (send bits of information) to other electrons is, in essence, the same function neurons perform. It does not matter how much code you write if the machine you write it for only thinks one thing at a time.

Modern computers, all of them, essentially do one calculation at a time. You can have them do it _really, really fast_, but it is still one at a time, and there is no way you will be able to fit sentience onto a format like that, no matter how good your program is. Quantum computers, however, can have multiple "thoughts" all at the same time, just like an organic brain. In both function and form, they are very similar. Put your uber-code onto a quantum computer, and you are well on your way to an AI.

Also, I doubt we will ever "code" self-awareness; far more likely, IMO, is a base program that "evolves" into a more and more complex machine, similar to how natural intelligence evolved. Why do all this work yourself when you can copy nature? She did all this shit already; why do it again?

And I would just like to get this out, because I know a lot of you are thinking it: no, there won't be a point where the machines become vastly superior to their puny human counterparts. Remember, every program for calculation and computation the AI has, we will also have. It doesn't matter if the AI can tell you exactly where pi ends; you can plug it into your quantum calculator and do the same damn thing. Any advantages the AI will have, we will also have access to, making them void as advantages.


----------



## hiphopopotimus (Jul 23, 2008)

NekoFox08 said:


> they've been doing that for over 60 years T.T
> 
> cmon, I wanna see them technology!



Yes, they have!

The brightest minds in computer science have been trying to do this for the past 90 (longer?) years, and they have all failed.

So I think it's fair to say that no one knows the first step to making AI.


----------



## NekoFox08 (Jul 23, 2008)

hiphopopotimus said:


> Yes, they have!
> 
> The brightest minds in computer science have been trying to do this for the past 90(longer?) years and they have all failed.
> 
> So I think its fair to say that no one knows the first step to making AI.



well... technology back then sucked bad! nowadays, people are inventing new shit every day (there's gotta be at least 1,000 upgrades to fucking cellphones by now). they've gotta make a breakthrough soon >_>

If they can make an invisibility cloak that doesn't make you blind, then they can make some stupid AI T.T


----------



## hiphopopotimus (Jul 24, 2008)

NekoFox08 said:


> If they can make an invisibility cloak that doesn't make you blind, then they can make some stupid AI T.T



But we need invisibility cloaks to sneak into the girls locker room. Will AI help us with that? NO!

Invisibility is srsly important.


----------



## makee43999 (Jul 24, 2008)

Well, they already have a basic AI that lets a bot find its location, sense its surroundings, and figure out how to get from point A to point B. It's currently being researched for self-driving cars, and yes, the program does work, and it's sooo sweet!!


----------



## Widontknow (Jul 28, 2008)

CommodoreKitty said:


> In all likelihood, the first step of building self-aware machines will come from the successful completion of a computer that mimics the neural structure of an organic brain



Now this is something I strongly disagree with. I think this is the wrong direction from which to tackle the problem of AI.

If we wanted a human that is inorganic, then this would be a feasible approach. The problem is, we want a computer with super-human abilities. We don't know enough about how our brains function to say for certain what we'd need to change to make a human brain-based AI do anything except take up RAM and drink e-beer all day.


----------



## Aurali (Jul 28, 2008)

HI WIDONTKNOW :3


As much as I agree with ya, hun, I'm gonna disagree. Neurology is something that can't be avoided when doing artificial intelligence. Back in the early '60s, they did tests to try to make machines learn to tell a tank from its scenery using a new concept of computing (I can't remember its name right now.. neural networks? zzz). Of course it failed. YET even when they scrapped the project, they found that the computer DID learn how to tell day from night. Which is a step in the right direction, no? Just think where we'd be if that kind of technology had been developed instead of this crappy form of pseudologic we use today.
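The kind of learner being recalled here is an early perceptron-style network, which can be sketched in a few lines. The day/night brightness data below is a made-up toy, not the actual 1960s experiment:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single perceptron on (features, label) pairs, label in {0, 1}."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in samples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            error = label - (1 if activation > 0 else 0)
            # Nudge the weights toward the correct answer on each mistake.
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Toy "day vs. night" data: a single brightness feature in [0, 1].
samples = [([0.9], 1), ([0.8], 1), ([0.2], 0), ([0.1], 0)]
w, b = train_perceptron(samples)
```

A single perceptron can only learn linearly separable distinctions, which is roughly why projects like the tank classifier stalled while simpler distinctions (bright vs. dark) still came out.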


----------



## Alblaka (Jul 28, 2008)

I think you should separate the different kinds of AI:

First, there's the not-AI: a machine that always does the same thing, whatever you do or don't do. That can be a dishwasher or a very simple computer.

Next there's what I call MI: Mechanical Intelligence.
If you do something, it will react to it, but it will ALWAYS react the same way. Perfect example: a simple CnC match. You attack one side of the MI's base; it will always send all its troops to defend, and you can then always attack the other side of the base perfectly. It won't learn that it will lose that way.

I think these first two "AIs" are often used in different parts of human life.

Then comes the "real" AI:
the chess computer Deep Blue. In its first rounds, a 3-year-old kid could win against it; after thousands of matches, the world's best chess master loses against it.
At first the computer knew only the basic rules of chess. But then, in its games, it saved every move and every consequence that move had. So it really learned.

So you can say humanity is able to create AIs, but only very "small" AIs: they can only do the one thing they are programmed for.

As a result of this chain of argument, there must be a fourth kind of AI:
the AI you can compare with the human mind, which is able to learn what it was made for, then decide what it wants to learn next, and so on.
I think it's possible to make such an AI. But until now, you could only make it "think about what to learn next" among the things you've programmed in. If you finish the AI and a new "option" appears, it couldn't choose it.
So making a real AI, which is able to do everything it wants on its own, is not a possibility now. And I think it will take at least a hundred years to get the technology to make it possible.

oh... *I see, that's the third page, and the topic was _a bit_ different* -.-

Hmm, I think I won't erase the text up there.

What would be the first step to create an AI (the last kind I explained, I think)?
Hmmm...
To build a hard drive which is able to carry billions of TB at least. Only with such a big "memory" can you begin to create an AI comparable with the human mind, because the human mind can hold that much information... (I think more than billions of TB...)


----------



## Aurali (Jul 28, 2008)

Deep blue is a good example of a neural network^^


----------



## Widontknow (Jul 28, 2008)

Hi Eli :3

The problem with the neurologic approach, in my opinion, is that it's just trying to take a shortcut via emulating an organic system that we're not even sure we'll be able to expand on later.

It's a start, yes, but only until we can find a way for mechanical systems to perform accurate and quick approximations/assumptions.

Computers, by design, are very precise machines.  The hardest programming problem you could encounter today is having a computer generate a reasonable assumption based on some non-fixed input.  When we develop a software or hardware paradigm to handle this, we'll have successfully developed far enough to begin true research into AI.  

So that, I guess, would be my answer.  Develop a system capable of generating case-specific assumptions from a non-fixed dataset, then work on abstracting it to the general case.
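One very simple instance of "a case-specific assumption from a non-fixed dataset" is a nearest-neighbour guess: answer a new case with the label of the closest known case. Everything here (the data and the `nearest_neighbor_guess` helper) is invented to illustrate the idea:

```python
import math

def nearest_neighbor_guess(examples, query):
    """Guess a label for `query` as the label of the closest known case.
    `examples` is a list of (point, label) pairs; points are tuples."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    _, label = min(examples, key=lambda e: dist(e[0], query))
    return label

# Toy dataset: (temperature, humidity) readings with labels.
examples = [((0.0, 0.0), "cold"), ((30.0, 80.0), "hot")]
```

The dataset can grow or change at any time ("non-fixed"), and the guess adapts automatically, which is the rough shape of the capability described above, even if the hard part is making such assumptions robust in the general case.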


----------



## Aurali (Jul 28, 2008)

We can now introduce a new type of hard programming into this equation:
fuzzy logic. When used correctly, fuzzy logic can fix the flaws of precise mechanical machines by introducing limiters on what a computer treats as precise.

Example:

One could tell a machine that 1-10 is simply 1, and everything else is zero.
Now, real-world use is GOING to be a lot more complicated than this, but an experienced programmer (nrr or yak wanna chip in? ) can really make big strides in creating a fuzzy system a neural network can follow well.

So yeah.. your last sentence :3
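A common way to make such limiters gradual rather than sharp cutoffs is a fuzzy membership function. This is a generic textbook sketch (the temperature sets and their numbers are invented for illustration), not any specific controller:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], rising to 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Overlapping fuzzy sets over temperature in degrees Celsius.
def cold(t): return triangular(t, -10.0, 0.0, 15.0)
def warm(t): return triangular(t, 10.0, 20.0, 30.0)
def hot(t):  return triangular(t, 25.0, 35.0, 50.0)
```

Instead of a hard threshold, a reading of 12 degrees is simultaneously a little "cold" and a little "warm", and control rules fire in proportion to those degrees of membership.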


----------



## Wolf_Fox_Guy (Jul 28, 2008)

um............ this might be ignorance and inexperience talking, but would a third option for a computer help? Like, computer programs are all 1s and 0s, each representing either on or off as signals flow through microchips. Since these pulses are the computer's language, would a third option make it easier? That's my thought, anyway. I'll go away now.


----------



## Aurali (Jul 28, 2008)

A third option creates instability. A one and a zero are very hard to flip between, magnetically, but a .5 can twice as easily become a one or a zero.


----------



## Wolf_Fox_Guy (Jul 28, 2008)

Eli said:


> A third option creates instability. A one and a zero are very hard to flip between, magnetically, but a .5 can twice as easily become a one or a zero.



well, that's kinda my point. We humans aren't all the exact same; just look at the existence of this site. No one punched a button on our backs and told us to like this place. Same thing with a computer. Talking about AI means the computer can think and do its own thing, right? (The Three Laws aside.) To do that, wouldn't it need the option to either deviate from its programming or choose a response it hasn't been programmed with or learned? Like the chess computer: if you built one exactly like the one that beat the guy at chess, copied its programming and its learning, and pitted it against the other one, the game wouldn't add up to much, would it? Most likely every game would end in stalemate, or whoever moves first would win a lot more. Create a third option, put it in just the right place for instability to exist, and maybe one machine's process would become a bit more unpredictable: maybe computer one would rely a lot more heavily on its pawns, and the other might think its knights less useful. Humans created the computer, after all; we got along for over 10,000 years without them, and to this day, if you pitted man against computer in a life-or-death struggle, I think I can promise you that Mr. Chess Supercomputer could probably be beaten by just one guy with a bat. So maybe the possibility of instability would actually help create true thinking machines capable of all sorts of stuff.

Does any of that make sense or sound possible/feasible?


----------



## Alblaka (Jul 28, 2008)

No...

You have to understand: saying a computer is only 0 and 1 is... simple.

For example (reading the bits least-significant first):
10101
The first bit is 1 = on. That means the basic value of "1".
The second is the 2s place, which is off. The value stays "1".
The next is the 4s place, which is on, so the value is "5" (1+4).
Then comes the 8s place. It's off, so the value is still 5.
Last is the 16s place, which is on, so the value becomes 21.

So value 1 could be "A", 2 could be "B", and so on.
A computer works with billions of these numbers, like 0100010101110100101001010...

You can turn a bit on or off, but not both and nothing else. To use a "third option" you would have to build a completely new system.
With the "old" system you can create EVERY value.

I will count from 1 to 8 (least-significant bit first):
1000
0100
1100
0010
1010
0110
1110
0001

Every choice is possible. You could use a "2" to say "use doubled value" (so 2-0-0-0 would be "2"), but that's not necessary. You simply say 0-1-0-0.

So a "third option" would at best only slow down the computer or something like that. And that's only if you _forget_ that mechanically (and PCs ARE mechanically based), only on and off are available.
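The place-value walkthrough above reads the bits least-significant first; that convention can be checked mechanically (the `lsb_first_value` helper is invented just for this example):

```python
def lsb_first_value(bits):
    """Value of a bit string written least-significant bit first,
    e.g. "10101" -> 1 + 4 + 16 = 21."""
    return sum(int(bit) << place for place, bit in enumerate(bits))
```

Note that Python's built-in `int(s, 2)` uses the opposite, most-significant-first convention, which is how binary is usually written.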


----------



## Wolf_Fox_Guy (Jul 28, 2008)

Alblaka said:


> So a "third option" would at best only slow down the computer or something like that. And that's only if you _forget_ that mechanically (and PCs ARE mechanically based), only on and off are available.



well, I can see the problem there, but.......... we use computers to simulate tons of really complex things that use an ungodly number of variables, like climate change, weather patterns on a global scale, and cosmic events involving tons of particles. So, could we run a program that simulates a computer? And then tweak that program so the simulated computer in the virtual world has a third option?


----------



## Alblaka (Jul 28, 2008)

So you want a computer to simulate a computer. ^^ That's nice.
Then you can cancel out the mechanical problems...

But again:


> Every choice is possible. You could use a "2" to say "use doubled value" (so 2-0-0-0 would be "2"), but that's not necessary. You simply say 0-1-0-0.



The system is only numbers (lots of numbers). With the "third option" you would only change _how the numbers get saved_, not _how the numbers get used_.


----------



## Aurali (Jul 28, 2008)

Wolf_Fox_Guy said:


> well, that's kinda my point. We humans aren't all the exact same; just look at the existence of this site. No one punched a button on our backs and told us to like this place. Same thing with a computer. Talking about AI means the computer can think and do its own thing, right? (The Three Laws aside.) To do that, wouldn't it need the option to either deviate from its programming or choose a response it hasn't been programmed with or learned? Like the chess computer: if you built one exactly like the one that beat the guy at chess, copied its programming and its learning, and pitted it against the other one, the game wouldn't add up to much, would it? Most likely every game would end in stalemate, or whoever moves first would win a lot more. Create a third option, put it in just the right place for instability to exist, and maybe one machine's process would become a bit more unpredictable: maybe computer one would rely a lot more heavily on its pawns, and the other might think its knights less useful. Humans created the computer, after all; we got along for over 10,000 years without them, and to this day, if you pitted man against computer in a life-or-death struggle, I think I can promise you that Mr. Chess Supercomputer could probably be beaten by just one guy with a bat. So maybe the possibility of instability would actually help create true thinking machines capable of all sorts of stuff.
> 
> Does any of that make sense or sound possible/feasible?



You missed my point entirely. There isn't a third option, in order to protect data. The way the processor/hard drive/RAM works is set up so data doesn't corrupt from slight changes in the signal. Look at an analog TV next to a digital TV.

You're suggesting we use analog (grainy) computers instead of digital (crisp ones and zeros)... With a third option, the chances of 1+1 becoming 3 multiply a millionfold, because the environment could turn that third option into the first or second...

There is a reason it's only on and off.


----------



## Wolf_Fox_Guy (Jul 28, 2008)

I'll just default to you guys for this, I really dont know enough about any of it.


----------



## Eevee (Jul 28, 2008)

trinary computers have been attempted but iirc the implementation is flimsy and there is no real benefit


----------



## Pi (Jul 28, 2008)

Nearly 99% of this thread is complete bullshit. Quit being dilettantes and actually do your research.


----------



## nrr (Jul 29, 2008)

Pi said:


> Nearly 99% of this thread is complete bullshit. Quit being dilettantes and actually do your research.


*[size=+2]Quoted for fucking truth.[/size]*


----------



## Hollud (Jul 29, 2008)

Pi said:


> Nearly 99% of this thread is complete bullshit. Quit being dilettantes and actually do your research.





nrr said:


> *[size=+2]Quoted for fucking truth.[/size]*





And! Quoted again to demonstrate what sort of behaviour AI should not have.

Unless, of course, someone with this sort of behaviour decides to mirror such an AI after themselves.


----------



## ArielMT (Jul 29, 2008)

Pi said:


> Nearly 99% of this thread is complete bullshit. Quit being dilettantes and actually do your research.


Quoted a third time for the hard truth necessary for a practical AI's first step. Meaningful debate to follow no sooner.


----------



## Pi (Jul 29, 2008)

Hollud said:


> And! Quoted again to demonstrate what sort of behaviour AI should not have.
> 
> Unless, of course, someone with this sort of behaviour decides to mirror such an AI after themselves.



So you'd rather have an AI turn out incoherent garbage like this? Not to mention Eli's reply?

Oooookay.


----------



## Xenofur (Jul 29, 2008)

Hollud said:


> And! Quoted again to demonstrate what sort of behaviour AI should not have.
> 
> Unless, of course, someone with this sort of behaviour decides to mirror such an AI after themselves.


Let me put it like this: Ever brought up a child? If not, don't even try to think about a human-like AI in the first place.


----------



## Hollud (Jul 29, 2008)

Xenofur said:


> Let me put it like this: Ever brought up a child? If not, don't even try to think about a human-like AI in the first place.


I concur. It is infinitely difficult to produce an artificial intelligence that can argue, banter and agree with your thoughts and ideas. After all, it is a human (or humans) who will ultimately come up with a concept and a design for an A.I. system. A person's individual character can subliminally affect the manner in which the system will react to a user.

In short, just like how you would bring up a child. Your child absorbs and picks up pieces of who you are. That aspect brings about character development and will affect how the child performs later in life. Theoretically, it should be the same in a human-like A.I.

But there are about a few billion individuals on this planet alone. One system cannot possibly comprehend the vast number of unique personalities. Everyone has their own personal definition of just what defines 'good' and 'bad'. The phrase "one man's meat is another man's poison" comes to mind. Same concept here. What individual A finds to be good may not be the same as what individual B thinks of it to be.

Whatever you do, you cannot teach a computer morality. You cannot teach a computer values. You cannot impart upon a piece of silicon just what is best for you, because it just isn't human. It's calculative and predictive, and if you asked it to stop and smell the flowers, it'll probably warn you that doing so will trigger your allergies.

The closest we can come is an automation of today's technology. We can teach it to anticipate our needs, but fulfilling them will still require a human touch.

And if someone ever comes up with an A.I. system, it'll probably be some Frankenstein creation involving a mad scientist's brain and plenty of stuff borrowed from the world of fiction.


----------



## Alblaka (Jul 29, 2008)

Nice argumentation.

At first I wanted to write that you're wrong, that you can teach computers morality.
But while writing it, I noticed you're right: if the AI really thinks, it will come to the conclusion that morality is not efficient logic. Like you said:


> It's calculative and predictive, and if you asked it to stop and smell the flowers, it'll probably warn you that doing so will trigger your allergies.


The second part of that sentence is nice XD

So the only way to get such an AI, with thoughts, "feelings", morality... is to "copy" a human mind into a computer. But until that's possible...
*looks at calendar*
hmmm... Could take some days *VERY ironic*


----------



## Hollud (Jul 29, 2008)

Alblaka said:


> The second part of sentence is nice XD



Can you imagine just how awkward computers would sound if they were to react on a "human" level?

Human: "Wow! That looks delicious!"
Computer: "Your medical records show that you are allergic to seafood. Consuming it could result in your body going into anaphylactic shock. You could die."

H: "Say you love me."
C: "You love me."

H: "I think you're cute."
C: "Like your Pokemon collection?"


----------



## NekoFox08 (Jul 30, 2008)

Hollud said:


> Human: "Wow! That looks delicious!"
> Computer: "Your medical records show that you are allergic to seafood. Consuming it could result in your body going into anaphylactic shock. You could die."
> 
> H: "Say you love me."
> ...



hehe... awkward... yea >_<

so who's the 1% that ISN'T bullshit? Pi, demonstrate to all of us your logic of the situation that isn't bullshit... why don't you teach me? =3


----------



## Pi (Jul 30, 2008)

NekoFox08 said:


> hehe... awkward... yea >_<
> 
> so who's the 1% that ISN'T bullshit? Pi, demonstrate to all of us your logic of the situation that isn't bullshit... why don't you teach me? =3



I'm not re-reading the entire thread because it's too late for me to concentrate. However, for starters:
Everything I said. Duh.
What Eevee said about trinary computers.
What ArielMT said about Prolog and Vista.
What nrr said about Lisp.

Nearly (please, please note the quantifier here) everything else was amateur garbage that sounds more like someone read 2001 and smoked a joint.


----------



## Hollud (Jul 30, 2008)

NekoFox08 said:


> so who's the 1% that ISN'T bullshit? Pi, demonstrate to all of us your logic of the situation that isn't bullshit... why don't you teach me? =3



That would not be advisable.

You see, he's already made his point, which he is entitled to, that the bulk of the thread is composed of ideas (read: nicer word for "garbage") that will not materialise given the quality of contributions from... well... contributors. Given that his statement promoted the idea that he was and would be understandably disgusted at the condition of the thread, it would therefore be expected that any participation whatsoever would do little to improve the status quo.

There is a hint that he does harbour some form of knowledge, given that he has reinforced his argument with conclusive evidence. Unfortunately, judging from his mannerisms and the words used, it is apparent that he will not be likely to impart such knowledge to benefit others.

Teach? That's _nearly_ impossible.


----------



## Alblaka (Jul 30, 2008)

Hollud said:


> Can you imagine just how awkward computers would sound if they were to react on a "human" level?
> 
> Human: "Wow! That looks delicious!"
> Computer: "Your medical records show that you are allergic to seafood. Consuming it could result in your body going into anaphylactic shock. You could die."
> ...



More, more! I want more of these word games XD

H: "What's the meaning of life?"
C: "42"
*ok, I copied that from the film "Per Anhalter durch die Galaxis" (in English: "The Hitchhiker's Guide to the Galaxy")*

H: "Could you please be nicer to me?"
C: "Why?"
H: "Because you never say please or anything like that."
C: "Saying please would cost unnecessary energy..."

Hmmm... I'm not good at these jokes; could someone help me?


Back to topic:
I don't think everything was rubbish or garbage: there were some nice ideas and discussions.
If you don't like the thread, or if you are super-intelligent computer maniacs, then simply don't look into this thread and leave us to our fun...


----------



## nrr (Jul 30, 2008)

Hollud said:


> There is a hint that he does harbour some form of knowledge, given that he has reinforced his argument with conclusive evidence. Unfortunately, judging from his mannerisms and the words used, it is apparent that he will not be likely to impart such knowledge to benefit others.
> 
> Teach? That's _nearly_ impossible.


I don't know what his take on this is, but I'm going to pull out the "it's an internet forum" card and just say that I don't care what people on the Internet generally think.  People on the Internet tend to think that I'm a crass asshole who has no concept of social graces mainly because if I see that you're an idiot, I'm going to call you out on it.  This earns me lots of complaints and rules infractions and things of the like, but I really don't particularly care.

Moreover, I do this same sort of thing during classroom and research group meeting discussions, and folks tend to appreciate it because they know I'm right most of the time; likewise, if I'm wrong, they appreciate that I'm very quick to admit it.  The only difference is that we're all paid (or are paying) for advancing academics in this instance, so we're going to make the best of it.

The same thing goes with trying to teach people something by means of "an internet forum."  You're going to get it how I want to give it to you, and if I'm feeling particularly snarky that day, well, that's just the luck of the draw, I suppose.  You have no right to the knowledge I have, so why the hell should I cater to you and your ad hoc system of amateur scholarship?


----------



## SpaderG (Jul 30, 2008)

hmmm... I suspect that while we will (eventually) get machines to think and plan, they most likely will never have emotions (which sucks). As for machines taking over the world, all you have to do is pull the plug (that was NOT a Matrix ref). I'm working on a very similar project that involves getting a machine to work in an urban environment full of moving targets. (I.e. high school. Ugh.)


----------



## Xenofur (Jul 30, 2008)

people said:
			
		

> _fucking stupid and "clever" wordgames_


Go back to IRC. Oh wait, people of your mental stature probably find that to be very complex and confusing. Go back to MSN.


----------



## NekoFox08 (Jul 30, 2008)

Hollud said:


> That would not be advisable.
> 
> You see, he's already made his point, which he is entitled to, that the bulk of the thread is composed of ideas (read: nicer word for "garbage") that will not materialise given the quality of contributions from... well... contributors. Given that his statement promoted the idea that he was and would be understandably disgusted at the condition of the thread, it would therefore be expected that any participation whatsoever would do little to improve the status quo.
> 
> ...



no, I was seriously asking... lol, not trying to be a dick.

and btw, that's more than 1% xD


----------



## Alblaka (Jul 30, 2008)

NekoFox08 said:


> and btw, that's more than 1% xD



XD
That's right:
1% would be one post out of a hundred. But at least the first post is useful (it started the thread, no one can argue against that), and since we only have 60 posts, it's somewhere around 1.7%.

And then there are some posts which are non-garbage, too... ^^


----------



## Pi (Jul 30, 2008)

wow you guys sure showed me by interpreting my (qualified) figure of speech literally!!!


----------



## Alblaka (Jul 30, 2008)

Pi said:


> wow you guys sure showed me by interpreting my (qualified) figure of speech literally!!!



bölp...
I didn't understand it, please explain...


----------



## Hollud (Jul 30, 2008)

nrr said:


> Moreover, I do this same sort of thing during classroom and research group meeting discussions, and folks tend to appreciate it because they know I'm right most of the time; likewise, if I'm wrong, they appreciate that I'm very quick to admit it.  The only difference is that we're all paid (or are paying) for advancing academics in this instance, so we're going to make the best of it.
> 
> The same thing goes with trying to teach people something by means of "an internet forum."  You're going to get it how I want to give it to you, and if I'm feeling particularly snarky that day, well, that's just the luck of the draw, I suppose.  You have no right to the knowledge I have, so why the hell should I cater to you and your ad hoc system of amateur scholarship?





Your point that people have no right to know what you know is true and valid. The nature of my job, though simple, heralds many layers of restricted classification. One could say the lives of people can be affected through the documents that pass through on my desk. Not a lot of people get this sort of opportunity, and I'm deeply grateful for having it.

Yet still, I use what I know and have learned, and do my utmost to teach others not to repeat the mistakes that others have made. They don't have access to what I know, so I try to make sure they understand without revealing the delicate details. Because I know that, inevitably, people will start asking and questioning and, being human, will begin to hop from assumption to assumption.

Yes, you are entitled to 'attack' and shoot down anything that _you know_ is in conflict with what you know. But people are still going to ask and question over and over again. And they keep on assuming because they don't know which part of the story is right and which part is wrong.

The Internet is here as a form of medium for communication. A forum is here as a form of gathering place for members to exchange ideas and information. It's like the digital version of a marketplace. You don't come to a marketplace just to stick your finger in someone's face just because he/she asked a trivial question. In traditional times, the marketplace was the focal point of trade. Merchants from all over the country would gather, and it was here where not just wares would be traded, but knowledge.

If you feel that your knowledge should be kept to yourself, then yes please, keep it to yourself. But if there's nothing beneficial to be contributed to the community at large, then perhaps what you said is true. You are making the best of "advancing academics" in this environment, which in this case happens to be a *free* and open community.

Sorry, but some of us just aren't cut out to be mathematicians, physicists, or members of the other scientific professions, far too complex for us to understand, that keep the world in constant development. And if you are (or believe you are) of that calibre, then that's well and good. Some of us can only dream of reaching a status as high as yours.

Borrowed from Star Trek: First Contact, there is a line which I am very fond of and I think people should seriously consider. It goes,

_*"The acquisition of wealth is no longer the driving force in our lives. We work to better ourselves, and the rest of humanity."*_



[EDIT]

And just to keep things on-topic: that phrase right there should be the fundamental basis for anyone who wants to consider building some form of artificial intelligence. A computer system is only worth its cost if its core programming is designed to put the welfare of humans as its primary goal. There's no point designing something out of the movies, because everything there is blown out of proportion.

Start simple. And work from there.
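To put "start simple" in concrete terms: the bots Project_X described earlier in the thread could plausibly be little more than pattern-matched canned replies, in the spirit of the classic ELIZA program. Here is a minimal sketch of that idea; the rule table, responses, and function names are purely illustrative, not taken from any real bot.

```python
import re

# A tiny ELIZA-style rule table: (compiled pattern, response template).
# Patterns and replies are invented for illustration only.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi want (.+)", re.I), "What would it mean to you to get {0}?"),
    (re.compile(r"\bbot\b", re.I), "What makes you think I am a bot?"),
]

def respond(text: str) -> str:
    """Return the first matching canned response, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Fill captured groups into the reply template.
            return template.format(*match.groups())
    return "Tell me more."

print(respond("Are you a bot?"))  # -> "What makes you think I am a bot?"
```

Even this crude rule-matching can sustain a short conversation, which is why players in the freeze tag server took a while to notice; working from such a base toward real planning and learning is the "work from there" part.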


----------



## Xenofur (Jul 31, 2008)

Alblaka said:


> bÃ¶lp...
> Didn't understand it, pls explain...


He said "nearly" and POINTED IT OUT. It's just a figure of speech used in everyday language.

Anyone here who took the 1% literally is an absolute idiot.


----------



## nrr (Jul 31, 2008)

Hollud said:
			
		

> Yes, you have your entitlement to 'attack' and shoot down anything that _you know_ are in conflict with what you know. But people are still going to ask and question over and over again. And they keep on assuming and assuming because they don't know just which part of the story is right and which part is wrong.


Yes, any scholar should openly question anything he/she hears or reads.  Moreover, any scholar should also _pay attention to_ and _actively listen to/read_ anything he/she hears or reads.

And assumptions aren't necessarily bad!  In mathematics, we use assumptions to form the basis of proof, and without that mechanism, the process of proof would be a completely moot point.



			
				Hollud said:
			
		

> communist drek about the Internet being a marketplace, etc.


No.

If I publish a scholarly paper, it will be professional, funded through grants and donations, and detailed.  This means that it won't be snarky or otherwise offensive.  To boot, it will be on the Internet.

On the other hand, my informal drivel here on "internet forums" is sometimes snarky, sarcastic, condescending, or anything equally socially repulsive.  This is mainly because I'm communicating with folks who have no idea of the various chains of command that are involved in publishing polished formal scientific papers, and they all have their own perception of what a particular field is that may be either incomplete or completely inaccurate.  In most cases, said folks tend to be unwilling to admit that their idea of said particular field may be incomplete or completely inaccurate.

This frustrates me and makes me turn into a grumpy curmudgeon.  This is the typical "Fine, if you don't want to play by the rules, I'll pack up my toys and go home." routine, except I've had one too many beers, and I'm beginning to wave my cane around like a crazy idiot and yell out my research in the loudest, rudest way possible.

You read way too much into the economic point I'd made.



			
				Hollud said:
			
		

> If you feel that your knowledge should be kept to yourself, then yes please, keep it to yourself. But if there's nothing beneficial to be contributed to the community at large, then perhaps what you said is true. You are making the best of "advancing academics" in this environment, which in this case happens to be a *free* and open community.


My point was that you're expecting the moon (i.e., you're expecting those of us with the credentials to deliver a polished presentation on the subject) in return for nothing.  You get what you pay for.

Free and open community or not, that doesn't fly.



			
				Hollud said:
			
		

> Sorry, but some of us just aren't the material of mathematicians, physicists and other scientific professions that are far too complex for us to understand that keeps the world in constant development of itself. And if you are (or believe you are) of that calibre, then that's well and good. Some of us just can only dream of reaching a status as high as yours.


It's easy.  Learn simple critical thinking, stop making an ad hoc mess out of scholarship, and polish up your understanding of the scientific method.

Also, yes, I'm currently collaborating with some other scholars on some small research projects.  We're hoping to have preprints out later this year.  It's mostly digital signal processing, image processing, linear algebra.  Boring stuff to you guys.



			
				Hollud said:
			
		

> Start simple. And work from there.


*[size=+2]Quoted for fucking truth.[/size]*


----------



## Hollud (Jul 31, 2008)

Duly noted.

I'm no scientist, so I'm not adequately qualified to continue this discussion, which has twisted beyond recognition of its original topic. I don't have the benefit of your experience, intellectual development, and expertise in this subject. It's apparent that my thinking processes are flawed and carry a degree of cognitive bias. I'm not outspoken, and I'm not good at speaking out.

Moderators, this thread needs some reviewing. Please act upon it as you see fit.


----------



## Xenofur (Jul 31, 2008)

Hollud said:


> I'm no scientist, so I'm imperfectly and inadequately qualified to continue this discussion.


Considering this thread is about AIs, a project that humanity has been working on in some form or another pretty much FOREVER, yes. That's absolutely true.

Fly, Maikäfer, fly.


----------



## nrr (Jul 31, 2008)

Xenofur said:


> He said "nearly" and POINTED IT OUT. It's just a figure of speech used in everyday language.
> 
> Anyone here who took the 1% literally is an absolute idiot.


German in my FAF? More likely than I'd like to believe!



Hollud said:


> Duly noted.


I appreciate it.


----------



## Arc (Jul 31, 2008)

nrr said:


> German in my FAF? More likely than I'd like to believe!


Indeed, one rarely sees it, but it exists.
And no, I have nothing to say on this topic;
I'm only posting here because I want a child from you, nrr.


----------



## Aurali (Jul 31, 2008)

Dude. Seriously... am I the only one who gets stalked by the mods?

To keep from getting any more points for thread derailment, all I will say is: tell us what you know, then.


----------



## Xenofur (Jul 31, 2008)

Eli said:


> Dude. Seriously.. am I the only one who gets stalked by the mods?
> 
> To keep from getting any more points for thread derailment. All I will say is tell us what you know then.


Hello you lovely fuzzy entropy generator. I missed you and your unpredictable results.


----------



## nrr (Jul 31, 2008)

Eli said:


> am I the only one who gets stalked by the mods?


Yes.


----------



## Pi (Jul 31, 2008)

Eli said:


> Dude. Seriously.. am I the only one who gets stalked by the mods?
> 
> To keep from getting any more points for thread derailment. All I will say is tell us what you know then.



To avoid getting more citations for thread derailment, you post something that's 66% fluff about thread derailment, and the rest some passive-aggressive cryptic trash?


----------



## Aurali (Jul 31, 2008)

No, seriously. If 99% of this is amateur BS, then bring it out of here. Teach me.


----------



## Eevee (Jul 31, 2008)

this thread has pretty much constantly gone downhill since my first reply


----------



## Pi (Jul 31, 2008)

Eli said:


> .. No seriously. If 99% of this is amateur BS. then bring it out of here. Teach me.



Okay. More specifically, the last paragraph.


----------



## Aurali (Jul 31, 2008)

This is a discussion. I wanna discuss. I gave my examples. I was told I was wrong. I wanna know what's wrong.



Eevee said:


> this thread has pretty much constantly gone downhill since my first reply


Eevee. No one really wants to hear "You are all idiots." I'm trying to keep a topic up. I shared what I knew. I want to expand what I know, not get told no because someone with a superiority complex says so.

Now seriously.. Can we discuss this. Or can this get locked.


----------



## Alblaka (Jul 31, 2008)

-.-

Yep, you've stuffed it...
With this much "everyone who posted here is an amateur" and so on, you've driven me out of this thread. Bye.


----------

