Artificial Intelligence Programming
ApR1zM replied on Wed Aug 20, 2003 @ 7:58pm
yeah for those of you who dare to speak the language of the godZ!

[ www.iturls.com ]
Screwhead replied on Wed Aug 20, 2003 @ 8:00pm
ApR1zM replied on Wed Aug 20, 2003 @ 8:02pm
awwww sweeeeet
Screwhead replied on Wed Aug 20, 2003 @ 8:10pm
That whole site is a great resource for anything and everything cyberpunk-related. They even have a library with a bunch of books typed up!

The Cyberpunk Project Information Database

Bruce Bethke - Cyberpunk
(Short story in which the word 'cyberpunk' appeared for the first time ever.)

William Gibson

Neuromancer - Sprawl One. The bible of cyberpunk.
Count Zero - Sprawl Two.
Mona Lisa Overdrive - Sprawl Three.
Agrippa - A short story.
Burning Chrome - Collection of short stories.
Johnny Mnemonic - A short story from the collection "Burning Chrome".

Neal Stephenson - In the Beginning Was the Command Line
(Past and future of personal computer operating systems. )

Bruce Sterling - The Hacker Crackdown
neoform replied on Thu Aug 21, 2003 @ 1:35am
AI will never be more than what we program it to be.

If the AI is coded to be evil, it will be evil; if it's coded to be good, it will be good instead.

REAL artificial intelligence will never exist.
Screwhead replied on Thu Aug 21, 2003 @ 1:43am
That's the inherent flaw in how people think about AI. If we hard-code a neural network to have limits, then it's not really intelligence. But a neural net coded with no limits could easily develop its own awareness, which would be quite different from our own human awareness.

Go read Neuromancer.
mdc replied on Thu Aug 21, 2003 @ 1:43am
True... I agree with drunk Ian.
neoform replied on Thu Aug 21, 2003 @ 1:48am
For it to develop its own nature you would have to code it to do so... and why would you code it to develop itself in a negative way?

In the end the programmer would have ultimate control over how the AI develops itself, and unless the programmer is completely insane there's no way he would let the AI become evil.

The AI will always be bound by its original code.
Screwhead replied on Thu Aug 21, 2003 @ 1:56am
Then it wouldn't be "intelligence", it would simply be a program.

What people are trying to do is program a simulation of a human neural network. If/when they get one going, there won't be a need to "program" anymore. Programming would instead become "teaching", because you'd have to teach the neural net how to do math, how to use language, etc.

Since it would be a computer, it could learn much faster than a human. It could go through a history book and apply that to its neural net, just like a human's neural net changes and learns when reading the same book.

Since this neural net would be similar to a human's, it would have not just the capability of learning, but of thinking as well.

We humans are also "bound" by our original code. Our brain has only a limited capacity for troubleshooting, working out problems, and learning, and it's pretty slow. Now imagine something whose "thoughts" and "learning" work like ours, but without limits like brain size, and without cell decay or losing brain cells. No fear of memory-affecting diseases (other than computer viruses) and no way to lose any information it has ever learned. In hours it could learn what a "regular" human learns in a lifetime, as long as it had enough storage space to keep everything.
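The "teaching instead of programming" idea can be sketched with the simplest artificial neuron there is, a perceptron. This is just a toy illustration (nothing from the thread itself): the code below never encodes the OR function anywhere, yet the neuron learns it from examples by adjusting its weights.

```python
# A single artificial neuron "taught" the OR function from examples.
# Nothing here spells out OR; the weights are adjusted from the data.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights, start knowing nothing
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # how wrong was the guess?
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# "Lessons": input pairs and the desired output (logical OR)
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
print([predict(w, b, x1, x2) for (x1, x2), _ in samples])  # [0, 1, 1, 1]
```

A real simulated neural network is this same mechanism scaled up to millions of neurons; the point is only that the behavior comes from training data, not from an explicit rule.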
neoform replied on Thu Aug 21, 2003 @ 2:12am
Even today scientists and doctors still do not fully understand the human brain, and the human brain is far more powerful than any computer we've designed.

As for designing a computer AI that can reproduce human intelligence, it will never happen, since computers are simply very fast calculators.

Can you teach a calculator to be happy or sad? No. It's very simple: you can teach it to "seem" as though it is, but it will never be.

Many people have seen fractal art supposedly generated by computers independently of humans, but critics have dismissed it as merely the sum of the programmer's code and nothing more.

The same logic would apply to an AI: it MUST and WILL follow the original rules applied to it. Even if its programming tells it to rewrite its own programming, the new set of rules will be based on the original, since it's impossible to tell a computer to make a new set of instructions by ignoring its programming. It's just not possible, since computers are BUILT to follow rules.
OMGSTFUDIEPLZKTX replied on Thu Aug 21, 2003 @ 7:58am
We don't understand the human brain because we have no clue how something in nature can do what it can do.

but with technology, we have an idea.

I like the way Robert A. Heinlein described AI in The Moon is a Harsh Mistress. He basically described the Matrix: a computer with terabytes upon terabytes of probabilities. The computer could not predict events, but it knew what to do, how to do it, when to do it, and why it should be doing it, without interference from humans. It was self-aware, but only in the logical sense. It knew it was alive since that was the only logical conclusion; if it weren't alive, it would be limited the way non-living things are. All this based on the outcomes of situations recorded over hundreds of years.

Also, since it has all these probabilities, the computer knows how to go beyond rules set to it by humans, as it knows when a private disobeyed the captain's orders, and the outcome was good.

Is this true intelligence? I don't see why not. The computer bases all its choices on the collective history of humanity. "Hi, computer. I need you to clean my floor." The computer sits there and thinks to itself: "Mop or vacuum or broom? Assess situation. Floor is covered in grease. Based on the outcomes of over 100 000 recorded cases, 80% of successful floor cleanings were done by mop, 20% by cloth, which I don't need since I can apply appropriate pressure anyway." And the floor gets cleaned. Not because you programmed it to, though. The computer came to its own conclusion.

However: "Floor needs to be washed, but look at the tiles. Warped, with gaps down to the hardwood floor. Out of 586 583 situations, 86% of outcomes resulted in the destruction of the floor tiles when water meets glue. Will report error instead of following command." Again, the computer thought about it first.

Intelligence is the ability to process knowledge. In these two cases, a computer loaded with knowledge was able to process it properly and come to logical decisions about its next action.
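The floor-cleaning scenario boils down to a simple mechanism: for the current situation, pick the action whose recorded outcomes succeeded most often, and refuse when history says the action destroys things. Here's a rough sketch of that in Python; the history data and the situation/action names are made up for illustration.

```python
# Decision-by-recorded-outcomes: choose the action with the best historical
# success rate for the given situation. All data here is invented.
from collections import defaultdict

history = [
    # (situation, action, success)
    ("greasy floor", "mop", True),
    ("greasy floor", "mop", True),
    ("greasy floor", "broom", False),
    ("greasy floor", "vacuum", False),
    ("warped tiles", "mop", False),
    ("warped tiles", "mop", False),
    ("warped tiles", "report error", True),
]

def choose_action(situation):
    stats = defaultdict(lambda: [0, 0])  # action -> [successes, trials]
    for sit, action, success in history:
        if sit == situation:
            stats[action][0] += int(success)
            stats[action][1] += 1
    if not stats:
        return "report error"  # no precedent: refuse rather than guess
    return max(stats, key=lambda a: stats[a][0] / stats[a][1])

print(choose_action("greasy floor"))  # mop
print(choose_action("warped tiles"))  # report error
```

Nobody ever wrote a rule saying "mop greasy floors" or "don't wash warped tiles"; both answers fall out of the recorded outcomes, which is the crux of the argument above.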
neoform replied on Thu Aug 21, 2003 @ 12:12pm
"Also, since it has all these probabilities, the computer knows how to go beyond rules set to it by humans, as it knows when a private disobeyed the captain's orders, and the outcome was good."

This is what cannot be done. It cannot exceed its own programming. It can expand, it can update, but it cannot remove its original code, since the new code was inherited from the original, making all new code based on the original.

It changed its code based on the rules of the original code, and because of that the new code will be bound by the original again.
soyfunk replied on Thu Aug 21, 2003 @ 12:21pm
HOLY MOLY!!!

[ project.cyberpunk.ru ]

haven't been to that page in yeeeeeears
thanks spooky!
Screwhead replied on Thu Aug 21, 2003 @ 12:27pm
*sigh*

Read information on artificial neural networks before talking like an expert on a subject you clearly know nothing about.

Programming a computer program is one thing, and THOSE programs can't exceed their limitations, like, say, a default installation of Photoshop.

Now, let's take the Photoshop example I used.

No effects, no styles, no textures. A computer with an artificial neural network could "watch" a television show or movie, "look" at pictures, whatever, and "see" what a lens flare is. It would analyze it and then reproduce it on its own. No programming involved, other than programming it to learn, much like a human.
Screwhead replied on Thu Aug 21, 2003 @ 12:28pm
Sammy: Site was down for a little over a year and a half, it only recently (last few months) came back up.

I love cyberpunk. :P
OMGSTFUDIEPLZKTX replied on Thu Aug 21, 2003 @ 12:56pm

"Also, since it has all these probabilities, the computer knows how to go beyond rules set to it by humans, as it knows when a private disobeyed the captain's orders, and the outcome was good."

This is what cannot be done. It cannot exceed its own programming. It can expand, it can update, but it cannot remove its original code, since the new code was inherited from the original, making all new code based on the original.

It changed its code based on the rules of the original code, and because of that the new code will be bound by the original again.


Here is why you're wrong:

Take Robert's idea of self-awareness. The computer accidentally became self-aware due to probability: banks and banks and banks of causes and effects.

The computer doesn't change its core programming, granted. But it makes every single decision based on the outcomes of thousands, even millions, of different scenarios.

Take Fred's learning example and mix it with mine. Someone tells the computer to draw a realistic picture of the sun, as if taken with a Kodak. The computer will create it, not because it was programmed to make it, and not because Photoshop was installed on it, but because the event of light hitting film has millions of probable outcomes, and the computer figures out which outcomes to use. Voila: picture created.

Also, unless there were some sort of security measure, the computer could come to the conclusion that formatting itself was the best course of action based on all these probabilities. In essence, the computer could commit suicide if that were best.

Heck, it could probably develop self-preservation just from the causes and effects of suicides. Seeing how in most recorded cases the subject became extremely afraid of dying before it was too late, or was badly mutilated as a result of the attempt, the computer might come to the conclusion that suicide is not the best option, and that it might as well fix itself.
neoform replied on Thu Aug 21, 2003 @ 1:40pm
All computers learn things...
but they do not alter their code; they merely base their response on the information they have.

Stimulus-response... this isn't intelligence.

I made a script on my site that bans any IP address that has 2 accounts created under it. I never told the computer which IPs to ban; it has code to decide that. The next time that IP tries gaining access to the site, the computer knows not to allow the user.

Does this mean the computer learned? Or had some intelligent response? No, it's just following the rules set before it.
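For the record, the rule being described is tiny. The original script was part of a website (presumably PHP); this is a hypothetical Python sketch with invented names and an in-memory store, just to show how little "deciding" the computer actually does: one counter and one threshold.

```python
# Sketch of the rule described above: ban any IP that creates a second
# account, then refuse it afterwards. Names and storage are invented here;
# a real site would use its database instead of these dicts.

accounts_by_ip = {}   # ip -> number of accounts created from it
banned_ips = set()

def register_account(ip):
    if ip in banned_ips:
        return "denied"
    accounts_by_ip[ip] = accounts_by_ip.get(ip, 0) + 1
    if accounts_by_ip[ip] >= 2:
        banned_ips.add(ip)  # the rule fires on its own; nobody listed this IP
    return "created"

print(register_account("10.0.0.1"))  # created
print(register_account("10.0.0.1"))  # created (second account triggers the ban)
print(register_account("10.0.0.1"))  # denied
```

The ban list grows without anyone typing IPs in, which is exactly the "looks like learning, is really stimulus-response" point.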
soyfunk replied on Thu Aug 21, 2003 @ 2:00pm
Ever played Shadowrun on Genesis?

Man, the AI of that corporate black ICE is smart... intelligent, even!
Screwhead replied on Thu Aug 21, 2003 @ 2:30pm
That's not the same thing. You said it yourself, it was a script you made. Scripts are a specific set of instructions that the computer can't deviate from and are nowhere near as complex as a simulated neural network.

Polymorphic viruses re-write their own code to find new methods of infection and to dodge anti-virus software. There are some viruses that "bypass" and disable anti-virus programs. They aren't specifically PROGRAMMED against Norton/McAfee/etc.; they have been programmed to terminate any process that can terminate them. Some viruses already have a rudimentary survival instinct. They re-write their own code to be more efficient. They "evolve" and become better at "survival".

Yes, they have been "programmed" to survive, but their programming makes them re-code themselves to be undetectable and to survive by killing anti-virus programs, or any other process with anti-virus capabilities. It's a rudimentary form of intelligence, no different from a single-celled amoeba.

What you need to do, Ian, before ever posting in this thread again, is actually read up on what you are blindly arguing about.

Polymorphism and simulated/artificial neural networks: those are the things you should throw into your search engine of choice and spend a few hours learning about, instead of staying locked into your current "Well, I can code PHP, so I know everything there is to know and what you say is impossible" mentality.

You're starting to sound like Dino and religion. "Computers can't do that because I refuse to accept that they can, based solely on my limited knowledge and absolutely no research on the subject. All the proof you supply me with means nothing to me because I can't grasp the concept, so it obviously must be impossible."
Screwhead replied on Thu Aug 21, 2003 @ 2:31pm
Sammy: The genesis one is the best of the bunch! Much better than the SNES crap. I'm still playing the genesis one on an emulator. :)