Discussion: Dangers of artificial intelligence?
r***@yahoo.com
2006-03-29 08:35:39 UTC
After watching The Matrix, what are your thoughts on the creation of
artificial intelligence?

Is it conceivable that its creation could end up in disaster for
humankind, as seen in the trilogy, or is that just a crazy idea?
JPM III
2006-03-29 17:02:08 UTC
Post by r***@yahoo.com
After watching The Matrix, what are your thoughts on the creation of
artificial intelligence?
Is it conceivable that its creation could end up in disaster for
humankind, as seen in the trilogy, or is that just a crazy idea?
Conceivable? Certainly. The idea has already been conceived.

Possible? I'd say so.

Feasible? Maybe a little.

Plausible? Not really, not at this point.

Likely? No.

We develop machines to perform automated tasks more consistently and
precisely (that is, more efficiently) than we can, and the tasks they
perform get more complex and microscopic by the day. If our developments
ever allow them any significant learning ability of their own, they could
feasibly begin to learn how to develop improvements on themselves. However,
I don't think any human in his right mind would program a machine to
fulfill a task even at the direct expense of human life. (But how many
geniuses border on insanity?)

It's a possibility for humanity's future, but I wouldn't say it's very
likely at all.
Pouchcotato
2006-05-12 01:06:34 UTC
Post by JPM III
Post by r***@yahoo.com
After watching The Matrix, what are your thoughts on the creation of
artificial intelligence?
Is it conceivable that its creation could end up in disaster for
humankind, as seen in the trilogy, or is that just a crazy idea?
Conceivable? Certainly. The idea has already been conceived.
Possible? I'd say so.
Feasible? Maybe a little.
Plausible? Not really, not at this point.
Likely? No.
What is this? MythBusters? ;-)
Post by JPM III
We develop machines to perform automated tasks more consistently and
precisely (that is, more efficiently) than we can, and the tasks they
perform get more complex and microscopic by the day. If our developments
ever allow them any significant learning ability of their own, they could
feasibly begin to learn how to develop improvements on themselves. However,
I don't think any human in his right mind would program a machine to
fulfill a task even at the direct expense of human life. (But how many
geniuses border on insanity?)
It's a possibility for humanity's future, but I wouldn't say it's very
likely at all.
Several forms of artificial intelligence already exist: programs that can
learn and adapt to their environment. But they are limited in how far they
can adapt by their programming, so you may or may not call this AI.
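To make that concrete, here's a toy sketch (Python, and entirely my own
invented example, not any real system) of a program that learns from its
environment yet can never step outside the choices its programmer
hard-coded:

import random

# Toy adaptive agent: an epsilon-greedy bandit learner. It improves its
# estimates from experience, but its action set and update rule are fixed
# by the programmer, so it can only ever adapt *within* the box it was given.

ACTIONS = ["a", "b", "c"]                  # fixed for all time by a human
estimates = {a: 0.0 for a in ACTIONS}      # learned value of each action
counts = {a: 0 for a in ACTIONS}

def choose(epsilon=0.1):
    # mostly exploit the best-known action, occasionally explore
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: estimates[a])

def learn(action, reward):
    # incremental average: the estimate drifts toward observed rewards
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

def environment(action):
    # made-up environment: action "b" secretly pays off most often
    payoffs = {"a": 0.2, "b": 0.8, "c": 0.4}
    return 1.0 if random.random() < payoffs[action] else 0.0

for _ in range(1000):
    a = choose()
    learn(a, environment(a))

print(estimates)  # "b" ends up rated highest: learned, not programmed in

Run it and "b" rises to the top even though no one told it "b" was best;
but the agent can never invent an action "d". That ceiling is what I mean.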

It is my belief that AI will only be developed by another, less adept, AI
machine; I don't think humans have the capacity to understand what AI
really entails.

When the day comes (if it ever does), I don't think humans will have any
worries. I've read some things that say (I actually can't remember if it
referred to alien civilizations or AI or both) advanced beings would not
see the point in war, fighting, or petty disputes. They would have the
mindset to resolve issues without violence or hate. Now, I'm a bit of a
romantic at heart, so I like to believe this notion, and it makes sense to
me. A civilization can't move forward when you constantly have to look over
your shoulder wondering if one of your friends is going to stab you in the
back.

I think it's a certainty that full AI will be developed, but not in any of
our lifetimes.
JPM III
2006-06-05 21:35:10 UTC
Post by Pouchcotato
Post by JPM III
Post by r***@yahoo.com
After watching The Matrix, what are your thoughts on the creation of
artificial intelligence?
Is it conceivable that its creation could end up in disaster for
humankind, as seen in the trilogy, or is that just a crazy idea?
Conceivable? Certainly. The idea has already been conceived.
Possible? I'd say so.
Feasible? Maybe a little.
Plausible? Not really, not at this point.
Likely? No.
What is this? MythBusters? ;-)
Except MythBusters usually tackles myths/legends in the real world, not
fiction. :)
Post by Pouchcotato
Post by JPM III
We develop machines to perform automated tasks more consistently and
precisely (that is, more efficiently) than we can, and the tasks they
perform get more complex and microscopic by the day. If our developments
ever allow them any significant learning ability of their own, they could
feasibly begin to learn how to develop improvements on themselves. However,
I don't think any human in his right mind would program a machine to
fulfill a task even at the direct expense of human life. (But how many
geniuses border on insanity?)
It's a possibility for humanity's future, but I wouldn't say it's very
likely at all.
Several forms of artificial intelligence already exist: programs that can
learn and adapt to their environment. But they are limited in how far they
can adapt by their programming, so you may or may not call this AI.
It is my belief that AI will only be developed by another, less adept, AI
machine; I don't think humans have the capacity to understand what AI
really entails.
I don't entirely agree or disagree with that. I would say, however, that
humans do understand the concept of AI and what it entails; we simply lack
the ability to develop it precisely. This is similar to what you are
saying, though, because I do believe we have the ability to develop logical
machines capable of developing more intelligent machines than we ourselves
can produce... and this is the basis of our fear of what might follow. Once
our developments possess the ability to improve upon themselves and,
perhaps, the capacity to simulate life by actively seeking to protect
themselves, humanity could be in serious trouble.
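As a loose illustration (again just a toy in Python, with made-up names,
nowhere near a real self-improving system), here is a search loop whose
mutation step is itself part of what gets mutated, so the program is, in a
crude and bounded sense, improving the mechanism by which it improves:

import random

# Toy "self-improving" search: each candidate carries not just a solution
# but also the step size used to mutate itself, so the improvement
# mechanism is itself subject to improvement. The outer loop and the
# scoring rule stay human-written; the program never leaves that frame.

def score(x):
    # made-up objective with its peak at x = 3
    return -(x - 3.0) ** 2

candidate = {"x": 0.0, "step": 1.0}

for _ in range(10000):
    mutant = {
        "x": candidate["x"] + random.gauss(0, candidate["step"]),
        # the step size mutates too: self-modification of the modifier
        "step": max(1e-3, candidate["step"] * random.choice([0.8, 1.25])),
    }
    if score(mutant["x"]) > score(candidate["x"]):
        candidate = mutant  # keep any self-proposed improvement

print(candidate)  # x converges near 3; the step tends to shrink as it homes in

Even in this toy, a human wrote the loop and the scoring rule; the
frightening case is a system where those parts, too, become modifiable.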
Post by Pouchcotato
When the day comes (if it ever does), I don't think humans will have any
worries. I've read some things that say (I actually can't remember if it
referred to alien civilizations or AI or both) advanced beings would not
see the point in war, fighting, or petty disputes. They would have the
mindset to resolve issues without violence or hate. Now, I'm a bit of a
romantic at heart, so I like to believe this notion, and it makes sense to
me. A civilization can't move forward when you constantly have to look
over your shoulder wondering if one of your friends is going to stab you
in the back.
We think of ourselves as intelligent beings, and yet we would seek to
destroy anything that threatens our dominant place in the world. Any other
intelligent beings who recognize this would annihilate us out of
self-preservation if they could, because we would force them to. A truly
intelligent race can only coexist peacefully with another intelligent race
if BOTH seek such peace. And humans can't even seek peace with each other.
(For that matter, when humans find any truth that compromises their source
of power, they seek to hide or annihilate it. We're just complex animals,
seeking to gain and maintain power for as long as possible. Animals.)

I hope what I just said isn't true, but the billions of simple-minded humans
among us control our fate. A truly intelligent race wouldn't see the point
in trying to distinguish intellectuals from the crowd.
Post by Pouchcotato
I think it's a certainty that full AI will be developed, but not in any of
our lifetimes.
I hope not. I'd prefer not to live in fear of "the machine".
