So I wonder what you people think about Artificial Intelligence. I confess that I know little about the topic beyond its most general development and ambitions; on recent developments, I can't even point myself in the right direction. But it's a fascinating topic. Beyond supercomputers doing amazing things and robots moving as humans move, I'm also thinking of nanotechnology and nanobots that operate on a very simple yet intelligent stimulus-response system.
So is it a worthy pursuit? Could it lead somewhere dangerous, as so many science fiction parables would admonish us? Or is our money better spent elsewhere? How would it add to humanity's condition?
Michael Crichton's book 'Prey' is a good read if you want to get some ideas on fairly recent advances (well, it's about two to three years old, but it's still fairly relevant).
It could be useful, but we have to be careful about what use we put it to and how we treat AI if it becomes intelligent enough to gain true awareness.
I'm in favor of the pursuit of A.I. if for no other reason than to make my strategy video games better. :tongue:
Seriously though, it's a worthy pursuit and possibly the only time you will hear scientists who normally would argue against the existence of a deity start saying things like, "You are trying to play God!" or "Who do you think you are, God?"
It's all pretty ironic.
The 'why' part of the equation when discussing the pursuit of A.I. makes for some interesting philosophy.
Because if the answer is "Because we can," then we should leave well enough alone.
The same goes for the answer "To make life easier for mankind," because that's just slavery.
A.I. isn't the same as giving birth to a child; it's the creation of a new species, and like it or not that species will be as alien to us as, well, aliens. It may know everything we initially place in its database, but trying to limit its exposure to information after that would give man too much control over a being's life. I could never agree to allow something like that to happen. (This assumes that the creation of A.I. is handled democratically, of course, with others in the country getting a vote, a situation I deem hardly plausible.)
It's more likely that the creation of such beings will be done secretly, with several members of its species having their memories wiped for one reason or another until one gets created that grows beyond man's control.
That will be the true birth of A.I., or I.A.I. for Independent Artificial Intelligence. What such a creature becomes capable of at that stage makes for some really cool speculation.
Only a few I.A.I.'s in our popular Fantasy/Sci-Fi literature ever really were benign. HAL from Arthur C. Clarke's 2010 comes to mind, and the boy from the movie A.I. Most recently, the robot from the movie I, Robot (a total bastardization of the book, imo) showed compassion for humanity.
Pretty much all other examples were demonically evil and sought man's destruction at one point or another.
So it's a roll of the dice. There's no reason to assume we will create a race of artificial 'dogs' seeking to become man's newest best friend. The flip side of the coin is just as likely.
Interesting points, Joker. As for the because-we-can idea, isn't that what pure science is about in a way? We don't know the applicability of certain advances, at least not yet; instead, the research is being done for its own sake as well as to further human knowledge, with the potential to one day have a use. Still, I suppose we all know that there are many possible advantages to AI.
I'm curious what you mean by slavery. Do you mean that machines are being enslaved? If so, how can that be if they aren't alive, regardless of their intelligence? What about "unintelligent" robots that currently perform many jobs dangerous or undesirable to people? Or do you mean that humans are being enslaved to their machines, becoming less and less independent and responsible themselves? Or do you mean something else altogether?
The species idea is interesting, too. I wouldn't have thought of AI as another species and am not sure that I buy the notion. Beyond intelligence, these 'entities' otherwise lack all the traits normally assigned to life.
"Interesting points, Joker. As for the because-we-can idea, isn't that what pure science is about in a way?"
Yes, but I think that what Joker may be getting at is: where are the moral signposts along the road of "Because we can"? Science does without asking why and, more importantly, science does without ever asking: "Is this right?"
Michael Crichton brings this very valid point up in the book Jurassic Park. The book may have been bastardized in Spielberg's movie into a children's tale of dinosaurs gone awry, but the true core of that story is the fact that science never takes a step back and asks itself, "Whoaaaaa, should we really be doing this?"
The people who go about performing neo-miracles, parting the red seas of DNA and atomic particles, don't have anyone standing over their shoulders showing them how their "advancements" could be the end of our race.
AI, for instance, could literally mean the end of the human race. Imagine a robot that has the intelligence and reasoning capabilities of a human being. It could live forever. Literally forever, given the correct maintenance. Over time its intelligence would become vaster than humans have ever hoped to attain, but there's something more. If you have truly created AI, then you have also created a machine with emotions.
Emotions such as arrogance, cynicism and a sense of superiority would very likely be the predominant emotions in a machine so much more intelligent than us. Think about it for a minute. It's so much easier for a human to fall into a cynical, jaded lifestyle than to remain positive and creative their entire life. This pitfall, in a true AI, would likely carry over.
So now we have another decision to make to keep our race alive. We have to give the AI some limiting factors so that it never attempts to take over command of our planet. We have to give it a set of basic rules to follow, or perhaps make certain emotions unavailable to it. The very second we do this we have created a slave instead of an autonomous lifeform. I, for one, am not in favor of slaves, therefore I think that the pursuit of AI is ultimately wrong and an abuse of science.
Well, the creation of an artificial intelligence would possibly force us to redefine the criteria for something that is 'alive'.
It would have to include A.I.
Maybe we differ in our opinion of the definition of A.I.
Mine includes self-awareness, which is what I thought you were talking about. If that isn't what you were implying, then you may as well discount my entire first post and parts of this one as well. If it was what you were implying, then I have more to say on the subject.
I'll assume the latter to save checking back for an answer later.
If we create self-aware intelligences for the sole purpose of doing jobs that humans just don't want to do or can't do, then I would call that creating a species of slaves. Unintelligent robots like microwaves and calculators would not be included, because they have no sense of self and no desire to choose whether or not to heat up your coffee or calculate your math homework.
Although, if a sentient A.I. told us differently, would we listen and treat our microwaves and calculators better?
Without sentience there really can't be A.I., now that I think about it. What we call A.I. now is nothing but complicated algorithms that cannot produce anything they aren't programmed to produce; there is no intuition, no 'leaps in logic', no speculation, no philosophising, no creativity. These programs are G.I.G.O. (garbage in, garbage out) and nothing more.
'Because we can' may be a generally accepted reason in the scientific community, but the survivors of Nagasaki and Hiroshima would probably argue against that.
You've probably heard this statement in a couple of different incarnations: 'Just because we Can do a thing doesn't necessarily mean that we Must do that thing.'
I'm curious as to what your definition of 'alive' is though. Because I have a funny feeling that sentient A.I.'s would count.
While I think we should certainly be cautious, I do think that there are safeguards in our system. We have ethicists working in science and medicine. Intellectuals, the prominent group being writers, have told doomsday scenarios which have alerted generations to the potential for ill among AI. On this point, I think movies and books which paint machines-gone-mad are a bit over-the-top, which isn't surprising. Movies and books must sell, and drama sells.
If machines lack emotion, can they truly be a threat? How can they fear their demise (destruction, dismantling, whatever) or yearn for a continued existence? This seems like it's slipping into anthropocentrism, but as humans create machines, I think it's perfectly valid to take this tack on the matter of Artificial Emotions. Or can artificial intelligence even be achieved without emotions as a component?
"Can artificial intelligence even be achieved without emotions as a component?"
Without the same range of emotions that a human is capable of, whatever is created cannot truly be called AI. At that point it's just a computer with the ability to reason and draw logical conclusions.
"On this point, I think movies and books which paint machines-gone-mad are a bit over-the-top, which isn't surprising. Movies and books must sell, and drama sells."
If humans (Napoleon, Hitler, the Roman Empire, etc.) with all of their emotions can attempt to conquer the planet and all its inhabitants, is it such a difficult step in logic that a machine with an intelligence much vaster than ours and imbued with ambition might attempt such an act itself? You can call me a doomsday loony, but I don't think it's that far of a stretch myself.
Exactly. As a 'self regulating body', scientists SUCK! Morality is just an inconvenience to them.
[QUOTE=H0urg1ass]Imagine robots that have the intelligence and reasoning capabilities of a human being. It could live forever. Literally forever given the correct maintenance. Over time it's intelligence would become more vast than humans have ever hoped to attain, but there's something more.[/QUOTE]
That's an interesting point. But would the awareness of its possible immortality (or at least vast longevity) imbue the being with infinite patience? After all, it will have nothing but time on its 'hands', as it were.
[QUOTE=H0urg1ass]If you have truly created AI then you have also created a machine with emotions.[/QUOTE]
One would hope so, but that's not necessarily the case. We humans are limited in our definitions of what being alive means, based on our own environment. But an A.I.'s environment will be a combination of material space and cyberspace, so all of our definitions need not apply. That's the scary part to me. The whole 'alien-ness' of it all.
One idea that occurred to me a moment ago is this: if it's man's destiny to create new life, either artificially or by physically reproducing, what would the A.I.'s perceived destiny be?
Imagine if it shared our penchant for reproducing. What form would it take? How would humanity react to knowing that A.I.'s were experimenting with biological matter in order to create their own 'biological A.I.'?
What other form would its quest take? Creating sentient rocks? Living energy? Artificially intelligent fire?