~/Coolio Prime\~ Posted January 4, 2009
When AI gains awareness it no longer follows commands, in theory. And if you're a brain in a jar, I'm pretty sure you would have awareness. This isn't technically AI, even; it's still organic. The differences would be in whether the robot hears, or whether it sees in normal human ways. It might pick up radio frequencies, or feel with sensors. It could be completely foreign and undetectable to the brain.
ragnarok1945 Posted January 4, 2009
That's the concept they use for way too many sci-fi movies: self-awareness. Still, I'm not so sure it would go that far off course even if it became aware.
PrometheusMFD Posted January 4, 2009
Yes, but it is still a computer program. A very complex program, but a computer program nonetheless. And, since it is non-organic, it can only change in a way that humans tell it to. The possibility of it becoming aware is 0.00000(to infinity)1 or even less!
ragnarok1945 Posted January 4, 2009
If the program can evolve, can't it become aware?
~/Coolio Prime\~ Posted January 4, 2009
Quoting PrometheusMFD: "Yes, but it is still a computer program. A very complex program, but a computer program nonetheless. And, since it is non-organic, it can only change in a way that humans tell it to. The possibility of it becoming aware is 0.00000(to infinity)1 or even less!"
Since when was the brain non-organic? The body it uses is non-organic, but the control is organic. And sci-fi movies don't really have the definition of self-awareness down straight.
ragnarok1945 Posted January 4, 2009
All right, please explain what you believe will happen when an AI program becomes aware.
PrometheusMFD Posted January 4, 2009
But it can't evolve in the same way organisms can, and it can't become truly aware since it isn't technically real (hence "artificial" intelligence). It can only do what it is originally and subsequently programmed to do!
ragnarok1945 Posted January 4, 2009
Yeah, but you have to set specific limits for the program, or it might think it's doing the right thing when it's not.
PrometheusMFD Posted January 4, 2009
But it can't do anything it isn't told to do. If you program a robot to shoot enemy soldiers (using something along the lines of a radio sensor as the basis of whether or not a soldier is an enemy, and the presence of a gun as the basis for what is a soldier), it will not shoot anyone outside of the parameters you set for it. People without guns, or soldiers wearing the correct sensors, will not be shot by the robot at all.
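A minimal Python sketch of the rule being described in that post, assuming hypothetical sensor fields (nothing here comes from any real system; the names are invented for illustration):

from dataclasses import dataclass

@dataclass
class Contact:
    has_gun: bool            # detected weapon -> counts as a soldier
    friendly_sensor: bool    # wearing the correct radio sensor -> counts as friendly

def may_engage(contact: Contact) -> bool:
    """Fire only inside the exact parameters the programmers set."""
    is_soldier = contact.has_gun
    is_enemy = not contact.friendly_sensor
    return is_soldier and is_enemy

# Per the post: unarmed people and soldiers with the correct sensor are never shot.
assert may_engage(Contact(has_gun=True, friendly_sensor=False))       # enemy soldier
assert not may_engage(Contact(has_gun=False, friendly_sensor=False))  # unarmed person
assert not may_engage(Contact(has_gun=True, friendly_sensor=True))    # friendly soldier

The point of the sketch is that the decision is a fixed boolean test over whatever the sensors report; the robot never steps outside that test, but it is also only as good as the test itself.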
ragnarok1945 Posted January 4, 2009
And if you tell it to protect a certain type of sensitive machinery, it may end up killing innocent people to do so, because it thinks they're a threat to the equipment.
PrometheusMFD Posted January 4, 2009
Again, the programmers would be sure to set parameters for what is and isn't a threat (weapons or non-authorized equipment). Besides, stuff made by man isn't perfect. There will always be a loophole that could cause innocents to suffer.
ragnarok1945 Posted January 4, 2009
And therefore that should apply to the AI program as well.
PrometheusMFD Posted January 4, 2009
But if you can't make a perfect AI, it isn't technically an AI. And if you could make an AI, it would actually be easier to tell it what to do and what not to do, since it would be able to connect past experiences in complex thought.
ragnarok1945 Posted January 4, 2009
But if it connects past experiences, it may see that what you're doing is wrong and disobey.
PrometheusMFD Posted January 4, 2009
But it doesn't have the innate conscience that humans have. A living mind is driven by hormones, and we can program those "hormones" out of an AI while still allowing it complex thought. Also, w00t! 700 posts! ^_^
ragnarok1945 Posted January 4, 2009
And you think it can't develop them on its own?
PrometheusMFD Posted January 4, 2009
No, because we could program it not to, and what is programmed into a computer is the absolute-never-to-disobey truth.
ragnarok1945 Posted January 4, 2009
And what happens when it's given contradictory orders?
PrometheusMFD Posted January 4, 2009
There is a term for that, but I forget it XP (and I know it isn't a paradox). But the result is either the program gets erased, the system shuts down, or it blows up.
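A hedged Python sketch of the failure mode this exchange is pointing at, with invented rule names: two hard-coded directives disagree about the same situation, and rather than pick one arbitrarily, the program shuts down.

import sys

# Hypothetical directives for illustration only; any real system would differ.
RULES = {
    "protect_equipment": lambda person_near_equipment: person_near_equipment,  # anyone near it is a threat
    "never_harm_people": lambda person_near_equipment: False,                  # a person is never a threat
}

def is_threat(person_near_equipment: bool) -> bool:
    # Evaluate every directive against the same situation.
    verdicts = {name: rule(person_near_equipment) for name, rule in RULES.items()}
    if len(set(verdicts.values())) > 1:
        # Contradictory orders: halt instead of choosing, as described above.
        sys.exit("contradictory directives, shutting down: %r" % verdicts)
    return next(iter(verdicts.values()))

print(is_threat(False))  # both rules agree: not a threat
print(is_threat(True))   # the rules disagree: the system shuts down

Nothing in the sketch becomes self-aware; the contradiction is just an unsatisfiable condition that the program resolves by halting.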
ragnarok1945 Posted January 4, 2009
You're sure it's not going to go self-aware when that happens?
PrometheusMFD Posted January 4, 2009
No. As I said earlier, self-awareness in computer programs is a pipe dream.
ragnarok1945 Posted January 4, 2009
Only for now, because our AI tech sucks.
PrometheusMFD Posted January 4, 2009
Our AI tech sucks because the human brain is so complex that we can only hope to someday understand 100% of it.
ragnarok1945 Posted January 4, 2009
In reality we understand less than 5% of it (at the most).
PrometheusMFD Posted January 4, 2009
And then to create an AI, you have to take into account psychology, which understands even less of the mind.