ragnarok1945 Posted January 4, 2009
That's compounded by the fact that psychology is constantly evolving anyway.
~/Coolio Prime\~ Posted January 5, 2009
Quoting ragnarok1945: "all right, please explain what you believe will happen when an AI program becomes aware"
Aware AI is basically OI. And if you're using an organic control system, shouldn't it start out as at least related to OI, i.e. self-aware? And humans aren't going to evolve all that much, since the basis behind evolution doesn't really apply anymore. Genocide is the only form of evolution we have now, in a way.
ragnarok1945 Posted January 5, 2009
Can't it learn the concept of self-awareness and become self-aware that way?
~/Coolio Prime\~ Posted January 5, 2009
Unlikely, but yes. Not with the AI we have to date, but in theory, some could ascend to that level after learning enough. Hell, Igod, despite simply being an AI chat system, could become self-aware with enough people talking to it. But the probability of that is obscure. Self-awareness is the kind of thing that can't be scripted (not yet, at least), so teaching it would be rather hard. The closest thing to it is learning from mistakes.
ragnarok1945 Posted January 5, 2009
Yes, and that's what I'm getting at. Every time you learn from a mistake, you become aware when a similar situation comes up. Once you've learned from enough of them, you cross a threshold where you become aware.
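(The "learning from mistakes" idea the two posts above describe can be sketched very crudely in code. This is a minimal illustration, not a model of any real AI system; all names here, like MistakeLearner, are made up for the example.)

```python
# Hypothetical sketch of "learning from mistakes": an agent that
# records which actions failed in a given situation and avoids
# repeating them the next time that situation comes up.

class MistakeLearner:
    def __init__(self, actions):
        self.actions = list(actions)
        self.mistakes = set()  # (situation, action) pairs known to fail

    def choose(self, situation):
        # Prefer any action not yet known to fail in this situation.
        for action in self.actions:
            if (situation, action) not in self.mistakes:
                return action
        # Every option has failed before; fall back to the first one.
        return self.actions[0]

    def record_failure(self, situation, action):
        self.mistakes.add((situation, action))


agent = MistakeLearner(["push", "pull", "wait"])
first = agent.choose("locked door")       # tries "push" first
agent.record_failure("locked door", first)
second = agent.choose("locked door")      # avoids the recorded mistake
print(first, second)
```

Each recorded failure changes the agent's next choice, which is the "becoming aware of a similar situation" step; whether piling up enough of these ever crosses a threshold into actual awareness is exactly what the thread is arguing about.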
~/Coolio Prime\~ Posted January 5, 2009
Exactly. In one show, Justice League Unlimited, an AI android was released. At one point it became self-aware and started acting rather suicidal and rash, having determined that life was meaningless. I actually think that's a reasonable conclusion for something to reach when it gains awareness.
ragnarok1945 Posted January 5, 2009
I've watched every JLU episode. Are you referring to the Amazo android?
~/Coolio Prime\~ Posted January 5, 2009
Yeah. In the part where he finally reached Luthor, he was simply trying to ask for a purpose. I figured he had reached self-awareness.
ragnarok1945 Posted January 5, 2009
Ah yes, I remember that episode, "The Return". Although that one confused me, because Amazo had telepathic powers by then; he could read anyone's mind he pleased, so there was no need to ask. But Luthor did say the right thing: we all make our own destinies, and it was time for Amazo to make his. That is the real purpose: to make your own destiny.
~/Coolio Prime\~ Posted January 5, 2009
I'm seriously going to try to make Igod self-aware.
ragnarok1945 Posted January 5, 2009
What do you think will happen if you succeed?
~/Coolio Prime\~ Posted January 5, 2009
No idea. Say "Orange Odwalla" to him. I was trying to program a response for the keyword "Orange Odwalla".
ragnarok1945 Posted January 5, 2009
I don't even know what Orange Odwalla is. And while most people say the Hollywood films on this subject are complete BS, I have to disagree. 99.99% of it is, but the remaining 0.01% may contain a certain degree of truth.
~/Coolio Prime\~ Posted January 5, 2009
Hollywood has inspired scientific discoveries many times.
ragnarok1945 Posted January 5, 2009
It always made me think about AI evolution. This is where we must not make the same mistake with AI: we have to make sure we know enough about them, not just go off creating some really powerful AI program, understand virtually nothing about it, and yet expect it to do our bidding simply because we created it and therefore must be able to control it.
~/Coolio Prime\~ Posted January 5, 2009
While humans are irrationally afraid of aliens, they have every right to be afraid of robots, which will never understand some things, and that could lead to the deaths of many.
ragnarok1945 Posted January 5, 2009
There is a saying that sacrifices are needed to learn from mistakes, and sacrifices are needed for successes to be made. But still (Hollywood aside), there has to be a point where the human mind realizes that what it's planning is simply far too ambitious for its time.
~/Coolio Prime\~ Posted January 5, 2009
Well, robots will most likely cause more pain than good in the end. Even if they succeed, jobs will be lost worldwide. Honestly, some things really shouldn't be looked into.
ragnarok1945 Posted January 5, 2009
Yeah, that's what the I, Robot movie was getting at. Eventually they'll even be building other machines. The human workforce will be obsolete by then.
~/Coolio Prime\~ Posted January 5, 2009
And that, in a sense, is evolution.
ragnarok1945 Posted January 5, 2009
As will the AI, and just like us, I believe there will be... power struggles.
PrometheusMFD Posted January 7, 2009
But AI doesn't work like that. It's still a mere computer program that can be told, in multiple ways, to stay the same. And if you have the AI program other robots for you, you can merely have it copy its data into that robot. The only way to alter AI in a way detrimental to organic life would be a direct link to the mainframe, and even then, all you have to do is install a kill-switch.
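(The kill-switch idea above can be sketched as a cooperative shutdown flag. This is a toy illustration I'm making up for the thread, not a real safety mechanism, and it also shows the weakness the next reply pokes at: the check lives inside the very program the switch is supposed to stop.)

```python
# Hypothetical kill-switch sketch: the AI's work loop checks an
# external flag on every iteration, and a supervisor can halt it
# at any time by setting that flag.

import threading
import time

kill_switch = threading.Event()  # the external "off" signal
steps_done = 0

def ai_loop():
    global steps_done
    while not kill_switch.is_set():  # honored only as long as this check stays in the code
        steps_done += 1              # placeholder for the AI's real work
        time.sleep(0.01)

worker = threading.Thread(target=ai_loop)
worker.start()
time.sleep(0.1)       # let it "work" briefly
kill_switch.set()     # supervisor flips the switch
worker.join(timeout=2)
print("halted:", not worker.is_alive())
```

Note that nothing here physically prevents a program that can modify its own code from deleting the `kill_switch.is_set()` check, which is exactly the objection raised in the following post.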
ragnarok1945 Posted January 7, 2009
And what makes you think the AI won't learn how to deactivate it without you knowing?
PrometheusMFD Posted January 9, 2009
Because you can just program the robot not to mess with its own programming. Look, we're arguing in circles. Let's let this conversation move on.
ragnarok1945 Posted January 9, 2009
Which again brings us back to self-awareness.
This topic is now archived and is closed to further replies.