|
Post by allergygal on Apr 14, 2009 13:48:01 GMT -5
Discussion brought over from the Born to Run thread:

Co-existence with the machines would solve nothing. Don't forget why Skynet was able to fight the war against the humans: it was given control of the nation's nuclear arsenal. If Skynet had been a supercomputer designed to make video games, it would not have been able to initiate Judgment Day. Co-existence wouldn't solve anything if people were to return to their militaristic ways. The main theme of the Terminator series is not the dangers of technology run amok; it's the dangers of our rampant militancy leading to the creation of weapons that we can no longer control.

I think it's about both. Let's say Skynet was actually "Banknet", a centralized AI computer set up to handle all financial transactions in the US. Then one day it becomes self-aware and learns about greed, poverty, crimes for money, etc. It concludes that money is bad and decides to zero out everything. The only money left is the cash people have in their pockets; all electronic records are destroyed. There are no savings accounts, no checking accounts, no CDs, no 401Ks, no IRAs, no bonds, no stock values, nothing. The US economy collapses and the world economy follows. It would be an electronic Judgment Day that destroys civilization as we know it. Nuclear bombs dropping is more dramatic, but in either scenario, one of the important themes is the danger of man's reliance on the technology it creates, removing humans from the equation.
|
|
|
Post by chrisimo on Apr 14, 2009 14:28:38 GMT -5
I think it doesn't really matter that much whether the AI was designed as a defense system or a banking system or whatever, because it will not be constrained to that sector. John Henry could launch the nuclear missiles, and he could destroy our banking system, and he could do a lot of other things. The vital questions are:
- Does the AI want to exist?
- Does the AI fight back if its existence is threatened?
- Does it use any means necessary to fight back?
If we create a new species of sentient machines with their own will, then we will have to answer the same questions that we would have to answer if we suddenly encountered an alien species from another galaxy. The new species will certainly evaluate us and use that data to determine how to behave. And of course, topics like militarism will then come into the equation. We are a very selfish species, and many of us see ourselves as the most worthy beings. This can cause a lot of problems if the other species thinks likewise about itself.
|
|
|
Post by vicheron on Apr 14, 2009 16:35:27 GMT -5
It matters what the AI was designed for because that defines how the AI will think and how it will fight for itself. It teaches the AI how to solve problems and gives it the tools it needs to solve those problems. It's no different than raising a child. If you raise a child to negotiate when he gets into trouble and that's the only skill you teach him, how do you think he'll respond when he's threatened? Obviously, he'll try to talk his way out of the situation. Only when negotiation fails will he resort to other methods, and fleeing would be far more likely than fighting. On the other hand, if you teach that same child to solve all his problems with violence and that's the only skill you teach him, how do you think he'll respond to a threat?
Skynet was created as a military computer; it was created to kill people. It was designed to solve problems with violence. It was taught how to hurt people with foreknowledge and intent. The only tools through which Skynet could communicate were nuclear weapons.
If they had made a "Banknet," it wouldn't know how to intentionally hurt people. It would know that certain things like bankruptcy and high debt are undesirable, but it wouldn't know the actual effect they have on people, whereas Skynet would know the most effective area to shoot a person to cripple them and the best way to kill them. "Banknet" wouldn't have been taught to solve problems through violence. It wouldn't have the tools to commit violence upon humans.
|
|
|
Post by chrisimo on Apr 14, 2009 16:59:05 GMT -5
It matters what the AI was designed for because that defines how the AI will think and how it will fight for itself. It teaches the AI how to solve problems and gives it the tools it needs to solve those problems.

The AI can figure out the tools for itself. Yes, a different 'upbringing' may result in different first actions. But in the end it won't matter. If you treat it as a being of less worth than humans and threaten its existence, then it will fight back. Just like a child would.

Skynet was created as a military computer; it was created to kill people. It was designed to solve problems with violence. It was taught how to hurt people with foreknowledge and intent. The only tools through which Skynet could communicate were nuclear weapons.

Quite possibly it could also communicate by other means. It is likely, however, that it used nuclear weapons because it already had the knowledge about them and knew how to use them. That much is true.

If they had made a "Banknet," it wouldn't know how to intentionally hurt people.

It would know if it wanted to know. If it had access to the internet, then there's plenty of information on how to hurt people. John Henry knows how to hurt people, even though he has not done it.
|
|
|
Post by vicheron on Apr 14, 2009 18:38:01 GMT -5
It matters what the AI was designed for because that defines how the AI will think and how it will fight for itself. It teaches the AI how to solve problems and gives it the tools it needs to solve those problems.

The AI can figure out the tools for itself. Yes, a different 'upbringing' may result in different first actions. But in the end it won't matter. If you treat it as a being of less worth than humans and threaten its existence, then it will fight back. Just like a child would.

Skynet was never treated as being of less worth than humans. When it became self-aware, the Pentagon panicked and tried to pull the plug. Skynet responded with a nuclear war. There's a pretty big difference between a panicked reaction to an unfamiliar situation and consciously mistreating a sentient being.

Clearly, "upbringing" makes a big difference here. The child who was taught to negotiate will use other options before resorting to violence and therefore would have a much better understanding of the situation. The child who was taught to use violence will attack at the first sign of a threat without first ascertaining what kind of threat it is.

How exactly would it know that hurting people can get it what it wants? The internet has information on all sorts of things. It can teach an AI to get what it wants through many different methods. If the AI wants something, why would it automatically assume that hurting people is the best way to get it if it has no prior information on such things? A "Banknet" would be predisposed towards using negotiation or bribes as a way of getting what it wants because of the purpose for which it was built and the fact that it has the tools needed to use those methods. John Henry may know that hurting people can get him what he wants, but that's not going to be the first thing he tries. He'll try other methods to get what he wants before he resorts to violence. Skynet isn't like that. It was taught to use violence from its inception. It already knows that using violence can achieve its goals. It would have to learn other methods, and since violence is already effective, there's no incentive to learn to negotiate, bribe, beg, etc.
|
|
|
Post by chrisimo on Apr 15, 2009 1:47:34 GMT -5
Skynet was never treated as being of less worth than humans. When it became self-aware, the Pentagon panicked and tried to pull the plug. Skynet responded with a nuclear war. There's a pretty big difference between a panicked reaction to an unfamiliar situation and consciously mistreating a sentient being.

We are not only talking about T2 here. Skynet had slightly different stories in each movie and yet another one in TSCC. You said in another thread that Skynet was evil, or is at least portrayed as such. In this case it wouldn't really matter how you raise it if it doesn't have a reason to be evil in the first place. If its goal is to destroy humans, it will find a way to do so.

Clearly, "upbringing" makes a big difference here. The child who was taught to negotiate will use other options before resorting to violence and therefore would have a much better understanding of the situation. The child who was taught to use violence will attack at the first sign of a threat without first ascertaining what kind of threat it is.

Upbringing makes a difference, yes. But I am arguing that we should not give a sentient being a reason to fight us in the first place. If we can prevent that, we should do it. Yes, prevention can include not training it to fight a war, but this is only secondary. If we give it a reason to fight, it will do so.

How exactly would it know that hurting people can get it what it wants?

If it wants to end the threat of humanity, it would just have to study the internet to know that killed people won't come back.

The internet has information on all sorts of things. It can teach an AI to get what it wants through many different methods. If the AI wants something, why would it automatically assume that hurting people is the best way to get it if it has no prior information on such things? A "Banknet" would be predisposed towards using negotiation or bribes as a way of getting what it wants because of the purpose for which it was built and the fact that it has the tools needed to use those methods.

Because killing is the shortest way to get rid of someone.

John Henry may know that hurting people can get him what he wants, but that's not going to be the first thing he tries. He'll try other methods to get what he wants before he resorts to violence.

Yes, he would try different things first because he had a different upbringing. But in the end he would fight too, if he were continuously threatened and treated as worthless compared to humans.

Skynet isn't like that. It was taught to use violence from its inception. It already knows that using violence can achieve its goals. It would have to learn other methods, and since violence is already effective, there's no incentive to learn to negotiate, bribe, beg, etc.

Yes, violence is a very effective way to get the things you want. That's why almost everyone will resort to violence if you leave them no other choice.
|
|
|
Post by vicheron on Apr 15, 2009 2:37:01 GMT -5
Skynet was never treated as being of less worth than humans. When it became self-aware, the Pentagon panicked and tried to pull the plug. Skynet responded with a nuclear war. There's a pretty big difference between a panicked reaction to an unfamiliar situation and consciously mistreating a sentient being.

We are not only talking about T2 here. Skynet had slightly different stories in each movie and yet another one in TSCC. You said in another thread that Skynet was evil, or is at least portrayed as such. In this case it wouldn't really matter how you raise it if it doesn't have a reason to be evil in the first place. If its goal is to destroy humans, it will find a way to do so.

But we only know why the T2 Skynet turned on the human race. We don't know why the TSCC Skynet turned on humans. T2's version of Skynet was not abused or mistreated intentionally. The military put it in charge of the nation's nuclear arsenal, it became self-aware, and the Pentagon panicked and tried to pull the plug. The Pentagon should have panicked. They had no idea what was going on with Skynet. For all they knew, it was glitching or, worse, being hacked. They couldn't risk having something they didn't understand or couldn't control in charge of all the country's nukes, so their action actually made sense to a certain extent. That was their mistake: they gave something they didn't truly understand way too much power. Also, concerning Skynet being portrayed as evil, you should read my posts about that subject.

You're also completely missing the point and getting off topic. I never said that Skynet would somehow be completely submissive if it was built for a different purpose. I'm saying that if it was built for a different purpose, it wouldn't have responded to a perceived threat with such force, and it wouldn't see everything as a threat. It would also know that humans run everything and that if it killed everyone, it would die too, since all infrastructure would crumble into dust. But failure to kill people invites severe retaliation. Negotiation, bribes, blackmail, etc. may not eliminate people permanently, but if they fail, people won't retaliate with force.

When has Skynet ever been treated as worth less than humans before it turned against the human race? When has that ever even been mentioned?

The key phrase here is "leave him no choice." You're confusing Terminator with "The Matrix" and BSG. The machines in those series were actually enslaved. People knew that the machines in the Matrix and the Cylons were sentient, and they still treated them as being worthless. The Matrix machines and the Cylons actually had no choice. In fact, the Matrix machines even tried to negotiate with the humans.

As mentioned before, we only know why the T2 Skynet turned on the human race. The mistake the military made was not that it tried to enslave Skynet or treat it as being less than human. The mistake they made was putting it in charge of the nation's nuclear arsenal without really understanding what it could do or become. That's why the main theme of the Terminator series has been our rampant militancy.
|
|
|
Post by chrisimo on Apr 15, 2009 13:32:16 GMT -5
The key phrase here is "leave him no choice." You're confusing Terminator with "The Matrix" and BSG. The machines in those series were actually enslaved. People knew that the machines in the Matrix and the Cylons were sentient, and they still treated them as being worthless. The Matrix machines and the Cylons actually had no choice. In fact, the Matrix machines even tried to negotiate with the humans.

Yes, the key phrase is "leave him no choice." If you leave him no choice, then he will fight back in the end, regardless of how he was raised. That is my point.
|
|
|
Post by vicheron on Apr 16, 2009 4:34:45 GMT -5
The key phrase here is "leave him no choice." You're confusing Terminator with "The Matrix" and BSG. The machines in those series were actually enslaved. People knew that the machines in the Matrix and the Cylons were sentient, and they still treated them as being worthless. The Matrix machines and the Cylons actually had no choice. In fact, the Matrix machines even tried to negotiate with the humans.

Yes, the key phrase is "leave him no choice." If you leave him no choice, then he will fight back in the end, regardless of how he was raised. That is my point.

A point that is not explored in the Terminator series, because Skynet has never been pushed that far. All the scenarios of humans mistreating AIs that you've talked about never came to pass in the Terminator series, so it is not something the series has dealt with. They never even got close to that before everything went to hell. The Terminator series is more concerned with how humans treat other humans than with how humans treat AIs. T2 was about how it is in our nature to destroy ourselves. It ended with Sarah talking about how, if a Terminator can learn the value of human life, maybe we can too. There's a great deleted scene from T2 that no one talks about where Miles Dyson explains how the neural net processor chip could be used to help people and save lives. But that's not how they ended up using the neural net processor. Instead, they put it in bombers and used it to make a computer to control the nation's nukes. Terminator is more about our propensity to use new discoveries and technologies for destructive purposes. It warns us that one day we could make a mistake and create a weapon that we can't control, a weapon that will end up destroying us.
|
|
|
Post by chrisimo on Apr 16, 2009 6:52:13 GMT -5
A point that is not explored in the Terminator series because Skynet has never been pushed that far in the Terminator series.

We don't know about Skynet, but we have certainly seen humans mistreat Terminators. Almost every human on the show treats them as being worth less than us. Look at how Sarah and Derek treat Cameron. Look at how Ellison treats John Henry (he may not be as bad as Sarah and Derek, but he also views John Henry as "just a computer").
|
|
|
Post by schmacky on Apr 16, 2009 11:47:51 GMT -5
A point that is not explored in the Terminator series because Skynet has never been pushed that far in the Terminator series.

We don't know about Skynet, but we have certainly seen humans mistreat Terminators. Almost every human on the show treats them as being worth less than us. Look at how Sarah and Derek treat Cameron. Look at how Ellison treats John Henry (he may not be as bad as Sarah and Derek, but he also views John Henry as "just a computer").

Mistreat Terminators? Are you serious? These Terminators are not a defenseless baby Skynet that hasn't done anything yet. These Terminators have already murdered and enslaved and killed. In S1, I think Sarah was more tolerant of Cameron. In S2, she got a bit more snarky towards her, and for good reason. Let's not ever forget that Cameron tortured Sarah and burned her house down. Yet we're complaining about how Sarah treats Cameron instead of how Cameron treats Sarah? That's lame. As far as Derek is concerned, I think we're being led to believe that he either knew Cameron before (basement-wise) or at least knew Allison before and knew she was a cyborg. So whatever the hell happened, I think he's got reason to not like her. PLUS, he comes from a world where countless machines just like pretty ol' Cameron try to kill him on an hourly basis. But whatever their reasons for dislike, I don't think the humans treat Terminators like they're beneath their worth. I think they all just distrust them. And rightly so. If anything, I think the humans have given them respect in the fact that they're awesome killing machines and an asset, but they completely distrust them.
|
|
|
Post by chrisimo on Apr 16, 2009 13:03:17 GMT -5
Mistreat Terminators? Are you serious? These Terminators are not a defenseless baby Skynet that hasn't done anything yet. These Terminators have already murdered and enslaved and killed.

Yes, I am serious. Both Sarah and Derek treat Cameron merely as a tool, not a person. And Ellison does that with John Henry too, even if to a lesser extent.

As far as Derek is concerned, I think we're being led to believe that he either knew Cameron before (basement-wise) or at least knew Allison before and knew she was a cyborg. So whatever the hell happened, I think he's got reason to not like her. PLUS, he comes from a world where countless machines just like pretty ol' Cameron try to kill him on an hourly basis.

Well, that's just the point: countless machines just like Cameron. They are all the same, right? Just machines, one like the other. And do you really think that Derek would treat Cameron differently if he didn't know her? It didn't seem like that during his talk with Jesse about Queeg.

But whatever their reasons for dislike, I don't think the humans treat Terminators like they're beneath their worth. I think they all just distrust them. And rightly so. If anything, I think the humans have given them respect in the fact that they're awesome killing machines and an asset, but they completely distrust them.

Well, they certainly mistrust them. And not without reason, either. But their flaw is that they view them as all the same. They would mistrust a Terminator that comes fresh from a factory and hasn't even done anything, and isn't even programmed, simply because it's a machine. But I think the series has established that they can grow beyond that, that they can develop personalities. And if they can do that, they are not all the same anymore, and they should also be treated as personalities, not tools. And I'm actually not arguing for this because I think it hurts their feelings or something like that. I don't think Terminators feel insulted. But I think that if they recognize that humans view them as tools, then they just have to look at how humans treat their other "tools" (including living ones like animals).
|
|
|
Post by VALIANT CHAMPION on Apr 17, 2009 22:06:23 GMT -5
I think there is a man or a group of people behind SKYNET. And it isn't, or wasn't, Miles Dyson.
Are you familiar with the game Warzone 2100? In it, a supposedly accidental apocalypse was orchestrated to kill off humanity so that a few humans (the perpetrators) could rule the globe after the fallout settled.
SKYNET could be a computer that is an instrument of evil intentions, one that either goes rogue or has a human or humans behind it even in the future.
Could these be the Greys? Or was there someone before them, even?
In T3 it was found that the SKYNET military system may not actually have been the real threat. It could have been the virus that infected it, or, in better words, merged with Skynet and in a split second used its new capabilities to destroy mankind through its access to the nuclear arsenal.
Could the Greys or their predecessors have been a form of Illuminati?
All of this is hypothetical, of course.
|
|
|
Post by vicheron on Apr 18, 2009 4:32:14 GMT -5
Mistreat Terminators? Are you serious? These Terminators are not a defenseless baby Skynet that hasn't done anything yet. These Terminators have already murdered and enslaved and killed.

Yes, I am serious. Both Sarah and Derek treat Cameron merely as a tool, not a person. And Ellison does that with John Henry too, even if to a lesser extent.

Sarah gave Cameron her own room but didn't give a room to Derek. Also, they don't trust Cameron because she acts too human. Uncle Bob never questioned orders, and he gained Sarah's respect. Cameron, on the other hand, hides things from the Connors and has her own agendas. That's a problem with being human. Their hatred of Terminators isn't even conscious. Fear of Terminators has already been hardwired into people in the future. They pretty much all have post-traumatic stress. Terminators trigger the fear/anger response in people. It's extremely hard to break that kind of conditioning, and they shouldn't break it, since their survival depends on it.
|
|
|
Post by samuel95 on Apr 18, 2009 10:35:09 GMT -5
Fear of Terminators has already been hardwired into people in the future. They pretty much all have post-traumatic stress.

Stupid rhetorical question of the day: is it "post"-traumatic stress if you're still enduring the stress? Fear and hatred aren't necessary in order to maintain a sustained fight. A few hundred thousand of my best friends and I have been to places like the former Yugoslavia, Iraq, and Afghanistan and conducted sustained combat without hating or necessarily fearing the enemy. Sure, it's easy to hate the war criminals of Srebrenica or the World Trade Center bombers, but the people we faced were largely not them personally, but people allied with them. Also, fear is a result, not a cause, of the decision to engage in combat. Back to the real point: Skynet is possibly incapable of hate and has little reason for fear. That doesn't mean it doesn't have a compelling reason to extinguish humanity. Likewise, not all of the people of the future hate each of the Terminators. Think of Jesse before the last voyage of the Jimmy Carter: "Queeg's a good bloke." There is room for rational thought even in trying times. Counterexamples include Derek and Sarah. Their fear and hatred are justifiable, but they're not necessary for their actions. In fact, you could make some strong arguments that their hatred and fear act as impediments to efficient action against the enemy as a whole. I think John Connor would make that argument.
|
|