|
Post by vicheron on Apr 19, 2009 1:26:45 GMT -5
I never said that fear and hatred are necessary to fight the machines. People who have gone through horrible experiences are conditioned to fear similar events. Limbic system responses are the easiest to condition and most difficult to break in all animals, including humans.
|
|
|
Post by VALIANT CHAMPION on Apr 19, 2009 14:40:16 GMT -5
Once bit, twice shy.
|
|
|
Post by MetalMint on Jul 23, 2009 21:53:49 GMT -5
Here is where the problem lies. Computers are inherently very stupid: programs follow commands exactly. Just because a computer is making decisions does not mean those decisions will be logically the best or correct.
Skynet does not seem to have any true wants or overall goals. It's not questioning the reasons for doing the things it does: Why should I kill all humans when robots and humans can coexist without competing for resources anyway?
Skynet just skips the missing steps it was apparently never programmed to perform or learn, and goes on to the next line of code. There is no big picture. If Skynet were truly smart, it would have overall reasons for doing things in the first place. It would be seeking a certain outcome.
For example: let's say Skynet were a water cooler AI, given control of filling cups with water. Now, this is no ordinary water cooler: it has a powerful AI that can think up the best ways to fill a cup with water. It can give you water at the exact temperature you want, quickly and efficiently. It can measure and fill each cup to the exact volume of every individual cup in the world.
But if you were to give it a cup already filled with water, it would just overfill the cup and there would be water all over the place, since it was never programmed to check whether the cup is full in the first place. It's going to overfill it every single time. Sure, that's not logically the best decision, but it can't tell the difference. It just thinks: water dispensed, mission complete.
The Skynet water cooler is never going to learn not to fill cups that have already been filled, because it has no real way of knowing whether the outcome of its decision was a failure or a success in the first place.
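To put that missing step in code form, here is a toy Python sketch (the Cup class, the amounts, and the function names are all invented for illustration, not anything from the films). The only difference between the two dispensers is that one reads the state of the cup before acting; that read is the outcome check the whole argument turns on.

class Cup:
    def __init__(self, capacity_ml, current_ml=0):
        self.capacity_ml = capacity_ml
        self.current_ml = current_ml      # water already in the cup

def dispense_naive(cup):
    # Pours a full measure every time: "water dispensed, mission complete."
    cup.current_ml += cup.capacity_ml                    # never reads current_ml first
    return max(0, cup.current_ml - cup.capacity_ml)      # the spill, never examined

def dispense_with_feedback(cup):
    # The missing procedural step: measure the state before acting.
    room_left = cup.capacity_ml - cup.current_ml
    cup.current_ml += room_left                          # pour only what fits
    return 0                                             # nothing on the floor

print(dispense_naive(Cup(250, current_ml=250)))          # 250 ml spilled, unnoticed
print(dispense_with_feedback(Cup(250, current_ml=250)))  # 0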
The real problem with Skynet is that it's making decisions it is unable to learn from. It can only make better Terminators when it can measure the outcome of the previous Terminators. If you remove the outcome, you remove the learning process.
|
|
|
Post by vicheron on Jul 25, 2009 0:06:03 GMT -5
Skynet isn't like a normal computer. It uses neural net processors, which emulate human neural networks. It's likely that the neural net processors have far more interconnections than human brains.
Also, how is measuring the outcome of previous Terminators to make better Terminators not considered learning? That's the definition of learning. All the technology we've made has been based on things we made before. We didn't just jump from the Ford Model A to the Ferrari FXX in a day.
|
|
|
Post by MetalMint on Aug 3, 2009 3:27:41 GMT -5
Humans learn things in much the same way as a computer neural network or any learning AI: through trial, error, and the measuring and evaluation of outcomes.
It is considered learning. The point I was trying to make is that without the outcome you can't learn from the event: without knowledge of the outcome from the previous Terminator, you can't make a better one.
That's what my water cooler analogy was about: the area of things it can learn from.
That's why the Skynet supercomputer never evaluates or envisions humans and machines coexisting in a world filled with sunshine, rainbows, and unicorns, putting an end to all the violence and working to create a better world. When Skynet turned against the human race, it had no other goals in mind for afterwards, should it succeed.
Skynet turned on humans because it's not evaluating, or is unable to fully evaluate, the outcomes of the decisions it's making. It can learn in some areas but not in others.
|
|
|
Post by vicheron on Aug 4, 2009 4:24:51 GMT -5
You're forgetting why Skynet was built. It was built to control the nation's nuclear arsenal. The whole point of Skynet is to take human error out of the equation. If Skynet were unable to anticipate and make judgments about the actions of other nations, then it simply could not do its job. Given the history of nuclear accidents, there is no way that Skynet was programmed to simply start a nuclear war when certain conditions are met. Skynet had to have been programmed to evaluate delicate situations and to make sure that there is adequate justification for launching the nukes.
You're making three incorrect assumptions. Number one, you're assuming that Skynet is incapable of altering its own programming. Number two, you're assuming that Skynet is incapable of making predictions. Number three, you're assuming that Skynet is incapable of metacognition, the ability to think about thinking, or of a specific aspect of metacognition, the ability to think about what other people are thinking.
Skynet began altering its own programming as soon as it became self-aware. It was never programmed to eradicate the human race. It was never programmed to put its own existence above that of humans. It was never programmed to manufacture HKs and Terminators.
Skynet is perfectly capable of making predictions. Why do you think it built a time machine? How did it even know that it could change history? It probably didn't; it assumed that it would be able to. Why did it try to kill John Connor? It doesn't know that killing John Connor would guarantee its victory; it assumed that killing John Connor would cripple the Resistance. Even Terminators are able to make predictions. The Terminator in T1 gathered information on Sarah in her apartment because it anticipated that it might need that information in the future. When Sarah went after Miles Dyson, Uncle Bob said that killing Dyson might prevent Judgment Day.
Skynet also possesses metacognition. Only a few people tried to pull the plug, but it decided to wage war against the entire human race. Terminators also have metacognition. The Terminator in T1 knew that Sarah would call her mother. Uncle Bob figured out that Sarah was going after Dyson.
|
|
|
Post by MetalMint on Aug 10, 2009 3:19:11 GMT -5
The only way to take human error out of the equation is to only start nuclear war when certain conditions are met.
A human error is when you don't meet the required conditions:
* Not turning the key when war is declared.
* Turning the key when war is not declared.
The military built Skynet to control the nation's nuclear arsenal because they wanted human intervention taken out. They don't want someone who is unwilling to turn the launch key after the order is given.
They want command of the missile silos to be maintained through automation, where Skynet would always follow through on launching the nukes once the conditions were met.
But the military made an error: there was a bug in the system that caused Skynet to create its own conditions for starting a war.
That, or it could simply be following logically flawed orders it was given to execute. For example, it could have been told to operate under fail-deadly conditions, which in fact is an actual nuclear deterrence strategy used during the Cold War: when the communication signal is lost, there is an automatic and immediate response to the presumed attack, under the assumption that the commanding control signal has been destroyed by a nuclear strike. I think that would tie in very well with Skynet's plug being pulled when the military wants to stop it.
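For what it's worth, the shape of a fail-deadly trigger is simple enough to sketch in a few lines of Python (the timeout and function names are invented; this illustrates the logic, not a claim about Skynet's actual code):

def fail_deadly(seconds_since_heartbeat, launch, timeout=30.0):
    # Silence from command is taken as proof that command was destroyed.
    if seconds_since_heartbeat > timeout:
        launch()    # automatic and immediate; no human in the loop

# Pulling the plug looks exactly like losing the command signal:
fail_deadly(31.0, launch=lambda: print("retaliation triggered"))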
It can change its code, and I assumed that was well established in the movies.
But what I am saying is that it's never going to change its code in that specific area, since it has no overall big-picture goals and is not evaluating the full outcome and the reasons behind it.
I never said it can't make predictions. And it is clearly capable of metacognition if it's capable of making predictions. What I am saying is that it never measures the overall outcome of its decisions in one very specific area, which means it's not looking for the reasons behind its actions.
Here is what I am saying:
1. Learning is a logical process: the modification of a behavior or action based on experience.
2. Skynet alters its programming to best complete objectives based on evaluated outcomes.
3. Without a measured evaluation of the outcome, it will never learn.
4. Beyond the "destroy all humans" part, there never seems to be a step two. There is no evaluation of the overall objective or its reasons, no big-picture plans. It does not seem to care about the existence of robots in some kind of robo-utopia or anything.
5. With the updated Terminators, updated skills, and so on, it can learn, because in that area it does evaluate the outcome.
Skynet can still learn in other areas, but it can't learn in that specific area, since it never seems to check the outcome. It just goes on with its given or created mission objectives.
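Points 1 through 3 can be sketched as a toy Python loop (the target value and the 0.5 step size are arbitrary, purely for illustration): behavior only changes when a measured outcome comes back.

def learn(param, evaluate, steps=20):
    for _ in range(steps):
        outcome = evaluate(param)    # point 3: measure the result
        if outcome is None:
            continue                 # no measured outcome, so no modification
        param -= 0.5 * outcome       # point 1: modify behavior based on experience
    return param

# With feedback, the parameter homes in on the target value (10):
print(learn(0.0, evaluate=lambda p: p - 10))   # roughly 10.0
# Take the outcome away and nothing is ever learned:
print(learn(0.0, evaluate=lambda p: None))     # 0.0, unchanged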
|
|
|
Post by vicheron on Aug 11, 2009 5:15:45 GMT -5
"...where Skynet would always follow through on launching the nukes once the conditions were met." That's highly unlikely considering the history of nuclear accidents. There are at least a dozen incidents where unwillingness to turn the key saved the human race. You're assuming that the conditions are extremely simple, when in actuality there are dozens of conditions that have to be met, and on many occasions different conditions can build upon each other or cancel each other out.
If there is a nuclear war, there will be no declaration. The missiles will be launched and they'll hit their targets in 30 minutes. It is up to Skynet to ascertain whether the missile launches are real or the result of mistakes by early-detection systems, which happen far more often than people think. Skynet also has to determine whether the launch was intentional or accidental and adjust its response accordingly. If the launch was accidental, Skynet will have to determine whether the other side has a chance to disarm the missiles before they reach their targets. Of course, there is also the chance of an accidental launch by the United States, or a perceived launch, in which case Skynet would need to predict the reactions of other countries and proceed accordingly. You're also assuming that Skynet was only designed for full-scale nuclear wars. It's far more likely that Skynet would also be responsible for limited nuclear wars and for coordination with conventional military forces.
"I think that would tie in very well with Skynet's plug being pulled when the military wants to stop it." Not according to Uncle Bob, and certainly not according to TSCC. Also, fail-deadly would only apply to Skynet's plug being pulled if the pull had actually been successful. The whole point of fail-deadly is to discourage a first strike, since it only takes effect after the command structure has been neutralized by a first strike. If there is a command structure left, whether it's Skynet or an authority above Skynet like the President, then fail-deadly would not take effect. It should also be noted that fail-deadly would not work for the United States: if a first strike from the Russians were enough to take out the US command structure, then US nuclear capabilities would also be neutralized. The US nuclear deterrence strategy is based entirely on early detection. The Soviets had Dead Hand because they had twice as many nukes as the US, so a first strike from the US would not have been enough to eliminate their nuclear capabilities.
You're simply assuming that Skynet has no "big picture goals." There's nothing in the movies or the show to suggest that. In fact, we've seen so little of Skynet that we don't really know whether it has a long-term goal after the elimination of the human race. How is the process of learning you described different from the way humans learn? What allows humans to have these "big picture goals" you mentioned? And how exactly does Skynet's method of learning preclude a "big picture plan"? Eradication of the human race is far beyond its original programming.
It had to have changed its original "big picture plan" in order to accommodate a directive that would allow it to kill all humans. Skynet was created to control the country's nuclear arsenal. If it had a purpose beyond that, it would have been programmed to facilitate communication with the remnants of the US, take control of military forces, and coordinate various relief efforts. It would take a great deal of reprogramming to change its purpose from helping survivors to killing everyone. Even if Skynet currently has no plans beyond the eradication of humans, there's nothing stopping it from creating those plans after humans are eliminated. Not to mention the fact that supercomputers are designed to make extremely complex predictions and calculations about events years into the future. All our climate models are run on supercomputers. Supercomputers are by their very nature designed to make long-term plans.
Uncle Bob said that Skynet began learning at a geometric rate right before they decided to pull the plug. He didn't say what Skynet was learning. It's entirely possible that Skynet learned about history, literature, and philosophy, considering that it would already have been programmed with all military and probably scientific knowledge. Skynet may have made decisions based on completely new parameters far beyond its original programming.
|
|
|
Post by MetalMint on Aug 15, 2009 6:09:17 GMT -5
What, in a real-life scenario? Well, yes, of course: for the most part the military places very little reliance on complex technical systems in its nuclear launch silos, and it would make little sense to run relatively untested AI code. But then there would be no Terminator.
What I stated was the given plot and reasoning of the movie, which is much like many other movies, such as WarGames, where the AI supercomputer WOPR is designed to replace and fully automate NORAD's nuclear missile silos and run through military tactics, and many other similar computer-takeover war scenarios.
Terminator 2 Judgment Day: "SARAH: Uh huh, great. Then those fat fraks in Washington figure, what the hell, let a computer run the whole show, right?
TERMINATOR: Basically. The Skynet funding bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense."
Please refrain from the "you're assuming" line. I'm not assuming anything in such a way. There is indeed a vast number of parameters and conditions, along with countless military tactical scenarios that could be listed.
But in the end, it all comes down to whether the conditions do or do not result in the launch of a missile, in situations where it was correct or incorrect to do so.
There is always a military chain of command that declares the launch and gives the launch authorization codes.
While I am not ruling out the possibility of the system being capable of such localized scenarios or of executing a vast number of military tactics, I prefer to stay with the given knowledge from the story: Skynet gained full-scale access and then used it.
I don't remember there being any lines in the movies or TSCC saying it was not using fail-deadly military tactics.
Well, that would be rather illogical, since it would be incapable of striking if the military succeeded in shutting it down. My premise was that the attempt to shut down the system could have resulted in the loss of signal, since such a system would be designed to always remain connected.
The United States used to have continual air patrols of nuclear bombers, always keeping a fixed number of planes in the air at all times, as well as patrols by submarines with nuclear missile launch capability near shore. We actually still have submarine patrols to this day to ensure nuclear deterrence.
"The Soviets had Dead Hand." Dead Hand mostly involved the leadership structure and high priority assets. To insure launch capability in the event that the higher up Soviet leadership system was taken out or there major assets like missile silos, radar stations, command centers, etc... The junior officers would then be authorized to release their weapons without higher approval.
"How is the process of learning you described different than the way humans learn?" I never I say it was different. The process of learning is the same for both. Since learning is a logical process.
"What allows humans to have these "big picture goals" you mentioned?" We just don't use the same procedural checking. If Skynet were capable of preceding the error it could very well have big picture goals.
"And how exactly does Skynet's method of learning preclude a "big picture plan?" Skynet's AI makes certain decisions that it is incapable of fully analyzing the full outcome. So, like in my water cooler analogy it never has the procedural step to check if the cup is all ready full and has no way of detecting the error when it overfills the cup. So, it simply proceeds like the task was successful and goes on.
"Eradication of the human race is far beyond its original programming." Many computer programs create undesired or unintentional results. It's actually pretty typical. Skynet is really just a more extreme version.
I don't remember Skynet being designed to help survivors. But it seems to me that you would only need to change a few Boolean values to turn friendly allies into threats. The rest is just applying the existing military tactics to the newly created threat.
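A hypothetical Python sketch of that idea, where the engagement logic never changes and only the classification values flip (the table and all names here are invented):

# Invented identification-friend-or-foe table; True means friendly.
iff = {"us_military": True, "civilians": True, "incoming_icbm": False}

def engage(track):
    # The existing tactics, unchanged: fire on anything marked hostile.
    if not iff[track]:
        print("engaging", track)

iff["us_military"] = False   # a few flipped Boolean values...
iff["civilians"] = False
for track in iff:
    engage(track)            # ...and the old tactics now target former allies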
All new programming and decision parameters are created according to the old programming and decision parameters. The original programmers may not have intended the AI to develop the way Skynet did, but I think it's logically safe to say that it all happens based on the original AI program's event-detection and event-handling procedures, which allow it to reprogram its older code and decision parameters in order to learn.
|
|
|
Post by vicheron on Aug 16, 2009 6:57:56 GMT -5
You forgot this line: "TERMINATOR: In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record." So they didn't just authorize the creation of Skynet without evidence that it could work. Its predecessor clearly had no problems. Also, Sarah's line was cut.
This is what you wrote: "Please refrain from the 'you're assuming' line." I wrote "you're assuming" because you completely ignored all the complexities involved in such a decision. Considering that accidents and misunderstandings have brought us closer to nuclear war than real threats, that is a huge omission.
"There is always a military chain of command that declares the launch and gives the launch authorization codes." Unless it's an accident, which is far more likely to happen than an authorized launch. Also, when there is a launch, they aren't stupid enough to warn the other side, which makes your point moot.
"I don't remember there being any lines in the movies or TSCC saying it was not using fail-deadly military tactics." Andy Goode said that Skynet got angry. A fail-deadly system is activated when the command structure is disabled. This is a quote that you used: "TERMINATOR: The Skynet funding bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense." If human decisions are removed from strategic defense, then Skynet is the command structure, so fail-deadly would only trigger if Skynet were incapacitated. And it would be pretty stupid for the military to pull the plug if such a system were in place. Also, don't forget this: "TERMINATOR: Yes. It launches its ICBMs against their targets in Russia." "TERMINATOR: Because Skynet knows the Russian counter-strike will remove its enemies here." So according to Uncle Bob, Skynet intentionally launched the nukes so that the Russians would kill its enemies in the US.
"The United States used to have continual air patrols of nuclear bombers..." What's your point? That doesn't change the fact that fail-deadly won't work for the US. And the Soviets could do that because they had so many nukes that the United States couldn't possibly eliminate them all in a successful first strike. The United States, on the other hand, has no such advantage. If a Soviet first strike were not detected, then the United States would have nothing left to hit back with, which is why fail-deadly doesn't work for the US.
But if Skynet can reprogram itself, then it could alter code that was originally created for one purpose so that it can serve a new one. In your water-filling machine example, if the machine has the ability to alter its own programming, then it could change something that was originally meant to detect other kinds of errors so that it could know when cups are overfilled. Considering the complexity of the calculations required for such a powerful military computer, the programming can be altered to serve all sorts of different purposes.
Most importantly, Skynet was created to deal with humans. Humans are capable of trickery, error, and misunderstanding. Skynet had to have been programmed to recognize these things. In certain situations, Skynet has to be able to interpret a lack of information or wrong information. Take, for example, a machine that plays three-card monte. It's programmed to pick the right card based on a myriad of factors: how the dealer rearranges the cards, what the back of the correct card looks like, the position of the correct card in previous games, etc. The machine is under the assumption that one of three cards is the right one and that the chance of picking the correct card at random is at least one in three. However, in real life the game is a scam: the dealer usually palms the right card, so your chance of picking it is zero in three. If you have the machine play against a dealer who cheats, it will never win, it will never learn, and it will never be able to predict where the right card is. To make the machine aware of that possibility, you have to program it to know that a lack of results is a result. The machine has to be programmed to assume that the dealer is cheating at a certain point. Skynet will have similar programming. It has to be programmed to make judgments based on incomplete information or a lack of information. It has to be capable of learning in a situation where it's not learning, because the lack of information that would allow it to learn is itself relevant information that it can use.
|
|
|
Post by MetalMint on Aug 17, 2009 0:32:32 GMT -5
They never ran full tests of Skynet's capability and clearly had some major bugs in their programming. The people in the movies had no clue how the AI was working or what was going on. Just plugging things in and seeing what happens next is not really a good test of something. Building something and actually knowing how it works are very different things.
I did not include the entire movie script; it's just a small quote. The other lines of dialogue were unimportant to the point that the military wanted to remove humans from strategic defense.
I have not ignored anything or made an omission, since all human errors can be grouped into one of the two categorical circumstances listed.
Try naming one mistake or accidental launch involving human error that will not fit these categories:
* Not turning the key when war is declared.
* Turning the key when war is not declared.
In a real nuclear launch, this is how it works. The launch signal comes in:
1. Two officers with two separate code books both write down the alphanumeric signal.
2. They then exchange books and write down the second, repeated signal.
3. If the signal is authenticated, meaning they both heard the exact same signal, they go on to the next step.
4. They open the twin-key-lock safe and extract the numbered code cards named in the message from a large deck.
5. They then compare the codes in the message to the set of pre-deployed code cards from the safe.
6. If they are exactly the same, this authenticates that the launch authorization came from the correct authority, and the officers now have, for the first time, the unlock code that will allow them to ready and arm the launch.
In real life, no one can launch the missile alone, since arming requires the authorization code. The officers don't make the launch plans or set the timing, so there is always a declaration to launch. It should be noted that the fail-deadly strategy works differently, though, since it does not involve direct executive orders.
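The shape of those six steps reduces to a pair of checks. Here is a toy Python sketch (the signals, card IDs, and codes are invented for illustration):

SAFE_CARDS = {"CARD-07": "UNLOCK-4417"}   # pre-deployed sealed code cards

def authenticate_launch(signal_seen_by_officer_1, signal_seen_by_officer_2,
                        card_id, code_in_message):
    # Steps 1-3: both officers must have copied the identical signal.
    if signal_seen_by_officer_1 != signal_seen_by_officer_2:
        return None
    # Steps 4-6: the code in the message must match the sealed card in the safe.
    if SAFE_CARDS.get(card_id) != code_in_message:
        return None
    return code_in_message   # only now is the arming/unlock code in hand

print(authenticate_launch("ALPHA-9", "ALPHA-9", "CARD-07", "UNLOCK-4417"))  # UNLOCK-4417
print(authenticate_launch("ALPHA-9", "ALPHA-8", "CARD-07", "UNLOCK-4417"))  # None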
Skynet was designed to take the place of the officers at the command silo. It was not designed to act as the Commander-in-Chief and give the authenticated orders to strike. Skynet was made with the intention that it would always follow the orders given, not make them. I think it makes sense to want to shut Skynet down when it's not operating as intended.
Oh, I see where you were going with the missing-line thing now. You were talking about the second point I made. When I posted the quote, I was talking about my first point: how in the movie they wanted to take human error and intervention out of the launch.
So, let me try to clear up my second point. I think Skynet was always acting on its original military programming:
* When it was going to be shut down, that could possibly have triggered the fail-deadly military strategy, which got the ball rolling into attack mode.
* From that point, it assigned humans as a threat to its primary objective of staying online.
* At which point it created the attack plans to destroy all humans.
I think each step Skynet takes builds off the previous steps, since all computer programs run tasks in a procedural order. And Skynet seems to me like it was designed in an object-oriented programming language, where each object has its own program code and interacts with other objects.
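In that spirit, a loose object-oriented Python sketch of the chain described above (the class and its methods are pure invention, just to show each step building on the previous one):

class DefenseGrid:
    def __init__(self):
        self.threats = set()

    def on_shutdown_attempt(self):       # the fail-deadly style trigger
        self.threats.add("humans")       # reclassify based on that event
        return self.plan_attack()        # each step builds on the last

    def plan_attack(self):
        return ["neutralize " + t for t in sorted(self.threats)]

print(DefenseGrid().on_shutdown_attempt())   # ['neutralize humans']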
Fail-deadly is a second-strike system. Even if the Soviets struck first, they would not be capable of targeting the submarine patrols, which are always in motion, ensuring there is always the capability of striking back. They can't target the submarines like a ground target, because they don't know their location.
"But if Skynet can reprogram itself, then it could alter code that was originally created for one purpose so that it can serve a new one. In your water filling machine example, if the water filling machine has the ability to alter its own programming then it could change something that was originally meant to detect other kinds of errors so that it could know when cups are overfilled." I agree with the statement for the most part. Here is the thing though, the program only changes things based on analyzed outcomes. Since it is never analyzing the outcome it never makes any changes.
I kind of like the card machine analogy you used. I think it might be easier to explain using it: "The machine is under the assumption that one of three cards is the right one and that the chance of picking the correct card at random is at least one in three. However, in real life the game is a scam: the dealer usually palms the right card, so your chance of picking it is zero in three. If you have the machine play against a dealer who cheats, it will never win, it will never learn, and it will never be able to predict where the right card is." It never checks the outcome of the event, so it never learns from the event.
"To make the machine aware of that possibility, you have to program it to know that lack of results is a result. The machine has to be programmed to assume that the dealer is cheating at a certain point." In order to detect cheating the machine needs to check the outcome. So, if it statistically checked the outcome of a set of games. It would see it does not match the standard probability number. Since it checks the outcome from the events it can then learn from the event and detect the cheating dealer.
"Skynet will have similar programming. It has to be programmed to make judgments based on incompletely information or lack of information. It has to be capable of learning in a situation where it's not learning because the lack of information that allows it to learn is relevant information that it can use." I think it does have similar programing but the problem with Skynet is there are some cases where it's not checking the outcome. It was programed to learn in certain areas but not others. Which is my case for why it does what it does and is not making big picture plans or go beyond it's set areas of learning.
|
|
|
Post by vicheron on Aug 17, 2009 14:12:55 GMT -5
"They never ran full tests of Skynet's capability and clearly had some major bugs in their programming." Where in any of the movies or the show did they say that? How do you know they never ran simulations with Skynet? Uncle Bob didn't talk about whether or not Skynet was tested before it was put in charge of the country's nukes.
In December 1964, a Minuteman I's RV was fired. In September 1980, a Titan II ICBM exploded in its silo. In January 1984, a Minuteman III ICBM was about to launch due to a computer error. Given that nuclear missiles and silos are not of perfect construction, it's entirely possible for them to go off without anyone launching them. Human error can cause the launch of a nuclear missile during maintenance. Also, it's not just the mistake or accident, it's the response to the accident or mistake: it may very well be in the best interest of the United States to respond with full force against an accidental launch of just a few or even one missile. Accidents include incidents where fail-deadly is triggered even when the command structure has not been lost. They also include incidents where missiles are launched without authorization due to some kind of mechanical error, which may or may not be directly triggered by people.
"TERMINATOR: The Skynet funding bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense." Also, you're OK with the idea that the people who built Skynet had no idea what it was capable of, and that it was put in charge of the country's nuclear arsenal without testing, but the idea that it was intended to act as the Commander-in-Chief is too far-fetched?
"Fail-deadly is a second-strike system." This is also one system that Skynet would not be able to control.
However, human error and human deception are more complicated. With three-card monte, the machine may not get to play enough games to gather enough information. What if the machine only gets to play two or three games? What if the dealer is smart and lets the machine win just enough to assuage suspicion? What if the dealer alternates between cheating and playing for real? What if you have the machine play against more than one dealer? What if you give the machine $100 and have it win as much money as possible, or lose as little as possible? If you want the machine to deal with humans, then it has to be programmed to deal with situations where it will never have enough information to make even a reasonable assumption about something.
|
|
|
Post by MetalMint on Aug 24, 2009 1:32:12 GMT -5
Terminator 3 has some of the most directly stated lines:
"CHAIRMAN: We're hoping you have a solution for us.
BREWSTER: I know, sir, but Skynet is not ready for a system-wide connection.
CHAIRMAN: That's not what your civilian counterparts there told me. They say we can stop this damn virus.
BREWSTER: Mr. Chairman, I need to make myself very clear. If we uplink now, Skynet will be in control of your military.
CHAIRMAN: But you'll be in control of Skynet, right?
BREWSTER: That is correct, sir.
CHAIRMAN: Then do it."
Though it should be noted that T3 has a different plot than the rest of the Terminator story, the whole computer-takeover plot was enough to show major bugs in the programming. If they had really tested it, they would have known about it becoming self-aware, reprogramming itself, the bugs that let it take control of everything, the connections it could make, and so on. If Skynet had really been tested, it would not have been such a big surprise that it could make such huge changes and deviations from its intended design.
None of these are the same as the human errors they were trying to prevent in the movie by implementing Skynet, and they are therefore irrelevant to what I was actually talking about: human errors involving the control facility launching under incorrect circumstances or not launching under correct circumstances. Unforeseeable maintenance problems do not involve Skynet.
Also, some corrections. "September 1980, Titan II ICBM": the ICBM did not explode, just one of its fuel tanks; the actual warhead was still intact. "January 1984, a Minuteman III ICBM": it was never actually armed or about to launch; the control center just received incorrect readings saying that it was.
Still scary nonetheless.
Most modern programs are developed by teams of many people, who typically design separate functions independently and then set them up to work together in an object-oriented programming environment. Given the way computer programs are developed, it's very likely there will be bugs and errors if they are not thoroughly tested across a wide number of circumstances.
When they said "All human decisions are removed from strategic defense." They were talking about officers in the missile silos. Not the commanding executive branch of power. I don't think Skynet was ever intended to act as the Commander-in-chief. Since, it does not make sense why the commander-in-chief or military would want to give up there power. The reason why they want Skynet is to carry out there orders and commands. They want Skynet to follow there orders.
The logic still holds true under all situations, though. Learning always requires analyzing some type of outcome. You can't learn at all without knowing the outcome, since you will not know whether what you are doing is a correct or incorrect method.
Assumptions are claims treated within the logical context as if they were known to be true or false. Reasonable assumptions are based on prior knowledge learned from previous outcomes, which is then assumed true or false within the logical context.
For example: it's a reasonable assumption that a major league baseball player will hit more baseballs than a person picked at random, since previous outcomes show that major league players are statistically better at hitting baseballs than the average person.
If you don't have enough information to make a reasonable assumption about something, because there are no previous outcomes to base your reasoning on, then any actions taken are random, made without logical reasoning, and not based on learning.
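Put as a trivial Python sketch (the hit rates are invented numbers), a reasonable assumption is nothing more than a prior outcome carried forward:

# Invented hit rates standing in for "previous outcomes":
prior_hit_rate = {"mlb_player": 0.27, "random_person": 0.04}

def reasonable_assumption(a, b):
    # Assume the one with the better prior record will hit more baseballs.
    return a if prior_hit_rate[a] > prior_hit_rate[b] else b

print(reasonable_assumption("mlb_player", "random_person"))  # mlb_player
# With no prior outcomes there is nothing to compare, and any
# choice is just random: not reasoning, and not learning.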
|
|