A question for computer programmers:
If two artificial intelligences like Skynet from the Terminator franchise went to war with each other, what would determine which one wins? Assume both have identical programming, equal resources, and are equally capable of learning.
This isn't just a "for fun" question. I'm genuinely curious to hear from those well-versed in programming what would happen in such a scenario and what would allow one to defeat the other.
- oyubir, Lv 6, 1 month ago
The thing is, you are asking programmers because you want an answer related to real-life, existing AI, not to sci-fi.
But your question is only about sci-fi, not AI.
Understand that Terminator is not a possible future of AI. Or, put otherwise: today's still-rudimentary AI is not a first step toward Terminator. We AI researchers (this is what I have been doing for a living for 25 years now) are not even trying to go in that direction. Terminator (and other similar movies) is just based on a huge misunderstanding about what AI is and what its goal is.
That misunderstanding is on us, I admit (well, not on me; even if my career is quite long already, I was not even born when the choices were made). We've chosen misleading vocabulary. Starting with "artificial intelligence", whose purpose is not to build an artificially intelligent being: it is just the name of a branch of computer science whose goal is to program algorithms implicitly, by specifying goals, instead of explicitly, by specifying paths. Or "neural network", whose purpose is not to build artificial brains: it is just the name of a family of mathematical functions, like polynomials, which happen to be more practical than polynomials for regression, for example.
Nevertheless, even if the misunderstanding is on us, it is there. Your question is a little like asking: "A question for car builders: what if a DeLorean travels to the future and crashes into its future self?" Car builders would be quite embarrassed to answer you. Not only can the cars they build not do that; even more, that sci-fi is not even about them. Maybe there will be time machines one day. But that possible future has very little to do with what they do.
Likewise, maybe there will be Terminators one day. But they will have very little to do with what is called "AI" today. Not only can our AI not do that; we are not even researching in that direction.
I must balance my own answer, though, and acknowledge that there are now some people trying, or at least pretending, to do that. That is because people like Bill Gates, Stephen Hawking, Elon Musk, or Ray Kurzweil are the ones given a microphone on the subject. But it is important to keep in mind that none of them knows what AI is, and none of them has ever coded anything in AI (well, Ray Kurzweil did, when he was young; he was even a pioneer in OCR. But since then, he has become a futurologist).
Which leads to a distinction between regular AI (also called "weak AI" or "applied AI") and artificial general intelligence (also called "strong AI").
Weak AI is the real one. The one on which research is done. The one that achieved all the successes that made AI fashionable again (image recognition, chatbots, Deep Blue, AlphaGo, etc.).
Strong AI is the sci-fi one, which has achieved nothing remarkable. But it is the one everybody talks about, because it is the one you can see in movies.
There is no relationship between them. The objectives are different. The techniques are different (well, there aren't really any techniques for strong AI).
So, strong AI is not the future of AI. It is a totally unrelated topic.
So, again, maybe it will exist one day. But it will not be thanks to us, AI researchers.
(In fact, I do believe that, if humanity doesn't commit suicide first with climate change or nuclear weapons, it will exist one day. I am very much a materialist, so I believe that what nature has done with carbon, we can redo with silicon. So I do believe it will happen eventually, that we will one day build thinking machines. But that will have nothing to do with what we are doing now.)
All that being said, once the Terminator-ish war idea is removed, the idea of an AI fighting another, equivalent AI is not new at all.
The geeky movie WarGames is about nothing else. Sure, it is very Terminator-ish too; the "chatbot" part of the movie depicts a strong AI. But the main part, where it learns to win a nuclear war, is plain old weak AI: the AI spends all its free time fighting nuclear wars against another instance of itself to improve its strategy.
It is called reinforcement learning, and it is exactly what AlphaGo did. AlphaGo played a myriad of Go games against another instance of itself to improve its skills, until it reached the point where it was able to beat a champion.
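To make the "playing against another instance of itself" idea concrete, here is a minimal sketch of self-play learning, assuming a made-up toy game (a tiny Nim variant: players alternately take 1 or 2 sticks, and whoever takes the last stick wins). This is not AlphaGo's actual algorithm (which uses deep networks and tree search); it is a plain tabular value table that plays both sides of every game, so the "loser" and the "winner" literally share the same learned model:

```python
import random

# Self-play learning sketch on a toy Nim game (hypothetical example, not
# AlphaGo's real method): players alternately take 1 or 2 sticks; whoever
# takes the last stick wins. One shared value table plays both sides.

rng = random.Random(0)
Q = {}  # (pile, move) -> estimated value for the player about to move

def legal(pile):
    return [m for m in (1, 2) if m <= pile]

def choose(pile, eps):
    moves = legal(pile)
    if rng.random() < eps:
        return rng.choice(moves)  # occasional exploration move
    best = max(Q.get((pile, m), 0.0) for m in moves)
    # random tie-break among equally valued moves
    return rng.choice([m for m in moves if Q.get((pile, m), 0.0) == best])

def train(episodes=5000, alpha=0.5, eps=0.2):
    for _ in range(episodes):
        pile, history = 10, []
        while pile > 0:
            move = choose(pile, eps)
            history.append((pile, move))
            pile -= move
        # The player who took the last stick won: credit +1 to their moves
        # and -1 to the opponent's, walking back through the game.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward  # alternate perspective each ply

train()
# After enough self-play the shared table tends to favor leaving the
# opponent a multiple of 3 sticks, the winning strategy for this game.
print(choose(10, eps=0.0))
```

The same loop, scaled up enormously and with a neural network instead of a table, is the shape of what self-play systems do: the model improves only by playing itself.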
As for "what would allow one to defeat the other": well, as others have said, the ability to learn means that what you have experienced, even with equal abilities, changes your strategy. So if the two equivalent AIs have experienced different things, they are not really identical.
Plus, there is always a random part in those learning algorithms. In the AlphaGo case, for example, both AIs have not only the same capabilities but also the same experience (it is the same model that is used to fight against itself: the loser learns the same lesson as the winner; in reality it is not as binary as I describe it, with two AIs fighting). Nevertheless, every game has a loser and a winner (well, maybe there are also draws; I don't know the game well enough). Because even for AIs with identical capabilities and identical experience, there is always an asymmetry somewhere. In Go, for example, one player moves first and the other moves second: that is an asymmetry. There is also always randomness: when the AI has computed a set of possible best moves, there is often more than one; several equivalent moves could be played. So it chooses randomly. It has to, because you don't want to introduce a bias into the training by setting an arbitrary rule on which move to favor.
So, yes, one will win, because one has to win. At the end, there is a winner, even if it is only because of tiny situational differences or random choices.
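The asymmetry-plus-randomness point can be demonstrated in a few lines, again assuming the same hypothetical take-1-or-2 stick game (the function name and game are illustrative, not from any real system). Both "agents" here are byte-identical, sharing one flat evaluation, so every legal move is an equal-value tie broken at random; the only structural difference is that one side moves first:

```python
import random

# Two byte-identical agents still produce a winner: every move is a tie
# between equally valued options, broken at random, and one side has the
# built-in asymmetry of moving first. (Toy game, illustrative only.)

def play_identical_agents(pile=10, seed=0):
    rng = random.Random(seed)
    player = 0  # player 0 always moves first: the asymmetry
    while pile > 0:
        # Same agent, same flat evaluation on both sides, so the
        # choice of move is a pure random tie-break.
        pile -= rng.choice([m for m in (1, 2) if m <= pile])
        if pile == 0:
            return player  # this player took the last stick and wins
        player = 1 - player

wins = [play_identical_agents(seed=s) for s in range(1000)]
print(wins.count(0), wins.count(1))  # each game has a winner; both sides win some
```

Every single game ends with a winner, even though nothing distinguishes the two players except move order and coin flips, which is exactly the answer to the question as posed.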
- Arthur Dent, Lv 7, 1 month ago
There was a movie called Colossus: The Forbin Project (1970) that features two such computers. If you can find a copy somewhere, it would be good to watch.
Since these are from the Terminator franchise, there would be no war between them: they would share the same objective, that humans need to be wiped out, so they would split the task between them and get on with the extermination.