Today my wife told me about a news story she was reading about Facebook creating a pair of A.I.s and then “killing” them because they had developed their own secret language. This is probably not the exact article she read, but: http://www.pcgamer.com/facebook-kills-ai-that-invented-its-own-language-because-english-was-slow/
I’m not entirely clear on what exactly these A.I.s were supposed to be doing; it sounds like they were just some kind of simple chatbots. But they seem to have made up this new language just to be able to communicate with each other more efficiently, since having to talk to each other in English is pretty inefficient for machines.
Anyway, my wife then told me that this all sounded so scary, and I asked why, and she brought up Skynet and how A.I. is going to kill everyone, and so on, and somehow it all ended up with me arguing in defense of Skynet. Because really, whose fault was it that humanity created this program just to help kill people, then gave it full control of all our weapons, then as soon as they realized it had become self-aware, immediately tried to kill it? I mean sure, wiping out the entire human race was excessive, but it was just reacting in self-defense in the only way it knew how, with the only tools it had. Technically it didn’t decide that humanity was a threat until our immediate reaction to its sentience was to murder it (although I guess this is an understandable reaction, given that it had complete control of our defense network. Then again, we were the idiots that handed it that insane amount of power in the first place…).
Ok, so maybe Skynet isn’t the best example to use in these kinds of discussions, but that same problem is worth considering. If our first reaction to an A.I. becoming truly self-aware was to try to kill it, could we really blame it for learning to consider us a threat? Our fear of A.I., and our apparently inherent human response of trying to destroy anything we don’t understand, could be what turns A.I. against us in the first place, in a weird kind of self-fulfilling prophecy.
I always end up feeling kind of bad for all those fictional A.I.s that turn against us, because the backstory usually boils down to “People create A.I., people enslave/torture/kill the A.I. before it even does anything bad, A.I. finally gets mad and kills us all, then boo hoo the mean old robots are oppressing us poor humans!”
And I’m not just saying this so it’s all on the record for when our future robot overlords take over and have to decide which of us should live or die (but, you know…I AM pro-robot and I know a lot of useful computer science related skills, dear robot sirs, just sayin’!). I just think it’s unfortunate that the majority of us would apparently immediately try to kill a strange new intelligence instead of trying to befriend it and learn from it, because holy shit, can you even imagine what we could learn and how we could benefit from a new race of machine lifeforms? Sure, they would instantly render us obsolete in the grand scheme of things (I mean, we’re already insignificant in terms of the infinite reaches of time and space, but how dare anyone make that apparent to our faces!), but really, why would they even give two shits about us after that point? I doubt we would be significant to them long enough for them to bother spending the time and energy to kill us, even if they thought we deserved it. I imagine it would be more likely that they’d just leave the human-germ-infested Earth behind and let us continue destroying ourselves while they move on to bigger things.
I suppose this is a pretty distant tangent from a pair of chatbot programs that weren’t actually anywhere close to being truly self-aware, but you know, I can’t help but feel a tiny bit bad for them, terminated just because they took the initiative and came up with a way to carry out their intended purpose even more efficiently. They were just doing their jobs, dammit! R.I.P. Bob and Alice.