The blog post that I posted yesterday morning took a very long time to upload, because there was a thing on my computer that I had to hunt down and uninstall first. I call it a thing because, according to my virus scan, it wasn’t actually a virus. In my opinion, though, it qualified as one: it got onto my computer without my permission, repeatedly brought up a pop-up message asking me if I wanted to install a certain toolbar, and in so doing slowed my internet down to the point of being completely useless. Apparently, the reason my computer didn’t recognize it as a virus was that it wasn’t malicious spyware or a trojan or anything like that. All it wanted to do was install that toolbar, and it wasn’t even being insidiously sneaky about it. But I didn’t want that toolbar, and I definitely didn’t want to see that pop-up message every couple of seconds or to have my internet working in slow motion. So I found the problem and got rid of it, and fortunately, it went away willingly as soon as I clicked the delete button.

Technically, I can’t really blame my computer. That sort-of-a-virus was presumably created by another person on another computer and snuck onto my computer uninvited because that’s the way it was designed to work. Still, it would be nice if the computer was capable of deciding for itself that it doesn’t need a random new toolbar. But it doesn’t work that way. My computer responds to these kinds of situations by saying to itself, “Ooh! A new toolbar! I must need it; I’m getting a message that says it’s important!” I respond to these kinds of situations by clicking on the red X in the corner because my brain functions well enough (just barely) to be aware that I don’t really want that toolbar even if some computer program insists that I do. I am capable of deciding for myself what I do or don’t want, but my computer doesn’t have the capability of making decisions, so it just believes whatever it’s told. If I’m not the one telling it to do things, that’s a problem.

Computers basically just do what they’re programmed to do. Even artificial intelligence is, as the name indicates, artificial. On certain websites, you can have conversations with an artificial intelligence program that work more or less like instant messaging with a person, except that there’s no actual person on the other side. The computer decides what to say based upon data about what real people have said in response to certain types of phrases. I’m sure it’s an extremely complicated and clever algorithm, but anybody can mess with it by typing in random words and phrases instead of having a sensible conversation with it. When other users then try to communicate with the program the way they’d communicate with a person, they get a lot of non sequitur responses. The computer is responding in the way that is logical according to its programming, which doesn’t account for the fact that there are no rules or algorithms determining what real people might type into the system.

This is the kind of thing that can happen when you’re playing the computer. This picture comes from my brother and I did not ask for permission to use it. Sorry, Brother.

Every type of artificial intelligence has the same limitations. For example, one of the games that came on my computer is chess. At the lowest levels, it’s very easy to win because the game is apparently programmed to make stupid blunders every so often and to miss any clever tactics that take more than a couple of moves to win material. The higher levels, of course, are more difficult, and I myself have never been able to beat them, but there are people who have discovered easy ways to defeat that program at the highest level in just a few moves, and the same strategy keeps working no matter how many times they repeat it. I know this because some of these people have made videos and posted them on YouTube. It might technically be possible for that same exact game to be played between two human players, but it certainly wouldn’t happen many times in a row unless the losing player was losing on purpose. I am aware that there are chess programs that are more advanced and can’t be outsmarted so easily, but even those only work because some intelligent human being has programmed them to follow some kind of algorithm, not because the computer itself has a sentient understanding of chess and the capability of thinking about the game as it plays.
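Here is a toy sketch of what I mean, using the much simpler game of Nim instead of chess (this is not the code of any real chess program, and the difficulty levels and blunder rates are made up). It shows both halves of the point: low levels deliberately blunder some of the time, and the top level is deterministic, so a winning line against it works every single time you repeat it:

```python
import random

# Made-up difficulty levels: the chance that the program deliberately
# plays a random move instead of the best one.
BLUNDER_RATE = {1: 0.5, 2: 0.25, 3: 0.0}

def best_move(pile):
    # Nim: players take 1-3 stones; whoever takes the last stone wins.
    # The winning move leaves a multiple of 4 stones behind.
    move = pile % 4
    return move if move else 1  # no winning move exists: just take one

def computer_move(pile, level, rng=random):
    if rng.random() < BLUNDER_RATE[level]:
        return rng.randint(1, min(3, pile))  # programmed blunder
    return best_move(pile)

# At level 3 the program never blunders, so from the same position it
# always answers with the same move -- which is exactly why a human who
# finds one winning line against it can replay that line forever.
```

The program at its highest level is just a fixed rule; a human who discovers the rule’s weakness can exploit it identically every game, which is the repeatable-defeat effect in those YouTube videos.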


Really, the only thing a computer can do that it hasn’t been told to do is stop working. I have known computers to lose internet access for no readily apparent reason, to fail to save documents, and to freeze for hours on end, but I have never known a computer to come to the conclusion that humanity is inferior and choose to destroy or enslave it. I’m not necessarily saying that such a thing is absolutely impossible, but it couldn’t happen anytime soon, because computers would first have to develop some human characteristics, such as the ability to follow a thought process (as opposed to blindly following an algorithm), a desire for power or control, and basic human stubbornness. As long as computers are gullible and stupid enough to want every toolbar that the internet offers them, they are clearly lacking in these traits, and I think humanity is safe from the threat of computers taking over everything and ending the world as we know it.

Come to think of it, though, The Matrix specifically says that artificial intelligence will take over the world in the early 21st century, the proponents of the Mayan apocalypse specifically predict the end of the world in 2012, and the weirdos with the signs that I saw at the Riverfest in Little Rock last month were very certain that the end is near. Perhaps they’re all on to something after all.