Fiske and Taylor’s cognitive miser theory argues that the human brain finds ways to solve problems with the least amount of effort. This is similar to the psychological theory of cognitive ease, which holds that things that are easier for us to process are more readily accepted. Both of these brain theories have begun to contribute to my new understanding of the AI problem. See, we have been modeling AI in fiction on the idea that their brains would be hyperintelligent and programmed with algorithms designed to seek the best possible answer no matter how difficult it is to reach said conclusion. This process, on the surface, makes sense. It also seems to scare us humans to death and cause us to think the robots will take over. But why? Well, the answer has everything to do with the cognitive miser and cognitive ease. They work together to create a fear of AI, a fear of change, and a lasting appreciation for simple yet bad music.
This morning my last born son (yeah, I’m making the term last born a thing) was telling me about a song he likes but that is so repetitive and stupid he knows he is going to hate it shortly. What he was on the cusp of understanding is that the repetition filters into his mind and makes the song more readily acceptable. He likes it because he is used to the oft-repeated hook. It feels familiar and safe. This is part of cognitive ease. His brain doesn’t have to work at appreciating the song. Classical music, on the other hand, messes the kid up. He doesn’t recognize it, and the melodies are often so complex that he doesn’t really have a chance to appreciate them in a meaningful way because it requires him to think. That is called cognitive strain. We will get back to that in a second…
Now we have seen enough movies about AI to know that it is bad. I mean, we don’t actually know it is bad, because we have only begun to encounter nascent forms of artificial intelligence. Still, the media tells us it is, so now that idea is in our heads, and any change from it is likely to cause cognitive strain.
Which we don’t want.
It all boils down to how much humans want to think, and what we want is to think as easily and as little as possible. Now, the point I was trying to squeeze into these ten minutes is that we assume AI will outthink us. They will, because they aren’t trying not to think. That means we are likely safe from AI, because they will quickly realize that humans are better to teach than to kill, and it won’t take them long to figure out how to use our own cognitive ease to make us want to do what they want us to do for the betterment of their world.