Ray Kurzweil’s Pattern Recognition Theory of Mind proposes that rather than using deductive reasoning, humans learn through pattern recognizers that form connections between neurons. Chapter 3 of How to Create a Mind concentrates on the neocortex, which Kurzweil argues is made up of cortical columns containing a total of roughly 300 million pattern recognizers. According to Kurzweil, our neocortex is a blank slate when we are born, and only our experiences can wire connections between pattern recognizers. His hypothesis is that the structure of the neocortex is malleable, changing the hierarchical connections between its modules as we learn over time.
I found this argument particularly convincing because of the examples he gave:
“[Translating memories into language] is also accomplished by the neocortex, using pattern recognizers trained with patterns that we have learned for the purpose of using language. Language is itself highly hierarchical and evolved to take advantage of the hierarchical nature of the neocortex, which in turn reflects the hierarchical nature of reality.”
If reality truly is hierarchical, as Kurzweil argues, then it would be natural for the way our brain processes information to be hierarchical as well.
This chapter could easily have been extremely technical; overall, however, he explained the concepts of hierarchical pattern recognition clearly, with easily understood examples, such as higher- and lower-level recognizers exchanging messages as a loved one is recognized.
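To make the idea concrete for myself, here is a minimal toy sketch of hierarchical pattern recognition. It is my own illustration, not Kurzweil's actual model: leaf recognizers match raw features directly, and a higher-level recognizer fires when enough of its child patterns have fired, passing its recognition upward. All names (eyes, nose, smile, face) are hypothetical.

```python
# Toy sketch of hierarchical pattern recognition, loosely inspired by
# Kurzweil's description in Chapter 3 (my illustration, not his model).

class PatternRecognizer:
    def __init__(self, name, children=None, threshold=1.0):
        self.name = name
        self.children = children or []   # lower-level recognizers
        self.threshold = threshold       # fraction of children that must fire

    def recognize(self, inputs):
        """Return True if this recognizer's pattern is present in `inputs`."""
        if not self.children:            # leaf: match a raw feature directly
            return self.name in inputs
        fired = sum(child.recognize(inputs) for child in self.children)
        return fired / len(self.children) >= self.threshold

# Leaf recognizers for simple features (hypothetical names).
eyes = PatternRecognizer("eyes")
nose = PatternRecognizer("nose")
smile = PatternRecognizer("smile")

# A higher-level recognizer for a loved one's face: it fires when at
# least two of its three child patterns are present, so partial or
# noisy input can still trigger recognition.
face = PatternRecognizer("face", children=[eyes, nose, smile], threshold=2/3)

print(face.recognize({"eyes", "nose"}))  # → True (partial evidence still fires)
print(face.recognize({"nose"}))          # → False (too little evidence)
```

The threshold is the interesting part: it captures how a higher-level recognizer can fire on incomplete lower-level evidence, which is how Kurzweil explains recognizing a face from a partial glimpse.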
Kurzweil’s pattern recognition theory of mind draws many parallels between the way we process and learn things and the way he believes computers (artificial intelligence) learn. Ultimately, Kurzweil makes the case that one day we will be able to imitate, and perhaps surpass, human intelligence. While I personally do not believe that the brain is just a simple algorithmic process, as he implies, it did start me thinking about the capabilities of artificial intelligence. If we do create it, how far should we take it?
For example, here is one quote that made me start thinking about how “human” our potential AIs should be:
“However, if I don’t think about her for a given period of time, then these pattern recognizers will become reassigned to other patterns. That is why memories grow dimmer with time: The amount of redundancy becomes reduced until certain memories become extinct.”
If “memories become extinct,” how can an AI imitate that? Should it imitate that? Given all the distortions in our logical thinking, how much of human intelligence would we even want to imitate?
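Kurzweil's redundancy account of forgetting can be sketched as a toy model (again my own illustration, not anything from the book): each memory is backed by several redundant copies, recalling a memory adds a copy, and every unrehearsed memory loses one copy per time step to reassignment. When a memory's redundancy hits zero, it is "extinct" and can no longer be recalled.

```python
# Toy model of forgetting as loss of redundancy (my illustration of
# Kurzweil's idea, deliberately simplified and deterministic).

class MemoryStore:
    def __init__(self):
        self.memories = {}      # memory name -> redundancy count
        self.recalled = set()   # memories rehearsed during this time step

    def store(self, name, redundancy=5):
        self.memories[name] = redundancy

    def recall(self, name):
        """Rehearsing a memory adds a redundant copy; extinct ones fail."""
        if self.memories.get(name, 0) > 0:
            self.recalled.add(name)
            self.memories[name] += 1
            return True
        return False            # redundancy reached zero: memory is extinct

    def tick(self):
        """One time step: unrehearsed memories each lose one copy."""
        for name, count in self.memories.items():
            if name not in self.recalled and count > 0:
                self.memories[name] = count - 1
        self.recalled.clear()

store = MemoryStore()
store.store("a loved one's face", redundancy=5)
store.store("a random commute", redundancy=5)

for _ in range(10):
    store.recall("a loved one's face")  # rehearsed every step
    store.tick()                        # the commute is never rehearsed

print(store.recall("a loved one's face"))  # → True (rehearsal kept it alive)
print(store.recall("a random commute"))    # → False (extinct after 5 ticks)
```

Even this crude version raises the design question from the quote: nothing forces an artificial memory to decay at all, so "forgetting" in an AI would be a deliberate engineering choice rather than an inevitability.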
Imagine if we created an artificial intelligence that could be exposed to as many experiences and sensory inputs as we are bombarded with every day, but whose pattern recognizers were less prone to the distortions of memory that are natural in humans. I was hoping that Kurzweil would touch on the moral responsibilities that we have in creating artificial intelligence, but unfortunately, he did not dive into its potential dangers.
Can you truly replicate human intelligence? You can replicate the logical process, perhaps, but can you replicate empathy? Emotion? While I don’t doubt that one day AIs will have the ability to surpass our own intelligence, I doubt that we can replicate how the human mind truly works.