Earlier this week, I was talking with someone about the mind and cognition, and I started to pay close attention to how often people refer to the mind as a “computer”. While it is true that the mind performs computation in most of its cognitive capabilities, I believe that referring to the mind simply as a computer is a claim that needs modernization, and that is where Embodied Cognition comes in. In my previous post, I wrote about how Embodied Cognition can arise from metaphors, and what this implies for a universal language, emotional well-being, and machine-to-human communication. In this piece, I show why the mind is more than a computer by analyzing how we conceptualize information, reviewing propositions made by Lawrence Shapiro.
In his book Embodied Cognition, published in 2011, Lawrence Shapiro raises the question: “How do you conceptualize the world?”, then gets more specific by asking: “How do you conceptualize a morel mushroom?”. Shapiro goes on to explain that people would give different answers based on how much knowledge they have about morel mushrooms, their own experience with them, and their own subjectivity. Returning to his original question about how we conceptualize the world, Shapiro puts forward what he calls the “Conceptualization Hypothesis”, which states: “to conceive of the world as a human being does requires having a body like a human being’s” (Shapiro, 2011).
Shapiro’s questions and his Conceptualization Hypothesis made me reflect on how they might apply to the field of Artificial Intelligence. They also made me think about the minimum requirements a machine would need to meet in order to pass the Turing Test.
When I think about how I conceptualize the world around me, there is no denying that my own experience, my subjectivity, and my previous knowledge of something all play active parts in how I think about it. Shapiro answers his question about the conceptualization of a morel mushroom by offering the perspectives of three different people. Although each of their perspectives is different, all three people are referring to the same thing when they talk about morel mushrooms. They share an understanding of what morel mushrooms are, and in Shapiro’s words: “simply, they think differently about morels”.
Seen through the eyes of the Artificial Intelligence field, I believe these questions on conceptualization can play an interesting role in the way we build and interact with machines. To test some of my thoughts, I posed a few questions to Apple’s ‘intelligent personal assistant’ Siri. Here are some of the questions and the responses that I got from Siri:
Q: “How do you conceptualize the world?”
A: “OK, I found this on the web for ‘How do you conceptualize the world’” – this answer was followed by a list of websites from a Google search asking that same question.
Q: “How do you conceptualize a mushroom?”
A: “Here’s what I found on the web for ‘How do you conceptualize a mushroom’” – answer was followed by a list of websites from a Google search.
Q: “Do you like Mushrooms?”
A: “This is about you, Max, not me”
I started to think about the answers that Siri gave me and to test them against some of the points that Shapiro makes. As I mentioned above, Shapiro states that when we humans conceptualize something, we do not only demonstrate our knowledge about it; we also bring in our own experiences and subjective considerations. With this in mind, it makes sense that Siri was only able to answer the questions pertaining to knowledge about conceptualization, and even then her answers consisted only of links. I believe this is due to Siri’s disembodied nature.

The interaction that I had with Siri is a good illustration of Shapiro’s Conceptualization Hypothesis. This is clearest in the moment when I asked Siri if she liked mushrooms and she turned the question back to me. Even though Siri might have some basic knowledge about morel mushrooms and might be able to explain what they are – by attaching links – she is not able to conceptualize morel mushrooms the way people do. The factors that Siri is missing are those that arise when people bring in their previous experiences and their subjective preferences. Given Siri’s disembodied nature, it is safe to say that she cannot answer the question “Do you like mushrooms?”, as she does not have the capability of eating them.

The disembodied nature of many artificial intelligence systems carries setbacks that make it hard for them to pass the Turing Test. I believe this is because, as humans, when we talk about the diverse things we have conceptualized, we tend to mention our experiences, biases, and subjective points of view. We add emotions and personal thoughts to our knowledge, and that is one of the things that differentiates us from machines.
If I am having a conversation with a person and I ask them about morel mushrooms, they will most likely tell me what they know about them, and they will also offer their own perspective on them.
The question that arises, then, is: how can a disembodied machine, or a piece of software, conceptualize information in a manner that will allow it to pass the Turing Test?
I don’t have the answer to that question, but I can offer an initial idea that I think would be helpful:
Make AI Systems Simulate Embodied Thinking
The response Siri gave when I asked her if she likes mushrooms is not one that would occur in a human-to-human interaction. If I posed this question to another person, their response would most likely be one of the following: yes, no, or “I’ve never had mushrooms before.” If Siri had answered with something along the lines of “I’ve never had mushrooms before, what about you?”, then Siri would be simulating her thinking in a way that is closer to how humans conceptualize. The most accessible way for an AI system to simulate embodied thinking is through language. As I showed in my previous post, abstract concepts can be “given a body” through the use of embodied metaphors. If AI systems embody their language, this can be an initial step toward more meaningful conversations between humans and machines, which in turn could lead humans to believe that they are interacting with other humans, or with other intelligent beings.
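To make this idea concrete, here is a minimal, purely hypothetical sketch in Python. It is not how Siri actually works; the function name and the canned replies are my own inventions. It simply shows the rule described above: when a preference question arrives, answer in first person by admitting the missing bodily experience and handing the question back, rather than deflecting outright.

```python
def respond(question: str) -> str:
    """Return a simulated, embodied-sounding answer to a preference question.

    This is an illustrative rule, not a real assistant: it only recognizes
    questions of the form "Do you like X?".
    """
    q = question.lower().strip("?!. ")
    if q.startswith("do you like"):
        topic = q[len("do you like"):].strip()
        # A disembodied system cannot have tasted anything, so it admits
        # the missing experience while keeping the conversation going,
        # the way a person might.
        return f"I've never tried {topic} before - what about you?"
    # Anything else falls back to the familiar deflection.
    return "Here's what I found on the web for that."

print(respond("Do you like mushrooms?"))
```

Even a rule this simple changes the character of the exchange: the reply acknowledges a (simulated) first-person stance instead of redirecting the question, which is the small linguistic step toward embodied-sounding conversation that the paragraph above argues for.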
My interaction with Siri helps to support Shapiro’s Conceptualization Hypothesis. As demonstrated, conceiving of the world as a human being does requires having a body like a human being’s. The mind is more than a computer because when it conceptualizes information, it combines knowledge with experience and with personal subjectivity. What makes us different from machines is that when we conceptualize something, we formulate our own idea of it and our own defining concept for it. Machines, in turn, cannot bring a personal thought or idea to something they might be trying to conceptualize. Perhaps the day on which there is strong evidence against the Conceptualization Hypothesis will be the day on which a machine can pass the Turing Test, for that would mean something need not have a body like a human being’s in order to conceive of the world as a human being does.
Shapiro, Lawrence. Embodied Cognition. 1st ed., Routledge, 2011.