This week, I found myself reading articles from the March 2018 issue of the journal Trends in Cognitive Sciences. The issue's main theme is modeling the predictive social mind and building a multilayered framework for social prediction, as many cognitive scientists believe that “to successfully interact with other people, we must anticipate their thoughts, feelings, and actions” (Drayton, 2018). The same can be said of the interactions that Artificial Intelligence systems have with people: for an AI system to interact with people successfully, it must anticipate their thoughts, feelings, and actions. Drawing on these articles, as well as on recent technological advances, I propose ways in which our growing understanding of social cognition can help us build AI systems capable of having meaningful interactions with people.
In the first article, the researchers argue that humans form impressions of others based on their facial appearance, noting that past work limited face-based judgements to the dimensions of trustworthiness and dominance (Stolier et al., 2018). For example, a smiling mouth can inspire trust, while a heavy brow can convey aggression. Traditionally, research has focused on bottom-up stimulus attributes, showing how particular facial features drive meaningful judgements. Stolier, Hehman, and Freeman propose quantitative techniques to show how bottom-up facial features and top-down social cognitive processes work together to form a dynamic trait space. They note that this trait space (the judgements that are made) is heavily influenced by stereotypes, motives, and experience with one’s own and other cultures. In their theoretical framework, trait-pair similarities in face judgements arise from a combination of facial features and stereotypes.
The second article, written by Diana I. Tamir and Mark A. Thornton, is titled Modeling the Predictive Social Mind. In it, they “propose a multilayered framework of social cognition in which two hidden layers – the mental states and traits of others – support predictions about the observable layer – the actions of others” (Tamir & Thornton, 2018). As they explain, “our social interactions depend on our capacity for social prediction, and our social predictions are predicated on knowledge about other people, such as their mental states or traits” (Tamir & Thornton, 2018). Tamir and Thornton’s three-layer framework for social cognition operates as follows:
- The first layer consists of the observable actions of people.
- The second layer is a hidden layer representing people’s mental states: their thoughts, feelings, and perceptions.
- The third layer is a hidden layer representing people’s traits: individual differences in social identity and personality attributes.
Tamir and Thornton explain that, in the domain of social cognition, people can use their knowledge of others’ mental states and identities to predict the actions those others will take. They argue that people hold intuitions about how traits predict mental states and actions, and about how current mental states predict future mental states and actions. One of their examples: “if one can see that a colleague is currently tired, and one has the intuition that tiredness leads to frustration, then one could make a useful prediction that the colleague may later feel frustrated – but only if tiredness actually precedes frustration with some regularity” (Tamir & Thornton, 2018).
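The layered logic behind the tiredness-to-frustration example can be sketched in code. The following is a minimal, illustrative sketch, not anything from the article itself: every trait name, mental state, probability, and action below is invented. Traits (layer three) bias which mental states a person tends to enter, intuitions about state transitions (layer two) predict the next mental state, and the predicted state implies a likely observable action (layer one).

```python
# Layer 3: traits bias which mental states a person tends to enter.
# (All names and probabilities here are illustrative placeholders.)
TRAIT_STATE_BIAS = {
    "conscientious": {"tired": 0.6, "frustrated": 0.4},
    "easygoing": {"tired": 0.5, "frustrated": 0.2},
}

# Layer 2: intuitions about how mental states follow one another,
# e.g. "tiredness tends to precede frustration."
STATE_TRANSITIONS = {
    "tired": {"frustrated": 0.7, "tired": 0.3},
    "frustrated": {"frustrated": 0.5, "tired": 0.5},
}

# Layer 1: mental states make certain observable actions likely.
STATE_ACTIONS = {
    "frustrated": "snaps at a colleague",
    "tired": "leaves work early",
}

def baseline_state(trait):
    """Most likely current mental state given a person's trait (layer 3 -> 2)."""
    bias = TRAIT_STATE_BIAS[trait]
    return max(bias, key=bias.get)

def predict_next_state(current_state):
    """Most likely next mental state given transition intuitions (layer 2)."""
    transitions = STATE_TRANSITIONS[current_state]
    return max(transitions, key=transitions.get)

def predict_action(current_state):
    """Observable action implied by the predicted next state (layer 2 -> 1)."""
    return STATE_ACTIONS[predict_next_state(current_state)]
```

Chaining the layers reproduces the article's example in miniature: a conscientious colleague is most plausibly tired, tiredness is intuited to precede frustration, and frustration predicts an observable action.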
The ideas presented in Modeling the Predictive Social Mind and in A Dynamic Structure of Social Trait Space offer insight into how we should build AI systems so that machines and humans can have meaningful interactions. From A Dynamic Structure of Social Trait Space, we learn that facial judgements emerge from a combination of facial features and stereotypes. We can then connect this to Modeling the Predictive Social Mind: these judgements correspond to the traits that can in turn predict mental states and actions. So how might we implement this in a machine? The first idea involves our mobile devices.
Apple’s new iPhone X offers revolutionary face technology: with your face, you can unlock your phone and pay for things. Merging this technology with ideas from social cognition could be promising in many fields. In mental health, for example, if we can sometimes infer that someone is happy or sad from their facial expressions, then we can apply that knowledge to devices like the iPhone X. When unlocking your phone with your face, imagine if the phone could recognize from your facial features how you are feeling. Ideally, if someone is feeling sad, the device would recognize this and, through an assistant like Siri, start a conversation to try to understand what is going wrong. Furthermore, when someone approves an Apple Pay purchase with their face, a device informed by both articles could build a trait space from the expressions observed at purchase time and use it to predict future actions. While advancing a mobile device to perform all of these tasks would take an immense amount of work, I believe it is a promising way for machines and humans to hold meaningful interactions. Perhaps such a system might one day pass the Turing Test.
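To make the trait-space idea concrete, here is a hypothetical sketch of how a device might accumulate face-derived impressions over repeated unlocks or purchases into a running trait estimate, and use it to decide whether an assistant should check in. Nothing here reflects any real Apple API; the expression scores, trait names, and threshold are all invented for illustration.

```python
from collections import defaultdict

class TraitSpace:
    """Running average of trait impressions inferred from facial expressions."""

    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def observe(self, expression_scores):
        """Record one observation, e.g. {"happiness": 0.2} from a
        (hypothetical) face-analysis step at unlock or purchase time."""
        for trait, score in expression_scores.items():
            self.totals[trait] += score
            self.counts[trait] += 1

    def estimate(self, trait):
        """Current running estimate for a trait, or None if never observed."""
        if self.counts[trait] == 0:
            return None
        return self.totals[trait] / self.counts[trait]

def suggest_check_in(space, threshold=0.4):
    """Prompt the assistant to check in when estimated happiness runs low."""
    happiness = space.estimate("happiness")
    return happiness is not None and happiness < threshold
```

The design choice here is that a single sad-looking unlock should not trigger anything; only a persistently low running average would prompt the assistant to start a conversation.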
Another futuristic idea that came to mind while reading both articles concerns safe-driving principles. If cars of the future (assuming they are not self-driving) had no keys and instead relied on face recognition to open and start, I believe there is a real possibility of reducing, and eventually eliminating, driving under the influence. If the car could recognize, from facial expression and the surrounding environment, that someone is under the influence of drugs or alcohol, it could deny that person the ability to drive, because it could predict that letting someone in that mental state drive could result in an accident. Nor is this limited to driving under the influence: the same mechanism could apply when a person is extremely exhausted and not in a fit mental state to drive.
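The gating logic behind this idea can be sketched simply: fuse a (hypothetical) face-derived impairment score with context signals into a risk estimate, and deny ignition when that estimate is too high. The scores, weights, and threshold below are invented for illustration and would require real calibration and sensing.

```python
def impairment_risk(face_score, hours_awake, breath_sensor=0.0):
    """Combine face-derived and environmental signals into a risk estimate.

    face_score:    0..1 impairment cue from a hypothetical face analyzer
    hours_awake:   hours since the driver last slept (crude fatigue proxy)
    breath_sensor: 0..1 reading from an optional in-car alcohol sensor
    """
    fatigue = min(hours_awake / 24.0, 1.0)
    # Weighted combination; the weights are placeholders, not calibrated.
    return 0.5 * face_score + 0.3 * fatigue + 0.2 * breath_sensor

def allow_ignition(face_score, hours_awake, breath_sensor=0.0, threshold=0.6):
    """Permit the car to start only when the fused risk stays below threshold."""
    return impairment_risk(face_score, hours_awake, breath_sensor) < threshold
```

A rested, sober driver passes the gate, while strong impairment cues combined with long wakefulness do not; the same single threshold covers both intoxication and exhaustion.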
I believe that new discoveries in social cognition can, and should, be applied to the machines and AI systems being developed today, because they can improve the way machines and humans interact with one another, creating more meaningful interactions. Given that our social predictions are predicated on our knowledge of others’ mental states, machines that can make similar predictions could in turn offer a supportive, helping hand when no one else is around.
Drayton, L. (2018). On the Cover. Trends in Cognitive Sciences, 22(3). Retrieved March 3, 2018, from http://www.cell.com/trends/cognitive-sciences/pdf/S1364-6613(18)30032-9.pdf
Stolier, R. M., Hehman, E., & Freeman, J. B. (2018). A Dynamic Structure of Social Trait Space. Trends in Cognitive Sciences, 22(3). https://doi.org/10.1016/j.tics.2017.12.003
Tamir, D. I., & Thornton, M. A. (2018). Modeling the Predictive Social Mind. Trends in Cognitive Sciences, 22(3). https://doi.org/10.1016/j.tics.2017.12.005