Machine Learning and Conceptual Blending

Machine learning (ML) is dedicated to categorizing, recognizing, predicting, and detecting. For example, suppose we want our system to recognize elephants. In supervised ML, we would provide a huge "ground truth" training set tagged for two categories, elephant and not-elephant, and let the system learn. The goal is a recognition system that picks out patterns consistent with the "elephant" tag but not with the "not-elephant" tag. In the study of multimodal communication, ML has so far been dedicated to finding regular, consistent patterns in expressions. To be sure, recognizing consistent patterns is a crucial part of animal intelligence, but the central feature of cognitively modern thought is the human ability to connect and blend ideas that are in strong conflict with each other.

All languages provide grammatical constructions useful in prompting listeners to such blending. Consider "She's gymnastics royalty," which uses the nominal compound N+N form "gymnastics royalty." What we know of royals is that they do not specialize in gymnastics, are not employed to perform for audiences, are royal from birth or marriage, and do not need to engage in athletic contests where points are awarded toward first, second, and third place. Conversely, what we know of gymnastics is that the gymnast attains status as a gymnast through a history of nonviolent, solo performance, judged against other such performances. Gymnasts are not royals; royals are not gymnasts. If there were any royal who was actually a gymnast, or a gymnast who was a royal, that person would be regarded as a curious outlier. The only overlap in the conceptual frames for "royalty" and "gymnastics" is perhaps "human beings." One should never confuse the two.
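The supervised setup described above can be sketched in a few lines. This is a toy illustration, not any real recognition system: the "features," training examples, and the rule of keeping only features consistent with one tag are all invented here to show the logic of learning patterns that fit "elephant" but not "not-elephant."

```python
# Toy supervised pattern learner: keep only the features that appear under
# the "elephant" tag and never under the "not-elephant" tag.

def train(examples):
    """examples: list of (feature_set, tag) pairs.
    Returns the features consistent with "elephant" only."""
    elephant = set().union(*(f for f, t in examples if t == "elephant"))
    other = set().union(*(f for f, t in examples if t == "not-elephant"))
    return elephant - other

def classify(features, discriminative):
    """Tag an input as "elephant" if it shows any discriminative feature."""
    return "elephant" if features & discriminative else "not-elephant"

# Invented "ground truth" training set.
training = [
    ({"trunk", "gray", "four-legs"}, "elephant"),
    ({"tusks", "gray", "large-ears"}, "elephant"),
    ({"gray", "four-legs", "barks"}, "not-elephant"),
    ({"feathers", "wings"}, "not-elephant"),
]
model = train(training)  # -> {"trunk", "tusks", "large-ears"}
print(classify({"trunk", "gray"}, model))      # elephant
print(classify({"gray", "four-legs"}, model))  # not-elephant
```

The point of the sketch is the limitation the text identifies: a system built this way can only ever reward established consistency; it has no way to handle inputs whose meaning depends on a clash between frames.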
When human beings understand "gymnastics royalty," they are not confusing gymnasts and royals but rather blending the two ideas to form another idea, gymnastics royalty, by selective projection and emergent structure. The XYZ construction is another grammatical construction that prompts for blending, usually without giving any indication of the details of the selective projection or the emergent structure; the human being can be relied upon to know how to go about constructing meaning, even if that human being for some reason lacks some of the knowledge needed to succeed in a given case of blending. Examples of the XYZ construction are "Paul is the father of Sally," "Bakersfield is the Alaska of California," and "Bob is the Mother Theresa of our office." Red Hen tags for conceptual frames but prunes that tagging using Semafor; Semafor tries to filter the candidate conceptual frames according to compatibility and consistency. But Bob and Mother Theresa are not compatible in these ways, and "Mother Theresa of our office" is also impossible in this regard: Bob is decidedly not Mother Theresa, and Mother Theresa could not possibly have any connection to our office.

Even the most basic grammatical forms are used in these ways. A humble brag (Adj + N) is entirely intelligible to the human being even though humility and bragging are opposites: a humble brag involves self-flattering content deployed in discourse in such a way as to hide, or try to hide, the purpose of bragging. "We blocked him from the door" uses the caused-motion clausal construction (NP V NP PP indicating direction, e.g. "He threw the ball over the fence") and the verb "block" to convey not caused motion but instead the prevention of motion. Advanced blending is the hallmark of cognitively modern human cognition. Machine learning, by focusing on established compatibility, deprives itself of working the way human beings work.
Red Hen wants to invent new tagging and recognition methods that would instead search for and recognize prompts to blend ideas in conflict.
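A deliberately naive first pass at such a detector might simply search text for one surface form of the XYZ construction ("X is the Y of Z") discussed above. The regex below is our illustrative assumption, not Red Hen's actual tagging method; a serious system would need syntactic parsing, frame lookup, and some scoring of how strongly the X and Y frames conflict.

```python
import re

# Naive surface pattern for "X is the Y of Z". Capitalized X, then a lazy
# Y span, then Z up to the next punctuation. Many false positives and
# misses are expected; this only illustrates the search target.
XYZ = re.compile(r"\b([A-Z]\w*)\s+is\s+the\s+([\w ]+?)\s+of\s+([\w ]+)")

def find_xyz(text):
    """Return (X, Y, Z) triples for candidate XYZ constructions in text."""
    return [m.groups() for m in XYZ.finditer(text)]

print(find_xyz("Bakersfield is the Alaska of California."))
# -> [('Bakersfield', 'Alaska', 'California')]
print(find_xyz("Bob is the Mother Theresa of our office."))
# -> [('Bob', 'Mother Theresa', 'our office')]
```

Once candidate triples are extracted, the interesting work begins: checking whether the conceptual frames evoked by Y and Z clash, which is exactly where compatibility-driven filtering would wrongly discard the example.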

Would you like to work on this task?
If so, write to 

and we will try to connect you with a mentor.