The More You Know: Musilanguage
“Musilanguage is a term coined by Steven Brown to describe his hypothesis about the ancestral human traits that evolved into language and musical abilities. It is both a model of musical and linguistic evolution and the name of a particular stage within that evolution. Brown states that both music and human language have their origins in a phenomenon known as the “musilanguage” stage of evolution. This model represents the view that the structural features shared by music and language are not the result of mere chance parallelism, nor are they a function of one system emerging from the other; indeed, the model asserts that “music and language are seen as reciprocal specializations of a dual-natured referential emotive communicative precursor, whereby music emphasizes sound as emotive meaning and language emphasizes sound as referential meaning.”
The musilanguage model is a structural model of music evolution, meaning that it views music’s acoustic properties as effects of homologous precursor functions. This can be contrasted with functional models of music evolution, which hold that music’s innate physical properties are determined by its adaptive roles.
Musilanguage hinges on the idea that sound patterns produced by humans fall at varying places along a single spectrum of acoustic expression. At one end of the spectrum we find semanticity and lexical meaning, whereby completely arbitrary patterns of sound are used to convey purely symbolic meaning that lacks any emotional content; this is called the “sound reference” end of the spectrum. At the other end are sound patterns that convey only emotional meaning and are devoid of conceptual and semantic reference points; this is the “sound emotion” side of the spectrum.
Both of these endpoints are theoretical in nature, and music is seen as falling more towards the latter end of the spectrum, while human language falls more towards the former. As we can easily witness, music and language often combine to make use of this spectrum in unique ways: musical narratives that lack clearly defined meaning, such as those of the band Sigur Rós, whose vocals are sung in a made-up language, fall closer to the “sound emotion” end of the spectrum, while lexical narratives such as stories or news articles, which carry far more semantic content, fall closer to the “sound reference” end. It should be noted that language emphasizes sound reference and music emphasizes sound emotion, but language can no more be completely devoid of sound emotion than music can be completely devoid of sound reference. The emphasis differs between music and language, but both are evolutionary subcategories of the musilanguage stage of evolution, which intertwines sound reference and sound emotion much more tightly.”
Scientific American recently posted an article relating this theory to human sadness and the musical construct of a minor third:
“Here’s a little experiment. You know “Greensleeves,” the famous English folk song? Go ahead and hum it to yourself. Now choose the emotion you think the song best conveys: (a) happiness, (b) sadness, (c) anger or (d) fear.
Almost everyone thinks “Greensleeves” is a sad song—but why? Apart from the melancholy lyrics, it’s because the melody prominently features a musical construct called the minor third, which musicians have used to express sadness since at least the 17th century. The minor third’s emotional sway is closely related to the popular idea that, at least for Western music, songs written in a major key (like “Happy Birthday”) are generally upbeat, while those in a minor key (think of The Beatles’ “Eleanor Rigby”) tend towards the doleful.
The tangible relationship between music and emotion is no surprise to anyone, but a study in the June issue of Emotion suggests the minor third isn’t a facet of musical communication alone—it’s how we convey sadness in speech too. When it comes to sorrow, music and human speech might speak the same language.
In the study, Meagan Curtis of Tufts University’s Music Cognition Lab recorded undergraduate actors reading two-syllable lines—like “let’s go” and “come here”—with different emotional intonations: anger, happiness, pleasantness and sadness (listen to the recordings here). She then used a computer program to analyze the recorded speech and determine how the pitch changed between syllables. Since the minor third is defined as a specific, measurable distance between pitches (a ratio of frequencies), Curtis was able to identify when the actors’ speech relied on the minor third. What she found is that the actors consistently used the minor third to express sadness.
“Historically, people haven’t thought of pitch patterns as conveying emotion in human speech like they do in music,” Curtis said. “Yet for sad speech there is a consistent pitch pattern. The aspects of music that allow us to identify whether that music is sad are also present in speech.”
Curtis also synthesized musical intervals from the recorded phrases spoken by actors, stripping away the words, but preserving the change in pitch. So a sad “let’s go” would become a sequence of two tones. She then asked participants to rate the degree of perceived anger, happiness, pleasantness and sadness in the intervals. Again, the minor third consistently was judged to convey sadness.
A possible explanation for why music and speech might share the same code for expressing emotion is the idea that both emerged from a common evolutionary predecessor, dubbed “musilanguage” by Steven Brown, a cognitive neuroscientist at Simon Fraser University in Burnaby (Vancouver), British Columbia. But Curtis points out that right now there is no effective means of empirically testing this hypothesis or determining whether music or language evolved first.
What also remains unclear is whether the minor third’s influence spans cultures and languages, which is one of the questions that Curtis would like to explore next. Previous studies have shown that people can accurately interpret the emotional content of music from cultures different than their own, based on tempo and rhythm alone.
“I have only looked at speakers of American English, so it’s an open question whether it’s a phenomenon that exists specifically in American English or across cultures,” Curtis explained. “Who knows if they are using the same intervals in, say, Hindi?””
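The key quantitative idea in the study is that a musical interval such as the minor third is just a ratio of frequencies: in twelve-tone equal temperament it spans three semitones, a ratio of 2^(3/12) ≈ 1.19 (very close to the just-intonation ratio of 6:5). As a rough illustration of how a pitch drop between two spoken syllables could be checked against a descending minor third, here is a minimal Python sketch; the function names, example frequencies, and tolerance are illustrative assumptions, not the actual procedure Curtis used.

```python
import math

def interval_in_semitones(f1_hz: float, f2_hz: float) -> float:
    """Signed distance from the first pitch to the second, in equal-tempered semitones."""
    return 12.0 * math.log2(f2_hz / f1_hz)

def is_descending_minor_third(f1_hz: float, f2_hz: float, tolerance: float = 0.5) -> bool:
    """True if the second pitch lies roughly three semitones (a minor third) below the first."""
    return abs(interval_in_semitones(f1_hz, f2_hz) + 3.0) <= tolerance

# Hypothetical pitches for the two syllables of a sad "let's go":
# the second syllable falls by a factor of 2**(-3/12), about 0.841.
first_syllable_hz = 220.0                    # roughly the note A3
second_syllable_hz = 220.0 * 2 ** (-3 / 12)  # about 185 Hz, a minor third lower

print(round(interval_in_semitones(first_syllable_hz, second_syllable_hz), 2))  # -3.0
print(is_descending_minor_third(first_syllable_hz, second_syllable_hz))        # True
```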
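Curtis’s follow-up step of stripping away the words while preserving the change in pitch amounts to replacing the two syllables with two bare tones. The sketch below generates such a two-tone descending minor third and writes it to a WAV file; the sine timbre, the 0.4-second durations, and the 220 Hz starting pitch are arbitrary choices for illustration and are not taken from the study.

```python
import numpy as np
import wave

SAMPLE_RATE = 44100  # samples per second

def tone(freq_hz: float, duration_s: float) -> np.ndarray:
    """A plain sine tone with a short fade in and out so the tones don't click."""
    t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    samples = np.sin(2 * np.pi * freq_hz * t)
    fade = int(0.01 * SAMPLE_RATE)  # 10 ms ramp
    envelope = np.ones_like(samples)
    envelope[:fade] = np.linspace(0.0, 1.0, fade)
    envelope[-fade:] = np.linspace(1.0, 0.0, fade)
    return samples * envelope

# Two tones standing in for the two syllables: the second a minor third
# (three semitones) below the first, mirroring the "sad" pitch drop.
first = tone(220.0, 0.4)
second = tone(220.0 * 2 ** (-3 / 12), 0.4)
interval = np.concatenate([first, second])

# Write the bare two-tone interval out as a 16-bit mono WAV file.
pcm = (interval * 32767).astype(np.int16)
with wave.open("minor_third.wav", "wb") as wav_file:
    wav_file.setnchannels(1)
    wav_file.setsampwidth(2)
    wav_file.setframerate(SAMPLE_RATE)
    wav_file.writeframes(pcm.tobytes())
```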
Here’s another interesting video dealing with music and how it is interpreted by other cultures: