When we think about combining music and text, it's usually in the form of an opera. I am not really a fan of opera and honestly mostly don't understand what they are singing about, to whom, or why (I am also very much convinced that half the people who attend an opera feel the same way, but would never admit it), so I have sometimes wished for a translation. But imagine there were a way to translate books into music and make them emotionally understandable, even if you are not a classical music genius or my grandmother. Imagine you could read a book while the perfect theme song for exactly that book played in the background. And I don't mean humming the intro of the TV show while reading "A Song of Ice and Fire". Imagine you could skip reading hundreds of pages of a book you don't like and instead just listen to a song to decide whether the book is worth reading. Just imagine you could listen to a book and emotionally understand what it is about.
There has always been an artistic fascination with hearing the written word.
Hannah Davis is about to change the perception of texts by translating them into sounds with her innovative project.
The New York-based programmer was curious to find out what books would sound like, so she created an impressive data sonification system that lets us hear a story in the form of a composition.
Hannah Davis used data sonification to map a variable of the data to a variable of the sound, such as tempo, drums, or pitch. With this she could generate music from the text and compose songs based on grammar and writing style. This led to some pretty impressive, but not yet quite fitting, examples of Hemingway and Hesse.
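To get a feel for what "mapping a variable of data to a variable of sound" means, here is a minimal sketch. The specific statistics and ranges below (average word length driving pitch, sentence length driving note duration) are invented for illustration and are not Davis's actual system:

```python
# Toy data sonification: map simple text statistics to musical parameters.
# The chosen mappings are hypothetical, purely for illustration.

def sonify(text):
    """Map each sentence to a (pitch, duration) pair."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    notes = []
    for sentence in sentences:
        words = sentence.split()
        avg_word_len = sum(len(w) for w in words) / len(words)
        # Longer average words -> higher pitch (MIDI note numbers 48-84).
        pitch = 48 + min(int(avg_word_len * 4), 36)
        # Longer sentences -> longer note durations (in beats, capped at 4).
        duration = min(len(words) / 8, 4.0)
        notes.append((pitch, round(duration, 2)))
    return notes

melody = sonify("Call me Ishmael. Some years ago, never mind how long "
                "precisely, I thought I would sail about a little.")
print(melody)
```

Feed the resulting (pitch, duration) pairs into any MIDI library and you get a playable, if crude, melody whose contour follows the prose.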
Can you hear the book tonight?
And though every bookworm is already pretty impressed, the composer herself felt that the emotional component was missing. She wanted to translate emotions from one medium to another, from a book to a musical piece. But she also wished to create something with more complexity, order, and emotional accuracy.
So the programmer started to analyze texts on an emotional level using the NRC Word-Emotion Association Lexicon. She divided emotions into negative and positive and matched each with a sound variable. That alone wasn't accurate enough, so she also measured emotions in terms of valence and arousal to create a more faithful translation.
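The basic idea of lexicon-based emotion scoring can be sketched in a few lines. The tiny word list and the valence-to-mode, arousal-to-tempo mappings below are made up for illustration; the real NRC Word-Emotion Association Lexicon covers thousands of words, and Davis's actual mappings are more elaborate:

```python
# Toy emotion lexicon: each word carries a valence (negative/positive)
# and arousal (calm/excited) score. These values are invented examples.
TOY_LEXICON = {
    "dark": {"valence": -0.7, "arousal": 0.3},
    "joy":  {"valence": 0.9,  "arousal": 0.6},
    "fear": {"valence": -0.8, "arousal": 0.8},
    "calm": {"valence": 0.4,  "arousal": -0.6},
}

def emotional_profile(text):
    """Average valence and arousal over lexicon words found in the text."""
    hits = [TOY_LEXICON[w] for w in text.lower().split() if w in TOY_LEXICON]
    if not hits:
        return {"valence": 0.0, "arousal": 0.0}
    return {
        "valence": sum(h["valence"] for h in hits) / len(hits),
        "arousal": sum(h["arousal"] for h in hits) / len(hits),
    }

def to_music(profile):
    """Map valence to mode (major/minor) and arousal to tempo (BPM)."""
    mode = "major" if profile["valence"] >= 0 else "minor"
    tempo = int(90 + profile["arousal"] * 60)  # roughly 30-150 BPM
    return {"mode": mode, "tempo": tempo}

print(to_music(emotional_profile("a dark tale full of fear")))
```

A gloomy passage lands in a minor key at a driving tempo; a serene one comes out major and slow. That, in miniature, is the emotional translation the project performs at book scale.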
The final piece was a composition that translated stories not on a literal level but on an emotional one. So if you have ever wondered (and I bet you have) what "Heart of Darkness" and "A Clockwork Orange" sound like, check out Hannah Davis's TED talk from last year's TEDxVienna conference: