Allison Parrish is a computer programmer, poet, educator, and game designer. Her teaching and practice address the unusual phenomena that blossom when language and computers meet, with a focus on artificial intelligence and computational creativity.
Here, Allison explores the expressive nature of English spelling in her latest project, 'Nonsense Laboratory.'
What is 'The Nonsense Laboratory'?
‘Nonsense Laboratory’ is a series of experimental interfaces for playing with spelling and phonetics. For this project, I wanted to engage with the idea of nonsense: to make up words, to do weird things with English spelling, and to encourage questions like: what does a made-up word like "Jabberwocky" make you feel?
Compasses (2020) by Allison Parrish
One of the rules in the class I teach at NYU is that everyone has to read their work aloud. So many decisions are made in the process of sounding out a text; it’s important to understand those decisions.
For computationally generated poetry in particular, it’s important for there to be a body associated with the text. Sound brings a body to the text, bringing it to life.
Why focus on the sound of words?
As Ursula K. Le Guin writes in Steering the Craft: "The sound of the language is where it all begins and what it all comes back to. The basic elements of language are physical: the noise words make and the rhythm of their relationships."
Apotropaic Variations by Allison Parrish
What was your inspiration?
English is a special language in the way that spelling relates to sound. Most other languages with alphabetic writing systems have a much stricter phonetic correspondence: roughly one letter to one sound.
The English language doesn't have that. Which is why it's so difficult to learn how to spell in English, right? The flip side, however, is that English spelling can actually be really expressive. We can draw from those sophisticated rules as a creative resource.
For this project, I want to build tools that capture the difference in energy between how an English word is spelled and how it sounds in your head when you're reading it, and the feelings that those sounds produce.
The goal is to model those relationships computationally so that we can play with language as a material, in similar ways to how we play with images in Photoshop, or with audio in an audio editor.
How did you build a dataset?
For this project, I am using the CMU Pronouncing Dictionary as my dataset. It's like any dictionary you would pull off the shelf, except that the only thing this dictionary contains is each word and its pronunciation (its phonemes). All in, this dataset contains more than 160,000 words. However, one of the big drawbacks of the resulting model is that it only works for words that are in the dictionary.
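The CMU Pronouncing Dictionary is distributed as a plain-text file with one entry per line: a word, two spaces, then space-separated ARPAbet phonemes. As a minimal sketch (not part of the project's actual code), here is how such entries can be read into a lookup table, using a few real entries inlined as sample data:

```python
# Sample lines in the CMU Pronouncing Dictionary's plain-text format:
# WORD, two spaces, then ARPAbet phonemes (digits mark vowel stress).
sample = """\
HELLO  HH AH0 L OW1
POET  P OW1 AH0 T
NONSENSE  N AA1 N S EH2 N S
"""

def parse_cmudict(text):
    """Map each word to its list of phonemes."""
    lexicon = {}
    for line in text.splitlines():
        if not line or line.startswith(";;;"):  # the real file uses ;;; for comments
            continue
        word, phones = line.split("  ", 1)
        lexicon[word] = phones.split()
    return lexicon

lexicon = parse_cmudict(sample)
print(lexicon["POET"])  # ['P', 'OW1', 'AH0', 'T']
```

In practice, Parrish's own `pronouncing` Python library wraps this dictionary and saves you the parsing step.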
Explore the dataset
Pincelate model diagram (2020) by Allison Parrish
How did you use that data to play with language?
Pincelate is a sequence-to-sequence machine learning model I built for spelling English words and sounding them out. The model uses a recurrent neural network to take a sequence of letters, compress it to a fixed-length vector, and then translate it to a list of phonemes.
This architecture provides a high level of accuracy in the translation task, of course, but it also enables certain expressive uses of the model by manipulating the underlying softmax prediction layers and hidden states.
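The description above can be made concrete with a toy sketch of the input framing such a model reads. This is not Pincelate's actual code: the letter ids and the fixed length below are hypothetical, and the fixed-length vector Pincelate learns is a hidden state inside the recurrent network, not a padded index sequence. The sketch only illustrates how a word becomes a fixed-size numeric sequence before any learning happens:

```python
# Toy sketch of the input side of a grapheme-to-phoneme seq2seq model.
# All ids and sizes here are invented for illustration.

LETTERS = "abcdefghijklmnopqrstuvwxyz"
LETTER_IDS = {ch: i + 1 for i, ch in enumerate(LETTERS)}  # 0 reserved for padding
MAX_LEN = 12  # assumed fixed input length

def encode_word(word):
    """Turn a word into a fixed-length sequence of integer letter ids,
    zero-padded on the right -- the kind of tensor a seq2seq encoder
    consumes before compressing it into a vector."""
    ids = [LETTER_IDS[ch] for ch in word.lower()]
    return ids + [0] * (MAX_LEN - len(ids))

print(encode_word("poet"))  # [16, 15, 5, 20, 0, 0, 0, 0, 0, 0, 0, 0]
```

A decoder on the other side would then emit phoneme ids one step at a time; manipulating the probabilities at each of those steps is what opens up the "expressive uses" mentioned above.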
Allison Parrish's Nonsense Laboratory: Mouthfeel Tuner (2021) by Allison Parrish
Working with Google creative technologist Holly Grimm, we converted the Pincelate model to TensorFlow.js, which allows us to run the model in the browser as part of a web application.
With this new interface, we can begin to play with language as a material, in the same way we can play with images in Photoshop, or with audio in an audio editor.
That’s what this project is all about.
Explore The Nonsense Laboratory by Allison Parrish.
Special thanks to Allison Parrish, Holly Grimm, and Parag K. Mital