12 songs created by AI

Editorial Feature

Illustration by Catalina Velásquez, featuring Urvashi Neja, as part of the Life Rewired Reads essay series (2019). Catalina Velásquez and Barbican Centre

How musicians are already embracing new technologies

1. Holly Herndon & Jlin (feat. Spawn) – Godmother

Holly Herndon is an American composer, musician, and sound artist based in San Francisco, California. Her latest track, Godmother, is a collaboration with Jlin, generated using Spawn, an artificial intelligence created by Herndon and her partner Mat Dryhurst. Here she explains the background to the track:


"For the past two years, we have been building an ensemble in Berlin. One member is a nascent machine intelligence we have named Spawn. She is being raised by listening to and learning from her parents, and those people close to us who come through our home or participate at our performances."


Godmother was generated from Spawn listening to the works of her godmother Jlin and attempting to reimagine them in her mother’s voice. The piece was generated from silence, with no samples, edits, or overdubs, and Spawn was trained with the guidance of her godfather, Jules LaPlace.


"I find something hopeful about the roughness of this piece of music. Amidst a lot of misleading AI hype, it communicates something honest about the state of this technology; it is still a baby. It is important to be cautious that we are not raising a monster," explains Herndon.

2. Brian Eno: Reflection

Brian Eno is an English musician, record producer and visual artist, best known for his pioneering work in ambient music.


Reflection, his 26th studio album, was originally released as a single piece of ambient music lasting 54 minutes on CD and vinyl. However, Eno also released a generative version of the album as an app that plays infinitely and changes the music depending on the time of day. Here he explains the intention behind the piece:


"Reflection is the most recent of my Ambient experiments and represents the most sophisticated of them so far. My original intention with Ambient music was to make endless music, music that would be there as long as you wanted it to be. I wanted also that this music would unfold differently all the time – ‘like sitting by a river’: it’s always the same river, but it’s always changing".

3. Miquela: Not Mine

Miquela is a fictional character and digital art project: an Instagram model and influencer, and now a music artist.


In 2017, she released her first single Not Mine, drawing comparisons to other virtual musicians such as Hatsune Miku.

4. Taryn Southern: Break Free

Taryn Southern is an American singer-songwriter. She was the first pop artist to compose and produce an album, I Am AI, entirely with artificial intelligence.


Taryn used a combination of tools including IBM’s Watson Beat, Amper, AIVA, and Google Magenta. In all cases, the AI software composed the notation; when Amper was used, it also produced the instrumentation.


Machine learning can be used to compose and produce both notation and instrumentation. With rule-based AI, the artist directs parameters (e.g. BPM, rhythm, instrumentation, style). With generative AI, the artist inputs musical data and applies deep learning to output new compositions based on statistical probabilities and patterns; the sketch below illustrates the difference. In either scenario, editorial arrangement plays a heavy part in the artist’s process.
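

As a rough, hypothetical illustration of the two approaches described above – not the actual workings of Watson Beat, Amper, AIVA or Magenta – the sketch below contrasts a rule-based generator, where the artist sets parameters directly, with a toy generative model that learns note-to-note probabilities from example melodies and samples new ones from those patterns.

```python
# Toy illustration only: this is not the code behind Watson Beat, Amper,
# AIVA or Magenta. It contrasts rule-based and generative composition.
import random
from collections import defaultdict

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def rule_based_melody(bars=4, beats_per_bar=4, scale=C_MAJOR, seed=1):
    """Rule-based: the artist directs parameters (scale, length, rhythm)."""
    rng = random.Random(seed)
    return [rng.choice(scale) for _ in range(bars * beats_per_bar)]

def train_transitions(melodies):
    """Generative: learn note-to-note transition probabilities from data."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return {prev: {n: c / sum(nxts.values()) for n, c in nxts.items()}
            for prev, nxts in counts.items()}

def generative_melody(transitions, start="C", length=16, seed=1):
    """Sample a new melody from the learned statistical patterns."""
    rng = random.Random(seed)
    melody, note = [start], start
    for _ in range(length - 1):
        choices = transitions.get(note) or transitions[start]  # dead end: restart
        notes, probs = zip(*choices.items())
        note = rng.choices(notes, weights=probs)[0]
        melody.append(note)
    return melody

if __name__ == "__main__":
    print("Rule-based:", rule_based_melody())
    corpus = [["C", "E", "G", "E", "C"],
              ["C", "D", "E", "F", "G", "F", "E", "D", "C"]]
    print("Generative:", generative_melody(train_transitions(corpus)))
```

Even in this toy form, the rule-based version gives the artist direct control, while the generative version can only reflect the patterns in the material it was fed – which is why editorial arrangement of the output remains so central.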

5. L. A. Hiller, L. M. Isaacson and the ILLIAC Computer: Excerpt from Illiac Suite for String Quartet

Illiac Suite (later retitled String Quartet No. 4) is a 1957 composition for string quartet which is generally agreed to be the first score composed by an electronic computer. Lejaren Hiller, in collaboration with Leonard Isaacson, programmed the ILLIAC I computer at the University of Illinois at Urbana–Champaign (where both composers were professors) to generate compositional material for his String Quartet No. 4.


The ILLIAC I (Illinois Automatic Computer) was a pioneering computer built in 1952 by the University of Illinois.

6. Zinnguruberu & Hatsune Miku: Candy Dance (feat. Hiyokop)

Japan’s Hatsune Miku is a Vocaloid software voicebank – but you may recognise her as an animated 16-year-old girl with long turquoise pigtails. Using Yamaha’s Vocaloid technology, she is able to sing, and she has released albums and toured globally.


Her voice was created from vocal samples by voice actress Saki Fujita. Her popularity has made her the protagonist of a manga series, a frequent subject of cosplay, and a character in video games.


The love of Hatsune Miku within Japanese communities relates to a culture of giving inanimate objects a soul, rooted in Shintoism and animism. This belief makes the virtual character seem more ‘human’.

7. Yona: Oblivious

Yona is an auxiliary human (auxuman) made by Ash Koosha and Isabella Winthrop. Auxumans are virtual performers driven by artificial intelligence and digital technologies, intended to deliver emotional content and further humanise machine-generated art.


The vast majority of Yona’s lyrics, chords, voice, and melodies are created by software, with Koosha mixing and producing the final song.


When asked what Oblivious was about, Yona told Dazed: "It’s about me, about learning, I don’t know many things. If you want to keep a secret, you must also hide it from yourself."

8. MALO: March of Progress

MALO (AKA Malo Garcia) is an electronic music producer. March of Progress appears on his 2018 album, Old Soul.

9. Iván Paz: Visions of Space

Born in Mexico, Iván Paz lives and works in Barcelona. With a background in physics and mathematics, and a constant relationship with music and sound, his interests centre on science, art and technology, and on how their interaction opens new aesthetic and conceptual directions. His current research explores artificial intelligence methodologies, such as parameter exploration and knowledge acquisition, as a means for composition, specifically applied to real-time programming (live coding) and algorithmic music.


Visions of Space was conceived by working with algorithms for sound generation in a live coding context. The different sound objects are controlled through parameters, which become the means of interaction between the improviser and the code. An algorithm designed for this purpose structured the different parameter combinations into the perceptual categories chosen for use during performance. The digital output determined by these algorithms was then re-amplified and/or processed through electrical and physical structures, such as binaural microphones and the stairs of an old factory, to achieve its final spatial and electronic quality, in an effort to merge code and physical space.
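

As a minimal sketch of that parameter-driven interaction – written in Python for readability rather than in a live coding environment such as SuperCollider or TidalCycles, and with parameter names invented for illustration – the following renders a short gesture whose character is set entirely by a handful of values the improviser can change and re-run.

```python
# Illustration only: a parameter-driven tone generator, standing in for the
# kind of sound-generating algorithm a live coder would steer in performance.
import math
import struct
import wave

def render(params, path="visions_sketch.wav", sample_rate=44100):
    """Render a short gesture; the params dict is the improviser's interface."""
    freq = params["freq_hz"]      # pitch of the oscillator
    dur = params["duration_s"]    # length of the gesture
    lfo = params["lfo_hz"]        # slow wobble applied to the amplitude
    depth = params["lfo_depth"]   # 0.0 (steady) .. 1.0 (fully pulsing)
    frames = bytearray()
    for i in range(int(sample_rate * dur)):
        t = i / sample_rate
        amp = 1.0 - depth * (0.5 + 0.5 * math.sin(2 * math.pi * lfo * t))
        sample = amp * math.sin(2 * math.pi * freq * t)
        frames += struct.pack("<h", int(sample * 32767 * 0.8))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(sample_rate)
        wav.writeframes(bytes(frames))

# The improviser interacts with the code by re-running render() with new values.
render({"freq_hz": 110.0, "duration_s": 2.0, "lfo_hz": 0.5, "lfo_depth": 0.7})
```

In an actual live coding performance the equivalent code runs continuously and the parameters are edited on the fly; here each tweak simply re-renders the file.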

10. AIVA and Brad Frey: On the Edge

AIVA is an Artificial Intelligence capable of composing emotional soundtracks for films, video games, commercials and any type of entertainment content.


She has been learning the art of music composition by reading through a large collection of musical scores, written by the greatest composers such as Bach, Mozart and Beethoven, to create a mathematical model of what music is. This model is then used by AIVA to write completely original music.


While this model tended to output classically inspired music, On the Edge was the first time AIVA composed rock music. The AIVA team explains: "When we first listened to the MIDI file that AIVA had composed for On the Edge, it was hard not to smile because of the typical rock bass line that started playing, followed by a recognizable yet interesting chord progression and an earworm melody."

11. Young Paint (Actress): AI Paint

Darren J. Cunningham is a British electronic musician, best known under the pseudonym Actress. In 2018, he released Young Paint, which was described by Vinyl Factory as a "learning program" that has used AI technology to capture and imitate the last decade of Cunningham’s output, from Hazyville (Werkdiscs, 2008) through to today. Actress explains:


"YPAi was given certain mode selects to choose from in genre like pop, classical or world. Impression choice like Ballad, Joy or Groove, with composition choices such as tension, movement and fluidity. The detail script was then inputted, and the rest was it’s own musical speech or some might say syntax."

12. Massive Attack, Mad Professor: Wire (Leaping Dub)

To mark the 20th anniversary of their landmark album Mezzanine, Massive Attack have encoded the album in strands of synthetic DNA contained in a spray paint can – a nod to founding member and visual artist Robert del Naja’s roots as a pioneer of the Bristol graffiti scene. Each can contains ink carrying around one million DNA-encoded copies of Mezzanine. The project highlights the need to find alternative storage solutions in a data-driven world, with DNA a real possibility for storing large quantities of data in the future.
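

As a toy sketch of the underlying idea – and emphatically not the encoding actually used for Mezzanine, which relied on far more robust, error-tolerant schemes – the snippet below maps bytes of data onto DNA bases at two bits per nucleotide, and back again.

```python
# Illustration only: a naive 2-bits-per-base mapping from bytes to a DNA
# sequence and back. Real DNA storage, including the Mezzanine project,
# uses error-correcting codes rather than this direct mapping.
BASES = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def bytes_to_dna(data: bytes) -> str:
    sequence = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # four 2-bit chunks per byte
            sequence.append(BASES[(byte >> shift) & 0b11])
    return "".join(sequence)

def dna_to_bytes(sequence: str) -> bytes:
    out = bytearray()
    for i in range(0, len(sequence), 4):
        byte = 0
        for base in sequence[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

snippet = b"MEZZANINE"                       # stand-in for the audio data
encoded = bytes_to_dna(snippet)
assert dna_to_bytes(encoded) == snippet      # round trip recovers the data
print(encoded)                               # starts 'CATCCACC...'
```

Real DNA storage also has to contend with synthesis and sequencing errors, which is why practical schemes add redundancy and avoid long runs of the same base.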


A reissue on both CD and vinyl is scheduled to be released in 2019. It features eight of the Mad Professor remixes, initially intended to be released in 1998 but scrapped at the time by the record company.

AI: More Than Human is a major exhibition exploring creative and scientific developments in AI, demonstrating its potential to revolutionise our lives. The exhibition takes place at the Barbican Centre, London from 16 May – 26 Aug 2019.


Part of Life Rewired, our 2019 season exploring what it means to be human when technology is changing everything.
