Björk collaborated with New York’s Sister City Hotel and Microsoft to create Kórsafn using AI and choir arrangements from her archives. Images courtesy of Listen.

The Artists Collaborating with Artificial Intelligence to Redefine Music


A rapt chorus rises into a polyphonic crescendo. Ahhh-ohhh-ahhhhh-ohhhhh-OH. And then bursts and settles into a softer hum, sotto voce. Other everyday voices and sounds drift by: the staccato of a woman passing in high heels, the murmur of a man on his smartphone, the ding of an elevator door opening, the roll of suitcases, the chatter of people on a tufted-yet-minimalist sectional that’s piled with coats and bags.

This is the mutable yet also muted lobby vibe of Sister City, a hotel in Manhattan’s Lower East Side. It’s a spot of calm in the swirl of the legendary Bowery, where grit and glitz have long fostered an artistic community of iconoclasts such as Keith Haring and Bob Dylan, plying paint, strumming guitars, and spurring cultural movements. Another is happening in the hotel’s sound-art installation. That choral arrangement anchoring the soundscape was composed by Björk. And birds. The last OH of ahhh-ohhh-ahhhhh-ohhhhh-OH may have been the flap of a bird’s wings as it flew over this New York rooftop—captured on camera and transformed by computer code into music. As birds navigate the city spires, a cappella and algorithms mesh into a form of art produced with artificial intelligence.

Björk has long been a disrupter and is now helping pioneer technology that pushes the boundaries in music and art. She’s said, “I am trying to carve ways to express the spiritual in the digital.” This search for the soulful and sensory was part of her recent wildly theatrical concert series, Cornucopia, and residency in Manhattan during which she stayed at Sister City and began the AI musical collaboration.

The hotel, opened in 2019 by Atelier Ace (Ace Hotel’s in-house creative team), was conceived as a refuge that draws from, among other things, the functionality of Finnish saunas, Japanese bento boxes, and John Cage’s 4’33”. The Japandi aesthetic includes Noguchi lights and a wooden valet that folds away like origami, but it’s the aura of a musical composer like Cage that inspires the soundscape. His short and silent piece, four minutes and 33 seconds long, reframed music. Cage said of the debut performance in 1952: “You could hear the wind stirring outside during the first movement. During the second, raindrops began pattering the roof, and during the third, people themselves made all kinds of interesting sounds as they talked or walked out.” The MoMA says Cage saw silence as “a way to attune audiences to the soundtrack of everyday life, to bring them to consider all the sounds around them as music, thus undoing the idea of a hierarchy of sound, opening up the infinite possibilities of ambient sound, and rethinking the very notion of what music is.”

Sister City soundscape

The soundscape of Manhattan’s Sister City hotel is composed by Björk, computers, and the city’s ambient sounds.

Nearly 70 years later, this revolutionary concept is reinterpreted in Sister City’s lobby score, composed with AI and a camera on a rooftop where other raindrops patter. And there’s a through line from Cage to Brian Eno’s Music for Airports, an early example of an album generated by systems (in this case, a tape machine as compositional tool) and another influence on the hotel soundscape, where arts and culture intersect with commercial design and public space. The Atelier Ace team had long wanted to commission a bespoke ambient score, and took a cue from Sister City’s address on the Bowery to make the hotel a cultural arena for showcasing something more unconventional.

The first iteration of the lobby score was composed by experimental musician Julianna Barwick. “My music is very abstract and interpretive, and I’m a filter for stimuli,” she’s said of collaborating with AI (by way of that rooftop camera and Microsoft software) and Brooklyn-based Listen, a company that’s crafted sound experiences for other artists including Childish Gambino. The AI was programmed to trigger sounds based on whether it detected rain, sun, moon, clouds, or birds, and loop them into a sound-art installation that Barwick called Circumstance Synthesis. Together, she and the AI created a composition that’s of a particular place and context—a constantly refreshing and revolutionary riff.
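The system described above is essentially a set of detection-to-sound rules. As a minimal sketch (assuming a simple label-to-layer mapping; the names, file paths, and detector below are illustrative inventions, since Listen’s actual pipeline is not public), it might look like this:

```python
# Hypothetical sketch of a rule-based generative soundscape:
# a detector labels what the rooftop camera sees, and each label
# triggers a corresponding looping sound layer. All names here are
# assumptions for illustration, not Listen's real system.

SOUND_LAYERS = {
    "rain": "rain_pad.wav",
    "sun": "bright_drone.wav",
    "moon": "night_hum.wav",
    "clouds": "cloud_swell.wav",
    "birds": "choral_flutter.wav",
}

def detect_sky(frame_description):
    """Stand-in for a vision model: return the labels present in a frame.

    A real system would run an image classifier on camera frames;
    here we just scan a text description for known labels.
    """
    return [label for label in SOUND_LAYERS if label in frame_description]

def compose_loop(frames):
    """Build a playlist of sound layers triggered by the detections."""
    playlist = []
    for frame in frames:
        for label in detect_sky(frame):
            playlist.append(SOUND_LAYERS[label])
    return playlist

playlist = compose_loop(["clouds over rooftop", "birds in rain"])
```

In practice the interesting design question is less the trigger logic than how the layers blend: Barwick’s ambient stems could loop and crossfade continuously, whereas (as Milton notes later) Björk’s choral clusters bloom momentarily out of silence.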

Julianna Barwick uses AI to create music

Musician Julianna Barwick collaborates with Brooklyn-based technology company Listen.

Björk’s subsequent generative soundscape is called Kórsafn, Icelandic for “choral archives” (kór and safn), and was created with arrangements performed by the Hamrahlid Choir, in which Björk sang as a teen. Steve Milton, a founding partner at Listen, describes it as “a sonic white canvas out of which these choral clusters would bloom. At first, the idea of these densely layered clusters might have seemed a bit jarring, but once we tried it out in the space, we realized how it was wonderful—that’s the brilliance of Björk. She knew.”

Again, the composition is ever-changing and evolving with the AI, which has learned and is able to distinguish with greater detail and accuracy between entities in the sky—not just a cloud but cumulus or nimbus and perhaps one day the flap of four wings on two birds or 40 on 20. And, despite working with the same AI, the two artists’ resulting musical interpretations are unique. “While Julianna’s piece was ambient, linear, and constant,” Milton says, “Björk’s was non-linear, ephemeral, and momentary.”

Philippe Pasquier, founder and director of the world-renowned Metacreation Lab in the School of Interactive Arts and Technology at SFU, is thrilled to learn of such projects being helmed by artistic stars. He co-created a generative soundscape at Montreal’s Concordia University in 2007, where narrow sound beams—a soundtrack of whispers—reverberated against the walls of a public hall to foster intimacy and engagement. People would stop, listen, and ask, “Can you hear that?” The AI-generated sound installation prompted “a nice social trigger where people would talk to each other,” Pasquier says.

Zeta audio-visual installation artwork

Zeta, a touch-based audio-visual installation artwork at the Zurich University of the Arts. Image courtesy of Philippe Pasquier.

It’s about interfacing humanity with technology while also facilitating artistic creativity, Pasquier says: “The main application of computing is communication.” AI is making it easier for artists to express themselves—and for the rest of us to experience that art. That could be Björk sharing her “choral museum” with a generative algorithm, the use of AR (augmented reality) by Art Basel, which went OVR (online viewing rooms) and interactive this past year, or breath-stimulated VR (virtual reality) called Respire and AI-drawn imagery in PrayStation (two applications created by Pasquier and his SFU team).

Pasquier’s focus is musical creativity, but he and his students collaborate with dancers, video game makers, and visual artists. “Every artistic domain I know of has some AI,” Pasquier says. “Some things can be automated quite successfully now or maybe later, or maybe never. And some things may be desirable to automate, or not.” The physical act of painting, for instance, with its sensory components of smell and touch—“all that pleasure,” Pasquier says—can’t be replaced by software that paints. And yet the AI-generated artwork Portrait of Edmond Belamy sold at Christie’s auction house in 2018 for $432,500 (U.S.).

Pasquier says that “if AI is the medium of the times, then of course we should make culture with it,” as well as bridge artistic practice with scientific research. Björk, too, has used her immersion in all this technology to become multidisciplinary, with a series of educational Biophilia apps, each corresponding to a song, with scientific, musicological, artistic, and interactive components. Call it “poetical science,” the term coined by Ada Lovelace (Lord Byron’s daughter, who is acknowledged as the original computer programmer) in 1843, when she helped write the very first algorithm for the Analytical Engine.

A mathematician and harp player who studied poetry, Lovelace wrote of the computer’s potential to “act upon other things besides number.” She said, “Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”

She supposed and, ahhh-ohhh-ahhhhh-ohhhhh-OH, that poetical science has since come to life via a musical artist, rooftop camera, rolling clouds, and some 175 years of computer code in a downtown Manhattan hotel lobby. And perhaps the ping of a bird’s wing.


This story is from our Spring 2021 issue.

Post Date: May 18, 2021