When Technology Makes Music More Accessible

https://www.nytimes.com/2022/11/03/arts/music/technology-disability-music.html

LONDON — As the audience at Cafe OTO, a venue here, settled down to hear Neil Luck introduce his ambitious new piece, “Whatever Weighs You Down,” bemused smiles flickered across many faces.

The evening’s performances had already featured an intriguing selection of musical technologies, including sensor gloves, text-to-speech software and recordings of bird song processed by artificial intelligence.

So when Luck launched into a low-tech étude, raucously inflating a balloon while gasping into a microphone, audience members couldn’t help but laugh.

A dark humor punctuated “Whatever Weighs You Down,” a bizarre, violent 40-minute work for piano, video, electronics and sensor gloves. It was the centerpiece of an evening that presented works made with Cyborg Soloists, a multiyear, 1.4 million-pound ($1.6 million) project, led by the pianist and composer Zubin Kanga, to advance interdisciplinary music-making through new interactions with technology.

“Whatever Weighs You Down” is one of several experimental works, recently premiered in Britain and Ireland, that show the rich musical possibilities that open up when disability and neurodiversity are incorporated into the creative process. These works also point to newly developed technologies both as malleable tools for expressing diverse perspectives in experimental music and as a potential route into composition, a world that has traditionally been rarefied and exclusive.

In recent years, increasing attention has been paid, particularly in Britain, to making classical music more accessible. This includes the widespread adoption of what are called relaxed performances in concert halls — where audiences are allowed to make noise — and the creation of professional ensembles for disabled musicians, such as BSO Resound, part of the Bournemouth Symphony Orchestra, and the Paraorchestra, which is based in Bristol, England.

For “Whatever Weighs You Down,” Luck worked closely with the Deaf performance artist Chisato Minamimura, who in the piece appeared on a video screen and used sign language to retell her own dreams about falling, one of the main themes of Luck’s work.

In “Whatever Weighs You Down,” Minamimura wanted to express a deaf perspective on sound and music. “I have hearing loss, but I can feel things — I can feel sounds,” she said in a recent video interview via an interpreter. Workshops to develop the piece involved Minamimura responding to vibrations wherever she could find them: pressing her full body against the lid of the piano, feeling the underside of the soundboard and even biting the strings of certain instruments.

As the performance of “Whatever Weighs You Down” drew to a close, it reached a striking semi-synthesis. Onscreen, Minamimura’s gestures mirrored Kanga’s onstage hand movements. Both performers provided a kind of accompaniment for each other, experienced in entirely different ways by audience members, depending on their relationship to sound.

“Traditionally, music is just heard in an auditory sense,” Minamimura said, “but, of course, we can see someone playing a piano or playing a flute. For me, technology means incorporating a film, visuals, or a general feeling of something else; we’re adding more sensory experiences for an audience.”

Creating music that incorporates multisensory experience is just one of the areas Cyborg Soloists explores. The project, supported by the government-funded U.K. Research and Innovation Future Leaders Fellowship, also involves new types of visual interactions, including virtual reality, the creation of new digital instruments and the use of artificial intelligence and machine learning.

The next frontier for Kanga, he said, is finding a way to translate brain activity from electroencephalogram caps into sound. And in Ireland, a recent installation explores a similar process.

The visual artist Owen Boss described the first time he heard the sonic reproduction of a brain mid-seizure as “an absolutely extraordinary moment,” recalling “a very low-end bass sound, kind of rhythmic, it just emerges in these sweeping, intense bass noises that whoosh in and whoosh out.”

The sound files were created by Mark Cunningham, a professor of neurophysiology of epilepsy at Trinity College Dublin, who analyzed slivers of removed brain tissue that had been put through a process that simulated a seizure. He translated the analysis into binary code, and then into sound. Inspired by those deeply jarring reverberations and his family’s own experience, Boss then began piecing together an installation, “The Wernicke’s Area,” which is named after the part of the brain involved in understanding speech. The installation is showing at the Irish Museum of Modern Art.
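(The article does not detail Cunningham’s actual pipeline. The sketch below, with a hypothetical helper called sonify and stand-in data, only illustrates one common sonification approach: mapping a recorded signal’s amplitude to pitch and rendering the result as audio.)

```python
# Illustrative sketch only: not Cunningham's method, just one common way
# to turn a recorded signal into sound -- louder moments become higher pitches.
import wave
import numpy as np

SAMPLE_RATE = 44100  # audio samples per second

def sonify(signal, duration=8.0, low_hz=60.0, high_hz=400.0):
    """Map a 1-D recording to a continuous pitch sweep."""
    signal = np.asarray(signal, dtype=float)
    span = signal.max() - signal.min()
    norm = (signal - signal.min()) / span if span else np.zeros_like(signal)
    n_samples = int(SAMPLE_RATE * duration)
    # Stretch the normalized recording across the audio's full duration.
    freqs = np.interp(np.linspace(0, 1, n_samples),
                      np.linspace(0, 1, len(norm)),
                      low_hz + norm * (high_hz - low_hz))
    # Integrate frequency into phase so the pitch glides smoothly.
    phase = 2 * np.pi * np.cumsum(freqs) / SAMPLE_RATE
    return 0.5 * np.sin(phase)

if __name__ == "__main__":
    # Stand-in data: a slow oscillation plus noise, not real seizure activity.
    t = np.linspace(0, 1, 2000)
    fake_recording = np.sin(2 * np.pi * 3 * t) + 0.5 * np.random.randn(2000)
    audio = sonify(fake_recording)
    with wave.open("sonified.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)            # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes((audio * 32767).astype(np.int16).tobytes())
```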

In 2014, Boss’s wife, Debbie Boss, had surgery to remove a brain tumor. The procedure was successful — the tumor was removed from her brain’s Wernicke’s area — but there were some side effects: The former soprano developed epilepsy and also now finds communication challenging.

With his wife’s permission, Boss and the composer Emily Howard created what he calls “a portrait of Debbie,” a multimedia work including details from the diaries she kept of her seizures, images of her brain, warped snippets of her favorite Handel aria and a variety of electroacoustic music drawn from data produced by artificially induced brain seizures.

For all involved, the first performance of “The Wernicke’s Area” was an extremely moving experience, particularly for the Boss family. Debbie Boss became emotional “watching people do what she couldn’t do anymore,” her husband said. Yet, because she wasn’t directly involved in shaping the work, there’s a slight distance to “The Wernicke’s Area.”

Lived experience plays a large role in the work of the composer Megan Steinberg, which involves neurodiverse and disabled practitioners in all aspects of the creative process.

Steinberg’s “Outlier II,” created with the Distractfold ensemble and the artists Elle Chante and Luke Moore, explores, in musical form, how artificial intelligence, or A.I., can exclude disabled people by working off a generalized understanding of human experience. “Outlier II” involves an A.I.-generated melody that generalizes over time, gradually losing nuance before being disrupted by a series of chance-based improvisations.
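(Steinberg’s actual process is not described here. The toy sketch below simply models the idea of a melody “generalizing”: each pass pulls every note toward the average pitch, flattening its character, before chance-based substitutions disrupt it. The functions generalize and disrupt are illustrative names, not part of the work.)

```python
# Purely illustrative: a toy model of a melody losing nuance, then being
# broken up by chance-based notes -- not the algorithm used in "Outlier II".
import random

def generalize(melody, strength=0.3):
    """Pull each MIDI pitch toward the melody's mean pitch."""
    mean = sum(melody) / len(melody)
    return [round(p + strength * (mean - p)) for p in melody]

def disrupt(melody, n_changes=3, low=55, high=79):
    """Replace a few notes with randomly chosen pitches."""
    out = list(melody)
    for i in random.sample(range(len(out)), n_changes):
        out[i] = random.randint(low, high)
    return out

if __name__ == "__main__":
    melody = [60, 64, 67, 72, 71, 67, 65, 62]  # MIDI note numbers
    for step in range(5):                      # nuance drains away each pass
        print(step, melody)
        melody = generalize(melody)
    print("disrupted", disrupt(melody))
```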

Steinberg considered accessibility from the start of the creative process, and produced scores that were tailored to each performer’s needs.

“That’s so rare in arts environments,” said Chante, a vocalist with hypermobile Ehlers-Danlos Syndrome, a condition affecting her joints. “Normally, it’s like, ‘Oh, we’ve got this thing, and we want it to be accessible.’ Here, it’s, ‘We want to be accessible, and here’s this piece we’re trying to create.’ And that made a giant difference.”

Projects like these also produce music that is more representative of the breadth of human experience, according to Cat McGill, the head of program development at Drake Music, an arts charity focused on music, disability and technology. These projects “force us to challenge our thinking around disability and neurodiversity,” she wrote in an email interview.

“If we approach a situation with the assumption that each individual has a unique contribution to make, rather than feeling like we need to fix them,” McGill added, “we embrace the differences as a natural part of humanity.”