The coronavirus pandemic and the climate crisis both demonstrate the need for new music technology. It is time that people who work with music become more computer literate, says Stefano Fasciani.
In 1951, the avant-garde composer John Cage came up with the idea of performing a piece of music using sounds from outside the concert venue. With 12 radios on stage, sounds produced in other places created a new musical experience for the audience.
Today, this may seem outdated, as we are used to performances that include more than the instruments we see on stage. Nevertheless, performing synchronously in real time from different geographical locations has been very challenging for electronic musicians, until Stefano Fasciani at the Department of Musicology developed a new system for network performances.
“I have set up a collection of tools, in order to allow for different types of computer-based real time music collaboration or music performances between people that are sitting in two different places using computer networks,” says Fasciani, who is a researcher in music technology.
The tool, Networked Electronic Music Tools (NEMT), is something musicians should learn to use, he believes.
“One thing is the coronavirus pandemic, where musicians cannot meet to play. With the climate crisis looming, we will also have to travel less in the future.”
The magic 20 millisecond limit
After COVID-19 put an end to social gatherings in March, musicians have had to think of new ways to perform. Both amateurs and big stars have played quarantine concerts online from private homes or empty concert venues. What you may not know is that the internet creates a delay between the moment the artist starts to sing and the moment you hear the song.
“When streaming, it usually takes up to 15 seconds for the sound and images to find their way from sender to receiver. When you perform from your living room and the audience is sitting somewhere else, that latency does not matter,” says Fasciani.
If, on the other hand, you want to perform together in real time from different locations, the delay matters more.
“The delay on the audio or video signal should always be less than 20 milliseconds. This is difficult to achieve and does not depend on the equipment you have in front of you, but mostly on the network.”
Sound signals take time to move from one place to another, regardless of whether they travel through copper or optical fibers.
“With a 20 millisecond latency, you can still hear the delay. But if I play drums and you play guitar, we will naturally adjust to each other and tolerate it. As the delay gets longer, we both start to slow down.”
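As a rough illustration (not part of Fasciani's toolset), the physical floor on network latency can be estimated from the speed of light in glass fiber, which is about a third slower than in vacuum. The distances and figures below are back-of-envelope assumptions; real connections add routing, buffering and processing delays on top.

```python
# Back-of-envelope propagation delay through optical fiber.
# Light travels at roughly c / 1.47 in glass, i.e. about 204,000 km/s;
# real networks add routing, buffering and processing on top of this.
SPEED_IN_FIBER_KM_S = 300_000 / 1.47  # approximate, km per second

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds over a fiber path."""
    return distance_km / SPEED_IN_FIBER_KM_S * 1000

# The 20 ms budget corresponds to roughly 4,000 km of fiber one way:
max_distance_km = 0.020 * SPEED_IN_FIBER_KM_S
print(f"Reach within 20 ms: {max_distance_km:.0f} km")

# A hypothetical 840 km fiber route stays comfortably within budget:
print(f"840 km of fiber: {propagation_delay_ms(840):.1f} ms")
```

Even under these idealized assumptions, intercontinental distances blow past the 20 millisecond budget on propagation alone, which is why the quality of the network, not the gear on stage, is the limiting factor.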
Research from the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion has shown that there are techniques for dealing with the delay. Musicians almost unconsciously adjust the quality of the sounds to make them sound right. The goal, however, is to avoid latency altogether.
The quality of the network has been the challenge
Fasciani has developed the tool that was missing for coping with this delay. Performing together from two or more locations has been possible before, but only over local or dedicated networks. An example is the Italian system LOLA.
“The problem is that most systems for networked music performances are prototypes, where you cannot rely on tech support if something goes wrong. You still have to be in the same building or geographically close by, or use a private network.”
Another solution is to use stable networks, which most universities and conservatories have access to.
“These networks have stable and symmetrical bandwidth, meaning they have the same capacity to upload data as to download it. When you move to consumer Internet connections, however, you share cables with many others. The priority of your packets is unknown, and the latency is not fixed. Most of these connections are asymmetric, so you have no control over how long it takes to upload and download the data.”
Everyone follows the same clock
Fasciani’s tool has primarily been developed for electronic music.
“When no one is playing acoustic instruments and you use computers or electronic devices that are sequenced, a computer is playing the notes. As a musician you are in control of the process, like a conductor.”
When using multiple machines, they must be synchronized. The challenge in playing with different machines over the network is that existing synchronization strategies, such as clock signals, work well over local networks but not over the Internet.
“You send the audio, which is what you can hear, but it also has a non-audible synchronization clock embedded in it, which can be extracted at the receiver side. If several people play on their own machines, one of the machines is the ‘master’, and its clock dictates the others. When the sound from different machines is mixed together, it is perfectly aligned to the same tempo, as all audio processes were timed against the same clock and mixed with the network latency taken into account as well.”
The master computer becomes the conductor and ensures that all sounds are played synchronously. Latency and unwelcome jitter that would ruin the interaction are taken out of the picture.
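The idea can be sketched in a few lines. The following is a toy illustration, not NEMT's actual implementation, and the function names and figures are invented for the example: a follower machine extracts the master's tempo clock and triggers its next note slightly early, so that after the network delay the audio lines up with the master's beat.

```python
import math

# Toy sketch of master-clock scheduling (illustrative only, not NEMT's code).
# The master's tempo clock is embedded in the audio stream; a follower
# extracts it and schedules notes against that shared timeline.

BPM = 120
SECONDS_PER_BEAT = 60 / BPM  # 0.5 s per beat at 120 BPM

def next_tick(master_clock_s: float) -> float:
    """Next beat boundary on the master's shared timeline, in seconds."""
    return math.ceil(master_clock_s / SECONDS_PER_BEAT) * SECONDS_PER_BEAT

def follower_trigger_time(master_clock_s: float, one_way_latency_s: float) -> float:
    """When a follower should fire its next note so that, after travelling
    over the network, the audio aligns with the master's beat."""
    return next_tick(master_clock_s) - one_way_latency_s

# At master time 1.1 s with 12 ms of measured one-way latency,
# the follower fires 12 ms before the 1.5 s beat boundary.
print(follower_trigger_time(1.1, 0.012))
```

Because every machine schedules against the same clock and compensates for its own measured latency, the mixed result stays on a common tempo grid even though the raw network delay never disappears.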
“This is how a band can play together with electronic instruments. The software is also designed for DJs to play seamless back-to-back sets,” says Fasciani.
Music needs good technology
Both early prototypes for network performances and newer tools, such as Fasciani’s, not only open opportunities for musicians, but also for other forms of communication. But the fact that they have been developed in a musical context is no coincidence, according to Fasciani.
“This is an example of how the arts and humanities can drive innovation in technology. The requirements for sound, image and communication technology are far more demanding in music than in any other field,” he says.
Fasciani is still worried that the field of music performances is not keeping up with new developments in technology.
“In other areas, letters have been replaced by e-mail and records by audio files. It is strange that sound engineers who can set up a stage for a concert in a stadium and make a show for a hundred thousand people do not know how to set up a network-based music performance. But then again—where should they learn it? It should be part of the curriculum in all music programs.”
Literacy in network collaboration
The network-based tools Stefano Fasciani works with are in active use in the international master’s program Music, Communication and Technology, where he is one of the teachers. The program teaches students exactly what Fasciani thinks the musicians and music technologists of today need.
“Musicians, music teachers, sound engineers—everyone involved in music should be literate in the use of network collaboration,” he says.
Fasciani entered the field through the club scene, but also has a background as an engineer.
“Whenever there is a new tool that makes our job easier, we engineers use it straight away. There’s always a little bit of a learning curve, but then you have a much easier life. The uphill stretch is very short. I do not understand why everyone isn’t equally open to innovation,” he says.
The pandemic has highlighted the need to adapt, he emphasizes.
“Many have had it quite easy, while others have struggled. But when you have had more than fifteen years to go online, and have chosen not to, you have simply missed an opportunity.”
New technology allows musicians to perform together in real time and around the globe (2020, December 4)