Virtual Music Performance Communities
The author focuses on the specific topic of networked music performance (NMP) to distinguish the subject of this article from the many other communities related to music. Specifically, NMP is the collaboration of musicians in the composition and performance of music (in layman's terms: practice, improvisation, and "jamming"), using computers to enable performers to work together as they would if they were in the same studio.
Until very recently, technical limitations have made "dislocated performance" impracticable - primarily, the hardware, software, and networking required to transmit music of reasonable quality in "real" time. Progress in the last few years has overcome many of those limitations, and dislocated performance is expected to become feasible in the near future.
VIRTUAL COMMUNITIES OF PRACTICE IN MUSIC
The author considers the various species of virtual community related to music indicating they can be best categorized by their stated purpose or expected outcome. Based on this, the author arrives at five basic types: music sharing communities, audience communities, performance communities, learning communities, and composers' communities.
Music Sharing Communities
Music sharing communities are those in which the primary activity of participants is to post and download music files. The recording industry has prompted regulators to pursue such sites for copyright violation, and many have been shut down or transformed into e-commerce sites where music is sold rather than shared. However, some music sharing sites have continued to exist, working around the legal limitations:
- MySpace was originally designed to facilitate conversations about music, and enables members to share files (the user must indicate that the music is their own composition)
- Some artists have chosen to provide "free" music that users may download and share, as a promotional vehicle
- Some sites (such as the "free sound project") offer a collection of open-license works, which vary from brief sounds to complete songs
- Internet Radio has also filled the gap, enabling users to tune into broadcasts of radio programs (which is merely listening to a broadcast, not sharing "files" of music)
It's also noted that the regulators have not had much success in addressing the massive peer-to-peer networks where music files are shared, and there is some suggestion that the file-sharing software includes (or is rapidly adopting) additional features that would qualify it as a "community."
Audience Communities
An audience community is a site whose primary attraction is discussion of music among consumers. It's noted that certain performance groups (such as a metropolitan orchestra) rely upon the financial support of their patrons, and seek to involve them in community discussions. Also, record labels often attempt to build an official "fan club" online as a marketing vehicle, and there are many unofficial "fan sites" for bands. Finally, some sites exist as all-encompassing communities, not dedicated to a specific performer, but attempting to provide a central forum for discussion.
The author notes that some of these sites provide music-listening features (such as an internet broadcast of a performance), but in these instances, users connect simultaneously and interact with one another online (similar to interactive television) during the performance, which differentiates them slightly from the sites where users go merely to listen independently.
Performance Communities
A "performance" community refers to the use of the Internet for individuals to play music together - i.e., to participate in a virtual "jam session" with other musicians who are geographically removed.
The chief obstacle to doing this is the latency in the audio signal - musicians who perform "together" must hear one another in real time, and even a fraction of a second's delay makes joint performance impossible.
It's noted that technology for audio conferencing already exists - and while it is entirely adequate for speech communication, music requires far greater synchronicity (conversation involves listening to the other party, then speaking in turn, so a slight delay is not an impediment).
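As a rough illustration of why conferencing-grade latency fails for music, the sketch below compares a plausible one-way delay budget against the ~25 ms threshold often cited in NMP research (roughly the time sound takes to travel between two musicians standing 8 m apart). All figures and function names here are illustrative assumptions, not measurements from the project discussed in this article.

```python
# Rough one-way latency budget for networked performance.
# All figures are illustrative, not measurements from any real system.
SPEED_OF_SOUND_M_PER_S = 343.0

def acoustic_delay_ms(distance_m: float) -> float:
    # The delay co-located musicians tolerate naturally: the time
    # sound takes to travel between them.
    return distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0

def oneway_latency_ms(buffer_samples: int, sample_rate_hz: int,
                      network_ms: float, processing_ms: float) -> float:
    # Total one-way delay: audio buffering + network transit + codec work.
    buffering_ms = buffer_samples / sample_rate_hz * 1000.0
    return buffering_ms + network_ms + processing_ms

# Musicians ~8 m apart already hear each other ~23 ms late, which is
# why ~25 ms is often cited as the tolerable one-way budget.
threshold_ms = acoustic_delay_ms(8.0)

# A VoIP-style path: 20 ms buffer + 40 ms network + 5 ms codec = 65 ms -
# comfortably fine for speech, well past the budget for ensemble playing.
total_ms = oneway_latency_ms(960, 48_000, 40.0, 5.0)
print(round(threshold_ms, 1), round(total_ms, 1), total_ms > threshold_ms)
```

The point of the comparison is that buffering and network transit each consume most of the budget on their own, which is why "good enough for conversation" is nowhere near good enough for joint performance.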
The author lists some of the technologies under development to address this problem, but it seems that they have yet to be entirely successful.
(EN: The author does not mention asynchronous performance, in which musicians do not perform simultaneously. It seems to me that it should be possible for each musician to record their "parts" at a different time, and for the members to join the separate tracks to hear the composition - much in the way that a music studio records the separate instruments, plays back the tracks, and composites them afterward.)
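A minimal sketch of the asynchronous "overdub" idea in the note above: each musician records a take at a different time, and the takes are summed sample by sample into a composite mix, sidestepping the latency problem entirely. The function and data are hypothetical illustrations.

```python
# Asynchronous collaboration sketch: separately recorded tracks are
# combined offline, so network latency never matters. Illustrative only.
def mix_tracks(tracks: list[list[float]]) -> list[float]:
    # Sum separately recorded tracks into one mix; shorter takes are
    # implicitly padded with silence (zeros) at the end.
    length = max(len(t) for t in tracks)
    mix = [0.0] * length
    for track in tracks:
        for i, sample in enumerate(track):
            mix[i] += sample
    return mix

guitar = [0.1, 0.2, 0.1, 0.0]
bass   = [0.3, 0.3]           # recorded later; a shorter take
print(mix_tracks([guitar, bass]))  # [0.4, 0.5, 0.1, 0.0]
```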
Learning Communities
Music learning communities may be similar to performance communities, though their focus differs: they center on intermittent communication between individual performers (e.g., the student plays while the instructor observes, or vice-versa), such that perfect synchronicity is unnecessary - it is more in the nature of listening and responding in turn, like a voice conversation.
However, fidelity becomes more of an issue in a learning situation. Even with current connection speeds, audio must be degraded to reduce the volume of data, and the instructor may be unable to determine whether a "defect" in the sound is a problem with the student's technique or with the technology. And while it is possible to produce high-fidelity sound digitally, the higher the fidelity, the more data, and the greater the delay.
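The fidelity/delay trade-off can be made concrete with a back-of-the-envelope bitrate calculation: uncompressed (PCM) audio data volume grows with sample rate, bit depth, and channel count, so higher fidelity means more data to buffer and transmit. The figures below are standard textbook values, not numbers from the project.

```python
# Uncompressed (PCM) audio bitrate: more fidelity means more data,
# and more data means larger buffers or lossy compression - i.e. delay
# or degradation. Figures are standard illustrative values.
def pcm_bitrate_kbps(sample_rate_hz: int, bits_per_sample: int,
                     channels: int) -> float:
    return sample_rate_hz * bits_per_sample * channels / 1000.0

speech_quality = pcm_bitrate_kbps(8_000, 16, 1)    # telephone-grade mono
cd_quality     = pcm_bitrate_kbps(44_100, 16, 2)   # CD-grade stereo
print(speech_quality, cd_quality)  # 128.0 vs 1411.2 kbps: ~11x the data
```

The roughly elevenfold gap is why a connection that handles speech comfortably must degrade the audio before an instructor can hear a student at all.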
It is also noted that teaching music requires the instructor to observe the student as well as hear them, so the problem is not audio alone, but synchronizing the audio to a video track (though the fidelity of the video is not as critical).
Composition Communities
In a composition community, the primary goal of participants is to work collaboratively on creating a work of music (specifically, a score for performance, rather than the product of a performance itself).
The key need of such a community is the ability to transcribe music into symbolic notation - for which a symbolic convention already exists (staves, clef, time signature, notes, and rests). Providing the ability to automatically translate symbolic notation into an audio playback that the user may hear is also a desirable feature (not strictly necessary, but it does make things more efficient), as is the ability to "sample" an audio track and transform it into symbolic notation.
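As a small example of the symbolic side, here is a sketch of translating MIDI note numbers (what a keyboard controller emits) into conventional pitch names for a score. It assumes the common convention that middle C (MIDI note 60) is written C4; the function name is our own.

```python
# Translating MIDI note numbers into pitch names for symbolic notation.
# Assumes the common "middle C = C4 at MIDI 60" octave convention.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def midi_to_pitch(note_number: int) -> str:
    # 12 semitones per octave; MIDI 0 falls in octave -1 by convention.
    octave = note_number // 12 - 1
    return f"{NOTE_NAMES[note_number % 12]}{octave}"

print(midi_to_pitch(60), midi_to_pitch(69))  # C4 A4
```

A real transcription feature would also need durations (mapping note lengths onto the time signature), but pitch naming is the core of the audio-to-notation mapping.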
SCENARIOS AND CASE STUDIES
The DIAMOUSES project seeks to provide a generic platform that will support all of the above activities, though its current focus is on solving the problems of fidelity and latency for live performance.
The author diagrams the platform architecture - fundamentally, a collection of servers that perform various functions to which users connect via the Internet. The end-user hardware consists of their computer, headphones, microphone, and camera - plus some MIDI controllers to connect instruments to the computer.
A "jazz rehearsal" scenario was created to test the platform's capabilities. It involved three musicians connected over a LAN (notably, not the Internet) who attempted to perform music together. The key findings were that the audio component was not useful, musicians differed in their preference for headsets or speakers, there were noticeable defects in sound quality (distracting but tolerable), and while the interruptions due to packet loss were frustrating, they were not debilitating.
Another experiment used MIDI streaming rather than audio (the output of the MIDI device was transmitted, and the "sound" was synthesized on the other user's computer). The key findings were that the video was not useful, the performers felt unable to communicate with one another during performance, and each "felt insecure" about whether what they were playing was getting through to the other performer. In addition, an audience was invited to witness the performance; they found it "difficult to assess," but seemed interested in the concept of dislocated performance.
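The bandwidth argument behind MIDI streaming can be sketched numerically: a MIDI Note On or Note Off message is just 3 bytes (status, pitch, velocity), so even a fast passage generates orders of magnitude less data than a continuous PCM sample stream, with the sound itself synthesized on the receiving machine. The figures and function names below are illustrative assumptions.

```python
# Why MIDI streaming sidesteps the audio-bandwidth problem: events,
# not samples, cross the network. Figures are illustrative.
def midi_bytes_per_second(notes_per_second: float) -> float:
    # Each note needs a Note On and a Note Off message, 3 bytes each.
    return notes_per_second * 2 * 3

def pcm_bytes_per_second(sample_rate_hz: int, bytes_per_sample: int,
                         channels: int) -> int:
    return sample_rate_hz * bytes_per_sample * channels

fast_passage = midi_bytes_per_second(20)           # 20 notes/s: 120 B/s
mono_audio   = pcm_bytes_per_second(44_100, 2, 1)  # 88,200 B/s
print(fast_passage, mono_audio)  # MIDI is ~3 orders of magnitude lighter
```

The trade-off, as the experiment found, is that transmitting events instead of sound changes the experience: the performers lose direct confirmation that their playing is being heard as intended.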
A third scenario dealt with music learning, with a student and teacher interacting over a networked connection using low-quality video and audio transmission. Key findings were that the video was insufficient (due to its small size) for the parties to observe one another, there were "synchronization problems" with the electronic metronome, and it was difficult to switch attention between the various parts of the screen (the score, metronome, and video).
OPEN ISSUES AND FUTURE PERSPECTIVES
The majority of musicians are skeptical about the use of technology to facilitate collaborative performance, being accustomed to interacting in a studio environment where interaction can be more varied and natural, though they seem more accepting of the notion as a way to enable collaboration when it is not possible to meet in a "real" studio.
There is also a concern that the medium could have an impact on the output. While technology may be able to facilitate certain styles of music (long notes, slow tempo), it seems further from being able to provide the required level of synchronicity for other styles ("fast note attacks" in a quick tempo), and it may be difficult for more than two or three performers to collaborate.
Even so, the author seems excited by the potential of technology to merge different "cultures" in the music community, enabling performers from remote parts of the globe to interact freely with one another.