Each instrument of the orchestra is a technical marvel, just as the orchestra itself is. The organ, with the fine control over timbre that its many knobs and stops provide, might be considered the first synthesizer. And concert halls are grand displays of an architect’s mastery of acoustics. All of these, the concert hall included, are simply tools composers use to create music, and those tools can have as much influence on the final product as the composers themselves.
Music has always been a forum for exploring and developing new technologies. Ten technologies, in particular, have had an outsized influence not only on the compositional process but also on the relationship we all have with music. From the software used to record and manipulate audio, to how music is stored, to novel ways of generating tones, these ten developments from the last hundred years or so, presented in rough chronological order, fundamentally changed what it means to be both a composer and a listener.
1. The microphone
Nearly every technological innovation related to music presented here depends on one thing: taking vibrating air – sound – and transforming it into an electric signal, then transforming that signal back into sound. The transformations themselves are achieved via vibrating membranes that we encounter as microphones and loudspeakers. It is such a simple concept that it is almost shocking that it actually works. But as remarkable as these processes are, the signal a microphone captures is extremely faint, making it essentially useless unless something is done to make it louder...
2. Amplification
Amplification fundamentally changed our relationship with music. Before it, music was experienced as an event. After amplification made things like radio and commercial recording possible, music became commonplace, and it became a thing to be owned and replayed rather than an event to be attended.
It also meant that more composers could reach more people than ever with relative ease. Guitars went from being barely audible chamber instruments to beasts capable of deafening thousands in an arena. An entirely new world of sound became possible. The Steve Reich Ensemble could pair the softness of early music vocals with a dozen instrumentalists playing full bore, and John Adams could instruct a soloist to lie on her back while singing an aria. In short, everything changed.
3. LPs
Audio recording has been around since the mid-19th century. The first recordings were purely mechanical engravings in soft material that aged poorly and were difficult to reproduce. It took several decades to perfect the technology, but when it finally happened, sound recordings made it possible for anyone to listen to whatever they wanted in their own home. Today, we associate the recording industry primarily with pop music, but the first recording to sell one million copies was Enrico Caruso singing “Vesti la giubba” from Leoncavallo’s Pagliacci. The medium of recording itself has also been an inspiration to countless composers. Pierre Schaeffer invented musique concrète, music crafted from found recordings, while Phil Kline employs cheap boom-boxes to continuously record and re-record a sound until it becomes unrecognizable.
4. Synthesizers
In the early 1960s, composer Morton Subotnick envisioned a world where anyone could create and perform sophisticated electronic music in their homes. While it took 50 years for that dream to be fully realized, the tool he and engineer Don Buchla forged, the Buchla 100 Series Modular Electronic Music System, was a mighty first step. The “Buchla Box” and all the synthesizers that followed liberated the concept of what an instrument is by allowing a user to combine and manipulate electronically generated signals to create sounds that might never have existed before. Suddenly, any sound a composer might imagine became theoretically, if not always practically, possible.
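To make the idea of combining electronically generated signals a little more concrete, here is a minimal sketch in Python (using the numpy library). Everything in it, the frequencies, the tremolo, the envelope, the output file name, is an arbitrary example rather than a recreation of anything the Buchla actually did: one oscillator is shaped by a slower one and by a decaying envelope, and the combination becomes a new sound.

```python
# A minimal, illustrative sketch of synthesis: two electronically generated
# signals are combined and shaped into a new sound. All parameters here are
# arbitrary examples, not a recreation of any Buchla patch.
import numpy as np
import wave

SAMPLE_RATE = 44100
DURATION = 2.0                                      # seconds
t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

carrier = np.sin(2 * np.pi * 220.0 * t)             # a simple sine oscillator at 220 Hz
tremolo = 0.5 * (1 + np.sin(2 * np.pi * 3.0 * t))   # a slow oscillator used as a control signal
envelope = np.exp(-1.5 * t)                         # a decaying amplitude envelope

signal = carrier * tremolo * envelope               # combine the signals into one new sound

pcm = np.int16(signal / np.max(np.abs(signal)) * 32767)  # scale to 16-bit samples
with wave.open("synth_sketch.wav", "wb") as f:
    f.setnchannels(1)                               # mono
    f.setsampwidth(2)                               # 16-bit
    f.setframerate(SAMPLE_RATE)
    f.writeframes(pcm.tobytes())
```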
A few years after Buchla’s invention, Nonesuch Records released Subotnick’s Silver Apples of the Moon, which was the first work of electronic music to be commissioned by a record label. Its improvisatory feel and energetic rhythms were a departure from the abstract bleeps and bloops that dominated academic electronic music.
5. The portable studio
The Japanese electronics manufacturer TEAC released the first Portastudio in 1979. It was a simplified version of a professional recording studio, allowing musicians to record multiple channels of audio directly onto a cassette tape. Although its original $900 price tag didn’t exactly put it within everyone’s reach, by the 90s these multichannel recorders were available for just a few hundred dollars. Compare that to the thousands required to record a single song in a professional recording studio and suddenly the Portastudio seems like a revolutionary tool.
Today, virtually every laptop comes with multichannel audio recording software built in, complete with software synthesizers and digital effects. Everyone has the capability to create professional-sounding music at home at little or no cost.
6. Samplers
Sampling – copying a snippet of audio for future reuse – may not seem particularly revolutionary. At the heart of sampling, though, is the idea of looping sounds, and that idea alone has been enough to spawn multiple genres of music. Without loops and samples, Steve Reich would never have created Come Out, It’s Gonna Rain, or Different Trains; hip-hop as we know it would not exist; and without the ability to sample and loop live performance, Maya Beiser might be just another talented cellist.
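At the level of code, copying a snippet of audio for future reuse is disarmingly simple. The sketch below, in Python with numpy, is a generic illustration rather than any of these artists’ actual processes; the file name and slice points are placeholders. It lifts a one-second slice from a recording and repeats it eight times.

```python
# A bare-bones illustration of sampling and looping: lift a short slice from a
# recording and repeat it. The input file and slice points are placeholders.
import numpy as np
import wave

# Read a (hypothetical) 16-bit mono source file.
with wave.open("source_recording.wav", "rb") as f:
    rate = f.getframerate()
    frames = f.readframes(f.getnframes())
audio = np.frombuffer(frames, dtype=np.int16)

start, length = 3 * rate, rate                   # a one-second sample starting at 0:03
snippet = audio[start:start + length]            # the "sample"
loop = np.tile(snippet, 8)                       # ...looped eight times

with wave.open("loop.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(rate)
    f.writeframes(loop.tobytes())
```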
Sampling also provides the most effective means to imitate acoustic instruments. Software companies record every member of the orchestra playing every imaginable dynamic and technique in order to create libraries of sounds that composers use to recreate a symphony on the cheap. The results are convincing enough that few realize that the music for most television shows and many films was not performed by live musicians. Rather, it was composed, performed, and recorded by a composer – perhaps with an assistant – in a small studio with a modest budget.
7. Notation software
Itself a game-changing technology, musical notation was the first method humans devised for making permanent copies of music. For hundreds of years, sheet music was the primary method of record keeping and distribution for music. It was useful both for storing sketches of musical kernels and for artfully describing every detail of a symphony. Now, with software such as Sibelius, notation itself becomes a compositional tool.
Instead of relying on their own imperfect imaginations and their prowess at the piano (which might be more developed if they spent less time writing music), composers can input music into Sibelius and hit play. These programs do more than just facilitate the printing of sheet music. Their built-in sound libraries allow composers to be confident that what they print out truly is what they imagined.
8. Mobile devices
It has taken virtually no time for the declaration that mobile devices have changed our lives to become a cliché. Nonetheless, it is true, and our experience of music was the first thing that changed. With a music library in our pockets at all times, music has become more central to our lives, almost like water. But just as water has little commercial value despite the fact that we will die without it, the commercial value of music has been declining rapidly over the last decade. While the ease of digital distribution makes it ever more possible for composers and musicians to find an audience, the middle class of professionals that powered the once-mighty recording industry has virtually vanished. More musicians are being heard, but a smaller percentage of them than ever are making a living from it.
These devices, however, have the capacity to revolutionize music in another way. Brian Eno was among the first to explore the potential of interactive music on smartphones with the iOS app Bloom, a piece of software that generates new music every time a user launches it. Better known is Björk’s partnership with digital artist and software engineer Scott Snibbe that led to Biophilia, the first interactive album released by a well-known artist.
Whether the promise of interactive music in everyone’s pocket will be realized or prove to be a novelty remains to be seen. What is certain is that many more composers will be exploring the possibility in the years to come.
9. Cheap components
Over the last ten years, it has become possible for anyone to cheaply procure electronic components and experiment with them. Like the Arts and Crafts movement of the late 19th century, the Maker movement of today has turned countless curious individuals into inventors and entrepreneurs. Access to inexpensive microprocessors and sensors has inspired many to present novel musical instruments as part of a composition. Moreover, the academic field of New Interfaces for Musical Expression is thriving. In addition to this efflorescence of instrument design, some composers use the components themselves as instruments. Composer Tristan Perich’s 1-Bit Symphony is “performed” by a £1 computer chip embedded in a CD jewel case alongside an on/off switch, a battery, and a headphone jack. It has a gritty, driving sound that demonstrates how harsh constraints can lead to incredible innovation.
10. Visual programming
Composers and computer programmers have much in common. Both use formal languages to make abstract ideas tangible, and both trades tend to require years or decades of training. As composers depend more and more on tools created by software engineers, many feel the need to create their own. Until recently, few had the skills to craft high-quality software, but that changed when composer and engineer Miller Puckette created a visual programming environment tailored for manipulating sound. The result has multiple incarnations: the commercial product Max/MSP and the free software Pure Data.
Max/MSP and Pure Data both operate on the principle of connecting boxes with lines that represent the flow of signals. Want to play a sound? Just click on the box representing that sound, drag a line to the box that represents the computer’s speakers, and listen. It gets more complicated, of course, but composers and media artists can use the technique to do things that previously would have required a team of software engineers to do for them. It also allows them to explore otherwise impossible ideas. Luke Dubois’ Vertical Music, in which twelve musicians are recorded performing a short chamber work on high-definition film and slowed down to one tenth speed, is a disarming example of this marriage of software and compositional craft. Its undulating textures are gorgeous, while the accompanying visuals of the slow-moving performers are deeply hypnotic.
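For readers who have never patched in Max/MSP or Pure Data, the following toy Python sketch hints at the underlying dataflow idea: each “box” produces or consumes blocks of samples, and a “connection” simply hands one box’s output to the next box’s input. The class names and block sizes are invented for the illustration; neither program actually works through this API.

```python
# A toy, conceptual analogue of the box-and-line idea behind Max/MSP and
# Pure Data: boxes pass blocks of samples along their connections.
import numpy as np

class Oscillator:
    """A 'box' that generates a sine wave one block of samples at a time."""
    def __init__(self, freq, sample_rate=44100, block_size=64):
        self.freq, self.sr, self.n = freq, sample_rate, block_size
        self.frames_done = 0

    def output(self):
        t = (self.frames_done + np.arange(self.n)) / self.sr
        self.frames_done += self.n
        return np.sin(2 * np.pi * self.freq * t)

class Speaker:
    """A 'box' standing in for the computer's speakers: here it just collects samples."""
    def __init__(self):
        self.buffer = []

    def input(self, block):
        self.buffer.append(block)

# The equivalent of dragging a line from the oscillator box to the speaker box:
osc = Oscillator(440.0)
dac = Speaker()
for _ in range(100):                     # run the "patch" for 100 blocks
    dac.input(osc.output())

samples = np.concatenate(dac.buffer)
print(f"routed {len(samples)} samples from oscillator to output")
```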