Essay uitgelicht

The Unified Composer-Producer-Performer: How the Digital Age has Democratised Musical Composition


By Manuel Gutierrez Rojas


[M]odern technology has [. . .] shifted the metaphor from exceptional accomplishment on paper by “composers” to exceptional accomplishment on hard disk by “producers.” Moreover, the producer and his machines are on stage, just as the composer was once a performer. At the top of the current charts, one increasingly finds cases in which the producer is the artist is the composer is the producer; and technology is what has driven the change.

— Virgil Moorefield[1]

Personal computers revolutionized every aspect of music making from composition (including nonelectronic composition) to performance to distribution to consumption. And at every level their effect has been to simplify and democratize the art. But in the process they may have dealt the literate tradition a slow-acting death blow.

— Richard Taruskin[2]



Composing music in a digital environment makes for an “immediate process” of composition: writing music while being able to listen to how it sounds at the same time, or real-time composing.[3] It differs from the “abstract process” of traditional options, such as writing the music down on paper and only being able to hear how it sounds once it is performed by the assigned musicians.[4] Indeed, the Digital Audio Workstation (DAW) introduced a more accessible platform for composing music, not only for composers, songwriters, and musicians in general, but also for people without formal musical training. The instantaneousness of these digital tools seems to have popularised and perhaps even standardised the craft of musical composition, and it may have closely connected or even unified the composer with the producer and/or the performer.

With this paper I want to examine how much this democratisation of musical composition has affected the artistic values of the profession, and whether this development can be regarded as an extension of the craft or whether the traditional composer will die out. For instance, how useful is Samuel Adler’s well-respected Study of Orchestration to these new composers?[5] Many basic principles that traditional composers had to master are automated by music software (e.g. the necessary knowledge of a musical instrument’s capabilities and limitations) or pre-performed (performance samples from sound libraries such as EastWest’s). How, then, will these technological developments affect the compositional process?

I will use several books, articles, a private interview with video game composer Chris Hülsbeck, Adler’s aforementioned book, music software, and possibly other media to support my paper.

How Musical Ideas Become Reality

It does not matter much in what language and in which terminology composers happen to think their thoughts: their concepts of what is to be music next are always related to some technological considerations, and this relationship ranges from extreme subtlety to gross obviousness. There ought to be no need at this point to elaborate on the rather commonplace notion that technological considerations show the way from a musical idea to its realization, first in some code and then in a performance, and that technological considerations lead to the availability of the acoustical phenomena needed by composers for an audible representation of their musical ideas. It may be appropriate, however, to remember that musical ideas are thinking models in more or less deliberately stipulated linguistic systems; that [. . .] the complexity of such systems is increasing in many a sense and dimension; and that, therefore, composers now have to turn to technology with the additional request for assistance in handling the systems they stipulate.

— Herbert Brün [6]

Herbert Brün carefully describes the thought process of a composer’s mind and how the medium through which the mind’s ideas are realised can make the process more efficient or more comfortable. Throughout the centuries, the medium for writing music changed. Before digital technology was sophisticated enough to allow composing in a digital environment using software, turning one’s musical idea into a tangible medium for others to perceive could be done by writing it down in the standard of music notation, by performing it, or by recording a performance of the idea, either directly or via its notation. Depending on these options, there are differences in the way the musical idea is documented: musical notation gives instructions for how the idea should be performed, while the recording of a performance documents an interpretation of the idea. In his “Rationalization and Democratization in the New Technologies of Popular Music,” Andrew Goodwin argues that music notation is a way to structure or “bring order to the creation” of music and that “a universal notational system and the precise measurement of tonal and rhythmic differences comes to define what music is.”[7] Adam Basanta further elaborates that the score is not the composition; it is a part of it, “a piece of the puzzle,” and listening to the piece is the other necessary part.[8] Performing the idea can be preferable to writing it down: some musical ideas are very hard or even impossible to capture in standard notation. Conversely, when the composer lacks the skill to perform the idea himself, notation becomes the more beneficial option, for it can instruct another musician who does have that skill.


Human Artificiality

Many prominent tools in digital music production exist to correct imperfections in recorded performances. Recorded MIDI input in particular (MIDI is a standard for connecting computers with other electronic musical devices such as a digital piano) can be edited to such a degree that no digital artifacts remain audible, unlike edited live audio recordings (e.g. the audible side effects of auto-tuning vocals). However, a flawless performance does not necessarily make for a good performance. For instance, a slightly offset beat could be intentional, part of the player’s “feel.” Andrew Goodwin argues that “it is essential to note that arithmetical accuracy is rarely the goal in popular music. Instead, the sometimes elusive quality of ‘feel’ is what most musicians and producers seek, and this usually involves some displacement of the notes away from their mathematically ‘correct’ position.”[9] This human element of “feel,” which can be part of a performer’s style, will increasingly be lost if perfectionism becomes the norm.
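Goodwin’s point about “feel” can be illustrated with a minimal sketch: starting from perfectly quantized MIDI note onsets, a small random displacement of timing and velocity reintroduces a human quality. The jitter ranges below are illustrative assumptions, not measurements of any particular performer.

```python
import random

def humanize(notes, timing_jitter=0.01, velocity_jitter=8, seed=None):
    """Displace quantized MIDI notes slightly from their grid positions.

    notes: list of (onset_seconds, velocity) pairs on a perfect grid.
    timing_jitter: maximum onset displacement in seconds (plus or minus).
    velocity_jitter: maximum velocity change (MIDI velocities run 0-127).
    """
    rng = random.Random(seed)
    humanized = []
    for onset, velocity in notes:
        # Nudge the onset off the grid, never before time zero.
        new_onset = max(0.0, onset + rng.uniform(-timing_jitter, timing_jitter))
        # Vary the strike strength within the legal MIDI range.
        new_velocity = min(127, max(1, velocity + rng.randint(-velocity_jitter, velocity_jitter)))
        humanized.append((new_onset, new_velocity))
    return humanized

# Four quarter notes at 120 BPM (0.5 s apart), all at velocity 80.
grid = [(i * 0.5, 80) for i in range(4)]
performed = humanize(grid, seed=42)
```

This is of course the inverse of what quantization tools do; the point is only that “feel” can be modelled as displacement from the grid, exactly as Goodwin describes.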

If the digital composer is not careful, his digital compositions can easily sound artificial, even if he uses high-quality pre-recorded samples, either because of a lack of knowledge of how a certain musical instrument should be performed or because of too much cleaning up of human imperfections. Indeed, an absence of digital artifacts does not hide an inhumanly flawless performance. During the “Democratization of Performance” panel discussion, Pierre-Luc Senécal notes that the DAW options for using pre-performed music segments—most notably music loops, but also guitar riffs, or even detailed instrumental articulations—can make it difficult for the user to secure his own musical identity, but that they can be beneficial because they “help save time: it is useless to reinvent the wheel every time.”[10] Indeed, using programmed instruments for composing can make for a less obstructive writing process, especially for artists who write their music by performing and recording it rather than by notating it traditionally. Progressive math-metal band Meshuggah wrote their album Catch Thirtythree (2005) using programmed drums, not because drummer Tomas Haake could not perform the parts, but because of the comfortable production process, as Haake points out: “It would have taken away from the initial idea, which is just spur of the moment and lets everything flow freely.”[11] Haake, however, felt it necessary to acknowledge that the drums were programmed, saying:

We know a ton of bands where the drums are programmed, but they will never admit to it. It’s so important to them that people think, “Yeah, you’re playing live.” That was another thing, for me at least, that felt like a release of sorts, just to say, “Yeah, this is programmed,” and just have that out in the open, and then let people decide on whether they think it’s good or not.[12]

This, as Moorefield would describe it, “freeing [oneself] of the notion of performance,”[13] goes back to musique concrète, wherein “notating musical ideas” was seen as a limitation to composition.[14] Selecting and compiling sounds to create music was the way forward.[15] Moorefield furthermore observes that Pierre Schaeffer, who coined the term musique concrète, “composed music in which the studio was the performer. More precisely, they created music which was meant to exist exclusively in recorded form, as tape music.”[16] While musique concrète aims at a certain musical style, the production of such music is very similar to digital music production. Indeed, the term connects neatly to digital composition. After all, the immediacy of contemporary music production encourages composers to create compositions based on sound rather than on tonality and harmony. Mathieu Lacroix argues that “Music is now not an axis of notes/harmony into time, but of sound into time.”[17]

In his Illusions of Liveness: Producer as Composer, Sam Logan observes that “the DAW might become a crutch for the composer who will become reliant on it in order to compose, and who will gain an unrealistic expectation of orchestration and instrumentation.” Even when a composer uses real-time composing only as a draft, a mock-up for an eventual live recording with a real orchestra, it could lead to frustration if the composition turns out to be impossible to perform as cleanly and with the same dynamics by real instrumentalists.[18] He demonstrates this problem with two woodwinds:

[W]hen working with two virtual instruments of flute, playing in its lower register and a bassoon, also in its lower register, by default they will sound at the same volume in the DAW when played back together. A live performance would see the flute being entirely drowned out. On top of this, the budding composer inexperienced in orchestration and instrumentation can easily fall victim to writing outside instrumental range, as well as never coming to grips with the idiomatic eccentricities of the instruments the VST instrument is modelling.[19]

Furthermore, he remarks that DAW users often “use a keyboard as a physical and graphical means of operating the instrument.”[20] This could lead to non-keyboard instruments being performed in a keyboard manner.[21]

Before real-time composing was available, studying the capabilities of each instrument was necessary for the composer to know how to transfer his musical ideas into a performance by the palette of sounds of the musical instruments, as Samuel Adler points out in his Study of Orchestration.[22] In the book’s chapter “The Woodwind Choir (Reed Aerophones)” he mentions the limitations of volume and dynamics for woodwinds such as flutes: “Intensity and volume vary with each woodwind instrument, depending on the range and particular register in which the passage appears. [. . .] For example, the flute and piccolo are very weak in volume in their lowest octave [. . .].”[23]
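The range problem Logan and Adler describe is mechanical enough that a DAW or notation tool could flag it automatically. A minimal sketch follows; the range boundaries are common textbook approximations of written ranges, not authoritative values.

```python
# Written ranges (as MIDI note numbers) for a few orchestral instruments.
# These boundaries are rough textbook approximations for illustration only.
RANGES = {
    "flute": (60, 96),    # C4 to C7
    "bassoon": (34, 75),  # Bb1 to Eb5
    "violin": (55, 103),  # G3 to G7
}

def check_range(instrument, midi_notes):
    """Return the notes that fall outside the instrument's written range."""
    low, high = RANGES[instrument]
    return [n for n in midi_notes if n < low or n > high]

# A passage dipping below the flute's lowest written C4 (MIDI note 60):
problems = check_range("flute", [58, 62, 72])
```

Such a check catches only out-of-range writing; the subtler problem Logan raises, register-dependent balance between instruments, would require weighting each note by the dynamic strength of its register, which no simple lookup table captures.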

The Composer Becoming the Composer–Performer–Producer

The immediacy of composing in a digital environment has given the composer many options for both the performance of the composition and its production. In his “Democratization of Performance,” Senécal argues that “[w]hereas every step of the musical creation was once realized separately by a professional and in a specialized venue, now it can all be done by the same person in one place and sometimes at incredibly low costs.”[24] Although such a digital composer can be more affordable than hiring specialists for each profession separately, this composer–performer–producer unification also burdens him with far more responsibility.

Even so, according to video game composer Chris Hülsbeck, it could still be less expensive in the long run to hire a dedicated orchestra, conductor, recording studio, and recording engineers to record a composition than to use virtual instruments and spend months tweaking them to sound convincing.[25] In a private interview, he discusses in detail the benefits and problems of composing digitally. On hiring real musicians versus sequencing music digitally, he argues that even if a composer has the latest high-quality virtual instruments at his disposal, it is impossible, at least for now, to make his composition match a real orchestra. He compares it to creating an art piece by collaging different photographs together, where the original sources will always remain visible:

[I]t still will look a little [. . .] bit plasticy, if you will. It does not have that soul. [. . .] I always say: when you have sixty/seventy people in an orchestra play your music, everyone of those musicians is interpreting in a way. Even [notated music] with good [detailed] instructions, [the musicians] are interpreting with their own soul, [. . .] how they learned their instrument, what they’re bringing to it. And the combination of all [of that], if you put those seventy people together, that creates something [so] that, that soul is expressed in the music.[26]

Hülsbeck’s analogy connects to Brian Eno’s idea of the composer as a painter: “You’re working directly with sound, and there’s no transmission loss between you and the sound—you handle it. It puts the composer in the identical position of the painter—he’s working directly with a material, working directly onto a substance, and he always retains the options to chop and change, to paint a bit out, add a piece, etc.”[27] This availability of “chop and chang[ing]” at will could lead to a composition that keeps getting edited.[28] Especially in film music, film producers micromanage the composer’s composition process because of this real-time composing aspect, as demonstrated in the short documentary “The Marvel Symphonic Universe.”[29] A film director can hear directly how one part of the piece affects the film and vice versa—so much so that the creativity of the composer is reduced to the producers’ wishes.[30] In his History of Music Production, Richard James Burgess argues that digital music technology cannot be held responsible for the “declining standards of musicianship, bad music on the radio, excessive mediation by producers, and so forth”; only its users can.[31] He remarks that “[t]he producer and the needs of the production dictate the use of the equipment not the other way round. This is conceptually no different than driving a Ferrari in a thirty mph zone.”[32]

One way to prevent digitally programmed compositions from sounding unconvincing performance-wise is to avoid delicate musical elements—ornamental phrasing, gentle vibrato effects, etc.—but this, in turn, limits the compositional freedom of the composer. The idea is similar to how CGI artists[33] can easily fool the viewer with computer-generated background elements that look real, such as those in Brokeback Mountain (dir. Ang Lee, 2005),[34] but will have a much harder time computer-generating a convincing human being, such as the young Arnold Schwarzenegger as The Terminator in Terminator Genisys (dir. Alan Taylor, 2015).[35] This unusable range of digital manipulation confronts digital composers with an “uncanny valley” (a term coined by Masahiro Mori).[36] Entering this valley, no matter how careful or prepared the composer is, can easily turn a convincing yet “safe” composition into a very unconvincing mess. The more realistic these virtual instruments sound, the more difficult it becomes for them to sound convincing.

EastWest’s Hollywood Orchestra is a sound library covering a complete modern classical orchestra. Every section has many different articulations. For instance, for the violins the user can select:

  • bowing positions;
  • articulations such as staccato, marcato, pizzicato;
  • effects such as trills and tremolos;
  • legato (slur) and portamento playing;
  • etc. [37]
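In practice, such articulations are commonly selected with keyswitches: low MIDI notes outside the instrument’s playable range that change how subsequent notes are rendered. The sketch below illustrates the mechanism only; the specific note assignments are invented for illustration and are not EastWest’s actual layout.

```python
# Hypothetical keyswitch map: notes 24-27 (far below the violin's range)
# select an articulation instead of sounding, as many orchestral sample
# libraries do. The assignments here are placeholders, not real ones.
KEYSWITCHES = {24: "sustain", 25: "staccato", 26: "pizzicato", 27: "tremolo"}

def route(events, default="sustain"):
    """Tag each played note with the currently selected articulation."""
    current = default
    tagged = []
    for note in events:
        if note in KEYSWITCHES:
            current = KEYSWITCHES[note]   # keyswitch: change articulation
        else:
            tagged.append((note, current))  # ordinary note: play it
    return tagged

# Play C5 sustained, hit the pizzicato keyswitch, then play C5 again.
tagged = route([72, 26, 72])
# tagged == [(72, "sustain"), (72, "pizzicato")]
```

The design consequence is worth noting: the articulation chosen is state carried between notes, which is exactly why editing a keyswitch earlier in a track silently changes everything after it.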

I experimented with the Slur Runs patch for the violins using Rimsky-Korsakov’s famous “Flight of the Bumblebee” (1899–1900).[38] While the fast runs sound very real (so much so that YouTube flagged the video as a copyrighted upload of a real recording of the piece), every note is perfectly divided within every measure, because the notes were inputted using music notation software (Sibelius) and then transferred to a DAW (Studio One 2), which makes the performance sound too mechanical. I would have to perform the piece on my digital piano for human elements to appear, but this would still be a translation from one instrument (a piano) to others (violins), which still results in performative problems. When I discussed these problems with Hülsbeck, he commented that high-quality samples with many articulations actually make it harder to sound real. He mentions:

There are some sample libraries [that] are trying to capture the soul of the performance, but then they’re limited [to] what they can do. If you make them more flexible, then you have to make them kind of like more robotic sounding, because you want a clean string sound for example, for that note. It’s always going to be that clean string sound. Each time you hit that note, it’s going to sound the same. There are some libraries that try to do several different recordings when you press the note in succession and stuff like that [such as the round-robin sampling technique].[39] [. . .] Right now, the resolution is just not there. [. . .] When you want to try to make your MIDI sampling recording sound good, you have to spend a lot of time. You have to actually put in imperfections in a way. You have to mix different libraries. You have to experiment and do all these things that the musicians in an orchestra do just [from] their experience. And in the end, you’re spending maybe months to make a piece sound, whereas an orchestra could record it in fifteen minutes and you could move on to the next piece.[40]
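The round-robin technique Hülsbeck alludes to can be sketched minimally: the sampler cycles through several alternate recordings of the same note, so that successive hits never trigger the identical sample. The file names below are placeholders, not files from any real library.

```python
from itertools import cycle

class RoundRobinNote:
    """Cycle through alternate recordings of a single note, as round-robin
    sample libraries do, so repeated hits do not sound exactly the same.
    The sample file names used here are invented placeholders."""

    def __init__(self, samples):
        self._cycle = cycle(samples)  # endless rotation over the recordings

    def trigger(self):
        """Return the next recording in the rotation."""
        return next(self._cycle)

note = RoundRobinNote(["violin_C4_rr1.wav", "violin_C4_rr2.wav", "violin_C4_rr3.wav"])
hits = [note.trigger() for _ in range(4)]
# With three recordings, the fourth hit wraps back to the first one.
```

This also makes Hülsbeck’s “resolution” complaint concrete: with only a handful of alternates per note, rapid repeated passages still cycle audibly through the same few recordings.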

On the other hand, the question remains whether the listener cares that what he hears clearly sounds artificial. For example, all the guitars on Britney Spears’s Oops!… I Did It Again (2000) are artificial (as can be heard in her cover of “(I Can’t Get No) Satisfaction” (2000)).[41] And the orchestra in the famous “November Rain” (1991) by Guns N’ Roses is completely synthetic (even if the music video suggests otherwise).[42] Nevertheless, neither Spears’s album nor Guns N’ Roses’ song seems to have been negatively affected by these artificial-sounding instruments.

E-literate Composition: The New Direction of Documenting Written Music

In the 1940s, John Cage looked forward to a future wherein musical notation would become a thing of the past, a future wherein “composers [could] make music directly, without the assistance of intermediary performers.”[43] This paper has demonstrated that this essentially happened because of digital music technology. Though musical notation is still being used, many musical performers can rely on media that do not necessarily need traditional musical notation. Taruskin speaks of a postliterate age:

In preliterate cultures compositions can be fixed in memory and reproduced orally or (with rehearsal) by ensembles of performers; in the postliterate future pieces will go right on being fixed and reproduced in those time-honored ways, but it will also be possible to fix them digitally and reproduce them via synthesizer or via MIDI. Indeed, it is already possible to do these things, even if only a minority of composers now work that way. When a majority of composers work that way, the postliterate age will have arrived. That will happen when—or if—reading music becomes a rare specialized skill, of practical value only for reproducing “early music” (meaning all composed music performed live). [. . .] [T]he availability of technologies that can circumvent notation in the production of complex composed music may eventually render musical literacy, like knowledge of ancient scripts, superfluous to all but scholars.[44]

However, a postliterate age implies an age beyond musical literacy, while digital music production does rely on literacy, only of a digital kind: piano rolls in DAWs, or other digital ways of sequencing the music. Thus electronically literate composition, or e-literate composition, would be the way forward.


As of now, the mixture of human performances with digitally coded or even solely programmed performances can be very useful for the preproduction of a compositional piece, but as an end product it can make the work sound artificial. Unfortunately, because of a DAW’s immediate process of composing, pre- and postproduction have either merged or disappeared completely. Furthermore, the use of virtual orchestras creates compositional limitations, because some orchestral techniques and instruments cannot sound convincing beyond a certain level of realism. This artificiality of digitally programmed composition, resulting in an “uncanny valley” of sound, is the one thing that keeps real musicians, session artists, and orchestras relevant; they still have a purpose. Moreover, the strong manipulative capabilities of real-time composition ironically complicate the composing itself: the composer carries a great responsibility to streamline the writing process efficiently rather than obstruct it. Film composers especially struggle with their superiors micromanaging them.

When the time comes that artificiality is eliminated, or that humanisation options have improved so much that a composition not performed by real people sounds as if it were, it will be important to find out how the traditional performer or artist can still be part of this musical culture. Then again, the music listener might not care about the presence of audibly synthetic instruments, such as the artificial orchestra in “November Rain.”

The traditional skills a composer needs to work professionally have become less of a requirement, because many compositional elements are automated by computer software, such as the range of each musical instrument (albeit in virtual form). A lack of understanding of each instrument’s capabilities and limitations can produce digital compositions that cannot be performed in real life, but that in turn makes the DAW an electronic instrument in itself. Using a virtual instrument beyond what a real performer is capable of playing connects digital composers to musique concrète composition, in which the composition is freed from human performance and the sound is central.

Taruskin’s idea that “[p]ersonal computers [. . .] might have dealt the literate tradition a slow-acting death blow” is not entirely true.[45] A postliterate age, wherein musical notation will be obsolete, does not seem near; instead, a transition to an e-literate age, using electronic devices for notating and composing music, has already happened. This new technology has definitely affected the process of composition, the “way from a musical idea to its realization.”[46]




Adler, Samuel. “The Orchestra—Yesterday and Today.” The Study of Orchestration, 3rd ed., 3–6. New York: W. W. Norton & Company, 2016.

———. “The Woodwind Choir (Reed Aerophones).” The Study of Orchestration, 3rd ed., 164–179. New York: W. W. Norton & Company, 2016.

Brün, Herbert. “Technology and the Composer.” Interpersonal Relational Networks(1971): 1–9.

Burgess, Richard James. “Random Access Recording Technology.” The History of Music Production, 134–146. Oxford: Oxford University Press, 2014.

Eno, Brian. “The Studio as Compositional Tool.” Audio Culture: Readings in Modern Music, 127–130. Cambridge: Da Capo Press, 2004.

Fuller, David. “The Performer as Composer.” Performance Practice Volume II: Music After 1600, 117–146. New York: W. W. Norton & Company, 1990.

Goodwin, Andrew. “Rationalization and Democratization in the New Technologies of Popular Music.” Popular Music: The Rock Era, Volume 2, 147–168. Edited by Simon Frith. London: Routledge, 2004.

Gutierrez Rojas, Manuel. “Contemporary Film Music Production that Changed the Hollywood Score.” Music and the Moving Image, 1–8. Utrecht: Utrecht University, 2017.

Haake, Tomas. “Guest Spots: Meshuggah on the Drumkit from Hell.” Alarm Magazine: A Passion in Discovering Exceptional Music. Accessed April 27, 2017,

Hülsbeck, Chris. Interview by the author, March 7, 2017. Transcribed by the author.

Lacroix, Mathieu. “The Producer/Composer: The Hybridization of Roles and how it Affects Production and Composition of Contemporary Music.” Master’s thesis, NTNU, 2016.

Logan, Sam. “Modern Composition & Production Tools.” Illusions of Liveness: Producer as Composer, 27–34. Master of Musical Arts in Composition exegesis, Massey University and Victoria University of Wellington, 2013.

Moorefield, Virgil. “The Contemporary Situation: Is the Producer Obsolete?” The Producer as Composer: Shaping the Sounds of Popular Music, no page numbers. Cambridge: The MIT Press, 2005. Kindle E-book.

Mori, Masahiro, Karl F. MacDorman, and Norri Kageki. “The Uncanny Valley.” IEEE Robotics & Automation Magazine 19, no. 2, June 2012.

Satterwhite, Brian, Taylor Ramos, and Tony Zhou. “The Marvel Symphonic Universe.” YouTube. Accessed April 30, 2017,

Senécal, Pierre-Luc. “Democratization of Performance.” Panel discussion with Adam Basanta, Nicolas Bernier, Myriam Bleau, Gabriel Dharmoo, and Erin Gee. Moderated by Patrick Saint-Denis. Held at the Canadian Music Centre, Québec Region, April 6, 2015. Canadian League of Composers. Accessed April 7, 2017, www.composition.org/events/


Taruskin, Richard. “Millennium’s End.” The Oxford History of Western Music: Music in the Late Twentieth Century, 473–528. Oxford: Oxford University Press, 2010.

———. “The Third Revolution.” The Oxford History of Western Music: Music in the Late Twentieth Century, 175–220. Oxford: Oxford University Press, 2010.

“Terminator Genisys: Creating a Fully Digital Schwarzenegger.” YouTube. Posted by Wired. Accessed April 30, 2017,

“The Visual FX of Brokeback Mountain.” YouTube. Retitled as “Special Effects of Brokeback Mountain.” Posted by Fricky007. Accessed April 30, 2017,


Motion Pictures

Brokeback Mountain. Directed by Ang Lee. Produced by Diana Ossana and James Schamus. Starring Heath Ledger, Jake Gyllenhaal, Anne Hathaway, and Michelle Williams. Music by Gustavo Santaolalla. Universal City, California: Focus Features, 2005.

Terminator Genisys. Directed by Alan Taylor. Produced by David Ellison and Dana Goldberg. Starring Arnold Schwarzenegger, Jason Clarke, Emilia Clarke, and Jai Courtney. Music by Lorne Balfe. Hollywood, California: Paramount Pictures, 2015.



“Flight of the Bumblebee.” Written by Nikolai Rimsky-Korsakov. Performed by the author using a DAW. From The Tale of Tsar Saltan. Composed in 1899–1900. Retitled to “Flight of the Bumblebee using Virtual Instruments.” YouTube. Posted by ManolitoMystiq. Accessed April 30, 2017,

“(I Can’t Get No) Satisfaction.” Written by Mick Jagger and Keith Richards. Performed by Britney Spears. From Oops!… I Did It Again. Produced by Max Martin et al. New York: Jive Records, 2000.

“November Rain.” Written by Axl Rose. From Use Your Illusion I. Produced by Mike Clink and Guns N’ Roses. New York: Geffen, 1991.

Oops!… I Did It Again. Produced by Max Martin et al. New York: Jive Records, 2000.


[1]Virgil Moorefield, “The Contemporary Situation: Is the Producer Obsolete?,” The Producer as Composer: Shaping the Sounds of Popular Music (Cambridge: The MIT Press, 2005), no page numbers, Kindle E-book.

[2]Richard Taruskin, “Millennium’s End,” The Oxford History of Western Music: Music in the Late Twentieth Century (Oxford: Oxford University Press, 2010), 495.

[3]Manuel Gutierrez Rojas, “Contemporary Film Music Production that Changed the Hollywood Score,” Music and the Moving Image (Utrecht: Utrecht University, 2017), 6.


[5]Samuel Adler, The Study of Orchestration, 3rd ed. (New York: W. W. Norton & Company, 2002).

[6]Herbert Brün, “Technology and the Composer,” Interpersonal Relational Networks (1971): 2–3.

[7]Andrew Goodwin, “Rationalization and Democratization in the New Technologies of Popular Music,” Popular Music: The Rock Era, Volume 2, 147–148, edited by Simon Frith (London: Routledge, 2004).

[8]Pierre-Luc Senécal, “Democratization of Performance,” panel discussion with Adam Basanta, Nicolas Bernier, Myriam Bleau, Gabriel Dharmoo, and Erin Gee, moderated by Patrick Saint-Denis, held at the Canadian Music Centre, Québec Region, April 6, 2015, Canadian League of Composers, accessed April 7, 2017, www.composition.org/events/democratization-of-performance-2.

[9]Andrew Goodwin, “Rationalization and Democratization in the New Technologies of Popular Music,” Popular Music: The Rock Era, Volume 2, 149–150, edited by Simon Frith (London: Routledge, 2004).

[10]Senécal, “Democratization of Performance.”

[11]Tomas Haake, “Guest Spots: Meshuggah on the Drumkit from Hell,” Alarm Magazine: A Passion in Discovering Exceptional Music, accessed April 27, 2017,

[12]Haake, “Meshuggah on the Drumkit from Hell.”

[13]Moorefield, “The Discothèque and Musique Concrète,” The Producer as Composer.

[14]Jean de Reydellet, “Pierre Schaeffer, 1910–1995: The Founder of ‘Musique Concrete,’” Computer Music Journal 20, no. 2 (Summer 1996): 10.


[16]Moorefield, “The Discothèque and Musique Concrète,” The Producer as Composer.

[17]Mathieu Lacroix, “The Producer/Composer: The Hybridization of Roles and how it Affects Production and Composition of Contemporary Music,” Master’s thesis, NTNU, 2016.

[18]Sam Logan, “Modern Composition & Production Tools,” Illusions of Liveness: Producer as Composer, 27–28 (Master of Musical Arts in Composition exegesis, Massey University and Victoria University of Wellington, 2013).

[19]Logan, “Modern Composition & Production Tools,” 27–28.



[22]Adler, “The Orchestra—Yesterday and Today,” The Study of Orchestration, 4.

[23]Adler, “The Woodwind Choir (Reed Aerophones),” The Study of Orchestration, 170.

[24]Senécal, “Democratization of Performance.”

[25]Chris Hülsbeck, interview by the author, March 7, 2017, transcribed by the author.


[27]Brian Eno, “The Studio as Compositional Tool,” Audio Culture: Readings in Modern Music, 129 (Cambridge: Da Capo Press, 2004).


[29]Brian Satterwhite, Taylor Ramos, and Tony Zhou, “The Marvel Symphonic Universe,” YouTube, accessed April 30, 2017,


[31]Richard James Burgess, “Random Access Recording Technology,” The History of Music Production (Oxford: Oxford University Press, 2014), 134.


[33]CGI stands for computer-generated imagery.

[34]“The Visual FX of Brokeback Mountain,” YouTube, retitled as “Special Effects of Brokeback Mountain,” posted by Fricky007, accessed April 30, 2017,

[35]“Terminator Genisys: Creating a Fully Digital Schwarzenegger,” YouTube, posted by Wired, accessed April 30, 2017,

[36]Masahiro Mori, Karl F. MacDorman, and Norri Kageki, “The Uncanny Valley,” IEEE Robotics & Automation Magazine 19, no. 2, June 2012.

[37]See the following document for the complete list:


[38]“Flight of the Bumblebee,” written by Nikolai Rimsky-Korsakov, performed by the author using a DAW, from The Tale of Tsar Saltan, composed in 1899–1900, retitled to “Flight of the Bumblebee using Virtual Instruments,” YouTube, posted by ManolitoMystiq, accessed April 30, 2017,

[39]For more information, see the “Taming the Robin” section of this EastWest/Quantum Leap Hollywood Solo Instruments review:

[40]Hülsbeck, interview by the author.

[41]“(I Can’t Get No) Satisfaction,” written by Mick Jagger and Keith Richards, performed by Britney Spears, from Oops!… I Did It Again, produced by Max Martin et al. (New York: Jive Records, 2000).

[42]“November Rain,” written by Axl Rose, from Use Your Illusion I, produced by Mike Clink and Guns N’ Roses (New York: Geffen, 1991).

[43]Taruskin, “The Third Revolution,” 176–177.

[44]Taruskin, “Millennium’s End,” 509–510.

[45]Taruskin, “Millennium’s End,” 495.

[46]Brün, “Technology and the Composer,” 3.