Ars Electronica 1992
Festival Program 1992
A History of Electronic Music Pioneers, Part 2


David Dunn

1) THE VOLTAGE-CONTROLLED ANALOG SYNTHESIZER
A definition: Unfortunately the term "synthesizer" is a gross misnomer. Since there is nothing synthetic about the sounds generated from this class of analog electronic instruments, and since they do not "synthesize" other sounds, the term is more the result of a conceptual confusion emanating from industrial nonsense about how these instruments "imitate" traditional acoustic ones. However, since the term has stuck, becoming progressively more ingrained over the years, I will use the term for the sake of convenience. In reality the analog voltage-controlled synthesizer is a collection of waveform and noise generators, modifiers (such as filters, ring modulators, amplifiers), mixers and control devices packaged in modular or integrated form. The generators produce an electronic signal which can be patched through the modifiers and into a mixer or amplifier where it is made audible through loudspeakers. This sequence of interconnections constitutes a signal path which is determined by means of patch cords, switches, or matrix pinboards. Changes in the behaviors of the devices (such as pitch or loudness) along the signal path are controlled from other devices which produce control voltages. These control voltage sources can be a keyboard, a ribbon controller, a random voltage source, an envelope generator or any other compatible voltage source.
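The signal-path idea described above can be suggested, purely as a modern illustration and not as a model of any period hardware, in a few lines of Python: a generator (VCO) is patched through a modifier (VCA) whose behavior is set by a control voltage from an envelope generator. The module names and parameters here are illustrative only.

```python
import math

SAMPLE_RATE = 8000  # samples per second, kept low for brevity

def vco(freq_hz, n_samples):
    """Voltage-controlled oscillator: a naive sawtooth generator."""
    return [2.0 * ((i * freq_hz / SAMPLE_RATE) % 1.0) - 1.0
            for i in range(n_samples)]

def envelope(n_samples, attack_frac=0.1):
    """Envelope generator: a control signal, not an audio signal.
    Linear attack followed by a linear decay toward zero."""
    attack = int(n_samples * attack_frac)
    env = []
    for i in range(n_samples):
        if i < attack:
            env.append(i / attack)
        else:
            env.append(1.0 - (i - attack) / (n_samples - attack))
    return env

def vca(audio, control):
    """Voltage-controlled amplifier: the control voltage scales the audio."""
    return [a * c for a, c in zip(audio, control)]

# The patch: VCO -> VCA, with the envelope generator as control voltage source.
signal = vca(vco(440.0, 800), envelope(800))
```

The point of the sketch is the distinction the text draws: the sawtooth travels the signal path, while the envelope is a control voltage that shapes a device along that path.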

The story of the analog "synthesizer" has no single beginning. In fact, its genesis is an excellent example of how a good idea often emerges simultaneously in different geographic locations to fulfill a generalized need. In this case the need was to consolidate the various electronic sound generators, modifiers and control devices distributed in fairly bulky form throughout the classic tape studio. The reason for doing this was quite straightforward: to provide a personal electronic system to individual composers that was specifically designed for music composition and/or live performance, and which had the approximate technical capability of the classic tape studio at a lower cost. The geographic locales where this simultaneously occurred were the east coast of the United States, San Francisco, Rome and Australia.

The concept of modularity usually associated with the analog synthesizer must be credited to Harald Bode, who in 1960 completed the construction of his MODULAR SOUND MODIFICATION SYSTEM. In many ways this device anticipated the more concise and powerful modular synthesizers that began to be designed in the early 1960's. It consisted of a ring modulator, envelope follower, tone-burst-responsive envelope generator, voltage-controlled amplifier, filters, mixers, pitch extractor, comparator and frequency divider, and a tape loop repeater. This device may have had some indirect influence on Robert Moog, but the idea for his modular synthesizer appears to have evolved from another set of circumstances.

In 1963, MOOG was selling transistorized Theremins in kit form from his home in Ithaca, New York. Early in 1964 the composer Herbert Deutsch was using one of these instruments and the two began to discuss the application of solid-state technology to the design of new instruments and systems. These discussions led Moog to complete his first prototype of a modular electronic music synthesizer later that year. By 1966 the first production model was available from the new company he had formed to produce this instrument. The first systems which Moog produced were principally designed for studio applications and were generally large modular assemblages that contained voltage-controlled oscillators, filters, voltage-controlled amplifiers, envelope generators, and a traditional style keyboard for voltage control of the other modules. Interconnection between the modules was achieved through patch cords. By 1969 Moog saw the necessity for a smaller portable instrument and began to manufacture the Mini Moog, a concise version of the studio system that contained an oscillator bank, filter, mixer, VCA and keyboard. As an instrument designer Moog was always a practical engineer. His basically commercial but egalitarian philosophy is best exemplified by some of the advertising which accompanied the Mini Moog in 1969 and resulted in its becoming the most widely used synthesizer in the "music industry":
"R.A. Moog, Inc. built its first synthesizer components in 1964. At that time, the electronic music synthesizer was a cumbersome laboratory curiosity, virtually unknown to the listening public. Today, the Moog synthesizer has proven its indispensability through its widespread acceptance. Moog synthesizers are in use in hundreds of studios maintained by universities, recording companies, and private composers throughout the world. Dozens of successful recordings, film scores, and concert pieces have been realized on Moog synthesizers. The basic synthesizer concept as developed by R.A. Moog, Inc., as well as a large number of technological innovations, have literally revolutionized the contemporary musical scene, and have been instrumental in bringing electronic music into the mainstream of popular listening.

In designing the Mini Moog, R. A. Moog engineers talked with hundreds of musicians to find out what they wanted in a performance synthesizer. Many prototypes were built over the past two years, and tried out by musicians in actual live-performance situations. Mini Moog circuitry is a combination of our time-proven and reliable designs with the latest developments in technology and electronic components.

The result is an instrument which is applicable to studio composition as much as to live performance, to elementary and high school music education as much as to university instruction, to the demands of commercial music as much as to the needs of the experimental avant-garde. The Mini Moog offers a truly unique combination of versatility, playability, convenience, and reliability at an eminently reasonable price."
In contrast to Moog's industrial stance, the rather counter-cultural design philosophy of DONALD BUCHLA and his voltage-controlled synthesizers can partially be attributed to the geographic locale and cultural circumstances of their genesis. In 1961 San Francisco was beginning to emerge as a major cultural center with several vanguard composers organizing concerts and other performance events. MORTON SUBOTNICK was starting his career in electronic music experimentation, as were PAULINE OLIVEROS, Ramon Sender and TERRY RILEY. A primitive studio had been started at the San Francisco Conservatory of Music by Sender where he and Oliveros had begun a series of experimental music concerts. In 1962 this equipment and other resources from electronic surplus sources were pooled together by Sender and Subotnick to form the San Francisco Tape Music Center which was later moved to Mills College in 1966. Because of the severe limitations of the equipment, Subotnick and Sender sought out the help of a competent engineer in 1962 to realize a design they had concocted for an optically based sound generating instrument. After a few failures at hiring an engineer they met DONALD BUCHLA who realized their design but subsequently convinced them that this was the wrong approach for solving their equipment needs. Their subsequent discussions resulted in the concept of a modular system. Subotnick describes their idea in the following terms:
"Our idea was to build the black box that would be a palette for composers in their homes. It would be their studio. The idea was to design it so that it was like an analog computer. It was not a musical instrument but it was modular … It was a collection of modules of voltage-controlled envelope generators and it had sequencers in it right off the bat … It was a collection of modules that you would put together. There were no two systems the same until CBS bought it … Our goal was that it should be under $400 for the entire instrument and we came very close. That's why the original instrument I fundraised for was under $500."
Buchla's design approach differed markedly from Moog's. Right from the start Buchla rejected the idea of a "synthesizer" and has resisted the word ever since. He never wanted to "synthesize" familiar sounds but rather emphasized new timbral possibilities. He stressed the complexity that could arise out of randomness and was intrigued with the design of new control devices other than the standard keyboard. He summarizes his philosophy and distinguishes it from Moog's in the following statement:
"I would say that philosophically the prime difference in our approaches was that I separated sound and structure and he didn't. Control voltages were interchangeable with audio. The advantage of that is that he required only one kind of connector and that modules could serve more than one purpose. There were several drawbacks to that kind of general approach, one of them being that a module designed to work in the structural domain at the same time as the audio domain has to make compromises. DC offset doesn't make any difference in the sound domain but it makes a big difference in the structural domain, whereas harmonic distortion makes very little difference in the control area but it can be very significant in the audio areas. You also have a matter of just being able to discern what's happening in a system by looking at it. If you have a very complex patch, it's nice to be able to tell what aspect of the patch is the structural part of the music versus what is the signal path and so on. There's a big difference in whether you deal with linear versus exponential functions at the control level and that was a very inhibiting factor in Moog's more general approach.

Uncertainty is the basis for a lot of my work. One always operates somewhere between the totally predictable and the totally unpredictable and to me the "source of uncertainty," as we called it, was a way of aiding the composer. The predictabilities could be highly defined or you could have a sequence of totally random numbers. We had voltage control of the randomness and of the rate of change so that you could randomize the rate of change. In this way you could make patterns that were of more interest than patterns that are totally random."
While the early Buchla instruments contained many of the same modular functions as the Moog, they also contained a number of unique devices such as random control voltage sources, sequencers and voltage-controlled spatial panners. Buchla has maintained his unique design philosophy over the intervening years, producing a series of highly advanced instruments often incorporating hybrid digital circuitry and unique control interfaces.
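The "source of uncertainty" Buchla describes, random values whose rate of change is itself under voltage control, can be suggested in a short Python sketch. This is a conceptual illustration only, not a model of the actual Buchla circuit; the function names and the sample-and-hold scheme are assumptions of the sketch.

```python
import random

def uncertainty_source(n_steps, rate_control, seed=0):
    """Random control voltages whose rate of change is itself controllable.
    rate_control(i) returns how many steps to hold each random value."""
    rng = random.Random(seed)
    out, held, remaining = [], rng.uniform(0.0, 1.0), 0
    for i in range(n_steps):
        if remaining <= 0:
            held = rng.uniform(0.0, 1.0)         # new random control voltage
            remaining = max(1, rate_control(i))  # how long to hold it
        out.append(held)
        remaining -= 1
    return out

# Slow changes at first, faster changes later: the rate of change is itself
# a parameter, somewhere between the predictable and the unpredictable.
voltages = uncertainty_source(200, lambda i: 40 if i < 100 else 5)
```

The result is a control signal that is random in value but patterned in time, closer to Buchla's stated aim of patterns "of more interest than patterns that are totally random."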

The other major voltage-controlled synthesizers to arise at this time (1964) were the Synket, a highly portable instrument built by Paul Ketoff, and a unique machine designed by Tony Furse in Australia. According to composer Joel Chadabe, the SYNKET resulted from discussions between himself, Otto Leuning and JOHN EATON while these composers were in residence in Rome.

Chadabe had recently inspected the developmental work of Robert Moog and conveyed this to Eaton and Leuning. The engineer Paul Ketoff was enlisted to build a performance oriented instrument for Eaton who subsequently became the virtuoso on this small synthesizer, using it extensively in subsequent years. The machine built by Furse was the initial foray into an electronic instrument design by this brilliant Australian engineer. He later became the principal figure in the design of some of the earliest and most sophisticated digital synthesizers of the 1970's.

After these initial efforts a number of other American designers and manufacturers followed the lead of Buchla and Moog. One of the most successful was the ARP SYNTHESIZER built by Tonus, Inc. with design innovations by the team of Dennis Colin and David Friend. The studio version of the ARP was introduced in 1970 and basically imitated modular features of the Moog and Buchla instruments. A year later they introduced a smaller portable version which included a preset patching scheme that simplified the instrument's function for the average pop-oriented performing musician. Other manufacturers included EML, makers of the ELECTRO-COMP, a small synthesizer oriented to the educational market; OBERHEIM, maker of one of the earliest polyphonic synthesizers; muSonics, makers of the SONIC V SYNTHESIZER; PAIA, makers of a synthesizer in kit form; Roland; Korg; and the highly sophisticated line of modular analog synthesizer systems designed and manufactured by Serge Tcherepnin and referred to as Serge Modular Music Systems.

In Europe the major manufacturer was undoubtedly EMS, a British company founded by its chief designer Peter Zinovieff. EMS built the Synthi 100, a large integrated system which introduced a matrix-pinboard patching system, and a small portable synthesizer based on similar design principles initially called the Putney but later modified into the SYNTHI A or Portabella. This latter instrument became very popular with a number of composers who used it in live performance situations.

One of the more interesting footnotes to this history of the analog synthesizer is the rather problematic relationship that many of the designers have had with commercialization and the subsequent solution of manufacturing problems. While the commercial potential for these instruments became evident very early on in the 1960's, the different aesthetic and design philosophies of the engineers demanded that they deal with this realization in different ways. Buchla, who early on got burnt by larger corporate interests, has dealt with the burden of marketing by essentially remaining a cottage industry, assembling and marketing his instruments from his home in Berkeley, California. MOOG, a fairly competent businessman who grew a small business in his home into a distinctly commercial endeavor, ultimately left Moog Music in 1977, after the company had been acquired by two larger corporations, to pursue his own design interests.

It is important to remember that the advent of the analog voltage-controlled synthesizer occurred within the context of the continued development of the tape studio which now included the synthesizer as an essential part of its new identity as the electronic music studio. It was estimated in 1968 that 556 nonprivate electronic music studios had been established in 39 countries. An estimated 5,140 compositions existed in the medium by that time.

Some of the landmark voltage-controlled "synthesizer" compositions of the 1960's include works created with the "manufactured" machines of Buchla and Moog, but other devices were certainly also used extensively. Most of these works were tape compositions that used the synthesizer as a resource. The following list includes a few of the representative tape compositions and works for tape with live performers made during the 1960's with synthesizers and other sound sources.

1960) Stockhausen: KONTAKTE; Mâche: Volumes;

1961) Berio: VISAGE; Dockstader: TWO FRAGMENTS FROM APOCALYPSE

1962) Xenakis: BOHOR I; Philippot: Étude III; Parmegiani: DANSE

1963) Bayle: PORTRAIT DE L'OISEAU-QUI-N'EXISTE-PAS; Nordheim: EPITAFFIO

1964) Babbitt: Ensembles for Synthesizer; Brün: Futility; Nono: LA FABBRICA ILLUMINATA

1965) Gaburo: LEMON DROPS; Mimaroglu: Agony; Davidovsky: Synchronisms No. 3

1966) Oliveros: I OF IV; Druckman: Animus I;

1967) Subotnick: SILVER APPLES OF THE MOON; Eaton: CONCERT PIECE FOR SYN-KET AND SYMPHONY ORCHESTRA; Koenig: Terminus X; Smiley: ECLIPSE

1968) Carlos: Switched-On Bach; Gaburo: DANTE'S JOYNTE; Nono: CONTRAPPUNTO DIALETTICO ALLA MENTE

1969) Wuorinen: TIME'S ENCOMIUM; Ferrari: MUSIC PROMENADE

1970) Arel: Stereo Electronic Music No. 2; Lucier: I AM SITTING IN A ROOM
2) COMPUTER MUSIC
A distinction: Analog refers to systems where a physical quantity is represented by an analogous physical quantity. The traditional audio recording chain demonstrates this quite well since each stage of translation throughout constitutes a physical system that is analogous to the previous one in the chain. The fluctuations of air molecules which constitute sound are translated into fluctuations of electrons by a microphone diaphragm. These electrons are then converted via a bias current of a tape recorder into patterns of magnetic particles on a piece of tape. Upon playback the process can be reversed resulting in these fluctuations of electrons being amplified into fluctuations of a loudspeaker cone in space. The final displacement of air molecules results in an analogous representation of the original sounds that were recorded. Digital refers to systems where a physical quantity is represented through a counting process. In digital computers this counting process consists of a two-digit binary coding of electrical on-off switching states. In computer music the resultant digital code represents the various parameters of sound and its organization.
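The counting process described above can be made concrete with a short Python sketch (illustrative only; real converters do this in hardware): an "analog" sine wave is quantized into 8-bit signed integer codes and then converted back, with a small, bounded error. The bit depth and function names are choices of the sketch, not of any historical system.

```python
import math

def digitize(samples, bits=8):
    """Quantize analog values in [-1.0, 1.0] into signed integers —
    the 'counting' representation of a digital system."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    return [round(s * levels) for s in samples]

def reconstruct(codes, bits=8):
    """Digital-to-analog conversion: map the integer codes back to values."""
    levels = 2 ** (bits - 1) - 1
    return [c / levels for c in codes]

# One cycle of a sine wave, sampled at 64 points, encoded and decoded.
analog_in = [math.sin(2 * math.pi * i / 64) for i in range(64)]
codes = digitize(analog_in)
analog_out = reconstruct(codes)
error = max(abs(a - b) for a, b in zip(analog_in, analog_out))
```

The residual error is the quantization noise inherent in representing a continuous quantity by a count; more bits shrink it but never remove it.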

As early as 1954, the composer IANNIS XENAKIS had used a computer to aid in calculating the velocity trajectories of glissandi for his orchestral composition Metastasis. Since his background included a strong mathematical education, this was a natural development in keeping with his formal interest in combining mathematics and music. The search that had begun earlier in the century for new sounds and organizing principles that could be mathematically rationalized had become a dominant issue by the mid-1950's. Serial composers like MILTON BABBITT had been dreaming of an appropriate machine to assist in complex compositional organization. While the RCA Music Synthesizer fulfilled much of this need for Babbitt, other composers desired even more machine-assisted control. LEJAREN HILLER, a former student of Babbitt, saw the compositional potential in the early generation of digital computers and generated the Illiac Suite for string quartet as a demonstration of this promise in 1956.

Xenakis continued to develop, in a much more sophisticated manner, his unique approach to computer-assisted instrumental composition. Between 1956 and 1962 he composed a number of works, such as Morsima-Amorsima, using the computer as a mathematical aid for finalizing calculations that were applied to instrumental scores. Xenakis stated that his use of probabilistic theories and the IBM 7090 computer enabled him to advance "… a form of composition which is not the object in itself, but an idea in itself, that is to say, the beginnings of a family of compositions."

The early vision of why computers should be applied to music was elegantly expressed by the scientist Heinz von Foerster:
"Accepting the possibilities of extensions in sounds and scales, how do we determine the new rules of synchronism and succession?

It is at this point, where the complexity of the problem appears to get out of hand, that computers come to our assistance, not merely as ancillary tools but as essential components in the complex process of generating auditory signals that fulfill a variety of new principles of a generalized aesthetics and are not confined to conventional methods of sound generation by a given set of musical instruments or scales nor to a given set of rules of synchronism and succession based upon these very instruments and scales. The search for those new principles, algorithms, and values is, of course, in itself symbolic of our times."
The actual use of the computer to generate sound first occurred at Bell Labs, where Max Mathews used a primitive digital-to-analog converter to demonstrate this possibility in 1957. Mathews became the central figure at Bell Labs in the technical evolution of computer-generated sound research and compositional programming with computers over the next decade. In 1961 he was joined by the composer JAMES TENNEY, who had recently graduated from the University of Illinois where he had worked with Hiller and Gaburo to finish a major theoretical thesis entitled Meta + Hodos. For Tenney, the Bell Lab residency was a significant opportunity to apply his advanced theoretical thinking (involving the application of theories from Gestalt psychology to music and sound perception) in the compositional domain. From 1961 to 1964 he completed a series of works which include what are probably the first serious compositions using the MUSIC IV program of Max Mathews and Joan Miller, and therefore the first serious compositions using computer-generated sounds: Noise Study, Four Stochastic Studies, Dialogue, Stochastic String Quartet, Ergodos I, Ergodos II, and PHASES.

In the following extraordinarily candid statement, Tenney describes his pioneering efforts at Bell Labs:
"I arrived at the Bell Telephone Laboratories in September, 1961, with the following musical and intellectual baggage:
1. numerous instrumental compositions reflecting the influence of Webern and Varèse,
2. two tape-pieces, produced in the Electronic Music Laboratory at the University of Illinois – both employing familiar, 'concrete' sounds, modified in various ways;
3. a long paper ("Meta + Hodos, A Phenomenology of 20th Century Music and an Approach to the Study of Form", June, 1961), in which a descriptive terminology and certain structural principles were developed, borrowing heavily from Gestalt psychology. The central point of the paper involves the clang, or primary aural gestalt, and basic laws of perceptual organization of clangs, clang-elements, and sequences (a high-order Gestalt-unit consisting of several clangs).
4. A dissatisfaction with all the purely synthetic electronic music that I had heard up to that time, particularly with respect to timbre;
5. ideas stemming from my studies of acoustics, electronics and especially information theory, begun in Hiller's class at the University of Illinois; and finally
6. a growing interest in the work and ideas of John Cage.

I leave in March, 1964, with:
1. six tape-compositions of computer-generated sounds – of which all but the last were also composed by means of the computer, and several instrumental pieces whose composition involved the computer in one way or another;
2. a far better understanding of the physical basis of timbre, and a sense of having achieved a significant extension of the range of timbres possible by synthetic means;
3. a curious history of renunciations of one after another of the traditional attitudes about music, due primarily to gradually more thorough assimilation of the insights of John Cage.

In my two-and-a-half years here I have begun many more compositions than I have completed, asked more questions than I could find answers for, and perhaps failed more often than I have succeeded. But I think it could not have been much different. The medium is new and requires new ways of thinking and feeling. Two years are hardly enough to have become thoroughly acclimated to it, but the process has at least begun."
In 1965 the research at Bell Labs resulted in the successful reproduction of an instrumental timbre: a trumpet waveform was recorded, converted into a numerical representation, and, when converted back into analog form, deemed virtually indistinguishable from its source. This accomplishment by Mathews, Miller and the French composer JEAN-CLAUDE RISSET marks the beginning of the recapitulation of the traditional representationist versus modernist dialectic in the new context of digital computing. When contrasted against Tenney's use of the computer to obtain entirely novel waveforms and structural complexities, the use of such immense technological resources to reproduce the sound of a trumpet appeared to many composers to be a gigantic exercise in misplaced concreteness. When seen in the subsequent historical light of the recent breakthroughs of digital recording and sampling technologies that can be traced back to this initial experiment, the original computing expense certainly appears to have been vindicated. However, the dialectic of representationism and modernism has only become more problematic in the intervening years.

The development of computer music has from its inception been so critically linked to advances in hardware and software that its practitioners have, until recently, constituted a distinct class of specialized enthusiasts within the larger context of electronic music. The challenge that early computers and computing environments presented to creative musical work was immense. In retrospect, the task of learning to program and pit one's musical intelligence against the machine constraints of those early days now takes on an almost heroic air. In fact, the development of computer music composition is definitely linked to the evolution of greater interface transparency such that the task of composition could be freed up from the other arduous tasks associated with programming. The first stage in this evolution was the design of specific music-oriented programs such as MUSIC IV. The 1960's saw gradual additions to these languages such as MUSIC IVB (a greatly expanded assembly language version by Godfrey Winham and Hubert S. Howe); MUSIC IVBF (a FORTRAN version of MUSIC IVB); and MUSIC 360 (a music program written for the IBM 360 computer by Barry Vercoe). The composer Charles Dodge wrote during this time about the intent of these music programs for sound synthesis:
"It is through simulating the operations of an ideal electronic music studio with an unlimited amount of equipment that a digital computer synthesizes sound. The first computer sound synthesis program that was truly general purpose (i.e., one that could, in theory, produce any sound) was created at the Bell Telephone Laboratories in the late 1950's. A composer using such a program must typically provide: (1) Stored functions which will reside in the computer's memory representing waveforms to be used by the unit generators of the program. (2) "Instruments" of his own design which logically interconnect these unit generators. (Unit generators are subprograms that simulate all the sound generation, modification, and storage devices of the ideal electronic music studio.) The computer "instruments" play the notes of the composition. (3) Notes may correspond to the familiar "pitch in time" or, alternatively, may represent some convenient way of dividing the time continuum."
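Dodge's three requirements map onto a toy sketch quite directly. The following Python fragment is a loose modern paraphrase, not the actual MUSIC IV code (whose unit generators were far richer): a stored function holds a waveform, an "instrument" interconnects unit generators (here just one table-lookup oscillator), and a short note list plays through it.

```python
import math

SAMPLE_RATE = 8000

# (1) A stored function: one cycle of a sine wave held in a wavetable.
TABLE_SIZE = 512
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscil(amp, freq_hz, n_samples, table=SINE_TABLE):
    """A unit generator: a table-lookup oscillator, the basic building
    block of the MUSIC-style programs."""
    phase, out = 0.0, []
    incr = len(table) * freq_hz / SAMPLE_RATE
    for _ in range(n_samples):
        out.append(amp * table[int(phase) % len(table)])
        phase += incr
    return out

def instrument(note):
    """(2) An 'instrument': here a single oscillator, though unit generators
    could be chained (oscillator -> envelope -> filter) in the same way."""
    amp, freq, dur = note
    return oscil(amp, freq, int(dur * SAMPLE_RATE))

# (3) The 'notes': each is (amplitude, frequency in Hz, duration in seconds).
score = [(0.5, 440.0, 0.1), (0.5, 660.0, 0.1)]
output = [s for note in score for s in instrument(note)]
```

The design choice Dodge highlights survives in this miniature: sound generation is simulated entirely in software, so the "studio" has, in principle, unlimited equipment.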
By the end of the 1960's computer sound synthesis research saw a large number of new programs in operation at a variety of academic and private institutions. The demands of the medium, however, were still quite tedious and, regardless of the increased sophistication in control, the final product remained tape. Some composers had taken the initial steps toward using the computer for real-time performance by linking the powerful control functions of the digital computer to the sound generators and modifiers of the analog synthesizer. We will deal with the specifics of this development in the next section. From its earliest days the use of the computer in music can be divided into two fairly distinct categories, even though these categories have been blurred in some compositions: 1) the use of the computer predominantly as a compositional device, to generate structural relationships that could not be imagined otherwise, and 2) the use of the computer to generate new synthetic waveforms and timbres.

A few of the pioneering works of computer music from 1961 to 1971 are the following:

1961) Tenney: Noise Study

1962) Tenney: Four Stochastic Studies

1963) Tenney: PHASES

1964) Randall: QUARTETS IN PAIRS

1965) Randall: MUDGETT

1966) Randall: Lyric Variations

1967) Hiller: Cosahedron

1968) Brün: INFRAUDIBLES; Risset: COMPUTER SUITE FROM LITTLE BOY

1969) Dodge: CHANGES; Risset: Mutations I

1970) Dodge: EARTH'S MAGNETIC FIELD

1971) Chowning: SABELITHE
3) LIVE ELECTRONIC PERFORMANCE PRACTICE
A Definition: For the sake of convenience I will define live electronic music as that in which electronic sound generation, processing and control predominantly occurs in real-time during a performance in front of an audience.

The idea that the concept of live performance with electronic sounds should have a special status may seem ludicrous to many readers. Obviously music has always been a performance art and the primary usage of electronic musical instruments before 1950 was almost always in a live performance situation. However, it must be remembered that the defining of electronic music as its own genre really came into being with the tape studios of the 1950's and that the beginnings of live electronic performance practice in the 1960's were in large part a reaction to both a growing dissatisfaction with the perceived sterility of tape music in performance (sound emanating from loudspeakers and little else) and the emergence of the various philosophical influences of chance, indeterminacy, improvisation and social experimentation.

The issue of combining tape with traditional acoustic instruments was a major one ever since Maderna, Varèse, Luening and Ussachevsky first introduced such works in the 1950's. A variety of composers continued to address this problem with increasing vigor into the 1960's. For many it was merely a means for expanding the timbral resources of the orchestral instruments they had been writing for, while for others it was a specific compositional concern that dealt with the expansion of structural aspects of performance in physical space. For instance MARIO DAVIDOVSKY and KENNETH GABURO have both written a series of compositions which address the complex contrapuntal dynamics between live performers and tape: Davidovsky's Synchronisms 1-8 and Gaburo's Antiphonies 1-11. These works demand a wide variety of combinations of tape channels, instruments and voices in live performance contexts. In these and similar works by other composers the tape sounds are derived from all manner of sources and techniques including computer synthesis. The repertory for combinations of instruments and tape grew to immense international proportions during the 1960's and included works from Australia, North America, South America, Western Europe, Eastern Europe, Japan, and the Middle East. An example of how one composer viewed the dynamics of relationship between tape and performers is stated by Kenneth Gaburo:
"On a fundamental level ANTIPHONY III is a physical interplay between live performers and two speaker systems (tape). In performance, 16 soloists are divided into 4 groups, with one soprano, alto, tenor, and bass in each. The groups are spatially separated from each other and from the speakers. Antiphonal aspects develop between and among the performers within each group, between and among groups, between the speakers, and between and among the groups and speakers.

On another level Antiphony III is an auditory interplay between tape and live bands. The tape band may be divided into 3 broad compositional classes: (1) quasi-duplication of live sounds, (2) electro-mechanical transforms of these beyond the capabilities of live performers, and (3) movement into complementary acoustic regions of synthesized electronic sound. Incidentally, I term the union of these classes electronics, as distinct from tape content which is pure concrete-mixing or electronic sound synthesis. The live band encompasses a broad spectrum from normal singing to vocal transmission having electronically associated characteristics. The total tape-live interplay, therefore, is the result of discrete mixtures of sound, all having the properties of the voice as a common point of departure."
Another important aesthetic shift that occurred within the tape studio environment was the desire to compose onto tape using real-time processes that did not require subsequent editing. PAULINE OLIVEROS and Richard Maxfield were early practitioners of innovative techniques that allowed for live performance in the studio. Oliveros composed I of IV (1966) in this manner using tape delay and mixer feedback systems. Other composers discovered synthesizer patches that would allow for autonomous behaviors to emerge from the complex interactions of voltage-control devices. The output from these systems could be recorded as versions on tape or amplified in live performance with some performer modification. Entropical Paradise (1969) by Douglas Leedy is a classic example of such a composition for the Buchla Synthesizer.
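The tape-delay-with-feedback technique behind works like I of IV can be sketched digitally. This is a drastic simplification for illustration (the originals used two tape machines, the distance between record and play heads setting the delay, and analog mixers for the feedback); the parameter names are the sketch's own.

```python
def tape_delay(input_samples, delay_samples, feedback=0.5):
    """A minimal sketch of tape delay with feedback: the delayed output is
    mixed back into the input, so material accumulates and slowly decays,
    as with a loop of tape passing between record and play heads."""
    buffer = [0.0] * delay_samples   # the 'tape' between the two heads
    out = []
    for i, x in enumerate(input_samples):
        delayed = buffer[i % delay_samples]
        y = x + feedback * delayed
        buffer[i % delay_samples] = y   # re-record the mix onto the loop
        out.append(y)
    return out

# A single click becomes a decaying echo every 10 samples.
impulse = [1.0] + [0.0] * 49
echoes = tape_delay(impulse, delay_samples=10, feedback=0.5)
```

With feedback below 1.0 the repetitions die away; pushed toward or past 1.0, the system begins to exhibit exactly the kind of autonomous, self-sustaining behavior the text describes composers discovering in the studio.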

The largest and most innovative category of live electronic music to come to fruition in the 1960's was the use of synthesizers and custom electronic circuitry to both generate sounds and process others, such as voice and/or instruments, in real-time performance. The simplest example of this application extends back to the very first use of electronic amplification by the early instruments of the 1930's. During the 1950's JOHN CAGE and DAVID TUDOR used microphones and amplification as compositional devices to emphasize the small sounds and resonances of the piano interior. In 1960 Cage extended this idea to the use of phonograph cartridges and contact microphones in CARTRIDGE MUSIC. The work focused upon the intentional amplification of small sounds revealed through an indeterminate process. Cage described the aural product:
"The sounds which result are noises, some complex, others extremely simple such as amplifier feedback, loud-speaker hum, etc. (All sounds, even those ordinarily thought to be undesirable, are accepted in this music.)"
For Cage the abandonment of tape music and the move toward live electronic performance was an essential outgrowth of his philosophy of indeterminacy. Cage's aesthetic position necessitated the theatricality and unpredictability of live performance since he desired a circumstance where individual value judgements would not intrude upon the revelation and perception of new possibilities. Into the 1960's his fascination for electronic sounds in indeterminate circumstances continued to evolve and become inclusive of an ethical argument for the appropriateness of artists working with technology as critics and mirrors of their cultural environment. Cage composed a large number of such works during the 1960's, often enlisting the inspired assistance of like-minded composer/performers such as David Tudor, Gordon Mumma, David Behrman, and Lowell Cross. Among the most famous of these works was the series of compositions entitled VARIATIONS, which numbered eight by the end of the decade. These works were really highly complex and indeterminate happenings that often used a wide range of electronic techniques and sound sources.

The composer/performer DAVID TUDOR was the musician most closely associated with Cage during the 1960's. As a brilliant concert pianist during the 1950's he had championed the works of major avantgarde composers and then shifted his performance activities to electronics during the 1960's, performing other composers' live-electronic works and his own. His most famous composition, RAINFOREST, and its multifarious performances since it was conceived in 1968, almost constitute a musical subculture of electronic sound research. The work requires the fabrication of special resonating objects and sculptural constructs which serve as one-of-a-kind loudspeakers when transducers are attached to them. The constructed "loudspeakers" function to amplify and produce both additive and subtractive transformations of source sounds such as basic electronic waveforms. In more recent performances the sounds have included a wide selection of prerecorded materials.

While live electronic music in the 1960's was predominantly an American genre, activity in Europe and Japan also began to emerge. The foremost European composer to embrace live electronic techniques in performance was KARLHEINZ STOCKHAUSEN. By 1964 he was experimenting with the straightforward electronic filtering of an amplified tam-tam in MIKROPHONIE I. Subsequent works for a variety of instrumental ensembles and/or voices, such as Prozession or Stimmung, explored very basic but ingenious uses of amplification, filtering and ring modulation techniques in real-time performance. In a statement about the experimentation that led to these works Stockhausen conveys a clear sense of the spirit of exploration into sound itself that pervaded much of the live electronic work of the 1960's:
"Last summer I made a few experiments by activating the tam-tam with the most disparate collection of materials I could find about the house -glass, metal, wood, rubber, synthetic materials – at the same time linking up a hand-held microphone (highly directional) to an electric filter and connecting the filter output to an amplifier unit whose output was audible through loudspeakers. Meanwhile my colleague Jaap Spek altered the settings of the filter and volume controls in an improvisatory way. At the same time we recorded the results on tape. This tape-recording of our first experiences in 'microphony' was a discovery of the greatest importance for me. We had come to no sort of agreement. I used such of the materials I had collected as I thought best and listened-in to the tam-tam surface with the microphone just as a doctor might listen-in to a body with his stethoscope; Spek reacted equally spontaneously to what he heard as the product of our joint activity."
In many ways the evolution of live electronic music parallels the increasing technological sophistication of its practitioners. In the early 1960's most of the works within this genre were concerned with fairly simple real-time processing of instrumental sounds and voices. As in Stockhausen's work from this period, this processing may have been as basic as the manipulation of a live performer through audio filters, tape loops or the performer's interaction with acoustic feedback. ROBERT ASHLEY'S Wolfman (1964) is an example of the use of high amplification of the voice to achieve feedback that alters both the voice and a prerecorded tape.

By the end of the decade a number of composers had technologically progressed to designing their own custom circuitry. For example, GORDON MUMMA'S MESA (1966) and HORNPIPE (1967) are both examples of instrumental pieces that use custom-built electronics capable of semi-automatic response to the sounds generated by the performer or resonances of the performance space. One composer whose work illustrates a continuity of gradually increasing technical sophistication is DAVID BEHRMAN. From fairly rudimentary uses of electronic effects in the early 1960's his work progressed through various stages of live electronic complexification to compositions like RUNTHROUGH (1968), where custom-built circuitry and a photoelectric sound-distribution matrix are activated by performers with flashlights.

This trend toward new performance situations in which the technology functioned as structurally intrinsic to the composition continued to gain favor. Many composers began to experiment with a vast array of electronic control devices and unique sound sources which often required audio engineers and technicians to function as performing musicians, and musicians to be technically competent. Since the number of such works proliferated rapidly, a few examples of the range of activities during the 1960's must suffice. In 1965, ALVIN LUCIER presented his Music for Solo Performer (1965), which used amplified brainwave signals to articulate the sympathetic resonances of an orchestra of percussion instruments. John Mizelle's Photo Oscillations (1969) used multiple lasers as light sources through which the performers walked in order to trigger a variety of photocell-activated circuits. Pendulum Music (1968) by Steve Reich simply used microphones suspended over loudspeakers from long cables. The microphones were set in motion and allowed to generate patterns of feedback as they passed over the loudspeakers. For these works, and many others like them, the structural dictates which emerged out of the nature of the chosen technology also defined a particular composition as a unique environmental and theatrical experience.

Co-synchronous with the technical and aesthetic advances that were occurring in live performance that I have just outlined, the use of digital computers in live performance began to slowly emerge in the late 1960's. The most comprehensive achievement in marrying digital control sophistication to the real-time sound generation capabilities of the analog synthesizer was probably the SAL-MAR CONSTRUCTION (1969) of SALVATORE MARTIRANO. This hybrid system evolved over several years with the help of many colleagues and students at the University of Illinois. Considered by Martirano to be a composition unto itself, the machine consisted of a motley assortment of custom-built analog and digital circuitry controlled from a completely unique interface and distributed through multiple channels of loudspeakers suspended throughout the performance space. Martirano describes his work as follows:
"The SAL-MAR CONSTRUCTION was designed, financed and built in 1969-1972 by engineers Divilbiss, Franco, Borovec and composer Martirano here at the University of Illinois. It is a hybrid system in which TTL logical circuits (small and medium scale integration) drive analog modules, such as voltage-controlled oscillators, amplifiers and filters. The SMC weighs 1500 lbs crated and measures 8'x 5'x 3'.

It can be set-up at one end of the space with a 'spider web' of speaker wire going out to 24 plexiglass enclosed speakers that hang in a variety of patterns about the space. The speakers weigh about 6 lbs. each, and are gently mobile according to air currents in the space. A changing pattern of sound-traffic by 4 independently controlled programs produces rich timbres that occur as the moving source of sound causes the sound to literally bump into itself in the air, thus effecting phase cancellation and addition of the signal.

The control panel has 291 touch-sensitive set/reset switches that are patched so that a tree of diverse signal paths is available to the performer. The output of the switch is either set 'out 1' or reset 'out 2'. Further, the 291 switches are multiplexed down 4 levels. The unique characteristic of the switch is that it can be driven both manually and logically, which allows human/machine interaction.
The most innovative feature of the human/machine interface is that it allows the user to switch from control of macro- to microparameters of the information output. This is analogous to a zoom lens on a camera. A pianist remains at one level only, that is, on the keys. It is possible to assign performer actions to AUTO and allow the SMC to make all decisions."
One of the major difficulties with the hybrid performance systems of the late 1960's and early 1970's was the sheer size of digital computers. One solution to this problem was presented by GORDON MUMMA in his composition Conspiracy 8 (1970). When the piece was presented at New York's Guggenheim Museum, a remote data-link was established to a computer in Boston which received information about the performance in progress. In turn this computer then issued instructions to the performers and generated sounds which were also transmitted to the performance site through data-link.

Starting in 1970 an ambitious attempt at using the new minicomputers was initiated by Ed Kobrin, a former student and colleague of Martirano's. Starting in Illinois in collaboration with engineer Jeff Mack, and continuing at the Center for Music Experiment at the University of California, San Diego, Kobrin designed an extremely sophisticated hybrid system (actually referred to as HYBRID I THROUGH V) that interfaced a minicomputer to an array of voltage-controlled electronic sound modules. As a live performance electronic instrument, its six-voice polyphony, complexity and speed of interaction made it the most powerful real-time system of its time. One of its versions is described by Kobrin:
"The most recent system consists of a PDP 11 computer with 16k words of core memory, dual digital cassette unit, CRT terminal with ASCII keyboard, and a piano-type keyboard. A digital interface consisting of interrupt modules, address decoding circuitry, 8 and 10 bit digital to analog converters with holding registers, programmable counters and a series of tracking and status registers is hardwired to a synthesizer. The music generated is distributed to 16 speakers creating a controlled sound environment."
Perhaps the most radical and innovative aspect of live electronic performance practice to emerge during this time was the appearance of a new form of collective music making. In Europe, North America and Japan several important groups of musicians began to collaborate in collective compositional, improvisational, and theatrical activities that relied heavily upon the new electronic technologies. Some of the reasons for this trend were: 1) the performance demands of the technology itself, which often required multiple performers to accomplish basic tasks; 2) the improvisatory and open-ended nature of some of the music was amenable, practically and philosophically, to a diverse and flexible number of participants; and 3) the cultural and political climate was particularly attuned to encouraging social experimentation.

As early as 1960, the ONCE Group had formed in Ann Arbor, Michigan. Comprising a diverse group of architects, composers, dancers, filmmakers, sculptors and theater people, the ONCE Group presented the annual ONCE FESTIVAL. The principal composers of this group were George Cacioppo, Roger Reynolds, Donald Scavarda, Robert Ashley and Gordon Mumma, most of whom were actively exploring tape music and developing live electronic techniques. In 1966 Ashley and Mumma joined forces with David Behrman and Alvin Lucier to create one of the most influential live electronic performance ensembles, the SONIC ARTS UNION. While its members would collaborate in realizing one another's compositions, as well as works by other composers, the group was not concerned with collaborative composition or improvisation like many other groups that had formed around the same time.

Concurrent with the ONCE Group activities were the concerts and events presented by the participants of the San Francisco Tape Music Center such as Pauline Oliveros, Terry Riley, Ramon Sender and Morton Subotnick. Likewise a powerful center for collaborative activity had developed at the University of Illinois, Champaign/Urbana where Herbert Brün, Kenneth Gaburo, Lejaren Hiller, Salvatore Martirano, and James Tenney had been working. By the late 1960's a similarly vital academic scene had formed at the University of California, San Diego where Gaburo, Oliveros, Reynolds and Robert Erickson were now teaching.

In Europe several innovative collectives had also formed. To perform his own music Stockhausen had gathered together a live electronic music ensemble consisting of Alfred Alings, Harald Boje, Peter Eötvös, Johannes Fritsch, Rolf Gehlhaar, and Aloys Kontarsky. In 1964 an international collective called the Gruppo di Improvvisazione Nuova Consonanza was created in Rome for performing live electronic music. Two years later, Rome also saw the formation of Musica Elettronica Viva, one of the most radical electronic performance collectives to advance group improvisation that often involved audience participation. In its original incarnation the group included Allan Bryant, Alvin Curran, John Phetteplace, Frederic Rzewski, and Richard Teitelbaum.

The other major collaborative group concerned with the implications of electronic technology was AMM in England. Founded in 1965 by jazz musicians Keith Rowe, Lou Gare and Eddie Prévost, and the experimental genius Cornelius Cardew, the group focused its energy into highly eclectic but disciplined improvisations with electro-acoustic materials. In many ways the group was an intentional social experiment, an experience that deeply informed Cardew's subsequent Scratch Orchestra collective.

One final category of live electronic performance practice involves the more focused activities of the minimalist composers of the 1960's. These composers were involved with both individual and collective performance activities and in large part blurred the boundaries between the so-called "serious" avantgarde and popular music. The composer TERRY RILEY exemplifies this idea quite dramatically. During the late 1960's Riley created a very popular form of solo performance using wind instruments, keyboards and voice with tape delay systems that was an outgrowth of his early experiments into pattern music and his growing interest in Indian music. In 1964 the New York composer LaMonte Young formed THE THEATRE OF ETERNAL MUSIC to realize his extended investigations into pure vertical harmonic relationships and tunings. The ensemble consisted of string instruments, singing voices and precisely tuned drones generated by audio oscillators. In early performances the performers included John Cale, Tony Conrad, LaMonte Young, and Marian Zazeela.

A very brief list of significant live electronic music works of the 1960's is the following:

1960) Cage: CARTRIDGE MUSIC

1964) Young: The Tortoise, His Dreams and Journeys; Sender: Desert Ambulance; Ashley: Wolfman; Stockhausen: Mikrophonie 1

1965) Lucier: Music for Solo Performer

1966) Mumma: MESA

1967) Stockhausen: PROZESSION; Mumma: HORNPIPE

1968) Tudor: RAINFOREST; Behrman: RUNTHROUGH

1969) Cage and Hiller: HPSCHD; Martirano: Sal-Mar Construction; Mizelle: Photo Oscillations

1970) Rosenboom: Ecology of the Skin
4) MULTIMEDIA
The historical antecedents for mixed-media connect multiple threads of artistic traditions as diverse as theatre, cinema, music, sculpture, literature, and dance. Since the extreme eclecticism of this topic and the sheer volume of activity associated with it are too vast for the focus of this essay, I will only be concerned with a few examples of mixed-media activities during the 1960's that impacted the electronic art and music traditions from which subsequent video experimentation emerged.

Much of the previously discussed live electronic music of the 1960's can be placed within the mixed-media category in that the performance circumstances demanded by the technology were intentionally theatrical or environmental. This emphasis on how technology could help to articulate new spatial relationships and heightened interaction between the physical senses was shared with many other artists from the visual, theatrical and dance traditions. Many new terms arose to describe the resulting experiments of various individuals and groups such as "happenings," "events," "action theatre," "environments," or what Richard Kostelanetz called "The Theatre of Mixed-Means." In many ways the aesthetic challenge and collaborative agenda of these projects was conceptually linked to the various counter-cultural movements and social experiments of the decade. For some artists these activities were a direct continuity from participation in the avantgarde movements of the 1950's such as Fluxus, electronic music, "kinetic sculpture," Abstract Expressionism and Pop Art, and for others they were a fulfillment of ideas about the merger of art and science initiated by the 1930's Bauhaus artists.

Many of the performance groups already mentioned were engaged in mixed-media as their principal activity. In Michigan, the ONCE Group had been preceded by the Manifestations: Light and Sound performances and Space Theatre of Milton Cohen as early as 1956. The filmmaker Jordan Belson and Henry Jacobs organized the Vortex performances in San Francisco the following year. Japan saw the formation of Tokyo's Group Ongaku and Sogetsu Art Center with Kuniharu Akiyama, Toshi Ichiyanagi, Joji Yuasa, Takahisa Kosugi, and Chieko Shiomi in the early 1960's. At the same time were the ritual oriented activities of LaMonte Young's THE THEATRE OF ETERNAL MUSIC. The group Pulsa was particularly active through the late sixties staging environmental light and sound works such as the BOSTON PUBLIC GARDENS DEMONSTRATION (1968) that used 55 xenon strobe lights placed underwater in the garden's four-acre pond. On top of the water were placed 52 polyplanar loudspeakers which were controlled, along with the lights, by computer and prerecorded magnetic tape. This resulted in streams of light and sound being projected throughout the park at high speeds. At the heart of this event was the unique HYBRID DIGITAL/ANALOG AUDIO SYNTHESIZER which Pulsa designed and used in most of their subsequent performance events.

In 1962, the USCO formed as a radical collective of artists and engineers dedicated to collective action and anonymity. Some of the artists involved were Gerd Stern, Stan VanDerBeek, and Jud Yalkut. As Douglas Davis describes them:
"USCO's leaders were strongly influenced by McLuhan's ideas as expressed in his book Understanding Media. Their environments – performed in galleries, churches, schools, and museums across the United States -increased in complexity with time, culminating in multiscreen audiovisual "worlds" and strobe environments. They saw technology as a means of bringing people together in a new and sophisticated tribalism. In pursuit of that ideal, they lived, worked, and created together in virtual anonymity."
The influence of McLuhan also had a strong impact upon John Cage during this period and marks a shift in his work toward a more politically and socially engaged discourse. This shift was exemplified in two of his major works during the 1960's which were large multimedia extravaganzas staged during residencies at the University of Illinois in 1967 and 1969: Musicircus and HPSCHD. The latter work was conceived in collaboration with Lejaren Hiller and subsequently used 51 computer-generated sound tapes, in addition to seven harpsichords and numerous film projections by Ronald Nameth.

Another example of a major mixed-media work composed during the 1960's is the TEATRO PROBABILISTICO III (1968) for actors, musicians, dancers, light, TV cameras, public and traffic conductor by the Brazilian composer JOCY DE OLIVEIRA. She describes her work in the following terms that are indicative of a typical attitude toward mixed-media performance at that time:
"This piece is an exercise in searching for total perception leading to a global event which tends to eliminate the set role of public versus performers through a complementary interaction. The community life and the urban space are used for this purpose. It also includes the TV communication on a permutation of live and video tape and a transmutation from utilitarian camera to creative camera.

The performer is equally an actor, musician, dancer, light, TV camera/video artist or public. They all are directed by a traffic conductor. He represents the complex contradiction of explicit and implicit. He is a kind of military God who controls the freedom of the powers by dictating orders through signs. He has power over everything and yet he cannot predict everything. The performers improvise on a time-event structure, according to general directions. The number of performers is determined by the space possibilities. It is preferable to use a downtown pedestrian area.

The conductor should be located in the center of the performing area visible to the performers (over a platform). He should wear a uniform representing any high rank.

For the public as well as the performers this is an exercise in searching for a total experience in complete perception."
One of the most important intellectual concerns to emerge at this time amongst most of these artists was an explicit embracing of technology as a creative counter-cultural force. In addition to McLuhan, the figure of Buckminster Fuller had a profound influence upon an entire generation of artists. Fuller's assertion that the radical and often negative changes wrought by technological innovation were also opportunities for proper understanding and redirection of resources became an organizing principle for vanguard thinkers in the arts. The need to take technology seriously as the social environment in which artists lived and formulated critical relationships with the culture at large became formalized in projects such as Experiments in Art and Technology, Inc. and the various festivals and events they sponsored: Nine Evenings: Theater and Engineering; Some More Beginnings; the series of performances presented at Automation House in New York City during the late 1960's; and the PEPSI-COLA PAVILION FOR EXPO 70 in Osaka, Japan. One of the participants in Expo 70, Gordon Mumma, describes the immense complexity and sophistication that mixed-media presentations had evolved into by that time:
"The most remarkable of all multimedia collaborations was probably the Pepsi-Cola Pavilion for Expo 70 in Osaka. This project included many ideas distilled from previous multimedia activities, and significantly advanced both the art and technology by numerous innovations. The Expo 70 pavilion was remarkable for several reasons. It was an international collaboration of dozens of artists, as many engineers, and numerous industries, all coordinated by Experiments in Art and Technology, Inc. From several hundred proposals, the projects of twenty-eight artists and musicians were selected for presentation in the pavilion. The outside of the pavilion was a 120-foot-diameter geodesic dome of white plastic and steel, enshrouded by an ever-changing, artificially generated water-vapor cloud. The public plaza in front of the pavilion contained seven man-sized, sound-emitting floats that moved slowly and changed direction when touched. A thirty-foot polar heliostat sculpture tracked the sun and reflected a ten-foot-diameter sunbeam from its elliptical mirror through the cloud onto the pavilion. The inside of the pavilion consisted of two large spaces, one black-walled and clam-shaped, the other a ninety-foot high hemispherical mirror dome. The sound and light environment of these spaces was achieved by an innovative audio and optical system consisting of state-of-the-art analog audio circuitry, with krypton-laser, tungsten, quartz-iodide, and xenon lighting, all controlled by a specially designed digital computer programming facility.

The sound, light, and control systems, and their integration with the unique hemispherical acoustics and optics of the pavilion, were controlled from a movable console. On this console the lighting and sound had separate panels from which the intensities, colors, and directions of the lighting, pitches, loudness, timbre, and directions of the sound could be controlled by live performers. The sound-moving capabilities of the dome were achieved with a rhombic grid of thirty-seven loudspeakers surrounding the dome, and were designed to allow the movement of sounds from point, straight line, curved, and field types of sources. The speed of movement could vary from extremely slow to fast enough to lose the sense of motion. The sounds to be heard could be from any live, taped, or synthesized source, and up to thirty-two different inputs could be controlled at one time. Furthermore, it was possible to electronically modify these inputs by using eight channels of modification circuitry that could change the pitch, loudness, and timbre in a vast number of combinations. Another console panel contained digital circuitry that could be programmed to automatically control aspects of the light and sound. By their programming of this control panel, the performers could delegate any amount of the light and sound functions to the digital circuitry. Thus, at one extreme the pavilion could be entirely a live-performance instrument, and at the other, an automated environment. The most important design concept of the pavilion was that it was a live-performance, multimedia instrument. Between the extremes of manual and automatic control of so many aspects of environment, the artist could establish all sorts of sophisticated man-machine performance interactions."
CONSOLIDATION: THE 1970'S AND 80'S
The beginning of the 1970's saw a continuation of most of the developments initiated in the 1960's. Activities were extremely diverse and included all the varieties of electronic music genres previously established throughout the 20th century. Academic tape studios continued to thrive with a great deal of unique custom-built hardware being conceived by engineers, composers and students. Hundreds of private studios were also established as the price of technology became more affordable for individual artists. Many more novel strategies for integrating tape and live performers were advanced, as were new concepts for live electronics and multimedia. A great rush of activity in new circuit design also took place, and the now familiar pattern of continual miniaturization with increased power and memory expansion for computers began to become evident. Along with this increased level of electronic music activity two significant developments became evident: 1) what had been for decades a pioneering fringe activity within the larger context of music as a cultural activity began to become dominant; and 2) commercial and sophisticated industrial manufacturing of electronic music systems and materials that had been fairly esoteric emerged in response to this awareness. These new factors signaled the end of the pioneering era of electronic music and the beginning of a post-modern aesthetic predominantly driven by commercial market forces.

By the end of the 1970's most innovations in hardware design had been taken over by industry in response to the emerging needs of popular culture. The film and music "industries" became the major forces in establishing technical standards which impacted subsequent electronic music hardware design. While the industrial representationist agenda succeeded in the guise of popular culture, some pioneering creative work continued within the divergent contexts of academic tape studios and computer music research centers and in the non-institutional aesthetic research of individual composers. While specialized venues still exist where experimental work can be heard, access to such work has become progressively more difficult.

One of the most important shifts to occur in the 1980's was the progressive move toward the abandonment of analog electronics in favor of digital systems which could potentially recapitulate and summarize the prior history of electronic music in standardized forms. By the mid-1980's the industrial onslaught of highly redundant MIDI-interfaceable digital synthesizers, processors, and samplers even began to displace the commercial merchandising of traditional acoustic orchestral and band instruments. By 1990 these commercial technologies had become a ubiquitous cultural presence that largely defined the nature of the music being produced.
CONCLUSION
What began in this century as a utopian and vaguely Romantic passion, namely that technology offered an opportunity to expand human perception and provide new avenues for the discovery of reality, subsequently evolved through the 1960's into an intoxication with this humanistic agenda as a social critique and counter-cultural movement. The irony is that many of the artists who were most concerned with technology as a counter-cultural social critique built tools that ultimately became the resources for an industrial movement that in large part eradicated their ideological concerns. Most of these artists and their work have fallen into the anonymous cracks of a consumer culture that now regards their experimentation merely as inherited technical R & D. While the mass distribution of the electronic means of musical production appears to be an egalitarian success, as a worst case scenario it may also signify the suffocation of the modernist dream at the hands of industrial profiteering. To quote the philosopher Jacques Attali:
"What is called music today is all too often only a disguise for the monologue of power. However, and this is the supreme irony of it all, never before have musicians tried so hard to communicate with their audience, and never before has that communication been so deceiving. Music now seems hardly more than a somewhat clumsy excuse for the self-glorification of musicians and the growth of a new industrial sector."
From a slightly more optimistic perspective, the current dissolving of emphasis upon heroic individual artistic contributions, within the context of the current proliferation of musical technology, may signify the emergence of a new socio-political structure: the means to create transcends the created objects and the personality of the object's creator. The mass dissemination of new tools and instruments either signifies the complete failure of the modernist agenda or it signifies the culminating expression of commoditization through mass production of the tools necessary to deconstruct the redundant loop of consumption. After decades of selling records as a replacement for the experience of creative action, the music industry now sells the tools which may facilitate that creative participation. We shift emphasis to the means of production instead of the production of consumer demand.

How the evolution of electronic music unfolds will depend upon the dynamical properties of a dialectical synthesis between industrial forces and the survival of the modernist belief in technology as a humanistic potential. Whether the current users of these tools can resist the redundancy of industrially determined design biases, induced by the clichés of commercial market forces, depends upon the continuation of a belief in the necessity for alternative voices willing to articulate that which the status quo is unwilling to hear.