Wednesday, 8 August 2012

Four Criteria of Electronic Music


<<STOCKHAUSEN ON MUSIC>>
___________________________________
FOUR CRITERIA OF
ELECTRONIC MUSIC

From the lecture FOUR CRITERIA OF ELECTRONIC MUSIC
Filmed by Allied Artists, London 1971
_________________________


Four criteria of electronic music. The first criterion is unified time structuring; the second criterion is the splitting of the sound; the third, multi-layered spatial composition; the fourth, the equality of sound and noise – or better, of tone and noise.
New means change the method; new methods change the experience; and new experiences change man. Whenever we hear sounds we are changed: we are no longer the same after hearing certain sounds, and this is all the more the case when we hear organized sounds, sounds organized by another human being: music.
Until around 1950 the idea of music as sound was largely ignored. That composing with sound could also involve the composition of sounds themselves was no longer self-evident. It was revived as a result, we might say, of a historical development. The Viennese School of Schoenberg, Berg, and Webern had reduced their musical themes and motifs to entities of only two sounds, to intervals – Webern in particular, Anton von Webern. And when I started to compose music I was certainly a child of the first half of the century, continuing and expanding what the composers of the first half had prepared. It took a little leap forward to reach the idea of composing, or synthesizing, the individual sound.
I should say immediately that it was a second thought, because I started first of all by analyzing all sorts of sounds. I was twenty-three and working at the musique concrète studio in Paris. I recorded sounds in the Musée de l'Homme, where you can find exotic instruments of all kinds: instruments of wood, of stone, of metal; instruments of different cultures and historical periods. I also analyzed sounds and noises which I recorded from daily life, and began to study books in which you can find spectral analyses of the sounds of classical musical instruments. Bit by bit, not having had any proper training in acoustics at the music conservatory or at university, I became aware that sound is more than just an experience. I became very interested in the differences between sounds: what is the difference between a piano sound and the vowel aaah and the sound of the wind – shhh or whsss? It was after analyzing a lot of sounds that this second thought came up (it was always implied): if I can analyze sounds which exist already and which I have recorded, why can I not try to synthesize sounds, in order to find new sounds if possible?
At that time the only instruments available which tried to imitate classical instruments were those you found in nightclubs, not in the symphony orchestra, which even now is still a fairly closed sound world because of its social composition. You won't find, for example, electric organs of the modern type in the normal line-up of a symphony orchestra. On the other hand, pop music makes use of a range of specially manufactured keyboard instruments with registers to imitate trumpets, flutes, clarinets and so forth. Today all sorts of gimmicks have been added to transform these sounds; then there were fewer gimmicks, but nevertheless there was a certain variety of synthetic timbres which the composer might choose from, like a painter choosing colours to mix. Classical orchestration is traditionally an art of mixing.
To synthesize a sound you have to start with something more basic, more simple, than the sounds you encounter in daily life. I started looking in acoustic laboratories for sources of the simplest forms of sound wave, for example sine-wave generators, which are used for measurement. And I started very primitively to synthesize individual sounds by superimposing sine waves in harmonic spectra, in order to make sounds like vowels – aaah, oooh, eeeh, etc. Then gradually I found how to use white-noise generators and electric filters to produce coloured noise, like consonants – ssss, sssh, fffh, etc. – and when I pulsed them it sounded like water dripping.
From these primitive beginnings I began, as many others were then doing in the musique concrète studios, to transform recorded sounds with electric devices. For example, to change the speed of a sound: everybody who has a record player knows how to speed the music up or down just by switching from 33 to 45 rpm or vice versa. Well, we have been able since the late forties to change the speed of a tape recorder continuously, not just in steps, and so to transform sounds by speeding them up or down. This is very important. Let's immediately jump to the extreme, and then we come to the first criterion.

1.        THE UNIFIED TIME STRUCTURING
Suppose you take a recording of a Beethoven symphony on tape and speed it up, but in such a way that you do not at the same time transpose the pitch. And you speed it up until it lasts just one second. Then you get a sound that has a particular colour or timbre, a particular shape or dynamic evolution, and an inner life which is what Beethoven has composed, highly compressed in time. And it is a very characteristic sound, compared, let's say, to a piece of Gagaku music from Japan if it were similarly compressed. On the other hand, if we were to take any given sound and stretch it out in time to such an extent that it lasted twenty minutes instead of one second, then what we have is a musical piece whose large-scale form in time is the expansion of the microacoustic time structure of the original sound.
I started to compose sounds in a new way around 1956. I recorded individual pulses from an impulse generator and spliced them together in a particular rhythm. Then I made a tape loop of this rhythm – let's say it is tac-tac, tac, a very simple rhythm – and then I speeded it up: tarac-tac, tarac-tac, tarac-tac, tarac-tac, and so on. After a while the rhythm becomes continuous, and when I speeded it up still more, you begin to hear a low tone rising in pitch. That means this little period, tarac-tac, tarac-tac, which lasted about a second, is now lasting less than one-sixteenth of a second. One-sixteenth of a second is the lower limit of the perception of pitch, and a sound vibrating at 16 cycles per second corresponds to a very low fundamental pitch on the organ. The timbre of this sound is also an effect of the original rhythm being tarac-tac rather than, say, tacato-tarot, tacato-tarot, which would give a different tone colour. You don't actually hear the rhythm any more, only a specific timbre, a spectrum, which is determined by the composition of its components.
Now imagine speeding up the original one-second rhythm one thousand times, so that each cycle now lasts one-thousandth of a second; that will give you a sound in the middle range of audibility, of a constant pitch about two octaves above middle C on the piano – a frequency of 1000 cycles per second – and a particular timbre. I made a lot of experiments with different rhythms in order to see what differences in timbre they would give: what we perceive as rhythm from a certain perspective is perceived, at a faster time of perception, as pitch, with its melodic implications. You can build melodies by changing the basic periodicity, making it faster or slower for the sound to go up or down in pitch respectively. Within the basic period which determines the fundamental pitch there are what I call the partials, which are subdivisions of the basic periodicity, and they are represented here by the inner divisions making up the original rhythm. These are perceived as the timbre.
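To make the relationship between period and pitch concrete, here is a minimal sketch in Python – my own illustration, not a procedure from the lecture – that tiles an impulse pattern at different repetition rates: at one repetition per second it is heard as a rhythm, at a thousand repetitions per second as a tone of about 1000 Hz whose timbre depends on the inner subdivisions of the period. The sample rate, the rhythm_as_tone helper and the offsets of the pattern are all assumptions made for this example.

# A minimal sketch (my own illustration, not from the lecture) of a rhythm
# becoming a pitch: an impulse pattern is tiled so that the whole pattern
# repeats a chosen number of times per second.
import numpy as np

SR = 44100  # sample rate in Hz

def rhythm_as_tone(offsets, period_hz, seconds=2.0):
    """Tile an impulse pattern whose attacks fall at `offsets` (fractions of
    one period) so that the pattern repeats `period_hz` times per second."""
    period_len = max(1, int(round(SR / period_hz)))      # samples per period
    one_period = np.zeros(period_len)
    for x in offsets:
        one_period[int(x * period_len) % period_len] = 1.0
    n_periods = int(np.ceil(seconds * SR / period_len))  # enough repetitions
    signal = np.tile(one_period, n_periods)[: int(seconds * SR)]
    return signal / max(1e-9, np.abs(signal).max())

pattern = [0.0, 0.25, 0.5]                       # three attacks per period
slow = rhythm_as_tone(pattern, period_hz=1.0)    # heard as a rhythm
fast = rhythm_as_tone(pattern, period_hz=1000.0) # heard as a tone of ~1000 Hz,
                                                 # its timbre set by the pattern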
If I change the periodicity of the sound – make it a little faster, a little slower, or, to be more precise, make the duration of each period a little shorter or a little longer – then the sound starts oscillating around a certain middle frequency, and all the half-vowel or half-consonant components, which are already fairly broad-band, begin to break up. So there is a continuum between a more and a less stable periodicity, the noisiest noise being the most aperiodic. This discovery of a continuum between sound and noise, the fourth criterion of electronic music, was extremely important, because once such a continuum becomes available you can control it, you can compose it, you can organize it.
If now we slow down the speed of a given rhythm we come into the realm of form. What is form in music? Well, we usually mean a musical structure of between the one or two minutes of a piece of entertainment music and the hour and a half of a Mahler symphony, which is about the longest we encounter in music of the western tradition. (There are a few operas from the end of the nineteenth century which last longer, and which introduced some very important expansions of musical time, but there is nothing in our tradition like the Omizutori ceremony of Japan, in the temple of Nara, which lasts three days and three nights without any break, or like certain tribal rituals still to be found in Ceylon or parts of Africa.) So, according to the fixed perspective of our tradition, form varies between dimensions of around one minute and ninety minutes. This corresponds to 1, 2, 4, 8, 16, 32, 64, 128 – a range of around seven octaves. Amazingly enough, we find a similar seven-octave range within the traditional formal subdivisions of music, from the length of a phrase, the smallest formal subdivision, say eight seconds, to the largest complete section, or 'movement', of about sixteen to seventeen minutes' duration (8-16-32-64-128-256-512-1024 seconds). So there is a range of about seven octaves for durations from eight seconds up to seventeen minutes. Between eight and sixteen seconds, durations become less and less easy to remember. It has something to do with our perception: if I ask you to compare a duration of 13 seconds with 15 seconds, you hardly know the difference. If I ask you to compare a sound of one second with a sound of three seconds' duration, on the other hand, the same difference of two seconds appears enormous. Our perceptions are logarithmic, not arithmetic, and that is important. Rhythm has its own field of perception, and between eight and sixteen seconds there is a transition between our perception of rhythm and of form.
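The logarithmic point is easy to verify with a few lines of arithmetic; the sketch below is my own check of the ratios named above, not part of the lecture, and the octaves_between helper is invented for it.

# A small arithmetic check (my own, not from the lecture) of the logarithmic
# claim: what matters for perception is the ratio between two durations, i.e.
# how many doublings (octaves) separate them, not their arithmetic difference.
import math

def octaves_between(short_s, long_s):
    """Number of doublings separating two durations given in seconds."""
    return math.log2(long_s / short_s)

print(octaves_between(8, 1024))     # 7.0  -> the formal range, 8 s to ~17 min
print(octaves_between(60, 90 * 60)) # ~6.5 -> one minute to a ninety-minute symphony
print(octaves_between(13, 15))      # ~0.2 -> barely noticeable
print(octaves_between(1, 3))        # ~1.6 -> the same 2-second difference feels enormous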
Rhythm and metre are organized in measures, traditionally at a fixed periodicity or tempo for a given movement – say fast, or medium fast, or slow – because everything was based on dancing or body actions, and that's where the music came from. A periodicity of eight seconds is perceived as very slow: we are already entering the region where form begins. Subdivide eight seconds, and you have 8, 4, 2, 1, a half, a fourth, an eighth, a sixteenth of a second. One-eighth of a second, eight attacks per second, is about the fastest we can play with our fingers: it is a limit determined by our muscles and bodily construction. I could go faster perhaps, to twelve or fourteen, by rolling my hands in a special way, but no more. There again, you see, the range is seven octaves (8-4-2-1-1/2-1/4-1/8-1/16); it's interesting.
With sixteen attacks per second we reach what we call pitch; between eight and sixteen there is another transitional region where it is difficult to know what the sound really is. And as we know from the keyboard of the piano, there are seven and a half octaves in the range of fundamental pitches: from 16 to around 4000 cycles per second. Above that we perceive only brilliance.
The ranges of perception are ranges of time, and the time is subdivided by us, by the construction of our bodies and by our organs of perception. And since these modern means have become available to change the time of perception continuously, from one range to another, from a rhythm into a pitch, or a tone or noise into a formal structure, the composer can now work within a unified time domain. And that completely changes the traditional concept of how to compose and think music, because previously they were all in separate boxes: harmony and melody in one box, rhythm and metre in another, then periods, phrasing, larger formal entities in another, while in the field of timbre we had only the names of instruments, no unity of reference at all. (I sometimes think we are fortunate in having such a poor language to describe sounds, much poorer than in the visual field. That's why, in the visual field, almost all perception has been rationalized and no longer has any magic.)
There is a very crucial moment in my composition KONTAKTE for electronic sounds, beginning just before 17’ 0.5” in the printed score. A translation of the title might be 'Contacts', and the contacts are also between different forms and speeds in different layers. The moment begins with a tone of about 169 cycles per second, approximately F below middle C. Many of the various sounds in KONTAKTE have been composed by determining specific rhythms and speeding them up several hundred or a thousand times or more, thereby obtaining distinctive timbres. What is interesting about this moment is that if I were to play little bits of the passage one after another, like notes on the piano, nobody would be able to hear the transition that takes place from one field of time perception to another. The fact that I make the transition continuously makes us conscious of it, and this effort of consciousness changes our whole attitude towards our acoustic environment. Every sound becomes a very mysterious thing: it has its own time.
There is a very important observation which was made not so long ago by Viktor von Weizsäcker, a German biologist who started in medicine, which says that the traditional concept is that things are in time, whereas the new concept is that time is in the things. This is quite different from the traditional concept of an objective, astronomical time represented by our clocks, which measures everything according to the same units, and is the same time for everything. Instead, the new concept tells me as a musician that every sound has its own time, as every day has its own time. This is new in musical composition: to think in terms of an individual time-event, which then takes its own time to be put together with other sounds.
At the end of the transition in KONTAKTE which I started to describe there is a sustained note, E below middle C, which, when I reached it, I worked on for another four minutes, making very small changes in pitch. Other sounds pass by, as if you were looking out of the window of a space vehicle, but the line of orientation remains. You hear it go right away into the distance, then come back.

2.         THE SPLITTING OF THE SOUND
The same sound serves for another section of the composition KONTAKTE, beginning approximately twenty-two minutes into the tape, which I use to clarify what I call the splitting of the sound. If we understand that sounds can be composed, literally put together – not only stationary sounds which don't change, but also a sound like owww, which changes in the course of its duration – if we can compose these sounds, in the sense of the Latin componere, meaning to put together, then naturally we can also think in terms – note the quotation marks – of the 'decomposition' of a sound. That means we split the sound, and this can be much more revealing in a certain context than hearing a unified sound on its own terms and comparing it to another which is happening at the same time or immediately before or after it.
You hear this sound gradually revealing itself to be made up of a number of components which, one by one, very slowly leave the original frequency and glissando up and down: the order is down, up, down, up, up, down. The original sound is literally taken apart into its six components, and each component in turn decomposes before our ears into its individual rhythm of pulses. In the background one component of the original sound continues to the end of the section. And whenever a component leaves the original pitch, naturally the timbre of the sound changes.
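A crude analogue of this splitting can be sketched in a few lines; the following is my own reconstruction in the most general terms, not Stockhausen's studio procedure. Six sine components start as harmonics of a tone around 169 Hz and then, one by one, glide away from it in the order down, up, down, up, up, down; the split_sound helper, the glide rate and the departure times are invented for the example.

# A hedged sketch (my own, not Stockhausen's actual procedure) of "splitting"
# a steady sound: six sine components start on harmonics of a fundamental and
# then, one by one, glide away from it, changing the timbre as each one leaves.
import numpy as np

SR = 44100

def split_sound(f0=169.0, seconds=12.0, directions=(-1, +1, -1, +1, +1, -1)):
    """Six partials of f0; the k-th departs at a later time and glides in the
    given direction at half an octave per second (an invented rate)."""
    t = np.arange(int(seconds * SR)) / SR
    out = np.zeros_like(t)
    for k, sign in enumerate(directions, start=1):
        depart = seconds * k / (len(directions) + 1)          # later for each partial
        glide = sign * 0.5 * np.clip(t - depart, 0.0, None)   # octaves of glissando
        freq = f0 * k * 2.0 ** glide                          # instantaneous frequency
        phase = 2 * np.pi * np.cumsum(freq) / SR              # integrate freq -> phase
        out += np.sin(phase) / len(directions)
    return out

signal = split_sound()   # the order down, up, down, up, up, down follows the text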
So what has this got to do with composing? What makes this more than just an example from a lecture by an acoustician or a physicist who says: 'Today I am speaking about the subject of sound decomposition, and this is what it sounds like'? If it were no more than that, music would be reduced to a teaching aid – and the previous example too. This is the point: whereas it is true that traditionally in music, and in art in general, the content, the ideas or themes, were more or less descriptive – either psychologically descriptive of inter-human relationships, or descriptive of certain phenomena in the world – we now have the situation where a sound, or the passing of a sound through several time layers, may be the theme itself, granted that by theme we mean the behaviour or life of the sound. And we live through exactly the same transformation that the sound is going through. The sound splits into six, and if we want to follow all six we have to become polyphonic, multi-layered beings. Or when the sound falls six and a half octaves, you have to go with it, because if you stay put in your time chair, as it were, you won't perceive it. That's why many people get a strange feeling in the pit of the stomach when they hear the sound falling down. So there you have it: the theme of the music, of KONTAKTE itself, is the revealing of such processes, and their composition.
Of course it could be done more or less intelligently. I mean, a physics professor would just have gone straight down six and a half octaves and left it at that. Someone else might, well, vary it a little, make it a bit more inventive. If the same process is composed by different people and one is more imaginative than the other, then that's all there is to say about the process, and about the difference between a physics professor and a composer in this context.
There are many visual artists today who are mainly concerned with the exploration of new ways of seeing. Seeing itself is the theme: how to look at things and what we can see, the widening of our perception. Having said that, I must say that most examples of this kind of art that I see in galleries are absolutely vacuous: one look and that's it, you've got it. Or it makes your eyes flicker, and you say, well, okay, looking at such things makes your eyes flicker. One begins to feel like those animals in experiments which have to respond in a certain way to a particular stimulus. Much of modern art is like that. We are in a very important transition from the traditional way of perceiving art to a new way of making and perceiving art, and of discovering new functions for art: that it is revelatory. It reveals our existence and ourselves, and thereby changes us as human beings.

This change in perception will bring about incomparable changes in humanity in the next hundred years, spiritual and physiological. Don't imagine we remain the same when our perception is changing so drastically, now that our musical perspectives have become relative instead of absolute. Take timing: when I have to pass quickly through the continuum of speeds and tempi in music, I change completely, and am no longer comparable to someone who is fixed in his time perspective of metronome 70, his heartbeat, or metronome 20 or 30, his breathing, for whom everything that is faster is fast, and everything that is slower, slow. What we need, and what we will become as individuals – some of us – are beings who are able to change their speed and direction of response very quickly, experience all these transformations, and, yes, become the sounds.
As I say, people in general occupy a certain middle position in time, from which they judge what is fast and what slow, and this middle position is determined basically by the body: by the breathing, the heartbeat, the speed with which the limbs – including the fingers – can be moved, the tongue, lips, head and so on. All these limits determine a middle range of speed, and everything that is faster or slower we judge from this standpoint. The same is true for the voice, which has a natural middle register, fixed for most people, from which we judge sounds to be higher or lower. It's very hard, in a musical composition of a more modern character, to move listeners out of their middle region into another – let's say into a very fast region – for long enough for the fast to become normal and everything that was medium before to appear slow. Or the other way around: to slow an audience down with music like Japanese Gagaku music, which is very much slower than traditional western music, so that having listened for a long time to this very slow music, everything medium in speed is perceived as fast.
That will give you an idea of the change of perspective which has come about from the enormous expansion of musical timing. Nowadays a modern composition switches very fast from one time layer, one tempo, to another, whereas, as I said before, in traditional music we find a slow movement, a fast movement with a break in between, then a minuet movement, then a very fast movement, staying at a particular perspective long enough in each case for the listener to feel safe. Naturally, with the arrival of modern transport, our context of experience has changed a lot: in everyday life we can experience many different time perspectives. If I am driving in a car, and I see someone walking past, and there is an airplane passing overhead, the airplane can be very slow compared to the person walking; or if I am overtaken by a train, that can be extremely slow compared to a cyclist coming from the opposite direction. If on the other hand I leave my car and board an airplane, in my experience I am exchanging a very fast time for a slow time, because compared to being in a car, where the trees go by very fast, the experience of being in an airplane is very slow.
So in real life we may change time very quickly, and modern man has to change his time perspective just as quickly, and if he doesn't he gets sick, or even dies, because the degree of change is just too much. The same applies to space. Musical space has been fixed in the western tradition ever since musicians gave up running through the woods and took to sitting on chairs on a stage. The function of space has been neutralized in our western music. Some conductors, for the sake of instrumental effect, make changes in the positions of players in an orchestra, for instance putting the celli on the left side instead of the right, but such changes have no real revealing function: it's still fixed, it doesn't move, and all it serves to clarify is the music being a static object in space. It has something to do, I should add, with the fact that until very recently it has not been important to be able clearly to identify a sound which comes from behind, say at 270°, like a ship's navigator on a circle of 360°, or one which comes from, say, north 15° with 45° of elevation, or south 170° from an angle of minus 40° or 50°. In the concert hall we always have the same perspective, the one seat as a point of reference, which is determined, or has been up to now, by how much we can pay.
Well, I discussed at length with my studio technicians, around 1953, whether it would be wise to put musicians in chairs and swing them around, for example, and many said they might object. So then we thought it would perhaps be preferable to let them play into microphones, connect the microphones to speakers, and then swing the speakers around, so that they would not object – but they objected to that too. They said, oh no, you can't do that with me: I'm here, and the sound has to come from here. Well, we are not birds, that is the problem. If we were birds, then naturally we would not argue that way, but we are clumsy and would rather sit in one spot – in fact, most of the audience can't even stand, let alone move, during a concert – so our perspective of musical space is utterly frozen. And it has led to a music in which the movement and direction of sound in space has no function.
But the moment we have the means to move sound at any given speed in a given auditorium, or even in a given space outdoors, there is no longer any reason for a fixed spatial perspective for music. In fact, that is the end of it, with the introduction of relativity into the composition of movement and speed of sound in space, as well as of the other parameters of music. And this movement of music in space becomes as important as the composition of its melodic lines, meaning changes in pitch, and as its rhythmic characteristics, meaning changes in durations. If I have a sound of constant spectrum, and the sound moves in a curve, then the movement gives the sound a particular character compared to another sound which moves just in a straight line. Whether a sound moves clockwise or counter-clockwise, or is at the left back and right front, or any other combination – these are all configurations in space which are as meaningful as intervals in melody or harmony. So from the time these means of moving sound have been available, I have been speaking of, composing, and finding a notation for space melodies, to indicate movement up or down in space, or to describe a particular configuration in a given space, at a certain speed.
The culmination of this concept came about, happily thanks to a lot of diplomacy, at the 1970 World Fair in Osaka in Japan. I was given the chance to realize, in collaboration with an architect, a project I had first described in 1956. It was a spherical hall seating six hundred people on a platform in the middle, which is sound-transparent. They entered by a moving staircase and sat down wherever they wished. There were cushions: the Japanese like these. The platform was a metal grid, and there were speakers all around: seven circles from bottom to top, three below and four above the platform, arranged in ten vertical rows around the audience. A sound source – a singer, a player or a tape recording – could be sent to any point in this pattern of speakers. Singers and soloists worked from six crow's-nest balconies around and above the central platform; their sound was picked up by microphones and sent to a mixing desk where I or one of my assistants would be sitting. I had two soundmills constructed, each having one input and ten outputs, allowing a chosen sound to be rotated by hand at speeds up to about five revolutions per second, in any direction. For example, I could decide to make a voice go in an upward spiral movement for two or three minutes, either clockwise or anticlockwise, while at the same time another player's sound moved in a circle using the other soundmill, and a third crossed in a straight line, using just two potentiometers. So we were able to realize a free spatial composition. It could be improvised or predetermined, but we had a wonderful time improvising for six and a half hours every day for 183 days. It was wonderful playing with these things.
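One way of picturing the soundmill's one-input, ten-output routing is the minimal software model below. It is an assumption of mine about how such a device could be approximated digitally, not a description of the 1970 hardware: the soundmill function, the constant-power crossfade and the sample rate are all inventions for the sketch.

# A minimal software model (my assumption, not the 1970 hardware) of a
# "soundmill": one input, ten outputs, the signal rotated around the ring at a
# chosen number of revolutions per second, crossfading between adjacent speakers.
import numpy as np

SR = 44100
N_SPEAKERS = 10

def soundmill(mono, revs_per_second=2.0):
    """Return an (n_samples, 10) array with `mono` rotated around ten speakers."""
    n = len(mono)
    t = np.arange(n) / SR
    pos = (t * revs_per_second * N_SPEAKERS) % N_SPEAKERS  # position on the ring
    lower = np.floor(pos).astype(int)                      # speaker just behind
    upper = (lower + 1) % N_SPEAKERS                       # speaker just ahead
    frac = pos - lower                                     # how far between them
    out = np.zeros((n, N_SPEAKERS))
    idx = np.arange(n)
    out[idx, lower] = mono * np.cos(0.5 * np.pi * frac)    # constant-power crossfade
    out[idx, upper] = mono * np.sin(0.5 * np.pi * frac)
    return out

tone = np.sin(2 * np.pi * 440 * np.arange(SR) / SR)        # one second of A440
rotated = soundmill(tone, revs_per_second=5.0)             # up to ~5 rev/s, as in the text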
And the Japanese would come in and sit. They are very polite. Then I would start: the house lights would go out. It was very beautiful: wherever there was a speaker you could see a pattern of five little lights. It looked like a night sky, a very geometrically composed sky of stars. I would sit down; the players would appear; I would introduce the players in English, and it would be translated by a hostess into Japanese – Mr So-and-so will now play a duo with Mrs So-and-so – and then we'd start. I could always see the hall from the control desk. These were mostly simple people, many with babies on their backs, and at the first sound everyone would look round in astonishment and try to follow it with their eyes. And after a session of fifteen to twenty minutes they would walk out turning their heads like geese and making spiralling movements with pointing fingers. Whether they had ever heard new music before, or whether it was simply exotic stuff to them, is of little importance: what is important is that they went out imitating the movements they had heard, and I was very happy. If you discover something really new, which affects human experience – I mean, there's no discussion, that's just the way it is. All the rest is minor talk about little details. But that was important, it was a new experience.
It was a wonderful building: they destroyed it afterwards. I tried to get it to Europe; it was not very expensive, I must say. It was a geodesic construction with a plastic skin, very well made. And it worked acoustically: everyone said a sphere never works well, you get sounds bouncing up and down, but the sound was wonderful, very good acoustics and good reverberation. One day I will get it back. Certainly it was such an important experience, for the first time in history, to have the sound moving in a controlled way around the listener, with the listener at the centre. If you don't have good auditoriums, the way I recommend to hear music in which the movement of sound is very important is with earphones. That way the sound moves within you, and your head becomes this sphere, and with a little imagination you can expand this sphere to any size.

3.        THE MULTI-LAYERED SPATIAL COMPOSITION
Multi-layered spatial composition means the following: not only does the sound move around the listener at a constant distance, but it can also move as far away as we can imagine, and also come extremely close. These characteristics are distinctly different, so I am being cautious when I say that I have managed to superimpose acoustically only six layers up to now, and that it is very difficult to add more layers. At the end of the section in KONTAKTE where the sound is split into its separate components, about twenty-four minutes into the tape, there are dense, noisy sounds in the forefront, covering the whole range of audibility. Nothing can pierce this wall of sound, so to speak. Then all of a sudden, at 24’ 18.7” in the score, I stop the sound and you hear a second layer of sound behind it. You realize it was already there, but you couldn't hear it. I cut it again, like with a knife, and you hear another layer behind that one, then again. Building spatial depth by superimposing layers enables us to compose perspectives in sound from close up to far away, analogous to the way we compose layers of melody and harmony in the two-dimensional plane of traditional music. This is really very important, and nothing new in human experience: I mean, everywhere it is important to be able to hear whether the car coming towards me is still far away or not, because if I hear that it is just two feet away, I will behave differently. Well, some people may think it doesn't matter in music, but beware: if the sound comes very close it can have the same impact on our own audio-electrical system.
Why should spatial perspective be typical only of electronic music? Have we not already encountered it in a Mahler symphony, where the composer says that trumpets should sound outside the hall? Naturally we have, but such examples are fairly primitive: there is more to spatial perspective than playing loud and soft, with or without reverberation. Imagine, for example, that someone is whispering very softly in your ear, while a thunderstorm or a rocket taking off is going on ten miles away. You are still aware that the whisper is very soft but close, whereas the rocket is very loud but very far away. Now two things are necessary for hearing spatial perspectives: one, that we know what it is we are hearing, and two, that we know whether it is far away or close. When we have never heard a particular sound before, we don't always know whether it is far or close. We have to have heard it several times before, in the context of the music, in order to know how it sounds when closer and further away.
There is something very important now to be said about our concept of perception. Our concept of perception dates back, as we all know, to Gutenberg: since printing we have become verticalized, and our perceptions have become dominated by the visual. It has led to the incredible situation where nobody believes somebody else if he cannot see what it is. In every field of social life you find this need to establish everything in visual terms, because what you cannot see, people do not believe. And this leads to the very strange response of most people listening to this music: when they hear the sounds in a given hall moving very far away, and coming very close, they say, well, that's an illusion.
We now have the means technically to make a sound appear as if it were far away: 'as if', they say. A sound that is coming from far away is broken up and reflected by the leaves of the trees, by walls and other surfaces, and reaches my ear only indirectly. There is a factor of distortion and noise. Naturally we are able to reproduce these noise factors synthetically. On the other hand, a sound that is very close to my ear reaches it directly, without reflections, and the unreflected sound can also be produced artificially. Whether a sound appears 'as if' far away or very close depends on a combination of intensity and degree of distortion. The purer the sound, the closer it is, and, in an absolute sense, the louder it is.
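This combination of intensity and distortion can be imitated very crudely in a few lines; the sketch below is my own illustration of the principle just stated, not a studio recipe, and the one_pole_lowpass and at_distance helpers, as well as the mapping from distance to filter cut-off, are invented.

# A rough sketch (my own illustration of the principle above, not a studio
# recipe): a sound is pushed "far away" by attenuating it and dulling it with a
# one-pole low-pass filter standing in for reflections and distortion, while
# the direct, unfiltered signal reads as "close".
import numpy as np

SR = 44100

def one_pole_lowpass(x, cutoff_hz):
    """Very simple one-pole low-pass filter (a crude stand-in for air and walls)."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / SR)
    y = np.zeros_like(x)
    prev = 0.0
    for i, s in enumerate(x):
        prev = (1.0 - a) * s + a * prev
        y[i] = prev
    return y

def at_distance(x, metres):
    """Attenuate by 1/distance and dull the spectrum the further away it is."""
    gain = 1.0 / max(1.0, metres)
    cutoff = 8000.0 / max(1.0, metres)   # purely illustrative mapping
    return gain * one_pole_lowpass(x, cutoff)

close_whisper = 0.05 * np.random.randn(SR)         # soft but direct: heard as near
far_rocket = at_distance(np.random.randn(SR), 50)  # loud source, dull and quiet: heard as far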
Now I come to my point: when they hear the layers revealed, one behind the other, in this new music, most listeners cannot even perceive it, because they say, well, the walls have not moved, so it is an illusion. I say to them: to call it an illusion because the walls have not moved is itself the illusion, because you have clearly heard that the sounds went away, very far, and that is the truth. Whether the walls have moved at all has nothing to do with this perception, but with believing in what we hear as absolutely as we formerly believed in what we see or saw. They open their eyes and they say, well now, aha, there are the walls, so that was an illusion, the sound has not really moved away. What makes it so difficult for new music to be really appreciated is this mental block in people, which makes them say 'as if', or not even perceive what they hear. To hear a sound three miles away: they identify the sound with an object that must be at the given distance. That's what we are struggling with, and that's what will change mankind, as gradually more and more people perceive this music in its real terms.

4.        THE EQUALITY OF TONE AND NOISE
The equality of tones and noises has already been made clear in discussing the continuous transition from a periodic to a more or less aperiodic waveform. If the degree of aperiodicity of any given sound can be controlled, and controlled in a particular way, then any constant sound can be transformed into a noise. A noise is determined, as we say, by a certain band-width, or band of frequencies, the widest band-width covering the whole audible range (though to spread a sound to that extent one needs to repeat the process several times). In addition, there is the distribution of energy to be considered: the band-width might be the same for several noises, but their energy distribution might be quite different. Nowadays we have various electronic filters and modulators available to transform a steady sound into one that is more aleatoric in its inner structure.
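To make band-width and energy distribution concrete, here is a small sketch of my own, under the assumption that FFT-domain shaping is an acceptable stand-in for the studio's analogue filters; the coloured_noise helper and its tilt parameter are inventions for the example.

# A small sketch (my own; FFT-domain shaping stands in for analogue filters):
# white noise is limited to a chosen band of frequencies, and a "tilt" skews
# the energy distribution inside that band, so two noises can share the same
# band-width yet sound quite different.
import numpy as np

SR = 44100

def coloured_noise(low_hz, high_hz, seconds=1.0, tilt=0.0):
    """Band-limit white noise to [low_hz, high_hz]; positive `tilt` pushes the
    energy towards the top of the band, negative towards the bottom."""
    n = int(seconds * SR)
    spectrum = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / SR)
    inside = (freqs >= low_hz) & (freqs <= high_hz)
    weight = np.zeros_like(freqs)
    band_pos = (freqs[inside] - low_hz) / max(1.0, high_hz - low_hz)  # 0..1 in band
    weight[inside] = np.exp(tilt * band_pos)
    noise = np.fft.irfft(spectrum * weight, n)
    return noise / np.abs(noise).max()

flat = coloured_noise(500.0, 2500.0, tilt=0.0)   # flat energy distribution
tilted = coloured_noise(500.0, 2500.0, tilt=3.0) # same band-width, energy pushed upward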
As it has become possible to define a continuum between sounds and noises, completely new problems have come up when we compose or play intuitively, because we have no training whatsoever in balancing tones and noises. Traditionally in western music noises have been taboo, and there are precise reasons for this. It began from the time when staff notation was introduced, and music could be notated in precise intervals for the first time. At that time it was mainly vocal music, sung predominantly with vowels rather than consonants. If I were to sing a melody of consonants now, people would say it isn't music: we have no tradition of music composed in these sounds, and no notation for it. There you see how narrow our concept of music is, from having excluded consonants, then noises. Of course you find consonants in vocal music, but only in order to make a word comprehensible: that's the function of consonants in our daily language, to clarify meaning. But in a musical sense consonants have no function, other than as accents: ss or p or k to start or end a sound clearly.
The integration of noises of all kinds has only come about since the middle of this century, and, I must say, mainly through the discovery of new methods of composing the continuum between tones and noises. Nowadays any noise is musical material, and it is possible to select a scale of degrees from sound to noise for a given composition, or to choose an arbitrary scale from the complete range. The balance between tones and noises is not at all a numerical one. When I was working on the final section of KONTAKTE, I wanted a situation where tones and noises were in balance. I wanted to deal with the whole scale of timbres between aleatoric and periodic, and I could certainly not use as many noises as tones, because noises tend to cover the tones, being, so to speak, more primitive. So, to establish a proper balance between the two in a given section of music, one has to be careful to reduce the number of noises extremely in relation to the number of tones.
From this we discover more new principles of musical articulation: for example, I worked with forty-two different scales in this particular work. If you know that the piano has a half-tone scale, twelve steps to the octave, then imagine forty-two different scales, where the octave is divided into thirteen, fifteen, seventeen, twenty-three steps, and so on. I have used a scale of scales, where the ratio of increase in the step size is constant from one scale to the next, and each particular scale is strictly associated with a particular family of tones and noises. Put at its simplest, the noisier the sound, the larger the interval, the bigger the step size. The noisiest sounds in KONTAKTE are two octaves in width, and the scale for these noises is the largest and most simple scale in the whole work: a scale of perfect fifths. A step size of a fifth means that twelve steps cover the whole audible range. The narrower the band-width, on the other hand, and the more the sound approaches a pure tone, the finer the scale: this is the principle I have applied. With the purest tones you can make the most subtle melodic gestures, much, much more refined than what the textbooks say is the smallest interval we can hear, namely the Pythagorean comma 80:81. That's not true at all. If I use sine waves, and make little glissandi instead of stepwise changes, then I can really feel that little change, going far beyond what people say about Chinese music, or in textbooks of physics or perception.
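The arithmetic behind this scale of scales is easy to sketch. The example below is my own simplification using equal divisions of the octave (the actual scale tables of KONTAKTE are not reproduced here, and the 169 Hz base is only illustrative), plus a check that twelve steps of a perfect fifth span roughly seven octaves.

# An arithmetic sketch (my own simplification, not the actual scales of
# KONTAKTE): equal divisions of the octave with different numbers of steps for
# the purer sounds, and a check on the range covered by a scale of fifths.
import math

def equal_division(steps_per_octave, base_hz=169.0, count=5):
    """First few degrees of an equal division of the octave above base_hz."""
    ratio = 2.0 ** (1.0 / steps_per_octave)
    return [round(base_hz * ratio ** k, 1) for k in range(count)]

for n in (12, 13, 15, 17, 23, 42):
    print(n, "steps per octave:", equal_division(n))

# Twelve steps of a perfect fifth (ratio 3:2) cover about seven octaves:
print(round(math.log2(1.5 ** 12), 2), "octaves")   # ~7.02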
But it all depends on the tone: you cannot just use any tone in any interval; there is a relationship between the nature of the sound and the scale on which it may be composed. Harmony and melody are no longer abstract systems to be filled with any given sounds we may choose as material. There is a very subtle relationship nowadays between form and material. I would even go so far as to say that form and material have to be considered as one and the same. I think it is perhaps the most important fact to come out in the twentieth century that, in several fields, material and form are no longer regarded as separate, in the sense that a given material determines its own best form according to its inner nature. The old dialectic based on the antinomy – or dichotomy – of form and matter has really vanished since we have begun to produce electronic music, and have come to understand the nature and relativity of sound.
