Eastman Csound Tutorial

Chapter 2

When a violinist plays a tone, there are continuous variations in the amplitude, the timbre and usually (although we may be less aware of it) also in the pitch of this tone. These time-varying changes animate the tone, giving it "life," particular expressive qualities and a sense of phrasing or "direction." To produce similar variations in the digital synthesis of tones and noise-like sounds we employ control signals, which specify simple or complex changes in amplitude, pitch, timbre and other musical elements.

There are three basic ways in which a musical parameter can vary over time, and thus there are three broad categories of control signals:

  1. a pattern of overall changes in level (e.g. in amplitude or in pitch) from the beginning to the end of the tone is called an envelope
    Generally such changes can be graphed in 3, 4, 5 or more straight line or curved segments between breakpoint values. The amplitude attack and decay of a tone (changes in loudness), or a pitch glissando, or a crescendo are examples of envelopes.
  2. regularly recurring variations within a tone, such as a vibrato, a fluttertongue or a string tremolo comprise periodic variations in a sound or in a signal
  3. rapid but complex, irregular, random or quasi-random variations result in aperiodic variations, which often are associated with noise-like elements within the tone

If we focus our attention carefully on the pitch, on the amplitude, on the timbre or on some other element of a single tone, or if we employ digital audio analysis tools to provide us with a graphical display of one of these parameters, we often will be able to identify simultaneous envelope, periodic and aperiodic variations in the parameter. We also may find it somewhat difficult to separate or disentangle these three types of variations. An amplitude envelope itself may include both periodic and aperiodic elements. A (periodic) vibrato may also incorporate a pitch envelope (the vibrato may alternately become wider and then narrower) as well as aperiodic components, such as subtle, irregular variations in the speed or depth of the vibrato. Similarly, aperiodic signals often will incorporate some type of envelope (noise-like components within the sound may become more pronounced, then less pronounced).

Without much apparent conscious thought, but applying their innate and learned musicianship, an accomplished singer or cellist often will vary the pitch or loudness or timbre of a tone simultaneously, and fluidly, in each of these three ways. Each note will have a unique amplitude envelope and pitch envelope, unique periodic variations in loudness and pitch, and unique irregular or "random" components.

If, while synthesizing a sound, we create a vibrato that remains absolutely constant, unvarying in speed or depth, the vibrato will tend to sound mechanical, contrived, uninteresting, "unmusical" and, before too long, downright irritating. Often, only when we are able to combine envelope, periodic and aperiodic variations to amplitude, to pitch (especially) and to timbre will the resulting synthesis sustain our musical interest.

In sound synthesis, however, we generally must generate separate control signals to vary the amplitude envelope, periodic amplitude variations and rapid, irregular amplitude variations for each tone, and an additional set of control signals may be necessary to vary the pitch envelope, vibrato and random variations in pitch. The unit generators available in synthesis programming languages such as Csound, Reaktor and PD are tools that, of necessity, are generally designed to create a particular type of time varying control signal -- either an envelope or else periodic or aperiodic variations. (A unit generator that enabled us to control all three types of variations simultaneously would probably be too complex to be of much use.) Thus, we often must adopt a modular approach to synthesis, creating envelope, periodic and aperiodic signals, mixing or adding them together and then patching the resulting complex control signal into the amplitude, pitch or timbral input of an oscillator or some other type of audio signal generator.

Much of this chapter will be devoted to the creation of envelopes that can be applied to amplitude, to pitch, and to most other parameters of our sounds, and which we can create with oscillators and with other unit generators specifically designed for this purpose. A second major topic we will consider, at the conclusion of this chapter, will be some methods of reading soundfiles into Csound for processing. In Chapter 3 we will look at ways to create periodic control signals, and in Chapter 4 we will examine some basic types of aperiodic control signals.

First, however, we will look at some alternative ways to specify pitch, and at initialization variables.

2.1. Pitch Converters

[ See the discussion of cpspch in the Csound 5 Reference Manual. (The See Also section of the cpspch documentation also will point you to additional pitch conversion opcodes, such as cpsoct, octcps, octpch and pchoct, that you won't need now, but may wish to use at some point in the future.) ]

If we want equal-tempered pitches, typing in the frequency of each pitch in our scores grows old very quickly. Csound provides several alternative ways to specify frequency or pitch. Here, we will consider:

cycles per second (cps) ;
octave pitch class (pch) ; and
octave decimal (oct) notations.

Each of these pitch specification methods is useful in certain circumstances.

cps : In cps notation, pitch is expressed in terms of cycles per second (hertz). This is the method we employed in Chapter 1. Ultimately, oscillators always must receive pitch information in terms of cps.
pch : Here, digits to the left of the decimal point specify the octave, while digits to the right specify an equal-tempered pitch class. 8.00 represents middle C; 5.00 specifies the lowest pitch-class C on the piano, and 12.00 the highest "C" on the piano. The "fraction" .01 represents the pitch class "C#", .02 "D", and .11 "B".
In pch notation, any number with a decimal portion higher than .11 gets converted to the next highest octave. Thus, 8.12 is the same as 9.00, 8.15 is the same as 9.03, etc.
oct : Octave designations (to the left of the decimal point) are the same as those in pch notation (8.00 is still middle C). However, the fractional part now represents a decimal fraction of the octave: 8.50 is a pitch exactly halfway between middle C and the C above it (F#), and 8.75 is 3/4 of an octave above middle C (A 440). A half step has a value of .0833 (1/12 of an octave).
oct notation, while less common than cps and pch, can be useful for representing microtonal tunings, vibratos, glissandi, and other purposes.

Csound also provides several pitch converter utilities that enable us to convert pitch specifications between pch, cps and oct formats. The (ugly) names of these pitch converter utilities consist of two of these 3-letter groups butted together. The first three letters specify the output format of the pitch notation, while the second three letters indicate the input format. Thus, a line of code like this:
     asound oscili p5, cpspch(p4), p6


means: The argument in p4 is currently in pch notation. Convert this value into its cps equivalent, and use this cps value to compute the oscillator's output value.

Here are some other examples of pitch-conversion operations :

     (operation and input value)         (output value)
    octpch(8.01)  returns     8.083  (a half-step above middle C)
    pchoct(9.5)   returns     9.06   (F#, 18 semitones above middle C)
    cpspch(7.09)  returns   220      (A below middle C)
    octcps(294)   returns     8.168  (approximately D above middle C)
    cpsoct(7.5)   returns   184.99   (F# below middle C)
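If you would like to check these numbers for yourself, here is a minimal Python sketch of the arithmetic behind the converters. This is our own illustration, not Csound's source code; it assumes equal temperament with middle C (8.00) at 261.626 Hz.

```python
MIDDLE_C = 261.626  # Hz; written 8.00 in both pch and oct notation

def octpch(pch):
    """pch -> oct: the two digits after the point are a semitone count."""
    octave = int(pch)
    semitones = round((pch - octave) * 100)
    return octave + semitones / 12.0

def pchoct(oct_val):
    """oct -> pch: convert a decimal octave fraction back to semitones."""
    octave = int(oct_val)
    semitones = (oct_val - octave) * 12.0
    return octave + semitones / 100.0

def cpsoct(oct_val):
    """oct -> cps: each whole number is one octave (a doubling)."""
    return MIDDLE_C * 2.0 ** (oct_val - 8.0)

def cpspch(pch):
    """pch -> cps."""
    return cpsoct(octpch(pch))

print(round(octpch(8.01), 3))  # 8.083 (a half-step above middle C)
print(round(pchoct(9.5), 2))   # 9.06  (F#, 18 semitones above middle C)
print(round(cpspch(7.09), 1))  # 220.0 (A below middle C)
print(octpch(8.12))            # 9.0   (pch fractions above .11 wrap to the next octave)
```

Note how octpch(8.12) lands on 9.0, confirming the octave wraparound rule described above.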

Next question: How do we use these things? Let's go back to our first orchestra from Chapter 1 and modify the oscillator's second (frequency) argument as follows:

asound oscili p5, cpspch(p4), p6

Now the value in p4 will be converted from pch to cps, before the oscillator performs any computation.

With this cpspch converter in place in our orchestra file, we can now specify equal-tempered pitches in our score:

     Orchestra file:
          sr= 44100
          kr= 2205
          nchnls= 1
          instr 1
          asound oscili p5,cpspch(p4),p6
          out asound

     Score11 input file :
          *f1 0 1024 10 1.;
          *f2 0 1024 10 .3 1. .4 .15 .05;
          *f3 0 1024 9 1. 1. 0 2.7 .67 0 5.4 .3 0 8.1 .1 0;
          i1 0 0 5;
          p3 2;
          p4 no c4/ df3/ g5/ af1/ ef6;
          p5 1. 8000 12000;
          p6 nu 1/ 2/ 3;
     Csound score file output produced by score11 from the input file above:
          f1 0 1024 10 1.
          f2 0 1024 10 .3 1. .4 .15 .05
          f3 0 1024 9 1. 1. 0 2.7 .67 0 5.4 .3 0 8.1 .1 0
            i1 0.000 2.000 8.00 9723 1
            i1 2.000 2.000 7.01 8752 2
            i1 4.000 2.000 9.07 9369 3
            i1 6.000 2.000 5.08 8392 1
            i1 8.000 2.000 10.03 11004 2
The cpspch(p4) converter within our Csound orchestra, in turn, will convert the p4 value 8.00 to 261.626 hz; 7.01 will be converted to 138.591 hz.; 9.07 will be converted to 783.99 hz., and so on.

2.2. Initialization Values

Suppose that we want to use the value of cpspch(p4) more than once in our instrument block. We might wish, for example, to create several oscillators, each producing some multiple of the frequency specified in p4. Typing the magic incantation cpspch(p4) several times is not an appealing prospect, nor is it necessary. By creating an initialization variable, we can direct Csound to compute this value once, give the result a name (which must begin with the character i), store the result in RAM, and return the value of this variable whenever we call it by name. Here is an example:

     ipitch = cpspch(p4)     
       a1 oscili .5 * p5, ipitch, p6
       a2 oscili .3 * p5, 2.001 * ipitch, p6
       a3 oscili .2 * p5, 4.98 * ipitch, p6
     a1 = a1 + a2 + a3            ; add 'em up
     out a1                       ; and spit 'em out

The first line of code above declares the variable ipitch and assigns to it the value of the operation cpspch(p4). The letter i that begins this output name signifies an operation that is performed at initialization time, before the production of samples begins. Unlike a-rate (audio) and k-rate (control) signals, i-rate (initialization) values are computed only once, at the very beginning of a note.

Data from score p-fields are examples of initialization values. But, as here, initialization values also can be created within an instrument, and can be the result of mathematical or logical expressions (operations). These initialization values remain constant throughout a note, but often change from one note to the next. Each subsequent use of an init variable within the instrument algorithm actually consists of taking the current value of a particular memory location and patching it into an argument in some unit generator. Any string of numbers and/or letters after the initial "i" can be used to label the variable. We could just as well have called it ivalue, i1 or ilikethis as ipitch.

In the example above, all three oscillators access the value of ipitch within their frequency arguments. The second oscillator multiplies the ipitch value for this note by 2.001 (producing a slightly sharp second harmonic), while the third oscillator multiplies the variable by 4.98 (producing a slightly flat fifth harmonic). The three oscillator signals are mixed together in the proportions 50 % (a1), 30 % (a2) and 20 % (a3), as specified in the amplitude arguments to these oscillators. The result of this mix is then assigned to the audio-rate variable a1. Note that this overwrites the previous value of a1, which can no longer be accessed and presumably is no longer needed. (It had better no longer be needed!)

On the final two lines in this example, we have added comments (which begin with semicolons, the Csound comment symbol) to congratulate ourselves on this successful additive synthesis. The function for the oscillators, supplied in p6 of our score, might well be a sine wave. However, it might also be a more complex signal, with many partial frequencies. If we like the results we get, we might also add several more oscillators to this simple additive synthesis algorithm.
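As a rough sketch of what this three-oscillator mix computes, here is the same arithmetic in Python (our own illustration, not Csound code; the 0.5/0.3/0.2 weights and the 2.001 and 4.98 detuning factors are taken from the example above, and the partials are assumed to be sine waves):

```python
import math

def additive_sample(t, freq):
    """One sample of the three-partial mix: the fundamental, a slightly
    sharp 2nd harmonic, and a slightly flat 5th, in 50/30/20 proportions."""
    return (0.5 * math.sin(2 * math.pi * freq * t)
            + 0.3 * math.sin(2 * math.pi * 2.001 * freq * t)
            + 0.2 * math.sin(2 * math.pi * 4.98 * freq * t))

# Because the three weights sum to 1, the mix can never exceed the
# peak amplitude of a single full-strength oscillator:
sr, freq = 44100, 261.626
peak = max(abs(additive_sample(i / sr, freq)) for i in range(sr))
print(peak <= 1.0)  # True
```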

2.3. Envelope Generators

Now that we have greater flexibility in specifying pitch, and at least some control over timbre, our instrument is somewhat more useful, though probably still not something to which we would entrust a performance realization of, say, the complete Goldberg Variations. The most important feature it still lacks is time-varying amplitude control. Currently, the amplitude remains fixed at some level throughout a note. Not only is this wearying to the ear, it also causes clicks at the beginnings and ends of each note. We are asking the loudspeaker cones to start and stop oscillating at this amplitude level almost instantaneously. Since inertia makes this impossible, they will flap around wildly for an instant at the onset and conclusion of each note, producing noise artifacts (clicks).

What we need is a means to shape the amplitude over the duration of each note, so that it rises from 0 (or near 0) to some maximum level determined in our score (probably in p5), then eventually falls back down to 0 at the end of the note. Such a rise and fall (or attack and decay) amplitude pattern is called an envelope.

Many musical sounds have amplitude envelopes with the following properties: There is generally a peak, or "spike," near the beginning of a note, as energy builds up and the amplitude rises to its peak value. This rise in physical intensity is called the attack. Then, there is usually some attenuation, down to a steady-state level (typically between 50 and 90 % of the peak level). Actually, the steady-state is rarely very "steady," but usually includes random and/or quasi-periodic variations. Finally, at the end of the note, after a violinist lifts her bow off the string, or a clarinetist stops blowing into the instrument, the amplitude falls off to zero. This is known as the decay. Idiophonic percussive sounds usually contain no "steady state" segment, but rather consist entirely of attack and decay.

What we want to do then, is to create a signal at the control (or, less often, audio) rate that defines such a shape, and then feed this control signal to the amplitude input of our oscillator. There are several ways to create envelopes in Csound. Here, we will survey some of the most commonly used envelope generators.

2.4. Line and Expon

[ See the discussions of line and expon in the Csound 5 Reference Manual ]

The simplest envelope generators available in Csound are line and expon. Unit generator line creates a linear (straight line) connection between two values over a specified duration. Thus, there are three required arguments : (1) a beginning value; (2) the duration over which to move between the two values (generally this duration will be the entire duration of a note); and (3) the closing value. Consider the following modification to our sample orchestra :

     (header omitted here)
     instr 1
     ipitch = cpspch(p4)
     kmart line 0, p3, p5
     asound oscili kmart, ipitch, p6
     out asound

Here we use the unit generator line to create a gradual, linear increase in amplitude from 0, at the start of the note, to our p5 value, at the very end of the note. Since this is a rather prosaic, off-the-shelf envelope, we call the output of this operation kmart. (Not all that funny, but there it is.) Remember that the output name of a unit generator or mathematical operation must begin with an "i," a "k", or an "a," specifying the rate at which the operation is performed, but that the remainder of the name can consist of any string of numbers and/or alphabetical characters.

Next, we patch the current contents of memory location kmart into the amplitude argument of oscili. Note that the operations performed by line must be done before we call oscili, since the oscillator is looking for a control-rate value named kmart. In our instrument code, therefore, the call to line must precede the call to oscili. If we reversed the order of these two lines of code, oscili would be unable to find kmart. Csound would complain bitterly and quit.

Note, too, that the Csound manual indicates that line and expon can be run either at the control rate or at the audio rate (but NOT at the i-rate, since the values CHANGE within a note.) Why did we choose to run line at the k-rate? A single line segment is a simple shape, and not much resolution is needed to get the desired result.

Occasionally, if a control value changes very rapidly (such as an attack that lasts just a few milliseconds) or has a complex shape, better audio quality can be achieved by running this control variable at the audio rate. The general rule, however, is to run control signals at the k-rate. (Audio signals, of course, MUST be run at the audio rate.)

A final observation before we bid a fond adieu to unit generator line. Note in the Csound manual that the mnemonic names for the three arguments to line ("ia," "idur1," and "ib") all begin with the letter i. This tells us that each of these inputs is an i-time value which CANNOT change within a note.

Back to work.....

One problem with using line is that linear amplitude changes (and linear pitch changes) often sound rather abrupt or unrealistic. In natural acoustic sounds, changes in amplitude (and also in pitch) more typically follow exponential curves. To obtain a more natural-sounding exponential increase in amplitude intensity, we could substitute expon for line in our instrument algorithm:

kmart expon 1, p3, p5

Notice that we have changed the first argument value from a 0 to a 1. Exponential segments are computed as ratios between two values. There is no ratio between zero and some number. Therefore, zeros are illegal in any exponential operation. If we had used a value of .1 instead of 1.0 for the first argument, the slope of the exponential curve would be much steeper, with most of the increase coming at the very end of the note. The ratio of .1 to p5 is ten times greater than the ratio of 1.0 to p5. A value of .01 would give us an extremely steep curve. If we were using line, however, there would be no audible difference between using 1, 0, .1 or .01 for the first argument.
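The effect of the starting value on the slope can be checked numerically. The following Python sketch evaluates an exponential segment at its midpoint (this is our own illustration of the ratio arithmetic, not the actual expon implementation):

```python
def expon_value(a, dur, b, t):
    """Value at time t along an exponential segment from a to b over
    dur seconds: the curve multiplies by a constant ratio per unit time.
    Zeros (and sign changes) are illegal, as in Csound's expon."""
    return a * (b / a) ** (t / dur)

# Midpoint of a 1-second rise to 10000, for three starting values:
print(expon_value(1.0, 1.0, 10000.0, 0.5))   # 100.0 -- the geometric midpoint
print(expon_value(0.1, 1.0, 10000.0, 0.5))   # about 31.6 -- steeper: most of the rise comes later
print(expon_value(0.01, 1.0, 10000.0, 0.5))  # about 10.0 -- steeper still
```

Halfway through the note, the curve starting from 1 has reached only 100 of its final 10000, and the curve starting from .01 has reached only about 10, confirming that smaller starting values push the rise toward the very end of the note.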

There is another restriction that applies to all exponential operations. One cannot go from a positive to a negative value, or vice versa. There is no ratio between positive and negative numbers.

Actually, there is a way to create control signals that move exponentially between positive and negative values or vice versa -- by adding an offset. To move exponentially from -.5 to .5, we could do this:

kup expon .01, p3, 1.
kup = kup - .5

In point of fact, the line and expon amplitude envelope examples above, while possibly instructive, are academic for our purposes here. The simple one-segment slopes produced by these two unit generators clearly are inadequate for the generation of amplitude envelopes, which must include at least two segments (a rise and a decay). However, line and expon sometimes are adequate for the creation of other types of control signals. In the following stereo orchestra file, expon is used to create a simple pitch glissando, and line is employed to produce a moving stereo pan:


     Orchestra file :
     instr 1
     ipitch  =  cpspch(p4)
     kpitch   expon   ipitch, p3, p7 * ipitch ;glissando control signal
     asound  oscili p5, kpitch , p6
         ;  now add a moving stereo pan :
     kpan   line   p8 , p3  , p9  ;  pan envelope
     outs  kpan * asound ,  (1.  - kpan) * asound
     Score p-field values for glissando and pan:

     p7  nu 2./.97 ; < multiplier (*p4) for ending pitch
     p8  nu  1./.5 ; < starting %  of signal sent to left channel
     p9  nu  0/.25 ; < end % of signal sent to left channel
Pitch : For the first note, the pitch will begin on the tone specified in p4, then, over the full duration of the note, glissando up one octave (p7 = 2.)
For the second note, the pitch will descend slightly ( to .97 times the starting frequency, or about a quarter tone down).
Pan : The first note will begin with a hard left pan (p8 = 1.). Over the full duration of the note, this sound will move gradually to the right speaker. The second note will begin mid way between the two speakers, and move to a concluding position 3/4 of the way to the right.
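The pitch trajectory that kpitch traces can be sketched numerically. Here is a Python illustration of the expon ratio curve for the first note (our own example values: A-220 over a hypothetical 2-second note):

```python
def glissando(t, p3, start_cps, mult):
    """Frequency at time t of an exponential sweep from start_cps
    to mult * start_cps over a note of duration p3 (seconds)."""
    return start_cps * mult ** (t / p3)

# First note: a one-octave upward glissando (p7 = 2.):
print(glissando(0.0, 2.0, 220.0, 2.0))  # 220.0
print(glissando(1.0, 2.0, 220.0, 2.0))  # about 311.1 -- halfway in time = halfway in pitch (a tritone)
print(glissando(2.0, 2.0, 220.0, 2.0))  # 440.0
```

Note that because the sweep is exponential, the halfway point in time is also the halfway point in perceived pitch (six semitones up), which is exactly why exponential curves sound more natural than linear ones for glissandi.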

The panning operation above will work to some degree, but could be improved. For psychoacoustical reasons, a sound emanating mid way between two loudspeakers (a kpan value of .5 or so in the example above) tends to sound softer than the same sound panned hard left or hard right. The panpot circuits on most hardware mixing consoles compensate for this nonlinearity by logarithmically varying the output intensity of signals as the potentiometer position is moved, applying 3 dB of gain when the panpot is centered and no gain when the pot is moved all the way to the right or left.

To improve our panning subroutine in the example above, therefore, we could employ the Csound value converter sqrt which returns the square root of an input value or expression:

         ;  now add a moving stereo pan :
     kpan   line   p8 , p3  , p9 ;  pan envelope
     kleft = sqrt(kpan)
     kright = sqrt(1. - kpan)
     outs  kleft * asound ,  kright * asound
   when kpan = 1.000  then kleft = 1.000  and kright = 0.000
   when kpan = 0.900  then kleft = 0.949  and kright = 0.316
   when kpan = 0.800  then kleft = 0.894  and kright = 0.447
   when kpan = 0.700  then kleft = 0.837  and kright = 0.548
   when kpan = 0.600  then kleft = 0.775  and kright = 0.632
   when kpan = 0.500  then kleft = 0.707  and kright = 0.707
   when kpan = 0.400  then kleft = 0.632  and kright = 0.775
   when kpan = 0.300  then kleft = 0.548  and kright = 0.837
   when kpan = 0.200  then kleft = 0.447  and kright = 0.894
   when kpan = 0.100  then kleft = 0.316  and kright = 0.949
   when kpan = 0.000  then kleft = 0.000  and kright = 1.000
Often, this will result in better stereo imaging and "smoother" panning operations.
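The reason this works is that the square-root law holds the total power (kleft squared plus kright squared) constant across the stereo image, whereas the plain linear pan holds only the sum of the amplitudes constant and so dips by about 3 dB at the center. A quick Python check of the table above (our own illustration, not Csound code):

```python
import math

def pan_gains(kpan):
    """Square-root pan law from the example above.
    kpan = 1.0 is hard left, 0.0 is hard right."""
    return math.sqrt(kpan), math.sqrt(1.0 - kpan)

for kpan in (1.0, 0.75, 0.5, 0.25, 0.0):
    left, right = pan_gains(kpan)
    # left^2 + right^2 is always 1.0, so the perceived loudness
    # stays even as the sound moves across the stereo field:
    print(round(left, 3), round(right, 3), round(left**2 + right**2, 3))
```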

2.5. Linseg and Expseg

[ See the discussions of linseg and expseg in the Csound 5 Reference Manual ]

linseg and expseg work much like their little sisters line and expon, except that they enable us to include any number of linear (linseg) or exponential (expseg) envelope segments within a note. Thus, they do not have a fixed number of input arguments. Here is an example of how we can "draw" an amplitude envelope with expseg:

kamp expseg   1,        .15,        12000,  .10,    10000,  p3-.5,  6000,  .25,     1
            (value 1) (duration1) (val 2) (dur 2) (val 3) (dur 3) (val 4) (dur 4) (val 5)

The arguments to expseg or linseg, called break-points, present an alternating series of intensity levels and of durations between adjacent intensities. As with expon and line, the first and last values must be intensity levels. The expseg example above creates a series of four exponentially shaped connections ("curves") derived from the ratios between values 1 and 2, then values 2 and 3, and so on; zero amplitude values are illegal. If we had used linseg rather than expseg in the line above, then the segments between each pair of values would be linear instead of exponential, and zeros would be permissible.

You might be wondering how we derived the sixth argument, p3-.5, in the example above. Since we want the envelope to last the exact duration of the note, all of the durational arguments must add up to p3. Thus, by adding up the other three durations (.15 , .1 and .25), and getting a total of .5, we simply make the rest of the note the "steady-state" time, (p3 - .5). One thing to watch for: this envelope will not work with notes shorter than .5 seconds. Why?
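(Answer: the "steady-state" duration p3 - .5 becomes negative for notes shorter than .5 seconds.) The bookkeeping can be sketched in Python, using the segment durations from the expseg line above (our own illustration, not Csound code):

```python
def envelope_durations(p3, attack=0.15, post_peak=0.10, release=0.25):
    """Split a note of length p3 into the four expseg segment durations;
    the 'steady-state' segment absorbs whatever time is left over."""
    steady = p3 - (attack + post_peak + release)
    if steady < 0:
        raise ValueError("note is shorter than the fixed segments")
    return [attack, post_peak, steady, release]

durs = envelope_durations(2.0)
print(durs)                          # the steady-state segment gets 1.5 of the 2 seconds
print(abs(sum(durs) - 2.0) < 1e-9)   # True: the segments always sum to p3
```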

Here's a more flexible envelope that can vary from note to note:

kamp expseg   1,   p7,    p5,  p3 - (p7+p8) , p9 * p5,  p8,      1
                (value  1) (duration1)  (val  2)   (dur  2)           (val 3)   (dur 3)    (val 4) 

To use this amplitude envelope in our instrument, we would have to add p-fields 7,8 and 9 to our score. p7 will determine attack time, p8 decay time. p9 will indicate the "steady-state" amplitude level as a percentage of p5. Let's assume the following score values :

       p3  3      ;   duration (here, 3 seconds)
       p5  10000  ;   peak amplitude
       p7  .2     ;   attack time
       p8  1.     ;   decay time
       p9  .5     ;   "steady-state" amplitude level

The resulting amplitude will rise over .2 seconds to our p5 level of 10000. Over the next 1.8 seconds - the duration of p3-(p7+p8) - the amplitude will decrease to a value of 5000 (p9*p5), and then decay over the final 1 second (p8) to a final value of 1 (inaudible).

Many different types of amplitude patterns are now possible. If p9 is greater than 1., for example, the amplitude will rise during the "steady-state," and the maximum amplitude for the note will exceed our p5 value. Attack and decay times can be individually long or short. We can employ random selection procedures in score p-fields 5, 7, 8 and 9 to vary the envelope of each note. The only thing we must watch is that p7+p8 does not exceed p3. Our instrument algorithm is now more powerful. On the down side, we must make more decisions when creating scores for it to compute.

Example ex2-0-1 illustrates the use of expseg to create amplitude and pitch envelopes in which the duration of each note is divided into four segments.

;  #############################################################
;  soundfile ex2-0-1 : expseg example    Csound Tutorial
;  envelopes with 4 segments control amplitude and pitch
;  #############################################################
instr 2
; init values:
; amplitude envelope breakpoints:
 iamp = p5 ;  * 32700
  iamp1 = p4 * iamp
  iamp2 = p6 * iamp
  iamp3 = p7 * iamp
print iamp, iamp1, iamp2, iamp3 ;display breakpoint values for amplitude envelope
; durations for 4 segment amp. and pitch envelopes
 idur = p3
  idur1 = p8 * idur
  idur2 = p9 * idur
  idur3 = p10 * idur
  idur4 = p3 - (idur1 + idur2 + idur3)
print idur1, idur2, idur3, idur4 ;display durations for the segments
; pitch envelope breakpoints:
  ipitch =  cpspch(p11)
  ipitch1 = p12 * ipitch
  ipitch2 = p13 * ipitch
  ipitch3 = p14 * ipitch
; amplitude envelope:
kamp expseg .01, idur1, iamp1, idur2, iamp2, idur3, iamp3,idur4, .01
; pitch envelope:
kpitch expseg ipitch1, idur1, ipitch1, idur2, ipitch2, idur3, ipitch3, idur4, ipitch3

asound oscil3  kamp, kpitch, p15
out asound
< Score11 file used to create soundfile  ex2-0-1 :
*f1 0 4096 10 0 0 .3 .5 0 .2 0 .7 1. .85 .4 .22 .33 .1 .05;
*f2 0 4096 10 0 .1 .2 0 .6 1. .6 .2 0 .05;
*f3 0 4096 10 0 0 0 .7 1. .4 .04 .3 .1 .02 .01 .01;
i2 0 0 3;
p3 nu 6./5.;< nu 3./3.5;
 du  310;
< amplitude envelope
am .9;
p5 nu 28000/15000/22000;   < peak amp. 0 to 1. (max amp)
p4 nu .1/1./ .5;  < amp 1 : 0 - 1., * p5
p6 nu 1./.15/1.;    < amp 2 : 0 - 1., * p5
p7 nu .5/.85/.08;   < amp 3 : 0 - 1., * p5
< durations for 4 part envelopes:
p8  nu .01/.2/.3 ;  < duration1 (* p3)
p9  nu .4/.2/.15 ;  < duration2 (* p3)
p10 nu .3/.2/.3 ;  < duration3 (* p3)
< pitch envelope
p11  no a1 / gs2 / cs1 ; < fundamental pitch
p12 nu .94/1.12 /1.;  < pitch 1 (*p11) start of note
p13 nu 1.06/1./ .94;  < pitch 2 (*p11) middle of note
p14 nu 1.0/.94/ 1.06; < pitch 3 (*p11) end of note

p15 nu 1/2/3;  < audio function number

If you compile this example into a soundfile and open the soundfile with a soundfile editor such as ReZound or Audacity, you can see as well as hear the shapes of the amplitude envelopes for the three notes. As a diagnostic aid, we have included print statements to display the values of the durational segments and amplitude breakpoint values.

Try substituting linseg in place of expseg in the ex2-0-1 instrument to create linear instead of exponential amplitude and pitch envelopes, and listen to the difference:

kamp linseg .01, idur1, iamp1, idur2, iamp2, idur3, iamp3,idur4, .01
kpitch linseg ipitch1, idur1, ipitch1, idur2, ipitch2, idur3, ipitch3, idur4, ipitch3

The score for ex2-0-1 is typical of the type of relatively simple score files one normally creates when developing an instrument algorithm. It produces only three notes, in different pitch registers, which overlap only slightly so that we can hear the amplitude and pitch envelopes of each note, and has a monophonic output. Once we have the instrument algorithm working to our satisfaction we can begin to create musically more interesting score files for it.

In ex2-0-2 we have modified the orchestra slightly to produce a stereo output and have included a panning envelope for each note, requiring two additional p-fields (p16 and p17) in our instrument and score file.

; #############################################################
;  soundfile ex2-0-2 : expseg example    Csound Tutorial
;  stereo pans added to orchestra file ex2-0-1
;  #############################################################
instr 2
; init values:
; amplitude envelope:
 iamp = p5 ;  * 32700
  iamp1 = p4 * iamp
  iamp2 = p6 * iamp
  iamp3 = p7 * iamp
print p5, iamp1, iamp2, iamp3
; durations for 4 segment envelope
 idur = p3
  idur1 = p8 * idur
  idur2 = p9 * idur
  idur3 = p10 * idur
  idur4 = p3 - (idur1 + idur2 + idur3)
; pitch envelope:
  ipitch =  cpspch(p11)
  ipitch1 = p12 * ipitch
  ipitch2 = p13 * ipitch
  ipitch3 = p14 * ipitch
kamp expseg .01, idur1, iamp1, idur2, iamp2, idur3, iamp3,idur4, .01
kpitch expseg ipitch1, idur1, ipitch1, idur2, ipitch2, idur3, ipitch3, idur4, ipitch3

; --------------- pan between left & right speakers --------
 ipan1 = p16
 ipan2 = p17
 ipan3 = 1. - ipan1
kpan expseg ipan1, idur1, ipan1, idur2, ipan2, idur3, ipan3, idur4, ipan3
asound oscil3  kamp, kpitch, p15
outs  sqrt(kpan) * asound, sqrt(1. - kpan) *  asound
< Score11 file used to create soundfile  ex2-0-2 :
*f1 0 4096 10 0 0 .3 .7 1. .2;
*f2 0 4096 10 0 .4 .05 0 .7 1. 0 .2;
*f3 0 4096 10 0 0 .4 1. 0 0 .5 0 0 .2 0 .05 .03 .02 .01;
i2 0 0 9;
rs 8374;
p3 mx 5. 1. .33 .5 /6. .33 .5 1.;
 du  mx 9 305. 307 303. 304.;
< amplitude envelope
ampfac 1.6;
p5 mx 4. 3000 6000 1000 15000/6. 10000 15000 2000;   < peak amp. 0 to 1. (max amp)
p4 1. .1 .4;      < amp 1 : 0 - 1., * p5
p6 nu 1.;          < amp 2 : 0 - 1., * p5
p7 1. .5 .15;      < amp 3 : 0 - 1., * p5
< durations for 4 part envelopes:
p8  1. .05 .3;      < duration1 (* p3)
p9  1. .05 .25;     < duration2 (* p3)
p10 1. .05 .25;    < duration3 (* p3)
< pitch envelope
p11  no a3 / gs4 / cs3 ; < fundamental pitch
p11 se 9  a3  gs4 cs3 ds4 d5 f1 ; < fundamental pitch
p12 .5 .98 .94  .5 1.02 1.06;  < pitch 1 (*p11) start of note
p13 1.;               < pitch 2 (*p4) middle of note
p14 .5 .98 .94  .5 1.02 1.06; < pitch 3 ((*p4)) end of note

p15 1. 1 3; < audio function number
< pan envelope
p16  1. .99 .7;      < beginning pan location 1. = hard left, 0 = hard right
     1. .4 .6;     < notes 2, 5, 8 etc.
     1. .01 .3;    < notes 3, 6 ,9 etc.
p17  .5 .01 .3 .5 .99 .7; < pan location, middle of note
      1. .7 .9;
      1. .5 .7;
< (end pan location is the opposite of p16)

Note that we have scaled the p5 values produced by score11 for this score with an ampfac value of 1.6 to raise our peak amplitude close to maxamp. The ampfac scalar scales all p5 values of all instruments in our score (unless we reset this amplitude scalar with another ampfac statement later in our input file to score11).

At this point it would be a good idea to try out some of the procedures discussed above in sample orchestras and scores of your own. It is important not only to understand these procedures, but also to be able to incorporate them into your own instrument algorithms, and to see what problems or questions may arise in using these new procedures in step-by-step fashion, before the bulk of accumulated information becomes too great or confusing.

2.6. Linen and Envlpx

[ See the discussion of linen in the Csound 5 Reference manual ]

Next we will take a look at two additional envelope generators. One is simple, and therefore rather limited. The other is powerful, and therefore more cumbersome to use. Let's take the simple unit generator, linen, first. The four arguments to linen are, respectively :

(1) amplitude, or level (2) rise (attack) time (3) duration (almost always p3) and (4) decay time.
                               (peak amplitude) (rise time) (duration) (decay time)
        kamp  linen  10000,     .15  ,   p3 ,  .35
                           (every note in our score will have an identical  envelope)
        kamp  linen  p5 ,        p6 ,    p3 ,   p7
                (this enables us to vary amplitude, attack and decay values for each note)
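The trapezoidal shape that these linen calls produce can be modeled in a few lines of Python (linen_env is our own illustrative name, not the opcode, and this is a rough sketch of the behavior rather than Csound's actual per-sample implementation):

```python
def linen_env(t, peak, rise, dur, decay):
    """Rough model of linen's trapezoid: linear rise from 0 to peak over
    `rise` seconds, a true steady state at peak, then a linear fall back
    to 0 over the final `decay` seconds of the note."""
    if t <= 0 or t >= dur:
        return 0.0
    if t < rise:
        return peak * t / rise
    if t > dur - decay:
        return peak * (dur - t) / decay
    return peak

# kamp linen 10000, .15, p3, .35   with p3 = 2 seconds:
assert linen_env(0.0, 10000, .15, 2.0, .35) == 0.0
assert linen_env(0.15, 10000, .15, 2.0, .35) == 10000
assert linen_env(1.0, 10000, .15, 2.0, .35) == 10000   # steady state
assert linen_env(2.0, 10000, .15, 2.0, .35) == 0.0
```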

In the first example above, the amplitude will start at 0, rise linearly to 10000 over .15 seconds, hold steady at 10000, then fall linearly to 0 over the last .35 seconds. With linen, we really do get a "steady-state" level. However, note that the amplitude (first) argument can be a k-variable - that is, the amplitude value fed to linen can change during a note. Consider the following :

   instr 1
     ktrem  oscil .2, 4/p3, 1  ;tremolo control oscillator
     kamp linen (ktrem * p5)+(.8 * p5), .25, p3, .5 ;amplitude envelope
   audio   oscili  kamp, cpspch(p4), p6  ;audio oscillator
   out audio
   endin

The sound produced by this instrument will include a (fairly wide) tremolo, or alternating increases and decreases in amplitude. Assuming that function 1 (specified in the third argument to the control oscillator) is a sine wave, the control oscillator will produce a stream of numbers at the k-rate that move in the shape of a sine wave from 0 to .2, back to 0, then to -.2, then back to 0. It will produce this shape four times per note ( 4/p3, the frequency input to this oscillator).

The amplitude argument to linen is a complex expression in two parts. The first part directs linen to multiply the output of the tremolo oscillator (ktrem) by p5, producing a tremolo of +/- 20 % of our p5 value. To this tremolo, linen adds 80 % of our p5 value without alteration. Thus the maximum amplitude we will get, when the ktrem oscillator is at the top of its sine wave curve, still will equal p5. linen also provides here a .25 second fade-in at the beginning of each note and a .5 second decay at the end.
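The arithmetic in that amplitude argument is easy to verify. A small Python model (the function name tremolo_amp is ours, not Csound's) shows that the expression (ktrem * p5) + (.8 * p5) oscillates between .6 * p5 and exactly p5:

```python
import math

p5 = 10000  # peak amplitude from the score

def tremolo_amp(t, p3=2.0):
    """Amplitude fed to linen: (ktrem * p5) + (.8 * p5), where ktrem is a
    sine-wave control oscillator of amplitude .2 and frequency 4/p3."""
    ktrem = 0.2 * math.sin(2 * math.pi * (4 / p3) * t)
    return ktrem * p5 + 0.8 * p5

# The maximum, reached at the top of the sine curve, is exactly p5; the
# minimum is .6 * p5 -- a fairly wide +/- 20% tremolo around .8 * p5.
vals = [tremolo_amp(t / 1000.0) for t in range(2000)]
assert abs(max(vals) - p5) < 1e-6
assert abs(min(vals) - 0.6 * p5) < 1e-6
```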

When an expression includes two or more arithmetic or logical operations, as in the first argument to linen above, it is generally a good idea to use parentheses to guarantee that these operations are performed in the correct order. Strictly speaking, the parentheses were not necessary in the example above. The operations would have been done in the correct order even had they been omitted, but they make the code easier to read and help us to avoid mistakes. Every "(" must be balanced by a ")" or your orchestra file will not pass the initial Csound syntax check.

Make certain that you understand this example. It is the first instance of a control signal (kamp) which itself includes a time-varying control signal input (ktrem). Such "stacking" or "banking" of control signals is frequently necessary to produce desired changes in amplitude, pitch, timbre, and other musical qualities.


[ See the discussion of envlpx in the Csound 5 Reference Manual.]

envlpx is among the most powerful of the Csound envelope segment generators, but is also among the most complicated to use, so get some coffee if necessary before reading on. envlpx requires seven (count 'em) input arguments. An eighth argument (ixmod) is optional. Here is an example, with typical values filled in for the seven required arguments:

kenv envlpx  10000,  .15,   p3,    .25,    3,     .7,   .001,  -.5
             amp.    rise   dura-  decay   func-  atss  atdec  [ixmod]
                     time   tion   time    tion
                                           number

Arguments one through four are the same as for linen, respectively determining peak amplitude, rise time, duration (almost always p3) and decay time. The remaining arguments are unique to envlpx :

ifn (function number) : envlpx allows us to create a wide variety of rise shapes. This is at times very useful, since not all acoustic sounds have exponential attacks. However, the shape of the attack must be specified in a function table. Thus, envlpx, like oscili and oscil, requires that any function to be used be defined within our score file. This attack function, however, will be read only once (during the rise time of the envelope), and will increase from 0, or near 0, to 1. This argument wants to know the NUMBER of the function to be read -- function 3 in the example above.

Here are some potentially useful attack shape function definitions:

*f10 0 1025 5 .005 1024 1.; < exponential rise
*f11 0 1025 7 0 1024 1; < linear rise
*f12 0 1025 9 .25 1 0; < 1st quarter of a sine wave
*f13 0 65 8 0 16 0 .2 0 31.8 1 .1 1 ; < 1/2 of a bell curve
*f14 0 1025 7 0 500 .8 100 .33 424 1. ; < linear rise with a spike in middle
*f15 0 1025 5 .005 300 .7 724 1. ; < modified exponential rise
Figure 2.1, included at the end of the printed version of Chapter 2 of this Tutorial, provides a graphical display of these functions.

iatss (attenuation of steady-state) : This is a multiplier for the peak amplitude value. In the example above the amplitude would decrease from the peak value of 10000 to a value of 7000, which would be reached just before the final decay begins. A value of 1.6, by contrast, would cause the amplitude to rise from 10000 (which would no longer be the "peak") to 16000. However, without corresponding changes in timbre, this would not produce a very convincing crescendo.

iatdec (final attenuation multiplier) : This is also a multiplier of peak amplitude, but for the very end of the note, at the conclusion of the decay. Very small values (but not zero, since the decay is exponential) are normal for amplitude envelopes, as in the .001 value in this example.
Why do we need this argument, if it is normally so small? envlpx, like all envelope generators, can be used to create pitch and other types of envelopes as well as amplitude shapes. Thus, there are occasions when we will NOT want this final value to be near zero.

ixmod (optional eighth argument) : Normally, the change between the "peak" level and the level specified by iatss occurs exponentially. If we include a value for ixmod, however, the shape of this change will be modified, as illustrated in the bottom example on page 27 of the Csound manual. Negative values, between -.01 and a maximum of -.95, will cause a progressively more rapid change to the iatss level (useful, perhaps, for a forte-piano), while positive values, from .01 up to .95 (an absolute maximum), will cause the change to occur more slowly.

(As of this writing, a long-standing bug in Csound necessitates that the xmod argument be used only if envlpx is running at the k-rate. If envlpx is running at the a-rate, including any xmod value likely will result in spectacular amplitudes far exceeding maxamp.)

2.7. gen5 and gen7

[ See the discussions of GEN7 and GEN5 in the Csound 5 Reference Manual ]

So, we need to be able to create attack function shapes for envlpx. Function generators gen9 and gen10, which compute audio waveforms, are of no help to us here. We need a control shape, not frequency and amplitude ratios. For this purpose, we would use gen5, which creates a table of numbers tracing exponential changes, or, less often, gen7, which creates tables defining linear changes in level.

Let's assume that we want a simple rise shape, which increases exponentially from near 0 to our peak amplitude. Our table of numbers, then, should rise from near 0 to 1., since each of these table values will be multiplied by our peak amplitude. Such a function might look like this :

* f3 0 65 5 .01 64 1;

The table we are calling function 3 is computed at time 0, has 65 numbers, and is created by gen5 (p-fields one through four). Since we want this function to be read only once per note (during the attack), rather than repetitively by an oscillator, we have included an "extended guard point" in the table. Thus, the table size is a power-of-two-plus-one (65), rather than a simple power-of-two (64). The purpose of the extra table number ("guard point") is this : after envlpx (or some other unit generator) has read to the last number in the table, but still needs values for the last batch of samples within the attack, it will use the last value in the table for these final samples, rather than wrapping around the table and interpolating between the first and last entries.
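The guard-point logic can be sketched in Python (a simplified model for illustration only; real Csound table reads are driven by a per-sample phase accumulator, and read_once is a hypothetical name):

```python
def read_once(table, n_samples):
    """One-shot read through a 65-point (power-of-two-plus-one) table.
    The read index sweeps across the 64 'real' intervals; a fractional
    index interpolates toward the next stored value, and the very last
    samples use the extra 65th value (the extended guard point) instead
    of wrapping around to table[0]."""
    size = len(table) - 1            # 64 usable intervals, 65 stored values
    out = []
    for i in range(n_samples):
        pos = size * i / (n_samples - 1)     # 0 .. 64 inclusive
        idx = int(pos)
        frac = pos - idx
        if idx >= size:                      # final sample: guard point
            out.append(table[size])
        else:
            out.append(table[idx] + frac * (table[idx + 1] - table[idx]))
    return out

table = [i / 64 for i in range(65)]          # a simple linear rise, 0 to 1
env = read_once(table, 100)
assert env[0] == 0.0 and env[-1] == 1.0      # ends exactly on the guard point
```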

The remaining p-fields in the function definition work somewhat like the break-points in an expseg or linseg statement. Instead of durations, however, we use numbers of points within the table to define the distance between values. To illustrate in terms of our example, we start at a value of .01, then rise over 64 points to a value of 1. 64 points covers the entire duration during which the function is read. If we wanted it to rise during half the duration, then come back down to our starting value, the function definition would look like this:

                             |- rise -||-- fall --|
              *f3  0  65  5  .01  32  1  32   .01;

Here, the function starts at a value of .01, moves exponentially over 32 points (half the duration during which the function is read) to a value of 1, then decreases over 32 points to .01. Notice that the sum of the points again totals 64, as it did in our first example, and not 65. Also note that this would not be a usable rise shape for envlpx.
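The way gen5 fills its table can be modeled in a few lines of Python (an approximation for illustration; the exact rounding and normalization details of Csound's GEN5 may differ slightly):

```python
def gen5(segments, start):
    """Rough emulation of GEN5: exponential segments between breakpoints.
    `segments` is a list of (npoints, target) pairs, so that
    f3 0 65 5 .01 32 1 32 .01  becomes  gen5([(32, 1.0), (32, .01)], .01).
    All values must be nonzero, since each segment is exponential."""
    table = [start]
    value = start
    for npoints, target in segments:
        ratio = (target / value) ** (1.0 / npoints)   # constant growth factor
        for _ in range(npoints):
            value *= ratio
            table.append(value)
    return table

tbl = gen5([(32, 1.0), (32, 0.01)], start=0.01)
assert len(tbl) == 65                 # power-of-two-plus-one table
assert abs(tbl[32] - 1.0) < 1e-9      # peak at the midpoint
assert abs(tbl[64] - 0.01) < 1e-9     # back down to the starting value
```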

Since ECMC users normally use score11 to generate our Csound scores, the function definition examples above have included the mandatory score11 asterisk at the beginning of each definition, and semicolons to mark the ends of lines of code.

gen7 works just like gen5, except that its segments are linear rather than exponential. Zero values, while illegal with gen5, are permissible (and common) with gen7. Using gen5 and gen7, we can create arbitrarily complex exponential and linear shapes for use by envlpx, oscillators and other unit generators. Suppose, for example, that we would like an attack shape that includes three preliminary rising and falling spikes before finally reaching its peak. The following envelope would do the trick:

                |- rise -| |- fall -| |- rise -| |- fall -| |- rise -| |- fall -| |- rise -|  
*f1 0 65 5 .01 10 .6   5 .3   15 .8   5 .4  15  .9  4  .5  10  1.;

Although most often used to create control shapes, these two function generators can also be used to create audio waveforms. Here are some common audio waveshapes, similar to those produced by "vintage" analog synthesizers, that we can create with gen7:

  * f1 0 1024 7 -1 1024 1 ; <  sawtooth wave
  * f1 0 1024 7 -1 512 1 512 -1 ;  < triangle wave (odd harmonics)
  * f1 0 1024 7  1 512 1 0  -1 512 -1 ;  < square wave (odd harmonics)
  * f1 0 1024 7  1 104  1 0  -1 920 -1  ;  <  pulse train waveform

Note, in the square and pulse wave examples, that we can specify discontinuities in waveforms by "telling" the function generator to move from one value to another over zero points in the table.

One must be careful when employing "sharp edged" waveforms, like those above, in digital synthesis, however, because these waveforms are not band limited in frequency. Each of the waveforms above includes a great many harmonics, and aliasing may result at sampling rates of 44100 or 48000 when the pitch exceeds A 440 or so. Subsequent filtering would not eliminate these artifacts.
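The aliasing risk is easy to quantify: a partial stays clean only if its frequency lies below half the sampling rate (the Nyquist frequency). A quick Python check (the helper name is ours):

```python
sr = 44100
nyquist = sr / 2

def highest_safe_harmonic(fundamental_hz):
    """Highest harmonic number that stays below the Nyquist frequency
    (sr / 2); partials above it fold back (alias) into the audible band."""
    return int(nyquist // fundamental_hz)

# An ideal sawtooth, square or pulse wave has an unlimited harmonic
# series, so a table-lookup version of it is only alias-free while its
# significant partials remain below Nyquist:
assert highest_safe_harmonic(440.0) == 50    # A 440: 50 harmonics fit
assert highest_safe_harmonic(55.0) == 400    # low A: far more headroom
```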

;  #############################################################
;  soundfile ex2-1 : envlpx example 1      Csound Tutorial
;  #############################################################

; Orchestra file used to create example soundfiles "ex2-1" and "ex2-2" :
; -------------------------------------------------
;Fixed wave form instrument, using envlpx to create an amplitude envelope

instr 1
ipitch = cpspch(p4)      ; convert p4 from pch  to  cps
kenv envlpx p5,  p6,  p3,  p7,  60,  p8,  .01,  p9  ; amplitude envelope
   display kenv , p3        ; show envelope shape on stderr
asignal  oscili   kenv,  ipitch,  p10
out asignal
endin
< Score11 file used to create soundfile  ex2-1 :
< score example ex2-1 : envlpx : various envelope shapes and audio functions
* f1 0 64 7  -1  64  1  ; <  sawtooth wave
* f2 0 64 7  -1  32  1  32  -1 ;  <  triangle wave (odd harmonics)
* f3 0 64 7  1  32  1  0  -1  32  -1 ;  <  square wave (odd harmonics)
* f4 0 64 7  1  4  1   0  -1  60  -1  ;  <  pulse train waveform
< function 5 has harmonics 1,4,7,11,15,18
* f5 0 256 10 1. 0  0  .4  0  0  .65  0  0  0  .3  0  0  0  .18  0  0  .08;
< function 6 has harmonics 3,4,7,8,11,12,15,16
* f6 0 256 10 0 0 .5  .8 0 0  .7  1. 0 0  .3  .4  0 0  .2  .12;

< function 7 has a strange assortment of widely spaced partials
*f7 0 256 9 1. 1. 0 9  .7 0  5. 15 .45 0  23 .3  0 31  .15 0  39 .08 0;
< function 8 has only high harmonics (20 through 28)
* f8 0 256 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 .2 .4 .6 1. .7 .5 .3 .1;

* f60 0 65 5 .005 64 1;   < envlpx rise shape function

i1 0 0 8;
p3 2;
du .95;
p4 no c3;
p5 nu 6000*3/4000;
 < p6 = rise time
p6 nu .5/ .35/  .2 / .1 /  .07 / .03 / .012 / .005; < rise time
p7 mo 14. .5  .1 ;       < decay time
p8 mo 14. .9  .1;        < atss
p9 0;                    < ixmod
p10 nu 1/2/3/4/5/6/7/8;  < audio waveform function


Comment on example ex2-1 : The envelope shapes specified in p5 through p9 become progressively sharper, beginning with fairly lengthy attacks and decays, and ending with very short transients. A different audio timbral function is used for each of the eight notes. The first four waveshapes (functions 1-4) are reminiscent of those employed in analog synthesizers of the 1960s and 1970s, while in the concluding four audio functions (functions 5-8) we specify somewhat more interesting (or at least less common) harmonic spectra. Unit generator display will include a graphical "picture" of the amplitude envelope for each note within the stderr terminal output. This can be a useful diagnostic tool when we do not get the results we anticipate. Once we have the instrument algorithm and a usable range of score p-field values for envlpx working to our satisfaction, we probably would comment out the call to display by placing a semicolon at the beginning of this line.

  soundfile ex2-2 : envlpx example 2      Csound Tutorial
< Score11 file used to create soundfile  ex2-2 :
      < xmod envelope modifications to a fixed envelope shape


* f3 0 512 7 -1 220 -1 36 1 512 1 36 -1;   < square wave, sloped sides
* f60 0 65 5 .005 64 1;

i1 0 0 4;
p3 2;
p4 no fs2;
p5  6000;
p6 .1;             < rise time
p7 .5;             < decay time
p8 .1;             < atss
p9 nu .95/.3/-.3/-.95;  < xmod
p10 3;              < audio waveform function

Comments on example ex2-2 : All of the score p-fields are constants except for the ixmod value, which modifies the slope of the change from p5 ("peak" amplitude) to p8*p5 ("attenuation of steady state"). Note that there is some audible difference in the envelopes, but, in the absence of corresponding timbral changes, these differences are rather subtle.

2.8. Creating Envelopes with Oscillators

In addition to such Csound envelope generators as envlpx and expseg, there is another way that we can create envelope patterns: by using oscillators. What do you think of this?

kenv oscili p5, 1/p3, 5
a1 oscili kenv, cpspch(p4), 100

Here we are using a control oscillator (kenv) to read function number 5 once over the entire duration of each note (more on this below). Function 5 includes both a rise and a fall:

          *f5 0 65 5  .001  10  .8  10  1.  10  .9  34  .001;
                      |------ rise ------||----- decay -----|
p-fields:  1  2 3  4   5    6   7   8   9   10  11  12  13

p-fields 5 through 9 specify a rise shape, p-fields 9 through 13 a (longer, more gradual) decay shape. p-fields 5, 7, 9, 11 and 13 within this table will be multiplied by our p5 amplitude. The envelope created by our control oscillator is then passed as the amplitude input to our audio oscillator (a1).

The frequency argument to the control oscillator kenv, 1/p3, assures that function 5 will be read exactly once per note. If p3 is 4 seconds, the frequency argument to the control oscillator is .25 hertz (1/p3, or, here, 1/4), so that it will take exactly four seconds for oscillator kenv to march through the table.
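This "cycles = frequency times duration" relationship is simple enough to state in code (a trivial Python sketch, with our own helper name):

```python
def table_passes(freq_hz, dur_seconds):
    """Number of times an oscillator cycles through its function table
    during a note: frequency (cycles per second) times duration (seconds)."""
    return freq_hz * dur_seconds

p3 = 4.0
assert table_passes(1 / p3, p3) == 1.0   # kenv oscili p5, 1/p3, 5 : one pass
assert table_passes(3 / p3, p3) == 3.0   # kenv oscil  p5, 3/p3, 5 : three passes
```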

When using an oscillator as an envelope generator, as here, there is an important point to consider. The attack and decay times will always be dependent upon the duration of the note (since they are percentages of the length of the function table, which is being read once). Thus, rise and fall times will vary proportionately with the total durations of various notes. We have no way to specify exact attack or decay times. (If we want to be able to specify exact attack and decay durations, we should use one of the Csound envelope generators, such as expseg or envlpx.) For some applications, however, oscillators can create useful envelopes, especially if we want to read an envelope more than once per note :

kenv oscil p5, 3/p3, 5

This control oscillator would read the envelope function three times per note (3/p3), giving us triplet-like envelope patterns for every note in our score.

The instrument used to create soundfiles ex2-3 (a didactic example) and ex2-4 (musically more interesting) contains a control oscillator that generates an amplitude envelope, and an audio oscillator that reads various audio functions:

  Soundfile examples "ex2-3" and "ex2-4"

Orchestra file used to create these two soundfiles:

; control oscillator (kenv) supplies time envelope to audio oscillator
;    score p-fields:
;    p4 =  pitch, in pch notation,
;    p5 = peak amplitude
;    p6 = function number for control oscillator envelope
;    p7 = function number for audio oscillator waveshape
instr 1

kenv oscili p5, 1/p3, p6    ; control oscillator -- creates amplitude envelope
a1 oscili kenv, cpspch(p4),  p7  ; audio oscillator
out a1
endin


< score11 file used  for "ex2-3" :
< different audio waveshapes and time envelope functions
< Envelope functions :
* f52 0 65 7 0 32 1. 32 0;              < linear pyramid: rise & fall
* f62 0 65 5 .01 32 1. 32 .01;          < exponential pyramid rise & fall
* f21 0 65 5 .001 4 .8 10 1. 10 .7 40 .001;
* f22 0 65 5 .001 10 1. 10 .2 10 .8 10 .1 10 .6 14 .001;

< Audio waveshape functions :
* f11 0 1024 10 1. .8 0 .6 0 .4 0 .2 ;  < fundamental & even harmonics
* f12 0 1024 10 1. 0 .33 0 .13 0 .07;   < fundamental & odd harmonics
* f13 0 64 7 -1 32 -1 0 1. 32 1. ;     < square wave, not band-limited
* f14 0 64 7 -1 30 -1 2 1. 30 1. 2 -1.; < square wave, band-limited

i1 0 0 4;
p3 3.;
p4 no c3;                              < pitch (in pch)
p5 8000;                               < amplitude
p6 nu 52/ 62/ 21/ 22;                  < function number for time envelope
p7 nu 11/ 12/ 13/ 14;                  < function number for audio waveshape
< score11 file used to create  "ex2-4" :  alternating audio & envelope functions :
< Envelope functions :
* f21 0 65 5 .001 4 .8 10 1. 10 .7 40 .001;
* f23 0 65 5 .001 6 1. 58 .001;               < rapid attack, long decay
* f24 0 65 5 .001  12  1.  28  .05  10  .8  24  .001; < 2 attacks per note

< Audio waveshape functions :
* f12 0 512 10 1.  0  .33  0  .13  0  .07;   < fundamental & odd harmonics
* f15 0 512 10 0 1. .8  0  .5  .3  0  .15  .1; < harmonics 2,3,5,6,8,9

i1 0 6;
p3 mx 5. .5 .1/1. 1.;
du mx 5. 1. 2. 4./1. 3;
p4 mx 5. c2 c4 b4 b5/1. ds6;         < pitch (in pch)
p5 mx 5. 2000 4000 12000/1. 13000;   < amplitude
p6 nu 21/23/24;                      < function number for time envelope
p7 nu 12/15;                         < function number for audio waveshape

Comment : No, it won't win a Pulitzer Prize, but we are beginning to progress on our Gradus ad Parnassus.

The important point in all of this is that oscillators can be used in many different ways, to generate various kinds of audio and control signals. Oscillators can cycle through function tables repetitively (generating audio from scratch), once per note (creating envelope control signals, which can be applied to amplitude, pitch, or other musical parameters), or even less than once per note (reading only a portion of a table, if the period of oscillation is greater than the duration of a note). Given enough resourcefulness, we could create a variety of intriguing musical sounds and gestures solely by means of oscillators.

2.9. Reading Soundfiles into Csound with soundin and with diskin or diskin2

Most of the signal processing operations we perform on synthetic waveforms created by oscillators also can be performed on samples read into Csound from existing soundfiles. Csound provides several ways in which we can read in soundfiles. These methods include:

soundin and diskin2

[ See the discussions of soundin and diskin2 (recommended over diskin) in the Csound 5 Reference Manual ]

The soundin opcode has one required and one optional argument. The required argument specifies the directory path and the name of the soundfile to be accessed. An optional second argument indicates a skip time (in seconds) into this soundfile. The diskin and diskin2 unit generators work in similar fashion, but offer additional soundfile processing capabilities that we will consider later. diskin2 can do everything that diskin can do and more, and is among the recommended opcodes to use when we want to stream a soundfile from disk and transpose it.

Within the first (ifilcod) argument to soundin or diskin there are two ways in which one can tell the unit generator which input soundfile(s) to use:

(1) by typing in the full directory path and name of the soundfile, surrounded by double quotes, within the orchestra file, like this:
asound soundin "/snd/allan/monkeysound1" , 1.5
Result: Monophonic soundfile monkeysound1, in my home soundfile directory, will be read into a Csound compile job by soundin. The first 1.5 seconds of this input soundfile will be skipped.

This method has an obvious severe disadvantage. Since the input soundfile argument is a constant within the orchestra file, rather than a variable, this orchestra file can only access a single input soundfile - monkeysound1.

(2) by creating one or more Unix soft link files, called soundin.# (where # is an integer), which "point to" the desired soundfiles. These source soundfiles can be in any soundfile directory, but most often are located either within the user's $SFDIR (current working output soundfile) directory, or else, if defined, within the user's SSDIR "sound sample input" directory.

At the ECMC, such soundfile links are most easily created by means of the local utility sflink -- or, for sflib soundfiles, with the sflinksflib (sflinksfl) variant. Some of you may already have used these commands in working with the sf family of Eastman Csound Library instrument algorithms. See the manual page for sflink for usage details. The link files to three soundfiles in the /sflib/env directory, used in example soundfile ex2-5 which follows, were created by means of the following command line:

sflinksfl wind.low 1 caridle 2 riverlock 3
This command creates three link files within the user's $SFDIR (current working output soundfile directory):
   soundin.1 which points to the soundfile /sflib/env/wind.low  (low pitched blowing wind)
   soundin.2 which points to the soundfile /sflib/env/caridle   (car engine noise)
   soundin.3 which points to the soundfile /sflib/env/riverlock   (gushing water)
In example ex2-5, portions of these three soundfiles are read into Csound by soundin:

;  #############################################################
;  soundfile ex2-5 : soundin            Csound Tutorial
;  #############################################################
Orchestra file used to create this example:
sr= 44100
kr = 4410
ksmps = 10
nchnls = 2

   ;  score p-fields:
   ;  p4 =  soundin.# number {source soundfile} , p6 = skip time
   ;  new amplitude envelope {p5, 7, 8 & 9}
   ;     p5 = "peak" level multiplier
   ;     p7 = rise {fade-in} time
   ;     p8 = decay {fade-out} time
   ;     p9 = "steady-state" level  multiplier
   ;  moving stereo pan {p10 & p11} :
   ;     p10 =  number of left-right pans per note
   ;     p11 = function number for moving pans

instr 1
asig  soundin  p4, p6    ; read in the soundfile
   ; now apply a new envelope to these samples
kamp  expseg  .005 ,p7  , p5  , p3 - (p7 + p8) , p9 , p8 , .005; < amp. envelope
asig = asig * kamp
    ; supply a moving stereo pan between left & right channels
kpan oscili  1. ,  p10/p3 , p11    ; panning control signal
outs  sqrt(kpan) * asig , sqrt(1. - kpan) * asig
endin
Score file used to create "ex2-5" :
  < three panning functions :
*f1  0 65  7  0  32  1.  32 0 ;  < linear rise & fall
*f2  0 129  5  .01  64  1.  64 .01 ;  < exponential rise & fall
*f3  0 1024 9 .5 1. 0;  < 1st half of a sine wave {rise & fall}

i1 0 0 3;   < create 3 output "notes"
p3 4;              < the three notes start at 4 second intervals
du 306. ;          < each output note lasts 6 seconds
p4 nu 1 / 2 / 3 ;  < soundin.# number : all soundfiles are from /sflib/env
                   < 1 = wind.low, 2 = caridle , 3 = riverlock
p5 nu .5 / .9 / .2 ;        < amplitude multiplier
p6 1. .5 2.;               < skip time into soundfiles
p7 nu 2. / 1. / .5 ;       < rise {fade-in} times
p8 nu 2. / 1.2 / 2. ;      < decay {fade-out} times
p9 nu .9 / .6 / .2;        < steady state amplitude multiplier

p10 nu 2. / 8. / 3. ;      < number of left-right pans per note
p11 nu 1 / 2 /  3 ;        < pan function table number


Comments on ex2-5: Two signal processing operations are applied to the three source soundfiles:

(1) First, a new fade-in/fade-out amplitude envelope is applied to the input samples by control signal kamp. Note that since the input samples already are 16-bit integers ranging up to +/-32767, our p5 and p9 amplitude arguments are decimal multipliers for these samples.
(2) Although the input soundfiles are monophonic, our orchestra is stereo. Control oscillator signal kpan applies a moving stereo pan to each audio signal. The wind.low sound is panned twice by means of a symmetrical linear function (f1), resulting in a smooth oscillation between the left and right speakers. The caridle soundfile, by contrast, is panned more rapidly (eight times) according to an accelerating/decelerating exponential pyramid shape (f2), resulting in a pulsating output. The riverlock soundfile is panned three times following the shape of the first (positive) half of a sine wave (note that the only "harmonic" we create in f3 is .5, which will generate one half of a sine wave):
    *f3   0  1024  9  .5  1  0;  < 1st half of a sine wave {rise & fall}
This also produces uneven movement (alternately speeding up and slowing down) between the speakers.
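Why a lone "harmonic" of .5 in that GEN9 call yields half a sine cycle can be verified directly in Python (a model of the table GEN9 would build, ignoring normalization details):

```python
import math

# GEN9 lets us specify non-integer "partials". A single entry with
# partial number .5, strength 1 and phase 0 traces sin(2*pi*.5*x) --
# i.e. sin(pi*x), half a sine cycle -- across the table, which is
# exactly the rise-and-fall pan shape used for f3 above.
size = 1024
f3 = [math.sin(math.pi * i / size) for i in range(size)]

assert f3[0] == 0.0                        # starts at 0
assert abs(f3[size // 2] - 1.0) < 1e-12    # peaks at 1 in mid-table
assert all(v >= 0.0 for v in f3)           # never goes negative
```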

Note in the Csound reference manual that the soundfiles accessed by soundin or diskin can be mono, stereo or quad. For stereo soundfiles, the left and right channel inputs are retained as separate audio signals within Csound, and each requires a soundin output name. Example:

aleft ,  aright    soundin    3 
Result: The samples from the stereo soundfile associated with link file soundin.3 are read into the Csound compile job by soundin. The left channel input becomes audio signal aleft, and the right channel samples become signal aright, and these two audio signals can be processed independently within our Csound orchestra. The nchnls argument in our orchestra header should be set to 2, and unit generator outs (rather than out), should be used (unless, of course, we subsequently mix aleft and aright down to a mono output).
The diskin2 unit generator offers several soundfile processing options not available with the simpler soundin, including the abilities to transpose the pitch of input soundfiles, to wrap around (loop) the input soundfile, and to perform high quality sample rate conversion, so that, for example, you can read a 44.1k input soundfile into an orchestra that is running at 96k. An additional parameter argument, labeled kpitch in the Csound Reference Manual, controls pitch shifting. (Example algorithm ex3-6 in the next chapter provides an alternative way to skip into a soundfile and a way to read it backwards.)

To add pitch transposition capabilities to the ex2-5 instrument algorithm we could substitute unit generator diskin2 for soundin in our orchestra file, and add an additional p-field (p12, the next available p-field) to our score to control pitch transposition:

     Orchestra file: substitute these 2 lines:
   asig  diskin2  p4, p12, p6    ; read in & transpose soundfiles
        for this line:
   asig  soundin  p4, p6    ; read in soundfiles
     Score11 input file: add this line:
   p12   nu  1. / 1.059 / .75;  < pitch transposition ratio 
Result: The first input soundfile will be untransposed. The second input soundfile will be transposed up a semitone (a transposition ratio of 1.059), and the duration of the output samples will be only .944 (1./1.059) that of the original sound. The third input soundfile will be transposed down a perfect fourth, and its output duration will be 1.333 (1./.75) times the input duration.
Note: For a table of equal tempered pitch transposition ratios like those used in p12 above, consult the ecmchelp file pitchratios.

Some notes on pitch shifting:
diskin2, like most pitch shifting algorithms, performs pitch transposition by resampling the input samples, in much the same manner that oscillators resample the values within a function table to produce different pitches. diskin2 gives users a choice of several interpolation methods (the iwsize argument) to use when pitch transposition is performed.

Pitch shifting by resampling always changes not only the pitch of the sound, but also its duration, decreasing the duration for upward transpositions (since some of the input samples are skipped in order to produce more cycles per second) and increasing the duration for downward pitch shifts (where additional samples are added to "stretch out" the waveform):

output_duration  =  (1. / pitch_shift_ratio)  *  input_duration
We must take these durational changes into account when setting the p3 output durations for each "note" (or "soundfile event") in our score file, increasing our p3 durational values for soundfiles that are transposed down so that these sounds are not truncated before diskin reaches the end of the input sample stream.

In the non-looping mode, diskin (and also loscil) will output zeros when they reach the end of an input sample stream. Thus, we do not need to compute the exact output duration for each note in our score file, but merely need to make sure that the p3 output durations we specify will be sufficiently long to hold all of the output samples computed by diskin or loscil (unless, of course, we do not want to read in a complete soundfile, but only a portion of it).

Example: Assume that we want to read in a complete 4 second soundfile five times, with the following pitch shifts:

p12  nu  1.059 / .5 / 1.122 / .75 / .89 ; < pitch shift ratios
If we wish, we can merely set the p3 output durations for all five "notes" to 8 seconds
du    308;
and realize that our output soundfile will include some silence at the end. However, vibratos, tremolos, note attack and decay times, and all other time-varying elements within the sound still will be altered by the durational changes. (We will propose a more elegant solution to the problem of setting output durations for transposed soundfiles within example ex3-6.)
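The durational bookkeeping above follows directly from the formula output_duration = (1. / pitch_shift_ratio) * input_duration. A short Python check using the five p12 ratios from this example (the helper name is ours):

```python
def output_duration(pitch_shift_ratio, input_duration):
    """output_duration = (1. / pitch_shift_ratio) * input_duration"""
    return input_duration / pitch_shift_ratio

input_dur = 4.0                          # the 4 second source soundfile
ratios = [1.059, .5, 1.122, .75, .89]    # the p12 pitch shift ratios
durations = [output_duration(r, input_dur) for r in ratios]

# The longest output comes from the octave-down shift (.5): 8 seconds.
# That is why a blanket p3 of 8 seconds is safe for all five notes.
assert max(durations) == 8.0
assert abs(durations[0] - 3.777) < 0.001   # up a semitone: slightly shorter
```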

Formants: When we pitch shift a sound by resampling, we also will be shifting the formants -- the resonant (emphasized) frequency bands produced by many acoustic sound sources -- by the same transposition interval, and thus changing the timbre, or apparent size, of the sound source. For unpitched or quasi-pitched idiophones and membranophones, and for many other types of percussive or environmental sounds, the timbral change often will be acceptable. However, for sound sources such as the human voice, which contains prominent formants, pitch shifting by more than a minor third or so will result in the "munchkin effect" (for upward transpositions) or the "sick cow effect" (for downward pitch shifts).

To change the pitch of a sound without changing its duration, or vice versa, one must employ some technique other than resampling. Most often, this is achieved either by analysis and resynthesis techniques (such as phase vocoding) or by granular sampling techniques.

A final note: Do not confuse pitch shifting resampling algorithms with sample rate conversion resampling algorithms, which change the sampling rate of an input sample stream but do not alter the pitch or duration.

Construct a few new orchestra files, similar to those in this chapter, which make use of some combination of unit generators line, expon, linseg, expseg and envlpx (using gen5 and gen7 to create different attack shapes for envlpx), as well as control oscillators. Use oscillators to create audio signals in some of your instruments, and soundfiles (read into Csound with soundin or diskin) as signal sources in other instruments. Create some scores for these instrument algorithms, then compile soundfiles with Csound, listen to the results, and improve your orchestra and score files until you begin to get something that resembles music (broadly - but not too broadly - defined).

Eastman Csound Tutorial: End of Chapter 2

TOP of this chapter -- NEXT CHAPTER (Chapter 3) -- Table of Contents CHAPTER 1 -- CHAPTER 2 -- CHAPTER 3 -- CHAPTER 4 -- CHAPTER 5 -- CHAPTER 6 APPENDIX 1 -- APPENDIX 2