Eastman Csound Tutorial

Chapter 5
FILTERS ; GLOBAL INSTRUMENTS

By this point, you should be able to skate your way through the Csound Reference manual discussions and orc/sco examples of additional unit generators, score file function generators and other resources not covered in this tutorial, and, after some initial head-scratching, come to a working understanding of how to use these resources. From now on, most of our discussions will summarize major points and pitfalls, and will illustrate primarily by example and commentary rather than by point-by-point exegeses of instrument algorithms. However, we will take a somewhat more detailed look at a few concepts, such as filter response curves, because a basic understanding of these concepts is necessary in order to use certain unit generators effectively.

5.1. Filters

This chapter deals primarily with filters - computer algorithms or hardware circuits that attenuate certain frequency bands while passing or even emphasizing other bands. Among the most common types of filters are:

low pass - passes low frequencies, attenuates higher frequencies
high pass - passes high frequencies, attenuates lower frequencies
band pass - passes a band (limited range) of frequencies, and attenuates frequencies above and below this band
band reject (or notch) - attenuates a particular frequency band, but passes frequencies above and below this band

Among the most important characteristics of most filters is the order of the filter, which indicates the sharpness of the filtering, or the slope of the rolloff curve. (The order of a filter indicates the order of the differential equation -- or, with digital filters, the difference equation -- used to create the filter.)

The higher the order, the sharper the filtering. A first order filter rolls off at 6 dB per octave, a second order filter at 12 dB per octave, a third order filter at 18 dB per octave, and so on. Filter orders between one and four are common, but higher orders (e.g. 6 or 8) occasionally are used for sharper filtering.

Filter order is closely related to the Q ("quality") of a filter, which is another measure of the sharpness of the filtering; the order in most cases equals the number of poles (and zeros) in the filter's response curve. See the discussion of filters in the Roads Computer Music Tutorial or in some other filter primer for more information on these concepts.

A low pass filter applies increasing attenuation to frequencies above some point, until "near-total suppression" (generally defined as 60 dB attenuation) is reached. A very sharp low pass filter (such as the smoothing filters of high quality digital-to-analog and analog-to-digital converters) may produce total elimination of frequencies 1/3 of an octave above the cutoff point. In a low pass filter with a more gradual roll-off slope, on the other hand, the difference between the cutoff frequency and the frequency where total suppression is reached might be several octaves.

Filters also are the basic building blocks of some more complex types of audio circuits and digital unit generators, such as many types of reverberators and some types of delay lines and physical modeling constructs that emulate vibrating physical systems such as bowed or plucked strings. Filtering generally introduces a phase shift and also a time delay in the audio signal. Conversely, combining two or more delayed versions of a signal always results in some filtering (partial or complete elimination of some frequencies, and reinforcement of other frequencies). Later in this chapter we will consider comb and alpass, and ways in which these delay line filters can be combined to create reverberators.

5.2. Low pass and High pass filters

[ See the discussions of tone, tonex, atone and atonex in the Csound reference manual.]

tone and atone are, respectively, Csound implementations of first order digital low-pass and high-pass filters. Opcodes tonex and atonex also are implementations of low-pass (tonex) and high-pass (atonex) filters, but enable us to vary the order of the filter. tonex and atonex can do everything that tone and atone can do and more, so I recommend that you generally use tonex and atonex (rather than tone and atone) because of the added flexibility they provide.

tone and atone have two required arguments and one optional argument that is rarely used; tonex and atonex include the same three arguments, but also have a fourth optional argument that determines the order of the filter:

ares tone   asig, khp               [, iskip]
ares tonex  asig, khp [, inumlayer] [, iskip]
ares atone  asig, khp               [, iskip]
ares atonex asig, khp [, inumlayer] [, iskip]
(1) the audio rate asig argument specifies the input audio signal to be filtered;
(2) khp specifies the half-power point, or cutoff frequency, in hertz, which determines the sharpness of the filter rolloff curve. Note that this is a k-rate variable, and so can be varied within a note by a control signal to alter the timbral brightness of a sound.
(3) the optional iskip argument is rarely used and generally is left blank. (If set to any non-zero number it specifies that the input to the filter not be initialized to 0, so that the filter will already have some input signal present at the start of a note -- possibly from a previous "slurred" note. Don't worry about this for now.)

tonex and atonex include the additional optional third argument inumlayer, which specifies the "number of layers in," or order of, the filter. If inumlayer is set to 1 the result is a first order filter identical to tone. The default value, if this argument is set to 0 or left blank, is 4 (a fourth order filter). Generally I prefer to re-set this default to "1" (a first order filter).
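For example, the fragment below (a minimal sketch; the white noise source and the 800 hertz cutoff are arbitrary choices, not taken from any of the example instruments) runs the same input through tonex at two different orders:

 asig rand  10000          ; any input signal will do; here, white noise
 alp1 tonex asig, 800, 1   ; first order low-pass, half-power point at 800 hertz
 alp4 tonex asig, 800, 4   ; same cutoff, but a much sharper fourth order rolloff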

tone and tonex progressively attenuate all frequencies above 0 hertz, while atone and atonex apply progressively greater attenuation to all frequencies below the Nyquist frequency (1/2 the sampling rate). At the half-power point, or cutoff frequency, an input frequency component is reduced by 3 dB (a 50 % reduction in power, which is not the same as a 50 % reduction in amplitude), and the rolloff curve is 6 dB per octave -- not very sharp, as can be seen in the following table.

tone                   0 Hz      H.P.    2*H.P.    3*H.P.    4*H.P.    8*H.P.   16*H.P.   32*H.P.
atone                Nyquist     H.P.   .5*H.P.  1/3*H.P.  .25*H.P.  1/8*H.P. 1/16*H.P. 1/32*H.P.

dB attenuation           0        -3       -6        -9       -12       -18       -24       -30
percent of original
amplitude remaining      1.      .707      .5       .355      .25       .125      .063      .032
output amplitude       32000    22624    16000     11360      8000      4000      2016      1024

Thus, if we assume an H.P. (half-power) point of 200 hertz, tone will reduce a frequency of 400 hertz (2 * H.P.) by 6 dB, to about 50 % of its raw unfiltered amplitude level. A sine tone at 400 hertz with an original amplitude of 32000 would have an amplitude of about 16000 after being run through this filter. A frequency of 6400 hertz (32 * the 200 hertz H.P. point) would be attenuated by 30 dB, to about .032 of its original amplitude. If the original amplitude of this 6400 hertz sine tone was 32000, the output amplitude would be about 1024.

Similarly, with a 1000 hertz half power point, atone (and atonex, with its inumlayer argument set to "1") will attenuate a frequency of 333 hertz (1/3 * H.P.) by 9 dB, so that only about 36 % of the original amplitude remains. An input sine tone with an amplitude of 32000 would have an output amplitude of around 11360 after being run through this filter.
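If you want to double-check these decibel-to-amplitude conversions, Csound's ampdb() value converter will do the arithmetic for you. The lines below are only a sketch, to be placed inside any instrument; the variable names are arbitrary:

 i3db  = ampdb(-3)           ; approximately .708
 i9db  = ampdb(-9)           ; approximately .355
 i30db = ampdb(-30)          ; approximately .032
 print i3db, i9db, i30db     ; print these amplitude ratios to the terminal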

Running a signal through tone or atone (or tonex or atonex) will always cause some loss in amplitude. The greater the effect of the filtering, the greater the amplitude reduction.

The following instrument allows us to apply either a low-pass or high-pass filter to input soundfiles, and to vary the half-power point with a control signal. Three score11 input files, ex5-1-1, ex5-1-2 and ex5-1-3, are provided to demonstrate different capabilities of the instrument. In all three scores, sflib soundfile /sflib/perc/tam.wav, read into Csound by diskin2, provides the input sound to the filter. To enable diskin2 to find and read this soundfile, we create a link file called soundin.5 with the shell command

sflinksfl tam 5
and then place a "5" in p4 of our score files. Example score ex5-1-1, presented first, calls for first order low-pass filtering of the tamtam.

In fact, this chapter includes several examples that read soundfiles into Csound for processing with diskin2, and thus will require that several link files be created if you want to compile these orc/sco examples (or, more likely, modified versions of them) yourself with Csound. Rather than create these links one at a time, as needed for each orc/sco example, you can create all of the link files needed to run all of the diskin2 and soundin examples in all of the chapters of this tutorial at once by typing: mktutsflinks
in a shell window. When you no longer need these link files you can remove all of them by typing: rmtutsflinks
in a shell window, or else typing: rmsf soundin*


;  #############################################################
;  Orchestra file ex5-1  : Tonex/Atonex     Eastman Csound Tutorial
;  orchestra file used for sflib soundfile examples ex5-1-1.wav,
;  ex5-1-2.wav and ex5-1-3.wav
; variable order low pass or high pass filter with time-varying filter cutoff
;  #############################################################
1 nchnls=1 

2 ; mono soundfile input: 
3 ; p4  = soundin. number  
4 ; p5 = ampfac  ; p6 = pitch transposition ratio 
5 ; p7 = skip from front of soundfile 
6 ; p8 = exponential fade in time ; p9 = exponential fade out time 
7 ; p10 through p14: p10 = filt. type (0 = no filter, 1 = lo-pass, 2 = hi-pass) 
8 ; p11 = 1st half-power point, p12 = 2nd h.p. point 
9 ; p13 = rate of change between p11 & p12; p14= func. for change 
10 ; p15 = filter order 

11 instr 39 
12 idur = p3 ; duration of note 
13 isfnum = p4 ; soundfile input 
14 itranspratio = (p6 = 0 ? 1. : p6) ; pitch transposition ratio 
15 iskip = p7 ; skip off front of soundfile 
16 iampfac  = (p5 =0?1:p5 ) ; amplitude multiplier     
17 iorder= (p15 = 0 ? 1 : p15) ; order (sharpness) of low pass filter 
18 ainput  diskin2 isfnum, itranspratio , iskip 

19 ; amplitude envelope --------------------------------------
20 ; fade-in & fade-out defaults & checks:
21 ip8  = (p8 =0?.001:p8 )
22 ip9  = (p9 =0?.001:p9 )
23 ip8 = (p8 < 0 ? abs(p8) * idur : ip8)
24 ip9 = (p9 < 0 ? abs(p9) * idur : ip9)
25 kamp expseg .01,ip8 ,iampfac,idur-(ip8 +ip9 ),iampfac,ip9 ,.01
26 ainput = ainput*kamp

27 ; filtering: --------------------------------------
28   irate = (p13=0? 1/p3 : p13)
29   kfiltenv oscili p12-p11,irate,p14
30   khp = kfiltenv + p11     ; changing half-power point

31 if (p10 = 1) then
32    aoutput tonex ainput, khp, iorder ; lo-pass filter
33 elseif (p10 = 2) then
34    aoutput atonex ainput, khp, iorder ; hi-pass filter
35 elseif (p10 = 0) then
36    aoutput = ainput  ; no filtering
37 else  ; user error; invalid p10 argument given (p10 must be 0, 1 or 2)
38    print p2, p10
39    printks "ERROR. p10 must be 0, 1 or 2. Aborting this note.\n", 1
40    turnoff
41 endif
42   ; display khp, p3 ; remove comment to display time-varying h.p. point

43 out aoutput
44 endin
  -----------------------------------------------------------
 < ECMC Csound Library Tutorial score11 input file >>  ex5-1-1 << :
< This score file is used both with orchestra file ex5-1 to create soundfile
< /sflib/x/ex5-1-1.wav and also with orchestra file ex5-2 to create soundfile
< /sflib/x/ex5-2-1.wav
* f1 0 65 7 0 64 1; < linear ramp from p11 to p12
* f2 0 65 5 .005 64 1; < expo. change from p11 to p12
* f3 0 1025 19 .5 1 0 0; < first half of sine wave (positive portion only)
* f4 0 65 7 0 32 1. 32 0; < linear pyramid, p11 to p12 to p11
* f5 0 65 5 .01 32 1. 32 .01; < expo. pyramid, p11 to p12 to p11
* f6 0 1024 19 1. .5 0 .5;  < unipolar (positive only) sine wave

i39 0 0 6;              < mono input, mono output
p3 3.5;                                                     
du 303;                                                                   
p4 5;    < soundin.# : soundin.5 points to /sflib/perc/tam.wav 
p5 .9;                         < amp. multiplier
p6                             < pitch transposition ratio (0 = 1)
p7 .5;                         < duration skipped from front of sf
p8 .2;                                 < fade in time
p9 .2;                                 < fade out time
< "tonex/atonex" p-fields, p10 through p14 : -------------------------
p10 1;            < filter type: 1 = lo-pass, 2 = hi-pass, 0 = no filt.
p11 nu  50*5/   200;                < 1st half-power point
p12 nu 4000*5/   900;               < 2nd  "  "     "
   < p13 = rate of change btw p11 & p12 (0 = 1/p3)
p13 nu  0  * 5 / 4.;          
p14 nu  1 / 2 / 3 / 4 / 5/ 6;    < function for change
p15 1;                           < order of filter
end;
---------------------------------------------------------


Lines 12 through 17 of our instrument set initialization variables from several score p-fields. On line 18 diskin2 reads in the input soundfile specified in p4 of our score. In this case, p4 is set to 5, pointing to the link file soundin.5, which in turn is a link we have created before running this Csound job that points to the soundfile /sflib/perc/tam.wav.

An envelope to vary the frequency cutoff (half-power point) of the filter is created on line 29. An oscillator creates the control signal kfiltenv to vary the filter's frequency cutoff between the values specified in p11 and p12 according to the waveshape of the function specified in p14.

p10 is a flag that determines whether a low-pass or hi-pass filter is to be used. If p10 is set to 1, then a low pass filter is applied on line 32; if p10 is set to "2", then high-pass filtering is applied to the audio signal on line 34; if p10 is set to 0 or left blank, filtering is bypassed (see line 36, which sets the output signal to be the same as the input signal). If any number other than 0, 1 or 2 is placed in p10 the instrument prints an error message and aborts the note (lines 38 through 40). Score file ex5-1-1 sets p10 to 1 for all 6 notes, so low-pass filtering is employed.

  1. Note 1: The filter cutoff moves from 50 hertz (p11) at the beginning of the "note" to 4000 hertz (p12) at the end of the note along a linear ramp defined by function number 1, which is specified in p14.
  2. Note 2: The filter cutoff moves from 50 hz. to 4000 hz. exponentially (f2), and the change in brightness is more noticeable.
  3. Note 3: The filter cutoff moves from 50 hz. to 4000 hz. and then back to 50 hz. following the shape of the first half of a sine wave (f3).
  4. Note 4: The filter cutoff moves from 50 hz. to 4000 hz. and then back to 50 hz. along a linear pyramid (f4).
  5. Note 5: The filter cutoff moves from 50 hz. to 4000 hz. and then back to 50 hz. exponentially (f5), and the change is more apparent.
  6. Note 6: The filter cutoff varies between 200 hertz and 900 hertz 4 times per second sinusoidally (f6).

Our second score for this instrument, ex5-1-2, is identical to the preceding score in every respect except that the filter order now is set to "4" in p15. The audible effect of the filter is much more pronounced with the sharper filter rolloffs that result from this higher filter order.


    < ECMC Csound Library Tutorial score11 input file >>  ex5-1-2 << :
    <  4th order low-pass filter (tonex) with time-varying filter cutoff
< This score file is used both with orchestra file ex5-1 to create soundfile
< /sflib/x/ex5-1-2.wav and also with orchestra file ex5-2 to create soundfile
< /sflib/x/ex5-2-2.wav
   * f1 0 65 7 0 64 1; < linear ramp from p11 to p12
   * f2 0 65 5 .005 64 1; < expo. change from p11 to p12
   * f3 0 1025 19 .5 1 0 0; < first half of sine wave (positive portion only)
   * f4 0 65 7 0 32 1. 32 0; < linear pyramid, p11 to p12 to p11
   * f5 0 65 5 .01 32 1. 32 .01; < expo. pyramid, p11 to p12 to p11
   < (the i-block and p-fields p3 through p14 are identical to score ex5-1-1 above)
   p15 4;    < order of filter
  end;
----------------------------------------------------

Note that varying the spectral brightness of a "note" or sound by means of a filter envelope, as in these examples, often has the effect of creating a crescendo or diminuendo, since our psychoacoustical perception of "loudness" depends in large part on the amount of high frequency energy contained in the sound.

Our final score for this instrument, ex5-1-3 is identical to the preceding score ex5-1-2 in every respect except for p10, which now specifies that the high-pass rather than low-pass filter be used:


  < ECMC Csound Library Tutorial score11 input file >>  ex5-1-3 << :
    <  4th order high pass filter (tonex) with time-varying filter cutoff
< This score file is used both with orchestra file ex5-1 to create soundfile
< /sflib/x/ex5-1-3.wav and also with orchestra file ex5-2 to create soundfile
< /sflib/x/ex5-2-3.wav
   * f1 0 65 7 0 64 1; < linear ramp from p11 to p12
   * f2 0 65 5 .005 64 1; < expo. change from p11 to p12
   * f3 0 1025 19 .5 1 0 0; < first half of sine wave (positive portion only)
   * f4 0 65 7 0 32 1. 32 0; < linear pyramid, p11 to p12 to p11
   * f5 0 65 5 .01 32 1. 32 .01; < exponential pyramid, p11 to p12 to p11
   * f6 0 1024 19 1. .5 0 .5;  < unipolar (positive only) sine wave

   i39 0 0 6;              < mono input, mono output
   p3 3.5;                                                     
   du 303;                                                                   
   p4 5;    < soundin.# : soundin.5 points to /sflib/perc/tam.wav 
   p5 .9;                         < amp. multiplier
   p6                             < pitch transposition ratio (0 = 1)
   p7 .5;                         < duration skipped from front of sf
   p8 .2;                                 < fade in time
   p9 .2;                                 < fade out time
   < "tonex/atonex" p-fields, p10 through p14 : -------------------------
  p10 2;          < filter type: 1 = lo-pass, 2 = hi-pass, 0 = no filt.
  p11 nu  50*5/   200;                < 1st half-power point
  p12 nu 4000*5/   900;               < 2nd  "  "     "
      < p13 = rate of change btw p11 & p12 (0 = 1/p3)
  p13 nu  0  * 5 / 4.;          
  p14 nu  1 / 2 / 3 / 4 / 5/ 6;    < function for change
   p15 4;                           < order of filter
   end;
----------------------------------------------------

5.3. balance, rms and gain

[ See the discussion of balance in the Csound reference manual. You might also want to consult the Reference manual discussions of rms and of gain.]

Filtering generally will result in a loss of amplitude, and in most cases the sharper the filtering the greater the attenuation in amplitude. Sometimes this is desired, but often it is not. A succession of sounds with similar input amplitudes may have widely varying output amplitudes after filtering, and thus not sound "balanced," due to varying amounts of energy loss caused by the filtering. The unit generators rms, gain, and, especially, balance are useful in such situations.

rms is an envelope follower. Its output (which must be at the k-rate) is a control signal that tracks, or "follows," the root-mean-squared (average) amplitude level of some audio signal.
gain modifies the root-mean-squared amplitude of an audio signal, providing either attenuation or an increase, so that the audio signal matches (roughly) the level of a control signal.
balance is a combination of rms and gain. It tracks the amplitude envelope of a control (or "comparison") audio signal (specified in the second argument), then modifies the sample values of another audio signal (given in the first argument) to match the average rms level of the control (comparator) signal.
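In other words, a line such as "aout balance asig, acomp" behaves roughly like the following two-step combination of rms and gain (a sketch only; the variable names are arbitrary):

 krms rms  acomp         ; track the average (rms) amplitude of the comparator signal
 aout gain asig, krms    ; rescale asig so that its rms level matches that of acomp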

Orchestra file example ex5-2 is identical to orchestra file ex5-1 above, except that we have added a balance opcode after the low-pass and high-pass filters, "balancing" the amplitude of the filtered output signal against the original unfiltered input signal to restore amplitude that was lost in the filtering process.


;  #############################################################
;  Orchestra file ex5-2  : Tonex/Atonex     Eastman Csound Tutorial
;  orchestra file used for sflib soundfile examples ex5-2-1.wav,
;  ex5-2-2.wav and ex5-2-3.wav
; unit generator "balance" added to ex5-1 orchestra
;  #############################################################
nchnls=1

; mono soundfile input:
; p4  = soundin. number 
; p5 = ampfac  ; p6 = pitch transposition ratio
; p7 = skip from front of soundfile
; p8 = exponential fade in time ; p9 = exponential fade out time
; p10 through p14: p10 = filt. type (0 = no filter, 1 = lo-pass, 2 = hi-pass)
; p11 = 1st half-power point, p12 = 2nd h.p. point
; p13 = rate of change between p11 & p12; p14= func. for change
; p15 = filter order

instr 39
idur = p3 ; duration of note
isfnum = p4 ; soundfile input
itranspratio = (p6 = 0 ? 1. : p6) ; pitch transposition ratio
iskip = p7 ; skip off front of soundfile
iampfac  = (p5 =0?1:p5 ) ; amplitude multiplier    
iorder= (p15 = 0 ? 1 : p15) ; order (sharpness) of low pass filter

ainput  diskin2 isfnum, itranspratio , iskip 

; amplitude envelope --------------------------------------
; fade-in & fade-out defaults & checks:
ip8  = (p8 =0?.001:p8 )
ip9  = (p9 =0?.001:p9 )
ip8 = (p8 < 0 ? abs(p8) * idur : ip8)
ip9 = (p9 < 0 ? abs(p9) * idur : ip9)
kamp expseg .01,ip8 ,iampfac,idur-(ip8 +ip9 ),iampfac,ip9 ,.01
ainput = ainput*kamp

; low pass filtering: --------------------------------------
  irate = (p13=0? 1/p3 : p13)
  kfiltenv oscili p12-p11,irate,p14
  khp = kfiltenv + p11     ; changing half-power point
if (p10 = 1) then
   aoutput tonex ainput, khp, iorder ; lo-pass filter
   aoutput balance aoutput, ainput ; balance output against input
elseif (p10 = 2) then
   aoutput atonex ainput, khp, iorder ; hi-pass filter
   aoutput balance aoutput, ainput ; balance output against input
elseif (p10 = 0) then
   aoutput = ainput  ; no filtering
else  ; user error; invalid p10 argument given 
   print p2, p10
   printks "ERROR. p10 must be 0, 1 or 2 . Aborting this note.\n", 1
   turnoff
endif
  ; display khp, p3 ; remove comment to display time-varying h.p. point
out aoutput
endin
-------------------------------------------------------

To "play" this modified orchestra file we can use the same three scores used with orchestra file ex5-1 --- score file examples ex5-1-1, ex5-1-2 and ex5-1-3 above. Soundfiles using these three scores with orchestra file ex5-2 have been compiled in sflib/x:

I suggest that you open companion pairs of "balanced" and "unbalanced" soundfiles together in a soundfile editor such as rezound, so that you can look at the waveforms and amplitudes while listening to and comparing these soundfiles:

rezound sflib/x/ex5-1-2.wav sflib/x/ex5-2-2.wav
This will allow you to compare fourth order low-pass filtering without amplitude balancing (in soundfile sflib/x/ex5-1-2.wav) and with amplitude restoration (in soundfile sflib/x/ex5-2-2.wav).
rezound sflib/x/ex5-1-3.wav sflib/x/ex5-2-3.wav
This will allow you to compare 4th order high-pass filtering without (ex5-1-3.wav) and with (ex5-2-3.wav) amplitude balancing.

Complementary filters and controlling timbral brightness

Csound's first order low-pass and high-pass filters (tone and atone, or, as in the instrument above, tonex and atonex with the inumlayer argument set to "1") are complementary filters. Consider the following series of operations:

ainput diskin2 isfnum, itranspratio , iskip
iorder = 1
alo tonex ainput, khp, iorder ; lo-pass filter
ahi atonex ainput, khp, iorder ; hi-pass filter
aoutput = alo + ahi

Here we send our tamtam in parallel through both a first order low-pass filter (tonex) and a first order high-pass filter (atonex), and then add the low-pass and high-pass outputs together to create our output signal (aoutput). The resulting signal aoutput will be identical in spectrum and amplitude to the original signal ainput.

The complementary nature of these two first order filters can be very useful in varying the timbral "brightness" (and, often, the perceived "loudness" and "distance" as well) of sound sources. Now consider this example:

ainput diskin2 isfnum, itranspratio , iskip
iorder = p7
ihp= (p8 < 13. ? cpspch(p8) : p8) ; set half power point for both filters in p8 using "notes" or cps
alo tonex ainput, ihp, iorder ; lo-pass filter
ahi atonex ainput, ihp, iorder ; hi-pass filter
ibright = p9 ; p9 sets the timbral brightness and should be between 0 and 1.
aoutput = (ibright * ahi) + ((1. - ibright) * alo) ; determine mix of low-pass & high-pass
aoutput balance aoutput, ainput ; balance output amplitude against input
out aoutput

And in our score file:

p7 1 ; < set filter order to first order
p8 no c4; < set half-power point to 261 hertz (middle C)
p9 nu 0/.25/.5/.75/1.; < brightness, 0 (least bright) to 1. (brightest)

Since 5 values have been provided in p9, we can infer that five soundfiles are read in by diskin2 (or, perhaps, the same source soundfile is read in five times). The input soundfile is sent through complementary first order low-pass and high-pass filters. (p7, which determines the order of the filters, is set to "1".) The frequency cutoff of the filters is set to 261 hertz in p8. In setting the half power point for complementary low-pass and high-pass filters, it often works best to set this value to approximately the fundamental frequency in the input soundfile, or perhaps to an octave above this frequency. If the input soundfile has no fundamental frequency (e.g. a cymbal strike), you will need to set the half power point heuristically, by trial-and-error.

The variable ibright, derived from p9 (which should range between 0 and 1.0), determines the percentage of the high-pass version to be used when we mix these signals together. The complement of this value (1.0 minus ibright) determines the percentage of the low-pass version to be used. After the high-pass and low-pass versions are mixed, the output signal is balanced in amplitude against the original audio input signal.

For the first "note", we take only the output of the low-pass filter (p9 = 0). For the second, third, fourth and fifth input sounds, we mix an increasing percentage of the high-pass variant, which should lead to increasing timbral brightness in these notes.

What if we want variations in timbral brightness (and thus probably also in perceived loudness) within a single "note," to create crescendos and diminuendos? In this case we would need to change the i-rate variable ibright in the code above to a k-rate variable and, perhaps using expseg or linseg, create a time-varying envelope to control the mix of the low-pass and high-pass versions of our audio signal.
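A minimal sketch of such a change, reusing the ainput, alo and ahi signals from the fragment above, might look like this (the rise and fall proportions given to linseg are arbitrary):

kbright linseg 0, .5*p3, 1., .5*p3, 0               ; brightness rises, then falls, during the note
aoutput = (kbright * ahi) + ((1. - kbright) * alo)  ; time-varying mix of high-pass & low-pass
aoutput balance aoutput, ainput                     ; restore amplitude lost in filtering
out aoutput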

Note: Although only first order low-pass and high-pass filters with the same half power point are fully complementary, it certainly would be possible for us to employ higher order low-pass and high-pass filters in the example above. This will tend to exaggerate the difference between "low brightness" and "high brightness" notes, but sometimes it also can lead to a "hole in the middle," or "hollow-sounding" notes.

Imposing the amplitude envelope of one sound onto another sound

balance can also be a useful signal processor in its own right. In the instrument below, it is used in conjunction with unit generator follow to impose the amplitude envelope of one soundfile onto another soundfile. The arguments to follow are

     ares follow asig, idt 
where asig is the audio signal whose amplitude envelope is to be tracked, and idt is the period, in seconds, over which that amplitude is averaged (set from p12 in the instrument below; .01 usually works well).

Twelve sflib soundfiles are used in this example, and if you want to use the following orc/sco pair to compile an output soundfile yourself you will need to make soundin.# links first. The simplest way to do this is to type mktutsflinks in a shell window, which will create the links needed for all of the examples in this tutorial that use diskin2 and soundin to read in soundfiles.

In this example, the amplitude envelopes of shorter, more staccato soundfiles (which we will call control soundfiles here) are imposed upon longer, sustained soundfiles, which we will call audio soundfiles. The following table shows the control soundfile, the audio soundfile and the soundin.# link number for each of these soundfiles, which are provided in the score in p4 (for the audio soundfiles) and in p9 (for the control soundfiles):

Note:                  1                    2                    3                    4
Control    link #9               link #10             link #11             link #12
soundfile  perc/tb1.wav          perc/tb2.wav         perc/wb.wav          perc/crt.fs6.wav
--------------------------------------------------------------------------------------------
Audio      link #6               link #7              link #8              link #5
soundfile  perc/plate1.wav       perc/gong.ef3.wav    perc/cym1.wav        perc/tam.wav
============================================================================================
Note:                  5                    6                    7                    8
Control    link #13              link #14             link #15             link #16
soundfile  perc/bongo1.roll.wav  perc/sleighbells.wav perc/maracaroll.wav  x/voicetest.fs6.wav
--------------------------------------------------------------------------------------------
Audio      link #6               link #7              link #8              link #5
soundfile  perc/plate1.wav       perc/gong.ef3.wav    perc/cym1.wav        perc/tam.wav

;  #############################################################
;  ex5-3  : balance and follow
;  #############################################################
ksmps=5
nchnls=1
; p4  = soundin.#  number  of audio soundfile
; p5 = amplitude multiplier  ; p6 = skip from front of audio soundfile
; p7 = optional exponential fade in time ; p8 = optional exponential fade out time
; p9  = soundin.#  number  of control soundfile
; p10 = skip time into control soundfile
; p11 = % of control signal in output signal, 0 to 1.0
; p12 = period for averaging amplitude envelope (usually .01)
instr 1

audio  soundin  p4, p6   ; read in the AUDIO soundfile
acontrol soundin p9 , p10    ; read in the CONTROL soundfile
iperiod = (p12 = 0 ? .01 : p12)
afollow follow acontrol, iperiod ; tracks the envelope of "acontrol"

 ; impose control soundfile envelope on audio file
audio  balance  audio , afollow 
 ; vary output amplitude from note to note :
p5 = ( p5 = 0 ? 1. : p5 )
audio  =  audio * p5

; optional fade-in & fade-out 
if (p7 > 0) || (p8 > 0) then  ; if p7 and p8 are both 0, skip all of this
  ifadein=(p7=0 ? .001 : p7) ; guard against a 0 exponential fade-in time
  ifadeout=(p8=0 ? .001 : p8) ; guard against a 0 exponential fade-out time
  kfades expseg .01, ifadein, 1., p3-(ifadein + ifadeout), 1., ifadeout, .01
  audio = audio * kfades
  acontrol = acontrol * kfades
endif
out (p11 * acontrol) + ((1. -p11) * audio)
endin 
   -----------------------------------------------------------
 < ECMC Csound Library Tutorial score11 input file >>  ex5-3 << : 
< This example includes 12 soundfile inputs. Before you can run this Csound
< job you must create soundin.# links to these 12 soundfiles. To do this, type:
<   mktutsflinks
i1 0 0 8;                       < mono input, default mono output  
p3 nu  .25 /// 3. /  1.2 //3/; 
du nu 300.153/300.179/300.25/303./ 
       301.18 / 301.43 / 303.3/ 305.6; < 307.29;
p4 nu 6 /7 /8 /5;                < audio soundfiles from /sflib/perc :
      < 6 = plate1, 7 = gong.ef3 , 8 = cym1 , 5 = tam
p5 nu .2 / .6 /  .4 / .6 /
   .6 * 3/ .9;                  < output amplitude multiplier
p6 nu 0/1./.4/2.;                < duration skipped from front of audio sf
p7 nu 0*7/.8;         < optional added fade in time
p8 nu 0*7/.5;         < optional added fade out time
< envelope follower p-fields :
p9  nu 9 / 10 / 11 / 12 /  < soundin # of control soundfiles 
 < 9 = tb1 , 10 = tb2 , 11 = wb , 12 = crt.fs6 
  13 / 14 / 15 / 16;        < mostly from /sflib/perc
 < 13 = bongo1.roll , 14 = sleighbells , 15 = maracaroll , 16 = voicetest
p10  0 ;                  < skip off front of control soundfile
p11 nu .4*4/ .33*4;   < % of control signal in output signal, 0 to 1.0
< p12 = period for "follow" amplitude averaging; default (0) = .01; use larger values for low notes
p12        
end;
-----------------------------------------------------------  

The two soundfiles for each note are read in by two soundin unit generators, and the envelope of the "control" soundfile is tracked by follow:

audio soundin p4, p6 ; read in the AUDIO soundfile
acontrol soundin p9 , p10 ; read in the CONTROL soundfile
afollow follow acontrol, iperiod ; tracks the envelope of "acontrol"

diskin2 might be a better choice than soundin here, since it would allow us to transpose the pitch of both soundfiles, which is not possible with soundin.
The amplitude envelope of the shorter, more staccato control soundfile is imposed on the longer, sustained audio soundfile on this line:

audio  balance  audio , afollow 

(Note: It obviously would not work to try to reverse the functions of the two input soundfiles for each "note." We cannot impose the envelopes of long, sustained soundfiles on short, staccato soundfiles, because the short soundfiles decay quickly to zero, and would produce no sound during the long steady-state of the sustained soundfiles.)

The instrument also provides an optional fade-in (p7) and fade-out (p8), which are used only for the last note of our score, and additionally allows us to mix some of the control soundfile in with the audio output soundfile. The mix between the two signals is determined by p11, which specifies a mix of 40 % of the control signal and 60 % of the audio signal for the first four notes, and 33 % of the control signal and 67 % of the audio signal for the last four notes.

This instrument is an example of a situation where the global control rate (kr), or the alternative ksmps argument, can make a difference in the audio output. Note that in this instrument I have set the ksmps value rather low -- to 5 -- so that at a 44.1k sampling rate kr will equal 8820. balance thus will produce a new value every 5 samples. If we had set ksmps to a higher value, such as 20, balance would average the amplitude of every 20 samples, and thus probably would not accurately represent the sharp amplitude peaks of some of the control audio signals, such as the temple blocks and woodblock.
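As a reminder, the three rates in an orchestra header must satisfy sr = kr * ksmps, so the header for this example amounts to something like the following (a sketch; the 44100 sampling rate is assumed from the discussion above):

sr     = 44100   ; sampling rate (assumed)
kr     = 8820    ; control rate = sr / ksmps
ksmps  = 5       ; samples per control period
nchnls = 1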

5.4. Band pass and band reject filters

[ See the discussion of resonx, reson and areson in the Csound reference manual ]

reson is a first order band-pass filter. The passband is a bell-shaped curve, with progressively greater amounts of attenuation applied to frequencies both above and below a center (resonance) frequency. resonx is also a band-pass filter, but includes an additional argument that allows one to set the order of the filter, and thus the sharpness of the filtering. The arguments to reson and resonx are :

     ares reson  asig, kcf, kbw               [, iscl] [, iskip]
     ares resonx asig, kcf, kbw [, inumlayer] [, iscl] [, iskip]

The optional iscl argument is an amplitude scaling factor with three possible values: 0, 1 or 2. I recommend that you always set this argument to "1." If you want to restore lost amplitude after filtering, follow resonx (or almost any Csound filter) with unit generator balance, as in ex5-6.

For the curious, if iscl is set to "0" the filter may become unstable, producing SAMPLES OUT OF RANGE, so this is not recommended.
A "1" will cause resonx or reson to attenuate all frequencies except the center frequency. This will often result in considerable amplitude loss in the post-filtered signal. In areson only the center frequency will be "totally" attenuated.
A "2" will cause an internal amplitude adjustment to be made, so that the filtered output will be (very roughly) at the same amplitude level as the pre-filtered signal. However, the amplitude adjustment will not be as accurate as that produced by balance.

The frequency response produced by reson, and by resonx with inumlayer set to "1" is :

center frequency
|
passband
-3dB ---- bandwidth ---- -3dB
-6dB ------- 2 * bandwidth ------- -6dB
-9dB ------------ 3 * bandwidth ------------ -9dB
-12dB ------------------- 4 * bandwidth ------------------ -12dB
(and so on)
Thus, the smaller the bandwidth, the sharper the filtering, and the more sharply defined the resonance.

With a center frequency (Fc) of 1200 hertz and a pass band of 200 hertz:

  • an input sine tone at 1200 hertz will be passed with no amplitude reduction
  • frequencies at 1100 and 1300 hertz will be attenuated by 3 dB (to about 70 % of their original amplitude)
  • frequencies at 1000 and 1400 hertz will be attenuated by roughly 6 dB (to about half their original amplitude)

    With a second order band-pass filter (resonx with inumlayer set to 2), the filter rolloff at the upper and lower half-power points will be twice as sharp (12 dB per octave, rather than 6 dB per octave as above). Each increase in filter order produces an additional 6 dB attenuation per octave.

    Note that a band-pass filter with a center frequency of 0 hertz is a low-pass filter, and a band-pass filter centered at the Nyquist frequency is a high-pass filter.

    areson is a complementary first order band-reject ("notch") filter, which produces the sharpest filtering (roughly -60 dB) at the center frequency and progressively less attenuation on either side. The output of this filter -- a "V-shaped" notch -- is the inverse of the output of reson. butterbr is Csound's second order notch filter. Band reject filters are used less frequently than low- , high- and band-pass filters. Occasionally band reject filters with very narrow notch bands are used to reduce 60 hertz and 120 hertz (second harmonic) power supply hum from bad recordings. Band reject filters also can be used to notch out a portion of a complex frequency spectrum (e.g. a tam tam, or white noise), often creating a rather "hollow-sounding" output with a "hole in the middle."
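    To reduce power supply hum, for example, we might cascade two narrow notches, one at 60 hertz and one at 120 hertz. The fragment below is only a sketch; the 3 hertz bandwidth is a guess that would need to be tuned by ear:

     aclean areson ainput, 60, 3, 1     ; notch out 60 hertz hum
     aclean areson aclean, 120, 3, 1    ; notch out the 120 hertz second harmonic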

    In the following simple example, the center frequency remains fixed at 1000 hertz and the bandwidth values remain constant during a note. Each "note" (bandwidth value) is compiled twice, first without amplitude balancing and then with the amplitude balanced against the original, unfiltered noise source (see p7 and lines 7-10 in the orchestra):

    ;  #############################################################
    ;  soundfile ex5-4 : Resonx      Eastman Csound Tutorial
    ;  #############################################################
    ; p4 = center frequency  ; p5 = bandwidth
    ; p6 = filter order ; p7 = balance flag (0 = no balance, 1 = balance)
    ; p8 = attack time ; p9 = decay time
    nchnls=1
    
    1  instr 1
    2     kamp expseg 1,p8,12000,p3-(p8+p9),8000,p9,1
    3   anoise rand kamp
    4     iorder=(p6=0 ? 1 : p6)  ; filter order; reset default to 1
    5   afilter resonx anoise,p4,p5,iorder, 1 ; note iscl amp. scalar set to 1
    6  ; optional balance of output against input signal
    7  if (p7 == 1 ) then  ; if p7 is set to 1 then
    8                       ;  restore amplitude lost in filtering
    9        afilter balance afilter, anoise
    10  endif
    11     out afilter
    12  endin
      -----------------------------------------------------------
      < ECMC Csound Library Tutorial score11 input file >>  ex5-4 << : 
       < Score11 file used to create Eastman Csound Tutorial soundfile example
    i1 0 0 10;
    p3 rh 4;
    p4 1000;                                    < filter center frequency
    p5 nu 15//100//500//1500//5000//;                 < filter bandwidth
    < filter order is set to  1 in p6
    p6 nu 1;                                    < filter order (def. 0 = 1)
    p7 nu 0/1;    < 0 = no balance, 1 = balance against unfiltered original signal
    p8 .25;                                     < attack time
    p9 .25;                                     < decay time
    end;
      ----------------------------------------------------------- 

    In the next example, both the center frequency and the bandwidth vary within each note. A simple envelope is created to vary the center frequency on line 7. Optional random deviation (used in notes 1 and 3) is created and added to this time-varying center frequency on lines 8 and 9. Similarly, a time-varying bandwidth is created on line 10, and optional random deviation (employed in notes 2 and 3) is applied to this bandwidth envelope in lines 11 and 12. The filtered output signal is not balanced against the original input signal.

    ;  #############################################################
    ;  soundfile ex5-5   : center freq. & bandwidth vary during each note
    ;  random deviation also added to  center freq. and bandwidth
    ;  #############################################################
    ; p4 = center frequency 1 (beginning) ; p5 = center frequency 2 (end)
    ; p6 = rise time ; p7 = decay time; p8 through p11 = time varying filter bandwidth values
    nchnls=1
    
    1  instr 1
    2   kamp expseg 1,p6,8000,p3-(p6+p7),5000,p7,1
    3   anoise rand kamp
    4  ; bandpass filter values:
    5    p4 = (p4<15? cpspch(p4) : p4)
    6    p5 = (p5<15? cpspch(p5) : p5)
    7   kcf expon p4,p3,p5                   ; center frequency envelope
    8     kcfrand randi p12,p13   ; random deviation for center frequency
    9   kcf = kcf + (kcfrand * kcf)
    10   kbw expseg p8,p6,p9,p3-(p6+p7),p10,p7,p11   ; bandwidth envelope
    11     kbwrand randi p14,p15   ; random deviation for bandwidth
    12   kbw = kbw + (kbwrand * kbw)
    
    13    afilt reson anoise,kcf,kbw*kcf,2
    14    out afilt
    15  endin
      -----------------------------------------------------------
      < ECMC Csound Library Tutorial score11 input file >>  ex5-5 << : 
      < Score11 file used to create Eastman Csound Tutorial soundfile example
      < ex5-5 : bandpass filter; center freq. & bandwidth vary during each note
    i1 0 0 3;
    p3 4;
    du 1.1;
    p4 no a3/d7/c2;                             < center frequency  1(beginning)
    p5 no a4/ef3/fs2;                           < center frequency  2 (end)
    p6 .25;                                     < attack time
    p7 1.;                                     < decay time
    < all bandwidths are multipliers for center frequency, so higher
    < p4 or p5 values tend to produce wider bandwidths
    p8 nu 5./.03/9.;                          < bandwidth 1 (beginning)
    p9 nu 2./1./.3 ;                          < bandwidth 2 (after p6)
    p10 nu .3/3./6.;                          < bandwidth 3 (before p7)
    p11 nu .05/10./.5;                         < bandwidth 4 (end of  decay)
    p12 nu .18/ 0/.25;< .15;      < random deviation % for center frequency
    p13 nu 6./0/3.;        < rand. dev. rate for center frequency
    p14 nu 0/.2/.25;< .15;       < rand. dev. % for bandwidth
    p15 nu 0/5./ 3.;       < rand. dev. rate for bandwidth
    end;
      -----------------------------------------------------------
    

    5.4.1. Using band-pass filters in parallel

    Many acoustic sounds, including the human voice, most aerophones (wind instruments) and chordophones (string instruments) and many membranophone and other percussive sounds, produce complex frequency spectra resulting from the amplification and filtering of source vibrations by a resonator. The resonator vibrates sympathetically, greatly increasing the amplitude, but in a frequency-selective manner, responding much more to some frequencies than to others. Complex resonators such as the sounding board of a piano or harp, or the wooden body of a violin or cello, have many resonances of varying strengths, and occasionally one or two antiresonances as well. The complex frequency response of such resonators may resemble the silhouette of a descending mountain range -- a combination of low pass filtering with many narrow band pass "spikes." The air inside the violin or cello body acts as a simple resonator, producing a formant (strong resonance) within a fairly narrow pass band. Similarly, the throat, mouth, tongue and nasal passages sharply filter human speech and singing, providing both individual vocal timbres and also creating the formants we associate with particular vowels.

    By splitting an audio signal into several paths, each routed through a band-pass filter to produce a particular formant, we can simulate such complex resonant responses. The filters generally should be applied in parallel (with the input to each being the unfiltered original signal), since if applied in series the filters largely would cancel each other out.
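    Schematically, a three-formant version of this idea might look like the fragment below (a sketch only: the center frequencies, bandwidths and weights are invented placeholder values, and asource stands for any input signal). The instrument ex5-6 later in this chapter does the same thing with five formants whose values are read in from function tables:

     a1 reson asource,  600,  80, 1     ; formant 1 (each filter receives the unfiltered source)
     a2 reson asource, 1200, 120, 1     ; formant 2
     a3 reson asource, 2500, 200, 1     ; formant 3
     aformants = a1 + (.7 * a2) + (.4 * a3)
     aout balance aformants, asource    ; restore amplitude lost in filtering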

    5.4.2. Unit generator table; gen02

    [ See the discussions of table and GEN02 in the Csound reference manual]

    Rather than typing in center frequencies and bandwidths repeatedly for several bandpass filters in our scores, it is generally more efficient to create tables of these values. The numbers within such tables can then be read in to the filter arguments as needed.

    Unit generator table provides this capability to read in raw values from a function table.
    (table is the non-interpolating sibling of tablei, which we employed in ex3-6 to read in function tables filled with soundfile samples.)
    The two required arguments to table are:

    (1) indx : the location within a table of the value we want.
    The first value in a table is location "0," the second value location "1," and so on.
    (2) ifn : the function number of the table we are using

    Function generator gen02 allows us to type in the exact values we wish to place within a table. If we want a table of five numbers, we need a table size of "8" (since function tables must be powers-of-two or powers-of-two plus one). One other minor problem is that by default most Csound gen routines, including gen 2, normalize the values within the tables they create to a maximum value of floating point "1." Thus, the following call to gen2

    *f90 0 8 2 145 300 650 1380 1720;

    would be normalized, with "1720" becoming "1." in the table and the first four values scaled proportionately. Preceding the call to gen02 with a minus sign, however :

    *f90 0 8 -2 145 300 650 1380 1720;

    will cause the normalization procedure to be skipped, giving us precisely the five integer values we have asked for in locations 0 through 4 of function table "90."
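    Given the non-normalized function 90 above, i-time calls to table can then pull these values into an instrument, as in this brief sketch (the variable names are arbitrary; the ex5-6 instrument below uses exactly this pattern with its vowel functions):

     iformant1 table 0, 90     ; 145
     iformant2 table 1, 90     ; 300
     iformant3 table 2, 90     ; 650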

    With all of the foregoing clear (yes?), we present the following instrument algorithm and score, in which we run an alternating series of white noise, pulse train and cymbal audio signals through a filter network that imposes a series of vowel-like formants upon these three sound sources. Since the white noise sources tend to sound loud, and the cymbal sources soft, we have included amplitude adjustments to attenuate the white noise signals and boost the cymbal sources.

    
    ;  #############################################################
    ;  soundfile ex5-6 : 5 band-pass filters for vowel-like formant resonances
    ;  #############################################################
    nchnls=1
    
    instr 1
    ivowelfunc=p10 ; table with resonances for male and female vowels 
    kamp envlpx p5,  p6,  p3,  p7,  1,  p8,  .01 ; amplitude envelope
    ; ==== get source audio signal, determined by p9 =======
    if (p9 == 0) then 
       asource  rand  kamp* .6 ; white noise source signal; these tend to 
         ;sound loud, so reduce amplitude of white noise sources by 40 % 
    elseif (p9 == 1) then
    ;  generate a pulse train source signal :
       ipitch = cpspch(p4)  ; pitch for buzz
      if (ivowelfunc <= 87) then
         iharmonics = (5000/ipitch)  ; top frequency of 5000 for male resonances
           else
         iharmonics = (9000/ipitch) ; top frequency of 9000 for female resonances
       endif
       asource  buzz  kamp, ipitch,  iharmonics, 100  ; pulse train source signal
    elseif (p9 == 2) then
        asource  soundin  p12 , p13
        asource = asource * 1.2 * (kamp/32767)  ; impose new envelope on soundfile
          ; the cymbals tend to sound soft, so increase their amp. by 20 %
    else  ; user error; invalid p9 argument given 
       print p2, p9
       printks "ERROR. p9 must be 0, 1 or 2 . Aborting this note.\n", 1
       turnoff
    
    endif
      ; - - - - - - - - - - -
    ;FILTER CENTER FREQUENCIES & RELATIVE AMPS. FROM FUNCTIONS 80-87, 90-94
    iformant1 table 0, ivowelfunc   ; 1st formant frequency
    iformant2 table 1, ivowelfunc   ; 2nd   "   "   "
    iformant3 table 2, ivowelfunc   ; 3rd   "   "   "
    iformant4 table 3, ivowelfunc   ; 4th   "   "   "
    iformant5 table 4, ivowelfunc   ; 5th   "   "   "
    iamp1 table 5, ivowelfunc   ; relative amplitude of 1st formant
    iamp2 table 6, ivowelfunc   ;    "    "    "    "   2nd
    iamp3 table 7, ivowelfunc  ;    "    "    "    "   3rd
    iamp4 table 8, ivowelfunc  ;    "    "    "    "   4th
    iamp5 table 9, ivowelfunc  ;    "    "    "    "   5th
    
    ;  5 first order BANDPASS FILTERS TO SUPPLY THE 5 FORMANTS
    a2 resonx iamp1*asource, iformant1, 1.2*p11 * iformant1,1, 1	
    a3 resonx iamp2*asource, iformant2, 1.05*p11 * iformant2,1, 1
    a4 resonx iamp3*asource, iformant3, .9*p11 * iformant3,1, 1
    a5 resonx iamp4*asource, iformant4, .8*p11 * iformant4,1, 1
    a6 resonx iamp5*asource, iformant5, .7*p11 * iformant5,1, 1
    aformants  = a2 + a3 + a4 + a5 + a6
    aout   balance aformants , asource   ; restore amplitude lost in filtering
       out aout
    endin
    ----------------------------------------------------------- 
     < score11 file  for ex5-6
    
       < Score11 file used to create Eastman Csound Tutorial soundfile example
       < ex5-6 : 5 bandpass filters used to create vowel-like timbres
    * f1 0 65 7 0 40 .75 4 .70 20 1.;	
    

    The vowel functions in the score above are available in the Eastman Csound Library file vowelfuncs for your use (or abuse). To obtain a copy, type

    getfunc vowelfuncs
    An alternative listing of selected vocal formants is included in Appendix C : Formant values of the online Csound Reference Manual.

    In ex5-6, the center frequencies and bandwidths of the vowel-like resonances both remain fixed, and the audible result is thus somewhat "flat" and robotic-sounding. If, instead, we "move these resonances around" somewhat, by applying a little bit of random deviation to the center frequencies, bandwidths or both, the resonances may have more "life."

    5.5. Comb, Alpass and Reverberation Filters

    [ See the discussions of comb and alpass in the Csound reference manual]

    comb and alpass filters send an audio signal through a delay line that includes a feedback loop. Very short delay times (less than 40 milliseconds, and often less than 10 ms.) are normally used, resulting in multiple repetitions (too fast to be heard as echoes) which fuse together to form a reverberant response. The arguments to both of these unit generators are :

         ares comb   asig, krvt, ilpt [, iskip] [, insmps]
         ares alpass asig, krvt, ilpt [, iskip] [, insmps] 
    (1) the input audio signal (asig)
    (2) a reverberation time, in seconds (krvt)
    The value you supply here determines a signal feedback percentage within the unit generator. Note that this reverberation time argument can be varied within a note by means of control signal inputs.
    (3) loop (delay) time (ilpt). Typical ilpt values range between .001 and .04.

    Comb filters tend to add strong coloration, often of a metallic quality, to an audio signal. Owing to the fixed delay and loop (feedback) time, the reiterations of some frequencies will be in phase (and thus increased in amplitude), while other frequencies will be out of phase, leading to partial or total cancellation. The resulting frequency response of the filter is an alternating series of equally-spaced, equal-strength peaks and nulls. If graphed, this response looks somewhat like the teeth of a comb, but actually more like a repeating triangle wave. The number of peaks is equal to

    loop time * ( sampling rate / 2 )

    Thus, with a sampling rate of 44100 and a loop time of .02, the response of the comb will include 441 peaks and 441 nulls (.02*44100/2). Each peak (and each null) will be spaced 50 hertz apart, from 0 hertz to the Nyquist frequency (here, 22050 hertz). Since these peaks are harmonically related, the output of comb often will have a pitched twang at a fixed frequency of 50 hertz (see table below) and at harmonic multiples of 50 hertz. This is highly undesirable if one wants "natural-sounding" reverberation, and comb filters are a poor choice for this purpose. Rather, they are useful for particular coloristic effects, and as building-blocks in the construction of more sophisticated reverberators.

    The following table indicates the frequency of the lowest peak for comb filters with delay and feedback times between 1 and 20 milliseconds. All other peaks are harmonic multiples of this frequency(2*, 3*, etc.). Note that although the number of peaks varies with different sampling rates (here, SR = 44100 and 96000), the sampling rate has no effect on the frequencies of the peaks (4th column) for any given delay-loop time.

    loop-feedback time  SR = 44100   SR = 96000   Frequency of lowest peak
                        # of peaks   # of peaks   (and spacing between peaks)
         .001              22          48           1000
         .002              44          96            500
         .003              66          144           333.33
         .004              88          192           250
         .005             110          240           200
         .006             132          288           166.67
         .007             154          336           142.86
         .008             176          384           125
         .009             198          432           111.11
         .01              220          480           100
         .02              441          960            50
    

    A delay/feedback time of .0015 would produce a fundamental peak of 666.67 hertz.
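    The arithmetic behind this table is simply 1 / (loop time) for the peak frequency and spacing, and loop time * (sampling rate / 2) for the number of peaks. The following lines, placed inside any instrument, would verify a row of the table (a sketch; the variable names are arbitrary):

     ilpt      = .0015           ; loop-feedback time in seconds
     ipeak     = 1 / ilpt        ; frequency of (and spacing between) peaks: 666.67 hertz
     inumpeaks = ilpt * (sr/2)   ; number of peaks below the Nyquist frequency
     print ipeak, inumpeaks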

    In the orchestra file used to create soundfile example ex5-7, we read in a soundfile (in this example, a portion of sflib soundfile perc/bongo1.roll), and then process this input with a comb filter. One problem when using any reverberant signal processor (such as comb and alpass, or any reverberator) is that the output duration must be longer than the duration of the input signal in order to allow the concluding reverberant signal (which continues after the input has died away) to decay completely to zero amplitude. If we do not allow this extra time for the trailing reverberation, the end of each input sound may seem abrupt, and the loudspeakers may produce a click or pop.

    To solve this problem, we create the variable indur ("input duration") at the beginning of our instrument and set it to the value for p3 in our score. We then reset p3, adding half a second to the duration of the output note during which the comb filter output will decay to 0. We also create an amplitude envelope (kamp) that allows us to control the output gain (the variable iamp), and, optionally, to apply a fade-in and/or fade-out to the input signal. The duration of this envelope, up until the end of the fade-out, equals indur (the duration, before comb filtering, specified in our score). At the end of this envelope, we tack on .5 seconds of silence to mute any input signal and allow the reverberation to decay:

    
    ; #############################################################
     ;  soundfile ex5-7    :  Comb filter        Csound Tutorial
     ;  #############################################################
     nchnls=1
    
     ; p4  = soundin.#   sflink  number
     ; p5 = amplitude multiplier      ; p6 = skip from front of soundfile
     ; p7 = exponential fade in time ; p8 = exponential fade out time
     ;  comb filter:  p9  =  comb reverb time , p10  =  comb loop time
    
    instr 10
    
    indur=p3 ; duration of input signal specified in the score
     p3 = p3 + .5 ; add .5 seconds to output duration for comb output
                  ; reverberation to decay to 0
     audio soundin p4 , p6
    
     ;    output amplitude envelope: includes optional fade-in & fade-out  and
     ;    .5 seconds at end to mute any input signal
 iamp  = (p5 = 0 ? 1 : p5)
 ifadein  = (p7 = 0 ? .001 : p7)
 ifadeout  = (p8 = 0 ? .001 : p8)
     kamp expseg .01,ifadein ,iamp,indur-(ifadein +ifadeout ),iamp,ifadeout ,.01 , .5 , .001
     audio = audio * kamp
    
     aout  comb audio ,p9, p10
     out aout
     endin
     -----------------------------------------------------------
      < ECMC Csound Library Tutorial score11 input file >>  ex5-7 << : 
       < Score11 file used to create Eastman Csound Tutorial soundfile examples
       < ex5-7 & ex5-8: comb & alpass filter examples
    < This score uses inputsoundfile /snd/sflib/perc/bongo1.roll.wav
    < To create a link numbered 13 to this soundfile type:  
    <    sflinksfl bongo1.roll 13
    i10 0 0 7;         < mono input, default mono output
    p3 2;                                                     
    du  301.17 ;  < .5" will be added to this duration in the orchestra
    p4 13;  < soundin.# : soundin.13 points to /sflib/perc/bongo1.roll
    p5 .5;                             < ampfac
    p6                                 < duration skipped from front of sf
    p7                                   < fade in time
    p8 .05;                              < fade out time
    p9 nu 0/.1/.25/.7/.3///;    < comb or alpass reverb time
    p10 nu .002*4/.005/.017/.033;  < comb or alpass loop time
    end; 
     ----------------------------------------------------------- 

    Notice in the score that the first value in p9 (which sets the reverberation time for the comb filter) is 0, so the comb filter has no effect on this note. In notes 2, 3 and 4, p9 is increased (from .1 to .7), so the reverberation becomes progressively longer and the pitched coloration more pronounced.

    With alpass filters, the peaks and nulls, and the resulting coloration of the input sound, are less pronounced, especially when the reverberation time is low. alpass filters do add a coloration to a sound resulting from phase cancellation and reinforcement, but the emphasized frequencies are continuously changing, so that over time the alpass filter passes all frequencies equally.

    Orchestra file ex5-8 is identical to orchestra ex5-7 above, except that an alpass filter is substituted for the comb filter. If you listen to and compare soundfiles /sflib/x/ex5-7.wav and /sflib/x/ex5-8.wav, the principal audible differences between comb and alpass filtering should be evident.

     ; #############################################################
     ;  soundfile ex5-8    :  Alpass filter        Csound Tutorial
    ; Note: score11 file ex5-7 is used as the score for this instrument
     ;  #############################################################
    
     nchnls=1
    instr 10
    
    indur=p3 ; duration of input signal specified in the score
     p3 = p3 + .5 ; add .5 seconds to output duration for alpass output
                  ; reverberation to decay to 0
     audio soundin p4 , p6
    
     ;    output amplitude envelope: includes optional fade-in & fade-out  and
     ;    .5 seconds at end to mute any input signal
 iamp  = (p5 = 0 ? 1 : p5)
 ifadein  = (p7 = 0 ? .001 : p7)
 ifadeout  = (p8 = 0 ? .001 : p8)
     kamp expseg .01,ifadein ,iamp,indur-(ifadein +ifadeout ),iamp,ifadeout ,.01 , .5 , .001
     audio = audio * kamp
    
     aout  alpass audio ,p9, p10
     out aout
     endin

    In both of the preceding examples, 100 % of the input soundfile was sent through the comb or alpass filter, and the output from the instrument consisted solely of the filtered output. More often, however, only a portion of the input signal (say, somewhere between 20% and 60 %) is sent to comb or alpass, and the remaining direct signal is sent straight out. Generally a p-field is introduced in the orchestra and score to handle this wet/dry mix, as in ex5-9 below.

    It is easy to tune the resonant output of a comb filter to a particular pitch. ex5-9 imposes the pitches of a harmonic minor scale on an alternating series of bongo roll, tam tam, babbling brook and gong input soundfiles. If you want to compile a soundfile yourself with this orc/sco pair, you must create links to these soundfiles so that unit generator soundin can find them. Do this by typing mktutsflinks, if you have not already done so for earlier orc/sco examples.

    
    ; #############################################################
     ;  soundfile ex5-9  :  Tuning comb filters        Csound Tutorial
     ;  #############################################################
     nchnls=1
    
     ; p4  = soundin.#   sflink  number
     ; p5 = amplitude multiplier      ; p6 = skip from front of soundfile
     ; p7 = exponential fade in time ; p8 = exponential fade out time
     ;  comb filter: p9  =  comb reverb time , p10 = comb  resonant. freq.
     ; p11 = wet/dry mix
    
    instr 10
    indur=p3 ; duration of input signal specified in the score
     p3 = p3 + .5 ; add .5 seconds to output duration for comb output
                  ; reverberation to decay to 0
     ain soundin p4 , p6
    
     ;    output amplitude envelope: includes optional fade-in & fade-out  and
     ;    .3 seconds at end to mute any input signal
 iamp  = (p5 = 0 ? 1 : p5)
 ifadein  = (p7 = 0 ? .001 : p7)
 ifadeout  = (p8 = 0 ? .001 : p8)
     amp expseg .01,ifadein ,iamp,indur-(ifadein +ifadeout ),iamp,ifadeout ,.01 , .3 , .001
     ain = ain * amp
    icombpitch = (p10 < 13.0 ? cpspch(p10) : p10 ) ; comb pitch in cps or pch
    iloop = 1/icombpitch
    print p10, icombpitch, iloop
     acomb  comb ain ,p9, iloop
     out (p11 * acomb ) + ((1. -p11) * ain)
     endin
    ----------------------------------------------
      < ECMC Csound Library Tutorial score11 input file >>  ex5-9 << : 
    < This orc/sco pair uses soundin to read in sflib soundfiles
< Before you can run this orc/sco you must create links to these soundfiles
    < The easiest way to do this is to type:   mktutsflinks
    i10 0 0 8;         < mono input, default mono output
    p3 1;                                                     
    du  301.17 ;  < .5" will be added to this duration in the orchestra
    p4 nu 13/5/20/7; < soundin.# : 
 < 13 = bongo1.roll, 5 = tam, 20 = brook, 7 = gong.ef3
    p5 .4;                                 < ampfac
    p6                                     < duration skipped from front of sf
    p7  .2;                                   < fade in time
    p8 .2;                                 < fade out time
    p9 nu .1/.3/.6/.9;          < comb or alpass reverb time
    < p10 desired output resonant pitch or freq. of the comb filter
    p10 no c4/d4/ef/f/g/af/b/c5; < harmonic minor scale
    p11 nu .8/.7/.6/.5;      < wet/dry mix : % wet signal
    end;
    ----------------------------------------------

    For the adventurous, the opcodes vcomb and valpass allow one to vary the loop time (and thus the coloration, or "pitch") as well as the reverberation time within a "note" or "event."
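
    For example, here is a minimal, untested sketch of one way vcomb might be used to glide the resonant pitch of the filter during a note. The soundin link number (13), the pitch values and the instrument number are arbitrary assumptions borrowed from the examples above, and the p3 extension and output envelope of ex5-7 are omitted for brevity; check the manual entry for vcomb before relying on the argument list shown here:

    instr 11
    ain     soundin  13                     ; input signal (here, the bongo1.roll link)
    krvt    =        2                      ; reverberation time in seconds
    kpitch  expseg   cpspch(8.00), p3, cpspch(9.00) ; glide the resonance from c4 up to c5
    klpt    =        1/kpitch               ; loop time = 1/resonant frequency
    aout    vcomb    ain, krvt, klpt, 1/cpspch(8.00) ; max loop time = longest (lowest-pitched) loop time used
            out      aout
    endin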

    Reverberation unit generators

    There are two basic ways to employ digital signal processing today to add "artificial" reverberation to computer-generated sounds:

    1. Algorithms based on delay lines (comb, alpass and other types of filters). This is the traditional approach. The results can sound cheesy or quite good, depending upon the quality of the reverberation algorithm. High quality algorithms tend to be slow and resource-intensive.
    2. Convolution algorithms, which require an impulse response file of a resonant space, such as a concert hall like Kilbourn. Several convolution opcodes, including convolve, dconv and pconvolve, are available in Csound for performing this type of reverberation, although setting up convolution reverb can be a tricky process.

      The convolve opcode employs Fast Fourier Transform (FFT) procedures (similar in effect to, but considerably faster than, direct time-domain convolution) to filter an input soundfile through the time varying impulse response characteristics of a particular concert hall or other acoustic space. Several steps are necessary to accomplish this operation. We will describe these steps briefly here, but a full orc/sco example is beyond the current scope of this tutorial. (A minimal sketch using the simpler pconvolve opcode appears after these steps.)

      • One must obtain or record a suitable impulse response soundfile. Typically, such soundfiles contain the recording, made in a particular hall, of a single, sharp, very short impulse (ideally a single sample value of 1 followed by zeros), capturing the reverberant qualities of this acoustical space. Some suitable impulse response soundfiles can be downloaded over the Internet. (For example, links to collections of impulse response soundfiles can be found at http://www.csounds.com/resources/impulses.html).
      • Next, one must convert this impulse response soundfile into a frequency-domain analysis file with the Csound cvanal utility.
      • Finally, one constructs an orchestra file, similar to the example provided in the Csound manual discussion of convolve, along with a companion score file, to apply the reverberant qualities of the impulse response file to an input soundfile. At times, this can be a complicated process, particularly with regard to achieving accurate time synchronization between the dry input signal and the reverberant convolution signal.
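
    As a quicker first experiment, the pconvolve opcode can read an impulse response soundfile directly, bypassing the cvanal step. The fragment below is only an untested sketch: the mono impulse response file name ("hall_ir.wav"), the soundin link number and the wet/dry proportions are placeholders, not files or values supplied with this tutorial:

    instr 12
    adry    soundin   13                    ; dry input signal
    awet    pconvolve adry, "hall_ir.wav"   ; convolve the input with the impulse response
    ; note: pconvolve delays its output by its partition size, so the wet signal
    ; will lag slightly behind the dry signal unless the dry signal also is delayed
            out       (.3 * awet) + (.7 * adry) ; wet/dry mix; season to taste
    endin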

    For accurate spatial imaging of sounds, left to right, front to back and high to low, ambisonic processing and decoding over 2, 4, 6, 8 or more loudspeakers can be very effective. If you want to add ambisonic processing to one of your Csound instruments, you can use a script called mkob and, for score files, getscb to add Csound code and p-fields to existing orchestra and score files. The ecmchelp file named mkob can get you started, but this help file is not yet complete, and you should see me for additional help if you wish to try this out. (Warning: It will take you a few hours to get a handle on how to use mkob and getscb.) Ambisonic processing, by itself, does not create reverberation, but rather relies upon an existing reverberation algorithm for this aspect of the sound spatialization.

    Sometimes we do not need or wish to devote the time and energy to fussing with ambisonic processing or convolution reverb, and simply need to add some decent-sounding reverberation to audio signals. This is the situation we will address here, and we will look at some relatively quick procedures for adding delay line-based reverberation to the output of existing Csound instruments.

    Delay line-based reverberators: freeverb

    The original Csound unit generator, called reverb, was based on an algorithm developed by Schroeder that combines four comb filters in parallel followed by two alpass filters in series:

    
                             -----comb 1--------
                             |                 |
                         |----comb 2-------|
                             |                 |
        audio input signal ->>                 + --alpass1---alpass2 --> out
                             |                 |
                             |----comb 3-------|
                             |                 |
                             |----comb 4-------|
    

    Each of the filters has a different loop time, typically chosen so that the delay lengths, in samples, are mutually prime and thus not related by simple ratios. As a result, the overall frequency response is relatively flat, without the obvious pitch coloration of individual comb filters. However, the design of opcode reverb, now more than thirty years old, is not very sophisticated by today's standards. The reverberant signal sometimes includes an annoying flutter or twang, or may hiss like a rattlesnake, and I do not recommend using this old geezer. Opcodes nreverb and reverbsc are better, but of the delay line-based reverberators available in Csound, I recommend freeverb, a Csound implementation of the free software stereo-out freeverb reverberation algorithm, which uses 8 comb filters in parallel followed by four alpass filters in series -- a total of 12 filters per channel.
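
    Before moving on to freeverb, here is a minimal, untested sketch of the Schroeder topology diagrammed above, built from the comb and alpass opcodes. The loop and reverberation times are merely illustrative values chosen to avoid simple ratios, not the exact constants used inside the reverb opcode, and the soundin link number again is borrowed from the earlier examples:

    instr 20
    ain    soundin 13                  ; signal to be reverberated
    irvt   =       1.5                 ; reverberation time in seconds
    ac1    comb    ain, irvt, .0297    ; four comb filters in parallel,
    ac2    comb    ain, irvt, .0371    ; each with an unrelated loop time
    ac3    comb    ain, irvt, .0411
    ac4    comb    ain, irvt, .0437
    asum   =       (ac1 + ac2 + ac3 + ac4) * .25  ; sum and scale the comb outputs
    aap1   alpass  asum, .1, .005      ; two alpass filters in series
    arev   alpass  aap1, .1, .0017
           out     arev
    endin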

    The arguments to freeverb are

     aoutL, aoutR freeverb ainL, ainR, kRoomSize, kHFDamp[, iSRate[, iSkip]] 
    1. aoutL and aoutR are the left and right channel reverberated outputs.
    2. ainL and ainR are the left and right channel inputs; for mono-in, stereo-out, the same signal can be sent to both of these inputs.
    3. kRoomSize, which should be given a value between 0 and 1, controls the reverberation time and thus the virtual size of the room. The higher the value, up to 1.0, the "larger" the room and the longer the reverberation time.
    4. kHFDamp, which also should be set between 0 and 1, attempts to simulate control of high frequency diffusion (how quickly high frequencies decay relative to lower frequencies). In natural (acoustic) room reverberation, high frequencies almost always decay more quickly than lower frequencies. The more sound absorptive material in a room, and the larger the size of the room, the greater the discrepancy between high and low frequency decay rates. High kHFdamp values, between about .7 and 1. (the maximum usable value), tend to produce a "drier", "pingier" (more "staccato") reverberant ambience, like a room with high absorptive coefficients (e.g. a room with thick rugs and drapes and many soft, porous surfaces). When kHFdamp is set to a low value (between about .2 and 0), the reverberation is "wetter," "brighter" for sounds with high frequency spectra and "boomier" for lower pitched sounds, simulating the ambience of a room with hard, reflective surfaces (e.g. cement block walls). Think of the reverberation time (kRoomSize) argument as determining the size of the reverberant room and the kHFdamp diffusion parameter as determining the "acoustical treatment" of this room, and season to taste. kHFdamp values between about .5 and .8 are most common.
    5. The optional iSRate argument normally should be set to the global variable sr. The output of freeverb may sound slightly different at different sampling rates.
    6. Unless you really know what you are doing, the optional iSkip argument should be left blank (set to 0).

    Note that both kRoomSize and kHFdamp are k-rate arguments, and thus can be varied within a "note event" by a control signal. It therefore is theoretically possible to vary the size of the reverberant space (turning Kilbourn into the Eastman Theatre) or its absorptive characteristics (removing the curtains and carpets) while a note is being played. Obviously this is not common, however, and may lead to artifacts.
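
    Since both arguments are k-rate, a control signal such as linseg can sweep them over the course of a note. A minimal, untested fragment for a stereo orchestra, assuming a mono audio signal ain already exists within the instrument:

    kroomsize  linseg   .3, p3, .95          ; the "room" grows from small to large over the note
    khfdamp    linseg   .8, p3, .2           ; its surfaces become progressively more reflective
    arevL, arevR freeverb ain, ain, kroomsize, khfdamp, sr
               outs     arevL, arevR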

    Without breaking much of a sweat, we could modify the orchestra and score files of ex5-9 to add post-processing reverberation, rather than comb filtering, to our soundfile. We would substitute unit generator freeverb for comb within the orchestra file, use score p9 to control kRoomSize, change p10 to control kHFdamp, and continue to use p11 to determine the wet/dry signal mix. We also could change the output to stereo, and add another score p-field to control the left-right pan location of each output note:

        ; p9 = room size (0 to 1 )
        ; p10 = high freq. damping (0 to 1)
        ; p12 = 0 to 1, hard left to right stereo pan location
       ipanL = sqrt(p12) ;  % of mono input to left channel
       ipanR = sqrt(1. -p12) ; % of mono input to right channel
       arevL, arevR freeverb ipanL * ain, ipanR * ain, p9 , p10, sr; apply reverberation to input signal 
   aoutL = (p11 * arevL) + ((1. - p11) * ipanL * ain) ; wet/dry mix, left channel 
   aoutR = (p11 * arevR) + ((1. - p11) * ipanR * ain) ; wet/dry mix, right channel 
       outs aoutL, aoutR
    

    Then, with random selection score values for p9, p10, p11 and p12 such as the following

    p9 1. .2 .8 ;  < randomly vary room size, small to large, for each note
p10 1. .1 .6;  < randomly vary high freq. damping, dull to fairly bright
    p11 1. .1 .6;  <  randomly vary wet/dry mix
    p12 .5 .1 .4 .5 .6 .9 ;   < randomly vary L/R pan location 
    

    we could randomly place each output note in a different left-right and "close-distant" ("dry-wet") location, and also vary the apparent room size (reverberation time) and high frequency diffusion ("room brightness") to place each "note" in a "different hall."

    This probably seems a little extravagant, and it also may make our instrument slow and a resource hog. If we create a score in which ten or so notes are sounding simultaneously, each with its own freeverb opcode containing 12 delay-line filters per channel crunching away and consuming RAM, our Csound compile job may slow to a crawl.

    Moreover, we generally do not wish to vary all such post-processing operations so radically from one note to the next. Most often, we simply wish to mix together all of the notes being produced by a given instrument, and then apply a uniform reverberant quality to this passage. In order to perform such "global" signal processing operations with Csound, we need to create a separate global instrument within our orchestra file.

    5.6. Global Instruments

    Global instruments often do not generate any audio signal themselves; instead, they typically are used for post-processing, modifying audio signals that have been generated and mixed together by other instruments within the orchestra file. Often, a global post-processing instrument such as a reverberator or delay line will "play" only one "note," or "event." The instrument is turned on at the beginning of a passage, receives audio input from all of the notes created by one or more "source" instruments, processes this audio input (e.g. by adding reverberation), sends it to Csound's output buffer(s), and then is turned off when all input and output are completed. (Occasionally, however, global instruments are employed by advanced users instead as pre-processors, establishing certain variables that will be accessed by other instruments within an orchestra file, or even turning copies of other instruments on and off.) There are a few unique aspects to dealing with global instruments that we have not yet encountered.

    Local and Global Variables
    [ Recommended: See the discussion of global variables within the discussion of Constants and variables in the Csound Reference manual. However, this page also includes information on topics not directly related to global variables, so you may find portions of this reference page confusing. ]

    All of the i-rate, k-rate and a-rate variables we have looked at so far, with the exception of the header arguments sr, kr, ksmps and nchnls, have been local variables. Local variables, such as "p4," "ipitch," "kenv," "kmart" (remember?) and "a1," are only used, and can only be accessed, by one copy (created by one note event, or I statement, within the score file) of one instrument. Several copies of an instrument can be "playing" simultaneously ("polyphonically"), each with a different ipitch and/or kenv value, without colliding. This is because the ipitch or kenv value for each copy is written to a unique ("local") RAM memory location that can only be accessed by this copy.

    After Csound computes a sample for one instrument copy, it zeros out all of the a-rate local variables, creating these values from scratch on each sample pass. When it completes a control (k-rate) cycle of samples for an instrument copy, it zeros out all the local k values, updating them at the beginning of the next k-rate pass.

    Global variables, by contrast, can be accessed by all copies of all instruments currently in memory. Global variables can be created and updated at the i-rate, k-rate or a-rate, and are preceded by a "g", with output variable names such as gireverbtime (a global initialization variable), gkpitch (a global k-rate variable) and gaudio (a global audio-rate variable). The variables a1 and ga1 within an instrument are entirely distinct.

    One other important distinction must be noted: global k-rate and a-rate values are not zeroed out at the end of each control or sample pass. Rather, these values are CARRIED OVER from one pass to the next. For this reason, it is necessary to initialize these variables -- to set them to some initial value (most often to zero) -- with statements such as:

    gi1 init 0 ; set the initial value of variable gi1 to 0
    gkenv init 0
    gaudio init 0

    All global variables that will be used in an orchestra must be initialized immediately after the header, before the first instrument definition (just as variables normally are declared near the top of programs in languages such as C). These variables can then be accessed and operated upon by any instrument, through such operations as:

    ga1 = ga1 + audio

    This means: add the current value of local variable audio to the current value of global variable ga1, then assign the result as the new value of ga1.

    After all processing has been completed at the end of a sample calculation, or at the end of a control (k) rate cycle of sample calculations, it often is necessary to zero out global control and audio signals to prevent unintended accumulation or feedback, like this:

    gkenv = 0
    gaudio = 0

    The following orchestra file contains two instruments. The first instrument, which provides the source audio signals, is a reworking of the gbuzz-based orchestra file used in ex4-4. The output has been modified, so that the output of this instrument now is stereo rather than mono. Additionally, the instrument does not write its output to a hard disk soundfile, or send it directly to the DACs for realtime playback, but rather sends its outputs to a global reverberation instrument based on unit generator freeverb.

    
    ;  #############################################################
    ;  Orchestra file  ex5-10 : global instrument freeverb
    ; used to create sflib/x soundfiles ex5-10-1.wav and ex5-10-2.wav
    ;  #############################################################
    nchnls=2
    ; global variables:
    galeft init 0 ; global variable
    garight init 0 ; global variable
    
    instr 1
    kamp expseg 1,.2*p3,20000,.7*p3,8000,.3*p3,1    ; amplitude envelope
    iampmult = (p10 = 0 ? 1. : p10)
    kamp = kamp * iampmult
    
    ; glissando :
       ipitch1 = (p4 > 0 ? cpspch(p4) : abs(p4)) ; negative values = cps
       ipitch2 = (p5 > 0 ? cpspch(p5) : abs(p5)) ; negative values = cps
    kgliss expseg ipitch1,.2*p3,ipitch1,.6*p3,ipitch2,.2*p3,ipitch2
    
    krenv expseg p8,.5*p3,p9, .5*p3,p8 ; kr envelope
    
    abuzz gbuzz kamp,kgliss,p7,p6,krenv,1 
afilt tone abuzz,1500            ; filter out spurious high frequencies
    ;  outs sqrt(p11) * afilt, (sqrt(1. - p11) * afilt)
    galeft = galeft + (sqrt(p11) * afilt)
    garight = garight + (sqrt(1. - p11) * afilt)
    endin
    ; -----  global reverberation instrument ------------
    instr 99
       iroomsize = p4
       ihifreqdamp=p6
       iwet = p7
       iglobalamp = (p5 = 0 ? 1. : p5)
    denorm galeft, garight ; guard against denormals on Intel processors
    aoutL, aoutR freeverb galeft, garight, iroomsize, ihifreqdamp, sr
      aoutL = aoutL * iglobalamp
      aoutR = aoutR * iglobalamp
      galeft = galeft * iglobalamp
      garight = garight * iglobalamp
    outs1 (iwet * aoutL) + (( 1. - iwet) * galeft) ; left channel output
    outs2 (iwet * aoutR) + (( 1. - iwet) * garight) ; right channel output
       galeft=0
       garight=0
    endin
    --------------------------------------

    The denorm opcode immediately above freeverb in the global instrument addresses a long-standing problem on Intel processors, which can slow to a crawl when dealing with very small (denormal) numbers such as those produced near the end of a reverberant fade-out. denorm mixes a tiny amount of noise into low level signals, which generally eliminates the problem. In this case, noise is added to both of the global audio signals, galeft and garight.

    Two very similar scores are provided for this instrument. Example score ex5-10-1 adds just a touch of reverberation to the output of the gbuzz instrument. p7 in instr 99 (the global reverb instrument) sets the wet/dry mix to 60% direct (dry) signal and 40% reverberant signal. p6 in the score for the reverberant instrument sets kHFdamp to .6, which will cause higher frequencies to decay rather quickly. (See the discussion of the kHFdamp argument to freeverb above.)

    
      < ECMC Csound Library Tutorial score11 input file >>  ex5-10-1 << : 
       < Score11 file used to create Eastman Csound Tutorial soundfile example
     < /sflib/x/ex5-10-1.wav
       < source sounds from ex4-4 gbuzz sub-audio fundamental and glissando
    * f1 0 2048 9 1 1 90;          < cosine function required by gbuzz
    
    i1 0 0 4;   < gbuzz instrument -- creates source sounds
    p3 nu 2/2.5/3./3.5;
    du nu 307./306/305.5/305;
p4 nu -16/-8.5/-53/-22;           < 1st fundamental freq.
p5 nu -14.5/-9/-49/-37;          < 2nd fundamental freq.
    
    p6 nu 3/42/13/7;              < lowest harmonic (1=fundamental)
    p7 nu 40/10/40/12;               < number of harmonics
    
    p8 nu .5/.8/.3/ 1.2;               < kr1 (amplitude multiplier) 1 (start & end)
    p9 nu .9/1.4/.9/.4;               < kr2 (amplitude multiplier) 1 (middle)
    p10  nu 1.2/1./.7/1.;        < amp. multiplier
    p11 nu .2/.6/.9/.3;              < spatial placement 0 = L, 1. = R
    end;
    < -   -   -   -   -   -   -   -   -   -   -   -
    i99 0 0 1;   < global reverberation instrument
    p3 13.3 ; 
    p4 .6;        < room size, 0 (very small) to 1, (very large)
    p5            < global amplitude multiplier
     < fairly dry mix (p7 = .4), high freqs. decay fairly quickly (p6 = .6)
    p6  .6;       < high freq. damping, 0 (very bright) to 1. ( dull)
p7  .4;       < wet/dry mix: 0 = all dry, 1. = all wet
    end;
    -------------------------------------------------

    Our second score for orchestra file ex5-10, score file ex5-10-2, is nearly identical to the score file above, differing only in the values for p6 and p7 in the global instrument. This time the wet/dry mix is set to 70% wet, 30% dry, and a low p6 value of .2 causes high frequencies to ring much longer than in the previous score.

    
      < ECMC Csound Library Tutorial score11 input file >>  ex5-10-2 << : 
       < Score11 file used to create Eastman Csound Tutorial soundfile example
     < /sflib/x/ex5-10-2.wav
       < source sounds from ex4-4 gbuzz sub-audio fundamental and glissando
    * f1 0 2048 9 1 1 90;          < cosine function required by gbuzz
    
    i1 0 0 4;   < gbuzz instrument -- creates source sounds
    p3 nu 2/2.5/3./3.5;
    du nu 307./306/305.5/305;
p4 nu -16/-8.5/-53/-22;           < 1st fundamental freq.
p5 nu -14.5/-9/-49/-37;          < 2nd fundamental freq.
    p6 nu 3/42/13/7;              < lowest harmonic (1=fundamental)
    p7 nu 40/10/40/12;               < number of harmonics
    
    p8 nu .5/.8/.3/ 1.2;               < kr1 (amplitude multiplier) 1 (start & end)
    p9 nu .9/1.4/.9/.4;               < kr2 (amplitude multiplier) 1 (middle)
    p10  nu 1.2/1./.7/1.;        < amp. multiplier
    p11 nu .2/.6/.9/.3;              < spatial placement 0 = L, 1. = R
    end;
    < ---------------------------------------------------
    i99 0 0 1;   < global reverberation instrument
    p3 13.3 ; 
    p4 .6;        < room size, 0 (very small) to 1, (very large)
    p5            < global amplitude multiplier 
 < wet mix (p7 = .7), high freqs. decay slowly (p6 = .2)
    p6  .2;       < high freq. damping, 0 (very bright) to 1. ( dull)
p7  .7;      < wet/dry mix: 0 = all dry, 1. = all wet
    end;
    ------------------------------------------------

    Often, when learning or using a powerful but potentially complex sound synthesis or processing language such as Csound, Pure Data or SuperCollider, it is necessary or more efficient to break a complex task into a series of manageable stages, and to make sure that each stage or module works, and is under your control, before adding more processing operations. If you try to do all of these stages at once, you may get a slew of error messages and not be sure where to begin to correct all of your problems. Part of designing synthesis algorithms is knowing how to break down multi-stage tasks into workable modules. The final two orc/sco examples in this chapter will provide an example.

    I would like to modify and extend the gbuzz instrument in example ex4-4, changing it from mono output to 4 channel quad (not ambisonic B format, but traditional cinema-style quad output). Additionally, I would like to add a delay line with feedback to create echoes, and to distribute these echoes through the four channel output. We will break this potentially complicated task into two stages:

    1. In ex5-11 we will modify the original ex4-4 instrument for quad playback, without delay line echos.
    2. Once this is working, we will add a global instrument to produce the echoes.
    Quad output

    The quad audio channels and four Genelec loudspeakers in the Linux sound room are numbered as two stereo pairs:

         channel 1: left front speaker      channel 2: right front speaker
         channel 3: left rear speaker       channel 4: right rear speaker

    To modify our orchestra for quad playback, one of the first things we need to do is to devise a way to specify quad sound localization points on a simple numerical scale. There are several methods we could use to do this. The method we will employ here is to think of the four speakers as the corners of a square, with sound locations, specified by values ranging from 0 to 1.0, falling along four imaginary perimeter lines connecting the four speakers. Zero (0) will represent the left front speaker; .25 (25% of the way along the perimeter) will represent the right front speaker; .5 will represent the right rear speaker and .75 the left rear speaker; and 1.0 again represents the left front speaker. The four speakers are denoted by an X in this diagram:

      0,1.0     .125     .25
         X---------------X
         |    front      |
         |               |
    .875 |   Listeners   | .375
         |               |
         |    rear       |
         X---------------X
    .75     .625      .5

    Note that we will try to position sounds only along the perimeter of the square and not inside the square, which is rarely successful with standard quad processing. (The limitations of cinema-style multichannel playback are discussed in section 7.2 of the ECMC Users' Guide.) Now, with a single p-field (p12 in the score files for ex5-11 and ex5-12 below), we can specify the quad perimeter localization for each sound with a numerical value between 0 and 1.0. (p11 is not used in orchestra or score file ex5-11. We are saving this p-field to specify the ratio of direct to delayed signal when we add code for a delay line in ex5-12, which follows.)

    In the orchestra and score files for ex5-11 below, the Csound code and score lines added to ex4-4 to implement quad spatialization are shown in bold font:

    ;  #############################################################
    ;  Orchestra file  ex5-11 : reworking of ex4-4 gbuzz orchestra
    ; for quad output
    ;  #############################################################
    nchnls=4
    
    instr 1
    kamp expseg 1,.2*p3,20000,.7*p3,8000,.3*p3,1    ; amplitude envelope
    iampmult = (p10 = 0 ? 1. : p10)
    kamp = kamp * iampmult
    
    ; glissando :
       ipitch1 = (p4 > 0 ? cpspch(p4) : abs(p4)) ; negative values = cps
       ipitch2 = (p5 > 0 ? cpspch(p5) : abs(p5)) ; negative values = cps
    kgliss expseg ipitch1,.2*p3,ipitch1,.6*p3,ipitch2,.2*p3,ipitch2
    
    krenv expseg p8,.5*p3,p9, .5*p3,p8 ; kr envelope
    
    abuzz gbuzz kamp,kgliss,p7,p6,krenv,1 
afilt tone abuzz,1500            ; filter out spurious high frequencies
 ; quad spatial placement ----------------------------------------
 ; audio signal spatial localization to four output channels:
    if ((p12 >= 0) && (p12 <= .25))  then   ; p12 is btw 0 & .25
      ileftfront = sqrt((.25 - p12) * 4)
      irightfront = sqrt(p12 * 4)
         print p2, p12, ileftfront, irightfront
       outq1 ileftfront * afilt
       outq2  irightfront * afilt
    elseif ((p12 > .25) && (p12 <= .5))  then   ; p12 is btw  .25 & .5
      irightfront = sqrt((.5 - p12) * 4)
      irightrear = sqrt(1. - irightfront)
          print p2, p12 , irightfront , irightrear
      outq2 irightfront * afilt 
  outq4 irightrear * afilt
    elseif ((p12 > .5) && (p12 <= .75))  then   ; p12 is btw  .5 & .75
      irightrear = sqrt((.75 - p12) * 4)
      ileftrear = sqrt(1. - irightrear)
          print p2, p12 , irightrear , ileftrear
  outq4 irightrear * afilt
  outq3 ileftrear * afilt
    elseif ((p12 > .75) && (p12 <= 1.))  then   ; p12 is btw  .75 & 1.
      ileftrear = sqrt((1. - p12) * 4)
      ileftfront = sqrt(1. - ileftrear)
          print p2, p12 , ileftrear , ileftfront
  outq3 ileftrear * afilt
      outq1 ileftfront * afilt
    else  ; bad  argument, either > 1. or negative
      print p2, p12
      printks "ERROR: Invalid value for p12, which must be between 0 and 1.0. Aborting this note.", 1
      turnoff
    endif
    
    endin
    ------------------------------------------------------
      < ECMC Csound Library Tutorial score11 input file >>  ex5-11 << : 
       < quadraphonic remake of ex4-4 gbuzz orchestra
    * f1 0 8192 9 1 1 90;          < cosine function required by gbuzz
    
    i1 0 0  4;  < gbuzz instrument -- creates source sounds
    p3 nu 2/2.5/3./3.5;
    du  nu 307./306/305.5/305;
p4 nu -16/-8.5/-53/-22;           < 1st fundamental freq.
p5 nu -14.5/-9/-49/-37;          < 2nd fundamental freq.
    
    p6 nu 3/42/13/7;              < lowest harmonic (1=fundamental)
    p7 nu 40/10/40/12;               < number of harmonics
    
    p8 nu .5/.8/.3/ 1.2;               < kr1 (amplitude multiplier) 1 (start & end)
    p9 nu .9/1.4/.9/.4;               < kr2 (amplitude multiplier) 1 (middle)
    p10  nu 1.2/1./.7/1.;        < amp. multiplier
    < p11  not used
 < p12 = quad spatial placement, clockwise around the perimeter, 0 to 1.0:
 < 0 & 1. = left front, .25 = right front, .5 = right rear, .75 = left rear
    p12 nu .1/.3/.6/.85;              
    end;
    -------------------------------------------------------

    The intended spatial localizations of the four output notes are:

         p12 nu .1/.3/.6/.85;
    1. p12 = .1 : front, slightly to the left of center (.125 would be center placement)
    2. p12 = .3 : right, closer to the front speaker (right front speaker = .25, right rear speaker = .5)
    3. p12 = .6 : rear, closer to the right rear speaker (right rear speaker = .5, left rear speaker = .75)
    4. p12 = .85 : left, slightly closer to the rear speaker (left rear speaker = .75, left front speaker = 1.0)

    Within the orchestra file, the audio signal for each note is distributed to a pair of audio channels and speakers. An if...else construction determines which pair of channels and speakers must be used, and applies the square root pan formula to boost the level of signals midway between speakers. The quad opcodes outq1 (which routes audio signals to quad channel 1), outq2 (which routes audio signals to quad channel 2), outq3 and outq4 route the required percentages of the audio signals to the appropriate audio channels.

    Global quad delay line instrument

    Example orc/sco pair ex5-12 adds a delay line (unit generator delay) with feedback to the preceding orchestra and score files. The delay opcode is mono in, mono out. If our orchestra were mono, only one delay opcode would be needed in our instrument. If the orchestra were stereo, two delay opcodes would be necessary, one for each audio channel, in order to maintain stereo positioning for the echoes. For our current quad instrument, four delay unit generators are required for full quad imaging of echoes of the source signals.

    Csound code and score p-fields that have been added to our previous orchestra and score in order to implement the delay lines and feedback and their arguments are shown here in bold font:

    
    ;  #############################################################
    ;  Orchestra file  ex5-12 : quad global delay line w/ feedback
    ; used to create sflib/x soundfiles ex5-12
    ;  #############################################################
    nchnls=4
    gadelfrontleft init 0 ; global delay variable for front left speaker
    gadelfrontright init 0 ; global delay variable for front right speaker
    gadelrearright init 0 ; global delay variable for rear right speaker
    gadelrearleft init 0 ; global delay variable for rear left speaker
    gadelfeedback1 init 0 ; global feedback delay signal front left
    gadelfeedback2 init 0 ; global feedback delay signal front right
    gadelfeedback3 init 0 ; global feedback delay signal rear right
    gadelfeedback4 init 0 ; global feedback delay signal rear left
    
    instr 1   ; source gbuzz instrument
    kamp expseg 1,.2*p3,20000,.7*p3,8000,.3*p3,1    ; amplitude envelope
    iampmult = (p10 = 0 ? 1. : p10)
    kamp = kamp * iampmult
    
    ; glissando :
       ipitch1 = (p4 > 0 ? cpspch(p4) : abs(p4)) ; negative values = cps
       ipitch2 = (p5 > 0 ? cpspch(p5) : abs(p5)) ; negative values = cps
    kgliss expseg ipitch1,.2*p3,ipitch1,.6*p3,ipitch2,.2*p3,ipitch2
    
    krenv expseg p8,.5*p3,p9, .5*p3,p8 ; kr envelope
    
    abuzz gbuzz kamp,kgliss,p7,p6,krenv,1 
afilt tone abuzz,1500            ; filter out spurious high frequencies
     ; quad spatial placement ----------------------------------------
    idelay=p11  ; % delayed signal
    idirect = (1. - p11) ; % direct signal
    adirect = idirect * afilt 
    adelay = idelay * afilt
 ; direct signal spatial localization:
    if ((p12 >= 0) && (p12 <= .25))  then   ; p12 is btw 0 & .25
      ileftfront = sqrt((.25 - p12) * 4)
      irightfront = sqrt(p12 * 4)
         print p2, p12, ileftfront, irightfront
       outq1 ileftfront * adirect ; output direct signal, left front
       outq2  irightfront * adirect ; output direct signal, right front
    elseif ((p12 > .25) && (p12 <= .5))  then   ; p12 is btw  .25 & .5
      irightfront = sqrt((.5 - p12) * 4)
      irightrear = sqrt(1. - irightfront)
          print p2, p12 , irightfront , irightrear
      outq2 irightfront * adirect 
  outq4 irightrear * adirect
    elseif ((p12 > .5) && (p12 <= .75))  then   ; p12 is btw  .5 & .75
      irightrear = sqrt((.75 - p12) * 4)
      ileftrear = sqrt(1. - irightrear)
          print p2, p12 , irightrear , ileftrear
  outq4 irightrear * adirect
  outq3 ileftrear * adirect
    elseif ((p12 > .75) && (p12 <= 1.))  then   ; p12 is btw  .75 & 1.
      ileftrear = sqrt((1. - p12) * 4)
      ileftfront = sqrt(1. - ileftrear)
          print p2, p12 , ileftrear , ileftfront
  outq3 ileftrear * adirect
      outq1 ileftfront * adirect
    else  ; bad  p12 argument, either > 1. or negative
      print p2, p12
      printks "ERROR: Invalid value for p12, which must be between 0 and 1.0. Aborting this note.", 1
      turnoff
    endif
    ; --------------  delay line  quad location ---------
idelspace = frac(p12 + .5) ; shift direct signal spatial placement by 180 degrees
    if ((idelspace >= 0) && (idelspace <= .25))  then   ; idelspace is btw 0 & .25
      idelleftfront = sqrt((.25 - idelspace) * 4)
      idelrightfront = sqrt(idelspace * 4)
         print  idelspace, idelleftfront, idelrightfront
       gadelfrontleft = gadelfrontleft + (idelleftfront * adelay)
       gadelfrontright = gadelfrontright + (idelrightfront * adelay)
    elseif ((idelspace > .25) && (idelspace <= .5))  then   ; idelspace is btw  .25 & .5
      idelrightfront = sqrt((.5 - idelspace) * 4)
      idelrightrear = sqrt(1. - idelrightfront)
          print  idelspace , idelrightfront , idelrightrear
      gadelfrontright = gadelfrontright + (idelrightfront * adelay)
      gadelrearright = gadelrearright + (idelrightrear * adelay)
    elseif ((idelspace > .5) && (idelspace <= .75))  then   ; idelspace is btw  .5 & .75
      idelrightrear = sqrt((.75 - idelspace) * 4)
      idelleftrear = sqrt(1. - idelrightrear)
          print  idelspace , idelrightrear , idelleftrear
      gadelrearright = gadelrearright + (idelrightrear * adelay)
      gadelrearleft = gadelrearleft + (idelleftrear * adelay)
    elseif ((idelspace > .75) && (idelspace <= 1.))  then   ; idelspace is btw  .75 & 1.
      idelleftrear = sqrt((1. - idelspace) * 4)
      idelleftfront = sqrt(1. - idelleftrear)
          print  idelspace , idelleftrear , idelleftfront
      gadelrearleft = gadelrearleft + (idelleftrear * adelay)
      gadelfrontleft = gadelfrontleft + (idelleftfront * adelay)
    endif
    endin
    ; -----  global delay line instrument ------------
    instr 99
    ideltime = p4
    ifeedback=p5
    adelayfrontleft  delay gadelfrontleft + gadelfeedback1, ideltime ; 4 delay lines, one
    adelayfrontright  delay gadelfrontright + gadelfeedback2, ideltime ; for each channel
    adelayrearright  delay gadelrearright + gadelfeedback3, ideltime
    adelayrearleft  delay gadelrearleft + gadelfeedback4, ideltime
     ; delay line outputs for each of the 4 channels:
    ;  -------global feedback signal for each of the 4 channels: ----
    ; optional fade-out  of feedback
    kfadeout init 1
    if (p6 != 0 ) then
       kfadeout expseg 1,p3 - p6,1, p6, .001
    endif
    gadelfeedback1 = kfadeout * ifeedback * adelayfrontleft
    gadelfeedback2 = kfadeout * ifeedback * adelayfrontright
    gadelfeedback3 = kfadeout * ifeedback * adelayrearright
    gadelfeedback4 = kfadeout * ifeedback * adelayrearleft
    ; quad output:
    outq1 adelayfrontleft 
    outq2 adelayfrontright
    outq3 adelayrearleft 
    outq4 adelayrearright 
    ; --- clear global signals from gbuzz instrument into delay instrument:
    gadelfrontleft = 0 ; clear signal for front left speaker
    gadelfrontright = 0 ; clear signal for front right speaker
    gadelrearright = 0 ; clear signal for rear right speaker
gadelrearleft = 0 ; clear signal for rear left speaker
    ; note: do NOT clear feedback variables gadelfeedback1 thru gadelfeedback4
    endin
    -----------------------------------------------------------
      < ECMC Csound Library Tutorial score11 input file >>  ex5-12 << : 
       < source sounds from ex4-4 gbuzz sub-audio fundamental and glissando
     < quad global delay instrument with feedback
    * f1 0 8192 9 1 1 90;          < cosine function required by gbuzz
    
    i1 0 0  4;  < gbuzz instrument -- creates source sounds
    p3 nu 2/2.5/3./3.5;
    du  nu 307./306/305.5/305;
p4 nu -16/-8.5/-53/-22;           < 1st fundamental freq.
p5 nu -14.5/-9/-49/-37;          < 2nd fundamental freq.
    
    p6 nu 3/42/13/7;              < lowest harmonic (1=fundamental)
    p7 nu 40/10/40/12;               < number of harmonics
    
    p8 nu .5/.8/.3/ 1.2;               < kr1 (amplitude multiplier) 1 (start & end)
    p9 nu .9/1.4/.9/.4;               < kr2 (amplitude multiplier) 1 (middle)
    p11 nu .55/.4/.55/.4;  <  delay/direct mix, 0 to 1.: % delayed signal
 < p12 = quad spatial placement, clockwise around the perimeter, 0 to 1.0:
 < 0 & 1. = left front, .25 = right front, .5 = right rear, .75 = left rear
    p12 nu .9/.6/.4/.1;              
    end;
    < -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
    i99 0 0 1;   < global instrument for quad delay & feedback
    p3 14.8 ;  < allow time for feedback of last note
    p4 1.;           < delay time  ; if 0 bypassed, no delay 
p5 .54;          < delay feedback % 0 to 1., usually .3-.8
    p6 1.;           < optional fade-out time for feedback
    end;
    --------------------------------------------------

    Because we have chosen to make this a quad orchestra, our orchestra file looks a good deal more complicated than it otherwise would, because many of the operations we need to perform must be done four times, once for each audio channel and delay line. We could have simplified this orchestra by using the specialized 4-channel delay line opcode vdelayxq rather than opcode delay, but our discussion and usage example of delay will be of more use to you in constructing delay lines for mono and stereo orchestra files.
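
    For reference, here is a rough, untested sketch of how vdelayxq might replace the four delay opcodes in instr 99. As I read the manual, vdelayxq takes four input signals, an a-rate delay time, a maximum delay time and an interpolation window size (in samples); the feedback and variable-clearing code of instr 99 would remain unchanged. Check the manual entry before relying on this:

    adeltime = ideltime                 ; constant a-rate copy of the delay time
    a1 = gadelfrontleft  + gadelfeedback1
    a2 = gadelfrontright + gadelfeedback2
    a3 = gadelrearright  + gadelfeedback3
    a4 = gadelrearleft   + gadelfeedback4
    adelayfrontleft, adelayfrontright, adelayrearright, adelayrearleft vdelayxq a1, a2, a3, a4, adeltime, ideltime + .1, 4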

    At the top of the orchestra, as required by Csound, all global variables are declared. As is often the case, they are initialized to a value of 0. All eight of the global variables run at the audio (sampling) rate and create "channels" for audio signals. Four of these variables, gadelfrontleft through gadelrearleft, are used to pass signals to be delayed from the source instrument to the global delay instrument. The other four global variables, gadelfeedback1 through gadelfeedback4, are used for feedback created by, and fed back into, the four delay lines.

    p11 in the score determines the amount of signal to be sent to the global delay instrument. The complementary value (1.0 - p11) is an amplitude multiplier for the direct signal. Note that in the score we have set p11 rather high for the first and third notes:

     score file:    p11 nu .55/.4/.55/.4;  <  delay/direct mix, 0 to 1.: % delayed signal
    orchestra file:   idelay=p11  ; % delayed signal
                       idirect = (1. - p11) ; % direct signal
                       adirect = idirect * afilt
                       adelay = idelay * afilt 
    For the first and third notes, the first echo will be louder (55% of the total amplitude for the note) than the direct signal (45% of the total amplitude).

    The delay time, specified in p4 of the global instrument score, is set to one second, and the feedback % is set to 54 % in p5:

         p4 1.;           < delay time  ; if 0 bypassed, no delay
         p5  .54;         < delay feedback % 0 to 1., usually .3-.8
    And in the orchestra file:
         ideltime = p4
         ifeedback=p5

    We have included an optional fade-out envelope for the feedback from the delay lines, controlled by score p-field 6, which sets the fade-out time:

       kfadeout init 1
       if (p6 != 0 ) then
          kfadeout expseg 1,p3 - p6,1, p6, .001
       endif
   gadelfeedback1 = kfadeout * ifeedback * adelayfrontleft ; feedback for channel 1
   gadelfeedback2 = kfadeout * ifeedback * adelayfrontright ; feedback for channel 2
   gadelfeedback3 = kfadeout * ifeedback * adelayrearright ; feedback for channel 3
   gadelfeedback4 = kfadeout * ifeedback * adelayrearleft ; feedback for channel 4
    This fade-out envelope is not absolutely necessary. However, with fairly high feedback ratios, the feedback can last a long time after the end of the last direct note before it finally decays to zero. There are occasions, as with this score, when we may wish to cut short this long feedback decay.

    In addition to unit generator delay, which we have used here, Csound provides many other delay line unit generators. Some of these opcodes (such as vdelay3) provide for varying the delay time, while others (such as multitap) enable us to tap into a delay line at 2, 3 or more points.
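
    For instance, here is a minimal, untested multitap sketch that produces three progressively softer echoes from a single delay line; the soundin link number, tap times and gains are arbitrary values chosen only for illustration:

    instr 30
    ain    soundin  13                              ; dry input signal
    aecho  multitap ain, .5, .7, 1.1, .45, 1.8, .25 ; taps at .5, 1.1 and 1.8 seconds
           out      ain + aecho                     ; dry signal plus echoes
    endin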


    Assignment
    1. Try out some of the following unit generators: tonex, atonex, resonx, balance, comb, alpass, freeverb and delay
    2. Learn how to construct global instruments.

    Eastman Csound Tutorial: End of Chapter 5

    TOP of this chapter -- NEXT CHAPTER (Chapter 6) -- Table of Contents CHAPTER 1 -- CHAPTER 2 -- CHAPTER 3 -- CHAPTER 4 -- CHAPTER 5 -- CHAPTER 6 APPENDIX 1 -- APPENDIX 2