Eastman Csound Tutorial

Chapter 1: Basics

1. Introduction

This tutorial provides an introduction to Csound 5, the current version (introduced in 2005) of the Csound programming language. While reading this tutorial you should have handy access to an online or hardcopy printout of the Csound 5 Reference Manual, and should consult the pertinent sections of this manual [indicated within square brackets, and with HTML links in the online version of this tutorial] while reading the tutorial material covered here.

Reading documents online rather than on the reassuring, even sensuous medium of paper can lead to eye strain and to a truly pitiable geekiness. Here, however, I actually recommend that you use the online version of this tutorial (located at http://ecmc.rochester.edu/ecmc/docs/csound/allan/) rather than the hardcopy version, because the HTML links provided in the online version to pertinent sections of the Csound 5 Reference Manual enable you to open these passages quickly in tabs and to compare the information provided within this tutorial and within the Reference Manual. Locating these same passages on your own, either within an online or hardcopy printout of the labyrinthine Reference Manual, can be a wearisome and aggravating process.

This tutorial covers only a small fraction of the audio signal generating and processing resources that are available within Csound. However, the resources we will discuss are among the most fundamental elements of computer music generation. Additionally, they are quite powerful and extensible (you can get a lot of mileage out of these basic building blocks), and also will be used to illustrate the logic and syntactical conventions of Csound. An understanding of these conventions will enable you to begin constructing your own instrument algorithms, and to figure out how to use the myriad of resources in Csound that are not covered here.

Distributions of Csound periodically are updated by a volunteer team of Csound users currently led by John ffitch, and we install new versions in the ECMC studios as they become available. New versions often contain a few bugs. ECMC users are asked to advise one of our staff members if you stumble across any undocumented problems.

Computer systems make music by running audio software loaded into RAM that computes samples -- numbers that represent the amplitude of a sound at evenly spaced time (sampling rate) intervals. Common sampling rates in use today are 44100, 48000, 96000 and, less often, 192000 or 88200. If the computer is fast enough, and the signal generating algorithms are not overly complex, and not too many simultaneous notes are called for, this can be done in real time, and the samples can be passed directly to digital-to-analog converters for immediate audition. However, software synthesis is inherently slower than hardware synthesis, and sometimes the time required to compute the samples exceeds the duration of the sounds. In such cases, the samples are first computed and written to a disk file, which can be played at the completion of the compile job.
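The idea of samples taken at evenly spaced time intervals can be sketched in a few lines of Python. (This is purely an illustration of the arithmetic, not part of Csound; the function name is our own.)

```python
import math

def sine_samples(freq, sr, n):
    """Return n amplitude samples of a sine wave of the given frequency,
    taken at evenly spaced intervals of 1/sr seconds."""
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

# Ten samples of a 440 Hz tone at a 44100 Hz sampling rate.
samples = sine_samples(440, 44100, 10)
```

At a sampling rate of 44100, one second of mono sound requires 44100 such numbers; a stereo soundfile requires twice as many.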

Composers and (especially) performers of music have benefitted enormously from the increases in computing speed and in memory and data storage capacity of computer systems over the past few decades, and our resulting ability to compute (perform) music in real time. However, one must also bear in mind that acoustical sounds are inherently complex. A single tone played by a violinist involves continuous subtle (or not-so-subtle) variations in timbre, amplitude and pitch. (If you want to hear acoustically simple sounds, listen to the ring tones on your cell phone.) Synthesizing sounds which have similar moment-by-moment variations -- sounds that have a sense of "physicality," or "richness" or "life" and "expressivity," and which sound and feel GOOD -- is therefore a complex and at times tedious task.

Commercial sound synthesis applications (and their open source or freeware/shareware clones) that seek to appeal to the widest possible audience of musicians often have two paramount (and related) design criteria:

  1. real-time performance ("Musicians want and need immediate feedback. They want to play music, not program it.")
  2. ease of use ("Musicians want to spend their time creating and playing music, and not become bogged down in the complexities of computer programming and acoustics")

Consequently, compromises in audio quality and subtlety, and in extensibility (the range of expressive nuances possible on the software "instruments") often are made in order to realize or facilitate these two principal goals. Users often are provided with "factory pre-sets" (sometimes advertised as "killer sounds," although upon discovering the limitations and frequent "sameness" of such sounds after a period of use the only thing they may succeed in killing is the user's interest in music and in life). Many audio synthesis applications attempt to "hide the complexity" of sound synthesis behind a reassuring interface of exploding menu choices, dials and sliders and the use of MIDI keyboards and continuous controllers.

It is easy to pillory audio applications designed for ease of use. One does not tell a beginning violin student that she can make music more quickly and easily, and have more fun, by limiting herself to the use of pizzicato played in first position. And we have all heard hideous MIDI realizations of orchestral and chamber works. Such audio applications have provided an introductory gateway into the resources of electroacoustic music for many musicians, and so I do not believe that their utility or integrity should be demeaned. Ultimately, however, they can be very limiting, and can lead one to work in a very small musical world rather than in one with virtually limitless possibilities.

Some music software has different design criteria. With programming languages such as Csound, SuperCollider 3 and Pure Data and some of the best commercial synthesis applications such as Reaktor and MSP, extensibility and audio quality are more important design goals than ease of use. The synthesis and sound processing architecture is not fixed or limited to a few basic synthesis models. Rather, users can construct and implement their own signal generating, processing and "effects" algorithms by "patching together" various procedures from an extensive library of available utility programs and sub-routines, often called unit generators (or opcodes in Csound), using only those procedures that are needed, and calling as many as will fit in computer memory at once. Thus, if we want or need 500 oscillators or filters in order to create a very particular or precise type of timbre, and if our computer has sufficient RAM, we can create them with 500 calls to an oscillator or filter unit generator. The values we supply to these oscillators or filters can be arbitrarily complex, involving a mixture of many simultaneous control signals created by other unit generators.

The tradeoff, of course, is that these more powerful "toolkit" applications all have notoriously steep learning curves. You cannot just "play" them out of the box, unless you limit yourself to example synthesis algorithms provided by the author, vendor or other users of the application. But to do so is to sacrifice the greatest strength of the application: extensibility, or the ability to devise genuinely new and unique sounds and approaches to sound synthesis and thus, by extension, to musical composition and performance. The goal here is often quite different, one in which the defining, unique structural logic and expressive nuances of a musical work grow organically out of its unique, finely tuned aural qualities.

To achieve this ability, one must first step back -- sometimes WAY back -- much as a beginning violinist must do, and master some of the basic technical resources and challenges inherent to violin playing or sound synthesis. While gaining this mastery, one's initial attempts may sound inept, "unmusical" or "ugly as a pig's butt," and the process may be tedious and frustrating. To someone who is already an accomplished composer for acoustical instruments, this "bottom-up" approach to sound synthesis initially may be daunting, uninviting or dispiriting, and for some ultimately not worth the effort. But to other musicians, being compelled to "get into sound" in a very elemental or rudimentary and meticulous manner -- no matter how bad it sounds at first -- ultimately can be liberating, and inform and broaden one's approach to acoustic as well as to electroacoustic music.

SuperCollider, Csound, Pure Data, Reaktor and MSP all have unique design philosophies and implementations, and thus provide users with different strengths and challenges or difficulties. With each of these applications, some types of musical operations can be accomplished fairly easily while others require more effort and "fussing around." Each of these applications thus tends to predispose the user toward certain types of musical exploration.

For good and for bad, Reaktor and, to a lesser degree, MSP are heavily dependent upon MIDI. To use Reaktor musically, you generally must run it as a plug-in within a host digital audio workstation or sequencing application such as Cubase or Logic. PD and MAX/MSP are well suited to the creation of "formalized" music processes, and especially of algorithmic compositional processes, but provide a smaller range of synthesis tools and possibilities. The emphasis often is on how sounds are being used and manipulated, rather than on the sensuous or memorable quality of the sounds themselves. SuperCollider, which is less tied to MIDI, also provides especially powerful tools for algorithmic composition, encourages high level programming (creating processes that control other processes) and dynamic programming, in which the user (or a user-written program) introduces changes in algorithms while they are running.

Csound is the oldest of these synthesis applications, dating back to the 1980s, and provides by far the largest and most varied collection of unit generator subroutines (over 1200), but also, in some ways, the most unforgiving or least "friendly" programming environment, similar, in several respects, to the language C. Composers with a particular interest in timbre often gravitate to Csound because it provides so many "hooks" (ways to modify sounds) as well as so many basic timbral building blocks.

Although various graphical front-ends to Csound are available, such as the Csound 5 GUI, Blue, Cecilia and Csound VST, Csound, like SuperCollider, is at heart a text-based rather than graphical programming environment. One creates synthesis algorithms by typing into files with a text editor rather than by mousing on graphical widgets. Initially, graphical algorithms, which look like flow charts, tend to be easier for users to comprehend. However, as synthesis algorithms become increasingly lengthy or complex, the immediacy of graphical displays can quickly evaporate and, instead, present one with a sea of spaghetti connections spread over several windows that can become very difficult to follow.

PD, SuperCollider and Csound are distributed freely according to the terms of the GPL (General Public License) and are platform-independent, with versions for Macintosh, Linux and Windows systems. (Csound can be downloaded at http://csounds.com/downloads.) In fact, these three applications tend to bring together Linux, Windows and Mac users in common forums to share information without the mindless "religious" wars ("Your platform stinks because ...") one too often encounters in internet "help" sites.

In this tutorial, Csound will provide us with an avenue to study and put to creative use music synthesis and sound processing procedures apart from the specifics and limitations of particular hardware and operating systems. Often, the knowledge gained from this process is directly applicable to other music synthesis and audio processing environments as well.

Our primary goal will be not to transform you into Csound programming wizards, but rather to use some of the tools of Csound to examine various types of computer music resources, procedures and aesthetic possibilities. Those already familiar with MIDI programming techniques or with synthesis procedures will find many familiar concepts within this tutorial. Many of the unit generators in Csound have hardware or software counterparts in synthesizers, samplers, mixing consoles, outboard gear as well as in a broad range of audio software applications. However, the implementation of these procedures presented here will likely be new, and may at times seem more complicated.

1.1. Orchestra and Score Files

Csound requires that the user supply two input files -- an orchestra file and a score file -- which, together, define a signal processing algorithm and all of the data (such as the pitch of each note in a melody) needed for the compilation of output samples.

A Csound orchestra file is a user-written computer program, written according to the syntactical conventions of the Csound "language," that defines one or more instruments - audio signal processing algorithms. This program provides the Csound compiler with a step-by-step series of instructions, and some of the necessary argument values, required for the computation of each output sample. Designing an instrument algorithm bears certain similarities to patch editing on a MIDI synth, except that we usually begin with a "blank page" (or from an "init" state), rather than with certain pre-defined operations determined by the hardware architecture of the synth.

Some instrument algorithms or modules (sub-routines within an instrument algorithm) generate audio signals from scratch, by sampling a synthetic waveform, such as a sinusoid, or else a digitized acoustic sound, such as a violin tone. Other instrument algorithms or modules process such signals, producing no sound themselves, but rather modifying audio signals from other sources, perhaps adding reverberation, echoes, spectral modifications (EQ) or other types of sound modification.

A score file generally provides values (such as pitch and duration) that vary from note to note. These argument variables for each note are specified in the form of parameter fields (p1, p2, p3 and so on). Additionally, the score file provides any required function definitions, from which Csound creates tables of numbers used to generate or process sounds. The numbers within a function table may represent an audio waveshape, such as a sinusoid or a sawtooth wave, or a digitized acoustic sound. Other types of tables are used to represent "control" or "performance" elements, such as the amplitude envelope, or the time varying vibrato shape or width, within a tone. In still other cases a function table merely provides us with a convenient way to input a complete series of numbers in a single operation. We will return to score parameter fields and function definitions shortly.

Up to this point, ECMC users have been using the Eastman Csound Library instruments, for which score11 score templates have been provided. Throughout this tutorial, we will continue to use the score11 preprocessor to simplify the creation of our actual Csound score files. (Non-ECMC users can refer to the online Appendix, which includes Csound score file compilations for all of the score11 examples in this tutorial.) These Csound score files (called sout, short for "score output" file, by score11) look somewhat like the MIDI controller event list files produced by MIDI sequencers such as Logic and Cubase. When creating simple one or two note test score files for a new instrument algorithm, however, you might wish to bypass score11 and type in your Csound score file directly.

Csound also provides alternative ways to specify a series of notes, parameters (such as the delay time for a delay line) or "events" (such as whether a particular instrument subroutine will be executed or bypassed). With certain unit generators one can use real-time input from a MIDI keyboard and/or other MIDI controllers, or else standard type 0 MIDI files created with a sequencer application or an interactive program such as PD or MAX, to trigger and control Csound orchestras, and only a skeletal score file is required. Some advanced users write their own Csound score generating programs, or employ spreadsheet programs for particular purposes.

1.2. Orchestra file headers

[ See the discussions of Syntax of the orchestra and Orchestra Header Statements in the Csound reference manual ]

Every Csound orchestra must begin with a header, which establishes certain global values that are used by all instruments within the orchestra. Here is a sample Csound header:

sr = 44100
kr = 2205
ksmps = 20
nchnls = 1

1) The sr variable fixes the sampling rate (also called the audio-rate, or a-rate, in Csound). This determines how many samples per second will be calculated for each channel to represent sounds. Typical sampling rates are 44100, 48000 and, for higher audio resolution, 96000 or 88200. All instruments within our orchestra will compute audio samples at this rate.

For initial tests, however, where audio quality is not paramount, we might upon occasion choose to set the sampling rate to a lower value supported by our audio sound card, such as 32000 or 22050. This will compute more quickly and require only half as much disk space for output soundfiles. And if we run Csound in real time, sending the samples directly to the system DACs rather than writing them to a soundfile (by means of the ECMC csoundplay command, or its alias csp), lower sampling rates can enable us to employ more complex signal processing procedures, and to play more simultaneous "polyphonic" notes, before reaching the system throughput limitations and the sound begins to break up. On a powerful computer system such as madking we may rarely or never need to lower the sampling rate to avoid glitches in real-time playback, but on a lumbering old system this can sometimes be quite useful.

Remember, however, that the highest frequency that can be represented digitally is one half the sampling rate, which is called the Nyquist frequency. If we attempt to create a sound that includes partial frequencies higher than the Nyquist, these higher frequencies will alias, or "fold over," according to the formula :

sr - freq

Thus, with the sampling rate set to 22050, a frequency of 12000 hertz will actually be heard at 10050 hertz. And if we attempted to create a sine wave glissando between 20 Hz. and 22 kHz., the resulting pitch would rise from 20 hertz up to 11025 hertz, but would then glissando back down to 50 hertz.
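The sr - freq fold-over formula can be checked with a quick calculation. (Python is used here purely as a calculator; the function name is our own, and this simple form of the formula applies only to frequencies between the Nyquist and the sampling rate, as in the examples from the text.)

```python
def folded_frequency(freq, sr):
    """Frequency actually heard when freq lies between the
    Nyquist frequency (sr/2) and the sampling rate sr."""
    nyquist = sr / 2
    if freq <= nyquist:
        return freq      # no aliasing below the Nyquist
    return sr - freq     # fold-over ("aliasing")

print(folded_frequency(12000, 22050))   # 10050, as in the text
print(folded_frequency(22000, 22050))   # 50
```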

Remember, too, that the "smoothing" filters built into all DACs, as well as the "anti-aliasing" low pass filters within ADCs, also serve to attenuate frequencies above approximately 3/8 of the sampling rate, reaching "total" attenuation (-60 dB) at around, or slightly below, the Nyquist frequency. With cheap converters, the rolloff is even less steep.

2) kr specifies a control rate. There are many types of subaudio control signals, such as vibrato and tremolo patterns, that do not need to be computed at the sampling rate to achieve a satisfactory resolution. To do so would waste processor cycles. Such control signals, which do not produce sound themselves, but rather provide time-varying amplitude, pitch or some other type of modification to an audio signal, are generated at the control rate we specify here. Our orchestra header above specifies a control rate (k-rate) of 2205 updates (computations) per second for all signals running at this rate. Control rates typically vary between about 500 and 5000, but for highest quality on a final production run sometimes can be pushed all the way up to 1/2 of the sampling rate, or even to the sampling rate. However, the k-rate must divide evenly into the sampling rate.

3) ksmps is an alternative way to specify the control rate, and specifies the ratio between the s-rate and the k-rate. Thus, in our example above, the ratio is 20 sample calculations for every calculation of control signals. In other words, every value computed at the control rate will be used for 20 successive audio samples, then updated. ksmps must be an integer.

It is not necessary, and, in fact, generally not desirable to specify both kr and ksmps in the orchestra header, as we have done here. More commonly, one specifies only one of these redundant arguments -- whichever you prefer.
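The constraint that these header values be mutually consistent -- sr = kr * ksmps, with ksmps a whole number -- can be verified mechanically. This sketch simply restates, in Python, the arithmetic relationship among the header values discussed above:

```python
sr = 44100     # sampling (audio) rate
kr = 2205      # control rate
ksmps = 20     # audio samples computed per control-rate update

# ksmps is simply the ratio of the two rates, and must be an integer,
# which is why the k-rate must divide evenly into the sampling rate.
assert sr == kr * ksmps
assert sr % kr == 0
```

Given any two of the three values, Csound can derive the third, which is why specifying both kr and ksmps is redundant.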

4) The final header variable, nchnls, determines the number of output audio channels. Use 1 for mono, 2 for stereo, 4 for quad and 8 for 8-channel, or 5 in the unlikely event that you want to create a five-channel output soundfile.

It is not absolutely necessary to include a header statement at the top of an orchestra file. If no header is included, Csound will use default values of sr=44100 and ksmps=10. When compiling a job with the Csound command, it also is possible to override the sr, kr and ksmps values specified within an orchestra file, perhaps changing the sampling rate from 44100 to 96000 (which would also require changing the kr or ksmps value).

1.3. Audio, Control and Initialization Rates

[ See the discussion of Constants and variables in the Csound reference manual, but do not worry about Global variables yet or anything below Variable Initialization. ]

Csound instrument algorithms are constructed by patching together various unit generators, each of which performs a particular type of mathematical operation. The Csound reference manual describes these unit generators and other features of the compiler in a manner designed for quick reference by experienced users. It is unlikely that you will want to take this elephantine manual along to the beach for a relaxing read. The main body of the manual, Part III. Reference, presents all of Csound's opcodes (unit generators) and operators (available mathematical operations) in an interminably long list. In this tutorial, we will look at a few of the more commonly used opcodes and mathematical operators. This will mean a lot of skipping around within the humongous reference manual.

Before we examine these unit generators, we need to clarify a few basic things about the syntax with which one writes lines of Csound code. Reprinted below are the definitions for four Csound unit generators as they appear in the reference manual. The first two unit generators (oscil) create basic oscillators. The two concluding lines (rand) create white noise generators.

  output   unit generator  required arguments          optional arguments
("result")  ("opcode")
   kr         oscil        kamp ,  kcps , ifn       [, iphs]
   ar         oscil        xamp ,  kcps , ifn       [, iphs]
   kr         rand         kamp                     [, iseed]
   ar         rand         xamp                     [, iseed]

An oscillator or white noise generator can run either at the audio rate, if this signal will be heard as a sound, or else at the control rate, if the signal will instead control (modify) some characteristic of an audio signal created by some other unit generator. We determine the output rate by the output name (or "result") we supply. If this name begins with a k, the unit generator will run at the control rate; if the name begins with an a, the operation will be computed at the audio rate. We can choose any name we wish for these output names, so long as they begin with a k or an a.

If our oscillator will be producing a vibrato signal that modifies the pitch produced by another oscillator, for example, we might call the result kvibrato, or perhaps kvib, or k1, or, if we are feeling recherché or pompous, kKxg6w. If our white noise generator is producing audible noise, we might call the output anoise, or asignal, or asig, or a2. It is recommended, however, that you use mnemonically meaningful "result" names, such as kvib or kvibrato, rather than nondescript signal labels such as k1, to help yourself recognize the function or purpose of each signal you create.

To the right of the unit generator name, the arguments (input values) that it needs to perform its computations are listed. Each argument is separated by a comma. Blank spaces or tabs may be included between arguments for ease of reading.

Any argument beginning with an i is one for which a value will be established at the initialization (onset) of the note, and will not change during the note's duration. Any argument starting with a k can change values either at the control (k) rate, or else at the i rate. An argument that begins with an a is updated at the audio-rate. Finally, an argument beginning with an x is one that can be updated at any rate. You can plug a previously created a-rate or k-rate signal into this argument, or give it a constant value. In short, inputs to various mathematical operations may be specified only once per note, or change at either the control or audio rate. Some arguments must be specified at a certain rate, while with other arguments it is up to the user to select the appropriate update rate.

The oscil unit generator has three required arguments - an amplitude (amp) value, a frequency (cps) value, and a function number (fn). It is also possible, if we wish, to include a fourth argument (phs), which specifies where in the function table the oscillator should begin reading (more on this later).

If the oscillator is running at the control rate, the amplitude and frequency arguments can be either constant (i-rate) values, or else k-rate control signals, previously created by other unit generators. The function number argument is fixed (an i-rate value) for the duration of each note. (Normally, we cannot change waveshapes in the middle of a note.) However, the function table number can change from one note to the next. If the oscillator is running at the audio rate, its amplitude argument can be updated at any rate.

The white noise unit generator rand has only one required argument, which determines the amplitude of the noise band. If we run rand at the audio rate, we can update this amplitude value every sample (a-rate), every control period cycle (k-rate), or only once per note (i-rate). If rand is running at the k rate, audio rate amplitude updates are not possible.

1.4. A Simple Instrument Algorithm

[ See the discussion of Instrument block statements in the Csound reference manual ]

We are now ready to start making some music, or at least some sound. So, without further ado, we present our first orchestra, which we would type into a file with a text editor such as vim. We will call this file first.orc:

sr = 44100
ksmps = 20
nchnls = 1

instr 1
        asound oscili 15000, 440, 1
        out asound
endin

This is about as simple an orchestra as we could design. Savor it. In a couple of weeks, as you ponder the intricacies of lines 147 through 163 of some distant descendant of this little fellow, you may look back on these few pristine lines with almost unbearable nostalgia. But perhaps not. This orchestra also is so limited that it is highly unlikely you would ever want to use it, or to listen to its output for more than a couple of seconds.

Our orchestra includes a single instrument block, or algorithm. An instrument block consists of three things: A line identifying the number of the instrument block; the body of the instrument; and finally, the word endin, which signifies the end of this instrument block.

Here, we have given the instrument the auspicious number "1." Any number will do (e.g. instr 1, instr 13, instr 275 or instr 633), but every instrument block within an orchestra must have a unique number.

The two-line body of this instrument can be translated as follows: Create an interpolating oscillator (oscili, a cousin of the basic oscil unit generator discussed above). Run this oscillator at the audio-rate, and write the results of its operations into a RAM memory location we will call asound. Give the oscillator a fixed amplitude of 15000 (on a scale of 0 to 32767). Make it sample a waveshape defined in our score by function table number 1. Set the output frequency to 440 hertz. Then (out asound) write the output of RAM memory location asound to an output buffer and, from there, to an output soundfile or else to the DACs.

I have indented the two-line body within this instrument, although this is not required and hardly is necessary in so trifling a file. You can insert tabs, any number of spaces between arguments, and blank lines wherever you wish to make the code easier for you to read. Subroutines often are indented, but, again, the aesthetics of your coding are up to you.

1.5. Oscil and Oscili

[ See the discussion of oscil , oscili and oscil3 in the Csound reference manual ]

The oscili unit generator in the example above, and its close cousins oscil and oscil3, require a closer look, since oscillators are the most important components of many instruments. An oscillator is a signal generator, which often (but not always) is used to create a periodic signal, in which some pattern is repeated many times. An oscillator consists of two basic components: (1) a table of numbers, loaded into RAM, and (2) a phasor, which is a "pointer," a subroutine that keeps track of the current phase (location, or "index") within the table and calculates the next location in the table from which to read a value. Oscillators in SuperCollider, PD, Reaktor, MSP and virtually every other synthesis application work in a very similar fashion.

As noted earlier, Csound oscillators have three required input arguments - amplitude, frequency and function number - and an optional fourth argument (not used in the example above) specifying a starting phase at which the function is to be read.

Function Number (3rd oscillator argument) :

Digital oscillators cycle through a table of numbers (called a function, in Csound), which often represents one cycle of some waveform or shape. Practically any shape can be defined in a function definition, as we will see later, but a particularly common waveshape is the sine wave. Functions are defined and numbered in the score. The third argument to oscil , oscili or oscil3 specifies the number of the function table to be read by the oscillator. Functions are created and loaded into memory at the beginning of a Csound soundfile compilation job, and unless one employs the -d option of the csound command, all functions within the score file are displayed near the beginning of the stderr (standard error message) output of the Csound compilation job.

Starting Phase (optional 4th oscillator argument)

The optional fourth argument [iphs] specifies where the oscillator should begin reading within the table. The default, when the argument is left blank, is to begin at the beginning of the table. Valid arguments range between 0 and 1. A value of .5 would cause the oscillator to begin reading half way (180 degrees) through the table; a value of .333 would cause reading to begin 1/3 of the way (120 degrees) into the table.

For a repetitive sine wave audio signal of, say, 440 hertz, the starting phase makes no audible difference, and the argument can be omitted. However, if a control oscillator is creating a subaudio, five hertz sine-wave vibrato signal, beginning at a starting phase of .5 (half-way through the sine wave) would cause the resulting pitch first to be lowered, then raised, rather than the reverse.
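The effect of a starting phase can be illustrated by indexing a sine-wave function table directly (a Python sketch; the table size and function names are our own illustrative choices):

```python
import math

TABLE_LEN = 1024
# One cycle of a sine wave, as a Csound f-table might hold it.
sine_table = [math.sin(2 * math.pi * i / TABLE_LEN) for i in range(TABLE_LEN)]

def start_index(phase, table_len=TABLE_LEN):
    """Convert a 0-to-1 starting phase into a table index."""
    return int(phase * table_len)

# A phase of .5 starts half-way (180 degrees) through the table, where
# the sine wave is about to go negative -- so a vibrato oscillator
# started there lowers the pitch first, then raises it.
i = start_index(0.5)
print(sine_table[i + 1] < 0)   # True: the wave heads negative first
```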

The partials produced by most acoustic instruments are often not in phase, but these phase differences seem to make little audible difference. (This point has been studied and debated for many decades, however.) Phase differences DO become significant when mixing signals of the same or nearly same frequency (e.g. doubling a note between instruments, or combining direct and delayed signals).

Frequency (2nd oscillator argument):

The second argument to an oscillator specifies the rate at which it must read through the function table. A value of 440, as in the example above, will cause the oscillator to read through the table ("wrap around") 440 times for each second of sound created.

An oscillator will almost never read every number within a single cycle function table in continuous succession. Rather, since both the sampling rate and the size of the function table are fixed, the oscillator will need to skip over several numbers within the table before taking each new sample reading in order to produce the correct frequency. This "skip" value is called the sampling increment.

In our example, with a sampling rate of 44100 and a requested frequency of 440 hertz, the oscillator will need to spit out 100.22727 samples to represent each cycle:

100.22727 samples per cycle * 440 cycles = 44100 samples per second

Csound function table sizes generally must be a power of two (exceptions are noted later), and a table size of 1 k (1024 numbers) is a typical length. Larger function tables (e.g. 2048, 4096 or 8192 numbers in the table) increase both the accuracy of the computation and the execution time somewhat, but often not to any noticeable degree; smaller sized function tables (e.g. 512, 256 or 128 table locations) decrease both accuracy and execution time slightly. To figure out the correct sampling increment, the oscillator employs the formula

                            table length  *  frequency
    sampling increment  =  ----------------------------
                                  sampling rate

or, in our example,

                             1024  *  440
    sampling increment  =   ---------------  =  10.21678
                                 44100

This means that our oscillator will read the first number in the table (location 0) to compute the first output sample value, then skip ahead, using the eleventh number (location 10) to compute output sample 2, the 21st number (location 20) to compute output sample 3, and so on. Because the fractional parts of the sampling increment accumulate, on approximately every fifth output sample the oscillator will advance by eleven rather than ten locations within the table.

Every frequency requires a unique sampling increment. However, none of this mathematical unpleasantness need concern the user; the oscillator takes care of all of this automatically by means of an internal phasor (phase accumulator).
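For readers who want to verify the arithmetic above, here is a minimal sketch (Python, purely illustrative; Csound performs this computation internally):

```python
# Illustrative: the sampling-increment formula described above.
def sampling_increment(table_length, frequency, sampling_rate):
    """Number of table locations to advance per output sample."""
    return table_length * frequency / sampling_rate

print(round(sampling_increment(1024, 440, 44100), 5))  # 10.21678
print(round(44100 / 440, 5))   # samples per cycle: 100.22727
```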

1.5.1. Interpolating and Truncating Oscillators

This brings us to the difference between interpolating oscillators, such as the Csound oscili and oscil3, and truncating (non-interpolating) oscillators, such as oscil. With a sampling increment of 10.21678, the oscillator should find values at the following points in the table :

0 10.21678 20.43356 and so on

Obviously there is no value at location 10.21678 in the table - only values at locations 10 and 11. Truncating oscillators (oscil) keep accurate track of the cumulative sampling increment, but, in the example above, return the values of table locations 0, 10, 20, 30 and so on. The interpolating oscili, by contrast, will take the time to compute the difference between the numbers in table locations 10 and 11, multiply this difference by .21678, and add the result to the number in table location 10. By interpolating between adjacent table locations for each output sample in this fashion, oscili will provide better resolution (representation) of the waveform, with less round-off error and thus less harmonic distortion. The price? Greater computation time for this particular unit generator, by at least a factor of two.
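The difference between the two lookup strategies can be sketched as follows (Python, illustrative only, not Csound's actual implementation):

```python
import math

# Illustrative sketch of truncating vs. linearly interpolating table
# lookup -- roughly what oscil and oscili do, respectively.
TABLE_LEN = 1024
table = [math.sin(2 * math.pi * i / TABLE_LEN) for i in range(TABLE_LEN)]

def read_truncating(table, phase):
    # Discard the fractional part of the phase (oscil-style).
    return table[int(phase) % len(table)]

def read_interpolating(table, phase):
    # Weight the two adjacent table values by the fractional part (oscili-style).
    i = int(phase) % len(table)
    frac = phase - int(phase)
    nxt = table[(i + 1) % len(table)]
    return table[i] + frac * (nxt - table[i])

phase = 10.21678
exact = math.sin(2 * math.pi * phase / TABLE_LEN)
print(abs(read_truncating(table, phase) - exact))    # larger error
print(abs(read_interpolating(table, phase) - exact)) # much smaller error
```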

oscil3 is similar to oscili, but performs cubic interpolation, which is slightly more accurate (and computationally slightly more time consuming). For the highest possible audio quality, we could substitute an oscil3 opcode for the oscili in first.orc. However, the sound produced by this orchestra file is so numbingly boring that no one may notice the subtle difference in audio quality. If the oscillator instead were sampling a more complex function table, such as a table containing the samples of a digitized violin tone, and if we introduced glissandi and other pitch variations, listeners would be more likely to notice the slightly improved audio quality afforded by oscil3. In addition to oscil, several other unit generators have truncating, interpolating and cubic interpolation variants, e.g. the delay line opcodes deltap, deltapi and deltap3.

If our orchestra file contains a single oscillator, the difference in computation time between oscil, oscili and oscil3 might be negligible. But if our instrument includes twenty interpolating oscillators, and our score requires that many notes be computed simultaneously, the computation time difference becomes more substantial. For non-real-time synthesis of audio signals, oscili is often the better, or at least the safer, choice. For control signals, such as a 5 hertz vibrato pattern, however, it is unlikely that the higher resolution would make much audible difference. We would probably use the faster oscil to create this signal, and would run this oscillator at the control rate rather than at the audio rate.

Note that by using very large table sizes -- say, 4096 or 8192 points, rather than 1024 -- to represent a waveform, round-off error can be reduced when truncating oscillators are used. This is a solution we might choose to employ if we want to run Csound in real time with complex synthesis algorithms. Larger tables, however, require more RAM, especially, say, a table containing stereo 96 kHz 24 bit samples of a digitized acoustic sound. Such tradeoffs between computation time, memory space and signal resolution are encountered frequently in digital synthesis, even with today's faster computers. When you hear digital playback beginning to cough and sputter or break up, the demands of the audio software are exceeding the current capabilities of the system hardware and software.

Amplitude (1st oscillator argument to oscil, oscili and oscil3):

The tables of numbers for most sine wave and other synthetic audio functions consist of floating point values that range between -1. and +1. The amplitude argument to an oscillator specifies a multiplier for each number read in from the function table. On a 16-bit integer system, the output integer samples are scaled between -32767 and +32767. Thus, the number 15000 used in our example merely denotes a value within the acceptable range. It is impossible to say whether this value will be perceived as mezzo-piano, forte, or whatever. (Recall that timbre is often a more important factor in our perception of loudness than amplitude.)

In general, we try to create source signals at fairly hot levels, with a maximum amplitude peak somewhere between 15000 and 32000 for 16 bit signals, in order to take advantage of the full 16 bit signal resolution. Level and balance adjustments between different signals within a mix generally are accomplished during mixing operations, in the same manner that one uses faders on a mixing console (or virtual faders in a sequencing or audio mixing program) when bouncing multiple tracks down to a stereo master. In creating source soundfiles, however, one must take care not to exceed a maximum amplitude of 32767 (for 16 bit samples) at any given point, or else severe distortion will result. The Csound stderr terminal output provides the error message "samples out of range" whenever the amplitude of a sample exceeds 32767. If several copies of an instrument are playing simultaneously (for example, a chord, or overlapping sustaining notes), be conservative in your initial amplitude arguments. These can always be increased on subsequent runs of the job if you find that the resulting total amplitude values are low.
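When several notes sound at once their sample values are summed, so amplitudes that are individually safe can add up past 32767. A tiny illustrative sketch (Python; the note data is invented):

```python
# Hypothetical sketch: simultaneous notes are summed, roughly as values
# are added into Csound's output buffer, and the sum must stay in 16-bit range.
MAX_16BIT = 32767

def mix(*signals):
    """Sum equal-length sample streams point by point."""
    return [sum(vals) for vals in zip(*signals)]

note1 = [15000, 15000, 15000]
note2 = [15000, 15000, 15000]
note3 = [15000, -2000, 500]

mixed = mix(note1, note2, note3)
clipped = [s for s in mixed if abs(s) > MAX_16BIT]
print(mixed)         # [45000, 28000, 30500]
print(len(clipped))  # 1 -- one "sample out of range"
```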

In sum, we can paraphrase the three required arguments to an oscillator as three questions. These are:

(1) What is the intensity level of the signal? (amplitude);

(2) How many times per second, or at what rate, is the waveshape being produced? (frequency); and

(3) What is the time-varying waveshape of the signal? (function number, which points to a table of numbers that has been compiled and stored in RAM.)

Output statements: out, outs, outs1 and outs2

[ See the discussion of unit generators out and outs in the Csound reference manual ]

The statement out asound on the penultimate line of our sample orchestra file is called a 'standard out' statement. The out unit generator sends the current value of the RAM memory location we have called asound to an output buffer. Here it is added to any value (from other notes being played simultaneously by this instrument, or by other instruments within the orchestra) already in the buffer. Eventually, a group of a thousand or so samples within the buffer are written as successive samples to the disk soundfile, or, if Csound is being run in real time, to the system DACs.

Since our orchestra is monophonic, we don't have to worry about spatial localization. However, if we change our orchestra to stereo (by setting the header nchnls argument to 2), we must use unit generator outs (or else outs1 and outs2), rather than out, in order to specify stereo localization. The standard out statement might look like this:

outs asound, asound

This would send the signal at full amplitude to both output channels. If, instead, the standard out statement looked like this:

outs .7*asound, .3*asound

or else like this :

outs1 .7*asound
outs2 .3*asound

then 70 % of the signal would be sent to the left channel output (the oscillator signal is multiplied by an amplitude gain scalar of .7) and 30 % of the signal would be sent to the right channel. Thus the signal would be perceived as coming from a stereo location "to the left of center."

For quad output we would change the nchnls argument in our header to 4 and then use the outq opcode:

outq asig1, asig2, asig3, asig4
Here a different audio signal (asig1, asig2, asig3 and asig4) is sent to each of the four loudspeakers.

Mathematical expressions

Note that the two arguments to outs (.7 * asound and .3 * asound) are examples of simple expressions, or mathematical operations. Expressions sometimes can become much more complex, and require nesting of the (mathematical) operators and their arguments within matching ( and ) parentheses to assure that the operations are performed in the correct order and thus produce the desired result.

out (.5 * (asound1 + asound2)) + (.3 * asound3)
In this example three audio signals (asound1, asound2 and asound3) are mixed together and sent to a monophonic output. Audio signals asound1 and asound2 are added (mixed) together, and this submix is multiplied by an amplitude scalar of .5, reducing the amplitude by 50 %. Audio signal asound3, scaled to 30 % of its original amplitude, then is added to the mix. If the resulting mix signal is still too hot, causing samples out of range (exceeding 32767) and clipping, we could reduce the overall gain of the mix by 20 %:
out .8 * ( (.5 * (asound1 + asound2)) + (.3 * asound3) )

Here is an example of potentially bad or incorrect Csound coding:

     out   .5 * asig1 + .2 * asig2 

How should these operations be performed? As

     out   .5 * (asig1 + (.2 * asig2))

or, instead, as

     out   (.5 * asig1) + (.2 * asig2)

In potentially ambiguous cases like this, Csound does follow a default order of precedence in which mathematical operations are performed. (Multiplication and division are performed before addition and subtraction, for example.) See the discussion of operators +, -, * and / in the Csound Reference Manual for more details. However, in your Csound coding, you should always use parentheses to clarify execution order whenever there is any possible ambiguity.
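Python happens to follow the same precedence rules (* and / before + and -), so the ambiguity can be demonstrated directly (the signal values here are invented for illustration):

```python
# Illustrative: without parentheses, the expression groups as
# (.5 * asig1) + (.2 * asig2), because * binds more tightly than +.
asig1, asig2 = 1000.0, 2000.0

unparenthesized = .5 * asig1 + .2 * asig2
explicit        = (.5 * asig1) + (.2 * asig2)
other_reading   = .5 * (asig1 + (.2 * asig2))

print(unparenthesized == explicit)       # True  (both 900.0)
print(unparenthesized == other_reading)  # False (other reading is 700.0)
```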

1.6. Score files

[ Read the discussion of i Statement (Instrument or Note Statement) in the Csound reference manual ]

Our first.orc file provides a workable, if trivial, instrument algorithm. Now we must tell this algorithm how many notes to play, what the durations of the notes should be, and what waveform the oscillator should read. We could create a score11 file like the following to create a sine wave table (f1, where the 1 corresponds to the function number we have told our oscillator to use), and then to specify a single note lasting five seconds:

Score11 input file:

  *f1 0 1024 10 1; <this line defines function number 1, a sine wave
  i1 0 0 1;        < instrument number 1 plays one note
  p3  5;           < the duration of this note is 5 beats

The resulting score11 output file sout, a Csound format score file, would look like this:

  f1 0 1024 10 1
  i1 0.000 5.000

Alternatively, since our Csound score is so simple, we might find score11 superfluous, and instead type the three lines of the Csound score directly into a file. In this case, we might include Csound comments, which begin with a ; (semicolon) rather than with the < character used in score11 input files:

  f1 0 1024 10 1  ; this line defines function number 1, a sine wave
  i1 0.000 5.000  ; instrument number 1 plays 1 note lasting 5 seconds
  e

In naming our Csound score file when we save it to disk, we need not call this file "sout.", and, in fact, we should not, since sout files are overwritten every time we run score11. By convention, just as the names of Csound orchestra files customarily end with a .orc extension, score files generally are given a .sco extension to identify them as Csound score files. The most obvious name for our score file would be first.sco (matching the name of the orchestra file that will "play" this score).

However, matching orchestra and score files in this manner is not a requirement. In fact, most of us in the ECMC studios rely heavily on score11 for score file creation, use the resulting sout files as our score files, and often create many score files for each orchestra file we create. So in this case, seeking to dispel the chill of a cold Rochester January day, I will elect to call my hand-made score file above desire.sco.

Our three line Csound score begins with an f (function) statement, which defines a sine wave. The second line provides one i (instrument, or note) statement with three parameters:

p1, with an argument of i1, specifies the number of the instrument within our orchestra file that will play this note. You will recall that within our first.orc orchestra file we labeled our single instrument instr 1, so our score file and orchestra file instrument numbers are matched.
p2, with a value of 0.000, specifies the starting time of this note; and
p3, with a value of 5.000, specifies the duration of the note.
In all Csound score file i statements, p-fields (parameters) 1, 2 and 3 will always specify, respectively, the number of the instrument within the orchestra file that will play the note, the starting time of the note, and the duration of the note. i statements can include many additional (and usually consecutively numbered) parameter fields (p4, p22, p89, etc.) to provide input argument values for oscillators, filters and other unit generators, but the function of each of these additional p-fields must be defined within the instrument algorithm.

The third line of our score -- the letter e -- is an e (end) statement, which is required in all Csound score files to mark the end of the score.

1.7. Running Csound with the csound command

[ See the discussions of The Csound command and Csound command line flags in the Csound reference manual ]

We are now ready to compile our first Csound soundfile by using the csound command, which on ECMC Linux systems can be abbreviated with the shorthand alias cs :

   csound  first.orc  desire.sco  or, on ECMC systems:    cs  first.orc  desire.sco 

This will create a five second 44.1k 16-bit monophonic soundfile named test.wav in your current working soundfile directory ($SFDIR). Csound also will pop open a postscript window to display the sinusoid created by the f1 function table definition in our score. The job will not end until you click on the quit button or the x box in the upper right corner of this postscript window.

The csound command provides a great many flag options for compiling soundfiles and for playing compilations in real time. Here are some of the more frequently used flag options:

     -----  Output format:  -----
-o  NAME  write output to soundfile "NAME" instead of to "test.wav"
-3  create 24 bit audio samples instead of 16 bit short integers
-f  floating point output instead of 16 bit short integers
    (float soundfiles are not playable by all audio applications, but are
    playable with the ECMC play command)

-r NUM  override sampling rate in orch. file; set sr to NUM
-k NUM  override control rate in orch. file; set kr to NUM

-R  continually re-write header, so that the soundfile can be played even
    before it has finished compiling (default on ECMC Linux systems)
-W  write output to a WAVE file (default on Linux and Windows systems)
-A  write output to an AIFF file (default on Macintosh systems)

     -----  Display:  -----
-d    suppress all displays (useful for real-time)
-G    display functions and envelopes in postscript (default, so this flag is almost never needed)
-g    display functions and envelopes in ascii

     ----- Real-time playback  -----
-B NUM   set number of samples held in the DAC hardware buffer to NUM; rarely needed on ECMC Linux systems
-b NUM   set number of samples held in the i/o software buffer to NUM; rarely needed on ECMC Linux systems
     -----  No Sound :  -----
-n   No sound; Perform all processing but do not produce output samples; sometimes
    useful for debugging
-z        list opcodes in this version of Csound
--help    display csound command line flag options 

And here are some examples of how to use these flags when creating soundfiles:

csound --help    (display csound command line flags)
csound -d -o firstry.wav  first.orc desire.sco   (suppress displays; name the output soundfile  firstry.wav)
cs -r 96000 -k 9600 -3 -o desire.wav first.orc desire.sco   (reset sr to 96000 and kr to 9600;
     write 24 bit samples to output file desire.wav; note that we could also change the sr and kr arguments in the header of our 
     orchestra file rather than overriding these values with the Csound command line)

I generally run tests of new Csound instruments, and new scores for all of my instruments, at 44.1k or 48k 16 bit. For important soundfiles that I actually intend to use compositionally, however, I bump the sr up to 96k and often increase the kr to 19200, and also create floating point or 24 bit soundfiles. However, I don't want to be bothered changing the headers in orchestra files, and (not surprisingly) I can't remember some of the Csound flag options.

A simple solution has been to create some bash functions (aliases) in the .bashrc file in my home directory, like these:

function cs24 { (csound -W -R -3 -d "$@") } # 24 bit
function cs96f { (csound -W -R -f -d -r 96000 -k 19200 "$@") } # 96k floats
function cs9624 { (csound -W -R -3 -d -r 96000 -k 19200 "$@") } # 96k 24 bit
Then, whenever I want to create a 24 bit soundfile with Csound, all I need type is
cs24 -o section1.wav orc sout
To create a 96 k floating point soundfile with a very high control rate, I can type
cs96f -o section1.fl.wav orc sout
To create a 96 k 24 bit soundfile and use a very high control rate, the command would be:
cs9624 -o section1.9624.wav orc sout

To run Csound with realtime output on ECMC Linux systems you should use the csoundplay ECMC utility, which can be abbreviated csp:

          csoundplay  first.orc  desire.sco    or     csp  first.orc  desire.sco

csoundplay can be used whether jack is running or not. If jack is not running, the commands above are equivalent to

csound -+rtaudio=alsa --expression-opt -odac -b 512 -B 1024 -d -m6 first.orc desire.sco
If jack is running, csoundplay will set Csound's output buffer size to match the jack buffer size and will connect the Csound output to the system audio outputs.

CSD files

Sometimes after we have perfected a Csound orchestra file and a companion score file to our liking, it can be convenient to bundle these two files, along with any command line compile flags, into a single file, which Csound calls a csd file. For example, if I want to be able to recreate a soundfile exactly at some point in the future, either on madking or on some other Linux, Mac or Windows computer, or if I would like to email the orchestra and score files and command line flag instructions used to create a soundfile to another user, so that she can reproduce this soundfile exactly, a csd file will make this a simple undertaking. A csd file for first.orc and desire.sco, with instructions to compile the soundfile at a sampling rate of 96000, with a high kr value of 19200, and with floating point output samples, would look like this:
  <CsoundSynthesizer>
  <CsOptions>
  ; this CSD file was generated with makecsd v1.1
  -r 96000 -k 19200 -f -o hotstuff.wav
  </CsOptions>
  <CsInstruments>
  ; originally tone.orc
  sr = 44100
  kr = 19200
  nchnls = 1
  instr   1
      asound oscili 15000, 440, 1
      out asound
  endin
  </CsInstruments>
  <CsScore>
  ; originally tone.sco
  f1 0 1024 10 1  ; this line defines function number 1, a sine wave
  i1 0.000 5.000  ; instrument number 1 plays 1 note lasting 5 seconds
  e
  </CsScore>
  </CsoundSynthesizer>

As you can see, Csound csd files employ a markup format similar to HTML.

By means of the utility makecsd we can consolidate an existing orchestra file and an existing Csound score file into a .csd file. The name of the orchestra file must end with a .orc extension, and name of the score file must end with a .sco extension. (If your score file is "sout," you first must rename this file and give it a .sco extension). Example:

makecsd first.orc desire.sco > desire.csd
After editing file desire.csd with a text editor to fill in the Csound command line options we want, we can use this file to compile the soundfile hotstuff.wav with the command:
    csound  desire.csd

1.8. Creating Function Table Definitions

[ See the discussion of f (function) statement in the Csound reference manual ]

Our next major topic concerns function table definitions, which must be included in our score file whenever we employ one or more oscillators within our orchestra. Like i (note) statements, f (function) statements consist of an array (series) of parameters (variables):

(p1) (p2) (p3) (p4) (p5)
 f1   0   1024  10   1  ; Csound score file function definition
 *f1  0   1024  10   1 ;  <score11 input file definition of the same function
Note that in score11 input files, the f must be preceded by an asterisk, a flag that tells score11 simply to reproduce the line as is (stripped of the asterisk), and the line must conclude with a semicolon.

The first parameter field (f1), specifies the number of the function, in this case 1, although 10, 125 or 1000 would do just as well. One cannot have two different functions within a score with the same number active at the same time. However, two or more oscillators can read simultaneously from the same function table.

The second p-field determines at what time the function will be created. In this case, the function is created at time '0,' that is, before the computation of samples begins.

If we were creating a long, complex soundfile, say, 40 seconds or so, and didn't need this particular function until halfway through, we could give it a starting time of 20. The table then would not be created until 20 beats into the soundfile. This might be elegant, but is rarely required, unless we don't have enough RAM to squeeze in everything that happens in the first 20 seconds. With today's computers, this is rarely a problem.

The third parameter (1024) determines how many numbers will be used to outline the desired shape. As discussed earlier, the higher the number, the greater the resolution, but the more memory space required to store the table. The table size must be either a power of two (2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, etc.) or else, in certain cases, a power of two plus one (3, 5, 9, 17, 33, 65, 129, etc.).

=> If the function is going to be read repetitively by an oscillator, as in our preceding example, then a power of two should be used.
=> If the function will be read only once per note, a power of two plus one is better. Examples of the latter case will be discussed in the section on envelopes.

The fourth p-field in the function definition statement is a call to a particular function generating program, which will actually calculate the numbers of the table and load them into RAM. These programs are called gen routines in Csound (and in many other music compilers as well, since they are all derived from programs of the same name originally written at Bell Labs in the 1960s). Our function table definition invokes gen10.

1.9. GEN Routines : gen10

[ See the discussion of gen10 in the Csound reference manual. ]

gen10 creates a table that represents one cycle of an audio waveform consisting entirely of harmonic partials. By harmonic, we mean that every frequency component is an integer multiple of the fundamental - exactly twice the fundamental frequency, three times the fundamental, and so on. The relative amplitude of each harmonic is indicated, successively, in the remaining p-fields of the function definition. In our example sine wave function definition, only one additional p-field is included, specifying a value of 1. This means that the first partial (the fundamental) has a relative strength of 1, while the remaining harmonics all have a relative strength of 0. This will produce a sine wave.
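A rough emulation of gen10's summation (Python, illustrative only, not Csound's source code) may clarify how the p-fields map to harmonic strengths:

```python
import math

# Illustrative emulation of gen10: each extra argument is the relative
# strength of the next harmonic (argument 1 -> fundamental, 2 -> 2nd
# harmonic, and so on).
def gen10(size, *strengths):
    table = []
    for i in range(size):
        x = 2 * math.pi * i / size
        table.append(sum(a * math.sin((h + 1) * x)
                         for h, a in enumerate(strengths)))
    return table

sine = gen10(1024, 1)       # like "f1 0 1024 10 1" -- a pure sine
print(round(sine[256], 6))  # a quarter of the way through: sin(pi/2) = 1.0
```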

Suppose we want to create a wave that consists of only odd numbered harmonics, all of equal intensity. Our function definition might now look something like this:

     function definition: *f1  0  1024  10   1   0   1   0   1   0   1   0   1   0   1;
               harmonics:                    1   2   3   4   5   6   7   8   9  10  11
                p-fields:   1  2     3   4   5   6   7   8   9  10  11  12  13  14  15

Here, harmonics 1, 3, 5, 7, 9 and 11 all have the same relative strength, while all even numbered partials are suppressed.

Of course, this likely would produce a rather unnatural timbre. It would be much more likely for the various harmonics to have different relative strengths :

*f1 0 512 10 1 0 .33 0 .2 0 .14 0 .11 0 .09;

This makes the fundamental stronger, and the higher odd partials progressively weaker. In fact, the function above would approximate a square wave, though with sloped sides, because the waveshape is band-limited in frequency. We have specified only odd harmonics 1, 3, 5, 7, 9 and 11. Additional, higher odd-numbered harmonics would be necessary to produce the right angles of a true square wave. As of this writing, function definitions, as well as Csound orchestra and score files, can include up to 150 p-fields (a limit that may soon be raised), so we could add many more harmonics if we so desired. However, we would have to be careful about using such a function to create high-pitched tones, especially at lower sampling rates, since the highest harmonics might exceed the Nyquist frequency and alias. [Example: With a sampling rate of 22050 and a pitch of 1000 hertz, any harmonic above number 11 would fold over.]
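The bracketed fold-over example can be checked with a few lines (Python, illustrative):

```python
# Illustrative check of the fold-over example: at sr = 22050 the Nyquist
# frequency is 11025 Hz, so for a 1000 Hz fundamental any harmonic whose
# frequency exceeds 11025 Hz will alias.
sr = 22050
nyquist = sr / 2
fundamental = 1000

highest_safe = int(nyquist // fundamental)
print(nyquist)       # 11025.0
print(highest_safe)  # 11 -- harmonics above number 11 fold over
```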

Although not required, it is generally good practice to give a value of 1 to the strongest partial -- which will not always be the fundamental -- and to scale the others accordingly, as numbers between 0 and 1.

Accessing the orchestra and score file examples in this Tutorial

Doubtless you have found all of the background and technical information above to be a compelling, perhaps even heart stirring narrative, reminiscent of a finely crafted mystery novel with many unexpected twists, and you may be wishing it could all go on forever and ever. Large portions of the remainder of this tutorial, however, will consist of orchestra and score file examples. Soundfiles compiled from the orc/sco pairs are available for your listening and dancing pleasure in the /sflib/x directory. ECMC Linux users can access these example orchestra and score files in the following ways:

The time now has come to put to use what we have learned so far, to experiment with some audio function definitions, and to create a few soundfiles. To do this, however, we must first upgrade our wimpy orchestra file, replacing some of constant values with score p-field variables, so that we can vary the oscillator's amplitude, frequency and function number arguments from note to note. Our revised orchestra file now looks like this:

;  #############################################################
; Soundfile examples  "ex1-1"  and  "ex1-2"
;  #############################################################

Orchestra file used to create these two soundfiles:
   sr = 44100
   ksmps = 20
   nchnls = 1

   instr 1
      asound  oscili  p5, p4, p6
      out asound
   endin

Soundfile example ex1-1.wav in /sflib/x was created by means of the orchestra file above, and the following score11 file:

< Score11 file used to create soundfile example ex1-1 :
*f1 0 1024 10 1.;    < sine wave
*f2 0 1024 10 1. 0 .33 0 .2 0 .14 0 .11 0 .09; < odd harmonics
*f3 0 1024 10 0 .2 0 .4 0 .6 0 .8 0 1. 0 .8 0 .6 0 .4 0 .2 ; < even harmonics
 < function 4 includes harmonics 11 through 20
*f4 0 1024 10 0 0 0 0 0 0 0 0 0 0 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. ;
 < f5 = harmonics 1, 6, 11, 16, 21 , 26 , 31 ,36
*f5 0 1024 10 1 0 0 0 0   .8 0 0 0 0   .7 0 0 0 0   .6 0 0 0 0   .5 0 0 0 0
 .35 0 0 0 0   .2 0 0 0 0   .1;

i1 0 0 10;  < play 10 notes
p3 4;
du .95;
p4 nu 220 * 5 / 27.5 * 5 ;   < frequency :
                             <  5 notes at 220 hz, then 5 notes at 27.5 hz
p5 15000;  < amplitude (constant here for all notes)
p6 nu 1 / 2 / 3 / 4 / 5;

The Csound score file ("sout") produced by the above score11 file looks like this:

f1 0 1024 10 1.
f2 0 1024 10 1. 0 .33 0 .2 0 .14 0 .11 0 .09
f3 0 1024 10 0 .2 0 .4 0 .6 0 .8 0 1. 0 .8 0 .6 0 .4 0 .2
f4 0 1024 10 0 0 0 0 0 0 0 0 0 0 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
f5 0 1024 10 1 0 0 0 0 .8 0 0 0 0 .7 0 0 0 0 .6 0 0 0 0 .5 0 0 0 0 .35 0 0 0 0 .2 0 0 0 0 .1
  i1 0.000 3.800 220 15000 1
  i1 4.000 3.800 220 15000 2
  i1 8.000 3.800 220 15000 3
  i1 12.000 3.800 220 15000 4
  i1 16.000 3.800 220 15000 5
  i1 20.000 3.800 27.500 15000 1
  i1 24.000 3.800 27.500 15000 2
  i1 28.000 3.800 27.500 15000 3
  i1 32.000 3.800 27.500 15000 4
  i1 36.000 3.800 27.500 15000 5
end of score

Some questions to ponder on this example:

After experimenting with some isolated test tones in this fashion, we often can isolate some material that can be used to create a more intriguing musical gesture. Listen to and study the following example, which makes considerable use of random selection procedures. Three audio waveshape functions similar to f4 in the previous example are employed. The fundamental frequencies at the very end of the example are subaudio (see p4 below).

; Score11 file used to create soundfile example  ex1-2  :
 < function 1 includes harmonics 11 through 20 :
*f1 0 1024 10 0 0 0 0 0 0 0 0 0 0 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. ;
*f2 0 1024 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1. 1. 1. 1. 1.
  1. 1. 1.  ; < harmonics 21 thru 28
*f3 0 1024 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
   1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. ; < harmonics 30 thru 41

i1 0 8.5 ;  < play for 8.5 beats
p3 mo 4. .6 .15/  4.5 .15 .6;
du mx 8.5  301. 302. 304.;
rs 444;   < this reseed value produced better results than some others I tried
p4  mx 1. 20 50  72/ 6. 72 50  800  500 / 1.5 6. 8. 5.;
     < frequency : pitch gradually rises
p5 mx 4. 2000 4000 6000 8000/4.5 6000 8000 4000;  < amplitude
p6  se 8.5 1 2 3;  < audio function number : randomly selected here

Appendix Csound score file examples : Chapter 1

Note that our long score11 function definition for f3 in the example above extends over two physical lines, but that these two lines comprise only a single line of code (a score11 line does not end until a semicolon is reached). In Csound orchestra and score files, however, a newline character (produced by a carriage return) terminates a line of code, unless the newline is preceded by the backslash character \.

1.9.1. gen9

[ See the discussion of GEN09 in the Csound reference manual ]

Function generator gen10, while relatively easy to use, is also somewhat limited. Its cousin, gen9, though more complicated to use, provides greater flexibility, enabling us to specify

(1) only the partials we want ;
(These partials can be either harmonic or inharmonic, but the use of inharmonic partials requires a little trickery, discussed below.)
(2) the relative strength of each of these partials; and,
(3) the starting phase of each partial.

Each partial, then, requires three p-field arguments within the function table definition. Here is an example:

            f1    0 1024    9    1   .5    0   5.   1.   90   7.  .35  180  11.   .2  270;
(p-fields):  1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16

Here, we place a 9 in p4 to summon gen9 to create the table. The remaining p-fields can be roped off in groups of three (p5-7, p8-10, p11-13 and p14-16). Within each of these groups, the first number is the frequency of the partial (as a multiplier of the fundamental); the second number is the relative amplitude strength; and the third is the initial phase, expressed in degrees (0 to 360). Thus, this function will create a waveform that consists of harmonics 1, 5, 7 and 11, with relative amplitude intensities of .5, 1., .35 and .2. The fifth harmonic will be 90 degrees out of phase with the fundamental, the 7th partial 180 degrees out of phase, and the 11th partial 270 degrees out of phase.
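The arithmetic gen9 performs can be sketched in a few lines of ordinary code. The Python fragment below is an illustration of the summation, not Csound's actual implementation (the helper name `gen9_table` is my own): each (partial, amplitude, phase) triple contributes one sinusoid, and the table stores their sum.

```python
import math

def gen9_table(size, triples):
    # triples: (frequency multiplier, relative amplitude, phase in degrees),
    # one triple per partial, exactly as in a gen9 function definition
    table = []
    for i in range(size):
        sample = sum(amp * math.sin(2 * math.pi * freq * i / size
                                    + math.radians(phase))
                     for freq, amp, phase in triples)
        table.append(sample)
    return table

# f1 0 1024 9 1 .5 0 5. 1. 90 7. .35 180 11. .2 270;
f1 = gen9_table(1024, [(1, .5, 0), (5, 1., 90), (7, .35, 180), (11, .2, 270)])
```

Note the effect of the phase arguments: at index 0 the 5th harmonic starts at sin(90°) = 1 and the 11th at sin(270°) = -1, so the table begins at .5·0 + 1·1 + .35·0 + .2·(-1) = 0.8 rather than at 0.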

It is unlikely that these phase relationships will make much, if any, audible difference. Note, too, that within gen9, the starting phase of individual partials is expressed in degrees - 0 to 360. By contrast, the optional phase argument to oscillators within an orchestra file, which specifies where the oscillator is to begin reading within the table, must be specified as a fraction between 0 and 1. It is easy to become confused over these two quite different usages of the word "phase." If no one in class asks for clarification on this point, we'll know that you didn't read this section.

One need not specify harmonic partials. Inharmonic frequency ratios are also possible, as in the following function:

*f1 0 1024 9 1. 1. 0 2.7 .5 0 5.4 .33 0 8.1 .12 0;

Here we create four partials, with frequencies that will be, respectively, 1, 2.7, 5.4 and 8.1 times the base frequency we supply to the oscillator.

This looks fine on paper, but it will not give us the audible result we expect. Additional high frequency artifacts will be present, and the timbre will be buzzy rather than "pure." Why? Within the function table, which represents one cycle of this waveform, only the first partial (1) will be symmetrical, beginning and ending at the same phase point. The second partial will contain 2.7 cycles of a sine wave, the third partial 5.4 cycles, and the fourth partial 8.1 cycles. The fractional (.7, .4 and .1) cycle components at the ends of these three partials will result in a discontinuity between the end of the table and the beginning. Each time the oscillator wraps around the table, high frequency artifacts, or "clicks," will result from this asymmetric discontinuity.
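This wrap-around discontinuity is easy to verify numerically. In the Python sketch below (my own illustration, not part of Csound), a partial with an integer number of cycles returns almost exactly to its starting value at the end of the table, while 2.7 cycles leaves a large jump:

```python
import math

def wrap_jump(cycles, size=1024):
    # Difference between the last sample of a single sine partial and the
    # first sample the oscillator wraps back to -- the discontinuity that
    # produces the "clicks" described above.
    last = math.sin(2 * math.pi * cycles * (size - 1) / size)
    first = math.sin(0.0)
    return abs(last - first)

print(wrap_jump(5))    # integer cycles: tiny jump, no click
print(wrap_jump(2.7))  # fractional cycles: large jump, audible artifacts
```

Run once per wrap of the table (55 times a second for a 55 hertz tone), that jump behaves like a small click generator superimposed on the waveform.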

This does not prevent us from creating inharmonic partials, but it does require some sleight of hand within our function table definition and in the frequency input to our oscillator. Consider the orchestra and score11 files, reproduced below, used to create soundfile example ex1-3. The orchestra file is identical to our previous orchestra except for the oscillator frequency argument. Within the score11 file, note the partial frequencies specified for both audio functions.

;  #############################################################
; Soundfile example  ex1-3
;  #############################################################

Orchestra file used to create this soundfile:

     sr= 22050
     kr = 2205
     ksmps = 10
     nchnls = 1

     instr 1
     asound  oscili  p5, .1 * p4, p6
     out asound
     endin

score11 file used to create this soundfile:

*f1 0 4096 9 10 .8 0  27 1. 0  54  .4 0  81  .2  0;  <  partials 1, 2.7 , 5.4 & 8.1
 < function 2 includes partials at approximately  2,  3,  9, 10, 16 and 17 times
 < the fundamental {which is missing}
*f2 0 4096 9 21 .4 0  29 .5 0  91 1. 0  100 .7 0  161  .2  0 170 .15 0;

i1 0 0 4;  < play 4 notes
p3  3.;
du .95;
p4  nu 55 / 261.6 ;   <  frequency : alternate between a1 & c4 {middle c}
p5 nu 12000 / 7000;         < amplitude
p6 nu 1 // 2// ;  < audio function number

Within the function definition of f1, we specify partial frequencies of 10, 27, 54 and 81. Within the orchestra file, we have modified the oscillator frequency argument, directing it to wrap around the function table at a rate one tenth the value specified in p4. Thus, for the first note in our score, where the oscillator frequency is set to 55 hertz in p4, the oscillator actually will wrap around the table at a rate of only 5.5 cycles per second. The frequencies of the waveform will be 10, 27, 54 and 81 times this 5.5 hertz base, or, respectively, 55, 148.5, 297 and 445.5 hertz. The audible result, of course, will be identical to that produced by an oscillator wrapping around a table at 55 hertz with partial ratios of 1., 2.7, 5.4 and 8.1, except that we have eliminated the discontinuity in the waveform.
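The arithmetic of this trick is easy to check. The plain Python sketch below (my own illustration, mirroring the orchestra's `.1 * p4` expression) confirms that the integer table partials 10, 27, 54 and 81, driven at one tenth of the score frequency, reproduce exactly the inharmonic ratios 1, 2.7, 5.4 and 8.1 we wanted:

```python
p4 = 55                             # score frequency in hertz (first note)
table_partials = [10, 27, 54, 81]   # integer partials in the f1 definition

base = 0.1 * p4                     # oscillator's actual wrap rate: 5.5 Hz
freqs = [base * p for p in table_partials]
print(freqs)                        # sounding frequencies in hertz

ratios = [f / p4 for f in freqs]
print(ratios)                       # effective partial ratios re: 55 Hz
```

Because every partial in the table completes a whole number of cycles, the table is continuous at the wrap point, yet the sounding spectrum is the inharmonic one we designed.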

The second function in our score, used for notes 3 and 4, specifies partial frequencies of 21, 29, 91, 100, 161 and 170, which actually become frequency ratios of 2.1, 2.9, 9.1, 10., 16.1 and 17. These partials are almost, but not quite, harmonic. Therefore, even with no fundamental specified, this spectrum will produce a clearly defined pitch at the phantom fundamental frequency, but with amplitude beating resulting from the slight inharmonicity.

Note, too, that because these two tables produce very complex waveshapes (since they include so many cycles of each partial), we have increased the table sizes from the customary 1024 to 4096.
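One way to see why the larger table helps: the table must represent even its highest partial with a reasonable number of sample points per cycle. The quick calculation below is my own rule-of-thumb illustration, not a prescription from the Csound manual:

```python
def points_per_cycle(table_size, highest_partial):
    # How many table points represent one cycle of the highest partial
    # stored in the table
    return table_size / highest_partial

# f2 above contains a partial that completes 170 cycles within the table:
print(points_per_cycle(1024, 170))   # about 6 points per cycle: very coarse
print(points_per_cycle(4096, 170))   # about 24 points per cycle: much smoother
```

With interpolating oscillators such as oscili, a coarsely sampled top partial is read as a jagged approximation of a sine, so quadrupling the table size buys a noticeably cleaner result here.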

Assignment
1) Study and review the material in this chapter, and in the corresponding pages of the Csound Reference Manual. Jot down any questions you have (before you forget them) and bring up these questions in your next lab or at the beginning of our next class. Do this each week.

2) Using the examples within this chapter as initial models, create two or three simple Csound orchestra files, and a few brief scores for these orchestras to play. Your scores should include functions using gen9 and gen10. Compile soundfiles from these orchestra and score files and play these soundfiles, or else run Csound in real-time mode, and make sure that you are getting the sounds that you expect. Be adventurous.

Eastman Csound Tutorial: End of Chapter 1
