(This rant is mirrored from Cadaver's site)
This is another not so deep rant, dealing with things mainly at concept level instead of going deep into the code. However, at the end there's a link to a simple example musicroutine written exclusively for this rant. You should be familiar with programming the SID registers to understand the ideas presented here.
This rant is very much inspired by the music & musicroutine related articles written by Jori Olkkonen (YIP) and published in the “C-lehti” magazine during the year 1987. He took a similar approach, not concentrating on actual ASM code but handling things on a concept level.
There are already hundreds of musicroutines in countless C64 music editors. But still, I think every musically-minded coder should try writing their own C64 musicroutine one day.
Basically, the task of a musicroutine is to play music (and possibly sound effects in addition.) At the very basic level a musicroutine has to be capable of:

  * Keeping the tempo of the music
  * Reading note data from memory at the right times
  * Converting notes to frequencies and writing the frequency, waveform & ADSR of each voice to the SID registers
This sounds like a very simple & very oldskool musicroutine. It's been said that the one in Forbidden Forest by Paul Norman is simple & easy to understand.
Now let's talk about a more advanced musicroutine. It should have:

  * Instruments, with their own ADSR & effect parameters
  * Pulsewidth modulation
  * Vibrato & slides
  * Arpeggios (chords) through waveform/arpeggio-tables
  * Hard restart for sharp note attacks
  * Filter control
  * A reasonably memory-efficient music data format
  * Possibly sound effect support
Traditionally, C64 music routines are frame-based. That means they're called from a raster interrupt during each frame (50 times/second for PAL) or by other means of keeping a steady timing.
So, the music routine can count the amount of frames it has been called to generate the tempo of the music. Typically this means decreasing a note duration counter each frame, and when this counter reaches zero, it is time to fetch a new note.
The musicroutine also has to loop through all the 3 voices for multi-voice music. So it's essential to use the X or Y index registers for all accesses to voice variables, to avoid having to write the same code 3 times (in *extremely* optimized musicroutines, like the one in John Player by Aleksi Eeben this rule is deliberately broken!)
The usual flow of musicroutine execution is:

  * Process one frame of 1st voice
  * Process one frame of 2nd voice
  * Process one frame of 3rd voice
  * Process one frame of non-voice specific things (filter!)
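The duration-counter and per-frame flow described above could be sketched like this in Python (concept level, like the rest of this rant; all names and the note data are made up for illustration):

```python
# Concept-level sketch of frame-based playback: each voice has a
# duration counter that is decreased every frame, and a new note is
# fetched when it reaches zero.

class Voice:
    def __init__(self, pattern):
        self.pattern = pattern      # list of (note, duration) pairs
        self.pos = 0
        self.counter = 0
        self.note = None

    def frame(self):
        # When the duration counter hits zero, it's time for a new note
        if self.counter == 0:
            self.note, self.counter = self.pattern[self.pos % len(self.pattern)]
            self.pos += 1
        self.counter -= 1

def play_frame(voices):
    # Process one frame of each voice in turn; non-voice specific
    # things (like the filter) would be handled after this loop
    for v in voices:
        v.frame()

v = Voice([(60, 3), (64, 2)])
for _ in range(4):
    play_frame([v])
```

A real routine would of course loop the 3 voices with an index register instead of Python objects, but the counter logic is the same.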
Remember that SID registers are write-only. Therefore you need your own way of storing, for example, the voice frequency to be able to change it smoothly from frame to frame. An easy approach is to have a “ghost register” for every SID register and at the end of musicroutine execution, dump all the ghost register values to the SID itself. However, not every register has to change on every frame, so this approach wastes some time. It must be noted, though, that this approach produces the best, sharpest sound quality, because there's as little delay as possible between the SID writes.
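The ghost register idea can be sketched like this (Python standing in for asm; the poke callback is a stand-in for an actual hardware write):

```python
# Sketch of "ghost registers": all writes during the frame go to a RAM
# copy, which is dumped to the chip in one tight burst at the end.

SID_BASE = 0xD400
ghost = [0] * 25              # one shadow byte per SID register

def ghost_write(reg, value):
    ghost[reg] = value        # cheap RAM write, and readable later

def dump_to_sid(poke):
    # Burst all 25 registers out back-to-back, for as little delay
    # as possible between the SID writes
    for reg, value in enumerate(ghost):
        poke(SID_BASE + reg, value)

ghost_write(0x00, 0x25)       # voice 1 frequency low byte
ghost_write(0x01, 0x1D)       # voice 1 frequency high byte
written = {}
dump_to_sid(lambda addr, val: written.__setitem__(addr, val))
```

In asm the dump is typically an unrolled run of LDA/STA pairs; the point is that the RAM copy is both writable and readable, unlike the SID itself.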
Another important building block of music: it probably sounds better if it is in tune… Frequency tables for notes exist in the C64 User's Guide and the C64 Programmer's Reference Guide, or you can also calculate one yourself. It's usually easiest & fastest to just store frequencies of all notes, all octaves to a lookup table and get them from there when needed.
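If you want to calculate the table yourself: the SID frequency register value is f_out * 2^24 / clock, where the clock is 985248 Hz on PAL. Here's a sketch (the note numbering, C-0 = 0 so that A-4 = 57, is this sketch's own assumption):

```python
# Calculating a PAL frequency table: SID register value
# = f_out * 2^24 / clock, tuned so that A-4 = 440 Hz.

PAL_CLOCK = 985248.0

def sid_freq(note):
    f_out = 440.0 * 2.0 ** ((note - 57) / 12.0)   # equal temperament
    return round(f_out * 16777216.0 / PAL_CLOCK)  # 16777216 = 2^24

FREQTAB = [sid_freq(n) for n in range(96)]        # 8 octaves
```

Each octave doubles the register value, which is exactly why the table for the highest octave nearly exhausts the 16-bit range.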
As each multiplication of the frequency by 2 raises the voice one octave, frequency slides and vibrato appear slower in the higher octaves than in the lower. I recommend taking a look at the Rob Hubbard musicroutine dissection in C=Hacking Issue 5, which shows a method to counteract this. Basically, it involves taking a note's frequency, subtracting the neighbour note's (one halfstep lower) frequency from it, and using this difference as the basis for vibrato & slide speed. Smaller speeds are achieved by bit-shifting the value to the right. I used this method in the SadoTracker musicroutine, but be warned: it is kind of slow.
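The Hubbard-style fix boils down to very little code. A sketch (the function name and demo table are made up; any equal-tempered frequency table works the same way):

```python
# Base the vibrato/slide speed on the frequency distance to the note
# one halfstep below; since that distance doubles per octave, the
# effect depth stays musically constant across octaves. Shifting the
# difference right gives the smaller speeds.

def effect_base(freqtable, note, shift):
    return (freqtable[note] - freqtable[note - 1]) >> shift

# Demo: a simple equal-tempered table with an arbitrary scale factor
DEMO_TABLE = [round(1000.0 * 2.0 ** (n / 12.0)) for n in range(25)]
```

One octave up, the base value is (within rounding) exactly twice as large, so the vibrato sounds equally deep everywhere.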
An instrument has the ADSR values that will be put to use whenever a note is played using that instrument. For echo-like effects, there could also be the possibility to modify the sustain/release register in the middle of a note with a pattern data command.
Usually in the beginning of a new note, the pulsewidth is initialized to the fixed value given in the instrument data. Some routines might also have the possibility to leave it uninitialized if so desired (continuing the pulsewidth modulation of the previous note.)
As the note plays on, the pulsewidth can then be changed (modulated) for interesting effects. There are many ways to assign the parameters for this, one way is:
  * Initial pulsewidth
  * Pulsewidth modulation speed
  * Pulsewidth limit low
  * Pulsewidth limit high
So, the speed will be added to/subtracted from the pulsewidth on each frame, and the limits tell when to change the direction (if the pulsewidth wraps over its whole range from $fff back to $000, an ugly sound is heard, so it's a good idea to prevent that).
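A sketch of that add/subtract-with-limits logic (parameter names are made up, the idea is from the text above):

```python
# Pulsewidth modulation with limits: the speed is applied each frame
# and the direction flips at the limits, so the 12-bit value never
# wraps from $fff back to $000.

class PulseMod:
    def __init__(self, init, speed, limit_lo, limit_hi):
        self.width = init
        self.speed = speed
        self.limit_lo = limit_lo
        self.limit_hi = limit_hi

    def frame(self):
        self.width += self.speed
        if self.width >= self.limit_hi:
            self.width = self.limit_hi
            self.speed = -abs(self.speed)   # start subtracting
        elif self.width <= self.limit_lo:
            self.width = self.limit_lo
            self.speed = abs(self.speed)    # start adding again
        return self.width

pm = PulseMod(0x800, 0x40, 0x200, 0xE00)
widths = [pm.frame() for _ in range(200)]
```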
More advanced routines might have table-based pulsewidth modulation control. The table can contain commands such as “add the value <m> to pulsewidth for <n> frames” or “set pulsewidth to <n>”, as well as a jump command to a different location in the table and a command to end pulse-table execution. I'll be referring to this table-based effect execution idea in other effects too; in arpeggios it's most common.
Vibrato can either be part of the instrument data or be controlled with separate pattern data commands. Vibrato usually needs the following parameters:
  * Time in frames before starting vibrato (unnecessary if using a command)
  * Speed of vibrato (how much the frequency is changed each frame)
  * Width of vibrato (how many frames before changing direction)
It can also be implemented table-based for more possibilities. Note that for the vibrato to stay in tune the frequency must follow the following diagram:
        /\        /\
       /  \      /  \
  ----/    \    /    \  etc.
           \  /
            \/
So you see that when vibrato starts, the first part of pitch going up must only be half as long as the rest, to make the frequency go up & down around the correct frequency of the note.
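That half-length first segment is easy to get wrong, so here's a sketch of it (names are illustrative):

```python
# In-tune vibrato: the very first upward segment lasts only half of
# `width` frames, so the pitch swings symmetrically around the note's
# true frequency instead of averaging sharp.

def vibrato_offsets(speed, width, frames):
    offsets = []
    offset = 0
    direction = 1
    counter = width // 2        # first segment is half length
    for _ in range(frames):
        offset += direction * speed
        offsets.append(offset)
        counter -= 1
        if counter == 0:
            direction = -direction
            counter = width     # all later segments are full length
    return offsets

offs = vibrato_offsets(2, 4, 16)
```

The resulting offsets average out to zero over each full cycle, which is exactly the “stays in tune” property the diagram shows.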
A slide is almost always implemented as a pattern data command rather than something belonging to the instrument data. It requires two parameters:
Slide speed (how much, and to what direction the frequency changes each frame) The duration of the slide
The duration can also be taken from the note duration: the slide can be interpreted as a special case of a note. When the duration ends, the slide ends and the next note is read.
There can be also more advanced slides that stop automatically when a “target” note has been reached. This is called “toneportamento” in the Amiga & PC tracker programs.
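The toneportamento stop condition is just a clamp toward the target. A sketch (names are illustrative):

```python
# "Toneportamento": slide the frequency toward the target note's
# frequency each frame and stop exactly on it instead of overshooting.

def portamento_frame(freq, target, speed):
    if freq < target:
        return min(freq + speed, target)   # sliding up
    if freq > target:
        return max(freq - speed, target)   # sliding down
    return freq                            # already there

freq = 1000
history = []
for _ in range(6):
    freq = portamento_frame(freq, 2000, 300)
    history.append(freq)
```

Note that since the speed rarely divides the distance evenly, the clamp to the exact target value is what keeps the result in tune.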
These are usually a property of the instrument. My typical approach is that every instrument uses the waveform/arpeggio-table to initialize the note's pitch & waveform, even if the note doesn't use any complex waveform/arpeggio effect (like drumsounds) or an arpeggio loop.
The waveform/arpeggio-table usually contains byte pairs; one byte is what to put in the waveform register and the other is the note number, either relative (arpeggios) or absolute (drumsounds). As with table-based effect execution in general, there can (should) be a jump command and a command to end the waveform/arpeggio execution. There can also be special cases, like a waveform value 0 used to indicate that the waveform doesn't change.
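One frame of wavetable execution could be sketched like this (the exact encoding here — $FF for jump, $00 for “keep waveform” — is this sketch's own choice, following the special cases mentioned above):

```python
# Wavetable step: table entries are (waveform, note) byte pairs; the
# note is relative to the played note (arpeggio). Waveform $FF jumps
# (its note byte is the new position), $00 keeps the old waveform.

def wavetable_frame(table, pos, base_note, cur_wave):
    wave, note = table[pos]
    if wave == 0xFF:                  # jump command
        pos = note
        wave, note = table[pos]
    if wave != 0x00:                  # $00 = waveform doesn't change
        cur_wave = wave
    return pos + 1, base_note + note, cur_wave

# Major chord arpeggio: root, +4 and +7 halfsteps, then jump back
ARPTABLE = [(0x11, 0), (0x11, 4), (0x11, 7), (0xFF, 0)]

pos, wave, notes = 0, 0, []
for _ in range(6):
    pos, pitch, wave = wavetable_frame(ARPTABLE, pos, 60, wave)
    notes.append(pitch)
```

With one such table per chord type, any base note plays that chord — which is what makes the relative note numbers so memory-efficient.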
There can also be a pattern data command for changing the arpeggio table startlocation for the coming notes. This can be used for example to play different chords without having to create a separate instrument for each of them.
The simplest hard restart method is the one old players such as Hubbard's use. It works well on both PAL & NTSC machines and isn't sensitive to timing.
To execute the hard restart, clear the waveform register's gate bit and set ADSR registers to 0 a couple of frames before the note's end (for example, when the decreasing duration counter hits the value 2)
When starting a new note after a hard restart, the registers should be written to in this order: Waveform - Attack/Decay - Sustain/Release. This is actually the same order they appear in memory and ensures the sharpest attack possible.
The test-bit hard restart method is used in newer players such as the JCH player and DMC. It works reliably on PAL machines only, but gives a nice sharp sound.
2 or more frames before the next note, the ADSR is set to a preset value, such as $0000, $0f00 or $f800 (the ADSR setting can also be skipped), and the gatebit is cleared.
On the first frame of the note, the instrument's Attack/Decay and Sustain/Release values are written first. Then, $09 is written to the Waveform register (testbit + gate).
Only on the next (second) frame of the note, its own waveform value is loaded, and the note is actually heard.
In this method it's important that Attack/Decay and Sustain/Release are always written before Waveform. There might also be a necessity for some delays between them for maximum reliability.
Tied notes are simple: don't reset the gatebit, don't do hard restart, don't reset the pulsewidth, just change the frequency. GoatTracker implements tied notes as infinitely fast toneportamentos (slides).
This is a good opportunity for another table-based effect. There can be commands like “add <m> to cutoff frequency for <n> frames”. The additional trouble is that either only one voice at a time must control the filter (there can be a pattern data command to set that), or the filter operation must be completely separate from the instruments, operated only by pattern data commands.
Playing sound effects within a musicroutine can be done in at least two ways:
It's good to be memory-effective when storing the music data. This chapter offers some suggestions for possible encodings.
It's most likely going to be 8-bit for efficiency. If you want, for example, 128 different patterns, one possible encoding for sequence data bytes could be:
  * $00-$7f - pattern numbers
  * $80-$bf - transpose command
  * $c0-$fe - repeat command
  * $ff - jump command, followed by jump position byte
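A decoder for such a byte encoding is a simple chain of range checks. A sketch (the byte ranges are the ones from the text; the returned tuple format is my own):

```python
# Decode one sequence data byte into (command, argument).

def decode_seq_byte(b):
    if b <= 0x7F:
        return ("pattern", b)
    if b <= 0xBF:
        return ("transpose", b - 0x80)
    if b <= 0xFE:
        return ("repeat", b - 0xC0)
    return ("jump", None)     # the next byte holds the jump position
```

In 6502 asm these range checks become CMP/BCC chains, which is why choosing range boundaries carefully keeps the decoder fast.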
Here it gets a lot more complicated. How do you represent notes with the smallest amount of bytes possible? Look at Rob Hubbard's routine (see the link above) for one approach, it uses bits to indicate whether additional bytes (like instrument number, possible special commands) are coming in addition to the note itself.
The approach I used in SadoTracker is a bit different; there, bytes have encoded meanings like in the sequence data above. I don't remember the exact meanings of all byte ranges, but it went something like this:
  * $00-$5f - Note numbers
  * $60-$bf - Note duration commands
  * $c0-$df - Instrument change commands
  * $e0-$ef - Arpeggio change commands
  * $f0-$fe - Other commands (like setting tie-notes on and off)
  * $ff - End of pattern-mark
GoatTracker musicroutines use the following pattern data format:
  * $00-$5d - Note numbers with command & commanddata bytes
  * $5e - Keyoff (gatebit reset) -||-
  * $5f - Rest (no action) -||-
  * $60-$bd - Note numbers without command & commanddata bytes
  * $be - Keyoff -||-
  * $bf - Rest -||-
  * $c0-$fe - Long rest, $fe is 2 rows, $fd 3 rows etc.
  * $ff - End of pattern-mark
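As a sketch, decoding those byte ranges looks like this (the tuple format is my own; “+cmd” marks bytes that are followed by command & commanddata bytes):

```python
# Decode one GoatTracker-style pattern byte into (meaning, argument).

def decode_gt_byte(b):
    if b <= 0x5D:
        return ("note+cmd", b)
    if b == 0x5E:
        return ("keyoff+cmd", None)
    if b == 0x5F:
        return ("rest+cmd", None)
    if b <= 0xBD:
        return ("note", b - 0x60)
    if b == 0xBE:
        return ("keyoff", None)
    if b == 0xBF:
        return ("rest", None)
    if b <= 0xFE:
        return ("longrest", 0x100 - b)   # $fe = 2 rows, $fd = 3 ...
    return ("endpattern", None)
```

Notice how the long rest counts down from $ff: that encodes runs of empty rows in a single byte, which is what makes sparse patterns compact.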
Basically you should store all instrument properties in such a format that using them is as fast as possible. For example, a fast but not very memory-efficient way to store the pulsewidth is as a 16-bit value. A more memory-efficient way would be to store only 8 bits and assume the 4 least significant bits to be always 0. Furthermore, if the nybbles were reversed in memory, one could use code like this:
  lda pulsewidth,y
  sta $d402,x
  sta $d403,x
Here both the high- and low-byte of the pulse are initialized with the same value. The high 4 bits also get copied to the low 4 bits but it doesn't hurt.
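To convince yourself the nybble-swap trick works, here's the arithmetic in Python (function names are made up):

```python
# The stored byte, written unchanged to both $d402 and $d403,
# reproduces the intended 12-bit pulsewidth apart from the harmless
# low 4 bits, because the SID only uses the low nybble of $d403.

def stored_byte(pulse12):
    # keep the high 8 of the 12 bits, with the nybbles reversed
    hi = (pulse12 >> 8) & 0x0F
    mid = (pulse12 >> 4) & 0x0F
    return (mid << 4) | hi

def effective_pulse(byte):
    d402 = byte               # low byte register takes all 8 bits
    d403 = byte & 0x0F        # SID ignores the high nybble of $d403
    return (d403 << 8) | d402

pulse = effective_pulse(stored_byte(0x840))
```

Only the bottom 4 bits differ from the intended value, an error far too small to hear.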
As you start writing your musicroutine, you'll probably at first be amazed at how fast it is; when you add features you'll be cursing at how slow it has become. But careful optimization can bring the speed back. Some ideas for optimization are presented here.
If you've written a good musicroutine you'll probably want to make music with it efficiently. Editing music data with an ML monitor or directly in source code can hardly be called efficient, therefore an editor is needed. I'll warn you: this is going to be a lot of additional work.
To see the musicroutines I've personally written, you can take a look at:
A really simple musicroutine written exclusively for this rant (contains just note data, no sequencedata and only pulsewidth modulation effect)
GoatTracker (look at the files player1.s - player2.s. Many features, reasonably fast musicroutine(s), sound effect support in player2.s)
GoatTracker V2 (look at the files player1.s - player3b.s. Even more features while keeping approximately same rastertime as old GoatTracker)
NinjaTracker (minimalistic and fast playroutine centered on the wavetable - it does note init, arpeggio and vibrato/slides all in one.)
Also, there are regular playroutine dissections in the SIDin online magazine.
If going in-depth with this topic, the rant could easily end up being 50 pages long, so I think I'll just stop here. In fact, nothing but your imagination (and C64's available memory) limits what you can do within a musicroutine.
Lasse Öörni - firstname.lastname@example.org