OK, thanks guys, those are useful answers, and I'll ask more questions later on, but first, perhaps I should go back and explain what my problem is ...
It's all a case of exactly *when* you change the row timing, and the interaction of that new timing with some of the effects commands, and especially, how to process things efficiently so that we're not paying an undue cost for something that rarely (if ever) happens.
The issue that I have is that you can change the row speed, and that it affects the length of the exact row that you issue the new speed on.
So if you're on the 1st row in a pattern, and you change Speed1 (with effect 09xx), then that speed is used (or it appears to be used, to me) as the duration of that row.
You can set that effect *anywhere* in the row, which means that the driver will process some, or all, of the other channels' notes/effects before the row speed is changed.
The problem is that some effects commands (particularly Note Delay and Note Cut) need to know what the exact row speed *will* be, and they need to know it as soon as they're encountered while processing that row's notes/effects.
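To make that concrete, here's a rough sketch in C of why those two effects care about the final row speed. The names are just placeholders of mine (not the actual driver or Deflemask code), and the "x >= speed means it never fires" rule is the usual tracker behavior that I'm assuming applies here:

```c
/* Minimal sketch (placeholder names, not the real driver code): the
 * per-tick update for one channel.  Note Delay (EDx) starts the note on
 * tick x, Note Cut (ECx) silences it on tick x, and the usual tracker
 * rule is that x >= the row's speed means the effect never fires.  That
 * comparison is exactly where the *final* row speed is needed, and a
 * 09xx later on the same row can still change it. */
typedef struct {
    int delay_tick;     /* from EDx, -1 if no Note Delay on this row */
    int cut_tick;       /* from ECx, -1 if no Note Cut on this row   */
    int note_playing;
} channel_state_t;

void channel_tick(channel_state_t *ch, int tick, int row_speed)
{
    if (ch->delay_tick >= 0 && ch->delay_tick < row_speed &&
        tick == ch->delay_tick)
        ch->note_playing = 1;              /* start the delayed note */

    if (ch->cut_tick >= 0 && ch->cut_tick < row_speed &&
        tick == ch->cut_tick)
        ch->note_playing = 0;              /* cut the note */
}
```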
Deflemask can get around this chicken-and-egg problem just by scanning the row first and setting the new row speed (if there is one) before processing any of the other notes/effects on the row.
We can't do that (practically) in a sound driver.
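For reference, the pre-scan idea looks something like this. This is just a C sketch of what I *assume* Deflemask does, with made-up types and names, definitely not its actual source:

```c
/* Sketch of the pre-scan approach (assumed, with made-up names).
 * The whole row is visible in memory, so the tracker can look for a
 * speed change first, and every other effect then sees the final speed. */
#define MAX_CHANNELS 16            /* arbitrary, just for this sketch */

enum { FX_NONE = 0x00, FX_SET_SPEED1 = 0x09 };

typedef struct {
    int note;
    int effect;        /* effect command, e.g. 0x09 = set Speed 1 */
    int param;         /* effect parameter (the "xx" in 09xx)     */
} channel_cell_t;

typedef struct {
    channel_cell_t chan[MAX_CHANNELS];
} pattern_row_t;

static void process_channel(channel_cell_t *cell, int row_speed)
{
    (void)cell; (void)row_speed;   /* notes, Note Delay, Note Cut, etc. */
}

void process_row(pattern_row_t *row, int num_channels, int *row_speed)
{
    /* pass 1: find any 09xx anywhere on the row */
    for (int ch = 0; ch < num_channels; ch++)
        if (row->chan[ch].effect == FX_SET_SPEED1)
            *row_speed = row->chan[ch].param;

    /* pass 2: process notes/effects with the final speed already known */
    for (int ch = 0; ch < num_channels; ch++)
        process_channel(&row->chan[ch], *row_speed);
}
```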
I need to decide how to work around that problem.
BTW ... the issue here is really with the "Note Delay" effect.
Is that effect even useful anymore, now that you can achieve so much with the Envelope/Arpeggio/Wave Macros? (Which AFAIK eliminate the need for the weird 1:7 timings that I've seen used).
FYI ... just ignoring that *one* effect would make all of my problems go away (the Note Cut *can* be processed a little less efficiently, but still work).
One solution is to automatically create a separate invisible "timing" channel, and move all of the changes in row speed into that channel while converting the .dmf file.
It's a bit of a PITA, and will waste some memory for songs that don't change their timings, and cost a *little* bit of extra processing time ... but it has the advantage that it will work reliably, and it keeps the what-you-see-is-what-you-hear behavior that Deflemask offers.
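In converter terms, the idea is roughly this (a C sketch with invented structure names, just to show the shape of it, not the real conversion code):

```c
/* Sketch of option 1 (invented names): while converting the .dmf, pull
 * every 09xx out of the audible channels and into an extra "timing"
 * channel that the driver processes *first* on every row. */
#include <stdlib.h>

enum { FX_NONE = 0x00, FX_SET_SPEED1 = 0x09 };

typedef struct { int note, effect, param; } cell_t;

typedef struct {
    int     rows;
    int     channels;
    cell_t *data;       /* rows * channels cells, row-major           */
    cell_t *timing;     /* rows * 1 cells, the invisible timing track */
} song_t;

int extract_timing_channel(song_t *song)
{
    song->timing = calloc(song->rows, sizeof(cell_t));
    if (song->timing == NULL)
        return -1;

    for (int r = 0; r < song->rows; r++) {
        for (int c = 0; c < song->channels; c++) {
            cell_t *cell = &song->data[r * song->channels + c];
            if (cell->effect == FX_SET_SPEED1) {
                song->timing[r].effect = FX_SET_SPEED1;  /* move the speed change ... */
                song->timing[r].param  = cell->param;
                cell->effect = FX_NONE;                  /* ... and blank the original */
                cell->param  = 0;
            }
        }
    }
    return 0;
}
```

The driver-side cost is then just one extra (mostly empty) channel read per row, which is the bit of memory and processing time I mentioned above.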
The alternative solution is to strip out the timing info, and have the driver use its own timing ... for example, the 8 different row speeds that I suggested earlier.
The advantage here is that it is practically free to process, takes almost no extra memory, gives you very fine-grained control over the tempo of a song, and makes changing the tempo *inside-the-game* very, very easy (for boss-rush tracks where the tune gets faster and faster).
But ... even though those 8 speed-settings could be initialized from the settings in the .dmf file when it is converted, you would lose the flexibility to change the row-speed at *any* time within the song data itself, and so you'd lose a bit of the what-you-see-is-what-you-hear advantage.
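For comparison, option 2 on the driver side is basically just this (again, invented names, just a sketch):

```c
/* Sketch of option 2 (invented names): the driver owns a tiny table of
 * row speeds, filled in from the .dmf's settings at conversion time.
 * Song data (or the game code itself) only ever picks an index, so a
 * boss-rush speed-up is a single byte write. */
#include <stdint.h>

static uint8_t row_speed_table[8] = { 7, 6, 6, 5, 5, 4, 4, 3 };  /* example values */
static uint8_t speed_index = 0;

void set_song_speed(uint8_t index)      /* called from the game */
{
    speed_index = index & 7;
}

uint8_t get_row_speed(void)             /* called by the driver at the start of each row */
{
    return row_speed_table[speed_index];
}
```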
Now, as a programmer, I find the 2nd solution to be the most elegant and efficient ... but I'm not the guy who makes the music.
How do you guys feel? Which alternative sounds better to you, and/or can you just live without "Note Delay"?
Or ... perhaps I'll just take a very long walk and *possibly* figure out a way to work around the problem.