But something interesting happens: wherever the channel's waveform playback pointer happens to be, the byte at that position in waveform memory gets replaced with the byte written to the DDA port. So you can corrupt or change the waveform over time, in very specific ways, to create timbre-type effects.
I forgot to ask you about this when you mentioned this a week ago ...
But, if you're just writing to wherever-in-waveform-ram the pointer is at the time that you do the write, then how can you say that it's a "very specific" change?
Worst case would be that you'd always be writing to exactly the same location in waveform RAM.
Is there some reasonably efficient way that you know of to predict where the waveform pointer is at some specific time?
Well, that's why I tend to call it waveform corruption - because it's not exact. You'd have more control if you could do this with the TIMER instead of at 60Hz, but either way you'd need a fixed-point counter - and, as you've already guessed, it's rather coarse (at least at 60Hz). You'd need to know the note and octave of the channel, and from that you'd know what fixed-point counter value to use with it - or however you want to sync the corruption part - so that the timbre effect scales with the notes. The idea is a rate of corruption relative to the frequency of the channel; that way you get repeatable results, even if they vary somewhat.
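Just to make the counter idea concrete, here's a rough, untested C sketch of what I mean. The register addresses assume the usual PSG mapping (channel select at $0800, waveform/DDA data port at $0806) - double-check those against your own notes - and corrupt_tick plus the 8.8 step value are just hypothetical names; you'd derive the step from the channel's note however your driver stores pitch:

#include <stdint.h>

#define PSG_CH_SELECT  (*(volatile uint8_t *)0x0800)   /* channel select (assumed)    */
#define PSG_WAVE_DATA  (*(volatile uint8_t *)0x0806)   /* waveform/DDA data (assumed) */

typedef struct {
    uint16_t acc;    /* 8.8 fixed-point accumulator                   */
    uint16_t step;   /* per-frame step, derived from the note/octave  */
    uint8_t  value;  /* 5-bit sample value to stamp into waveform RAM */
} corrupt_t;

/* Call once per vsync (60Hz) for each channel using the effect.  The
 * accumulator wraps at a rate tied to the channel's frequency, so the
 * corruption cadence scales with the note and stays roughly repeatable. */
void corrupt_tick(uint8_t channel, corrupt_t *c)
{
    uint16_t prev = c->acc;
    c->acc += c->step;
    if (c->acc < prev) {                  /* counter wrapped this frame     */
        PSG_CH_SELECT = channel;          /* point at the playing channel   */
        PSG_WAVE_DATA = c->value & 0x1F;  /* lands wherever the playback    */
    }                                     /* pointer happens to be          */
}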
OK, I guess that you could switch to DDA mode and then back to normal mode and that would reset the waveform pointer, and you could time it from there ... but ... that sounds like a lot of effort!
No. But... the mednafen author (I always call her that on these forums, because I don't know if she likes her real name used here) discovered a way to 'walk' the waveform pointer. The channel has to be off to do it, but you can reset the waveform pointer and then "walk" it. It had something to do with... meh - I forget the reason behind it. But I had it saved on my blog somewhere. I should probably write it down in my notes folder.
Also, I've asked why they limited the number of waves in Deflemask to only 32, and basically there's no good reason... In fact, they actively don't want people to push the limits of the system, which is pretty stupid if you ask me. Their reasoning is that they want people to compose music that sounds 'genuine', but here's the thing: people aren't gonna stop composing songs that don't show off the hardware just because you allow them to have more than a hundred waves... Oh well, I can live with only 32...
Hah! Considering how Deflemask handles long samples for PCE tunes, you'd think they would think otherwise. "Let's limit the computer with some artificial limitation because it doesn't fit our point of view" - that's a great attitude to have for a tracker, especially for something as simple as expanding waveform memory that's still only a tiny fraction of the long samples they already support.
So from what you say, I could do this on 2 channels and not have to worry about CPU time, right?
You could do it on all channels and not even sweat it. Let me give you a relative example of CPU resource: if you did this on all six channels, it would still be less CPU resource than playing a single long sample on just one channel at a 7kHz playback rate. Playing a long sample on a single channel would take up 3-4x the resource it would take to do what you're asking for on all six channels. If someone tells you different, they don't know what they're talking about. I mean, it's not like the PCE sound chip has a slow interface like the YM2612 in the Genesis, where you can't just blast whatever number of bytes in sequence to it (you have to wait for the delays - read the delay/ready flag). The PCE sound chip has no delays; it's just a simple matter of copying over 32 bytes per channel. No sweat for the HuC6280.
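For reference, the whole "no delays" point boils down to something like this untested sketch. The addresses assume the usual PSG map ($0800 channel select, $0804 channel control, $0806 waveform data), and the control bits here are from memory, so verify them before trusting it:

#include <stdint.h>

#define PSG_CH_SELECT (*(volatile uint8_t *)0x0800)   /* channel select  */
#define PSG_CH_CTRL   (*(volatile uint8_t *)0x0804)   /* on/DDA/volume   */
#define PSG_WAVE_DATA (*(volatile uint8_t *)0x0806)   /* waveform data   */

/* Replace one channel's 32-byte waveform.  No ready flag, no delay loop -
 * just 32 straight writes. */
void psg_load_wave(uint8_t channel, const uint8_t wave[32])
{
    PSG_CH_SELECT = channel;
    PSG_CH_CTRL   = 0x00;                    /* channel off: writes go to waveform RAM */
    for (uint8_t i = 0; i < 32; i++)
        PSG_WAVE_DATA = wave[i] & 0x1F;      /* 5-bit samples */
    PSG_CH_CTRL   = 0x9F;                    /* channel back on, full volume (bits assumed) */
}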
I use different clocks depending on the song. I usually go custom so that I can have more control over the tempo, but I don't know if I should be using custom clock speeds or not. If not, then I can do the block transfer thing to get a tempo relatively close to what I want (9xx for even lines and Fxx for odd lines in Deflemask).
From a programming perspective, it's just easier to stick with vsync timing (60Hz). Though in my book, multiples of 60Hz (using the TIMER to do this) would be an OK trade-off: 120Hz, 180Hz - something with a fixed interval within the whole frame (which relates to the game loop and logic). Anything higher than 60Hz requires special interrupts. And anything that's not a multiple of 60Hz means the music engine has to drift out of sync with the game logic (not out of sync with playing music; just in terms of where that music engine gets called relative to the frame's logic layout). It can complicate things. It has its own overhead (CPU resource). It can impose limitations in strange ways (because of timing). 60Hz and 120Hz are the nicer options to work with, 60Hz being optimal (the NTSC setting for Deflemask).
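Rough sketch of the TIMER route for a 120Hz tick (untested; the ~6.99kHz timer base and the $0C00/$0C01 register layout are from memory, so verify them, and music_update is just a stand-in for whatever your driver's tick routine is):

#include <stdint.h>

#define TIMER_RELOAD (*(volatile uint8_t *)0x0C00)   /* 7-bit reload value (assumed) */
#define TIMER_CTRL   (*(volatile uint8_t *)0x0C01)   /* bit 0 = enable (assumed)     */

#define TIMER_BASE_HZ 6992u   /* assumed: ~7.16MHz CPU clock / 1024 */

extern void music_update(void);   /* your sound driver's tick (stand-in) */

/* Start the TIMER so its IRQ fires at roughly tick_hz (e.g. 120). */
void timer_start(uint16_t tick_hz)
{
    TIMER_RELOAD = (uint8_t)(TIMER_BASE_HZ / tick_hz - 1);   /* period = reload + 1 ticks */
    TIMER_CTRL   = 0x01;
}

/* Timer IRQ handler: tick the music driver here at 120Hz, keep game logic on
 * the 60Hz vsync, and the two stay in a fixed 2:1 relationship per frame. */
void timer_irq(void)
{
    music_update();
}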
Michirin9801:
Have you seen this trick before:
http://www.pcedev.net/audio/sub_waveform6.xm

If you load it up in a tracker, check out pattern #2 first (it only has two channels playing). See how the waveforms are set up? The main waveform is a triangle wave and the other is a saw wave (with a delayed onset). The triangle's samples exist only in the upper band of the amplitude range. The saw waveform is played on the other channel, and only has samples in the lower band of the amplitude range. Can you guess why? Normally when you play samples, their amplitudes add together. Having them in separate ranges (positive range vs negative range), you get a more pronounced effect as they add together. Channel 4 is the main sound, and channel 2 is used to subtract from the main channel's waveform shape. Volume envelopes are used for emphasis (like controlling the amplitude of a modulator affecting a carrier wave). And the final step is pitch sliding. If you've ever looked at some synths' waveform output, it looks like two waveforms going out of phase with one another (besides other stuff happening). So if you look at the channel, there are very precise pitch-sliding FX to control how the subtractive channel stays in phase with the main channel (instead of letting it endlessly drift). This is difficult to do in a tracker, because the depth of the pitch sliding directly depends on the main channel's note or frequency - and without macros, it has to be done manually for every note or chord. The lower the note, the bigger the steps need to be in the pitch FX column. Think of it as a specialized version of detuning two channels.
Now check out pattern #1: only two channels are doing this effect, but it's distinct enough to be heard in a chord - giving the whole chord a timbre-like effect even when the other contributing channels have no timbre change at all. I call it
subtractive waveform phasing. But I'm sure it has a real name, as someone has probably already thought of the idea.
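If it helps, here's a small C sketch (untested, just building the tables) of the waveform layout described above: a triangle confined to the upper half of the 5-bit range for the main channel, and a saw confined to the lower half for the subtractive channel. How you get them into the tracker or your driver is up to you:

#include <stdint.h>

static uint8_t tri_upper[32];   /* main channel: samples stay in 16..31        */
static uint8_t saw_lower[32];   /* subtractive channel: samples stay in 0..15  */

void build_waves(void)
{
    for (uint8_t i = 0; i < 32; i++) {
        /* triangle ramp 0..15..0, shifted into the upper band */
        uint8_t tri = (i < 16) ? i : (uint8_t)(31 - i);
        tri_upper[i] = 16 + tri;

        /* downward saw across the lower band */
        saw_lower[i] = 15 - (i / 2);
    }
}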