Preamble
========

I have this scanner infrastructure... a natural sibling to this would be a filter infrastructure.
One could easily add it for input channels in the common worker thread.

Now, I don't want to reimplement all the nice filter code out there... I want to start with LADSPA plugins.
Well, I have already experimented a bit with LADSPA, but perhaps I should go directly for LV2.
Anyhow, the basic structure should not care about this detail.

Having used ardour, the idea of not applying the filters at just one fixed point comes to mind.
When I am a bit more flexible there, I can quite well end up with an audio engine... just like ardour.

These levels of filters come to my mind:

1. modify single inputs before entering the mixer,
2. modify the output signal after mixing,
3. modify several input signals in combination, replacing the default mixer.

For the third one, I would basically create another level of inputs that depend on other inputs. The third one could also encompass the others as special cases, sort of.
Now, I want to keep the impact on normal program work minimal. Also, when actually using filters, it should be reasonably efficient.
That means that input filters which take CPU time must run in the input threads; that specialization is mandatory. When I want group filters, there's the big question of the danger of adding extra work between the output buffers and the inputs.
Hm. The output threads could do the mixing and filtering on their side... that would mean that the mixer has to keep the (preamplified?) input buffers for them while fetching the next ones... or just wait until all outputs are done working with the inputs.

That's a good decision anyway... keeping the non-parallel stuff at a minimum!
Regardless of filter plugins, I should do that.

And back to ardour... of course the idea of making dermixd a JACK client comes up. That would give me an alternative for recording and (scripted) mastering.
The interesting question there is how well the idea of having every input as a separate thread will scale for massive multichannel work with JACK, which is currently a single-threaded affair anyway.
But that has to be investigated separately. Perhaps one could also wait for a stable release of JACK first... Ha! Funny...


Filters and the prebuffer
=========================

For zeroscanning, I have that first level of buffering deep inside the inputs... should that be used as a host for filters already? For some time I thought: Yes!
But now... I have doubts. The reason: Responsiveness.
When you change some filter parameter, you want to hear the effect -- soon. The big prebuffer would either need to empty its bowels on each change, seek back in the source file and reread stuff, or, well, it would take one buffer's fill for the changes to take effect. It's like that with the low-cost equalizer of libmpg123:
it's only responsive when you are not using the zeroscanning prebuffer (at least not a large one).

So, it's settled: The filter chain will be modeled after the scanner chain, and it will run in the input worker thread, during play(). The volume / resampling according to the speed setting will go into play() as well.
