Re: [linux-audio-dev] Re: Plug-in API progress?


Subject: Re: [linux-audio-dev] Re: Plug-in API progress?
From: David Olofson (audiality_AT_swipnet.se)
Date: Mon Sep 27 1999 - 18:43:50 EDT


On Mon, 27 Sep 1999, Benno Senoner wrote:
> On Sun, 26 Sep 1999, David Olofson wrote:
> >
> > An RTL driver can easily catch events and time them within 5 µs or so on an
> > average Celeron box. The hardware can be more troublesome...
>
> Yes, but audiality will run as a userspace process on most systems,
> therefore we should include a solution optimized for that case:
> ignore (do not generate/process) timestamps for realtime data,
> like MIDI input.

Actually, it doesn't matter in what context the engine runs, as long as the
drivers can provide correct timing info for audio data. But yes, if an API
for this is not available, or the user doesn't consider this extra resolution
useful, we can just clear all timestamps so that plug-ins will handle all
events before they start processing. (Plug-ins don't need to be aware of this
possibility.)
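As a minimal sketch of that fallback (struct and function names are hypothetical, not from any actual Audiality header): if the host zeroes every timestamp, a plug-in's ordinary timestamped-event loop consumes all events before the first sample, with no special case in the plug-in.

```c
#include <stddef.h>

typedef struct {
    unsigned timestamp;  /* sample offset within the fragment */
    int      type;
    float    value;
} event_t;

/* Zeroing every timestamp makes a plug-in's normal event loop
 * handle all events before it processes the first sample --
 * the plug-in needs no awareness of this fallback. */
void clear_timestamps(event_t *events, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        events[i].timestamp = 0;
}
```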

> For realtime MIDI input we have to live with the scheduling jitter, but since
> it is below/within the MIDI byte transfer time, we are as good as or better
> than hardware counterparts.

Yep. And if anyone believes there's more info to get from a MIDI port, there's
always RTL, and the "timestamped MIDI events" solution. However, as most MIDI
controller devices have a scan rate that won't even allow ms accuracy, I guess
it's pretty pointless...

[...]
> Not to mention weird things like plugin-B getting the new parameter change,
> while plugin-A was already processed and generated data with the old parameter
> settings
> = unwanted "analog feel" ??
> :-)

<mode tool="chainsaw" >:-> >
Hmm... I didn't even think about events being sent to multiple plug-ins.
Indeed, that would break the attack of the analog bass drum that uses two
envelope-controlled resonant filters in parallel to get the right punch.
Believe me, I've seen (uhm, heard) this, and I won't have it again... (BTW,
samplers are *samplers* - not workaround solutions. Imagine more dynamic sounds
than a simple bass drum...)
</mode>

[...]
> Oh, the above example doesn't cover the "more than 1 event per fragment" case,
> but it's almost trivial to implement without adding much overhead.
> (It can be implemented by wrapping the above code fragment in a
> while (--num_events_in_this_fragment) { ... } loop or so.)
> Right, David?

Yes, you can do it as I did in some previous example - with an outer loop
decoding events. The only difference is that with the new timestamp-per-event
system, you'll get an extra check for samples_to_process == 0 (that is, the
inner loop not doing anything) for each extra event on the same timestamp.
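A sketch of that per-fragment loop, under the interpretation above (all names are hypothetical illustrations, not an actual Audiality API): the outer loop decodes timestamped events, the inner loop processes the samples between consecutive timestamps, and two events sharing a timestamp simply make the inner loop run for zero samples.

```c
#include <stddef.h>

typedef struct {
    unsigned timestamp;   /* offset in samples from fragment start */
    int      type;
    float    value;
} event_t;

static float gain = 1.0f;  /* example parameter driven by events */

static void handle_event(const event_t *e)
{
    gain = e->value;       /* apply the parameter change */
}

void process_fragment(float *buf, size_t frames,
                      const event_t *events, size_t num_events)
{
    size_t pos = 0;
    for (size_t ev = 0; ev <= num_events; ++ev) {
        /* Process up to the next event, or to the end of the fragment. */
        size_t until = (ev < num_events) ? events[ev].timestamp : frames;
        size_t samples_to_process = until - pos;
        while (samples_to_process--)    /* zero iterations when two     */
            buf[pos++] *= gain;         /* events share a timestamp     */
        if (ev < num_events)
            handle_event(&events[ev]);
    }
}
```

Note that clearing all timestamps to 0 degenerates this into "handle every event, then process the whole fragment", which is the fallback mentioned earlier.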

> I too, was sceptical about the sample accurate event system but I'm
> now almost 100% convinced, since it doesn't seem to have conceptual
> design flaws to me.

Nice! I hope you're right... :-)

> Of course the sample accurate event processing will slow down considerably
> if you send a huge number of events per fragment, but that's another story,
> and we assume that it will not happen in the real world.

The input data interfaces (plug-ins, as I would like to do it) and, more
importantly, the sequencer should make sure to send sensible amounts of
events. Same thing as with MIDI sequencers: a maxed-out MIDI port is rather
useless...

> For oscillator modulations we will use low frequency wavetables, instead
> of sending myriads of events, stressing David's poor memory management
> system.
> :-)

Exactly. I'll hardly manage to get it *that* efficient... And "parsing" a
control signal buffer will always be the most efficient way it can possibly be
done. It can even be optimized into SIMD code without reducing the timing
accuracy.
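A hypothetical sketch of the wavetable idea (function and table names are my own illustrations): instead of a flood of parameter-change events, the modulator fills a control buffer once per fragment, and the DSP loop reads it sample by sample. That "parsing" is a plain sequential read of one buffer against another, which is exactly the access pattern SIMD units handle well.

```c
#include <stddef.h>

#define TABLE_SIZE 256

/* Fill a control-rate buffer by sweeping through a low-frequency
 * wavetable -- one table read per output sample, no events. */
void fill_control_buffer(float *ctl, size_t frames,
                         const float *table, float phase, float step)
{
    for (size_t i = 0; i < frames; ++i) {
        ctl[i] = table[(size_t)phase % TABLE_SIZE];
        phase += step;   /* low-frequency sweep through the table */
    }
}

/* Apply the control signal as per-sample amplitude modulation. */
void apply_am(float *buf, const float *ctl, size_t frames)
{
    for (size_t i = 0; i < frames; ++i)
        buf[i] *= ctl[i];
}
```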

[...]
> > My goal is to design and implement a system that can cover the whole range
> > from ultra low latency real time to off-line processing - without ending up as
> > something that doesn't do anything too well... (Simple, eh? ;-) The low overhead
> > system of Quasimodo is very inspiring when trying to turn an inherently complex
> > design into something nice, efficient and useful. It remains to be seen what
> > all this results in...
>
> To me the event overhead seems not too heavy compared to other processing
> tasks you have to do when doing DSP stuff in software.

Yes, and with the high quality resampling discussion in mind, we're hardly
expecting *less* expensive DSP code in future systems, are we? Remember the
days when you had to code even the control system of a game in optimised asm
so as not to slow the whole game down significantly? Nowadays, even the
rendering code of a 3D engine is mostly C or C++... and trying to optimize it
doesn't make much difference.

I still think fast code rocks, of course. (Hey, I used to be an asm die-hard...
:-) The Linux kernel is a good place to look for fast code that's still possible
to understand, reuse and improve.

//David

 ·A·U·D·I·A·L·I·T·Y·   P r o f e s s i o n a l   L i n u x   A u d i o
- - ------------------------------------------------------------- - -
    ·Rock Solid      David Olofson:
    ·Low Latency     www.angelfire.com/or/audiality    ·Audio Hacker
    ·Plug-Ins        audiality_AT_swipnet.se           ·Linux Advocate
    ·Open Source                                       ·Singer/Composer



This archive was generated by hypermail 2b28 : Fri Mar 10 2000 - 07:27:12 EST