What do you think of this? Focusrite Firewire DSP Effects
http://www.focusrite.com/product/liquid_mix/
They claim it only adds 2-3 ms of delay to send audio back and forth over FireWire for processing. The EQs/compressors run as VST/AU/RTAS plugins, and it has a hands-on hardware control surface. (And by the way, it will be compatible with both XP and OS X - including Intel Macs.)
Please Creamware - do this!
If it only adds 2-3 ms as they state - and the reviews confirm - then I think it is viable. Many software plugins add that much delay on their own.
One of the reviews stated that the latency is better than other DSP plugin options available (I am guessing the reviewer was referring to PowerCore Firewire?)
The thing is that the audio signals are going to/from the card over the PCI/firewire bus.
I've used quite a lot of these kinds of systems - UAD1, PowerCore, Duende, etc.
They are NOT suitable for realtime operation or 'as-good-as-realtime' 1-2 ms latencies. You'll just get a lot of popping and crackling unless you turn up the audio card's buffer size.
They are intended to be used at the mixing stage, when you can turn up the latency to huge amounts without any significant disadvantages.
The latency CAN be as low as the marketing states, but only if you can run at 32-sample latency on your audio card/CPU, with a simple project.
The latency of these devices is always at least 2x the buffer size of the audio card being used.
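As a back-of-the-envelope check of that rule of thumb, here is a tiny Python sketch; the buffer sizes are illustrative assumptions, not measured values:

```python
# Round-trip latency added by a FireWire/PCI DSP box, assuming the rule
# of thumb above: at least 2x the host audio card's buffer size.
def dsp_roundtrip_ms(buffer_samples, sample_rate_hz):
    """Minimum added delay in milliseconds for one send/return trip."""
    return 2 * buffer_samples / sample_rate_hz * 1000

# The marketing figure (2-3 ms) only holds at tiny buffers:
print(round(dsp_roundtrip_ms(32, 44100), 2))    # 32-sample buffer  -> 1.45 ms
print(round(dsp_roundtrip_ms(256, 44100), 2))   # 256-sample buffer -> 11.61 ms
print(round(dsp_roundtrip_ms(1024, 44100), 2))  # mixing-stage size -> 46.44 ms
```

So the quoted 2-3 ms is only reachable on a system that already runs stably at very small buffers.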
It's interesting that it provides 32 mono channels @ 44.1/48 kHz, but only 8 mono channels @ 88.2/96 kHz. Obviously this box needs approximately four times the DSP power per channel at 88.2/96 kHz. Likewise, it only provides 2 mono channels @ 176.4/192 kHz.
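One plausible reading of those numbers (my assumption, not something Focusrite states): for convolution-style processing with a fixed impulse-response *duration*, per-channel cost scales with the square of the sample rate - twice the samples per second, each convolved against twice as many taps. That model reproduces the published channel counts exactly:

```python
# Assumed model: per-channel DSP cost ~ sample_rate^2 for fixed-duration
# impulse responses, so the channel count for a fixed DSP budget falls
# quadratically. Base figures are the Liquid Mix specs quoted above.
def channels_at(rate_hz, base_rate=44100, base_channels=32):
    return base_channels // int((rate_hz / base_rate) ** 2)

print(channels_at(44100))   # -> 32
print(channels_at(88200))   # -> 8
print(channels_at(176400))  # -> 2
```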
The attractive thing about this box is it having some of the liquid channel's technology.
If you're happy to use this in the way it's intended, as a final mixing tool, it is actually a very good unit. A friend of mine has used both this and the SSL Duende very extensively, and in his opinion they are both very close. He prefers the Duende slightly, because it's slightly quicker to use (there are huuuuge lists of EQ/comps to scroll thru on the Liquid Mix).
It uses dynamic convolution licensed from Sintefex:
http://www.sintefex.com/
I'd personally be happier running my tracks thru good quality, tried and tested models like the SSL Duende stuff, rather than new-fangled dynamic convolution technology. However, a lot of people like dynamic convolution and I have to say that it can sound good.
Does anyone know how dynamic convolution works? It seems fascinating.
just a buzz word - marketing needs good slogans 
basically it's the same technique as in impulse reverbs: they sample the response of certain high-end processors and generate a function which 'prints' this sound character onto the digital signal of your source.
No big deal, but a bit abstract - you have to 'think around the corner' (so to say) with such algorithms. Many Photoshop filter plugins use exactly the same math, just with the signal represented differently.
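A minimal sketch of that 'printing' in plain Python - the impulse response below is a made-up toy, not a real device measurement:

```python
# Static convolution: every input sample excites the *same* sampled
# impulse response, and the overlapping excitations are summed.
def convolve(signal, impulse_response):
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for n, x in enumerate(signal):
        for k, h in enumerate(impulse_response):
            out[n + k] += x * h  # each sample 'prints' the full response
    return out

# A unit click through a decaying toy response just reproduces the response:
print(convolve([1.0, 0.0], [0.5, 0.25, 0.125]))  # -> [0.5, 0.25, 0.125, 0.0]
```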
it's nothing new btw, it was discussed here almost exactly 3 years ago
http://www.planetz.com/forums/viewtopic ... onvolution
some predictions weren't that precise, tho...
cheers, Tom
(great search function, John !)

Duende=Real deal
Duende is based on the same chips and algorithms SSL uses in their consoles, so no convolution there. I for one am very interested in Duende. It wouldn't even matter if it could not be used in real time. Mixing my stuff with SSL is a long-time dream, which I never thought could be reality without a number one (or a few of them) in my pocket.
And with the music I'm making that wouldn't be a real possibility.
Duende, on the other hand, is priced very attractively. And while it is still a fair amount of money to hand out, it's a lot more possible to fulfill this dream now than ever before.
If only they could get the PC drivers out.
As for the "alternatives" - Liquid Mix, Waves, etc. - not the real deal. So no fulfilling of dreams there. I should test them though, as Liquid Mix is cheaper and offers a wider variety of sonic possibilities.
But still, why go with a replica when you can have the real deal?
And I thought I would never be able to say that about anything manufactured by SSL.

a few points from the respective SSL page (9000 console)
as you can see, it's no tricks - just a careful, pure analog design
* DC-coupled for infra-sonic low frequency response
* Circuit design optimised for transient signal response – Zero HF smearing
* True additive summing to limit noise contributions to the lowest possible
* Exceptionally short signal path, free of electrolytic capacitors, for clarity and reliability
* Custom-specified oxygen-free cable throughout
* High rejection of environmental electrical pollution
* Advanced output drivers automatically compensate for the degenerative effects of variable cable capacitance by increasing HF response as the load increases
doesn't read cheap at all, tho

no convolution plugin in the world will ever be able to deal with those items, as it's processing after the analog stage
hypothesis: given you are able to supply equally 'clean' analog sources (in the form of digitized multi-channel audio), you may well end up in an 'on par' situation if you do functionally equivalent (post)processing on a Scope system.
if I understand them (SSL) correctly, the Duende is a single strip with the same analog parts as the console and with quality A/D conversion at its data end.
cheers, Tom
I'm afraid you don't understand correctly. Duende is a FireWire device with no I/O. It holds DSP so it can run the algorithms of the SSL 9000 - the digital part without the analog circuitry.
astroman wrote: a few points from the respective SSL page (9000 console)
if I understand them (SSL) correctly the Duende is a single strip with the same analog parts as the console and with a quality a/d conversion at it's data end
What can be achieved with Duende is similar to recording your stuff in a cheaper studio and mixing it on an SSL desk. The initial quality of your audio recording is still what it is, but the EQ and master compressor are real SSL.
Couple this system with SSL XLogic channel strip and you get SSL signalflow all the way.
If there is someone with profound knowledge about these things, I am interested to hear how the summing algorithms of the host application (Cubase, Nuendo, Pro Tools, etc.) affect the sound. SSL desks do their summing in their own way, and that is not included with Duende. Is this relevant to the outcome?
thanks for correcting me - their marketing succeeded then, as on a quick browse I really had the impression the analog part was included...
in that case (signal fed in from a 'general-purpose' I/O) I'd even strengthen the aforementioned argument.
their console processors are said to sound great (by those with access to them)
but you never can hear the processor out of this hi-q analog environment
so how much does the processor actually contribute to the sound ?
cheers, Tom

Duende really does sound very good, I've heard it and like it a lot. It most definitely is proper modelling technology from desks that have been proven in studios for many years (SSL digital series desks).
I haven't heard Liquid Mix, but I know someone who uses one alongside Duende although he would take the Duende if offered the choice.
Waves SSL was made on license granted before SSL came out with their own devices.
Waves SSL = emulation of classic SSL analog channel strips. I have to say it's not very good.
SSL Duende = Exact same algorithms from SSL digital desks. These algorithms are effectively SSL's own models of their analog stuff. Old but great algorithms.
Hope this helps.
Also, dynamic convolution is their buzzword for an extended use of convolution that they implement.
Without covering too much of the conversation linked above: the theory is based on feeding the system to be 'measured' (compressor, room+mic, etc.) an 'impulse' which is infinitesimally short and contains every frequency in the bandwidth you're working at, and recording the system's 'response' - the particular time 'smear', phase shift, spectral balance, etc. Then you apply this to another input (file, realtime) multiplicatively to do the 'convolution', meaning each input sample is multiplied by the entire response you captured (usually truncated, too, to reduce computational load and processing time). That means each sample gets the same time 'smear'.
Now, this is static convolution, because you're recording the system's response only at a given threshold (usually the impulse uses the full dynamic range). Also, you're technically recording the response to a full-spectrum impulse and not its response to a limited-bandwidth input (i.e. just the bass, or whatever). There are differences in the output, but at the time of convolution's inception it allowed systems to be sampled far beyond our ability to model them. Now that we've graduated beyond sound design on SGI Indys, the DSP power in a dedicated device is enough that it seems they're able to use more than a single impulse response.
I'm not sure if that made sense, but to sum it up: they're claiming to go beyond simple static impulse responses. I would guess they're doing some sort of table lookup based on the input sample and using a corresponding impulse response to process that sample. How they're matching up the input sample and response I can only guess, but with realtime processing you have the input level and frequency to go by, although I suspect frequency would require some sort of predictive mechanism to follow along (a sinc function, perhaps). I would also hazard a guess that they're not using one response per sample value, but rather a smaller number (a table of 128 steps? 1024? 16000?) and interpolating between them for in-between values. Perhaps they're not even interpolating, but that would require enough responses to avoid audible distortion from quantizing the input sample to a lower-resolution table.
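To make that guess concrete, here is a minimal Python sketch of level-indexed impulse responses with linear interpolation between table rows. This only illustrates the guess above - it is not Sintefex's patented algorithm, and the table contents are invented toy data:

```python
# Guessed 'dynamic convolution': pick/blend an impulse response per input
# sample based on its level, instead of using one fixed response.
def dynamic_convolve(signal, response_table):
    """response_table: list of impulse responses, index 0 = quietest level."""
    levels = len(response_table) - 1
    taps = len(response_table[0])
    out = [0.0] * (len(signal) + taps - 1)
    for n, x in enumerate(signal):
        # map |x| in [0, 1] onto the table and interpolate between rows
        pos = min(abs(x), 1.0) * levels
        lo = int(pos)
        hi = min(lo + 1, levels)
        frac = pos - lo
        for k in range(taps):
            h = (1 - frac) * response_table[lo][k] + frac * response_table[hi][k]
            out[n + k] += x * h
    return out

# Two toy responses: quiet samples get a clean response, loud ones a duller one.
table = [[1.0, 0.0], [0.6, 0.3]]
print(dynamic_convolve([0.1, 1.0], table))
```

A real implementation would need many more levels (and probably frequency-dependent selection) to avoid audible stepping, which is exactly the quantization concern raised above.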
good point, Valis - here's a short note by the inventor (from Focusrite news)
cheers, Tom
...that two new US patents covering the much discussed and multi-award winning Dynamic Convolution (tm) process (7,039,194) and the previously less well known techniques for digitally simulating dynamics processors by storing the parameters of an analogue system (7,095,860) have been granted to our strategic partner in Liquid Technology, Sintefex Audio Lda, and Technical Director and Inventor Mike Kemp.
Mike Kemp commented:
"...Dynamic Convolution gets there ... by looking at the output of the analogue gear when you feed it various sample signals. And it takes into account the non-linearities of response as signal level changes. You don't need to know what's in the box, just how it sounds."
The US patents supplement the European patent 0917707, granted in 2001, covering Dynamic Convolution.