Multi Core Optimisations Needed

PC Configurations, motherboards, etc, etc

Moderators: valis, garyb

dawman
Posts: 14368
Joined: Sun Jul 24, 2005 4:00 pm
Location: PROJECT WINDOW

Multi Core Optimisations Needed

Post by dawman »

There are benchmarks for multi-threaded applications like DivX and LAME that show huge improvements in performance.

Sadly, though, our music apps are far from the optimisation levels I would like to see.

DAW Bench shows how more plugs can be run on quad-core CPUs, but this performance comes from the larger cache that is shared between the cores, not from optimisation by the software DAW developers.

Until then this beast will be my choice. http://www.anandtech.com/cpuchipsets/in ... spx?i=3251

Let's hope the 6MB L2 cache on this will help us more.

Intel's dual-quad design, Skulltrail, is still performing in the comedy club: it is extremely expensive, and the lack of real multi-core support shows in the benchmarks.

When this design drops back to a more reasonable price, the audio app developers might have half of their bugs worked out and actually SPAWN the necessary threads we crave.

Here's a chance for that hard-working Reaper guy to show 'em how they do it on the big jobs. :wink:

The E8400 is cheap and only a couple of points behind the E8500. But the E8500 Wolfdale will have unlocked multipliers for the afflicted overclockers.

Intel has really made some leaps and bounds in the last year and a half.

Hopefully some audio developers will break into their vaults for a little R & D money and get with the program.

Until then, the cheap, fast dual cores seem to be the best bang for the buck.
ScofieldKid
Posts: 307
Joined: Thu Nov 13, 2003 4:00 pm
Location: Oregon
Contact:

Post by ScofieldKid »

Not Intel's finest hour, I'm afraid.

Availability of the E8500 has been just about zero, and the E8400 shows out of stock in most places.

It would have been nice if they had released the E8500 in real volume. I suspect the reports of thermal-sensor misreporting have something to do with all of this, but there's not much to do except wait until Intel ships product.
the19thbear
Posts: 1499
Joined: Thu Feb 20, 2003 4:00 pm
Location: Denmark
Contact:

Post by the19thbear »

Cubase supports multi-core CPUs.
You can toggle it on/off as an option inside Cubase.
husker
Posts: 372
Joined: Thu Feb 05, 2004 4:00 pm
Location: wellington.newzealand

Post by husker »

Ableton Live 6 & 7 support multi-core (i.e. they will use all 8 cores if you have 'em)
astroman
Posts: 8454
Joined: Fri Feb 08, 2002 4:00 pm
Location: Germany

Re: Multi Core Optimisations Needed

Post by astroman »

XITE-1/4LIVE wrote:There are benchmarks made for multiple threads like Divx, and LAME that show huge improvements in performance....
that's due to the highly 'geometric' nature of the data; that stuff is rather easy to optimize, as was demonstrated years ago by the 'AltiVec' engine in the PowerPC G4 chips.

Audio (DSP) processing is of higher complexity, and the 'processing path' is much less predictable than in graphics.
You may remember the announcement of development kits for NVIDIA GPUs to speed up signal processing.
I'm not really an expert in this domain, but nevertheless I had a very close look...
and ended up with a frustrating perspective.
There is indeed amazing processing power (for a bargain) - but only for a handful of very(!) specific implementations. So specific that I didn't even find it worth thinking further about. And you bet you REALLY have to think a lot to mentally transform geometric properties into the signal-flow domain... :P :D

that's exactly the problem with multi-cores too, but they are easier to handle, as they are not tied to a specific task - and of course have fewer processing elements. So it's not all lost ... :D
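To make the contrast concrete, here's a toy Python sketch (my own illustration, not from any actual DAW or SDK): a per-sample gain is 'geometric' work - every sample is independent, so the block can be split across cores or SIMD lanes freely - while even a trivial one-pole filter carries a serial dependency from each output sample to the next, which is exactly what resists naive parallelisation.

```python
def apply_gain(block, g):
    # Each sample is independent: the block can be split across
    # cores (or SIMD lanes) and processed in any order.
    return [s * g for s in block]

def one_pole_lowpass(block, a, y0=0.0):
    # Each output depends on the previous output: a serial
    # dependency chain that cannot be split mid-block.
    out, y = [], y0
    for x in block:
        y = a * x + (1.0 - a) * y
        out.append(y)
    return out

block = [1.0, 0.0, 0.0, 0.0]
print(apply_gain(block, 0.5))        # order-independent work
print(one_pole_lowpass(block, 0.5))  # order matters
```

The gain loop is the kind of thing AltiVec-style units eat for breakfast; the filter loop is the kind of thing audio is full of.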

cheers, Tom
pollux
Posts: 503
Joined: Fri May 28, 2004 4:00 pm
Location: France

Re: Multi Core Optimisations Needed

Post by pollux »

astroman wrote: You may remember the announcement of development kits for NVIDIA GPUs to speed up signal processing.
I'm not really an expert in this domain, but nevertheless I had a very close look...
and ended up with a frustrating perspective.
We are currently evaluating this SDK (CUDA) for running Monte Carlo simulations and recursive algorithms over huge amounts of data... but this is going OT :P
astroman
Posts: 8454
Joined: Fri Feb 08, 2002 4:00 pm
Location: Germany

Post by astroman »

well, it may be OT regarding the subject, but you mention the crucial point:
... over huge amounts of data... existing(!) data, as I may add ;)
so at least it's very illustrative.

The SDK isn't bad at all, even more so if one keeps in mind the preconditions under which it was published - to write the tools for future applications, which the press simplified to ...there's a vast amount of processing power bla bla... :D

The more data available at the moment of processing, the bigger the (structural) advantage of multi-cores.
In realtime processing of (say) audio streams there's only a relatively small amount of data, and whatever you try to split across cores (to take advantage of their existence) is under (extremely) tight restrictions regarding synchronisation after processing, to keep stream integrity.
That's what makes this stuff non-standard and hard to predict, as opposed to the straightforward (say) JPEG-related four-corner pixel transformations.
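A minimal Python sketch of that split-and-rejoin constraint (illustrative only - a real engine would do this with lock-free scheduling in native code, and the worker names here are my own invention): the buffer can be split across workers, but nothing may leave until every chunk is finished and re-joined in its original order, and for small realtime buffers that barrier is where the time goes.

```python
from concurrent.futures import ThreadPoolExecutor

def process(chunk):
    # stand-in for the per-core DSP work
    return [s * 0.5 for s in chunk]

def process_buffer_parallel(buffer, cores=4):
    # Split one small realtime buffer into per-core chunks...
    n = max(1, len(buffer) // cores)
    chunks = [buffer[i:i + n] for i in range(0, len(buffer), n)]
    with ThreadPoolExecutor(max_workers=cores) as pool:
        # ...but every chunk must be finished and re-joined IN ORDER
        # before the buffer's deadline; this barrier is the
        # synchronisation cost that dominates when buffers are small.
        results = pool.map(process, chunks)
    return [s for chunk in results for s in chunk]

print(process_buffer_parallel([1.0, 2.0, 3.0, 4.0], cores=2))
```

With big offline batches the barrier is amortised over lots of work per chunk; with a 64-sample low-latency buffer it isn't.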

cheers, Tom
pollux
Posts: 503
Joined: Fri May 28, 2004 4:00 pm
Location: France

Post by pollux »

astroman wrote:well, it may be OT regarding the subject, but you mention the crucial point:
... over huge amounts of data... existing(!) data, as I may add ;)
so at least it's very illustrative.

The SDK isn't bad at all, even more so if one keeps in mind the preconditions under which it was published - to write the tools for future applications, which the press simplified to ...there's a vast amount of processing power bla bla... :D

The more data available at the moment of processing, the bigger the (structural) advantage of multi-cores.
In realtime processing of (say) audio streams there's only a relatively small amount of data, and whatever you try to split across cores (to take advantage of their existence) is under (extremely) tight restrictions regarding synchronisation after processing, to keep stream integrity.
That's what makes this stuff non-standard and hard to predict, as opposed to the straightforward (say) JPEG-related four-corner pixel transformations.

cheers, Tom
Yes... We can take serious advantage of the GPU's processing pipes because the Monte Carlo simulations and the recursive algorithms run thousands of times for each sample, and every run shares the data involved. The GPU parallelises this like a charm.
Samples are grouped so that they reuse the maximum amount of data, which is preloaded and cached to reduce overhead.
To achieve this we need the full set of data and the full set of samples.

If we wanted to do the same in realtime, with changing sets of data and samples arriving on the fly in any order, we'd spend 95% of the time loading and setting up the data and 5% actually calculating. The GPU is simply not effective when used this way...
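As a toy sketch of that batch shape (pure Python, nothing like the real CUDA code - the function names and numbers are my own illustration): each "sample" needs thousands of runs over the same staged data, so staging once for a whole batch amortises the setup, whereas per-sample staging would be repaid on every arrival.

```python
import random

def monte_carlo_estimate(sample, runs, rng):
    # One sample needs thousands of runs over the SAME staged data;
    # the mean of sample * U(0,1) converges to sample / 2.
    acc = 0.0
    for _ in range(runs):
        acc += sample * rng.random()
    return acc / runs

def batch(samples, runs=5000):
    # Batch mode: "stage" the shared state once for the whole batch,
    # then let every sample's thousands of runs reuse it. In a
    # streaming setting this staging would be repaid per sample,
    # which is where the 95% setup overhead in the post comes from.
    rng = random.Random(42)  # staged once, reused by all samples
    return [monte_carlo_estimate(s, runs, rng) for s in samples]

print(batch([2.0, 4.0]))  # approximately [1.0, 2.0]
```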

Sorry for the OT, but the discussion is interesting :D
FrancisHarmany
Posts: 1078
Joined: Sun Jun 02, 2002 4:00 pm
Location: Haarmania

Post by FrancisHarmany »

ScofieldKid wrote:Not Intel's finest hour, I'm afraid.

Availability of the E8500 has been just about zero, and the E8400 shows out of stock in most places.

It would have been nice if they had released the E8500 in real volume. I suspect the reports of thermal-sensor misreporting have something to do with all of this, but there's not much to do except wait until Intel ships product.
I waited about 7 weeks for my E8500!
emphazer
Posts: 98
Joined: Sat Sep 23, 2006 4:00 pm
Location: Germany
Contact:

Post by emphazer »

I've been waiting two months for my Yorkfield Q9450 CPU.
I've got everything else here already,
just missing the CPU :D

BTW, I bought a DFI X48 LanParty LT2 mobo.
Hope it's a good one for 3x Scope cards.