GPU computing in statistics and data mining? (Huge parallel performance of graphics units for numerical operations)

Do you have any experience with parallel hardware acceleration of computing? Is it realistic to think of graphics cards as a tool for improving the performance of statistical and DM tasks and software? What should one keep in mind when considering buying a modern computer for statistical software?

And… is the following view of mine on this topic correct?

“We have known for many years that the serial computing strategy of current computers is often very restrictive for many (parallel) tasks. For example, artificial neural networks simulated on classical serial (von Neumann architecture) CPUs perform slowly.
About 30 years ago, “parallel accelerators” appeared. Those chips were relatively uncommon, expensive, and not as powerful as expected: their architecture was parallel, but only 4, 8, 16 or so cores were implemented, and due to their simple architecture they were not able to keep pace with mainstream CPUs, whose performance was improving much more dramatically.

Behind the scenes of numerical computing, GPU chips have been evolving for many years. Nowadays, graphics processors run at almost the same frequencies as CPUs, have more transistors, and, most importantly, are divided into several hundred independent cores (compared to 8 cores in the newest and most expensive CPUs).

Unfortunately, GPU developers were not able to see (and fill) this empty niche for most of their history. That seems to be changing now (as a consequence of almost frozen CPU performance over the last five or so years). For example, NVIDIA is introducing its “Fermi” GPU, designed primarily to accelerate numerical, not graphical, operations. A similar project is Larrabee by Intel. I am looking forward to the near future of these promising efforts…”
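
To make that parallelism concrete, here is a minimal CUDA sketch of my own (a plain SAXPY, y = a*x + y, not taken from any statistical package): a million array elements are each handled by their own lightweight thread, scheduled across the GPU's cores.

// Minimal SAXPY in CUDA: one lightweight thread per array element,
// scheduled by the hardware across the GPU's hundreds of cores.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                      // one million elements
    size_t bytes = n * sizeof(float);
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block; enough blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);               // expect 5.0 = 3*1 + 2
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}

The CPU version would loop over the million elements one by one; here the launch configuration simply covers the array with threads and lets the hardware schedule them.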

Replies to This Discussion

I see, this is a completely unknown area. But IT specialists are becoming more and more interested in it. One of our BI customers (dealing with considerably large datasets) asked us: "Are you going to implement GPU acceleration into STATISTICA 9.1 for numerical outputs?" And I am not able to judge what we will be able to promise (and fulfill) to him...

Does anybody have any awareness of current implementations of parallel, especially GPU, acceleration within statistical and data mining packages? STATISTICA, IBM/SPSS, SAS, KXEN, MATLAB...?

Thank you for your responses.
Hi Jiri, Everyone,

I invite you to check out Jacket, the GPU Engine for MATLAB, which our company AccelerEyes has been building (http://www.accelereyes.com). The aim with Jacket is to make it easy to do GPU computing via the high-level M-language. Jacket is a runtime that automatically translates M-code into C/CUDA code for execution on NVIDIA GPUs.
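
To give a rough idea of what that translation produces, here is a hand-written sketch (my own illustration, not actual Jacket output) of the kind of CUDA kernel an elementwise M expression such as C = A .* B + 1 gets lowered to:

// Hypothetical illustration only, not Jacket-generated code: roughly the
// kind of CUDA kernel an elementwise M expression like  C = A .* B + 1
// maps to. Each array element is computed by its own GPU thread.
__global__ void elementwise_mul_add1(const float *A, const float *B,
                                     float *C, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        C[i] = A[i] * B[i] + 1.0f;
}

The point of Jacket is that you never write this by hand; the runtime generates and launches code like this behind your M-code.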

I'm happy to chat with you more about this if you'd like as well. Feel free to email me at john melonakos accelereyes.com

Let me know if I can be useful to your work.

Best,

John

Hi John,

I've just found this post and taken a look at your website. I need some advice, if you would be so kind. I'm replacing my two Tesla M1060 cards (compute capability too low) and am considering used Tesla M2070s _OR_ the new GTX 760 cards. Could you offer any insight? I believe the GTX 760 cards may well outperform the older M2070s and are _much_ cheaper.
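
(In case it helps with the comparison: each card's compute capability can be read with the standard CUDA runtime device query; a minimal sketch below, with a printout format of my own. If I recall correctly, the Fermi-class M2070 reports 2.0 and the Kepler-class GTX 760 reports 3.0.)

// Query each CUDA device's compute capability, e.g. to check whether a
// card meets a toolkit's minimum requirement before buying.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("Device %d: %s, compute capability %d.%d, %d multiprocessors\n",
               d, prop.name, prop.major, prop.minor,
               prop.multiProcessorCount);
    }
    return 0;
}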

Would welcome your input.

Regards

Steph
