Do you have any experience with parallel hardware acceleration of computing? Is it now worthwhile to think about graphics cards as a tool for improving the performance of statistical and data-mining (DM) tasks and software? What should one keep in mind when considering buying a modern computer for statistical software?
And… is my following view on this topic right?
“We have known for many years that the serial computing strategy of current computers is often very restrictive for many (parallel) tasks. For example, artificial neural networks simulated on classical serial computers (von Neumann-architecture CPUs) perform slowly.
About 30 years ago, “parallel accelerators” appeared. Those chips were relatively uncommon, expensive, and not as powerful as expected (their architecture was parallel, but only about 4, 8, or 16 cores were implemented; due to their simple design, they were not able to keep pace with mainstream CPUs, whose performance was improving much more dramatically).
Behind the scenes of numerical computing, the evolution of GPU chips has been under way for many years. Nowadays, graphics processors run at almost the same frequencies as CPUs, have more transistors, and - most importantly - are divided into several hundred independent cores (compared to the 8 cores in the newest and most expensive CPUs).
Unfortunately, for most of their history GPU developers were not able to see (and fill) this empty niche. That seems to be changing now (as a consequence of almost frozen CPU performance over the last five or so years). For example, NVIDIA is introducing its “Fermi” GPU, designed primarily to accelerate numerical rather than graphical operations. A similar project is Larrabee by Intel. I am looking forward to the near future of these promising efforts…”
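To make the "hundreds of independent cores" point concrete: the statistical workloads that benefit most are embarrassingly parallel ones, where many independent replicates of the same computation run at once. Here is a minimal sketch of that pattern - a bootstrap of the sample mean - using only the Python standard library and CPU worker processes as a stand-in for GPU cores (the function names and parameters are my own illustration, not any particular library's API):

```python
# Sketch of an "embarrassingly parallel" statistical task: bootstrap
# replicates are mutually independent, which is exactly the structure
# that many-core hardware (GPU or multi-core CPU) can exploit.
import random
from multiprocessing import Pool

def bootstrap_mean(args):
    """Resample the data with replacement and return the sample mean."""
    data, seed = args
    rng = random.Random(seed)          # independent RNG per replicate
    sample = [rng.choice(data) for _ in data]
    return sum(sample) / len(sample)

def bootstrap_means(data, replicates=1000, workers=4):
    """Run all replicates concurrently; each task is independent,
    so the work splits cleanly across any number of cores."""
    tasks = [(data, seed) for seed in range(replicates)]
    with Pool(workers) as pool:
        return pool.map(bootstrap_mean, tasks)

if __name__ == "__main__":
    data = [1.0, 2.0, 3.0, 4.0, 5.0]
    means = bootstrap_means(data, replicates=100, workers=2)
    print(len(means))   # one estimate per replicate
```

On a real GPU the same structure would map each replicate (or each resampled element) to a hardware thread; the point of the sketch is only the shape of the problem, not GPU-specific code.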