Intel: 2-Year-Old Nvidia GPU Outperforms 3.2GHz Core i7

Intel researchers have published the results of a performance comparison between their latest quad-core Core i7 processor and a two-year-old Nvidia graphics card, and found that the Intel processor can't match the graphics chip's parallel processing performance.
On average, the Nvidia GeForce GTX 280 -- released in June 2008 -- was 2.5 times faster than the Intel 3.2GHz Core i7 960 processor, and more than 14 times faster under certain circumstances, the Intel researchers reported in a paper titled "Debunking the 100x GPU vs. CPU myth: An evaluation of throughput computing on CPU and GPU."
In a bid to discredit claims that GPUs outperform Intel's processors by a factor of 100, the researchers compared the performance of the quad-core Core i7 processor and the Nvidia GPU on a set of 14 throughput computing kernels. The comparison was designed to test the parallel processing capabilities of the two chips.
As its name suggests, parallel processing involves tackling multiple tasks simultaneously as opposed to serial processing, which requires handling tasks in sequential order.
Graphics chips, whose many small cores were designed to draw polygons and map textures to create realistic images on a computer screen, are well suited to parallel processing tasks, while processors with fewer, more powerful cores, like the Core i7, are better suited to serial processing applications. That's not to say quad-core chips like the Core i7 can't handle parallel processing tasks; they can, just not as well as GPUs like the GTX 280, as the Intel study confirmed.
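To make the distinction concrete, here is a minimal sketch, in CUDA, of the same element-wise computation written both ways: a serial loop as a single CPU core would run it, and a kernel in which each GPU thread handles one element. The example is illustrative only; the function names and array size are not from the Intel paper.

```
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Serial version: a single CPU core walks the array one element at a time.
void scale_serial(const float* in, float* out, int n, float factor) {
    for (int i = 0; i < n; ++i)
        out[i] = in[i] * factor;
}

// Parallel version: each CUDA thread handles one element, so the GPU's
// many cores work on the array simultaneously.
__global__ void scale_parallel(const float* in, float* out, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * factor;
}

int main() {
    const int n = 1 << 20;                    // 1M elements (illustrative size)
    const size_t bytes = n * sizeof(float);

    float* h_in  = (float*)malloc(bytes);
    float* h_out = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    scale_serial(h_in, h_out, n, 2.0f);       // CPU: one pass, in order

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    // GPU: launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale_parallel<<<blocks, threads>>>(d_in, d_out, n, 2.0f);

    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("out[123] = %.0f\n", h_out[123]);  // expect 246

    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}
```

On a chip like the GTX 280, the kernel's work is spread across all 240 cores at once, which is the kind of advantage the Intel study was measuring.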
"It's a rare day in the world of technology when a company you compete with stands up at an important conference and declares that your technology is only up to 14 times faster than theirs," wrote Andy Keane, Nvidia's general manager of GPU computing, on the company's blog, which provided a link to the Intel paper.
Even so, Keane wasn't impressed by the performance margin Intel reported, listing 10 Nvidia customers whose applications ran 100 times faster or more once they were optimized for GPUs. Intel's comparison likely did not include the software optimization required to get the best performance from the GPU, he said, noting that Intel didn't provide details of the software code used in the comparison.
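Keane's argument hinges on that optimization step. Intel did not publish its code, so the following is only a generic sketch of the kind of restructuring he is alluding to, not the paper's benchmark: a naive array sum in which every thread contends for one global counter, next to a tuned version that first reduces each block's slice in fast on-chip shared memory. All names and sizes are illustrative, and float atomicAdd requires a GPU of compute capability 2.0 or later (newer than the GTX 280).

```
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Naive sum: every thread adds directly to one global counter,
// so the hardware serializes most of the additions.
__global__ void sum_naive(const float* in, float* total, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(total, in[i]);
}

// Tuned sum: each block reduces its 256-element slice in on-chip
// shared memory, then issues a single atomic add per block.
__global__ void sum_shared(const float* in, float* total, int n) {
    __shared__ float cache[256];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    cache[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();

    // Tree reduction within the block: 256 -> 128 -> ... -> 1.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            cache[threadIdx.x] += cache[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        atomicAdd(total, cache[0]);
}

int main() {
    const int n = 1 << 20;                       // 1M elements (illustrative)
    const size_t bytes = n * sizeof(float);

    float* h_in = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = 1.0f;  // sum should equal n

    float *d_in, *d_total;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_total, sizeof(float));
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);
    cudaMemset(d_total, 0, sizeof(float));

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    sum_shared<<<blocks, threads>>>(d_in, d_total, n);  // swap in sum_naive to compare

    float result;
    cudaMemcpy(&result, d_total, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %.0f (expected %d)\n", result, n);

    cudaFree(d_in); cudaFree(d_total);
    free(h_in);
    return 0;
}
```

The two kernels compute the same result; the difference is purely in how well they keep the GPU's cores busy, which is one reason tuned and untuned comparisons can diverge so widely.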
"It wouldn't be the first time the industry has seen Intel using these types of claims with benchmarks," he wrote, providing a link to the U.S. Federal Trade Commission antitrust suit filed against Intel in 2009.
In that suit, the FTC alleged previous benchmark results reported by Intel "were not accurate or realistic measures of typical computer usage or performance, because they did not simulate 'real world' conditions."
Regardless of the exact performance difference between CPUs and GPUs, graphics chips are an increasingly common feature in high-performance computing systems, including extremely powerful computers like China's Nebulae system. Nebulae, which is currently the world's second most powerful computer, is powered by a combination of Xeon server chips and Nvidia GPUs.
Adding GPUs to a system can substantially increase performance while reducing cost and power consumption compared with systems built using only CPUs, said Yury Drozdov, CEO of Singapore-based server maker Novatte.
Last year, Novatte built a system for a financial customer that wanted to run pricing models. The system, which cost more than US$1 million, used 60 Intel Xeon processors and 120 Nvidia GPUs. A system with similar performance built using Xeon processors alone would cost $1.6 million and consume nearly 28 percent more power, making it more costly to operate than the system built with GPUs, Drozdov said.
For its part, Intel recognizes the importance of having a powerful parallel processing chip in its product lineup to complement its CPU line. In May, Intel announced the development of a 50-core chip called Knights Corner, which the company hopes will fend off competition from graphics chip makers in the high-performance computing space. Intel has not said when Knights Corner will be available.
By comparison, Nvidia's GTX 280 has 240 processor cores, while the company's recently announced Tesla M20 series GPUs have 448 cores.