# CPU vs GPU



## ADF (Aug 22, 2007)

For those who don't know: CPU companies are working to make the GPU obsolete by integrating it on-die (e.g. AMD Fusion). They will start with the budget range, but they plan on having high-end parts beating GPUs in a couple of years. Meanwhile, GPU companies are fighting back with GPU computing that will make graphics cards useful for general-purpose processing (GPGPU).

What's your opinion on the subject? Will all-in-one CPUs replace GPUs, and even sound cards, by having cores configured for each task? Or can the CPU never be configured to beat a GPU? Or will diversifying from just graphics to general computing help the GPU survive?

I'm pretty neutral on the subject, though I don't like the idea of any one company having that much control over what goes into your computer.


----------



## HaTcH (Aug 22, 2007)

I say leave them separate. The GPU should do graphics only; the CPU should do the other stuff.

Really, there's only so much speed you can squeeze out of a chip, so the best way to increase performance is to split everything up into smaller, simpler processors. Integrating everything might improve communication between each chip, but then there's power and time sharing that has to go on, which I think would bring the output down.

GPUs can beat CPUs at some types of math, like matrix multiplication and other heavy number-crunching that isn't usually done in regular computing. GPUs can also chain operations better (pipelining), since their input is more uniform and predictable: they don't have to deal with things like conditional branches and loops.
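As a rough sketch of that point (plain Python, just to show the shape of the work, not how a GPU actually executes it): every output element of a matrix multiply is the same branch-free run of multiply-adds, independent of every other element, which is exactly the kind of work parallel hardware eats up.

```python
def matmul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    # Each c[i][j] is computed by the same multiply-add sequence and is
    # independent of every other element -- a GPU can do them all at once
    # instead of looping over them one by one like this.
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```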

So yeah, a GPU is great, but it's more of a specialized CPU. And specialization means you're better at one thing, but not so great at other things.


----------



## net-cat (Aug 22, 2007)

Let GPUs be GPUs, but give people lower-level access to them so they can use them for other things if they want.

As for trying to make a GPU/CPU combo as powerful as a dedicated GPU, I'll believe it when I see it. (Not saying it's not possible, but I'm not sure it's worth it.)


----------



## ADF (Aug 22, 2007)

It seems to be the direction the industry is taking, judging by the info out there: everything on one chip is much easier to code for, and there is no bus speed to worry about. Nvidia, being a pure GPU company, of course won't have it, so they are working on GPGPU computing in an effort to fight back.

I can see the advantage of having both technologies, but they seem to be getting ready for an upcoming war against each other; hell, Intel has recently demoed ray tracing at 90fps.


----------



## BlackWolfie (Aug 23, 2007)

It'll happen that people put GPUs on CPUs, or we'll even have so many general-purpose cores on the CPU that a quantity of them can be devoted to graphical rendering, but you won't get G80- and R600-class GPUs on board your CPU. If you look at them, CPUs are general-purpose processors that understand a wide range of commands thrown at them, whereas a modern GPU is a specialised piece of mathematical equipment capable of doing certain calculations at amazing speed, hence the complex APIs needed to talk to it.

To be honest, the transistor counts say it all: an R600 GPU has (IIRC) 390 million transistors in its make-up, while even the QX6800 has only 291 million transistors on its die.

The only time I can see discrete graphics solutions becoming obsolete is when we have so many cores (e.g. the 80-core CPU Intel demoed not that long ago) that it isn't possible for us to utilise them all. But then again, as the number of cores grows, things will become multi-threaded to use the extra cores to do jobs faster. The big problem with ray tracing, as we all know, is that it takes huge amounts of processing power, and after you've done that you've still got to texture the scene, do the physics, and then the required processing on sound before you've got a finished frame.

I think the best solution for now is to leave everything separate. We aren't in a position to replace either with the other, so really we can't. Nothing in the CPU world has the kind of grunt needed to do the whole job itself, and GPUs are too specialised to do the jobs a CPU does.


As a slightly off-topic thought... it's going to be interesting trying to program for a quantum processor. I don't claim to know how one works, but what I understand about quantum probability certainly makes the whole concept seem interesting. Let's just hope they can get a 32-qubit processor working soon, and hopefully at something a little warmer than -273°C.


----------



## Zero_Point (Aug 23, 2007)

Hmm... Yeah, I dunno. The transistor counts on GPUs these days are astronomical, upwards of 10-15 times more than a dual-core CPU, which IMO is ridiculous.
ANYWAY, a CPU doing the job of a GPU sounds good on paper, but that would mean designing special cores into it for each task: a general-purpose core or two, then a couple of graphics-dedicated cores, then a physics core, yadda-yadda-yadda... But then, as you can imagine, CPU prices will shoot through the roof...
As for GPUs doing general-purpose processing, isn't that what I have a multi-core CPU for? A quad-core should be able to take care of many things more efficiently than my GPU can, though the GPU-powered physics demos I've seen are impressive thus far. If they can leave it at that while the CPU does everything else, that'd be fine with me.
Also, think of the limited upgrade options: if my GPU proves inadequate, I can spend $3-400 and replace it. Likewise with my CPU (though not nearly as expensive lol). But if I have to replace my uber-powered, over-priced CPU simply because the graphics performance isn't up to par...


----------



## BlackWolfie (Aug 23, 2007)

Zero_Point said:

> But if I have to replace my uber-powered, over-priced CPU simply because the graphics performance isn't up to par...



That is exactly why the only way the manufacturers could make it happen is for graphics and physics processing to all be done in software on general-purpose cores, which could be running other software whenever they weren't being used for either of the above.

But for your general-purpose core to be able to emulate the kind of work a modern GPU does, it would have to be so much more powerful, and the software so optimised for its specific task, that it's almost unbelievable. Then you've got to think: OK, so I've got a general-purpose processor with enough grunt to emulate a dedicated graphics and physics processor in software; now what does it need to be able to do its work? The answer: information, lots of it, with extremely fast access to it. So you need more on-die cache and more RAM. That's OK too; it's not like the FSB is saturated with information yet, but the amount of swapping back and forth might mean the southbridge needs re-evaluating, which is of course even more work.
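To put a rough number on "information, lots of it" (assumed back-of-envelope figures, purely for illustration): just pushing a finished 1024x768 frame at 32 bits per pixel, 60 times a second, costs this much bandwidth before textures, geometry, or physics data are even counted.

```python
# Assumed illustrative figures: 1024x768 frame, 4 bytes per pixel, 60 fps.
width, height, bytes_per_pixel, fps = 1024, 768, 4, 60

frame_bytes = width * height * bytes_per_pixel   # one finished frame
per_second = frame_bytes * fps                   # framebuffer traffic alone
print(frame_bytes // 1024, "KB per frame")           # 3072 KB
print(per_second / (1024 ** 2), "MB/s framebuffer")  # 180.0 MB/s
```

Textures, geometry, and intermediate buffers multiply that figure many times over, which is exactly the cache/RAM/FSB pressure described above.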


----------



## ADF (Aug 23, 2007)

Zero_Point said:

> [snip]
> Also, think of the limited upgrade options: if my GPU proves inadequate, I can spend $3-400 and replace it. Likewise with my CPU (though not nearly as expensive lol). But if I have to replace my uber-powered, over-priced CPU simply because the graphics performance isn't up to par...


This is one of my main issues with incorporating everything into the CPU: not just upgrading, but what companies could get away with by having that much control over multiple areas. I also expect a GPU core built into the CPU to be weak compared to dedicated boards; but check around Google, they are making some pretty hefty claims about what it will be capable of.

The AMD Fusion processor will be out next year, mainly for portable devices like PDAs and laptops; there are claims of double the G80's performance in later implementations, but I have no idea how they plan on doing that. Intel seems to be going down the ray tracing route as said; I recall reading that physics is not a problem because most of the calculations are already done by the rays. If their claim of achieving 90fps in ray tracing is real, then they could have a ray-tracing-gaming-ready CPU in a couple of years; remember, CPUs are a lot better at ray tracing than GPUs.


----------



## HaTcH (Aug 23, 2007)

The fundamental problem with creating more and more on-chip cache is that the more you have, the more access 'pins' to it you need. Making the cache larger and larger just increases its access time. That's just how it goes :/ Most (if not all) cache is arranged in a square (if you were to draw it out) and you access it basically like a 2D array. So if you increase the cache, you're increasing the number of lines you need running to it, and as those lines multiply, access time rises.
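A small sketch of that geometry argument (assuming the idealised square bit-array layout described above, not any real cache design): if the bits form a square, the number of row and column select lines grows with the square root of the capacity, so bigger caches mean more wiring to drive.

```python
import math

def select_lines(cache_bits):
    """Row + column select lines for an idealised square array of bits."""
    side = math.isqrt(cache_bits)  # bits per side of the square
    return 2 * side                # one line per row plus one per column

for kb in (64, 256, 1024):
    bits = kb * 1024 * 8
    print(kb, "KB ->", select_lines(bits), "select lines")
```

Note that quadrupling the capacity only doubles the select lines, but every extra line is extra load and delay on the decoders.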

On the subject of ray tracing, I think CPUs are better at it than GPUs because it involves a lot of repetitive floating point arithmetic. Like I said earlier, GPUs are specialized. They kick the pants off CPUs in compound math calculations, but *shrug* they're probably not so hot at multiply... multiply... multiply... test... jump... add... multiply... multiply... I love 64-bit processors and their ability to do high-precision math. X3
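For a concrete taste of that multiply/multiply/test pattern, here's a minimal ray-sphere hit test (an illustrative Python sketch, not any particular renderer's code); a ray tracer runs something like this millions of times per frame:

```python
def hit_sphere(origin, direction, center, radius):
    """Does a ray hit a sphere?  Pure multiplies, adds, and one compare."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    # Coefficients of the quadratic |o + t*d - c|^2 = r^2 in t.
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    return b * b - 4.0 * a * c >= 0.0  # discriminant >= 0 means a hit

print(hit_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # True: dead ahead
print(hit_sphere((0, 0, 0), (0, 0, 1), (3, 0, 5), 1.0))  # False: off to the side
```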

'Nother thing: clock speeds on GPUs... >_> Is it just me, or are the fastest ones I've seen only operating at less than a GHz?

That makes me curious actually... Why haven't graphics card developers created a socketed chip? 

How awesome would that be? Like.. a mini motherboard, you could stick laptop ram in to upgrade the ram, and stick in different GPUs as they got better and better... 

But ADF was right about one thing: physical technology (multiple cores etc.) is coming along MUCH faster than software. If you had an 80-core processor and wanted to actually use the cores, your compiler would need to optimize for them. And compilers for general software don't really handle multiple cores all that well; not for home/gaming applications, anyway... The PS3 has like 7 or 8 cores, and I don't think Sony has actually released a game yet that uses anywhere near all of them.
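The restructuring that extra cores demand looks something like this (a Python sketch; note that CPython's interpreter lock means a real speedup would need processes or another language, so this only shows the shape of the work a programmer or compiler has to do):

```python
import threading

def parallel_sum(numbers, n_workers=4):
    """Split a sum into independent chunks, one per worker thread."""
    chunk = (len(numbers) + n_workers - 1) // n_workers
    results = [0] * n_workers  # one slot per worker, no sharing needed

    def work(i):
        results[i] = sum(numbers[i * chunk:(i + 1) * chunk])

    threads = [threading.Thread(target=work, args=(i,))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()           # wait for every worker before combining
    return sum(results)

print(parallel_sum(list(range(1, 101))))  # 5050
```

Until toolchains and games are written in this chunked style, most of those 80 cores would simply sit idle.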

What I'd expect from a GPGPU-type computer would be similar to a cell phone. "Hay! Let's take a digital camera, a phone, a toaster, and a bathtub and stick 'em all in one thing!" What does that get you? A phone with a crappy camera, crappy voice, that won't toast my bread, and wtf, I can't fit in its tub!

When they combine things, a) they get more expensive and b) all the things suffer a loss of quality.


----------



## Ron Overdrive (Aug 23, 2007)

I can see GPU/CPU hybrid chips working out for the portable market (game consoles, laptops, PDAs, smart phones, etc.), as it would reduce the amount of space needed for everything and can potentially reduce heat and power consumption (keep in mind it's potentially, not guaranteed). As far as the desktop is concerned, leave them all independent. There's plenty of space in cases these days, and stuff like the Mac Mini is far from practical.


----------



## themocaw (Aug 23, 2007)

If ya ask me, it's gonna turn out to be like onboard video cards included in motherboards, just moved to the processor instead of the motherboard: the standard option, but if you really want the good stuff, you'll buy a separate GPU with all the doohickies and shaders and. . . magic. . . black. . . smoke. . . thingies. . . instead.


----------



## Janglur (Aug 24, 2007)

I also want them separate.


What if I don't WANT a GPU? Just a simple card capable of putting out a 2D image. No real 3D capabilities.

Some folks run servers and want to reduce the cost of hardware, power consumption, and heat output by removing unnecessary devices. Namely, GPUs.


----------



## DavidN (Aug 24, 2007)

Trouble is, if everything starts following Vista's example, we'll soon need high-end 3D cards just to run an operating system =(


----------



## HaTcH (Aug 24, 2007)

Hehehe.. There's always command line linux if you don't want any graphics! X3

But good luck figuring that out average computer user 

I agree with DavidN. I know with KDE/Gnome you can get really funky 3D accelerated window effects... jiggling, stretching, transparency, 3D desktop switcher, all kinds of neat stuff.. which, in my opinion, has no practical use. I was using a LiveCD on one of my computers recently and I-kid-you-not I sat there and played with windows for like 10 minutes instead of actually installing the OS! XD

*drools* Ohhhh shiney....


----------



## Zero_Point (Aug 24, 2007)

ADF said:

> This is one of my main issues with incorporating everything into the CPU: not just upgrading, but what companies could get away with by having that much control over multiple areas. I also expect a GPU core built into the CPU to be weak compared to dedicated boards; but check around Google, they are making some pretty hefty claims about what it will be capable of.
> 
> The AMD Fusion processor will be out next year, mainly for portable devices like PDAs and laptops; there are claims of double the G80's performance in later implementations, but I have no idea how they plan on doing that. Intel seems to be going down the ray tracing route as said; I recall reading that physics is not a problem because most of the calculations are already done by the rays. If their claim of achieving 90fps in ray tracing is real, then they could have a ray-tracing-gaming-ready CPU in a couple of years; remember, CPUs are a lot better at ray tracing than GPUs.



Interesting article, and yes, I've been told by a programmer friend that ray-traced entities are far better suited for physics than polygonal ones. Something I've noticed, though, is that the resolution on that demo wasn't terribly high. Nowadays no one's satisfied with the size of their e-penis unless they can play at a minimum 1280x960 resolution. Hell, pretty much every *modern* PC game I've bought doesn't even go lower than 1024x768.


----------



## net-cat (Aug 24, 2007)

Ah, yes. The financial aspects. I oftentimes forget about those when talking about computer hardware.

For the high-performance, high-cost gaming systems, I'd imagine discrete GPUs will continue to be the order of the day. (Although it wouldn't surprise me to see a separate socket on the motherboard for a GPU, rather than a card, at some point.)

For Mr. "Goes into Best Buy and asks for a $1500 system that can play games," I'd imagine platforms like AMD Fusion will be quite popular.


----------



## ADF (Aug 24, 2007)

Whatever the case, developers are not going to add both GPU and CPU/GPU support in their games. Some standard will be decided on, but I doubt it will be an easy decision with both CPU and GPU companies fighting to get their method standardised.

Today it is just something in the news to talk about, but in a couple of years the fighting will reach the consumer desktop and we will be hearing a lot more spin on the subject.


----------



## Dragoneer (Aug 24, 2007)

On-die GPUs will not replace dedicated cards for a LONG time (years and years) due to the sheer amount of heat they generate. On-die GPUs will probably ramp up the minimum IGP graphics level quite a bit by leveraging extra cores, but I doubt they'll compete with the best dedicated parts ATI and Nvidia can produce.

It's a great low to mid level solution, but not high or bleeding edge.


----------



## HaTcH (Aug 24, 2007)

The best I think they can do is continue to increase the bus speed between the CPU and GPU. 

PCIexpress just keeps getting longer and looonger and looooonger X3


----------

