# This whole “next gen game physics” thing



## ADF (Feb 11, 2007)

The predicted future of game processing is vector units: CPUs with dedicated cores for each task, such as AI, physics, game data, graphics, and sound. In theory this will provide mind-boggling performance in future PCs, and it is currently being applied in the PS3's Cell processor. But until then we will have to make do with other methods.

There seem to be three groups with different solutions for achieving next gen physics in games.

CPU – the crowd that wants to use multi-core processors for physics.
Havok FX – running physics on your graphics card's resources.
PhysX – dedicated hardware for running physics, called a PPU.

Now I've read up on the pros and cons of each and decided that, IMO, Ageia's solution (the PPU) is what I would prefer to become the standard. There is a good possibility that it won't, due to slow adoption, but I'm hoping it will.

I could go over all the reasons for my choice, but first I'd like to hear everyone else's. So, whether you have heard of the physics war or not: what would you prefer? Multi-core processor, on-GPU, or separate dedicated hardware?

Examples

Havok FX
Ageia PhysX
Quad Physics


----------



## WelcomeTheCollapse (Feb 11, 2007)

Dedicated hardware, if I had the moneys to spare. But right now, I have a damn good processor, so I'll stick with that.


----------



## DavidN (Feb 11, 2007)

Dedicated hardware sounds like a fantastic idea, but what worries me is that it's yet another bit of my computer that would need to be regularly upgraded.


----------



## ADF (Feb 11, 2007)

They say they only need to release a new PPU every 4-5 years; new features are added via software updates. Check what's new in the latest patch, for instance.

Some fun videos

deformable objects
cloth
liquid "still a bit buggy"


----------



## Silver R. Wolfe (Feb 11, 2007)

Of course dedicated hardware would give you the best results, but I don't think it's going to catch on, with dual-core processors being standard now, quad around the corner, and even more soon after that.


----------



## Cybergarou (Feb 12, 2007)

I would prefer to see the PPU take off. Even with new CPUs, programmers will realize they don't want to waste clock time on physics when they could use it to have more happen at once. This will especially be true if there is an option to dump that processing onto another piece of hardware.
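That offloading idea can be sketched in miniature. Below is a toy Python sketch (all names invented, nothing to do with any real engine or PPU API): a worker thread stands in for the dedicated physics hardware, letting the main loop overlap its game-logic work with the physics step instead of stalling on it.

```python
from concurrent.futures import ThreadPoolExecutor

def physics_step(positions, velocities, dt, gravity=-9.81):
    """Toy stand-in for the work we'd like to offload."""
    new_vel = [v + gravity * dt for v in velocities]
    new_pos = [p + v * dt for p, v in zip(positions, new_vel)]
    return new_pos, new_vel

def game_logic_step(frame):
    """Stand-in for the AI / game-state work the main core keeps."""
    return frame + 1

pool = ThreadPoolExecutor(max_workers=1)  # the "other piece of hardware"
positions, velocities, frame = [0.0, 10.0], [0.0, 0.0], 0

for _ in range(3):
    # Kick physics off asynchronously, do game logic meanwhile, then sync.
    future = pool.submit(physics_step, positions, velocities, 1 / 60)
    frame = game_logic_step(frame)
    positions, velocities = future.result()

pool.shutdown()
print(frame)  # 3 frames of logic ran while physics was computed elsewhere
```

The point is the overlap: whether the worker is a second core, a GPU, or a PPU, the main core only pays for the submit and the sync.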

This is how things have progressed in weather modeling for years. Even with the increase in computing power, the physics that allows us to predict the weather has remained at about 5% of the total computation done in a model. If PPUs were used to calculate the physics without cutting into the other computations, we would see a substantial improvement in model forecasts.

Even if it doesn't catch on in gaming, the PPU could survive in research long enough for it to come back to gaming.


----------



## ADF (Feb 12, 2007)

The problem with CPUs is that you are not getting your money's worth in physics for the hardware you use to run them. We have had software physics for years, so I do not have to go over what it is capable of; but why don't we have real-time interactive liquids yet? Or material physics? Or dynamic cloth? And why did we have to wait until Crysis for dynamic damage, and just for tree bases at that?

Take a look at the Alan Wake demonstration in the first post; one of the most powerful Conroe processors, overclocked by 1GHz, still ate an entire processor core for that one tornado. How many gamers can afford to purchase the performance required to run that? On top of that, wouldn't you rather have the processor deal with game logic and AI than dedicate such a vast amount of resources to one tornado simulation? I don't know about you, but I don't have £300-£525 to throw at a high-end dual/quad processor that doesn't even do the job efficiently. I would rather have a decent processor that dedicates all its resources to its intended purpose.

The reason physical realism in games is progressing so slowly is that processors are not up to the task. They can do physics; hell, they can do graphics if they want to; but because of their general-purpose design they are not the most efficient hardware for running them. If they were, we would have had true-to-life physics years ago, instead of simple object collision (the easiest form of physics simulation next to gravity) in games like Half-Life 2.
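For reference, that "easiest form" of simulation, gravity plus a single object-ground collision, really is a few lines. This is a toy sketch (not Half-Life 2's actual engine) of the kind of bounce response meant above:

```python
def step(pos, vel, dt, gravity=-9.81, restitution=0.5):
    """One Euler step with the simplest collision: a ground plane at y=0."""
    vel += gravity * dt
    pos += vel * dt
    if pos < 0.0:                 # penetrated the floor
        pos = 0.0                 # push back to the surface
        vel = -vel * restitution  # bounce, losing some energy
    return pos, vel

pos, vel = 5.0, 0.0               # drop a ball from 5 m
for _ in range(600):              # ten seconds at 60 Hz
    pos, vel = step(pos, vel, 1 / 60)
# after a few bounces the ball has settled at the floor (pos ~ 0)
```

Liquids, cloth, and material fracture are orders of magnitude more work per frame than this, which is exactly the gap being argued about.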

To me, CPU physics isn't even in the picture. Nvidia and ATI realized it, Havok realized it, Ageia realized it. Even Intel and AMD probably realize it, but just as with graphics, sound, and lighting, they are fighting to keep it on the CPU to justify the need for their higher-end processors. And if Havok, the world's current leading physics API developer, has abandoned CPUs for hardware acceleration, that says something.

I honestly don't know what is taking the quad physics crowd so long; even if you could get all these effects on the CPU, it is a £525 quad core vs a £125 PPU or a £200 GPU.


----------



## ADF (Feb 25, 2007)

*sigh* It happened again. I don't know why, but I always seem to get worked up whenever I do a little research into the whole physics war. It is always the same: lies, damn lies, and statistics.

The amount of ignorance on the net just makes me angry; even the so-called professional hardware sites are getting the facts wrong. Take one review, for instance: they said the performance difference with the card was not worth the price, after they had turned off all the hardware-accelerated physics effects. I mean, what the hell? That is like saying the performance difference GPUs bring is not worth it after you have turned off all the eye candy and dropped the resolution to 800x600. Another review said Havok FX has more software support than the PPU; I was curious as to what they meant, since Havok FX is still in development with a supported-games list of zero. It turns out the reviewer thought normal Havok physics games like Oblivion and Half-Life 2 were GPU physics ready...

This doesn't even include the great swarms of people who actually believe GPU physics will be lossless and will give them mind-blowing next gen effects with a simple patch >.=.<

Bah, it is infuriating that these idiots are teaching the clueless masses to be anti-PPU when they themselves haven't got their facts straight.


----------



## eb7w5yfe (Feb 27, 2007)

I expect that we'll see multi-core CPUs with specialized cores. Maybe you'll have a 6-core CPU, with 4 traditional cores which work well with branchy code and 2 stream processing cores which handle massively parallel tasks like graphics and physics well.

The add-on cards will probably stick around for a while, even with specialized cores. Hopefully graphics and physics cards will end up unified as more of a general-purpose stream processing add-on. I think it's silly to have add-on cards dedicated to specific tasks if they can be made more widely useful without losing any of the performance advantage. There's nothing really unique about graphics, or physics, or even HPC applications; it's just that traditional PC CPUs are more suited to single-threaded, branchy applications.
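The branchy vs. streamable distinction is easy to sketch (toy Python, not real shader code): the first function's control flow depends on each element, which favours a traditional core, while the second applies one uniform operation to every element and could be split across many simple stream cores.

```python
# Branchy work: the path taken depends on each element's value.
def branchy(tasks):
    out = []
    for t in tasks:
        if t % 3 == 0:
            out.append(t // 3)
        elif t % 2 == 0:
            out.append(t * 2)
        else:
            out.append(t + 1)
    return out

# Stream-friendly work: one uniform operation mapped over every element,
# trivially divisible across many simple cores.
def stream(velocities, dt=1 / 60, gravity=-9.81):
    return [v + gravity * dt for v in velocities]

print(branchy([1, 2, 3]))  # [2, 4, 1]
print(stream([0.0, 1.0]))
```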

I actually am not that well versed in CPU stuff, and some of my terminology may be a bit off.  For further reading, I recommend all the articles here: http://arstechnica.com/articles/paedia/cpu.ars

Anyway, it's interesting how computer CPUs evolve. Start out with simple integer-only cores, tack on floating point co-processors, create special-purpose graphics hardware, generalize the graphics hardware until it's as complex as the main CPU, and now it seems likely that the graphics stuff will be folded back onto the main die as separate cores. CISC CPUs start to look more and more like RISC CPUs with CISC compatibility tacked on. CPUs which concentrated on clock speed alone give way to CPUs which do much more per clock and per watt. One constant seems to be that more and more features from mainframes make their way into consumer gear. It's similar for operating systems and programming languages too: from VMS to Windows NT, or how C# is now incorporating aspects of functional languages (though I guess that's borrowing from academia rather than from mainframes).

Rambling; sorry.  I just find technology fascinating.


----------



## benanderson (Feb 28, 2007)

Buy a games console... that's just one big piece of dedicated hardware. Problem solved! X3
Okay, that's enough sarcasm from me...

Physics is just another one of those little extra graphical effects; it doesn't make the game play any better, just prettier.
Yeah, you get the odd one or two games that pop along every now and then that have good graphics AND good gameplay. But too many games are all about the graphics, and the length/quality of the game is just pathetic.
I think programmers/game writers should spend less time on the graphics and this new physics crap and more time on the game itself... like the new Sam and Max game, for example. Its graphics don't even need a 3D accelerator, but the gameplay is so funny that I don't really care whether the graphics are up to par or not.

As for dedicated hardware... nice if it would catch on... but it won't. Having to buy a new card that is clearly going to cost so-and-so megabucks won't exactly catch on very well, since not everyone has Bill Gates' bank account number. I would like a dedicated sound card for FL Studio, but I don't have one... why? Because it's too expensive... and that's only a sound card... God knows what it'll be like when I need a new hyped-up GPU in the next year.

-BenA.


----------



## ADF (Mar 1, 2007)

I wouldn't call physics "eye candy", unless you are referring to the Havok FX method of doing it. Eye candy generally means effects that make the game look good without actually affecting it in any way; a game running on the lowest graphical setting plays exactly the same as the souped-up one. As a general rule, if removing it hinders the gameplay, it is not eye candy.

Physics can be used to add unique gameplay and realism elements to a game; take Half-Life 2, for instance, where physics is used in puzzles, or Prey, where gravity can be manipulated. Why is it we are reaching near photo-realistic graphics yet cannot simulate even the simplest real-world object and material interactions? General-purpose CPUs have always been the physics bottleneck, but with the physics war going on this generation, things should become more interesting.

As for the PPU becoming integrated into the CPU, it is highly possible. Ageia has said they will adapt their product to the needs of the market, even if that means backing a competitor's product (e.g. Havok FX), and I don't have a problem with that. Just don't expect to see CPUs with integrated GPUs and PPUs in the near future, so you have to take what you can get today.

What I do have a problem with, however, is inefficient methods becoming mainstream because their backers marketed them well enough to look better than the real innovators. If APIs like Havok FX become the standard, next gen physics will become exclusive to enthusiast gamers who can afford two or even three graphics cards.


----------



## Kougar (Mar 27, 2007)

There are some good points.

Frankly though, I would prefer Alan Wake's or Half-Life's way of doing physics. While there are those who cannot afford the latest and greatest, such as quad cores, to run this kind of physics on, I don't see how having to buy a specialized $160-$300 PPU will be any different from simply investing that $160-$300 in a better processor. Getting a better CPU will improve everything, not just games, which a PPU cannot claim to offer.

In fact, Intel will be slashing the price of the Q6600 quad core down to about $533, and they have already stated that by the third quarter of this year they will bring it down to $266. Their entire Core 2 Duo line will be priced _under_ $300, even the 3GHz E6850. At a "mere" $266 for a quad core, which has far more uses, having to buy any kind of PPU on top of that would be extremely annoying for me.

Eb7 already kinda linked to Ars Technica's piece on Valve's new physics, which will fully max out a quad-core PC if desired. I could always be wrong, but my take on the tornado demo in Alan Wake was that it was more to show it can be done on the CPU; I doubt they will design the game to be so resource intensive that a quad core is needed. They stated only a dual core would be required. It's the same for game developers in general: they can't build everything and the kitchen sink into their game, because it has to play on the largest common denominator of GPUs on the market if they want the most sales. On the flip side, HL2 Episode Two and Alan Wake will simply be two games that *can* make use of everything an enthusiast can give them.


----------



## ADF (Mar 27, 2007)

The problem with just investing in a more powerful CPU is the same as buying a better CPU to run graphics: what if the best commercially available chip is still not enough? The GPU/PPU, being a hardware-tweaked dedicated solution, will always be more efficient at the task than a general-purpose CPU.

Doing high-level physics on the CPU is too resource intensive; massive amounts of resources are required to do the simplest of things, which is why physics has been progressing so slowly over the years. If that weren't the case, everyone wouldn't be talking about alternatives to CPU physics. All we have today is a bunch of objects bouncing around the screen, when physics is so much more; putting the workload on the CPU wastes cycles that could have gone towards game data and AI. Take Alan Wake, for example: it requires quite a bit of CPU power to continuously stream data in, so there are no loading screens when travelling over vast distances. Would you really want to waste potential like that in next gen PC games because of the additional workload physics puts on the CPU? I mean, just look at what basic rigid-body physics can do to a dual core, let alone cloth or liquids.
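A rough illustration of why even "basic" rigid-body scenes get expensive (a generic sketch, not any particular engine's broad phase): brute-force collision detection tests every pair of objects, so the work grows quadratically with object count.

```python
from itertools import combinations

def naive_collision_tests(n):
    """Pair tests a brute-force broad phase performs for n objects."""
    return sum(1 for _ in combinations(range(n), 2))

for n in (10, 100, 1000):
    print(n, naive_collision_tests(n))
# 10 -> 45, 100 -> 4950, 1000 -> 499500: n*(n-1)/2 growth, before any
# narrow-phase work, integration, cloth, or fluids are even considered.
```

Real engines use spatial partitioning to cut this down, but the underlying scaling pressure is the same argument being made here.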


----------



## Kougar (Mar 28, 2007)

The GPU is a hardware-tweaked, dedicated solution for graphics crunching, but I don't need to point out how quickly these become outdated overnight, let alone how woefully inadequate they eventually become for high/ultra game settings. I think the same would happen with a PPU if it ever took off, though perhaps not at quite as fast a pace as GPUs, I'll grant.

If you take physics out of the games that use Havok Physics, or their own version of CPU-dependent physics, then quite a few of them will lose most of their multithreading ability in the process. Except in rare cases like Alan Wake, games without threads for physics hardly utilize the CPU at all. Alan Wake is still the only game I know of that mandates it can't run on a single core.

The CPU is woefully inadequate for generating graphics, so I'd have to question that video, as I don't know exactly which parts were CPU dependent and which weren't. Valve's demonstration of their new particle engine was a good example of what you are saying, though, and I do remember seeing the flag demo for Ageia's PhysX... I simply don't see game developers using the CPU to its full potential in real-life situations, however. Anyone can design a physics program with enough variables to max out anything, but no one is coming close to doing it in games; not even Alan Wake will max out a quad core, although it will load all four cores. Even with physics being done on the CPU in all the games that now incorporate Havok physics engines, dual cores are not really fully utilized yet.

Edit: I never mentioned it, but now with "stream processors" I wouldn't mind seeing old GPUs turned into physics processors. The unified shader was designed to compute pixel, vertex, geometry, and physics work, and with anywhere from 36 to 128 of them they should be more powerful than Ageia's current offering but just as dedicated, assuming Nvidia would fix their driver issues to enable physics use. Everyone tends to end up with old/outdated GPUs; turning one into a PPU would be great, I think.


----------



## ADF (Mar 28, 2007)

The problem with GPU-accelerated physics is that pixel pipelines are read-only, which makes sense for graphics but not for physics. The end result is physics that cannot affect gameplay, due to the lack of any collision detection. The CPU can be utilised to give it gameplay physics, but this just leaves you with the same gameplay you had previously, with additional stuff thrown in to make it look prettier.
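The distinction can be sketched in miniature (toy Python, not Havok FX's actual API; every name here is invented): effect physics is a read-only pass that produces visuals from the game state, while gameplay physics must write its collision results back into the state the game logic reads.

```python
game_state = {"player_pos": 0.0, "debris": [1.0, 2.0, 3.0]}

def effect_physics(state, dt=1 / 60):
    """Read-only pass (the effect-physics style): produces visuals but
    never mutates the state, so nothing here can change gameplay."""
    return [d + 0.5 * dt for d in state["debris"]]

def gameplay_physics(state, dt=1 / 60):
    """Read/write pass: the toy "collision" result is written back into
    the state the game logic sees, so it can affect play."""
    state["player_pos"] += 1.0 * dt
    if state["player_pos"] > 0.01:   # hit a wall at 0.01
        state["player_pos"] = 0.01
    return state

visuals = effect_physics(game_state)
assert game_state["debris"] == [1.0, 2.0, 3.0]  # untouched: eye candy only
gameplay_physics(game_state)                    # mutates what the game reads
```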

Not just any GPU can be used for physics, either; you still need a rather decent last-gen one, rather than anything lying around, due to the pipeline/memory requirements, and it is almost a given that a GPU emulating physics will not be as efficient as hardware designed for it. A second PCI-E slot for a GPU isn't exactly common either, since SLI is mostly for enthusiast gamers.

The way I see it, they mostly intend for GPU physics to run on an 8-series or ATI-equivalent card alongside your graphical data; personally, I would prefer the GPU just to do its job rather than trying to diversify its uses. They say it only eats a few pipelines, but the tech demos say otherwise: it ate the entire 7-series GPU in most cases, as they really pushed the physics load to show it can get near the PPU's performance.


----------



## Kougar (Mar 28, 2007)

Parts of the pipeline are read-only, but no longer the entire pipeline as it is currently configured in unified-shader GPUs. Technically, with stream processing/unified-shader GPUs, the pixel pipeline as it was does not exist; the shaders that replaced it are fully read/write capable. My argument for GPU folding only considers unified-shader GPUs, for both the reasons you mentioned and these. :wink:

Here's an excerpt from Ars Technica... this is specifically why the same shader in the pipeline can do pixel, physics, vertex, or geometry work. The next step would be to develop the software to create the actual changes in gameplay you talk about, where the results of physics calculations can directly change gameplay. The G80 offers the hardware to do that.



> The 8800 is clearly the product they were talking about; it's actually built from the ground up as a highly multithreaded, general-purpose stream processor, with the GPU functionality layered over it in software. This is the reverse of existing general-purpose GPU (GPGPU) approaches. So with the G80, a programmer can write a stream program in a regular high-level language (HLL) that compiles directly to the stream processor, without the additional overhead that goes along with translating HLL programs into a graphics-specific language like OpenGL's GLSL.
> 
> Ideally, a program for the G80 would consist of hundreds of stream processing threads running simultaneously on the GPU's many arrays of tiny, scalar stream processors. These threads could do anything from graphics and physics calculations to medical imaging or data visualization.





> Each of these 16 blocks acts as a giant, 4,000-entry virtual register file for a stream processor in the tile. Each SP can read from and write to this virtual register file as it executes a thread, with the result that the G80 can perform in one pass the kind of inter-element vector arithmetic operations that required multiple passes on previous GPUs.
> 
> The hardware also provides support for load/store access between main memory and this virtual register file, which is why it's a bit like the Cell processor's local store. Finally, this virtual register file is automatically backed by an L2 and, ultimately, by DRAM. Because the register file contents can be paged out to DRAM, this means that the CPU can access the results of a stream computation with a simple memory read.
> 
> ...Programmers will be able to write gameplay-affecting physics code in C, and have it run on the G80.


 Source

The only reason this isn't currently being used by anyone is that Nvidia only released a beta SDK to compile programs in late February. Well, that, and from what I've learned via Folding@home, it's so buggy and full of issues that they cannot even use it to create a new program to fold on G80 hardware without fixing the drivers themselves. Nvidia won't get around to fixing their CUDA drivers/SDK until they are done playing catch-up with basic Vista drivers, which will take a minimum of 2 more months.

Since you bring up physics on 7-series hardware, there is a huge reason why distributed computing projects like Folding@home do not use Nvidia's 7900 and older hardware: it is not precision-accurate enough (I think the term was 16-bit), and results would have to be rerun through the entire pipeline to make up for it. F@H uses ATI graphics shaders on x1900/x1800/x1600 hardware because the 32-bit floating-point ability is already built into the hardware. Because of all this, I'd be amazed if their old GPU hardware could even run any kind of emulated physics.
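The precision gap is easy to demonstrate in general terms (a generic rounding demo, nothing to do with F@H's actual code): storing the same value at 16-bit and at 32-bit width and reading it back shows how much more error the narrower format carries.

```python
import struct

def roundtrip(value, fmt):
    """Store a float at the given width, then read it back."""
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

x = 0.1
half = roundtrip(x, "e")    # 16-bit storage
single = roundtrip(x, "f")  # 32-bit storage

print(abs(half - x) > abs(single - x))  # True: 16-bit loses far more
```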

Funnily enough, all of this is why I fully recommend ATI over Nvidia hardware any day. Anyway, I would rather see physics put otherwise-unused CPU cores to use, or barring that, put an old unified-shader GPU back to use, before having to cough up $160 to $300 for a PPU that I can't do anything else with.


----------



## ADF (Mar 28, 2007)

Really? Because I hear it still suffers from the same problems; even the official FAQ says Havok FX is only capable of effect physics and relies on the CPU for anything that affects gameplay. It doesn't really note whether the new GPUs make any difference to this limitation.

I personally dislike the route GPUs are taking; not the unification of shaders, but the GPU trying to be a jack of all trades. When I buy a new graphics card, I do so because I want my games to look better and run at higher resolutions. I don't want to have to turn down the graphical settings, the reason I purchased the card in the first place, because the GPU wants to play CPU.

I am not daft; they can spin it all they want that modern GPUs handle both effortlessly, but I know it is going to eat enough of your performance to require turning down settings. Well, it would require turning down settings, but I imagine such games would purposely be made at lower quality than intended, so users cannot see what they lost for those extra effects.

People are talking about doing all sorts of things with graphics cards, probably to grab shares of other markets, and they forget why we have GPUs in the first place. The path they are taking is only going to result in needing ever more powerful (and expensive) hardware, because it is juggling tasks other hardware should be doing.

To me, Havok FX is just the GPU companies trying to justify their highest-end cards and SLI setups by finding new ways to increase their workload, something that will hurt the low-to-mainstream users who cannot afford such hardware.


----------



## Kougar (Mar 29, 2007)

Well, there is a difference between Havok Physics and Havok FX. Havok Physics is in everything from HL2 to Company of Heroes and uses the CPU, while the new Havok FX is designed to use SM3.0-shader-equipped graphics cards. When I mention it, I'm referring to the CPU version. I don't know of any games that use Havok FX specifically, but you are right that Havok FX is only capable of effect physics; I never said otherwise. Until the G80, the hardware to enable gameplay-affecting physics didn't exist anyway, hence my post above detailing the Ars Technica article. Hardware always comes out before the software that makes full use of it.

The jack-of-all-trades bit is mostly hype on the GPU companies' part, in my opinion, but I see some value in it. I'd get satisfaction knowing I can use my GPU to do F@H work, for instance, which would compute literal circles around the best CPU out there today. That's not speculation.

If you think GPU companies are losing focus, then I'm not sure what you will make of CPU companies. BOTH Intel and AMD have now announced plans to build mid/low-end GPUs integrated into their processors *by the middle of next year.* ATI/Nvidia are going to have to do something to avoid going under when this happens, and playing to their biggest strength (namely, being king at anything requiring massively parallel computing) will be necessary. If you care to believe Intel, they plan to have a graphics card 16 times more powerful than an 8800GTX by 2009-2010.

Currently, the unified shaders are probably what has given Nvidia such a boost over ATI cards, since a single 8800GTX is better than a pair of x1950XTX cards in Crossfire in almost every game. R600 is going to be the same way, but a level higher than that; IMHO, good enough to beat the 8800 Ultra when it comes out in ~20 days.


----------



## ADF (Mar 29, 2007)

My apologies then, I did think you were referring to Havok FX.

All the companies diversifying does concern me, though; hardware used to be simple, but now it is all going grey. I have heard people say everything will be on-die one day; frankly, I couldn't think of a worse future than one component regulating all hardware tasks. No single company should have that much control over what goes in your computer.

Think about it: if a company like Intel created a chip with many cores covering sound, graphics, general processing, physics, etc., what is to stop them making modifications the user doesn't want? If you don't have to worry about making separate hardware work together, implementing hardware DRM/HDCP wouldn't be too difficult. If you wanted to upgrade the performance in one area, you would have to replace the entire chip, which I am sure would not be cheap; everyone would be force-fed the same general specs, where if you wanted just one component you would have to get everything else along with it. And let's not forget that everything being controlled by CPU companies would make it practically impossible for new competition to arise; you would never see anything like the GPU or PPU appear, as the new hardware just doesn't leave room for expansion.

It would be like the Microsoft of hardware, trapped and forced to accept what you are given.

These are just some of the many reasons I would dislike everything being on-die, though I imagine it also has the CPU manufacturers rubbing their hands together in glee.


----------



## Kougar (Mar 30, 2007)

Well, the integration thing is a different topic I won't get into. I'll just say that AMD needs whatever advantages it can get, as their future outlook is damn bleak, and I challenge anyone out there to prove otherwise. Nobody will be able to say AMD has a single superior architecture design over Intel by the middle of next year once Nehalem ships, and they have nothing new forthcoming before then as far as new designs go.

For the many users who don't need gaming-quality hardware, such as laptop users, a CPU/GPU hybrid would be far superior to an integrated GPU in both performance and power savings. And the more things are integrated, the less there is to worry about some part of the DRM chain failing to work. Not that I am for HDCP or the upcoming DPCP (DisplayPort's version of HDCP); in fact, I think the idea is completely flawed when it is far easier to pirate a film in order to watch it than to stick to the legal methods that exist.


----------



## BrutusCroc (Mar 30, 2007)

Kougar said:

> Well, the integration thing is a different topic I won't get into. I'll just say that AMD needs whatever advantages it can get, as their future outlook is damn bleak, and I challenge anyone out there to prove otherwise. Nobody will be able to say AMD has a single superior architecture design over Intel by the middle of next year once Nehalem ships, and they have nothing new forthcoming before then as far as new designs go.



Man, don't go near certain tech-related websites; some of them have such die-hard AMD fans that they still think Conroe is fake... and people still eat that stuff up.

I used to be in the AMD boat until their lapse; all my stuff has been AMD (4 machines here), but I have not had the desire to upgrade yet, and I would certainly get a Core 2 Duo if I did. The only reason I can see to buy AMD in today's market is if you want to take advantage of their severe price slashing and want the cheapest possible dual-core system. It would really suck, though, if AMD were wiped from the planet; if you are old enough to remember the "pre-clone CPU" days of the late 80s and early 90s... Intel, with its command over the market, charged whatever it wished for CPUs. Remember $1000+ 486s?

As for HDCP, my only words for DRM are that it can't die in a fire quickly enough. It's not 1985 anymore, with worn VHS tapes and degraded generations; if I buy a movie in a digital format, it theoretically should last me as long as I can keep it on working media. I'd buy movies at $40 a pop if they came in 1080p and allowed me to convert them to whatever I want for different playback devices.


----------



## Kougar (Mar 30, 2007)

Well, there are always plenty of misinformed "people" to go around who can't get a clue even when it's force-fed to them. It's just ridiculous, such as those who'll read an 8800GTS review, then come away with the idea that buying a second 7900GT, for almost the price of an 8800GTS, will actually give them better performance. AMD developed quite the fanbase when they were the "underdog", the anti-monopoly company, etc. etc. etc.

Don't get me wrong, AMD *will* surpass today's Core 2 Duo in performance, and especially in the server market, with their K10 launch in just a handful of months now. After that they plan to do the logical 45nm shrink of K10... but after that? They have nothing at all listed or publicly announced. By the time they shrink K10 to 45nm, Intel will have launched Nehalem in reply, basically the nuke card. Nehalem will be using every single advantage AMD has claimed for the past 3 years against them, molded onto a very advanced form of today's Core 2 Duo. As it stands, based on public knowledge, I don't see AMD ever regaining the performance/performance-per-watt crown after Nehalem launches. If they had details to divulge about future chips, it would have been in their best interest to at least give a smattering of them, to save their stock price from reaching a new all-time low this week.

I can only speculate, but I currently imagine Nehalem to be a roughly 3.8GHz part with maybe double the performance of a Core 2 Duo, very likely higher, especially at higher clock speeds. It's going to be mass murder on AMD's prices all over again as they try to stay price/performance competitive.


----------



## ADF (Mar 30, 2007)

The CPU industry has been playing leapfrog for a good couple of years, so I wouldn't worry about AMD. As for the fanatical fans, it isn't too surprising; keep in mind that when AMD ruled, there were people who 'chose' Pentium 4s for their gaming computers despite them being much slower. People just have brand preferences and will stick with them even when their brand isn't the leader at that present time; they will find ways to justify it. I wouldn't necessarily call it a bad thing, as we need people to fund companies that are not doing so well to help keep competition alive; hell, if there had been no one to kick Intel in the face when they screwed up with the Pentium 4 line, they wouldn't have invested the money into the killer Conroe.


----------



## Kougar (Mar 31, 2007)

If they want to buy based on brand name or price alone and no other factors, then that's more than fine by me; I have nothing against that. I, on the other hand, do get annoyed when they basically say something to the effect that a Ferrari Enzo is outclassed by a '93 Ford Focus, which lived up to the Found On Road Dead nickname.

I have to disagree, though. I honestly never was one to give a second thought to whether AMD was going to make it, but for the first time I really think they don't have anything in their repertoire, whether new CPU designs or new CPU technologies, that Intel hasn't already incorporated into their own products. There's been not a peep of anything, and knowing the company and their fab process, they will be busy spending 2008 turning their 65nm K10 chip into a 45nm K10 chip, which at most will help production costs and allow somewhat higher clock speeds, IF they use the new fab process technology, which they haven't decided upon.


----------

