ShadowDjinn, on December 06 2012 - 03:53 PM, said:
uh... clock speed is defined by the number of instructions carried out per second.
1 "clock" = 1 "instruction"
now I'm sure you're about to spout some other inane drivel about how your CPU is better than everyone elses.
fact of the matter is this; you haven't added anything constructive to the thread so go troll somewhere else.
and while I'm at it; you can cut the proverbial about CPU bottlenecks, the bottleneck is in the game code pure and simple.
granted you might have a system that can grunt it out, but not everyone does.
Like I told some other guy who told me the same thing, don't tell me what to do. You're not a mod. This is my opinion and experience talking, so if you don't like it, tough. Scroll past my posts. (Indeed, if you don't like this long post, you'd be well served by doing that now.)
As for your statement of clock equaling instructions: Congratulations! You're flat-out and completely
WRONG!
Since you seem to act like you know CPUs, I'll educate you so you can get it right next time. A CPU's clockrate is how many times its clock oscillates per second, expressed in Hz. Let's take two current CPUs as examples: The Core i7 2600K has a 3.4 GHz clockrate, and the AMD FX-8350 has a 4.2 GHz clockrate. (They both have boost tech that lets them get higher, but we won't get into that as it's not important.)
Now, let's get into instructions. An instruction, obviously, is a piece of work the CPU does. How quickly the CPU can finish one depends partly on its execution resources, partly on how fast it's clocked, and partly on tricks like out-of-order execution. Generally speaking, relatively few instructions are guaranteed to complete in one clock cycle (mostly simple, basic things like integer adds). One instruction per clock is the ideal (it would mean the faster CPU is simply the one with the higher clockrate), but it's not the reality. For example, the original 8087 math coprocessor took between 70-100 cycles to do an FADD (floating-point addition) instruction. The 80387 could do it in 23-34 cycles. An 80486 (or 80487) could do it in 8-20. An Athlon 64 can do it in 1-5. This means that even if you somehow scaled the 8087 up to Athlon 64 clockspeeds, it still could not do the work as quickly as a real Athlon 64 can; the newer chip is simply a more efficient processor. This is where the arguments about clockrate come into play.
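To put numbers on that, here's a quick sketch. The cycle counts are the worst-case FADD latencies quoted above; the 2 GHz clockrate is a hypothetical round number I picked purely so both chips run at the *same* clock:

```python
# Rough illustration: instructions per second = clockrate / cycles-per-instruction.
# Cycle counts are the worst-case FADD latencies quoted above; the 2 GHz clock
# is a hypothetical figure chosen so both chips are compared at the SAME speed.

def fadd_per_second(clock_hz, cycles_per_fadd):
    """How many FADD instructions finish per second at a given clock."""
    return clock_hz / cycles_per_fadd

# An 8087 magically "scaled up" to 2 GHz, at 100 cycles per FADD:
scaled_8087 = fadd_per_second(2e9, 100)   # 20 million FADDs/sec
# An Athlon 64 at the same 2 GHz, at 5 cycles per FADD:
athlon64 = fadd_per_second(2e9, 5)        # 400 million FADDs/sec

print(athlon64 / scaled_8087)  # 20.0 - twenty times the work at the same clock
```

Same clockrate, twenty times the floating-point work: that's the whole clockrate-vs-efficiency argument in one division.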
As I said earlier, few instructions are truly "one instruction per clock," and in reality, modern CPUs use out-of-order execution, pipelining, and so on to keep their pipelines saturated and everything moving as fast as possible, but they're still not ideal. Eventually, some instruction will come along that, simply put, takes more than a handful of cycles to execute.
It's here where things begin to differ.
Intel focused basically on efficiency per clock; in other words, tuning their execution units to get things done in as few clock cycles as possible. They went the raw-clockrate path before (Pentium 4, anyone?) and those of us who were around then remember all too well that those chips were hot, inefficient, and inferior, especially once the Athlon 64 came out. (We also remember that overclocking them would eventually kill them.) The fact that AMD took over as the best performance-per-dollar processor sent Intel back to the drawing board, which resulted in the Core architecture: they went back to the P6 design that underpinned the Pentium III, and that move rewarded them with a return to the top around 2006. They've been there ever since, while AMD has floundered.
To compare the two design philosophies, Intel designs have relatively beefy math units, perform things like out-of-order execution, and so on. AMD has focused on power with heavily-multithreaded workloads and high clockrates, and excels in that area; the more parallel the workload, the better their processors will do. In layman's terms, this means that while the AMD will do better with stuff that can be scaled up over a bunch of cores equally, the Intel will be better with stuff that demands more number-crunching.
Unfortunately for AMD, games fall into the latter camp, not the former.
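The "many slow cores vs. few fast cores" trade-off above is basically Amdahl's law. Here's a minimal sketch of it; the core counts echo the two chips being discussed, but the per-core speed ratio and the "parallel fraction" of each workload are made-up numbers for illustration, not benchmarks:

```python
# A minimal Amdahl's-law sketch of the two design philosophies above.
# per_core_speed and the parallel fractions are HYPOTHETICAL illustrative
# numbers, not measurements of any real CPU or game.

def speedup(parallel_fraction, cores, per_core_speed):
    """Relative throughput: the serial part runs on one core, the rest on all."""
    serial = 1.0 - parallel_fraction
    return per_core_speed / (serial + parallel_fraction / cores)

# "Intel-style": 4 fast cores (1.3x per-core speed, hypothetical)
# "AMD-style":   8 slower cores (1.0x per-core speed, hypothetical)
for workload, p in [("game, ~30% parallel", 0.30), ("encode, ~95% parallel", 0.95)]:
    fast_quad = speedup(p, 4, 1.3)
    slow_octo = speedup(p, 8, 1.0)
    print(f"{workload}: 4 fast cores {fast_quad:.2f} vs 8 slow cores {slow_octo:.2f}")
```

With these toy numbers, the four fast cores win the game-like workload and the eight slow cores win the highly parallel one, which is exactly the split described above.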
The simple fact is that even at base clocks, my quad-core 3.4 GHz i7-2600K
WILL stomp AMD's octo-core, 4.2 GHz FX-8350.
It just plain will. The only areas where the AMD will best my processor are things like 3D modelling, movie encoding, or similar tasks that can be highly parallelized across many cores. (A good deal of this is because current AMD CPUs support fused multiply-add, while Intel's do not; next year's Haswell will add it, however.) In games, or anything else that heavily loads only a few threads, there is no current AMD processor in existence that can catch up to what I currently own. Furthermore, for all of that, it consumes more energy (130W vs. 95W TDP for mine) and runs 800 MHz faster at stock speeds than mine... and it still can't top mine in most areas. My processor keeps close to it while having basically half the cores (though HyperThreading gives it a boost), and I'm sure that if I had a true octo-core processor, the Intel would handily win those benchmarks too.
AMD processors are, simply put, inferior in all but a very few areas, and unless your workloads live in those areas, they do not justify the money. Since I play a lot of games, my money was better spent on what I have. In almost every area outside the few where AMD excels (by fairly small margins), my processor does more work, at a lower clockspeed, while generating less heat. It is, therefore, more efficient.
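The efficiency claim is easy to quantify. The TDPs are the 95W and 130W figures quoted above; the benchmark "scores" are hypothetical placeholders (I'm assuming equal raw output just to isolate the power difference):

```python
# Performance-per-watt using the TDPs quoted above (95W Intel vs. 130W AMD).
# The benchmark scores are HYPOTHETICAL placeholders, not real results;
# equal scores are assumed purely to isolate the power-draw difference.

def perf_per_watt(score, tdp_watts):
    return score / tdp_watts

intel_score, amd_score = 100.0, 100.0  # hypothetical: assume equal raw output
intel_eff = perf_per_watt(intel_score, 95)
amd_eff = perf_per_watt(amd_score, 130)

# Even at merely EQUAL output, the 95W part does ~37% more work per watt:
print(intel_eff / amd_eff)  # ≈ 1.368
```

And that's the break-even case; every benchmark the 95W part outright wins only widens that gap.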
The game can certainly be tuned to perform better on them, but he is simply not going to get a magical 60 FPS at max details on settings identical to an Intel-powered system, not without doing things like turning down the resolution. Unreal Engine (which this game runs on) is very CPU-hungry and not so much GPU-hungry, and AMD CPUs are, thanks to the way they are designed, at a disadvantage when running games on this engine.
Here's an example using another UE3 game, Arkham City. Framerate-wise, there's a 22 FPS difference, and in terms of consistency, my processor will render 99% of all frames within about 20 ms, while his will take about 29 ms - and that adds up over time. Simply put, the Intels crunch the frames in less time, more efficiently and more consistently, than the AMDs do. This is one of the big reasons why I will enjoy a considerably faster and smoother gameplay experience than he will.
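For anyone who hasn't seen frame-time numbers before, here's the arithmetic behind them. A frame rendered in 20 ms corresponds to 50 FPS, one in 29 ms to roughly 34.5 FPS; the sample list below is made up just to show how a 99th-percentile frame time is computed:

```python
# Frame-time math behind the numbers above. The sample frame times are
# made up for illustration; only the 20 ms / 29 ms figures come from the text.

def to_fps(frame_time_ms):
    """Convert one frame's render time (ms) to an instantaneous FPS."""
    return 1000.0 / frame_time_ms

def percentile_99(frame_times_ms):
    """99% of frames finished within this many milliseconds (nearest-rank)."""
    s = sorted(frame_times_ms)
    return s[int(0.99 * (len(s) - 1))]

print(to_fps(20))            # 50.0 FPS
print(round(to_fps(29), 1))  # 34.5 FPS
print(percentile_99([16, 17, 17, 18, 18, 19, 20, 21, 24, 33]))  # made-up sample
```

This is why 99th-percentile frame time is a better smoothness metric than average FPS: a handful of 33 ms stutter frames barely moves the average but shows up immediately in the percentile.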
When even a dual-core chip (the i5-655K) provides smoother overall gameplay than an octo-core FX-8150, you can no longer blame the game code. You can only blame the architecture behind the processor. I'll let their end-of-testing conclusion sum it up, as it will probably do so better than I can.
Quote
Ivy Bridge moves the ball forward, but Intel made even more performance progress in the transition from the prior-generation Lynnfield 45-nm processors—such as the Core i5-760 and i7-875K—to the 32-nm Sandy Bridge chips. From Sandy to Ivy, some of the potential speed benefits of the die shrink were absorbed by the reduction of the desktop processor power envelope from 95W to 77W.
Sadly, with Bulldozer, AMD has moved in the opposite direction. The Phenom II X4 980, with four "Stars" cores at 3.7GHz, remains AMD's best gaming processor to date. The FX-8150 is slower than the Phenom II X6 1100T, and the FX-6200 trails the X4 980 by a pretty wide margin. Only the FX-4170 represents an improvement from one generation to the next, and it costs more than the Phenom II X4 850 that it outperforms. Meanwhile, all of the FX processors remain 125W parts.
We don't like pointing out AMD's struggles any more than many of you like reading about them. It's worth reiterating here that the FX processors aren't hopeless for gaming—they just perform similarly to mid-range Intel processors from two generations ago. If you want competence, they may suffice, but if you desire glassy smooth frame delivery, you'd best look elsewhere. Our sense is that AMD desperately needs to improve its per-thread performance—through IPC [Instructions Per Clock --DP] gains, higher clock speeds, or both—before they'll have a truly desirable CPU to offer PC gamers.
Since this review got published, the FX-8350 has come out; that would be the one to go to if, for some reason, he was not doing a full system rebuild anytime soon but just had to have a faster experience. But if he is planning on doing a new system sometime soon, Intel's Haswell CPUs come out next spring, and I see essentially zero reason to recommend an AMD over them for any reason except for budgetary ones. They're simply inferior in almost every area.
Is that to say the devs should not work on AMD optimization? No, they absolutely should. Plenty of people still have AMD CPUs, and the game needs to run at least reasonably well on them. It does not right now, so the devs have some work to do. But the devs can only do so much before, eventually, it is out of their hands. So while they can (and should) improve performance on AMD processors, eventually it gets to a point where the wall is no longer how well the game performs on AMD processors, but how AMD processors perform, period - and at that point, no amount of tuning or tweaking will really improve things further.
Edited by DarkPulse, December 07 2012 - 04:41 AM.