A few months back we investigated CPU core misconceptions, explaining how overall processor performance is affected not only by how many cores a CPU has, but also by other factors including cache levels and capacity. This was an interesting and unique look at Intel's 10th-gen series in an article we titled "How CPU Cores & Cache Impact Gaming Performance." Basically what we did was compare the Core i9-10900K, Core i7-10700K and Core i5-10600K at the same 4.2 GHz frequency, with the same memory, memory timings, ring bus frequency, and so on.
Then we compared the three CPUs with only 6 cores / 12 threads enabled to see how much of a difference L3 cache capacity made to gaming performance. After that, we compared that data to the 10700K and 10900K with 8 cores enabled, and finally to the 10900K with all 10 of its cores turned on.
Long story short, it turns out that in almost all games, it's not the core count but the L3 cache capacity that's responsible for the improved performance seen across the higher-end Intel parts. Of course, down the track the extra cores will see those higher-end parts pull even further ahead, but at least in today's games it's all about the L3 cache.
That investigation later morphed into a quad-core version where we included Core i3 models, followed by a similar take for AMD CPUs, where we looked at 10 years of AMD CPU progress, and then back to Intel for the same.
To wrap up that content, we thought we should add the new Intel Alder Lake 12th-gen CPUs to the data pool, so here we are, and it's been a more involved process than we first imagined. Whereas all the other CPU architectures had one, two, or maybe three different configurations to test, 12th-gen Core requires three configurations per CPU model.
For example, where the 10th-gen series had 20MB of L3 cache for the Core i9 model, 16MB for the i7 and 12MB for the i5 models, Alder Lake's cache capacity is segmented in a similar fashion: 20MB of L3 for the Core i5, 25MB for the i7 and 30MB for the i9. But then on top of that we had to work out what kind of core configuration we should test. Four P-cores, four E-cores, or a mixture of both? The correct answer was of course all three configurations, and that's provided us with a wealth of juicy data to go over.
To be clear, with four P-cores enabled we were using Hyper-Threading, so this is a 4-core/8-thread configuration. Basically, SMT was enabled where supported for all test configurations. Because the E-cores don't support SMT, the four E-core configuration ran 4 cores and 4 threads, while the mixed configuration of two P-cores and two E-cores was a 4-core/6-thread configuration.
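The core-to-thread math above is simple enough to sketch out. Here's a small illustrative snippet (not part of our test tooling) showing how each 4-core configuration arrives at its thread count, given that only P-cores support SMT:

```python
# Thread counts for the three 4-core Alder Lake test configurations.
# P-cores with SMT (Hyper-Threading) enabled contribute 2 threads each;
# E-cores lack SMT, so they contribute 1 thread each.
configs = {
    "4P + 0E": {"p_cores": 4, "e_cores": 0},
    "2P + 2E": {"p_cores": 2, "e_cores": 2},
    "0P + 4E": {"p_cores": 0, "e_cores": 4},
}

for name, c in configs.items():
    cores = c["p_cores"] + c["e_cores"]
    threads = c["p_cores"] * 2 + c["e_cores"]  # SMT doubles P-core threads only
    print(f"{name}: {cores} cores / {threads} threads")
```

This prints 4 cores / 8 threads, 4 cores / 6 threads, and 4 cores / 4 threads respectively, matching the configurations described above.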
For testing we've used the MSI Z690 Tomahawk Wi-Fi DDR4 as we wanted to use the same DDR4-3200 CL14 low-latency memory that was used to test all the other CPU architectures that support DDR4. In our testing, DDR5-6000 has not been shown to be any faster for gaming, but most importantly we wanted to keep the data as apples to apples as possible for this feature. Finally, all configurations were tested using the Radeon RX 6900 XT. Let's dive into the data.
Benchmarks
Starting with Rainbow Six Siege, there's quite a bit to go over, so bear with me. First, looking at just the Core i9-12900K, with four P-cores enabled and locked at 4.2 GHz this configuration was good for 510 fps, just 3% faster than AMD's Zen 3 architecture.
Then with two P-cores and two E-cores enabled, performance dropped by 15%, which is a fairly significant reduction, and with just four E-cores enabled performance dropped by a further 12%, which isn't that much and nowhere near the decline I was expecting. Quite shockingly, in this title four E-cores were able to match the performance of the Core i9-11900K. Granted, the 11th-gen architecture does suck in this title, but still, I didn't expect to see any results like this.
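To put those percentage drops into absolute frame rates, the arithmetic works out as follows. The 510 fps figure is from our measured results; the other two numbers are derived from the stated percentages, so the actual measured values may differ slightly due to rounding:

```python
# Rainbow Six Siege, Core i9-12900K locked at 4.2 GHz, Radeon RX 6900 XT.
four_p = 510                 # 4 P-cores / 8 threads, measured result
mixed = four_p * (1 - 0.15)  # 2P + 2E: 15% slower than four P-cores
four_e = mixed * (1 - 0.12)  # 4 E-cores: a further 12% slower than the mix

print(f"2P+2E: ~{mixed:.0f} fps, 4E: ~{four_e:.0f} fps")
```

So the mixed configuration lands around 434 fps and the four E-core configuration around 381 fps, which is why the E-core-only result can still trade blows with the 11900K here.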