How We Test: CPU Game Benchmarks







Today, we are discussing a topic that often comes up when we publish our CPU game benchmarks. As you know, we perform a ton of CPU and GPU benchmarks throughout the year, much of which is dedicated to gaming. The goal is to determine which CPU will offer you the most for your money at a given price, both now and, hopefully, into the future.



Not long ago we compared the Core i5-8400 and the Ryzen 5 2600. Overall, the R5 2600 was faster once tuned, but it also cost more per frame, making the Core i5 the cheaper and more practical option.




For that matchup we compared the two processors in 36 games at three resolutions. Because we want to use maximum in-game quality settings and apply as much load as possible, the GeForce GTX 1080 Ti is our graphics weapon of choice. This helps minimize GPU bottlenecks that can mask potential weaknesses when analyzing CPU performance.





The problem is that many readers seem to wonder why we do this and, I guess without thinking it through, head to the comments section to criticize the content as misleading and unrealistic.



This is something we have seen over and over again, and we have addressed it directly in the comments. Often, other readers have also come to the rescue to explain to their peers why the tests are done a certain way. But as the CPU scene has become more competitive again, we thought we would address this topic more broadly and hopefully explain a little better why we test all CPUs with the most powerful gaming GPU available to date.



When we test new processors, we have two main objectives in mind: #1, how it performs right now, and #2, how "future-proof" it is. Will it still serve you well in a year, for example?



As mentioned a moment ago, it all boils down to removing the GPU bottleneck. We do not want the graphics card to be the performance-limiting component when measuring processor performance, and there are a number of reasons why this matters, which I will cover in this article.



Let's start by explaining why high-end GPU testing is not misleading and unrealistic ...



Yes, that's right. It is unlikely that anyone will want to pair a GeForce GTX 1080 Ti with a processor costing less than $200. However, when we put dozens and dozens of hours into comparative testing of a set of components, we aim to cover as many bases as possible in order to provide you with the best purchasing advice we can.



Obviously, we can only test with the games and hardware available now, which makes it harder to predict how components like the CPU will behave in yet-to-be-released games running on more modern graphics cards, say a year or two down the track.





Assuming you do not upgrade your processor every time you buy a new graphics card, it's important to determine how a processor performs, and how it compares with competing products, when it is not limited by the GPU. That's because while you might pair your new Pentium G5400 with a modest GTX 1050 Ti today, in a year you could have a graphics card packing twice the processing power, and in two to three years, who knows.



For example, if we compare the Pentium G5400 to the Core i5-8400 using a GeForce GTX 1050 Ti, we would conclude that in today's latest and greatest games the Core i5 brings no real performance benefit (see graph below). Then, in a year or two, when you upgrade to a performance level equivalent to the GTX 1080, you will wonder why GPU utilization is hovering around 60% and you are not seeing the performance you should be.
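As a side note for readers who like to check this themselves, below is a minimal sketch, assuming an Nvidia card and the `nvidia-smi` command-line tool, of how you could log GPU utilization during a run. It is purely illustrative rather than our actual test harness, but sustained utilization well below full load while frame rates stall is the classic sign of a CPU bottleneck.

```python
# Illustrative only: poll GPU utilization with nvidia-smi while a game or
# benchmark is running, then report the average over the sampling window.
import subprocess
import time

def sample_gpu_utilization(duration_s: float = 60.0, interval_s: float = 1.0) -> float:
    """Return average GPU utilization (%) over the sampling window."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        out = subprocess.check_output([
            "nvidia-smi",
            "--query-gpu=utilization.gpu",
            "--format=csv,noheader,nounits",
        ], text=True)
        samples.append(float(out.strip().splitlines()[0]))  # first GPU only
        time.sleep(interval_s)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    avg = sample_gpu_utilization(duration_s=30)
    print(f"Average GPU utilization: {avg:.0f}%")
    if avg < 95:
        print("GPU is not fully loaded -- likely CPU (or engine) limited.")
```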





Here is another example we can use: in early 2017, upon the release of the Pentium G4560, we published a GPU scaling test in which we observed that a GTX 1050 Ti was no faster with the Core i7-6700K than with the Pentium processor.

However, using a GTX 1060 the Core i7 was 26% faster on average, which meant the G4560 was already creating a bottleneck, though we could only see that by using a higher-end GPU. With the GTX 1080 we found the 6700K to be almost 90% faster than the G4560, and that is a GPU whose performance will likely be considered midrange by this time next year, much like what we see when comparing the GTX 980 and GTX 1060, for example.








Now, with this example you could say the G4560 was just $64 while the 6700K was $340, so of course the Core i7 was going to be miles faster. We do not disagree. But in this 18-month example we can see that the 6700K had significantly more headroom, which we would not have known had we tested with the 1050 Ti or even the 1060.



One could also argue that even today, at an extreme resolution such as 4K, there would be little or no difference between the G4560 and the 6700K. That might be true for some titles, but not for others such as Battlefield 1 multiplayer, and it certainly will not be true in a year or two when games become even more CPU-demanding.



Also, do not fall into the trap of assuming everyone uses ultra quality settings or targets only 60 fps. There are plenty of gamers on midrange GPUs who opt for medium-to-high or even low settings to push frame rates well beyond 100 fps, and these are not just players with 144 Hz displays. Despite popular belief, there is a real advantage in fast-paced shooters to going well beyond 60 fps even on a 60 Hz display, but that is a discussion for another time.



Coming back for a moment to the dual-core Kaby Lake, trading a $64 processor for something more upscale is not a big deal, which is why we gave the ultra-affordable G4560 our recommendation. But when comparing more expensive processors such as the Core i5-7600K and the Ryzen 5 1600X, for example, it is very important to test without GPU limitations ...



Let's go back to our Core i5-8400 vs. Ryzen 5 2600 comparison, with its three tested resolutions, and take a look at the Mass Effect Andromeda results. The performance trends are very similar to the previous chart, aren't they? You could almost relabel 720p as GTX 1080, 1080p as GTX 1060 and 1440p as GTX 1050 Ti.





Since many suggested that these two sub-$200 processors should have been tested with a GPU carrying an MSRP of less than $300, let's see what that would have done to our three tested resolutions.












Now, we know the GTX 1060 has 64% fewer CUDA cores than the GTX 1080 Ti, and in Mass Effect Andromeda this translates into roughly 55% fewer frames at 1080p and 1440p using a 5 GHz Core i7-7700K, as we can see in these two graphs from my 35-game Vega 56 vs. GTX 1070 Ti benchmark conducted last year. There the GTX 1060 spat out 61 fps on average at 1080p and just 40 fps at 1440p.
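For anyone who wants to sanity-check those percentages, the arithmetic is simple. In the sketch below the CUDA core counts are the published specs for the GTX 1060 6GB and GTX 1080 Ti, while the GTX 1080 Ti frame rate is a hypothetical figure plugged in purely to show how a "roughly 55% fewer frames" result falls out.

```python
# How the percentage figures in benchmark write-ups are derived.
def percent_fewer(small: float, large: float) -> float:
    """How much smaller `small` is relative to `large`, in percent."""
    return (1 - small / large) * 100

gtx_1060_cores, gtx_1080_ti_cores = 1280, 3584   # published CUDA core counts
print(f"{percent_fewer(gtx_1060_cores, gtx_1080_ti_cores):.0f}% fewer CUDA cores")  # ~64%

# The GTX 1060 figure is the one quoted above; the 1080 Ti figure is hypothetical.
gtx_1060_fps_1080p = 61
gtx_1080_ti_fps_1080p = 135
print(f"{percent_fewer(gtx_1060_fps_1080p, gtx_1080_ti_fps_1080p):.0f}% fewer frames")  # ~55%
```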



Graph below: GTX 1060 (red) vs GTX 1080 Ti (full bar)





Here's where the GTX 1060 sits on our graph compared to the GTX 1080 Ti. The first red line indicates the 1% low result and the second red line indicates the average frame rate. Even at 720p we are massively GPU bound. Had I tested with the GTX 1060, or perhaps even the 1070, the results would simply have shown that both processors can max out these particular GPUs in modern titles, even at extremely low resolutions.
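For anyone unfamiliar with those two metrics, here is a minimal sketch of one common way to derive them from raw frame-time data, such as a FRAPS or PresentMon log. Exact definitions of the 1% low vary between reviewers, so treat this as an illustration rather than our precise formula.

```python
# Average fps and "1% low" fps from frame times (in milliseconds).
# Here, 1% low is the average frame rate of the slowest 1% of frames.
def average_fps(frame_times_ms):
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

def one_percent_low_fps(frame_times_ms):
    worst = sorted(frame_times_ms, reverse=True)   # slowest frames first
    slice_len = max(1, len(worst) // 100)          # the worst 1%
    return average_fps(worst[:slice_len])

# Tiny made-up log: mostly ~10 ms frames (about 100 fps) with a few spikes.
frame_log = [10.0] * 395 + [25.0] * 5
print(f"Average: {average_fps(frame_log):.1f} fps")
print(f"1% low:  {one_percent_low_fps(frame_log):.1f} fps")
```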



In fact, you could throw the Core i3-8100 and Ryzen 3 2200G into the mix and the results would lead us to believe that neither CPU is any slower than the Core i5-8400 in modern games. Of course, there will be the odd extremely CPU-intensive title that shows a small drop in performance, but the real difference would be masked by the lower-end GPU's performance.



I have seen a few people suggest that reviewers only test with extreme high-end GPUs in an effort to make the results exciting, but come on, that is a bit too silly to entertain. As I said, the intention is to determine which product will serve you best in the long run, not to keep you on the edge of your seat with an extreme benchmark battle to the death.





As for providing more "real world" results by testing with a lower-end GPU, I would argue that unless we test a range of GPUs at a range of resolutions and quality settings, you are not going to see the kind of real-world results many claim this would deliver. Given the enormous and unrealistic undertaking that sort of testing would become for more than a handful of games, the next best option is to test with a high-end GPU. And if you can do so at two or three resolutions, as we often do, that will mimic GPU scaling performance.
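To put a rough number on how quickly that kind of testing balloons, here is a back-of-the-envelope calculation; every count in it is illustrative rather than our actual schedule.

```python
# Back-of-the-envelope: how many runs a "test everything" matrix would need.
cpus = 2
gpus = 4             # e.g. entry-level, midrange, upper-midrange, high-end
resolutions = 3      # 720p, 1080p, 1440p
quality_presets = 3  # low, medium, ultra
games = 36
passes = 3           # repeat runs for consistency

runs = cpus * gpus * resolutions * quality_presets * games * passes
minutes_per_run = 5  # setup plus a benchmark pass, optimistically
print(f"{runs} runs, roughly {runs * minutes_per_run / 60:.0f} hours of testing")
```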



Don't get me wrong, testing with lower-tier graphics cards is not a stupid suggestion, it's just a different kind of test. In the end, I get the impression that those who suggest this testing methodology do so from a very narrow point of view. Play Mass Effect Andromeda with a GTX 1060 at medium settings and you will see much the same frame rates you would get with a GTX 1080 Ti at ultra quality settings. So don't make the further mistake of assuming everyone plays under the same conditions as you do.



Gamers have a wide and varied range of requirements, so we try our best to use a method that covers as many bases as possible. For gaming CPU comparisons we want to determine, across a large volume of games, which product offers the best overall performance, as it will probably still be the better choice in a few years. Since GPU-limited testing tells you little or nothing about that, it's something we try to avoid.







