Intel’s XeSS tested in depth vs DLSS – the Digital Foundry technology review

Intel’s debut Arc graphics cards are arriving soon and ahead of the launch, Digital Foundry was granted exclusive access to XeSS – the firm’s promising upscaling technology, based on machine learning. This test – and indeed our recent interview with Intel – came about in the wake of our AMD FSR 2.0 vs DLSS vs native rendering analysis, where we devised a gauntlet of image quality scenarios to really put these latest technologies through their paces. We suggested to Intel that we would love to put XeSS to the test in the same way and the company answered the challenge, providing us with pre-release builds and a top-of-the-line Arc A770 GPU to test them on.

XeSS is exciting stuff. It’s what I consider to be a second-generation upscaler. First-gen efforts, such as checkerboarding, DLSS 1.0 and various temporal super-samplers, attempted to make half-resolution images look like full-resolution images, which they achieved to varying degrees of quality. Second-generation upscalers such as DLSS 2.x, FSR 2.x and Epic’s Temporal Super Resolution aim to reconstruct from quarter resolution. So in the case of 4K, the goal is to make a native-like image from just a 1080p base pixel count. XeSS takes its place alongside these technologies.

To do this, XeSS uses information from current and previous frames, jittered over time. This information is combined or discarded based on a sophisticated machine learning model running on the GPU – and in the case of Arc graphics, it runs directly on the XMX (matrix multiplication) units on its Xe cores. The Arc A770 is the biggest GPU in the Arc stack, with 32 Xe cores in total. Arc’s XMX units process using the int8 format in a massively parallel way, making it fast. For non-Arc GPUs, XeSS works differently. They use a "standard" (less advanced) machine learning model, with Intel’s integrated GPUs using a dp4a kernel and non-Intel GPUs using a kernel built on technologies enabled by DX12’s Shader Model 6.4. This means there’s nothing to stop you running XeSS on, say, an Nvidia RTX card – but as it isn’t tapping into Nvidia’s own ML hardware, you shouldn’t expect it to run as fast as DLSS. Similarly, because it is using the "standard" model, there may be image quality concessions compared to the more advanced XMX model exclusive to Arc GPUs. The performance and quality of XeSS on non-Intel cards is something we’ll be looking at in future.

Here it is – a 30-minute deep-dive into the quality of Intel’s XeSS upscaler, stacked up against native resolution rendering and DLSS, tested across a range of resolutions.

Even though XeSS has a cost in its own right – even with XMX unit acceleration – its benefit comes from the performance it saves compared with native resolution rendering. We did the vast bulk of our XeSS tests using a build of Shadow of the Tomb Raider, with integration into the game carried out by Intel itself. Testing a 4K output, the performance mode increased frame-rate by 88 percent, balanced mode by 66 percent, quality mode by 47 percent and ultra quality by 23 percent. The amount of performance saved over native rendering depends on the rendering load in question. In Tomb Raider with everything maxed at 4K, XeSS offers a great performance uptick in its performance or balanced modes.

If you reduce the output resolution though, or reduce the settings quality, the GPU is less taxed and gains will be less impressive. For instance, at 1440p at those same settings, performance mode only increases performance by 52 percent. Conversely, the heavier the rendering load is, the greater the savings will be. For instance, in 3D Mark’s XeSS test with its masses of ray-traced reflections and more at 4K, XeSS in performance mode delivers 177 percent more performance than native rendering. Put simply, the more intensive the rendering, the larger the gain from using XeSS.
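The scaling behaviour described above can be sketched with a simple frame-time model – an illustrative assumption of mine, not anything from Intel: if only the per-pixel portion of a frame shrinks with resolution, heavier per-pixel workloads leave a bigger share of the frame for upscaling to reclaim.

```python
# Toy frame-time model (an illustrative assumption, not measured data):
# frame time = a resolution-independent cost plus a per-pixel shading cost.
# Performance mode renders a quarter of the pixels, so only the per-pixel
# portion shrinks, while the upscaler itself adds a small fixed cost.
def upscaling_gain(fixed_ms, shade_ms, pixel_fraction=0.25, xess_ms=1.0):
    """Percentage frame-rate gain of upscaled vs native rendering."""
    native_ms = fixed_ms + shade_ms
    upscaled_ms = fixed_ms + shade_ms * pixel_fraction + xess_ms
    return (native_ms / upscaled_ms - 1.0) * 100.0

# The heavier the per-pixel load (e.g. lots of ray tracing), the bigger
# the share of the frame that shrinks, so the larger the gain:
print(f"{upscaling_gain(5.0, 12.0):.0f}%")  # lighter scene
print(f"{upscaling_gain(5.0, 40.0):.0f}%")  # much heavier scene
```

The specific millisecond figures are invented; the point is only that the same quarter-resolution trick yields a bigger percentage uplift as per-pixel cost grows.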

I mentioned ultra quality mode and this is a setting not offered by competing upscalers, so here’s a table showing how all of these scalers and their modes compare. DLSS, for instance, tops out with its quality mode, upscaling from a native 1440p for a 4K output. XeSS ultra quality mode goes further, at 1656p. With all of these scalers, the more raw data you give via higher internal resolutions, typically the higher quality the output.



| Technique | Quality Mode | 1080p Output | 1440p Output | 2160p Output |
| --- | --- | --- | --- | --- |
| AMD FSR 2.x | Performance | 960×540 | 1280×720 | 1920×1080 |
| Nvidia DLSS 2.x | Performance | 960×540 | 1280×720 | 1920×1080 |
| Intel XeSS | Performance | 960×540 | 1280×720 | 1920×1080 |
| AMD FSR 2.x | Balanced | 1129×635 | 1506×847 | 2227×1253 |
| Nvidia DLSS 2.x | Balanced | 1114×626 | 1486×835 | 2259×1270 |
| Intel XeSS | Balanced | 1120×630 | 1493×840 | 2240×1260 |
| AMD FSR 2.x | Quality | 1280×720 | 1706×960 | 2560×1440 |
| Nvidia DLSS 2.x | Quality | 1280×720 | 1706×960 | 2560×1440 |
| Intel XeSS | Quality | 1280×720 | 1706×960 | 2560×1440 |
| Intel XeSS | Ultra Quality | 1472×828 | 1962×1104 | 2944×1656 |
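As a rough sketch of how the XeSS rows above relate to their outputs, the internal resolutions can be reproduced from simple per-axis fractions. Note that these fractions are my own reverse-engineering of the table’s numbers, not official Intel figures:

```python
from fractions import Fraction

# Per-axis input fractions inferred from the XeSS rows of the table above
# (reverse-engineered from the numbers, not official Intel figures):
XESS_INPUT_FRACTION = {
    "performance":   Fraction(1, 2),    # half the output width/height
    "balanced":      Fraction(7, 12),
    "quality":       Fraction(2, 3),
    "ultra_quality": Fraction(23, 30),
}

def internal_resolution(out_w, out_h, mode):
    """Internal render resolution for a given output size and XeSS mode."""
    f = XESS_INPUT_FRACTION[mode]
    return int(out_w * f), int(out_h * f)  # truncate, matching the table

print(internal_resolution(3840, 2160, "performance"))    # (1920, 1080)
print(internal_resolution(3840, 2160, "ultra_quality"))  # (2944, 1656)
```

With these fractions and simple truncation, every XeSS cell in the table above is reproduced exactly.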

Let’s talk about the actual test methodology. To a certain degree, there’s only so far the written word can go when talking about image quality, so my advice would be to watch the video embedded at the top of the page and to check out the screenshot comparison zoomer further down. Nonetheless, in putting these scalers through their paces, I primarily test bright scenes with no motion blur, so darkness and distortion don’t impact the comparisons. I also typically test with sharpness sliders at the minimum – the problem here being that XeSS doesn’t have a sharpness slider. Instead, the ML training model is tuned to produce what Intel believes are the best results – something that may change in future. Also: I typically test 4K outputs in performance mode, 1440p in balanced mode and 1080p in quality mode. The lower the output resolution, typically the more raw image data reconstruction techniques require to get a good image.

My tests start with a look at still imagery – where I’d expect all upscalers to produce nigh-on perfect results. From there on out, it’s about testing scenarios where temporally-based upscalers traditionally have problems: starting with flickering in fine detail or moiré patterns, together with associated challenges such as vegetation and hair rendering, and what happens with transparencies, including water and particle effects. The impact on post-process quality (for instance, depth of field) is also tested and in the case of Shadow of the Tomb Raider, I looked at ray tracing quality too. RT is a challenge for these upscaling algorithms as the number of rays traced correlates with the internal rendering resolution – and since we’re upscaling, we should assess the impact on quality vs native resolution RT.

Another area of focus concerns movement. The basic concept of these upscaling techniques is to re-use information from prior frames and inject it into the current one. However, fast camera movement, on-screen animation and ‘disocclusion’ (for instance, the on-screen Lara suddenly moving to reveal new detail) present a stern challenge, in that new detail must be resolved with less data to work with.


The video and screenshots go into far more depth via more useful media, but I can summarise my overall findings and the good news is that on its first attempt, Intel has delivered an upscaling technology that is comparable to DLSS and, much like the Nvidia technique, can exceed the quality of native resolution rendering with standard TAA. There is still some work for Intel to do, though. For starters, the moiré effect stands out as one of its clearest weaknesses. Even DLSS is not completely immune to this, but XeSS clearly presents this artefact in more scenarios. This is unquestionably the biggest difference seen between XeSS, DLSS and native resolution rendering, and the area where I think Intel still needs to do the most work.

In terms of transparencies, differences between DLSS and XeSS are minor, with XeSS being a touch blurrier – though we don’t know the extent to which DLSS sharpening may be influencing the comparison. Water is something quite different and another major difference, but I believe this one is more of an integration issue than anything else. At native resolution and using DLSS, it works fine, but with XeSS it doesn’t, causing the water to jitter, the effect increasing the lower the input resolution goes. As it stands, it’s a fairly distracting artefact, especially as it means increased aliasing for anything that’s in the water itself when the camera moves. DLSS has its own issues in terms of clarity with water, but the jitter artefact is more off-putting.

Particle rendering can be difficult for these upscalers, but XeSS and DLSS both work well here. However, hair rendering has a pixellated effect on XeSS that puts it behind both DLSS and native resolution rendering – in fact, owing to the game’s standard TAA method, I think DLSS actually beats native in this respect. Meanwhile, the game’s RT shadows in both DLSS and XeSS lack the clarity of native output, owing to the lower internal resolution meaning fewer rays being traced. Generally, XeSS and DLSS look the same, but occasionally some shadows with XeSS exhibit a slight wobble to their edges – a slight flicker, almost – not visible on the DLSS side. But that was about it.

Rich Leadbetter and Alex Battaglia talk with Intel Fellow Tom Petersen about Arc graphics – what’s happened in the last 12 months, what we should expect at launch – and how the tech pushes machine learning and ray tracing features.

Interestingly, the areas where I expected XeSS to be most challenged showed some good results against DLSS. Disocclusion is where I found AMD’s FSR 2.0 to struggle most against native resolution rendering and DLSS, yet in my tests, I found no discernible difference between native rendering, DLSS and XeSS. Disocclusion doesn’t appear to cause large image discontinuities with XeSS, which is an excellent result. XeSS is also adept at handling extreme movement near the camera, even beating out DLSS, with clearer image features in motion when Lara attacks.

Shadow of the Tomb Raider is an excellent test case owing to its high quality, high-frequency visuals, ray tracing support and the fact that it is a third-person action game – the main character often revealing detail that could be hard for an upscaler to track. And having looked in extreme depth at literally hundreds of captures, I think I can draw an informed verdict on XeSS’s quality in its debut showing – in this implementation, at least.

Firstly, I think the performance uplift is great, but that comes with the territory with these reconstruction techniques. I honestly think it is almost always worth using image reconstruction over native rendering, especially since XeSS will work on nearly all modern GPUs from any vendor. With regards to the quality of the resolve, I’m generally very positive about XeSS as it didn’t appear to have issues in those areas that are notoriously hard to get right. Fast movement had a coherent look and didn’t alias or smear, disocclusion didn’t cause large fizzling, and particles and transparencies generally look good, at least at 4K performance mode or better. As a testament to its quality, I had to double and triple check my comparisons to make sure I was not mixing up DLSS and XeSS.

The God of War FSR 2.0 vs DLSS vs native resolution rendering video was DF’s attempt to set the standard for assessing upscaling technologies and was the content we used to pitch Intel for a first look at XeSS.

But still, there are issues here, the biggest being the moiré artefacts, which manifest nearly every time tiling detail shows up, tarnishing an otherwise excellent presentation. Similarly, there is also the jittered water issue which needs addressing, although as I said earlier, I believe this is an integration problem rather than some kind of fundamental weakness with XeSS itself.

Even with these issues, I’d say XeSS is shaping up to be a great success. Like the best upscaling solutions, it can beat the look of native 4K in some scenarios – even in performance mode, which uses a 1080p base image. It’s directly competitive with DLSS in scenarios that I consider to be ‘hard mode’ for image reconstruction techniques. That said, it must be mentioned that it’s competitive with a slightly older version of DLSS in Shadow of the Tomb Raider, and a newer title with a more modern integration could see differences. And yes, it goes without saying that our tests were based on just one implementation. There are many others coming soon enough when Arc launches. We did get access to other XeSS implementations though, and based on what I have seen in Diofield Chronicles and 3D Mark’s XeSS test, it’s clear that the Intel tech can produce great imagery, outshining native rendering at times.

We’ll have more on XeSS soon, including how it runs on other GPUs. Yes, it does work – I tested Shadow of the Tomb Raider on an RTX 3070, for instance – and of course, we can’t wait to see it running on more games. Modern Warfare 2, for instance, should include it on day one, while the tech will also find a good home in the likes of Death Stranding and Hitman 3, among many others. Plug-ins for Unreal Engine and Unity? They’re apparently already done. So despite the extreme depth of this review, in a way, the XeSS story has only just begun and we look forward to sharing more in the future.
