What have we learned these past few years? It’s easy: native 4K is a waste of horsepower. Sony’s own checkerboard solutions in games like Horizon Zero Dawn already proved that point, but NVIDIA’s proprietary DLSS – or Deep Learning Super Sampling, to give it its full title – has sealed the deal.
For those of you who don’t know – and, to be clear, we’re far from experts in the field – DLSS uses artificial intelligence to reconstruct a lower resolution render into an image that approaches the quality of a higher resolution one. What this means is that you can effectively render at a sub-native resolution, and reap all of the hardware performance benefits that come with that, but still display a crystal-clear image at the end.
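As a rough illustration of where the AI step sits in the frame pipeline, here’s a minimal Python sketch. It’s purely conceptual: the `render_frame` and `upscale_model` helpers are stand-ins we’ve made up for illustration, not NVIDIA’s actual DLSS pipeline.

```python
import numpy as np

NATIVE = (2160, 3840)   # target 4K output (rows, cols)
RENDER = (1080, 1920)   # cheaper internal render resolution

def render_frame(shape):
    """Stand-in for the game's renderer: produce a low-res RGB frame."""
    return np.random.rand(*shape, 3).astype(np.float32)

def upscale_model(low_res, target_shape):
    """Stand-in for the trained network. A real DLSS-style model would use
    motion vectors and previous frames; here we just nearest-neighbour
    resize to show where the inference step sits in the pipeline."""
    scale_y = target_shape[0] // low_res.shape[0]
    scale_x = target_shape[1] // low_res.shape[1]
    return np.repeat(np.repeat(low_res, scale_y, axis=0), scale_x, axis=1)

low = render_frame(RENDER)           # expensive shading happens at ~2M pixels instead of ~8M
high = upscale_model(low, NATIVE)    # the AI reconstruction step
print(low.shape, "->", high.shape)   # (1080, 1920, 3) -> (2160, 3840, 3)
```

The key point is that the costly rendering happens at the lower resolution; the reconstruction step is comparatively cheap when it runs on dedicated hardware.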
It’s a game-changer, and it looks like PlayStation may be exploring its own solution. A patent filed by the platform holder refers to an “information processing device”, and the abstract proposes a technology that can utilise “pre-learned data […] for generating a reproduction image that represents the appearance of the object”. It’s ambiguous, but it sounds similar to DLSS to our untrained ear.
So why is this a good thing? Well, native 4K is computationally expensive; it takes a lot of processing power to render all of those pixels. DLSS means that said horsepower can be allocated elsewhere – on physics, effects, AI, lighting, framerate, etc – without any meaningful loss to the overall image quality. To be fair, Sony’s done an impressive job with checkerboard, so we’re excited to see what it’s potentially got cooking here.
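To put rough numbers on that cost, here’s a quick back-of-the-envelope comparison of pixel counts (shading cost scales roughly with pixels rendered; the exact savings vary from game to game):

```python
base = 3840 * 2160                       # native 4K: 8,294,400 pixels per frame
for name, (w, h) in [("1440p", (2560, 1440)), ("1080p", (1920, 1080))]:
    px = w * h
    print(f"{name}: {px:,} pixels/frame, {base / px:.2f}x fewer pixels than native 4K")
```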
[source freepatentsonline.com, via reddit.com]
Comments (59)
Native 4K is a waste. If Sony can come up with something half as good as DLSS then we're in for a fun generation!
@get2sammyb I am certain someone is going to find some way to twist this into a negative
So it’s just AI-controlled checkerboarding? Instead of using an average value of nearby pixels to ‘guess’ values rather than natively rendering them, it uses machine learning to build up a picture of what the screen ‘should’ look like.
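For anyone who wants to see the “average the neighbours” half of that comparison concretely, here’s a deliberately naive Python toy (the `naive_checkerboard_fill` helper is our own invention for illustration). Real checkerboard reconstruction, Sony’s included, also uses motion vectors and previous frames, so treat this purely as a sketch of the guessing step:

```python
import numpy as np

def naive_checkerboard_fill(frame):
    """frame: 2D luminance image where only 'checkerboard' pixels were rendered.
    Fill each skipped pixel with the average of its rendered up/down/left/right
    neighbours. Real reconstruction also reuses history and motion vectors."""
    h, w = frame.shape
    out = frame.copy()
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    missing = (ys + xs) % 2 == 1                      # pixels skipped this frame
    padded = np.pad(frame, 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    out[missing] = neighbours[missing]
    return out

rendered = np.random.rand(4, 4).astype(np.float32)
rendered[(np.add.outer(np.arange(4), np.arange(4)) % 2) == 1] = 0.0  # skipped pixels
print(np.round(naive_checkerboard_fill(rendered), 2))
```

An ML-based approach replaces that hand-written averaging with a network trained to predict what the missing detail should look like.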
Does this technology need any kind of dedicated hardware, or is it all done in software?
Yes please!
I play at 4K only. Anything checkerboard or DLSS is better than simply running sub-4K on a 4K TV.
And pretty often I can’t quite hit 60fps and have to lock the frame rate to 30fps to get at least some kind of smooth experience without sporadic lag.
With DLSS you even get a better picture with sometimes double the frame rates (e.g. Deliver Us The Moon).
@get2sammyb At the end of the day it’s still going to be called fake 4K 😐 However, if they can guarantee 60fps+ because of this, I’ll take it.
@AdamNovice Well it requires one hell of a graphics card regardless, hence why it’s only available with an RTX card. The neural network is delivered through your drivers though, so it’s not built into the cards. However, RTX cards have AI accelerators and I haven’t heard whether the PS5 contains them. Cerny surely would have mentioned it in his talk because it’s a big deal.
@get2sammyb I would say Sony's implementation of CB rendering is 'half' as good as DLSS and has proven it can really look great and compare favourably with native 4K in games like HZD and God of War.
Digital Foundry put out a good video today comparing Death Stranding on PS4 Pro using CB and on PC using both the Performance (1080p) and Quality (1440p) DLSS modes. What it shows is that DLSS 2.0 is as good as 4K in the vast majority of situations, with some 'smearing' on moving objects – but only certain objects. It's more stable too, and even rendering 50% (1080p) of the amount of pixels CB rendering does, it's a sharper, more stable image.
Of course, to make it even better, devs would need to code the game better – mark certain objects to better track motion and remove smearing – but as DF state, the extra time it takes to process and upscale using AI is easily offset by the time saved not rendering a 'native' 4K image, and the results are almost indistinguishable.
No doubt Sony will have their own proprietary version, as DLSS is Nvidia's. MS have DirectML (machine learning), as you would expect their naming to follow DXR (ray tracing) in DirectX 12 – so unless AMD develop their own version and Sony use that, Sony would need to develop their own.
I think it's a better use of resources, and even if 'native' 4K can be achieved, it's still better to use DLSS (or equivalent) and boost frame rates and/or visual effects (like RT) or settings (higher quality shadows, better draw distance, etc).
I'd rather use this and get a solid 60 fps
I have experience using DLSS on my 2070 and I love it. I don't know why more developers don't add this to their games. BTW it's not only for 4K either; you can play a game at 1080p with DLSS on and it will look a lot better too. Very amazing tech.
@BAMozzy DirectML isn't really DLSS. DirectML is just an API to program for INT8/INT4 TOPS. I don't expect either of the two consoles to have something even half as good as DLSS, when even the Xbox Series X only has up to 48.5 INT8 TOPS and 97 INT4 TOPS, while the RTX 2060 Super has 115 INT8 TOPS and 230 INT4 TOPS in its tensor cores alone, which increases even more once you consider its CUDA cores. The lowest end RTX card is more than twice as powerful at ML as the only console confirmed to support it.
@AdamNovice Yes, Nvidia GPUs use dedicated hardware cores for this – Tensor cores, if I remember correctly – which I think is really the point. If you have to use compute power to do it, I don't know if the FPS gain will be as good as DLSS 2, but even if it's not, I'd still prefer a "fake" 4K if that means more 60fps games.
Also, what if AI cores are the rumoured unannounced feature of the PS5? At this point I'm expecting nothing new on the BC front.
I'm just glad the resolution talk might finally be over. Not only because they can focus on more exciting stuff than just a number of pixels, but also because the marketing and keyboard warrior discussions are such a waste of space and energy. DF couldn't even figure out what resolution the UE5 demo was without being told.
Any software solution for this will very likely take away a significant chunk of CPU power, reducing the game logic budget and making it more likely to limit the game to 30fps simply because the logic becomes the new bottleneck. But this might still be useful for a game that is aiming for 30fps anyway but having trouble reaching native 4K.
I could see a lot more potential for this tech in a mid-gen hardware refresh, with some hardware support.
@Makina I would take checkerboard 4K/60fps over native 4K/30fps any day, but sadly, that does not seem to do as well on the marketing checklist.
@Tharsman Absolutely Agree.
And I have always understood that publishers want visuals over frame rate because it's better for marketing. But devs can give us a Performance Mode option.
They have the excuse of weak Jaguar cores this gen; that excuse is over for PS5/XSX. No reason not to have both options.
4k/30 for marketing, 2k/60 for players.
@nessisonett
What about the second Wired article where EA mentioned that the GPU can power machine learning tools?
Is that related?
"I could be really specific and talk about experimenting with ambient occlusion techniques, or the examination of ray-traced shadows," says Laura Miele, chief studio officer for EA. "More generally, we’re seeing the GPU be able to power machine learning for all sorts of really interesting advancements in the gameplay and other tools."
@JJ2 Good spot, I still would have thought Cerny would cover ML considering it’s a big part of essentially everything going forward but hey, that’s an actual dev mentioning it! I can’t wait until there’s a proper teardown, there’s still so much we don’t know.
Basically it's magic
I'm almost sick of hearing about "native" 4K at this point; we all know it's a waste of resources and always has been, and only fanboys of a certain console seem to care about it and use it as a stick to beat the PS4 Pro with. Despite this, the best looking game I've played this generation is The Last of Us 2 at 1440p, whilst the likes of Horizon and Death Stranding look absolutely amazing. Then you have games such as Doom or the recent Resident Evil titles where you can't even see the difference between 4K and resolutions below it due to all the post-processing effects.
We seem to be getting to a point where these consoles have a lot more power but most of that is poured down the drain in pixel counts
Will it be available on the launch model?
@nessisonett @JJ2 it's difficult to know, as what that developer is saying would also fit with something like CUDA, while DLSS uses something a lot closer to tensor. The main difference is a trade-off between performance and accuracy, with CUDA being good for general parallel processing and tensor being mostly better for ML, where the fuzziness of algorithms makes the loss of precision less of a big deal. The issue here is how the lower performance for CUDA translates into poorer power efficiency, and with my understanding of how the PS5 uses a fixed power budget to manage heat, this would mean reducing the amount of other resources available to the system. All of this would make a CUDA-based DLSS implementation pointless.
Fingers crossed I'm just being pessimistic, but I'd prefer to temper my expectations.
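On the precision point raised above, here’s a tiny sketch of why ML workloads tolerate lower-precision maths (a generic numpy illustration, nothing specific to either console or vendor):

```python
import numpy as np

# A small increment that FP32 can represent next to 1.0 but FP16 cannot.
print(np.float32(1.0) + np.float32(1e-4))   # 1.0001 - kept in 32-bit
print(np.float16(1.0) + np.float16(1e-4))   # 1.0    - rounded away in 16-bit

# Neural network outputs are already approximate, so this kind of rounding
# barely changes the result, while the cheaper maths buys far more throughput
# per unit of silicon and power.
```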
With PlayStation 5 using PSSL (their own shading language) and the Geometry Engine, instead of the DirectX 12 Ultimate API that Series X is using, this is entirely possible.
Good, I hope devs can use that power for something else like frame rate, AI, textures, RT and so on, rather than chasing native 4K.
Blast processing confirmed
@nessisonett actually there has been chatter about Sony revealing their AI stuff. The ICE Team employed a load of machine learning experts just before the PS4 Pro dropped. It's been a topic at SIGGRAPH for years and years now, so I'd have expected all of the big players in the games industry to have been investigating this. Honestly? I'd be more surprised if PS5 didn't have hardware accelerated ML at this point, especially given their emphasis on VR going forward.
That's great news. I was also wondering, isn't it possible to do the same thing with FPS?
Modern TVs have offered artificial 60+ FPS in their "fluid motion" modes for over a decade now, which is pretty great IMHO, and it seems weird that consoles can't do something like that.
@kohiba99 they could, but it would make the game "feel" weird. If your eye is, say, seeing a 60fps image but your input is to a 30fps game, it would feel sluggish and unresponsive. Also, half of the frames would be "guesswork", and given you might be making an input on a guessed frame 50% of the time, it could lead to a horrid-feeling game.
@SirAngry not to mention the added latency it introduces by using the frames before and after to guess at the intermediary frames. This necessarily puts you at least 2 frames behind where you should be in addition to any lag earlier in the pipeline, and you have to have this lag consistently, otherwise viewers notice.
This only works for non-interactive video where the lag doesn't matter
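Putting numbers on that, here’s the frame-timing arithmetic behind the “two frames behind” point above (assuming a 30fps source):

```python
fps = 30
frame_time_ms = 1000 / fps        # ~33.3 ms per source frame
frames_behind = 2                 # the interpolator needs the *next* real frame first
added_latency_ms = frames_behind * frame_time_ms
print(f"~{added_latency_ms:.0f} ms of extra input lag")   # ~67 ms on top of the normal pipeline
```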
@theheadofabroom indeed, that's another issue. There are plenty of potential pitfalls in ML frame boosting, far more so than in image quality. There's also far more processing "gain" to be had from reduced image quality than from reduced frames, and the resulting headroom should allow for more frames anyway. I'm sure if I looked hard enough I could find a technical paper where someone has actually attempted it just to see how bad it actually is, but I think anyone with any experience of game coding would be able to discern the various pitfalls just by sitting down and playing them out in their head.
Native 4K is a lot better and is noticeable if you have an actual 1K+ screen. That said, I don't care much for 4K right now because I don't own a 4K TV; I would rather have 60fps than 4K. However, it's naive to think there isn't much difference between native 4K and what Sony is trying to do.
https://www.google.com/amp/s/www.eurogamer.net/amp/digitalfoundry-2020-control-dlss-2-dot-zero-analysis
Digital Foundry did a great video on this a few months ago using Control. Short answer is it’s almost as good as native using the latest build. They did notice some slight haloing of object edges when you zoom in, but it isn’t noticeable in play.
My current plan is to play through Control when it’s on Steam using this, at 1080p with all the ray tracing on max. Apparently with that I can get close to a locked 60fps on my 2070 Super, as it renders at half resolution (960x540) and applies DLSS to that.
@kratreus DirectML is the general thing, but they have DirectML Superresolution that upscales from 1080p to 4K. They even did demos with that. https://mobile.twitter.com/klobrille/status/1124717224256782336
https://www.techradar.com/news/xbox-series-x-specs
Search for DLSS in the text and they talk about it.
"But it doesn't end there. AMD and Microsoft also seem to be targeting Nvidia's DLSS technology with RDNA 2 and the Xbox Series X. If you're not familiar with DLSS, or deep learning super sampling, it's a technology that uses dedicated hardware on Turing graphics cards to upscale images through AI.
Nvidia graphics cards have dedicated Tensor cores that handle this, but AMD is taking another approach. Instead, AMD will be relying on the raw throughput of the GPU, and executing the machine learning workloads through 8- and 4-bit integer operations – much lower precision than the 32-bit operations that are typically used in graphics workloads. This should result in a huge amount of power for this up-scaling without sacrificing too much. "
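For the curious, "executing ML workloads through 8-bit integer operations" roughly means quantising a network's weights and activations. Here's a minimal sketch of symmetric INT8 quantisation in Python (the `quantize_int8` helper is a generic illustration we've written for this example, not AMD's or Microsoft's actual scheme):

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 values to int8 with one per-tensor scale factor.
    Matrix multiplies then run on cheap integer units and the results
    are rescaled back to float, trading a little precision for throughput."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
print("max reconstruction error:", np.abs(w - q.astype(np.float32) * scale).max())
```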
My guess is that however checkerboarding is used, it will somehow be upgraded on the PS5.
@SirAngry
Hmm, I can see what you are saying, but from my experience playing PS4 games on my TV with this mode turned on, I didn't get any actual problems besides some image artefacts during very fast camera movement.
And all I can say is, it's a life-saver! I feel dizzy when I forget it's off and play at native 30fps.
Checkerboarding isn't as expensive as DLSS in terms of GPU processing needed. So I expect an enhanced version of checkerboarding which improves image quality, but doesn't place as much of a drag on GPU resources as DLSS does.
@kohiba99 you shouldn’t be using any post-processing settings on your TV for gaming if you care about latency at all. Most TVs come with a game mode to give you the best response time possible, which usually turns off all post-processing effects as well.
I’ve seen TVs with 120+ms response times with the artificial refresh rate setting on and ~33ms response times in game mode. Big difference.
@kratreus diiiiiiaaaaaammmmnnnnn.....
Ima need to read this again after dis here coffee...
@MarcG420 I guess it's fine for an individual to choose to increase their latency if it's their preference; they only hurt themselves, and it may be worth the trade-off if they deem it so. It's definitely not something you'd want turned on by default for everyone.
@Party_Cannon The point of DLSS is to take some workload off the GPU; this is done on Nvidia cards by using the Tensor cores they come with. That is why you see a cleaner image with higher frame rates. So using DLSS is not expensive on the GPU, and if the PS5 GPU has something similar to Nvidia's then this would be a really good thing for PS5 games in the future.
@carlos82 "We seem to be getting to a point where these consoles have a lot more power but most of that is poured down the drain in pixel counts"
Yeah, that's my concern about next-gen tech, too. This weird fixation with hitting a certain pixel count that doesn't even match what the majority of consumers have access to is going to come at the cost of performance and graphical advancements that would make the next generation feel like a proper leap.
I'd almost rather pay more to upgrade my PC and have a weaker system that, nevertheless, nets me peak game performance at 1080p.
@Ralizah for me i think the sweet spot is 1440p. 1440p with the right anti aliasing looks very sharp! 4k is overrated in my humble opinion.
@Onion_Knight Honestly, I think a 1440p standard would be ideal for me. Still improves image quality on 1080p sets (via downsampling), and doesn't look terrible on 4K sets.
I don't understand the language of your tribe... 😂 It looks like things have changed a lot since OpenGL...
But since there are lots of technical experts here (who know way more than me), I have a question. Do you know what the effect is called where, during movement, everything in the corners of the screen starts to get blurry? When I focus on it, the red colour is desynchronised a few pixels up and the blue colour a few pixels down. It's like the 3D effect in movies. It's pretty annoying, but I couldn't find anywhere in the PS4 settings to switch it off.
@djlard
>When I focus on it, the red colour is desynchronised a few pixels up and the blue colour a few pixels down
You mean chromatic aberration? Most games have an option to disable it, even on console, but it depends entirely on the game whether players can switch it off.
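For anyone wondering what the effect actually does under the hood, it's essentially the colour channels being offset in opposite directions. A toy Python sketch (the `fake_chromatic_aberration` helper is just for illustration; real games usually apply the offset radially from the screen centre, which is why it's strongest in the corners):

```python
import numpy as np

def fake_chromatic_aberration(img, shift=3):
    """img: (H, W, 3) RGB frame. Push the red channel up and the blue channel
    down by `shift` pixels, producing the coloured fringing described above."""
    out = img.copy()
    out[..., 0] = np.roll(img[..., 0], -shift, axis=0)  # red up
    out[..., 2] = np.roll(img[..., 2],  shift, axis=0)  # blue down
    return out

frame = np.random.rand(8, 8, 3).astype(np.float32)
print(fake_chromatic_aberration(frame).shape)  # (8, 8, 3)
```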
@MarcG420
You are right but I prefer some latency over the dizziness of 30 frames.
And I don't play competitive games, so latency isn't actually a problem.
Let's hope the PS5 gives us 60fps as a native option.
@get2sammyb well that most likely means we would have to have new consoles, I believe.
@kohiba99 to each their own I suppose.
I prefer 30Hz over a 7-8 frame delay personally.
@Makina Maybe, I don't know what it's called... But I know it's worse than motion blur. It feels like you're looking through the bottom of a glass... Simply awful.
edit: I've been googling and yes, that's it. At first I thought it was an error or a bug, but it seems it's some kind of effect that ruins the visual quality of the game. You'd think they would have learned their lesson after that fiasco with motion blur...
@BAMozzy maybe we'll end up seeing a new and improved CB implementation. Alex made a lot of points about the shimmering artefacts that come in when using CB, so maybe if they can find a way to smooth out the image, AI won't be necessary, because we haven't heard anything about machine learning in the PS5.
@OmegaStriver this is what I think too. Watch Digital Foundry's video on Death Stranding; it explains everything.
@get2sammyb It so is. Even visually. 1440p native with upscaling is perfectly fine. I'd rather have a consistent 120 fps over 4K (don't even need the upscaling TBH, just buttery smooth gaming).
@get2sammyb So are you saying that checkerboard rendering is less than half as good as DLSS?
@kratreus It's likely you'll be correct, but we don't yet know what RDNA 2 will be composed of, so it's not clear AMD won't have a Tensor-like component on the die.
@Juanalf no such thing as fake 4k
If Sony has some sort of DLSS in the PS5 it would be checkmate for all the people that praise the almost 2TF more powerful XSX. On the other hand, the XSX should handle some type of DLSS too. Let's see.
It doesn't matter what SONY do because nothing will beat Microsoft's 12TF. Just look at Halo Infinite's graphics
@Gts AMD GPUs usually have 4 shader engines, each composed of multiple Compute Units (14 in the Series X, because it's 56 total CUs with 4 disabled), and these compute units each contain 64 stream processors or shader cores. Since the shader cores are no longer exclusively used for shaders, they're now called stream processors by AMD and CUDA cores by NVIDIA. Microsoft's implementation of INT8 and INT4 operations is done in the stream processors. However, these integer operations cannot be done in parallel with the floating point operations. Due to requiring less precision than FP32, it can do 4 INT8 or 8 INT4 operations in the same time it takes to do 1 FP32 operation. It shares processing resources with floating point operations, hence it cannot do 12 TFLOPS if it chooses to use integer ops in mixed-precision processing. It also cannot do 48.5 INT8 or 97 INT4 TOPS unless the stream processors are exclusively dedicated to that. That's why Microsoft demonstrated it in a current-generation game at 1080p: because it does not need to use the full capabilities of the GPU, it has more resources free for lower precision operations. For comparison, the RTX 2060 Super can do 115 INT8 and 230 INT4 TOPS in parallel with 7.2 TFLOPS because those operations are offloaded to the tensor cores; that's why DLSS has extreme performance gains – it doesn't have to sacrifice anything.
I doubt Sony's implementation (if there is one) will be different from Microsoft's, because having dedicated silicon just for machine learning is a waste of die space when only a handful of devs will support it. The cost-efficiency just isn't there, which is the point of consoles: removing everything not needed for gaming to reduce hardware cost as much as possible for the best price-performance ratio. RTX cards are different because PCs do a lot more than gaming, and the dedicated silicon for ray tracing and tensor cores is the reason why RTX cards are double the initial price of the GTX cards they replaced. AMD's implementation of accelerated machine learning and ray tracing is smarter because it makes sure everything in the GPU will always be used.
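The arithmetic behind those TOPS figures is simple enough to check (using the widely quoted 12.15 TFLOPS FP32 spec for Series X, which lands close to the 48.5/97 figures above):

```python
fp32_tflops = 12.15                 # Series X peak FP32 throughput
int8_tops = fp32_tflops * 4         # 4 INT8 ops in the time of 1 FP32 op -> ~48.6
int4_tops = fp32_tflops * 8         # 8 INT4 ops in the time of 1 FP32 op -> ~97.2
print(f"INT8: ~{int8_tops:.1f} TOPS, INT4: ~{int4_tops:.1f} TOPS")
# Unlike RTX Tensor cores, these ops compete with FP32 shading work for the
# same execution units, so you can't have the full 12.15 TFLOPS at the same time.
```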
https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs
Some more things I'd like to clarify for you are DLSS and supersampling. Supersampling is a type of anti-aliasing usually abbreviated as SSAA. I won't go in depth about anti-aliasing here. Most PC games can do up to 4x SSAA at best. DLSS takes this to another level by leveraging machine learning to be able to approximate 64x SSAA. SSAA uses GPU resources alone, while DLSS is trained on supercomputers and then implemented using the Tensor cores in RTX GPUs. In the example you've shown, it used SSAA more efficiently because the GPU is capable of doing more less-precise operations at the same time. Unlike DLSS, it is not trained on supercomputers beforehand; it is done locally, so the resulting image will not be as good as DLSS and won't be able to recreate the fine details of native 4K, although it will be noticeably better than traditional bilinear upscaling and its native resolution. This is why 4K via DLSS is as good as or even better than native 4K: it is trained on supercomputers beforehand to achieve the highest fidelity from a lower native resolution using extremely high-fidelity teacher data, and the resulting data is then applied locally in real time by the Tensor cores. Local implementations of SSAA cannot use extremely high-fidelity teacher data (e.g. 16K resolution with 16K textures and extremely high polygon 3D models) because the performance hit would be significant; they only use a higher resolution as the teacher data while using the same in-game assets (the best looking games only use 4K textures at most), so the accuracy will suffer.
@TooBarFoo It's possible that AMD will implement it in their desktop GPUs, but it's unlikely that Sony and Microsoft will on their consoles. Having dedicated silicon just for machine learning is a waste of resources and die space; it contradicts the cost-efficiency goal of consoles when only a handful of devs will support it. Tensor cores and RT cores are the primary reason why RTX cards cost double the initial price of the GTX cards they replaced. Microsoft's implementation is more cost effective because it does not waste any die space while still being able to accelerate integer operations, although there will be performance sacrifices. The cost-efficiency of that solution still outweighs the benefits of dedicated silicon.
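As a minimal illustration of what plain supersampling boils down to (the non-ML baseline described above, not DLSS itself; the `supersample_downscale` helper is just for this sketch): render above the target resolution, then average each block of samples down to one output pixel.

```python
import numpy as np

def supersample_downscale(hi_res, factor=2):
    """hi_res: (H*factor, W*factor, 3) frame rendered above target resolution.
    Average each factor x factor block into one output pixel (a box filter),
    which is conceptually what N-times SSAA does."""
    h, w, c = hi_res.shape
    blocks = hi_res.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

hi = np.random.rand(8, 8, 3).astype(np.float32)   # stand-in for a 2x-per-axis (4x SSAA) render
print(supersample_downscale(hi, factor=2).shape)  # (4, 4, 3)
```

DLSS-style reconstruction aims for similar or better image quality while rendering below the target resolution instead of above it, which is where the performance win comes from.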