Microsoft Reveals DirectX 12_2 Features for AMD, Intel, Nvidia, Qualcomm GPUs
Microsoft has published some details on its upcoming DirectX 12_2 feature level and which companies will be supporting these new capabilities in upcoming GPUs. If you’re wondering how DirectX 12_2 and DirectX 12 Ultimate relate to each other, they’re the same thing — if a GPU supports DX12U, it’ll also support 12_2 and the reverse appears to also be true.
A DirectX feature level is Microsoft’s way of defining what specific capabilities a GPU is capable of. A GPU that can only handle DX 12_0 or 12_1, for example, would support the low-latency command structures and performance-improving aspects of DX12 as an API, but would offer no support for Microsoft’s DXR (DirectX ray tracing) technology. Microsoft typically only puts a major brand push behind whole-integer updates (DirectX 10, 11, 12), but the company is making a bit of an exception with DirectX 12_2, which is where the “DirectX 12 Ultimate” concept comes from.
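For developers, a feature level is something you check at runtime rather than a marketing label. A minimal sketch of how an app might test for 12_2 support, assuming a Windows SDK recent enough to define `D3D_FEATURE_LEVEL_12_2`, could look like this (not a definitive implementation, just the standard `CheckFeatureSupport` pattern):

```cpp
// Sketch: asking a D3D12 device for the highest feature level it supports.
// Assumes a Windows SDK new enough to define D3D_FEATURE_LEVEL_12_2.
#include <d3d12.h>

bool SupportsFeatureLevel12_2(ID3D12Device* device)
{
    // List the levels we care about; the runtime reports the highest one
    // the adapter can actually do.
    D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_2,
    };

    D3D12_FEATURE_DATA_FEATURE_LEVELS info = {};
    info.NumFeatureLevels = _countof(levels);
    info.pFeatureLevelsRequested = levels;

    if (FAILED(device->CheckFeatureSupport(
            D3D12_FEATURE_FEATURE_LEVELS, &info, sizeof(info))))
        return false;

    return info.MaxSupportedFeatureLevel >= D3D_FEATURE_LEVEL_12_2;
}
```

A GPU that tops out at 12_0 or 12_1 would simply report a lower `MaxSupportedFeatureLevel` here, and the game would fall back accordingly.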
Who’s Getting 12_2 Support?
Here’s the official phrasing, straight from Microsoft (and likely AMD, Nvidia, Intel, and Qualcomm):
- Feature level 12_2 is supported on NVIDIA GeForce RTX and NVIDIA Quadro RTX GPUs.
- AMD’s upcoming RDNA 2 architecture based GPUs will include full feature level 12_2 support.
- Intel’s roadmap includes discrete GPUs that will empower developers to take full advantage of Feature Level 12_2.
- Microsoft is collaborating with Qualcomm to bring the benefits of DirectX feature level 12_2 to Snapdragon platforms.
Intel’s statement is a bit different from the others. It’s the only company to imply that support is conditional, with the qualifier that its roadmap includes “discrete” GPUs that take full advantage of 12_2. The fly in the ointment here is probably Tiger Lake: if a GPU doesn’t support ray tracing, it can’t claim to support DirectX 12_2, which is why Nvidia also makes it clear that support is limited to its RTX family of cards.
It’s going to be a long time before we see ray tracing in an integrated GPU core sitting alongside a CPU in the same package. At the moment, the performance penalty for using the feature would nuke any benefit of enabling it. Obviously AMD and Intel will eventually add it, but ray tracing is going to remain a somewhat high-end capability for now. Even if AMD, Nvidia, and Intel deploy DXR against each other throughout the full range of their product stacks this generation, it won’t likely be useful on lower-end cards for at least a generation after this one, and possibly longer than that.
The feature I’m most curious about besides ray tracing in DX12_2 is variable rate shading, which I’m still hoping will see wider adoption in future GPUs and titles. The interesting thing about VRS is the way it could boost performance of lower-end GPUs, allowing smaller and lighter systems to pack a decent amount of graphics performance (at least, in supported titles). We’ve talked about variable rate shading here — if you need a quick refresher, it’s a method of rendering that allows a GPU to reserve limited horsepower for the most detailed areas of the scene intended to draw the eye rather than lavishing the same amount of detail (and power) on every section of a frame.
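In D3D12 terms, VRS lets a title coarsen shading per draw with a single command-list call. A sketch of the idea, assuming the adapter reports at least VRS Tier 1 and the command list is an `ID3D12GraphicsCommandList5` (the `DrawBackground` function is a hypothetical example, not a real API):

```cpp
// Sketch: lowering shading rate for low-detail geometry via D3D12 VRS.
// Assumes Tier 1 variable rate shading support on the adapter.
#include <d3d12.h>

void DrawBackground(ID3D12GraphicsCommandList5* cmdList)
{
    // Shade once per 2x2 pixel block for this draw -- roughly a 4x
    // reduction in pixel-shader work for low-frequency content like sky.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    // ... issue background draw calls here ...

    // Restore full-rate shading before drawing detailed foreground geometry.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
}
```

This is exactly the trade the article describes: the GPU spends its horsepower where the eye is looking and skimps everywhere else, which matters most on smaller, power-limited parts.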
Should you expect these features to immediately revolutionize gaming? No. As much as I’d love to say otherwise, we’re looking at an ongoing slow burn for features like ray tracing, though the new consoles from Sony and Microsoft will obviously boost the feature. It’s taken time for Nvidia to push a moderate number of titles to market, and it’ll take a few more years for that number to grow to the point where you could realistically call it a library. Last gen showed that ray tracing was possible; this generation will showcase it in ways that drive adoption directly; and the next generation will probably see broader top-to-bottom availability, even on lower-end hardware.