microsoft / D3D11On12
The Direct3D11-On-12 mapping layer
License: MIT License
Hi,
I'm trying to enable direct mapping of decoded buffers from Media Foundation as DirectX 12 textures.
The problem is MediaFoundation doesn't support DirectX 12.
I tried two approaches:
My question is:
how do we enable interop between MediaFoundation and DirectX 12?
It seems that the buffer behind an IMFDXGIBuffer is not shared as an NT handle, while DirectX 12 requires an NT handle. I also set MF_SA_D3D11_SHARED to TRUE, but Media Foundation doesn't seem to provide any option to allocate an NT handle.
Please advise.
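For what it's worth, one approach that may work (a sketch under assumptions, not a confirmed solution): instead of trying to share Media Foundation's own buffer, allocate an intermediate D3D11 texture that can be shared via an NT handle, open that handle on the D3D12 device, and copy each decoded frame into it. Variable names like `device11`, `device12`, `width`, and `height` are placeholders.

```cpp
// Sketch (assumption): allocate a sharable D3D11 texture, export an NT
// handle, open it in D3D12, and copy decoded frames into it per frame.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = width;   // video frame dimensions (assumed known)
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_NV12;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_NTHANDLE |
                 D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;

ComPtr<ID3D11Texture2D> shared11;
device11->CreateTexture2D(&desc, nullptr, &shared11);

// Create the NT handle from the D3D11 side...
ComPtr<IDXGIResource1> dxgiRes;
shared11.As(&dxgiRes);
HANDLE ntHandle = nullptr;
dxgiRes->CreateSharedHandle(
    nullptr, DXGI_SHARED_RESOURCE_READ | DXGI_SHARED_RESOURCE_WRITE,
    nullptr, &ntHandle);

// ...and open it on the D3D12 side.
ComPtr<ID3D12Resource> shared12;
device12->OpenSharedHandle(ntHandle, IID_PPV_ARGS(&shared12));
CloseHandle(ntHandle);

// Per frame (on the D3D11 device MF decodes with): acquire the keyed
// mutex, CopySubresourceRegion from the IMFDXGIBuffer's texture into
// shared11, release the mutex, then consume shared12 on the D3D12 queue.
```

The extra copy adds overhead, but it sidesteps the fact that MF's own allocator doesn't expose NT-handle-sharable resources.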
I have implemented an Unreal Engine 4 module which allows one to use desktop windows as a texture with minimal overhead / latency. It works fine on DirectX11, but I would need to make it work for DirectX12 too. For this I am using the D3D11On12 APIs to get a ID3D11Device, which I can provide to the Direct3D11CaptureFramePool of WinRT on creation. Long story short: I got the D3D12 texture updated, but it only works until the capture frame pool fills up, as the frames are not closed and the FrameArrived event is not firing anymore.
To isolate the issue I have removed all rendering code besides creating the D3D11On12 device and the pool. It still does the same thing, so it seems the frame pool fails when using a D3D11On12 device. I am not experienced with DirectX 12 yet; I have tried issuing flush commands to the command queue etc., but the issue persists. I am assuming the D3D12CommandQueue I extracted from Unreal Engine gets executed, since otherwise the D3D11 => D3D12 texture copies would never be performed, but they are. I am not sure how the capture frame pool works, but in theory it should be able to reuse frames after I close them.
Am I doing something wrong here? Or is this a compatibility issue?
I guess my other option would be to create a D3D11 device and use shared resources with the D3D12 one, but that will have more overhead.
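One thing worth trying before falling back to shared resources (this is an assumption, not a confirmed fix): D3D11On12 batches D3D11 work and can hold references to a frame's surface until that work is actually submitted, so explicitly flushing the wrapped immediate context after closing each frame may let the pool recycle its surfaces. A minimal C++/WinRT sketch, where `m_context` is a placeholder for the ID3D11DeviceContext returned by D3D11On12CreateDevice:

```cpp
// Sketch (assumption): flush the 11On12 immediate context after closing
// each frame so deferred work is submitted and the surface is released
// back to the pool instead of being pinned by pending references.
void OnFrameArrived(
    winrt::Windows::Graphics::Capture::Direct3D11CaptureFramePool const& pool,
    winrt::Windows::Foundation::IInspectable const&)
{
    auto frame = pool.TryGetNextFrame();
    // ... copy frame.Surface() into the target D3D12 texture here ...
    frame.Close();       // return the surface to the pool
    m_context->Flush();  // force 11On12 to submit and drop its references
}
```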
When using D3D11On12 with Desktop Duplication, calling IDXGIOutputDuplication::AcquireNextFrame results in DXGI_ERROR_ACCESS_LOST.
Tracing this as far as I can, it seems to be caused by CGraphicsCommandQueue::AcquireKeyedMutex, which calls D3DKMTAcquireKeyedMutex2, which returns 0x80 (WAIT_ABANDONED).
Note, this code is functioning perfectly on DX11 directly.
Hi. I am trying to use this code as an example of usage of D3D12 translation layers.
I successfully compiled my own version of d3d11on12.dll from the code in the repository, but I cannot use it in place of the system one.
At the moment, my setup is: Windows 10 64 10.0.18363.836 (latest), Win SDK 10.0.19041.0 (latest or penultimate), WDK same as SDK, VS 2017.
While running the D3D1211On12 sample from the DirectX-Graphics-Samples (which works fine with system dll) I just put my d3d11on12.dll next to the file and every time I get this error "0x887a0004: The specified device interface or feature level is not supported on this system.".
I tried using the d3d11.dll from different versions of Windows, but this did not work. I also tried to debug it, and as far as I can tell the problem is inside d3d11.dll: my version of d3d11on12.dll loads fine but is then unloaded due to mismatches of some flags inside d3d11.dll.
What else could be wrong, are there any ideas?
Most of the code is understandable, but there are some things that are not obvious, and yet I want to have the ability to debug them.
Many thanks.
D3D11 gave no explicit control over batching query resolve operations, so it appears that EndQuery is the place where ResolveQueryData operations are inserted.
The D3D11 pattern of:
For tasks such as Occlusion Culling results in this pattern in D3D12:
This appears to inhibit draw-level parallelism as a ResolveQueryData operation sits between every draw. Some means (waves hands) of batching up Resolves into one operation before first use would help immeasurably!
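To illustrate the kind of batching being asked for, here is a small, hypothetical helper (not part of the layer) that coalesces per-draw query indices into contiguous ranges, so that a single ResolveQueryData call can cover a whole batch instead of one resolve sitting between every draw:

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical helper: coalesce per-draw query indices into contiguous
// (StartIndex, NumQueries) ranges. One ResolveQueryData call per range
// replaces one call per draw.
std::vector<std::pair<uint32_t, uint32_t>>
CoalesceResolveRanges(std::vector<uint32_t> indices)
{
    std::sort(indices.begin(), indices.end());
    std::vector<std::pair<uint32_t, uint32_t>> ranges;
    for (uint32_t i : indices)
    {
        if (!ranges.empty() && ranges.back().first + ranges.back().second == i)
            ++ranges.back().second;      // extends the current run
        else
            ranges.push_back({i, 1});    // starts a new run
    }
    return ranges;
}

// At submit time (D3D12 side, not shown): for each range r, issue
//   list->ResolveQueryData(heap, D3D12_QUERY_TYPE_OCCLUSION,
//                          r.first, r.second, readback, r.first * 8);
```

The layer would additionally need to defer each resolve until just before the query's first use, which is the harder part of the request.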
I'm using D3D11On12 for interop with Direct2D.
There is a one-time 120 millisecond render, whereas the render work usually takes 1.5 milliseconds.
If the render workload is smaller, it takes longer to happen. It seems like something akin to when std::vector has to resize.
Do you know about this? Can you give any guidance?
Hi,
I'm trying to debug a DirectX 11 app using PIX + D3D11On12.
It says in the PIX documentation:
The D3D11On12 layer will translate calls to [ID3DUserDefinedAnnotation] into PIX markers for you
See: https://devblogs.microsoft.com/pix/debugging-d3d11-apps-using-d3d11on12/
However, they're not showing up in the PIX capture?
Please advise
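For reference, the markers in question come from ID3DUserDefinedAnnotation on the immediate context. A minimal sketch of how an app would emit them (assuming an existing `ID3D11DeviceContext* context`; this is the app-side pattern, not a fix for the missing markers):

```cpp
// Sketch: emit PIX-visible events from a D3D11 app via
// ID3DUserDefinedAnnotation (declared in d3d11_1.h).
ComPtr<ID3DUserDefinedAnnotation> annotation;
if (SUCCEEDED(context->QueryInterface(IID_PPV_ARGS(&annotation))))
{
    annotation->BeginEvent(L"Shadow pass");   // opens a nested region in PIX
    // ... draw calls ...
    annotation->SetMarker(L"Cascade 0 done"); // single point-in-time marker
    annotation->EndEvent();                   // closes the region
}
```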
I can compile and generate D3D11On12.dll, but I don't know how to use it. I tried directly replacing the one in C:\Windows\System32 and rebooting, but per https://docs.microsoft.com/en-us/windows-hardware/test/hlk/testref/c510c85c-9da1-4028-b396-4b1b5117f5c5 I found that doesn't work.
In short, we want to know how Windows 11 can be made to load my compiled D3D11On12.dll so that it works. Thank you for your help.
My app uses the following technologies:
hardware: AMD WX7100
The app does not create a swapchain; all rendering is performed off-screen to a texture.
I replaced the D3D11 device creation code with D3D11On12 code: create a D3D12 device and command queue, and call D3D11On12CreateDevice.
Other than that nothing was changed.
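For concreteness, the replacement device creation presumably looks something like this sketch (an assumption based on the description above, not the reporter's actual code):

```cpp
// Sketch: create a D3D12 device + direct queue, then wrap them in a
// D3D11 device via D3D11On12CreateDevice.
ComPtr<ID3D12Device> device12;
D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device12));

D3D12_COMMAND_QUEUE_DESC queueDesc = {};
queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
ComPtr<ID3D12CommandQueue> queue;
device12->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

ComPtr<ID3D11Device> device11;
ComPtr<ID3D11DeviceContext> context11;
D3D_FEATURE_LEVEL fl = D3D_FEATURE_LEVEL_11_0;
D3D11On12CreateDevice(
    device12.Get(), D3D11_CREATE_DEVICE_BGRA_SUPPORT,
    &fl, 1,
    reinterpret_cast<IUnknown* const*>(queue.GetAddressOf()), 1,
    0 /*NodeMask*/, &device11, &context11, nullptr);
```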
The problem I notice is that the textures are rendered incorrectly. It is as if their internal memory layout interpretation has somehow changed. I would guess that this is somehow related to their hardware dependent layout as the way it looks reminds me of the AMD micro-macro texture memory tiling.
Non-textured geometry seems to be rendered OK.
Here is a simple texture on a quad:
Hi All,
Recently I tried to compile d3d11on12, but got lots of errors. The first one is a missing "d3dx12.h", which is included from "external/d3d11on12.h". Searching the internet, I found that "d3dx12.h" is only a helper header, which can be found in another project, DirectX-Graphics-Samples. I wonder if this header could be placed in the "external/" folder directly, instead of producing such a misleading error message.
What's more, after I copied "d3dx12.h" into the "external/" folder, the compiler complained that it could not find the definition of "CD3DX12_RESOURCE_DESC1", which is used on line 135 of "external/d3d11on12.h". The "d3dx12.h" I checked only defines CD3DX12_RESOURCE_DESC. Is the trailing "1" a typo? I hope this can be confirmed.
Many thanks.
Reproduced on two PCs:
When decoding video frames using an IMFSourceReader with a D3D11 device created by D3D11On12CreateDevice, the GPU shared memory usage appears to grow unbounded. If there are enough frames to decode, it will reach the shared memory limit.
With the exact same code using a "real" D3D11 device, the GPU shared memory usage remains flat. This is the expected result, since each IMFSample is decoded and then immediately released.
I can repro with any AVC or HEVC video in MP4 or MPEG-TS. But the video stream needs to be long enough before the memory usage can be observed. Decoding 1,000 frames should be enough to at least see a difference between D3D11 and D3D11on12.
Here's a sample file in case you need it. This is the 'main.ts' file referenced in the sample code.
main.ts MPEG-TS with 960x540 AVC (186 MB)
DxDiag-AMD.txt
DxDiag-Nvidia.txt
Video Codec is at 100% and memory usage is flat. This is the expected result.
The GPU shared memory usage grows linearly as soon as the decoding begins. Interestingly, there is also some Copy work going on that does not appear in the test above.
The results are similar, though not identical, on a Windows 11 laptop with GTX 1660 Ti.
Right now, the build is overly complex because it requires D3D12TranslationLayer and D3D11On12 to be siblings, both included by a (not-provided) top-level CMakeLists.txt. This is useful for us because we want to build D3D12TranslationLayer once and share that .lib among multiple other mapping layers, but it would be far more convenient for anybody else building this layer to just grab this repo and have it pull dependencies automatically.
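One common way to make the repo self-contained would be CMake's FetchContent. This is only a sketch; the target and library names below are assumptions, not the project's actual ones, and a real setup should pin a specific commit:

```cmake
include(FetchContent)
FetchContent_Declare(
    D3D12TranslationLayer
    GIT_REPOSITORY https://github.com/microsoft/D3D12TranslationLayer.git
    GIT_TAG        master   # pin a real commit hash in practice
)
FetchContent_MakeAvailable(D3D12TranslationLayer)

# Hypothetical target names; the actual targets may differ.
target_link_libraries(d3d11on12 PRIVATE d3d12translationlayer_lib)
```

A consumer who wants the shared-sibling layout could still bypass this with `FETCHCONTENT_SOURCE_DIR_D3D12TRANSLATIONLAYER`, so both workflows remain possible.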
I'd have more interest in the opposite: making D3D12-only games work with older D3D11 hardware.
Apologies if this isn't the best avenue to ask this, but was wondering if this project is something that is appropriate to use for conversion of DirectX versions in regard to the consoles, specifically from the Xbox One to Xbox Series X? Of course other API calls will need to be manually adjusted, but can this be included in an Xbox project or is this targeted for only Windows?
A common D3D11 pattern was to issue pairs of UpdateSubresource + Draw, where constant buffers were updated just-in-time before each draw.
This pattern, when translated to D3D12 is:
That pattern means no draw overlaps with any other.
Some sort of 'rename' operation could occur, similar to the MAP_DISCARD behaviour from D3D11 to better allow GPU parallelism.
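To sketch what such a rename could look like (a hypothetical illustration, not the layer's implementation): each just-in-time constant-buffer update is redirected to a fresh, 256-byte-aligned slice of one large upload heap, so successive draws read different memory and need no synchronization between them.

```cpp
#include <cstdint>

// Hypothetical ring sub-allocator sketch: redirect each constant-buffer
// update to a fresh 256-byte-aligned slice of a large upload heap, so
// draws can overlap on the GPU ("rename" instead of serialize).
struct RingAllocator
{
    uint64_t size;      // total bytes in the upload heap
    uint64_t head = 0;  // next free byte

    // Returns the offset for a new allocation, wrapping at the end.
    // Real code must also fence against the GPU before reusing memory.
    uint64_t Allocate(uint64_t bytes, uint64_t align = 256)
    {
        uint64_t offset = (head + align - 1) & ~(align - 1);
        if (offset + bytes > size)   // wrap to the start of the heap
            offset = 0;
        head = offset + bytes;
        return offset;
    }
};
```

Each draw then binds its own offset into the heap (via a root CBV at `baseGpuVA + offset`, say), which is what restores the MAP_DISCARD-style parallelism.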
When running wgf11draw Draw:1 with the -11On12 option, the translation layer fails to create a PSO because of an invalid D3D12_GRAPHICS_PIPELINE_STATE_DESC argument.
This application only uses a Vertex Shader and StreamOut. It doesn't have a Pixel Shader, render target, or depth stencil buffer, and its Vertex Shader doesn't output an SV_POSITION value.
So in the D3D12_GRAPHICS_PIPELINE_STATE_DESC argument, the D3D12_STREAM_OUTPUT_DESC::RasterizedStream should be set to D3D12_SO_NO_RASTERIZED_STREAM, and the D3D12_DEPTH_STENCIL_DESC::DepthEnable, D3D12_DEPTH_STENCIL_DESC::StencilEnable should be set to false.
But the translation layer generates a D3D12_GRAPHICS_PIPELINE_STATE_DESC with D3D12_STREAM_OUTPUT_DESC::RasterizedStream set to 0 and D3D12_DEPTH_STENCIL_DESC::DepthEnable set to true. This results in an invalid-argument error when creating the PSO.
In D3D11On12::StreamOutShader::ProduceDesc, we may need to check whether any output parameter has the "SV_POSITION" semantic name. If not, we should set RasterizedStream to D3D12_SO_NO_RASTERIZED_STREAM and disable depth and stencil.
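The proposed check could look roughly like this sketch. The type and function names here are hypothetical stand-ins, not the layer's actual signature structures, and D3D semantic-name comparison should really be case-insensitive:

```cpp
#include <cstring>
#include <vector>

// Hypothetical stand-in for one entry of the shader's output signature.
struct SignatureElement { const char* SemanticName; };

// Scan the stream-output shader's output signature for SV_POSITION;
// without it, the stream cannot be rasterized.
bool HasPositionOutput(const std::vector<SignatureElement>& outputs)
{
    for (const auto& e : outputs)
        if (std::strcmp(e.SemanticName, "SV_POSITION") == 0)
            return true;
    return false;
}

// In ProduceDesc (D3D12 side, not shown): if !HasPositionOutput(...),
// set StreamOutput.RasterizedStream = D3D12_SO_NO_RASTERIZED_STREAM and
// DepthStencilState.DepthEnable = DepthStencilState.StencilEnable = FALSE.
```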
There is one known gap with D3D11On12’s deferred context support, which is if someone tries to create an ID3D11CommandList that has a Map(DISCARD) operation on a buffer, and then execute that command list more than once.
https://github.com/microsoft/D3D11On12/blob/master/src/device.cpp#L955
There are important files that Microsoft projects should all have that are not present in this repository. A pull request has been opened to add the missing file(s). When the PR is merged, this issue will be closed automatically.
Microsoft teams can learn more about this effort and share feedback within the open source guidance available internally.
I switched from a hwnd SwapChain to a composition SwapChain and started getting a semi non-reproducible device removal bug, which usually happens in the first few frames.
Debug output:
After Present
D3D12: Removing Device.
After MoveToNextFrame
D3D12 ERROR: ID3D12Device::RemoveDevice: Device removal has been triggered for the following reason (DXGI_ERROR_ACCESS_DENIED: The application attempted to use a resource it does not have access to. This could be, for example, rendering to a texture while only having read access.). [ EXECUTION ERROR #232: DEVICE_REMOVAL_PROCESS_AT_FAULT]
The only resources I might not have access to are the swap chain buffers, so it seems like there is a bug. This doesn't happen if I comment out my D2D draw calls, but does still happen even if I am not rendering to the swap chain buffers but some secondary resource.
Also, could I get some guidance on how to recover? Present always returns 0; only on the next call to EndDraw do I get a non-zero HRESULT, D2DERR_RECREATE_TARGET.