jp7677 / dxvk-nvapi
Alternative NVAPI implementation on top of DXVK.
License: MIT License
See https://github.com/jp7677/dxvk-nvapi/runs/5708604320?check_suite_focus=true compared to https://github.com/jp7677/dxvk-nvapi/runs/5439563691?check_suite_focus=true
There was a Wine update in between; let's wait until the next Wine update before investigating...
The NVIDIA Linux driver ships libnvidia-api.so.1
since driver version 525. This library seems to be the Linux counterpart of nvapi64.dll
from Windows. Let's use this issue to discuss ideas and its relevance to this project.
The version from the 525 driver contains the following methods (based on the function pointers described in the 520 NVAPI Open Source SDK) (thanks @Saancreed):
NvAPI_Initialize
NvAPI_Unload
NvAPI_GetErrorMessage
NvAPI_EnumPhysicalGPUs
NvAPI_GPU_GetShaderSubPipeCount
NvAPI_GPU_GetGpuCoreCount
NvAPI_GPU_GetFullName
NvAPI_GPU_GetPCIIdentifiers
NvAPI_GPU_GetBusType
NvAPI_GPU_GetBusId
NvAPI_GPU_GetVbiosRevision
NvAPI_GPU_GetVbiosOEMRevision
NvAPI_GPU_GetVbiosVersionString
NvAPI_GPU_GetCurrentPCIEDownstreamWidth
NvAPI_GPU_GetPhysicalFrameBufferSize
NvAPI_GPU_GetVirtualFrameBufferSize
NvAPI_GPU_GetBoardInfo
NvAPI_GPU_GetArchInfo
NvAPI_GPU_GetHDCPSupportStatus
NvAPI_GPU_GetTachReading
NvAPI_GPU_GetECCStatusInfo
NvAPI_GPU_GetECCConfigurationInfo
NvAPI_GPU_GetVirtualizationInfo
NvAPI_GPU_GetLicensableFeatures
NvAPI_GPU_GetVRReadyData
NvAPI_GPU_GetPstatesInfoEx
NvAPI_GPU_GetPstates20
NvAPI_GPU_GetCurrentPstate
NvAPI_GPU_GetDynamicPstatesInfoEx
NvAPI_GPU_GetThermalSettings
NvAPI_GPU_GetAllClockFrequencies
NvAPI_GPU_QueryIlluminationSupport
NvAPI_GPU_GetIllumination
NvAPI_GPU_SetIllumination
NvAPI_GPU_ClientIllumDevicesGetInfo
NvAPI_GPU_ClientIllumDevicesGetControl
NvAPI_GPU_ClientIllumDevicesSetControl
NvAPI_GPU_ClientIllumZonesGetInfo
NvAPI_GPU_ClientIllumZonesGetControl
NvAPI_GPU_ClientIllumZonesSetControl
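As a quick experiment, the library can be probed from a native Linux process. Below is a minimal Python/ctypes sketch; that the dispatcher is exported as `nvapi_QueryInterface`, and the `NvAPI_Initialize` ID `0x0150E828`, are taken from the public SDK material and should be treated as assumptions.

```python
import ctypes

def probe_libnvidia_api():
    """Try to load libnvidia-api.so.1 and resolve its dispatcher symbol.

    Returns "loaded" when both the library and the (assumed)
    nvapi_QueryInterface export are present, "missing" otherwise.
    """
    try:
        lib = ctypes.CDLL("libnvidia-api.so.1")
    except OSError:
        return "missing"  # driver < 525 or no NVIDIA driver installed
    try:
        query = lib.nvapi_QueryInterface
    except AttributeError:
        return "missing"  # library present but export name differs
    # NvAPI entry points are looked up by 32-bit function ID,
    # e.g. 0x0150E828 for NvAPI_Initialize (ID from public headers).
    query.restype = ctypes.c_void_p
    query.argtypes = [ctypes.c_uint32]
    return "loaded"

print(probe_libnvidia_api())
```

On a machine without the 525+ driver this prints "missing"; the sketch is only meant to show the lookup shape, not a working NvAPI client.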
I was recommended to report here. Originally I brought up the issue at ValveSoftware/Proton#4370.
Error message as soon as game starts:
Log output with PROTON_LOG=1 %command%
as launch option: steam-747350.log
When launch option is set to PROTON_ENABLE_NVAPI=0 %command%
the game works without issue. I have PROTON_ENABLE_NVAPI=1
in /etc/environment
just to avoid adding it to every game's launch option.
Specs: https://linux-hardware.org/?probe=d32441b2c3
SteamVR version: 1.25.6
Well, actually not only that; without wine-nvml in the prefix it does this (logs provided by CME on LGD):
NvAPI_QueryInterface (0xad298d3f): Unknown function ID
DXVK-NVAPI hotfix-tlou-pso-memory (tlou-i.exe)
Successfully acquired Vulkan vkGetInstanceProcAddr @ 0x3b6dc40a0
NvAPI Device: NVIDIA GeForce GTX 1060 6GB (530.30.2)
NvAPI Output: \\.\DISPLAY1
NvAPI Output: \\.\DISPLAY2
NvAPI Output: \\.\DISPLAY3
NvAPI_Initialize: OK
NvAPI_QueryInterface (0x33c7358c): Unknown function ID
NvAPI_QueryInterface (0x593e8644): Unknown function ID
NvAPI_EnumPhysicalGPUs: OK
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_GPU_GetArchInfo: OK
NvAPI_Initialize: OK
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_EnumPhysicalGPUs: OK
NvAPI_GPU_GetPCIIdentifiers: OK
NvAPI_GPU_GetArchInfo: OK
NvAPI_GPU_GetGpuCoreCount: No implementation
… and otherwise it does:
NvAPI_QueryInterface (0xad298d3f): Unknown function ID
DXVK-NVAPI hotfix-tlou-pso-memory (tlou-i.exe)
Successfully acquired Vulkan vkGetInstanceProcAddr @ 0x3b6dc40a0
Successfully loaded nvml.dll
NVML loaded and initialized successfully
NvAPI Device: NVIDIA GeForce GTX 1060 6GB (530.30.2)
NvAPI Output: \\.\DISPLAY1
NvAPI Output: \\.\DISPLAY2
NvAPI Output: \\.\DISPLAY3
NvAPI_Initialize: OK
NvAPI_QueryInterface (0x33c7358c): Unknown function ID
NvAPI_QueryInterface (0x593e8644): Unknown function ID
NvAPI_EnumPhysicalGPUs: OK
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_GPU_GetArchInfo: OK
NvAPI_Initialize: OK
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_EnumPhysicalGPUs: OK
NvAPI_GPU_GetPCIIdentifiers: OK
NvAPI_GPU_GetArchInfo: OK
NvAPI_GPU_GetGpuCoreCount: OK
NvAPI_GPU_GetPstates20: No implementation
… after which it dies in both cases. That gives us two problems, maybe more if the game tries to call more functions after we fix the one that's causing it to explode now:
NvAPI_GPU_GetPstates20, which we can either fake or try to retrieve from libnvidia-api.so.1 somehow. If the former is enough to satisfy the game, I'd say we could avoid the issues mentioned in the other thread, at least for now.
NvAPI_GPU_GetGpuCoreCount, so either Proton would need to ship wine-nvml or we need to fake a success if NVML can't be used.

Graphics card: MSI RTX 2080 Gaming X Trio
Kernel 6.2.x (several in 6.2 tested)
dxvk: several different versions tested
driver: 525.47.13 and 530.41.03
wine: several versions tested from 8.0.4 Staging to Wine Proton GE
Observed behaviour:
When starting the Windows version of Unigine Superposition through Lutris (Wine/DXVK) with dxvk-nvapi 0.6.2, the temperature is missing in the top right corner and in the result (maybe some other sensors as well).
Selecting a prior version in Lutris (0.5.4) shows temperatures just fine.
I am hacking together a tiny testing app for testing NvAPI functions in both Windows and Wine. It may not be that useful for gaming tests and such, but I found it a useful thing to check what replies one gets from NvAPI for various functions.
It is compiled as a .exe, so it runs in both Windows and Wine. The goal is that I can verify and compare "things to expect" in an easy manner when putting together replies from NvAPI. (I have an old GTX 970 in a Win10 box that I use for testing.)
Use it if you like, or if you have easier/better tools, feel free to let me know :) Close this "issue" if you feel it is useless or not interesting, though. I felt it was easy to add simple tests<->replies, at least.
https://github.com/SveSop/nvtt
Hi! I tried to install the Windows version with DLSS support to check how it works on my notebook (5800H + RTX 3070). In the game's preferences window I can choose DLSS, but in the game itself it simply can't be changed, even though it is not grayed out. Because of the Optimus nature of the laptop I tried blacklisting the AMD iGPU, updating nvapi to v0.5.1 and nvngx_dlss to 2.3.5, and inserting '__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia PROTON_ENABLE_NVAPI=1 PROTON_HIDE_NVIDIA_GPU=0 %command%' into the Steam game options.
The log always reports:
DLSS support test, code = 0x00000001, info: NVSDK_NGX_Result_Success
Disabling DLSS as it is not supported on non Nvidia cards
I'm using the latest Proton Experimental in the Steam preferences. For comparison, Hellblade works flawlessly with DLSS and DXR in DX12 mode, and with DLSS in DX11 mode, with only the single option PROTON_ENABLE_NVAPI=1 %command%.
Rise of the Tomb Raider optimus.log
steam-391220.log
Rise of the Tomb Raider external display blacklist iGPU.log
os: arch linux
gpu: rtx 3050
cpu: i5 11400h
Log with the environment variable DXVK_NVAPI_LOG_LEVEL=info when running the game:
DXVK-NVAPI v0.5.4 (eso64.exe)
NvAPI Device: NVIDIA GeForce RTX 3050 Laptop GPU (520.56.6)
NvAPI Output: \\.\DISPLAY1
NvAPI_Initialize: OK
NvAPI_QueryInterface (0x33c7358c): Unknown function ID
NvAPI_QueryInterface (0x593e8644): Unknown function ID
NvAPI_D3D_GetCurrentSLIState: OK
NvAPI_Unload: OK
Log when running the tests:
DXVK-NVAPI v0.5.4 (nvapi64-tests.exe)
info: Game: nvapi64-tests.exe
info: DXVK: v1.10.3
info: Built-in extension providers:
info: Win32 WSI
info: OpenVR
info: OpenXR
info: OpenVR: could not open registry key, status 2
info: OpenVR: Failed to locate module
info: Enabled instance extensions:
info: VK_KHR_get_surface_capabilities2
info: VK_KHR_surface
info: VK_KHR_win32_surface
info: NVIDIA GeForce RTX 3050 Laptop GPU:
info: Driver: 520.56.6
info: Vulkan: 1.3.205
info: Memory Heap[0]:
info: Size: 4096 MiB
info: Flags: 0x1
info: Memory Type[1]: Property Flags = 0x1
info: Memory Type[4]: Property Flags = 0x7
info: Memory Heap[1]:
info: Size: 11834 MiB
info: Flags: 0x0
info: Memory Type[0]: Property Flags = 0x0
info: Memory Type[2]: Property Flags = 0x6
info: Memory Type[3]: Property Flags = 0xe
info: Intel(R) UHD Graphics (TGL GT1):
info: Driver: 22.2.99
info: Vulkan: 1.3.230
info: Memory Heap[0]:
info: Size: 11834 MiB
info: Flags: 0x1
info: Memory Type[0]: Property Flags = 0xf
NvAPI Device: NVIDIA GeForce RTX 3050 Laptop GPU (520.56.6)
NvAPI Output: \\.\DISPLAY1
NvAPI_Initialize: OK
NvAPI_GetInterfaceVersionString: OK
--------------------------------
Interface version: DXVK_NVAPI
NvAPI_SYS_GetDriverAndBranchVersion: OK
Driver version: 520.56
Driver branch: r510_v0.5.4
NvAPI_GPU_CudaEnumComputeCapableGpus: OK
NvAPI_EnumPhysicalGPUs: OK
----------------------------
GPU 0
NvAPI_GPU_GetGPUType: OK
GPU type: 2 (Discrete)
NvAPI_GPU_GetPCIIdentifiers: OK
Device ID: 0x25a210de
Subsystem ID: N/A
NvAPI_GPU_GetFullName: OK
Full name: NVIDIA GeForce RTX 3050 Laptop GPU
NvAPI_GPU_GetBusId: OK
NvAPI_GPU_GetBusSlotId: OK
Bus:Slot ID: PCI:01:00
NvAPI_GetGPUIDfromPhysicalGPU: OK
Board ID: 0x100
NvAPI_GPU_GetPhysicalFrameBufferSize: OK
Physical framebuffer size: 4096MB
NvAPI_GPU_GetAdapterIdFromPhysicalGpu: OK
Adapter ID/LUID: f2030000-00000000 (0x00000000/0x000003f2)
NvAPI_GPU_GetArchInfo: OK
Architecture ID: 0x00000170 (Ampere)
Implementation ID: 0x00000002
Compute capable: Yes (Compute GPU topology flags: 0x0b)
NvAPI_GPU_GetVbiosVersionString: OK
VBIOS version: N/A
NvAPI_GPU_GetDynamicPstatesInfoEx: No implementation
Current GPU utilization: N/A
Current memory utilization: N/A
NvAPI_GPU_GetThermalSettings: No implementation
Current GPU temperature: N/A
NvAPI_GPU_GetCurrentPstate: No implementation
Current performance state: N/A
NvAPI_GPU_GetAllClockFrequencies: No implementation
Current graphics clock: N/A
Current memory clock: N/A
Current video clock: N/A
NvAPI_Unload: OK
===============================================================================
All tests passed (34 assertions in 1 test case)
Once most PRs are merged and things have settled a bit, we should use __func__.
Is Reflex support possible within dxvk-nvapi's scope, or would there be additional work (like the shim DLLs for DLSS by Nvidia) required to make it work on Linux?
This might do the trick https://mesonbuild.com/Code-formatting.html
Watch Dogs Legion has never worked with vkd3d-proton until very recently after some dxil fixes. However, the option for DLSS is grayed out, even though the option for ray tracing is not. Any time I've seen this on a DLSS 2.X game it's been because DXVK-NVAPI wasn't loaded for one reason or another. However, I'm getting a dxvk-nvapi.log, so it is in fact being loaded.
Log:
---------- 2021-11-14 07:21:26 ----------
NvAPI_QueryInterface 0xad298d3f: Unknown function ID
DXVK-NVAPI v0.4-54-g16d36c7 (WatchDogsLegion.exe)
NVML loaded and initialized successfully
NvAPI Output: \\.\DISPLAY1
NvAPI Output: \\.\DISPLAY2
NvAPI Device: NVIDIA GeForce RTX 3090 (495.44.0)
DXVK_NVAPI_DRIVER_VERSION is set to '49649', reporting driver version 496.49
NvAPI_Initialize: OK
NvAPI_QueryInterface 0x33c7358c: Unknown function ID
NvAPI_QueryInterface 0x593e8644: Unknown function ID
NvAPI_EnumPhysicalGPUs: OK
NvAPI_GPU_GetPCIIdentifiers: OK
NvAPI_QueryInterface NvAPI_Mosaic_EnumDisplayGrids: Not implemented method
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_DISP_GetDisplayIdByDisplayName \\.\DISPLAY1: OK
NvAPI_DISP_GetDisplayIdByDisplayName \\.\DISPLAY2: OK
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_DISP_GetDisplayIdByDisplayName \\.\DISPLAY1: OK
NvAPI_DISP_GetDisplayIdByDisplayName \\.\DISPLAY2: OK
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_DISP_GetDisplayIdByDisplayName \\.\DISPLAY1: OK
NvAPI_DISP_GetDisplayIdByDisplayName \\.\DISPLAY2: OK
NvAPI_EnumPhysicalGPUs: OK
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_DISP_GetDisplayIdByDisplayName \\.\DISPLAY1: OK
NvAPI_DISP_GetDisplayIdByDisplayName \\.\DISPLAY2: OK
NvAPI_Initialize: OK
NvAPI_QueryInterface NvAPI_DRS_CreateSession: Not implemented method
NvAPI_EnumPhysicalGPUs: OK
NvAPI_GPU_GetAdapterIdFromPhysicalGpu: OK
NvAPI_QueryInterface 0xf2400ab: Unknown function ID
NvAPI_EnumPhysicalGPUs: OK
NvAPI_GPU_GetArchInfo: OK
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_QueryInterface 0xa782ea46: Unknown function ID
NvAPI_QueryInterface NvAPI_DRS_FindApplicationByName: Not implemented method
NvAPI_QueryInterface NvAPI_DRS_DestroySession: Not implemented method
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_DISP_GetDisplayIdByDisplayName \\.\DISPLAY1: OK
NvAPI_DISP_GetDisplayIdByDisplayName \\.\DISPLAY2: OK
NvAPI_DISP_GetDisplayIdByDisplayName \\.\DISPLAY1: OK
NvAPI_DISP_GetDisplayIdByDisplayName \\.\DISPLAY2: OK
Weirdly enough, I'm not getting a dxgi.log.
Distribution: Arch
GPU: Nvidia RTX 3090 (EVGA XC3 Ultra)
Nvidia Driver: 495.44
Wine Version: lutris-ge-6.21
DXVK-NVAPI version: latest master
VKD3D-Proton version: latest master
Trying to launch Control to get DLSS. Ray tracing works fine. If I disable dxvk and dxvk_nvapi the game launches fine.
Arch, 3070 Ti, tried Nvidia drivers 495 and 470.94, Vulkan driver 1.2.202. Using Lutris.
DXVK v1.9.2L
VKD3D v.25L
DXVK NVAPI v0.5
I have placed nvapi.dll, nvapi64.dll and nvml.dll in both system32 and syswow64. In Lutris -> Wine configuration -> Libraries, I added nvapi, nvapi64 and dxgi as native. The Lutris executable is Control_DX12 with the argument "-dx12".
DXVK NVAPI log: https://pastebin.com/PqsDMzge
DXVK log: https://pastebin.com/a8Ewwua7
Lutris log: https://pastebin.com/FB4dMVem
Am I doing something wrong here?
Once DXVK sees a new release, put that version into the v0.5 release notes (https://github.com/jp7677/dxvk-nvapi/releases/tag/v0.5).
Just a reminder so I won't forget...
BmLauncher_dxgi.log
steam-35140.log
This unofficial game launcher makes use of NvAPI but doesn't work with dxvk-nvapi, supposedly due to missing implementations.
Remove this "feature" asap. thanks
Reminder to self to update the readme with DXVK_ENABLE_NVAPI from DXVK 1.10.1 and Proton 7.0 applying this automatically.
Game needs WINEDLLOVERRIDES=nvngx= to get past the initial crash after the first loading screen.
Afterwards the game does not want to enable DLSS; this is the log:
---------- 2021-10-31 04:15:01 ----------
DXVK-NVAPI experimental-6.3-20211027 (SOTTR.exe)
NvAPI Output: \\.\DISPLAY1
NvAPI Device: NVIDIA GeForce RTX 3060 (495.44.0)
NvAPI_Initialize: OK
NvAPI_QueryInterface 0x33c7358c: Unknown function ID
NvAPI_QueryInterface 0x593e8644: Unknown function ID
NvAPI_QueryInterface NvAPI_Stereo_IsEnabled: Not implemented method
NvAPI_QueryInterface NvAPI_Stereo_SetDriverMode: Not implemented method
NvAPI_DISP_GetGDIPrimaryDisplayId: OK
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_Mosaic_GetDisplayViewportsByResolution: Mosaic not active
NvAPI_QueryInterface 0xad298d3f: Unknown function ID
NvAPI_Initialize: OK
NvAPI_QueryInterface NvAPI_DRS_CreateSession: Not implemented method
NvAPI_EnumPhysicalGPUs: OK
NvAPI_GPU_GetAdapterIdFromPhysicalGpu: OK
NvAPI_QueryInterface 0xf2400ab: Unknown function ID
NvAPI_EnumPhysicalGPUs: OK
NvAPI_GPU_GetArchInfo: OK
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_QueryInterface 0xa782ea46: Unknown function ID
NvAPI_QueryInterface NvAPI_DRS_FindApplicationByName: Not implemented method
NvAPI_QueryInterface NvAPI_DRS_DestroySession: Not implemented method
NvAPI_Initialize: OK
NvAPI_QueryInterface NvAPI_D3D11_EnumerateMetaCommands: Not implemented method
NvAPI_Initialize: OK
NvAPI_D3D11_CreateSamplerState: OK
NvAPI_EnumPhysicalGPUs: OK
NvAPI_GPU_GetAdapterIdFromPhysicalGpu: OK
NvAPI_EnumPhysicalGPUs: OK
NvAPI_GPU_GetArchInfo: OK
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_D3D11_IsNvShaderExtnOpCodeSupported 12 (NV_EXTN_OP_FP16_ATOMIC): OK
NvAPI_D3D11_IsNvShaderExtnOpCodeSupported 12 (NV_EXTN_OP_FP16_ATOMIC): OK
NvAPI_QueryInterface NvAPI_D3D_QueryModifiedWSupport: Not implemented method
NvAPI_EnumPhysicalGPUs: OK
NvAPI_GPU_GetAdapterIdFromPhysicalGpu: OK
NvAPI_EnumPhysicalGPUs: OK
NvAPI_GPU_GetArchInfo: OK
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_QueryInterface NvAPI_D3D12_EnumerateMetaCommands: Not implemented method
God of War recently received an update that causes it to crash when NVAPI is enabled.
PROTON_ENABLE_NVAPI=1
=> crash
PROTON_ENABLE_NVAPI=0
=> works
There are a couple of unknown functions mentioned in the log, but they don't seem to be part of the open-source NVAPI headers as far as I can tell.
Hi, according to many users this is a longstanding problem dating back many dxvk-nvapi versions, and it still reproduces on the latest release as well.
Here is the log:
Setup info:
MHW (which is a DLSS 1.1.13 title) crashes shortly after launch with DXVK-NVAPI enabled. Setting WINEDLLOVERRIDES='nvngx='
allows it to run again.
Proton log file (created with env DXVK_NVAPI_LOG_LEVEL=info PROTON_LOG=1 PROTON_ENABLE_NVAPI=1 prime-run %command%
): steam-582010.log
One line of interest is probably this:
011c:err:d3d12_device_vkd3d_ext_CreateCubinComputeShaderWithName: Failed to create cubin function module, vr -3.
Tested on GTX 1660 Ti Mobile (driver version 470.62.02), using Arch Linux, current Proton Experimental (but with Soldier runtime disabled) and DXVK-NVAPI from ca1d79c because I wasn't sure if the issue is related.
Hi team,
I really need to read memory information via NvAPI.
I tried:
NvGetMemMarker = NvQueryInterface(0x42aea16aul);
NvGetMemType = NvQueryInterface(0x57F7CAAC);
It works on Windows, but with Wine + dxvk-nvapi I get an error.
Do you have any advice?
Thanks, team.
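The failure above follows from how NvAPI exposes most entry points: clients call NvAPI_QueryInterface with an undocumented 32-bit ID, and the implementation returns a function pointer or NULL. Below is a minimal Python sketch of that pattern — purely illustrative, not dxvk-nvapi's actual dispatch table; the two memory-info IDs are undocumented and simply unknown to the sketch, which is why such calls fail under dxvk-nvapi while succeeding on Windows.

```python
# Sketch of the QueryInterface-by-ID pattern used by NVAPI clients: a
# dispatcher maps 32-bit function IDs to callables and returns None
# (NULL in the real API) for IDs it does not implement. This mirrors
# the "Unknown function ID" lines seen in the dxvk-nvapi logs.

def make_query_interface(table):
    def query_interface(function_id):
        func = table.get(function_id)
        if func is None:
            print(f"NvAPI_QueryInterface (0x{function_id:08x}): Unknown function ID")
        return func
    return query_interface

# Hypothetical table implementing only NvAPI_Initialize
# (ID 0x0150E828 from the public headers).
implemented = {0x0150E828: lambda: 0}  # 0 == NVAPI_OK
query = make_query_interface(implemented)

assert query(0x0150E828) is not None  # implemented -> usable pointer
assert query(0x42AEA16A) is None      # the undocumented memory-info IDs
assert query(0x57F7CAAC) is None      # resolve to nothing here
```

So on Windows the driver's table contains those IDs, while an implementation that never registered them can only return NULL.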
I have done some experimentation with the demo "FluidMark" (http://www.ozone3d.net/benchmarks/physx-fluidmark/) to figure out if I could get it to enable GPU-based PhysX. So far no luck, but the staging implementation of nvapi/nvcuda does work with this demo.
I made a test implementation of what I suspect could be the NvAPI call the demo uses, NvAPI_GPU_CudaEnumComputeCapableGpus (because fiddling with this in the staging implementation makes it show "CPU only").
I also did a quick update to my tiny test program, and comparing staging <-> dxvk-nvapi <-> Windows shows that the gpuHandle part of what I am experimenting with is different. The staging implementation uses a "fake" ID for this, as I guess it does not actually check any "real" GPU data like dxvk-nvapi does. I believe it is the demo that outputs this error message:
ERROR: Could Not Get Primary Adapter Handle
(I cannot find this particular error message in the source.)
I have not studied the staging implementation of nvcuda in depth, but to me it seems like a wrapper around libcuda.so (much like wine-nvml), so I have hopes that it would work without changes.
https://github.com/SveSop/dxvk-nvapi/tree/physx
The reason I started fiddling with this is that I have some vague memories of some game (Batman?) that did not work with PhysX/NvAPI, but ever since DXVK's default became faking an AMD card it has not been a huge issue. I thought that if dxvk-nvapi could make use of the staging implementation of nvcuda for PhysX (for those games utilizing it), it could give some performance boost.
I don't have any particular tests to perform, other than this FluidMark demo I found so far.
Let me know if you think it is worth pursuing further, or if it is just a useless waste of time :)
EDIT:
The implementation was missing NvAPI_GetPhysicalGPUFromGPUID. I've added that to my branch, along with a minor change to NvAPI_GPU_CudaEnumComputeCapableGpus. The FluidMark demo now seems to work.
Hello there,
It's been a while, but some update (from Daz Studio) broke iRay rendering on the NVIDIA GPU through Wine. I'm only a recent user, so I don't know how it used to work or if dxvk_nvapi was needed, but I think it used to work before the first releases of dxvk, when wine-staging was needed. Rendering works fine on CPU, but it's much slower.
I’ve checked the logs and tried with and without dxvk_nvapi. Here are the logs from dxvk_nvapi:
---------- 2022-01-04 18:46:40 ----------
NvAPI_QueryInterface 0xad298d3f: Unknown function ID
DXVK-NVAPI v0.5-20-ge23d450 (DAZStudio.exe)
NVML loaded and initialized successfully
NvAPI Device: NVIDIA GeForce RTX 3060 (495.46.0)
NvAPI Output: \\.\DISPLAY1
NvAPI_Initialize: OK
NvAPI_QueryInterface 0x33c7358c: Unknown function ID
NvAPI_QueryInterface 0x593e8644: Unknown function ID
NvAPI_GetInterfaceVersionString: OK
NvAPI_EnumLogicalGPUs: OK
NvAPI_EnumPhysicalGPUs: OK
NvAPI_QueryInterface 0x1efc3957: Unknown function ID
NvAPI_EnumNvidiaDisplayHandle 0: OK
NvAPI_GetPhysicalGPUsFromDisplay: OK
NvAPI_QueryInterface NvAPI_GetAssociatedNvidiaDisplayName: Not implemented method
NvAPI_GetErrorMessage -3 (NVAPI_NO_IMPLEMENTATION): OK
NvAPI_EnumNvidiaDisplayHandle 1: End enumeration
NvAPI_EnumNvidiaUnAttachedDisplayHandle 0: End enumeration
NvAPI_QueryInterface NvAPI_GPU_GetBusType: Not implemented method
NvAPI_GetErrorMessage -3 (NVAPI_NO_IMPLEMENTATION): OK
NvAPI_GPU_GetFullName: OK
NvAPI_GPU_GetVbiosVersionString: OK
NvAPI_Initialize: OK
NvAPI_SYS_GetDriverAndBranchVersion: OK
NvAPI_EnumPhysicalGPUs: OK
NvAPI_EnumLogicalGPUs: OK
NvAPI_QueryInterface NvAPI_EnumTCCPhysicalGPUs: Not implemented method
NvAPI_GetErrorMessage -3 (NVAPI_NO_IMPLEMENTATION): OK
NvAPI_GetPhysicalGPUsFromLogicalGPU: OK
I’ve started by reading the logs from Daz Studio, and there isn’t more, but I can still provide the log file if needed. Also, when dxvk_nvapi is disabled, I get the error on the NvAPI_Initialize
method (3 times), and not on these 3 methods.
I believe the issue is due to these methods not being implemented in dxvk_nvapi. I would sincerely love to help and try to add them, but I'm merely a Python beginner, and C++ looks like sorcery to me (and, well, we're talking about some low-level coding here). But I'd be happy to help or try if someone can point me in the right direction.
Also, DazStudio physic simulation, dforce, used to work, but now it only says "no OpenCL 1.2 compatible device found". I have absolutely no idea if it’s related, there is no information in the logs about it, but well, maybe if we fix the NvAPI issue we’ll get something.
Thank you guys for your work, it’s still pretty amazing.
Since the new DLSS support was enabled for DX12 games, I've gone about testing the DX12 DLSS games in my library. Control and Cyberpunk 2077 both work perfectly with DLSS. Watch Dogs 2 doesn't, but that's because it doesn't work with vkd3d-proton at all yet. My next DX12 DLSS title to test was Battlefield V, which unlike the other games uses DLSS 1 and not 2. When trying to run the game with dxvk-nvapi, the Frostbite window pops up but is all white, and a few seconds later it crashes. I tried several times with the same result. Disabling dxvk-nvapi and just using vkd3d-proton by itself makes the game work again.
System Information
Distribution: Arch Linux
Kernel: 5.14.7-tkg-cfs
GPU: Nvidia RTX 3090
CPU: AMD Ryzen 9 5900X
Nvidia Drivers: 470.74
bfv_dxgi.log and dxvk-nvapi.log:
logs.tar.gz
As a follow up to #89
There are a few more low-hanging fruits which can be implemented using new functions:
Implementation for all three should be pretty identical.
Using SetDepthBoundsTest seems to make only a marginal difference, or none at all, with regard to performance / frames per second. I'm not sure if this is expected, considering it is used quite often. Maybe there is more to it.
Looking a bit into the code, there is nothing obvious that seems wrong. The DXVK side also seems to do what it should (based on the very limited knowledge I have). The driver is also doing something with it, since hacking different values for the min/max bounds does affect the output and FPS.
A desperate attempt to use VK_DYNAMIC_STATE_DEPTH_BOUNDS_TEST_ENABLE_EXT yielded no difference, at least with one benchmark that uses DBT. I'm attaching that patch here anyway so it doesn't get lost.
Despite DXVK-NVAPI being used and DXVK being configured accordingly, the game doesn't correctly identify the GPU driver, and the Ubisoft Connect overlay throws a warning pop-up (which can be closed, so nothing dramatic):
It also doesn't help to spoof my 3060 in the DXVK config via dxgi.customVendorId = 10de and dxgi.customDeviceId = 2504.
Not sure how the game actually queries driver information, i.e. whether it is fixable in DXVK-NVAPI. But it does recognize DXVK_NVAPI_DRIVER_VERSION=47212.
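For reference, DXVK_NVAPI_DRIVER_VERSION takes the reported driver version as a single integer whose last two digits are the minor part (the logs elsewhere show '49649' being reported as 496.49). A tiny sketch of that decoding — the real implementation's validation details are not shown and are assumptions:

```python
def decode_driver_version(value: str) -> str:
    """Decode a DXVK_NVAPI_DRIVER_VERSION value, e.g. "47212" -> "472.12".

    The last two decimal digits are the minor version; the rest is
    the major version.
    """
    n = int(value)
    major, minor = divmod(n, 100)
    return f"{major}.{minor:02d}"

print(decode_driver_version("47212"))  # → 472.12
print(decode_driver_version("49649"))  # → 496.49
```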
List of NvAPI functions that UE4 uses:
NvAPI_Initialize
NvAPI_Unload
NvAPI_D3D1x_BindSwapBarrier
NvAPI_D3D1x_JoinSwapGroup
NvAPI_D3D1x_QueryMaxSwapGroup
NvAPI_D3D1x_Present
NvAPI_EnumPhysicalGPUs
NvAPI_GPU_GetPstates20
NvAPI_D3D11_IsNvShaderExtnOpCodeSupported
NvAPI_D3D_GetCurrentSLIState
NvAPI_Disp_GetHdrCapabilities
NvAPI_Disp_HdrColorControl
NvAPI_DISP_GetDisplayIdByDisplayName
NvAPI_GetErrorMessage
NvAPI_D3D12_CreateGraphicsPipelineState
NvAPI_D3D12_CreateComputePipelineState
NvAPI_D3D12_IsNvShaderExtnOpCodeSupported
NvAPI_D3D_GetObjectHandleForResource
NvAPI_D3D_BeginResourceRendering
NvAPI_D3D_EndResourceRendering
NvAPI_D3D11_SetDepthBoundsTest
NvAPI_D3D11_BeginUAVOverlap
NvAPI_D3D11_EndUAVOverlap
NvAPI_D3D_SetResourceHint
NvAPI_D3D11_SetNvShaderExtnSlot
Just putting this here, so it does not get drowned on Discord.
Hi,
Can you make a new build, 0.5 for example? Your last release is old and some changes have been made since.
Best regards
We should also check that all implemented methods are part of the interface table from the headers. This would also catch the missing header bits in #31.
From the 520 NvAPI headers:
// FUNCTION NAME: NvAPI_GPU_GetAdapterIdFromPhysicalGpu
//! \deprecated Do not use this function - it is deprecated in release 520. Instead, use NvAPI_GPU_GetLogicalGpuInfo.
NvAPI_GPU_GetAdapterIdFromPhysicalGpu provides the adapter LUID, which is needed among other things for DLSS. I assume at some point newer DLSS versions will use NvAPI_GPU_GetLogicalGpuInfo. We should be prepared for that.
The specification says:
//! \ingroup gpu
typedef struct _NV_LOGICAL_GPU_DATA_V1
{
NvU32 version; //!< [in] Structure version.
void *pOSAdapterId; //!< [out] Returns OS-AdapterId. User must send memory buffer of size atleast equal to the size of LUID structure before calling the NVAPI.
NvU32 physicalGpuCount; //!< [out] Number of physical GPU handles associated with the specified logical GPU handle.
NvPhysicalGpuHandle physicalGpuHandles[NVAPI_MAX_PHYSICAL_GPUS]; //!< [out] This array will be filled with physical GPU handles associated with the given logical GPU handle.
//!< The array index refers to the Physical Gpu Index (Idx).
//!< Idx value is the same as D3D11 MultiGPUDevice GPU index, D3D12 node index, OpenGL GL_NV_gpu_multicast GPU index.
//!< When converted to a bit mask (1 << Idx), it matches:
//!< 1. Vulkan deviceNodeMask in VkPhysicalDeviceIDProperties
//!< 2. CUDA deviceNodeMask returned by cuDeviceGetLuid
NvU32 reserved[8]; //!< Reserved for future use. Should be set to ZERO.
} NV_LOGICAL_GPU_DATA_V1;
//! \ingroup gpu
typedef NV_LOGICAL_GPU_DATA_V1 NV_LOGICAL_GPU_DATA;
#define NV_LOGICAL_GPU_DATA_VER1 MAKE_NVAPI_VERSION(NV_LOGICAL_GPU_DATA_V1,1)
#define NV_LOGICAL_GPU_DATA_VER NV_LOGICAL_GPU_DATA_VER1
///////////////////////////////////////////////////////////////////////////////
//
// FUNCTION NAME: NvAPI_GPU_GetLogicalGpuInfo
//
//! This function is used to query Logical GPU information.
//!
//! SUPPORTED OS: Windows 7 and higher
//!
//!
//! \since Release: 421
//!
//! \param [in] hLogicalGpu logical GPU Handle.
//! \param [in,out] pLogicalGpuData Pointer to NV_LOGICAL_GPU_DATA structure.
//!
//! \return This API can return any of the error codes enumerated in #NvAPI_Status. If there are return error codes with
//! specific meaning for this API, they are listed below.
//!
//! \ingroup gpu
///////////////////////////////////////////////////////////////////////////////
NVAPI_INTERFACE NvAPI_GPU_GetLogicalGpuInfo(__in NvLogicalGpuHandle hLogicalGpu, __inout NV_LOGICAL_GPU_DATA *pLogicalGpuData);
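The version constants above follow the usual NVAPI convention: MAKE_NVAPI_VERSION(typeName, ver) packs sizeof(typeName) into the low 16 bits and the struct revision into the high 16 bits, which is what lets an implementation validate the caller's struct before filling it. Below is a Python/ctypes sketch mirroring the layout quoted above; a 64-bit layout and NVAPI_MAX_PHYSICAL_GPUS = 64 are assumed from the public headers.

```python
import ctypes

NVAPI_MAX_PHYSICAL_GPUS = 64  # from the public NVAPI headers

class NV_LOGICAL_GPU_DATA_V1(ctypes.Structure):
    """ctypes mirror of the NV_LOGICAL_GPU_DATA_V1 struct quoted above."""
    _fields_ = [
        ("version", ctypes.c_uint32),
        ("pOSAdapterId", ctypes.c_void_p),   # [out] LUID-sized buffer
        ("physicalGpuCount", ctypes.c_uint32),
        ("physicalGpuHandles", ctypes.c_void_p * NVAPI_MAX_PHYSICAL_GPUS),
        ("reserved", ctypes.c_uint32 * 8),
    ]

def make_nvapi_version(struct_type, ver):
    # MAKE_NVAPI_VERSION(typeName, ver):
    # sizeof(typeName) in the low word, struct revision in the high word.
    return ctypes.sizeof(struct_type) | (ver << 16)

NV_LOGICAL_GPU_DATA_VER1 = make_nvapi_version(NV_LOGICAL_GPU_DATA_V1, 1)

data = NV_LOGICAL_GPU_DATA_V1()
data.version = NV_LOGICAL_GPU_DATA_VER1
# An implementation can now reject callers with a mismatched struct:
assert data.version & 0xFFFF == ctypes.sizeof(NV_LOGICAL_GPU_DATA_V1)
assert data.version >> 16 == 1
```

This is why versioned-struct APIs like NvAPI_GPU_GetLogicalGpuInfo can evolve: a V2 struct would get a new size and revision, and the implementation can dispatch on the packed version field.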
(Sorry to ask in an issue, I didn't see any other contact.)
The newest version of DXVK-NVAPI now includes "Forwarding to latencyflex"; does this mean we only need the LatencyFleX Vulkan layer, and none of the Wine stuff? (Sorry to ask, and thanks for making this!)
Special K is like the Swiss army knife for modding things like game input or frame pacing. I'm interested since it has support for adding Reflex to basically every game, but it seems to do some very awkward checks against the NvAPI implementation. The Nvidia integration does not activate without passing these checks.
It checks for the existence of the following functions, which according to them are "undocumented and only known to those who signed NDA" (why they are making fun of the NDA is another question):
NvAPI_GPU_GetRamType
NvAPI_GPU_GetFBWidthAndLocation
NvAPI_GPU_GetPCIEInfo
NvAPI_GetPhysicalGPUFromGPUID
NvAPI_GetGPUIDFromPhysicalGPU
See the full code here.
Dealing with NDA-related things sounds like a good way to get into legal trouble and get blackmailed by NV, so I wonder if it is within dxvk-nvapi's scope to implement these?
Some benchmarks also measure GPU utilization and temperature to provide fancy statistics. Unfortunately some (AC Valhalla) do not cope that well with the fact that DXVK-NVAPI does not implement those, and show bogus values. This seems to have no effect on actual gameplay.
We might stub those methods and return 0. That said, I'm unsure if this is a good idea, since 'zero' is something very different from 'I don't know'.
An alternative might be to look into getting those values from NVML (https://developer.nvidia.com/nvidia-management-library-nvml), which is present on Linux. That said, DXVK-NVAPI is a PE library; using a native library is, I guess, not that trivial without introducing a Wine DLL for NVML.
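For what it's worth, the NVML side of that idea can be sketched from a plain Linux process with ctypes; the hard part mentioned above — bridging from a PE build to the native library — is exactly what this sketch glosses over. The function names are the documented NVML C API; availability of libnvidia-ml.so.1 is an assumption, and the sketch degrades to None without it.

```python
import ctypes

NVML_TEMPERATURE_GPU = 0  # sensor type constant from nvml.h

def gpu_temperature(index=0):
    """Return GPU temperature in degrees C via libnvidia-ml, or None if unavailable."""
    try:
        nvml = ctypes.CDLL("libnvidia-ml.so.1")
    except OSError:
        return None          # no NVIDIA driver / NVML on this machine
    if nvml.nvmlInit_v2() != 0:  # 0 == NVML_SUCCESS
        return None
    try:
        device = ctypes.c_void_p()
        if nvml.nvmlDeviceGetHandleByIndex_v2(index, ctypes.byref(device)) != 0:
            return None
        temp = ctypes.c_uint()
        if nvml.nvmlDeviceGetTemperature(device, NVML_TEMPERATURE_GPU,
                                         ctypes.byref(temp)) != 0:
            return None
        return temp.value
    finally:
        nvml.nvmlShutdown()

print(gpu_temperature())
```

A value retrieved this way would at least distinguish 'real zero' from 'unavailable' (None), which is the distinction the stub-and-return-0 approach loses.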
Hi, I figured this issue would be applicable here.
Ever since Horizon Zero Dawn has added DLSS, FSR, etc. there was a rendering bug in respect to vkd3d-proton.
This is now fixed, but it appears that enabling dxvk-nvapi in order to enable DLSS causes the game to crash on launch. I have also set dxgi.nvapiHack = False
with dxvk (not sure if this applies here or not).
Here is a log file after running the game with PROTON_LOG=1 PROTON_ENABLE_NVAPI=1 PROTON_HIDE_NVIDIA_GPU=0 gamemoderun %command%
:
steam-1151640.log
Snooping around my Proton Experimental files, I see a build artefact that lists the git SHA of the version of dxvk-nvapi being used:
acbcf35e327f2d189e1a9322bc1359a22d36cf6a dxvk-nvapi (v0.4-37-gacbcf35)
Was this potentially fixed in dxvk-nvapi 0.5?
Let me know if I can provide anything else or do some testing (might need some instructions as to where to stick any related DLLs.)
Enabling DLSS in Baldur's Gate 3 causes the game to completely freeze up.
Game was launched with PROTON_ENABLE_NVAPI=1 %command%
Arch Linux
Kernel 6.2.11
Nvidia 3090 w/ 530.41.03 drivers
Proton Experimental Bleeding Edge
Ryzen 3900x
I've been trying to work out whether the existing DLSS support is supposed to be enough to enable Frame Generation; there hasn't been any unambiguous statement from NVIDIA. I've been trying out Portal RTX on a 4090 and the Frame Generation option is not exposed, although the rest of the DLSS functionality works just fine.
Is there additional work NVIDIA needs to do? Is this related to the fact that dxvk-nvapi reports Ampere instead of Ada, as noted in the 0.6 release notes? Although that note says it only applies to DLSS 2.0 through 2.4, and Portal RTX should be using 3.0. Or is it something else?
Thanks!
Hi,
Interesting library. Could we get GitHub Actions CI support like DXVK and VKD3D have, so we don't have to wait for (point) releases for prebuilt binaries?
thanks..
Hi guys, I currently use DXVK on Windows, with the same files that are used on Linux, for many games, and it helps a lot with newer games like World War 3 (WW3). But in some cases I would like to get more performance.
Could I use this library with some games on Windows 10? What should I do to try it?
I'll take the opportunity to ask a question outside this thread, about the native Windows build of DXVK: it does not work well for me in some games, so I prefer the Linux libraries.
https://github.com/Joshua-Ashton/dxvk-native/releases/ (it works badly on my Windows)
What should I use? Can I combine it, or the original DXVK, with your NVAPI libraries?
Please help me; DXVK is great for me and lets me play at 60 fps or more with high or ultra settings in many games on my low-spec GTX 1050 Ti and i7 4790K.
Thank you very much guys!
This is an issue in either DXVK or DXVK-NVAPI. Most likely NVAPI, so I'll only report the issue here. Let me know if it should be reported to DXVK instead.
It seems the API in question is NvAPI_SYS_GetDriverAndBranchVersion.
We respond with 337.88, which is a driver version from mid-2014. Some modern games, such as Battlefield 5 and Star Wars Battlefront 2, say "hell naw, that's too old" and then refuse to run. This error box is from Battlefield 5:
Proposed solutions/ideas from best to worst:
References for some previous mentions of the "driver 337.88" issue:
As mentioned in #15, NvAPI should be unloaded only after Unload has been called once for each preceding call to Initialize; otherwise, call chains like Initialize, Initialize, Unload, $SomeFunction will lead to an error (while they work fine on Windows). The code sample below can be used to reproduce the issue; the trigger is the inner scoped block containing the second Initialize/Unload pair.
#include <windows.h>
#include <libloaderapi.h>

#include <array>
#include <iostream>
#include <memory>
#include <stdexcept>
#include <string>
#include <type_traits>

#include "nvapi.h"
#include "nvapi_interface.h"

using namespace std::literals;

static constexpr int nvapi_interface_table_size = sizeof(nvapi_interface_table) / sizeof(*nvapi_interface_table);

int main(void)
{
    std::unique_ptr<std::remove_pointer_t<HMODULE>, decltype(&FreeLibrary)> nvapi = {LoadLibraryA("nvapi64.dll"), FreeLibrary};
    if (!nvapi) throw std::runtime_error{"Failed to load NvAPI."s};
    std::cout << "NvAPI loaded." << std::endl;

    auto QueryInterface = reinterpret_cast<void *(*)(NvU32 id)>(GetProcAddress(nvapi.get(), "nvapi_QueryInterface"));
    if (!QueryInterface) throw std::runtime_error{"Failed to find nvapi_QueryInterface."s};

    auto QueryInterfaceByName = [&QueryInterface](const std::string& funcName) -> void*
    {
        void* result = nullptr;
        for (int i = 0; i < nvapi_interface_table_size; ++i)
        {
            auto& entry = nvapi_interface_table[i];
            if (entry.func == funcName)
            {
                result = QueryInterface(entry.id);
                break;
            }
        }
        if (!result) throw std::runtime_error{"Failed to find function: "s + funcName};
        return result;
    };

#define QueryInterfaceTyped(x) reinterpret_cast<decltype(&x)>(QueryInterfaceByName(#x))

    auto Initialize = QueryInterfaceTyped(NvAPI_Initialize);
    auto Unload = QueryInterfaceTyped(NvAPI_Unload);
    auto GetErrorMessage = QueryInterfaceTyped(NvAPI_GetErrorMessage);
    auto EnumPhysicalGPUs = QueryInterfaceTyped(NvAPI_EnumPhysicalGPUs);
    auto GPU_GetFullName = QueryInterfaceTyped(NvAPI_GPU_GetFullName);
    auto GPU_GetThermalSettings = QueryInterfaceTyped(NvAPI_GPU_GetThermalSettings);
    auto GPU_GetDynamicPstatesInfoEx = QueryInterfaceTyped(NvAPI_GPU_GetDynamicPstatesInfoEx);

    NvAPI_ShortString errormsg = {0};

#define NvAPI_Call(f, ...) \
    if (auto res = f(__VA_ARGS__); res != NVAPI_OK) \
    { \
        if (auto gem = GetErrorMessage(res, errormsg); gem != NVAPI_OK) \
            ("GetErrorMessage failed with status "s + std::to_string(gem)).copy(errormsg, NVAPI_SHORT_STRING_MAX - 1); \
        throw std::runtime_error{#f + " failed: "s + errormsg}; \
    }

    NvAPI_Call(Initialize);
    std::cout << "NvAPI initialized." << std::endl;

    {
        NvAPI_Call(Initialize);
        std::cout << "NvAPI initialized (2nd time)." << std::endl;
        NvAPI_Call(Unload);
        std::cout << "NvAPI unloaded (1 unload left)." << std::endl;
    }

    std::array<NvPhysicalGpuHandle, NVAPI_MAX_PHYSICAL_GPUS> nvGPUHandles;
    NvU32 gpuCount;
    NvAPI_Call(EnumPhysicalGPUs, nvGPUHandles.data(), &gpuCount);
    std::cout << "Found " << gpuCount << " physical GPUs." << std::endl;

    for (NvU32 i = 0; i < gpuCount; ++i)
    {
        NvAPI_ShortString gpuFullName;
        NvAPI_Call(GPU_GetFullName, nvGPUHandles[i], gpuFullName);
        std::cout << "GPU " << i << ": " << gpuFullName << std::endl;

        NV_GPU_THERMAL_SETTINGS_V1 thermals = {};
        thermals.version = NV_GPU_THERMAL_SETTINGS_VER_1;
        NvAPI_Call(GPU_GetThermalSettings, nvGPUHandles[i], NVAPI_THERMAL_TARGET_ALL, reinterpret_cast<NV_GPU_THERMAL_SETTINGS*>(&thermals));
        std::cout << "Sensors: " << thermals.count << std::endl;
        for (NvU32 s = 0; s < thermals.count; ++s)
        {
            std::cout << "  Controller: " << thermals.sensor[s].controller << std::endl;
            std::cout << "  CurrentTemp: " << thermals.sensor[s].currentTemp << std::endl;
            std::cout << "  DefMaxTemp: " << thermals.sensor[s].defaultMaxTemp << std::endl;
            std::cout << "  DefMinTemp: " << thermals.sensor[s].defaultMinTemp << std::endl;
            std::cout << "  Target: " << thermals.sensor[s].target << std::endl;
        }

        NV_GPU_DYNAMIC_PSTATES_INFO_EX pstates = {};
        pstates.version = NV_GPU_DYNAMIC_PSTATES_INFO_EX_VER;
        NvAPI_Call(GPU_GetDynamicPstatesInfoEx, nvGPUHandles[i], &pstates);
        std::cout << "Pstates flags: " << pstates.flags << std::endl;
        for (NvU32 d = 0; d < NVAPI_MAX_GPU_UTILIZATIONS; ++d)
        {
            if (pstates.utilization[d].bIsPresent)
                std::cout << "Domain " << d << " is present, utilization: " << pstates.utilization[d].percentage << "%" << std::endl;
            else
                std::cout << "Domain " << d << " not present" << std::endl;
        }
    }

    NvAPI_Call(Unload);
    std::cout << "NvAPI unloaded." << std::endl;
}
NvAPI_GPU_GetAdapterIdFromPhysicalGpu returns an error when the NVIDIA card is not set as primary in the BIOS setup.
Hello!
First of all, many thanks for your work!!! God bless you!
I have a laptop with an AMD integrated and an NVIDIA discrete GPU.
Playing with the DLSS feature in Cyberpunk 2077, I found an issue.
When my NVIDIA card is set up as primary in the BIOS, it works fine: Cyberpunk detects that DLSS is available.
But when I make my integrated AMD card primary in the BIOS, it causes an NvAPI error and DLSS is not available in Cyberpunk.
For running Lutris I use the command:
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia __VK_LAYER_NV_optimus=NVIDIA_only DXVK_FILTER_DEVICE_NAME="NVIDIA GeForce RTX 3060 Laptop GPU" lutris -d
opensuse tumbleweed , kernel 5.16.1-1
lutris 0.5.9.1
dxvk-nvapi v0.5.2
wine 7.0 staging or lutris-ge-7.0
nvidia rtx 3060
Logs are below.
cp_log_nvidia_gpu_is_not_set_prime.txt
cp_log_nv_set_prime.txt
Things for the Readme before the next release:
- DXVK_NVAPI_LOG_LEVEL=none env setting
- DXVK_NVAPI_DRIVER_VERSION env variable
- DXVK_LOG_LEVEL=none DXVK_NVAPI_LOG_LEVEL=none WINEDEBUG=-all WINEDLLOVERRIDES=nvapi64=n wine nvapi64-tests.exe, eventually link to nvtt
I wanted to ask the gurus here about this step:
Proton 6.3-6 and newer includes DXVK-NVAPI but it is disabled by default.
Use PROTON_ENABLE_NVAPI=1 as game launch argument in Steam to enable DXVK-NVAPI. Use additionally PROTON_HIDE_NVIDIA_GPU=0 for certain titles that don't rely on D3D for obtaining the current GPU type and have this Proton setting applied.
Copy and replace nvapi.dll/nvapi64.dll in the dist/lib/wine/nvapi or dist/lib64/wine/nvapi folder of your Proton installation, e.g. in ~/.steam/steam/steamapps/common/Proton 7.0/, if you want to manually update the included version.
Is this still needed for Proton Experimental? I am using Bleeding Edge. And if so, is there a game I can test to see the difference between having it enabled and not?
I make videos at https://www.youtube.com/@xtremelinux, and I want to make sure that if I talk about this, I don't make any mistakes explaining the benefits and what it is used for. Thank you.
It's time to also build full releases on GitHub and not just snapshots from master.
Is it possible to implement DLSS2 via a DLSS-compatible FSR2 implementation such as https://github.com/PotatoOfDoom/CyberFSR2, similar to how LatencyFlex is a Reflex-compatible implementation?
That would be a great boon for non-Nvidia users.