
nvidia-settings's People

Contributors

aaronp24, alexgoinsnv, anbe42, aritger, bquest-nvidia, dadap, gravemind, jelly, jyavenard, liam-middlebrook, nkrishnia, pdelagarza-nvidia, rdl-28, rmorell, tpgxyz, tractix

nvidia-settings's Issues

missing files in 375.10 nvidia-settings sources

   CC           libXNVCtrlAttributes/NvCtrlAttributesNvml.c
libXNVCtrlAttributes/NvCtrlAttributesNvml.c:36:18: fatal error: nvml.h: No such file or directory
 #include "nvml.h"
                  ^
compilation terminated.

UI/Images

I feel it is time the UI and its images received an update. For example, the monitor images in the 'Display Configuration' tab are extremely outdated, as is the NVIDIA branding. The usability of the UI could also be improved.

Could not open nvidia-settings

I have been trying to install the NVIDIA drivers on my Ubuntu 16.04 system for a long time, but have not managed to complete the process. I've searched much of the web without finding a definitive description of the correct procedure, so raising it here is my last resort.
I installed the driver through the .run file from the official driver page, but after the installation completes I am unable to run nvidia-settings; it shows me the error:

You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run nvidia-xconfig as root), and restart the X server

I've tried the process mentioned in the error, but nothing changed. The console shows:

nvidia-settings could not find the registry key file. This file should
   have been installed along with this driver at
   /usr/share/nvidia/nvidia-application-profiles-key-documentation. The
   application profiles will continue to work, but values cannot be
   prepopulated or validated, and will not be listed in the help text.
   Please see the README for possible values and descriptions.

I could list all the procedures I've tried from the web over the last 4-5 days, but none of them made a difference. How can I get nvidia-settings to work? I've also tried reading the code that produces this error to see if it would point to a fix, but of course that didn't help either.

Is it because Optimus is still not fully supported by NVIDIA on Linux and I need third-party software (Bumblebee) to resolve this? Would running prime-select nvidia help in any way?
Also, I don't want to install the driver through the package manager, because I will need to install CUDA 8 later through a .run file and want to keep things consistent.

So how can I finally make nvidia-settings work on my system after installing through the .run file? (People seem to be able to run it after installing the nvidia-367 package; is there anything I'm missing?)

Here's my system specification:

00:02.0 VGA compatible controller [0300]: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller [8086:0416] (rev 06) (prog-if 00 [VGA controller])
Subsystem: Lenovo 4th Gen Core Processor Integrated Graphics Controller [17aa:380a]
Flags: bus master, fast devsel, latency 0, IRQ 31
Memory at b5000000 (64-bit, non-prefetchable) [size=4M]
Memory at c0000000 (64-bit, prefetchable) [size=256M]
I/O ports at 6000 [size=64]
Expansion ROM at <unassigned> [disabled]
Capabilities: <access denied>
Kernel driver in use: i915
Kernel modules: i915


07:00.0 3D controller [0302]: NVIDIA Corporation GK208M [GeForce GT 740M] [10de:1292] (rev a1)
Subsystem: Lenovo GK208M [GeForce GT 740M] [17aa:380a]
Flags: bus master, fast devsel, latency 0, IRQ 17
Memory at b3000000 (32-bit, non-prefetchable) [size=16M]
Memory at a0000000 (64-bit, prefetchable) [size=256M]
Memory at b0000000 (64-bit, prefetchable) [size=32M]
I/O ports at 4000 [size=128]
Capabilities: <access denied>
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.1 LTS
Release:    16.04
Codename:   xenial

uname -r
4.4.0-34-generic

I can provide more information (xorg.conf, etc.) if needed. This will likely help others resolve the same problem as well (or let me know if there is a better place to raise this, perhaps a mailing list).

glxinfo.h:23:10: fatal error: GL/glx.h: No such file or directory

Gentoo 64-bit, driver 396.24

make[1]: Entering directory '/tmp/nvidia-settings/src'
Package gtk+-2.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gtk+-2.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gtk+-2.0' found
Package gtk+-2.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gtk+-2.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gtk+-2.0' found
   CC           command-line.c
In file included from command-line.c:32:0:
glxinfo.h:23:10: fatal error: GL/glx.h: No such file or directory
 #include <GL/glx.h>
          ^~~~~~~~~~
compilation terminated.
make[1]: *** [Makefile:313: _out/Linux_x86_64/command-line.o] Error 1
make[1]: Leaving directory '/tmp/nvidia-settings/src'
make: *** [Makefile:23: all] Error 2

And unfortunately, the ordinary nvidia-settings bundled with nvidia-drivers pulls in a lot of messy dependencies...

Support the XDG Base Directory Specification

https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html

Settings are currently stored at $HOME/.nvidia-settings-rc. Following the spec allows for a more organized home directory, which makes backups easier. The new default location would be $XDG_CONFIG_HOME if set, falling back to $HOME/.config. Something like $XDG_CONFIG_HOME/nvidia/settings would work.

This isn't strictly about nvidia-settings, but it would also be nice if the NVIDIA drivers followed the spec. They currently use $HOME/.nv/GLCache for data that could probably be moved to $XDG_CACHE_HOME.
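
For illustration, a minimal sketch (in C, since that is what nvidia-settings is written in) of the fallback logic the spec describes; the resolve_config_path() helper and the nvidia/settings sub-path are assumptions for this example, not existing code:

/* Minimal sketch of the XDG fallback described above.  resolve_config_path()
 * and the "nvidia/settings" sub-path are illustrative assumptions, not
 * existing nvidia-settings code. */
#include <stdio.h>
#include <stdlib.h>

static void resolve_config_path(char *buf, size_t len)
{
    const char *xdg  = getenv("XDG_CONFIG_HOME");
    const char *home = getenv("HOME");

    if (xdg && *xdg) {
        /* $XDG_CONFIG_HOME/nvidia/settings */
        snprintf(buf, len, "%s/nvidia/settings", xdg);
    } else {
        /* Spec default when XDG_CONFIG_HOME is unset: $HOME/.config */
        snprintf(buf, len, "%s/.config/nvidia/settings", home ? home : "");
    }
}

int main(void)
{
    char path[4096];
    resolve_config_path(path, sizeof(path));
    printf("settings would be read from: %s\n", path);
    return 0;
}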

Underscan disables panning with "apply" button, but not with xorg.conf

Hello,

My system:

  • Fresh install of Linux Mint 18.3 on a Mac Mini 2010 (works great).
  • Video card is an NVIDIA GeForce 320M, outputting to a Panasonic plasma TV (nominally 1080p).
  • NVIDIA driver v340.104.

My cheap TV requires ~50 px underscan to not clip my desktop around the edges. If I open the nvidia-settings gui, I can easily set this up. I can then save the configuration to /etc/X11/xorg.conf, and the underscanned viewport will persist on restart, as per what's written (or what I change) in xorg.conf.

Importantly, when I click the "apply" button in the nvidia-settings gui, the mouse is constrained to the reduced viewport, as desired, and the screen does not pan when I ram the mouse into any of the screen edges. However, after a reboot or log in, the viewport remains correct, but now moving the mouse to the bottom or left of the screen pans it by the extra ~100 px buffer.

Does some kind of magic happen when I press the "Apply" button that is not captured in the xorg.conf output? I have tried setting ConstrainCursor manually in a few sections, but it has no effect.

Here is my xorg.conf file for reference:

# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings:  version 361.42  (buildd@lgw01-18)  Tue Apr  5 14:33:28 UTC 2016

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
    Option         "Xinerama" "0"
EndSection

Section "Files"
EndSection

Section "Module"
    Load           "dbe"
    Load           "extmod"
    Load           "type1"
    Load           "freetype"
    Load           "glx"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Panasonic-TV"
    HorizSync       15.0 - 68.0
    VertRefresh     23.0 - 61.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce 320M"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "Stereo" "0"
    Option         "nvidiaXineramaInfoOrder" "DFP-1"
    Option         "metamodes" "nvidia-auto-select +0+0 {viewportout=1822x1024+49+27}"
    Option         "SLI" "Off"
    Option         "MultiGPU" "Off"
    Option         "BaseMosaic" "off"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

How to configure my input stream parameters?

(screenshot of the NVIDIA X Server Settings window)

Hi.
I have connected my video-in cable to the capture card.
In the "NVIDIA X Server Settings" window, the Video Format is 1920x1080i, but the component sampling and color space show UNKNOWN. How can I get the color space parameters of the input data?
With the default (bufferInternalFormat, bitPerComponent, Sampling), the sample program always crashes with the error "Could not bind video input device".
Thanks.

Unable to control RGB LEDs

Hi, I'm the developer of GWE and I'm looking for a way to control the RGB lighting of Nvidia cards under Linux.

Currently nvidia-settings only allows controlling the brightness, via the GPULogoBrightness property, and this option is not available on many newer cards. For example:

  • Gainward GeForce RTX 2080 Ti Phoenix GS
  • Gigabyte G1 Gaming GTX 1080
  • EVGA 1080 FTW
  • ASUS ROG STRIX GTX 1080 Ti

Are there plans to add Linux support for the RGB lighting of these cards?

Add ability to build a shared library

There was once an effort in #1, but it is still not possible to build a shared library. Could you add the ability to build one? If not, would you accept a PR that implements it?
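
For context, this is the kind of standalone consumer that a shared libXNVCtrl would simplify; a minimal sketch, assuming the NVCtrl.h/NVCtrlLib.h headers from this repo and linking against the static libXNVCtrl built here (include paths depend on your setup):

/* Minimal standalone libXNVCtrl consumer: query the NV-CONTROL version.
 * Sketch only; today this links against the static library built in
 * this repo rather than a shared one. */
#include <stdio.h>
#include <X11/Xlib.h>
#include "NVCtrl.h"
#include "NVCtrlLib.h"

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int event_base, error_base, major, minor;

    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    if (!XNVCTRLQueryExtension(dpy, &event_base, &error_base) ||
        !XNVCTRLQueryVersion(dpy, &major, &minor)) {
        fprintf(stderr, "NV-CONTROL extension not available\n");
        XCloseDisplay(dpy);
        return 1;
    }
    printf("NV-CONTROL version %d.%d\n", major, minor);
    XCloseDisplay(dpy);
    return 0;
}

A shared library would let tools like this link with a plain -lXNVCtrl instead of vendoring or statically embedding the code.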

git tags out of sync (375.20)

It looks like the git tags and commits are not in sync with the latest tarball at:

ftp://download.nvidia.com/XFree86/nvidia-settings/

cheers

Cannot overclock GTX 960

Here's some system info:

  • Distro: Arch
  • NVIDIA packages:
    • lib32-nvidia-utils 415.25-1
    • nvidia 415.25-6
    • nvidia-lts 1:415.25-5
    • nvidia-settings 415.25-1
    • nvidia-utils 415.25-1
    • opencl-nvidia 415.25-1
  • Coolbits: 31 (tried also 8, 12, 28)
  • Kernel: 4.20.1-arch1-1-ARCH
  • Desktop: GNOME
  • xorg.conf:
Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    VendorName "NVIDIA Corporation"
    BoardName  "GeForce GTX 960"
    # Note: I tried to put the coolbits option here as well, as opposed to the Screen section
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    Option         "AllowIndirectGLXProtocol" "off"
    Option         "TripleBuffer" "on"
    Option         "metamodes" "DP-2: 2560x1440_60 +0+0 {ForceCompositionPipeline=On, ForceFullCompositionPipeline=On}, HDMI-0: 1920x1080_60 +2560+360 {ForceCompositionPipeline=On, ForceFullCompositionPipeline=On}"
    Option         "Coolbits" "31"
    SubSection     "Display"
        Depth       24
        Modes      "nvidia-auto-select"
    EndSubSection
EndSection

Here's what happens when I try to overclock:

$ nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=10"


ERROR: Error assigning value 10 to attribute 'GPUGraphicsClockOffset' (trinity-zero:1[gpu:0]) as specified in assignment
       '[gpu:0]/GPUGraphicsClockOffset[3]=10' (Unknown Error).

And here is the output of querying the same value:

$ nvidia-settings -q "[gpu:0]/GPUGraphicsClockOffset[3]"   

  Attribute 'GPUGraphicsClockOffset' (trinity-zero:1[gpu:0]): 0.
    The valid values for 'GPUGraphicsClockOffset' are in the range -90 - 1000 (inclusive).
    'GPUGraphicsClockOffset' can use the following target types: X Screen, GPU.

I also tried to change the offset from the GUI, but it has no effect.

375.66 sources missing

Any chance to have all the 375.66 sources in the various repositories?

  • nvidia-settings
  • nvidia-xconfig
  • nvidia-persistenced
  • nvidia-modprobe
  • nvidia-installer

Thanks,
--Simone

SDI capture card data transfer to CUDA sample code.

Hi,
I have read the entire Quadro SDI Capture Programmer's Guide and all the sample code in this repository.
Is there a complete sample that demonstrates transferring SDI frame data to CUDA and processing the captured image?

How to initialize Pixmap using blending datafile for WarpBlend API?

Hi,

I have been trying to use the Warp & Blend API by customizing the nv-control-warpblend.c sample code. I am reading the warping and blending data for each display from separate data files. Warping works fine (except when the same warp mesh is allocated to multiple displays), but blending does not work as expected. The following is my code, with the relevant line indicating how I call the API for blending:
https://gist.github.com/pranavkantgaur/70fa882e58311ed8eddc9c7fec78c0d3#file-nvcontrol-warp-blend-with-file-configuration-c-L264

What I see in my output is a texture that is displaced with respect to the neighboring displays. How can I verify whether I have initialized the pixmap correctly?

nvidia-settings compiled with NVML_EXPERIMENTAL crashes when coolbits=8 is used

nvidia-settings: libXNVCtrlAttributes/NvCtrlAttributesNvml.c:1709: NvCtrlNvmlGetValidAttributeValues: Assertion 'ret2 == NvCtrlAttributeNotAvailable' failed.

I do not know how experimental the NVML support is, but it seems that some of the attributes are not handled. nvidia-settings -verbose gives:

WARNING: Unhandled integer attribute GPUGraphicsClockOffsetAllPerformanceLevels (424) of GPU (0)
WARNING: Unhandled integer attribute GPUGraphicsClockOffset (409) of GPU (0)
WARNING: Unhandled integer attribute GPUMemoryTransferRateOffsetAllPerformanceLevels (425) of GPU (0)
WARNING: Unhandled integer attribute GPUMemoryTransferRateOffset (410) of GPU (0)

so I believe those are the attributes that need handling. They probably also correspond to the PowerMizer options shown when coolbits=8 is used.

Power management limit

Hello,

I checked all the options in nvidia-settings but didn't see anything like nvidia-smi -pl. Could you explain how to change the power limit with nvidia-settings, if it is possible?

Cannot overclock from terminal or libXNVCtrl when running Xorg rootless

This is the output of nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=125" in rootless Xorg:

ERROR: Error assigning value 0 to attribute 'GPUGraphicsClockOffset' (sporif-pc:0[gpu:0]) as specified in assignment '[gpu:0]/GPUGraphicsClockOffset[3]=125'
       (Unknown Error).

Anything using libXNVCtrl to overclock also fails, e.g. gwe, tuxclocker.

But the strange thing is overclocking through the nvidia-settings gui still works.

Rootless Xorg can be achieved on Nvidia by setting either needs_root_rights = yes in
/etc/X11/Xwrapper.config or modeset=1 on the nvidia_drm module.

Changes for 396.45 have gtk2 bugs

(screenshot of the bug)

Click "Graphics Information". All the new tabs are without a vertical slider and you only see what fits on the tabs:

(nvidia-settings:21224): Gtk-WARNING **: gtk_scrolled_window_add(): cannot add non scrollable widget use gtk_scrolled_window_add_with_viewport() instead

(nvidia-settings:21224): Gtk-WARNING **: gtk_scrolled_window_add(): cannot add non scrollable widget use gtk_scrolled_window_add_with_viewport() instead

(nvidia-settings:21224): Gtk-WARNING **: gtk_scrolled_window_add(): cannot add non scrollable widget use gtk_scrolled_window_add_with_viewport() instead

(nvidia-settings:21224): Gtk-WARNING **: gtk_scrolled_window_add(): cannot add non scrollable widget use gtk_scrolled_window_add_with_viewport() instead

(nvidia-settings:21224): Gtk-WARNING **: gtk_scrolled_window_add(): cannot add non scrollable widget use gtk_scrolled_window_add_with_viewport() instead

Please update the Icon section in the .desktop file

Please do not use the absolute path for the nvidia-settings icon. Just use this instead:

Icon=nvidia-settings

That way, people like me who use a custom icon theme will get the correct one. More information here:

https://standards.freedesktop.org/icon-theme-spec/icon-theme-spec-latest.html#install_icons
https://standards.freedesktop.org/icon-theme-spec/icon-theme-spec-latest.html#icon_lookup

Thank you.

nvidia-settings issue while selecting refresh rate on a rotated display.

I came across this bug on my system while adding a secondary monitor in portrait mode. It appears to be specific to how the nvidia-settings GUI interprets refresh rate settings: both of my monitors have a native refresh rate of 165 Hz, and it was a bit of a pain to figure out how to set my rotated monitor to its native refresh rate until I worked out how these inconsistencies behave.

nvidia-settings appears to default the resolution to "Auto" when setting the rotation via X Server Display Configuration. You can then select the native resolution and refresh rate, but the representation of the display in the X Screen viewport switches to landscape mode. However, once you select a different refresh rate, the orientation switches back to portrait mode. Here are some example screenshots:

STEP 1
Open nvidia-settings and rotate the display you wish to rotate; the resolution defaults to "Auto" and the refresh rate dropdown menu is greyed out, as expected:

(screenshot)

STEP 2
Select the native resolution in order to set the desired refresh rate, but note that the first rate you select causes the monitor's representation in the overall X Screen layout to revert to landscape mode:

(screenshot: display representation incorrectly shown in landscape)

STEP 3
Now select the actual refresh rate you want to use; this corrects the discrepancy:

(screenshot: display representation correctly shown in portrait)

SUMMARY
What I ended up having to do was rotate my display, set the native resolution, and then select an undesired refresh rate before selecting my desired one for it to work properly.

I believe the expected behavior is to be able to choose the resolution, orientation, and refresh rate correctly the first time.

Let me know if there are any additional details you'd like me to provide.

Release 384.111 missing

Hello, the code for 384.111 is missing, both here and in all the other repositories (nvidia-xconfig, etc.).

Thanks.

Qt5 port

I'm trying to build a GTK-free desktop. One of the apps that still needs porting to Qt is nvidia-settings. What about porting it to Qt5? I would even be ready to do the port myself; would it be accepted into NVIDIA's GitHub repo?

Missing colour space (AKA format) options compared to Windows, and no button to save settings

Hi! After a month of diagnostics and back-and-forth with my motherboard manufacturer, I've finally managed to get my GeForce 1080 Ti to output the full resolution and refresh rate to my secondary monitor (an LG TV) via HDMI, in Windows. Yay! The solution was basically to try every possible combination of resolution settings until 3840x2160 @ 60 Hz worked:

(screenshot: Windows output format options)

Firstly, this was weird in that initially I didn't have the option for YCbCr420, only RGB, YCbCr444, and YCbCr422 (IIRC). Only once I changed from 30 to 60 Hz did the 420 option appear. Again, IIRC, but essentially not all options were available initially, even though the program can't know whether any of them will work until the user tries them, so why would any of them not be present?

Secondly, the main issue: this isn't possible in Linux because:

(screenshot: Linux output format options)

  1. You can’t change the refresh rate here, you have to do it in a different place (which is conceptually fine but a bit annoying)

  2. Only RGB & 444 are available, no 422 nor 420 (which is proven to work with my setup in windows)

  3. Even if 420 WAS present, there’s no apply/save button on this screen, it’s missing, potentially by complete accident.

Because of this I can’t use my 2nd monitor to the full capabilities of the monitor/GPU/HDMI cable/OS.

Hopefully useful info here:

(screenshot: version information)

IDK why it says “screens 1” while currently outputting to two screens…

Thanks for any advice. I presume that having the latest (mainline/Ubuntu PPA) driver version means I have the latest version of the settings program? (Having since found out this is a community GitHub project, maybe not...)

Cheers!

NVIDIA settings do not persist after configuration

.nvidia-settings-rc will not load settings at boot

The workaround requires creating an autostart bash script to apply the settings permanently:

#!/bin/bash
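# Wait briefly for the X session, then re-apply the saved attributes on display :0.0 (adjust DISPLAY and the attribute list for your setup).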
(sleep 1 && DISPLAY=":0.0" nvidia-settings -a "OpenGLImageSettings=3" -a "[gpu:0]/GPUPowerMizerMode=1") &

NV-CONTROL: *_THRESHOLD attributes readings always to 127

It seems that using NV-CONTROL to read the threshold temperatures always gives a value of 127.

The attributes affected are:

  • NV_CTRL_GPU_CORE_THRESHOLD
  • NV_CTRL_GPU_DEFAULT_CORE_THRESHOLD
  • NV_CTRL_GPU_MAX_CORE_THRESHOLD

I modified nv-control-targets.c to print those values, together with the core temperature:

GPU 0 information:
   Product Name                    : GeForce RTX 2080 Ti
   Core temp                     > : 33
   Core threshold temp           > : 127
   Core default core threshold   > : 127
   Core max core threshold       > : 127
   GPU UUID                        : GPU-22569caf-c41b-be44-07e1-314d0904febb
   Coolers on GPU                  : COOLER-1, COOLER-0
   Thermal Sensors on GPU          : THERMAL-SENSOR-0
   Connected Display Devices       : DPY-5
   Framelock Devices               : 
   Number of X Screens on GPU 0    : 1

As you can see, the core temperature is read correctly, but all the thresholds are stuck at 127.

The same happens if I use the X Protocol to communicate with NV-CONTROL from Python.
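
For reference, this is roughly how the values are read through libXNVCtrl; a trimmed sketch in the spirit of nv-control-targets.c, not the exact modification I ran (error handling omitted, include paths depend on your setup):

/* Trimmed sketch: query GPU 0's core temperature and the three threshold
 * attributes via NV-CONTROL.  With the affected drivers, the last three
 * values all come back as 127. */
#include <stdio.h>
#include <X11/Xlib.h>
#include "NVCtrl.h"
#include "NVCtrlLib.h"

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int temp = 0, thresh = 0, def_thresh = 0, max_thresh = 0;

    if (!dpy) return 1;

    XNVCTRLQueryTargetAttribute(dpy, NV_CTRL_TARGET_TYPE_GPU, 0 /* gpu:0 */, 0,
                                NV_CTRL_GPU_CORE_TEMPERATURE, &temp);
    XNVCTRLQueryTargetAttribute(dpy, NV_CTRL_TARGET_TYPE_GPU, 0, 0,
                                NV_CTRL_GPU_CORE_THRESHOLD, &thresh);
    XNVCTRLQueryTargetAttribute(dpy, NV_CTRL_TARGET_TYPE_GPU, 0, 0,
                                NV_CTRL_GPU_DEFAULT_CORE_THRESHOLD, &def_thresh);
    XNVCTRLQueryTargetAttribute(dpy, NV_CTRL_TARGET_TYPE_GPU, 0, 0,
                                NV_CTRL_GPU_MAX_CORE_THRESHOLD, &max_thresh);

    printf("core temp %d, threshold %d, default %d, max %d\n",
           temp, thresh, def_thresh, max_thresh);
    XCloseDisplay(dpy);
    return 0;
}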

nvidia-smi shows the correct values:

$ nvidia-smi -q -x | grep "<temperature>" -A7
		<temperature>
			<gpu_temp>33 C</gpu_temp>
			<gpu_temp_max_threshold>94 C</gpu_temp_max_threshold>
			<gpu_temp_slow_threshold>91 C</gpu_temp_slow_threshold>
			<gpu_temp_max_gpu_threshold>89 C</gpu_temp_max_gpu_threshold>
			<memory_temp>N/A</memory_temp>
			<gpu_temp_max_mem_threshold>N/A</gpu_temp_max_mem_threshold>
		</temperature>

I was able to reproduce this issue on all the GPUs I tested:

  • 2 different GeForce RTX 2080 Ti cards (driver 415.25)
  • GeForce GTX 1080 Ti (driver 415.25)
  • GeForce GTX 1060 6GB (driver 396.54)

I am currently developing an alternative to MSI Afterburner for Linux, so I would like to know if and when a fix for this will be provided (and also whether I can get some help from NVIDIA in testing the application):
(screenshot of the application)

nvidia-settings does not save color settings

"Color Correction" tab shows "Warning: The color settings have been changed ..." as soon as the current window is changed (for example, right under "Do you want to quit?" popup if you click [Quit] button after tuning Brightness), then the program resets to default color settings and writes RedBrightness/GreenBrightness/BlueBrightness=0.0000 to .nvidia-settings-rc.

Details: under Openbox (Ubuntu 16.04 LXDE) the "palette update" callback in ctkcolorcorrection.c fires after every window change. Disabling the g_signal_connect for callback_palette_update fixes the problem (testing patch: 17_avoid_reset_to_default_color_settings.zip).

GPULogoBrightness not working on Gigabyte Aorus 1080 Ti

Setting the brightness on my GPU has no visual effect, though the command is accepted and shows the updated setting when queried.

gish@StarGazer:~$ nvidia-settings -q gpus

1 GPU on StarGazer:0

    [0] StarGazer:0[gpu:0] (GeForce GTX 1080 Ti)

      Has the following names:
        GPU-0
        GPU-69c6c833-8116-b74f-ef7b-31eef4aff471

Ubuntu compiling issue.

Hi,
I have spent a lot of time trying to compile the nvidia-settings code. I need to use its API to configure and query GPU info and SDI capture card info.
When compiling this library I get lots of dependency errors, for things like X11, VDPAU, and GTK.
Is there a pre-compiled library, or a tutorial on how to compile it?
Thanks very much!

Add an option to override the power limit.

It is possible to override the power limit on Linux with nvidia-smi, but it would be better to have a GUI option for it.
Could you add a power limit slider to the PowerMizer page when the coolbits value is set to 8 (i.e. when overclocking is available)?
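
In the meantime, for anyone who needs this programmatically, a minimal sketch of doing the same thing through NVML (the interface nvidia-smi uses); these are standard NVML calls, but treat it as an illustration rather than anything nvidia-settings does today, and note the 200 W target is just an example value:

/* Sketch: query and override the power management limit via NVML.
 * The set call needs root; link with -lnvidia-ml. */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlDevice_t dev;
    unsigned int min_mw = 0, max_mw = 0, cur_mw = 0, target_mw = 200000; /* 200 W */

    if (nvmlInit() != NVML_SUCCESS)
        return 1;

    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
        nvmlDeviceGetPowerManagementLimitConstraints(dev, &min_mw, &max_mw);
        nvmlDeviceGetPowerManagementLimit(dev, &cur_mw);
        printf("current limit %u mW (allowed %u-%u mW)\n", cur_mw, min_mw, max_mw);

        /* Clamp the requested value to the allowed range before applying. */
        if (target_mw < min_mw) target_mw = min_mw;
        if (target_mw > max_mw) target_mw = max_mw;
        if (nvmlDeviceSetPowerManagementLimit(dev, target_mw) != NVML_SUCCESS)
            fprintf(stderr, "failed to set power limit (needs root)\n");
    }

    nvmlShutdown();
    return 0;
}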

Support alternative but still default application profile locations

cat /usr/local/bin/nvidia-bug-report.sh | grep application
        append_file_or_dir_silent "$DIR/.nv/nvidia-application-profiles-rc"
        append_file_or_dir_silent "$DIR/.nv/nvidia-application-profiles-rc.backup"
        append_file_or_dir_silent "$DIR/.nv/nvidia-application-profiles-rc.d"
        append_file_or_dir_silent "$DIR/.nv/nvidia-application-profiles-rc.d.backup"
        append_silent "$DIR/.nv/nvidia-application-profile-globals-rc"
        append_silent "$DIR/.nv/nvidia-application-profile-globals-rc.backup"
append_file_or_dir_silent "/etc/nvidia/nvidia-application-profiles-rc"
append_file_or_dir_silent "/etc/nvidia/nvidia-application-profiles-rc.d"
append_file_or_dir_silent /usr/share/nvidia/nvidia-application-profiles-*-rc

The reason nvidia-bug-report.sh appends so many files to the report log is that these are all default locations where the rc (and key...) files can exist. For example, Clear Linux has a philosophy of viewing /usr/share as sacred and stores its default config files there. In my opinion /etc is a better place for the rc files, and it allows Clear Linux users, for example, to remove /etc and /var (IIRC) to perform a factory reset.

The driver uses the same logic as the bug-reporting tool. For nvidia-settings to be anything but impotent, it must implement the SAME hunting logic as the driver. Looking only in /usr/share stopped being acceptable once the driver started hunting in multiple locations.

The driver doesn't hunt for a single file to use: it parses each file, per application, and later entries and later files override previous settings.

See https://download.nvidia.com/XFree86/Linux-x86_64/390.77/README/profiles.html to understand how the logic works.
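
To make the point concrete, here are the default locations the bug-report script collects, gathered into one list (illustrative code only, not driver source; backups omitted, and the actual search and override order is the one described in the README above):

/* Illustrative only: default application-profile locations, as listed by
 * nvidia-bug-report.sh.  A tool that wants to match the driver has to
 * consider all of these, not just /usr/share. */
#include <stdio.h>

static const char *profile_locations[] = {
    "/usr/share/nvidia/nvidia-application-profiles-*-rc",
    "/etc/nvidia/nvidia-application-profiles-rc",
    "/etc/nvidia/nvidia-application-profiles-rc.d",
    "$HOME/.nv/nvidia-application-profiles-rc",
    "$HOME/.nv/nvidia-application-profiles-rc.d",
    "$HOME/.nv/nvidia-application-profile-globals-rc",
};

int main(void)
{
    for (unsigned int i = 0;
         i < sizeof(profile_locations) / sizeof(profile_locations[0]); i++)
        printf("%s\n", profile_locations[i]);
    return 0;
}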

Dithering setting doesn't stay disabled

When I disable Dithering for both of my displays in NVIDIA Settings and then close and reopen NVIDIA Settings, it reverts to "Auto - Enabled". After saving the configuration with dithering disabled, I can also see in the .nvidia-settings-rc file a bunch of "Dithering=0" (Auto) lines instead of the expected "Dithering=2" (Disabled).

I can disable dithering through the command line with:

nvidia-settings --assign="Dithering=2"

And then checking if dithering is disabled with:

nvidia-settings -q CurrentDithering

shows that it is indeed disabled.

I am not sure why setting it through the GUI does not work.

parsing hostname with -a does not find display

Good day,

while trying to help someone over at the Nvidia developer forum for linux

https://forums.developer.nvidia.com/t/nvidia-settings-attribute-gpumemorytransferrateoffsetallperformancelevels-is-not-available/166846

(maybe you can take a look to help that guy :) )

I stumbled over an issue: using the hostname in an attribute assignment (or query) results in nvidia-settings not finding the display.
Maybe I'm doing it wrong, but as far as I can see I'm not ;)

The output of nvidia-settings -h shows:

  The ASSIGN argument to the '--assign' command line option is of the form:

    {DISPLAY}/{attribute name}[{display devices}]={value}

  This assigns the attribute {attribute name} to the value {value} on the X Display {DISPLAY}.  {DISPLAY} follows the usual {host}:{display}.{screen} syntax of the DISPLAY environment variable and is optional; when it is not specified, then it is implied following the same rule as the --ctrl-display option.  If the X screen is not specified, then the assignment is made to all X screens.  Note that the '/' is only required when {DISPLAY} is present.

  {DISPLAY} can additionally include a target specification to direct an assignment to something other than an X screen.  A target specification is contained within brackets and consists of a target type name, a colon, and the target id.  The target type name can be one of "screen", "gpu", "framelock", "fan", "thermalsensor", "svp", or "dpy"; the target id is the index into the list of targets (for that target type).  The target specification can be used in {DISPLAY} wherever an X screen can be used, following the syntax {host}:{display}[{target_type}:{target_id}].

On my system, the output of nvidia-settings -q screens shows:

1 X Screen on darklord:0

[0] darklord:0.0 (GeForce RTX 2080 Ti)

  Has the following name:
    SCREEN-0

Which tells me darklord:0 or darklord:0.0 are {host}:{display}.{screen} identifiers to use.

So these are the commands I tried:

nvidia-settings -a darklord:0/GPUMemoryTransferRateOffsetAllPerformanceLevels=0
ERROR: Unable to find display on any available system

nvidia-settings -a darklord:0.0/GPUMemoryTransferRateOffsetAllPerformanceLevels=0
ERROR: Unable to find display on any available system

nvidia-settings -a darklord:0[gpu:0]/GPUMemoryTransferRateOffsetAllPerformanceLevels=0
ERROR: Unable to find display on any available system

nvidia-settings -a darklord:0.0[gpu:0]/GPUMemoryTransferRateOffsetAllPerformanceLevels=0
ERROR: Unable to find display on any available system

But omitting the host specification results in success:

nvidia-settings -a :0[gpu:0]/GPUMemoryTransferRateOffsetAllPerformanceLevels=0
Attribute 'GPUMemoryTransferRateOffsetAllPerformanceLevels' (darklord:0[gpu:0]) assigned value 0.

nvidia-settings -a :0.0/GPUMemoryTransferRateOffsetAllPerformanceLevels=0
Attribute 'GPUMemoryTransferRateOffsetAllPerformanceLevels' (darklord:0.0) assigned value 0.

nvidia-settings --version
nvidia-settings: version 455.23.05

driver version: 460.39 (from ppa)
distro: linux mint 19.3 (ubuntu 18.04 based)
kernel: 5.9.16 (custom)

Digital Vibrance / Image Sharpening / Color Space missing for RTX 2070 in Linux

Card: RTX 2070 (ASUS ROG Strix 2070 Advanced Edition)
lspci gives:
VGA compatible controller: NVIDIA Corporation Device 1f07 (rev a1)

I tried 4 different monitors and several different versions of Ubuntu, Kubuntu, and Linux Mint.
Drivers used:
NVIDIA-Linux-x86_64-418.56.run as well as drivers from the PPA

It would appear the driver's monitor detection is bugged regardless of what I throw at it, including monitors that work fine in Windows 10 regardless of port. Two of the other monitors were able to use digital vibrance fine with a GTX 1050. I also tried adding the correct EDIDs for each monitor, with the same result.

I would really like to get this resolved, since one of the major reasons I bought the RTX 2070 was to avoid washed-out/dull colors in Linux.

Will provide any additional information.

I originally posted this in the Linux Mint forums as well as on the DevTalk forum for Linux drivers, and a forum member recommended posting it here. Hopefully the bug can be caught and the information gets into the right hands to resolve it.

How To compile nvidia-settings

I was running nvidia-settings across several Ubuntu desktop environments for comparison purposes. I'd like to compile nvidia-settings from this repo with modified code for further debugging reports; I've already made internal changes to nvidia-settings that shouldn't break it.

Can you give a clear procedure for successfully compiling this repo, including its dependencies?

primary display checkbox toggle

When I apply a configuration for a secondary screen, it automatically becomes the primary screen, even though the primary checkbox is checked on the page of the first screen and not on the page of the second screen.

Workaround without code modification: just uncheck the checkbox and check it again. Then apply the configuration.

Workaround with code modification:

--- ctkdisplayconfig.c	2017-12-19 08:58:34.765653703 +0100
+++ ctkdisplayconfig.c	2017-12-19 08:56:40.638983837 +0100
@@ -8331,7 +8331,7 @@
         }
         if (screen->no_scanout) continue;
 
-		if (screen->primaryDisplay && ctk_object->primary_display_changed) {
+        if (screen->primaryDisplay) {
             ret = NvCtrlSetStringAttribute(screen->ctrl_target,
                                            NV_CTRL_STRING_NVIDIA_XINERAMA_INFO_ORDER,
                                            screen->primaryDisplay->typeIdName);

I haven't researched the code enough to say it won't cause bugs; if anyone has a proper fix, please share :-)

settings for headless computers

There is no way to set the fan speed without using nvidia-settings, which in turn requires X and correct display settings. A lot of people use NVIDIA cards only for CUDA; they do not need X running on their servers.

So the question is: is it possible to disentangle nvidia-settings from X? Why or why not?
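
For what it's worth, monitoring already works without X through NVML; here is a minimal sketch that reads the fan speed headlessly (actually setting the speed is the part that, as far as I know, still goes through NV-CONTROL and therefore X with these drivers):

/* Sketch: read the fan speed without X via NVML (link with -lnvidia-ml).
 * Read-only; programmatic fan control is the part that still needs
 * NV-CONTROL and a running X server with these driver versions. */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlDevice_t dev;
    unsigned int speed = 0;

    if (nvmlInit() != NVML_SUCCESS)
        return 1;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS &&
        nvmlDeviceGetFanSpeed(dev, &speed) == NVML_SUCCESS)
        printf("fan speed: %u%%\n", speed);
    nvmlShutdown();
    return 0;
}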
