
jupyter-resource-usage's Issues

Add a new /api/nbresuse/v1 endpoint

#45 restored the /metrics endpoint, so that JupyterLab (and other frontends) can display the metrics:

(screenshot)

The reason to move away from /metrics is to not shadow the default Prometheus endpoint (see #22 for more details).

However, this is a breaking change. We should carefully plan the rollout to avoid confusing users running an older version of JupyterLab. For the classic notebook this is not really an issue, as the notebook extension is bundled with the package on PyPI.

cc @Gsbreddy who has already started working on this in side PRs.

Jupyter Resource Usage Roadmap

Let's keep discussions on what to do next with nbresuse in this meta issue.


  • Restore the /metrics endpoint to keep backward compatibility for the 0.3.x releases: #45
  • Decide where the repo should live: keep it under yuvipanda or move it to another GitHub organization (or to Jupyter Server)? #24 -> moved to https://github.com/jupyter-server/jupyter-resource-usage
  • Keep the /metrics endpoint for the 0.3.x releases to not break backward compatibility with the lab status bar and other extensions that rely on nbresuse.
  • Deprecate the /metrics endpoint in 0.4.x so it doesn't shadow the Prometheus endpoint anymore: #68
  • Either in a 0.3.x release or in 0.4.x: add a new /api/metrics/v1 endpoint to retrieve the metrics as JSON (just like the current /metrics endpoint; see the sketch after this list). However, the format of the response is still to be defined - #52 - new endpoint added in #68
  • Change the endpoint used in the JupyterLab status bar: https://github.com/jupyterlab/jupyterlab/blob/4fe4dcfe5c9dc329bed2dcf2602f569ddef8a8a0/packages/statusbar/src/defaults/memoryUsage.tsx#L291 -> remove from core lab and create a federated lab extension for nbresuse: #69
  • Drop the deprecated /metrics endpoint: #75
  • Add kernel metrics: #31
  • Add more metrics (Network I/O or other metrics supported by psutil).
  • Switch to event stream? #7. This will change the way frontends consume the API and will be a breaking change.
  • Add tests: #49
  • Document the nbresuse API: #48
  • Document the Prometheus metrics: #51
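
A minimal sketch of how a script or frontend might fetch the JSON metrics from the new endpoint mentioned above. This is not the project's code: the base URL and token are assumptions, and the response keys follow the /metrics JSON shown elsewhere on this page.

    import json
    import urllib.request

    BASE_URL = "http://localhost:8888"   # assumption: a local notebook/jupyter server
    TOKEN = "<your-token>"               # placeholder, replace with a real token

    req = urllib.request.Request(
        BASE_URL + "/api/metrics/v1",
        headers={"Authorization": "token " + TOKEN},
    )
    with urllib.request.urlopen(req) as resp:
        metrics = json.load(resp)

    # Earlier /metrics responses looked like {"rss": ..., "limits": {...}}
    print(metrics.get("rss"), metrics.get("limits"))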

Detecting CPU and RAM limits inside container

Hi! I've been forwarded here from this discussion (see the link for more detail): https://discourse.jupyter.org/t/detecting-cpu-and-ram-limits-on-mybinder-org/4640

The total CPU and memory values reported by psutil do not reflect the Docker hard limits.
Docker options for CPU and memory, respectively: --cpus=1 --memory=1g

$ docker run -it --memory 1g jupyter/minimal-notebook bash
jovyan@ae5ee84233e0:~$ python
Python 3.7.3 (default, Mar 27 2019, 22:11:17) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import psutil
>>> psutil.virtual_memory().total
16742875136 # I didn't want this value, I wanted the 1 GB limit!

To address this, I'd like to contribute by implementing the following:

  • Detect current OS
  • Check if it's inside container or host
  • Get cpu and memory metrics from cgroups info if it's inside container.

Any contribution guidance would be much appreciated!
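
Not the project's implementation, just a minimal sketch of the cgroup-reading idea proposed above. Paths cover cgroup v1 and v2; the huge sentinel value reported by cgroup v1 means "no limit".

    import os

    def cgroup_mem_limit():
        """Best-effort read of the container memory limit from cgroups.

        Checks cgroup v1 and v2 paths; returns None if no limit is found.
        """
        candidates = [
            "/sys/fs/cgroup/memory/memory.limit_in_bytes",  # cgroup v1
            "/sys/fs/cgroup/memory.max",                    # cgroup v2
        ]
        for path in candidates:
            if os.path.exists(path):
                raw = open(path).read().strip()
                if raw.isdigit():
                    limit = int(raw)
                    # cgroup v1 reports an enormous number when no limit is set
                    if limit < 1 << 60:
                        return limit
        return None

    print(cgroup_mem_limit())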

how to fix the position of memory status

When it is shown, it stays at the far edge of the window and ends up hidden behind the scrollbar.
I will add pictures so you can see this.

Could you tell me how to fix it?
(screenshot)
The one above is with a small window size.

When I use a full-screen window, it goes out of view.
(screenshot)

How can I fix the position of the memory status, for example under the Python 3 indicator?

Please help me.

Document the soft-deprecated endpoint

#68 switches the main endpoint to /api/metrics/v1, but keeps the original /metrics under a flag.

This should be documented, for example in a new CHANGELOG.md.

Also, previous JupyterLab 1.x and 2.x releases include a status bar item that requests metrics from the deprecated /metrics endpoint, so it can be tricky to understand which versions of lab and nbresuse are compatible. However, #69 should help with this in the long run.

This is less problematic for classic notebook users, since the frontend ships with nbresuse (#69 would do the same for the lab frontend).

Report kernel metrics

This was briefly mentioned in #22 (comment). Opening a new issue for better tracking.

It would indeed be really useful to track cpu and memory usage per kernel. The frontend could then query this data and display a more granular view on the resources being used.

This sounds like it should be doable when the kernels are local to the notebook server. But it might be slightly more complicated in the case of remote kernels.
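
A rough sketch of what per-kernel reporting could look like for local kernels, reusing the psutil pattern already used in nbresuse. Obtaining the kernel PID from the server's kernel manager is left out here and would be the harder part, especially for remote kernels.

    import psutil

    def kernel_memory_rss(kernel_pid):
        """Sum resident memory of a kernel process and its children.

        Assumes the kernel runs locally and its PID is known (e.g. obtained
        from the notebook server's kernel manager); remote kernels would
        need a different mechanism.
        """
        try:
            proc = psutil.Process(kernel_pid)
        except psutil.NoSuchProcess:
            return None
        procs = [proc] + proc.children(recursive=True)
        return sum(p.memory_info().rss for p in procs)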

Clarify notebook/jupyter_server usage

Description

The code is not a fully valid jupyter_server extension, as it is based on the notebook handler:

from notebook.base.handlers import IPythonHandler

Dependency needs update:

install_requires=["notebook>=5.6.0", "prometheus_client", "psutil>=5.6.0"],

The backport entrypoint is missing:

_load_jupyter_server_extension = load_jupyter_server_extension

Expected behavior

The extension is fully using jupyter_server.
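
A minimal sketch of the jupyter_server-based pieces described above. Class and function bodies are placeholders, not the project's actual code.

    # handlers: use the jupyter_server base class instead of the notebook one
    from jupyter_server.base.handlers import APIHandler


    class ApiHandler(APIHandler):
        """Placeholder; the real handler computes and returns the metrics."""


    # extension module: expose the underscore-prefixed alias jupyter_server expects
    # (setup.py would also switch to depending on jupyter_server)
    def load_jupyter_server_extension(server_app):
        """Register the handlers on the running server (details omitted)."""


    _load_jupyter_server_extension = load_jupyter_server_extension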

Context

The mix-up was seen following that issue: jupyter-server/jupyter_server#488

After changing the parameter to c.Spawner.cmd = ["jupyter-labhub"], the warning is emitted only for this handler:

python3.7/site-packages/notebook/base/handlers.py:131: RuntimeWarning: Expected to see HubAuthenticatedHandler in <class 'jupyter_resource_usage.api.ApiHandler'>.mro()

add JSON content type in response header

For the JSON API (currently /metrics but /api/nbresuse moving forward), it might be useful to add an "application/json" content type response header (which may allow for slightly easier parsing in certain clients).
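
For illustration, a minimal sketch of setting the header in a Tornado-based handler. The handler name and payload are placeholders, not the project's code.

    import json

    from notebook.base.handlers import IPythonHandler


    class MetricsHandler(IPythonHandler):
        def get(self):
            metrics = {"rss": 0, "limits": {}}  # placeholder payload
            self.set_header("Content-Type", "application/json")
            self.finish(json.dumps(metrics))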

Also: is there a set of upcoming changes / plans for nbresuse regarding the Prometheus vs. JSON endpoints, the support for the status bar, etc.?

Before we contribute additional changes, it might be good to know what the plan is, so that we can figure out how best to target changes. I'm piecing together activity from various issues, but it would be good to see what was finally agreed upon moving forward.

Add docs to configure with Jupyter Server

For now the README mentions configuring ResourceUseDisplay with c.NotebookApp.ResourceUseDisplay.

We should also document configuring c.ServerApp.ResourceUseDisplay in PREFIX/etc/jupyter/jupyter_server_config.py when using Jupyter Server directly (for example if only JupyterLab is installed).
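
For example, the documented snippet could look roughly like this (limit values are illustrative; `c` is provided by the Jupyter config machinery):

    # PREFIX/etc/jupyter/jupyter_server_config.py
    c.ServerApp.ResourceUseDisplay.mem_limit = 1073741824  # 1 GiB, illustrative
    c.ServerApp.ResourceUseDisplay.track_cpu_percent = True
    c.ServerApp.ResourceUseDisplay.cpu_limit = 2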

nbresuse component for jupyterlab status bar

The JupyterLab status bar aims to show all relevant states of the notebook; showing the current memory usage would be an asset. It would be great to integrate this into the status bar.

Incompatible with Builtin Prometheus Metrics?

Hi there,

Love this extension - just wanted to check for clarification on how it is supposed to overlap with the built-in notebook Prometheus metrics. I noticed that when I install it, it takes over the /metrics endpoint:

curl 'localhost:8888/metrics?token=...'
{"rss": 57704448, "limits": {}}

This seems to conflict with these metrics:

https://github.com/jupyter/notebook/blob/master/notebook/prometheus/metrics.py

Just wanted to check if this is expected or if I'm installing it wrong

Document the nbresuse API

Once the /metrics endpoint is back, we should document the API in the README.
We can do the same for the new /api/nbresuse/v1 endpoint that should be used in the long run.

There could also be a section linking to other repositories that depend on nbresuse and consume its endpoints.

Minimize requests to /metrics using some logic about connectivity from browser->server

I started filling out JupyterLab's feature request form but opted to open the issue here instead. My goal is to identify the sources of various web requests made from browsers and to avoid them when they are doomed to fail because the notebook-server / jupyter-server has shut down.

JupyterLab, for example, has the IConnectionLost interface, which can be used to act smarter (there is an example of how it's used in the JupyterLab codebase).

I understand that nbresuse is not used only by JupyterLab, so perhaps its solution must be standalone, but we could provide a hook or similar that lets us make use of knowledge about browser-to-server connectivity, implemented in JupyterLab or elsewhere.

Here is an example of the number of requests that arrive at JupyterHub, and how cluttered things can get when a user server shuts down and the /user/username1 route is removed from the JupyterHub proxy, which makes all requests go to /hub/user/username1 as a default fallback.

(screenshot)

Problem

Make various logs more readable and reduce load on other services.

Proposed Solution

I hope it's possible to make use of a common interface to determine if the notebook-server / jupyter-server is available, and cut down on requests if it isn't. I'm not sure how much of what's below is something to fix within JupyterLab and how much is to fix in other projects, but I suspect that the /metrics calls come from nbresuse.

Additional context

I took logs from my JupyterHub, and found the following.

/api/terminals - 5 min interval

04:47:54 503 GET /hub/user/hub-user1/api/terminals?1595738873967
04:52:55 503 GET /hub/user/hub-user1/api/terminals?1595739174718

/api/sessions - 5 min interval

04:47:59 503 GET /hub/user/hub-user1/api/sessions?1595738878979
04:52:59 503 GET /hub/user/hub-user1/api/sessions?1595739179448

/api/kernelspecs - unknown interval

04:50:25 503 GET /hub/user/hub-user1/api/kernelspecs?1595739024887

/api/kernels - unknown interval

# increments of 25s, 66s, 98s, 156s between calls... Hmmm....
04:47:50 503 GET /hub/user/hub-user1/api/kernels?1595738869887
04:48:15 503 GET /hub/user/hub-user1/api/kernels?1595738894919
04:49:21 503 GET /hub/user/hub-user1/api/kernels?1595738961260
04:50:49 503 GET /hub/user/hub-user1/api/kernels?1595739049098
04:53:25 503 GET /hub/user/hub-user1/api/kernels?1595739205581

/api/contents - ~10 second interval

04:46:55 503 GET /hub/user/hub-user1/api/contents/hub-user1-storage-folder?content=1&1595738815090
04:47:06 503 GET /hub/user/hub-user1/api/contents/hub-user1-storage-folder?content=1&1595738825617
04:47:16 503 GET /hub/user/hub-user1/api/contents/hub-user1-storage-folder?content=1&1595738836264
04:47:27 503 GET /hub/user/hub-user1/api/contents/hub-user1-storage-folder?content=1&1595738846731
04:47:37 503 GET /hub/user/hub-user1/api/contents/hub-user1-storage-folder?content=1&1595738857252

/metrics - ~5 second interval

# ~5 s
04:46:46 503 GET /hub/user/hub-user1/metrics?1595738805671
04:46:51 503 GET /hub/user/hub-user1/metrics?1595738811272
04:46:57 503 GET /hub/user/hub-user1/metrics?1595738816741
04:47:02 503 GET /hub/user/hub-user1/metrics?1595738822330
04:47:08 503 GET /hub/user/hub-user1/metrics?1595738827786

Add tests for the nbresuse endpoints

Let's add tests to make sure the nbresuse endpoints are covered:

  • /metrics (for now, to be deprecated and then removed in later versions)
  • /api/nbresuse/v1 when it is introduced

There could also be tests checking that the metrics are reported to Prometheus.
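
A rough sketch of what such a test could look like. The running_server_url fixture is hypothetical (it would start a server with nbresuse enabled and yield its base URL); the project may prefer the notebook test utilities or a tornado test case instead.

    import json
    import urllib.request

    import pytest


    @pytest.mark.parametrize("path", ["/metrics", "/api/nbresuse/v1"])
    def test_metrics_endpoints(running_server_url, path):
        """Check that both endpoints respond with JSON containing an rss value."""
        with urllib.request.urlopen(running_server_url + path) as resp:
            assert resp.status == 200
            body = json.load(resp)
        assert "rss" in body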

Not showing (caused by 404 on Endpoint /api/metrics/v1?)

I have installed this package via pip, yet no memory usage is shown in the Jupyter GUI.

I have the following:

jupyter-client            6.1.7
jupyter-console           6.2.0
jupyter-core              4.6.3
jupyter-packaging         0.7.12
jupyter-resource-usage    0.5.1
jupyter-server            1.4.1
jupyterlab                3.0.10
jupyterlab-execute-time   2.0.2
jupyterlab-git            0.23.3
jupyterlab-latex          2.0.0
jupyterlab-pygments       0.1.2
jupyterlab-server         2.3.0
jupyterlab-system-monitor 0.8.0
jupyterlab-topbar         0.6.1
jupyterlab-vim            0.13.4
JupyterLab v3.0.10
/home/jooa/.local/share/jupyter/labextensions
        jupyterlab-execute-time v2.0.2 enabled OK (python, jupyterlab_execute_time)
        jupyterlab-topbar-extension v0.6.1 enabled OK (python, jupyterlab-topbar)
        jupyterlab-system-monitor v0.8.0 enabled OK (python, jupyterlab-system-monitor)
        @axlair/jupyterlab_vim v0.13.4 enabled OK (python, jupyterlab_vim)
        @jupyter-server/resource-usage v0.5.0 enabled OK (python, jupyter-resource-usage)

Other labextensions (built into JupyterLab)
   app dir: /home/jooa/.local/share/jupyter/lab
        @jupyterlab/debugger v3.0.7 enabled OK
        @jupyterlab/git v0.23.3 enabled  X
        @jupyterlab/toc v5.0.6 enabled OK
        jupyterlab-theme-toggle v0.6.1 enabled OK
        nbdime-jupyterlab v2.0.1 enabled  X

   The following extension are outdated:
        @jupyterlab/git
        nbdime-jupyterlab

Yet,

Mar 23 18:59:00 vm-129-69 jupyter[1806765]: [W 2021-03-23 18:59:00.435 ServerApp] 404 GET /jupyter/api/metrics/v1?1616522340417 (193.174.53.84) 1.03ms referer=[snip]

appears in the logs.

Why is the endpoint returning a 404? What am I missing?

Switch to using eventstreams

Eventstreams are a much better fit for this than polling via HTTP requests as we do now. We should switch to them (plus a polyfill for IE users).
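
A rough sketch of what a server-sent-events endpoint could look like with Tornado. This is not the project's code; the interval and payload are illustrative.

    import asyncio
    import json

    import psutil
    from tornado import iostream, web


    class MetricsStreamHandler(web.RequestHandler):
        """Push metrics to the client periodically instead of having it poll."""

        async def get(self):
            self.set_header("Content-Type", "text/event-stream")
            self.set_header("Cache-Control", "no-cache")
            while True:
                payload = {"rss": psutil.Process().memory_info().rss}
                try:
                    self.write("data: " + json.dumps(payload) + "\n\n")
                    await self.flush()
                except iostream.StreamClosedError:
                    break  # client disconnected
                await asyncio.sleep(5)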

Does this work with a Windows (10) OS?

[E 11:15:07.398 NotebookApp] Uncaught exception GET /metrics (::1)
HTTPServerRequest(protocol='http', host='localhost:8888', method='GET', uri='/metrics', version='HTTP/1.1', remote_ip='::1', headers={'Accept-Language': 'en-US,en;q=0.8,fr;q=0.6', 'Accept-Encoding': 'gzip, deflate, sdch, br', 'X-Requested-With': 'XMLHttpRequest', 'Host': 'localhost:8888', 'Accept': 'application/json, text/javascript, /; q=0.01', 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36', 'Connection': 'keep-alive', 'Referer': 'http://localhost:8888/notebooks/Dask/DUNS_EXPLODE.ipynb', 'Cookie': '_xsrf=2|1d88f79c|8ce8953f8aee94e480ea55651e84d5f0|1485879788; username-localhost-8888="2|1:0|10:1485969095|23:username-localhost-8888|44:NTM1Yjc3NWI0YjE2NGI5YTg0MmI1ODk4NzczMzAxZWY=|ef8abc29b00a0e429b29b5dad4c164ac63ba2ec84ef0041832f374144918bcdd"'})
Traceback (most recent call last):
File "C:\Users\jmbertoncelli\Miniconda2\envs\snake27.30\lib\site-packages\tornado\web.py", line 1467, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "C:\Users\jmbertoncelli\Miniconda2\envs\snake27.30\lib\site-packages\nbresuse\handlers.py", line 21, in get
self.finish(json.dumps(get_metrics()))
File "C:\Users\jmbertoncelli\Miniconda2\envs\snake27.30\lib\site-packages\nbresuse\handlers.py", line 14, in get_metrics
'memory': int(os.environ.get('MEM_LIMIT', None))
TypeError: int() argument must be a string or a number, not 'NoneType'
[E 11:15:07.401 NotebookApp] {
"Accept-Language": "en-US,en;q=0.8,fr;q=0.6",
"Accept-Encoding": "gzip, deflate, sdch, br",
"X-Requested-With": "XMLHttpRequest",
"Host": "localhost:8888",
"Accept": "application/json, text/javascript, /; q=0.01",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36",
"Connection": "keep-alive",
"Referer": "http://localhost:8888/notebooks/Dask/DUNS_EXPLODE.ipynb",
"Cookie": "_xsrf=2|1d88f79c|8ce8953f8aee94e480ea55651e84d5f0|1485879788; username-localhost-8888="2|1:0|10:1485969095|23:username-localhost-8888|44:NTM1Yjc3NWI0YjE2NGI5YTg0MmI1ODk4NzczMzAxZWY=|ef8abc29b00a0e429b29b5dad4c164ac63ba2ec84ef0041832f374144918bcdd""
}
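
The traceback above comes from calling int() on a missing MEM_LIMIT environment variable. A minimal sketch of a guard, not the project's actual fix:

    import os


    def get_mem_limit():
        """Return MEM_LIMIT as an int, or None when the variable is not set."""
        raw = os.environ.get("MEM_LIMIT")
        return int(raw) if raw else None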

Jupyterlab memory limit warning

Hi all,
I have been using nbresuse for a week now. Everything works fine in the notebook, but not in JupyterLab.
In the lab, I can see the memory displayed in the status bar, but when the memory crosses the mem_limit it does not show the warning (red colour). In the notebook I can see the red colour, but not in Lab.

I need to know whether the warning feature (showing red colour when crossing mem_limit) works / is enabled in the status bar for JupyterLab.

Please help me with this information. Thanks for any inputs.

Getting "AttributeError: can't set attribute" error when starting Jupyter Lab with mem_limit and/or cpu_limit

Hi all,
I am trying to expose metric via Prometheus.
Versions I am using are:
python 3.6
jupyterlab 1.2.15
notebook 6.0.3
[email protected]
[email protected]

I am passing MEM_LIMIT and CPU_LIMIT as environment variables from the JupyterHub spawner.
I am getting the following error when starting JupyterLab with nbresuse 0.3.4:

[E 2020-05-21 09:18:05.132 admin ioloop:801] Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7ffa31de98c8>, <Future finished exception=AttributeError("can't set attribute",)>)
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/tornado/ioloop.py", line 758, in _run_callback
        ret = callback()
      File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
        return fn(*args, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/tornado/ioloop.py", line 779, in _discard_future_result
        future.result()
      File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 307, in wrapper
        result = func(*args, **kwargs)
      File "/usr/lib/python3.6/types.py", line 248, in wrapped
        coro = func(*args, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/nbresuse/prometheus.py", line 34, in __call__
        metrics = self.apply_memory_limits(memory_metrics())
      File "/usr/local/lib/python3.6/dist-packages/nbresuse/prometheus.py", line 49, in apply_memory_limits
        metrics.max_memory = self.config.mem_limit
    AttributeError: can't set attribute

The exception is thrown here,
and from my knowledge it is because that line is trying to set an attribute on a NamedTuple, which is immutable.
I managed to solve it by using the _replace method instead.
I wonder if I am doing something wrong or if this is a bug in the latest release.
In the second case, I can create a pull request with the fix.
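
For reference, a minimal illustration of the immutability issue and the _replace workaround mentioned above (the type and field names are hypothetical, not the project's):

    from collections import namedtuple

    MemoryMetrics = namedtuple("MemoryMetrics", ["current_memory", "max_memory"])

    metrics = MemoryMetrics(current_memory=123, max_memory=0)
    # metrics.max_memory = 456                 # AttributeError: can't set attribute
    metrics = metrics._replace(max_memory=456)  # returns a new, updated tuple instead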

Ciao.

nbresuse not showing up in jupyter

I have pip installed jupyter_contrib_nbextensions and nbresuse

But it doesn't seem to be showing up in Jupyter?

Do I need to manually enable it even though my notebook version is 6.1.0?

(screenshot)

nbresuse reports total Jupyter server memory

It looks like nbresuse is reporting the memory use for the entire server process, including all notebooks.

class MetricsHandler(IPythonHandler):
    def get(self):
        """
        Calculate and return current resource usage metrics
        """
        config = self.settings['nbresuse_display_config']
        cur_process = psutil.Process()
        all_processes = [cur_process] + cur_process.children(recursive=True)
        rss = sum([p.memory_info().rss for p in all_processes])

Is it possible to report memory use for each notebook individually? I don't know anything about jupyter architecture, but maybe there's a way to get a pointer to the calling notebook or PID?

Document the Prometheus metrics

Similar idea as in #49, but for the Prometheus metrics.

Since nbresuse reports metrics to Prometheus, we can document what metrics are being added.

ResourceUseDisplay instance expected an int or a callable, not the str

Installing nbresuse with conda for JupyterLab 2.2.9, I get the following error for cpu_limit or mem_limit when I run it like this:

docker run --name ${NAME}_cont \
	--cpus=$CONTAINER_CPU_PER --cpuset-cpus=$CONTAINER_CPU_ARRAY \
	--memory=${CONTAINER_MEM} \
	[...]
	$IMAGE_NAME start.sh jupyter lab \
	--NotebookApp.password=$(<../.hashedPass.pwd) \
    --NotebookApp.base_url="/" \
    --NotebookApp.ResourceUseDisplay.mem_limit=${CONTAINER_MEM}   \
    --NotebookApp.ResourceUseDisplay.track_cpu_percent=True \
    --NotebookApp.ResourceUseDisplay.cpu_limit=$CONTAINER_CPU_NUM \
    --NotebookApp.allow_origin='https://colab.research.google.com' \
    --NotebookApp.port_retries=0
    

[W 23:51:55.317 LabApp] Error loading server extension nbresuse
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/notebook/notebookapp.py", line 1945, in init_server_extensions
func(self)
File "/opt/conda/lib/python3.8/site-packages/nbresuse/init.py", line 35, in load_jupyter_server_extension
resuseconfig = ResourceUseDisplay(parent=nbapp)
File "/opt/conda/lib/python3.8/site-packages/traitlets/config/configurable.py", line 104, in init
self.config = config
File "/opt/conda/lib/python3.8/site-packages/traitlets/traitlets.py", line 604, in set
self.set(obj, value)
File "/opt/conda/lib/python3.8/site-packages/traitlets/traitlets.py", line 593, in set
obj._notify_trait(self.name, old_value, new_value)
File "/opt/conda/lib/python3.8/site-packages/traitlets/traitlets.py", line 1217, in _notify_trait
self.notify_change(Bunch(
File "/opt/conda/lib/python3.8/site-packages/traitlets/traitlets.py", line 1227, in notify_change
return self._notify_observers(change)
File "/opt/conda/lib/python3.8/site-packages/traitlets/traitlets.py", line 1264, in _notify_observers
c(event)
File "/opt/conda/lib/python3.8/site-packages/traitlets/traitlets.py", line 888, in compatible_observer
return func(self, change)
File "/opt/conda/lib/python3.8/site-packages/traitlets/config/configurable.py", line 208, in _config_changed
self._load_config(change.new, traits=traits, section_names=section_names)
File "/opt/conda/lib/python3.8/site-packages/traitlets/config/configurable.py", line 175, in _load_config
setattr(self, name, deepcopy(config_value))
File "/opt/conda/lib/python3.8/site-packages/traitlets/traitlets.py", line 604, in set
self.set(obj, value)
File "/opt/conda/lib/python3.8/site-packages/traitlets/traitlets.py", line 578, in set
new_value = self._validate(obj, value)
File "/opt/conda/lib/python3.8/site-packages/traitlets/traitlets.py", line 610, in _validate
value = self.validate(obj, value)
File "/opt/conda/lib/python3.8/site-packages/traitlets/traitlets.py", line 1981, in validate
self.error(obj, value)
File "/opt/conda/lib/python3.8/site-packages/traitlets/traitlets.py", line 690, in error
raise TraitError(e)
traitlets.traitlets.TraitError: The 'mem_limit' trait of a ResourceUseDisplay instance expected an int or a callable, not the str '14427566899'.

It's always a str; it doesn't matter whether I use a hardcoded number, | bc, or expr.
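
One way around the CLI always passing strings is to set the traits from a config file, where they can be converted to numbers first. A minimal sketch; the environment variable names are illustrative, not part of the project.

    # jupyter_notebook_config.py, mounted into the container
    import os

    # CLI flags arrive as strings; convert explicitly before assigning the traits.
    c.NotebookApp.ResourceUseDisplay.mem_limit = int(os.environ.get("CONTAINER_MEM_BYTES", "0"))
    c.NotebookApp.ResourceUseDisplay.cpu_limit = float(os.environ.get("CONTAINER_CPU_NUM", "1"))
    c.NotebookApp.ResourceUseDisplay.track_cpu_percent = True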

Doesn't display memory value on my jupyter notebook

I've got a multi-user environment using Jupyter Notebook on a server. This extension is not giving me the memory value used by that notebook. I've shared a screenshot.

Can you help me find out what I am missing?

(screenshot: Screen Shot 2019-04-12 at 1.21.25 PM)

JupyterLab regression

JupyterLab expects the memory usage information from NBResuse (i.e. what's waiting for it at the <base_url>/metrics endpoint) to have the JSON format specified by the following interface:

  export interface IMetricRequestResult {
    rss: number;
    limits: {
      memory?: {
        rss: number;
        warn?: number;
      };
    };
  }
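
For example, a response matching this interface would look roughly like the following (the rss value is taken from the curl output elsewhere on this page; the memory limit is illustrative):

    {
      "rss": 57704448,
      "limits": {
        "memory": {
          "rss": 1073741824
        }
      }
    }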

However, after the update to use Prometheus (0.3.4+), the response emitted "by NBResuse" (i.e. at the <base_url>/metrics endpoint) now has the same form as Prometheus's HTTP API. So JupyterLab can't read it, thinks NBResuse isn't installed, and thus no longer displays memory usage information even when NBResuse is installed.

There seems to be a fairly clear mapping between the arguments passed to the constructors of the Prometheus Gauges in the NBResuse code and the metadata portion of the Prometheus HTTP API response, but I'm not sure about the format of the HTTP response encoding the actual values of the metrics. That knowledge seems to be embedded implicitly in the JavaScript code for the NBExtension, but I still feel I need to look into it more closely to be sure.

Since this is a feature regression, I think it should be a priority to find some way to fix this as soon as possible, although it will probably require making a PR to JupyterLab. If that's the case, then we should probably hold off on making any such PR before stabilizing the "API" for NBResuse.

how to deal with pip?

As I understand it, it is not possible to install this package from pip, and this issue is quite new. How should I deal with this problem? Is it possible to install it by hand, without pip?

Not showing

jupyter                           1.0.0
jupyter-client                    6.1.11
jupyter-console                   6.0.0
jupyter-contrib-core              0.3.3
jupyter-contrib-nbextensions      0.5.1
jupyter-core                      4.7.0
jupyter-nbextensions-configurator 0.4.1
jupyter-resource-usage            0.5.1
jupyter-server                    1.2.2
jupyterlab                        3.0.5
jupyterlab-server                 2.1.2
jupyterlab-system-monitor         0.7.0
jupyterlab-topbar                 0.6.0
nbresuse                          0.4.0
{
    // System Monitor
    // jupyterlab-system-monitor:plugin
    // System Monitor
    // ********************************

    // CPU Settings
    // Settings for the CPU indicator
    "cpu": {
        "label": "CPU: "
    },

    // Memory Settings
    // Settings for the memory indicator
    "memory": {
        "label": "Mem: "
    },

    // Refresh Rate (ms)
    // Refresh Rate to sync metrics data
    "refreshRate": 5000
}

The JupyterLab UI shows no changes.
Are there any other configs needed after installing?

Resource usage of current kernel?

Hi, I just installed this extension and I already don't want to be without it anymore.

Would it be possible to add a second metric that shows only the memory usage of the currently active kernel?
This would be very useful to find out how much RAM I should request for running a certain notebook.

Adding kernel specific metrics to nbresuse

Hi All,
I have implemented a solution to capture kernel-specific Prometheus metrics and would like to contribute it back.

Please let me know if it's ok to raise a PR.

Thank you.

Add GPU usage?

I was just wondering if this is possible.

Also, I wanted to say this extension is awesome. Thank you!

Get memory limits from psutil?

We'd like to automatically extract the total memory on the system via psutil rather than set the memory limit through a config option. It would also be useful to have another metric for total memory in use on the system (along with the memory used by Jupyter). Before submitting a PR, I wanted to check if this is desirable.
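
For reference, psutil exposes both numbers directly; a tiny sketch (note the container caveat discussed in the cgroups issue above):

    import psutil

    vm = psutil.virtual_memory()
    total = vm.total  # total system memory in bytes (the proposed automatic limit)
    used = vm.used    # memory currently in use system-wide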

Add `api` in metrics url

We have seen noticeable JupyterHub performance degradation and finally tracked it down to the use of jupyterlab-statusbar in the new default notebook image. When a user's notebook server is stopped, the metrics endpoint is hit continuously if the browser is left open.

Since the metrics URL currently does not contain api, it is not guarded by JupyterHub's short circuit for inactive users; instead, the XHR requests get redirected to the HTML spawner page. If the spawner page happens to do anything fancy, such as retrieving auth_state, this becomes quite an issue when there are lots of sessions like this.

The proposed fix is to make the handler in nbresuse listen on /api/metrics instead of /metrics, add a redirect for compatibility, and patch jupyterlab-statusbar to use the new URL (see the sketch below).

(credit goes to @popcornylu for logs and performance analysis)
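
A rough sketch of the proposed change; the handler classes are placeholders standing in for the existing nbresuse handler, not the project's actual code.

    from notebook.base.handlers import IPythonHandler
    from notebook.utils import url_path_join


    class MetricsHandler(IPythonHandler):
        """Placeholder for the existing nbresuse metrics handler."""

        def get(self):
            self.finish({})


    class MetricsRedirectHandler(IPythonHandler):
        """Keep the old /metrics URL working by redirecting to /api/metrics."""

        def get(self):
            self.redirect(url_path_join(self.base_url, "api", "metrics"))


    def load_jupyter_server_extension(nbapp):
        base_url = nbapp.web_app.settings["base_url"]
        nbapp.web_app.add_handlers(
            ".*",
            [
                (url_path_join(base_url, "api", "metrics"), MetricsHandler),   # new URL
                (url_path_join(base_url, "metrics"), MetricsRedirectHandler),  # compatibility
            ],
        )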

Drop the deprecated /metrics endpoint

Originally posted by @manics in #73 (comment)

If you're renaming the package this might be a good opportunity to make any breaking changes? e.g. following up from #68 /metrics could be completely removed instead of just deprecated?

We should consider doing this for the next major release, so adding to the 0.5.0 milestone.

Jupyter Notebook (Sagemaker Instance) : Not displaying Memory

Installed nbresuse as per the following steps:

pip install nbresuse
jupyter serverextension enable --py nbresuse --sys-prefix
jupyter nbextension install --py nbresuse --sys-prefix
jupyter nbextension enable --py nbresuse --sys-prefix

But when I start the notebook, memory is not displayed on screen.
And yes, I already have psutil installed.

(screenshot)

Any help is much appreciated.

Seeking new maintainers

Hello! I unfortunately do not have any time to maintain this, so am looking for maintainers to take over. I'd like to transfer this to a different organization, and ideally remove myself from being a maintainer :)

How to access memory usage metrics with Prometheus?

I'd be interested in using this extension to pull some memory-usage-related metrics into Prometheus, but I could not find any mention in the wiki of how to do it. Could anyone add a couple of lines with instructions and the endpoint address? Thanks!

nbresuse does not work on shared host (works on my own machine though)

I used pip install nbresuse on my Mac and it works well. However, it does not work when I use an account on an AWS instance where Jupyter was installed under another account. The difference is that on the Mac, Anaconda was installed in my own directory.

What are the options to make nbresuse work for my account in an AWS instance?

Rename to jupyter-resource-usage

Now that the repo has been moved to the jupyter-server organization (jupyter-server/team-compass#3), we should be able to rename the Python package and other instances of nbresuse to jupyter-resource-usage.

And publish a new version on PyPI with the new name.

TODO
