
ai-lab's Introduction



All-in-one AI development container for rapid prototyping, compatible with the nvidia-docker GPU-accelerated container runtime as well as JupyterHub. This is designed as a lighter and more portable alternative to various cloud provider "Deep Learning Virtual Machines". Get up and running with a wide range of machine learning and deep learning tasks by pulling and running the container on your workstation, on the cloud or within JupyterHub.

What's included?

(images: included deep learning frameworks and IDE tools)

Using the AI Lab Container

This image can be used with NVIDIA GPUs on workstations, servers and cloud instances. It can also be used in JupyterHub deployments, as no additional ports are required for things like TensorBoard. Please note that the following instructions assume you already have the NVIDIA drivers and container runtime installed. If not, here are some quick instructions.

Pulling the container

docker pull nvaitc/ai-lab:20.03

Running an interactive shell (bash)

nvidia-docker run --rm -it nvaitc/ai-lab:20.03 bash

Run Jupyter Notebook

The additional command-line flags set the following options:

  • forward port 8888 to your host machine
  • mount /home/$USER on the host as the working directory (/home/jovyan inside the container)
nvidia-docker run --rm \
 -p 8888:8888 \
 -v /home/$USER:/home/jovyan \
 nvaitc/ai-lab:20.03

To run JupyterLab instead of Jupyter Notebook, replace tree with lab in the browser address bar.

By default, the Jupyter interface has a blank password. To set your own password, pass it via the NB_PASSWD environment variable as follows:

nvidia-docker run --rm \
 -p 8888:8888 \
 -v /home/$USER:/home/jovyan \
 -e NB_PASSWD='mypassword' \
 nvaitc/ai-lab:20.03

Run Batch Job

It is also perfectly possible to run a batch job with this container, be it on a workstation or as part of a larger cluster with a scheduler that can schedule Docker containers.

nvidia-docker run --rm nvaitc/ai-lab:20.03 bash -c 'echo "Hello world!" && python3 script.py'

Additional Instructions

For extended instructions, please take a look at: INSTRUCTIONS.md.

INSTRUCTIONS.md contains full instructions and addresses common questions on deploying to public cloud (GCP/AWS), as well as using PyTorch DataLoader or troubleshooting permission issues with some setups.

If you have any ideas or suggestions, please feel free to open an issue.

FAQ

1. Can I modify/build this container myself?

Sure! The Dockerfile is provided in this repository. All you need is a fast internet connection and about an hour to build the container from scratch.

Should you only require some extra packages, you can build your own Docker image using nvaitc/ai-lab as the base image.
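For example, a minimal Dockerfile extending the image could look like the following sketch. It assumes the usual Jupyter Docker Stacks conventions (root for apt operations, switching back to the notebook user afterwards); the graphviz and mxnet packages are placeholders, not recommendations:

```dockerfile
# Sketch: extend ai-lab with extra apt and pip packages
FROM nvaitc/ai-lab:20.03

# apt operations need root
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends graphviz && \
    rm -rf /var/lib/apt/lists/*

# switch back to the unprivileged notebook user (Jupyter Docker Stacks convention)
USER $NB_UID
RUN pip install --no-cache-dir mxnet
```

Build it with docker build -t my-ai-lab . and run it exactly like the base image.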

For a detailed guide, check out BUILD.md.

2. Do you support MXNet/some-package?

See Question 1 above for how to add MXNet or another package to the container. We chose not to distribute MXNet (and similar packages) with the container because they are less widely used, large in size, and easily installed with pip since the environment is already properly configured. If you have a suggestion for a package that you would like to see added, open an issue.

3. Do you support multi-node or multi-GPU tasks?

Multi-GPU has been tested with tf.distribute and Horovod, and it works as expected. Multi-node has not been tested.

4. Can I get hardware accelerated GUI (OpenGL) applications?

Yes! Be sure to pull the vnc variant of the container (e.g. nvaitc/ai-lab:20.03-vnc) and use the "New" menu in Jupyter Notebook to launch a new VNC Desktop, which gives you a virtual desktop interface. You also need to allow the container to access your host X server (this may be a security concern for some people):

xhost +local:root
nvidia-docker run --rm \
 -e "DISPLAY" \
 -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
 -p 8888:8888 \
 -v /home/$USER:/home/jovyan \
 nvaitc/ai-lab:20.03-vnc

Next, start your application with vglrun prepended to the application command (e.g. vglrun glxgears). You can see a video of SuperTuxKart running in the VNC desktop here.

5. How does this contrast with NGC containers?

NVIDIA GPU Cloud (NGC) features NVIDIA tuned, tested, certified, and maintained containers for deep learning and HPC frameworks that take full advantage of NVIDIA GPUs on supported systems, such as NVIDIA DGX products. We recommend the use of NGC containers for performance critical and production workloads.

The AI Lab container is designed for students and researchers. It aims to create a frictionless experience (by including all the frameworks) during the initial prototyping and exploration phase, with a focus on fast iteration and feedback rather than on committing early to specific approaches or frameworks. This is not an official NVIDIA product!

If you would like to use NGC containers in an AI-Lab-like container, there is an example of how you can build one yourself: take a look at tf-amp.Dockerfile. Do note that you may not distribute derivative images of NGC containers on a public Docker registry.

6. What GPUs do you support?

The container supports GPUs of compute capability 6.0, 6.1, 7.0 and 7.5:

  • Pascal (P100, GTX 10-series)
  • Volta (V100, Titan V)
  • Turing (T4, RTX 20-series)

7. Any detailed system requirements?

  1. Ubuntu 18.04+, or a close derivative distro
  2. NVIDIA drivers (>=418, or >=410 Tesla-ready driver)
  3. NVIDIA container runtime (nvidia-docker)
  4. NVIDIA Pascal, Volta or Turing GPU
    • If you have a GTX 10-series or newer GPU, you're fine
    • K80 and GTX 9-series cards are not supported

Support

  • Core Maintainer: Timothy Liu (tlkh)
  • This is not an official NVIDIA product!
  • The website, its software and all content found on it are provided on an “as is” and “as available” basis. NVIDIA/NVAITC does not give any warranties, whether express or implied, as to the suitability or usability of the website, its software or any of its content. NVIDIA/NVAITC will not be liable for any loss, whether such loss is direct, indirect, special or consequential, suffered by any party as a result of their use of the libraries or content. Any usage of the libraries is done at the user’s own risk and the user will be solely responsible for any damage to any computer system or loss of data that results from such activities.
  • Please open an issue if you encounter problems or have a feature request

Adapted from the Jupyter Docker Stacks


ai-lab's People

Contributors

imgbotapp, tlkh, traceto-tze, tzeusy


ai-lab's Issues

K80 GPUs are ignored

Describe the bug
For 19.09, TensorFlow appears to ignore K80 GPUs because they have a compute capability of 3.7. Specifically, the message returned by TensorFlow is:

Ignoring visible gpu device (device: 0, name: Tesla K80, compute capability: 3.7) with Cuda compute capability 3.7. The minimum required Cuda capability is 6.0.

To Reproduce

  1. Pull image down.
  2. Start container.
3. From a shell, run python3
  4. Once inside python session, run import tensorflow as tf and then tf.test.gpu_device_name()

Expected behavior
Expected it to identify the K80s and attach to them, especially as the TensorFlow GPU page lists the hardware requirement as only compute capability 3.5 or above (https://www.tensorflow.org/install/gpu).

Screenshots
N/A

Client (please complete the following information):

  • OS: Linux
  • Browser: N/A

Container Version
19.09

Additional context
I'm not sure whether this is a TensorFlow issue or an issue specific to this container, since TensorFlow does not seem to indicate that support for K80s has been removed.

Have a generator.py to generate Dockerfiles from blocks

Is your feature request related to a problem? Please describe.

Have a generator.py to generate Dockerfiles from blocks.

Describe the solution you'd like

  • decompose Dockerfiles into blocks (e.g. TF block, PyTorch block)
  • have a simple python script to generate Dockerfiles with specific settings
    • could have a NGC specific shim as well?
  • build all, and push to Docker Hub

Describe alternatives you've considered

NIL

Additional context

See: https://github.com/ufoym/deepo#build-your-own-customized-image-with-lego-like-modules
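As a rough illustration, such a generator could be as simple as concatenating named Dockerfile fragments. The block names and contents below are purely hypothetical and do not reflect the actual repository layout:

```python
# Hypothetical sketch: assemble a Dockerfile from named blocks.
BLOCKS = {
    "base":    "FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04\n",
    "tf":      "RUN pip install --no-cache-dir tensorflow-gpu\n",
    "pytorch": "RUN pip install --no-cache-dir torch torchvision\n",
}

def generate_dockerfile(block_names):
    """Concatenate the requested blocks, always starting from 'base'."""
    names = ["base"] + [n for n in block_names if n != "base"]
    unknown = [n for n in names if n not in BLOCKS]
    if unknown:
        raise ValueError(f"unknown blocks: {unknown}")
    return "".join(BLOCKS[n] for n in names)

if __name__ == "__main__":
    # write a Dockerfile containing the base, TF and PyTorch blocks
    print(generate_dockerfile(["tf", "pytorch"]))
```

An NGC-specific shim could then be a block that swaps out the "base" entry.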

WS-2019-0047 Medium Severity Vulnerability detected by WhiteSource

WS-2019-0047 - Medium Severity Vulnerability

Vulnerable Library - tar-2.2.1.tgz

tar for node

Library home page: http://registry.npmjs.org/tar/-/tar-2.2.1.tgz

Path to dependency file: /ai-lab/package.json

Path to vulnerable library: /tmp/git/ai-lab/node_modules/tar/package.json

Dependency Hierarchy:

  • cli-0.5.0.tgz (Root Library)
    • application-manager-0.5.0.tgz
      • electron-rebuild-1.8.4.tgz
        • node-gyp-3.8.0.tgz
          • tar-2.2.1.tgz (Vulnerable Library)

Found in HEAD commit: 082fbcbc8c2ef8fc2f905bd9e108a64bbeabbbfe

Vulnerability Details

Versions of node-tar prior to 4.4.2 are vulnerable to Arbitrary File Overwrite. Extracting tarballs containing a hardlink to a file that already exists in the system, and a file that matches the hardlink will overwrite the system's file with the contents of the extracted file.

Publish Date: 2019-04-05

URL: WS-2019-0047

CVSS 2 Score Details (5.0)

Base Score Metrics not available

Suggested Fix

Type: Upgrade version

Origin: https://www.npmjs.com/advisories/803

Release Date: 2019-04-05

Fix Resolution: 4.4.2


Step up your Open Source Security Game with WhiteSource here

Tornado gives WebSocketClosedError

Describe the bug

Tornado gives WebSocketClosedError constantly but the notebook environment seems to be fine. Strange behaviour.

[E 06:36:44.455 NotebookApp] Exception in callback <bound method WebSocketMixin.send_ping of ZMQChannelsHandler(58c3a0b8-fed0-4680-897a-833a1b543566)>
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.6/site-packages/tornado/ioloop.py", line 907, in _run
        return self.callback()
      File "/opt/conda/lib/python3.6/site-packages/notebook/base/zmqhandlers.py", line 189, in send_ping
        self.ping(b'')
      File "/opt/conda/lib/python3.6/site-packages/tornado/websocket.py", line 447, in ping
        raise WebSocketClosedError()
    tornado.websocket.WebSocketClosedError

To Reproduce
Steps to reproduce the behavior:

Use container 0.7-test

Expected behavior

Not give the error message


Client (please complete the following information):

  • OS: Ubuntu 18.04
  • Browser: Chrome

Container Version

nvaitc/ai-lab:0.7-test


ImportError: Extension horovod.torch has not been built

I ran the following command to test Horovod with the PyTorch framework, and got the following error:
jovyan@560c5fd869da:~$ mpirun -np 1 -bind-to none -map-by slot -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH -mca pml ob1 -mca btl ^openib python pytorch_mnist.py

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/horovod/torch/__init__.py", line 27, in <module>
    __file__, 'mpi_lib_v2')
  File "/opt/conda/lib/python3.6/site-packages/horovod/common/util.py", line 50, in check_extension
    'Horovod with %s=1 to debug the build error.' % (ext_name, ext_env_var))
ImportError: Extension horovod.torch has not been built. If this is not expected, reinstall Horovod with HOROVOD_WITH_PYTORCH=1 to debug the build error.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytorch_mnist.py", line 8, in <module>
    import horovod.torch as hvd
  File "/opt/conda/lib/python3.6/site-packages/horovod/torch/__init__.py", line 30, in <module>
    __file__, 'mpi_lib', '_mpi_lib')
  File "/opt/conda/lib/python3.6/site-packages/horovod/common/util.py", line 50, in check_extension
    'Horovod with %s=1 to debug the build error.' % (ext_name, ext_env_var))
ImportError: Extension horovod.torch has not been built. If this is not expected, reinstall Horovod with HOROVOD_WITH_PYTORCH=1 to debug the build error.

Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.


mpirun.real detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

Process name: [[36009,1],0]
Exit code: 1

Yet, PyTorch is installed:

jovyan@560c5fd869da:~$ conda list |grep torch
pytorch 1.3.0 py3.6_cuda10.0.130_cudnn7.6.3_0 pytorch
torchtext 0.4.0 pypi_0 pypi
torchvision 0.4.1 py36_cu100 pytorch

CVE-2019-18874 (High) detected in psutil-5.6.5.tar.gz

CVE-2019-18874 - High Severity Vulnerability

Vulnerable Library - psutil-5.6.5.tar.gz

Cross-platform lib for process and system monitoring in Python.

Library home page: https://files.pythonhosted.org/packages/03/9a/95c4b3d0424426e5fd94b5302ff74cea44d5d4f53466e1228ac8e73e14b4/psutil-5.6.5.tar.gz

Path to dependency file: /tmp/ws-scm/ai-lab/src/requirements.txt

Path to vulnerable library: /ai-lab/src/requirements.txt

Dependency Hierarchy:

  • psutil-5.6.5.tar.gz (Vulnerable Library)

Found in HEAD commit: e3e518ee3838cd1daf07f6023d0beed8d85ea2e1

Vulnerability Details

psutil (aka python-psutil) through 5.6.5 can have a double free. This occurs because of refcount mishandling within a while or for loop that converts system data into a Python object.

Publish Date: 2019-11-12

URL: CVE-2019-18874

CVSS 3 Score Details (7.5)

Base Score Metrics: not available

For more information on CVSS3 Scores, click here.



Need to increase shm size when using PyTorch DataLoader

Describe the bug
I was using Google Cloud Platform. When using a DataLoader with multiprocessing=True, worker threads were killed with a bus error because Docker limits shared-memory resources by default.

Solution
When opening jupyter notebook in the shell, run
sudo nvidia-docker run --shm-size=1g --rm -p 8888:8888 -v /home/$USER:/home/jovyan nvaitc/ai-lab
instead of
sudo nvidia-docker run --rm -p 8888:8888 -v /home/$USER:/home/jovyan nvaitc/ai-lab

The new command assigns 1 GB of shared memory to the container, which allows multiprocessing to work.

CVE-2020-5310 (High) detected in Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl

CVE-2020-5310 - High Severity Vulnerability

Vulnerable Library - Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl

Python Imaging Library (Fork)

Library home page: https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl

Path to dependency file: /tmp/ws-scm/ai-lab/src/requirements.txt

Path to vulnerable library: /ai-lab/src/requirements.txt

Dependency Hierarchy:

  • Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl (Vulnerable Library)

Found in HEAD commit: dab8b0d6173a6e6c0ded3fd56a59526e1effc0d8

Vulnerability Details

libImaging/TiffDecode.c in Pillow before 6.2.2 has a TIFF decoding integer overflow, related to realloc.

Publish Date: 2020-01-03

URL: CVE-2020-5310

CVSS 3 Score Details (8.8)

Base Score Metrics:

  • Exploitability Metrics:
    • Attack Vector: Network
    • Attack Complexity: Low
    • Privileges Required: None
    • User Interaction: Required
    • Scope: Unchanged
  • Impact Metrics:
    • Confidentiality Impact: High
    • Integrity Impact: High
    • Availability Impact: High


Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5310

Release Date: 2020-01-03

Fix Resolution: 7.0.0



CVE-2018-16487 High Severity Vulnerability detected by WhiteSource

CVE-2018-16487 - High Severity Vulnerability

Vulnerable Library - lodash-3.10.1.tgz

The modern build of lodash modular utilities.

Library home page: http://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz

Path to dependency file: /ai-lab/package.json

Path to vulnerable library: /tmp/git/ai-lab/node_modules/json-rpc2/node_modules/lodash/package.json

Dependency Hierarchy:

  • @theia/go-0.3.17.tgz (Root Library)
    • go-language-server-0.1.7.tgz
      • json-rpc2-1.0.2.tgz
        • lodash-3.10.1.tgz (Vulnerable Library)

Found in HEAD commit: 082fbcbc8c2ef8fc2f905bd9e108a64bbeabbbfe

Vulnerability Details

A prototype pollution vulnerability was found in lodash <4.17.11 where the functions merge, mergeWith, and defaultsDeep can be tricked into adding or modifying properties of Object.prototype.

Publish Date: 2019-02-01

URL: CVE-2018-16487

CVSS 3 Score Details (9.8)

Base Score Metrics:

  • Exploitability Metrics:
    • Attack Vector: Network
    • Attack Complexity: Low
    • Privileges Required: None
    • User Interaction: None
    • Scope: Unchanged
  • Impact Metrics:
    • Confidentiality Impact: High
    • Integrity Impact: High
    • Availability Impact: High


Suggested Fix

Type: Upgrade version

Origin: https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16487

Release Date: 2019-02-01

Fix Resolution: 4.17.11



vnc error

docker run --runtime=nvidia --ipc=host -d --rm -p 10000:8888 -v /home/$USER:/home/jovyan nvaitc/ai-lab:20.06-vnc
docker run --runtime=nvidia -e "DISPLAY" -v /tmp/.X11-unix:/tmp/.X11-unix:rw --ipc=host -d --rm -p 10000:8888 -v /home/$USER:/home/jovyan nvaitc/ai-lab:20.06-vnc

I tried both of the above docker run commands, but VNC does not work.

(Screenshot: 2020-06-25 12-51-38)

How to render with GPUs in VNC Desktop

Thanks for sharing the code.
I am running the container on a remote host, and I want to display the glxspheres64 window with VirtualGL rendering on the GPU.
I start the Docker container with:
nvidia-docker run -d -p 8888:8888 -v /home/$USER/work/work1:/home/jovyan -e JUPYTER_ENABLE_LAB=yes nvaitc/ai-lab:19.09-vnc
Then I start the VNC desktop and run the following commands:
/opt/Turbovnc/bin/vncserver :5
vglrun -d :5.0 ./glxspheres64
But as the attached figure shows, the OpenGL renderer is llvmpipe, not the GPU.
How can I render on the GPU and display the window?
Thank you.

CVE-2020-5312 (High) detected in Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl

CVE-2020-5312 - High Severity Vulnerability

Vulnerable Library - Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl

Python Imaging Library (Fork)

Library home page: https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl

Path to dependency file: /tmp/ws-scm/ai-lab/src/requirements.txt

Path to vulnerable library: /ai-lab/src/requirements.txt

Dependency Hierarchy:

  • Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl (Vulnerable Library)

Found in HEAD commit: dab8b0d6173a6e6c0ded3fd56a59526e1effc0d8

Vulnerability Details

libImaging/PcxDecode.c in Pillow before 6.2.2 has a PCX P mode buffer overflow.

Publish Date: 2020-01-03

URL: CVE-2020-5312

CVSS 3 Score Details (8.8)

Base Score Metrics:

  • Exploitability Metrics:
    • Attack Vector: Network
    • Attack Complexity: Low
    • Privileges Required: None
    • User Interaction: Required
    • Scope: Unchanged
  • Impact Metrics:
    • Confidentiality Impact: High
    • Integrity Impact: High
    • Availability Impact: High


Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5312

Release Date: 2020-01-03

Fix Resolution: 7.0.0



Add instructions for accessing notebook via other ports

When you are using the cloud, you will probably need to access Jupyter Notebook via a port other than 8888.

By default, the command
sudo nvidia-docker run --rm -p 8888:8888 -v /home/$USER:/home/jovyan nvaitc/ai-lab
lets you access Jupyter Notebook through port 8888.

If you wish to use another port, add it to your firewall rules and then run
sudo nvidia-docker run --rm -p [new port]:8888 -v /home/$USER:/home/jovyan nvaitc/ai-lab

CVE-2017-16137 Medium Severity Vulnerability detected by WhiteSource

CVE-2017-16137 - Medium Severity Vulnerability

Vulnerable Library - debug-0.8.1.tgz

small debugging utility

Library home page: http://registry.npmjs.org/debug/-/debug-0.8.1.tgz

Path to dependency file: /ai-lab/package.json

Path to vulnerable library: /tmp/git/ai-lab/node_modules/debug/package.json

Dependency Hierarchy:

  • core-0.5.0.tgz (Root Library)
    • application-package-0.5.0.tgz
      • changes-stream-2.2.0.tgz
        • debug-0.8.1.tgz (Vulnerable Library)

Found in HEAD commit: 082fbcbc8c2ef8fc2f905bd9e108a64bbeabbbfe

Vulnerability Details

The debug module is vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter. It takes around 50k characters to block for 2 seconds making this a low severity issue.

Publish Date: 2018-06-07

URL: CVE-2017-16137

CVSS 3 Score Details (5.3)

Base Score Metrics:

  • Exploitability Metrics:
    • Attack Vector: Network
    • Attack Complexity: Low
    • Privileges Required: None
    • User Interaction: None
    • Scope: Unchanged
  • Impact Metrics:
    • Confidentiality Impact: None
    • Integrity Impact: None
    • Availability Impact: Low


Suggested Fix

Type: Upgrade version

Origin: https://nodesecurity.io/advisories/534

Release Date: 2017-09-27

Fix Resolution: Version 2.x.x: Update to version 2.6.9 or later. Version 3.x.x: Update to version 3.1.0 or later.



CVE-2020-5311 (High) detected in Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl

CVE-2020-5311 - High Severity Vulnerability

Vulnerable Library - Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl

Python Imaging Library (Fork)

Library home page: https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl

Path to dependency file: /tmp/ws-scm/ai-lab/src/requirements.txt

Path to vulnerable library: /ai-lab/src/requirements.txt

Dependency Hierarchy:

  • Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl (Vulnerable Library)

Found in HEAD commit: dab8b0d6173a6e6c0ded3fd56a59526e1effc0d8

Vulnerability Details

libImaging/SgiRleDecode.c in Pillow before 6.2.2 has an SGI buffer overflow.

Publish Date: 2020-01-03

URL: CVE-2020-5311

CVSS 3 Score Details (8.8)

Base Score Metrics:

  • Exploitability Metrics:
    • Attack Vector: Network
    • Attack Complexity: Low
    • Privileges Required: None
    • User Interaction: Required
    • Scope: Unchanged
  • Impact Metrics:
    • Confidentiality Impact: High
    • Integrity Impact: High
    • Availability Impact: High


Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5311

Release Date: 2020-01-03

Fix Resolution: 7.0.0



TensorBoard magic does not work

Describe the bug

It is possible to run TensorBoard from within Jupyter (although it is just an iframe).

# magic
%load_ext tensorboard.notebook
%tensorboard --logdir {logs_base_dir}

Additional information on usage is available: https://github.com/tensorflow/tensorboard/blob/master/docs/r2/tensorboard_in_notebooks.ipynb

However, it fails, because the embedded TensorBoard tries to connect to localhost from the browser.

To Reproduce
Steps to reproduce the behavior:

%load_ext tensorboard.notebook
%tensorboard --logdir {logs_base_dir}

Expected behavior

TensorBoard to show nicely within the Jupyter Notebook


Client (please complete the following information):

  • OS: macOS/Ubuntu
  • Browser: Chrome

Container Version

0.7-test


[Solved] Unable to run autokeras

Describe the bug

When trying autokeras simple example code (https://github.com/jhfjhfj1/autokeras/blob/master/examples/a_simple_example/mnist.py), the kernel dies.

Most likely due to package version mismatches.

To Reproduce

Install autokeras with the following (to avoid reinstalling many packages including TF):

  1. pip install lightgbm imageio GPUtil
  2. pip install autokeras --no-deps

Installing autokeras with dependencies does not seem to resolve the issue.

Expected behavior

Expected it to work

Screenshots
NIL

Client (please complete the following information):

  • OS: Ubuntu
  • Browser Chrome

Container Version 0.1


CVE-2018-3721 Medium Severity Vulnerability detected by WhiteSource

CVE-2018-3721 - Medium Severity Vulnerability

Vulnerable Library - lodash-3.10.1.tgz

The modern build of lodash modular utilities.

Library home page: http://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz

Path to dependency file: /ai-lab/package.json

Path to vulnerable library: /tmp/git/ai-lab/node_modules/json-rpc2/node_modules/lodash/package.json

Dependency Hierarchy:

  • @theia/go-0.3.17.tgz (Root Library)
    • go-language-server-0.1.7.tgz
      • json-rpc2-1.0.2.tgz
        • lodash-3.10.1.tgz (Vulnerable Library)

Found in HEAD commit: 082fbcbc8c2ef8fc2f905bd9e108a64bbeabbbfe

Vulnerability Details

lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via proto, causing the addition or modification of an existing property that will exist on all objects.

Publish Date: 2018-06-07

URL: CVE-2018-3721

CVSS 3 Score Details (6.5)

Base Score Metrics:

  • Exploitability Metrics:
    • Attack Vector: Network
    • Attack Complexity: Low
    • Privileges Required: Low
    • User Interaction: None
    • Scope: Unchanged
  • Impact Metrics:
    • Confidentiality Impact: None
    • Integrity Impact: High
    • Availability Impact: None


Suggested Fix

Type: Upgrade version

Origin: https://nvd.nist.gov/vuln/detail/CVE-2018-3721

Release Date: 2018-06-07

Fix Resolution: 4.17.5



CVE-2019-0542 High Severity Vulnerability detected by WhiteSource

CVE-2019-0542 - High Severity Vulnerability

Vulnerable Library - xterm-3.9.2.tgz

Full xterm terminal, in your browser

Library home page: https://registry.npmjs.org/xterm/-/xterm-3.9.2.tgz

Path to dependency file: /ai-lab/package.json

Path to vulnerable library: /tmp/git/ai-lab/node_modules/xterm/package.json

Dependency Hierarchy:

  • terminal-0.5.0.tgz (Root Library)
    • xterm-3.9.2.tgz (Vulnerable Library)

Found in HEAD commit: 082fbcbc8c2ef8fc2f905bd9e108a64bbeabbbfe

Vulnerability Details

A remote code execution vulnerability exists in Xterm.js when the component mishandles special characters, aka "Xterm Remote Code Execution Vulnerability." This affects xterm.js.

Publish Date: 2019-01-09

URL: CVE-2019-0542

CVSS 3 Score Details (9.8)

Base Score Metrics:

  • Exploitability Metrics:
    • Attack Vector: Network
    • Attack Complexity: Low
    • Privileges Required: None
    • User Interaction: None
    • Scope: Unchanged
  • Impact Metrics:
    • Confidentiality Impact: High
    • Integrity Impact: High
    • Availability Impact: High


Suggested Fix

Type: Upgrade version

Origin: https://github.com/xtermjs/xterm.js/releases

Release Date: 2019-01-09

Fix Resolution: 3.10.1



WS-2019-0032 Medium Severity Vulnerability detected by WhiteSource

WS-2019-0032 - Medium Severity Vulnerability

Vulnerable Library - js-yaml-3.7.0.tgz

YAML 1.2 parser and serializer

Library home page: https://registry.npmjs.org/js-yaml/-/js-yaml-3.7.0.tgz

Path to dependency file: /ai-lab/package.json

Path to vulnerable library: /tmp/git/ai-lab/node_modules/js-yaml/package.json

Dependency Hierarchy:

  • cli-0.5.0.tgz (Root Library)
    • application-manager-0.5.0.tgz
      • css-loader-0.28.11.tgz
        • cssnano-3.10.0.tgz
          • postcss-svgo-2.1.6.tgz
            • svgo-0.7.2.tgz
              • js-yaml-3.7.0.tgz (Vulnerable Library)

Found in HEAD commit: 082fbcbc8c2ef8fc2f905bd9e108a64bbeabbbfe

Vulnerability Details

Versions js-yaml prior to 3.13.0 are vulnerable to Denial of Service. By parsing a carefully-crafted YAML file, the node process stalls and may exhaust system resources leading to a Denial of Service.

Publish Date: 2019-03-26

URL: WS-2019-0032

CVSS 2 Score Details (5.0)

Base Score Metrics not available

Suggested Fix

Type: Upgrade version

Origin: https://www.npmjs.com/advisories/788/versions

Release Date: 2019-03-26

Fix Resolution: 3.13.0



Soliciting ideas to reduce size of Docker image

Is your feature request related to a problem? Please describe.

We've already brought the size of our Docker image down from over 7 GB to just over 5 GB (the compressed image size as reported by Docker Hub) by eliminating redundant file operations, merging layers, and so on.

However, we're still on the lookout for ideas on how we can make the image smaller.

Describe alternatives you've considered

We based the container on the devel version of the CUDA container, which adds almost 1 GB to the image size. We'd like to take that out, but we need it to compile RAPIDS, and we haven't gotten a multi-stage build to work yet.

WS-2019-0019 Medium Severity Vulnerability detected by WhiteSource

WS-2019-0019 - Medium Severity Vulnerability

Vulnerable Library - braces-1.8.5.tgz

Fastest brace expansion for node.js, with the most complete support for the Bash 4.3 braces specification.

Library home page: https://registry.npmjs.org/braces/-/braces-1.8.5.tgz

Path to dependency file: /ai-lab/package.json

Path to vulnerable library: /tmp/git/ai-lab/node_modules/jscodeshift/node_modules/braces/package.json

Dependency Hierarchy:

  • cli-0.5.0.tgz (Root Library)
    • application-manager-0.5.0.tgz
      • webpack-cli-2.0.12.tgz
        • jscodeshift-0.5.1.tgz
          • micromatch-2.3.11.tgz
            • braces-1.8.5.tgz (Vulnerable Library)

Found in HEAD commit: 082fbcbc8c2ef8fc2f905bd9e108a64bbeabbbfe

Vulnerability Details

Versions of braces prior to 2.3.1 are vulnerable to Regular Expression Denial of Service (ReDoS): untrusted input may cause catastrophic backtracking while matching regular expressions, making the application unresponsive and leading to Denial of Service.

Publish Date: 2019-03-25

URL: WS-2019-0019

CVSS 2 Score Details (5.0)

Base Score Metrics not available

Suggested Fix

Type: Upgrade version

Origin: https://www.npmjs.com/advisories/786

Release Date: 2019-02-21

Fix Resolution: 2.3.1



Install APEX for better PyTorch performance

Is your feature request related to a problem? Please describe.

Install APEX for better PyTorch performance


This suggestion pops up in AutoKeras
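For reference, the install recipe from the upstream NVIDIA/apex README (at the time of writing; flags may change upstream) builds the fused CUDA kernels and therefore needs the nvcc toolchain present in the image:

```
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir \
  --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```

A Python-only build (`pip install -v --no-cache-dir ./`) also works but skips the fused kernels that give most of the performance benefit.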

Open MPI not found in output of mpirun --version.

I run the following command to test horovod:
horovodrun -np 4 -H localhost:4 python keras_mnist.py

The following error occurs:
2019-11-15 08:51:09.228813: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
Open MPI not found in output of mpirun --version.
Traceback (most recent call last):
File "/opt/conda/bin/horovodrun", line 21, in <module>
run.run()
File "/opt/conda/lib/python3.6/site-packages/horovod/run/run.py", line 717, in run
mpi_run(settings, common_intfs, env)
File "/opt/conda/lib/python3.6/site-packages/horovod/run/mpi_run.py", line 58, in mpi_run
'horovodrun convenience script does not find an installed OpenMPI.\n\n'
Exception: horovodrun convenience script does not find an installed OpenMPI.

Choose one of:

  1. Install Open MPI 4.0.0+ and re-install Horovod (use --no-cache-dir pip option).
  2. Run distributed training script using the standard way provided by your MPI distribution (usually mpirun, srun, or jsrun).
  3. Use built-in gloo option (horovodrun --gloo ...).

=============================================
!mpirun --version
mpirun.real (OpenRTE) 4.0.1

Report bugs to http://www.open-mpi.org/community/help/
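The root cause here is string matching: per the error message, `horovodrun` looks for the substring "Open MPI" in the output of `mpirun --version`. In this container `mpirun` is a wrapper, so the version line reads `mpirun.real (OpenRTE) 4.0.1` and the check fails even though Open MPI is installed. A quick simulation of that check, plus the `--gloo` workaround suggested by the error message itself:

```shell
# Simulate horovodrun's detection against the version line seen above.
version_line="mpirun.real (OpenRTE) 4.0.1"
if echo "$version_line" | grep -q "Open MPI"; then
  echo "detected"
else
  echo "not detected"
fi

# Workaround 3 from the error message: skip MPI detection entirely
# and use the built-in Gloo controller instead.
# horovodrun --gloo -np 4 -H localhost:4 python keras_mnist.py
```

Running the detection snippet prints "not detected", matching the behaviour reported above.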

RAPIDS is broken

Describe the bug
RAPIDS isn't working in ai-lab:20.03 or 20.06.

To Reproduce
import cudf

Expected behavior
no error

Screenshots

import cudf

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-1-e13365c50bc4> in <module>
----> 1 import cudf

/opt/conda/lib/python3.6/site-packages/cudf/__init__.py in <module>
      3 import cupy
      4 
----> 5 import rmm
      6 
      7 from cudf import core, datasets

/opt/conda/lib/python3.6/site-packages/rmm/__init__.py in <module>
     15 import weakref
     16 
---> 17 from rmm.rmm import (
     18     RMMError,
     19     _finalize,

/opt/conda/lib/python3.6/site-packages/rmm/rmm.py in <module>
     17 
     18 import numpy as np
---> 19 from numba import cuda
     20 
     21 import rmm._lib as librmm

/opt/conda/lib/python3.6/site-packages/numba/__init__.py in <module>
    296 
    297 # Initialize typed containers
--> 298 import numba.typed

/opt/conda/lib/python3.6/site-packages/numba/typed/__init__.py in <module>
----> 1 from .typeddict import Dict
      2 from .typedlist import List

/opt/conda/lib/python3.6/site-packages/numba/typed/typeddict.py in <module>
     20 
     21 
---> 22 @njit
     23 def _make_dict(keyty, valty):
     24     return dictobject._as_meminfo(dictobject.new_dict(keyty, valty))

/opt/conda/lib/python3.6/site-packages/numba/core/decorators.py in njit(*args, **kws)
    234         warnings.warn('forceobj is set for njit and is ignored', RuntimeWarning)
    235     kws.update({'nopython': True})
--> 236     return jit(*args, **kws)
    237 
    238 

/opt/conda/lib/python3.6/site-packages/numba/core/decorators.py in jit(signature_or_function, locals, target, cache, pipeline_class, boundscheck, **options)
    171                    targetoptions=options, **dispatcher_args)
    172     if pyfunc is not None:
--> 173         return wrapper(pyfunc)
    174     else:
    175         return wrapper

/opt/conda/lib/python3.6/site-packages/numba/core/decorators.py in wrapper(func)
    187         disp = dispatcher(py_func=func, locals=locals,
    188                           targetoptions=targetoptions,
--> 189                           **dispatcher_args)
    190         if cache:
    191             disp.enable_caching()

/opt/conda/lib/python3.6/site-packages/numba/core/dispatcher.py in __init__(self, py_func, locals, targetoptions, impl_kind, pipeline_class)
    668         """
    669         self.typingctx = self.targetdescr.typing_context
--> 670         self.targetctx = self.targetdescr.target_context
    671 
    672         pysig = utils.pysignature(py_func)

/opt/conda/lib/python3.6/site-packages/numba/core/registry.py in target_context(self)
     45             return nested
     46         else:
---> 47             return self._toplevel_target_context
     48 
     49     @property

/opt/conda/lib/python3.6/site-packages/numba/core/utils.py in __get__(self, instance, type)
    329         if instance is None:
    330             return self
--> 331         res = instance.__dict__[self.name] = self.func(instance)
    332         return res
    333 

/opt/conda/lib/python3.6/site-packages/numba/core/registry.py in _toplevel_target_context(self)
     29     def _toplevel_target_context(self):
     30         # Lazily-initialized top-level target context, for all threads
---> 31         return cpu.CPUContext(self.typing_context)
     32 
     33     @utils.cached_property

/opt/conda/lib/python3.6/site-packages/numba/core/base.py in __init__(self, typing_context)
    257 
    258         # Initialize
--> 259         self.init()
    260 
    261     def init(self):

/opt/conda/lib/python3.6/site-packages/numba/core/compiler_lock.py in _acquire_compile_lock(*args, **kwargs)
     30         def _acquire_compile_lock(*args, **kwargs):
     31             with self:
---> 32                 return func(*args, **kwargs)
     33         return _acquire_compile_lock
     34 

/opt/conda/lib/python3.6/site-packages/numba/core/cpu.py in init(self)
     45     def init(self):
     46         self.is32bit = (utils.MACHINE_BITS == 32)
---> 47         self._internal_codegen = codegen.JITCPUCodegen("numba.exec")
     48 
     49         # Add ARM ABI functions from libgcc_s

/opt/conda/lib/python3.6/site-packages/numba/core/codegen.py in __init__(self, module_name)
    643         self._llvm_module.name = "global_codegen_module"
    644         self._rtlinker = RuntimeLinker()
--> 645         self._init(self._llvm_module)
    646 
    647     def _init(self, llvm_module):

/opt/conda/lib/python3.6/site-packages/numba/core/codegen.py in _init(self, llvm_module)
    652         self._tm_features = self._customize_tm_features()
    653         self._customize_tm_options(tm_options)
--> 654         tm = target.create_target_machine(**tm_options)
    655         engine = ll.create_mcjit_compiler(llvm_module, tm)
    656 

TypeError: create_target_machine() got an unexpected keyword argument 'jitdebug'

Client (please complete the following information):

  • CentOS/Docker
  • Browser: Chrome

Container Version
20.03 and 20.06

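The traceback bottoms out in llvmlite's `create_target_machine`, which suggests the installed `numba` and `llvmlite` versions are out of sync: numba is passing a `jitdebug` keyword that the installed llvmlite no longer accepts. A first diagnostic and one plausible recovery, assuming conda is managing the environment as it does in this image:

```
# Check which versions are actually installed inside the container.
python3 -c "import numba, llvmlite; print(numba.__version__, llvmlite.__version__)"

# Reinstall the pair and let the solver pick mutually compatible versions.
conda install -c conda-forge numba llvmlite
```

If the versions were pinned independently at image build time (e.g. one via pip and one via conda), that would explain how the mismatch got baked into both 20.03 and 20.06.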

Jupyter token not displayed / password needed

Describe the bug
The Jupyter CLI output doesn't show a token, so I can't log in to Jupyter Notebook (it asks for a password).

To Reproduce

docker run \
	--rm \
	-p 8888:8888 \
	-v `pwd`/$aiws:/home/jovyan/workspace \
	--gpus all \
	nvaitc/ai-lab:19.11-tf2-vnc

Expected behavior
The cli should be showing a token like http://127.0.0.1:8888/?token=d82884987e890f3d4698ab5565b802adbb35c8b8d0287d6a. The screenshots at https://github.com/NVAITC/ai-lab/blob/master/INSTRUCTIONS.md also show tokens.

Screenshots

Container must be run with group "root" to update passwd file
Executing the command: jupyter notebook
[I 07:34:57.049 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret
[I 07:34:57.688 NotebookApp] [jupyter_nbextensions_configurator] enabled 0.4.1
[I 07:34:58.905 NotebookApp] JupyterLab extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
[I 07:34:58.905 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
[I 07:34:59.837 NotebookApp] Serving notebooks from local directory: /home/jovyan
[I 07:34:59.837 NotebookApp] The Jupyter Notebook is running at:
[I 07:34:59.838 NotebookApp] http://(99b044404c91 or 127.0.0.1):8888/
[I 07:34:59.838 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[I 07:36:04.970 NotebookApp] 302 GET / (172.17.0.1) 0.93ms
[I 07:36:04.980 NotebookApp] 302 GET /tree? (172.17.0.1) 0.93ms
[W 07:36:51.404 NotebookApp] 401 POST /login?next=%2Flab (172.17.0.1) 1.83ms referer=http://localhost:8888/login?next=%2Flab

Client (please complete the following information):

  • Linux xyz 5.0.0-36-generic #39~18.04.1-Ubuntu SMP Tue Nov 12 11:09:50 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  • Firefox

Container Version
nvaitc/ai-lab:19.11-tf2-vnc

Additional context
None
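Per the README, current tags of this image ship with a blank default Jupyter password instead of a token, so logging in with an empty password field should work. To set an explicit password, pass the `NB_PASSWD` environment variable (shown here added to the reporter's command; the volume path is theirs):

```
docker run --rm \
    -p 8888:8888 \
    -v `pwd`/$aiws:/home/jovyan/workspace \
    -e NB_PASSWD='mypassword' \
    --gpus all \
    nvaitc/ai-lab:19.11-tf2-vnc
```

If the 19.11 tag predates the blank-password behaviour, upgrading to a current tag is the simplest fix.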

WS-2018-0210 Low Severity Vulnerability detected by WhiteSource

WS-2018-0210 - Low Severity Vulnerability

Vulnerable Library - lodash-3.10.1.tgz

The modern build of lodash modular utilities.

Library home page: http://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz

Path to dependency file: /ai-lab/package.json

Path to vulnerable library: /tmp/git/ai-lab/node_modules/json-rpc2/node_modules/lodash/package.json

Dependency Hierarchy:

  • @theia/go-0.3.17.tgz (Root Library)
    • go-language-server-0.1.7.tgz
      • json-rpc2-1.0.2.tgz
        • lodash-3.10.1.tgz (Vulnerable Library)

Found in HEAD commit: 082fbcbc8c2ef8fc2f905bd9e108a64bbeabbbfe

Vulnerability Details

In the node module lodash before version 4.17.11, the functions merge, mergeWith, and defaultsDeep can be tricked into adding or modifying properties of the Object prototype. These properties will then be present on all objects.

Publish Date: 2018-11-25

URL: WS-2018-0210

CVSS 2 Score Details (3.5)

Base Score Metrics not available

Suggested Fix

Type: Change files

Origin: lodash/lodash@90e6199

Release Date: 2018-08-31

Fix Resolution: Replace or update the following files: lodash.js, test.js


