tensorflow-nvjetson's Introduction

tensorflow-nvJetson

TensorFlow for NVIDIA Jetson TX1/TX2.

Install the Latest Build of TensorFlow

Setup Environment

# Set these in .bashrc, .zshrc, or your shell's configuration file
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# CUPTI (CUDA Profiling Tools Interface)
$ sudo apt-get install libcupti-doc
export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH

# Install Python 3
$ sudo apt install python3 python3-dev

# Install HDF5
$ sudo apt-get install pkg-config libhdf5-100 libhdf5-dev
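
A quick way to confirm the CUDA runtime is actually visible through the exported library path (a minimal sketch; it assumes the toolkit provides libcudart.so under /usr/local/cuda/lib64):

import ctypes

# dlopen honors LD_LIBRARY_PATH, so this fails if the exports above are missing
ctypes.CDLL("libcudart.so")  # raises OSError if the library cannot be found
print("CUDA runtime library found")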

Install pip

$ wget https://bootstrap.pypa.io/get-pip.py -O get-pip.py
$ sudo python3 get-pip.py

Install NumPy, Keras, SciPy

# Requirements for NumPy and Keras
$ sudo apt install libhdf5-dev
$ sudo pip3 install numpy keras scipy
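
After installing, a quick import check (a minimal sketch; Keras will also print which backend it loaded):

import numpy, scipy, keras
# Print the installed versions to confirm the packages import cleanly
print(numpy.__version__, scipy.__version__, keras.__version__)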

Install from the Release Page

You can download the wheel files from the Release Page

Install by curl

sh -c "$(curl -fsSL https://tfjetson.peterlee0127.com/installTF.sh)"

Install by wget

sh -c "$(wget https://tfjetson.peterlee0127.com/installTF.sh -O -)"

This script will download the latest TensorFlow build from this repository.

P.S. I recommend downloading only the files you need rather than using git clone, since git clone will download every file in this repository.

Use the NVIDIA Official build

Python 2.7

pip install --extra-index-url=https://developer.download.nvidia.com/compute/redist/jp33 tensorflow-gpu

Python 3.5

pip3 install --extra-index-url=https://developer.download.nvidia.com/compute/redist/jp33 tensorflow-gpu
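
To confirm the installed build actually sees the Tegra GPU, a minimal check (assuming a TF 1.x wheel, which is what these sources provide):

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)
# The device list should include a /device:GPU:0 entry for the Tegra GPU
print(device_lib.list_local_devices())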

NVIDIA Forum

TensorRT

Using TensorRT in TensorFlow

Install uff exporter for Jetson

TensorRT Test by TensorFlow

TensorRT test

NVIDIA Jetson

JetPack 4.2, Tensorflow 2.0.0

2019-10-03

JetPack 3.3, TensorFlow 1.10

2018-08-13

  1. cuDNN v7.1.5
  2. CUDA 9.0
  3. Python 2.7 and Python 3.5
  4. TensorRT 4.0 GA

JetPack 3.2, TensorFlow 1.9

2018-07-11

  1. cuDNN 7.0
  2. CUDA 9.0
  3. Python 3.5

This package was built with TensorRT.

JetPack 3.2, TensorFlow 1.8

2018-04-30

  1. cuDNN 7.0
  2. CUDA 9.0
  3. Python 2.7

This package was built with TensorRT.

JetPack 3.2, TensorFlow 1.7

2018-03-29

  1. cuDNN 7.0
  2. CUDA 9.0
  3. Python 2.7

This package was built with TensorRT.

JetPack 3.2, TensorFlow 1.6

  1. cuDNN 7.0
  2. CUDA 9.0
  3. Python 2.7

This package was not built with TensorRT.

JetPack 3.2, TensorFlow 1.5

  1. cuDNN 7.0
  2. CUDA 9.0
  3. Python 2.7

If you encounter this kind of memory error:

2018-02-23 16:45:13.345534: W ./tensorflow/core/common_runtime/gpu/pool_allocator.h:195] could not allocate pinned host memory of size: 267264.  
2018-02-23 16:45:13.345585: E tensorflow/stream_executor/cuda/cuda_driver.cc:967] failed to alloc 240640 bytes on host: CUDA_ERROR_UNKNOWN.   
2018-02-23 16:45:13.345634: W ./tensorflow/core/common_runtime/gpu/pool_allocator.h:195] could not allocate pinned host memory of size: 240640.   
2018-02-23 16:45:13.345683: E tensorflow/stream_executor/cuda/cuda_driver.cc:967] failed to alloc 216576 bytes on host: CUDA_ERROR_UNKNOWN.   

You can modify your TensorFlow program as shown below; it should then work.

config = tf.ConfigProto()
# Allocate GPU memory on demand instead of reserving it all at startup
config.gpu_options.allow_growth = True

session = tf.Session(config=config, ...)
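
For context, a minimal self-contained TF 1.x script using this option might look like the following (the matmul is just a placeholder workload):

import tensorflow as tf

# allow_growth matters on the Jetson because the GPU shares physical
# memory with the CPU, so reserving it all up front starves the system.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
    print(sess.run(tf.matmul(a, b)))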

Install

TensorFlow 1.7.0
$ sudo pip install tensorflow-1.7.0-cp27-cp27mu-linux_aarch64.whl

TensorFlow 1.6.0
$ sudo pip install tensorflow-1.6.0-cp27-cp27mu-linux_aarch64.whl

Output of the test code

GPU Test

2017-07-26 17:21:02.457118: E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:879] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2017-07-26 17:21:02.457263: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
name: NVIDIA Tegra X2
major: 6 minor: 2 memoryClockRate (GHz) 1.3005
pciBusID 0000:00:00.0
Total memory: 7.67GiB
Free memory: 5.30GiB
2017-07-26 17:21:02.457343: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2017-07-26 17:21:02.457374: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y
2017-07-26 17:21:02.457407: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0)
2017-07-26 17:21:02.457448: I tensorflow/core/common_runtime/gpu/gpu_device.cc:657] Could not identify NUMA node of /job:localhost/replica:0/task:0/gpu:0, defaulting to 0.  Your kernel may not have been built with NUMA support.
[[ 22.  28.]
 [ 49.  64.]]
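
The script producing this output is presumably the classic TensorFlow matmul smoke test, along these lines (a sketch; the repository's actual test file is not reproduced here):

import tensorflow as tf

# 2x3 * 3x2 matmul pinned to the GPU; log_device_placement prints
# which device each op lands on.
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))  # expected: [[ 22.  28.] [ 49.  64.]]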

test_tftrt.py

$ python test_tftrt.py
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
2018-04-02 11:25:15.649281: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:865] ARM64 does not support NUMA - returning NUMA node zero
2018-04-02 11:25:15.649495: I tensorflow/core/grappler/devices.cc:51] Number of eligible GPUs (core count >= 8): 0
2018-04-02 11:25:15.657161: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2624] Max batch size= 100 max workspace size= 33554432
2018-04-02 11:25:15.657245: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2630] starting build engine
2018-04-02 11:25:19.985906: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2635] Built network
2018-04-02 11:25:19.989301: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2640] Serialized engine
2018-04-02 11:25:19.990305: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2648] finished engine my_trt_op0 containing 7 nodes
2018-04-02 11:25:19.990493: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2668] Finished op preparation
2018-04-02 11:25:19.990663: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2676] OK finished op building
2018-04-02 11:25:20.027849: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
name: NVIDIA Tegra X2 major: 6 minor: 2 memoryClockRate(GHz): 1.3005
pciBusID: 0000:00:00.0
totalMemory: 7.67GiB freeMemory: 1.83GiB
2018-04-02 11:25:20.027937: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-04-02 11:25:20.027992: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-04-02 11:25:20.028024: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917]      0
2018-04-02 11:25:20.028050: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0:   N
2018-04-02 11:25:20.028165: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3926 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
2018-04-02 11:25:21.487230: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-04-02 11:25:21.488576: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-04-02 11:25:21.488624: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917]      0
2018-04-02 11:25:21.488659: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0:   N
2018-04-02 11:25:21.488788: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3926 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
2018-04-02 11:25:21.570046: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-04-02 11:25:21.570280: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-04-02 11:25:21.570316: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917]      0
2018-04-02 11:25:21.570337: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0:   N
2018-04-02 11:25:21.570446: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3926 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
2018-04-02 11:25:21.628937: I tensorflow/core/grappler/devices.cc:51] Number of eligible GPUs (core count >= 8): 0
2018-04-02 11:25:21.635393: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2624] Max batch size= 100 max workspace size= 33554432
2018-04-02 11:25:21.635480: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2628] Using FP16 precision mode
2018-04-02 11:25:21.635507: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2630] starting build engine
2018-04-02 11:25:22.054581: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2635] Built network
2018-04-02 11:25:22.056254: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2640] Serialized engine
2018-04-02 11:25:22.056768: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2648] finished engine my_trt_op1 containing 7 nodes
2018-04-02 11:25:22.056962: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2668] Finished op preparation
2018-04-02 11:25:22.057143: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2676] OK finished op building
2018-04-02 11:25:22.075579: I tensorflow/core/grappler/devices.cc:51] Number of eligible GPUs (core count >= 8): 0
2018-04-02 11:25:22.081608: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2410] finished op preparation
2018-04-02 11:25:22.081704: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2418] OK
2018-04-02 11:25:22.081732: I tensorflow/contrib/tensorrt/convert/convert_nodes.cc:2419] finished op building
2018-04-02 11:25:22.112265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-04-02 11:25:22.112386: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-04-02 11:25:22.112424: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917]      0
2018-04-02 11:25:22.112452: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0:   N
2018-04-02 11:25:22.112562: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3926 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
2018-04-02 11:25:22.199192: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-04-02 11:25:22.199323: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-04-02 11:25:22.199350: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917]      0
2018-04-02 11:25:22.199375: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0:   N
2018-04-02 11:25:22.199478: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3926 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
2018-04-02 11:25:22.239846: W tensorflow/contrib/tensorrt/log/trt_logger.cc:34] DefaultLogger Int8 support requested on hardware without native Int8 support, performance will be negatively affected.
2018-04-02 11:25:22.626763: I tensorflow/contrib/tensorrt/convert/convert_graph.cc:298] Starting Calib Conversion
2018-04-02 11:25:22.627250: I tensorflow/contrib/tensorrt/convert/convert_graph.cc:310] Num Calib nodes in graph= 1
2018-04-02 11:25:23.703319: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-04-02 11:25:23.703421: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-04-02 11:25:23.703452: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917]      0
2018-04-02 11:25:23.703475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0:   N
2018-04-02 11:25:23.703567: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3926 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
Pass

TensorFlow 1.7 (built with TensorRT) is larger than 100 MB, so I split the whl file into two parts. Please use the following command to merge them.

Merge the parts

$ cat tensorflow-1.7.0-cp27-cp27mu-linux_aarch64.whl.part-* > tensorflow-1.7.0-cp27-cp27mu-linux_aarch64.whl

How the file was split

$ split -b 70m tensorflow-1.7.0-cp27-cp27mu-linux_aarch64.whl tensorflow-1.7.0-cp27-cp27mu-linux_aarch64.whl.part-

split names the parts with alphabetically increasing suffixes (-aa, -ab, ...), so the glob in the cat command reassembles them in the correct order.


Install System on SSD (Solid State Drive)

You can find information at jetsonhacks.

jetsonhacks-install-samsung-ssd-on-nvidia-jetson-tx1


buymeacoffee

tensorflow-nvjetson's People

Contributors

calhu, dependabot[bot], imgbotapp, lifefeel, peterlee0127


tensorflow-nvjetson's Issues

Building script available?

Hi,

First off, thanks for your work! It's been of tremendous help to me 👍 I was wondering if you had the scripts that you use to build this? I'd like to do nightly builds on my CI system, and I can't afford for the TX2 to take hours every time...

Cheers!

The GPU freeMemory shown here is too low

I successfully compiled TF 1.9.0 following your steps and got the correct verification. Thanks for your code.

But when running the TensorRT-optimized graph, there is very little free memory left, and I often hit errors while running. The same shows up in your test_tftrt.py output log:

2018-04-02 11:25:20.027849: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
name: NVIDIA Tegra X2 major: 6 minor: 2 memoryClockRate(GHz): 1.3005
pciBusID: 0000:00:00.0
totalMemory: 7.67GiB freeMemory: 1.83GiB

Once TensorRT reserves its maximum workspace size, is that memory kept occupied so that other programs can no longer use it?
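
For reference, in the TF 1.x contrib API the workspace is set when converting the graph; a sketch, with the frozen graph and output names as placeholders:

import tensorflow.contrib.tensorrt as trt

# max_workspace_size_bytes is TensorRT's scratch-memory budget for building
# and running engines; frozen_graph_def and 'logits' are placeholders here.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,
    outputs=['logits'],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 30,
    precision_mode='FP16')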

I am only running the VGG-16 model on the TX2 with TensorRT, with max_workspace_size_bytes=4096 << 20. When the model runs, the output is Cuda Error in execute: 9. Here is my output log:

Instructions for updating:
Use the retry module or similar alternatives.
2018-07-20 08:45:50.186786: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:865] ARM64 does not support NUMA - returning NUMA node zero
2018-07-20 08:45:50.186957: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
name: NVIDIA Tegra X2 major: 6 minor: 2 memoryClockRate(GHz): 1.3005
pciBusID: 0000:00:00.0
totalMemory: 7.67GiB freeMemory: 2.55GiB
2018-07-20 08:45:50.187010: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-07-20 08:45:50.187092: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-20 08:45:50.187131: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-07-20 08:45:50.187159: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-07-20 08:45:50.187293: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2316 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
2018-07-20 08:47:14.036891: E tensorflow/contrib/tensorrt/log/trt_logger.cc:38] DefaultLogger cudnnConvolutionLayer.cpp (254) - Cuda Error in execute: 9

Do you know what is going on here? Thanks a lot.

1.13 and 2.0?

Thank you for creating this resource.

Any chance we'll see 1.13 and 2.0?

Cheers,

Tutorial for building tensorflow for aarch64

Hi Peter,

I appreciate that you are saving the TensorFlow wheels for the aarch64 architecture. I am trying to build the same, but for Python 3. Could you please post the instructions for building TensorFlow (the patches needed for Bazel and other packages)?

I currently use Nvidia DrivePX2 as my target.

Thank you and best regards,
Sanjay

Bad merge/concat split whl files on TX2

I tried to merge the whl files using the command cat tensorflow-1.7.0-cp27-cp27mu-linux_aarch64.whl.part-* > tensorflow-1.7.0-cp27-cp27mu-linux_aarch64.whl, but I get the following error when I try to install with pip:

Processing ./tensorflow-1.7.0-cp27-cp27mu-linux_aarch64.whl
Exception:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 209, in main
    status = self.run(options, args)
  File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 328, in run
    wb.build(autobuilding=True)
  File "/usr/lib/python2.7/dist-packages/pip/wheel.py", line 748, in build
    self.requirement_set.prepare_files(self.finder)
  File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 360, in prepare_files
    ignore_dependencies=self.ignore_dependencies))
  File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 577, in _prepare_file
    session=self.session, hashes=hashes)
  File "/usr/lib/python2.7/dist-packages/pip/download.py", line 798, in unpack_url
    unpack_file_url(link, location, download_dir, hashes=hashes)
  File "/usr/lib/python2.7/dist-packages/pip/download.py", line 705, in unpack_file_url
    unpack_file(from_path, location, content_type, link)
  File "/usr/lib/python2.7/dist-packages/pip/utils/__init__.py", line 617, in unpack_file
    flatten=not filename.endswith('.whl')
  File "/usr/lib/python2.7/dist-packages/pip/utils/__init__.py", line 502, in unzip_file
    zip = zipfile.ZipFile(zipfp, allowZip64=True)
  File "/usr/lib/python2.7/zipfile.py", line 770, in __init__
    self._RealGetContents()
  File "/usr/lib/python2.7/zipfile.py", line 811, in _RealGetContents
    raise BadZipfile, "File is not a zip file"
BadZipfile: File is not a zip file

I get the same error when using 1.8. Do you have a pre-merged file anywhere?
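
Since wheels are just zip archives, a quick structural check of the merged file before invoking pip (a minimal sketch):

import zipfile

# False means the parts were concatenated in the wrong order,
# or a part is missing or corrupt.
print(zipfile.is_zipfile('tensorflow-1.7.0-cp27-cp27mu-linux_aarch64.whl'))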

Building Tensorflow from source on TX2 without CUDA

Hi,
I am using your script to build TF from source. During the configure step, I entered "N" for "Do you wish to build TensorFlow with CUDA support" because I want to use the CPU version of TensorFlow on the TX2.

This is the error I get when I choose that option in my configure step:
ERROR: /home/nvidia/.cache/bazel/_bazel_nvidia/328b7684270f6fe173a4d80bc65bde15/external/local_config_cuda/crosstool/BUILD:4:1: Traceback (most recent call last):
File "/home/nvidia/.cache/bazel/_bazel_nvidia/328b7684270f6fe173a4d80bc65bde15/external/local_config_cuda/crosstool/BUILD", line 4
error_gpu_disabled()
File "/home/nvidia/.cache/bazel/_bazel_nvidia/328b7684270f6fe173a4d80bc65bde15/external/local_config_cuda/crosstool/error_gpu_disabled.bzl", line 3, in error_gpu_disabled

Have you tried building TF CPU-only version in TX2? Let me know!

Thanks
Sri

install error

Hi, I am new to Linux. I installed TensorFlow 1.5 on a TX2 and it shows an error like this:

nvidia@tegra-ubuntu:~/liuhao$ sudo pip install tensorflow-1.5.0-cp27-cp27mu-linux_aarch64.whl
[sudo] password for nvidia:
The directory '/home/nvidia/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/nvidia/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Processing ./tensorflow-1.5.0-cp27-cp27mu-linux_aarch64.whl
Collecting enum34>=1.1.6 (from tensorflow==1.5.0)
Exception:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 209, in main
    status = self.run(options, args)
  File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 317, in run
    requirement_set.prepare_files(finder)
  File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 360, in prepare_files
    ignore_dependencies=self.ignore_dependencies))
  File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 512, in _prepare_file
    finder, self.upgrade, require_hashes)
  File "/usr/lib/python2.7/dist-packages/pip/req/req_install.py", line 273, in populate_link
    self.link = finder.find_requirement(self, upgrade)
  File "/usr/lib/python2.7/dist-packages/pip/index.py", line 442, in find_requirement
    all_candidates = self.find_all_candidates(req.name)
  File "/usr/lib/python2.7/dist-packages/pip/index.py", line 400, in find_all_candidates
    for page in self._get_pages(url_locations, project_name):
  File "/usr/lib/python2.7/dist-packages/pip/index.py", line 545, in _get_pages
    page = self._get_page(location)
  File "/usr/lib/python2.7/dist-packages/pip/index.py", line 648, in _get_page
    return HTMLPage.get_page(link, session=self.session)
  File "/usr/lib/python2.7/dist-packages/pip/index.py", line 757, in get_page
    "Cache-Control": "max-age=600",
  File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/sessions.py", line 480, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/pip/download.py", line 378, in request
    return super(PipSession, self).request(method, url, *args, **kwargs)
  File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/usr/share/python-wheels/CacheControl-0.11.5-py2.py3-none-any.whl/cachecontrol/adapter.py", line 46, in send
    resp = super(CacheControlAdapter, self).send(request, **kw)
  File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/adapters.py", line 376, in send
    timeout=timeout
  File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/connectionpool.py", line 610, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/util/retry.py", line 228, in increment
    total -= 1
TypeError: unsupported operand type(s) for -=: 'Retry' and 'int'
You are using pip version 8.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
nvidia@tegra-ubuntu:~/liuhao$

Python 3?

Do you also have wheel files for python 3?

Error

I receive this error:
Processing ./tensorflow-1.3.0rc0-cp27-cp27mu-linux_aarch64.whl
Exception:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 209, in main
    status = self.run(options, args)
  File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 328, in run
    wb.build(autobuilding=True)
  File "/usr/lib/python2.7/dist-packages/pip/wheel.py", line 748, in build
    self.requirement_set.prepare_files(self.finder)
  File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 360, in prepare_files
    ignore_dependencies=self.ignore_dependencies))
  File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 577, in _prepare_file
    session=self.session, hashes=hashes)
  File "/usr/lib/python2.7/dist-packages/pip/download.py", line 798, in unpack_url
    unpack_file_url(link, location, download_dir, hashes=hashes)
  File "/usr/lib/python2.7/dist-packages/pip/download.py", line 705, in unpack_file_url
    unpack_file(from_path, location, content_type, link)
  File "/usr/lib/python2.7/dist-packages/pip/utils/__init__.py", line 617, in unpack_file
    flatten=not filename.endswith('.whl')
  File "/usr/lib/python2.7/dist-packages/pip/utils/__init__.py", line 500, in unzip_file
    zipfp = open(filename, 'rb')
IOError: [Errno 2] No such file or directory: '/home/nvidia/tensorflow-1.3.0rc0-cp27-cp27mu-linux_aarch64.whl'

Also, do I need to install any additional libraries?

Status of Tensorflow 2.0 for Jetson.

Most of the other compile issues are resolved.

NVIDIA Jetson TX2
JetPack 3.3
python 3.5
tensorflow-2.0.0a0-cp35-cp35m-linux_aarch64.whl

Current issue:
cuda_driver now checks that pointers are actually on the GPU (the Jetson GPU has no dedicated VRAM; it shares memory with the CPU), and I don't have a fix for it yet.
This seems to start with TensorFlow 1.13, which uses CUDA 10.

2019-03-08 19:32:56.481099: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1021]      0
2019-03-08 19:32:56.481123: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1034] 0:   N
2019-03-08 19:32:56.481220: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1149] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2264 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
2019-03-08 19:33:01.647196: F tensorflow/stream_executor/cuda/cuda_driver.cc:1184] Check failed: PointerIsValid(gpu_dst) Destination pointer is not actually on GPU: 67666247680
[1]    27238 abort (core dumped)  python3 tensorflow-nvJetson/tf-test/test_tftrt.py
(.tensorflow) ➜  ~ python3 tensorflow-nvJetson/tf-test/gpu.py
Limited tf.compat.v2.summary API due to missing TensorBoard installation
2019-03-08 20:03:58.971054: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2019-03-08 20:03:59.022948: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:976] ARM64 does not support NUMA - returning NUMA node zero
2019-03-08 20:03:59.023976: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2019-03-08 20:03:59.024117: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1467] Found device 0 with properties:
name: NVIDIA Tegra X2 major: 6 minor: 2 memoryClockRate(GHz): 1.3005
pciBusID: 0000:00:00.0
totalMemory: 7.67GiB freeMemory: 2.66GiB
2019-03-08 20:03:59.024172: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1546] Adding visible gpu devices: 0
2019-03-08 20:03:59.024270: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.9.0
2019-03-08 20:03:59.024721: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1015] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-08 20:03:59.024755: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1021]      0
2019-03-08 20:03:59.024783: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1034] 0:   N
2019-03-08 20:03:59.024932: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1149] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2419 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2
2019-03-08 20:03:59.025870: I tensorflow/core/common_runtime/direct_session.cc:316] Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2

MatMul: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0
2019-03-08 20:03:59.028368: I tensorflow/core/common_runtime/placer.cc:61] MatMul: (MatMul)/job:localhost/replica:0/task:0/device:GPU:0
a: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2019-03-08 20:03:59.028451: I tensorflow/core/common_runtime/placer.cc:61] a: (Const)/job:localhost/replica:0/task:0/device:GPU:0
b: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2019-03-08 20:03:59.028492: I tensorflow/core/common_runtime/placer.cc:61] b: (Const)/job:localhost/replica:0/task:0/device:GPU:0
2019-03-08 20:04:02.965650: F tensorflow/stream_executor/cuda/cuda_driver.cc:1184] Check failed: PointerIsValid(gpu_dst) Destination pointer is not actually on GPU: 67660218368
[1]    27628 abort (core dumped)  python3 tensorflow-nvJetson/tf-test/gpu.py
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
W0316 22:30:39.139987 547609665536 deprecation.py:506] From /home/nvidia/.tensorflow/lib/python3.5/site-packages/tensorflow/python/training/slot_creator.py:187: calling Zeros.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
2019-03-16 22:31:37.686594: F tensorflow/core/kernels/random_op_gpu.cu.cc:64] Non-OK-status: CudaLaunchKernel(FillPhiloxRandomKernelLaunch<Distribution>, num_blocks, block_size, 0, d.stream(), gen, data, size, dist) status: Internal: unknown error
Fatal Python error: Aborted

Thread 0x0000007f49500200 (most recent call first):
  File "/usr/lib/python3.5/threading.py", line 293 in wait
  File "/usr/lib/python3.5/queue.py", line 164 in get
  File "/home/nvidia/.tensorflow/lib/python3.5/site-packages/tensorflow/python/summary/writer/event_file_writer.py", line 159 in run
  File "/usr/lib/python3.5/threading.py", line 914 in _bootstrap_inner
  File "/usr/lib/python3.5/threading.py", line 882 in _bootstrap

Thread 0x0000007f49d00200 (most recent call first):
  File "/usr/lib/python3.5/threading.py", line 293 in wait
  File "/usr/lib/python3.5/queue.py", line 164 in get
  File "/home/nvidia/.tensorflow/lib/python3.5/site-packages/tensorflow/python/summary/writer/event_file_writer.py", line 159 in run
  File "/usr/lib/python3.5/threading.py", line 914 in _bootstrap_inner
  File "/usr/lib/python3.5/threading.py", line 882 in _bootstrap

Thread 0x0000007f80146000 (most recent call first):
  File "/home/nvidia/.tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1408 in _call_tf_sessionrun
  File "/home/nvidia/.tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1320 in _run_fn
  File "/home/nvidia/.tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1335 in _do_call
  File "/home/nvidia/.tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1329 in _do_run
  File "/home/nvidia/.tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1153 in _run
  File "/home/nvidia/.tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 930 in run
  File "/home/nvidia/.tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 5448 in _run_using_default_session
  File "/home/nvidia/.tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2616 in run
  File "tensorflow/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py", line 143 in train
  File "tensorflow/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py", line 187 in main
  File "/home/nvidia/.tensorflow/lib/python3.5/site-packages/absl/app.py", line 251 in _run_main
  File "/home/nvidia/.tensorflow/lib/python3.5/site-packages/absl/app.py", line 300 in run
  File "/home/nvidia/.tensorflow/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 40 in run
  File "tensorflow/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py", line 214 in <module>
[1]    3310 abort (core dumped)  python3 tensorflow/tensorflow/examples/tutorials/mnist/mnist_with_summaries.p

Tensorflow hangs

Hi,

I used your script to install TensorFlow on an NVIDIA Jetson TX2. It hangs when I try to run gpu.py. Here is the output:
/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
2018-06-12 01:22:24.660209: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:865] ARM64 does not support NUMA - returning NUMA node zero
2018-06-12 01:22:24.660419: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties:
name: NVIDIA Tegra X2 major: 6 minor: 2 memoryClockRate(GHz): 1.3005
pciBusID: 0000:00:00.0
totalMemory: 7.67GiB freeMemory: 4.19GiB
2018-06-12 01:22:24.660468: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0

Have you seen this issue before?

Thanks
Sri

test_tftrt.py does not seem to work with your 1.9 whl install?

Not quite sure yet what is wrong :-(

ubuntu@tegra-ubuntu:~$ python3 test_tftrt.py 
2018-07-20 04:04:32.686105: F tensorflow/core/framework/op.cc:55] Non-OK-status: RegisterAlreadyLocked(op_data_factory) status: Already exists: Op with name _ScopedAllocator
Aborted (core dumped)

The gpu.py test program seems to be happy after a reboot now:

ubuntu@tegra-ubuntu:~$ python3 gpu.py
2018-07-20 04:08:28.954407: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:864] ARM64 does not support NUMA - returning NUMA node zero
2018-07-20 04:08:28.954549: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1392] Found device 0 with properties: 
name: NVIDIA Tegra X2 major: 6 minor: 2 memoryClockRate(GHz): 1.3005
pciBusID: 0000:00:00.0
totalMemory: 7.67GiB freeMemory: 5.75GiB
2018-07-20 04:08:28.954645: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1471] Adding visible gpu devices: 0
2018-07-20 04:08:30.430963: I tensorflow/core/common_runtime/gpu/gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-20 04:08:30.431039: I tensorflow/core/common_runtime/gpu/gpu_device.cc:958]      0 
2018-07-20 04:08:30.431067: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   N 
2018-07-20 04:08:30.431276: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5387 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2
2018-07-20 04:08:30.433324: I tensorflow/core/common_runtime/direct_session.cc:288] Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2

MatMul: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0
2018-07-20 04:08:30.434435: I tensorflow/core/common_runtime/placer.cc:886] MatMul: (MatMul)/job:localhost/replica:0/task:0/device:GPU:0
a: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2018-07-20 04:08:30.434494: I tensorflow/core/common_runtime/placer.cc:886] a: (Const)/job:localhost/replica:0/task:0/device:GPU:0
b: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2018-07-20 04:08:30.434526: I tensorflow/core/common_runtime/placer.cc:886] b: (Const)/job:localhost/replica:0/task:0/device:GPU:0
[[22. 28.]
 [49. 64.]]

CUDA 9.2

I had a hard time installing TensorFlow on the Drive PX2, which has CUDA 9.2 and cuDNN 7.2.1.
Please help.
NVIDIA just said to refer to here.

Steps for building

If you could elaborate on the steps you took to build the file, it would be very helpful.

Protobuf version problem with tensorflow-1.9.0

May I ask which protobuf version you used when compiling tensorflow-1.9.0-cp35-cp35m-linux_aarch64.whl? The version installed by default during my setup was protobuf 3.6.0, but running TF gives the following error:

[libprotobuf FATAL google/protobuf/stubs/common.cc:61] This program requires version 3.5.0 of the Protocol Buffer runtime library, but the installed version is 2.6.1. Please update your library. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "bazel-out/arm-opt/genfiles/tensorflow/contrib/tpu/proto/tpu_embedding_config.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): This program requires version 3.5.0 of the Protocol Buffer runtime library, but the installed version is 2.6.1. Please update your library. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "bazel-out/arm-opt/genfiles/tensorflow/contrib/tpu/proto/tpu_embedding_config.pb.cc".)

I have tried uninstalling and reinstalling:
pip3 uninstall protobuf
pip3 install protobuf
pip3 install protobuf==2.6.1
pip3 install protobuf==3.5.0.post1

None of these worked; the error message persists.
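
A quick way to check which protobuf runtime Python actually loads (a minimal sketch):

import google.protobuf
# Print the runtime version and where it was imported from
print(google.protobuf.__version__, google.protobuf.__file__)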

Publish whole binaries

Hi!

Just a suggestion 😄 Why don't you publish the whl binaries as single, unsplit files? I know GitHub won't allow that in the Git repo itself; however, it is possible if you create releases of the repository.

Thanks again for your work!

Process for building from source

Hey, I've been trying to build TensorFlow from source for a few days, and I keep getting errors about files not being found or created. I'm building on a Jetson TX2 with Python 3.5. Can you tell me how you went about it?

Thanks for the help!

Can't support CUDA 9

Since a newer version of JetPack for the TX2 has been released, CUDA 9 is required. Can you share a wheel that supports CUDA 9?

TensorFlow for JetPack 3.2

Hi, peterlee0127

This is Aasta from NVIDIA.

Thanks for sharing your TensorFlow build for JetPack 3.1.
It really helps our users.

Could you also provide an updated TensorFlow wheel for JetPack 3.2 DP, which uses CUDA 9.0/cuDNN 7?
The upgrade would be extremely helpful to us.

Thanks.

Build CPU-only version

I was wondering if you have had any success building a CPU-only version of TensorFlow? If you could detail the steps you used to build the wheel file, that would be great.

ERROR: No default_toolchain found for cpu 'aarch64'.

When I run ./buildTensorflow.sh, the following error occurs. Why?

nvidia@tegra-ubuntu:/media/nvidia/udisk/nvidiaTX2TF/nvJethson$ sudo ./buildTensorflow.sh 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
python-numpy is already the newest version (1:1.11.0-1ubuntu1).
python-wheel is already the newest version (0.29.0-1).
python-dev is already the newest version (2.7.12-1~16.04).
python-pip is already the newest version (8.1.1-2ubuntu0.4).
0 upgraded, 0 newly installed, 0 to remove and 304 not upgraded.
Cloning into 'tensorflow'...
remote: Counting objects: 407886, done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 407886 (delta 7), reused 11 (delta 4), pack-reused 407851
Receiving objects: 100% (407886/407886), 187.59 MiB | 908.00 KiB/s, done.
Resolving deltas: 100% (324757/324757), done.
Checking connectivity... done.
Checking out files: 100% (12827/12827), done.
Checking out files: 100% (7870/7870), done.
Note: checking out 'v1.8.0'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 93bc2e2... Merge pull request #18928 from tensorflow/release-patch-4-1
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.16.1- (@non-git) installed.
Please specify the location of python. [Default is /usr/bin/python]: 


Invalid python path: 2 cannot be found.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python


Found possible Python library paths:
  /usr/local/lib/python2.7/dist-packages
  /usr/lib/python2.7/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages]

Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: y
jemalloc as malloc support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n
No Google Cloud Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: n
No Hadoop File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: n
No Amazon S3 File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: n
No Apache Kafka Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]: n
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with GDR support? [y/N]: n
No GDR support will be enabled for TensorFlow.

Do you wish to build TensorFlow with VERBS support? [y/N]: n
No VERBS support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 9.0]: 


Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 


Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 


Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:/usr/lib/aarch64-linux-gnu


Do you wish to build TensorFlow with TensorRT support? [y/N]: y
TensorRT support will be enabled for TensorFlow.

Please specify the location where TensorRT is installed. [Default is /usr/lib/aarch64-linux-gnu]:


Please specify the NCCL version you want to use. [Leave empty to default to NCCL 1.3]: 2.2


Please specify the location where NCCL 2 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:/usr/local


Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,5.2]


Do you want to use clang as CUDA compiler? [y/N]: n
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: 


Do you wish to build TensorFlow with MPI support? [y/N]: n
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: 


Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
	--config=mkl         	# Build with MKL support.
	--config=monolithic  	# Config for mostly static monolithic build.
Configuration finished
Starting local Bazel server and connecting to it...
WARNING: The following configs were expanded more than once: [cuda]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
ERROR: No default_toolchain found for cpu 'aarch64'. Valid cpus are: [
  k8,
  piii,
  arm,
  darwin,
  ppc,
]
INFO: Elapsed time: 95.184s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (2 packages loaded)
./buildTensorflow.sh: 13: ./buildTensorflow.sh: bazel-bin/tensorflow/tools/pip_package/build_pip_package: not found

Building from source?

This is so awesome! Thanks so much for releasing these compiled versions!

I'm trying to build TensorFlow on an NVIDIA Drive PX2. The GPU of a PX2 is equivalent to the Jetson TX2's, and building from source with TX2 instructions typically yields good results (i.e., a successful build). Could you share some information on how you obtained these build files? (i.e., what version of Bazel/gcc did you use? Did you have to patch any of the TensorFlow code, such as tensorflow/workspace.bzl?) Thanks again!
