dependablesystemslab / tensorfi

TensorFI is a fault injection framework for injecting both hardware and software faults into applications written using the TensorFlow framework. You can find more information about TensorFI in the paper below.

Home Page: http://blogs.ubc.ca/karthik/files/2018/08/TensorFI-Camera-Ready.pdf

License: MIT License

Languages: Python 98.68%, Shell 1.32%
Topics: fault injection, machine learning, tensorflow

tensorfi's Introduction

TensorFI: A fault injector for TensorFlow applications


TensorFI is a fault injector for TensorFlow applications written in Python. It instruments the TensorFlow graph to inject faults at the level of individual operators. Unlike other fault injection tools, faults are injected at a higher level of abstraction and can therefore be easily mapped back to the TensorFlow graph. Further, the fault injector can be configured through a YAML file.
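
Instrumenting a program typically looks like the following minimal sketch (hedged: it mirrors the usage shown in the Tests directory and in the issues below, and assumes a TF 1.x graph/session program with TensorFI installed):

import tensorflow as tf
import TensorFI as ti

# Build a trivial TF 1.x graph.
a = tf.constant(2.0)
b = tf.constant(3.0)
c = tf.add(a, b)

s = tf.Session()
print("Without instrumentation:", s.run(c))

# Instrument the session's graph; fault types and probabilities come from the YAML config file.
fi = ti.TensorFI(s, name="example", logLevel=50, disableInjections=False)
print("With injections enabled:", s.run(c))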

Following are the installation instructions and dependencies. For details on how TensorFI works, how to use or modify it for your purposes, how to contribute, and licensing information, please refer to our Wiki.

If you find TensorFI useful, please cite the following paper: "TensorFI: A Flexible Fault Injection Framework for TensorFlow Applications," Zitao Chen, Niranjhana Narayanan, Bo Fang, Guanpeng Li, Karthik Pattabiraman, Nathan DeBardeleben, Proceedings of the IEEE International Symposium on Software Reliability Engineering (ISSRE), 2020.

Find a copy of the TensorFI paper here.

Updates: 2019-07

We now support fault injection in complex ML models such as LeNet and AlexNet, as well as a single bit-flip injection mode. Some DNN models are provided in the /Tests directory. For starters, you can try running LeNet.py under /Tests/DNN-model/LeNet-mnist/ to inject faults in a CNN (it will automatically download the dataset, and the config file is already set up).
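
For example (assuming you run it from the TensorFI home directory):

python Tests/DNN-model/LeNet-mnist/LeNet.py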

You can now create custom TensorFlow operations for injection, using the built-in TensorFlow implementation, to support injection on new ML models.

Using TF Keras:

A simple MLP model implemented using the TF Keras module has been created and tested with TensorFI. Try it at /Tests/keras-mnist.py.
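
For example (again assuming the TensorFI home directory as the working directory):

python Tests/keras-mnist.py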

1. Supported Platforms

TensorFI has been tested on the following platforms and versions:

  1. Ubuntu Linux (v 4.10) with TensorFlow (v. 1.4.1)
  2. Ubuntu Linux (v 4.4) with TensorFlow (v. 1.5)
  3. Ubuntu Linux (v 16.4) with TensorFlow (v. 1.10.0)
  4. MacOSX (v10.12 and v10.13) with TensorFlow (v 1.5 and v 1.10.0)

In general, any UNIX platform should work. We haven't tested it on Windows.

2. Dependencies

  1. TensorFlow Framework (v 1.0 or greater)

  2. Python (v2.7 or greater, but not v3.x.x)

  3. PyYaml (v3 or greater)

  4. SciPy module in Python

  5. Sklearn module in Python

  6. enum module in Python

  7. numpy package (part of TensorFlow)

  8. (Optional) matplotlib package in Python

  9. (Optional) tkinter package in Python

3. Installation Instructions

Installing as a PyPI package

We now provide TensorFI as a PyPI package, so you can install TensorFI using pip:

pip install TensorFI

This installs TensorFI into your existing Python environment. Alternatively, you can install TensorFI in a virtual environment as outlined below.

Using the install Bash scripts

The easiest way to install TensorFI is to use the provided install-lib.sh and install-dep.sh scripts, which install the Anaconda package manager and the required dependencies and set the appropriate paths. These scripts do not install the packages directly into your existing environment; instead, they create a virtual environment and install the required packages there, so that you can deactivate it and return to your original environment at any time.

First, execute install-env.sh. This installs Anaconda, which is used to create the virtual environment for running TensorFI programs.

After the script finishes, source your ~/.bashrc file so that the path variables are updated and Anaconda can be used in the next step.

Next, execute install-dep.sh. This creates an Anaconda virtual environment called "tensorfi" and installs the remaining dependencies into it.
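
Once created, the environment can be activated and deactivated as usual (a sketch, assuming a standard Anaconda setup; the exact command differs between older and newer Anaconda versions):

source activate tensorfi    # or: conda activate tensorfi
# ... run your TensorFI programs ...
source deactivate           # or: conda deactivate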

Manual installation

If you choose to do the installation yourself (because you don't want to use the automated scripts or you have trouble running them), you can follow the procedure outlined below:

  1. First, install PyYAML (v3 or above). For example, you would type:

    pip install PyYaml
    
  2. Install TensorFlow. You don't need to install the GPU version if you don't want to. Make sure you install TensorFlow for Python 2.7, not 3. TensorFlow installation instructions can be found at:

    https://www.tensorflow.org/install/

  3. Install the scipy and sklearn modules. On both Ubuntu and MacOS, type:

    pip install scipy
    pip install sklearn
    

Make sure you have YAML support; if not, install PyYAML (pip install pyyaml).
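
A quick way to verify YAML support (a hedged one-liner; any equivalent import test works):

python -c "import yaml; print(yaml.__version__)"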

Setting your Python path for TensorFI

Set your PYTHONPATH to include TENSORFIHOME, where TENSORFIHOME is the directory in which you've installed TensorFI (this assumes you're using Bash as your shell):

export PYTHONPATH=$PYTHONPATH:$TENSORFIHOME

You can skip this step if you are using a virtual environment to run TensorFI.

4. Running TensorFI test files after installation

Run the test files by going to the TENSORFIHOME directory and running runAll.sh in Tests. All the tests should pass if your installation was successful. The script will also check if you have all of the above packages installed correctly.

./Tests/runAll.sh

NOTE: The runAll script creates new subdirectories in the TENSORFIHOME directory (faultLogs, logs and stats), so make sure you have permission to do so when you run it (or create the directories manually beforehand). Also, make sure the Python interpreter name in the script is correct (it defaults to python); if not, change it.

5. Visual demonstrations

If you want a visual demo of TensorFI, try running autoencoder.py from the TENSORFIHOME directory.

python Tests/autoencoder.py

You will see the original images (without fault injection) and the faulty images (with fault injection) for different fault probabilities ranging from 0.01 to 1.0. The images are saved in PNG format under the Tests/Images sub-directory (make sure this directory exists first).

Another visual demo is variational-autoencoder.py, which also shows the original and faulty images.

python Tests/variational-autoencoder.py

Yet another visual demo runs GANs (Generative Adversarial Networks). The images with and without faults are saved under the Tests/Images sub-directory.

python Tests/gan.py

NOTE: These demos use the matplotlib and python-tk libraries, so you'll need to install them to run the demos.

tensorfi's People

Contributors

cclinus, elaineyao, flyree, karthikp-ubc, lpalazzi, nniranjhana, zitaoc


tensorfi's Issues

py_func Crashes

Environment info

Operating System:
NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Installed version of CUDA and cuDNN: None

(please attach the output of ls -l /path/to/cuda/lib/libcud*):

(base) ali@simon:/tmp/mozilla_ali0$ ls -l /path/to/cuda/lib/libcud*
ls: cannot access '/path/to/cuda/lib/libcud*': No such file or directory

If installed from binary pip package, provide:

  1. Which pip package you installed.
  2. The output from python -c "import tensorflow; print(tensorflow.__version__)".

If installed from sources, provide the commit hash: 11b3284

Steps to reproduce

  1. Instantiate ResNet50 with keras.
  2. Load TensorFI on it.
  3. Run prediction with fault injections enabled.

The code is below:

from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
import TensorFI as fi
from tensorflow.keras.backend import get_session

model = ResNet50(weights='imagenet')

img_path = 'val_5.JPEG'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

session = get_session()

tf = fi.TensorFI(session, disableInjections=False, logLevel=50)

preds = session.run(model.outputs[0], feed_dict={model.inputs[0]: x})

# preds = model.predict(x)

# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
# Predicted: [(u'n02504013', u'Indian_elephant', 0.82658225), (u'n01871265', u'tusker', 0.1122357), (u'n02504458', u'African_elephant', 0.061040461)]

Here is the input image used (val_5.JPEG, attached in the original issue).

When I turn off the injections, I get the expected output:

 ('Predicted:', [(u'n04399382', u'teddy', 0.81401235), (u'n02105641', u'Old_English_sheepdog', 0.032959767), (u'n04008634', u'projectile', 0.020169798)])

What have you tried?

  1. Tracing the code, which ends in some C execution and terminates at a check in py_func.cc.

Logs or other output that would be helpful

(If logs are large, please upload as attachment).

/home/ali/anaconda/envs/tensorfi/bin/python /home/ali/Desktop/Code/TensorFI/resnet50/model.py
WARNING:tensorflow:From /home/ali/Desktop/Code/TensorFI/resnet50/model.py:6: The name tf.keras.backend.get_session is deprecated. Please use tf.compat.v1.keras.backend.get_session instead.

WARNING:tensorflow:From /home/ali/anaconda/envs/tensorfi/lib/python2.7/site-packages/tensorflow/python/ops/init_ops.py:1251: calling __init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
2021-02-05 18:53:34.853793: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations:  SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
2021-02-05 18:53:34.881342: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2394305000 Hz
2021-02-05 18:53:34.881859: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5582bd2a2eb0 executing computations on platform Host. Devices:
2021-02-05 18:53:34.881909: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
OMP: Info #212: KMP_AFFINITY: decoding x2APIC ids.
OMP: Info #210: KMP_AFFINITY: Affinity capable, using global cpuid leaf 11 info
OMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0-3
OMP: Info #156: KMP_AFFINITY: 4 available OS procs
OMP: Info #157: KMP_AFFINITY: Uniform topology
OMP: Info #179: KMP_AFFINITY: 1 packages x 2 cores/pkg x 2 threads/core (2 total cores)
OMP: Info #214: KMP_AFFINITY: OS proc to physical thread map:
OMP: Info #171: KMP_AFFINITY: OS proc 0 maps to package 0 core 0 thread 0 
OMP: Info #171: KMP_AFFINITY: OS proc 1 maps to package 0 core 0 thread 1 
OMP: Info #171: KMP_AFFINITY: OS proc 2 maps to package 0 core 1 thread 0 
OMP: Info #171: KMP_AFFINITY: OS proc 3 maps to package 0 core 1 thread 1 
OMP: Info #250: KMP_AFFINITY: pid 90837 tid 90837 thread 0 bound to OS proc set 0
2021-02-05 18:53:34.882399: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-05 18:53:35.374807: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
/home/ali/Desktop/Code/TensorFI/TensorFI/fiConfig.py:270: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  params = yaml.load(pStream)
Unable to open log file faultLogs/NoName-log
Starting log at 2021-02-05 18:53:40.952907


---------------------------------------
2021-02-05 18:53:43.067374: F tensorflow/python/lib/core/py_func.cc:466] Check failed: DataTypeCanUseMemcpy(t.dtype()) 

Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

FI Configuration Doubt

Hello, I have a question regarding the configuration file and how to fill it in correctly.
Here is the file:

ScalarFaultType: None
TensorFaultType: bitFlip-element 

Ops:
- ALL = 1.0

Instances: 
- SUB = 4
- RESHAPE = 1
- MAX-POOL = 2
- MATMUL = 2
- RELU = 3
- ADD = 4
- SOFT-MAX = 1
- CONV2D = 2
- MUL = 4
- BIASADD = 4
- ASSIGN = 8
- IDENTITY = 9
- FILL = 1

InjectMode: "oneFaultPerRun"

The behavior I would like to obtain is to inject exactly one fault per run, in just one operator. Is this configuration correct?

Reshape returning NoneType

We experienced an error while testing a CapsuleNet implementation; it gives this error:
ERROR:root:Encountered exception exceptions.TypeError: injectFaultPack() takes exactly 2 arguments (3 given) [[Node: fi_import/primarycap_reshape/Reshape/shape = PyFunc[Tin=[DT_INT32, DT_INT32, DT_INT32], Tout=[DT_INT32], token="pyfunc_359", _device="/job:localhost/replica:0/task:0/cpu:0"](fi_import/primarycap_reshape/strided_slice, import/primarycap_reshape/Reshape/shape/1, import/primarycap_reshape/Reshape/shape/2)]]

See the TensorBoard graph of this part (screenshot attached in the original issue).

The code for this part is:

outputs = keras.layers.Reshape(target_shape=[-1, 8], name='primarycap_reshape')(output)

We converted the Keras graph to a TensorFlow graph, using the approach linked in the original issue, in order to run it in a session for TensorFI.

Is there any way we can solve this problem?

Add the support for float16

When running the VGG model provided in the repo, I get TypeError: Unknown type <dtype: 'float16'>. For now, TensorFI supports four types (ints and floats of 32 and 64 bits); it would be good to add support for float16.

Also, there might be a bug at line 284 in injectFault.py

else:
         raise TypeError("Unknown type" + type) 

as it'll raise TypeError: cannot concatenate 'str' and 'DType' objects

Maybe it should be written as

else:
          raise TypeError("Unknown type" + str(type))

Thanks

Vgg11 dataset not working on python2

I am trying to use TensorFI with the provided vgg11 model. Unfortunately, when I try to train it I get the following error:
ValueError: unsupported pickle protocol: 4

If I understood correctly, this is a problem related to the pickle protocol version used; could you provide me with the correct dataset?
Thank you

Exclude certain operators from injection

Operators like Shape should be excluded from injection. The reported accuracy might be wrong in these cases, as the number of elements changes.

Also, curate a list of operators that merely fetch values from the tf object and/or are not involved in computation, and exclude them from fault injection.

Choosing "ALL" in the Ops configuration file will then work better.

Unable to execute run on " + str(fiTensor)

When running the LeNet.py file, it encounters an error in tensorFI.py (line 101). It prints the following error message: logging.error("Unable to execute run on " + str(fiTensor)). I don't know why this error happens; any help answering my question would be much appreciated. Best wishes!

TypeError: injectFaultSlice() takes exactly 1 argument (3 given)

Hi,
I am trying to run TensorFI on an object detection framework. I am using TensorFlow 1.15, and the object detection algorithm is implemented in Python 2.7 and tested with TensorFlow 1.2 and 1.4.

I just initialise the injection phase by using
fi = ti.TensorFI(sess, name = "FrustumPointNet", logLevel = 50,disableInjections = False)

And that is the only change I make to the original code, which otherwise runs fine. The error I am getting is:

Log:
2021-06-11 16:29:40.268495: W tensorflow/core/framework/op_kernel.cc:1639] Invalid argument: exceptions.TypeError: injectFaultSlice() takes exactly 1 argument (3 given) Traceback (most recent call last): File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/python/ops/script_ops.py", line 235, in __call__ ret = func(*args) TypeError: injectFaultSlice() takes exactly 1 argument (3 given)

This is my first time using the tool, so I am not able to trace the origin of this error. I would appreciate it if you could point out what might be going wrong here.

P.S.: I am able to run the provided DNN-model examples, so I don't know where things are going wrong.

ERROR:root:Unable to execute run on Tensor("fi_Mean_1:0", dtype=float32)

Hi,
While running the logistic_regression.py test case, I am encountering an error.

System Information:
Python=2.7
TensorFlow=1.15 (GPU support)

Command used:
python Tests/logistic_regression.py

Log
WARNING:tensorflow:From Tests/logistic_regression.py:19: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From /u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From /u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting /tmp/data/train-images-idx3-ubyte.gz
WARNING:tensorflow:From /u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting /tmp/data/train-labels-idx1-ubyte.gz
WARNING:tensorflow:From /u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From /u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:290: init (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From Tests/logistic_regression.py:28: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From Tests/logistic_regression.py:39: The name tf.log is deprecated. Please use tf.math.log instead.

WARNING:tensorflow:From Tests/logistic_regression.py:41: The name tf.train.GradientDescentOptimizer is deprecated. Please use tf.compat.v1.train.GradientDescentOptimizer instead.

WARNING:tensorflow:From Tests/logistic_regression.py:44: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.

WARNING:tensorflow:From Tests/logistic_regression.py:47: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2021-06-09 20:47:16.283375: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2021-06-09 20:47:16.295312: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:5e:00.0
2021-06-09 20:47:16.295797: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 1 with properties:
name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:d8:00.0
2021-06-09 20:47:16.297107: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2021-06-09 20:47:16.299884: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2021-06-09 20:47:16.302552: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2021-06-09 20:47:16.303956: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2021-06-09 20:47:16.306989: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2021-06-09 20:47:16.309497: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2021-06-09 20:47:16.316184: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-06-09 20:47:16.318198: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0, 1
2021-06-09 20:47:16.321113: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
2021-06-09 20:47:16.334672: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2100000000 Hz
2021-06-09 20:47:16.336866: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x563559e9e1d0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-06-09 20:47:16.336902: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-06-09 20:47:16.638280: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:5e:00.0
2021-06-09 20:47:16.638773: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 1 with properties:
name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:d8:00.0
2021-06-09 20:47:16.638838: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2021-06-09 20:47:16.638858: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2021-06-09 20:47:16.638874: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2021-06-09 20:47:16.638891: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2021-06-09 20:47:16.638908: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2021-06-09 20:47:16.638949: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2021-06-09 20:47:16.638966: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-06-09 20:47:16.640706: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0, 1
2021-06-09 20:47:16.640753: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2021-06-09 20:47:16.642010: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-06-09 20:47:16.642032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0 1
2021-06-09 20:47:16.642048: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N N
2021-06-09 20:47:16.642058: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 1: N N
2021-06-09 20:47:16.643804: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3050 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:5e:00.0, compute capability: 7.5)
2021-06-09 20:47:16.645540: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 3050 MB memory) -> physical GPU (device: 1, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:d8:00.0, compute capability: 7.5)
2021-06-09 20:47:16.649265: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x56355ad10da0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-06-09 20:47:16.649295: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5
2021-06-09 20:47:16.649308: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (1): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5
2021-06-09 20:47:17.779916: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
Epoch: 0001 cost= 1.183628618
Epoch: 0002 cost= 0.665431309
Epoch: 0003 cost= 0.552812954
Epoch: 0004 cost= 0.498707291
Epoch: 0005 cost= 0.465449734
Epoch: 0006 cost= 0.442609885
Epoch: 0007 cost= 0.425485988
Epoch: 0008 cost= 0.412153071
Epoch: 0009 cost= 0.401357169
Epoch: 0010 cost= 0.392367188
Epoch: 0011 cost= 0.384738742
Epoch: 0012 cost= 0.378193522
Epoch: 0013 cost= 0.372418279
Epoch: 0014 cost= 0.367249739
Epoch: 0015 cost= 0.362749526
Epoch: 0016 cost= 0.358563266
Epoch: 0017 cost= 0.354849673
Epoch: 0018 cost= 0.351454730
Epoch: 0019 cost= 0.348317068
Epoch: 0020 cost= 0.345396632
Epoch: 0021 cost= 0.342762511
Epoch: 0022 cost= 0.340233821
Epoch: 0023 cost= 0.337921471
Epoch: 0024 cost= 0.335748342
Epoch: 0025 cost= 0.333688100
Optimization Finished!
Accuracy: 0.9135
/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/TensorFI/fiConfig.py:270: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
params = yaml.load(pStream)
WARNING:tensorflow:From /u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/TensorFI/modifyGraph.py:34: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, there are two
options available in V2.
- tf.py_function takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means tf.py_functions can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
- tf.numpy_function maintains the semantics of the deprecated tf.py_func
(it is not differentiable, and manipulates numpy arrays). It drops the
stateful argument making all functions stateful.

WARNING:tensorflow:From /u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/TensorFI/modifyGraph.py:34: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, there are two
options available in V2.
- tf.py_function takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means tf.py_functions can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
- tf.numpy_function maintains the semantics of the deprecated tf.py_func
(it is not differentiable, and manipulates numpy arrays). It drops the
stateful argument making all functions stateful.

Accuracy (with no injections): 0.9135
WARNING:tensorflow:From Tests/logistic_regression.py:85: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.

WARNING:tensorflow:From Tests/logistic_regression.py:85: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.

2021-06-09 20:49:13.454866: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:5e:00.0
2021-06-09 20:49:13.456646: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 1 with properties:
name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:d8:00.0
2021-06-09 20:49:13.456835: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2021-06-09 20:49:13.456945: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2021-06-09 20:49:13.457038: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2021-06-09 20:49:13.457137: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2021-06-09 20:49:13.457219: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2021-06-09 20:49:13.457327: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2021-06-09 20:49:13.457408: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-06-09 20:49:13.462890: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0, 1
2021-06-09 20:49:13.463164: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-06-09 20:49:13.463219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0 1
2021-06-09 20:49:13.463259: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N N
2021-06-09 20:49:13.463295: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 1: N N
2021-06-09 20:49:13.467357: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3050 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:5e:00.0, compute capability: 7.5)
2021-06-09 20:49:13.468799: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 3050 MB memory) -> physical GPU (device: 1, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:d8:00.0, compute capability: 7.5)
2021-06-09 20:49:13.482558: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:5e:00.0
2021-06-09 20:49:13.483020: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 1 with properties:
name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:d8:00.0
2021-06-09 20:49:13.483080: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2021-06-09 20:49:13.483100: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2021-06-09 20:49:13.483123: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2021-06-09 20:49:13.483138: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2021-06-09 20:49:13.483155: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2021-06-09 20:49:13.483172: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2021-06-09 20:49:13.483189: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-06-09 20:49:13.484656: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0, 1
2021-06-09 20:49:13.484699: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-06-09 20:49:13.484712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0 1
2021-06-09 20:49:13.484724: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N N
2021-06-09 20:49:13.484734: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 1: N N
2021-06-09 20:49:13.485860: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3050 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:5e:00.0, compute capability: 7.5)
2021-06-09 20:49:13.486318: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 3050 MB memory) -> physical GPU (device: 1, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:d8:00.0, compute capability: 7.5)
ERROR:root:Encountered exception pyfunc_6 returns 2 values, but expects to see 1 values.
[[node fi_add (defined at /u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]

Original stack trace for u'fi_add':
File "Tests/logistic_regression.py", line 78, in
fi = ti.TensorFI(sess, name = "logistReg", logLevel = 30, disableInjections = True)
File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/TensorFI/tensorFI.py", line 189, in init
self.fiMap = mg.modifyNodes(graph, fiPrefix)
File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/TensorFI/modifyGraph.py", line 98, in modifyNodes
newOp = createFIFunc(op.type, inputs, outputTypeList, name)
File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/TensorFI/modifyGraph.py", line 34, in createFIFunc
res = tf.py_func(fiFunc, inputs, outputTypes, name = name)
File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/python/ops/script_ops.py", line 513, in py_func
return py_func_common(func, inp, Tout, stateful, name=name)
File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/python/ops/script_ops.py", line 495, in py_func_common
func=func, inp=inp, Tout=Tout, stateful=stateful, eager=False, name=name)
File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/python/ops/script_ops.py", line 318, in _internal_py_func
input=inp, token=token, Tout=Tout, name=name)
File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/python/ops/gen_script_ops.py", line 170, in py_func
"PyFunc", input=input, token=token, Tout=Tout, name=name)
File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "/u/atyagi2/anaconda3/envs/tf1.15/lib/python2.7/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in init
self._traceback = tf_stack.extract_stack()

ERROR:root:Unable to execute run on Tensor("fi_Mean_1:0", dtype=float32)
Accuracy (with injections): None

Could you elaborate on what the issue might be, as this is the only test that is not passing?

TF Keras Layers

Is TensorFI compatible with the Keras layers shipped with TensorFlow?

My network (just for reference, for the MNIST dataset) is:

conv_1 = tf.keras.layers.Conv2D(32, (5, 5), padding="same", activation="relu")(X)
max_pool_1 = tf.keras.layers.MaxPool2D()(conv_1)
conv_2 = tf.keras.layers.Conv2D(64, (5, 5), padding="same", activation="relu")(max_pool_1)
max_pool_2 = tf.keras.layers.MaxPool2D()(conv_2)
flat = tf.keras.layers.Flatten()(max_pool_2)
dense_1 = tf.keras.layers.Dense(1024, activation="relu")(flat)
droput = tf.keras.layers.Dropout(dropout_probability)(dense_1)
dense_2 = tf.keras.layers.Dense(10, activation="softmax")(droput)

A typical error when trying to run the FI with the demo file is:

E0826 17:29:57.478169 140562744907136 tensorFI.py:98] Encountered exception pyfunc_49 returns 0 values, but expects to see 1 values.
         [[node fi_dense/bias (defined at /thesis/TensorFI/TensorFI/modifyGraph.py:34) ]]

Original stack trace for u'fi_dense/bias':

The module has been installed as suggested in the README file.

Bit Flips in the weights

Hi,
As far as I understand, TensorFI injects faults at the level of node outputs rather than inside the nodes themselves; the (possibly faulty) output of one node then acts as the input to other nodes.

If we want to study the effects of bit flips in the weights of a model, how can we achieve it with TensorFI?

Is there a way to inject the bit flips inside the 'weight node'?

BatchNorm and data format.

I've experienced a bug related to the data format BatchSize x Channels x H x W. When TensorFI parses a layer that relies on the data format, such as conv2d, it does not save that attribute by default, and it then crashes at run time.

How do I implement the operator tf.layers.BatchNorm? It takes a tensor and four algebraic parameters to apply normalization to the tensor.

TypeError: __init__() got an unexpected keyword argument 'fiConf'

While specifying the path for config file, I am getting this error:

Traceback (most recent call last):
File "train/test.py", line 516, in
test_from_rgb_detection(FLAGS.output+'.pickle', FLAGS.output)
File "train/test.py", line 298, in test_from_rgb_detection
batch_one_hot_to_feed, batch_size=batch_size)
File "train/test.py", line 172, in inference
fi = ti.TensorFI(sess,fiConf = "/u/atyagi2/frustum-pointnets/confFiles/default.yaml", name = "FrustumPointNet", logLevel = 50,disableInjections = False)
TypeError: __init__() got an unexpected keyword argument 'fiConf'

Is 'fiConf' the right keyword, or am I doing something wrong?

'injectFaultMean' doesn't work

I got the following error. I noticed that the source code of def injectFaultMean(a, b) contains a FIXME: "This only works if we call np.mean on b[0]. Need to figure out why." I was wondering what this injection function is supposed to do and how to avoid this error. Thanks.

UnknownError (see above for traceback): exceptions.IndexError: tuple index out of range
Traceback (most recent call last):

  File "/home/elaine/.conda/envs/tensorfi/lib/python2.7/site-packages/tensorflow/python/ops/script_ops.py", line 206, in __call__
    ret = func(*args)

  File "/home/elaine/pycharmProjects/TensorFI-master/TensorFI/injectFault.py", line 551, in injectFaultMean
    res = np.mean(a, b[0])

  File "/home/elaine/.conda/envs/tensorfi/lib/python2.7/site-packages/numpy/core/fromnumeric.py", line 2957, in mean
    out=out, **kwargs)

  File "/home/elaine/.conda/envs/tensorfi/lib/python2.7/site-packages/numpy/core/_methods.py", line 57, in _mean
    rcount = _count_reduce_items(arr, axis)

  File "/home/elaine/.conda/envs/tensorfi/lib/python2.7/site-packages/numpy/core/_methods.py", line 50, in _count_reduce_items
    items *= arr.shape[ax]

IndexError: tuple index out of range

encounter errors when running with tf.layers.flatten

Hi,

I encountered an InvalidArgumentError when running the attached code with TensorFI. The problem seems to be in the flatten() operation, which essentially calls Shape, Pack, Reshape, etc.

#!/usr/bin/python

from __future__ import print_function
import tensorflow as tf
import TensorFI as ti
import numpy as np

x = tf.placeholder(shape=(None, 2, 2), dtype='float32')
def my_func(arg,x):
  arg = tf.convert_to_tensor(arg, dtype=tf.float32)
  t_arg = tf.layers.flatten(x)
  return tf.matmul(t_arg, arg)

value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0],[2.1,2.5],[3.0,4.0]], dtype=np.float32),x)

s = tf.Session()
init = tf.global_variables_initializer()
print("Initial : ", s.run(init))

fi = ti.TensorFI(s,name = "var", logLevel = 10, disableInjections = False)
#fi = ti.TensorFI(s,name = "var", logLevel = 10, disableInjections = True)

logs_path = "./logs"
logWriter = tf.summary.FileWriter( logs_path, s.graph )

print("variable test : ", s.run(value_3,feed_dict = {x:[[[1.0,2.0],[3.0,4.0]]]}))

OverflowError: cannot convert float infinity to integer in faultTypes.py line 95

I'm getting the following error when running TensorFI/Tests/DNN-model/LeNet-mnist/LeNet.py,

File "/home/xxx/TensorFI/faultTypes.py", line 95, in getBinary
      integer = bin(int(number)).lstrip("0b")

OverflowError: cannot convert float infinity to integer

I'm not sure whether there is an infinity value, and if so, whether we should truncate the value in the corresponding source code.

Thanks,
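
One possible guard (a sketch only, not the project's fix) would be to check for non-finite values before the integer conversion, for example:

import numpy as np

def to_int_for_bitflip(number):
    # Guard against inf/NaN before converting to int to extract a bit pattern.
    if not np.isfinite(number):
        # How to handle this is a design choice: skip the injection,
        # saturate to a large finite value, or raise a clearer error.
        raise ValueError("cannot bit-flip a non-finite value: %r" % (number,))
    return int(number)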

isinstance(number, int) fails to capture other int types

In getBinary() in randomBitFlip() of faultTypes.py:

The check isinstance(number, int) fails to capture other integer types, for example the numpy.int32 type. Here is a pdb trace illustrating this behavior:

-> integer, dec = getBinary(val)
(Pdb) s
--Call--
> /home/niranjhana/anaconda/envs/tensorfi/lib/python2.7/site-packages/TensorFI/faultTypes.py(91)getBinary()
-> def getBinary(number):
(Pdb) print number
1024
(Pdb) isinstance(number, int)
False
(Pdb) type(number)
<type 'numpy.int32'>
(Pdb) 

So it falls into the else branch, treating the value as a float and failing when trying to get its decimal part.
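
A possible fix (a sketch under the assumption that only the type check needs to change, not the project's official patch) is to accept NumPy integer scalars as well:

import numpy as np

def is_integer_value(number):
    # Treat Python ints and NumPy integer scalars (e.g. numpy.int32) alike.
    return isinstance(number, (int, np.integer))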

Python version support and Windows implementation

I would like to use your tool for scientific purposes. Have you implemented anything on Windows so far?
I am also curious whether anyone here has developed a model and used TensorFI with Python 3.x.

issues with bitFlip-tensor fault type

Configuring the bitFlip-tensor fault type over all operators in mnist_nn.py returns an accuracy of 10000+, when the value should be between 0 and 1.

The accuracy returned is within the expected range for other fault types. We will need to debug in depth to figure out where the issue is and what causes this inflation of accuracy. By the way, we should also have a sanity check on the accuracy output.
