ctuning / ck-mlperf

This repository is outdated! Join the open MLPerf workgroup to participate in the development of the next generation of automation workflows for MLPerf benchmarks:

Home Page: https://bit.ly/mlperf-edu-wg

License: BSD 3-Clause "New" or "Revised" License

ck-mlperf's Introduction

Note that this repository is outdated: we are now using the next generation of the MLCommons CK workflow automation meta-framework (Collective Mind aka CM) developed by the open working group. Feel free to join this community effort to learn how to modularize ML Systems and automate their benchmarking, optimization and deployment in the real world!

Collective Knowledge workflows for MLPerf

All CK components from the community are now aggregated in one CK repository.

News

  • April 2021 We are very excited to join forces with OctoML.ai! Contact Grigori Fursin for more details!
  • March 2021 For your convenience, all CK components for ML Systems are now aggregated in one GitHub repository! They can also be searched for at the cKnowledge.io portal!
  • March 2021 See our ACM TechTalk about the CK technology, reproducible research, FAIR principles and MLPerf.
  • March 2021 The overview of the CK technology has appeared in Philosophical Transactions A, the world's longest-running scientific journal, where Newton published: DOI, ArXiv.

Installation

Install CK

$ python -m pip install ck --user
$ ck version
V1.11.1

Pull CK repositories

Pull repos (recursively, pulls ck-env, ck-tensorflow, etc.):

$ ck pull repo:ck-mlperf

MLPerf Inference v0.5

Using CK is optional for MLPerf Inference v0.5.

Unofficial CK workflows

We (unofficially) support two tasks out of three (i.e. all except Machine Translation). Full instructions are provided in the official MLPerf Inference repository.

CK workflows for official application with Docker

You can run the official vision application with CK model and dataset packages.

Install datasets

ImageNet 2012 validation dataset

Download the original dataset and auxiliaries:

$ ck install package --tags=image-classification,dataset,imagenet,val,original,full
$ ck install package --tags=image-classification,dataset,imagenet,aux

Copy the labels next to the images:

$ ck locate env --tags=image-classification,dataset,imagenet,val,original,full
/home/dvdt/CK-TOOLS/dataset-imagenet-ilsvrc2012-val
$ ck locate env --tags=image-classification,dataset,imagenet,aux
/home/dvdt/CK-TOOLS/dataset-imagenet-ilsvrc2012-aux
$ cp `ck locate env --tags=aux`/val.txt `ck locate env --tags=val`/val_map.txt

COCO 2017 validation dataset
$ ck install package --tags=object-detection,dataset,coco,2017,val,original
$ ck locate env --tags=object-detection,dataset,coco,2017,val,original
/home/dvdt/CK-TOOLS/dataset-coco-2017-val

Install and run TensorFlow models

NB: It is currently necessary to create symbolic links if a model's file name is different from the one hardcoded in the application for each profile. For example, for the tf-mobilenet profile (which can be used both for the non-quantized and quantized MobileNet TF models), the application specifies mobilenet_v1_1.0_224_frozen.pb, but the quantized model's file is mobilenet_v1_1.0_224_quant_frozen.pb.

ResNet
$ ck install package --tags=mlperf,image-classification,model,tf,resnet
$ export MODEL_DIR=`ck locate env --tags=model,tf,resnet`
$ export DATA_DIR=`ck locate env --tags=dataset,imagenet,val`
$ export EXTRA_OPS="--accuracy --count 50000 --scenario SingleStream"
$ ./run_and_time.sh tf resnet cpu
...
TestScenario.SingleStream qps=1089.79, mean=0.0455, time=45.880, acc=76.456, queries=50000, tiles=50.0:0.0447,80.0:0.0465,90.0:0.0481,95.0:0.0501,99.0:0.0564,99.9:0.0849

MobileNet non-quantized
$ ck install package --tags=mlperf,image-classification,model,tf,mobilenet,non-quantized
$ export MODEL_DIR=`ck locate env --tags=model,tf,mobilenet,non-quantized`
$ export DATA_DIR=`ck locate env --tags=dataset,imagenet,val`
$ export EXTRA_OPS="--accuracy --count 50000 --scenario Offline"
$ ./run_and_time.sh tf mobilenet cpu
...
TestScenario.Offline qps=352.92, mean=3.2609, time=4.534, acc=71.676, queries=1600, tiles=50.0:2.9725,80.0:4.0271,90.0:4.0907,95.0:4.3719,99.0:4.4811,99.9:4.5173

MobileNet quantized
$ ck install package --tags=mlperf,image-classification,model,tf,mobilenet,quantized
$ ln -s `ck locate env --tags=mobilenet,quantized`/mobilenet_v1_1.0_224{_quant,}_frozen.pb
$ export MODEL_DIR=`ck locate env --tags=model,tf,mobilenet,quantized`
$ export DATA_DIR=`ck locate env --tags=dataset,imagenet,val`
$ export EXTRA_OPS="--accuracy --count 50000 --scenario Offline"
$ ./run_and_time.sh tf mobilenet cpu
...
TestScenario.Offline qps=128.83, mean=7.5497, time=12.419, acc=70.676, queries=1600, tiles=50.0:7.8294,80.0:11.1442,90.0:11.7616,95.0:12.1174,99.0:12.9126,99.9:13.1641

SSD-MobileNet non-quantized
$ ck install package --tags=mlperf,object-detection,model,tf,ssd-mobilenet,non-quantized
$ ln -s `ck locate env --tags=model,tf,ssd-mobilenet,non-quantized`/{frozen_inference_graph.pb,ssd_mobilenet_v1_coco_2018_01_28.pb}
$ export MODEL_DIR=`ck locate env --tags=model,tf,ssd-mobilenet,non-quantized`
$ export DATA_DIR=`ck locate env --tags=dataset,coco,2017,val`
$ export EXTRA_OPS="--accuracy --count 5000 --scenario Offline"
$ ./run_and_time.sh tf ssd-mobilenet cpu
...
TestScenario.Offline qps=5.82, mean=8.0406, time=27.497, acc=93.312, mAP=0.235, queries=160, tiles=50.0:6.7605,80.0:10.3870,90.0:10.4632,95.0:10.4788,99.0:10.4936,99.9:10.5068

SSD-MobileNet quantized
$ ck install package --tags=mlperf,object-detection,model,tf,ssd-mobilenet,quantized
$ ln -s `ck locate env --tags=model,tf,ssd-mobilenet,quantized`/{graph.pb,ssd_mobilenet_v1_coco_2018_01_28.pb}
$ export MODEL_DIR=`ck locate env --tags=model,tf,ssd-mobilenet,quantized`
$ export DATA_DIR=`ck locate env --tags=dataset,coco,2017,val`
$ export EXTRA_OPS="--accuracy --count 5000 --scenario Offline"
$ ./run_and_time.sh tf ssd-mobilenet cpu
...
TestScenario.Offline qps=5.46, mean=9.4975, time=29.310, acc=94.037, mAP=0.239, queries=160, tiles=50.0:7.9843,80.0:12.2297,90.0:12.3646,95.0:12.3965,99.0:12.4229,99.9:12.4351

SSD-ResNet

TODO

CK workflows for official application without Docker

Install prerequisites

To run the official vision app natively (i.e. without Docker), first install Python prerequisites such as OpenCV, TensorFlow and the COCO Python API:

$ ck detect soft --tags=compiler,python --full_path=`which python3`
$ ck install package --tags=lib,tensorflow,v1.14,vcpu,vprebuilt
$ ck install package --tags=lib,python-package,cv2
$ ck install package --tags=tool,coco,api

Then, install the latest LoadGen package:

$ ck install package --tags=mlperf,inference,source,upstream.master
$ ck install package --tags=lib,python-package,absl
$ ck install package --tags=lib,python-package,mlperf,loadgen
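
To check that the LoadGen Python bindings were built against the Python version you selected, a quick import test helps (a minimal sketch; run it inside ck virtual env --tags=lib,python-package,mlperf,loadgen so that the package is on the Python path):

# Minimal sanity check for the MLPerf LoadGen Python bindings.
import mlperf_loadgen as lg

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.SingleStream
settings.mode = lg.TestMode.PerformanceOnly
print("LoadGen imported OK:", settings.scenario)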

NB: The most important thing during installation is to select the same version of Python 3 (if you have more than one registered with CK). Check that each package "needs" exactly the same version of Python 3 after installation:

$ ck show env --tags=lib,tensorflow,v1.14,vcpu,vprebuilt
Env UID:         Target OS: Bits: Name:                              Version: Tags:
087035468886d589   linux-64    64 TensorFlow library (prebuilt, cpu) 1.14.0   64bits,channel-stable,host-os-linux-64,lib,needs-python,needs-python-3.6.7,target-os-linux-64,tensorflow,tensorflow-cpu,tf,tf-cpu,v1,v1.14,v1.14.0,vcpu,vprebuilt

$ ck show env --tags=lib,python-package,cv2
Env UID:         Target OS: Bits: Name:                 Version: Tags:
5f31d16b444d6b8c   linux-64    64 Python OpenCV library 3.6.7    64bits,cv2,host-os-linux-64,lib,needs-python,needs-python-3.6.7,opencv,python-package,target-os-linux-64,v3,v3.6,v3.6.7

$ ck show env --tags=tool,coco,api
Env UID:         Target OS: Bits: Name:            Version: Tags:
885a8f71bf1219da   linux-64    64 COCO dataset API master   64bits,api,coco,compiled-by-gcc,compiled-by-gcc-8.3.0,host-os-linux-64,needs-python,needs-python-3.6.7,target-os-linux-64,tool,v0,vmaster,vtrunk

$ ck show env --tags=lib,python-package,mlperf,loadgen
Env UID:         Target OS: Bits: Name:                            Version: Tags:
462592cb2beeaf63   linux-64    64 MLPerf Inference LoadGen library master   64bits,host-os-linux-64,lib,loadgen,mlperf,mlperf-loadgen,mlperf_loadgen,needs-python,needs-python-3.6.7,python-package,target-os-linux-64,v0,vmaster

Modify run_local.sh

Modify the run_local.sh script under v0.5/classification_and_detection as follows:

$ git diff
diff --git a/v0.5/classification_and_detection/run_local.sh b/v0.5/classification_and_detection/run_local.sh
index 1262991..7597403 100755
--- a/v0.5/classification_and_detection/run_local.sh
+++ b/v0.5/classification_and_detection/run_local.sh
@@ -9,5 +9,5 @@ if [ ! -d $OUTPUT_DIR ]; then
     mkdir -p $OUTPUT_DIR
 fi
 
-python python/main.py --profile $profile $common_opt --model $model_path $dataset \
-    --output $OUTPUT_DIR $EXTRA_OPS $@
+ck virtual env --tag_groups="lib,tensorflow-cpu,v1.14,vcpu,vprebuilt lib,python-package,cv2 tool,coco lib,python-package,mlperf,loadgen" \
+--shell_cmd="python3.6 python/main.py --profile $profile $common_opt --model $model_path $dataset --output $OUTPUT_DIR $EXTRA_OPS $@"

NB: Use exactly the same Python version as your prerequisites "need" (only the major and minor version numbers e.g. 3.6, not 3.6.7).

Use run_local.sh

See above for how to specify datasets and models.

Example: MobileNet non-quantized
$ ck install package --tags=mlperf,image-classification,model,tf,mobilenet,non-quantized
$ export MODEL_DIR=`ck locate env --tags=model,tf,mobilenet,non-quantized`
$ export DATA_DIR=`ck locate env --tags=dataset,imagenet,val`
$ export EXTRA_OPS="--count 1024 --scenario Offline"
$ ./run_local.sh tf mobilenet cpu
...
TestScenario.Offline qps=237.10, mean=3.3406, time=4.319, queries=1024, tiles=50.0:2.9683,80.0:4.2340,90.0:4.2692,95.0:4.2827,99.0:4.2932,99.9:4.2932

ck-mlperf's People

Contributors

bellycat77, ens-lg4, g4v, gfursin, me2x, psyhtest, slahiruk, tjablin

ck-mlperf's Issues

Update Image Classification script for the Closed division

I've created a copy of the script we used for benchmarking Image Classification models for the Closed division of MLPerf Inference v0.5. Below are some suggestions for updating this script for MLPerf Inference v0.7.

  • On a platform like Firefly-RK3399, we support three workflow variants (implementations/backends): TFLite, ArmNN with Neon support, and ArmNN with OpenCL support. For v0.5, we modified the script to specify which variant was "active" and relaunched it. Near the submission deadline, we introduced iteration over implementations and backends into the audit.sh script. Now we should do the same for the run.sh script. Bonus points for spotting other improvements based on the audit.sh script or the run.sh script for the Open division!

  • In addition to rpi4, add other platforms that do not support ArmNN with OpenCL such as Jetson TX1 (tx1). In fact, perhaps write a function that checks whether the given platform is in an OpenCL "blacklist"?

  • MobileNet has been dropped, so this leaves ResNet only. For simplicity, we will only support one preprocessing method (using OpenCV). Therefore, we should change this:

models=( "mobilenet" "resnet" )
models_tags=( "model,tflite,mobilenet-v1-1.0-224,non-quantized" "model,tflite,resnet,no-argmax" )
# Preferred preprocessing methods per model.
models_preprocessing_tags=( "full,side.224,preprocessed,using-opencv" "full,side.224,preprocessed,using-tensorflow" )

to

models=( "resnet" )
models_tags=( "model,tflite,resnet,no-argmax" )
# Preferred preprocessing methods per model.
models_preprocessing_tags=( "full,side.224,preprocessed,using-opencv" )
  • We should update the library versions: currently, ArmNN v20.05 and TFLite v1.15.3 are the latest supported (although we may be able to support TFLite 2.x?).

  • In principle, the difference between Closed and Open division submissions will be primarily in the list of models to use: ResNet only for the Closed; all MobileNet variants for the Open. Having a unified script would be great!

  • Another idea for unification. We typically have two variants of a workflow: with LoadGen and without LoadGen support e.g. program:image-classification-tflite-loadgen and program:image-classification-tflite. For testing purposes, we should be able to specify which program variants to use as e.g. elsewhere. This choice can manifest itself e.g. here.

Implement MLPerf ResNet50 package for TensorFlow

The MLPerf ResNet50 reference by @guschmue supports weights both in the TensorFlow and ONNX formats.

@guschmue notes:

The benchmark is a reference implementation that is not meant to be the fastest implementation possible. It is written in python which might make it less suitable for lite models like mobilenet or large number of cpu's. We are thinking to provide a c++ implementation with identical functionality in the near future.

In fact, we already have such an implementation wrapped in CK: ck-tensorflow:program:image-classification-tf-cpp. Let's check that it works with the ResNet50 weights in the TensorFlow format by providing a CK package.

Move all MLPerf models to Zenodo

Packages such as:

  • model-tf-mlperf-resnet50
  • model-tf-mlperf-mobilenet

already have the model graphs deposited on Zenodo. We should identify all model graphs that currently reside elsewhere (Amazon S3, TensorFlow.org, private DropBox accounts, etc.) and deposit them on Zenodo too.

TFLite models

TF models

Compiling TFLite 1.13 for android28-arm64 with Android NDK r18b fails

I tried to build TFLite 1.13 for android28-arm64 with Android NDK r18b but it fails. I used the following commands:

$ ck detect soft:compiler.llvm.android.ndk --target_os=android28-arm64
$ ck install package --tags=lib,tflite,v1.13 --env.CK_HOST_CPU_NUMBER_OF_PROCESSORS=2 --target_os=android28-arm64

Here is the output:
output.log

Generalise and document TF -> TFLite conversion

@bellycat77 has managed to convert the TF ResNet50 v1.5 model used in MLPerf Inference to TFLite with the following script:

import tensorflow as tf

# Frozen TF graph and its entry/exit tensors (MLPerf ResNet50 v1.5).
graph_def_file = "resnet50_v1.pb"
input_arrays = ["input_tensor"]
output_arrays = ["softmax_tensor"]

# tf.contrib.lite is the converter's location in early TF 1.x releases
# (tf.lite.TFLiteConverter from TF 1.13 onwards).
converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
  graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("resnet50_v1.tflite", "wb").write(tflite_model)

We should generalise and automate this via a CK script. For example, the input file can come from a dependency on a TF model, whereas the input and output arrays can be specified in the model's metadata.
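
A minimal sketch of what such a CK script could look like (the environment variable names below are hypothetical; a real CK package would resolve the file path and tensor names from its dependency on the TF model and the model's metadata):

import os
import tensorflow as tf

# Hypothetical inputs: a real CK script would resolve these from the
# model package's metadata rather than from the environment.
graph_def_file = os.environ["CK_TF_MODEL_FILE"]               # e.g. resnet50_v1.pb
input_arrays = os.environ["CK_TF_MODEL_INPUTS"].split(",")    # e.g. input_tensor
output_arrays = os.environ["CK_TF_MODEL_OUTPUTS"].split(",")  # e.g. softmax_tensor
tflite_file = os.environ.get("CK_TFLITE_MODEL_FILE", "model.tflite")

# tf.lite.TFLiteConverter is the TF 1.13+ home of the converter
# (tf.contrib.lite in earlier 1.x releases).
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays)
with open(tflite_file, "wb") as f:
    f.write(converter.convert())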

RNN-T CmdGen improvements

The RNN-T CmdGen is work-in-progress. We started it for the v0.7 submission round, but eventually did not submit due to a belatedly discovered postprocessing issue. Not surprisingly, it needs more love to get into shape.

Currently, the usage is:

$ ck run cmdgen:benchmark.speech-recognition-loadgen --model=rnnt \
--scenario=singlestream --mode=accuracy \
--sut=aws-g4dn.4xlarge

Future improvements:

  • The --model parameter should be optional. (We only support one model after all.)
  • The --sut parameter should allow any SUT name. At the moment, it is restricted to a handful, and alternatives result in an error (CK error: [cmdgen] build_map[sut] is missing both 'aws-g4dn.4xsmall' and '###' values!).
  • The record name (e.g. mlperf-closed-aws-g4dn.4xlarge-pytorch-v1.15.1-rnnt-singlestream) must include the --mode to allow keeping both performance and accuracy experiment entries simultaneously. At the moment, a previously recorded experiment entry for one mode must be removed (ck rm local:experiment:mlperf-closed*rnnt* -f) to allow for the other mode.
  • The record name should not include a bogus inference engine version (v1.15.1). The default inference engine name (pytorch) should be customizable according to the plugins used.

Way to convert resnet50's onnx format to quantized tensorrt engine

Hi,
I am trying to reproduce the ResNet50 benchmark result shown on mlperf.org (edge/closed division, row 4). Currently I can achieve 2.4 ms latency on Jetson Xavier AGX in the SingleStream scenario. However, I use trtexec to generate the int8 engine file from resnet50_v1.onnx, and trtexec does not support calibration (it uses random weights instead). Could you tell me how you converted resnet50_v1.onnx to the int8 engine file (or plan file) used in this repo?

Implement a script for cross-checking two image classification experiments

The script can search for all experiments with specific tags (e.g. mlperf,image-classification,mobilenet-v1-1.0-224) and offer to select a pair of them for comparison (e.g. one from a reference implementation and another from a vendor-optimised one; or one from a floating-point implementation and another from a quantised one).

For example, the reference MobileNet implementation can be benchmarked as follows:

$ ck benchmark program:image-classification-tflite \
--repetitions=10 --env.CK_BATCH_SIZE=1 --env.CK_BATCH_COUNT=2 \
--record --record_repo=local --record_uoa=mlperf-mobilenet-v1-1.00-224-tflite-0.1.7-performance \
--tags=mlperf,image-classification,mobilenet-v1-1.0-224,tflite-0.1.7,performance \
--skip_print_timers --skip_stat_analysis --process_multi_keys
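
A minimal sketch of the selection part of such a script, assuming it shells out to the ck command-line tool (the actual comparison logic is left out):

import subprocess

def find_experiments(tags):
    # `ck search experiment --tags=...` prints one matching entry per line.
    out = subprocess.check_output(
        ["ck", "search", "experiment", "--tags=" + tags])
    return out.decode().split()

entries = find_experiments("mlperf,image-classification,mobilenet-v1-1.0-224")
for i, entry in enumerate(entries):
    print(i, entry)
# ...then ask the user to pick two indices and load and compare the entries.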

Finalize image classification preprocessing

We should update all image classification clients we support for MLPerf to use new preprocessing (in the order of priority - high to low):

Time-permitting, we should also update non-MLPerf clients:

At the same time, we should also:

ImageNet preprocessing fails with SciPy 1.3.0

Hi

Thank you for the latest Dockerfile. The build is failing on my system with the following error at the ImageNet preprocessing stage:

-----------------------------------
Installing to /home/dvdt/CK_TOOLS/dataset-imagenet-preprocessed
From: /home/dvdt/CK_TOOLS/dataset-imagenet-ilsvrc2012-val-min , To: /home/dvdt/CK_TOOLS/dataset-imagenet-preprocessed , Size: 224 , Crop: 87.5 , InterSize: 0 , 2GU: 0, 2BGR: False, OFF: 0, VOL: '', FOF: image_list.txt, DTYPE: uint8, EXT: rgb8, IMG:
Traceback (most recent call last):
  File "/home/dvdt/CK_REPOS/ck-env/package/dataset-imagenet-preprocessed/preprocess_image_dataset.py", line 186, in <module>
    output_filenames = preprocess_files(selected_filenames, source_dir, destination_dir, crop_percentage, square_side, inter_size, guentherization_mode, convert_to_bgr, data_type, new_file_extension)
  File "/home/dvdt/CK_REPOS/ck-env/package/dataset-imagenet-preprocessed/preprocess_image_dataset.py", line 134, in preprocess_files
    convert_to_bgr = convert_to_bgr)
  File "/home/dvdt/CK_REPOS/ck-env/package/dataset-imagenet-preprocessed/preprocess_image_dataset.py", line 87, in load_image
    img = scipy.misc.imread(image_path)
AttributeError: module 'scipy' has no attribute 'misc'

Details
CK version: 1.9.7
Python version used by CK: 3.7.3 (default, Mar 27 2019, 22:11:17) 
[GCC 7.3.0]

docker --version
Docker version 18.09.6, build 481bc77156

OS - Centos7 on GCP

TFLite MobileNet quantized (int8) fails with OpenCV preprocessing for MobileNet (rgbf32)

Running MobileNet quantized with TFLite on images preprocessed with OpenCV for MobileNet into rgbf32 fails:

$ ck benchmark program:image-classification-tflite --speed --skip_print_timers \
--repetitions=1 --env.CK_BATCH_SIZE=1 --env.CK_BATCH_COUNT=10 \
--dep_add_tags.images=preprocessed,using-opencv,normalized-for.mobilenet \
--dep_add_tags.weights=mobilenet,quantized
...
Summary:
-------------------------------
Graph loaded in 0.001913s
All images loaded in 0.002327s
All images classified in 0.460941s
Average classification time: 0.043231s
Accuracy top 1: 0.0 (0 of 10)
Accuracy top 5: 0.0 (0 of 10)
--------------------------------

The same works fine with TF-C++:

$ ck benchmark program:image-classification-tf-cpp --speed --skip_print_timers \
--repetitions=1 --env.CK_BATCH_SIZE=1 --env.CK_BATCH_COUNT=10 \
--dep_add_tags.images=preprocessed,using-opencv,normalized-for.mobilenet \
--dep_add_tags.weights=mobilenet,quantized
...
Summary:
-------------------------------
Graph loaded in 0.081466s
All images loaded in 0.002623s
All images classified in 0.528722s
Average classification time: 0.011366s
Accuracy top 1: 0.7 (7 of 10)
Accuracy top 5: 0.9 (9 of 10)
--------------------------------

or with TFLite on images preprocessed using OpenCV into rgb8:

$ ck benchmark program:image-classification-tflite --speed --skip_print_timers \
--repetitions=1 --env.CK_BATCH_SIZE=1 --env.CK_BATCH_COUNT=10 \
--dep_add_tags.images=preprocessed,using-opencv,rgb8 \
--dep_add_tags.weights=mobilenet,quantized
...
Summary:
-------------------------------
Graph loaded in 0.001789s
All images loaded in 0.000663s
All images classified in 0.477427s
Average classification time: 0.043434s
Accuracy top 1: 0.7 (7 of 10)
Accuracy top 5: 0.9 (9 of 10)
--------------------------------

Inconsistent total image classification time

Due to warm-up effects, we usually suggest running our image classification clients with --env.CK_BATCH_COUNT=2. As we explain:

When using the batch count of N, the program classifies N images, but the slow first run is not taken into account when computing the average classification time

This works a treat for the average classification time, but leads to an inconsistency when reporting the total classification time: tflite and tf-cpp truthfully include the time of the first batch in the total, while tf-py and onnx do not:

$ ck benchmark program:image-classification-tflite \
--repetitions=10 --env.CK_BATCH_SIZE=1 --env.CK_BATCH_COUNT=2
...
Processing batches...

Batch 1 of 2

Batch loaded in 0.00802251 s
Batch classified in 0.16831 s

Batch 2 of 2

Batch loaded in 0.00776105 s
Batch classified in 0.0762354 s
...
Summary:
-------------------------------
Graph loaded in 0.000663s
All images loaded in 0.015784s
All images classified in 0.244545s
Average classification time: 0.076235s
Accuracy top 1: 0.5 (1 of 2)
Accuracy top 5: 1.0 (2 of 2)
--------------------------------
$ ck benchmark program:image-classification-tf-cpp \
--repetitions=10 --env.CK_BATCH_SIZE=1 --env.CK_BATCH_COUNT=2
...
Processing batches...

Batch 1 of 2
Batch loaded in 0.00341696 s
Batch classified in 0.355268 s

Batch 2 of 2
Batch loaded in 0.00335902 s
Batch classified in 0.0108837 s
...
Summary:
-------------------------------
Graph loaded in 0.053440s
All images loaded in 0.006776s
All images classified in 0.366151s
Average classification time: 0.010884s
Accuracy top 1: 0.5 (1 of 2)
Accuracy top 5: 1.0 (2 of 2)
--------------------------------
$ ck benchmark program:image-classification-tf-py \
--repetitions=10 --env.CK_BATCH_SIZE=1 --env.CK_BATCH_COUNT=2
...
Weights loaded in 0.293122s

Batch 1 of 2
Batch loaded in 0.001036s
Batch classified in 0.121501s

Batch 2 of 2
Batch loaded in 0.001257s
Batch classified in 0.013995s
...
Summary:
-------------------------------
Graph loaded in 1.115745s
All images loaded in 0.002293s
All images classified in 0.013995s
Average classification time: 0.013995s
Accuracy top 1: 0.5 (1 of 2)
Accuracy top 5: 1.0 (2 of 2)
--------------------------------
$ ck benchmark program:image-classification-onnx-py --cmd_key=preprocessed \
--repetitions=10 --env.CK_BATCH_SIZE=1 --env.CK_BATCH_COUNT=2
...
Batch 1 of 2
Batch loaded in 0.001307s
Batch classified in 0.186297s

Batch 2 of 2
Batch loaded in 0.000721s
Batch classified in 0.029533s
...
Summary:
-------------------------------
Graph loaded in 0.018409s
All images loaded in 0.002028s
All images classified in 0.029533s
Average classification time: 0.029533s
Accuracy top 1: 0.5 (1 of 2)
Accuracy top 5: 1.0 (2 of 2)
--------------------------------
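
One way to make all four clients consistent would be to always include the first batch in the total, while excluding it only from the average. A minimal sketch of that computation, using the per-batch times from the tflite run above:

# batch_times[0] is the slow warm-up batch.
batch_times = [0.16831, 0.0762354]

total_time = sum(batch_times)                 # include the first batch in the total
warm_times = batch_times[1:] or batch_times   # fall back if there is only one batch
avg_time = sum(warm_times) / len(warm_times)  # exclude the first batch from the average

print("All images classified in %.6fs" % total_time)
print("Average classification time: %.6fs" % avg_time)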
