
doodle's Introduction

Doodle Image Recognition by MaruLabo

This is a classification model for 'doodle' images: it classifies 28×28-pixel grayscale images. The training data comes from "The Quick, Draw! Dataset".

The model binaries are included in the release.

The program source code is in the src/doodle directory.

  • model.py: The model definition.
  • inputs.py: The dataset preprocessing definitions.
  • metrics.py: The metric calculations.
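As a rough illustration of the preprocessing step, the repo's web example maps raw grayscale pixels to model input with (255 - pixel) / 255, so a white background becomes 0 and full ink becomes 1. A minimal pure-Python sketch of that normalization (the function name is hypothetical, not from inputs.py):

```python
# Sketch of the grayscale normalization used by the web example:
# invert 0-255 values (white background, dark strokes) and scale
# to floats in [0, 1]. Illustrative only; not the repo's code.

def normalize_pixels(pixels):
    """Map 0-255 grayscale values to inverted [0, 1] floats,
    so white background -> 0.0 and full ink -> 1.0."""
    return [(255 - p) / 255.0 for p in pixels]

row = [255, 255, 0, 128]      # white, white, black stroke, mid-gray
print(normalize_pixels(row))  # white -> 0.0, black -> 1.0
```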

About Datasets

The data is not included in this repository; the data setup program downloads "The Quick, Draw! Dataset".

The data is made available by Google, Inc. under the Creative Commons Attribution 4.0 International license: https://creativecommons.org/licenses/by/4.0/

Examples

for Web

  • examples/tensorflow_js_simple: TensorFlow.js + PWA
  • examples/tensorflow_js: TensorFlow.js + Vue.js

for Android

  • examples/tensorflow_lite_android: TensorFlow Lite (Java)

for Raspberry Pi

  • examples/tensorflow_lite_rpi: TensorFlow Lite (C++)

How to use

Install dependency packages:

$ pip install -r requirements.txt

Training on a local machine:

$ python ./tools/train_local.py -c ./config.yaml

You will get results such as:

###### metrics #################################################################
global_step                   : 2000
loss                          : 0.402699053288
macro_average/accuracy        : 0.973749995232
macro_average/f_measure       : 0.870679736137
macro_average/precision       : 0.875779151917
macro_average/recall          : 0.880717754364
macro_class_0/accuracy        : 0.994791686535
macro_class_0/f_measure       : 0.968742370605
macro_class_0/precision       : 0.956623375416
macro_class_0/recall          : 0.985714316368
macro_class_1/accuracy        : 0.981249988079
macro_class_1/f_measure       : 0.870249092579
macro_class_1/precision       : 0.824982345104
macro_class_1/recall          : 0.930000007153
macro_class_2/accuracy        : 0.951041579247
macro_class_2/f_measure       : 0.762770295143
macro_class_2/precision       : 0.769061267376
macro_class_2/recall          : 0.773762583733
macro_class_3/accuracy        : 0.914583384991
macro_class_3/f_measure       : 0.667931675911
macro_class_3/precision       : 0.730173766613
macro_class_3/recall          : 0.627005696297
macro_class_4/accuracy        : 0.981250107288
macro_class_4/f_measure       : 0.876377105713
macro_class_4/precision       : 0.921818137169
macro_class_4/recall          : 0.857774138451
macro_class_5/accuracy        : 0.978124976158
macro_class_5/f_measure       : 0.913400948048
macro_class_5/precision       : 0.870901882648
macro_class_5/recall          : 0.9650349617
macro_class_6/accuracy        : 0.978124916553
macro_class_6/f_measure       : 0.885983645916
macro_class_6/precision       : 0.92440110445
macro_class_6/recall          : 0.864556491375
macro_class_7/accuracy        : 0.981249928474
macro_class_7/f_measure       : 0.897269070148
macro_class_7/precision       : 0.904123425484
macro_class_7/recall          : 0.909956753254
macro_class_8/accuracy        : 0.993749976158
macro_class_8/f_measure       : 0.958296000957
macro_class_8/precision       : 0.966666579247
macro_class_8/recall          : 0.956547617912
macro_class_9/accuracy        : 0.983333289623
macro_class_9/f_measure       : 0.905776977539
macro_class_9/precision       : 0.889040350914
macro_class_9/recall          : 0.93682539463
micro_average/accuracy        : 0.973749995232
micro_average/f_measure       : 0.868749916553
micro_average/precision       : 0.868749976158
micro_average/recall          : 0.868749976158
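The listing above reports both macro averages (compute the metric per class, then average across classes) and micro averages (pool the counts across classes, then compute the metric once). A minimal sketch of the distinction, independent of the repo's metrics.py (names and numbers are illustrative):

```python
# Macro vs. micro precision from per-class (tp, fp) counts.
# Illustrative only; this is not the repo's metrics.py.

def macro_precision(counts):
    """Compute precision per class, then average the precisions."""
    precisions = [tp / (tp + fp) for tp, fp in counts]
    return sum(precisions) / len(precisions)

def micro_precision(counts):
    """Pool the counts across all classes, then compute one precision."""
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    return tp / (tp + fp)

counts = [(90, 10), (10, 30)]    # (tp, fp) for two classes
print(macro_precision(counts))   # (0.9 + 0.25) / 2 = 0.575
print(micro_precision(counts))   # 100 / 140
```

The macro average weights every class equally while the micro average weights every example equally; on a class-balanced evaluation set the two tend to be close, which would explain why the macro and micro accuracies coincide in the listing above.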

How to convert the model format?

TFLite model format (*.tflite)

Use tools/convert_tflite.py with your SavedModel directory (the one containing a saved_model.pb file):

$ python ./tools/convert_tflite.py <path/to/your/savedmodel/dir> <filename/output.tflite>

For more details, run python tools/convert_tflite.py -h.
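Under the hood, such a SavedModel-to-TFLite conversion typically goes through TensorFlow's converter API. A minimal sketch, assuming the public tf.lite.TFLiteConverter API (TF 1.13+ and TF 2.x); the paths are placeholders and this is not a copy of the repo's tools/convert_tflite.py:

```python
# Hedged sketch: convert a SavedModel directory to a .tflite file
# via TensorFlow's public converter API. Paths are placeholders.
import tensorflow as tf

def convert(saved_model_dir, output_path):
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    tflite_model = converter.convert()
    with open(output_path, "wb") as f:
        f.write(tflite_model)

# Example (placeholder paths):
# convert("path/to/savedmodel", "output.tflite")
```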

TensorFlow.js model format (tensorflowjs_model.pb and others)

Use tools/convert_tfjs.py with your SavedModel directory:

$ python tools/convert_tfjs.py <path/to/your/savedmodel/dir> <path/to/output/dir>

For more details, run python tools/convert_tfjs.py -h.

License

MIT license

doodle's People

Contributors

dependabot[bot], hideya, ornew


doodle's Issues

Should not include model binaries in the repository.

@hideya
Currently the repository contains model binaries (examples/tensorflow_js_simple/public). This is undesirable: keeping over 200 MB of model data committed makes git very heavy.

The model data is included in the release. Please change the usage so that the model data is downloaded separately. If you need to pin the model data to a version, you can explicitly specify the release tag.

Once the change of usage is confirmed, I will delete the binary data by rewriting the history.
Sorry for the trouble, and thank you.

Want to change input feature label to string.

For extensibility, it is a problem that the model and the label information are not tied together. This could be addressed by making the vocabulary list a hyperparameter; the question is how to stay compatible with the current code.

  1. Switch the behavior with a hyperparameter.
  2. Create separate code.
  3. Make a breaking change.
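Option 1 could look roughly like this: the class vocabulary becomes a hyperparameter, and string labels are mapped to the integer class ids the model already uses. This is a hypothetical sketch; the function name and vocabulary are illustrative, not from the repo:

```python
# Hypothetical sketch of option 1: make the vocabulary list a
# hyperparameter so string labels and integer class ids stay tied
# together. The vocabulary and names are illustrative only.

def make_label_codec(vocabulary):
    """Build encode/decode functions from a vocabulary hyperparameter."""
    to_id = {label: i for i, label in enumerate(vocabulary)}

    def encode(label):          # string label -> integer class id
        return to_id[label]

    def decode(class_id):       # integer class id -> string label
        return vocabulary[class_id]

    return encode, decode

encode, decode = make_label_codec(["apple", "banana", "cat"])
print(encode("cat"))   # 2
print(decode(0))       # apple
```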

[doodle.ipynb] Itemized list's numbers slipping off by one

Comment on the Jupyter notebook (originally in Japanese).

to: @ornew

In the itemized list below, the code starts the numbering at "0", but the actual Markdown rendering automatically changes it to start at "1" (in both GitHub's Markdown and Jupyter notebook's; incidentally, even if every item in the list starts with "1.", they are rewritten as 1 through 5).

doodle/docs/doodle.ipynb

Lines 70 to 74 in 3e00f02

"0. Initialize the model parameters\n",
"1. Compute predictions on the training data\n",
"2. Compute the error between the teacher labels and the predictions\n",
"3. Update the model parameters so as to minimize the error\n",
"4. **Repeat steps 1-3 until the error is sufficiently small**\n",

The actual rendering looks like this:
[screenshot of the rendered output]

As a result, the step reference in "repeat steps 1-3" ends up off by one.
It also no longer matches the ①, ②, and ③ in the figure below.

[figure]

RuntimeError: MetaGraphDef associated with tags 'serve' could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`

Using TensorFlow backend.
Traceback (most recent call last):
  File "untitled245.py", line 112, in <module>
    args.strip_debug_ops
  File "untitled245.py", line 56, in convert_to_tfjs
    meta_graph = tf.saved_model.loader.load(sess, tags, savedmodel_dir)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 197, in load
    return loader.load(sess, tags, import_scope, **saver_kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 350, in load
    **saver_kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 275, in load_graph
    meta_graph_def = self.get_meta_graph_def_from_tags(tags)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 251, in get_meta_graph_def_from_tags
    " could not be found in SavedModel. To inspect available tag-sets in"
RuntimeError: MetaGraphDef associated with tags 'serve' could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: saved_model_cli

Maybe better to have tf.tidy() around tensor operations?

The following lines involve tensor operations, including the creation of temporary tensors.

const img = tf.fromPixels(imageData, 1).toFloat()
const v255 = tf.scalar(255.)
const grayscaled = tf.div(tf.sub(v255, img), v255)
const results = this._model.execute({
'image_1': grayscaled.expandDims(0)
})
this.probabilities = this.read_tensor(results, 'model/probabilities')

In Google's example code, such as the following, tf.tidy() is used around such operations:

https://github.com/tensorflow/tfjs-examples/blob/bf51024bd04fa6197b410db10e5210d4604aef43/mnist/index.js#L114-L122

I'm wondering whether it is needed in this case, although I realize the tensor that holds the result is explicitly disposed, so there may be no need for tf.tidy().

Still, it might be easier for beginners to read the code as a reference if tf.tidy() is used instead of the explicit dispose().

I'm creating this issue, just in case.
