
# 👾 Emoji Scavenger Hunt 👾

Emoji Scavenger Hunt is an experimental web-based game that uses TensorFlow.js to identify objects seen by your webcam or mobile camera in the browser. We show you emojis 🍌 ⏰ ☕️ 📱 and you have to find those objects in the real world before your timer runs out 🏆 👍.

Find out how we built this experiment by reading our article on the TensorFlow blog or try it for yourself at g.co/emojiscavengerhunt.

## Development

```
yarn prep
```

Running `yarn prep` will use yarn to get the right packages and set up the right folders. If you don't have yarn you can install it via Homebrew (on a Mac). If you're already running node/npm with nvm (our recommendation) you can install yarn without node using `brew install yarn --without-node`.

Local development also requires the Google Cloud SDK and its associated App Engine components. These are used for the local webserver and for pushing to App Engine for static site hosting.

Once you have both installed you can run the local development server with:

```
yarn dev
```

This task uses watchify to continually watch JS and SASS files and recompile them when changes are detected. You can access the local development server at http://localhost:3000/

When building assets for production use:

```
yarn build
```

This will minify SASS and JS for serving in production.

## Build your own model

You can build your own image recognition model by running a Docker container. The Dockerfiles are in the `training` directory.

Prepare images for training by dividing them into directories, one for each label name that you want to train. For example, the directory structure for training `cat` and `dog` looks as follows, assuming the image data is stored under `data/images`:

```
data
└── images
    ├── cat
    │   ├── cat1.jpg
    │   ├── cat2.jpg
    │   └── ...
    └── dog
        ├── dog1.jpg
        ├── dog2.jpg
        └── ...
```

Once the sample images are ready, you can kickstart the training by building and running the Docker container.

```
$ cd training
$ docker build -t model-builder .
$ docker run -v /path/to/data:/data -it model-builder
```

After the training is completed, you'll see three files in the `data/saved_model_web` directory:

* `tensorflowjs_model.pb` (the dataflow graph)
* `weights_manifest.json` (the weight manifest file)
* `group1-shard*of*` (the collection of binary weight files)

They are SavedModel files converted into a web-friendly format by the TensorFlow.js converter. You can build your own game using your own custom image recognition model by replacing the corresponding files under the `dist/model/` directory with the newly generated ones.
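
As a rough illustration of how these files get used in the browser, here is a minimal TensorFlow.js sketch (not the game's actual code) that loads the converted model and classifies a single webcam frame. It assumes the tfjs 0.x-era API matching this converter output (`tf.loadFrozenModel`, `tf.fromPixels`), file paths based on the `dist/model/` layout described above, and MobileNet-style 224x224 inputs normalized to [-1, 1]; adjust all of these to your own setup.

```ts
// Minimal sketch (not the game's actual code): load the converted model and
// classify one webcam frame. Assumes the tfjs 0.x-era API that matches this
// converter output (tf.loadFrozenModel / tf.fromPixels); newer versions of
// TensorFlow.js use tf.loadGraphModel and tf.browser.fromPixels instead.
import * as tf from '@tensorflow/tfjs';
import {SCAVENGER_CLASSES} from './scavenger_classes';

// Paths are assumptions based on the dist/model/ layout described above.
const MODEL_URL = '/model/web_model.pb';
const WEIGHTS_URL = '/model/weights_manifest.json';

async function classifyFrame(video: HTMLVideoElement): Promise<string> {
  const model = await tf.loadFrozenModel(MODEL_URL, WEIGHTS_URL);

  const logits = tf.tidy(() => {
    // Grab a frame, resize to the 224x224 input the retrained MobileNet
    // expects, and normalize to [-1, 1] (adjust to match your training).
    const frame = tf.fromPixels(video).toFloat();
    const resized = tf.image.resizeBilinear(frame, [224, 224]);
    const batched = resized.sub(127.5).div(127.5).expandDims(0);
    return model.predict(batched) as tf.Tensor;
  });

  const scores = await logits.data();
  logits.dispose();

  // Return the label with the highest score.
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) {
      best = i;
    }
  }
  return SCAVENGER_CLASSES[best];
}
```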

The training script also generates a file called `scavenger_classes.ts`, which works in conjunction with your generated custom model. You need to replace the file at `src/js/scavenger_classes.ts` with this newly generated `scavenger_classes.ts` so that the game's labels match your trained data. After replacing the file you can run the build script normally to test your model in a browser; see the Development section above for information on running a local development server.
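
For reference, the generated file maps the model's numeric output indices to label strings. A two-label example (the `cat`/`dog` labels here are purely illustrative; your generated file will contain your own labels) looks like this:

```ts
// Illustrative shape of a generated scavenger_classes.ts for a model trained
// on two labels. The real file is produced by the training container, and its
// indices must match the order of labels the model was trained on.
export const SCAVENGER_CLASSES: {[key: number]: string} = {
  0: 'cat',
  1: 'dog',
};
```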

Update the game logic in `src/js/game.ts` if needed.
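
For example, if your custom model fires too eagerly (a common report in the issues below), one option is to gate matches on a minimum prediction confidence when comparing the model output against the current target. The sketch below is purely hypothetical; none of these names come from the actual `game.ts`.

```ts
// Hypothetical helper, not part of the real game.ts: only count a prediction
// as a find when it matches the item currently being hunted and its score
// clears a confidence threshold.
const CONFIDENCE_THRESHOLD = 0.8; // tune for your own model

function foundTargetItem(scores: Float32Array, targetIndex: number): boolean {
  // Index of the highest-scoring class.
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) {
      best = i;
    }
  }
  return best === targetIndex && scores[best] >= CONFIDENCE_THRESHOLD;
}
```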


## License

Copyright 2018 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

## Credits

This is an experiment and collaboration between Google Brand Studio and the [PAIR](https://ai.google/pair/) teams at Google.

## Final Thoughts

This is not an official Google product. We will do our best to support and maintain this experiment but your mileage may vary.

We encourage open sourcing projects as a way of learning from each other. Please respect our and other creators’ rights, including copyright and trademark rights when present, when sharing these works and creating derivative work.

If you want more info on Google's policy, you can find that [here](https://policies.google.com/).



## emoji-scavenger-hunt's Issues

Number of Objects Trained

Out of curiosity, how many objects have been trained with the model packaged in the /dist folder?

Is it working on Android?

Hi, first congrats on this amazing project!

Has anyone succeeded in making it work with an Android mobile phone camera?
Or do you have any hints on what to change in the code to make it work?

Error when creating bottlenecks

For some reason, creating bottlenecks abruptly stops and returns this error:

```
INFO:tensorflow:Creating bottleneck at /tmp/bottleneck/asystasia/asystasia gangetica.jpg_mobilenet_1.0_224.txt
Traceback (most recent call last):
  File "/tensorflow/tensorflow/examples/image_retraining/retrain.py", line 1487, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "/tensorflow/tensorflow/examples/image_retraining/retrain.py", line 1188, in main
    bottleneck_tensor, FLAGS.architecture)
  File "/tensorflow/tensorflow/examples/image_retraining/retrain.py", line 500, in cache_bottlenecks
    resized_input_tensor, bottleneck_tensor, architecture)
  File "/tensorflow/tensorflow/examples/image_retraining/retrain.py", line 438, in get_or_create_bottleneck
    if not os.path.exists(bottleneck_path):
  File "/usr/lib/python3.5/genericpath.py", line 19, in exists
    os.stat(path)
UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 37: ordinal not in range(128)
```

What does it mean?

cannot run it ...

```
[2] /bin/sh: dev_appserver.py: command not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
```

Detecting Retrained Data

I am just looking for some clarity on ensuring I have the right approach on a retrained model:

Assuming I trained just two categories via transfer learning with retrain.py:

```
data
└── images
    ├── cat
    │   ├── cat1.jpg
    │   ├── cat2.jpg
    │   └── ...
    └── dog
        ├── dog1.jpg
        ├── dog2.jpg
        └── ...
```

Would index 0 be cat and index 1 dog? I assume the order of my output labels needs to match my scavenger classes.

Do I remove the other, untrained classes and leave only these two?

```ts
export const SCAVENGER_CLASSES: {[key: number]: string} = { 0: 'cat', 1: 'dog' };
```

Side note: I am a little confused. When trying to manually test my retrained model's image classification accuracy before converting it to the TensorFlow.js format, I used
https://github.com/tensorflow/tensorflow/blob/r1.7/tensorflow/examples/label_image/label_image.py

But this script only worked with my tmp/output_graph.pb, not my saved_model.pb; is that correct?

Running the docker command returns error

After cloning the project locally, running the docker command returns the following error. Please help.

```
Step 1/8 : FROM gcr.io/tensorflow/tensorflow
manifest for gcr.io/tensorflow/tensorflow:latest not found
```

QR code

Can you add a QR code to the desktop web site, so it is easy to switch from a desktop browser to mobile?

"Browser or device doesn't support this experiment"?


Hi, I tried setting up this app using the available code in VirtualBox Ubuntu 16.04, but it says "Browser or device doesn't support this experiment". The same thing appeared when I tried the online demo. However, on the Windows OS of the same device, the online demo works perfectly. May I know what the problem actually is?

Running docker returns errors regarding Tensorflow

I've been having trouble managing my TensorFlow installation within the Docker container. I installed an earlier version of TensorFlow (1.9.0), which I specified in the Dockerfile, and that made this error go away:

```
AttributeError: module 'tensorflow' has no attribute 'app'
```

But while building my model, this error came up:

```
class UnliftedInitializerVariable(resource_variable_ops.UninitializedVariable): AttributeError: module 'tensorflow.python.ops.resource_variable_ops' has no attribute 'UninitializedVariable'
```

Do I need to upgrade the code to TensorFlow 2.0 or is there a problem with the way I installed TensorFlow? Any help would be appreciated.

ZeroDivisionError: integer division or modulo by zero

So I am reusing the same code from this repo for retraining. It was all working for me for the past few days until now, when I received the error below:

```
Traceback (most recent call last):
  File "/tensorflow/tensorflow/examples/image_retraining/retrain.py", line 1486, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "/tensorflow/tensorflow/examples/image_retraining/retrain.py", line 1252, in main
    FLAGS.architecture))
  File "/tensorflow/tensorflow/examples/image_retraining/retrain.py", line 549, in get_random_cached_bottlenecks
    image_dir, category)
  File "/tensorflow/tensorflow/examples/image_retraining/retrain.py", line 254, in get_image_path
    mod_index = index % len(category_list)
ZeroDivisionError: integer division or modulo by zero
```

Trying to Build your own model but returns an error

Using a Windows machine at the moment, I got set up and followed the instructions as given:

```
$ cd training
$ docker build -t model-builder .
$ docker run -v /path/to/data:/data -it model-builder
```

However, after hitting enter, the error below shows up.

```
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
ERROR:tensorflow:Image directory '/data/images' not found.
Traceback (most recent call last):
  File "/tensorflow/tensorflow/examples/image_retraining/retrain.py", line 1486, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "/tensorflow/tensorflow/examples/image_retraining/retrain.py", line 1138, in main
    class_count = len(image_lists.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```

I play the game with my own trained model. But it can't identify objects.

Hi, Emoji Scavenger Hunt is a really cool game.

I play the game with my own trained model, but it can't identify objects. It takes a picture of any object and tells me I found the item.

I didn't get any errors, so I think maybe the problem is with the image data.

I prepared 400 images of the OK gesture for training.
Do I need to prepare more images or select images with plainer backgrounds?

Training new models

I followed the instructions to build my own model. I added 190 images of smiles and 190 images of eyes, all cropped to the same size (200x100).

I ran the docker command with the default settings, 4000 steps. Here is part of the console output:

```
INFO:tensorflow:2018-09-11 23:16:32.141023: Step 3100: Train accuracy = 100.0%
INFO:tensorflow:2018-09-11 23:16:32.141567: Step 3100: Cross entropy = 0.000412
INFO:tensorflow:2018-09-11 23:16:32.537921: Step 3100: Validation accuracy = 100.0% (N=100)
INFO:tensorflow:2018-09-11 23:17:17.138179: Step 3200: Train accuracy = 100.0%
INFO:tensorflow:2018-09-11 23:17:17.138496: Step 3200: Cross entropy = 0.000361
INFO:tensorflow:2018-09-11 23:17:17.527896: Step 3200: Validation accuracy = 100.0% (N=100)
INFO:tensorflow:2018-09-11 23:17:56.203106: Step 3300: Train accuracy = 100.0%
INFO:tensorflow:2018-09-11 23:17:56.203532: Step 3300: Cross entropy = 0.000178
INFO:tensorflow:2018-09-11 23:17:56.570616: Step 3300: Validation accuracy = 100.0% (N=100)
INFO:tensorflow:2018-09-11 23:18:40.495192: Step 3400: Train accuracy = 100.0%
INFO:tensorflow:2018-09-11 23:18:40.495422: Step 3400: Cross entropy = 0.000145
INFO:tensorflow:2018-09-11 23:18:40.999290: Step 3400: Validation accuracy = 100.0% (N=100)
INFO:tensorflow:2018-09-11 23:19:34.927443: Step 3500: Train accuracy = 100.0%
INFO:tensorflow:2018-09-11 23:19:34.927855: Step 3500: Cross entropy = 0.000417
INFO:tensorflow:2018-09-11 23:19:35.369817: Step 3500: Validation accuracy = 100.0% (N=100)
INFO:tensorflow:2018-09-11 23:20:21.830019: Step 3600: Train accuracy = 100.0%
INFO:tensorflow:2018-09-11 23:20:21.830568: Step 3600: Cross entropy = 0.000333
INFO:tensorflow:2018-09-11 23:20:22.357888: Step 3600: Validation accuracy = 100.0% (N=100)
INFO:tensorflow:2018-09-11 23:21:09.905047: Step 3700: Train accuracy = 100.0%
INFO:tensorflow:2018-09-11 23:21:09.905420: Step 3700: Cross entropy = 0.000110
INFO:tensorflow:2018-09-11 23:21:10.347650: Step 3700: Validation accuracy = 100.0% (N=100)
INFO:tensorflow:2018-09-11 23:21:57.355424: Step 3800: Train accuracy = 100.0%
INFO:tensorflow:2018-09-11 23:21:57.355648: Step 3800: Cross entropy = 0.000422
INFO:tensorflow:2018-09-11 23:21:57.912983: Step 3800: Validation accuracy = 100.0% (N=100)
INFO:tensorflow:2018-09-11 23:22:43.254789: Step 3900: Train accuracy = 100.0%
INFO:tensorflow:2018-09-11 23:22:43.255021: Step 3900: Cross entropy = 0.000189
INFO:tensorflow:2018-09-11 23:22:43.670463: Step 3900: Validation accuracy = 100.0% (N=100)
INFO:tensorflow:2018-09-11 23:23:24.426763: Step 3999: Train accuracy = 100.0%
INFO:tensorflow:2018-09-11 23:23:24.427008: Step 3999: Cross entropy = 0.000133
INFO:tensorflow:2018-09-11 23:23:24.857894: Step 3999: Validation accuracy = 100.0% (N=100)
```

I added all the files, modified everything and got it working, but no matter what I put in front of the camera, it turns out to be correct every time for eyes and smiles.

Would you suggest more images? More steps? Bigger images?

I got "in matMul: inputs must be rank 2" error when I used retained model.

Hi I love Emoji Scavenger Hunt. I want to play the game with my own trained model. But I got the following error. Can you please help me?

Summary

I created a model and deployed it. However, I got `Error: Error in matMul: inputs must be rank 2, got ranks 1 and 2.` when starting the game. I want to solve it.

Steps to Reproduce

```
# Git clone from my forked repo
$ git clone --depth 1 --branch matmul-error https://github.com/y-zono/emoji-scavenger-hunt.git

# Training
$ cd emoji-scavenger-hunt/training/
$ docker build -t model-builder .
$ docker run -v /xxx/xxx/emoji-scavenger-hunt/training/data:/data -it model-builder

# Copy the trained files into dist
$ cp data/saved_model_web/group1-shard1of1 ../dist/model/group1-shard1of1
$ cp data/saved_model_web/tensorflowjs_model.pb ../dist/model/web_model.pb
$ cp data/saved_model_web/weights_manifest.json ../dist/model/weights_manifest.json

# Run the app
$ cd ..; yarn prep; yarn dev

# Open browser and click "LET'S PLAY"
http://localhost:3000
```

Expected Results

The game is started normally.

Actual Results

The following message was shown on the page.

It looks like your browser or device doesn’t support this experiment. It’s designed to work best on mobile (iOS/Safari or Android/Chrome). 😭

The following error occurred at matmul.ts:

Error: Error in matMul: inputs must be rank 2, got ranks 1 and 2.

https://github.com/tensorflow/tfjs-core/blob/master/src/ops/matmul.ts#L49

Notes

1. The versions are the following:
   * @tensorflow/[email protected]
   * @tensorflow/[email protected]
   * tensorflow v1.7.0 for training
2. Chrome browser version is 65.0.3325.181 (Official Build) (64-bit).
3. The original trained model worked normally on my local PC.
4. I added some images into my own forked repo for training and changed some files like game.ts and scavenger_classes.ts: master...y-zono:matmul-error
5. I downloaded the cat and dog images from image-net:

   ```
   $ wget http://www.image-net.org/api/text/imagenet.synset.geturls?wnid=n02123045 -O cat.txt
   $ wget http://www.image-net.org/api/text/imagenet.synset.geturls?wnid=n02087122 -O dog.txt
   $ i=0; for file in `head -100 cat.txt`;do wget $file -O cat-$i.jpg; let i++ ;done
   $ i=0; for file in `head -100 dog.txt`;do wget $file -O dog-$i.jpg; let i++ ;done
   ```

New Logo

Hello sir.
You have a great app; unfortunately, this app does not have a logo yet. May I donate a logo for your app?

How to properly install and run it

Hi! I'm trying to run this project but I don't know which dependencies are needed and where to download them. Can someone please help me set it up properly?
