
nodejs-vision's Introduction

THIS REPOSITORY IS DEPRECATED. ALL OF ITS CONTENT AND HISTORY HAVE BEEN MOVED TO GOOGLE-CLOUD-NODE


Google Cloud Vision API client for Node.js

A comprehensive list of changes in each version may be found in the CHANGELOG.

Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.

Table of contents:

Quickstart

Before you begin

  1. Select or create a Cloud Platform project.
  2. Enable billing for your project.
  3. Enable the Google Cloud Vision API.
  4. Set up authentication with a service account so you can access the API from your local workstation.

Installing the client library

npm install @google-cloud/vision

The Google Cloud Vision API Node.js Client API Reference documentation also contains samples.
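
A minimal usage sketch (assuming default application credentials are configured and a local file named image.jpg exists; the file name is hypothetical):

// Minimal label-detection sketch.
const vision = require('@google-cloud/vision');

async function quickstart() {
  const client = new vision.ImageAnnotatorClient();
  // labelDetection accepts a local path, a GCS URI, or a Buffer.
  const [result] = await client.labelDetection('./image.jpg');
  const labels = result.labelAnnotations || [];
  labels.forEach(label => console.log(label.description));
}

quickstart().catch(console.error);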

Supported Node.js Versions

Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js. If you are using an end-of-life version of Node.js, we recommend that you update as soon as possible to an actively supported LTS version.

Google's client libraries support legacy versions of Node.js runtimes on a best-efforts basis with the following warnings:

  • Legacy versions are not tested in continuous integration.
  • Some security patches and features cannot be backported.
  • Dependencies cannot be kept up-to-date.

Client libraries targeting some end-of-life versions of Node.js are available, and can be installed through npm dist-tags. The dist-tags follow the naming convention legacy-(version). For example, npm install @google-cloud/vision@legacy-8 installs client libraries for versions compatible with Node.js 8.

Versioning

This library follows Semantic Versioning.

This library is considered to be stable. The code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against stable libraries are addressed with the highest priority.

More Information: Google Cloud Platform Launch Stages

Contributing

Contributions welcome! See the Contributing Guide.

Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in the templates directory.

License

Apache Version 2.0

See LICENSE

nodejs-vision's People

Contributors

alexander-fenster, bcoe, beccasaurus, bradmiro, callmehiphop, crwilcox, dpebot, fhinkel, gcf-owl-bot[bot], greenkeeper[bot], happyhuman, jkwlui, jmdobry, jmuk, justinbeckwith, lukesneeringer, munkhuushmgl, nirupa-kumar, nnegrey, normankong, release-please[bot], renovate-bot, renovate[bot], sofisl, steffnay, stephenplusplus, summer-ji-eng, swcloud, xiaozhenliu-gg5, yoshi-automation


nodejs-vision's Issues

Accept strings for images on the feature methods.

We should accept strings on the single-feature methods and build the image object.

Example:

vision.faceDetection('https://location.com/image.jpeg').then(response => ...);
vision.faceDetection('local_file.jpeg').then(response => ...);

We can essentially detect the string and build the right object:

let request = {
  image: {
    correctKey: str,
  },
  features: ['FACE_DETECTION'],
}

Users needing additional arguments (e.g. imageContext) will have to pass the full request object instead.
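
A rough sketch of what that detection could look like; buildImage is a hypothetical helper, and the source.imageUri / source.filename keys follow the request shape the library already accepts:

// Sketch only: build an image object from a plain string argument.
function buildImage(str) {
  const isUrl = /^(https?|gs):\/\//.test(str);
  return isUrl
    ? {source: {imageUri: str}}   // remote URL or GCS URI
    : {source: {filename: str}};  // local file path
}

let request = {
  image: buildImage('local_file.jpeg'),
  features: ['FACE_DETECTION'],
};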

TypeError: client.setCredentials is not a function

From @samartan on November 29, 2017 11:7

Hello everyone! I have been trying to follow the Cloud Vision with Node.js codelab. I am able to start the Google sign-in flow, but after I select an account to sign in, I get a TypeError as explained below.

Sorry for changing the title that many times. I got a bit sloppy there.

Please run down the following list and make sure you've tried the usual "quick fixes":

[done] Search the issues already opened: https://github.com/GoogleCloudPlatform/google-cloud-node/issues
[done] Search StackOverflow: http://stackoverflow.com/questions/tagged/google-cloud-platform+node.js
[done] Check our Troubleshooting guide: https://googlecloudplatform.github.io/google-cloud-node/#/docs/guides/troubleshooting
[done] Check our FAQ: https://googlecloudplatform.github.io/google-cloud-node/#/docs/guides/faq

If you are still having issues, please be sure to include as much information as possible:

Environment details

  • OS: macOS
  • Node.js version: v8.9.1
  • npm version: 5.5.1
  • google-cloud-node version: v6.11.1

Steps to reproduce

Follow steps from the codelab.

Error shown:

home/user/cloud-vision/start/lib/oauth2.js:172
      client.setCredentials(tokens);
             ^
TypeError: client.setCredentials is not a function
    at /home/user/cloud-vision/start/lib/oauth2.js:172:14
    at /home/user/cloud-vision/start/node_modules/google-auth-library/lib/auth/oauth2client.js:95:13
    at Request._callback (/home/user/cloud-vision/start/node_modules/google-auth-library/lib/transporters.js:113:17)
    at Request.self.callback (/home/user/cloud-vision/start/node_modules/google-auth-library/node_modules/request/request.js:186:22)
    at emitTwo (events.js:106:13)
    at Request.emit (events.js:191:7)
    at Request.<anonymous> (/home/user/cloud-vision/start/node_modules/google-auth-library/node_modules/request/request.js:1163:10)
    at emitOne (events.js:96:13)
    at Request.emit (events.js:188:7)
    at IncomingMessage.<anonymous> (/home/user/cloud-vision/start/node_modules/google-auth-library/node_modules/request/request.js:1085:12)

Further Information:

The credentials setup is proper, because I put the details into the "Complete" version of the codelab and I could sign in perfectly. So I thought there must be something wrong with the coding part, but I have checked it twice and everything is the same as in the codelab.

Thanks!

Copied from original issue: googleapis/google-cloud-node#2758

React-Native support

Is it possible to provide a small example of using the Cloud Vision API in React Native?

When will Vision types be available [Typescript]

It would be truly helpful (actually, it's necessary) to have types declared for using Vision in TypeScript projects. I'm currently creating Firebase Cloud Functions in TypeScript, but the ones related to Vision are harder to write without types.

Face detection test failure

Environment details

While running tests in automation with Kokoro

  • OS: N/A
  • Node.js version: N/A
  • npm version: N/A
  • @google-cloud/vision version: 0.23.0

Steps to reproduce

  1. Create a new PR [https://github.com//pull/269]
  2. Automated tests get triggered with Kokoro and the sample tests fail.
    This does fail locally too.

Tried debugging the issue

  • First it was looking for a directory (vision) that did not exist: fixed that.
  • Next, got an error: Canvas is not a constructor.

Google Cloud Vision built with Webpack

Dear everyone,

My project uses ReactJS, and I want to use Google Cloud Vision to check identity cards.
But when I added @google-cloud/vision following the tutorial below,
https://cloud.google.com/vision/docs/libraries#client-libraries-usage-nodejs
I ran into this problem:

./node_modules/grpc/node_modules/node-pre-gyp/lib/util/versioning.js
17:20-67 Critical dependency: the request of a dependency is an expression

./node_modules/grpc/node_modules/node-pre-gyp/lib/pre-binding.js
20:22-48 Critical dependency: the request of a dependency is an expression

./node_modules/grpc/src/grpc_extension.js
32:12-33 Critical dependency: the request of a dependency is an expression

./node_modules/http2/lib/protocol/index.js
46:12-19 Critical dependency: require function is used in a way in which dependencies cannot be statically extracted

Search for the keywords to learn more about each warning.
To ignore, add // eslint-disable-next-line to the line before.

and I can see that @google-cloud/vision requires http2, but this library has been DEPRECATED.

Please help me to resolve this problem.

Thanks & Best Regards

Nothing returned

Steps:

  1. I've set up Node and billing correctly.
  2. Using VS Code, created a folder "test", ran "npm init" to create a package.json; after installing the "vision" lib, this is the package.json:
{
  "name": "test",
  "version": "1.0.0",
  "description": "test",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "index.js"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@google-cloud/vision": "^0.22.1"
  }
}

  3. According to Google's quickstart, create index.js as:
// Imports the Google Cloud client library
const vision = require('@google-cloud/vision');

// Creates a client
const client = new vision.ImageAnnotatorClient();

// Performs label detection on the image file
client
  .labelDetection('label.jpg')   // .jpg file has a word art.
  .then(results => {
    const labels = results[0].labelAnnotations;

    console.log('Labels:');
    labels.forEach(label => console.log(label.description));
  })
  .catch(err => {
    console.error('ERROR:', err);
  });
  4. Set a breakpoint inside index.js and run "npm start"; the breakpoint is not hit and the DEBUG CONSOLE doesn't show any label text.

OCR ignores many single-character words

Hi,

Google Vision's "Document Text Detection" is a great product! I lead an engineering team at a startup, which uses Google Vision to extract text from 1+ million PDF pages per month. For the most part, we are very happy with the service and we look forward to using it to analyze many millions of images over the coming months.

For our use case, there is one huge problem with the OCR output we get from Google Vision: it often omits single characters which appear by themselves. Common examples:

  • "$" with a space between the dollar sign and the number that goes with it
  • "0" the number zero, when presented as a single character
  • "C" the letter C, when used as a single-letter abbreviation
  • "O" the letter O, when presented as a single-letter abbreviation

These characters are actually very important information for my use case. Other OCR products like Tesseract and ABBYY correctly read and present these characters while Google Vision does not, which leads me to suspect that Google Vision might have been intentionally trained or programmed to exclude these from its output. Is it likely that this will be fixed in the near future? (If not, I might need to invest my team's resources into building a workaround.)

vision.ImageAnnotatorClient is not a constructor?

I am new to Node.js and have struggled with the above error message for over 6 hours. I've tried every example I can find on the web but always end up with an error when trying to create a client object. The stripped-down code below always results in the error in the title. Any help would be greatly appreciated. Thank you.

'use strict';

const path = require('path');
const gcs = require('@google-cloud/storage')();
const vision = require('@google-cloud/vision')();

exports.tagImage = (event) => {
  const object = event.data;

  
  const client = new vision.ImageAnnotatorClient();

};
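
For comparison, a sketch of how the client is typically constructed in recent releases; note that the module is required without being invoked, since the trailing () on the require above is a likely culprit for this error:

'use strict';

// Require the module itself; do not call it as a function.
const vision = require('@google-cloud/vision');

exports.tagImage = (event) => {
  const object = event.data;
  const client = new vision.ImageAnnotatorClient();
  // ... use client.labelDetection(...), client.annotateImage(...), etc.
};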

Product Search tests are failing

The Product Search tests have not been running on CI and they weren't running from npm test either

I moved all of the ./samples/productSearch/system-test/ tests into ./samples/system-test/ and they're running now, but failing ⚠️

I played around to get them passing for a bit, but wasn't able to quickly resolve all of the failures

Filing an issue for someone to pick this up

See related PR which makes the tests actually run!

Set to [DO NOT MERGE] because the tests are failing

safeSearchDetection not finding image

I've been stuck on this problem for the past 36 hours. I'm going through the Codelabs tutorial for Firebase Cloud Functions

My app is already deployed to Firebase hosting, and I'm using version 0.14.0 of nodejs-vision. However, when doing the image moderation part, after deploying, I first got this error:

TypeError: Vision is not a constructor

referring to my require and constructor statements

const Vision = require('@google-cloud/vision');
const vision = new Vision();

Which are copied exactly from the tutorial.

I saw in the documentation, that I should use

const vision = require('@google-cloud/vision');
const client = new vision.ImageAnnotatorClient();

So I changed my code to

const Vision = require('@google-cloud/vision');
const vision = new Vision.ImageAnnotatorClient();

It deploys but now when I upload an image, it displays on the app but it doesn't get blurred as it's supposed to. Instead, I get an error in the function logs saying

Error: No image present.
at _coerceRequest (/user_code/node_modules/@google-cloud/vision/src/helpers.js:68:21)
at ImageAnnotatorClient. (/user_code/node_modules/@google-cloud/vision/src/helpers.js:223:12)
at ImageAnnotatorClient.wrapper [as annotateImage] (/user_code/node_modules/@google-cloud/vision/node_modules/@google-cloud/common/src/util.js:746:29)
at ImageAnnotatorClient. (/user_code/node_modules/@google-cloud/vision/src/helpers.js:140:17)
at /user_code/node_modules/@google-cloud/vision/node_modules/@google-cloud/common/src/util.js:777:22
at ImageAnnotatorClient.wrapper [as safeSearchDetection] (/user_code/node_modules/@google-cloud/vision/node_modules/@google-cloud/common/src/util.js:761:12)
at exports.blurOffensiveImages.functions.storage.object.onChange.event (/user_code/index.js:75:17)
at correctMediaLink (/user_code/node_modules/firebase-functions/lib/providers/storage.js:78:20)
at /user_code/node_modules/firebase-functions/lib/cloud-functions.js:35:20
at process._tickDomainCallback (internal/process/next_tick.js:135:7)

Here is the relevant code straight from the tutorial,

const image = {
    source: {imageUri: `gs://${object.bucket}/${object.name}`}
};

return vision.safeSearchDetection(image)
    .then(batchAnnotateImagesResponse => {

So I switched to the code that is in the docs, instead of using the 'image' object, I just put the image url directly inside the safeSearchDetection function

return vision.safeSearchDetection(`gs://${object.bucket}/${object.bucket}`)

(The back-ticks around the argument won't display correctly because they are how markdown displays code format) (edited by @stephenplusplus)

And now the error in the Function logs is

blurOffensiveImages 
  [ 
    { 
        faceAnnotations: [],
        landmarkAnnotations: [],
        logoAnnotations: [],
        labelAnnotations: [],
        textAnnotations: [],
        safeSearchAnnotation: null,     
        imagePropertiesAnnotation: null,    
        error:   {   
                details: [],        
                code: 7,        
                message: 'Error opening file: gs://friendlychat-XXXX.appspot.com/XXXXXXX/-XXXXXXXXX/XXXXXXX.jpg.' },
        cropHintsAnnotation: null,     
        fullTextAnnotation: null,     
        webDetection: null 
   } 
]

My object.bucket is the 'friendlychat-XXXX.appspot.com' part and my object.name is the 'XXXXXXX/-XXXXXXXXX/XXXXXXX.jpg' part

I don't know what else to do. I've tried reverting back to version 0.12.0 and 0.11.0 and nothing helps. Those just give me different errors requiring me to change the vision = new Vision... constructor. And even after making adjustments, the image still isn't able to be found.

Again, the image uploads, but the function to blur isn't running because it can't find the image. I'm really stuck here.

An in-range update of eslint is breaking the build 🚨

Version 4.15.0 of eslint was just published.

Branch Build failing 🚨
Dependency eslint
Current Version 4.14.0
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

eslint is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • ci/circleci: node8 Your tests passed on CircleCI! Details
  • ci/circleci: node9 Your tests failed on CircleCI Details
  • continuous-integration/appveyor/branch AppVeyor build succeeded Details
  • ci/circleci: node7 Your tests passed on CircleCI! Details
  • ci/circleci: node4 Your tests passed on CircleCI! Details
  • ci/circleci: node6 Your tests passed on CircleCI! Details

Release Notes v4.15.0
  • 6ab04b5 New: Add context.report({ messageId }) (fixes #6740) (#9165) (Jed Fox)
  • fc7f404 Docs: add url to each of the rules (refs #6582) (#9788) (Patrick McElhaney)
  • fc44da9 Docs: fix sort-imports rule block language (#9805) (ferhat elmas)
  • 65f0176 New: CLIEngine#getRules() (refs #6582) (#9782) (Patrick McElhaney)
  • c64195f Update: More detailed assert message for rule-tester (#9769) (Weijia Wang)
  • 9fcfabf Fix: no-extra-parens false positive (fixes: #9755) (#9795) (Erin)
  • 61e5fa0 Docs: Add table of contents to Node.js API docs (#9785) (Patrick McElhaney)
  • 4c87f42 Fix: incorrect error messages of no-unused-vars (fixes #9774) (#9791) (akouryy)
  • bbabf34 Update: add ignoreComments option to indent rule (fixes #9018) (#9752) (Kevin Partington)
  • db431cb Docs: HTTP -> HTTPS (fixes #9768) (#9768) (Ronald Eddy Jr)
  • cbf0fb9 Docs: describe how to feature-detect scopeManager/visitorKeys support (#9764) (Teddy Katz)
  • f7dcb70 Docs: Add note about "patch release pending" label to maintainer guide (#9763) (Teddy Katz)
Commits

The new version differs by 14 commits.

  • e14ceb0 4.15.0
  • 2dfc3bd Build: changelog update for 4.15.0
  • 6ab04b5 New: Add context.report({ messageId }) (fixes #6740) (#9165)
  • fc7f404 Docs: add url to each of the rules (refs #6582) (#9788)
  • fc44da9 Docs: fix sort-imports rule block language (#9805)
  • 65f0176 New: CLIEngine#getRules() (refs #6582) (#9782)
  • c64195f Update: More detailed assert message for rule-tester (#9769)
  • 9fcfabf Fix: no-extra-parens false positive (fixes: #9755) (#9795)
  • 61e5fa0 Docs: Add table of contents to Node.js API docs (#9785)
  • 4c87f42 Fix: incorrect error messages of no-unused-vars (fixes #9774) (#9791)
  • bbabf34 Update: add ignoreComments option to indent rule (fixes #9018) (#9752)
  • db431cb Docs: HTTP -> HTTPS (fixes #9768) (#9768)
  • cbf0fb9 Docs: describe how to feature-detect scopeManager/visitorKeys support (#9764)
  • f7dcb70 Docs: Add note about "patch release pending" label to maintainer guide (#9763)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

GRPC version wrong on node v9 or 10

Environment details

  • OS: Mac (but using docker)
  • Node.js version: 8, 9, 10
  • npm version:
  • @google-cloud/vision version: 0.19.0

Steps to reproduce

  1. Set the node version to v9.0.0 OR v10.0.0
  2. npm install this library
  3. Copy the example code into a js file, run that file
  4. you get the below output
Error: Failed to load gRPC binary module because it was not installed for the current system
Expected directory: node-v59-linux-x64-glibc
Found: [node-v57-linux-x64-glibc]
This problem can often be fixed by running "npm rebuild" on the current system
Original error: Cannot find module '/app/node_modules/grpc/src/node/extension_binary/node-v59-linux-x64-glibc/grpc_node.node'
    at Object.<anonymous> (/app/node_modules/grpc/src/grpc_extension.js:53:17)
    at Module._compile (module.js:641:30)
    at Object.Module._extensions..js (module.js:652:10)
    at Module.load (module.js:560:32)
    at tryModuleLoad (module.js:503:12)
    at Function.Module._load (module.js:495:3)
    at Module.require (module.js:585:17)
    at require (internal/module.js:11:18)
    at Object.<anonymous> (/app/node_modules/grpc/src/client_interceptors.js:145:12)
    at Module._compile (module.js:641:30)

I have to use Node v8 for this to actually work.

Example code for `textDetection` is incorrect

Following is the example code under the textDetection method:

var image = {
  source: {imageUri: 'gs://path/to/image.jpg'}
};
vision.textDetection(image).then(response => {
  // doThingsWith(response);
  console.log(response); // I added this portion; not part of the example.
}).catch(err => {
  console.error(err);
});

Running the above example, throws the following error trace:

Error: No image present.
    at _coerceRequest (/Users/Bensooraj/Desktop/NodeJS_Play/nisome/node_modules/@google-cloud/vision/src/helpers.js:69:21)
    at ImageAnnotatorClient.<anonymous> (/Users/Bensooraj/Desktop/NodeJS_Play/nisome/node_modules/@google-cloud/vision/src/helpers.js:224:12)
    at ImageAnnotatorClient.wrapper [as annotateImage] (/Users/Bensooraj/Desktop/NodeJS_Play/nisome/node_modules/@google-cloud/common/src/util.js:749:29)
    at ImageAnnotatorClient.<anonymous> (/Users/Bensooraj/Desktop/NodeJS_Play/nisome/node_modules/@google-cloud/vision/src/helpers.js:141:17)
    at /Users/Bensooraj/Desktop/NodeJS_Play/nisome/node_modules/@google-cloud/common/src/util.js:780:22
    at new Promise (<anonymous>)
    at ImageAnnotatorClient.wrapper [as textDetection] (/Users/Bensooraj/Desktop/NodeJS_Play/nisome/node_modules/@google-cloud/common/src/util.js:764:12)
    at /Users/Bensooraj/Desktop/NodeJS_Play/nisome/routes/index.js:40:10
    at Layer.handle [as handle_request] (/Users/Bensooraj/Desktop/NodeJS_Play/nisome/node_modules/express/lib/router/layer.js:95:5)
    at next (/Users/Bensooraj/Desktop/NodeJS_Play/nisome/node_modules/express/lib/router/route.js:137:13)

The above error came in from here:

if (!is.object(request) || is.undefined(request.image)) {
  return callback(new Error('No image present.'));
}

where the request object is expected to nest an image object within as mentioned in the documentation:

(screenshot: the documented request structure, with the image object nested inside the request)

This is what worked for me:

// Imports the Google Cloud client library
const vision = require('@google-cloud/vision');
// Creates a client
const client = new vision.ImageAnnotatorClient();
// Uploaded image, available in base64Image format,
// is converted to a Buffer
let imageBuffer = Buffer.from(req.body.base64Image, 'base64');

// Call the Cloud Vision client, to detect text
client
    .textDetection({
      image: { content: imageBuffer }
    }).then(response => {
      console.log(response);
    }).catch(err => {
      console.error(err);
    });

I believe the example code in the document should change to:

const request = {
  image: {source: {imageUri: 'gs://path/to/image.jpg'}},
};
// OR, if you are using an image buffer
// const request = {
//   image: { content: imageBuffer }
// };

vision.textDetection(request).then(response => {
  // doThingsWith(response);
  console.log(response); // I added this portion; not part of the example.
}).catch(err => {
  console.error(err);
});

Or,

let image = {
  source: {imageUri: 'gs://path/to/image.jpg'}
};
// OR, if you are using an image buffer
// let image = {
//   content: imageBuffer
// };
vision.textDetection({ image }).then(response => {
  // doThingsWith(response);
}).catch(err => {
  console.error(err);
});

Environment details

  • OS: System Version: macOS 10.12.6 (16G29) | Kernel Version: Darwin 16.7.0
  • Node.js version: v9.2.0
  • npm version: 6.0.0
  • @google-cloud/vision version: 0.19.0

How to limit the text detection region?

I'm using Google Vision to detect text from an image. I want to detect text in some regions of the image, but I can't find any example about it.

Is it supported?

function detectFulltext() {
  const vision = require('@google-cloud/vision');
  const client = new vision.ImageAnnotatorClient();
  const imageUrl = 'http://www.learnjapanesefree.com/img/Plain-form-of-verbs-1.jpg';

  client
    .documentTextDetection(imageUrl)
    .then(results => {
      const fullTextAnnotation = results[0].fullTextAnnotation;
      console.log(fullTextAnnotation.text);
    })
    .catch(err => {
      console.error('ERROR:', err);
    });
}
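
As far as I know there is no request parameter that restricts text detection to a region. One workaround, sketched below, is to filter the returned textAnnotations by their bounding boxes client-side; the region coordinates are hypothetical, and client / imageUrl are reused from the snippet above:

// Sketch: keep only text annotations whose bounding box falls inside a region.
const region = {minX: 0, minY: 0, maxX: 400, maxY: 300}; // hypothetical region

function insideRegion(annotation) {
  const vertices = annotation.boundingPoly.vertices;
  return vertices.every(v =>
    (v.x || 0) >= region.minX && (v.x || 0) <= region.maxX &&
    (v.y || 0) >= region.minY && (v.y || 0) <= region.maxY);
}

client.textDetection(imageUrl).then(results => {
  // Element 0 is the full text; the rest are individual words/blocks.
  const words = results[0].textAnnotations.slice(1).filter(insideRegion);
  words.forEach(w => console.log(w.description));
});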

Can't get it to work in an ionic project - possibly typescript related?

Hi,

I am trying to test this in a simple ionic project, but as soon as I include it in the project it fails with:

TypeError: Cannot read property 'prototype' of undefined
at patch (http://localhost:8100/build/vendor.js:230900:54)
at Object. (http://localhost:8100/build/vendor.js:230761:18)
at Object. (http://localhost:8100/build/vendor.js:230998:30)
at webpack_require (http://localhost:8100/build/vendor.js:55:30)
at Object. (http://localhost:8100/build/vendor.js:230701:10)
at webpack_require (http://localhost:8100/build/vendor.js:55:30)
at Object.module.exports (http://localhost:8100/build/vendor.js:230633:18)
at webpack_require (http://localhost:8100/build/vendor.js:55:30)
at Object.module.exports (http://localhost:8100/build/vendor.js:148017:17)
at webpack_require (http://localhost:8100/build/vendor.js:55:30)

Environment details

  • Windows 10
  • Node.js version: 9.3.0
  • npm version: 5.6.0
  • @google-cloud/vision version: v0.22.1

Steps to reproduce

0. Install ionic/cordova: $ npm install -g ionic cordova

  1. ionic start (just generate a simple tab project)
  2. add to page home.ts above the @Component decorator (probably better to make a provider instead, but it fails nevertheless):

// Imports the Google Cloud client library
import vision from '@google-cloud/vision'; // or require depending on typescript version
const client = new vision.ImageAnnotatorClient();

  3. ionic serve

I am not an expert in any way, but this has me stuck and wanting to just try it out myself using HTTP calls, as I have seen others do in ionic projects.

Thanks!

Robert

ImageAnnotatorClient.batchAnnotateImages returns frequent "bad image data" errors.

  • OS: MacOS, Linux
  • Node.js version: 10.8.0
  • npm version: 6.2.0
  • @google-cloud/vision version: 0.21.0

Steps to reproduce

  1. Batch up multiple requests using ImageAnnotatorClient.batchAnnotateImages, even just batches with 1 request.
  2. run against API with detection type 'documentTextDetection'
  3. repeat same file with ImageAnnotatorClient.documentTextDetection() without batch.
  4. count files with error codes using both approaches. Batch uploads produce higher numbers of error code "3" with "bad image data".

I have tested this from the same client computer and batchAnnotateImages fails on roughly 10% of my files, only to work the next time on the same file.

The client.documentTextDetection approach works reliably though.

About languageHints

I'm glad to be using this feature, but I have a question about configuring options for documentTextDetection.
I have to recognize Mongolian (Cyrillic) characters from a document, and need to set languageHints to ["mn"].
I can't figure out how to pass options to the client.documentTextDetection method.
Please help: how do I set features or options for the client.documentTextDetection method?

Thanks!
Best regards
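
For reference, a sketch of passing languageHints through imageContext in a full request object, which is the shape the underlying annotateImage call accepts (the file path is hypothetical):

const vision = require('@google-cloud/vision');
const client = new vision.ImageAnnotatorClient();

const request = {
  image: {source: {filename: './mongolian-document.jpg'}}, // hypothetical path
  imageContext: {languageHints: ['mn']},
};

client.documentTextDetection(request)
  .then(results => console.log(results[0].fullTextAnnotation.text))
  .catch(err => console.error('ERROR:', err));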

Can't seem to access Vision.types that we had in 0.12

This is a followup from googleapis/google-cloud-node#2516

I had this issue where we could not compare Likelihoods, and this was fixed using types.

However in 0.13.0 I can't seem to find the types anymore. I used to be able to do:

var Vision = require('@google-cloud/vision')
var visionClient = new Vision({...})

visionClient.annotateImage({...})
  .then(responses => {
    var response = responses[0]
    console.log(Vision.types.Likelihood[response.safeSearchAnnotation.adult]) // 1
  })

Are the types published in a different node module?
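
If the types export isn't available in your version, one workaround is to define the likelihood ordering yourself; this sketch assumes the response reports likelihoods as enum name strings such as 'VERY_UNLIKELY', and reuses visionClient from the snippet above:

// Likelihood names in ascending order, mirroring the protobuf enum.
const LIKELIHOODS = [
  'UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE', 'LIKELY', 'VERY_LIKELY',
];

visionClient.annotateImage({/* image and features as above */}).then(responses => {
  const safe = responses[0].safeSearchAnnotation;
  // Compare by position in the ordered list instead of by enum type.
  if (LIKELIHOODS.indexOf(safe.adult) >= LIKELIHOODS.indexOf('LIKELY')) {
    console.log('Image flagged as likely adult content.');
  }
});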

Vision 'detect' not returning values from remote url since 0.90.0

Following closed issue #2087, I tried different versions of vision to see when this started happening.

I was running vision 0.70.0 and installed vision like this:

npm install google-cloud --save
var gcloud = require('google-cloud');
var vision = gcloud.vision();

And it detected remote URLs like a charm.

Since hearing that this way of installing google-cloud is deprecated, I tried installing the bundle @google-cloud/vision. That's when I encountered the same error as the closed issue. Looking closely, the new install updated my version of vision. I tried 0.70.0 and 0.80.0, which work fine.

From 0.90.0, remote URL detection gives this error:

PartialFailureError: A failure occurred during this request.
    at /Users/jasonluu/Documents/v2/node_modules/@google-cloud/vision/src/index.js:420:15
    at /Users/jasonluu/Documents/v2/node_modules/@google-cloud/vision/src/index.js:121:5
    at _combinedTickCallback (internal/process/next_tick.js:74:11)
    at process._tickDomainCallback (internal/process/next_tick.js:122:9)
errors: 
   [ { code: 500,
       message: 'image-annotator::error(12): We can not access the URL currently. Please download the image and pass it in.',
       type: 'labels' } ] }

Configure vision client with API key?

Environment details

  • OS: Mac OS
  • Node.js version: 8.11.1
  • npm version: 5.6.0
  • @google-cloud/vision version: 0.19.0

Steps to reproduce

None, this is a question.

Question

Is it possible to configure the vision client with an API key rather than a Credentials object?

Our use case does not allow us to check a credentials.json file into the repository, but we are able to use environment variables. It would be nice to be able to use the vision lib rather than POSTing directly to the images:annotate endpoint.

I've tried stringifying the Credentials object, then parsing and passing it into the constructor, but then I get strange errors like Auth error:Error: invalid_grant: Invalid JWT Signature. Plus that just seems like more hoops to jump through than necessary, considering the Vision API supports API keys.

// Throws like 10 invalid_grant errors

const visionClient = new vision.ImageAnnotatorClient({
    credentials: JSON.parse(GOOGLE_APPLICATION_CREDENTIALS),
});

visionClient.cropHints(buffer)
    .then(results => console.log(results));

Vision is not a constructor

I'm using the current release, and tried:

const {Vision} = require('@google-cloud/vision');

const token = {} //json for the google application credential

const vision = new Vision(token);

But it yields an error that Vision is not a constructor. What am I missing?
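
For reference, a sketch of constructing the client in recent releases, where the constructor lives on ImageAnnotatorClient rather than on a Vision export (passing credentials this way is optional):

const vision = require('@google-cloud/vision');

const token = {}; // JSON for the Google application credential

const client = new vision.ImageAnnotatorClient({credentials: token});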

Implementing manual methods for new features in v1p3beta1

With the new v1p3beta1 brings Product Search and Object Localization in the features enum, so we need to implement the new methods.

We currently use helpers.js, which exports a single helper() method that wraps the gapic-exported clients for all versions. The helper augments the exports with custom methods like client.textDetection() and client.objectLocalization() (only available in v1p3beta1).
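
An illustrative sketch only (not the library's actual code) of how such a helper can augment a client with per-feature shortcuts derived from the enum names:

// Sketch: wrap annotateImage with shortcut methods built from feature names.
const {ImageAnnotatorClient} = require('@google-cloud/vision');

function addFeatureMethod(clientProto, featureName) {
  // e.g. FACE_DETECTION -> faceDetection
  const methodName = featureName
    .toLowerCase()
    .replace(/_([a-z])/g, (_, c) => c.toUpperCase());
  clientProto[methodName] = function (request) {
    return this.annotateImage(
      Object.assign({}, request, {features: [{type: featureName}]}));
  };
}

['FACE_DETECTION', 'OBJECT_LOCALIZATION'].forEach(name =>
  addFeatureMethod(ImageAnnotatorClient.prototype, name));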

Question regarding safe search free with label detection

So on the pricing page for Google Vision there's a section which states that safe search detection is free with label detection. However, I cannot see any way using the SDK to make a request for both labels and safe search in one. I'm doing them separately at the moment, and in the console the stats say there are 2 calls (which presumably we'll get charged for eventually).

Am I missing something here? Is there actually a way to do this with the SDK?
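
One way that may cover this is the lower-level annotateImage call, which accepts multiple features in a single request; a sketch, assuming a GCS-hosted image (the URI is hypothetical):

const vision = require('@google-cloud/vision');
const client = new vision.ImageAnnotatorClient();

client.annotateImage({
  image: {source: {imageUri: 'gs://my-bucket/photo.jpg'}}, // hypothetical URI
  features: [
    {type: 'LABEL_DETECTION'},
    {type: 'SAFE_SEARCH_DETECTION'},
  ],
}).then(([result]) => {
  console.log(result.labelAnnotations.map(l => l.description));
  console.log(result.safeSearchAnnotation);
});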

Face Detection Tutorial Issues

There are a few problems with the cloud vision face tutorial:

Setup

  • canvas in optionalDependencies: This is a required dependency.
  • "Put it all together": This section does not have any description and does not have copy-pasteable code I would expect in an "All together" section.
  • node faceDetection face.png:

Running

After the setup, run:
node faceDetection face.png

You'll get the error:

ERROR: { Error: ENOENT: no such file or directory, open 'face.png' errno: -2, code: 'ENOENT', syscall: 'open', path: 'face.png' }
{ Error: ENOENT: no such file or directory, open 'face.png' errno: -2, code: 'ENOENT', syscall: 'open', path: 'face.png' }
^C

What you really need is:
node faceDetection.js resources/face.png

ERROR: { Error: 7 PERMISSION_DENIED: Cloud Vision API has not been used in project cloud-devshell-dev before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/vision.googleapis.com/overview?project=cloud-devshell-dev then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.

Fixing permissions

I tried to enable the Vision API in the Cloud Shell, first trying to find the API:

gcloud services list

ERROR: (gcloud.services.list) PERMISSION_DENIED: Not allowed to get project settings for project cloud-devshell-dev

I'm not sure if I could enable the API without knowing the id. Maybe I needed to create a new project rather than cloud-devshell-dev.

Guessing at the API id:

gcloud services enable vision.googleapis

User does not have permission to access service [vision.googleapis:enable] (or it may not exist): The caller does not have permission.

At this point I gave up. It would be ideal if you could just "Open in Cloud Shell", npm i, and npm run detect.

I first found this tutorial on GitHub. The process of switching between cloud.google.com, GitHub tutorial README, GitHub main README, and Cloud Shell is very confusing.

Using Buffer doesn't work

Environment details

  • OS: OSX
  • Node.js version: v6.11.4
  • npm version: 3.10.10
  • @google-cloud/vision version: ^0.13.0

Steps to reproduce

I have a Firebase function that accepts a base64 string, then converts it to a buffer.
Everything runs with no problem, but the response from the Vision API is always an empty array. When I pass the same image by file path, I get a correct response.

Here is an example:

// req.base64data is a string from the client using FileReader API 
client
  .faceDetection( new Buffer(req.base64data,'base64'))
  .then(results => {
    const faces = results[0].faceAnnotations;
        console.log("RESULT:", faces.length);
        return res.send({data:faces.length});
  })
  .catch(err => {
    console.error('ERROR:', err);
  });

Thanks!

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on all branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet. We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please delete the greenkeeper/initial branch in this repository, and then remove and re-add this repository to the Greenkeeper App’s white list on Github. You'll find this list on your repo or organization’s settings page, under Installed GitHub Apps.

Inconsistent OCR text extraction response

Hi,

I am using Google Vision 'documentTextDetection' for one of my projects. My aim is to detect text from images. While checking, I get the impression that I am getting inconsistent text extraction for the same image (i.e. different links, but the image is the same), with different results.

I am using the '@google-cloud/vision' npm package for this. I also noticed that some of the characters are mismatched in the results.

e.g. in most cases '0' is recognized as O (https://samsung-nudge.s3.eu-central-1.amazonaws.com/4.jpeg), 5 as S (https://samsung-nudge.s3.eu-central-1.amazonaws.com/4.jpeg), / as I (https://samsung-nudge.s3.eu-central-1.amazonaws.com/1.jpeg), etc.

let imageurl= 'https://samsung-nudge.s3.eu-central-1.amazonaws.com/barcode1540752102759.jpeg'
client
.documentTextDetection(imageurl)
.then(results => {
	   console.log('results', JSON.stringify(results[0].textAnnotations))
})
.catch(err => {
	console.error('GOOGLE VISION ERROR:', err);
	reject(err)
});

Same image giving different results

https://samsung-nudge.s3.eu-central-1.amazonaws.com/barcode1540752102759.jpeg

MODEL:\nRH60H8138WZ\nPOWER:\n230V/ 50Hz\nCOMPRESSOR:\n2007 - 000029\nMODEL CODE:\nRH6OH8138WZ/SS\nSERIAL NO:\n07KH43AG300046M\n

https://samsung-nudge.s3.eu-central-1.amazonaws.com/m.jpeg

MODEL:\nRH60H8138WZ\nPOWER:\n230V/ 50Hz\nCOMPRESSOR:\n2007 - 000029\nMODEL CODE:\nRH60H8138WZ/SS\nSERIAL NO:\n07KH43AG300046M\n

Please let me know why I am getting inconsistent responses. Also, let me know if there is anything I can do to improve the results.

Can't find mapping between ILSVRC2012_ID and human readable strings

From @stefano9-4 on May 25, 2018 21:11

Hi,

I am working on a personal project that adds noise to the images of the ILSVRC2012 validation set in order to misclassify them (using Google's Cloud Vision). My problem is that I can't seem to find the correct mapping between the IDs provided by the devkit for tasks 1 & 2, available here: http://www.image-net.org/challenges/LSVRC/2012/nonpub-downloads.

Online I found this: https://gist.github.com/xkumiyu/dd200f3f51986888c9151df4f2a9ef30 and this: https://gist.github.com/xkumiyu/dd200f3f51986888c9151df4f2a9ef30 but when I try to compare the IDs to the labels, they don't seem to match either of the lists.

Any help would be greatly appreciated :-)

Copied from original issue: googleapis/google-cloud-node#2823

PDF/TIFF Document Text Detection on local files.

Greetings,
Please consider this a question/doubt!
Text detection from PDF and TIFF files stored in Google Cloud Storage is clearly documented. Can we also detect text from local PDF files? If so, can you please update the documentation?
I tried but wasn't able to detect text from locally saved PDF files.

Thanks!
Pavan.

annotations.proto was not found

Hello,
require('@google-cloud/vision')

causes the following error:
Error: The include google/api/annotations.proto was not found.

We can not access the URL currently. Please download the content and pass it in

Hi,

When trying to extract text from images, the API gives the following error.

Error:

[{"faceAnnotations":[],"landmarkAnnotations":[],"logoAnnotations":[],"labelAnnotations":[],"textAnnotations":[],"localizedObjectAnnotations":[],"safeSearchAnnotation":null,"imagePropertiesAnnotation":null,"error":{"details":[],"code":14,"message":"We can not access the URL currently. Please download the content and pass it in."},"cropHintsAnnotation":null,"fullTextAnnotation":null,"webDetection":null,"context":null}]

Sometimes I get the correct API response, sometimes not. It was working fine before, but today I am getting "We can not access the URL currently. Please download the content and pass it in." a lot. Can you please check the issue and let me know how to fix it?

Sample code:

client
.documentTextDetection('https://samsung-nudge.s3.eu-central-1.amazonaws.com/c.jpeg')
.then(results => {
	console.log('results', JSON.stringify(results));
	resolve(results);
})
.catch(err => {
	console.error('GOOGLE VISION ERROR:', err);
	reject(err)
});
  • Linux:
  • Node 8
  • npm version: 5.6.0
  • @google-cloud/vision version: 0.22.1
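
A workaround sketch along the lines the error message suggests: download the image bytes first and pass them in as content (uses Node's built-in https module; the URL is the one from the snippet above):

const https = require('https');
const vision = require('@google-cloud/vision');
const client = new vision.ImageAnnotatorClient();

// Download a URL into a Buffer.
function download(url) {
  return new Promise((resolve, reject) => {
    https.get(url, res => {
      const chunks = [];
      res.on('data', chunk => chunks.push(chunk));
      res.on('end', () => resolve(Buffer.concat(chunks)));
      res.on('error', reject);
    }).on('error', reject);
  });
}

download('https://samsung-nudge.s3.eu-central-1.amazonaws.com/c.jpeg')
  .then(buffer => client.documentTextDetection({image: {content: buffer}}))
  .then(results => console.log(JSON.stringify(results[0].textAnnotations)))
  .catch(err => console.error('GOOGLE VISION ERROR:', err));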

Drop dependency on nodejs-common

This module pulls in @google-cloud/common, which is a fairly meaty module. It's only using it for the implementation of promisify:
https://github.com/googleapis/nodejs-common/blob/master/src/util.ts#L934

I'm thinking about creating a new npm module with just the promisify functions we use in common. pify doesn't work exactly the same way, in that its multiArgs flag causes it to return the results as an array, but ALSO causes the errors to come back as an array 🤦‍♀️

Any thoughts? @stephenplusplus @ofrobots @googleapis/node-team

Google vision api grpc error for linux environment

npm install --save @google-cloud/vision runs properly on Windows, but it does not install the node-gyp binary for Linux. It gives this error:
Error: Failed to load gRPC binary module because it was not installed for the current system
Expected directory: node-v57-linux-x64-glib
Found: [node-v57-win32-x64-unknown]

Methods should be in verb form, not noun form.

As part of moving to Vision partials, the single-feature methods changed from verb form (detectFaces) to noun form (faceDetection).

The reason for this is that when I originally wrote this and those methods were dynamically applied, I used the value in the enum. So, the Feature.Type enum in the proto has FACE_DETECTION, and I converted it to camelCase and applied it to the class. I did some experimentation to try and keep it in verb form (e.g. detectFaces or even detectFace) but ultimately did not feel confident that it could work. The enum values were not sufficiently consistent to do that reliably, pluralizing words automatically is hard, etc.

@jgeewax has suggested a few ideas:

A one-liner in the client library written manually.

I am cynical about this. It sounds attractive and I went over several iterations on it, but I ultimately decided it was probably going to cause more problems than it solved. Basically, it relies on domain knowledge being carried forward indefinitely, potentially by people unfamiliar with the rationale. If that breaks down (and I expect it will), then you end up in a situation where you have inconsistent methods or features with no helper methods at all.

A proto annotation.

I think this would work really well.

Configuration checked in alongside the proto that the ML team maintains.

I am skeptical about this. I think it is likely to end up in a situation where we have features with no helper method at all, which feels like a bigger quality loss in the long run.

asyncBatchAnnotateFiles filename output concatenates "output-x-to-x.json"

Environment details

  • OS: Windows 10
  • Node.js version: v9.2.0
  • npm version: 6.5.0
  • @google-cloud/vision version: ^0.23.0

Steps to reproduce

function processFilename(fileName) {
// Path to PDF file within bucket

//  const gcsSourceUri = `gs://${bucketName}/pdfs/${fileName}`;
let gcsSourceUri = `gs://${bucketName}/${fileName}`;
let gcsDestinationUri = `gs://${bucketName}/${fileName}.json`;

let inputConfig = {
    // Supported mime_types are: 'application/pdf' and 'image/tiff'
    mimeType: 'application/pdf',
    gcsSource: {
        uri: gcsSourceUri,
    },
};
let outputConfig = {
    gcsDestination: {
        uri: gcsDestinationUri,
    },
};
//    let features = [{ type: 'DOCUMENT_TEXT_DETECTION', model: "builtin/latest" }];
let features = [{ type: 'DOCUMENT_TEXT_DETECTION' }];
let request = {
    requests: [{
        inputConfig: inputConfig,
        features: features,
        outputConfig: outputConfig,
    }, ],
};

client
    .asyncBatchAnnotateFiles(request)
    .then(results => {
        const operation = results[0];
        // Get a Promise representation of the final result of the job
        operation
            .promise()
            .then(filesResponse => {

                //                    console.log(JSON.stringify(filesResponse));

                let destinationUri = filesResponse[0].responses[0].outputConfig.gcsDestination.uri;
                console.log('Json saved to: ' + destinationUri);

                //          console.log(filesResponse[0].responses);
            })
            .catch(function(error) {
                console.log(error);
            });
    })
    .catch(function(error) {
        console.log(error);
    });

}

for example the input filename:
aabb.pdf
then the output will be:
aabb.pdf.jsonoutput-1-to-1.json

(if the pdf contained 1 page)

Thanks!
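
If I read the naming correctly, the destination URI is treated as a prefix and the service appends output-<start>-to-<end>.json to it. A sketch of pointing the destination at a folder-style prefix instead, so the generated files land under it rather than being glued onto the filename (the exact naming is an assumption worth verifying):

// Write results under a per-file folder rather than appending to the filename,
// e.g. gs://bucket/aabb.pdf-output/output-1-to-1.json
let gcsDestinationUri = `gs://${bucketName}/${fileName}-output/`;

let outputConfig = {
  gcsDestination: {
    uri: gcsDestinationUri,
  },
};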

An in-range update of uuid is breaking the build 🚨

Version 3.3.0 of uuid was just published.

Branch Build failing 🚨
Dependency uuid (https://github.com/kelektiv/node-uuid)
Current Version 3.2.1
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

uuid is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • ci/circleci: node8 Your tests passed on CircleCI! Details
  • ci/circleci: node10 Your tests passed on CircleCI! Details
  • continuous-integration/appveyor/branch AppVeyor build succeeded Details
  • ci/circleci: node6 Your tests failed on CircleCI Details

Commits

The new version differs by 17 commits ahead by 17, behind by 1.

  • 1cb9826 chore(release): 3.3.0
  • f3e48ff fix: fix #229
  • 854df05 chore: update dev dependencies (to fix security alerts)
  • 02161b2 Merge branch 'master' of github.com:kelektiv/node-uuid
  • beffff8 fix: assignment to readonly property to allow running in strict mode (#270)
  • 931c70d Merge branch 'master' of github.com:kelektiv/node-uuid
  • 0705cd5 feat: enforce Conventional Commit style commit messages (#282)
  • 2e33970 chore: Add Active LTS and Active versions of Node.js (#279)
  • 205e0ed fix: Get correct version of IE11 crypto (#274)
  • d062fdc fix: assignment to readonly property to allow running in strict mode (#270)
  • c47702c fix: mem issue when generating uuid (#267)
  • cc9a182 feat: enforce Conventional Commit style commit messages (#282)
  • 44c7f9f Add Active LTS and Active versions of Node.js (#279)
  • 153d331 fix: Get correct version of IE11 crypto (#274)
  • df40e54 fix assignment to readonly property to allow running in strict mode (#270)

There are 17 commits in total.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Getting deprecation warning with grpc

Getting an error when running on cloud functions.
The error is: "(node:2) DeprecationWarning: grpc.load: Use the @grpc/proto-loader module with grpc.loadPackageDefinition instead"

Environment details

  • OS:
  • Node.js version: google cloud functions (I guess it's node 6).
  • npm version: using 6.1.0 locally. Not sure if cloud functions reinstall the packages when I deploy.
  • @google-cloud/vision version: 0.20

Steps to reproduce

Using the package in cloud functions.

Single-feature methods are not added dynamically.

The Vision API does not add single-feature methods dynamically the way we intended it to.

Instead, the structure is in place, but due to documentation restrictions, we still manually define each method. This means that when the enum is expanded, new methods will not be defined.

We should write a JSDoc plugin so appropriate documentation is automatically generated, then iterate over the enum.

samples tests failure

Several samples tests failed in today's nightly testing for Vision Node.js client library:

https://circleci.com/gh/googleapis/nodejs-vision/2102

✖ detect.v1p1beta1 › should detect web entities using geographical metadata
✖ detect › should detect web entities with geo metadata in remote file
✖ detect › should detect web entities with geo metadata in local file

Looks like something has changed with the geo data, so the server returned different answers than the test expected. We want to take nightly test failures seriously as they can mask real problems, so can I ask you to take a look at the failures and fix them if possible?

Thanks!

TypeError: Cannot read property 'ImageAnnotatorClient' of undefined

I'm new to Node.js and am trying Google handwriting text recognition. The error log and code are given below; please advise.

function test(){
    const vision = require('@google-cloud/vision').v1p3beta1;
    const fs = require('fs');
    const client = new vision.ImageAnnotatorClient();
    const fileName = `C:/Users/sandr/Downloads/menu.jpg`;

    const request = {
    image: {
        content: fs.readFileSync(fileName),
    },
    feature: {
        languageHints: ['en-t-i0-handwrit'],
    },
    };
    client
    .documentTextDetection(request)
    .then(results => {
        const fullTextAnnotation = results[0].fullTextAnnotation;
        console.log(`Full text: ${fullTextAnnotation.text}`);
    })
    .catch(err => {
        console.error('ERROR:', err);
    });  
}

console.log(test());
    const client = new vision.ImageAnnotatorClient();
                              ^

TypeError: Cannot read property 'ImageAnnotatorClient' of undefined
    at test (D:\Softwares\nodejs-docs-samples-master\nodejs-docs-samples-master\functions\ocr\app\MyApp.js:8:31)
    at Object.<anonymous> (D:\Softwares\nodejs-docs-samples-master\nodejs-docs-samples-master\functions\ocr\app\MyApp.js:34:13)
    at Module._compile (internal/modules/cjs/loader.js:689:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
    at Module.load (internal/modules/cjs/loader.js:599:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:538:12)
    at Function.Module._load (internal/modules/cjs/loader.js:530:3)
    at Function.Module.runMain (internal/modules/cjs/loader.js:742:12)
    at startup (internal/bootstrap/node.js:279:19)
    at bootstrapNodeJSCore (internal/bootstrap/node.js:696:3)

Environment details

  • OS: Windows 10
  • Node.js version: 10.10.0
  • npm version: 6.4.1
  • @google-cloud/vision version: 0.19.0

Thanks!
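
For what it's worth, the undefined suggests the installed release (0.19.0) does not export v1p3beta1; upgrading the package is the straightforward fix. A sketch that falls back to the default client when the beta surface is missing:

const visionModule = require('@google-cloud/vision');

// Use the v1p3beta1 surface when the installed version exposes it,
// otherwise fall back to the default client.
const vision = visionModule.v1p3beta1 || visionModule;
const client = new vision.ImageAnnotatorClient();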

Cannot find module \'redis\'

In VS Code, ran this in PowerShell:

npm run samples-test

got

C:\nodejs-vision-master\samples\system-test\textDetection.test.js:25
   24: test.cb(`should detect texts`, t => {
   25:   const redis = require('redis');
   26:   const client = redis.createClient();

  Error thrown in test:

  Error {
    code: 'MODULE_NOT_FOUND',
    message: 'Cannot find module \'redis\'',
  }

Request to document the `credentials` parameter

When using the GCF UI to generate a key for Google Vision, I am provided with a JSON file. However, this repository doesn't state how to use the JSON file.

I believe the documentation for this repo would increase greatly by adding the following code sample to the README:

const vision = require('@google-cloud/vision');
const credentials = require('./vision-grpc-test-google-api-keys-3aa2156ccb74.json');

// Creates a client
const client = new vision.ImageAnnotatorClient({
  credentials
});

Without this code sample the example in the README would fail as the user isn't authenticated. One needs to either install and configure gcloud, use the GOOGLE_APPLICATION_CREDENTIALS environment variable (another great README candidate), or hunt down the credentials parameter (I discovered it in #63).

One can eventually come across the above documentation, but adding it to the README would help reduce friction for new users.
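
For the environment-variable route mentioned above, a sketch (the key file path is hypothetical):

// Option A: point GOOGLE_APPLICATION_CREDENTIALS at the key file before
// starting the process, e.g.:
//   export GOOGLE_APPLICATION_CREDENTIALS=/path/to/vision-key.json
// The client then picks the credentials up automatically:
const vision = require('@google-cloud/vision');
const client = new vision.ImageAnnotatorClient();

// Option B: pass keyFilename explicitly.
const clientFromKeyFile = new vision.ImageAnnotatorClient({
  keyFilename: '/path/to/vision-key.json',
});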
