
node-red-labs's Introduction

Watson on Node-RED

This repository is a collection of examples on how to use the Watson nodes in Node-RED.

To use these nodes, you first have to set up your environment. The information on the introduction to Node-RED page will get you started in a few minutes. It helps you with:

  • setting up your environment
  • building your first 'Hello World' flow
  • finding additional information on the non-Watson nodes you use in the labs

There are different types of labs in this repository:

  • Basic examples are simple, standalone examples of how to call the individual Watson Node-RED nodes.

  • Advanced labs are where different Watson Node-RED nodes are combined to create more complex applications.

  • Watson Contribution Nodes show how to add Watson Developer Cloud contribution nodes to Node-RED.

  • Node-RED Starter Kits are pre-built applications from which to start your own prototypes.

Feel free to use this content, and please let us know what you think of it!

Region

The examples have been tested against the US South region but have not been fully tested against any other region. Most labs will work in other regions but there are some Watson utilities that only work in the US South region.

Contributing

Do you want to contribute to this project? Please follow the instructions on this page.

If you would like to contribute by updating or creating new nodes for the Watson Developer Cloud API, then switch to the node-red-node-watson project.

License

MIT. Full license text is available in LICENSE.

node-red-labs's People

Contributors

aairom, ahujas, annaet, arlemi, boneskull, charlielito, chrisparsonsdev, chughts, dancunnington, emmajdaws, germanattanasio, gnietof, hannahsaid, hansb001, kwiatks, paulread, philippe-gregoire, salilahuja, sirspidey, smchamberlin, ylecleach


node-red-labs's Issues

Visual Recognition Lab needs some updates

Hi all,

First of all, great labs!

One thing I came across: the Visual Recognition lab needs some updates.

  • The linked pictures are no longer available in the Get Image URL Template node
  • Template screens in Node-RED look a bit different now (syntax highlighting, formatting)

Text to Speech - Extract & GenerateReply

The Extract and GenerateReply nodes in the Text to Speech flow have a bug. The payload is an object that carries both the filename and the text to process. So in Extract, leave the text where it is. Then the GenerateReply output should refer to {{payload.text_to_say}}:


You want to say

<p><q>{{payload.text_to_say}}</q></p>
<p>Hear it:</p>
<audio controls autoplay>
  <source src="{{req._parsedUrl.pathname}}/sayit?text_to_say={{payload.text_to_say}}" type="audio/wav">
  Your browser does not support the audio element.
</audio>
<form action="{{req._parsedUrl.pathname}}">
    <input type="text" name="text_to_say" id="" value="{{payload.text_to_say}}" />
    <input type="submit" value="Try Again" />
</form>
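The fix above can be sketched as a Function node body. This is a minimal sketch, assuming the text arrives as a `text_to_say` query parameter on the incoming HTTP request; the wrapper function is only for illustration, since in Node-RED the body runs directly with `msg` in scope:

```javascript
// Hypothetical "Extract" Function node: keep the text on the payload object
// so a downstream Template node can reference {{payload.text_to_say}}.
// Assumptions: the text arrives as the text_to_say query parameter, and the
// file name ttsaudio.wav is just an example.
function extract(msg) {
    msg.payload = {
        text_to_say: msg.req.query.text_to_say,
        filename: "ttsaudio.wav"
    };
    return msg;
}
```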

Set audio file name

If you set the audio file name, then the extension / MIME type will get recognised, and the user will only be offered a selection of suitable audio players. I suggest using a Function node with the following code:

msg.headers = {
    "Content-Type": "audio/wav",
    "Content-Disposition": "attachment;filename=ttsaudio.wav"
};
return msg;

If you want to play the file in the browser instead (which will need considerably more client-side HTML and JavaScript), try
"Content-Disposition" : "inline"
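For completeness, a sketch of that inline variant as a full Function node body (the wrapper function is only for illustration; in Node-RED the body runs directly with `msg` in scope):

```javascript
// Set response headers so the browser tries to play the audio inline
// instead of offering it as a download.
function setInlineAudioHeaders(msg) {
    msg.headers = {
        "Content-Type": "audio/wav",
        "Content-Disposition": "inline"
    };
    return msg;
}
```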

Minor editing issues creating NLC service on Bluemix

Highlight Button
You will need a Classifier ID; it can be obtained by clicking the Classifiers button, where the Classifier ID is shown. Circle the button to highlight it.
[screenshot: nlc_toolkit_training]

Minor Typo in heading
"Connecting to a existing NLC Service on Bluemix" should read
"Connecting to an existing NLC Service on Bluemix"

Formatting suggested change from
"In this lab we will assume (for now) that you have created a NLC Service in Bluemix and now wish to (a) check it's status and (b) ask it a question via calls in Node-RED. "
to
"In this lab we will assume, for now, that you have created an NLC Service in Bluemix and now wish to:

  • (a) check its status and
  • (b) ask it a question via calls in Node-RED"

Typo
"Open your Node-RED flow editor and drag/drop an two Inject nodes, two Function nodes, one http request node and one Debug node and join up as shown below :"
should read
"Open your Node-RED flow editor, then drag/drop two Inject nodes, two Function nodes, one http request node and one Debug node and join up as shown below :"

No blank
Double-click the top Inject node and select Blank from the options. I do not see Blank in the options, so I chose string with nothing in it.

get NLC status
The closing "; is missing from the end of the line. Also, you should use your own Classifier ID.

Second Function node
The end of the line is not shown on screen; please ensure that you copy to the end of the line. Also, you should use your own Classifier ID.
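A sketch of what the two corrected Function node bodies might look like. YOUR-CLASSIFIER-ID is a placeholder for your own ID, the wrapper functions are only for illustration (in Node-RED each body runs directly with `msg` in scope), and encodeURIComponent is used here instead of the lab's encodeURI because it also escapes characters such as '?':

```javascript
// Base URL of the NLC classifier; YOUR-CLASSIFIER-ID is a placeholder.
var base = "https://gateway.watsonplatform.net/natural-language-classifier" +
           "/api/v1/classifiers/YOUR-CLASSIFIER-ID";

// "get NLC status" Function node: note the closing "; on the URL line.
function getNlcStatus(msg) {
    msg.url = base;
    return msg;
}

// "Ask NLC question" Function node: append the question as a query value.
function askNlcQuestion(msg) {
    msg.url = base + "/classify?text=" + encodeURIComponent(msg.payload);
    return msg;
}
```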

don't mix dash with underscore

The project has names like node-RED_labs that mix UPPERCASE, underscores and dashes at the same time.

I recommend using snake case, so it should be: node_red_labs

NLC Bluemix region is set to US South

A minor point, but I would highlight the region setting. Maybe highlight the whole line:
Note: For this exercise please check that your Bluemix region is set to US South

Speech-to-Text Lab does not fully transcribe the provided .wav file

Another issue I have seen: within the Speech-to-Text lab, the resulting transcription is
"the space shuttle enterprise was the first orbiter of the space shuttle system". The .wav file,
though, contains a longer recording after a brief pause.

I am not sure whether this is down to the inactivity_timeout parameter or the recognition type used within the Node-RED node, but we should either transcribe the whole .wav in our sample or change the sample ;-).
Best of all would be the ability to pass parameters that configure the transcription (inactivity timeout, etc.). Perhaps this is already possible?

Thanks.
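One possible workaround for the timeout question above, assuming the Node-RED node does not expose the parameter: call the Speech to Text REST API directly via an http request node, with a Function node building the URL. The inactivity_timeout query parameter is part of the Watson Speech to Text HTTP API (-1 disables the timeout); whether this endpoint matches your service region and plan is an assumption.

```javascript
// Sketch: build a Speech to Text "recognize" URL that disables the
// inactivity timeout, so a pause in the .wav does not end transcription.
function buildSttUrl(msg) {
    msg.url = "https://gateway.watsonplatform.net/speech-to-text/api/v1/recognize" +
              "?inactivity_timeout=-1";
    return msg;
}
```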

Text to speech lab - issue with generated .wav file

When I follow the steps in the 'text to speech' lab, the .wav file is created but I receive the following error when trying to play it with Windows Media Player:

Windows Media Player cannot play the file. The Player might not support the file type or might not support the codec that was used to compress the file.

Any suggestions appreciated.

My instance of the Node-RED Visual Recognition node is no longer working

I was previously able to easily connect my Visual Recognition node in Node-RED, as I had added it as a service for my product. Now I am getting "could not authorize" on the Visual Recognition node. After conferring with Bluemix support back and forth, they told me it is because Node-RED updated but did not also update its support for the Visual Recognition node. This is really unfortunate, as my entire tutorial, which involved zero coding in Node-RED, is now useless in demonstrating how easy it was!

Do not see many Watson nodes

I created an app from the Bluemix Node-RED Starter Boilerplate. In the Node-RED editor I don't see Watson nodes such as AlchemyData News or Dialog.

How to set msg.alchemy_options?

Hi,
The documentation doesn't show how to set msg.alchemy_options for Alchemy Vision.
Could you please provide an example of how imagePostMode can be set as a child of msg.alchemy_options?
Thank you!
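I cannot confirm the node's exact contract, but here is a hedged sketch of such a Function node, assuming the node reads msg.alchemy_options as a plain object and that "raw" is the mode wanted when posting image bytes directly (both are assumptions, not documented behaviour):

```javascript
// Sketch: set imagePostMode as a child of msg.alchemy_options before the
// Alchemy Vision node. Assumption: the node merges this object into the
// API request; the "raw" value is illustrative.
function setAlchemyOptions(msg) {
    msg.alchemy_options = {
        imagePostMode: "raw"
    };
    return msg;
}
```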

Further broken links

Hello,

On the basic labs page, the 'personality insights' link returns a 404 error with the message 'can't find the page'. The 'visual recognition' link also seems to be broken; would you mind taking a look at that too, please?

Thanks

Two missing links

Re: issue #69, there are still two dangling links that I could not relate to anything in the repo:
basic_examples\relationship_extraction\README.md re_input_file
basic_examples\relationship_extraction\README.md images\fe_inject_blank.png

Using NLC from Node-RED erratum

Create a new Application using the IOT Starter Boilerplate.
Why? Surely you can use the existing boilerplate; then you already have the NLC bound. You can then start at "Drag a Natural Language Classifier (NLC) node to the palette".

Fix the url to call to the service

The instructions ask for the http request node to be set to /tts/sayit, and then the invoke instructions state /talk/sayit. It needs to be consistent: either 'tts' or 'talk'.

Missing link !

"A completed flow file can be found here" is the final line, but the URL behind 'here' leads to a 404.

Using NLC from Node-RED typos etc

Change
"Add 3 more injectm NLC and Debug nodes as shown below"
to
"Add 4 Inject nodes, 3 Classifier nodes and 3 debug nodes as shown below:"

Alter text
"Change the second NLC node to contain List in the dropdown"
to
"Change the first new Classifier node to List in the dropdown"

Alter text
"Change the next NLC to Classify and the final one to Remove. "
to
"Change the next classifier node to Classify and the final one to Remove. "

Alter text
"NOTE : For the Remove Inject node you must change the Inject string to a Classifier that exists (you noted one down earlier)." add highlighting to draw attention to the classifier id.

Alchemy API : daily limit

During testing of the more advanced labs we came across the message "Alchemy API request error: daily-transaction-limit-exceeded" when sending data to the AlchemyAPI Node-RED node. This is important if end users of the labs reach this limit. At present we do not know what the actual limit is, or whether Bluemix is enforcing it or the AlchemyAPI team (indirectly).

Bluemix Node-RED Natural Language Classifier

I want to change my index.html page to display my Node-RED flow. How do I add these two flows?

1.) Website Flow:
HTTPRequest->Template->HTTPResponse sequence

2.) NLC Flow:
[{"id":"b82d2276.47d2e","type":"inject","name":"","topic":"","payload":"","payloadType":"none","repeat":"","crontab":"","once":false,"x":409,"y":468,"z":"7ada38b4.8525c8","wires":[["e211d200.1dee3"]]},{"id":"a565f810.5a9a08","type":"inject","name":"Ask","topic":"","payload":"Is it hot ?","payloadType":"string","repeat":"","crontab":"","once":false,"x":409,"y":526,"z":"7ada38b4.8525c8","wires":[["f6ceadee.09315"]]},{"id":"e211d200.1dee3","type":"function","name":"get NLC status","func":"msg.url=\"https://gateway.watsonplatform.net/natural-language-classifier/api/v1/classifiers/D385B2-nlc-530\";\nreturn msg;","outputs":1,"noerr":0,"x":598,"y":467,"z":"7ada38b4.8525c8","wires":[["9eb5ad14.614a5"]]},{"id":"f6ceadee.09315","type":"function","name":"Ask NLC question","func":"msg.url=\"https://gateway.watsonplatform.net/natural-language-classifier/api/v1/classifiers/D385B2-nlc-530/classify?text=\" + encodeURI(msg.payload);\nreturn msg;","outputs":1,"noerr":0,"x":585,"y":525,"z":"7ada38b4.8525c8","wires":[["9eb5ad14.614a5"]]},{"id":"9eb5ad14.614a5","type":"http request","name":"","method":"GET","ret":"txt","url":"","x":811,"y":489,"z":"7ada38b4.8525c8","wires":[["f3ac9f51.0c536"]]},{"id":"f3ac9f51.0c536","type":"debug","name":"","active":true,"console":"false","complete":"false","x":1020,"y":491,"z":"7ada38b4.8525c8","wires":[]}]

Reference: https://github.com/watson-developer-cloud/node-red-labs/tree/master/basic_examples/natural_language_classifier

Text to Speech only creates .wav files

I have tried the samples with both .flac and .ogg formats, but the result is always .wav.
I changed the option in the node
[screenshot: 2016-07-27 15:43]
and the type in "set headers"
[screenshot: 2016-07-27 15:47]
but the output was still a .wav.

Problem with response HTML

Hi guys. I have followed the Visual Recognition tutorial; however, the sample HTML is not producing the label_name or label_score variables.

[screenshot: picture1]

Suggestions?

Watson_contribution_nodes

The section on Watson contribution nodes needs to be amended to point at our new repository, and a mention of the Box and Dropbox nodes should be added.

Broken Links

Hello,

On the 'Watson Node-RED services labs' page, the links 'Tradeoff Analytics', 'Text to speech', 'Visual recognition', 'Alchemy Vision', 'Alchemy Feature Extraction', 'Natural Language Classifier BETA service' and 'Relationship Extraction' don't seem to be working correctly.

Dropbox setup

I had to give my app a name before I could create the app and continue. I assume there is no restriction on naming.
[screenshot: 2016-04-14 12:19]

Project layout and the use of watson as folder name

Since the folders are within a NODERedWatson organization, you don't need to use the word watson in each folder name. In fact, the examples use the Watson Developer Cloud (which includes AlchemyAPI), not Watson in general (WEA, WDA, etc.).
I would recommend the following layout:

/README.md        # Overview, Getting Started, how to run the examples, license, etc...
/images           # folder containing the images(snake_case)
/advance_examples # instead of watson_advanced_labs
/basic_examples   # instead of watson_services_labs
/LICENSE          # no need to use .txt

Incorrect link

Hello,

On the Introduction to Node-RED page, the link on line 72, 'Check out this page', returns a 404 error 'File not found'.

Use README.md when possible in example folders

When looking at the different example folders, we should try to name the example instructions README.md, since that file is automatically loaded and rendered by GitHub. With lab_tradeoff_analytics_widget.md, users need to click on it to render it.
