
assistant-conversation-nodejs's Introduction

Actions SDK Node.js Fulfillment Library

⚠️ Warning: Conversational Actions will be deprecated on June 13, 2023. For more information, see Conversational Actions Sunset.

This fulfillment library exposes a developer-friendly way to fulfill Actions SDK handlers for the Google Assistant.


Setup Instructions

Make sure Node.js >=10.18.0 is installed.

Install the library with either npm install @assistant/conversation or yarn add @assistant/conversation if you use yarn.

Example Usage

// Import the appropriate service and chosen wrappers
const {
  conversation,
  Image,
} = require('@assistant/conversation')

// Create an app instance
const app = conversation()

// Register handlers for Actions SDK

app.handle('<YOUR HANDLER NAME>', conv => {
  conv.add('Hi, how is it going?')
  conv.add(new Image({
    url: 'https://developers.google.com/web/fundamentals/accessibility/semantics-builtin/imgs/160204193356-01-cat-500.jpg',
    alt: 'A cat',
  }))
})

Frameworks

Export or run for your appropriate framework:

Firebase Functions

const functions = require('firebase-functions')

// ... app code here

exports.fulfillment = functions.https.onRequest(app)

Actions Console Inline Editor

const functions = require('firebase-functions')

// ... app code here

// name has to be `ActionsOnGoogleFulfillment`
exports.ActionsOnGoogleFulfillment = functions.https.onRequest(app)

Self Hosted Express Server

const express = require('express')
const bodyParser = require('body-parser')

// ... app code here

const expressApp = express().use(bodyParser.json())

expressApp.post('/fulfillment', app)

expressApp.listen(3000)

AWS Lambda API Gateway HTTP proxy integration

// ... app code here

exports.fulfillment = app

Next Steps

Take a look at the docs and samples linked at the top to get to know the platform and supported functionalities.

Library Development Instructions

This library uses yarn to run commands. Install yarn using instructions from https://yarnpkg.com/en/docs/install or with npm: npm i -g yarn.

Install the library dependencies with yarn. If you want to run any of the sample apps, follow the instructions in the sample README.

Functionality

Public interfaces, classes, functions, objects, and properties are labeled with the JSDoc @public tag and exported at the top level. Everything that is not labeled @public and exported at the top level is considered internal and may be changed.
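For illustration only (this class is not an actual export of the library; it is a made-up example), a public symbol would be marked like this:

/**
 * Illustrative only, not part of @assistant/conversation.
 * The @public tag plus a re-export from the package entry point is what
 * marks a symbol as part of the supported, semver-covered surface.
 * @public
 */
class ExamplePublicResponse {}

module.exports = { ExamplePublicResponse };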

References & Issues

Make Contributions

Please read and follow the steps in the CONTRIBUTING.md.

License

See LICENSE.

assistant-conversation-nodejs's People

Contributors

canain, fleker, kkocel, taycaldwell


assistant-conversation-nodejs's Issues

Docs page not easy to use

TL;DR: The documentation reference is not as cross-linked within itself as it could be. Features like Globals and Search do not work. Key information is missing from the Media constructor documentation. The latest package version (3.5.0) appears to have no public docs.

Through some googling I ended up here: https://actions-on-google.github.io/assistant-conversation-nodejs/3.3.0/index.html

I wanted to learn about the different media types a Node.js client can push into a conversation.
There is this example:

  conv.add('Hi, how is it going?')
  conv.add(new Image({
    url: 'https://developers.google.com/web/fundamentals/accessibility/semantics-builtin/imgs/160204193356-01-cat-500.jpg',
    alt: 'A cat',
  }))

I clicked on the Conversation class in the left menu, then ended up here https://actions-on-google.github.io/assistant-conversation-nodejs/3.3.0/classes/conversation_conv.conversationv3.html. Image is mentioned in the types and once clicking on it, I end up here: https://actions-on-google.github.io/assistant-conversation-nodejs/3.3.0/classes/conversation_prompt_content_image.image.html.

This is the only way to reach that page. Also note that none of these second-level pages are reflected in the left side menu; one must click through the pages directly and cannot browse the way other doc tools (Javadoc etc.) allow.

Clicking on Globals ends in a 404.

Clicking on Search opens an input field showing "Search Index not available". Entering anything and pressing Enter leads to a 404: https://actions-on-google.github.io/assistant-conversation-nodejs/3.3.0/classes/undefined

I'm looking for other types to push through Google Actions, for example a YouTube link or an HLS video stream.
For that there should be the Media class: https://actions-on-google.github.io/assistant-conversation-nodejs/3.3.0/classes/conversation_prompt_content_media.media.html. The documentation lacks any information on, or pointers to, the supported formats.

The MediaObject class contains 4 optional parameters and no description of how to use the class: https://actions-on-google.github.io/assistant-conversation-nodejs/3.3.0/interfaces/api_schema.mediaobject.html.

Also note the version (3.3.0) in the URLs, which does not match the current version, 3.5.0. No link to the docs is provided in the GitHub releases tab, which would be helpful. Old docs versions could also redirect to newer ones.

Getting Unsuccessful webhook call: Failed to translate JSON to ExecuteHttpResponse for table

I have tried to show a table from an array of data, but when I test the action I get the error below:

Unsuccessful webhook call: Failed to translate JSON to ExecuteHttpResponse..

But when I check the logs, I see the row values like below:

{
  "responseJson": {
    "session": {
      "id": "ABwppHE5M8EGlWf3YmpUUGPQ5xxHh-cb2QYyF_YUarZbF_jXq-Ad2iKDtyI8XAyvWPp4hHnQockBWMZuQA",
      "params": {},
      "languageCode": ""
    },
    "prompt": {
      "override": false,
      "content": {
        "table": {
          "button": {},
          "columns": [
            "Date",
            "Time",
            "Place"
          ],
          "image": {},
          "rows": [
            "20-10-2020",
            "11:20",
            "Test"
          ],
          "subtitle": "",
          "title": ""
        }
      }
    }
  }
}

Here is how I add the table to the conv:

const tempDatas = ['20-10-2020', '11:20', 'Test'];
conv.add(
  new Table({
    dividers: true,
    columns: ['Date', 'Time', 'Place'],
    rows: tempDatas
  })
);

I used the same logic in the google-actions plugin, where it works fine.
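A hedged guess at the cause (based on the Conversational Actions table prompt schema, not verified against this library version): the error may come from passing flat strings where the API expects column and row objects. A sketch of the shape that appears to be expected:

const { Table } = require('@assistant/conversation');

// Inside a handler: columns are objects with a `header`,
// and each row wraps its values in `cells`.
conv.add(
  new Table({
    columns: [{ header: 'Date' }, { header: 'Time' }, { header: 'Place' }],
    rows: [
      { cells: [{ text: '20-10-2020' }, { text: '11:20' }, { text: 'Test' }] },
    ],
  })
);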

Test Simulator fails to load latest Actions Builder changes

I'm working on migrating from Dialogflow to Actions Builder. Things have been going well; however, since yesterday I've been unable to load the latest Actions Builder changes in the Test Simulator, which blocks my ability to test and develop my action.

I'm getting the following errors in the web console:

Content Security Policy: The page’s settings blocked the loading of a resource at inline (“script-src”).
Failed to load resource: the server responded with a status of 504 () /m/actions/agents/draft/createlocalizeduserpreview:1
Failed to load resource: the server responded with a status of 401 () GetPeople:1


The 'updating preview' spinning loader shows for about 2.5 minutes before this error appears.

There are no errors/warnings reported in the develop tab.

How do I proceed with this issue?

Thanks

Security vulnerability due to outdated google-auth-library dependency

Running npm audit results in a high-severity vulnerability:

  High            Prototype Pollution in node-forge
  Package         node-forge
  Patched in      >= 0.10.0
  Dependency of   @assistant/conversation
  Path            @assistant/conversation > google-auth-library > gtoken >
                  google-p12-pem > node-forge
  More info       https://npmjs.com/advisories/1561

The core issue appears to be the use of an outdated "google-auth-library": "^5.10.1" dependency, which prevents us from updating the upstream node-forge dependency.

Browsing Carousel?

Hey, what happened to the Browsing Carousel response type?
I can't find any examples.

Could you point me to one?
Thanks

Media status 'finished' bugged on nest mini

Hello,

Media status is bugged on the Nest Mini (1st and 2nd generation) and Google Home (1st generation). It works normally on the Nest Hub.
The 'finished' event is simply not triggering after a media item has ended. The bug started about a week ago.

I tried on new and old projects where it was working before, and it seems to be broken everywhere.
I have also tried different accounts, different networks, and even a clean project (with nothing but a single intent and media status handling to loop some audio clips).

Forced to make intents global when used as custom intents in scenes

I'm currently migrating from Dialogflow to Actions Builder. Things have gone well so far; however, after adding custom intents to my scenes, the test simulator prompts me with the warning "Intent 'intent_name' is used as an action, but not added as a global event.", blocking my ability to test the action until I configure the intent as global.

Since configuring intents as global enables implicit invocation, it seems inappropriate to apply it to all intents, especially those that have no business being accessed implicitly.

Has anyone experienced this warning? Any tips to get past this error without configuring the intent as global?

Cheers

Does Account Linking Work On Google TV / Android TV ?

I build Google Actions apps.
The devices on which Account Linking works successfully are my phone and my Nest Hub.
When I try to use my Actions app on my TVs (Android TV and Google TV), it doesn't seem to work.
My TVs don't even ask me to log in / sign in;
after that, text appears on my TV saying "You cannot access this Application".

So I tried linking my email account on my phone, then went back to try on my TVs; it still doesn't work.

Any suggestions?
Please help, thanks.

User ID

I don't see any ID property on the User class, but is there a way to retrieve this somewhere, or is there an equivalent/proxy for it?
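As far as I know, this library does not expose a user ID. A common workaround (an assumption on my part, not an official feature) is to mint your own identifier and persist it in user storage, which survives across conversations for verified users:

app.handle('<YOUR HANDLER NAME>', (conv) => {
  // Only verified users keep user params between conversations.
  if (conv.user.verificationStatus === 'VERIFIED' && !conv.user.params.appUserId) {
    // appUserId is a made-up key; any self-generated identifier works.
    conv.user.params.appUserId =
      Date.now().toString(36) + Math.random().toString(36).slice(2);
  }
  conv.add('Thanks, I will recognize you next time.');
});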

How to disable seeking on media response?

How can I disable seeking on media responses, so the user cannot seek forwards/backwards in the currently playing track?

This is how I currently do it:

conv.add(new Media({
  mediaObjects: [{
    name: 'Trance Track 1',
    description: 'Media description',
    url: 'https://my-url.de/Trance1.mp3',
    image: {
      large: new Image({
        url: 'https://somewhere/whateverimage.jpg',
        alt: 'my image here'
      })
    }
  }],
  mediaType: MediaType.Audio,
  optionalMediaControls: [OptionalMediaControl.Paused, OptionalMediaControl.Stopped],
  startOffset: '0s'
}));

I didn't find anything in the documentation at https://developers.google.com/assistant/conversational/prompts-media

I also didn't find anything on Stack Overflow, and haven't received an answer to my question there yet:
https://stackoverflow.com/questions/65039702/how-to-disable-manual-seeking-on-google-actions-assistant-conversations-media

So my last option is to ask here.

How to explicitly handle lambda request?

So I know I could do:

module.exports.googleHandler = googleActionApp

But I want to explicitly handle the call myself, so I can do something before the response is sent. Something like this:

module.exports.googleHandler = async function(event: any, context: any, callback: any) {
  if (!googleActionApp) {
      googleActionApp = createApp()
  }
  (googleActionApp as LambdaHandler)(event, context, callback)
}

I can see that my handler starts to get called, but it always exits prematurely. Any suggestions?

Basically, I want to flush logging before exit; if anyone knows how to add a response middleware, that would do the job too.
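One possible approach (a sketch that assumes the exported app behaves as a regular callback-style Node Lambda handler, which is not documented): wrap the call in a Promise so your async handler can await it and flush logs before returning the response.

module.exports.googleHandler = async (event, context) => {
  if (!googleActionApp) {
    googleActionApp = createApp(); // your existing factory
  }
  // Adapt the callback-style handler so the async Lambda waits for it.
  const response = await new Promise((resolve, reject) => {
    googleActionApp(event, context, (err, res) => (err ? reject(err) : resolve(res)));
  });
  await flushLogs(); // hypothetical helper for your logging setup
  return response;
};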

How to get decoded token and user profile after account linking completed?

In this section, the document says

If you use the Actions on Google Fulfillment library for Node.js, it takes care of validating and decoding the token for you, and gives you access to the profile content, as shown in the following code snippets.

I tried to deploy exactly the same code as the sample code snippet in preview mode,
but I could not get any user profile; I got just a string (something like RwczovL2xoMy5nb2...) from conv.headers.authorization.

Of course I tried JSON.stringify(conv), but I could not find the decoded token or the user profile in it.

How can I get the user's email or name from conv in the webhook?

I've almost given up on using the Account Linking feature on Actions on Google...
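If the library does not decode the token for you, one fallback is to verify and decode it yourself with google-auth-library. This is a sketch under two assumptions: that conv.headers.authorization carries a Google ID token, and that CLIENT_ID holds the client ID configured for your Action.

const { OAuth2Client } = require('google-auth-library');
const authClient = new OAuth2Client();

app.handle('linkAccount', async (conv) => {
  const idToken = conv.headers.authorization;
  if (!idToken) {
    conv.add('The account is not linked yet.');
    return;
  }
  // Verifies the signature and expiry, then exposes the decoded profile.
  const ticket = await authClient.verifyIdToken({ idToken, audience: CLIENT_ID });
  const { email, name } = ticket.getPayload();
  conv.add(`Hello ${name}, your email is ${email}.`);
});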

Cannot access slot original value

When a slot is filled, I'd like to use its resolved value under the hood but still refer to it by what the user said.
For example, if a slot called country_slot lets the user choose a country and they say "Venezuela", the slot resolves this to the ISO code VEN, but I still need to tell the user Ok, you've chosen $session.params.country_slot.original. It seems this is not possible: the prompt is completely omitted from the conversation, I assume because the evaluation of this expression fails.

This would be consistent with the way an intent's parameter original and resolved values are addressed, as intent.params['param_name'].original and intent.params['param_name'].resolved.

Looking at the flow of the conversation in the Actions Console, I can see that there's a slotMatch object with this information, but I can't seem to access this either.

{
  "slotMatch": {
    "nluParameters": [
      {
        "key": "country_slot",
        "value": {
          "original": "Spain",
          "resolved": "ESP"
        }
      }
    ]
  },
  "responses": []
}

Feature request: enqueue, track finished event, track started event, playback nearly finished event

Hi,

the playlist feature is a good start.

  1. Additional events
     It would be great to have additional events:
     • track finished event: triggered when a track's playback has finished and the next track starts
     • track started event: triggered when a track's playback has started, or started again after a pause - focused on individual tracks instead of the whole playlist
     • playback nearly finished event: triggered when the last track of the playlist has started
  2. Enqueue feature
     For filling the playlist at "nearly finished", it would be great to have an enqueue feature which adds tracks to the playlist instead of replacing it.

I would need such a feature to implement, for Google Assistant as well, the radio skill we already have for Alexa.

The started/finished/paused/resumed track events are needed for our performance reporting and playback history.

User information via Google Sign-In for multiple languages

Hey there, I'm developing an action that can send an e-mail if the user wishes to.

The problem I'm trying to solve is getting the user's e-mail. The only way I found was to use the Google Sign-In method, but it simply doesn't work with my pt-BR action.

When I transition to the account linking scene nothing happens, but when the language is en-US it works just fine.

So I looked at the account linking policy, and I'm not sure whether a third-party OAuth flow using the Google account as a linking method would be certified, because it says: Don't request any OAuth scope from Google unless the user is signing in to your service using Google Sign-In. Don't encourage users to agree to additional Google OAuth scopes by directing them to a website or Action.

What would be the best approach to get the e-mail from the user's Google account for an action in pt-BR?

index.js

Build Actions for Google Assistant using Actions Builder (Level 1)
5. Implement fulfillment
Currently, your Action's responses are static; when a scene containing a prompt is activated, your Action sends the same prompt each time. In this section, you implement fulfillment that contains the logic to construct a dynamic conversational response.

Key terms:

Fulfillment: The code that contains the logic for your Action. A webhook triggers calls to your fulfillment based on events that occur within your Actions.
Your fulfillment identifies whether the user is a returning user or a new user and modifies the greeting message of the Action for returning users. The greeting message is shortened for returning users and acknowledges the user's return: "A wondrous greeting, adventurer! Welcome back to the mythical land of Gryffinberg!"

For this codelab, use the Cloud Functions editor in the Actions console to edit and deploy your fulfillment code.

Key terms:

Cloud Functions editor: A built-in editor in the Actions console, which you can use to edit and deploy your fulfillment code using Cloud Functions for Firebase.
Your Action can trigger webhooks that notify your fulfillment of an event that occurs during an invocation or specific parts of a scene's execution. When a webhook is triggered, your Action sends a request with a JSON payload to your fulfillment along with the name of the handler to use to process the event. This handler carries out some logic and returns a corresponding JSON response.

Build your fulfillment
You can now modify your fulfillment in the inline editor to generate different prompts for returning users and new users when they invoke your Action.

To add this logic to your fulfillment, follow these steps:

Click Develop in the navigation bar.
Click the Webhook tab in the navigation bar.
Select the Inline Cloud Functions checkbox.
Click Confirm. Boilerplate code is automatically added for the index.js and package.json files.

Replace the contents of index.js with the following code:
index.js

const { conversation } = require('@assistant/conversation');
const functions = require('firebase-functions');

const app = conversation({debug: true});

app.handle('greeting', conv => {
  let message = 'A wondrous greeting, adventurer! Welcome back to the mythical land of Gryffinberg!';
  if (!conv.user.lastSeenTime) {
    message = 'Welcome to the mythical land of Gryffinberg! Based on your clothes, you are not from around these lands. It looks like you\'re on your way to an epic journey.';
  }
  conv.add(message);
});

exports.ActionsOnGoogleFulfillment = functions.https.onRequest(app);

Multi room/multi device - next track stops playback on other devices

Hi,

When playing music on multiple devices, the playback on the other devices stops when the next track starts playing.
How can I prevent this? I would like continued multi-device playback.

My implementation currently is:

I handle the next track when I get the media status "FINISHED"; I add the next track with:

conv.add(new Media({
  mediaObjects: [{
    name: nextTrack.display_title + " - " + nextTrack.display_artist,
    description: nextTrack.display_title,
    url: nextTrack.streamUrl,
    image: {
      large: new Image({
        url: nextTrack.assetUrl,
        alt: nextTrack.display_title
      })
    }
  }],
  mediaType: MediaType.Audio,
  optionalMediaControls: [OptionalMediaControl.Paused, OptionalMediaControl.Stopped],
  startOffset: '0s'
}));

As far as I can see there is no "playback nearly finished" event or "enqueue" command, although something like that could fix this.

How can I solve this problem?

How to close the mic after conv.add()

My project was rejected because the mic was left open after the conversation. How do I close the mic after conv.add()? The conv.close() function is not available.
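The conv.close() call from the older actions-on-google library has no direct counterpart here; as far as I can tell, the equivalent is to transition to the system end-conversation scene from the webhook. A minimal sketch, assuming the handler is attached to a scene:

app.handle('<YOUR HANDLER NAME>', (conv) => {
  conv.add('Goodbye!');
  // Transitioning to the system end scene closes the mic and ends the conversation.
  conv.scene.next = { name: 'actions.scene.END_CONVERSATION' };
});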

Slot filling transforms parameter to lowercase

When using slot filling for a slot whose name contains both lower- and upper-case letters, the slot name in the conv.intent.params object will be entirely lower case when the slot is filled via slot filling (i.e. the user didn't specify the slot in their utterance and Assistant asks them for a value), but it will be as expected when the user specifies the slot value in their utterance.

Example / Steps to reproduce:

  • Create a custom type searchTerm (in my case it's a free form text type)
  • Create an intent SearchIntent with the intent paramater searchTerm of data type searchTerm
  • Add the training phrases search and search for <searchTerm>
  • Create a scene that will call intent SearchIntent when it is matched and then transitions to a slot filling scene, e.g. SearchIntentSlotFilling
  • Customize the prompts (e.g. "What would you like to search for?")
  • Call your webhook when slot filling status is FINAL

Scenario 1:

  • User invokes SearchIntent by saying "Search for cameras"
  • Slot is filled, Assistant calls webhook
  • In webhook, slot is available under conv.intent.params.searchTerm, as expected

Scenario 2:

  • User invokes SearchIntent by saying "Search"
  • Slot is not filled, Assistant asks user "What would you like to search for?"
  • User says "cameras"
  • Slot is now filled, Assistant calls webhook
  • In webhook, slot is available under conv.intent.params.searchterm, which breaks code that expects conv.intent.params.searchTerm

Workarounds:

  • Only use lower cases in slot type names (search_term)
  • Use option "Customize slot value writeback" which seems to work correctly. However, this writes the slot value to a session parameter instead of an intent parameter which might be undesired. It also doesn't offer access to the original and the resolved value.
  • Use conv.scene.slots.searchTerm.value - however, this doesn't offer access to the original and the resolved value
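Another workaround sketch (getIntentParam is a hypothetical helper, not part of the library): look the parameter up case-insensitively so both the matched and the slot-filled spelling resolve to the same value.

// Hypothetical helper: find an intent parameter regardless of casing.
function getIntentParam(conv, name) {
  const params = conv.intent.params || {};
  const key = Object.keys(params).find(
    (k) => k.toLowerCase() === name.toLowerCase()
  );
  return key ? params[key] : undefined;
}

const searchTerm = getIntentParam(conv, 'searchTerm'); // works in both scenarios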

On another note, there is a typo/mistake in the documentation for reading intent parameters, it should be:

conv.intent.params['param_name'].original
conv.intent.params['param_name'].resolved

instead of

conv.intent.params.['param_name'].original
conv.intent.params.['param_name'].resolved

Action Simulator is not working

Request in the Simulator shows the following:
Invocation Error
You cannot use standard Google Assistant features in the Simulator. If you want to try them, use Google Assistant on your phone or other compatible devices.

Is it possible to set endOffset on a media object?

Hi,
Let's say we have a long file to play. I would like to instruct the device to play the media in a specific range, e.g.
from 5s to 55s. I can set startOffset to 5s, but I could not find a way to set an endOffset on the media object.
Is it possible?

conv.add(new Media({
    mediaObjects: [
      {
        name: 'Media name',
        description: 'Media description',
        url: 'https://storage.googleapis.com/automotive-media/Jazz_In_Paris.mp3',
        image: {
          large: JAZZ_IN_PARIS_IMAGE,
        }
      }
    ],
    mediaType: 'AUDIO',
    optionalMediaControls: ['PAUSED', 'STOPPED'],
    startOffset: '5.0000001s',
//  endOffset:  '55.0000001s',
  }));

Bug when merging Simple Responses from multiple webhook requests

When calling the webhook multiple times in one scene and sending simple responses, there is a bug in merging the simple responses.

prompt from the first webhook call

{
    "override": false,
    "firstSimple": {
        "speech": "<speak><audio src=\"https://www.example.com/audio/file1.mp3\"></speak>",
        "text": "Text 1"
    }
}

prompt from the second webhook call

{
    "override": false,
    "firstSimple": {
        "speech": "<speak><audio src=\"https://www.example.com/audio/file2.mp3\"></audio> <audio src=\"https://www.example.com/audio/file3.mp3\"></audio></speak>",
        "text": " Text 2"
    }
}

merged prompt in the response sent to the user

{
    "firstSimple": {
        "speech": "<speak><speak><audio src=\"https://www.example.com/audio/file1.mp3\"></speak> <audio src=\"https://www.example.com/audio/file2.mp3\"/> <audio src=\"https://www.example.com/audio/file3.mp3\"/></speak>",
        "text": "Text 1 Text2"
    }
}

So with the two speak tags the SSML is invalid and is not spoken.
Sometimes the speech object is completely missing.

Button link not showing when returning a card

Hello,

I'm having a hard time trying to return a conversational prompt with a card containing a link. I tried reading the documentation and followed one of your tests, but I can't get it working. This is an excerpt of my code:

conv.add(new Card({
  title: "this is title",
  image: new Image({
    url: OPEN_STREET_MAP_LOGO,
    alt: 'Open Street Map logo',
  }),
  button: new Link({
    name: 'Learn more',
    open: {
      url: 'https://www.google.com/about/',
    },
  }),
}));

This is the JSON returned by my webhook (copied from the simulator):

{
  "responseJson": {
    "session": { ... },
    "prompt": {
      "override": false,
      "content": {
        "card": {
          "title": "this is title",
          "image": {
            "alt": "Google app logo",
            "height": 0,
            "url": "https://wiki.openstreetmap.org/w/images/thumb/7/79/Public-images-osm_logo.svg/240px-Public-images-osm_logo.svg.png",
            "width": 0
          },
          "button": {
            "name": "Learn more",
            "open": {
              "url": "https://www.google.com/about/"
            }
          }
        }
      },
      "firstSimple": {
        "speech": "Here is a map for your project",
        "text": ""
      }
    }
  }
}

The rendered card (see the attached screenshot) shows up without the button. Am I doing something wrong here?


Some phrases of an intent don't work while the audio player is active

I implemented a simple 'next episode' use case in my action with assistant-conversation-nodejs which works like this:

The user starts content which consists of multiple episodes (the audio player starts with the first episode). If the user says something like "next" or "previous", the action navigates through the list of content.

You can see the scene in the first screenshot ("01"); the second screenshot ("02") shows all the phrases of the MORE intent.

As you can see, there are different phrases like "weiter" ("next" in English) and "mehr" ("more" in English). The right side shows that "weiter" matches MORE.

When I test the application (developer console, Google Home device, or Assistant iOS app), I am able to navigate through the content using the phrase "mehr/more", but the phrase "weiter/next" won't work (same issue for previous with the phrase "zurück/back"). In the console, "weiter/next" doesn't show any event or transition; on the device, "weiter" puts the Google Home device into listening mode. "mehr/more" works fine on every test device.

This behaviour only occurs when the audio player is active: when I say something the action doesn't recognize, like "klslkdalasksdakllasdjdlsadj", my action answers "sorry I didn't understand that, can you please repeat?", and if I answer that question with "weiter/next" the MORE intent in my scene finally matches and the next episode starts.

Are there any audio player events which conflict with this use case? Is there perhaps an intended way to implement next and previous that I didn't find in the documentation?

Thank you in advance.

Cheers
Frank

Media Stop / Pause not working

I followed the example for building media responses.
Unfortunately, the media player is not responding to pause or stop.
I am using the provided code sample to handle the media status; after pausing, the system starts playing again.

// Media status
app.handle('media_status', (conv) => {
  const mediaStatus = conv.intent.params.MEDIA_STATUS.resolved;
  switch(mediaStatus) {
    case 'FINISHED':
      conv.add('Media has finished playing.');
      break;
    case 'FAILED':
      conv.add('Media has failed.');
      break;
    case 'PAUSED' || 'STOPPED':
      if (conv.request.context) {
        // Persist the media progress value
        const progress = conv.request.context.media.progress;
      }
      // Acknowledge pause/stop
      conv.add(new Media({
        mediaType: 'MEDIA_STATUS_ACK'
        }));
      break;
    default:
      conv.add('Unknown media status received.');
  }
});

Also, the part that acknowledges pause/stop seems to be incorrect. I am getting a TypeError when providing the mediaType as a string.

Instead I'm using the following:

import { Media } from "@assistant/conversation";
import { MediaType } from "@assistant/conversation/dist/api/schema";

conv.add(new Media({
  mediaType: MediaType.MediaStatusACK
}));

Self Hosted Restify Server

Hi,

I'm trying to use @assistant/conversation with restify,
but I get this error:

(node:17) UnhandledPromiseRejectionWarning: Error: Handler not found for handle name:
at Function.handler (/app/node_modules/@assistant/conversation/dist/conversation/conversation.js:109:23)
at Function.standard [as handler] (/app/node_modules/@assistant/conversation/dist/assistant.js:49:32)
at omni (/app/node_modules/@assistant/conversation/dist/assistant.js:41:20)
at nextTick (/app/node_modules/restify/lib/chain.js:167:13)
at process._tickCallback (internal/process/next_tick.js:61:11)
(node:17) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:17) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Example:

// Import the appropriate service and chosen wrappers
const {
    conversation,
    Image,
    Card
} = require('@assistant/conversation');

// Create an app instance
const app = conversation({ });

app.handle('greeting', conv => {
    conv.add('Hi, how is it going?')
    conv.add(new Image({
      url: 'https://developers.google.com/web/fundamentals/accessibility/semantics-builtin/imgs/160204193356-01-cat-500.jpg',
      alt: 'A cat',
    }));
});

// ... restify code

server.post('/assistant', app);

I use the restify bodyParser

server.use(restify.plugins.bodyParser({ maxBodySize: 1000000 }));

Looks like some JSON parser problem.
The JSON body as parsed by restify:

{"handler":{"name":"greeting"},"intent":{"name":"YES","params":{},"query":"Sim"},"scene":{"name":"Start","slotFillingStatus":"UNSPECIFIED","slots":{},"next":{"name":"actions.scene.END_CONVERSATION"}},"session":{"id":"ABwppHHi7JYqhUAqDyFt63dcPtQ0or_Lx4ILhgWqoscrfrX74j2aECq4yHVPlPbloQ_uidDNS4swocEGjsletXOAt5E_RA7Z","params":{},"typeOverrides":[],"languageCode":""},"user":{"locale":"pt-BR","params":{},"accountLinkingStatus":"ACCOUNT_LINKING_STATUS_UNSPECIFIED","verificationStatus":"VERIFIED","packageEntitlements":[],"lastSeenTime":"2020-10-08T09:32:08Z"},"home":{"params":{}},"device":{"capabilities":["SPEECH","RICH_RESPONSE","LONG_FORM_AUDIO"]}}

console.log(`HANDLER NAME: ${req.body.handler.name}`); HANDLER NAME: greeting

Without '@assistant/conversation' it works fine.
Example:

server.post('/assistant', (req, res, next) => {
        res.json({
            "session": {
              "id": "example_session_id",
              "params": {}
            },
            "prompt": {
              "override": false,
              "content": {
                "card": {
                  "title": "Card Title",
                  "subtitle": "Card Subtitle",
                  "text": "Card Content",
                  "image": {
                    "alt": "Google Assistant logo",
                    "height": 0,
                    "url": "https://developers.google.com/assistant/assistant_96.png",
                    "width": 0
                  }
                }
              },
              "firstSimple": {
                "speech": "This is a card rich response.",
                "text": ""
              }
            }
          }
        );
        return next();
    });

Account Linking With Google Sign-In

I have a problem with Sign In With Google Account:
the conversation is not asking the user to log in / sign in.

I'm following this instruction.
I have a question: what can I do with the "Client ID" that is issued by Google to my Actions?
There is no instruction about that Client ID.

This is what I have done:
app.js

const {
  conversation,
} = require('@assistant/conversation');
const app = conversation({
  debug: true,
  clientId: 'xxxxx-25b0akb0ienv9tnb0oa570l2po16l1ca.apps.googleusercontent.com'
})

// Register handlers for Actions SDK intents

app.handle('linkAccount', async conv => {
  let payload = conv.headers.authorization;
  if (payload) {
    // Get UID for Firebase auth user using the email of the user
      const email = payload.email;
      if (!conv.user.params.uid && email) {
        try {
          conv.user.params.uid = (await auth.getUserByEmail(email)).uid;
        } catch (e) {
          if (e.code !== 'auth/user-not-found') {
            throw e;
          }
          // If the user is not found, create a new Firebase auth user
          // using the email obtained from Google Assistant
          conv.user.params.uid = (await auth.createUser({email})).uid;
        }
      }
    }
})

Here's my Actions Builder setup (screenshot attached).

On the Simulator page, I go to Settings and turn on "Simulate unverified users",
and the simulator says: "Since your voice wasn't recognized, I can't do that right now. Check the Voice Match settings in the Google Home app." (followed by "cancel account linking")

When I check the response, there is nothing mentioning my credentials.
Maybe there is a step that I missed?

startOffset - simulator issue

Following the example here, the following code always plays the media file from the beginning.

conv.add(new Media({
    mediaObjects: [
      {
        name: 'Media name',
        description: 'Media description',
        url: 'https://storage.googleapis.com/automotive-media/Jazz_In_Paris.mp3',
        image: {
          large: JAZZ_IN_PARIS_IMAGE,
        }
      }
    ],
    mediaType: 'AUDIO',
    optionalMediaControls: ['PAUSED', 'STOPPED'],
    startOffset: '5.0000001s'
  }));

It looks like startOffset is always ignored in the simulator.

Media progress set to 0 when playing media failed.

Hi,
After the media object is sent to the Nest device, the mp3 file starts playing.
After 20 seconds, the mp3 URL which was sent to the device is no longer available (expired or deleted).
The service is notified a little later, and the media_status handler receives information about the FAILED status.
The conversation input object (conv.request.context.media.progress) always has progress set to 0 in case of an error, even though the user was able to listen for at least 20 seconds.

Is there any other way to find out the playback progress when the media failed?

Exit/Close conversation from Webhook

Hi,

I am building an action and using a webhook. There is a use case in an intent handler where, after giving a prompt to the user, I would like to close the conversation from the webhook itself. I went through the @assistant/conversation API documentation but couldn't find any way to do that.

Any help?

Thanks in advance

Feature request: Sleep timer

When i say "Sleep timer" to google assistant it lets me know about sleep timer feature that is supposed to work with media. ("set sleep timer for x minutes"). However this seems to not work with conversational actions media prompt.

AWS Lambda API Gateway HTTP proxy integration doesn't enter app.handle

I am trying to use a Lambda function as a webhook, and my handler is not being called.
The same code works normally in inline Cloud Functions in the Actions console.

Here is my code in Lambda:

exports.handler = async (event) => {

const { conversation, Image, Card, Simple, Suggestion, List  } = require('@assistant/conversation');

const app = conversation({debug: true});

app.handle('start_conversation', conv => {
  let message = 'A wondrous greeting, adventurer! Welcome to the mythical land of  Gryffinberg! Based on your clothes, you are not from around these lands. It looks like you\'re on your way to an epic journey.';
  if (conv.user.lastSeenTime) {
    message = 'A wondrous greeting, adventurer! Welcome back to the mythical land of Gryffinberg!';
  }
  conv.add(message);
});

exports.fulfillment = app

};


Here is my request JSON:

{
  "requestJson": {
    "handler": {
      "name": "start_conversation"
    },
    "intent": {
      "name": "actions.intent.MAIN",
      "params": {},
      "query": "Falar com o app teste-man"
    },
    "scene": {
      "name": "actions.scene.START_CONVERSATION",
      "slotFillingStatus": "UNSPECIFIED",
      "slots": {},
      "next": {
        "name": "start"
      }
    },
    "session": {
      "id": "ABwppHGM0ZY2HN8IoKGqbffg3-EEHCMwFvLj0qPrSnJhk6LC68HXLEcm-Hg2iOxda6312YvzaEsI32dB9y0",
      "params": {},
      "typeOverrides": [],
      "languageCode": ""
    },
    "user": {
      "locale": "pt-BR",
      "params": {},
      "accountLinkingStatus": "ACCOUNT_LINKING_STATUS_UNSPECIFIED",
      "verificationStatus": "VERIFIED",
      "packageEntitlements": [],
      "lastSeenTime": "2020-07-31T14:27:51Z"
    },
    "home": {
      "params": {}
    },
    "device": {
      "capabilities": [
        "SPEECH",
        "RICH_RESPONSE",
        "LONG_FORM_AUDIO"
      ]
    }
  }
}
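For comparison, a sketch of the structure the library seems to expect (mirroring the README's Lambda example, with an illustrative greeting): build the app and register handlers at module scope, then export the app itself as the Lambda handler, instead of registering handlers inside your own async function.

const { conversation } = require('@assistant/conversation');

const app = conversation({ debug: true });

app.handle('start_conversation', (conv) => {
  conv.add('A wondrous greeting, adventurer!');
});

// API Gateway HTTP proxy integration invokes this export directly.
exports.handler = app;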

Verifying requests

In previous versions of a library that supports Actions on Google, there was a way to verify that the request actually came from Google. There does not seem to be a way to do so using this library.

Although a JWT is provided in the google-assistant-signature header, there doesn't seem to be any reference to this header in the code. There also doesn't seem to be any documentation about how to verify the request, either with or without the library.
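Lacking library support, here is a manual sketch with google-auth-library; it assumes the header carries a Google-signed ID token and that the expected audience is your Actions project ID, neither of which is documented behaviour of this library.

const { OAuth2Client } = require('google-auth-library');
const authClient = new OAuth2Client();

async function verifyAssistantRequest(req, projectId) {
  const token = req.headers['google-assistant-signature'];
  if (!token) throw new Error('Missing google-assistant-signature header');
  // Throws if the signature, expiry, or audience do not check out.
  const ticket = await authClient.verifyIdToken({ idToken: token, audience: projectId });
  return ticket.getPayload();
}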

media prompt type issue

Following the example here, this:

conv.add(new Media({
    mediaObjects: [
      {
        name: 'Media name',
        description: 'Media description',
        url: 'https://storage.googleapis.com/automotive-media/Jazz_In_Paris.mp3',
        image: {
          large: JAZZ_IN_PARIS_IMAGE,
        }
      }
    ],
    mediaType: 'AUDIO',
    optionalMediaControls: ['PAUSED', 'STOPPED'],
    startOffset: '2.12345s'
  }));

will throw a type error of: Argument of type 'Media' is not assignable to parameter of type 'PromptItem'.

Possible fix:

export type PromptItem =
  string |
  Simple |
  Content |
  Link |
  Suggestion |
  Canvas |
  OrderUpdate |
  Media

@taycaldwell

Only one Suggestion can be added (unless debugging is enabled?!)

The following code only works when the app is started in debug mode:

const {conversation, Card, Image, Suggestion} = require('@assistant/conversation');

const app = conversation({
  debug: true,
});

app.handle('get_meta_data', async (conv) => {

  const currentMeta = await getMeta(); // false in case of an error
  
  conv.session.params.currentMeta = currentMeta;
  
 if (!currentMeta) {
    conv.add("Couldn't fetch the meta data at the moment");
 } else {
    conv.add(`Currently playing: ${currentMeta.artist} - ${currentMeta.song}`);
    conv.add(new Card({
      title: "Currently playing:",
      subtitle: `${currentMeta.artist} - ${currentMeta.song}`,
      image: new Image({
        url: currentMeta.album,
        alt: `Album cover for ${currentMeta.artist} - ${currentMeta.song}`,
      }),
      button: {
        name: "Know more about this music",
        open: {
          url: currentMeta.musicUrl,
        },
      },
    }));
  }
  
  conv.add(new Suggestion({ title: 'Continue radio' }));
  conv.add(new Suggestion({ title: 'Quit' }));
});

If debug is set to false, the JSON response is malformed and the action returns an error to the user.
If I remove one of the Suggestions, the code works also when debug=false.

Collection & List not working

I used the example code for visual selection and tried to implement a collection.

Once I use the collection, my webhook responds with Unexpected internal error id=xxxx

const ASSISTANT_LOGO_IMAGE = new Image({
  url: 'https://developers.google.com/assistant/assistant_96.png',
  alt: 'Google Assistant logo'
});

app.handle('Collection', conv => {
  conv.add("This is a collection.");
  
  // Override type based on slot 'prompt_option'
  conv.session.typeOverrides = [{
    name: 'prompt_option',
    mode: 'TYPE_REPLACE',
    synonym: {
      entries: [
        {
          name: 'ITEM_1',
          synonyms: ['Item 1', 'First item'],
          display: {
            title: 'Item #1',
            description: 'Description of Item #1',
            image: ASSISTANT_LOGO_IMAGE,
          }
        },
        {
          name: 'ITEM_2',
          synonyms: ['Item 2', 'Second item'],
          display: {
            title: 'Item #2',
            description: 'Description of Item #2',
            image: ASSISTANT_LOGO_IMAGE,
          }
        },
        {
          name: 'ITEM_3',
          synonyms: ['Item 3', 'Third item'],
          display: {
            title: 'Item #3',
            description: 'Description of Item #3',
            image: ASSISTANT_LOGO_IMAGE,
          }
        },
        {
          name: 'ITEM_4',
          synonyms: ['Item 4', 'Fourth item'],
          display: {
            title: 'Item #4',
            description: 'Description of Item #4',
            image: ASSISTANT_LOGO_IMAGE,
          }
        }
      ]
    }
  }];
  
  // Define prompt content using keys
  conv.add(new Collection({
    title: 'Collection Title',
    subtitle: 'Collection subtitle',
    items: [
      {
        key: 'ITEM_1'
      },
      {
        key: 'ITEM_2'
      },
      {
        key: 'ITEM_3'
      },
      {
        key: 'ITEM_4'
      }
    ],
  }));
});

Enforce typed input

I am developing a Google Assistant action using the Actions SDK, and I want to force the user to type their input, rather than speak it. Is there a way to do this? Or at the very least, am I able to check whether they spoke or typed the input?
