
Serverless Offline

⚠️ We are looking for maintainers! This package is entirely community-driven. Please send an email to dherault to start helping now. ⚠️

This Serverless plugin emulates AWS λ and API Gateway on your local machine to speed up your development cycles. To do so, it starts an HTTP server that handles the request's lifecycle like APIG does and invokes your handlers.

Features

  • Node.js, Python, Ruby, Go, Java (incl. Kotlin, Groovy, Scala) λ runtimes.
  • Velocity templates support.
  • Lazy loading of your handler files.
  • And more: integrations, authorizers, proxies, timeouts, responseParameters, HTTPS, CORS, etc...

This plugin is updated by its users; I just do maintenance and ensure that PRs are relevant to the community. In other words, if you find a bug or want a new feature, please help us by becoming one of the contributors ✌️ ! See the contributing section.

Documentation

Installation

First, add Serverless Offline to your project:

npm install serverless-offline --save-dev

Then, inside your project's serverless.yml file, add the following entry to the plugins section: serverless-offline. If there is no plugins section, you will need to add it to the file.

Note that the plugins section must be at the root level of serverless.yml.

It should look something like this:

plugins:
  - serverless-offline

You can check whether you have successfully installed the plugin by running the serverless command line:

serverless --verbose

The console should display Offline as one of the plugins now available in your Serverless project.

Usage and command line options

In your project root run:

serverless offline or sls offline

To list all of the plugin's options, run:

sls offline --help

All CLI options are optional:

corsAllowHeaders

Used as default Access-Control-Allow-Headers header value for responses. Delimit multiple values with commas.
Default: 'accept,content-type,x-api-key'

corsAllowOrigin

Used as default Access-Control-Allow-Origin header value for responses. Delimit multiple values with commas.
Default: '*'

corsDisallowCredentials

When provided, the default Access-Control-Allow-Credentials header value will be set to 'false'.
Default: true

corsExposedHeaders

Used as additional Access-Control-Exposed-Headers header value for responses. Delimit multiple values with commas.
Default: 'WWW-Authenticate,Server-Authorization'

disableCookieValidation

Used to disable cookie-validation on hapi.js-server.

dockerHost

The host name of Docker.
Default: localhost

dockerHostServicePath

Defines service path which is used by SLS running inside Docker container.

dockerNetwork

The network that the Docker container will connect to.

dockerReadOnly

Marks whether the Docker code layer should be read-only.
Default: true

enforceSecureCookies

Enforce secure cookies

host

-o Host name to listen on.
Default: localhost

httpPort

HTTP port to listen on.
Default: 3000

httpsProtocol

-H To enable HTTPS, specify directory (relative to your cwd, typically your project dir) for both cert.pem and key.pem files.

ignoreJWTSignature

When using HttpApi with a JWT authorizer, don't check the signature of the JWT token.

lambdaPort

Lambda HTTP port to listen on.
Default: 3002

layersDir

The directory layers should be stored in.
Default: '${codeDir}/.serverless-offline/layers'

localEnvironment

Copy local environment variables.
Default: false

noAuth

Turns off all authorizers.

noPrependStageInUrl

Don't prepend http routes with the stage.

noTimeout

-t Disables the timeout feature.

prefix

-p Adds a prefix to every path, to send your requests to http://localhost:3000/[prefix]/[your_path] instead.
Default: ''

reloadHandler

Reloads handler with each request.

resourceRoutes

Turns on loading of your HTTP proxy settings from serverless.yml.

terminateIdleLambdaTime

Number of seconds until an idle function is eligible for termination.

useDocker

Run handlers in a docker container.

useInProcess

Run handlers in the same process as 'serverless-offline'.

webSocketHardTimeout

Set WebSocket hard timeout in seconds to reproduce AWS limits (https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#apigateway-execution-service-websocket-limits-table).
Default: 7200 (2 hours)

webSocketIdleTimeout

Set WebSocket idle timeout in seconds to reproduce AWS limits (https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#apigateway-execution-service-websocket-limits-table).
Default: 600 (10 minutes)

websocketPort

WebSocket port to listen on.
Default: 3001

Any of the CLI options can be added to your serverless.yml. For example:

custom:
  serverless-offline:
    httpsProtocol: 'dev-certs'
    httpPort: 4000
    foo: 'bar'

Options passed on the command line override YAML options.

By default you can send your requests to http://localhost:3000/. Please note that:

  • You'll need to restart the plugin if you modify your serverless.yml or any of the default velocity template files.
  • When no Content-Type header is set on a request, API Gateway defaults to application/json, and so does the plugin. However, if you send an application/x-www-form-urlencoded or multipart/form-data body with an application/json (or no) Content-Type, API Gateway won't parse your data (you'll get the raw body as input), whereas the plugin will answer 400 (malformed JSON). Please consider explicitly setting your requests' Content-Type and using separate templates.
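The Content-Type fallback described above can be sketched as a tiny helper. This is illustrative only (effectiveContentType is a hypothetical name, not plugin code): a missing Content-Type header is treated as application/json.

```javascript
// Illustrative fallback: no Content-Type header means application/json,
// mirroring the behaviour of both API Gateway and the plugin.
function effectiveContentType(headers) {
  return headers["content-type"] ?? "application/json"
}

console.log(effectiveContentType({})) // → application/json
console.log(effectiveContentType({ "content-type": "multipart/form-data" })) // → multipart/form-data
```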

Run modes

node.js

Lambda handlers run with serverless-offline for the node.js runtime can use different execution modes, each with its own pros and cons. The modes are currently mutually exclusive: it is not possible to use a combination, e.g. in-process for one Lambda and worker-threads for another. Combining the flags into one single flag, with support for mixing run modes, is planned for the future.

worker-threads (default)

  • handlers run in their own context
  • memory is not being shared between handlers, memory consumption is therefore higher
  • memory is being released when handlers reload or after usage
  • environment (process.env) is not being shared across handlers
  • global state is not being shared across handlers
  • easy debugging


in-process

  • handlers run in the same context (instance) as serverless and serverless-offline
  • memory is being shared across lambda handlers as well as with serverless and serverless-offline
  • no reloading capabilities as it is [currently] not possible to implement for commonjs handlers (without memory leaks) and for esm handlers
  • environment (process.env) is being shared across handlers as well as with serverless and serverless-offline
  • global state is being shared across lambda handlers as well as with serverless and serverless-offline
  • easy debugging

docker

  • handlers run in a docker container
  • memory is not being shared between handlers, memory consumption is therefore higher
  • memory is being released when handlers reload or after usage
  • environment (process.env) is not being shared across handlers
  • global state is not being shared across handlers
  • debugging more complicated

Python, Ruby, Go, Java (incl. Kotlin, Groovy, Scala)

The Lambda handler runs in a child process.

Invoke Lambda

To use Lambda.invoke you need to set the lambda endpoint to the serverless-offline endpoint:

import { env } from "node:process"
import aws from "aws-sdk"

const lambda = new aws.Lambda({
  apiVersion: "2015-03-31",
  // endpoint needs to be set only if it deviates from the default
  endpoint: env.IS_OFFLINE
    ? "http://localhost:3002"
    : "https://lambda.us-east-1.amazonaws.com",
})

All your lambdas can then be invoked in a handler using

import { Buffer } from "node:buffer"
import aws from "aws-sdk"

const { stringify } = JSON

const lambda = new aws.Lambda({
  apiVersion: "2015-03-31",
  endpoint: "http://localhost:3002",
})

export async function handler() {
  const clientContextData = stringify({
    foo: "foo",
  })

  const payload = stringify({
    data: "foo",
  })

  const params = {
    ClientContext: Buffer.from(clientContextData).toString("base64"),
    // FunctionName is composed of: service name - stage - function name, e.g.
    FunctionName: "myServiceName-dev-invokedHandler",
    InvocationType: "RequestResponse",
    Payload: payload,
  }

  const response = await lambda.invoke(params).promise()

  return {
    body: stringify(response),
    statusCode: 200,
  }
}

You can also invoke using the AWS CLI by specifying --endpoint-url:

aws lambda invoke /dev/null \
  --endpoint-url http://localhost:3002 \
  --function-name myServiceName-dev-invokedHandler

The available function names and their corresponding serverless.yml function keys are listed after the server starts. This is important if you use a custom naming scheme for your functions, as serverless-offline will use your custom name. The left side is the function's key in your serverless.yml (invokedHandler in the example below) and the right side is the function name used to call the function externally, e.g. via aws-sdk (myServiceName-dev-invokedHandler in the example below):

serverless offline
...
offline: Starting Offline: local/us-east-1.
offline: Offline [http for lambda] listening on http://localhost:3002
offline: Function names exposed for local invocation by aws-sdk:
           * invokedHandler: myServiceName-dev-invokedHandler
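The naming scheme above can be sketched as a small helper. This is a hypothetical illustration of the documented composition (service name, stage, function key joined by dashes), not plugin code:

```javascript
// Hypothetical helper showing how the exposed function name is composed:
// <service name>-<stage>-<function key>, per the naming scheme above.
function deployedFunctionName(service, stage, functionKey) {
  return `${service}-${stage}-${functionKey}`
}

console.log(deployedFunctionName("myServiceName", "dev", "invokedHandler"))
// → myServiceName-dev-invokedHandler
```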

To list the available manual invocation paths exposed for targeting by aws-sdk and aws-cli, use SLS_DEBUG=* with serverless offline. After the invoke server starts up, the full list of endpoints will be displayed:

SLS_DEBUG=* serverless offline
...
offline: Starting Offline: local/us-east-1.
...
offline: Offline [http for lambda] listening on http://localhost:3002
offline: Function names exposed for local invocation by aws-sdk:
           * invokedHandler: myServiceName-dev-invokedHandler
[offline] Lambda Invocation Routes (for AWS SDK or AWS CLI):
           * POST http://localhost:3002/2015-03-31/functions/myServiceName-dev-invokedHandler/invocations
[offline] Lambda Async Invocation Routes (for AWS SDK or AWS CLI):
           * POST http://localhost:3002/2014-11-13/functions/myServiceName-dev-invokedHandler/invoke-async/

You can manually target these endpoints with a REST client to debug your lambda function if you want to. Your POST JSON body will be the Payload passed to your function, just as if you were calling it via aws-sdk.
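As a sketch, the synchronous invocation route shown above follows the Lambda API path format; a hypothetical helper (invocationUrl is an illustrative name) to build it:

```javascript
// Illustrative: builds the local synchronous invocation URL shown in the
// debug output above (2015-03-31 is the Lambda API version in the path).
function invocationUrl(functionName, port = 3002) {
  return `http://localhost:${port}/2015-03-31/functions/${functionName}/invocations`
}

console.log(invocationUrl("myServiceName-dev-invokedHandler"))
// → http://localhost:3002/2015-03-31/functions/myServiceName-dev-invokedHandler/invocations
```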

The process.env.IS_OFFLINE variable

Will be "true" in your handlers when using serverless-offline.
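A minimal sketch of guarding offline-only behaviour in a handler. The environment variable is set inline here only so the snippet is self-contained; serverless-offline sets it for you at runtime:

```javascript
// Sketch: branch on the IS_OFFLINE flag inside a handler.
process.env.IS_OFFLINE = "true" // simulated here; set by serverless-offline at runtime

const isOffline = process.env.IS_OFFLINE === "true"
console.log(isOffline) // → true
```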

Docker and Layers

To use layers with serverless-offline, you need to have the useDocker option set to true. This can be done either by using the --useDocker flag, or in your serverless.yml like this:

custom:
  serverless-offline:
    useDocker: true

This will allow the docker container to look up any information about layers, download and use them. For this to work, you must be using:

  • AWS as a provider; it won't work with other provider types.
  • Layers that are compatible with your runtime.
  • ARNs for layers. Local layers aren't supported yet.
  • A local AWS account set-up that can query and download layers.

If you're using least-privilege principals for your AWS roles, this policy should get you by:

{
  "Statement": [
    {
      "Action": "lambda:GetLayerVersion",
      "Effect": "Allow",
      "Resource": "arn:aws:lambda:*:*:layer:*:*"
    }
  ],
  "Version": "2012-10-17"
}

Once you run a function that boots up the Docker container, it'll look through the layers for that function, download them in order to your layers folder, and save a hash of your layers so they can be reused in the future. You'll only need to re-download your layers if they change. If you want your layers to be re-downloaded, simply remove your layers folder.

You should then be able to invoke functions as normal, and they're executed against the layers in your docker container.

Additional Options

There are five additional options available for Docker and layer usage.

  • dockerHost
  • dockerHostServicePath
  • dockerNetwork
  • dockerReadOnly
  • layersDir

dockerHost

When running Docker Lambda inside another Docker container, you may need to configure the host name for the host machine to resolve networking issues between Docker Lambda and the host. Typically in such cases you would set this to host.docker.internal.

dockerHostServicePath

When running Docker Lambda inside another Docker container, you may need to override the code path that gets mounted to the Docker Lambda container relative to the host machine. Typically in such cases you would set this to ${PWD}.

dockerNetwork

When running Docker Lambda inside another Docker container, you may need to override network that Docker Lambda connects to in order to communicate with other containers.

dockerReadOnly

For certain programming languages and frameworks, it's desirable to be able to write to the filesystem for things like testing with local SQLite databases, or other testing-only modifications. For this, you can set dockerReadOnly: false, and this will allow local filesystem modifications. This does not strictly mimic AWS Lambda, as Lambda has a Read-Only filesystem, so this should be used as a last resort.

layersDir

By default layers are downloaded on a per-project basis, however, if you want to share them across projects, you can download them to a common place. For example, layersDir: /tmp/layers would allow them to be shared across projects. Make sure when using this setting that the directory you are writing layers to can be shared by docker.

Authorizers

Token authorizers

As defined in the Serverless Documentation you can use API Keys as a simple authentication method.

Serverless-offline will emulate the behaviour of API Gateway and create a random token that's printed on the screen. With this token you can access your private methods by adding x-api-key: <generatedToken> to your request headers. All API keys will share the same token.

Custom authorizers

Only custom authorizers are supported. Custom authorizers are executed before a Lambda function is executed and return an Error or a Policy document.

The Custom authorizer is passed an event object as below:

{
  "authorizationToken": "<Incoming bearer token>",
  "methodArn": "arn:aws:execute-api:<Region id>:<Account id>:<API id>/<Stage>/<Method>/<Resource path>",
  "type": "TOKEN"
}

The methodArn does not include the Account id or API id.
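A minimal sketch of a TOKEN authorizer that consumes the event shape above. The token value, principal id, and error handling are illustrative assumptions, not plugin code; a real authorizer would verify the token properly:

```javascript
// Sketch: a TOKEN authorizer receiving the event shown above. It either
// throws (rejecting the request) or returns an IAM policy document.
function authorize(event) {
  if (event.authorizationToken !== "Bearer allow-me") {
    throw new Error("Unauthorized") // illustrative rejection path
  }
  return {
    principalId: "user-123", // illustrative principal
    policyDocument: {
      Version: "2012-10-17",
      Statement: [
        {
          Action: "execute-api:Invoke",
          Effect: "Allow",
          Resource: event.methodArn,
        },
      ],
    },
  }
}

console.log(
  authorize({
    authorizationToken: "Bearer allow-me",
    methodArn: "arn:aws:execute-api:local:account/dev/GET/hello",
  }).policyDocument.Statement[0].Effect,
) // → Allow
```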

The plugin only supports retrieving Tokens from headers. You can configure the header as below:

"authorizer": {
  "authorizerResultTtlInSeconds": "0",
  "identitySource": "method.request.header.Authorization", // or method.request.header.SomeOtherHeader
  "type": "TOKEN"
}

Remote authorizers

You can mock the response from remote authorizers by setting the environment variable AUTHORIZER before running sls offline start.

Example:

Unix: export AUTHORIZER='{"principalId": "123"}'

Windows: SET AUTHORIZER='{"principalId": "123"}'
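Since the mocked response is plain JSON in the AUTHORIZER environment variable, parsing it yields the principal. The variable is set inline below only to keep the sketch self-contained (normally you export it in the shell as shown above):

```javascript
// Sketch: the mocked authorizer response is JSON in the AUTHORIZER
// environment variable; parsing it recovers the principal.
process.env.AUTHORIZER = '{"principalId": "123"}' // normally exported in the shell

const mockedAuthorizer = JSON.parse(process.env.AUTHORIZER)
console.log(mockedAuthorizer.principalId) // → 123
```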

JWT authorizers

For HTTP APIs, JWT authorizers defined in the serverless.yml can be used to validate the token and scopes in the token. However at this time, the signature of the JWT is not validated with the defined issuer. Since this is a security risk, this feature is only enabled with the --ignoreJWTSignature flag. Make sure to only set this flag for local development work.

Serverless plugin authorizers

If your authentication needs are custom and not satisfied by the existing capabilities of the Serverless Offline project, you can inject your own authentication strategy. To inject a custom strategy for Lambda invocation, define a custom variable under offline called customAuthenticationProvider in the serverless.yml file. The value of that variable is passed to require(), and the resolved module is expected to export a function with the following signature:

offline:
  customAuthenticationProvider: ./path/to/custom-authentication-provider

// ./path/to/custom-authentication-provider.js

module.exports = function (endpoint, functionKey, method, path) {
  return {
    getAuthenticateFunction() {
      return {
        async authenticate(request, h) {
          // your implementation
        },
      }
    },

    name: "your strategy name",
    scheme: "your scheme name",
  }
}

A working example of injecting a custom authorization provider can be found in the project's integration tests under the folder custom-authentication.

Custom headers

You are able to use some custom headers in your request to gain more control over the requestContext object.

Header                           Event key
cognito-identity-id              event.requestContext.identity.cognitoIdentityId
cognito-authentication-provider  event.requestContext.identity.cognitoAuthenticationProvider
sls-offline-authorizer-override  event.requestContext.authorizer

Example value for sls-offline-authorizer-override:
{ "iam": {"cognitoUser": { "amr": ["unauthenticated"], "identityId": "abc123" }}}

By doing this you are now able to change those values using a custom header. This can help you with easier authentication or retrieving the userId from a cognitoAuthenticationProvider value.
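A sketch of a handler reading one of the injected identity values from the request context (the event object below is a hand-built stand-in for what the plugin passes; the handler itself is illustrative):

```javascript
// Sketch: a handler reading the identity value that the custom header
// cognito-identity-id injects into event.requestContext.identity.
async function handler(event) {
  const { cognitoIdentityId } = event.requestContext.identity
  return {
    body: JSON.stringify({ cognitoIdentityId }),
    statusCode: 200,
  }
}

// Simulated event, as if the request carried "cognito-identity-id: abc123".
handler({
  requestContext: { identity: { cognitoIdentityId: "abc123" } },
}).then((response) => console.log(response.statusCode)) // → 200
```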

Environment variables

You are able to use environment variables to customize identity params in event context.

Environment variable                 Event key
SLS_COGNITO_IDENTITY_POOL_ID         event.requestContext.identity.cognitoIdentityPoolId
SLS_ACCOUNT_ID                       event.requestContext.identity.accountId
SLS_COGNITO_IDENTITY_ID              event.requestContext.identity.cognitoIdentityId
SLS_CALLER                           event.requestContext.identity.caller
SLS_API_KEY                          event.requestContext.identity.apiKey
SLS_API_KEY_ID                       event.requestContext.identity.apiKeyId
SLS_COGNITO_AUTHENTICATION_TYPE      event.requestContext.identity.cognitoAuthenticationType
SLS_COGNITO_AUTHENTICATION_PROVIDER  event.requestContext.identity.cognitoAuthenticationProvider

You can use serverless-dotenv-plugin to load environment variables from your .env file.

AWS API Gateway Features

Velocity Templates

Serverless doc ~ AWS doc

You can supply response and request templates for each function. This is optional. To do so, place function-specific template files in the same directory as your function file, using the .req.vm extension for request templates and .res.vm for response templates. For example, if your function is in helloworld.js, your request template should be in helloworld.req.vm and your response template in helloworld.res.vm.

Velocity nuances

Consider this requestTemplate for a POST endpoint:

"application/json": {
  "payload": "$input.json('$')",
  "id_json": "$input.json('$.id')",
  "id_path": "$input.path('$').id"
}

Now let's make a request with this body: { "id": 1 }

AWS parses the event as such:

{
  "payload": {
    "id": 1
  },
  "id_json": 1,
  "id_path": "1" // Notice the string
}

Whereas Offline parses:

{
  "payload": {
    "id": 1
  },
  "id_json": 1,
  "id_path": 1 // Notice the number
}

Accessing an attribute after using $input.path will return a string on AWS (e.g. "1" or "true") but not with Offline (1 or true). You may find other differences.
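One defensive option for this difference is to coerce such values to strings in the handler, so behaviour matches on AWS and offline. The helper below (normalizeIdPath) is an illustrative sketch against the id_path field from the example above:

```javascript
// Defensive sketch: coerce values read via $input.path to strings so the
// handler behaves identically on AWS (strings) and offline (numbers).
function normalizeIdPath(event) {
  return String(event.id_path)
}

console.log(normalizeIdPath({ id_path: 1 })) // prints "1" (a string, as on AWS)
```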

CORS

Serverless doc

For HTTP APIs, the CORS configuration will work out of the box. Any CLI arguments passed in will be ignored.

For REST APIs, if the endpoint config has CORS set to true, the plugin will use the CLI CORS options for the associated route. Otherwise, no CORS headers will be added.

Catch-all Path Variables

AWS doc

Set greedy paths like /store/{proxy+} that will intercept requests made to /store/list-products, /store/add-product, etc...
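The greedy-matching idea can be sketched as follows. This is not the plugin's router, just an illustration of how a {proxy+} route captures everything after the fixed prefix (matchGreedy is a hypothetical name):

```javascript
// Sketch of greedy path matching: a {proxy+} route captures everything
// after the fixed prefix into the proxy path variable.
function matchGreedy(route, path) {
  const prefix = route.replace(/\{proxy\+\}$/, "")
  if (!path.startsWith(prefix)) {
    return null
  }
  return { proxy: path.slice(prefix.length) }
}

console.log(matchGreedy("/store/{proxy+}", "/store/list-products"))
// → { proxy: 'list-products' }
```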

ANY method

AWS doc

Works out of the box.

Lambda and Lambda Proxy Integrations

Serverless doc ~ AWS doc

Works out of the box. See examples in the manual_test directory.

HTTP Proxy

Serverless doc ~ AWS doc - AWS::ApiGateway::Method ~ AWS doc - AWS::ApiGateway::Resource

Example of enabling proxy:

custom:
  serverless-offline:
    resourceRoutes: true

or

    YourCloudFormationMethodId:
      Properties:
        ......
        Integration:
          Type: HTTP_PROXY
          Uri: 'https://s3-${self:custom.region}.amazonaws.com/${self:custom.yourBucketName}/{proxy}'
          ......
      Type: AWS::ApiGateway::Method

You can also override the proxy URI per method:

custom:
  serverless-offline:
    resourceRoutes:
      YourCloudFormationMethodId:
        Uri: "http://localhost:3001/assets/{proxy}"

Response parameters

AWS doc

You can set your response's headers using ResponseParameters.

May not work properly. Please PR. (Difficulty: hard?)

Example response velocity template:

"responseParameters": {
  "method.response.header.X-Powered-By": "Serverless", // a string
  "method.response.header.Warning": "integration.response.body", // the whole response
  "method.response.header.Location": "integration.response.body.some.key" // a pseudo JSON-path
},

WebSocket

To send messages back to clients:

POST http://localhost:3001/@connections/{connectionId}

Or,

import aws from 'aws-sdk'

const apiGatewayManagementApi = new aws.ApiGatewayManagementApi({
  apiVersion: '2018-11-29',
  endpoint: 'http://localhost:3001',
});

apiGatewayManagementApi.postToConnection({
  ConnectionId: ...,
  Data: ...,
});

where ConnectionId is taken from the event received by the Lambda handler.

There's support for websocketsApiRouteSelectionExpression in its basic form: $request.body.x.y.z, where the default value is $request.body.action.
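The route selection above can be sketched as follows. This is an illustration of how such an expression picks the route key from an incoming message body, not the plugin's actual implementation:

```javascript
// Sketch: resolve a route selection expression such as
// "$request.body.action" against an incoming WebSocket message body.
function routeKey(body, expression = "$request.body.action") {
  const keys = expression.replace("$request.body.", "").split(".")
  return keys.reduce(
    (value, key) => (value == null ? value : value[key]),
    JSON.parse(body),
  )
}

console.log(routeKey('{"action":"sendMessage"}')) // → sendMessage
console.log(routeKey('{"x":{"y":{"z":"route"}}}', "$request.body.x.y.z")) // → route
```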

Debug process

The Serverless Offline plugin respects the overall framework settings and outputs additional information to the console in debug mode. To enable this, set the SLS_DEBUG environment variable. Run one of the following in the command line to switch to debug mode execution:

Unix: export SLS_DEBUG=*

Windows: SET SLS_DEBUG=*

Interactive debugging is also possible if you have installed the node-inspector module and the Chrome browser. You can then run the following commands from your project's root:

Initial installation: npm install -g node-inspector

For each debug run: node-debug sls offline

The system will start in a wait state. This will also automatically open Chrome and wait for you to set breakpoints for inspection. Set the breakpoints as needed, then click the play button for debugging to continue.

Depending on the breakpoint, you may need to call the URL path for your function in a separate browser window for your serverless function to run and become available for debugging.

Interactive Debugging with Visual Studio Code (VSC)

With newer versions of Node.js (6.3+), the inspector is already part of your Node environment and you can take advantage of debugging inside your IDE with source-map support. Here is an example configuration for debugging interactively with VSC. It has two steps.

Step 1 : Adding a launch configuration in IDE

Add a new launch configuration to VSC like this:

{
  "cwd": "${workspaceFolder}",
  "name": "Debug Serverless Offline",
  "request": "launch",
  "runtimeArgs": ["run", "debug"],
  "runtimeExecutable": "npm",
  "sourceMaps": true,
  "type": "node"
}

Step 2 : Adding a debug script

You will also need to add a debug script reference in your package.json file.

Add this to the scripts section:

Unix/Mac: "debug" : "export SLS_DEBUG=* && node --inspect /usr/local/bin/serverless offline"

Windows: "debug": "SET SLS_DEBUG=* && node --inspect node_modules\\serverless\\bin\\serverless offline"

Example:

....
"scripts": {
  "debug" : "SET SLS_DEBUG=* && node --inspect node_modules\\serverless\\bin\\serverless offline"
}

In VSC, you can, then, add breakpoints to your code. To start a debug sessions you can either start your script in package.json by clicking the hovering debug intellisense icon or by going to your debug pane and selecting the Debug Serverless Offline configuration.

Resource permissions and AWS profile

Lambda functions assume an IAM role during execution: the framework creates this role and sets all the permissions provided in the iamRoleStatements section of serverless.yml.

However, serverless offline uses your local AWS profile credentials to run the lambda functions, which might result in a different set of permissions. By default, the aws-sdk loads credentials for the default AWS profile specified in your configuration file.

You can change this profile directly in code or by setting the proper environment variables. Setting the AWS_PROFILE environment variable to a different profile before calling serverless offline effectively changes the credentials, e.g.

AWS_PROFILE=<profile> serverless offline

Simulation quality

This plugin simulates API Gateway well enough for many practical development purposes, but it is not a perfect simulator. Specifically, Lambda currently runs on Node.js v12.x, v14.x and v16.x (AWS Docs), whereas Offline runs on your own runtime, where no memory limits are enforced.

Usage with other plugins

When combining this plugin with other plugins there are a few things that you need to keep in mind.

You should run serverless offline start instead of serverless offline. The start command fires the offline:start:init and offline:start:end lifecycle hooks which can be used by other plugins to process your code, add resources, perform cleanups, etc.

The order in which plugins are added to serverless.yml is relevant. Plugins are executed in order, so plugins that process your code or add resources should be added first so they are ready when this plugin starts.

For example:

plugins:
  - serverless-middleware # modifies some of your handler based on configuration
  - serverless-webpack # package your javascript handlers using webpack
  - serverless-dynamodb # adds a local dynamo db
  - serverless-offline # runs last so your code has been pre-processed and dynamo is ready

That works because all those plugins listen to the offline:start:init to do their processing. Similarly they listen to offline:start:end to perform cleanup (stop dynamo db, remove temporary files, etc).

Credits and inspiration

This plugin was initially a fork of Nopik's Serverless-serve.

License

MIT

Contributing

Yes, thank you! This plugin is community-driven; most of its features were contributed by different authors. Please update the docs and tests and add your name to the package.json file. We try to follow Airbnb's JavaScript Style Guide.

Contributors

dnalborczyk dherault computerpunc frozenbonito leonardoalifraco
medikoo apancutt chardos daniel-cottone bryantbiggs
pgrzesik mikestaub Bilal-S qswinson juanjoDiaz
zoellner frsechet johncmckim dl748 ThisIsNoZaku
darthtrevino NicolasSeiler miltador moroine gertjvr
bytekast jormaechea thomaschaaf DorianMazur dortega3000
tom-marsh rwynn robbtraister joubertredrat jack-seek
perkyguy ansraliant hueniverse james-relyea sulaysumaria
ondrowan franciscocpg AyushG3112 AlexHayton Andorbal
andreipopovici awwong1 emmoistner coyoteecd OrKoN
trevor-leach bebbi paulhbarker njriordan adieuadieu
encounter leemhenson c24w Bob-Thomas ALOHACREPES345
djcrabhat marccampbell matt-peck purefan mzmiric5
pmuens pierreis raySavignone Rawne selcukcihan
shalvah footballencarta francisu patrickheeney re1ro
andidev arnas clschnei d10-cc ablythe
pettyalex domdomegg apalumbo rion18 anishkny
cameroncooper cmuto09 dschep dimadk24 dwbelliston
efrain17 eabadjiev tqfipe garunski jeroenvollenbrock
joewestcott LoganArnett perrin4869 njyjn DocLM
Trott rfranco uh-zz Raph22 randytarampi
PsychicCat petetnt thepont RichiCoder1 rishi8094
rloomans roberttaylor426 wwsno gribnoysup sergiodurand
sethetter shineli-not-used-anymore stesie jeromemacias kdybicz
kenleytomlin kevinhankens kerueter kohanian kyusungpark
lalifraco-devspark DynamicSTOP brazilianbytes Marcel-G neverendingqs
msjonker Takeno kelchm mjmac mohokh67
AlexHladin ojongerius parasgera furipon308 hsz
jeffhall4 jgilbert01 polaris340 khanguyen88 kobanyan
leruitga-ss livingmine lteacher martinmicunda nick-w-nick
nori3tsu ppasmanik ryanzyy adikari tom-stclair
tveal constb stevemao trsrm ittus
Ankcorn expoe-codebuild tiagogoncalves89 tuanmh Gregoirevda
vivganes gcphost YaroslavApatiev zacacollier akinboboye
allenhartwig ctbaird demetriusnunes dependabot[bot] drace-rgare
ericctsf BorjaMacedo BrandonE guerrerocarlos chrismcleod
icarus-sullivan cnuss christophgysin cdubz danmactough
GeneralistDev designfrontier daniel0707 dnicolson dbunker
dobrynin DavideSegullo domaslasauskas enolan EduardMcfly
thejuan adam-nielsen againer AlbertXingZhang aldenquimby
alebianco-doxee koterpillar aliclark andersem triptec
m0ppers cspotcode AndrewCEmil mapsi aliatsis
akaila ac360 austin-payne BenjaminBergerM iansu
gdinn MEGApixel23 idmontie ihendriks jacintoArias
JacekDuszenko jgrigg janicduplessis jsnajdr horyd
jasonfungsing jaydp17 jeremygiberson josephwarrick jlsjonas
jonathonadams joostfarla TheTeaCat eeroniemi minibikini
em0ney webdeveric fernyettheplant fernandomoraes panva
Edweis frodeaa gbroques ganey geoffmanningcleartrace
grakic-glopal guillaume mebibou balassy bayoudhi
enapupe aardvarkk

serverless-offline's People

Contributors

apancutt avatar bilal-s avatar bryantbiggs avatar bytekast avatar chardos avatar computerpunc avatar daniel-cottone avatar darthtrevino avatar dherault avatar dl748 avatar dnalborczyk avatar dorianmazur avatar dortega3000 avatar frozenbonito avatar frsechet avatar gertjvr avatar johncmckim avatar jormaechea avatar juanjodiaz avatar leonardoalifraco avatar medikoo avatar mikestaub avatar miltador avatar moroine avatar nicolasseiler avatar pgrzesik avatar qswinson avatar thisisnozaku avatar thomaschaaf avatar zoellner avatar


serverless-offline's Issues

Add CORS support

I'm using serverless-cors-plugin to add CORS support to my API Gateway endpoints, so that I can test/use my API in the browser and on hostnames/ports that differ from my API.

I am using swagger-ui to test the live endpoints, and would also like to use swagger-ui to do the same with my serverless-offline server, and yet I get the following error:

XMLHttpRequest cannot load http://localhost:3092/SomeEndpoint. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3091' is therefore not allowed access.

I noticed mention of setting response headers in the docs, and I am about to test this out now, but I worry that my efforts will affect my live deployments, so I am hesitant.

Either way, if CORS support could be added as a fundamental feature of this plugin, that would be really nice.

GET by id doesn't seem to work

Hi,

I'm trying to call the following API address, http://localhost:3000/locations/1, but it seems the offline plugin is not passing the id param to the lambda handler.

s-functions.json

    {
      "path": "locations/{id}",
      "method": "GET",
      "type": "AWS",
      "authorizationType": "none",
      "apiKeyRequired": false,
      "requestParameters": {
        "integration.request.path.id": "method.request.path.id"
      },
      "requestTemplates": {
        "application/json": "{\"operation\":\"read\",\"id\":\"$input.params('id')\"}"
      },
      "responses": {
        "400": {
          "statusCode": "400"
        },
        "default": {
          "statusCode": "200",
          "responseParameters": {},
          "responseModels": {},
          "responseTemplates": {
            "application/json": ""
          }
        }
      }
    }

handler

function handler(event, context) {
    console.log('Received event:', JSON.stringify(event, null, 2));
}

output

 Received event: {
  "isOffline": true
}

Debuggers support

The --debugOffline flag seems to only print some debug info to console. I would expect that to also send the --debug flag to the node process, so it exposes the debugger listening in the default port (5858), or setting --debug=PORT, to use a custom port, just as it happens with the node command. This way we could use any debugging tool to attach to that port and debug line by line from our code editor.

Content-type override

On non-GET and non-HEAD requests (typically POST), the content type of incoming requests is overridden to 'application/json'.
Will be fixed in v2.3.

Question - how can we use post method with this plugin

I found this comment from the author:

When no Content-Type header is set on a request, API Gateway defaults to application/json, and so does the plugin. But if you send an application/x-www-form-urlencoded or a multipart/form-data body with an application/json (or no) Content-Type, API Gateway won't parse your data (you'll get the ugly raw as input) whereas the plugin will answer 400 (malformed JSON). Please consider explicitly setting your requests' Content-Type and using separate templates.

but I don't know what I have to do. Can you please explain?

My attempt:

s-templates.json

{
  "apiGatewayRequestTemplate": {
    "application/json": {
      "test": "$input.params('test')"
    }
  }
}

part of s-function.json

"endpoints": [
    {
      "path": "user/login",
      "method": "POST",
      "type": "AWS",
      "authorizationType": "none",
      "authorizerFunction": false,
      "apiKeyRequired": false,
      "requestParameters": {},
      "requestTemplates": "$${apiGatewayRequestTemplate}",
      "responses": {
        "400": {
          "statusCode": "400"
        },
        "default": {
          "statusCode": "200",
          "responseParameters": {},
          "responseModels": {},
          "responseTemplates": {
            "application/json": ""
          }
        }
      }
    }
  ],

but I get the same issue as above.

What do I have to change to fix this issue?

Babel runtime requires return Promise?

Hi guys, thanks again for a great plugin; this is essential for development using API Gateway & Lambda. Currently I have a function with the babel runtime. However, I always get the following error when using context.done. This is really a show-stopper.
Serverless: Warning: context.done called twice within handler 'user_bookmark_create'!

My function is very simple though:

'use strict';

export default (event, context) => {
    setTimeout(function(){
        context.done(null, 'DONE');
    }, 2000);
};

It looks like we require a Promise to be returned from the handler here:
From src\index.js:

// Promise support
                if (funRuntime === 'babel' && !this.requests[requestId].done) {
                  if (x && typeof x.then === 'function' && typeof x.catch === 'function') {
                    x.then(lambdaContext.succeed).catch(lambdaContext.fail);
                  }
                  else if (x instanceof Error) lambdaContext.fail(x);
                  else lambdaContext.succeed(x);
                }

The handler above works perfectly on Lambda.

So my question is: why do we require returning a promise if we use the babel runtime?

Looking at the code here:

// Promise support
                if (funRuntime === 'babel' && !this.requests[requestId].done) {
                  if (x && typeof x.then === 'function' && typeof x.catch === 'function') {
                    x.then(lambdaContext.succeed).catch(lambdaContext.fail);
                  }
                  else if (x instanceof Error) lambdaContext.fail(x);
                  else lambdaContext.succeed(x);
                }

Looks like the last else is causing the problem. I think we shouldn't call lambdaContext.succeed(x); as the handler might still be working and about to execute context.done, as in my code above.

Thanks again!!! ( I'm more than happy to PR to fix this. )
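Under the issue's own reading of the plugin source, a handler that returns a Promise takes the first branch, where the plugin wires up succeed/fail itself. A minimal sketch of that style (timing shortened for illustration):

```javascript
// Promise-returning handler: per the quoted plugin code, a returned
// thenable gets x.then(lambdaContext.succeed).catch(lambdaContext.fail)
// attached, so context.done is never needed and the premature
// lambdaContext.succeed(undefined) branch is avoided.
const handler = (event, context) =>
  new Promise((resolve) => {
    setTimeout(() => resolve('DONE'), 20);
  });

module.exports = handler;
```

This is a workaround sketch, not a defense of the current behavior; the final `else lambdaContext.succeed(x)` still fires too early for handlers that return nothing and call context.done asynchronously.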

Does not work given serverless getting started doc

When using "sls function create", the s-function.json created contains this fragment:

  "endpoints": [
    {
      "path": "ivgen2",
      "method": "POST",  // <--- I CHANGED THIS TO SUITE MY FUNCTION
      "type": "AWS",
      "authorizationType": "none",
      "authorizerFunction": false,
      "apiKeyRequired": false,
      "requestParameters": {},
      "requestTemplates": {
        "application/json": ""
      },
      "responses": "$${apiGatewayResponseTemplate}"  // <-- THIS TOO
    }
  ],

In particular, "requestTemplates" has a template for "application/json" which is set to "". This seems fine for AWS. But not for offline ...

In index.js we have:

              let event = {};

              if (requestTemplate) {
                try {
                  debugLog('_____ REQUEST TEMPLATE PROCESSING _____');
                  // Velocity templating language parsing
                  const velocityContext = createVelocityContext(request, this.velocityContextOptions, request.payload || {});
                  event = renderVelocityTemplateObject(requestTemplate, velocityContext);
                } catch (err) {
                  return this._reply500(response, `Error while parsing template "${contentType}" for ${funName}`, err, requestId);
                }
              }

              event.isOffline = true;
              debugLog('event:', event);

In this case, requestTemplate is "", which is falsy (node 4.4.4), so the if block is not executed, leaving event equal to { isOffline: true } and nothing else! Here is the offline debug for my case:

[debug] requestId: 7351255607791245
[debug] contentType: application/json
[debug] requestTemplate: 
[debug] payload: { code: 'BadRequest', message: 'oops' }
[debug] Invalidating cache...
[debug] Loading handler... (/Users/peebles/serverless/functions/ivgen2/handler)
[debug] event: { isOffline: true }

See how the "payload" is ignored, not passed into "event". My function then does not see my payload.

If requestTemplate is not defined, or if it is set to "" (as sls seems to do when creating a new function), then a POST will never see the payload.

In case you are wondering, this is how I called my function:

$ curl -v -X POST -H "Content-Type: application/json" --data '{"code":"BadRequest","message": "oops"}' http://localhost:3000/ivgen2
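The pitfall described above can be reduced to a tiny sketch (hypothetical function name; `render` stands in for the Velocity step):

```javascript
// Sketch of the falsy-template pitfall: an empty-string template skips
// the `if (requestTemplate)` branch, so the payload is dropped and only
// { isOffline: true } ever reaches the handler.
function buildEvent(requestTemplate, payload, render) {
  let event = {};
  if (requestTemplate) { // "" is falsy, so this block is skipped
    event = render(requestTemplate, payload);
  }
  event.isOffline = true;
  return event;
}

const identityRender = (template, payload) => payload;
console.log(buildEvent('', { code: 'BadRequest' }, identityRender));
// { isOffline: true }  <- payload lost
```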

Problem with parsing requestTemplate

It seems that serverless-offline is not correctly handling my request template.

In my s-function.json, I have the following:

"requestParameters": {
    "integration.request.path.id": "method.request.path.id"
},
"requestTemplates": {
    "application/json": "{\"id\":\"$input.params('id')\"}"
},

But the end result is the handler is receiving a crazy object:

{ '0': undefined,
  '1': undefined,
  '2': undefined,
  '3': undefined,
  '4': undefined,
  '5': undefined,
  '6': undefined,
  '7': undefined,
  '8': undefined,
  '9': undefined,
  '10': undefined,
  '11': undefined,
  '12': undefined,
  '13': undefined,
  '14': undefined,
  '15': undefined,
  '16': undefined,
  '17': undefined,
  '18': undefined,
  '19': undefined,
  '20': undefined,
  '21': undefined,
  '22': undefined,
  '23': undefined,
  '24': undefined,
  '25': undefined,
  '26': undefined,
  '27': undefined,
  bold: undefined,
  to: [Function],
  toEnd: [Function],
  underline: undefined,
  strikethrough: undefined,
  italic: undefined,
  inverse: undefined,
  grey: undefined,
  black: undefined,
  yellow: undefined,
  red: undefined,
  green: undefined,
  blue: undefined,
  white: undefined,
  cyan: undefined,
  magenta: undefined,
  greyBG: undefined,
  blackBG: undefined,
  yellowBG: undefined,
  redBG: undefined,
  greenBG: undefined,
  blueBG: undefined,
  whiteBG: undefined,
  cyanBG: undefined,
  magentaBG: undefined,
  rainbow: undefined,
  zebra: undefined,
  stripColors: undefined,
  zalgo: undefined,
  isOffline: true }

Trying to debug it, it looks like the issue is within renderVelocityTemplateObject.js as the templateObject parameter passed in is actually a string, so the following line iterates over every character:

for (let key in templateObject) {

Am I doing something wrong or is this an issue in serverless-offline?

Thanks!
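The failure mode described (for...in over a string visiting every index, plus inherited prototype properties such as those added by the `colors` package) suggests a guard like the following. This is a sketch of a possible fix, not the plugin's actual code:

```javascript
// Possible guard for renderVelocityTemplateObject: treat a string
// template as a leaf and render it directly, instead of letting
// for...in iterate over its characters and prototype properties.
function renderTemplateObject(templateObject, renderString) {
  if (typeof templateObject === 'string') {
    return renderString(templateObject);
  }
  const result = {};
  for (const key in templateObject) {
    result[key] = renderTemplateObject(templateObject[key], renderString);
  }
  return result;
}
```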

Request template not parsed/rendered correctly

First of all, thanks a lot for this plugin! It's an awesome time saver and makes development and debugging so much easier!

I am however having some issues with the request templates, which I hope you can help with.

The template I am using is very basic:

{
  "apiRequestTemplate": {
    "application/json": {
      "httpMethod": "$context.httpMethod",
      "body": "$input.json('$')",
      "queryParams": "$input.params().querystring",
      "headerParams": "$input.params().header",
      "headerParamNames": "$input.params().header.keySet()",
      "contentTypeValue": "$input.params().header.get('Content-Type')"
    }
  }
}

So when I hit my endpoint with the query string "?first=12&other=34" I was expecting the "queryParams" property to give me an array or an object, but what I am getting is a string like this one: "{first=12, other=34}". This is obviously not valid JSON, so it doesn't get parsed once merged with the template.

I'm new to all this, but while debugging, it looked like the velocity context did have an object as input, so something happened at rendering time.

The same thing happens for headerParams and headerParamNames.

If this help, I am using serverless-offline v2.3.2 on Windows

Thanks!
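Until the rendering is fixed, one stopgap is to parse that Java-style map `toString` output back into an object. A hypothetical helper, assuming keys and values contain no ", " or "=":

```javascript
// Workaround sketch: parse the "{first=12, other=34}" string the plugin
// currently renders for $input.params().querystring back into an object.
function parseJavaMapString(s) {
  const out = {};
  const body = s.trim().replace(/^\{/, '').replace(/\}$/, '');
  if (!body) return out;
  for (const pair of body.split(', ')) {
    const [key, value] = pair.split('=');
    out[key] = value;
  }
  return out;
}

console.log(parseJavaMapString('{first=12, other=34}'));
// { first: '12', other: '34' }
```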

Doesn't seem to support the MOCK endpoint type (for CORS)

The MOCK endpoint type would be super helpful, enabling us to implement the CORS preflight OPTIONS endpoint without invoking the handler.

Note that for our current use case, we only need CORS for offline.

Finally, I'm new to serverless, so I could be missing something important and/or obvious...

Array in $input.json('$.value') expanded as double array

Hi. I'm testing with a simple template like this:

  "requestTemplates": {
    "application/json": "{\"value\":$input.json('$.value')}"
  },

It expands as expected with a string value:

curl -H 'Content-Type: application/json' -d '{"value":"hello"}' 'http://localhost:3000/echo'
{"value":"hello"}

However, with an array value, the array is expanded as a double array:

curl -H 'Content-Type: application/json' -d '{"value":["hello"]}' 'http://localhost:3000/echo'
{"value":[["hello"]]}

The problem seems to trace to this line in jsonPath.js:

const result = JSONPath({ json, path, wrap: false });

When I modify it like this, expansion works properly also for array values:

const result = JSONPath({ json, path, wrap: true })[0];

Could it be changed in the official version without breaking anything?

Does not work with node v0.10.44

A pretty minor issue but on node 0.10.44 when running the sls offline start

/usr/local/lib/node_modules/serverless/bin/serverless:5
let argv   = require('minimist')(process.argv.slice(2));
^^^
SyntaxError: Unexpected strict mode reserved word

Why even, do you ask?
Because Lambda only supports node 0.10.x

I was trying to be diligent and develop against the target platform.

CORS OPTIONS preflight request does not have cors headers set

Ok, so what I posted here, is actually this:

If a CORS preflight request (OPTIONS) is sent to an endpoint, serverless-offline fails to set the appropriate CORS headers.

"Normal" GET request (CORS headers are present):

$ curl -H "Origin: http://localhost:2702" -H "Access-Control-Request-Method: GET" -H "Access-Control-Request-Headers: X-Requested-With" -X GET --verbose http://localhost:2703/Travellers/version
*   Trying ::1...
* connect to ::1 port 2703 failed: Connection refused
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 2703 (#0)
> GET /Travellers/version HTTP/1.1
> Host: localhost:2703
> User-Agent: curl/7.43.0
> Accept: */*
> Origin: http://localhost:2702
> Access-Control-Request-Method: GET
> Access-Control-Request-Headers: X-Requested-With
>
< HTTP/1.1 200 OK
< content-type: application/json;charset=UTF-8
< vary: origin
< access-control-allow-origin: http://localhost:2702
< access-control-allow-credentials: true
< access-control-expose-headers: WWW-Authenticate,Server-Authorization
< cache-control: no-cache
< content-length: 19
< Date: Wed, 29 Jun 2016 15:55:43 GMT
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
{"version":"0.2.7"}

versus

CORS preflight OPTIONS request (CORS headers are not present):

$ curl -H "Origin: http://localhost:2702" -H "Access-Cod: GET" -H "Access-Control-Request-Headers: X-Requested-With" -X OPTIONS --verbose http://localhost:2703/Travellers/version
*   Trying ::1...
* connect to ::1 port 2703 failed: Connection refused
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 2703 (#0)
> OPTIONS /Travellers/version HTTP/1.1
> Host: localhost:2703
> User-Agent: curl/7.43.0
> Accept: */*
> Origin: http://localhost:2702
> Access-Control-Request-Method: GET
> Access-Control-Request-Headers: X-Requested-With
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=utf-8
< cache-control: no-cache
< content-length: 54
< Date: Wed, 29 Jun 2016 15:56:44 GMT
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
{"message":"CORS error: Some headers are not allowed"}

Thus, requests which require a CORS preflight will fail with something like this:

XMLHttpRequest cannot load http://localhost:2703/Travellers/version. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:2702' is therefore not allowed access.

Cheers,
Chris

authorizer function support

It would be great to add support for customer authorizer functions so I can simulate authorization/authentication locally.

Cannot read property 'serverlessPath' of undefined

For serverless v0.5

Added the plugin, then ran sls offline start and got this error.

C:\Users\Hyr\AppData\Roaming\npm\node_modules\serverless\node_modules\bluebird\js\release\async.js:49
        fn = function () { throw arg; };
                           ^

TypeError: Cannot read property 'serverlessPath' of undefined
  at module.exports.S.classes.Plugin._createRoutes.functions.forEach.fun.endpoints.forEach.ep.server.route (C:\Users\Hyr\Desktop\www\hyr\client-api\node_modules\serverless-offline\src\index.js:13:51)
  at Serverless._loadPlugins (C:\Users\Hyr\AppData\Roaming\npm\node_modules\serverless\lib\Serverless.js:284:25)
  at Serverless.loadProjectPlugins (C:\Users\Hyr\AppData\Roaming\npm\node_modules\serverless\lib\Serverless.js:253:12)
  at C:\Users\Hyr\AppData\Roaming\npm\node_modules\serverless\lib\Serverless.js:110:15
  at processImmediate [as _immediateCallback] (timers.js:383:17)
From previous event:
  at Serverless.init (C:\Users\Hyr\AppData\Roaming\npm\node_modules\serverless\lib\Serverless.js:105:8)
  at Object.<anonymous> (C:\Users\Hyr\AppData\Roaming\npm\node_modules\serverless\bin\serverless:17:12)
  at Module._compile (module.js:413:34)
  at Object.Module._extensions..js (module.js:422:10)
  at Module.load (module.js:357:32)
  at Function.Module._load (module.js:314:12)
  at Function.Module.runMain (module.js:447:10)
  at startup (node.js:141:18)
  at node.js:933:3

Not able to specify timeout with a template value

While this work for deploys this does not work for local dev

// s-function.json
{
  "name": "parts-read",
  "runtime": "nodejs4.3",
  "description": "",
  "customName": false,
  "customRole": false,
  "handler": "handler.handler",
  "timeout": "$${defaultTimeout}",
  "memorySize": "$${defaultMb}",
  "authorizer": {},
  "custom": {
    "excludePatterns": []
  },
  "endpoints": [/*...*/],
  "events": [],
  "environment": "$${envVars}",
  "vpc": {
    "securityGroupIds": [],
    "subnetIds": []
  }
}
// s-templates.json
{
  "apiRequestTemplate": {
    "application/json": {
      "httpMethod": "$context.httpMethod",
      "body": "$input.json('$')",
      "queryParams": "$input.params().querystring",
      "headerParams": "$input.params().header",
      "headerParamNames": "$input.params().header.keySet()",
      "contentTypeValue": "$input.params().header.get('Content-Type')"
    }
  },
  "defaultMb": 128,
  "defaultTimeout": 6,
  "envVars": {
    "DATABASE_URL": "${dbConnectionString}",
    "MANDRILL_API_KEY": "${mandrillApiKey}",
    "SERVERLESS_PROJECT": "${project}",
    "SERVERLESS_STAGE": "${stage}",
    "SERVERLESS_REGION": "${region}"
  }
}

None of my JSON parameters come through in event object

I can't seem to figure out what is going on I have a simple Serverless function like:

var aws = require('aws-sdk');


exports.handler = function (event, context) {


      var tpl = event.tpl || "";

      var resp = {
        "status":   "success",
        "message": tpl
      }
      context.succeed(JSON.stringify(event));
};

and I'm using Postman to call it with this JSON:

{
"tpl" : "blog-template"
}

But the response I get back is missing the tpl param as this is all I get back:

{
  "isOffline": true
}

Does anyone have an idea what could be going on?

environment variables aren't set

Hi again,

I noticed that when running with serverless-offline, none of the environment variables specified in s-function.json, like SERVERLESS_STAGE or SERVERLESS_REGION, are populated. Is this on purpose? Is there a way to have them populated? In some functions I'm relying on the stage to determine NODE_ENV and would really appreciate having this available while in serverless-offline as well :)

Thanks,
Chris

eslint support, and watch files

It would be good to have the option to parse the code through eslint when starting the offline command, and show the lint warnings/errors as you save the files (watch).

Bug in renderVelocityTemplateObject.js and quick fix

Hi,

I have the following request template which is not parsed properly by the offline plugin:

{
  "requestTemplates": {
    "application/json": "{\n \"body\" : $input.json('$'),\n \"headers\": {\n #foreach($header in $input.params().header.keySet())\n \"$header\": \"$util.escapeJavaScript($input.params().header.get($header))\" #if($foreach.hasNext),#end\n \n #end\n },\n \"method\": \"$context.httpMethod\",\n \"params\": {\n #foreach($param in $input.params().path.keySet())\n \"$param\": \"$util.escapeJavaScript($input.params().path.get($param))\" #if($foreach.hasNext),#end\n \n #end\n },\n \"query\": {\n #foreach($queryParam in $input.params().querystring.keySet())\n \"$queryParam\": \"$util.escapeJavaScript($input.params().querystring.get($queryParam))\" #if($foreach.hasNext),#end\n \n #end\n } \n}"
  }
}

To fix the problem, make a change on code line 43 and add the context parameter:

from:

const alternativeResult = tryToParseJSON(renderVelocityString(toProcess));

to:

const alternativeResult = tryToParseJSON(renderVelocityString(toProcess, context));

Uncaught error: Cannot set property 'isOffline' of null if payload is empty

Hello there,

I just discovered that if you fire a POST request with no payload at all it will throw the following error:

Serverless: POST /Invite (λ: Invite-post)
Debug: internal, implementation, error
    TypeError: Uncaught error: Cannot set property 'isOffline' of null
  at server.route.handler.error (/Users/cspeer/Development/Atameo/api/node_modules/serverless-offline/src/index.js:389:31)
  at Object.internals.handler (/Users/cspeer/Development/Atameo/api/node_modules/hapi/lib/handler.js:96:36)
  at request._protect.run (/Users/cspeer/Development/Atameo/api/node_modules/hapi/lib/handler.js:30:23)
  at internals.Protect.run (/Users/cspeer/Development/Atameo/api/node_modules/hapi/lib/protect.js:64:5)
  at exports.execute (/Users/cspeer/Development/Atameo/api/node_modules/hapi/lib/handler.js:24:22)
  at each (/Users/cspeer/Development/Atameo/api/node_modules/hapi/lib/request.js:383:16)
  at iterate (/Users/cspeer/Development/Atameo/api/node_modules/hapi/node_modules/items/lib/index.js:36:13)
  at done (/Users/cspeer/Development/Atameo/api/node_modules/hapi/node_modules/items/lib/index.js:28:25)
  at internals.Auth.payload (/Users/cspeer/Development/Atameo/api/node_modules/hapi/lib/auth.js:223:16)
  at each (/Users/cspeer/Development/Atameo/api/node_modules/hapi/lib/request.js:383:16)
  at iterate (/Users/cspeer/Development/Atameo/api/node_modules/hapi/node_modules/items/lib/index.js:36:13)
  at done (/Users/cspeer/Development/Atameo/api/node_modules/hapi/node_modules/items/lib/index.js:28:25)
  at onParsed (/Users/cspeer/Development/Atameo/api/node_modules/hapi/lib/route.js:395:20)
  at Subtext.parse (/Users/cspeer/Development/Atameo/api/node_modules/hapi/lib/route.js:416:20)
  at next (/Users/cspeer/Development/Atameo/api/node_modules/hapi/node_modules/subtext/lib/index.js:43:16)
  at internals.object (/Users/cspeer/Development/Atameo/api/node_modules/hapi/node_modules/subtext/lib/index.js:164:20)
  at Object.internals.jsonParse (/Users/cspeer/Development/Atameo/api/node_modules/hapi/node_modules/subtext/lib/index.js:270:16)
  at Object.internals.object (/Users/cspeer/Development/Atameo/api/node_modules/hapi/node_modules/subtext/lib/index.js:254:26)
  at Wreck.read (/Users/cspeer/Development/Atameo/api/node_modules/hapi/node_modules/subtext/lib/index.js:156:19)
  at finish (/Users/cspeer/Development/Atameo/api/node_modules/hapi/node_modules/subtext/node_modules/wreck/lib/index.js:299:20)
  at wrapped (/Users/cspeer/Development/Atameo/api/node_modules/hapi/node_modules/hoek/lib/index.js:867:20)
  at onReaderFinish (/Users/cspeer/Development/Atameo/api/node_modules/hapi/node_modules/subtext/node_modules/wreck/lib/index.js:370:16)
  at g (events.js:286:16)
  at emitNone (events.js:91:20)
  at emit (events.js:185:7)
  at finishMaybe (_stream_writable.js:488:14)
  at endWritable (_stream_writable.js:498:3)
  at Writable.end (_stream_writable.js:463:5)
  at IncomingMessage.onend (_stream_readable.js:517:10)
  at IncomingMessage.g (events.js:286:16)
  at emitNone (events.js:91:20)
  at IncomingMessage.emit (events.js:185:7)
  at endReadableNT (_stream_readable.js:926:12)
  at _combinedTickCallback (internal/process/next_tick.js:74:11)
  at process._tickDomainCallback (internal/process/next_tick.js:122:9)

If you however supply a payload as simple as {} everything is fine!
Thanks for this awesome plugin!

Cheers,
Chris
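The crash boils down to assigning a property on a null payload. A sketch of a possible guard (hypothetical function name, not the plugin's actual fix):

```javascript
// Guard sketch: default a missing/null request payload to an empty
// object before tagging it, so `isOffline` can always be set.
function tagOffline(payload) {
  const event = payload || {};
  event.isOffline = true;
  return event;
}

console.log(tagOffline(null));
// { isOffline: true }
```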

Pass METHOD and PATH to event

In the serverless docs, there is a part about how to structure your endpoints and functions to consolidate the code. It mentions that API Gateway can pass the METHOD and PATH of the endpoint to the lambda event. Those can then be used to route the request to a specific chunk of code instead of having multiple lambdas. It would be nice if this plugin could do that too.
Thanks

x-form-url-encoded body should not be parsed by offline server

I have an API Gateway endpoint that accepts form POST data (i.e., with MIME type application/x-www-form-urlencoded). API Gateway will not parse this data; instead this has to be done in the lambda, so the event payload in AWS is a string.

The problem is that the Hapi server always parses this data into a JavaScript object. Which means my lambda code that works on AWS will not work through serverless offline.

I tried to find a way to get the raw request body through the Hapi server, but couldn't figure out how.

EDIT: Using serverless offline version 2.5.1 & serverless version 0.5.5.

Error - internal, implementation, error

Debug: internal, implementation, error
    TypeError: response.send is not a function
  at Offline._reply500 (E:\wamp\www\project\node_modules\serverless-offline\src\index.js:470:16)
  at server.route.handler.Object.keys.forEach.createLambdaContext (E:\wamp\www\project\node_modules\serverless-offline\src\index.js:444:29)
  at Object.exports.execute.internals.prerequisites.internals.handler.callback [as handler] (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\handler.js:96:36)
  at E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\handler.js:30:23
  at [object Object].internals.Protect.run.finish [as run] (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\protect.js:64:5)
  at exports.execute.finalize (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\handler.js:24:22)
  at each (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\request.js:383:16)
  at iterate (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\node_modules\items\lib\index.js:36:13)
  at done (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\node_modules\items\lib\index.js:28:25)
  at [object Object].internals.Auth.test.internals.Auth._authenticate (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\auth.js:210:16)
  at internals.Auth.test.internals.Auth.authenticate (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\auth.js:202:17)
  at each (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\request.js:383:16)
  at iterate (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\node_modules\items\lib\index.js:36:13)
  at done (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\node_modules\items\lib\index.js:28:25)
  at internals.state.next (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\route.js:350:16)
  at each (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\request.js:383:16)
  at iterate (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\node_modules\items\lib\index.js:36:13)
  at Object.exports.serial (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\node_modules\items\lib\index.js:39:9)
  at [object Object].internals.Request.internals.Request._execute.internals.Request._lifecycle.each [as _lifecycle] (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\request.js:386:11)
  at [object Object].internals.Request.internals.Request._execute (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\request.js:301:21)
  at Domain.<anonymous> (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\connection.js:244:25)
  at Domain.run (domain.js:228:14)
  at [object Object].internals.Protect.run.internals.Protect.enter (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\protect.js:80:17)
  at Server.<anonymous> (E:\wamp\www\project\node_modules\serverless-offline\node_modules\hapi\lib\connection.js:242:30)
  at emitTwo (events.js:87:13)
  at Server.emit (events.js:172:7)
  at HTTPParser.parserOnIncoming [as onIncoming] (_http_server.js:528:12)
  at HTTPParser.parserOnHeadersComplete (_http_common.js:88:23)

Can you please explain why I am getting this error? Is this a bug, or is there an issue with my code?

I love your work :)

JSON using VTL different between serverless on API Gateway vs offline

I am using this construct in my s-templates.json as part of my requestParameters:

    "params": "{#foreach($param in $input.params().path.keySet())\"$param\": \"$util.escapeJavaScript($input.params().path.get($param))\" #if($foreach.hasNext),#end#end}",

When I call my API from API Gateway, my Lambda receives the params as a string, and I have to call JSON.parse on the string.

When I call my API from serverless-offline, my Lambda receives the params as an object (which I actually prefer).

Would be great if both returned the same response, which I guess means offline should match API Gateway/Lambda.
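Until the two environments agree, handlers can normalize defensively. A hypothetical helper that accepts `params` either way:

```javascript
// Defensive sketch: accept `params` as either the JSON string that
// API Gateway produces or the object serverless-offline produces.
function normalizeParams(params) {
  return typeof params === 'string' ? JSON.parse(params) : params;
}

console.log(normalizeParams('{"id":"1"}').id); // '1'
console.log(normalizeParams({ id: '1' }).id);  // '1'
```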

Overwriting Content-Type header in response not working

I'm serving web content (css, js) through my Lambda. For this I have defined a central endpoint that handles various different content types. The actual one is set dynamically in the response. Take this endpoint response as an example:

      "responses": {
        "404.*": {
          "statusCode": "404",
          "responseParameters": {},
          "responseModels": {},
          "responseTemplates": {
            "application/json": "#set($inputRoot = $input.path('$.errorMessage'))\n$inputRoot"
          }
        },
        "default": {
          "statusCode": "200",
          "responseParameters": {
            "method.response.header.Content-Type": "integration.response.body.contentType"
          },
          "responseModels": {},
          "responseTemplates": {
            "application/json": "#set($inputRoot = $input.path('$.content'))\n$inputRoot"
          }
        }
      }

The json response from my Lambda looks something like this:

{ "contentType": "text/css", "content": "h1 { ... }" }

The expected behavior is that the HTTP Content-Type header is set to text/css (as defined in the responseParameters section) and the response body to h1 { ... }. This works fine when deployed to AWS API Gateway. However, the serverless-offline plugin doesn't seem to handle the responseParameters properly. Instead, it always uses the default application/json; charset=utf-8. If I set any other random header instead of Content-Type (e.g. X-Content-Type), it does contain the correct and expected value.

I'm using v2.3.0 of the plugin.

Serverless v0.5.0 env breaking changes

Serverless has released v0.5.0 with breaking changes to environment variables and has dropped the serverless-helpers-js module. My question is whether the offline plugin should be responsible for loading these env variables from s-function.json. I tried running my app, but the env variables don't seem to be passed.

s-function.json

  "environment": {
    "SERVERLESS_PROJECT": "${project}",
    "SERVERLESS_STAGE": "${stage}",
    "SERVERLESS_REGION": "${region}"
  }

optionally disable timeout

I have a use case for an option to disable or lengthen the timeout when running offline.

Specifically, I have a function that has to load a sizeable file from S3 to process requests. This of course takes very little time on Lambda, but it can easily time out locally. I could adjust the config for when I'm running locally, but that isn't really a good solution.

I think the ideal would be to be able to run the server with a command line option to set the timeout to another length or 0 for infinite.
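For illustration, the requested command-line options might look like the following (the flag names here are assumptions for the sake of the sketch, not existing options):

```shell
# Hypothetical flags: disable the timeout entirely, or set it in seconds.
sls offline --noTimeout
sls offline --timeout 300
```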

Does not work with POST?

I tried to access data in the event object, but it's undefined.
Thanks for creating this awesome plugin 👍

Set env var when Offline

Hi there,

Lambda best practices suggest setting up database connections (in my case, DynamoDB Local) outside of the event handler. I noticed that isOffline is appended to the event object, but we have no other way to determine if we're running offline before that.

My suggestion would be to set a process.env.IS_OFFLINE or similar within the server code, so we can check for it within init code, before calling the handler.

Problem is, I have no idea where in src/index.js this would go. I'll take a look and open a PR, but I wanted your opinion first.
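A minimal sketch of the suggestion, assuming the server sets an `IS_OFFLINE` environment variable before loading handlers (the flag name and the DynamoDB Local port are assumptions):

```javascript
// Hypothetical module-scope setup: choose the DynamoDB endpoint once,
// before any handler runs, based on the proposed IS_OFFLINE env var.
function dynamoEndpoint(env) {
  return env.IS_OFFLINE
    ? 'http://localhost:8000' // DynamoDB Local
    : undefined;              // fall back to the default AWS endpoint
}

const endpoint = dynamoEndpoint(process.env);
```

Because the check runs at module load time, the connection can be configured in init code rather than inside the event handler.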

Problem with CORS support

I have a problem with CORS. I’ve got it as far as returning 200 from OPTIONS, but then get this in Chrome:

    XMLHttpRequest cannot load https://localhost.xxx.com:3000/accounts. 
    Response to preflight request doesn't pass access control check: 
    No 'Access-Control-Allow-Origin' header is present on the requested resource. 
    Origin 'https://localhost.xxx.com:3001' is therefore not allowed access.

Where xxx is my domain name for my SSL cert. Note that the ports are different on the origin and XMLHttpRequest, therefore, $.ajax adds 'Origin'. This seems to be the case even if I specify crossDomain: false.

I'm invoking serverless offline like this:

    sls offline start --corsAllowOrigin '*' --corsAllowHeaders 'x-amz-date,authorization,content-type,accepts' -H {path to my cert/key}

Charles Proxy gets this response:

    {
      "message": "CORS error: Some headers are not allowed"
    } 

Looking at the code, I suspect this is happening somewhere in hapi, which I have not used before. I installed node-debug, but didn't even hit a breakpoint in the handler.

Before I dig further into debugging, I wonder if anyone else has run into this?

This is failing in Chrome Version 51.0.2704.63 (64-bit) on Mac, which is my main debug environment. It seems to work on Mac Firefox.

BTW, thanks for your quick response on my other issue, yesterday.

Custom Authorizer HowTo

Hi there,

I'm using a custom authorizer and it works great when deployed to lambda and API Gateway, but serverless-offline never invokes my authorizer. So, my question is whether there is more configuration work to be done on my part or whether you could provide a simple walk-through of how to make serverless-offline respect a custom authorizer :)

Thanks a lot,
Chris

$util.escapeJavaScript inconsistency

When using $util.escapeJavaScript('Hello%0AWorld') in a template,
APIG parses: Hello↵World
Offline parses: Hello\nWorld

This is due to the Offline dependency js-string-escape that does not match APIG's $util.escapeJavaScript.

Will fix. Until then this does the trick: foo.replace(/\\n/g, '\n');
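The workaround above, wrapped as a helper for clarity (the function name is illustrative):

```javascript
// Undo the extra escaping: turn the two literal characters "\n" that the
// js-string-escape dependency produces back into a real newline, matching
// what APIG's $util.escapeJavaScript returns for %0A.
function fixEscapedNewlines(str) {
  return str.replace(/\\n/g, '\n');
}
```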

support for scheduled events

Is there a way to get scheduled events supported? i.e.:

    "events": [
      {
        "name": "schedule",
        "type": "schedule",
        "config": {
          "schedule": "rate(5 minutes)"
        }
      }
    ]

responseParameters support

Would like support for responseParameters.

Not sure if the example below is valid, but I would like to dynamically load response values into headers, such as Location for a newly created item.

"responses": {
   "default": {
      "statusCode": "201",
      "responseParameters": {
         "method.response.header.Location": "integration.response.body.location"
      },
      "responseModels": {},
      "responseTemplates": {
         "application/json": "$input.path('$.body')"
      }
   }
}

Ability to run lambda functions offline "locally"

Sorry for the empty post in your emails; I'll edit this now.

Is there a way to use lambda functions that are not deployed, via invoke, in serverless offline mode?

E.g. .then(() => invoke('timeout', {user, delay: 70})) would execute the lambda function, but it calls it in AWS rather than running it in offline mode.

There is a command, sls function run timeout, that runs the function locally. Could this be abstracted so that when you run serverless offline, functions invoked from code are also run locally?
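One possible sketch of such a local invoke, assuming handlers follow the standard Node.js callback signature (`invokeLocal` and the handler path are hypothetical, not part of this plugin):

```javascript
// Hypothetical local dispatch: call a required handler function directly
// and wrap its (err, result) callback in a Promise, instead of hitting AWS.
function invokeLocal(handlerFn, event) {
  return new Promise((resolve, reject) => {
    handlerFn(event, {}, (err, result) => (err ? reject(err) : resolve(result)));
  });
}
```

Usage might look like `invokeLocal(require('./handlers/timeout').handler, { user, delay: 70 })`, keeping the whole invocation on the local machine.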
