
node-custom-lambda's Introduction

Note: This repo is in maintenance mode as we assess whether Serverless GitHub Actions might provide a better experience going forward

LambCI

Serverless continuous integration

Launch CloudFormation Stack Serverless App Repository LambCI Build Status Gitter


Automate your testing and deployments with:

  • 1000 concurrent builds out of the box (can request more)
  • No maintenance of web servers, build servers or databases
  • Zero cost when not in use (ie, 100% utilization)
  • Easy to integrate with the rest of your AWS resources

What is it?

LambCI is a package you can upload to AWS Lambda that gets triggered when you push new code or open pull requests on GitHub and runs your tests (in the Lambda environment itself) – in the same vein as Jenkins, Travis or CircleCI.

It integrates with Slack, and updates your Pull Request and other commit statuses on GitHub to let you know if you can merge safely.

LambCI in action

It can be easily launched and kept up-to-date as a CloudFormation Stack, or you can manually create the different resources yourself.

Installed languages

  • Node.js 12.x (including npm/npx)
  • Python 3.6 (including pip)
  • GCC 7.2 (including c++)

Current Limitations (due to the Lambda environment itself)

  • No root access
  • 500MB disk space
  • 15 min max build time
  • Bring-your-own-binaries – Lambda has a limited selection of installed software
  • 3.0GB max memory
  • Linux only

You can get around many of these limitations by configuring LambCI to send tasks to an ECS cluster where you can run your builds in Docker.

Installation

You don't need to clone this repository – the easiest way to install LambCI is to deploy it from the Serverless Application Repository or directly spin up a CloudFormation stack. This will create a collection of related AWS resources, including the main LambCI Lambda function and DynamoDB tables, that you can update or remove together – it should take around 3 minutes to spin up.

You can use multiple repositories from one stack, and you can run multiple stacks with different names side-by-side too (eg, lambci-private and lambci-public).

If you'd prefer to deploy your stack after cloning this repository, you can use npm run deploy – this depends on the AWS SAM CLI being installed.

1. Create a GitHub token

You can create a token in the Personal access tokens section of your GitHub settings. If you're setting up LambCI for an organization, it might be a good idea to create a separate GitHub user dedicated to running automated builds (GitHub calls these "machine users") – that way you have more control over which repositories this user has access to.

Click the Generate new token button and then select the appropriate access levels.

LambCI only needs read access to your code, but unfortunately GitHub webhooks have rather crude access mechanisms and don't have a read-only scope for private repositories – the only option is to choose repo ("Full control").

Private GitHub access

If you're only using LambCI for public repositories, then you just need access to commit statuses:

Public GitHub access

Then click the "Generate token" button and GitHub will generate a 40-character hex OAuth token.

2. Create a Slack token (optional)

You can obtain a Slack API token by creating a bot user (or you can use the token from an existing bot user if you have one) – this direct link should take you there, but you can navigate from the App Directory via Browse Apps > Custom Integrations > Bots.

Pick any name, and when you click "Add integration" Slack will generate an API token that looks something like xoxb-<numbers>-<letters>.

Add Slack bot

3. Launch the LambCI CloudFormation stack

You can either deploy it from the Serverless Application Repository, use this direct CloudFormation link, or navigate in your AWS Console to Services > CloudFormation, choose "Create Stack" and use the S3 link:

CloudFormation Step 1

Then click Next, where you can enter a stack name (lambci is a good default), API tokens and a Slack channel. You'll also need to make up a secret to secure your webhook and enter it as the GithubSecret – any randomly generated value works, but keep it handy so you can enter it when you set up your webhooks in GitHub later on.

CloudFormation Step 2

Click Next, and then Next again on the Options step (leaving the default options selected), to get to the final Review step:

CloudFormation Step 3

Check the acknowledgments, click Create Change Set and then Execute to start the resource creation process:

CloudFormation Step 4

Once your stack is created (should be done in a few minutes) you're ready to add the webhook to any repository you like!

You can get the WebhookUrl from the Outputs of the CloudFormation stack:

CloudFormation Step 5

Then create a new Webhook in any GitHub repo you want to trigger under Settings > Webhooks (https://github.com/<user>/<repo>/settings/hooks/new) and enter the WebhookUrl from above as the Payload URL, ensure Content type is application/json and enter the GithubSecret you generated in the first step as the Secret:

GitHub Webhook Step 1

Assuming you want to respond to Pull Requests as well as Pushes, you'll need to choose "Let me select individual events", and check Pushes and Pull requests.

GitHub Webhook Step 2

Then "Add webhook" and you're good to go!

By default LambCI only responds to pushes on the master branch and pull requests (you can configure this), so try either of those – if nothing happens, then check Services > CloudWatch > Logs in the AWS Console and see the Questions section below.

Installing as a nested stack in another CloudFormation stack

You can also embed LambCI in your own stack, using an AWS::Serverless::Application resource:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  LambCI:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:553035198032:applications/lambci
        SemanticVersion: 0.11.2
      Parameters:
        GithubToken: '123456789abcdef123456789abcdef123456789'
        GithubSecret: 'my-web-secret'
        SlackChannel: '#general'
        SlackToken: 'xoxb-123456789-abcdefABCDEFabcdef'

Outputs:
  S3Bucket:
    Description: Name of the build results S3 bucket
    Value: !GetAtt LambCI.Outputs.S3Bucket
  WebhookUrl:
    Description: GitHub webhook URL
    Value: !GetAtt LambCI.Outputs.WebhookUrl

If you save the above as template.yml, then you can use the AWS SAM CLI to deploy from the same directory:

sam deploy --stack-name lambci --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND

Configuration

Many configuration values can be specified in a .lambci.js, .lambci.json or package.json file in the root of your repository – and all values can be set in the DynamoDB configuration table (named <stack>-config, eg, lambci-config).

For example, the default command that LambCI will try to run is npm ci && npm test, but let's say you have a Python project – you could put the following in .lambci.json in your repository root:

{
  "cmd": "pip install --user tox && tox"
}

(LambCI bundles pip and adds $HOME/.local/bin to PATH)

If you have a more complicated build setup, then you could specify make or create a bash script in your repository root:

{
  "cmd": "./lambci-test.sh"
}

Overriding default properties

LambCI resolves configuration by overriding properties in a cascading manner in the following order:

  1. Default config (see below)
  2. global project key in lambci-config DynamoDB table
  3. gh/<user>/<repo> project key in lambci-config DynamoDB table
  4. lambci property in package.json file in repository root
  5. .lambci.js or .lambci.json file in repository root
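The cascade above is a deep merge, later sources winning key by key. A hypothetical sketch (not LambCI's actual source, and with illustrative sample configs):

```javascript
// Deep-merge `source` over `target`: nested objects merge key by key,
// scalars and arrays from later sources replace earlier values outright.
function mergeConfig(target, source) {
  for (const key of Object.keys(source)) {
    const val = source[key]
    if (val && typeof val === 'object' && !Array.isArray(val)) {
      target[key] = mergeConfig(Object.assign({}, target[key]), val)
    } else {
      target[key] = val
    }
  }
  return target
}

// Example: the DynamoDB project config overrides the Slack channel,
// and a repo config file overrides the build command
const defaults = { cmd: 'npm ci && npm test', notifications: { slack: { channel: '#general' } } }
const projectDb = { notifications: { slack: { channel: '#dev' } } }
const repoFile = { cmd: './lambci-test.sh' }

const resolved = [defaults, projectDb, repoFile]
  .reduce((acc, cfg) => mergeConfig(acc, cfg), {})
```

Note that merging nested objects (rather than replacing them wholesale) is what lets a project override just notifications.slack.channel while inheriting the rest of the slack config.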

You can use the command line to edit the DynamoDB config values:

lambci config secretEnv.GITHUB_TOKEN abcdef01234
lambci config --project gh/mhart/kinesalite secretEnv.SLACK_TOKEN abcdef01234

Or the AWS console:

Global config in DynamoDB

So if you wanted to use a different Slack token and channel for a particular project, you could create an item in the config table with the project key gh/<user>/<repo> that looks similar to the global config above, but with different values:

{
  project: 'gh/mhart/kinesalite',
  secretEnv: {
    SLACK_TOKEN: 'xoxb-1234243432-vnjcnioeiurn'
  },
  notifications: {
    slack: {
      channel: '#someotherchannel'
    }
  }
}

Using the command line:

lambci config --project gh/mhart/kinesalite secretEnv.SLACK_TOKEN xoxb-1234243432-vnjcnioeiurn
lambci config --project gh/mhart/kinesalite notifications.slack.channel '#someotherchannel'

Config file overrides

Here's an example package.json overriding the cmd property:

{
  "name": "some-project",
  "scripts": {
    "lambci-build": "eslint . && mocha"
  },
  "lambci": {
    "cmd": "npm ci && npm run lambci-build"
  }
}

And the same example using .lambci.js:

module.exports = {
  cmd: 'npm ci && npm run lambci-build'
}

The ability to override config properties using repository files depends on the allowConfigOverrides property (see the default config below).

Branch and pull request properties

Depending on whether LambCI is building a branch from a push or a pull request, you can also specify config properties that override the base config in each of these cases.

For example, to determine whether a build should even take place, LambCI looks at the top-level build property of the configuration. By default this is actually false, but if the branch is master, then LambCI checks for a branches.master property and if it's set, uses that instead:

{
  build: false,
  branches: {
    master: true
  }
}

If a branch just has a true value, this is the equivalent of {build: true}, so you can override other properties too – ie, the above snippet is just shorthand for:

{
  build: false,
  branches: {
    master: {
      build: true
    }
  }
}

So if you wanted Slack notifications for the develop branch to go to a different channel from the default, you could specify:

{
  branches: {
    master: true,
    develop: {
      build: true,
      notifications: {
        slack: {
          channel: '#dev'
        }
      }
    }
  }
}

You can also use regular expression syntax to specify config for branches that match, or don't match (if there is a leading !). Exact branch names are checked first, then the first matching regex (or negative regex) will be used:

// 1. Don't build gh-pages branch
// 2. Don't build branches starting with 'dev'
// 3. Build any branch that doesn't start with 'test-'
{
  build: false,
  branches: {
    '/^dev/': false,
    '!/^test-/': true,
    'gh-pages': false,
  }
}
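Those matching rules can be sketched as follows (a hypothetical illustration, not LambCI's actual source):

```javascript
// Resolve the config value for a branch: exact names win, then the first
// matching /regex/ key applies (a leading ! applies the value when the
// branch does NOT match the regex). Returns undefined if nothing matches,
// in which case the top-level `build` value is used.
function branchConfig(branches, branch) {
  if (Object.prototype.hasOwnProperty.call(branches, branch)) return branches[branch]
  for (const [key, val] of Object.entries(branches)) {
    const negated = key.startsWith('!')
    const pattern = negated ? key.slice(1) : key
    if (pattern.startsWith('/') && pattern.endsWith('/')) {
      const regex = new RegExp(pattern.slice(1, -1))
      if (regex.test(branch) !== negated) return val
    }
  }
}

// The example config from above
const branches = { '/^dev/': false, '!/^test-/': true, 'gh-pages': false }
```

With this config, gh-pages matches exactly, dev-foo hits the first regex, feature-x is caught by the negated test- pattern, and test-foo matches nothing (so falls back to the top-level build: false).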

Default configuration

This configuration is hardcoded in utils/config.js and overridden by any config from the DB (and config files):

{
  cmd: 'npm ci && npm test',
  env: { // env values exposed to build commands
  },
  secretEnv: { // secret env values, exposure depends on inheritSecrets config below
    GITHUB_TOKEN: '',
    GITHUB_SECRET: '',
    SLACK_TOKEN: '',
  },
  s3Bucket: '', // bucket to store build artifacts
  notifications: {
    slack: {
      channel: '#general',
      username: 'LambCI',
      iconUrl: 'https://lambci.s3.amazonaws.com/assets/logo-48x48.png',
      asUser: false,
    },
  },
  build: false, // Build nothing by default except master and PRs
  branches: {
    master: true,
  },
  pullRequests: {
    fromSelfPublicRepo: true, // Pull requests from the same (public) repo will build
    fromSelfPrivateRepo: true, // Pull requests from the same (private) repo will build
    fromForkPublicRepo: { // Restrictions for pull requests from forks on public repos
      build: true,
      inheritSecrets: false, // Don't expose secretEnv values in the build command environment
      allowConfigOverrides: ['cmd', 'env'], // Only allow file config to override cmd and env properties
    },
    fromForkPrivateRepo: false, // Pull requests from forked private repos won't run at all
  },
  s3PublicSecretNames: true, // Use obscured names for build HTML files and make them public. Has no effect in public repositories
  inheritSecrets: true, // Expose secretEnv values in the build command environment by default
  allowConfigOverrides: true, // Allow files to override config values
  clearTmp: true, // Delete /tmp each time for safety
  git: {
    depth: 5, // --depth parameter for git clone
  },
}

SNS Notifications (for email, SMS, etc)

By default, the CloudFormation template doesn't create an SNS topic to publish build statuses (ie, success, failure) to – but if you want to receive build notifications via email or SMS, or some other custom SNS subscriber, you can specify an SNS topic and LambCI will push notifications to it:

notifications: {
  sns: {
    topicArn: 'arn:aws:sns:us-east-1:1234:lambci-StatusTopic-1WF8BT36'
  }
}

The Lambda function needs to have permissions to publish to this topic, which you can either add manually, or by modifying the CloudFormation template.yaml and updating your stack.

Add a top-level SNS topic resource (a commented-out example of this exists in template.yaml):

  StatusTopic:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: LambCI

And ensure the Lambda function has permissions to publish to it:

  BuildLambda:
    Type: AWS::Serverless::Function
    Properties:
      # ...
      Policies:
        # ...
        - SNSPublishMessagePolicy:
            TopicName: !Ref StatusTopic

Build status badges

Each branch has a build status image showing whether the last build was successful or not. For example, here is LambCI's latest master status (yes, LambCI dogfoods!):

LambCI Build Status

You can see the URLs for the branch log and badge image near the start of the output of your build logs (so you'll need to run at least one build on your branch to get these):

Branch log: https://<bucket>/<project>/branches/master/<somehash>.html
Branch status img: https://<bucket>/<project>/branches/master/<somehash>.svg
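You could then embed the badge in your repository's README, linking it to the branch log (the <bucket>, <project> and <somehash> placeholders are the same as in the URLs above – fill them in from your own build output):

```markdown
[![Build Status](https://<bucket>/<project>/branches/master/<somehash>.svg)](https://<bucket>/<project>/branches/master/<somehash>.html)
```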

Updating

You can update your CloudFormation stack at any time to change, add or remove the parameters – or even upgrade to a new version of LambCI.

In the AWS Console, go to Services > CloudFormation, select your LambCI stack in the list and then choose Actions > Update Stack. You can keep the same template selected (unless you're updating LambCI), and then when you click Next you can modify parameters like your GitHub token, Slack channel, etc.

LambCI will do its best to update these parameters correctly, but if it fails or you run into trouble, just try setting them all to blank, updating, and then update again with the values you want.

If you've (only) modified template.yaml locally, then you'll need to run npm run template and use build/versioned.yaml to update your stack.

If you've modified other LambCI code locally, you can update with npm run deploy – this requires the AWS SAM CLI to be installed.

Updating to 0.10.0 from earlier versions

Updating to 0.10.0 should Just Work™ using the new template – however GitHub shut down the use of SNS hooks, which is how LambCI was previously triggered, so you'll need to go through any repositories on GitHub that you had set up with previous LambCI versions, remove the SNS hook if it wasn't removed already (in Settings), and add the new webhook as laid out in Installation.

Security

The default configuration passes secret environment variables to build commands, except when building forked repositories. This allows you to use your AWS credentials and Git/Slack tokens in your build commands to communicate with the rest of your stack. Set inheritSecrets to false to prevent this.

HTML build logs are generated with random filenames, but are accessible to anyone who has the link. Set s3PublicSecretNames to false (only works for private repositories) to make build logs completely private (you'll need to use the AWS console to access them), or you can remove s3Bucket entirely – you can still see the build logs in the Lambda function output in CloudWatch Logs.

By default, the /tmp directory is removed each time – this is to prevent secrets from being leaked if your LambCI stack is building both private and public repositories. However, if you're only building private (trusted) repositories, then you can set the clearTmp config to false, and potentially cache files (eg, in $HOME) for use across builds (this is not guaranteed – it depends on whether the Lambda environment is kept "warm").
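For example, a stack that only builds trusted private repositories could opt in to caching npm downloads in $HOME across warm invocations with a .lambci.json like the following (a hypothetical sketch – the cache path is an illustrative choice, and the cache only survives if the container stays warm):

```json
{
  "clearTmp": false,
  "cmd": "npm ci --cache $HOME/.npm-cache && npm test"
}
```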

If you discover any security issues with LambCI please email [email protected].

Language Recipes

The default command is npm ci && npm test which will use Node.js 12.14.1 and npm 6.13.6.

The way to build with different Node.js versions, or other languages entirely, is just to override the cmd config property.

LambCI comes with a collection of helper scripts to set up your environment for languages not supported out of the box on AWS Lambda – that is, every language except Node.js and Python 3.6.

Node.js

LambCI comes with nave installed and available on the PATH, so if you wanted to run your npm install and tests using Node.js v10.x, you could specify:

{
  "cmd": "nave use 10 bash -c 'npm ci && npm test'"
}

If you're happy using the built-in npm to install, you could simplify this a little:

{
  "cmd": "npm ci && nave use 10 npm test"
}

There's currently no way to run multiple builds in parallel but you could have processes run in parallel using a tool like npm-run-all – the logs will be a little messy though!

Here's an example package.json for running your tests in Node.js v8, v10 and v12 simultaneously:

{
  "lambci": {
    "cmd": "npm ci && npm run ci-all"
  },
  "scripts": {
    "ci-all": "run-p ci:*",
    "ci:node8": "nave use 8 npm test",
    "ci:node10": "nave use 10 npm test",
    "ci:node12": "nave use 12 npm test"
  },
  "devDependencies": {
    "npm-run-all": "*"
  }
}

Python

LambCI comes with pip installed and available on the PATH, and Lambda has Python 3.6 already installed. $HOME/.local/bin is also added to PATH, so local pip installs should work:

{
  "cmd": "pip install --user tox && tox"
}

Other Python versions with pyenv

LambCI comes with pyenv installed and a script you can source to setup the pyenv root and download prebuilt versions for you.

Call it with the Python version you want (currently: 3.8.0, 3.7.4, 3.6.9 or system, which will use the 3.6 version already installed on Lambda):

{
  "cmd": ". ~/init/python 3.8.0 && pip install --user tox && tox"
}

Java

The Java SDK is not installed on AWS Lambda, so needs to be downloaded as part of your build – but the JRE does exist on Lambda, so the overall impact is small.

LambCI includes a script you can source before running your build commands that will install and setup the SDK correctly, as well as Maven (v3.6.3). Call it with the OpenJDK version you want (currently only 1.8.0):

{
  "cmd": ". ~/init/java 1.8.0 && mvn install -B -V && mvn test"
}

You can see an example of this working here – and the resulting build log.

Go

Go is not installed on AWS Lambda, so needs to be downloaded as part of your build, but Go is quite small and well suited to running anywhere.

LambCI includes a script you can source before running your build commands that will install Go and set your GOROOT and GOPATH with the correct directory structure. Call it with the Go version you want (any of the versions on the Go site):

{
  "cmd": ". ~/init/go 1.13.5 && make test"
}

You can see examples of this working here – and the resulting build log.

Ruby

Ruby is not installed on AWS Lambda, so needs to be downloaded as part of your build.

LambCI includes a script you can source before running your build commands that will install Ruby, rbenv, gem and bundler. Call it with the Ruby version you want (currently: 2.7.0, 2.6.5, 2.5.7, 2.4.9, 2.3.8, 2.2.10, 2.1.10 or 2.0.0-p648):

{
  "cmd": ". ~/init/ruby 2.7.0 && bundle install && bundle exec rake"
}

You can see an example of this working here – and the resulting build log.

PHP

PHP is not installed on AWS Lambda, so needs to be downloaded as part of your build.

LambCI includes a script you can source before running your build commands that will install PHP, phpenv and composer. Call it with the PHP version you want (currently: 7.3.13, 7.2.26, 7.1.33, 7.0.32 or 5.6.38):

{
  "cmd": ". ~/init/php 7.3.13 && composer install -n --prefer-dist && vendor/bin/phpunit"
}

These versions are compiled using php-build with the default config options and overrides of --disable-cgi and --disable-fpm.

You can see an example of this working here – and the resulting build log.

Extending with ECS

LambCI can run tasks on an ECS cluster, which means you can perform all of your build tasks in a Docker container and not be subject to the same restrictions you have in the Lambda environment.

This needs to be documented further – for now you'll have to go off the source and check out the lambci/ecs repo.

Questions

What does the Lambda function do?

  1. Receives notification from GitHub (via a webhook)
  2. Looks up config in DynamoDB
  3. Clones git repo using a bundled git binary
  4. Looks up config files in repo
  5. Runs install and build cmds on Lambda (or starts ECS task)
  6. Updates Slack and GitHub statuses along the way (optionally SNS for email, etc)
  7. Uploads build logs/statuses to S3

License

MIT

node-custom-lambda's People

Contributors

clayzermk1, jgriepentrog, mhart, reconbot, wqfan


node-custom-lambda's Issues

Repository size

Hey,
thanks a lot already for this nice example, but I saw that the repository is very big (more than 300MB).
Is it really necessary to push the layer.zip in the repository?

Cryptic error - "Unknown application error occurred"

Here are the logs of my app when I try to reach my lambda:

REPORT RequestId: 1274b44b-b585-4a59-89a4-22e86460a5eb  Duration: 6866.84 ms    Billed Duration: 6900 ms        Memory Size: 512 MB     Max Memory Used: 162 MB 

Unknown application error occurred
Error
START RequestId: 48a19299-ac57-4f05-b747-2cd3118a4851 Version: $LATEST
END RequestId: 48a19299-ac57-4f05-b747-2cd3118a4851
REPORT RequestId: 48a19299-ac57-4f05-b747-2cd3118a4851  Duration: 6835.13 ms    Billed Duration: 6900 ms        Memory Size: 512 MB     Max Memory Used: 161 MB 

Unknown application error occurred
Error
START RequestId: e94accec-fb7d-407b-8dca-07b1c8ec7cec Version: $LATEST
END RequestId: e94accec-fb7d-407b-8dca-07b1c8ec7cec
REPORT RequestId: e94accec-fb7d-407b-8dca-07b1c8ec7cec  Duration: 6816.46 ms    Billed Duration: 6900 ms        Memory Size: 512 MB     Max Memory Used: 162 MB 

Unknown application error occurred
Error
START RequestId: 48f6b0a5-b9e3-44a6-b510-d19d476d694a Version: $LATEST

It happened after I changed the --target option of one of my scripts in my serverless.yml:

  webpack:
    packager: 'yarn'
    packagerOptions:
      # XXX Necessary to properly package the "node-v48-linux-x64-glibc" binary used by "dialogflow", because AWS Lambda runs under Linux
      # See https://github.com/serverless-heaven/serverless-webpack/issues/342#issuecomment-383248835
      scripts:
        - npm rebuild grpc --target=8.1.0 --target_arch=x64 --target_platform=linux --target_libc=glibc

This script tries to rebuild the grpc binary using a node target of 8.1.0 while running under a 10.15.3 nodejs version. It's perfectly normal that it fails, but the error message should be more understandable.

I'm using https://github.com/serverless-heaven/serverless-webpack#custom-scripts to execute the scripts.

I have no idea how it can be improved, but I'd suggest trying to pinpoint the cause of the error in such a use case, because if anyone migrates from an official runtime to this custom runtime and encounters this error, they'll have a very hard time figuring out the root cause.

Configurable custom runtime

I wonder what would be the recommended approach to make a custom runtime configurable.

I'm thinking about logging low level exceptions, like the ones only the layer can catch, and I'm wondering about how to configure that low-level logging.

More specifically, I'm interested in adding either Sentry or https://epsagon.com/, and both rely on some configuration, like app name, tokens, etc.

So, I wonder what would be the best way of loading this config. I'm thinking ENV variables are probably the way to go, but I'm not sure – how would you implement it, if you wanted to do something similar?

Seek help for node 6.9.3 custom runtime - Doesn't implement context.succeed for node 6.9.3 - TypeError: context.succeed is not a function

I'm sorry to bother you, but since you have some experience with AWS Lambda custom runtimes I wonder if you could guide me.

I opened an issue a few days ago regarding implementing support for the soon-to-be-deprecated node6.10.3 official AWS Lambda runtime.

Since then I played around with custom runtimes quite a bit, and also tried to migrate my app, but I'm encountering errors even when using a custom runtime matching node6.10.3, because your lambda implementation doesn't exactly match the real AWS Lambda implementation. (which is normal, considering you ported the 10.x version and the official implementation isn't the same between 6 and 10).

But I'm struggling with the implementation itself and could use some help or guidance regarding the steps to take.

I released a first version of my custom runtime using your implementation, and I'm now trying to change that implementation to match the AWS one for node 6.10.3. I opened a PR Vadorequest#1 and I found some of your gists regarding how it's implemented by AWS, but they're incomplete and seem to rely on closed sources.

I'm wondering if SSH-ing into a live 6.10 lambda and retrieving the content would be the right approach in order to copy the files AWS uses.


For instance, my current implementation changes the behaviour of the aws-serverless-express plugin, which relies on a context.succeed method (callback style) – context is implemented as a Promise in node 10 and therefore doesn't have a succeed property.

CodeGenieApp/serverless-express#231

Add support for Runtime layer node 6.10

It may sound silly, but as AWS is officially removing their official node 6.10 runtime, lots of apps developed using node 6.10 won't be deployable by the end of April 2019, and won't be updatable by the end of May 2019.

Due to AWS slow release cycle before Runtime layers were a thing, I believe lots of apps may be in this situation and can't be migrated as fast as they should be.

I personally am in such a situation: I developed an app using node 6 because node 8 wasn't available back then, and can't migrate it as fast as I would like. So, I'm very much interested in a node 6.10 version that behaves like the official AWS runtime, so I can buy a few more months before migrating to a newer node.js runtime.

Please let me know if that's something you'd be interested in doing; if not I'll do it myself (but I'd prefer to keep all the versions in this repo as it seems to act like a node.js runtime layers HUB) :)

Include latest AWS SDK?

What do you think about globally including the latest aws-sdk bundled with the custom runtime?

Multi_line_start_pattern

One of the big differences between the official runtimes and this one is that the logs on CloudWatch are not grouped per console.log() message (each \n is split into its own log line). This makes stack traces very difficult to parse.

As per this thread, this can be changed by changing the Agent file for Cloudwatch with the runtime: https://forums.aws.amazon.com/thread.jspa?threadID=158643 to use multi_line_start_pattern

Would this be something that can be included in this runtime? Or if it's out of scope, can someone help me figure out how I could add it myself for my own custom runtime?

Allow importing handler from layers

It would be useful to be able to load handler modules from layers! (I do this with other runtimes)

Options would be to include /opt/nodejs/node_modules in the search path, or to allow absolute imports.

node 13 custom runtime

Hi there,
I would like to use a custom runtime with node 13 to try out the unflagged implementation of ES modules available from node 13.

Is there any plan to release a custom runtime for node 13 as part of this project?

Second Layer not working

How does one add a second layer? I created another layer named commonlibs, containing node_modules dependencies.

Both layers appear in the Lambda console with nodejs11 listed first. When I try to invoke the lambda I get the following:

"errorType": "Error",
"errorMessage": "Unable to import module 'handler'",
"stackTrace": [
  "at getHandler (/opt/bootstrap.js:138:13)",
  "at start (/opt/bootstrap.js:23:15)",
  "at Object.<anonymous> (/opt/bootstrap.js:18:1)",
  "at Module._compile (internal/modules/cjs/loader.js:723:30)",
  "at Object.Module._extensions..js (internal/modules/cjs/loader.js:734:10)",
  "at Module.load (internal/modules/cjs/loader.js:620:32)",
  "at tryModuleLoad (internal/modules/cjs/loader.js:560:12)",
  "at Function.Module._load (internal/modules/cjs/loader.js:552:3)",
  "at Function.Module.runMain (internal/modules/cjs/loader.js:776:12)",
  "at executeUserCode (internal/bootstrap/node.js:342:17)"
]

This happens on the require('moment'). I created my commonlibs layer with the following structure:

└── nodejs
    ├── node_modules
    ├── package-lock.json
    └── package.json

Thanks

Unable to import module 'handler'

Trying to integrate your library with Lambda-API (https://github.com/jeremydaly/lambda-api)

  • I have a simple example working using each library independently.

  • However, it seems including this line at the top of my handler.js causes an error: const api = require('lambda-api')

  • 'requiring' another library such as 'axios' works fine. So it doesn't seem to be a global problem.

  • switching to the built-in 8.10 runtime causes the issue to go away

{"errorMessage": "Unable to import module 'handler'", "stackTrace": [
  "at getHandler (/opt/bootstrap.js:139:13)",
  "at start (/opt/bootstrap.js:24:15)",
  "at Object.<anonymous> (/opt/bootstrap.js:19:1)",
  "at Module._compile (internal/modules/cjs/loader.js:799:30)",
  "at Object.Module._extensions..js (internal/modules/cjs/loader.js:810:10)",
  "at Module.load (internal/modules/cjs/loader.js:666:32)",
  "at tryModuleLoad (internal/modules/cjs/loader.js:606:12)",
  "at Function.Module._load (internal/modules/cjs/loader.js:598:3)",
  "at Function.Module.runMain (internal/modules/cjs/loader.js:862:12)",
  "at internal/main/run_main_module.js:21:11"
]}
Thu Mar 21 22:13:44 UTC 2019 : Lambda execution failed with status 200 due to customer function error: Unable to import module 'handler'. Lambda request id: 3d6a100b-dd7d-4d03-9be8-7e30233d2e8e
Thu Mar 21 22:13:44 UTC 2019 : Method completed with status: 502

serverless.yml

service: node112

provider:
  name: aws
  runtime: provided
  region: us-west-1

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: /
          method: get
    layers:
      - arn:aws:lambda:us-west-1:553035198032:layer:nodejs11:11

handler.js

const api = require('lambda-api')

exports.hello = async (event, context) => {
  console.log(`Hi from Node.js ${process.version} on Lambda!`)
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hi from Node.js ${process.version} on Lambda!` })
  }
}
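Since requiring 'axios' works while 'lambda-api' fails, one way to narrow it down is to probe each suspect module from inside the handler instead of letting the runtime die on import. This is just a hypothetical diagnostic helper, not part of the layer:

```javascript
// Hypothetical diagnostic: try to require each module and report the exact
// error, rather than crashing the runtime with "Unable to import module".
function tryRequire(name) {
  try {
    return { ok: true, mod: require(name) }
  } catch (err) {
    return { ok: false, error: err.message }
  }
}

// Probe a built-in and a (deliberately) missing module to see the difference.
console.log(tryRequire('path').ok)                      // prints true
console.log(tryRequire('surely-not-installed-xyz').ok)  // prints false, with the error captured
```

Running this for each dependency in turn shows whether the failure comes from the module itself or from something it requires transitively.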

`context.succeed` is not a function

Hi there,

I'm using https://github.com/awslabs/aws-serverless-express as a wrapper for my Express application in Lambda, and it fails with the error below. It seems to be looking for context.succeed, which exists in the standard AWS Node.js environment but in none of the custom ones.
I've also attached the test code, and the output of context in both environments.
I'm not sure what course of action you'd like to take, but this could stop people from deploying with the layer in the future.

Thanks!
Daniel

Crash log:

/var/task/node_modules/aws-serverless-express/src/index.js:243
if (params.resolutionMode === 'CONTEXT_SUCCEED') return params.context.succeed(params2.response)
^

TypeError: params.context.succeed is not a function
at Object.succeed (/var/task/node_modules/aws-serverless-express/src/index.js:243:78)
at IncomingMessage.response.on.on (/var/task/node_modules/aws-serverless-express/src/index.js:105:16)
at IncomingMessage.emit (events.js:202:15)
at IncomingMessage.EventEmitter.emit (domain.js:439:20)
at endReadableNT (_stream_readable.js:1129:12)
at processTicksAndRejections (internal/process/next_tick.js:76:17)

Test code:

exports.handler = async(event, context) => {
  console.log(event, context);
}

Node 10/11 (output is the same)

START RequestId: c99f7d98-af85-4860-90a4-5433bfc65d91 Version: $LATEST
{ key1: 'value1', key2: 'value2', key3: 'value3' } { awsRequestId: 'c99f7d98-af85-4860-90a4-5433bfc65d91',
invokedFunctionArn:
'arn:aws:lambda:ap-southeast-1:795327357717:function:Example',
logGroupName: '/aws/lambda/Example',
logStreamName: '2019/02/06/[$LATEST]6da0c26641034929843fb503163be206',
functionName: 'Example',
functionVersion: '$LATEST',
memoryLimitInMB: '128',
getRemainingTimeInMillis: [Function: getRemainingTimeInMillis] }
END RequestId: c99f7d98-af85-4860-90a4-5433bfc65d91
REPORT RequestId: c99f7d98-af85-4860-90a4-5433bfc65d91 Duration: 255.44 ms Billed Duration: 300 ms Memory Size: 128 MB Max Memory Used: 55 MB

Node 8.10 from AWS

START RequestId: ffa39ca7-7d64-4879-8af1-d9f1457f6540 Version: $LATEST
2019-02-06T09:12:29.175Z ffa39ca7-7d64-4879-8af1-d9f1457f6540 { key1: 'value1', key2: 'value2', key3: 'value3' } { callbackWaitsForEmptyEventLoop: [Getter/Setter],
done: [Function: done],
succeed: [Function: succeed],
fail: [Function: fail],
logGroupName: '/aws/lambda/Example',
logStreamName: '2019/02/06/[$LATEST]84f0d049f76d4e6a9e010ea66b07d9d0',
functionName: 'Example',
memoryLimitInMB: '128',
functionVersion: '$LATEST',
getRemainingTimeInMillis: [Function: getRemainingTimeInMillis],
invokeid: 'ffa39ca7-7d64-4879-8af1-d9f1457f6540',
awsRequestId: 'ffa39ca7-7d64-4879-8af1-d9f1457f6540',
invokedFunctionArn: 'arn:aws:lambda:ap-southeast-1:795327357717:function:Example' }
END RequestId: ffa39ca7-7d64-4879-8af1-d9f1457f6540
REPORT RequestId: ffa39ca7-7d64-4879-8af1-d9f1457f6540 Duration: 66.89 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 20 MB

Help with serverless framework

First of all, thanks for doing this!
It's very useful to be able to use any runtime we could possibly want.
I'm trying to use your ARNs to deploy a couple of functions with the Serverless Framework, but I'm having no luck.
Have you by any chance used yours with the Serverless Framework? If so, could you share how you got it working?
Thanks!
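For reference, a minimal serverless.yml wiring a function to one of the published layer ARNs might look like the sketch below; the service name is a placeholder, and the region must match the region in the ARN:

```yaml
service: my-service

provider:
  name: aws
  runtime: provided        # the custom runtime comes from the layer
  region: us-west-1        # must match the region in the layer ARN

functions:
  hello:
    handler: handler.hello
    layers:
      - arn:aws:lambda:us-west-1:553035198032:layer:nodejs11:11
```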
