
docker-lambda's Introduction

Note: This repo is in maintenance mode as we assess whether Serverless GitHub Actions might provide a better experience going forward

LambCI

Serverless continuous integration



Automate your testing and deployments with:

  • 1000 concurrent builds out of the box (can request more)
  • No maintenance of web servers, build servers or databases
  • Zero cost when not in use (ie, 100% utilization)
  • Easy to integrate with the rest of your AWS resources



What is it?

LambCI is a package you can upload to AWS Lambda that gets triggered when you push new code or open pull requests on GitHub and runs your tests (in the Lambda environment itself) – in the same vein as Jenkins, Travis or CircleCI.

It integrates with Slack, and updates your Pull Request and other commit statuses on GitHub to let you know if you can merge safely.

LambCI in action

It can be easily launched and kept up-to-date as a CloudFormation Stack, or you can manually create the different resources yourself.

Installed languages

  • Node.js 12.x (including npm/npx)
  • Python 3.6 (including pip)
  • GCC 7.2 (including c++)

Supported languages

Prerequisites

Current Limitations (due to the Lambda environment itself)

  • No root access
  • 500MB disk space
  • 15 min max build time
  • Bring-your-own-binaries – Lambda has a limited selection of installed software
  • 3.0GB max memory
  • Linux only

You can get around many of these limitations by configuring LambCI to send tasks to an ECS cluster where you can run your builds in Docker.

Installation

You don't need to clone this repository – the easiest way to install LambCI is to deploy it from the Serverless Application Repository or directly spin up a CloudFormation stack. This will create a collection of related AWS resources, including the main LambCI Lambda function and DynamoDB tables, that you can update or remove together – it should take around 3 minutes to spin up.

You can use multiple repositories from a single stack, and you can also run multiple stacks with different names side by side (eg, lambci-private and lambci-public).

If you'd prefer to run your stack after cloning this repository, you can use npm run deploy – this depends on AWS SAM CLI being installed.

1. Create a GitHub token

You can create a token in the Personal access tokens section of your GitHub settings. If you're setting up LambCI for an organization, it might be a good idea to create a separate GitHub user dedicated to running automated builds (GitHub calls these "machine users") – that way you have more control over which repositories this user has access to.

Click the Generate new token button and then select the appropriate access levels.

LambCI only needs read access to your code, but unfortunately GitHub webhooks have rather crude access mechanisms and don't have a read-only scope for private repositories – the only option is to choose repo ("Full control").

Private GitHub access

If you're only using LambCI for public repositories, then you just need access to commit statuses:

Public GitHub access

Then click the "Generate token" button and GitHub will generate a 40-character hex OAuth token.

2. Create a Slack token (optional)

You can obtain a Slack API token by creating a bot user (or you can use the token from an existing bot user if you have one) – this direct link should take you there, but you can navigate from the App Directory via Browse Apps > Custom Integrations > Bots.

Pick any name, and when you click "Add integration" Slack will generate an API token that looks something like xoxb-<numbers>-<letters>

Add Slack bot

3. Launch the LambCI CloudFormation stack

You can either deploy it from the Serverless Application Repository or use this direct CloudFormation link or navigate in your AWS Console to Services > CloudFormation, choose "Create Stack" and use the S3 link:

CloudFormation Step 1

Then click Next, where you can enter a stack name (lambci is a good default), API tokens and a Slack channel. You'll also need to make up a secret to secure your webhook and enter it as the GithubSecret – any randomly generated value is fine, but keep it handy so you can enter it when you set up your webhooks in GitHub later on.

CloudFormation Step 2

Click Next, and then Next again on the Options step (leaving the default options selected), to get to the final Review step:

CloudFormation Step 3

Check the acknowledgments, click Create Change Set and then Execute to start the resource creation process:

CloudFormation Step 4

Once your stack is created (should be done in a few minutes) you're ready to add the webhook to any repository you like!

You can get the WebhookUrl from the Outputs of the CloudFormation stack:

CloudFormation Step 5

Then create a new Webhook in any GitHub repo you want to trigger under Settings > Webhooks (https://github.com/<user>/<repo>/settings/hooks/new) and enter the WebhookUrl from above as the Payload URL, ensure Content type is application/json and enter the GithubSecret you generated in the first step as the Secret:

GitHub Webhook Step 1

Assuming you want to respond to Pull Requests as well as Pushes, you'll need to choose "Let me select individual events", and check Pushes and Pull requests.

GitHub Webhook Step 2

Then "Add webhook" and you're good to go!

By default LambCI only responds to pushes on the master branch and pull requests (you can configure this), so try either of those – if nothing happens, then check Services > CloudWatch > Logs in the AWS Console and see the Questions section below.

Installing as a nested stack in another CloudFormation stack

You can also embed LambCI in your own stack, using an AWS::Serverless::Application resource:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  LambCI:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:553035198032:applications/lambci
        SemanticVersion: 0.11.2
      Parameters:
        GithubToken: '123456789abcdef123456789abcdef123456789'
        GithubSecret: 'my-web-secret'
        SlackChannel: '#general'
        SlackToken: 'xoxb-123456789-abcdefABCDEFabcdef'

Outputs:
  S3Bucket:
    Description: Name of the build results S3 bucket
    Value: !GetAtt LambCI.Outputs.S3Bucket
  WebhookUrl:
    Description: GitHub webhook URL
    Value: !GetAtt LambCI.Outputs.WebhookUrl

If you save the above as template.yml, then you can use the AWS SAM CLI to deploy from the same directory:

sam deploy --stack-name lambci --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND

Configuration

Many configuration values can be specified in a .lambci.js, .lambci.json or package.json file in the root of your repository – and all values can be set in the DynamoDB configuration table (named <stack>-config, eg, lambci-config)

For example, the default command that LambCI will try to run is npm ci && npm test, but let's say you have a python project – you could put the following in .lambci.json in your repository root:

{
  "cmd": "pip install --user tox && tox"
}

(LambCI bundles pip and adds $HOME/.local/bin to PATH)

If you have a more complicated build setup, then you could specify make or create a bash script in your repository root:

{
  "cmd": "./lambci-test.sh"
}

Overriding default properties

LambCI resolves configuration by overriding properties in a cascading manner in the following order:

  1. Default config (see below)
  2. global project key in lambci-config DynamoDB table
  3. gh/<user>/<repo> project key in lambci-config DynamoDB table
  4. lambci property in package.json file in repository root
  5. .lambci.js or .lambci.json file in repository root
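The cascade above amounts to a left-to-right deep merge, with later sources overriding earlier ones key by key. Here's a minimal sketch of that resolution – illustrative only, not LambCI's actual code (see utils/config.js for the real implementation):

```javascript
// Deep-merge two config objects: nested objects merge key by key,
// everything else (strings, booleans, arrays) is replaced outright.
function deepMerge(target, source) {
  const out = { ...target }
  for (const key of Object.keys(source || {})) {
    const a = out[key], b = source[key]
    out[key] = (a && b && typeof a === 'object' && typeof b === 'object' &&
      !Array.isArray(a) && !Array.isArray(b)) ? deepMerge(a, b) : b
  }
  return out
}

// Resolve in order: defaults < global (DB) < project (DB) < package.json < .lambci.js[on]
function resolveConfig(...sources) {
  return sources.reduce((acc, src) => deepMerge(acc, src), {})
}
```

So a project-level `notifications.slack.channel` overrides the global channel while leaving the other slack settings (username, icon) inherited from the defaults.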

You can use the command line to edit the DynamoDB config values:

lambci config secretEnv.GITHUB_TOKEN abcdef01234
lambci config --project gh/mhart/kinesalite secretEnv.SLACK_TOKEN abcdef01234

Or the AWS console:

Global config in DynamoDB

So if you wanted to use a different Slack token and channel for a particular project, you could create an item in the config table with the project key gh/<user>/<repo> that looks similar to the global config above, but with different values:

{
  project: 'gh/mhart/kinesalite',
  secretEnv: {
    SLACK_TOKEN: 'xoxb-1234243432-vnjcnioeiurn'
  },
  notifications: {
    slack: {
      channel: '#someotherchannel'
    }
  }
}

Using the command line:

lambci config --project gh/mhart/kinesalite secretEnv.SLACK_TOKEN xoxb-1234243432-vnjcnioeiurn
lambci config --project gh/mhart/kinesalite notifications.slack.channel '#someotherchannel'

Config file overrides

Here's an example package.json overriding the cmd property:

{
  "name": "some-project",
  "scripts": {
    "lambci-build": "eslint . && mocha"
  },
  "lambci": {
    "cmd": "npm ci && npm run lambci-build"
  }
}

And the same example using .lambci.js:

module.exports = {
  cmd: 'npm ci && npm run lambci-build'
}

The ability to override config properties using repository files depends on the allowConfigOverrides property (see the default config below).

Branch and pull request properties

Depending on whether LambCI is building a branch from a push or a pull request, config properties can also be overridden for those specific cases.

For example, to determine whether a build should even take place, LambCI looks at the top-level build property of the configuration. By default this is actually false, but if the branch is master, then LambCI checks for a branches.master property and if it's set, uses that instead:

{
  build: false,
  branches: {
    master: true
  }
}

If a branch just has a true value, this is the equivalent of {build: true}, so you can override other properties too – ie, the above snippet is just shorthand for:

{
  build: false,
  branches: {
    master: {
      build: true
    }
  }
}

So if you wanted Slack notifications to go to a different channel to the default for the develop branch, you could specify:

{
  branches: {
    master: true,
    develop: {
      build: true,
      notifications: {
        slack: {
          channel: '#dev'
        }
      }
    }
  }
}

You can also use regular expression syntax to specify config for branches that match, or don't match (if there is a leading !). Exact branch names are checked first, then the first matching regex (or negative regex) will be used:

// 1. Don't build gh-pages branch
// 2. Don't build branches starting with 'dev'
// 3. Build any branch that doesn't start with 'test-'
{
  build: false,
  branches: {
    '/^dev/': false,
    '!/^test-/': true,
    'gh-pages': false,
  }
}
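The lookup order described above – exact branch name first, then the first matching regex (or negated regex) in order – can be sketched like this. This is an illustration of the documented behavior, not LambCI's actual implementation:

```javascript
// Resolve the config value for a branch: exact names win, then the first
// matching '/regex/' key (or '!/regex/' negation) in insertion order.
function branchConfig(branches, branch) {
  if (Object.prototype.hasOwnProperty.call(branches, branch)) return branches[branch]
  for (const key of Object.keys(branches)) {
    const negated = key.startsWith('!')
    const pattern = negated ? key.slice(1) : key
    if (pattern.startsWith('/') && pattern.endsWith('/')) {
      const matches = new RegExp(pattern.slice(1, -1)).test(branch)
      if (negated ? !matches : matches) return branches[key]
    }
  }
  return undefined // no match: fall back to the top-level build setting
}
```

With the example config above, `gh-pages` resolves via its exact entry, `develop` hits `/^dev/`, any other branch not starting with `test-` hits the negated regex and builds, and `test-*` branches fall through to the top-level `build: false`.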

Default configuration

This configuration is hardcoded in utils/config.js and overridden by any config from the DB (and config files):

{
  cmd: 'npm ci && npm test',
  env: { // env values exposed to build commands
  },
  secretEnv: { // secret env values, exposure depends on inheritSecrets config below
    GITHUB_TOKEN: '',
    GITHUB_SECRET: '',
    SLACK_TOKEN: '',
  },
  s3Bucket: '', // bucket to store build artifacts
  notifications: {
    slack: {
      channel: '#general',
      username: 'LambCI',
      iconUrl: 'https://lambci.s3.amazonaws.com/assets/logo-48x48.png',
      asUser: false,
    },
  },
  build: false, // Build nothing by default except master and PRs
  branches: {
    master: true,
  },
  pullRequests: {
    fromSelfPublicRepo: true, // Pull requests from the same (public) repo will build
    fromSelfPrivateRepo: true, // Pull requests from the same (private) repo will build
    fromForkPublicRepo: { // Restrictions for pull requests from forks on public repos
      build: true,
      inheritSecrets: false, // Don't expose secretEnv values in the build command environment
      allowConfigOverrides: ['cmd', 'env'], // Only allow file config to override cmd and env properties
    },
    fromForkPrivateRepo: false, // Pull requests from forked private repos won't run at all
  },
  s3PublicSecretNames: true, // Use obscured names for build HTML files and make them public. Has no effect in public repositories
  inheritSecrets: true, // Expose secretEnv values in the build command environment by default
  allowConfigOverrides: true, // Allow files to override config values
  clearTmp: true, // Delete /tmp each time for safety
  git: {
    depth: 5, // --depth parameter for git clone
  },
}

SNS Notifications (for email, SMS, etc)

By default, the CloudFormation template doesn't create an SNS topic to publish build statuses (ie, success, failure) to – but if you want to receive build notifications via email or SMS, or some other custom SNS subscriber, you can specify an SNS topic and LambCI will push notifications to it:

notifications: {
  sns: {
    topicArn: 'arn:aws:sns:us-east-1:1234:lambci-StatusTopic-1WF8BT36'
  }
}

The Lambda function needs to have permissions to publish to this topic, which you can either add manually, or by modifying the CloudFormation template.yaml and updating your stack.

Add a top-level SNS topic resource (a commented-out example of this exists in template.yaml):

  StatusTopic:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: LambCI

And ensure the Lambda function has permissions to publish to it:

  BuildLambda:
    Type: AWS::Serverless::Function
    Properties:
      # ...
      Policies:
        # ...
        - SNSPublishMessagePolicy:
            TopicName: !Ref StatusTopic
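With the topic in place, a build status notification boils down to an SNS Publish call with the configured topicArn. The parameter shape below is what the AWS SDK's `sns.publish()` expects; the message format and the helper/field names are illustrative, not LambCI's actual payload:

```javascript
// Sketch: shaping an SNS publish call for a build status notification.
// TopicArn comes from the notifications.sns config; Subject/Message
// formats here are hypothetical examples, not LambCI's real ones.
function snsPublishParams(config, build) {
  return {
    TopicArn: config.notifications.sns.topicArn,
    Subject: `Build ${build.status}: ${build.project} #${build.buildNum}`,
    Message: `${build.project} build #${build.buildNum} on ${build.branch} ` +
      `${build.status}\n${build.logUrl}`,
  }
}
```

Anything subscribed to the topic (email, SMS, another Lambda) then receives the Subject/Message pair.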

Build status badges

Each branch has a build status image showing whether the last build was successful or not. For example, here is LambCI's latest master status (yes, LambCI dogfoods!):

LambCI Build Status

You can see the URLs for the branch log and badge image near the start of the output of your build logs (so you'll need to run at least one build on your branch to get these):

Branch log: https://<bucket>/<project>/branches/master/<somehash>.html
Branch status img: https://<bucket>/<project>/branches/master/<somehash>.svg

Updating

You can update your CloudFormation stack at any time to change, add or remove the parameters – or even upgrade to a new version of LambCI.

In the AWS Console, go to Services > CloudFormation, select your LambCI stack in the list and then choose Actions > Update Stack. You can keep the same template selected (unless you're updating LambCI), and then when you click Next you can modify parameters like your GitHub token, Slack channel, etc.

LambCI will do its best to update these parameters correctly, but if it fails or you run into trouble, just try setting them all to blank, updating, and then update again with the values you want.

If you've (only) modified template.yaml locally, then you'll need to run npm run template and use build/versioned.yaml to update your stack.

If you've modified other LambCI code locally, you can update with npm run deploy – this requires AWS SAM CLI to be installed.

Updating to 0.10.0 from earlier versions

Updating to 0.10.0 should Just Work™ using the new template – however GitHub shut down the use of SNS hooks, which is how LambCI was previously triggered, so you'll need to go through any repositories on GitHub that you had setup with previous LambCI versions, remove the SNS hook if it wasn't removed already (in Settings), and add the new webhook as laid out in Installation.

Security

The default configuration passes secret environment variables to build commands, except when building forked repositories. This allows you to use your AWS credentials and Git/Slack tokens in your build commands to communicate with the rest of your stack. Set inheritSecrets to false to prevent this.

HTML build logs are generated with random filenames, but are accessible to anyone who has the link. Set s3PublicSecretNames to false (only works for private repositories) to make build logs completely private (you'll need to use the AWS console to access them), or you can remove s3Bucket entirely – you can still see the build logs in the Lambda function output in CloudWatch Logs.

By default, the /tmp directory is removed each time – this is to prevent secrets from being leaked if your LambCI stack is building both private and public repositories. However, if you're only building private (trusted) repositories, then you can set the clearTmp config to false, and potentially cache files (eg, in $HOME) for use across builds (this is not guaranteed – it depends on whether the Lambda environment is kept "warm").

If you discover any security issues with LambCI please email [email protected].

Language Recipes

The default command is npm ci && npm test which will use Node.js 12.14.1 and npm 6.13.6.

The way to build with different Node.js versions, or other languages entirely, is just to override the cmd config property.

LambCI comes with a collection of helper scripts to setup your environment for languages not supported out of the box on AWS Lambda – that is, every language except Node.js and Python 3.6

Node.js

LambCI comes with nave installed and available on the PATH, so if you wanted to run your npm install and tests using Node.js v10.x, you could specify:

{
  "cmd": "nave use 10 bash -c 'npm ci && npm test'"
}

If you're happy using the built-in npm to install, you could simplify this a little:

{
  "cmd": "npm ci && nave use 10 npm test"
}

There's currently no way to run multiple builds in parallel but you could have processes run in parallel using a tool like npm-run-all – the logs will be a little messy though!

Here's an example package.json for running your tests in Node.js v8, v10 and v12 simultaneously:

{
  "lambci": {
    "cmd": "npm ci && npm run ci-all"
  },
  "scripts": {
    "ci-all": "run-p ci:*",
    "ci:node8": "nave use 8 npm test",
    "ci:node10": "nave use 10 npm test",
    "ci:node12": "nave use 12 npm test"
  },
  "devDependencies": {
    "npm-run-all": "*"
  }
}

Python

LambCI comes with pip installed and available on the PATH, and Lambda has Python 3.6 already installed. $HOME/.local/bin is also added to PATH, so local pip installs should work:

{
  "cmd": "pip install --user tox && tox"
}

Other Python versions with pyenv

LambCI comes with pyenv installed and a script you can source to setup the pyenv root and download prebuilt versions for you.

Call it with the Python version you want (currently: 3.8.0, 3.7.4, 3.6.9 or system, which will use the 3.6 version already installed on Lambda):

{
  "cmd": ". ~/init/python 3.8.0 && pip install --user tox && tox"
}

Java

The Java SDK is not installed on AWS Lambda, so needs to be downloaded as part of your build – but the JRE does exist on Lambda, so the overall impact is small.

LambCI includes a script you can source before running your build commands that will install and setup the SDK correctly, as well as Maven (v3.6.3). Call it with the OpenJDK version you want (currently only 1.8.0):

{
  "cmd": ". ~/init/java 1.8.0 && mvn install -B -V && mvn test"
}

You can see an example of this working here – and the resulting build log.

Go

Go is not installed on AWS Lambda, so needs to be downloaded as part of your build, but Go is quite small and well suited to running anywhere.

LambCI includes a script you can source before running your build commands that will install Go and set your GOROOT and GOPATH with the correct directory structure. Call it with the Go version you want (any of the versions on the Go site):

{
  "cmd": ". ~/init/go 1.13.5 && make test"
}

You can see examples of this working here – and the resulting build log.

Ruby

Ruby is not installed on AWS Lambda, so needs to be downloaded as part of your build.

LambCI includes a script you can source before running your build commands that will install Ruby, rbenv, gem and bundler. Call it with the Ruby version you want (currently: 2.7.0, 2.6.5, 2.5.7, 2.4.9, 2.3.8, 2.2.10, 2.1.10 or 2.0.0-p648):

{
  "cmd": ". ~/init/ruby 2.7.0 && bundle install && bundle exec rake"
}

You can see an example of this working here – and the resulting build log.

PHP

PHP is not installed on AWS Lambda, so needs to be downloaded as part of your build.

LambCI includes a script you can source before running your build commands that will install PHP, phpenv and composer. Call it with the PHP version you want (currently: 7.3.13, 7.2.26, 7.1.33, 7.0.32 or 5.6.38):

{
  "cmd": ". ~/init/php 7.3.13 && composer install -n --prefer-dist && vendor/bin/phpunit"
}

These versions are compiled using php-build with the default config options and overrides of --disable-cgi and --disable-fpm.

You can see an example of this working here – and the resulting build log.

Extending with ECS

LambCI can run tasks on an ECS cluster, which means you can perform all of your build tasks in a Docker container and not be subject to the same restrictions you have in the Lambda environment.

This needs to be documented further – for now you'll have to go off the source and check out the lambci/ecs repo.

Questions

What does the Lambda function do?

  1. Receives notification from GitHub (via a webhook)
  2. Looks up config in DynamoDB
  3. Clones git repo using a bundled git binary
  4. Looks up config files in repo
  5. Runs install and build cmds on Lambda (or starts ECS task)
  6. Updates Slack and GitHub statuses along the way (optionally SNS for email, etc)
  7. Uploads build logs/statuses to S3
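The steps above can be sketched as an async pipeline. All helper names here are hypothetical stand-ins for what the function does at each stage, not LambCI's real module layout:

```javascript
// The build flow above as an async pipeline with injectable step helpers
// (hypothetical names), which also makes the flow easy to exercise with stubs.
async function handleWebhook(event, steps) {
  const build = steps.parseGithubEvent(event)        // 1. webhook payload
  const config = await steps.lookupConfig(build)     // 2. DynamoDB config + repo config files
  if (!config.build) return { skipped: true }
  await steps.cloneRepo(build, config)               // 3. bundled git binary
  await steps.notifyStart(build)                     // 6. "pending" statuses on Slack/GitHub
  const result = await steps.runCommand(config.cmd)  // 5. run build cmd (or start ECS task)
  await steps.uploadLogs(build, result)              // 7. logs/statuses to S3
  await steps.notifyFinish(build, result)            // 6. final statuses
  return result
}
```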

License

MIT

docker-lambda's People

Contributors

adamlc, adamlewisgmsl, alanjds, aripalo, austinlparker, billyshambrook, bshackelford, chrisoverzero, dlahn, endemics, gliptak, hsbt, jackmcguire1, jfuss, justinmchase, kamilsamaj-accolade, koxudaxi, mhart, ndobryanskyy, ojongerius, patrickhousley, recumbent, rmax, sanathkr, smon, sriram-mv, timoschilling, tmo-trustpilot, wapmesquita, wsee


docker-lambda's Issues

Support for dotnet?

I use C# for my Lambda functions and want to test them locally. AWS points here, saying we can test Lambda locally with this Docker image. Could you please add support for C#/.NET too?

Unable to use yum command in lambci/lambda-base

I've created a Dockerfile built from lambci/lambda-base so I can add some custom commands to speed up developer workflow.

We'd like to install git on the image, but when I run:

yum install git

I get:

http://packages.us-east-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.us-west-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.us-west-2.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.eu-west-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.eu-central-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.ap-southeast-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.ap-northeast-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.sa-east-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.ap-southeast-2.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.

One of the configured repositories failed (amzn-main-Base),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:

 1. Contact the upstream for the repository and get them to fix the problem.

 2. Reconfigure the baseurl/etc. for the repository, to point to a working
    upstream. This is most often useful if you are using a newer
    distribution release than is supported by the repository (and the
    packages for the previous distribution release still work).

 3. Disable the repository, so yum won't use it by default. Yum will then
    just ignore the repository until you permanently enable it again or use
    --enablerepo for temporary usage:

        yum-config-manager --disable amzn-main

 4. Configure the failing repository to be skipped, if it is unavailable.
    Note that yum will try to contact the repo. when it runs most commands,
    so will have to try and fail each time (and thus. yum will be be much
    slower). If it is a very temporary problem though, this is often a nice
    compromise:

        yum-config-manager --save --setopt=amzn-main.skip_if_unavailable=true

failure: repodata/repomd.xml from amzn-main: [Errno 256] No more mirrors to try.

yum-config-manager is not available.

Thanks!

Image provides tmpfs on /dev/shm

The Lambda environment unfortunately does not have a tmpfs mounted on /dev/shm, but it is provided by this image.

I can manually fix this by running the container with --privileged, reinstalling util-linux (because /bin/mount is missing) and unmounting /dev/shm.

Python's multiprocessing module uses /dev/shm extensively and does not work properly in AWS Lambda; this behavior is not fully replicated in this Docker image.

See issue on AWS forums.

However, this still runs on docker-lambda, but not on AWS Lambda:

from multiprocessing import Pool

def f(x):
    return x*x
    
p = Pool(5)
print(p.map(f, [1, 2, 3]))
[Errno 38] Function not implemented: OSError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 9, in lambda_handler
    p = Pool(5)
  File "/usr/lib64/python2.7/multiprocessing/__init__.py", line 232, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild)
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 138, in __init__
    self._setup_queues()
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 234, in _setup_queues
    self._inqueue = SimpleQueue()
  File "/usr/lib64/python2.7/multiprocessing/queues.py", line 354, in __init__
    self._rlock = Lock()
  File "/usr/lib64/python2.7/multiprocessing/synchronize.py", line 147, in __init__
    SemLock.__init__(self, SEMAPHORE, 1, 1)
  File "/usr/lib64/python2.7/multiprocessing/synchronize.py", line 75, in __init__
    sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
OSError: [Errno 38] Function not implemented

Remove default AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env vars

First of all thanks for this project. Pretty useful to have :)

Would it be possible to remove the default AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env vars from the images, which are, for example, defined here:

_GLOBAL_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID', 'SOME_ACCESS_KEY_ID')

I'd like to check/give feedback from the Lambda I'm running if these are set, and error if they aren't, but currently can't do so because these defaults are there.
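In the meantime, a handler can guard against the placeholder itself. A minimal sketch – the placeholder value is taken from the snippet above, and the function name is illustrative, not a docker-lambda API:

```javascript
// Sketch: detect docker-lambda's placeholder credentials from inside a
// handler. 'SOME_ACCESS_KEY_ID' is the default shown in the snippet above;
// this is an illustrative workaround, not an official docker-lambda check.
function hasRealAwsCredentials(env) {
  return Boolean(env.AWS_ACCESS_KEY_ID && env.AWS_SECRET_ACCESS_KEY) &&
    env.AWS_ACCESS_KEY_ID !== 'SOME_ACCESS_KEY_ID'
}
```

A handler could call `hasRealAwsCredentials(process.env)` on startup and fail fast with a clear error instead of making doomed AWS calls.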

Error when running lambci/lambda:python3.6

Running this command docker run -v "$PWD":/var/task lambci/lambda:python3.6 with the file in examples/python/lambda_function.py, I got this error:

$ docker run -v "$PWD":/var/task lambci/lambda:python3.6

START RequestId: 5218ac6f-6b85-475c-a8e1-0574ab7f1509 Version: $LATEST
Traceback (most recent call last):
  File "/var/runtime/awslambda/bootstrap.py", line 514, in <module>
    main()
  File "/var/runtime/awslambda/bootstrap.py", line 503, in main
    init_handler, request_handler = _get_handlers(handler, mode)
  File "/var/runtime/awslambda/bootstrap.py", line 29, in _get_handlers
    lambda_runtime.report_user_init_start()
AttributeError: module 'runtime' has no attribute 'report_user_init_start'

Do you have any idea?

_sqlite3 error

repro:
docker run -ti lambci/lambda:build-python3.6 bash
bash-4.2# pip3 install nltk
.....
Successfully installed nltk-3.2.2

bash-4.2# python -c "import nltk"

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/var/lang/lib/python3.6/site-packages/nltk/__init__.py", line 137, in <module>
    from nltk.stem import *
  File "/var/lang/lib/python3.6/site-packages/nltk/stem/__init__.py", line 29, in <module>
    from nltk.stem.snowball import SnowballStemmer
  File "/var/lang/lib/python3.6/site-packages/nltk/stem/snowball.py", line 24, in <module>
    from nltk.corpus import stopwords
  File "/var/lang/lib/python3.6/site-packages/nltk/corpus/__init__.py", line 66, in <module>
    from nltk.corpus.reader import *
  File "/var/lang/lib/python3.6/site-packages/nltk/corpus/reader/__init__.py", line 105, in <module>
    from nltk.corpus.reader.panlex_lite import *
  File "/var/lang/lib/python3.6/site-packages/nltk/corpus/reader/panlex_lite.py", line 15, in <module>
    import sqlite3
  File "/var/lang/lib/python3.6/sqlite3/__init__.py", line 23, in <module>
    from sqlite3.dbapi2 import *
  File "/var/lang/lib/python3.6/sqlite3/dbapi2.py", line 27, in <module>
    from _sqlite3 import *
ModuleNotFoundError: No module named '_sqlite3'

I found this version in the container, but nothing for Python 3.6 or 3.4:
/usr/lib64/python2.7/lib-dynload/_sqlite3.so

I have installed sqlite-devel (yum install sqlite-devel) before rebuilding python but still no luck.

I am out of ideas now.

access external service such as Localstack and Elasticsearch

I created a Lambda function using the LocalStack Lambda service and triggered it using docker-lambda.

My Lambda is supposed to save objects into the LocalStack S3 service, which is in another container, but I always get this error message. I wonder if anyone could help me fix it.

err: 'UnknownEndpoint: Inaccessible host: test.localstack\'. This service may not be available in the us-east-1' region.\n

triggered Lambda by using:

docker run -d --link localstack:localstack --network mynetwork -v "/tmp/localstack/zipfile.283766df":/var/task "lambci/lambda:nodejs6.10" "test.handler"

My docker-compose file looks like the following:

elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.2.1
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - mynetwork

lambci:
  image: lambci/lambda:nodejs6.10
  networks:
    - mynetwork

localstack:
  image: localstack/localstack
  ports:
    - "4567-4582:4567-4582"
    - "8080:8080"
  environment:
    - DEFAULT_REGION=us-west-2
    - SERVICES=${SERVICES-lambda, kinesis, s3}
    - DEBUG=1
    - DATA_DIR=${DATA_DIR- }
    - LAMBDA_EXECUTOR=docker
    - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
    - DOCKER_HOST=unix:///var/run/docker.sock
  volumes:
    - "/tmp/localstack:/tmp/localstack"
    - "/var/run/docker.sock:/var/run/docker.sock"
  networks:
    - mynetwork

networks:
  mynetwork:
    driver: bridge

Triggers to run Lambda Function

Hello,

I'm trying to find a solution to run a Lambda function locally based on a "Dynamo Stream" trigger. I've looked at the SAM Local work, but that only allows one-off executions of a function (via the invoke command).

This docker environment looks ideal, but I don't think there is scope here to define a trigger. Am I right? Is there a way of achieving this locally anyone can think of?

how to mount file credentials from host to container?

Hi,

Can I mount my $HOME/.aws into the Docker container to share my AWS config/credentials, and have my code look like this:

console.log('starting lambda')

var AWS = require("aws-sdk");
AWS.config.update({region: 'us-west-2' });


if (process.env.IN_DOCKER_LAMBDA) {
  var credentials        = new AWS.SharedIniFileCredentials({profile: 'myprofile'});
  AWS.config.credentials = credentials;
  AWS.config.update({region: 'us-west-2' });
}

In this case docker-lambda will load the credentials from the shared ini file, while in real AWS Lambda it will retrieve credentials from the instance metadata, so I don't have to hardcode my credentials in the code.

Any ideas?

npm install

Hello, I want to create a lambda function that includes some executables installed via npm, with:

npm install accesslint-cli

If I install this on my Mac, the node_modules folder will contain the node modules, but the paths reference my machine (/Users/jaime/code...).

Can docker-lambda be used to generate this node_modules folder correctly for a Lambda function environment?

Thanks!

identical to lambda?

I figured that if I could run certain commands inside a docker container based on docker-lambda, I must also be able to run these commands on lambda itself. This does not seem to be the case for the following:

This works (docker):

docker run -v "$PWD":/var/task -it lambci/lambda:build bash
easy_install pip
pip install -U certbot

This does not work (lambda):

./lambdash easy_install pip && pip install -U certbot

Results in /bin/sh: easy_install: command not found, while it works just fine with docker-lambda.

Python3.6 image version of awscli doesn't work

[:~] $ sudo docker run --rm -it lambci/lambda:build-python3.6 aws
[sudo] password for dschep: 
Traceback (most recent call last):
  File "/usr/bin/aws", line 19, in <module>
    import awscli.clidriver
  File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 32, in <module>
    from awscli.help import ProviderHelpCommand
  File "/usr/lib/python2.7/dist-packages/awscli/help.py", line 20, in <module>
    from docutils.core import publish_string
  File "/var/runtime/docutils/core.py", line 246
    print('\n::: Runtime settings:', file=self._stderr)
                                         ^
SyntaxError: invalid syntax
[:~] $ sudo docker run --rm -it lambci/lambda:build-python2.7 aws
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: too few arguments

My workaround for now is to remove the existing entrypoint at /usr/bin/aws and reinstall with pip3:

[:~] $ sudo docker run --rm -it lambci/lambda:build-python3.6 bash -c "rm /usr/bin/aws && pip3 install awscli > /dev/null && aws"
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: the following arguments are required: command

nodejs shim is missing reportException() method

I am getting TypeError: awslambda.reportException is not a function when returning a non-null error through the callback. I suspect the nodejs shim is missing the reportException() function.

I can repro on both nodejs4.3 and nodejs6.10.

index.js file

exports.handler = function(event, context, callback) {
    return callback('error')
}

docker output

docker run -v "$PWD":/var/task lambci/lambda:nodejs6.10

START RequestId: fcf8ab72-c8b0-133b-2fc4-225a8173b1fe Version: $LATEST
2017-07-01T06:29:11.741Z	fcf8ab72-c8b0-133b-2fc4-225a8173b1fe	{"errorMessage":"error"}
2017-07-01T06:29:11.745Z	fcf8ab72-c8b0-133b-2fc4-225a8173b1fe	TypeError: awslambda.reportException is not a function

pkg-config prefix is incorrect

I'm using the lambci/lambda:build-python3.6 image to build a Python C module. The prefix value in the python-3.6 pkg-config file is incorrect.

The current value is:

/local/p4clients/pkgbuild-cuFpW/workspace/build/LambdaLangPython36/LambdaLangPython36-x.21.4/AL2012/DEV.STD.PTHREAD/build

It should be /var/lang.

Adding the following to my Dockerfile corrects the issue:

sed -i '/^prefix=/c\prefix=/var/lang' /var/lang/lib/pkgconfig/python-3.6.pc

Here is the full file for reference: /var/lang/lib/pkgconfig/python-3.6.pc

# See: man pkg-config
prefix=/local/p4clients/pkgbuild-cuFpW/workspace/build/LambdaLangPython36/LambdaLangPython36-x.21.4/AL2012/DEV.STD.PTHREAD/build
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: Python
Description: Python library
Requires:
Version: 3.6
Libs.private: -lpthread -ldl  -lutil -lrt
Libs: -L${libdir} -lpython3.6m
Cflags: -I${includedir}/python3.6m
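The sed one-liner above can be sanity-checked outside the image against a throwaway copy of the .pc file; a minimal sketch (paths and file contents abbreviated for illustration):

```shell
# Write a sample pkg-config file carrying the broken build-time prefix...
cat > /tmp/python-3.6.pc <<'EOF'
prefix=/local/p4clients/pkgbuild-cuFpW/workspace/build
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
EOF

# ...then rewrite the prefix line in place, exactly as in the Dockerfile fix
sed -i '/^prefix=/c\prefix=/var/lang' /tmp/python-3.6.pc

grep '^prefix=' /tmp/python-3.6.pc   # prefix=/var/lang
```

Since ${exec_prefix} and ${libdir} are defined relative to ${prefix}, fixing the one line fixes the derived paths too.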

CI: Invoking Lambda functions from docker image script

I am using GitLab CI to test my code and have been able to make a container that uses your docker image. How do I invoke my functions from a docker image? I haven't quite been able to figure that out.

This is what I have so far:

image: lambci/lambda:build

variables:
  AWS_DEFAULT_REGION: eu-west-1
  AWS_ACCESS_KEY_ID: YOUR_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: YOUR_SECRET_ACCESS_KEY

cache:
  paths:
    - node_modules/

stages:
  - build

build_step:
  stage: build
  only:
    - /^feature\/.*$/
    - develop
    - master
  script:
    - npm install
    - npm run lint
    - docker run -v "$PWD":/var/task lambci/lambda

When I run this I just get issues finding the docker daemon ('Cannot connect to the Docker daemon. Is the docker daemon running on this host?'). I've also tried using the docker-lambda npm package and that gives me similar issues. Is it something I'm doing, or a problem with GitLab CI?

Thanks!

Add dependencies for python

How do you add dependencies for python from pip?

For example, for lambda, I can do pip install ... -t lambda and my imports are included in the package and all resolve. This doesn't seem to work with docker-lambda.

Python package build not working in build-python3.6

Repro:

 docker run lambci/lambda:build-python3.6 pip3 install cryptography

Fails with:

unable to execute 'x86_64-unknown-linux-gnu-gcc': No such file or directory

Full output:

Collecting cryptography
  Downloading cryptography-1.8.1.tar.gz (423kB)
Collecting idna>=2.1 (from cryptography)
  Downloading idna-2.5-py2.py3-none-any.whl (55kB)
Collecting asn1crypto>=0.21.0 (from cryptography)
  Downloading asn1crypto-0.22.0-py2.py3-none-any.whl (97kB)
Collecting packaging (from cryptography)
  Downloading packaging-16.8-py2.py3-none-any.whl
Requirement already satisfied: six>=1.4.1 in /var/runtime (from cryptography)
Requirement already satisfied: setuptools>=11.3 in /var/lang/lib/python3.6/site-packages (from cryptography)
Collecting cffi>=1.4.1 (from cryptography)
  Downloading cffi-1.10.0-cp36-cp36m-manylinux1_x86_64.whl (406kB)
Collecting pyparsing (from packaging->cryptography)
  Downloading pyparsing-2.2.0-py2.py3-none-any.whl (56kB)
Collecting pycparser (from cffi>=1.4.1->cryptography)
  Downloading pycparser-2.17.tar.gz (231kB)
Installing collected packages: idna, asn1crypto, pyparsing, packaging, pycparser, cffi, cryptography
  Running setup.py install for pycparser: started
    Running setup.py install for pycparser: finished with status 'done'
  Running setup.py install for cryptography: started
    Running setup.py install for cryptography: finished with status 'error'
    Complete output from command /var/lang//bin/python3.6 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-77kq7rsi/cryptography/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-fcct1gg2-record/install-record.txt --single-version-externally-managed --compile:
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.6
    creating build/lib.linux-x86_64-3.6/cryptography
    copying src/cryptography/utils.py -> build/lib.linux-x86_64-3.6/cryptography
    copying src/cryptography/__init__.py -> build/lib.linux-x86_64-3.6/cryptography
    copying src/cryptography/fernet.py -> build/lib.linux-x86_64-3.6/cryptography
    copying src/cryptography/__about__.py -> build/lib.linux-x86_64-3.6/cryptography
    copying src/cryptography/exceptions.py -> build/lib.linux-x86_64-3.6/cryptography
    creating build/lib.linux-x86_64-3.6/cryptography/x509
    copying src/cryptography/x509/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/x509
    copying src/cryptography/x509/extensions.py -> build/lib.linux-x86_64-3.6/cryptography/x509
    copying src/cryptography/x509/general_name.py -> build/lib.linux-x86_64-3.6/cryptography/x509
    copying src/cryptography/x509/oid.py -> build/lib.linux-x86_64-3.6/cryptography/x509
    copying src/cryptography/x509/name.py -> build/lib.linux-x86_64-3.6/cryptography/x509
    copying src/cryptography/x509/base.py -> build/lib.linux-x86_64-3.6/cryptography/x509
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat
    copying src/cryptography/hazmat/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/padding.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/hmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/hashes.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/keywrap.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/serialization.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/constant_time.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/cmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/backends
    copying src/cryptography/hazmat/backends/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends
    copying src/cryptography/hazmat/backends/interfaces.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends
    copying src/cryptography/hazmat/backends/multibackend.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings
    copying src/cryptography/hazmat/bindings/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
    copying src/cryptography/hazmat/primitives/twofactor/utils.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
    copying src/cryptography/hazmat/primitives/twofactor/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
    copying src/cryptography/hazmat/primitives/twofactor/totp.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
    copying src/cryptography/hazmat/primitives/twofactor/hotp.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/interfaces
    copying src/cryptography/hazmat/primitives/interfaces/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/interfaces
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/utils.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/padding.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/ec.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/dh.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/rsa.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/dsa.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
    copying src/cryptography/hazmat/primitives/ciphers/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
    copying src/cryptography/hazmat/primitives/ciphers/modes.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
    copying src/cryptography/hazmat/primitives/ciphers/algorithms.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
    copying src/cryptography/hazmat/primitives/ciphers/base.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/x963kdf.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/scrypt.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/kbkdf.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/hkdf.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/concatkdf.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/pbkdf2.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
    copying src/cryptography/hazmat/backends/commoncrypto/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
    copying src/cryptography/hazmat/backends/commoncrypto/hmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
    copying src/cryptography/hazmat/backends/commoncrypto/hashes.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
    copying src/cryptography/hazmat/backends/commoncrypto/backend.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
    copying src/cryptography/hazmat/backends/commoncrypto/ciphers.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/utils.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/hmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/hashes.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/x509.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/ec.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/dh.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/backend.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/rsa.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/decode_asn1.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/encode_asn1.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/dsa.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/cmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/ciphers.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/commoncrypto
    copying src/cryptography/hazmat/bindings/commoncrypto/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/commoncrypto
    copying src/cryptography/hazmat/bindings/commoncrypto/binding.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/commoncrypto
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/openssl
    copying src/cryptography/hazmat/bindings/openssl/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/openssl
    copying src/cryptography/hazmat/bindings/openssl/binding.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/openssl
    copying src/cryptography/hazmat/bindings/openssl/_conditional.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/openssl
    running egg_info
    writing src/cryptography.egg-info/PKG-INFO
    writing dependency_links to src/cryptography.egg-info/dependency_links.txt
    writing entry points to src/cryptography.egg-info/entry_points.txt
    writing requirements to src/cryptography.egg-info/requires.txt
    writing top-level names to src/cryptography.egg-info/top_level.txt
    warning: manifest_maker: standard file '-c' not found
    
    reading manifest file 'src/cryptography.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    no previously-included directories found matching 'docs/_build'
    warning: no previously-included files matching '*' found under directory 'vectors'
    writing manifest file 'src/cryptography.egg-info/SOURCES.txt'
    running build_ext
    generating cffi module 'build/temp.linux-x86_64-3.6/_padding.c'
    creating build/temp.linux-x86_64-3.6
    generating cffi module 'build/temp.linux-x86_64-3.6/_constant_time.c'
    generating cffi module 'build/temp.linux-x86_64-3.6/_openssl.c'
    building '_openssl' extension
    creating build/temp.linux-x86_64-3.6/build
    creating build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6
    x86_64-unknown-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/local/p4clients/pkgbuild-nX_sd/workspace/build/LambdaLangPython36/LambdaLangPython36-x.8.1/AL2012/DEV.STD.PTHREAD/build/private/tmp/brazil-path/build.libfarm/include -I/local/p4clients/pkgbuild-nX_sd/workspace/build/LambdaLangPython36/LambdaLangPython36-x.8.1/AL2012/DEV.STD.PTHREAD/build/private/tmp/brazil-path/build.libfarm/include -fPIC -I/var/lang/include/python3.6m -c build/temp.linux-x86_64-3.6/_openssl.c -o build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6/_openssl.o
    unable to execute 'x86_64-unknown-linux-gnu-gcc': No such file or directory
    error: command 'x86_64-unknown-linux-gnu-gcc' failed with exit status 1
    
    ----------------------------------------
Command "/var/lang//bin/python3.6 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-77kq7rsi/cryptography/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-fcct1gg2-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-77kq7rsi/cryptography/
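The x86_64-unknown-linux-gnu-gcc name is not a package you are missing on PATH; it is the CC value baked into the image's Python at build time, which distutils/cffi then reuse when compiling C extensions such as cryptography's _openssl module. You can inspect what a given interpreter will try to invoke (a general sketch; run inside the image it would presumably print the unknown-linux-gnu name):

```shell
# Print the compiler command recorded in Python's build configuration;
# this is what setup.py falls back to when CC is not overridden.
python3 -c "import sysconfig; print(sysconfig.get_config_var('CC'))"
```

A commonly suggested workaround (an assumption, not verified against every image version) is to override it for the install, e.g. CC=gcc pip3 install cryptography.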

Support passing in env vars as options.

Now that Lambda supports environment variables, it would be good to be able to pass those into the container. For example:

var dockerLambda = require('docker-lambda')

// Spawns synchronously, uses current dir – will throw if it fails
var lambdaCallbackResult = dockerLambda({
  event: {some: 'event'},
  userEnvVars: { // or a different name ? 
    MY_ENV_VAR: 'foo-bar'
  }
})

Happy to submit a PR if you'd like one.

Java test runner is not complete

Trying to run a basic hello world python lambda:

docker run -v "$PWD":/var/task lambci/lambda:python2.7

yields:

recv_start
Traceback (most recent call last):
  File "/var/runtime/awslambda/bootstrap.py", line 364, in <module>
    main()
  File "/var/runtime/awslambda/bootstrap.py", line 344, in main
    (invokeid, mode, handler, suppress_init, credentials) = wait_for_start(int(ctrl_sock))
  File "/var/runtime/awslambda/bootstrap.py", line 135, in wait_for_start
    (invokeid, mode, handler, suppress_init, credentials) = lambda_runtime.recv_start(ctrl_sock)
  File "/var/runtime/awslambda/runtime.py", line 13, in recv_start
    return (invokeid, mode, handler, suppress_init, credentials)
NameError: global name 'invokeid' is not defined

AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not being overridden by dotenv

It's quite a weird issue; anyway, I can't figure out why it behaves like that.

I've developed a small lambda function, which I would like to be able to test locally first.
The main goal of the lambda function is to fetch and handle messages from AWS SQS.

While I'm running that function with the help of this docker image (lambci/lambda), nothing happens; it waits for 10+ seconds and then stops :(

$ docker run -v "$PWD/dist":/var/task lambci/lambda
START RequestId: 915db92a-f5db-11ca-e67e-d25072a4290a Version: $LATEST
END RequestId: 915db92a-f5db-11ca-e67e-d25072a4290a
REPORT RequestId: 915db92a-f5db-11ca-e67e-d25072a4290a	Duration: 11232.60 ms	Billed Duration: 11300 ms	Memory Size: 1536 MB	Max Memory Used: 37 MB
null%                                                                                                                                                                

I'm using the dotenv package to load some environment-specific data to be able to connect to a specific queue etc., and it looks like the .env file is loaded fine (I can see almost all variables from it), but these two variables can't be overridden somehow, and I still see your image's default values:

AWS_ACCESS_KEY_ID: 'SOME_ACCESS_KEY_ID',
AWS_SECRET_ACCESS_KEY: 'SOME_SECRET_ACCESS_KEY',

Why so?

P.S. It looks like because of that my function is not able to connect to AWS SQS.
P.P.S. Meanwhile, when I'm using this package, everything works well.

Java support?

Should it be possible to support Java-based lambdas with this?

using docker-lambda for local function development

I'm trying to set up a basic boilerplate that would simplify getting started with developing functions locally and deploying them to AWS, using the excellent work put in here.

The idea is to use docker compose to start up the container, but with the entry point wrapped in a nodemon call so that the function continually re-runs when code is changed. Then, when a user is done developing, they can sh into the container and run zip / aws commands to deploy, or those commands could be part of npm scripts.

I'm facing an issue with differences between the two images, lambci/lambda and lambci/lambda:build. Using the first image I was able to get this proof of concept working:

-dockerfile-
FROM lambci/lambda

ENV HOME=/home/sbx_user1051

USER root

# create home directory for the user to make sure some node packages work
RUN mkdir -p /home/sbx_user1051 && chown -R sbx_user1051:495 /home/sbx_user1051

ADD . .

RUN npm install

USER sbx_user1051

# nodemon is defined as a devDependency in package.json 
ENTRYPOINT ./node_modules/.bin/nodemon --exec "node --max-old-space-size=1229 --max-semi-space-size=76 --max-executable-size=153 --expose-gc /var/runtime/node_modules/awslambda/index.js $HANDLER $EVENT"

-docker-compose-
version: '2'
services:
  app:
    build: "."
    environment: 
      HANDLER: "index.handler"
      EVENT: "'{\"email\": \"[email protected]\", \"id\": \"30\"}'"
    volumes:
    - ".:/var/task/"
    - "/var/task/node_modules"

The issue is that if I connect to the container using docker exec, none of the extra installed packages are available in /usr/bin (aws, zip). If I use lambci/lambda:build then those packages are available, but the Dockerfile is really complex and is basically just a clone of lambci/lambda; I would have to fork the repo to get it to work.

I can't really tell from the repo how the base image for lambci/lambda:build is generated, so I'm not sure what the difference between these two images is, and I'm not an adequate linux admin either (teehee). Any guidance on how to pull this off correctly would be appreciated, and if any work comes out of this on my end that you want, I'd certainly PR it back into this repo on your terms.

here's the second Dockerfile in case you wanted to see it (uses the same compose)

# basically a copy of lambci/lambda
FROM lambci/lambda:build

ENV PATH=$PATH:/usr/local/lib64/node-v4.3.x/bin:/usr/local/bin:/usr/bin/:/bin \
    LAMBDA_TASK_ROOT=/var/task \
    LAMBDA_RUNTIME_DIR=/var/runtime \
    LANG=en_US.UTF-8

ADD awslambda-mock.js /var/runtime/node_modules/awslambda/build/Release/awslambda.js

# Not sure why permissions don't work just by modifying the owner
RUN rm -rf /tmp && mkdir /tmp && chown -R sbx_user1051:495 /tmp && chmod 700 /tmp

# create home directory for the user to make sure some node packages work
RUN mkdir -p /home/sbx_user1051 && chown -R sbx_user1051:495 /home/sbx_user1051

WORKDIR /var/task

# install nodemon globally
RUN npm install -g nodemon

ADD . .

RUN npm install

USER sbx_user1051

ENTRYPOINT nodemon --exec "node --max-old-space-size=1229 --max-semi-space-size=76 --max-executable-size=153 --expose-gc /var/runtime/node_modules/awslambda/index.js $HANDLER $EVENT"

Running hooked up to a local kinesis stream

I have a local kinesis stream running in docker for testing purposes. I want to make a lambda function that is called when events come through that stream.

From looking at your code here, it seems I could pretty easily build a little harness on top of your docker image that hooks up to Kinesis and then forwards messages into my Lambda function using your library. Does that sound right?

Do you know if there is already a tool to help with this? I don't want to re-invent the wheel here if I can avoid it.

unable to pass EVENT BODY when running python2.7

Unable to run the example Python lambda function with any of the following:

docker run -v "$PWD":/var/task lambci/lambda:python2.7 -e AWS_LAMBDA_EVENT_BODY='{}'
docker run -v "$PWD":/var/task lambci/lambda:python2.7 -e AWS_LAMBDA_EVENT_BODY={}
docker run -v "$PWD":/var/task lambci/lambda:python2.7 -e AWS_LAMBDA_EVENT_BODY '{}'
docker run -v "$PWD":/var/task lambci/lambda:python2.7 -e AWS_LAMBDA_EVENT_BODY {}

Fails with

START RequestId: b2d49b12-52d6-4ad0-8b21-93faf7c48dec Version: $LATEST
Unable to parse input as json: No JSON object could be decoded
Traceback (most recent call last):
  File "/usr/lib64/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded

END RequestId: b2d49b12-52d6-4ad0-8b21-93faf7c48dec
REPORT RequestId: b2d49b12-52d6-4ad0-8b21-93faf7c48dec Duration: 0 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 14 MB
{"stackTrace": [["/usr/lib64/python2.7/json/__init__.py", 339, "loads", "return _default_decoder.decode(s)"], ["/usr/lib64/python2.7/json/decoder.py", 364, "decode", "obj, end = self.raw_decode(s, idx=_w(s, 0).end())"], ["/usr/lib64/python2.7/json/decoder.py", 382, "raw_decode", "raise ValueError(\"No JSON object could be decoded\")"]], "errorType": "ValueError", "errorMessage": "No JSON object could be decoded"}
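One thing worth noting when comparing these invocations: in docker run, options like -e belong before the image name; anything after lambci/lambda:python2.7 is passed to the container as its command, so the event body here likely never reaches the runtime as intended. Independently of flag placement, the body must be valid JSON; a quick pre-flight check on the host (assumes python3 is available, and the variable name is only illustrative):

```shell
# Validate the event body before handing it to the container via AWS_LAMBDA_EVENT_BODY
EVENT='{}'
if echo "$EVENT" | python3 -c 'import json, sys; json.load(sys.stdin)'; then
  echo "valid JSON"
else
  echo "invalid JSON" >&2
fi
```

With an invalid body (or an empty one, as happens when the flag is misplaced), the Python 2.7 runtime raises exactly the "No JSON object could be decoded" error shown above.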

Question: How would i use this docker-lambda to build code?

I am fairly new to docker, hence the noob question. I have installed docker and am able to execute a basic lambda, which runs and exits promptly as expected.

  1. How would I bundle a bunch of code (C, JS, C++) into this docker image to build it and get zip artifacts back out, so that I can deploy them to my real Lambdas? An example would be greatly appreciated.

  2. I tried to find out the gcc version in this docker image, and it reports it doesn't have gcc, nor does it have zip. How would I go about installing them? Or am I using the wrong image?

How I am running it (I have an index.js in the pwd):
sudo docker run -v "$PWD":/var/task lambci/lambda:nodejs6.10 (probably I need to run something other than :nodejs6.10)

How do you return a JSON result from a python handler?

In a node handler, you can return results with the passed in context.

exports.handler = function(event, context) {
  context.succeed({'Hello':'from handler'});
  return;
};

What is the equivalent in Python, so that I can evaluate the results coming back from a Python lambda call using dockerLambda? I cannot call context.succeed() on the context passed to a Python handler.

var lambdaCallbackResult = dockerLambda({
                dockerImage: "lambci/lambda:python2.7",
                event: {"some":"data"}});
console.log(lambdaCallbackResult);
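In case it helps: Python handlers have no context.succeed(); the handler's return value is the result, which the runtime serializes as JSON, so returning a dict is the equivalent of the Node example above. A minimal sketch, simulated here with a local interpreter outside Docker purely for illustration:

```shell
# A Python handler returns its result directly; no context method is needed
cat > /tmp/handler.py <<'EOF'
def handler(event, context):
    return {'Hello': 'from handler'}
EOF

# Simulate one invocation to show the value the runtime would serialize
python3 - <<'EOF'
import json, sys
sys.path.insert(0, '/tmp')
import handler
print(json.dumps(handler.handler({'some': 'data'}, None)))
EOF
```

The printed dict is what dockerLambda would hand back as lambdaCallbackResult.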

How to leverage caching?

I'd love to be able to cache compiled python wheels so we don't have to hit the network/recompile unnecessarily. My current command is as follows:

mkdir -m 777 -p ../.cache
docker run --rm \
    -v "$PWD/../.cache":/tmp/.cache \
    -v "$PWD":/var/task \
    lambci/lambda:build-python2.7 pip install -r requirements.txt --cache-dir /tmp/.cache -vv -t env

Unfortunately, I get the following error:

The directory '/tmp/.cache/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/tmp/.cache' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.

Any idea what I can do here to mount a cache directory from the host properly within the Docker container?
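pip disables its cache when the cache directory is not owned by the user running pip (root inside the build image, while the mounted directory belongs to the host user). One possible workaround, sketched as a dry run and not verified against these images, is to run the container as the host user so ownership matches:

```shell
# Sketch: pass -u so pip inside the container runs with the host
# user's uid/gid, matching the ownership of the mounted cache dir.
# Printed as a dry run so this snippet is self-contained.
cache_cmd=$(cat <<'EOF'
docker run --rm \
  -u "$(id -u):$(id -g)" \
  -v "$PWD/../.cache":/tmp/.cache \
  -v "$PWD":/var/task \
  lambci/lambda:build-python2.7 \
  pip install -r requirements.txt --cache-dir /tmp/.cache -t env
EOF
)
echo "$cache_cmd"
```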

Permissions

I currently have a lambda in production that reads and writes from /tmp.
Running `docker run -v "$PWD":/var/task lambci/lambda index.handler '{"event":"args"}'`
throws: EACCES: permission denied, open 'tmp/sample.pdf'
Is there an environment variable or something else I can do to change the permissions when running from this Docker instance? Thank you.

How to install python packages?

Hi, I was very impressed with your work here; so helpful! I was wondering how I would go about adding pip packages to these Docker containers. I can't seem to find documentation on it anywhere. I am using this package as part of the serverless-plugin-simulate plugin. I was also wondering what I would have to do to make this jive well with the serverless-python-requirements plugin. Thanks!
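One common pattern, sketched as a dry run under the assumption that the build-python3.6 image ships pip: install the requirements into the task directory so they are bundled next to the handler and importable at runtime.

```shell
# Sketch: pip's -t flag installs packages into a target directory;
# installing into /var/task puts them alongside the handler code.
# Printed as a dry run so this snippet is self-contained.
pip_cmd='docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.6 pip install -r requirements.txt -t .'
echo "$pip_cmd"
```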

How to modify max memory while running docker run?

I'm using the following command to run a lambda function as described in the docs.
docker run -v "$PWD":/var/task lambci/lambda index.myHandler '{"some": "event"}'

By default, it uses a max memory of 1536MB. I tried modifying the max memory with the following.
docker run -v "$PWD":/var/task lambci/lambda index.myHandler '{"some": "event"}' ['-m', '512M']

The output still shows a max memory of 1536MB. I would appreciate it if anyone could help me change the max memory.
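Two things to check, sketched below as a dry run: docker options such as -m must come before the image name, and (assuming these images honor it) the AWS_LAMBDA_FUNCTION_MEMORY_SIZE environment variable controls the memory size the runtime reports.

```shell
# Sketch: -m is a docker option, so it goes before the image name;
# AWS_LAMBDA_FUNCTION_MEMORY_SIZE is assumed to set the reported
# memory size. Printed as a dry run so this snippet is self-contained.
run_cmd=$(cat <<'EOF'
docker run --rm -m 512m \
  -e AWS_LAMBDA_FUNCTION_MEMORY_SIZE=512 \
  -v "$PWD":/var/task \
  lambci/lambda index.myHandler '{"some": "event"}'
EOF
)
echo "$run_cmd"
```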

Trying to install postgresql fails

Hi,

I try to use psycopg2 on python3.6, but I am still getting an error in Lambda.
The issue might be that there is no postgresql-devel installed in the Docker image.

But when I try to install it, I still get an error (regardless of whether I use yum -y update or not):

FROM lambci/lambda:build-python3.6

RUN yum -y update \
    && yum install -y yum-plugin-ovl \
    && yum install -y postgresql-devel

CMD ["bash"]

Error is (with update):

E: Failed to install umount
mkinitrd failed
warning: %posttrans(kernel-4.9.43-17.39.amzn1.x86_64) scriptlet failed, exit status 1
Non-fatal POSTTRANS scriptlet failure in rpm package kernel-4.9.43-17.39.amzn1.x86_64

or without update:

Rpmdb checksum is invalid: dCDPT(pkg checksums): postgresql92-libs.x86_64 0:9.2.22-1.61.amzn1 - u

Getting the return value out of the function?

Suppose my function is:

export function example (input, context, callback) {
  callback(null, { result: 'success' })
}

And I'm invoking it via:

let cmd = `docker run --rm -v "$PWD/build/${app}":/var/task lambci/lambda handler.example '{}'`
exec(cmd, (err, stdout, stderr) => {
  if (stderr && stderr !== 'null') console.log(`λ: (err)\n${stderr}`)
  if (stdout && stdout !== 'null') console.log(`λ: (out)\n${stdout}`)
  callback(err)
})

How can I get the value returned by the handler: { result: 'success' }?
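Since the container writes the handler's result to stdout, one approach is to parse the last non-empty line of stdout as JSON. A sketch, assuming the result is always the final line:

```javascript
// Sketch: the handler's result arrives as the last non-empty line of
// the container's stdout, so split on newlines and JSON-parse it.
function parseLambdaResult(stdout) {
  const lines = stdout.trim().split('\n').filter(Boolean);
  return JSON.parse(lines[lines.length - 1]);
}

// Example with output shaped like the container's logs:
const sample = 'START RequestId: abc Version: $LATEST\nEND RequestId: abc\n{"result":"success"}';
console.log(parseLambdaResult(sample).result); // success
```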

Provide event data via json file

It would be helpful to provide the event data via a file.

Current workaround:

docker run -v "$PWD":/var/task lambci/lambda index.handler "$(jq -M -c . event-create.json)"

No Python3 headers in build-python3.6 image

The build-python3.6 image seems to be missing headers for Python3:

$ sudo docker run lambci/lambda:build-python3.6 find / -iname '*python*.h'
/usr/include/python2.7/pythonrun.h
/usr/include/python2.7/Python-ast.h
/usr/include/python2.7/Python.h

Yum only shows packages for Python 3.4, not 3.6:

$ sudo docker run lambci/lambda:build-python3.6 yum search python3
============================= N/S matched: python3 =============================
mod24_wsgi-python34.x86_64 : A WSGI interface for Python web applications in
                           : Apache
postgresql92-plpython27.x86_64 : The Python3 procedural language for PostgreSQL
python34.x86_64 : Version 3.4 of the Python programming language aka Python 3000
python34-devel.x86_64 : Libraries and header files needed for Python 3.4
                      : development
python34-docs.noarch : Documentation for the Python programming language
python34-libs.i686 : Python 3.4 runtime libraries
python34-libs.x86_64 : Python 3.4 runtime libraries
python34-pip.noarch : A tool for installing and managing Python packages
python34-setuptools.noarch : Easily build and distribute Python packages
python34-test.x86_64 : The test modules from the main python 3.4 package
python34-tools.x86_64 : A collection of tools included with Python 3.4
python34-virtualenv.noarch : Tool to create isolated Python environments

  Name and summary matches only, use "search all" for everything.

Connecting to DynamoDB

I was doing some prototyping with AWS Lambda and successfully ran the code within the Docker container. However, when I wanted to extend the Lambda functionality to connect to another Docker container for DynamoDB, it didn't seem to work.

This is what I've done:

docker run -d --name dynamodb deangiberson/aws-dynamodb-local
docker run --links dynamodb:dynamodb -v "$PWD":/var/task lambci/lambda index.handler

But when it attempts to connect, this is what it says:

{"errorMessage":"connect ECONNREFUSED 127.0.0.1:8000","errorType":"NetworkingError","stackTrace":["Object.exports._errnoException (util.js:870:11)","exports._exceptionWithHostPort (util.js:893:20)","TCPConnectWrap.afterConnect [as oncomplete] (net.js:1062:14)"]}

I'm running on Docker 1.13.1 (Docker for Mac)

Anyone else had this issue?

Thanks!

Allow to invoke the lambda function multiple times

In AWS Lambda containers are not destroyed after each execution.

In my scenario (tests), I need to invoke a function multiple times. It would be much faster if you didn't need to recreate the whole container, including the Node.js process, before each invocation.

Additionally, this can catch potential production issues as it will be closer to the way AWS Lambda works.
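Later versions of these images advertise a stay-open mode; a sketch assuming that flag and the default port 9001 (the function name is arbitrary), printed as a dry run:

```shell
# Sketch, assuming DOCKER_LAMBDA_STAY_OPEN is supported: the container
# keeps running and serves an invoke API on port 9001 instead of
# exiting after one invocation. Printed as a dry run.
stay_cmd=$(cat <<'EOF'
docker run --rm \
  -e DOCKER_LAMBDA_STAY_OPEN=1 -p 9001:9001 \
  -v "$PWD":/var/task \
  lambci/lambda:nodejs12.x index.handler
EOF
)
echo "$stay_cmd"

# Then invoke repeatedly without restarting the container:
invoke_cmd='aws lambda invoke --endpoint-url http://localhost:9001 --no-sign-request --function-name myfunction --payload "{}" output.json'
echo "$invoke_cmd"
```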

Unable to import module 'lambda_function'

I'm trying to use this Docker container to test a Zappa + Flask deploy and I'm having some issues. I followed the instructions in the README, and I can't get the Lambda function to run properly.

Is it not importing my Lambda code properly? What is supposed to happen?

docker run -v $PWD:/var/task lambci/lambda:python3.6

START RequestId: de56416b-9dfb-4a9f-b5e6-687af6593b61 Version: $LATEST
Unable to import module 'lambda_function': No module named 'flask_restless'
END RequestId: de56416b-9dfb-4a9f-b5e6-687af6593b61
REPORT RequestId: de56416b-9dfb-4a9f-b5e6-687af6593b61 Duration: 7 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 19 MB

{"errorMessage": "Unable to import module 'lambda_function'"}

Here is the docker-compose.yml file I am using:

version: '3'
services:

  lambda:
    image: lambci/lambda:python3.6
    volumes:
      - $PWD:/var/task
    environment:
      - AWS_LAMBDA_FUNCTION_NAME=application

  mariadb:
    image: mariadb:latest
    volumes:
      - ./schema.sql:/docker-entrypoint-initdb.d/load.sql
    environment:
      - MYSQL_ROOT_PASSWORD=''
      - MYSQL_DATABASE=''
      - MYSQL_USER=''
      - MYSQL_PASSWORD=''

Loading github credentials

I am using the command:
docker run -v "$PWD":/var/task lambci/lambda:build-nodejs4.3 npm install

but getting the error:
Host key verification failed.

The problem lies in attempting to access some of my dependencies through SSH to GitHub.
Where should I put my credentials to make this work?
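"Host key verification failed" usually means the container lacks a known_hosts entry (and key) for github.com. One possible approach, assuming the build container runs as root so its home is /root: mount your SSH directory read-only. Sketched as a dry run:

```shell
# Sketch, assuming the build container runs as root: mount the host's
# SSH keys and known_hosts into /root/.ssh (read-only) so npm can
# fetch git+ssh dependencies. Printed as a dry run.
ssh_cmd=$(cat <<'EOF'
docker run --rm \
  -v "$HOME/.ssh":/root/.ssh:ro \
  -v "$PWD":/var/task \
  lambci/lambda:build-nodejs4.3 npm install
EOF
)
echo "$ssh_cmd"
```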

Example for compiling PhantomJS

This project is awesome and could not have come at a better time.

You can also use it to compile native dependencies knowing that you're linking to the same library versions that exist on AWS Lambda and then deploy using the AWS CLI.

I'd love to replace https://github.com/18F/pa11y-lambda/blob/eecdd5d283de34e437847e21eed9314f27001aba/app/phantomjs_install.js with a pre-built PhantomJS binary that I know will Just Work in the Lambda environment.

According to the PhantomJS docs, you build it by obtaining the source and then running python build.py. I'm new to both PhantomJS building and Docker, so I was wondering if you could give a rough idea of how that workflow could fit into this Docker toolchain.
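One possible shape for that workflow, sketched as a dry run: obtain the PhantomJS source on the host, mount it into the build image (which carries gcc and the Amazon Linux libraries), and run the build step quoted above inside the container.

```shell
# Sketch: run PhantomJS's documented build step inside the build
# image so the binary links against the same library versions as the
# Lambda environment. Printed as a dry run.
phantom_cmd=$(cat <<'EOF'
docker run --rm -v "$PWD":/var/task lambci/lambda:build \
  python build.py
EOF
)
echo "$phantom_cmd"
```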

NSS version is mismatch

I run headless Chrome with Puppeteer. It runs correctly on AWS Lambda, but the error below occurs on the lambci Docker image.

[0918/092739.344468:FATAL:nss_util.cc(627)] NSS_VersionCheck("3.26") failed. NSS >= 3.26 is required. Please upgrade to the latest NSS, and if you still get this error, contact your distribution maintainer.

Can this run on a local machine?

When running create_build, yum fails to access the amazonaws repos with "The requested URL returned error: 403 Forbidden".
After lots of reading, I think these repos are off-limits for anyone NOT running in EC2.

Anyone got the build to work on a local machine?

Note: this question came from a total noob to Docker; you don't NEED to build it to use it. If you're happy with the content, you can just run the image, and Docker will download a pre-built one.
i.e. just run
docker run -it lambci/lambda:build bash
and within a couple of minutes you will have a terminal session with gcc installed.
