

🔄 Continuous Lambda Cold Starts Benchmark

TL;DR: 👀 https://maxday.github.io/lambda-perf/ to see the benchmark results:

screenshot

Why?

There are already a lot of blog posts about per-runtime Lambda cold start performance, but I could not find any that stay up to date.

That's why I decided to create this project: the data is always up to date because the benchmark runs daily.

How does it work?

Architecture diagram

architecture

Step 1

An ultra-simple hello-world function has been written in each AWS-supported runtime:

  • nodejs16.x
  • nodejs18.x
  • nodejs20.x
  • python3.8
  • python3.9
  • python3.10
  • python3.11
  • python3.12
  • dotnet6
  • java11
  • java11 + snapstart
  • java17
  • java17 + snapstart
  • java21
  • java21 + snapstart
  • ruby3.2

in addition to the following custom runtimes:

  • go on provided.al2
  • go on provided.al2023
  • rust on provided.al2
  • rust on provided.al2023
  • c++ on provided.al2
  • c++ on provided.al2023
  • dotnet7 aot on provided.al2
  • dotnet8 aot on provided.al2
  • dotnet8 aot on provided.al2023
  • quarkus native on provided.al2
  • graalvm java17 on provided.al2
  • graalvm java21 on provided.al2023
  • apple swift 5.8 on provided.al2
  • bun on provided.al2 (with and without layer)
  • llrt on provided.al2023

Each of these functions is packaged in a zip file and uploaded to an S3 bucket.
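For the Python runtimes, such a hello-world function is essentially a one-liner; the sketch below is illustrative, not the exact source used by the project:

```python
# Minimal hello-world handler, analogous to the functions benchmarked here.
# Doing no work in the handler keeps the measurement focused on init time.
def handler(event, context):
    return {"statusCode": 200, "body": "Hello, World!"}
```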

Step 2

Every day, each function is freshly grabbed from S3, deployed and invoked 10 times as cold starts.

Then the REPORT log line, which contains the init duration, max memory used, and other useful information, is saved to a DynamoDB table.
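The REPORT line emitted by Lambda has a predictable shape, so the fields of interest can be pulled out with a couple of regular expressions. A sketch (the project's actual parsing code may differ):

```python
import re

def parse_report(line):
    """Extract init duration (ms) and max memory used (MB) from a
    Lambda REPORT log line. Returns None for fields that are absent
    (e.g. no Init Duration on a warm invocation)."""
    init = re.search(r"Init Duration: ([\d.]+) ms", line)
    mem = re.search(r"Max Memory Used: (\d+) MB", line)
    return {
        "init_duration_ms": float(init.group(1)) if init else None,
        "max_memory_used_mb": int(mem.group(1)) if mem else None,
    }
```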

Step 3

After all these invocations, the information stored in DynamoDB is aggregated into a new JSON file, which is then committed to this repo, e.g. https://github.com/maxday/lambda-perf/blob/main/data/2022-09-05.json
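The aggregation step boils down to reducing each runtime's list of samples to summary statistics and serializing the result. A sketch, where the field names are illustrative and not the exact schema of the committed JSON files:

```python
import json
from statistics import mean

def aggregate(samples):
    """samples maps a runtime name to its list of init durations (ms)."""
    return {
        runtime: {
            "average_duration": round(mean(durations), 2),
            "min_duration": min(durations),
            "max_duration": max(durations),
        }
        for runtime, durations in samples.items()
    }

# The summary can then be dumped and committed to the repo:
# json.dumps(aggregate(samples), indent=2)
```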

Step 4

A static website, hosted on GitHub Pages at https://maxday.github.io/lambda-perf/, fetches this JSON file and displays the results in a (nice?) UI.

Step 5

Hack/Fork/Send PR and create your own benchmarks!

Disclaimer

⚠️ This project is not associated with, affiliated with, endorsed by, or sponsored by any company, nor has it been reviewed, tested, or certified by any company.

lambda-perf's People

Contributors

a-h, algo-facile, andys8, beau-gosse-dev, chrisbll971, davidruhmann, dependabot[bot], grahamcampbell, jreijn, kidylee, kkeker, marcomagdy, marksailes, martincostello, maxday, michaelbrewer, mikrethor, msailes, o-alexandrov, raederdev, richicoder1, slang25, trivikr, vvalkonen, wtfjoke


lambda-perf's Issues

Move dotnet8 AOT to the dotnet runtime

We're recommending people deploy .NET 8 Native AOT projects to the .NET 8 managed runtime (instead of AL2 or AL2023), since it will have the needed libraries available (like libicu). It also removes the bootstrap assembly-name requirement and brings billing in line with managed runtimes.

PR #935 added AL2 and AL2023.

Can we replace provided.al2 with dotnet8?

Getting Started with Development Guide

This looks like a really interesting project, but when I cloned it, it wasn’t really clear what I needed to do to build and deploy it so that I could try to add a new target.

It would be really handy to have a “Getting Started” guide for developers that describes the tools needed and the steps to build and deploy.

Thanks!

Feature request: p95 of cold starts

First off, this is a useful tool. Thanks for setting it up.

The mean is a useful metric, but to get a feel for how much cold starts can vary from it, it would be nice to have the p95 and standard deviation (assuming a normal distribution) of the measurements.
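Both statistics the issue asks for are cheap to compute over the 10 daily samples; a sketch using the standard library, with p95 computed by the nearest-rank method:

```python
import math
from statistics import stdev

def p95(values):
    """95th percentile via the nearest-rank method:
    the value at 1-based rank ceil(0.95 * n) in sorted order."""
    ordered = sorted(values)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]
```

With only 10 samples per day, nearest-rank p95 is simply the second-largest value; aggregating across days would make the percentile more meaningful.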

Add Bun Lambda To The Benchmark Suite

Given all the recent hype around the launch of bun v1, it would be awesome if the official bun Lambda runtime could be added to the benchmark suite:

This is particularly interesting as Bun is supposed to index highly in start-up speed, but from the blog post above, it appears that the lambda cold start time is currently slower than Node.

Dart

Please add support for Dart. This post on the AWS Open Source blog seems to cover how.

GraalVM native image instead of the JVM

It would make more sense to compile the Java workload with GraalVM native image; cold starts on the JVM make no sense for this kind of use case.

I guess that's similar to #2 raised for .Net.

Include benchmark without redeploying the lambdas.

Please include a benchmark where you don't redeploy the lambdas.

The reason is that I observe (using Logs Insights) that a Lambda that has been idle for a long time (more than 30 minutes) sometimes has higher cold starts than a freshly deployed one.

Add other cloud providers

Hi!

Thanks a lot for the up-to-date information regarding lambda cold start time.

Have you thought about adding more cloud providers, like Azure, GCP, etc.? This information could be useful for those users as well.

Thanks in advance!

Java Tiered Compilation

I've seen good improvements for Lambda functions written in Java using tiered compilation, as described in this article:

https://aws.amazon.com/blogs/compute/increasing-performance-of-java-aws-lambda-functions-using-tiered-compilation/

Even just using TieredStopAtLevel=1, my team has seen cold start times cut in half for many of our Lambda functions. I do think this has more of an effect when you have more initialization code, so it might not provide as much benefit in this package, since the code in the lambda is minimal. Still, it might be worth a try and would be a good data point.

I have not measured the results for the combination yet, but Tiered Compilation can also be combined with SnapStart.

How come C++ cold-starts are so slow?

I understand rust and go enjoying fast cold starts like:
image

But what could be the reason behind C++ being this slow on the same runtime?
image

Does it have to do with the Rust and Go binaries being built with the standard tools (cargo and go), while the C++ build uses an external tool like aws-lambda-cpp?

Or is Lambda just less optimized for C++ given it's not used as much?

Find a way to auto-detect new releases of aws-lambda-cpp

For now, the C++ builds clone https://github.com/awslabs/aws-lambda-cpp
It would be great to auto-detect releases from https://github.com/awslabs/aws-lambda-cpp/releases so we could rebuild as soon as a new release is cut.
For now, dependabot does not support that (see: dependabot/dependabot-core#2027).

So maybe one solution would be to create a simple bot that posts an issue for every new release of awslabs/aws-lambda-cpp, so we could rebuild as soon as possible.

cc @marcomagdy

Java 11 Corretto with SnapStart

I think it would be just the coolest thing if someone would contribute a function running Java 11 Corretto with and without SnapStart.

Love this project!! 😍

Could you add Mojo to the list?

Mojo promises Python syntax with the speed of C.
Does that sound too good to be true? Yes, but let's see how well it does.

Java runtime comparison might be misleading

Maybe that's a non-issue, but I am not sure about the goal of the comparison between different Java runtimes.

The following questions popped up.

  1. Runtimes prefixed with java avoid using com.amazonaws:aws-lambda-java-core (which is listed as a required lib in Building Lambda functions with Java). In real-world scenarios you would most likely use this library, which would make things a bit slower. Is this the desired setup for a comparison? Related PR: #410
  2. The runtime graalvm java17 (prov.al2), which is currently the fastest Java runtime in the comparison, has a similar optimization: it uses an alternative Java Lambda runtime, formkiq/lambda-runtime-graalvm, instead of the AWS java-runtime-interface-client. I think this should be reflected in the name of the runtime, something like graalvm java17 (prov.al2, formkiq runtime).
  3. The quarkus (prov.al2) name should include the Java version (17 in this case), e.g. quarkus java17 (prov.al2).
  4. quarkus (prov.al2) should use the same optimizations (or not) as the other Java runtimes (mentioned in 1).

I can split this into separate issues if you think they are worth fixing.

Use DocumentClient from JS SDK v3

Is your feature request related to a problem? Please describe.

Not a problem, but while going through the source code, I noticed that DocumentClient is used from AWS SDK for JavaScript (v2), while all other clients are from v3.

Describe the solution you'd like

Use aws-sdk-js-codemod to use DocumentClient from v3, and remove dependency on v2

Describe alternatives you've considered

N/A

Update llrt to retrieve the latest version

The version is currently hardcoded; we need a way to get the latest one automatically.
In addition, there is a specific binary for 'image' deployment, so we need two different artifacts deployed on S3 for llrt: one for zip, one for image.
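One way to stop hardcoding the version is to query GitHub's releases API (`/repos/{owner}/{repo}/releases/latest` is a real endpoint) and derive both artifact URLs from the returned tag. A sketch, where the asset file names are illustrative assumptions rather than the exact names published on the llrt releases page:

```python
import json
import urllib.request

def latest_release_tag(owner, repo):
    """Ask the GitHub API for the tag of a repo's latest release."""
    url = f"https://api.github.com/repos/{owner}/{repo}/releases/latest"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["tag_name"]

def llrt_asset_urls(tag, arch="x64"):
    """Build the two download URLs (zip deployment vs. container image).
    Asset file names here are hypothetical placeholders."""
    base = f"https://github.com/awslabs/llrt/releases/download/{tag}"
    return {
        "zip": f"{base}/llrt-lambda-{arch}.zip",
        "image": f"{base}/llrt-container-{arch}.zip",
    }
```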

Rebase native builds onto al2023

In theory, al2023 should now be a better and faster base image than al2. I would love to see the native builds replaced by, or added alongside, the existing al2 builds.

Java snapstart issue

Based on the numbers, I don't think the current trick of updating the function configuration means your tests are actually using SnapStart. It possibly takes some time before AWS enables the SnapStart feature on newly updated functions.

image

Update cold start invoke frequency to at least 30 min

Could the cold start invoke frequency be updated to at least every 30 minutes?
It looks like the code gets 10 cold start samples daily for each language/platform, but the numbers I have seen are much higher when the invocations are at least 30 minutes apart.

Java GraalVM 21 aws build

I get impressive cold start times with the latest JDK 21 release. Unfortunately, building the native image with Docker gives worse performance. I built directly on a t4g.large instance; since the build and runtime environments are the same, I believe the binary is optimized for the processor. I want to add my lambda here if possible. Please let me know.

t4g large : 9.34ms
Docker    : 234ms

Bar Chart View

Thanks for the awesome work!

If you can provide a Bar chart for an easy comparison view for each metric, that would be great in easily comparing and finding the best platform / architecture performance.

Basically, I want to see the average cold start for all platforms and both architectures in one chart, side by side, for easy visual comparison.

Ability to sort

Amazing work developing and releasing this benchmark for all Lambda consumers like me. I think the frontend of the app could really benefit from some quick "sorting" capabilities.

E.g. sort by "cold start time" or "memory".

Add warm starts

I know the title is "cold start", but it would be great to have the same data for warm starts, so that we can see whether the ranking stays the same once functions are warm. Especially for Java, which should do much better then.

dedup runtimes array

For now, the runtimes array is duplicated in 3 different places; let's refactor it into one place :)
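The refactor amounts to defining the list once as a single source of truth and importing it everywhere, optionally deduplicating it defensively. A sketch with illustrative runtime names (the real array lives in the lambda-perf code):

```python
def dedup(seq):
    """Order-preserving deduplication of a list."""
    seen = set()
    return [x for x in seq if not (x in seen or seen.add(x))]

# Define once, import everywhere, instead of copying the array
# into three places.
RUNTIMES = dedup([
    "nodejs20.x",
    "python3.12",
    "java21",
    "java21",  # duplicate on purpose; removed by dedup
    "rust provided.al2023",
])
```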
