
powertools-lambda's People

Contributors

amazon-auto, heitorlessa, michaelbrewer, sthulb


powertools-lambda's Issues

RFC: Allow for user defined exception handling in api gw handler

Key information

Summary

Allow for a simple way to define error handling

Motivation

Make it easier to define exception handling logic for api gw handlers

Proposal

If this feature should be available in other runtimes (e.g. Java, Typescript), how would this look like to ensure consistency?

User Experience

How would customers use it?

Example UX, based on FastAPI's approach of installing custom exception handlers:

from aws_lambda_powertools import Logger
from aws_lambda_powertools.event_handler import content_types
from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver, Response

# SomeKindOfError, NotFoundError, ServiceError and InternalServiceError are assumed to be
# application or event-handler exceptions defined/imported elsewhere.

app = ApiGatewayResolver()
logger = Logger()


@app.exception_handler(SomeKindOfError)
def handle_error(ex: SomeKindOfError):
    print(f"request path is '{app.current_event.path}'")
    return Response(status_code=418, content_type=content_types.TEXT_HTML, body=str(ex))


@app.not_found()
def handle_not_found(exc: NotFoundError):
    return Response(status_code=404, content_type=content_types.TEXT_PLAIN, body="I am a teapot!")


@app.exception_handler(ServiceError)
def service_error(ex: ServiceError):
    logger.debug(f"Log out sensitive stuff: {ex.msg}")
    return Response(
        status_code=ex.status_code,
        content_type=content_types.APPLICATION_JSON,
        body="CUSTOM ERROR FORMAT",
    )


@app.get("/my/path")
def call_with_error() -> Response:
    raise SomeKindOfError("Foo!")


@app.get("/my/path2")
def call_with_error_sensitive() -> Response:
    raise InternalServiceError("Foo!")


def lambda_handler(event, context):
    return app(event, context)

Any configuration or corner cases you'd expect?

Demonstration of before and after on how the experience will be better

Drawbacks

Why should we not do this?

May increase overall complexity.

Do we need additional dependencies? Impact performance/package size?

No additional dependencies

Rationale and alternatives

  • What other designs have been considered? Why not them?

Another option is to use the Middleware Factory to define error handling, but this is not as intuitive.

  • What is the impact of not doing this?

Unresolved questions

Optional, stash area for topics that need further development e.g. TBD

Feature flags: Support non-boolean values

Today, the feature flags utility supports only boolean flags, hence the wording "flag".
The utility could support more complex rule match values: I'd like to be able to provide a session context and get back, according to the rule engine, a more complex object such as a dict, a list, etc.
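
A hedged sketch of the desired experience, using the existing FeatureFlags/AppConfigStore API with illustrative names and a hypothetical non-boolean flag value:

from aws_lambda_powertools.utilities.feature_flags import AppConfigStore, FeatureFlags

# Application/environment/config names and context keys below are illustrative.
store = AppConfigStore(environment="dev", application="my-app", name="features")
feature_flags = FeatureFlags(store=store)


def lambda_handler(event, context):
    # Today evaluate() effectively yields a boolean; the request is for the rule engine
    # to be able to return richer values such as a dict or list.
    pricing_rules = feature_flags.evaluate(
        name="pricing_rules",
        context={"tier": event.get("tier", "standard")},
        default={"discount": 0},
    )
    return pricing_rules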

Logger decorator does not accept more arguments than event and context

What were you trying to accomplish?

I'm trying to use the logger in a lambda function that is executed once we deploy it via Terraform. We pass some custom arguments, however, the decorator is not able to handle them.

Expected Behavior

The expected behaviour is for the decorator to swallow (or pass through) the extra arguments.

Current Behavior

The Lambda function fails with the error TypeError: decorate() got an unexpected keyword argument 'argument_name'

Possible Solution

Appending variadic arguments to the decorate function should fix the problem (see the sketch below).
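
A simplified sketch of the proposed change (not the actual Powertools source): the inner decorate function accepts and forwards variadic arguments, so extra keyword arguments no longer raise a TypeError.

import functools


def inject_lambda_context_sketch(lambda_handler):
    @functools.wraps(lambda_handler)
    def decorate(event, context, *args, **kwargs):
        # ... logger setup / context injection would happen here ...
        # Forward any extra positional/keyword arguments to the wrapped handler.
        return lambda_handler(event, context, *args, **kwargs)

    return decorate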

Steps to Reproduce (for bugs)

It's similar to that error.

Environment

  • Powertools version used: 1.22.0
  • Packaging format (Layers, PyPi):
  • AWS Lambda function runtime:
  • Debugging logs

How to enable debug mode

I caught it during testing, below is the exact error

===================================================================== FAILURES ======================================================================
__________________________________________________________ test_migration_handler_failure ___________________________________________________________

monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x11167cf90>
lambda_context = lambda_context.<locals>.LambdaContext(function_name='test', memory_limit_in_mb=128, invoked_function_arn='arn:aws:lambda:eu-west-1:809313241:function:test', aws_request_id='52fdfc07-2182-154f-163f-5f0f9a621d72')

    def test_migration_handler_failure(monkeypatch, lambda_context):
        mock_err_message = "Testing for failure"
    
        def _err_side_effect(*args):
            raise TypeError("Testing for failure")
    
        mock_command = MagicMock()
        mock_command.upgrade.side_effect = _err_side_effect
        monkeypatch.setattr("lambdas.database_migration.handlers.command", mock_command)
    
        mock_event = {}
    
>       result = migration_handler(mock_event, lambda_context, num_retries=3, timeout=0.5)
E       TypeError: decorate() got an unexpected keyword argument 'num_retries'

tests/database_migration/test_handler.py:37: TypeError
------- generated xml file: /Users/vspallas/Documents/Development/project/python_unit_tests_junit_report.xml -------
============================================================== short test summary info ==============================================================
FAILED tests/database_migration/test_handler.py::test_migration_handler_failure - TypeError: decorate() got an unexpected keyword argument 'num_re...
============================================================ 1 failed, 1 passed in 0.35s ============================================================

Public Lambda Layers ARN instead of SAR App

Original author: @ecolban

Runtime (e.g. Python, Java, all of them): All of them

Is your feature request related to a problem? Please describe.

As described in the Lambda Powertools Python discussion, Layers are currently available as a Serverless Application Repository (SAR) App and not as a public Lambda Layer ARN.

While this allows customers to selectively deploy the semantic version they're interested in, it does require deploying a SAR App before they can use the Lambda Layer.

Describe the solution you'd like

Make the Lambda Layer ARNs publicly available per region. This will make it easier to add Lambda Powertools to existing projects.

Some examples: https://github.com/mthenw/awesome-layers

Describe alternatives you've considered

Building and publishing our own Lambda Layer, though that adds operational overhead to maintain.

Is this something you'd like to contribute if you had guidance?

Additional context

Reduce size of Lambda layer .zip file by removing botocore already in Lambda runtime

Is your feature request related to a problem? Please describe.
In trying to upgrade to v1.22.0, I ran into the deployment error Layers consume more than the available size of 262144000 bytes. This is for a Lambda function that uses both the Lambda Powertools Layer and the latest AWS Data Wrangler Layer. I think the new Powertools layer is only a little larger than the previous version but enough to tip over the limit when combined with the AWS Data Wrangler layer.

I pulled down the Powertools zip contents following the "Get the Layer .zip contents" instructions, and it looks like most of the size (~70 MB unzipped) comes from botocore (~63 MB).

Is it necessary to include botocore and boto3 in the layer's zip file given that these are available by default in the Lambda runtime? If not, it would be helpful to remove them from the pre-built layer to help avoid hitting the overall size limit.

Describe the solution you'd like
Remove botocore and boto3 from the Lambda layer zip archive.

Describe alternatives you've considered
I could build my own layer that doesn't include botocore and boto3. I tried this already by downloading the existing layer .zip, deleting those packages, zipping it back up and deploying it. That seems to work in my application, but it would be great if creating my own layer wasn't necessary.

Additional context
You could replicate the error by trying to deploy a function with both the pre-built AWS Lambda Powertools and AWS Data Wrangler layers included. The code size coming from my application is negligible compared to the layer sizes.

P.S. Thanks for the great resource! This has been a very helpful package for me and my team :)

Add Support for API Gateway Lambda authorizer

Is your feature request related to a problem? Please describe.

The API Gateway Resolver does not appear to support the ability to dispatch to a function acting as a Lambda authorizer.

Describe the solution you'd like

Add support to be able to dispatch to a function that is a Lambda authorizer.

Describe alternatives you've considered

Additional context

Expand on the Projects `Readme.md` / Quickstart

What were you initially searching for in the docs?

A quick start guide on the GitHub main page for AWS Lambda Powertools

Is this related to an existing part of the documentation? Please share a link

Describe how we could make it clearer

Add a basic getting-started guide with a couple of simple examples for Logger, Tracer and maybe Metrics. Once a developer has got a taste of the library, they can still visit the full documentation site.

If you have a proposed update, please share it here

  • Add a very quick guide for starting a new project with Powertools (maybe using AWS SAM CLI)
  • Add a couple of very basic examples that cover the main features of Powertools, for example:
from aws_lambda_powertools import Logger, Tracer
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver

tracer = Tracer()
logger = Logger()
app = ApiGatewayResolver()

@app.get("/hello")
@tracer.capture_method
def get_hello_universe():
    return {"message": "hello universe"}

@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
@tracer.capture_lambda_handler
def lambda_handler(event, context):
    return app.resolve(event, context)

Log pydantic models as structured JSON, not str(...)

Is your feature request related to a problem? Please describe.

The structured logging/auto-JSON-ification support is great and makes our logs much better, thank you.

We occasionally log a pydantic model directly and this unfortunately doesn't work so well by default, as it uses the model's str representation rather than converting it (recursively) to JSON:

import pydantic
from datetime import datetime, timezone
from aws_lambda_powertools.logging import Logger
import json

logger = Logger()


class Inner(pydantic.BaseModel):
    x: int
    y: str
    z: datetime = datetime(2000, 1, 2, 3, 4, 5, 6, tzinfo=timezone.utc)


class Outer(pydantic.BaseModel):
    inner: Inner
    extra: float


value = Outer(inner=Inner(x=1, y="y"), extra=2.3)

print("Default")
logger.info(value)
logger.info({"nested": [("deeply", value)]})

print("\nDesired (powertools JSON encoding)")
logger.info(value.dict())
logger.info({"nested": [("deeply", value.dict())]})

print("\nDesired (pydantic JSON encoding)")
logger.info(json.loads(value.json()))
logger.info({"nested": [("deeply", json.loads(value.json()))]})

Output:

Default
{"level":"INFO","location":"<module>:24","message":"inner=Inner(x=1, y='y', z=datetime.datetime(2000, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc)) extra=2.3","timestamp":"2021-12-17 08:16:32,658+1100","service":"service_undefined"}
{"level":"INFO","location":"<module>:25","message":{"nested":[["deeply","inner=Inner(x=1, y='y', z=datetime.datetime(2000, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc)) extra=2.3"]]},"timestamp":"2021-12-17 08:16:32,658+1100","service":"service_undefined"}

Desired (powertools JSON encoding)
{"level":"INFO","location":"<module>:28","message":{"inner":{"x":1,"y":"y","z":"2000-01-02 03:04:05.000006+00:00"},"extra":2.3},"timestamp":"2021-12-17 08:16:32,658+1100","service":"service_undefined"}
{"level":"INFO","location":"<module>:29","message":{"nested":[["deeply",{"inner":{"x":1,"y":"y","z":"2000-01-02 03:04:05.000006+00:00"},"extra":2.3}]]},"timestamp":"2021-12-17 08:16:32,658+1100","service":"service_undefined"}

Desired (pydantic JSON encoding)
{"level":"INFO","location":"<module>:32","message":{"inner":{"x":1,"y":"y","z":"2000-01-02T03:04:05.000006+00:00"},"extra":2.3},"timestamp":"2021-12-17 08:16:32,658+1100","service":"service_undefined"}
{"level":"INFO","location":"<module>:33","message":{"nested":[["deeply",{"inner":{"x":1,"y":"y","z":"2000-01-02T03:04:05.000006+00:00"},"extra":2.3}]]},"timestamp":"2021-12-17 08:16:32,658+1100","service":"service_undefined"}

In particular, compare:

  • default: "inner=Inner(x=1, y='y', z=datetime.datetime(2000, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc)) extra=2.3"
  • JSON-ified: {"inner":{"x":1,"y":"y","z":"2000-01-02 03:04:05.000006+00:00"},"extra":2.3}

(The exact format of the JSON will depend on whether it converts to Python types via .dict() and then uses Powertools' JSON encoding, or uses pydantic's JSON encoding via .json() and then converts back to Python types with json.loads (pydantic/pydantic#1409 is related, for the latter).)

Describe the solution you'd like

The default json_default value for LambdaPowertoolsFormatter could detect pydantic models (and maybe other JSON-like structures, like dataclasses?) and coerce those to structured data. For instance:

from typing import Any

try:
    import pydantic
except ImportError:
    pydantic = None

import dataclasses

def default_json_default(value: Any) -> Any:
    if pydantic is not None and isinstance(value, pydantic.BaseModel):
        return value.dict()
    elif dataclasses.is_dataclass(value):  # (haven't tested this)
        return dataclasses.asdict(value)

    return str(value)

This potentially imposes an undesired performance cost on logging, but I haven't investigated.

Describe alternatives you've considered

Everyone provides their own custom json_default argument to LambdaPowertoolsFormatter (we're doing this currently):

def json_default(value):
    if isinstance(value, pydantic.BaseModel):
        return value.dict()
    return str(value)


logger = Logger(json_default=json_default)

Additional context

Exact requirements for example above:

aws-lambda-powertools==1.22.0
aws-xray-sdk==2.9.0
boto3==1.20.24
botocore==1.23.24
fastjsonschema==2.15.2
future==0.18.2
jmespath==0.10.0
pydantic==1.8.2
python-dateutil==2.8.2
s3transfer==0.5.0
six==1.16.0
typing_extensions==4.0.1
urllib3==1.26.7
wrapt==1.13.3

Add `@event_source` decorator for constructing event source data classes

Runtime:
Python (but could apply to all languages that support decorators and lack existing libraries for event source data classes)

Is your feature request related to a problem? Please describe

Originally raised in the following issue: aws-powertools/powertools-lambda-python#434

Constructing a data class from an existing event could be cleaner, and tools like MyPy would complain about changing the type of event:

def lambda_handler(event: Dict[str, Any], context: LambdaContext) -> Dict[str, Any]:
    event: APIGatewayProxyEventV2 = APIGatewayProxyEventV2(event)
    ...

and the alternative version would mean a floating _event variable:

def lambda_handler(_event: Dict[str, Any], context: LambdaContext) -> Dict[str, Any]:
    event = APIGatewayProxyEventV2(_event)
    ...

Describe the solution you'd like

Creating a thin decorator like event_source, which just constructs an instance of the passed-in class type, would fix this; a rough sketch follows the example below.

@event_source(data_class=APIGatewayProxyEventV2)
def lambda_handler(event: APIGatewayProxyEventV2, context: LambdaContext) -> Dict[str, Any]:
   ...
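
A rough sketch of such a decorator (hypothetical helper, not necessarily the shipped implementation):

import functools
from typing import Any, Callable, Dict, Type


def event_source(data_class: Type) -> Callable:
    """Wrap the raw Lambda event in the given event source data class."""

    def decorator(handler: Callable) -> Callable:
        @functools.wraps(handler)
        def wrapper(event: Dict[str, Any], context: Any) -> Any:
            return handler(data_class(event), context)

        return wrapper

    return decorator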

Describe alternatives you've considered

N/A

If you provide guidance, is this something you'd like to contribute?

Implementation see: aws-powertools/powertools-lambda-python#442

Additional context

Note: to make this work in combination with the idempotent decorator, we should allow for the generation of the idempotency key when combined with the @event_source decorator; otherwise there will be a TypeError when trying to generate the JSON. Example code which would raise a TypeError:

persistence_layer = DynamoDBPersistenceLayer(table_name="IdempotencyTable")

@event_source(data_class=APIGatewayProxyEventV2)
@idempotent(persistence_store=persistence_layer)
def lambda_handler(event: APIGatewayProxyEventV2, context):
    assert isinstance(event, APIGatewayProxyEventV2)
    ...

Fortunately this would be a simple fix:

        if hasattr(data, "raw_event"):
            data = data.raw_event
        hashed_data = self.hash_function(json.dumps(data, cls=Encoder).encode())

MetricsUtils.withSingleMetric doesn't publish powertools default "Service" dimension

What were you trying to accomplish?
I want to use MetricsUtils.withSingleMetric to publish a single metric with additional dimensions without affecting the dimensions of other metrics. The single metric should also have the same default Powertools dimension as other metrics when I don't override the default dimensions.

Expected Behavior

The single metric should have the default "Service" dimension, like other normal metrics

Current Behavior

The single metric doesn't have the default "Service" dimension that other normal metrics have

Possible Solution

I think the problem is in MetricsUtils#logger. I suspect two issues here:

  1. If there are no overriding default dimensions, we skip setting default dimensions. But in LambdaMetricsAspect#refreshMetricsContext (normal cases) we append the default Powertools Service dimension instead. Possible solution: append the default Powertools Service dimension here as well.

  2. By using setDimensions when there are overriding default dimensions, this will likely cause another issue: when customers use metric.setDimensions(dimensions); in their Consumer<MetricsLogger>, the overriding default dimensions will be gone. Possible solution: use setDefaultDimensions and update the documentation to advise customers to use putDimensions instead of setDimensions (because using setDimensions will hide all the default dimensions)

Steps to Reproduce (for bugs)

  1. Don't overwrite defaultDimensions.
  2. Put some metrics using MetricsUtils.metricsLogger() so my metrics will have the default "Service" dimension created by Lambda Powertools
  3. Publish a single metric with an additional dimension as follows; the metric will only have AdditionalDimension
        MetricsUtils.withSingleMetric("MyCoolMetric", 1, Unit.COUNT, metric -> {
                final DimensionSet dimensions = DimensionSet.of("AdditionalDimension", "AdditionalDimensionValue");
                metric.setDimensions(dimensions);
            });
  4. If using metric.putDimensions(dimensions) instead of setDimensions, you will get all default EMF dimensions such as LogGroup, ServiceName and ServiceType, which is also not desirable, I think.

Environment

  • Powertools version used: v1.5.0
  • Packaging format (Layers, Maven/Gradle): Maven
  • AWS Lambda function runtime: Java8
  • Debugging logs: N/A

Integrate with AWS Distro for OpenTelemetry

OpenTelemetry provides open source APIs, libraries, and agents to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) of distributed applications.

Because there is some overlap between Lambda Powertools and OpenTelemetry, it would be nice to see how the provided layer can be reused or merged:
https://github.com/open-telemetry/opentelemetry-lambda/tree/main/python

Another important aspect, although it has not been fully finalized yet, is to follow the Log Data Model defined in the OpenTelemetry spec so that the context can later be related to both metrics and traces.

RFC: graceful error/exceptions handling and documentation

Key information

  • RFC PR: (leave this empty)
  • Related issue(s), if known: n/a
  • Area: all
  • Meet tenets: Yes
  • Approved by: ''
  • Reviewed by: ''

Summary

Revisit the exceptions/errors currently thrown by the utilities, and document them to raise awareness and visibility for developers using the powertools at scale in production.

One paragraph explanation of the feature.

Motivation

Currently, the libraries throw exceptions/errors when unwanted behaviour happens, as expected. However, developers getting started might not be aware of this due to a lack of familiarity with the tools.
It would be good to raise visibility of this behaviour so that developers can take action and handle exceptions thrown by the Powertools gracefully.

Why are we doing this? What use cases does it support? What is the expected outcome?

This is especially relevant when the developer needs to decide whether this invocation should be retried or not.
It would be good to understand if errors make sense at all.
Do we want the whole Lambda function invocation to fail if an observability-related step such as logging or tracing fails? We should take this into account.
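
For illustration, a minimal sketch of the defensive pattern a developer has to write today if an observability step must never fail the invocation (names are illustrative):

from aws_lambda_powertools import Tracer

tracer = Tracer(service="orders")


def lambda_handler(event, context):
    # Deliberately broad: a tracing failure should not fail the business logic.
    try:
        tracer.put_annotation(key="order_id", value=str(event.get("order_id", "unknown")))
    except Exception:
        pass

    return {"status": "ok"}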

Proposal

As in the summary:

  • Revisit the exceptions/errors currently thrown by the utilities, and decide on a different behaviour when it makes more sense;
  • Document error/exceptions thrown (via code comments or docs) to raise awareness and visibility for developers using the powertools at scale in production;

This is the bulk of the RFC.

Explain the design in enough detail for somebody familiar with Powertools to understand it, and for somebody familiar with the implementation to implement it.

If this feature should be available in other runtimes (e.g. Java), how would this look like to ensure consistency?

User Experience

How would customers use it?

Any configuration or corner cases you'd expect?

Demonstration of before and after on how the experience will be better

Drawbacks

Why should we not do this?

Do we need additional dependencies? Impact performance/package size?

Rationale and alternatives

  • What other designs have been considered? Why not them?
  • What is the impact of not doing this?

Unresolved questions

Optional, stash area for topics that need further development e.g. TBD

RFC: Unified Powertools Decorator

Key information

  • RFC PR: (leave this empty)
  • Related issue(s), if known:
  • Area: General
  • Meet tenets: Yes

Summary

Create a single decorator that can take in a configuration for all of the core features, including some new ones, like passing in the data_classes as the event or setting a correlation_id (aws-powertools/powertools-lambda-python#321).

@powertools.handler(
  config=PowerToolsConfig(
    type=API_GATEWAY_PROXY,  # Used to detect the event type
    set_correlation_id=True,  # Based on the event type we can automatically find the best correlation_id
    inject_lambda_context=True,  # Inject the logging context into the log messages
    log_event=True,  # Log the event
    capture_lambda_handler=True,  # Tracer capture
    idempotent={},  # Idempotent  config
    log_metrics=True,  # Log metrics
    capture_cold_start_metric=True,  # capture cold start metrics
    # and etc..
  )
)
def handler(
  event: APIGatewayProxyEvent,  # based on the event type
  context: LambdaContext
):
    logger.info("My message")  # Would include the correlation_id and context etc..

Motivation

When you chain together a bunch of decorators it is easy to get it wrong; for example, the code below would cause issues as the
idempotent decorator should be the innermost one. Also, when you want to add new features like recording the correlation_id,
you need to add more and more decorators.

@validator(inbound_schema=json_schema_dict, envelope="detail")
@idempotent(config=config, persistence_store=persistence_layer)
@logger.inject_lambda_context(log_event=True)
@metrics.log_metrics(capture_cold_start_metric=True)
@tracer.capture_lambda_handler(capture_response=False)
def handler(event, context):
    logger.info("My message")

Proposal

See summary for now..

Drawbacks

Why should we not do this?
We could continue to add more decorators and provide warnings when configured wrong?

Do we need additional dependencies? Impact performance/package size?
No additional dependencies

Rationale and alternatives

  • What other designs have been considered? Why not them?
  • What is the impact of not doing this?

Unresolved questions

Optional, stash area for topics that need further development e.g. TBD

Idempotent Feature - Specify Endpoint URL for DynamoDB Persistence Layer

Runtime: Python

Is your feature request related to a problem? Please describe
We are unable to use a local DynamoDB image when developing with the Idempotency feature locally. This is because the DynamoDB boto3 resource requires a separate endpoint_url parameter, instead of it being part of the customizable boto3 Config object.

Here we see the boto3 DynamoDB resource being created; however, there is no way to set the endpoint_url input argument to reference a locally running instance.

It appears that the only method to specify a custom endpoint URL for the DynamoDB resource is through the endpoint_url parameter when creating the resource object. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.Python.Summary.html

This setting does not seem to be available in the boto3 Config class either.

This makes it very challenging to develop locally or to do integration testing.

Describe the solution you'd like
Allow us to set the DynamoDB endpoint URL when creating the DynamoDBPersistenceLayer object when implementing the Idempotency utility.

idempotency_persistence_layer = DynamoDBPersistenceLayer(table_name="idem", endpoint_url="http://localhost:8000")

Describe alternatives you've considered

I have tried the following workaround, setting the URL manually on the protected attribute, but I end up getting an HTTP 302 error:

idempotency_persistence_layer.table.meta.client._endpoint.host = "http://localhost:8000"

I have also tried creating a Session and Config object. I know that localhost is not a valid region name but I just tried it to see if it would modify the URL used. It did not.

dynamodb_boto3_session = boto3.session.Session()
boto3_config = botocore.config.Config(region_name="localhost")
dynamodb_boto3_session.resource("dynamodb", endpoint_url="http://localhost:8000", config=boto3_config)
idempotency_persistence_layer = DynamoDBPersistenceLayer(table_name=idempotency_dynamo_table,
                                                         boto3_session=dynamodb_boto3_session)

If you provide guidance, is this something you'd like to contribute? Sure

Cache loaded JSON in event data classes

Is your feature request related to a problem? Please describe.

When using the provided event data classes (mainly APIGatewayProxyEvent/V2), the json_body property calls json.loads() every time. If I am passing the event object around, and not just the body content (say between different decorated functions) that means I'm performing this parsing multiple times within the same execution.

Describe the solution you'd like

I'd like to see caching introduced. Not only would this mean json.loads() is called only once, but in environments with high execution rates there could be significant time savings:

from functools import lru_cache
import json

json_body = '{"glossary": {"title": "example glossary", "GlossDiv": {"title": "S", "GlossList": {"GlossEntry": {"ID": "SGML", "SortAs": "SGML", "GlossTerm": "Standard Generalized Markup Language", "Acronym": "SGML", "Abbrev": "ISO 8879:1986", "GlossDef": {"para": "A meta-markup language, used to create markup languages such as DocBook.", "GlossSeeAlso": ["GML", "XML"]}, "GlossSee": "markup"}}}}}'

@lru_cache()
def load_json_once():
    return json.loads(json_body)

def load_json_every():
    return json.loads(json_body)

The timeit test below approximates an execution loading the body as JSON five times, multiplied out across 100,000 executions. Rough, but it demonstrates the aggregate time saved from the execution's duration.

>>> import timeit
>>> timeit.timeit(load_json_once, number=5)* 100000
4.252400003679213
>>> timeit.timeit(load_json_every, number=5)* 100000
9.994700008064683
>>> 
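
Applied to the data classes, a minimal sketch of the idea, assuming functools.cached_property (Python 3.8+) and a simplified stand-in for the real class:

import json
from functools import cached_property


class CachedJsonBodyEvent(dict):
    """Simplified stand-in for APIGatewayProxyEvent, illustrating the caching idea."""

    @cached_property
    def json_body(self):
        # json.loads runs once per event instance; later accesses reuse the result.
        return json.loads(self["body"])


event = CachedJsonBodyEvent({"body": '{"glossary": {"title": "example glossary"}}'})
first = event.json_body   # parses the body
second = event.json_body  # cached, no re-parse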


RFC: Golang Support

Key information

  • RFC PR: (leave this empty)
  • Related issue(s), if known:
  • Area: (i.e. Tracer, Metrics, Logger, etc.) Tracer, Metrics, Logger, Batch
  • Meet tenets: Yes
  • Approved by: ''
  • Reviewed by: ''

Summary

Add support for Golang to make the awesome powertools ecosystem available to Go developers.

Motivation

As Go developers, we currently cannot benefit from the Powertools, thus needing to re-implement the same functionality over and over. I have an internal implementation already set up but would like to contribute it to the wider community.

Proposal

This is the bulk of the RFC.

Explain the design in enough detail for somebody familiar with Powertools to understand it, and for somebody familiar with the implementation to implement it.

If this feature should be available in other runtimes (e.g. Java), how would this look like to ensure consistency?

User Experience

How would customers use it?

First, install the dependency via

go get github.com/awslabs/aws-lambda-powertools-go

Next step would be to have a Lambda handler wrapper (similar to the @Logging/@Metrics annotations for Java or the context managers for Python) to run some preprocessing / setup based on the POWERTOOLS_ environment variables.

package main

import (
	"context"

	"github.com/aws/aws-lambda-go/lambda"
	"github.com/awslabs/aws-lambda-powertools-go" // proposed module from this RFC; package name assumed to be "powertools"
)

type input struct{}
type output struct{}

func handleRequest(ctx context.Context, event input) (output, error) {
	res, err := doSomething() // placeholder for the actual business logic
	return res, err
}

func main() {
	// create a new wrapper accepting the actual Lambda handler as the first parameter
	handler := powertools.NewLambdaHandler(handleRequest)
	lambda.Start(handler.HandleRequest)
}

The LambdaHandler definition could look something like this

type LambdaHandler struct {
   handlerFunc interface{}
}

func (h *LambdaHandler) HandleRequest(ctx context.Context, event interface{}) (interface{}, error) {
    // setup some internal settings
    ....
    // run middlewares
    ....
    // run the actual handler
    res, err := h.handlerFunc(ctx, event)
    // do some post-processing like logging output etc
    ....
    return res, err
}

Developers also want to use the Logger, Tracer and Metrics capabilities to enhance their experience.

Logger

The Logger capability will provide a standard logging facade with structured log support and leveled logging.

var logger = powertoolslog.NewLogger()

func operation() {
    logger.AppendKey("event_id", "1234")
    logger.AppendKey("event_payload", map[string]interface{}{
      "key": "value",
      "num": 4,
    })
    logger.Info("A message!")

    powertoolslog.Info("Using the globally registered logger")
}

The output will be the same as in Powertools Python.

Metrics

The Metrics capability provides standard EMF metrics (maybe based on prozz/aws-embedded-metrics-golang).

var metricsLogger = powertoolsmetrics.NewMetricsLogger()

func operation() {
    defer metricsLogger.Log()

    metricsLogger.PutDimension("dimKey", "dimValue")
    metricsLogger.PutMetadata("metaKey", "metaValue")
    metricsLogger.PutMetric("metricName", 1, emf.Count)
    
    // use a context-manager like variant. will flush the logger automatically
    powertoolsmetrics.WithMetricsLogger(context.TODO(), func(ml *powertoolsmetrics.MetricsLogger) {
       ml.PutMetric("metric2", 1, emf.Count)
    })
}

Tracer

Tracer will provide capabilities for using X-Ray (based on aws/aws-xray-sdk-go)

func operation(ctx context.Context) {
    powertoolstracing.Capture(ctx, "operation", func(c context.Context) {
        cfg, err := config.LoadDefaultConfig(c)
        powertoolstracing.InstrumentAWSv2(&cfg)
        ssmClient := ssm.NewFromConfig(cfg) // avoid shadowing the ssm package
        ssmClient.GetParameter(c, &ssm.GetParameterInput{})
    })
}

Also, the LambdaHandler will automatically setup tracing when the appropriate POWERTOOLS_ environment variables are set.

Any configuration or corner cases you'd expect?

Demonstration of before and after on how the experience will be better

  • Before: No Powertools
  • After: Powertools :)

Drawbacks

Why should we not do this?

Do we need additional dependencies? Impact performance/package size?

For the initial implementation (and maybe in general), we should use established libraries like the ones referenced above.

Rationale and alternatives

  • What other designs have been considered? Why not them?

The design for the UX needs to be fleshed out and checked to ensure it is idiomatic enough.

  • What is the impact of not doing this?

Unresolved questions

Optional, stash area for topics that need further development e.g. TBD

  • Package design: Multiple distinct Go packages (similar to AWS SDK, individually go gettable), one package powertools or multiple packages in one project (like powertools, powertoolsmetrics etc).
  • APIs for the core packages

Tracer: add service value and coldstart false annotations

Is your feature request related to a problem? Please describe.

When troubleshooting a larger app, a service annotation would've made it easier to slice the service map in X-Ray. Similarly, the ColdStart annotation is only added when there is a cold start; having ColdStart=false would be great to quickly analyze sampled requests where a cold start happened vs warm starts.

Describe the solution you'd like

Include a service annotation, Service, if the value is defined, and a ColdStart=false annotation.

Describe alternatives you've considered

Manually create that as part of my function

Additional context

Provide basic functionality like measuring time/success/failure count for function through @Metrics annotation

Is your feature request related to a problem? Please describe.
I would like to know the time taken and the success/failure ratio for a service call from a Lambda function.

Describe the solution you'd like
Currently the Metrics annotation only supports capturing cold starts; it would be really useful if we could get time/success/failure counts for the method being executed (similar to DCM/DLM :)).

Describe alternatives you've considered

Currently we are using Guava's stopwatch & manually instrumenting the code.

Extend ApiGatewayResolver functionality to support splitting routes into multiple files

Is your feature request related to a problem? Please describe.
I've been trying out the functionality of ApiGatewayResolver and it's working great for my needs; however it doesn't support splitting the individual routes into different files.

Describe the solution you'd like
Mirroring the style of Flask, I'd like to use a proxy object which can replicate the original @app.route decorator and can later be used to apply the routes onto the original app.
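
A rough sketch of the kind of proxy object being described (hypothetical names, not the author's implementation referenced below): routes are recorded per file and later replayed onto the real resolver.

from typing import Any, Callable, List, Tuple


class RouteProxy:
    """Hypothetical Flask-blueprint-style proxy for ApiGatewayResolver routes."""

    def __init__(self) -> None:
        self._routes: List[Tuple[str, str, Callable]] = []

    def get(self, path: str) -> Callable:
        def register(func: Callable) -> Callable:
            self._routes.append((path, "GET", func))
            return func

        return register

    def register_into(self, app: Any) -> None:
        # Replay the recorded routes onto the real ApiGatewayResolver instance.
        for path, method, func in self._routes:
            app.route(path, method)(func)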

Describe alternatives you've considered
The only simple alternative that I can see currently available is to keep all of the routes defined in the main handler file.

Additional context
I wrote a concise implementation that is working well for me; I'll submit it as a PR to get some input.

Ignore tracing for certain Urls /hostname

Is your feature request related to a problem? Please describe.
We are using the capture lambda handler decorator, but unfortunately our Lambda is making a large number of API calls to ec2.amazon.com, which we don't want any trace (segments) for. Since a large number of trace subsegments are generated by this API call (which happens in a loop), we run out of the X-Ray limit for a single trace, which is 500 KB.
https://docs.aws.amazon.com/general/latest/gr/xray.html#limits_xray.

Describe the solution you'd like
If we could avoid generating trace segments for a known URL/hostname, as described at https://github.com/aws/aws-xray-sdk-python#ignoring-httplib-requests, while using Tracer, we would be able to keep our trace size under the limit and still use Tracer. It's possible for us to use the X-Ray SDK directly and trace only the methods we want, but it would be very useful to have such an ignore capability in Tracer too.
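
For reference, the workaround described in the linked aws-xray-sdk-python documentation looks roughly like this (sketch; the hostname is illustrative) and is what we'd like Tracer to expose:

from aws_xray_sdk.ext.httplib import add_ignored

# Stop the X-Ray SDK from creating subsegments for calls to this hostname.
add_ignored(hostname="ec2.amazonaws.com")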

Ability to distinguish Temporary and Permanent Exceptions in SQS Batch Processing

Is your feature request related to a problem? Please describe.
While processing SQS messages in a batch, I want to distinguish between Temporary and Permanent Exceptions so that in case of a temporary failure the message returns to the main queue, and in case of a permanent failure it either moves to the DLQ or is deleted completely, as per the requirement.

Describe the solution you'd like
Simple conversion of exceptions thrown from the SqsMessageHandler implementation to TemporaryFailureException or PermanentFailureException with some defined logic, with handler logic then defined for those exceptions as well.

Describe alternatives you've considered
Writing my own alternative exception handler which I am still figuring out.

Additional context
This seems to me a fairly common use-case and should be a well-understood problem.

API Gateway proxy/ANY integration to api gateway

Is your feature request related to a problem? Please describe.
Use case:
Serverless microservices are deployed as API Gateway + Lambda Proxy Integration. Authentication is IAM based.
Other API gateways are used to expose these services. They can act as a proxy to microservices or as a composer. In this scenario, the job of the API gateway is not only to authenticate requests (Cognito, Custom) and do rate limiting, but also to route, compose or transform.

Problem:
API gateway supports HTTP proxy integration, but if you specify the endpoint of another API gateway with IAM authentication, requests will fail due to missing token (the request is not signed).
The AWS Service integration does not include API Gateway as a target.
So, Lambda Proxy Integration is the only solution (and also the most flexible).

Describe the solution you'd like
Lambda Power Tools already implement lots of functionality related to routing request paths to functions, handle responses, CORS, etc.
It could be extended to support other features like proxy, transformation and composition.

The proxy feature:

  • The resolver could match an ANY method and paths such as /orders/{proxy}.
  • The forward request could be implemented by a function. This function receives the target endpoint. It would build a request to this endpoint using the same verb, adding the path to the endpoint and signing the request (similar to API Gateway HTTP proxy functionality, but with request signing).
  • The return of the forward request is encapsulated in a Response object and handled accordingly (including Cors, etc)
  • It could also work with the current decorator by adding a proxy attribute and let the function body do any transformation/processing like adding additional headers

Notice that we may want to expose methods from many microservices. For example:
ANY /orders/{proxy} => proxy to Orders service endpoint
ANY /accounts/{proxy} => proxy to Accounts service endpoints

Describe alternatives you've considered
The alternative we have today is to map all possible paths and verbs (to all microservice methods that should be exposed) and implement the handler the same way (the forward request mentioned before), get the response and encapsulate it in a Response object.

Preserving trace id in Lambda -> SQS -> Lambda flow

Runtime:
All of them

Is your feature request related to a problem? Please describe
SQS supports X-Ray, but since Lambda execution is done as a batch, the Lambda function consuming the queue starts a new trace. To work around this, the consuming Lambda needs to overwrite the trace id with the AWSTraceHeader value.

func handler(ctx context.Context, event events.SQSEvent) {
	if trcHdrStr, ok := event.Records[0].Attributes["AWSTraceHeader"]; ok {
		traceHeader := header.FromString(trcHdrStr)

		var seg *xray.Segment
		ctx, seg = xray.BeginSegment(ctx, "YOUR-SEGMENT-NAME")
		seg.TraceID = traceHeader.TraceID
		seg.ParentID = traceHeader.ParentID
		seg.Sampled = traceHeader.SamplingDecision == header.Sampled

		defer seg.Close(nil)
	}
}

This is described at length here https://medium.com/@filiplubniewski/distributed-tracing-in-serverless-with-x-ray-lambda-sqs-and-golang-f38616cbd79b

Describe the solution you'd like
This would be perfect to have as an annotation

Describe alternatives you've considered
It's obviously an annotation ;)

If you provide guidance, is this something you'd like to contribute?

Additional context

RFC: Update, view and remove the correlation id

Key information

Summary

When using the correlation feature in conjunction with SQS, you may want to update the correlation id for each message in the incoming batch. This allows you to correlate log lines for messages being processed in a batch with the lines in another log group using CloudWatch Logs Insights.

Furthermore, next to lines that are related to a message, there are also lines that are related to the processing of the batch itself, so an easy way to remove the correlation id would be beneficial.

Optionally, it might be useful to fetch the current correlation id for display purposes.

Motivation

The following code "works", but it breaks the typing, as logger.set_correlation_id expects a str and not an Optional[str]:

from aws_lambda_powertools import Logger
from aws_lambda_powertools.utilities.typing import LambdaContext
from aws_lambda_powertools.utilities.batch import sqs_batch_processor
from aws_lambda_powertools.utilities.data_classes.sqs_event import SQSRecord
logger = Logger(service="callback")


def record_handler(record: dict) -> None:
    record = SQSRecord(record)
    # Get the correlation id from the message attributes
    correlation_id = record.message_attributes["correlation_id"].string_value
    logger.set_correlation_id(correlation_id)
    logger.info(f"Processing message with {correlation_id} as correlation_id")


@sqs_batch_processor(record_handler=record_handler)
def lambda_handler(event: dict, context: LambdaContext) -> None:
    logger.set_correlation_id(None)
    logger.info(f"Received a SQSEvent with {len(list(event.records))} records")

If you want to display the current correlation id you need to track it in your own logic, for example:

from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.utilities.typing import LambdaContext
from aws_lambda_powertools.utilities.data_classes import (
    event_source,
    APIGatewayProxyEvent,
)
logger = Logger(service="api")


@event_source(data_class=APIGatewayProxyEvent)
@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
def lambda_handler(event: APIGatewayProxyEvent, context: LambdaContext) -> dict:
    logger.info(
        f"Received request using {event.request_context.request_id} as correlation id"
    )

Here you see that there are two things to maintain: the correlation_id_path=correlation_paths.API_GATEWAY_REST and the event.request_context.request_id both need to be updated if you want to change it.

Proposal

Adding Optional[str] typing or a logger.remove_correlation_id() method would solve the removal of the correlation id.

A logger.get_correlation_id() method would help, but from the discussions on Slack that might not be so straightforward. @heitorlessa could you add some context?

If this feature should be available in other runtimes (e.g. Java, Typescript), how would this look like to ensure consistency? The interface should be the same but tailored to the language standards.

User Experience

Get the current correlation id:

@event_source(data_class=APIGatewayProxyEvent)
@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
def lambda_handler(event: APIGatewayProxyEvent, _: LambdaContext) -> dict:
    logger.info(
        f"Received request using {logger.get_correlation_id()} as correlation id"
    )

Remove the correlation id:

logger.set_correlation_id(None)
# or
logger.remove_correlation_id()

Drawbacks

Not that we know of until now

Rationale and alternatives

  • What other designs have been considered? Why not them? TBD
  • What is the impact of not doing this? It breaks the typing interface; Python will not break on it, but it's not nice

Unresolved questions

Optional, stash area for topics that need further development e.g. TBD

Provide access to ApiGatewayResolver object from Router object

Is your feature request related to a problem? Please describe.

When attempting to use the Split routes with Router feature introduced in 1.22.0, we found that many of our endpoint functions needed access to the ApiGatewayResolver object. This is because we have subclassed ApiGatewayResolver to add some application-specific functionality.

Describe the solution you'd like

Please provide access to the ApiGatewayResolver object, either in the Router object or in the BaseProxyEvent (referenced by the current_event property). Ideally, the BaseProxyEvent could provide access to both the ApiGatewayResolver object and the Router object.

Describe alternatives you've considered

Not using the new router functionality.

Additional context

Add .net support for Lambda Powertools

Runtime:
.net (C#)

Is your feature request related to a problem? Please describe
I have been engaged in multiple conversations with different customers using .NET where they ask for best practices on how to write metrics to CloudWatch and implement tracing and logging. Immediately Lambda Powertools comes to mind, but support for Microsoft workloads is missing.

Describe the solution you'd like
I would like to have a single library that I can use to capture metrics, traces and logs, while being guaranteed that best practices are applied.

Describe alternatives you've considered
The current alternative is to use different libraries or implement the features ourselves, which might be difficult to guarantee depending on existing knowledge of the topic.

If you provide guidance, is this something you'd like to contribute?
YES.

Additional context
Currently all languages supported by Lambda Powertools support Core features (tracing, metrics, logging) but they have been expanded to support additional patterns and best practices. The same must be true for this implementation.

All feature requests are more than welcome since all of us have different needs.

Enable Powertools logging for imported libraries

Is your feature request related to a problem? Please describe.

It would be nice to easily modify the logger of imported libraries to use JSON logging from Powertools as well.

Describe the solution you'd like

Something like discussed in this discussion: aws-powertools/powertools-lambda-python#799

Maybe something like this?

logger = Logger(...., level="INFO", deny_list=["dont-log-this-library", "and-not-this"])

Describe alternatives you've considered
See the linked discussion.
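
For context, a manual workaround along these lines is possible today (a sketch, assuming the Powertools Logger exposes its configured handler via the registered_handler property):

import logging

from aws_lambda_powertools import Logger

logger = Logger(service="payment")

# Route an imported library's log records through the same structured JSON handler.
library_logger = logging.getLogger("some_imported_library")  # illustrative name
library_logger.handlers = [logger.registered_handler]
library_logger.setLevel(logging.INFO)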

Thank you!

RFC: Powertools libraries naming convention

Key information

  • RFC PR: (leave this empty)
  • Related issue(s), if known: n/a
  • Area: all
  • Meet tenets: Yes
  • Approved by:
  • Reviewed by:

Summary

The goal of this RFC is to have a unique, standardised and easy to understand naming convention for each Powertools library in all places, including across different languages.

Motivation

Currently, in our documentation the library is referred to in different ways:

  • AWS Lambda Powertools (TypeScript)
  • Lambda Powertools TypeScript
  • AWS Lambda Powertools TypeScript
  • AWS Lambda Powertools for TypeScript

(Screenshots from the documentation showing the different naming variants.)

TypeScript is used here as an example, but the same applies to Python, Java, and other upcoming languages.

As the number of languages supported by the AWS Lambda Powertools grows, there is an opportunity to make the name of the project clearer and more consistent among the different languages/runtimes.
The conversation among maintainers about the library's naming convention was generated by this comment in this HN thread.

Proposal

The library name of a specific language is "AWS Lambda Powertools for [Language]":

  • AWS Lambda Powertools for Python
  • AWS Lambda Powertools for Java
  • AWS Lambda Powertools for TypeScript

As a consequence, all the versions below should be changed to the new one:

  • AWS Lambda Powertools (TypeScript)
  • Lambda Powertools TypeScript
  • AWS Lambda Powertools TypeScript

If this feature should be available in other runtimes (e.g. Java), how would this look like to ensure consistency?

User Experience

Customers should not be impacted as this does only affect the documentation.

Drawbacks

N/A

Rationale and alternatives

The maintainers have also considered these versions (TypeScript as an example):

  • AWS Lambda Powertools TypeScript
  • AWS Lambda Powertools (TypeScript)
    But AWS Lambda Powertools for TypeScript was the winner.
  • What other designs have been considered? Why not them?
  • What is the impact of not doing this?

Unresolved questions

Impacted repositories

https://github.com/awslabs/aws-lambda-powertools-python
https://github.com/awslabs/aws-lambda-powertools-java
https://github.com/awslabs/aws-lambda-powertools-typescript
https://github.com/aws-samples/aws-lambda-powertools-examples

Feature request: Exception handling for Idempotency utility

Is your feature request related to a problem? Please describe.

As part of the awslabs/aws-lambda-powertools-python#218 RFC, we had a brief discussion about exception handling but didn't implement it in the first iteration as we weren't sure.

This is to enable a discussion on this before we make Idempotency GA, or to agree on whether we should do this.

Describe the solution you'd like

A mechanism to allow an external function to handle individual or all exceptions raised as part of the Idempotency utility.

At present, customers using API Gateway might want to return a different response to their end customers if an operation is already in progress. This is not currently possible and requires a custom middleware, which brings operational and maintenance complexity.

I haven't put much thought into the UX yet, hence creating this issue to enable the discussion.

Example for handling a given exception

import os
from typing import Any, Dict

from aws_lambda_powertools.utilities.idempotency import (
    DynamoDBPersistenceLayer, IdempotencyConfig, idempotent
)
from aws_lambda_powertools.utilities.idempotency.exceptions import IdempotencyAlreadyInProgressError


def my_custom_exception_handler(event: Dict[str, Any], exception: BaseException):
    ...


persistence_layer = DynamoDBPersistenceLayer(table_name=os.getenv("TABLE_NAME"))
config = IdempotencyConfig(
    event_key_jmespath="body",
    use_local_cache=True,
    exception_handler=my_custom_exception_handler,
    exceptions_list=[IdempotencyAlreadyInProgressError],
)


@idempotent(config=config, persistence_store=persistence_layer)
def handler(event, context):
    ...

Example for handling all exceptions

import os
from typing import Any, Dict

from aws_lambda_powertools.utilities.idempotency import (
    DynamoDBPersistenceLayer, IdempotencyConfig, idempotent
)


def my_custom_exception_handler(event: Dict[str, Any], exception: BaseException):
    ...


persistence_layer = DynamoDBPersistenceLayer(table_name=os.getenv("TABLE_NAME"))
config = IdempotencyConfig(
    event_key_jmespath="body",
    use_local_cache=True,
    exception_handler=my_custom_exception_handler,
)


@idempotent(config=config, persistence_store=persistence_layer)
def handler(event, context):
    ...

Describe alternatives you've considered

Create a custom middleware using middleware factory utility.

Additional context

I don't think we need to change how we remove idempotency keys from the store when exceptions are raised. This is mostly to give customers an option to handle exceptions as they see fit.

FAQ

Does this include exceptions raised in your own handler, or only for the Idempotent related parts?

I'd vote for Idempotency-related only, though I also see the point of catching all of them, as it can quickly become confusing. I'm in two minds here: either build a separate scoped utility to handle exceptions as a whole, or handle Idempotency exceptions only, to give customers a chance to provide a non-error response when needed.

I like the former because it allows other utilities like the future Circuit Breaker, integration with Error Tracking systems like Sentry, and a clean intent with batteries included (e.g. retry on X exceptions, etc.).

Would the idempotent library have standard error responses defined for API Gateway proxy requests?

No. This could get out of hand quickly, as the next would be Circuit Breaker, or any other utility we might want to provide.

If it is the result of the handler, then if it does not re-raise the error, does this result then get saved in the idempotent persistence layer?

That's where I think it's tricky, and I'd like to not touch the current Idempotency mechanics - it's a complex sequence already, and an additional branch flow could make this harder to maintain and reason about.

Feature request: CodePipeline Lambda Event

Is your feature request related to a problem? Please describe.

A CodePipeline data class for Lambda events with helper methods to use the embedded credentials to download an artifact from S3.

https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html#actions-invoke-lambda-function-json-event-example

Describe the solution you'd like

see above

Describe alternatives you've considered

Manually accessing the event

Additional context

DynamoDBPersistenceLayer should support Tables with Range Keys

Is your feature request related to a problem? Please describe.

I cannot use the DynamoDBPersistenceLayer class when my table has HASH and RANGE keys. I do not want to create a separate table for tracking lambda idempotence. [channeling my inner Houlihan] One table to rule them all!

Describe the solution you'd like

I would like the DynamoDBPersistenceLayer to accept a configuration such that the HASH key is set to a known value, e.g. idempotence#<feature-name>, and the RANGE key contains the current values managed by the data_record.idempotency_key.

Describe alternatives you've considered

There are of course other HASH/RANGE key designs. Another possible design is to split the current data_record.idempotency_key value, which is <function name>#<hash>, to maybe be HASH key = <function name> and RANGE key = <hash>.

Additional context

I would like this to be a non-breaking change for current consumers. If we agree on the first HASH/RANGE key design proposal, the client could configure a DynamoDBPersistenceLayer with these extra parameters:

        key_attr_value: str = "idempotence",  # this is just a hardcoded value
        range_attr: str = "SK",               # this is the range key to store the current `data_record.idempotency_key`

If these are set, then DynamoDBPersistenceLayer uses the HASH/RANGE key implementation, else it reverts to HASH key implementation.
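
With the proposed parameters, usage could look roughly like this (key_attr_value and range_attr follow the naming above and are not part of the current API):

from aws_lambda_powertools.utilities.idempotency import (
    DynamoDBPersistenceLayer,
    IdempotencyConfig,
    idempotent,
)

persistence_layer = DynamoDBPersistenceLayer(
    table_name="single-table",
    key_attr="PK",
    key_attr_value="idempotence#my-feature",  # proposed: constant HASH key value
    range_attr="SK",                          # proposed: RANGE key holding data_record.idempotency_key
)


@idempotent(config=IdempotencyConfig(event_key_jmespath="body"), persistence_store=persistence_layer)
def handler(event, context):
    ...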

Feature Request: Add DEBUG mode to the API Gateway Event Handler

Is your feature request related to a problem? Please describe.

When there is an error while developing my initial Lambda, I would like to be able to see more information when debug mode is turned on.

Describe the solution you'd like

Add a DEBUG mode or feature flag, which should never be used in production, that can do the following:

  1. Pretty print the json responses
  2. Get more context when there is an error
  3. Relax rules for setting up CORS
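
For example, such a flag could surface roughly like this (the debug parameter and environment variable name below are illustrative, not an existing API):

import os

from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver

# Proposed: enable via a constructor flag or environment variable; never enable in production
app = ApiGatewayResolver(debug=os.getenv("POWERTOOLS_EVENT_HANDLER_DEBUG", "false") == "true")


@app.get("/hello")
def hello():
    # with debug on, responses would be pretty-printed and errors would include more context
    return {"message": "hello"}


def lambda_handler(event, context):
    return app.resolve(event, context)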

Describe alternatives you've considered

For debugging errors, you can also look at the Lambda logs, but it would be nice to allow the end user to get more context even before accessing the logs.

Additional context

Add TypeScript support for AWS Lambda Powertools

Runtime:
Node.js

Is your feature request related to a problem? Please describe
AWS Lambda supports multiple languages through the use of runtimes, one of them being Node.js. In the Serverless Lens for the Well Architected Framework, we suggest several best practices for observability such as structured logging, distributed tracing, and monitoring of metrics. The suite of utilities of the AWS Lambda Powertools help developers with the adoption of best practices.
Today, AWS Lambda Powertools is available for the Python and Java runtimes.

Describe the solution you'd like
AWS Lambda Powertools available for Node.js runtimes, written in TypeScript.

Describe alternatives you've considered
For the generation of metrics, traces and logs developers can use a number of different open-source libraries together, maintained by the AWS community, or implement that logic by themselves.

If you provide guidance, is this something you'd like to contribute?
Absolutely yes.

Additional context
The TypeScript Powertools will follow the same tenets as the other languages, and folks can expect the same core utilities/functionalities to be supported.

RFC: New batch processor for new native partial response (SQS, DynamoDB, Kinesis)

Key information

Summary

A new generic batch processing utility, which can process records from SQS, Kinesis Data Streams, and DynamoDB streams, and handle reporting batch failures.

Motivation

With the launch of support for partial batch responses for Lambda/SQS, the event source mapping can now natively handle partial failures in a batch - removing the need for calls to the delete api. This support already exists for Kinesis and DynamoDB streams.

Proposal

The new utility will be implemented in the namespace which already exists for the SQS Batch Processing utility - aws_lambda_powertools.utilities.batch. Rather than calling the SQS DeleteMessage API like the existing batch utility, this version will instead inject BatchItemFailures (or batchItemFailures for Kinesis and DDB) into the Lambda response. We will expose, at the least, a new decorator batch_processor, which will accept an event type depending on the integration (Kinesis, SQS or DDB Streams). This will look similar to the Event Handler design. There will be a boolean setting to handle partial failures, defaulting to True (users will still need to enable this in the event source mapping for it to work).

from aws_lambda_powertools.utilities.batch import EventType, batch_processor


def record_handler(record):
    return do_something_with(record["body"])  # do_something_with is a placeholder


@batch_processor(record_handler=record_handler, event_type=EventType.KinesisStreamEvent)
def lambda_handler(event, context):
    return {"statusCode": 200}

Proposed class/method names for the new api:
aws_lambda_powertools.utilities.batch.base.BatchProcessor
aws_lambda_powertools.utilities.batch.batch_processor

Any configuration or corner cases you'd expect?
Users will need to enable this in the event source mapping when configuring the Lambda trigger - reporting partial failures will not work with only changes to the Lambda code. Investigation is needed to better understand the consequences here, and how expensive it would be to enable a check in the code to see if it is enabled or not. If we don't implement this, we need to call this out front and center in the documentation.

Rationale and alternatives

The main consideration here is where the new functionality fits into the Powertools package. It could be a new top-level utility, e.g. aws_lambda_powertools.batchv2 - but we prefer not to add version numbers as it is confusing for users.

We could make a straight replacement of the implementation behind the existing API, which was the initial idea. However, the native functionality requires a setting when the event source mapping is created. That means we'd be introducing a breaking change if we did this - the utility would stop working for users who had not made this configuration.

Unresolved questions

Need to decide if we should add a context manager like the existing implementation has. It is simple to implement this, but it burdens the user with ensuring they store the return value and use it in their lambda response. I feel like this is too likely to be misunderstood, but would like opinions. Example:

from aws_lambda_powertools.utilities.batch import BatchProcessor, EventType


def record_handler(record):
    return do_something_with(record["body"])  # do_something_with is a placeholder


def lambda_handler(event, context):
    records = event["Records"]

    processor = BatchProcessor(event_type=EventType.KinesisStreamEvent)

    with processor(records, record_handler) as proc:
        result = proc.process()  # users will have to store the response from the processor

    return result  # users will have to return the result they stored from the processor

feature(apigateway): define function to handle not found requests

Executing the following code does not trigger the rules. Ideally I would expect the rule="/not-found-error" to be triggered when an incorrect path is sent. The print statement is never printed. I get the default value. Am I missing something?

from aws_lambda_powertools import Logger, Tracer
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver
from aws_lambda_powertools.event_handler.exceptions import (
    BadRequestError,
    InternalServerError,
    NotFoundError,
    ServiceError,
    UnauthorizedError,
)

tracer = Tracer()
logger = Logger()

app = ApiGatewayResolver()


@app.get(rule="/bad-request-error")
def bad_request_error():
    # HTTP 400
    raise BadRequestError("Missing required parameter")


@app.get(rule="/unauthorized-error")
def unauthorized_error():
    # HTTP 401
    raise UnauthorizedError("Unauthorized")


@app.get(rule="/not-found-error")
def not_found_error():
    # HTTP 404
    print("Not found")
    raise NotFoundError


@app.get(rule="/internal-server-error")
def internal_server_error():
    # HTTP 500
    raise InternalServerError("Internal server error")


@app.get(rule="/service-error", cors=True)
def service_error():
    raise ServiceError(502, "Something went wrong!")
    # alternatively
    # from http import HTTPStatus
    # raise ServiceError(HTTPStatus.BAD_GATEWAY.value, "Something went wrong!")


def handler(event, context):
    return app.resolve(event, context)

doc: reference the existing custom resource handler from python

Is your feature request related to a problem? Please describe.
The developer experience for setting up a CFN custom resource in Python is still quite messy, and developers may come to Powertools to solve this rather than https://github.com/aws-cloudformation/custom-resource-helper

Describe the solution you'd like

In the Java Powertools we have a very clean implementation we could bring to Python (aws-powertools/powertools-lambda-java#560). As there is already a solution for this, crhelper (https://github.com/aws-cloudformation/custom-resource-helper), and a blog post on crhelper, maybe we can reference this in our documentation.

Describe alternatives you've considered

Additional context

Add Rust support for AWS Lambda Powertools

Runtime:
Rust

Is your feature request related to a problem? Please describe
In the Serverless Lens for the Well Architected Framework, we suggest several best practices for observability such as structured logging, distributed tracing, and monitoring of metrics. The suite of utilities of the AWS Lambda Powertools help developers with the adoption of best practices.
Today, AWS Lambda Powertools is available for the Python and Java runtimes.

In addition to the Powertools crate, the documentation site can help people new to Rust and AWS Lambda get up and running.

Describe the solution you'd like
AWS Lambda Powertools available for Provided runtimes, written in Rust.

Describe alternatives you've considered
There are almost no alternatives or guidance that I have seen so far.

If you provide guidance, is this something you'd like to contribute?
Absolutely yes.

Additional context
The Rust Powertools will follow the same tenets as the other languages, and folks can expect the same core utilities/ functionalities being supported.

This request is based on #26 :)

Graduate powertools for python out of `awslabs` into `aws`

Runtime:

Python initially, but over time other runtimes could be promoted.

Is your feature request related to a problem? Please describe

Currently all of the Powertools projects are within the awslabs org, which often includes very popular yet potentially abandoned projects (due to no official funding). Now that a significant number of large organizations contribute to or use Powertools in production, it would be a relief to see these repos promoted up and given official support from the AWS organisation.

Describe the solution you'd like

Ideally aws-lambda-powertools would move into its own organization to allow for more than just AWS maintainers, OR at least be promoted to the main aws org.

Describe alternatives you've considered

Nothing much the community can do, other than potentially fork the repo under the AWS Community Builders?

If you provide guidance, is this something you'd like to contribute?

Put this under its own organization to allow for the community to be more involved.

Additional context

Repos with a lot of interest (stars, downloads and usage), but no active maintainers:

RFC: [SQS Batch Processing] Ability to move Message from Batch to DLQ when certain types of exception occurs

Key information

  • RFC PR: (leave this empty)
  • Related issue(s), if known: #21
  • Area: (i.e. Tracer, Metrics, Logger, etc.) SQS Batch Processing
  • Meet tenets: (Yes/no) Yes
  • Approved by: ''
  • Reviewed by: ''

Summary

During batch processing of SQS messages, there can be messages in the batch which fail processing for reasons where the user will not want them retried, but rather moved to a DLQ associated with the SQS queue or deleted entirely. An example might be a message failing business validation: it won't make sense to let it retry until it expires, so we can simply move such a message directly to a DLQ.

We could enhance SQS batch processing to accept a list of exceptions/errors. If those exceptions occur during message processing via SqsMessageHandler in Java or record_handler in Python, the utility can take care of moving such messages to the DLQ directly or deleting them entirely based on a config param, instead of moving them back to the queue.

Motivation

This is a fairly common use case. It will take away all the custom logic that users need to build themselves and let them focus on writing the business logic of processing the SQS message instead.

Proposal

As described in the summary, the batch processing utility would accept a list of exception/error types, plus a new flag indicating whether matching messages should be deleted or moved to a DLQ, exposed through the API contract or the annotation/decorator. If one of those exceptions is thrown while processing a message via SqsMessageHandler in Java or record_handler in Python, the utility takes care of moving that message directly to the DLQ or deleting it entirely based on the config param, instead of moving it back to the queue. The default could be to move the message to the DLQ if one exists for the SQS queue.

If this feature should be available in other runtimes (e.g. Python), how would this look like to ensure consistency?

For Python version, since it supports similar batch processing utility with similar UX via decorator and APIs, same capability could be added to it as well.
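
A rough Python sketch, assuming a similar capability were added to the existing sqs_batch_processor decorator (the non_retryable_exceptions and delete_non_retryable_messages parameter names below are illustrative, not existing API):

from aws_lambda_powertools.utilities.batch import sqs_batch_processor


class BusinessValidationError(Exception):
    ...


def record_handler(record):
    if not is_valid(record["body"]):  # is_valid is a placeholder for business validation
        raise BusinessValidationError(record["messageId"])
    return "Success"


@sqs_batch_processor(
    record_handler=record_handler,
    non_retryable_exceptions=(BusinessValidationError,),  # proposed parameter
    delete_non_retryable_messages=False,  # proposed parameter; default: move to DLQ if one exists
)
def lambda_handler(event, context):
    return "Success"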

User Experience

How would customers use it?

For Java:

public class PartialBatchPartialFailureHandler implements RequestHandler<SQSEvent, String> {
    @Override
    @SqsBatch(value = InnerMessageHandler.class, nonRetryableExceptions = {IllegalStateException.class})
    public String handleRequest(final SQSEvent sqsEvent,
                                final Context context) {
        return "Success";
    }

    private class InnerMessageHandler implements SqsMessageHandler<Object> {

        @Override
        public String process(SQSMessage message) {
            if ("some business logic validation".equals("false")) {
                throw new IllegalStateException("2e1424d4-f796-459a-8184-9c92662be6da");
            }
            
            return "Success";
        }
    }
}
Or, with deleteNonRetryableMessageFromQueue set to true, such messages are deleted instead of being moved to the DLQ:

public class PartialBatchPartialFailureHandler implements RequestHandler<SQSEvent, String> {
    @Override
    @SqsBatch(value = InnerMessageHandler.class, nonRetryableExceptions = {IllegalStateException.class}, deleteNonRetryableMessageFromQueue = true)
    public String handleRequest(final SQSEvent sqsEvent,
                                final Context context) {
        return "Success";
    }

    private class InnerMessageHandler implements SqsMessageHandler<Object> {

        @Override
        public String process(SQSMessage message) {
            if ("some business logic validation".equals("false")) {
                throw new IllegalStateException("2e1424d4-f796-459a-8184-9c92662be6da");
            }
            
            return "Success";
        }
    }
}

Similar support can be added to the higher-level API supported by the utility:

public class PartialBatchPartialFailureHandler implements RequestHandler<SQSEvent, Object> {
    @Override
    public Object handleRequest(final SQSEvent sqsEvent,
                                final Context context) {

        return SqsUtils.batchProcessor(sqsEvent, message -> {
                    if ("some business logic validation".equals("false")) {
                        throw new IllegalStateException("2e1424d4-f796-459a-8184-9c92662be6da");
                    }

                    return "Success";
                },
                IllegalStateException.class);
    }
}

So in the above examples, if IllegalStateException is thrown from the handler while processing a message, that message will be automatically moved to a DLQ or deleted based on the value of the deleteNonRetryableMessageFromQueue flag. By default, the utility will attempt to move the message to the DLQ if one exists.

Any configuration or corner cases you'd expect?
NA

Demonstration of before and after on how the experience will be better

Refer to the summary above. Today, all the logic of deciding whether to move a message to the DLQ or delete it has to be implemented by users, writing a lot of custom code around it.

Drawbacks

Does it increase the complexity of the utility and add more code to maintain?

No, since we already depend on the SQS client today. It's just additional functionality within the utility.

Rationale and alternatives

  • What other designs have been considered? Why not them?
  • What is the impact of not doing this? Customers are forced to write and maintain such custom logic themselves.

Unresolved questions

Optional, stash area for topics that need further development e.g. TBD

Allow APIGatewayResolver to handle Custom Domain with Mapping and still work

Is your feature request related to a problem? Please describe.
Have opened as a feature request as I believe this is not exactly a bug, perhaps an oversight in the ApiGatewayResolver design (happy to discuss more).
A couple of months ago I developed an API for a project whose API gateway was associated with a custom domain.
This week I built a new API extension (a new gateway) using AWS Lambda Powertools for Python and applied several routes to the one Lambda using the API Gateway Resolver, with the intention of adding it to the same custom domain, since I want the API Gateway to be hosted on one common DNS domain. When associating a second gateway with a custom domain you must associate a mapping for the additional gateways so the API paths do not collide and everything works.

Within my Lambda resolver setup for this second gateway, if I have a resolver route of "/status" set up, this works fine when it is mounted as-is on the root of the domain. If I add it to a custom domain as a second gateway I need to add a "mapping"; for example's sake, let's say I choose "unique". The AWS API Gateway event "path" when I call the API as "https://mycustomdomain.com/unique/status" is set to "/unique/status", which means the Powertools Resolver will respond with a "404, NOT FOUND" since that path is not set up within the Resolver routes.

I have noticed that the "resource" path in the event correctly holds the route as "/status" but the path holds the route as "/unique/status"

This is a complicated one - the ApiGatewayResolver does not allow me to mount the gateways developed with this component on any custom domain with a mapping and have the lambda API actually work - I actually kind of think this is a bug but not raising as such since the implementation seems perfectly reasonable.

Describe the solution you'd like
What I would ideally like is the freedom to be able to mount my Python API to a custom domain using any mapping I choose and still have the API resolver find my routes within the lambda code correctly.

The current implementation uses the "path" of the ApiGateway Lambda event which houses the complete API path including the mapping which breaks the resolver.

Describe alternatives you've considered
As a workaround I can simply change my routes to include the proposed mapping, but then this stops me from being able to use API Gateway configuration to remap an API in the future and actually have it work without changing my code, which is not ideal given this is a feature of using services like the AWS API Gateway.
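
Another possible interim workaround is to strip the mapping prefix from the event path before handing it to the resolver - a minimal sketch, assuming the mapping is known ahead of time ("unique" below is a placeholder):

from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver

MAPPING_PREFIX = "/unique"  # the custom domain API mapping configured in API Gateway

app = ApiGatewayResolver()


@app.get("/status")
def status():
    return {"ok": True}


def lambda_handler(event, context):
    # strip the mapping so route matching sees "/status" rather than "/unique/status"
    path = event.get("path", "")
    if path.startswith(MAPPING_PREFIX):
        event["path"] = path[len(MAPPING_PREFIX):] or "/"
    return app.resolve(event, context)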

Additional context
This kind of also brings into question how the Event content is generated and passed to lambda by the ApiGateway since one could argue it makes no sense that the "path" also includes the logical "mapping" from the API gateway Custom Domain configuration (not an argument I want to start but a consideration given the logical config nature of this scenario).

I have taken a look at the event structure from this configuration and notice the "resource" contains a correct path that matches the route I have in my Lambda Resolver routes in python code but is possibly not ideal.

RFC: Function Idempotency Helper

Key information

Summary

Helper to facilitate writing Idempotent Lambda functions.
The developer would specify (via JMESPath) which value from the event will be used as a unique execution identifier, then this helper would search a persistence layer (e.g. DynamoDB) for that ID; if present, get the return value and skip the function execution, otherwise run the function normally and persist the return value + execution ID.

Motivation

Idempotency is a very useful design characteristic of any system. It enables the seamless separation of successful and failed executions, and is particularly useful in Lambdas used by AWS Step Functions. It is also a design principle in the AWS Well-Architected Framework - Serverless Lens.

Broader description of this idea can be found here

Proposal

Define a Python decorator @idempotent which would receive as arguments (a) the JMESPath of the event key to use as execution ID, and (b) an optional storage backend configuration (e.g. DynamoDB table name, or Elasticsearch URL + index).

This decorator would wrap the function execution in the following way (pseudo-python):

import functools

import jmespath

from aws_lambda_powertools.idempotent import PersistenceLayer  # proposed module


def idempotent(event_key, persistence_config):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(event, context, *args, **kwargs):
            persistence = PersistenceLayer(persistence_config)
            key = jmespath.search(event_key, event)
            persistence.find(key)

            if persistence.executed_successfully():
                return persistence.result()

            try:
                result = func(event, context, *args, **kwargs)
                persistence.save_success(key, result)
                return result
            except Exception as e:
                persistence.save_error(key, e)
                raise

        return wrapper

    return decorator

Usage then would be similar to:

from aws_lambda_powertools.idempotent import idempotent  # proposed module

@idempotent(event_key='Event.UniqueId', persistence_config='dynamodb://lambda-idemp-table')
def handler(event, context):
  # Normal function code here
  return {'result': 'OK', 'message': 'working'}

The decorator would first extract the unique execution ID from the Lambda event using the JMESPath provided, then check the persistence layer for a previous successful execution of the function and - if found - get the previous returned value, de-serialize it (using base64 or something else) and return it instead; otherwise, execute the function handler normally, catch the returned object, serialize + persist it and finally return.

The Persistence layer could be implemented initially with DynamoDB, and either require the DDB table to exist before running the function, or create it during the first execution. It should be designed in such a way as to allow different backends in the future (e.g. Redis for VPC-enabled Lambdas).

Drawbacks

This solution could have noticeable performance impacts on the execution of Lambda functions. Every execution would require at least one and at most two accesses to the persistence layer.

No additional dependencies are required - DynamoDB access is provided by boto3, and object serialisation can use Python's native base64 encode/decode.

Rationale and alternatives

  • What other designs have been considered? Why not them?
    No other designs considered at the moment. Open to suggestions.

  • What is the impact of not doing this?
    Implemention of idempotent Lambda functions will have to be done 'manually' in every function.

Unresolved questions

  • How to make the persistence layer access as fast as possible?
  • Which other persistence layers to consider (DynamoDB, ElasticSearch, Redis, MySQL)?

Official support for MyPy

Original author: @huonw

Runtime e.g. Python, Java, all of them. Python

Is your feature request related to a problem? Please describe.

As per the original Lambda Powertools Python Discussions, MyPy isn't officially supported and can fail when type checking Tracer, as an example.

Describe the solution you'd like

Lambda Powertools to consider MyPy customers and include MyPy in the pipeline to make it compliant :-)

Describe alternatives you've considered

Ignoring type errors

Is this something you'd like to contribute if you had guidance?

Additional context

Add exponential back-off with jitter in SQS Batch Processor in case of temporary errors

Is your feature request related to a problem? Please describe.
We use the SQS Batch processing from Powertools, and when a message fails, it stays in the queue until it has been received enough times that it moves to the DLQ instead. The gap between processing attempts for this failed message is equal to or greater than the queue's configured visibilityTimeout. We can override this visibilityTimeout for individual messages on failure during exception handling.

Describe the solution you'd like
I would like that SQS Batch Processor library provides this functionality by default so that we don't need to write this ourselves. This also ties nicely with another feature request of mine: #21.

Describe alternatives you've considered
Writing my own utility code using the references I found on the internet like this: https://ivan-site.com/2018/06/exponential-backoff-in-sqs/.
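
For illustration, a minimal hand-rolled version of that approach could look like the following (the queue URL and helper names are placeholders), calling ChangeMessageVisibility with a jittered, exponentially growing timeout when a record fails:

import random

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue"  # placeholder


def backoff_with_jitter(receive_count: int, base: int = 5, cap: int = 900) -> int:
    # full-jitter backoff: random delay between 0 and min(cap, base * 2^receive_count) seconds
    return random.randint(0, min(cap, base * 2 ** receive_count))


def on_failure(record: dict) -> None:
    # called from exception handling for an individual failed record
    receive_count = int(record["attributes"]["ApproximateReceiveCount"])
    sqs.change_message_visibility(
        QueueUrl=QUEUE_URL,
        ReceiptHandle=record["receiptHandle"],
        VisibilityTimeout=backoff_with_jitter(receive_count),
    )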

Additional context
This is a pretty common use case.

Extend AppSyncResolver functionality to support splitting resolvers into multiple files

Is your feature request related to a problem? Please describe.
Similar to https://github.com/awslabs/aws-lambda-powertools-python/issues/644, I was trying to find a way to split the resolvers into different files as our resolvers are too cluttered in one file currently.

Describe the solution you'd like
An appsync version like app.register_blueprint(my_type_resolvers) would be nice. Another possibility would be to have something like:

app.register_resolvers(type_name="MyType", resolver_class=MyType)

Describe alternatives you've considered
We've been adding all resolvers in the same file currently, which is becoming a bit cluttered.

clear logger keys between lambda invocations

Runtime: Python

Is your feature request related to a problem? Please describe
Currently, logger keys are kept between Lambda invocations unless I manually call remove_keys or structure_logs.

def handler(event, context):
  logger.info('log info') # will include key1 & key2 after first invocation

  logger.append_keys(key1=body['something'])

  logger.info('log log log')

  # some code...

  logger.append_keys(key2=some_value)

  logger.info('log log log')

Currently, if I want to remove all keys added during the lambda I need to run at the start of the lambda:

logger.remove_keys(['key1', 'key2'])
# or
logger.structure_logs()

The problem with the first option is that I don't want to specify all keys manually, I might even forget some keys.

The problem with the second option is that it will be removed in the next major version and it would also delete lambda injected context (if I use @logger.inject_lambda_context)

Describe the solution you'd like
An easy way to remove all keys at the start of a lambda execution

  • If using @logger.inject_lambda_context remove all keys by default (can be changed with parameter) at the start of each lambda
  • create a new method: logger.restart_keys(exclude=[])
    logger.restart_keys() # will remove all keys
    logger.restart_keys(['key2']) # will remove all keys except key2
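
In the meantime, a hand-rolled workaround is to track the keys you append and clear them at the start of each invocation - a sketch using only the existing append_keys/remove_keys API:

from aws_lambda_powertools import Logger

logger = Logger()
_appended_keys = set()


def append_keys(**kwargs):
    _appended_keys.update(kwargs.keys())
    logger.append_keys(**kwargs)


def handler(event, context):
    # clear anything appended by a previous invocation of this execution environment
    logger.remove_keys(list(_appended_keys))
    _appended_keys.clear()

    append_keys(key1=event.get("something"))
    logger.info("log log log")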
    

If you provide guidance, is this something you'd like to contribute?
Yes

Additional context
Related issue: aws-powertools/powertools-lambda-python#407 (comment)

RFC: Feature toggles rule engine - make AWS AppConfig great again

Key information

Maintainers Note: Update RFC after docs have been created as implementation was improved

Summary

Simplify the usage of feature toggles with AppConfig. Take it to the next level with a rule engine that provides calculated values of feature toggles according to input context.

Motivation

Why are we doing this? What use cases does it support? What is the expected outcome?

App config is great but it's very raw/barebones. This feature will encourage people to use AppConfig.
I'd like to build feature toggles on top of the current AppConfig utility.

Proposal

Build a simple feature toggle rule engine. The rules will be part of the JSON schema that is uploaded to AppConfig. The rule engine will use the existing Powertools utility to load and parse the JSON from AppConfig.
It will provide a simple function (same API as Launch Darkly's) for getting a feature toggle by name. The function will also receive a context dict which will be matched against a set of rules. The feature will have a default boolean value; however, if the context data matches a rule in the schema JSON, the returned value will be the value defined in the matched rule.
This allows you to have a feature toggle turned off by default but turn it on for a specific user or customer in a specific region, etc.
The rules will accept any key/value context.
Supported actions can be equals, starts with, regex match, ends with, in a list, and many more. It's very easy to extend the engine.
An example rule: if customer_name equals 'coca-cola' and username starts with 'admin', turn the feature on; for all other cases, the feature is off. See the configuration language below.
This type of API will take AppConfig to the next level. It's very much barebones at the moment.
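
A minimal sketch of the matching step described above (the action names follow the example configuration further down; this is not the final implementation):

def _match(action: str, context_value, rule_value) -> bool:
    if action == "EQUALS":
        return context_value == rule_value
    if action == "STARTSWITH":
        return str(context_value).startswith(rule_value)
    if action == "ENDSWITH":
        return str(context_value).endswith(rule_value)
    if action == "IN":
        return context_value in rule_value
    return False


def rule_matches(rule: dict, rules_context: dict) -> bool:
    # all restrictions must match for the rule's value to override the feature's default
    return all(
        _match(r["action"], rules_context.get(r["key"]), r["value"])
        for r in rule.get("restrictions", [])
    )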

If this feature should be available in other runtimes (e.g. Java, Typescript), how would this look like to ensure consistency?
It can be done in other languages, it's a very simple rule engine that I've already written in Python.

User Experience

How would customers use it?

conf_store: ConfigurationStore = ConfigurationStore(
    environment='test_env',
    service='test_app',
    conf_name="test_conf_name",
    cache_seconds=600,
)

toggle: bool = conf_store.get_feature_toggle(
    feature_name='my_feature',
    rules_context={'customer_name': 'coca-cola', 'username': 'abc'},
    default_value=False,
)

The default value parameter in the API is the default value to return if the feature doesn't exist in the json schema.

Any configuration or corner cases you'd expect?

Example configuration:
{
  "log_level": "DEBUG",
  "features": {
    "my_feature": {
      "default_value": false,
      "rules": [
        {
          "name": "set the toggle on for customer coca-cola and username abc",
          "default_value": true,
          "restrictions": [
            {
              "action": "EQUALS",
              "key": "customer_name",
              "value": "coca-cola"
            },
            {
              "action": "EQUALS",
              "key": "username",
              "value": "abc"
            }
          ]
        }
      ]
    }
  }
}

Drawbacks

The current solution supports only boolean values for feature toggles. This can of course be expanded rather easily if required.

Why should we not do this?
Don't see a reason not to :)

Do we need additional dependencies? Impact performance/package size?
Solution can be based upon pydantic (very easy) or regular json parsing.

Rationale and alternatives

An alternative is to use a third-party tool (which is not free) like Launch Darkly.

  • What other designs have been considered? Why not them?
    You can use Launch Darkly. However, this solution is very simple and provides the same client-side API that Launch Darkly provides, backed by AWS AppConfig.

  • What is the impact of not doing this?

Unresolved questions

Optional, stash area for topics that need further development e.g. TBD

Lambda Powertools Idempotency Support for SQS Batch Processor

We use the Powertools SQS Batch Processor utility with our Lambda function. The documentation suggests implementing idempotency into the function logic. We were trying to leverage the Powertools Idempotent feature. There seem to be some limitations with this approach, as the Idempotency feature only checks against the entire SQS batch of records instead of each individual record in the batch, which kind of defeats the purpose of the SQS Batch Processor.

Is there any way we can use the Powertools idempotency utility in our Lambda together with the SQS batch processor utility, so that the idempotency check happens per individual record rather than for the full batch?

Reading through the source code, it seems like it assumes it will only wrap the Lambda handler, so it seems impossible?
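
For illustration, record-level idempotency would need a function-level decorator applied to the record handler rather than to the Lambda handler. A rough sketch, assuming an idempotent_function-style decorator is available in your Powertools version (check before relying on it):

from aws_lambda_powertools.utilities.batch import sqs_batch_processor
from aws_lambda_powertools.utilities.idempotency import (
    DynamoDBPersistenceLayer,
    IdempotencyConfig,
    idempotent_function,
)

persistence_layer = DynamoDBPersistenceLayer(table_name="IdempotencyTable")
config = IdempotencyConfig(event_key_jmespath="messageId")


@idempotent_function(data_keyword_argument="record", config=config, persistence_store=persistence_layer)
def process(record: dict):
    return do_something_with(record["body"])  # do_something_with is a placeholder


def record_handler(record):
    # the batch processor passes the record positionally; forward it as a keyword
    # argument so the idempotency wrapper can key off it
    return process(record=record)


@sqs_batch_processor(record_handler=record_handler)
def lambda_handler(event, context):
    return {"statusCode": 200}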

Thanks.

feat(data-classes): Amazon MQ as an event source for AWS Lambda

Is your feature request related to a problem? Please describe.
Implementing a Python Lambda for Amazon MQ (ActiveMQ and RabbitMQ)

Describe the solution you'd like
Implement two data classes, for RabbitMQ and ActiveMQ, along with some helper functions such as handling the base64 decoding of the message payload.
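
For illustration, a minimal sketch of the ActiveMQ side could look like this, building on the DictWrapper base used by the existing data classes and matching the example event below (class and property names are illustrative):

import base64
from typing import Iterator

from aws_lambda_powertools.utilities.data_classes.common import DictWrapper


class ActiveMQMessage(DictWrapper):
    @property
    def message_id(self) -> str:
        return self["messageID"]

    @property
    def message_type(self) -> str:
        return self["messageType"]

    @property
    def decoded_data(self) -> bytes:
        # helper: base64-decode the raw "data" payload
        return base64.b64decode(self["data"])


class ActiveMQEvent(DictWrapper):
    @property
    def messages(self) -> Iterator[ActiveMQMessage]:
        for message in self["messages"]:
            yield ActiveMQMessage(message)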

Describe alternatives you've considered

Additional context
Some useful documentation and blog posts

Active MQ

{
  "eventSource": "aws:amq",
  "eventSourceArn": "arn:aws:mq:us-west-2:112556298976:broker:test:b-9bcfa592-423a-4942-879d-eb284b418fc8",
  "messages": {
    [
      {
        "messageID": "ID:b-9bcfa592-423a-4942-879d-eb284b418fc8-1.mq.us-west-2.amazonaws.com-37557-1234520418293-4:1:1:1:1",
        "messageType": "jms/text-message",
        "data": "QUJDOkFBQUE=",
        "connectionId": "myJMSCoID",
        "redelivered": false,
        "destination": {
          "physicalname": "testQueue" 
        }, 
        "timestamp": 1598827811958,
        "brokerInTime": 1598827811958,
        "brokerOutTime": 1598827811959
      },
      {
        "messageID": "ID:b-9bcfa592-423a-4942-879d-eb284b418fc8-1.mq.us-west-2.amazonaws.com-37557-1234520418293-4:1:1:1:1",
        "messageType":"jms/bytes-message",
        "data": "3DTOOW7crj51prgVLQaGQ82S48k=",
        "connectionId": "myJMSCoID1",
        "persistent": false,
        "destination": {
          "physicalname": "testQueue" 
        }, 
        "timestamp": 1598827811958,
        "brokerInTime": 1598827811958,
        "brokerOutTime": 1598827811959
      }
  ]
}

Rabbit MQ

{
  "eventSource": "aws:rmq",
  "eventSourceArn": "arn:aws:mq:us-west-2:112556298976:broker:pizzaBroker:b-9bcfa592-423a-4942-879d-eb284b418fc8",
  "rmqMessagesByQueue": {
    "pizzaQueue::/": [
      {
        "basicProperties": {
          "contentType": "text/plain",
          "contentEncoding": null,
          "headers": {
            "header1": {
              "bytes": [
                118,
                97,
                108,
                117,
                101,
                49
              ]
            },
            "header2": {
              "bytes": [
                118,
                97,
                108,
                117,
                101,
                50
              ]
            },
            "numberInHeader": 10
          },
          "deliveryMode": 1,
          "priority": 34,
          "correlationId": null,
          "replyTo": null,
          "expiration": "60000",
          "messageId": null,
          "timestamp": "Jan 1, 1970, 12:33:41 AM",
          "type": null,
          "userId": "AIDACKCEVSQ6C2EXAMPLE",
          "appId": null,
          "clusterId": null,
          "bodySize": 80
        },
        "redelivered": false,
        "data": "eyJ0aW1lb3V0IjowLCJkYXRhIjoiQ1pybWYwR3c4T3Y0YnFMUXhENEUifQ=="
      }
    ]
  }
}
