
aws-otel-lambda's Issues

GRPC Receiver Timeout in Python Lambda running in a Container

I have a Python lambda deployed and running in a container, similar to this issue "publish Docker images". I have managed to compile and place the aws-otel-lambda extension in my container correctly as far as I can tell. However, when I execute the function, the lambda times out waiting for the gRPC receiver to start. The timeout is set to 30 seconds.

2022-01-28T20:31:58.661Z	info	builder/receivers_builder.go:68	Receiver is starting...	{"kind": "receiver", "name": "otlp"}
2022-01-28T20:31:58.661Z	info	otlpreceiver/otlp.go:68	Starting GRPC server on endpoint 0.0.0.0:4317	{"kind": "receiver", "name": "otlp"}
EXTENSION	Name: collector	State: Ready	Events: [INVOKE,SHUTDOWN]
2022/01/28 20:32:28 [collector] Received event: {
	"eventType": "SHUTDOWN",
	"deadlineMs": 1643401950589,
	"requestId": "",

etc

My collector config:

# collector.yaml in the root directory
# Set an environment variable 'OPENTELEMETRY_COLLECTOR_CONFIG_FILE' to
# '/var/task/collector.yaml'

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logging:
    loglevel: debug
  # awsxray:

#enables output for traces
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging]

I have a hunch I need to do something else with my container to enable this, but I'm not sure what. Any thoughts would be appreciated.
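For what it's worth, one quick diagnostic (a hedged sketch, not part of the layer) is to probe from inside the handler whether anything is actually listening on the extension's OTLP gRPC port; the logs above show the receiver binding to 0.0.0.0:4317, so host and port below assume that default.

import socket

def receiver_is_up(host="localhost", port=4317, timeout=1.0):
    # Returns True if something accepts TCP connections on the OTLP gRPC port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("OTLP gRPC receiver reachable:", receiver_is_up())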

[nodejs] tracer.startSpan() occasionally returns non-recording span with aws-otel-nodejs 1.0 layer

Description

We use the aws-otel-nodejs version 1.0 layer with our node lambdas to create Open Telemetry span events.
We have observed that, occasionally, telemetry does not make it to Honeycomb for a lambda invocation.
We have noticed that, when this happens:

  • The Span returned by the tracer's startSpan() method is a NonRecordingSpan object.

  • Telemetry is typically fine for invocations of the lambda before and after that lambda invocation, seemingly within the same lambda container.

We are able to reproduce this issue with a cut-down lambda that takes an event of records, and uses a simple otel wrapper to create a span for each record and add an event for that record. The lambda is triggered by an SQS queue, to which we stream large numbers of messages (via a producer lambda) to make it easy to see the issue. It's not quite the same as with our normal setup, where messages stream intermittently, but the issue symptoms are the same.

Our tracer is obtained on each invocation of the lambda by:
const tracer = otelApi.trace.getTracer('fof-1852-logger');
And our otel wrapper does the following with that tracer when creating a span for a particular string message:

    let span: Span | undefined;
    try {
      const name = message.split('\n')[0];
      span = this.tracer.startSpan(`${level} : ${name}`);
      try {
        // log span state for monitoring purposes, to see if recording
        console.info(`startSpan isRecording=${span.isRecording()} spanContext=${JSON.stringify(span.spanContext())} message="${message} span=${JSON.stringify(span)}"`);
      } catch (e) {
        // note the non-first spans within the lambda invocation have recursive structure, when recording
        console.info(`startSpan isRecording=${span.isRecording()} spanContext=${JSON.stringify(span.spanContext())} message="${message} span=${e}"`);
      }
      span.addEvent(name, {
        level,
        message,
      });
    } catch (error) {
      console.error(`Error adding event: ${error}`);
      console[level.toLowerCase() as keyof Omit<Console, 'Console'>](message);
    } finally {
      if (span !== undefined) {
        span.end();
      }
    }

We see in CloudWatch that some invocations of the lambda log startSpan isRecording=false (and with spanContext={traceFlags: 0, ...}) and these naturally do not make it to Honeycomb.

However, it is difficult to debug this further and understand the cause.

Please find attached a more detailed description of the issue, steps to recreate it using the attached lambda code, and an example of the issue when it occurs.

tracer occasionally returns non-recording span.pdf
fof-1852-logger.zip
fof-1852-producer.zip

Support for OTEL_PROPAGATORS other than awsxray

Is your feature request related to a problem? Please describe.
At the moment, Python lambdas only support AWS X-Ray trace propagation. Setting the OTEL_PROPAGATORS environment variable to anything other than awsxray is ignored.

Describe the solution you'd like
Support for other context propagators, such as W3C Trace Context and B3, by setting OTEL_PROPAGATORS=tracecontext, for example.

Describe alternatives you've considered
I managed to get it to work with tracecontext by cloning the otel-instrumentation script, adding environ["OTEL_PROPAGATORS"] = "tracecontext", and setting the AWS_LAMBDA_EXEC_WRAPPER env variable to point to it (see the sketch below).
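For reference, a minimal sketch of the Python side of that cloned wrapper (the exact file the layer invokes is not shown here, so treat the placement as illustrative): the key point is that the variable has to be set before the layer's auto-instrumentation initializes the SDK, which is why setting it inside the handler is too late.

# Force W3C trace context before the layer's auto-instrumentation reads its configuration.
from os import environ

environ["OTEL_PROPAGATORS"] = "tracecontext"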

[Python] `Tracecontext` OTel propagator config generates an additional X-Ray span

I am using v1.3.0 of the Python layer and configured the lambda to use tracecontext propagation, but I can see the lambda reporting an additional span that seems to be related to X-Ray, which shouldn't be there.

Notable additional config:

    OTEL_PROPAGATORS: "tracecontext"
    OTEL_TRACES_SAMPLER: "Always_On"

Lambda tracing is turned off (PassThrough).

Code:

def consumer(event, lambda_context):

    context = extract(event['headers'])

    # Create top level transaction representing the lambda work
    with tracer.start_as_current_span("consumer-function-top-level", context=context, kind=SpanKind.SERVER):

        # Some other internal work performed by the lambda
        with tracer.start_as_current_span("consumer-function-some-internal-work", kind=SpanKind.INTERNAL):

            time.sleep(1)

        time.sleep(0.5)

    return {'statusCode': 200}

This should produce 2 spans: the top-level consumer-function-top-level and the nested consumer-function-some-internal-work. Instead, I am seeing 3 spans in the CloudWatch logs, with Span #2 (065c07aa6cc4916d) being the one that shouldn't be there:



2021-07-24T04:36:51.316Z	INFO	loggingexporter/logging_exporter.go:41	TracesExporter	{     "#spans": 3 }
2021-07-24T04:36:51.316Z	DEBUG	loggingexporter/logging_exporter.go:51	ResourceSpans #0
Resource labels:
-> telemetry.sdk.language: STRING(python)
-> telemetry.sdk.name: STRING(opentelemetry)
-> telemetry.sdk.version: STRING(1.3.0)
-> cloud.region: STRING(ap-southeast-2)
-> cloud.provider: STRING(aws)
-> faas.name: STRING(aws-python-api-worker-project-dev-consumer)
-> faas.version: STRING($LATEST)
-> service.name: STRING(aws-python-api-worker-project-dev-consumer)
InstrumentationLibrarySpans #0
InstrumentationLibrary handler

Span #0
Trace ID       : ac4d7110c6d22e985b5db28e8ecca769
Parent ID      : 3c374013084a679c
ID             : 522890b38766e922
Name           : consumer-function-some-internal-work
Kind           : SPAN_KIND_INTERNAL
Start time     : 2021-07-24 04:36:49.813522476 +0000 UTC
End time       : 2021-07-24 04:36:50.814894967 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message :

Span #1
Trace ID       : ac4d7110c6d22e985b5db28e8ecca769
Parent ID      : e24504245b7cff0e
ID             : 3c374013084a679c
Name           : consumer-function-top-level
Kind           : SPAN_KIND_SERVER
Start time     : 2021-07-24 04:36:48.812135568 +0000 UTC
End time       : 2021-07-24 04:36:51.315562096 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message :

Span #2 <==== EXTRA SPAN!!!
Trace ID       : ac4d7110c6d22e985b5db28e8ecca769
Parent ID      : ad1bb604de1da312
ID             : 065c07aa6cc4916d
Name           : handler.consumer
Kind           : SPAN_KIND_SERVER
Start time     : 2021-07-24 04:36:48.811673087 +0000 UTC
End time       : 2021-07-24 04:36:51.315622948 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message :
Attributes:
-> faas.execution: STRING(e759abdb-6119-4f91-afe0-716a71276825)
-> faas.id: STRING(arn:aws:lambda:ap-southeast-2:401722391821:function:aws-python-api-worker-project-dev-consumer)
-> faas.name: STRING(aws-python-api-worker-project-dev-consumer)
-> faas.version: STRING($LATEST)


Only run smoke tests for released layer

Today, in the release workflow, we run a smoke test after making any public release. We test for the layer's presence in all regions and run a quick test to validate it. However, we do this for each layer in each region, which is problematic because:

  1. It couples failures for one layer type to release workflows for all layers
  2. This is wasteful and causes the workflows to be very long, scaling linearly with the number of layer types.

We should only run directly impacted groups of smoke tests per release.

Connecting AWS Lambda with ADOT for Grafana Labs

The documentation site for Grafana Labs provides a config YAML that is understood by the Grafana Agent, but I need a sample/template OpenTelemetry Collector config to connect to a Grafana Labs account. For AppDynamics and Honeycomb the given examples are easy to understand, but for Grafana they are not.

OTel collector to report `Initialization` and `Overhead` segments of Lambda calls

Is your feature request related to a problem? Please describe.
At the moment, the only traces reported by the OTel collector are the ones for the function itself, i.e. the Invocation segment of the Lambda runtime. X-Ray supports reporting the additional segments the Lambda runtime goes through, Initialization and Overhead, and it would be valuable to report on these segments as well.

Describe the solution you'd like
OTel should report the full picture of the Lambda lifecycle, including Overhead and Initialization, similar to how X-Ray reports it:
(screenshot attached)

Python lambdas: with tracecontext propagation and autoinstrumentation, the top level spans are not created

I can see that auto-instrumentation works for requests calls, but to make the spans nest properly I had to manually define the top-level span for each lambda for the tracing to work. What am I missing? (See the note after the code below.)

def producer(event, lambda_context):

    context = extract(event['headers'])

    tracer = trace.get_tracer(__name__)

    # Create the top-level transaction - auto-instrumentation doesn't pick it.
    with tracer.start_as_current_span(name="producer-function-top-level", context=context, kind=SpanKind.SERVER):

        # The following is autodetected as a child span fine as part of `requests` library detection
        request_to_downstream = requests.Request(
            method="GET",
            url=URL,
            headers={"Content-Type": "application/json"},
        )
        session = requests.Session()
        res = session.send(request_to_downstream.prepare())

    return {'statusCode': res.status_code}
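For comparison, here is a hedged sketch (not from the reporter's code) of applying the handler instrumentation explicitly: the opentelemetry-instrumentation-aws-lambda package ships an instrumentor that wraps the handler and creates the top-level span for each invocation, which appears to be the piece missing here.

from opentelemetry.instrumentation.aws_lambda import AwsLambdaInstrumentor

# Wraps the Lambda handler so each invocation gets a top-level span automatically.
AwsLambdaInstrumentor().instrument()

Whether the layer's auto-instrumentation applies this wrapping when the propagator is switched to tracecontext is exactly what this issue is asking about.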

Traces broken when using third party exporter with Lambda

Is your feature request related to a problem? Please describe.

When using the AWS OTel Collector with a third-party exporter (New Relic), only spans related to transactions within the lambda are exported. All other spans that are part of the trace are not available for export. For example, for a request that starts in API Gateway and invokes a Lambda, we are only able to export the spans related to the lambda and have no way to configure an exporter for the spans related to API Gateway. This results in broken and incomplete traces.

Describe the solution you'd like

It is critical that we can export a complete trace in order to use this feature effectively.

Describe alternatives you've considered

We need to be able to configure third-party exporters for all services that can invoke lambdas, either individually or, more likely, from something like X-Ray.


Spans are not sent when AWS tracing (X-Ray) is disabled

Description:
I've instrumented a Python Lambda with ADOT instrumentation, but I've deactivated AWS X-Ray tracing.
I'm invoking the Lambda from the AWS Console (test invoke). I can see _X_AMZN_TRACE_ID = "Root=1-605a3357-3fdb2ab603403e7e1dcdfb10;Parent=783f1fb448fc1be2;Sampled=0" in the logs, so I assume a trace/span was generated but not exported to the collector. I've discovered that when Sampled=0, traces/spans are not exported.

What's the right way to export spans when AWS X-Ray tracing is disabled and/or the function is invoked without context propagation? (One option is sketched below.)
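One hedged option (it trades off joining the upstream X-Ray trace for always recording) is to stop following the upstream sampling decision and sample everything, so spans are recorded and exported even when _X_AMZN_TRACE_ID carries Sampled=0. With a manually configured TracerProvider that looks like:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ALWAYS_ON

# ALWAYS_ON ignores the parent's Sampled=0 flag, so spans are recorded and exported.
trace.set_tracer_provider(TracerProvider(sampler=ALWAYS_ON))

With the layer's auto-instrumentation, the equivalent knob would be the OTEL_TRACES_SAMPLER environment variable (e.g. always_on), assuming the layer honors it.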

Support for Custom Runtime

Hello. Thank you for providing such a nice lambda layer.
I'm using a custom runtime for my lambda function (in Rust) and would now like to trace the app via OpenTelemetry.
After looking around this repo, I couldn't find any way to use ADOT with a custom runtime.
It would be helpful if this lambda layer could support custom runtimes.

Python Lambda instrumentation fails with error: Unable to import module 'aws_observability': cannot import name 'AwsXRayIdsGenerator'

I built and deployed the Python sample app and layer, and when I executed the function, I got this error:

Unable to import module 'aws_observability': cannot import name 'AwsXRayIdsGenerator'

I followed every step documented in https://github.com/aws-observability/aws-otel-lambda/tree/main/sample-apps/python-lambda

Attached is the full log.
log-events-viewer-result.txt

In a single view of AWS X-Ray, I should see an end-to-end tracing path of Lambda functions, API Gateway, and DynamoDB

Is your feature request related to a problem? Please describe.

My goal or assumption: in a single view of AWS X-Ray, I should see an end-to-end tracing path across Lambda functions, API Gateway, and DynamoDB.

Question:
What is the expected tracing result when using the extension and layer with OpenTelemetry JavaScript and ADOT Collector v0.9.1?

Setup

  • functional:
    HTTP-Client --calls --> API-Gateway endpoint "/dev/views" --calls--> Lambda Function --calls--> API-Gateway endpoint "/dev/views/{id}"--calls--> Lambda Function --calls--> Dynamodb

  • installation: AWS Distro for OpenTelemetry, by deploying the Lambda layer ARN (includes OpenTelemetry JavaScript and ADOT Collector v0.9.1)

  • runtime: nodejs 14.x

  • no config.yml deployed, defaults apply: XRAY, cloudwatch logging

  • installed nodejs libs: aws sdk v2, axios for http requests

Problem Statement:
In the AWS X-Ray Traces view, I do not see a single end-to-end tracing path; instead I see two separate tracing paths that I have to combine to get a complete view.
Concretely, I see:

i) HTTP-Client --calls --> API-Gateway endpoint "/dev/views" --calls--> Lambda Function --calls--> API-Gateway endpoint "/dev/views/{id}"

ii) API-Gateway endpoint "/dev/views/{id}" --calls--> Lambda Function --calls--> Dynamodb

Describe the solution you'd like
In a single view of AWS X-Ray, I should see an end-to-end tracing path across Lambda functions, API Gateway, and DynamoDB.

Describe alternatives you've considered

Additional context
In the Service map view of X-Ray, I see the following as separate tracing paths within a single view:
i) HTTP-Client --calls --> API-Gateway endpoint "/dev/views" --calls--> Lambda Function --calls--> API-Gateway endpoint "/dev/views/{id}"

ii) API-Gateway endpoint "/dev/views/{id}" --calls--> Lambda Function --calls--> Dynamodb

I also created this issue at aws-observability/aws-otel-collector#535, but I think this is the right place to ask my question.

Java Agent Lambda Layer: first invocation has no instrumentation traces; subsequent invocations do

Description

I was using the aws-otel-java-agent-ver-1-7-0 Lambda layer on a Java 11 Lambda and noticed that while the first invocation does not have an S3 AWS SDK trace for the Sample App on the OTel Lambda repo, subsequent invocations do have it.

First invocation with the Initialization Lambda trace but no S3 trace:

(screenshot attached)

Subsequent invocation which still has the Initialization Lambda trace but now does have the S3 trace:

(screenshot attached)

cc @anuraaga

[Python] Lambda Layer is not found

According to the docs, the layer ARN for us-east-1 should be arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-python38-ver-1-3-0:1; however, this results in the following error when clicking "Verify":

Failed to load layer version details: The resource you requested does not exist. (Service: AWSLambda; Status Code: 404; Error Code: ResourceNotFoundException; Request ID: 427b8941-9073-48ed-8598-febe07d5ff5c; Proxy: null)

Is there a list anywhere of the actual versions of this layer, or will I need to manually try incrementing the version number?
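A hedged probing sketch (assuming the public layer's permissions allow GetLayerVersion, and using the base ARN quoted above): call get_layer_version_by_arn for each candidate version and treat ResourceNotFoundException as "no such version".

import boto3
from botocore.exceptions import ClientError

BASE_ARN = "arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-python38-ver-1-3-0"
client = boto3.client("lambda", region_name="us-east-1")

for version in range(1, 10):
    arn = f"{BASE_ARN}:{version}"
    try:
        client.get_layer_version_by_arn(Arn=arn)  # raises if this version does not exist
        print("exists: ", arn)
    except ClientError as error:
        if error.response["Error"]["Code"] == "ResourceNotFoundException":
            print("missing:", arn)
        else:
            raise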

Java Agent Lambda Layer Not Sending all Traces

I have a Spring application running on Lambda. I have configured the Java auto-instrumentation layer (and found that it only works with Java 11). About 1/3 to 1/2 of the traces being sent to X-Ray are incomplete. Specifically, some of the requests show the full trace, down to the call to my DynamoDB instance, whereas others only show the invocation/overhead of invoking the lambda function. Note that the invocation/overhead traces are produced simply by enabling X-Ray tracing in the Lambda console, so in these cases the agent is essentially sending nothing at all to X-Ray. My question is: why is it failing almost half the time?

Support for golang lambda

From the documentation, it looks like it is not possible (yet?) to use OTel with Golang.

I would like to be able to use OTel as in other languages, or to have better documentation (like, for example, the documentation that exists for the .NET runtime).

Describe alternatives you've considered
Using the AWS SDK to push traces to X-Ray, but the lack of support for SDK v2 on that front sent me back here.

Lambda layer support for Python 3.9

Is your feature request related to a problem? Please describe.
We have Lambda functions that we'd like to upgrade to use Python 3.9. When we test the upgrade path and use the Lambda layer that targets Python 3.8, the Lambda layer returns the following error:

ChannelCredentials, Compression
File "/opt/python/grpc/__init__.py", line 22, in <module>
from grpc import _compression
File "/opt/python/grpc/_compression.py", line 15, in <module>
from grpc._cython import cygrpc
ImportError: cannot import name 'cygrpc' from 'grpc._cython' (/opt/python/grpc/_cython/__init__.py)
Failed to auto initialize opentelemetry
Traceback (most recent call last):
File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 116, in initialize
_load_configurators()
File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 109, in _load_configurators
raise exc
File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 105, in _load_configurators
entry_point.load()().configure(auto_instrumentation_version=__version__)  # type: ignore
File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 178, in configure
self._configure(**kwargs)
File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 194, in _configure
_initialize_components(kwargs.get("auto_instrumentation_version"))
File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 148, in _initialize_components
trace_exporters = _import_exporters(exporter_names)
File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 122, in _import_exporters
) in _import_tracer_provider_config_components(
File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 108, in _import_tracer_provider_config_components
component_impl = entry_point.load()
File "/var/task/pkg_resources/__init__.py", line 2465, in load
return self.resolve()
File "/var/task/pkg_resources/__init__.py", line 2471, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/opt/python/opentelemetry/exporter/otlp/proto/grpc/trace_exporter/__init__.py", line 20, in <module>
from grpc import ChannelCredentials, Compression
File "/opt/python/grpc/__init__.py", line 22, in <module>
from grpc import _compression
File "/opt/python/grpc/_compression.py", line 15, in <module>
from grpc._cython import cygrpc
ImportError: cannot import name 'cygrpc' from 'grpc._cython' (/opt/python/grpc/_cython/__init__.py)
[ERROR]	2022-02-09T14:57:20.701Z	9003dd25-68a1-4070-90ee-203e3b98a61b	TracerProvider was missing `force_flush` method. This is necessary in case of a Lambda freeze and would exist in the OTel SDK implementation.

Describe the solution you'd like
Add OTEL Lambda Layer support for Python 3.9

Describe alternatives you've considered
Pause migration to Python 3.9 for Lambda functions that use the OpenTelemetry layer.

Not seeing metrics or traces when using the lambda layers

I'm using the Serverless Framework and created the layers provided in the example using Node.js 12. I don't see any of the metrics, and the traces are giving me an error. Here's my code:

Dependencies:

"dependencies": {
    "@aws/otel-aws-xray-id-generator": "^0.13.1",
    "@aws/otel-aws-xray-propagator": "^0.13.0",
    "@opentelemetry/api": "^1.0.0-rc.0",
    "@opentelemetry/core": "^0.16.0",
    "@opentelemetry/exporter-collector-grpc": "^0.18.0",
    "@opentelemetry/exporter-zipkin": "^0.17.0",
    "@opentelemetry/instrumentation": "^0.18.0",
    "@opentelemetry/metrics": "^0.18.0",
    "@opentelemetry/node": "^0.17.0",
    "@opentelemetry/plugin-express": "^0.14.0",
    "@opentelemetry/plugin-http": "^0.18.0",
    "@opentelemetry/plugin-https": "^0.18.0",
    "@opentelemetry/tracing": "^0.17.0"
  }

My handler.js is:

'use strict';

const tracer = require('./tracer')('aws-otel-integ-test');
const meter = require('./metric-emitter');
const AWS = require('aws-sdk');
const api = require('@opentelemetry/api');


module.exports.hello = async (event) => {
  console.log('Calling counter')
  const requestStartTime = new Date().getMilliseconds();
  //TODO fix error - Cannot read property 'context' of undefined
  // const traceID = returnTraceIdJson()
  // console.log('tracerId', traceID)
  const statusCode = 200
  meter.emitsPayloadMetric(mimicPayLoadSize(), '/outgoing-http-call', statusCode);
  meter.emitReturnTimeMetric(new Date().getMilliseconds() - requestStartTime, '/outgoing-http-call', statusCode);
  console.log('Counter called')
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Go Serverless v1.0! Your function executed successfully!',
        input: event,
      },
      null,
      2
    ),
  };

  // Use this code if you don't use the http event with the LAMBDA-PROXY integration
  // return { message: 'Go Serverless v1.0! Your function executed successfully!', event };
};

const returnTraceIdJson = () => {
  const currentSpan = api.getSpan(api.context.active())
  const traceId = currentSpan.context.traceId;
  const xrayTraceId = "1-" + traceId.substring(0, 8) + "-" + traceId.substring(8);
  const traceIdJson = JSON.stringify({"traceId": xrayTraceId});
  return traceIdJson;
}

const mimicPayLoadSize = () => {
  return Math.random() * 1000;
}

The trace is giving me an error that the context in returnTraceIdJson above is undefined. Here's tracer.js:

'use strict'

const { DiagConsoleLogger, DiagLogLevel, diag } = require('@opentelemetry/api')
const { SimpleSpanProcessor, ConsoleSpanExporter } = require("@opentelemetry/tracing");
const { NodeTracerProvider } = require('@opentelemetry/node');
const { CollectorTraceExporter } = require('@opentelemetry/exporter-collector-grpc');

const { AWSXRayPropagator } = require('@aws/otel-aws-xray-propagator');
const { AwsXRayIdGenerator } = require('@aws/otel-aws-xray-id-generator');

const { propagation, trace } = require("@opentelemetry/api");

// Optional and only needed to see the internal diagnostic logging (during development)
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

module.exports = (serviceName) => {
  // set global propagator
  propagation.setGlobalPropagator(new AWSXRayPropagator());

  // create a provider for activating and tracking with AWS IdGenerator
  const tracerConfig = {
    idGenerator: new AwsXRayIdGenerator(),
    plugins: {
      https: {
        enabled: true,
        // You may use a package name or absolute path to the file.
        path: '@opentelemetry/plugin-https',
        // https plugin options
      },
      "aws-sdk": {
        enabled: true,
        // You may use a package name or absolute path to the file.
        path: "opentelemetry-plugin-aws-sdk",
      },
    },
  };
  const tracerProvider = new NodeTracerProvider(tracerConfig);
  
  // add OTLP exporter
  const otlpExporter = new CollectorTraceExporter({
    serviceName: serviceName,
    url: (process.env.OTEL_EXPORTER_OTLP_ENDPOINT) ? process.env.OTEL_EXPORTER_OTLP_ENDPOINT : "localhost:55680"
  });
  tracerProvider.addSpanProcessor(new SimpleSpanProcessor(otlpExporter));
  tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));

  // Register the tracer
  tracerProvider.register();

  // Return a tracer instance
  return trace.getTracer("awsxray-tests");
}

Here is the metrics code (metric-emitter.js, required by the handler above). I don't see any of the metrics in CloudWatch; I can't find either "apiBytesSent" or "latency" when I search.

'use strict';

const { DiagConsoleLogger, DiagLogLevel, diag } = require('@opentelemetry/api')
const { CollectorMetricExporter } = require('@opentelemetry/exporter-collector-grpc');
const { MeterProvider } = require('@opentelemetry/metrics');

const API_COUNTER_METRIC = 'apiBytesSent';
const API_LATENCY_METRIC = 'latency';

// Optional and only needed to see the internal diagnostic logging (during development)
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

/** The OTLP Metrics gRPC Collector */
const metricExporter = new CollectorMetricExporter({
    serviceName: process.env.OTEL_RESOURCE_ATTRIBUTES ? process.env.OTEL_RESOURCE_ATTRIBUTES : 'aws-otel-integ-test',
    url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT ? process.env.OTEL_EXPORTER_OTLP_ENDPOINT : 'localhost:55680'
});

/** The OTLP Metrics Provider with OTLP gRPC Metric Exporter and Metrics collection Interval  */
const meter = new MeterProvider({
    exporter: metricExporter,
    interval: 1000,
}).getMeter('aws-otel');

/**  grabs instanceId and append to metric name to check individual metric for integration test */
const latencyMetricName = API_LATENCY_METRIC;
const  apiBytesSentMetricName = API_COUNTER_METRIC;
// const instanceId = process.env.INSTANCE_ID || '';
// if (instanceId && instanceId.trim() !== '') {
//     latencyMetricName += '_' + instanceId;
//     apiBytesSentMetricName += '_' + instanceId;
// }

/** Counter Metrics */
const payloadMetric = meter.createCounter(apiBytesSentMetricName, {
    description: 'Metric for counting request payload size',
});

/** Value Recorder Metrics with Histogram */
const requestLatency = meter.createValueRecorder(latencyMetricName, {
    description: 'Metric for record request latency'
});

//** binds request latency metric with returnTime */
const emitReturnTimeMetric = (returnTime, apiName, statusCode) => {
    console.log('emit metric with return time ' + returnTime + ', ' + apiName + ', ' + statusCode);
    const labels = { 'apiName': apiName, 'statusCode': statusCode };
    requestLatency.bind(labels).record(returnTime);
}

//** emitsPayLoadMetrics() Binds payload Metric with number of bytes */
const emitsPayloadMetric = (bytes, apiName, statusCode) => {
    console.log('emit metric with http request size ' + bytes + ' byte, ' + apiName);
    const labels = { 'apiName': apiName, 'statusCode': statusCode };
    payloadMetric.bind(labels).add(bytes);
}

module.exports = {
    emitReturnTimeMetric: emitReturnTimeMetric,
    emitsPayloadMetric: emitsPayloadMetric
}

Add back RPC attributes to Python expected template

We should combine the attributes recorded in #165 and #168 so that we verify both that aws.operation is set on an inferred segment and that the three RPC attributes are set on the span, per the spec. We can do this once the translation is updated in the awsxrayexporter and we've upgraded the Python Lambda layer to include that Collector Contrib awsxrayexporter change.

[Python] SerializationException for XRay exporter

I have a python 3.8 lambda instrumented with the aws layer, after following the documentation here: https://aws-otel.github.io/docs/getting-started/lambda/lambda-python

  • Layer version: aws-otel-python38-ver-1-3-0:1 (contains OpenTelemetry for Python v1.3.0 with Contrib v0.22b0 and ADOT Collector for Lambda v0.11.0)
  • The lambda is a scheduled lambda (using AWS eventbridge)
  • I have not made any changes to the default collector config provided by the layer:
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logging:
  awsxray:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [awsxray]
    metrics:
      receivers: [otlp]
      exporters: [logging]

After running the lambda, the layer initialises successfully, but there is an error with the xray exporter after the function execution completes:

2021-08-02T11:12:09.614Z	error	exporterhelper/queued_retry.go:245	Exporting failed. Try enabling retry_on_failure config option.	{
    "kind": "exporter",
    "name": "awsxray",
    "error": "Permanent error: SerializationException: \n\tstatus code: 400
[...]

2021-08-03T10:53:10.222Z	error	exporterhelper/queued_retry.go:175	Exporting failed. Dropping data. Try enabling sending_queue to survive temporary failures.	
{
    "kind": "exporter",
    "name": "awsxray",
    "dropped_items": 2
}

I have not tried the suggestions to survive temporary failures because this error happens consistently.

Is there something missing in my setup or is it a bug somewhere?

Add more 3rd party Collector exporters and processors to lambdaComponent module

Is your feature request related to a problem? Please describe.
Until the opentelemetry-lambda CNCF account is available, aws-observability is the only OTel Lambda layer provider. To meet more users' expectations, we want to support as many 3rd-party components as possible, as long as the layer size does not get inflated too much and the components meet Amazon security requirements.

Describe the solution you'd like
All collector components in the ADOT Collector have passed the Amazon security check, so the ADOT Lambda Collector extension can include these exporters and processors. But make sure to:

  1. Check the layer size and Lambda memory consumption; don't increase them much beyond what they are today.
  2. Add corresponding integration tests in CI/CD.

It is hard to verify end-to-end results for 3rd-party exporters; a compromise option is to just enable these exporters in Lambda and check that the Collector extension starts successfully.

Describe alternatives you've considered
N/A

Additional context
ADOT Lambda components module in https://github.com/aws-observability/aws-otel-collector/tree/main/pkg/lambdacomponents

Not able to export metrics to awsprometheus

Hello, I am trying to export metrics from a lambda function. I have followed the steps as stated (built the downstream layer and deployed the integration-tests folder using Terraform). I am using the Node.js sample app and layer. I am still not able to get metrics into AWS Prometheus, nor via otlphttp. Any idea what might be wrong?

My config file is as follows:

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logging:
    loglevel: debug
  awsprometheusremotewrite:
    endpoint: https://aps-workspaces.us-east-2.amazonaws.com/workspaces/<ws>/api/v1/remote_write
    aws_auth:
      region: us-east-2
      service: 'aps'
  otlphttp:
    endpoint: <endpoint>
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [awsprometheusremotewrite]
    metrics:
      receivers: [otlp]
      exporters: [ awsprometheusremotewrite]

Logs of the lambda function:



START RequestId: bcfbaa09-e578-46dc-a4f9-90da2c98f2ba Version: $LATEST
2021/10/15 12:20:11 [collector] Launching OpenTelemetry Lambda extension, version:  v0.1.0
2021-10-15T12:20:11.362Z	info	service/collector.go:176	Applying configuration...
2021-10-15T12:20:11.362Z	info	builder/exporters_builder.go:227	Ignoring exporter as it is not used by any pipeline	{     "kind": "exporter",     "name": "logging" }
2021-10-15T12:20:11.363Z	info	builder/exporters_builder.go:265	Exporter was built.	{     "kind": "exporter",     "name": "awsprometheusremotewrite" }
2021-10-15T12:20:11.363Z	info	builder/exporters_builder.go:227	Ignoring exporter as it is not used by any pipeline	{     "kind": "exporter",     "name": "otlphttp" }
2021-10-15T12:20:11.363Z	info	builder/pipelines_builder.go:214	Pipeline was built.	{     "pipeline_name": "metrics",     "pipeline_datatype": "metrics" }
2021-10-15T12:20:11.363Z	info	builder/receivers_builder.go:228	Receiver was built.	{     "kind": "receiver",     "name": "otlp",     "datatype": "metrics" }
2021-10-15T12:20:11.363Z	info	service/service.go:101	Starting extensions...
2021-10-15T12:20:11.363Z	info	service/service.go:106	Starting exporters...
2021-10-15T12:20:11.363Z	info	builder/exporters_builder.go:92	Exporter is starting...	{     "kind": "exporter",     "name": "logging" }
2021-10-15T12:20:11.363Z	info	builder/exporters_builder.go:97	Exporter started.	{     "kind": "exporter",     "name": "logging" }
2021-10-15T12:20:11.363Z	info	builder/exporters_builder.go:92	Exporter is starting...	{     "kind": "exporter",     "name": "awsprometheusremotewrite" }
2021-10-15T12:20:11.365Z	info	builder/exporters_builder.go:97	Exporter started.	{     "kind": "exporter",     "name": "awsprometheusremotewrite" }
2021-10-15T12:20:11.365Z	info	builder/exporters_builder.go:92	Exporter is starting...	{     "kind": "exporter",     "name": "otlphttp" }
2021-10-15T12:20:11.365Z	info	builder/exporters_builder.go:97	Exporter started.	{     "kind": "exporter",     "name": "otlphttp" }
2021-10-15T12:20:11.365Z	info	service/service.go:111	Starting processors...
2021-10-15T12:20:11.365Z	info	builder/pipelines_builder.go:51	Pipeline is starting...	{     "pipeline_name": "metrics",     "pipeline_datatype": "metrics" }
2021-10-15T12:20:11.365Z	info	builder/pipelines_builder.go:62	Pipeline is started.	{     "pipeline_name": "metrics",     "pipeline_datatype": "metrics" }
2021-10-15T12:20:11.365Z	info	service/service.go:116	Starting receivers...
2021-10-15T12:20:11.365Z	info	builder/receivers_builder.go:70	Receiver is starting...	{     "kind": "receiver",     "name": "otlp" }
2021-10-15T12:20:11.365Z	info	otlpreceiver/otlp.go:74	Starting GRPC server on endpoint 0.0.0.0:4317	{     "kind": "receiver",     "name": "otlp" }
2021-10-15T12:20:11.365Z	info	otlpreceiver/otlp.go:92	Starting HTTP server on endpoint 0.0.0.0:4318	{     "kind": "receiver",     "name": "otlp" }
2021-10-15T12:20:11.365Z	info	otlpreceiver/otlp.go:147	Setting up a second HTTP listener on legacy endpoint 0.0.0.0:55681	{     "kind": "receiver",     "name": "otlp" }
2021-10-15T12:20:11.365Z	info	otlpreceiver/otlp.go:92	Starting HTTP server on endpoint 0.0.0.0:55681	{     "kind": "receiver",     "name": "otlp" }
2021-10-15T12:20:11.365Z	info	builder/receivers_builder.go:75	Receiver started.	{     "kind": "receiver",     "name": "otlp" }
2021-10-15T12:20:11.365Z	info	service/telemetry.go:65	Setting up own telemetry...
2021-10-15T12:20:11.366Z	info	service/telemetry.go:113	Serving Prometheus metrics	{     "address": ":8888",     "level": 0,     "service.instance.id": "a800ee06-64f7-4085-8868-be2240376ff1" }
2021-10-15T12:20:11.366Z	info	service/collector.go:230	Starting otelcol...	{     "Version": "v0.1.0",     "NumCPU": 2 }
2021-10-15T12:20:11.366Z	info	service/collector.go:134	Everything is ready. Begin running and processing data.
2021/10/15 12:20:11 Registered extension ID: "f58fa35e-d869-45bc-b661-e647430bcc00"
2021/10/15 12:20:11 [collector] Register response: {
"functionName": "hello-nodejs-awssdk",
"functionVersion": "$LATEST",
"handler": "index.handler"
}
2021/10/15 12:20:11 [collector] Waiting for event...
Registering OpenTelemetry
EXTENSION	Name: collector	State: Ready	Events: [INVOKE,SHUTDOWN]
2021/10/15 12:20:12 [collector] Received event: {
"eventType": "INVOKE",
"deadlineMs": 1634300432688,
"requestId": "bcfbaa09-e578-46dc-a4f9-90da2c98f2ba",
"invokedFunctionArn": "arn:aws:lambda:us-east-2:739457818465:function:hello-nodejs-awssdk",
"tracing": {
"type": "X-Amzn-Trace-Id",
"value": "Root=1-616971fa-0298ca0d6bb48b175b16932f;Parent=2e6615f64704d786;Sampled=1"
}
}
2021/10/15 12:20:12 [collector] Waiting for event...
2021-10-15T12:20:13.091Z	bcfbaa09-e578-46dc-a4f9-90da2c98f2ba	INFO	Serving lambda request.
END RequestId: bcfbaa09-e578-46dc-a4f9-90da2c98f2ba
REPORT RequestId: bcfbaa09-e578-46dc-a4f9-90da2c98f2ba	Duration: 855.81 ms	Billed Duration: 856 ms	Memory Size: 384 MB	Max Memory Used: 137 MB	Init Duration: 1505.31 ms	XRAY TraceId: 1-616971fa-0298ca0d6bb48b175b16932f	SegmentId: 2e6615f64704d786	Sampled: true
START RequestId: ca6700ea-c227-40c6-a197-5624af1b8930 Version: $LATEST
2021/10/15 12:20:35 [collector] Received event: {
"eventType": "INVOKE",
"deadlineMs": 1634300455007,
"requestId": "ca6700ea-c227-40c6-a197-5624af1b8930",
"invokedFunctionArn": "arn:aws:lambda:us-east-2:739457818465:function:hello-nodejs-awssdk",
"tracing": {
"type": "X-Amzn-Trace-Id",
"value": "Root=1-61697212-2a26451a312587f4167bb880;Parent=0ca39ae26ef9faee;Sampled=1"
}
}
2021/10/15 12:20:35 [collector] Waiting for event...
2021-10-15T12:20:35.008Z	ca6700ea-c227-40c6-a197-5624af1b8930	INFO	Serving lambda request.
END RequestId: ca6700ea-c227-40c6-a197-5624af1b8930
REPORT RequestId: ca6700ea-c227-40c6-a197-5624af1b8930	Duration: 117.02 ms	Billed Duration: 118 ms	Memory Size: 384 MB	Max Memory Used: 138 MB	XRAY TraceId: 1-61697212-2a26451a312587f4167bb880	SegmentId: 0ca39ae26ef9faee	Sampled: true
2021/10/15 12:26:10 [collector] Received event: {
"eventType": "SHUTDOWN",
"deadlineMs": 1634300772893,
"requestId": "",
"invokedFunctionArn": "",
"tracing": {
"type": "",
"value": ""
}
}
2021-10-15T12:26:10.907Z	info	service/collector.go:150	Received shutdown request
2021-10-15T12:26:10.907Z	info	service/collector.go:242	Starting shutdown...
2021-10-15T12:26:10.907Z	info	service/service.go:136	Stopping receivers...
2021-10-15T12:26:10.909Z	info	service/service.go:141	Stopping processors...
2021-10-15T12:26:10.909Z	info	builder/pipelines_builder.go:70	Pipeline is shutting down...	{     "pipeline_name": "metrics",     "pipeline_datatype": "metrics" }
2021-10-15T12:26:10.909Z	info	builder/pipelines_builder.go:76	Pipeline is shutdown.	{     "pipeline_name": "metrics",     "pipeline_datatype": "metrics" }
2021-10-15T12:26:10.909Z	info	service/service.go:146	Stopping exporters...
2021-10-15T12:26:10.909Z	info	service/service.go:151	Stopping extensions...
2021-10-15T12:26:10.909Z	info	service/collector.go:258	Shutdown complete.
2021/10/15 12:26:10 [collector] Received SHUTDOWN event
2021/10/15 12:26:10 [collector] Exiting


python S3 upload_file method

Is your feature request related to a problem? Please describe.

X-Ray spans generated by S3 upload_file requests do not have the same trace ID as their parent and sibling spans, and the parent_id is set to null, which causes broken traces.

Describe the solution you'd like

To be able to use upload_file within our code and not have broken traces.

Describe alternatives you've considered

I had to move away from upload_file and switch to put_object, which works (a sketch of that switch is below), but if some methods are not instrumented you should call that out in the X-Ray/S3 documentation.
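For reference, a minimal sketch of that switch (bucket, key, and file path are placeholders, not taken from the report):

import boto3

s3 = boto3.client("s3")

# Previously: s3.upload_file("/tmp/report.csv", "my-bucket", "reports/report.csv")
# Workaround described above: use put_object, which the reporter found produces intact traces.
with open("/tmp/report.csv", "rb") as body:
    s3.put_object(Bucket="my-bucket", Key="reports/report.csv", Body=body)

Note that put_object uploads in a single request, whereas upload_file can perform multipart uploads for large objects, so the two are not fully interchangeable.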


Lambda traces not being sent to data-prepper

Hello,
I am trying to send trace data from a lambda to my data-prepper server which I have running on an EC2 instance.
Currently I can only see the traces on AWS X-ray.

My config.template.yaml file:

receivers:
  otlp:
    protocols:
      grpc:
exporters:
  logging:
    loglevel: debug
  otlp/data-prepper:
    endpoint: [email protected]:21890
    insecure: true
processors:
  batch:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlp/data-prepper]

I am using the following environment variables:

OPENTELEMETRY_COLLECTOR_CONFIG_FILE: /var/task/collector.yaml/collector.template.yaml
AWS_LAMBDA_EXEC_WRAPPER: /opt/otel-proxy-handler

I followed this guide and used the Lambda layer arn:aws:lambda:us-west-2:901920570463:layer:aws-otel-java-wrapper-ver-1-2-0:2. I also used this guide to send the trace data to Data Prepper instead of X-Ray.

Is there something my set-up is missing?

I believe the issue may be that the lambda layer, and thus the config file (which exists in the correct directory after deploying the lambda), is never being used, as OpenTelemetry never outputs any logs (even though, per the config file, it should be posting debug logs and exporting to Data Prepper).

I would appreciate any suggestions or help with the matter, and please let me know if you have any questions or need any more information from me.

[ERROR] Runtime.ImportModuleError when using python-lambda sample app lambda layer

After I followed the instructions provided in sample-apps/python-lambda (I targeted us-east-1), I encountered an error when executing my Lambda function. The error log in CloudWatch shows:

[ERROR] Runtime.ImportModuleError: Unable to import module 'aws_observability': cannot import name 'AwsXRayIdsGenerator' from 'opentelemetry.sdk.extension.aws.trace' (/opt/python/opentelemetry/sdk/extension/aws/trace/__init__.py)
Traceback (most recent call last)

I checked the contents of the Lambda layer by downloading them and verified that all of the files exist in the paths specified, but for some reason this isn't working for me.
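One hedged observation: the error message is consistent with the ID generator class having been renamed between releases of opentelemetry-sdk-extension-aws (AwsXRayIdsGenerator vs. AwsXRayIdGenerator), so the sample app and the layer may simply disagree on the name. A tolerant import like the following can show which name the installed layer actually provides; treat it as a probe rather than the documented fix.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

try:
    # One spelling of the X-Ray ID generator class...
    from opentelemetry.sdk.extension.aws.trace import AwsXRayIdGenerator as XRayIdGenerator
except ImportError:
    # ...the other spelling, which is the one the sample app imports.
    from opentelemetry.sdk.extension.aws.trace import AwsXRayIdsGenerator as XRayIdGenerator

trace.set_tracer_provider(TracerProvider(id_generator=XRayIdGenerator()))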

Collect Lambda alias and version number

We need to collect the lambda version number and alias information automatically as part of the trace. This helps build intelligence into the analysis and insights system to automatically detect version changes and capture any regressions caused by code changes.

Add opentelemetry-instrumentation-logging library to the python layer to allow logs/traces correlation

I have a Python lambda function that is being auto-instrumented using the lambda layer aws-otel-python38-ver-1-7-1. This is working fine; however, I need to get the lambda function to log the trace ID into the CloudWatch logs. One way of doing this is to use the opentelemetry-instrumentation-logging Python library. As I am shipping both the lambda traces and logs to an Elastic cluster, I want to be able to correlate the traces with the logs.

The aws-otel-python38-ver-1-7-1 layer does not appear to include opentelemetry-instrumentation-logging in its requirements.txt. As a workaround I could create my own layer, but I think it's important that everyone have access to this logging library, as correlation is an important part of observability.
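For context, a minimal sketch of what the requested library does once it is available (today this would require bundling the package yourself, e.g. in the custom layer workaround described above): LoggingInstrumentor attaches trace/span ID fields (otelTraceID, otelSpanID) to standard-library log records, which is what makes CloudWatch log lines correlatable with traces.

import logging

from opentelemetry.instrumentation.logging import LoggingInstrumentor

# set_logging_format=True also installs a log format that prints the injected fields.
LoggingInstrumentor().instrument(set_logging_format=True)

logging.getLogger(__name__).info("this record now carries otelTraceID / otelSpanID")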

Issue with aws-sdk v3

I have a Node.js 12 runtime lambda.

Issue I'm facing

When I run the lambda, the following occurs:

{
  "errorType": "TypeError",
  "errorMessage": "Cannot redefine property: constructStack",
  "trace": [
    "TypeError: Cannot redefine property: constructStack",
    "    at Function.defineProperty (<anonymous>)",
    "    at defineProperty (/opt/nodejs/node_modules/shimmer/index.js:14:10)",
    "    at AwsInstrumentation.wrap [as _wrap] (/opt/nodejs/node_modules/shimmer/index.js:56:3)",
    "    at AwsInstrumentation.patchV3ConstructStack (/opt/nodejs/node_modules/opentelemetry-instrumentation-aws-sdk/dist/src/aws-sdk.js:41:14)",
    "    at AwsInstrumentation._onRequire (/opt/nodejs/node_modules/opentelemetry-instrumentation-aws-sdk/node_modules/@opentelemetry/instrumentation/build/src/platform/node/instrumentation.js:75:39)",
    "    at /opt/nodejs/node_modules/opentelemetry-instrumentation-aws-sdk/node_modules/@opentelemetry/instrumentation/build/src/platform/node/instrumentation.js:115:29",
    "    at Module.Hook._require.Module.require (/opt/nodejs/node_modules/require-in-the-middle/index.js:154:32)",
    "    at Module.Hook._require.Module.require (/opt/nodejs/node_modules/require-in-the-middle/index.js:80:39)",
    "    at Module.Hook._require.Module.require (/opt/nodejs/node_modules/require-in-the-middle/index.js:80:39)",
    "    at Module.Hook._require.Module.require (/opt/nodejs/node_modules/require-in-the-middle/index.js:80:39)"
  ]
}

My setup

I have active tracing enabled.
I have AWS_LAMBDA_EXEC_WRAPPER set to /opt/otel-handler.
I've added the layer arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-nodejs-ver-0-18-0:1

I'm using CDK to deploy the lambda.
I'm using pure JavaScript (although many of the modules use TS).
The node module references both aws-sdk and @aws-sdk modules.

Other things I've tried

  1. Using the layer
    arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-nodejs-ver-0-19-0:1

[Python] Error translating span: invalid xray traceid for XRay exporter

I have a Python 3.8 lambda instrumented with an AWS layer. This lambda is using the FastAPI ASGI framework wrapped by the Mangum Lambda + API Gateway adapter.

I'm currently running into issues where the X-Ray exporter consistently errors with "Error translating span: invalid xray traceid" for all spans not related to the lambda startup events.

Is there something that I'm missing somewhere? I tried multiple variations of the code, and the one that always fails is when I'm using the Mangum adapter. However, I can't remove that since it handles AWS API Gateway requests and responses. Is there something in addition to the FastAPI guide that I should be implementing to get around this issue?
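One hedged possibility, not a confirmed diagnosis: "invalid xray traceid" is what the exporter reports when a span's trace ID is not X-Ray compatible (X-Ray expects the first 4 bytes of the trace ID to be a recent epoch timestamp), which can happen when a TracerProvider is created without the X-Ray ID generator. If any code path builds its own provider, configuring the generator explicitly looks like the sketch below; with the layer's auto-configuration this should already be the default.

from opentelemetry import trace
from opentelemetry.sdk.extension.aws.trace import AwsXRayIdGenerator
from opentelemetry.sdk.trace import TracerProvider

# Generate trace IDs whose leading 4 bytes are a timestamp, as the X-Ray backend requires.
trace.set_tracer_provider(TracerProvider(id_generator=AwsXRayIdGenerator()))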

error demo repo

Lambda
Layer version: aws-otel-python38-ver-1-7-1:1 (Contains OpenTelemetry Python v1.7.1 with the AWS Python Extension v1.0.1)
The lambda is invoked by an AWS API Gateway configured to proxy requests.

default collector config modified to add debugging.

Code to test opentelemetry auto-instrumentation

"""
Main API handler that defines all routes.
"""

import boto3
import os

from fastapi import FastAPI
from mangum import Mangum
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor

app = FastAPI(
    title="AWS + FastAPI",
    description="AWS API Gateway, Lambdas and FastAPI (oh my)",
    root_path="/dev"
)

@app.get("/hello")
def hello():
    "Hello path request"
    return {"Hello": "World"}

@app.get("/list")
def hello():
    client = boto3.client("s3")
    client.list_buckets()
    
    client = boto3.client("ec2")
    client.describe_instances()

    return {"Region ": os.environ['AWS_REGION']}  


FastAPIInstrumentor.instrument_app(app)
# Mangum allows us to use Lambdas to process requests
handler = Mangum(app=app)

Requirements file

boto3==1.17.*
fastapi==0.67.0
mangum==0.12.1
opentelemetry-sdk==1.7.1
opentelemetry-distro==0.26b1
opentelemetry-instrumentation==0.26b1
opentelemetry-instrumentation-fastapi==0.26b1

Collector config

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logging:
  awsxray:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [awsxray]
    metrics:
      receivers: [otlp]
      exporters: [logging]
  telemetry:
    logs:
      level: debug
Debug logs...

START RequestId: c6f8d807-e951-4cad-a932-1a397cc37866 Version: $LATEST
2021/12/18 00:00:03 [collector] Launching OpenTelemetry Lambda extension, version:  v0.1.0
2021/12/18 00:00:03 Using config file at path /var/task/collector.yaml
2021-12-18T00:00:03.083Z	info	service/collector.go:186	Applying configuration...
2021-12-18T00:00:03.085Z	info	builder/exporters_builder.go:254	Exporter was built.	{"kind": "exporter", "name": "logging"}
2021-12-18T00:00:03.085Z	debug	[email protected]/conn.go:59	Using proxy address: 	{"kind": "exporter", "name": "awsxray", "proxyAddr": ""}
2021-12-18T00:00:03.087Z	debug	[email protected]/conn.go:125	Fetch region from environment variables	{"kind": "exporter", "name": "awsxray", "region": "ca-central-1"}
2021-12-18T00:00:03.088Z	debug	[email protected]/xray_client.go:51	Using Endpoint: %s	{"kind": "exporter", "name": "awsxray", "endpoint": "https://xray.ca-central-1.amazonaws.com"}
2021-12-18T00:00:03.088Z	info	builder/exporters_builder.go:254	Exporter was built.	{"kind": "exporter", "name": "awsxray"}
2021-12-18T00:00:03.088Z	info	builder/pipelines_builder.go:222	Pipeline was built.	{"name": "pipeline", "name": "traces"}
2021-12-18T00:00:03.088Z	info	builder/pipelines_builder.go:222	Pipeline was built.	{"name": "pipeline", "name": "metrics"}
2021-12-18T00:00:03.089Z	info	builder/receivers_builder.go:224	Receiver was built.	{"kind": "receiver", "name": "otlp", "datatype": "traces"}
2021-12-18T00:00:03.089Z	info	builder/receivers_builder.go:224	Receiver was built.	{"kind": "receiver", "name": "otlp", "datatype": "metrics"}
2021-12-18T00:00:03.089Z	info	service/service.go:86	Starting extensions...
2021-12-18T00:00:03.089Z	info	service/service.go:91	Starting exporters...
2021-12-18T00:00:03.089Z	info	builder/exporters_builder.go:40	Exporter is starting...	{"kind": "exporter", "name": "logging"}
2021-12-18T00:00:03.089Z	info	builder/exporters_builder.go:48	Exporter started.	{"kind": "exporter", "name": "logging"}
2021-12-18T00:00:03.089Z	info	builder/exporters_builder.go:40	Exporter is starting...	{"kind": "exporter", "name": "awsxray"}
2021-12-18T00:00:03.089Z	info	builder/exporters_builder.go:48	Exporter started.	{"kind": "exporter", "name": "awsxray"}
2021-12-18T00:00:03.089Z	info	service/service.go:96	Starting processors...
2021-12-18T00:00:03.089Z	info	builder/pipelines_builder.go:54	Pipeline is starting...	{"name": "pipeline", "name": "traces"}
2021-12-18T00:00:03.089Z	info	builder/pipelines_builder.go:65	Pipeline is started.	{"name": "pipeline", "name": "traces"}
2021-12-18T00:00:03.089Z	info	builder/pipelines_builder.go:54	Pipeline is starting...	{"name": "pipeline", "name": "metrics"}
2021-12-18T00:00:03.089Z	info	builder/pipelines_builder.go:65	Pipeline is started.	{"name": "pipeline", "name": "metrics"}
2021-12-18T00:00:03.089Z	info	service/service.go:101	Starting receivers...
2021-12-18T00:00:03.089Z	info	builder/receivers_builder.go:68	Receiver is starting...	{"kind": "receiver", "name": "otlp"}
2021-12-18T00:00:03.089Z	info	otlpreceiver/otlp.go:68	Starting GRPC server on endpoint 0.0.0.0:4317	{"kind": "receiver", "name": "otlp"}
2021-12-18T00:00:03.089Z	info	otlpreceiver/otlp.go:86	Starting HTTP server on endpoint 0.0.0.0:4318	{"kind": "receiver", "name": "otlp"}
2021-12-18T00:00:03.089Z	info	otlpreceiver/otlp.go:141	Setting up a second HTTP listener on legacy endpoint 0.0.0.0:55681	{"kind": "receiver", "name": "otlp"}
2021-12-18T00:00:03.089Z	info	otlpreceiver/otlp.go:86	Starting HTTP server on endpoint 0.0.0.0:55681	{"kind": "receiver", "name": "otlp"}
2021-12-18T00:00:03.089Z	info	builder/receivers_builder.go:73	Receiver started.	{"kind": "receiver", "name": "otlp"}
2021-12-18T00:00:03.089Z	info	service/telemetry.go:92	Setting up own telemetry...
2021-12-18T00:00:03.090Z	info	service/telemetry.go:116	Serving Prometheus metrics	{"address": ":8888", "level": "basic", "service.instance.id": "29e96d95-7d06-4d11-b3aa-f98048ddfd9c", "service.version": "latest"}
2021-12-18T00:00:03.090Z	info	service/collector.go:235	Starting otelcol...	{"Version": "v0.1.0", "NumCPU": 2}
2021-12-18T00:00:03.090Z	info	service/collector.go:131	Everything is ready. Begin running and processing data.
2021/12/18 00:00:03 Registered extension ID: "51290cbf-6075-4c1a-82e1-1bb952cb661e"
2021/12/18 00:00:03 [collector] Register response: {
	"functionName": "lambda_api",
	"functionVersion": "$LATEST",
	"handler": "lambda.handler"
}
2021/12/18 00:00:03 [collector] Waiting for event...
EXTENSION	Name: collector	State: Ready	Events: [INVOKE,SHUTDOWN]
2021/12/18 00:00:05 [collector] Received event: {
	"eventType": "INVOKE",
	"deadlineMs": 1639785615129,
	"requestId": "c6f8d807-e951-4cad-a932-1a397cc37866",
	"invokedFunctionArn": "arn:aws:lambda:ca-central-1:############:function:lambda_api",
	"tracing": {
		"type": "X-Amzn-Trace-Id",
		"value": "Root=1-61bd2482-41527aa150bacd653a4b7a04;Parent=1a9bb62f555d892d;Sampled=1"
	}
}
2021/12/18 00:00:05 [collector] Waiting for event...
2021-12-18T00:00:09.601Z	debug	[email protected]/awsxray.go:54	TracesExporter	{"kind": "exporter", "name": "awsxray", "type": "awsxray", "name": "awsxray", "#spans": 9}
2021-12-18T00:00:09.703Z	debug	[email protected]/awsxray.go:65	Error translating span.	{"kind": "exporter", "name": "awsxray", "error": "invalid xray traceid: 7239d7900b5275e300a448031cf09077"}
2021-12-18T00:00:09.703Z	debug	[email protected]/awsxray.go:65	Error translating span.	{"kind": "exporter", "name": "awsxray", "error": "invalid xray traceid: 7239d7900b5275e300a448031cf09077"}
2021-12-18T00:00:09.703Z	debug	[email protected]/awsxray.go:65	Error translating span.	{"kind": "exporter", "name": "awsxray", "error": "invalid xray traceid: 7239d7900b5275e300a448031cf09077"}
2021-12-18T00:00:09.703Z	debug	[email protected]/awsxray.go:65	Error translating span.	{"kind": "exporter", "name": "awsxray", "error": "invalid xray traceid: 7239d7900b5275e300a448031cf09077"}
2021-12-18T00:00:09.703Z	debug	[email protected]/awsxray.go:65	Error translating span.	{"kind": "exporter", "name": "awsxray", "error": "invalid xray traceid: 7239d7900b5275e300a448031cf09077"}
2021-12-18T00:00:09.703Z	debug	[email protected]/awsxray.go:65	Error translating span.	{"kind": "exporter", "name": "awsxray", "error": "invalid xray traceid: 7239d7900b5275e300a448031cf09077"}
2021-12-18T00:00:09.703Z	debug	[email protected]/awsxray.go:65	Error translating span.	{"kind": "exporter", "name": "awsxray", "error": "invalid xray traceid: 7239d7900b5275e300a448031cf09077"}
2021-12-18T00:00:09.703Z	debug	[email protected]/awsxray.go:65	Error translating span.	{"kind": "exporter", "name": "awsxray", "error": "invalid xray traceid: 94fcdc8c426860edfc075740db6ce0f7"}
2021-12-18T00:00:09.741Z	debug	[email protected]/awsxray.go:78	request: {
"  TraceSegmentDocuments: [
"
""{\"name\":\"lambda_api\",\"id\":\"38f525d2575a900e\",\"start_time\":1639785605.1313155,\"trace_id\":\"1-61bd2482-41527aa150bacd653a4b7a04\",\"end_time\":1639785608.9352887,\"fault\":false,\"error\":false,\"throttle\":false,\"aws\":{\"xray\":{\"sdk\":\"opentelemetry for python\",\"sdk_version\":\"1.7.1\",\"auto_instrumentation\":true}},\"metadata\":{\"default\":{\"faas.execution\":\"c6f8d807-e951-4cad-a932-1a397cc37866\",\"faas.id\":\"arn:aws:lambda:ca-central-1:############:function:lambda_api\",\"otel.resource.cloud.provider\":\"aws\",\"otel.resource.cloud.region\":\"ca-central-1\",\"otel.resource.faas.name\":\"lambda_api\",\"otel.resource.faas.version\":\"$LATEST\",\"otel.resource.service.name\":\"lambda_api\",\"otel.resource.telemetry.auto.version\":\"0.26b1\",\"otel.resource.telemetry.sdk.language\":\"python\",\"otel.resource.telemetry.sdk.name\":\"opentelemetry\",\"otel.resource.telemetry.sdk.version\":\"1.7.1\"}},\"parent_id\":\"f3a96843ab50f00e\"}
","
"    <invalid value>,
"
"    <invalid value>,
"
"    <invalid value>,
"
"    <invalid value>,
"
"    <invalid value>,
"
"    <invalid value>,
"
"    <invalid value>,
"
"    <invalid value>
"
"  ]
"
"}	{"kind": "exporter", "name": "awsxray"}
"
"2021-12-18T00:00:10.400Z	debug	[email protected]/awsxray.go:81	response error	{"kind": "exporter", "name": "awsxray", "error": "SerializationException: 
\tstatus code: 400, request id: 221db53c-fb02-484a-b7a0-865bbe24169e"}"
2021-12-18T00:00:10.400Z	debug	[email protected]/awsxray.go:85	response: {
"
"
"}	{"kind": "exporter", "name": "awsxray"}
"
"2021-12-18T00:00:10.400Z	error	exporterhelper/queued_retry.go:149	Exporting failed. Try enabling retry_on_failure config option.	{"kind": "exporter", "name": "awsxray", "error": "Permanent error: SerializationException: 
\tstatus code: 400, request id: 221db53c-fb02-484a-b7a0-865bbe24169e"}"
"go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
"
go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:149
"go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesExporterWithObservability).send
"
go.opentelemetry.io/[email protected]/exporter/exporterhelper/traces.go:136
"go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
"
go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:83
go.opentelemetry.io/collector/exporter/exporterhelper.NewTracesExporter.func2
go.opentelemetry.io/[email protected]/exporter/exporterhelper/traces.go:115
"go.opentelemetry.io/collector/consumer/consumerhelper.ConsumeTracesFunc.ConsumeTraces
"
go.opentelemetry.io/[email protected]/consumer/consumerhelper/traces.go:29
"go.opentelemetry.io/collector/service/internal/fanoutconsumer.(*tracesConsumer).ConsumeTraces
"
go.opentelemetry.io/[email protected]/service/internal/fanoutconsumer/traces.go:75
"go.opentelemetry.io/collector/receiver/otlpreceiver/internal/trace.(*Receiver).Export
"
go.opentelemetry.io/[email protected]/receiver/otlpreceiver/internal/trace/otlp.go:65
"go.opentelemetry.io/collector/model/otlpgrpc.rawTracesServer.Export
"
go.opentelemetry.io/collector/[email protected]/otlpgrpc/traces.go:166
go.opentelemetry.io/collector/model/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler.func1
go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/trace/v1/trace_service.pb.go:217
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryServerInterceptor.func1
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/[email protected]/interceptor.go:325
go.opentelemetry.io/collector/model/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler
go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/trace/v1/trace_service.pb.go:219
"google.golang.org/grpc.(*Server).processUnaryRPC
"
google.golang.org/[email protected]/server.go:1282
"google.golang.org/grpc.(*Server).handleStream
"
google.golang.org/[email protected]/server.go:1616
google.golang.org/grpc.(*Server).serveStreams.func1.2
google.golang.org/[email protected]/server.go:921
2021-12-18T00:00:10.422Z	error	exporterhelper/queued_retry.go:85	Exporting failed. Dropping data. Try enabling sending_queue to survive temporary failures.	{"kind": "exporter", "name": "awsxray", "dropped_items": 9}
"go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
"
go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:85
go.opentelemetry.io/collector/exporter/exporterhelper.NewTracesExporter.func2
go.opentelemetry.io/[email protected]/exporter/exporterhelper/traces.go:115
"go.opentelemetry.io/collector/consumer/consumerhelper.ConsumeTracesFunc.ConsumeTraces
"
go.opentelemetry.io/[email protected]/consumer/consumerhelper/traces.go:29
"go.opentelemetry.io/collector/service/internal/fanoutconsumer.(*tracesConsumer).ConsumeTraces
"
go.opentelemetry.io/[email protected]/service/internal/fanoutconsumer/traces.go:75
"go.opentelemetry.io/collector/receiver/otlpreceiver/internal/trace.(*Receiver).Export
"
go.opentelemetry.io/[email protected]/receiver/otlpreceiver/internal/trace/otlp.go:65
"go.opentelemetry.io/collector/model/otlpgrpc.rawTracesServer.Export
"
go.opentelemetry.io/collector/[email protected]/otlpgrpc/traces.go:166
go.opentelemetry.io/collector/model/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler.func1
go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/trace/v1/trace_service.pb.go:217
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryServerInterceptor.func1
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/[email protected]/interceptor.go:325
go.opentelemetry.io/collector/model/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler
go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/trace/v1/trace_service.pb.go:219
"google.golang.org/grpc.(*Server).processUnaryRPC
"
google.golang.org/[email protected]/server.go:1282
"google.golang.org/grpc.(*Server).handleStream
"
google.golang.org/[email protected]/server.go:1616
google.golang.org/grpc.(*Server).serveStreams.func1.2
google.golang.org/[email protected]/server.go:921
2021-12-18T00:00:10.440Z	INFO	loggingexporter/logging_exporter.go:40	TracesExporter	{"#spans": 9}
2021-12-18T00:00:10.440Z	DEBUG	loggingexporter/logging_exporter.go:49	ResourceSpans #0
Resource labels:
-> telemetry.sdk.language: STRING(python)
-> telemetry.sdk.name: STRING(opentelemetry)
-> telemetry.sdk.version: STRING(1.7.1)
-> cloud.region: STRING(ca-central-1)
-> cloud.provider: STRING(aws)
-> faas.name: STRING(lambda_api)
-> faas.version: STRING($LATEST)
-> service.name: STRING(lambda_api)
-> telemetry.auto.version: STRING(0.26b1)
InstrumentationLibrarySpans #0
InstrumentationLibrary opentelemetry.instrumentation.botocore 0.26b1
Span #0
Trace ID       : 7239d7900b5275e300a448031cf09077
Parent ID      : ae44d3838ecdfae1
ID             : 9dbf833317108684
Name           : S3.ListBuckets
Kind           : SPAN_KIND_CLIENT
Start time     : 2021-12-18 00:00:06.821828085 +0000 UTC
End time       : 2021-12-18 00:00:07.069644866 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message : 
Attributes:
-> rpc.system: STRING(aws-api)
-> rpc.service: STRING(S3)
-> rpc.method: STRING(ListBuckets)
-> aws.region: STRING(ca-central-1)
-> aws.request_id: STRING(NYKTKGPNCP11HZQ3)
-> retry_attempts: INT(0)
-> http.status_code: INT(200)
Span #1
Trace ID       : 7239d7900b5275e300a448031cf09077
Parent ID      : ae44d3838ecdfae1
ID             : 8c4afa98d92d4ce2
Name           : EC2.DescribeInstances
Kind           : SPAN_KIND_CLIENT
Start time     : 2021-12-18 00:00:08.620597006 +0000 UTC
End time       : 2021-12-18 00:00:08.932517372 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message : 
Attributes:
-> rpc.system: STRING(aws-api)
-> rpc.service: STRING(EC2)
-> rpc.method: STRING(DescribeInstances)
-> aws.region: STRING(ca-central-1)
-> aws.request_id: STRING(56c54b54-b596-458e-a748-1c32051e2f88)
-> retry_attempts: INT(0)
-> http.status_code: INT(200)
InstrumentationLibrarySpans #1
InstrumentationLibrary opentelemetry.instrumentation.asgi 0.26b1
Span #0
Trace ID       : 7239d7900b5275e300a448031cf09077
Parent ID      : 4afc00b42f28d031
ID             : 6c4dce0276690b11
Name           : /list http send
Kind           : SPAN_KIND_INTERNAL
Start time     : 2021-12-18 00:00:08.934742806 +0000 UTC
End time       : 2021-12-18 00:00:08.934812714 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message : 
Attributes:
-> http.status_code: INT(200)
-> type: STRING(http.response.start)
Span #1
Trace ID       : 7239d7900b5275e300a448031cf09077
Parent ID      : ae44d3838ecdfae1
ID             : 4afc00b42f28d031
Name           : /list http send
Kind           : SPAN_KIND_INTERNAL
Start time     : 2021-12-18 00:00:08.934620615 +0000 UTC
End time       : 2021-12-18 00:00:08.934832729 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message : 
Attributes:
-> http.status_code: INT(200)
-> type: STRING(http.response.start)
Span #2
Trace ID       : 7239d7900b5275e300a448031cf09077
Parent ID      : 4a1957f63a0ff5e2
ID             : 8b3cd404c4994971
Name           : /list http send
Kind           : SPAN_KIND_INTERNAL
Start time     : 2021-12-18 00:00:08.934935617 +0000 UTC
End time       : 2021-12-18 00:00:08.93497295 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message : 
Attributes:
-> type: STRING(http.response.body)
Span #3
Trace ID       : 7239d7900b5275e300a448031cf09077
Parent ID      : ae44d3838ecdfae1
ID             : 4a1957f63a0ff5e2
Name           : /list http send
Kind           : SPAN_KIND_INTERNAL
Start time     : 2021-12-18 00:00:08.934881634 +0000 UTC
End time       : 2021-12-18 00:00:08.93498422 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message : 
Attributes:
-> type: STRING(http.response.body)
Span #4
Trace ID       : 7239d7900b5275e300a448031cf09077
Parent ID      : 
ID             : ae44d3838ecdfae1
Name           : /list
Kind           : SPAN_KIND_SERVER
Start time     : 2021-12-18 00:00:05.142575755 +0000 UTC
End time       : 2021-12-18 00:00:08.935000769 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message : 
Attributes:
-> http.scheme: STRING(https)
-> http.host: STRING(8mbdm35542.execute-api.ca-central-1.amazonaws.com:443)
-> net.host.port: INT(443)
-> http.flavor: STRING(1.1)
-> http.target: STRING(/list)
-> http.url: STRING(https://8mbdm35542.execute-api.ca-central-1.amazonaws.com:443/dev/list)
-> http.method: STRING(GET)
-> http.server_name: STRING(8mbdm35542.execute-api.ca-central-1.amazonaws.com)
-> http.user_agent: STRING(Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36)
-> net.peer.ip: STRING(24.53.241.86)
-> net.peer.port: INT(0)
-> http.route: STRING(/list)
-> http.status_code: INT(200)
Span #5
Trace ID       : 94fcdc8c426860edfc075740db6ce0f7
Parent ID      : 
ID             : 34b39cafa2d90e30
Name           : /list
Kind           : SPAN_KIND_SERVER
Start time     : 2021-12-18 00:00:05.142299632 +0000 UTC
End time       : 2021-12-18 00:00:08.935016985 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message : 
Attributes:
-> http.scheme: STRING(https)
-> http.host: STRING(8mbdm35542.execute-api.ca-central-1.amazonaws.com:443)
-> net.host.port: INT(443)
-> http.flavor: STRING(1.1)
-> http.target: STRING(/list)
-> http.url: STRING(https://8mbdm35542.execute-api.ca-central-1.amazonaws.com:443/dev/list)
-> http.method: STRING(GET)
-> http.server_name: STRING(8mbdm35542.execute-api.ca-central-1.amazonaws.com)
-> http.user_agent: STRING(Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36)
-> net.peer.ip: STRING(24.53.241.86)
-> net.peer.port: INT(0)
-> http.route: STRING(/list)
-> http.status_code: INT(200)
InstrumentationLibrarySpans #2
InstrumentationLibrary opentelemetry.instrumentation.aws_lambda 0.26b1
Span #0
Trace ID       : 61bd248241527aa150bacd653a4b7a04
Parent ID      : f3a96843ab50f00e
ID             : 38f525d2575a900e
Name           : lambda.handler
Kind           : SPAN_KIND_SERVER
Start time     : 2021-12-18 00:00:05.131315526 +0000 UTC
End time       : 2021-12-18 00:00:08.935288475 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message : 
Attributes:
-> faas.id: STRING(arn:aws:lambda:ca-central-1:############:function:lambda_api)
-> faas.execution: STRING(c6f8d807-e951-4cad-a932-1a397cc37866)
END RequestId: c6f8d807-e951-4cad-a932-1a397cc37866
REPORT RequestId: c6f8d807-e951-4cad-a932-1a397cc37866	Duration: 5370.81 ms	Billed Duration: 5371 ms	Memory Size: 128 MB	Max Memory Used: 128 MB	Init Duration: 2245.90 ms	
XRAY TraceId: 1-61bd2482-41527aa150bacd653a4b7a04	SegmentId: 1a9bb62f555d892d	Sampled: true





publish Docker images for multi-stage builds to use

I've had difficulty adding X-Ray tracing for a Python application using OpenTelemetry with a container-based Lambda. The published extension isn't immediately applicable to that use case.

Ideally, there'd be images published to ECR that could be used in multi-stage builds, something like this (maybe one per Python version?):

FROM public.ecr.aws/lambda/aws-otel-lambda:latest as otel-base

FROM public.ecr.aws/lambda/python:3.8 
COPY src/ $LAMBDA_TASK_ROOT
COPY --from=otel-base /opt/ /opt/

That may not work for all use cases, but could at least work for images based on public.ecr.aws/lambda/python.

I understand if that's outside the purpose of this project.


I've tried doing something like this in a Dockerfile myself, and I'm getting Lambda timeouts whenever the X-Ray exporter is included in the config (using the logging exporter only works fine), with hardly any context to go on.

Maybe I could get a little help:

FROM public.ecr.aws/lambda/python:3.8 as python-base
# [... some ENV set up here]


FROM python-base as otel-base

RUN yum -y install git make wget tar

RUN wget https://go.dev/dl/go1.17.3.linux-amd64.tar.gz && rm -rf /usr/local/go && tar -C /usr/local -xzf go1.17.3.linux-amd64.tar.gz

ENV PATH=$PATH:/usr/local/go/bin

RUN git clone --recurse-submodules https://github.com/aws-observability/aws-otel-lambda.git

WORKDIR $LAMBDA_TASK_ROOT/aws-otel-lambda

# Patch ADOT
RUN ./patch-upstream.sh

# Instead of running python/build.sh, which creates zip archives, run the underlying build.sh scripts.
# opentelemetry-lambda/python/src/build.sh requires docker, so use Make instead. We also make an empty collector_build dir so the script doesn't fail; it's looking for the collector we are adding later.
RUN cd opentelemetry-lambda/python/src/otel && \
    mkdir build && \
    mkdir collector_build && \
    touch collector_build/tmp && \
    ARTIFACTS_DIR=build make build-OTelLayer && \
    mv build/python /opt/

# FIXME: /opt/otel-instrument is in /opt/python/otel-instrument?

# Build collector
RUN cd opentelemetry-lambda/collector && make build && cp -r build/extensions /opt

RUN mkdir /opt/collector-config && cp opentelemetry-lambda/collector/config* /opt/collector-config/

# use our defined collector configs
COPY "./applications/collector-config/" /opt/collector-config/

FROM python-base as application

# [... add and install python libs]

COPY --from=otel-base /opt/ /opt/

Does the instrumentor need to be installed in the same environment as the application's Python libs? In my case, I've been using Poetry to install to /var/lang/lib/python3.8/.

[Python] Unable to see DynamoDB traces when using PynamoDB

Environment Details

  • Lambda Runtime: Python3.8
  • PynamoDB version: 5.1.0
  • OTEL Layer: arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-python38-ver-1-7-1:1

Description

I haven't been able to get OTel to show DynamoDB traces when using PynamoDB instead of the DynamoDB client from boto3 in my functions.

Traces from boto3 Lambda:

boto3-trace

Log Output from boto3 Lambda:

ilt.	{"name": "pipeline", "name": "metrics"}
2021-12-03T07:17:22.783Z	info	builder/receivers_builder.go:224	Receiver was built.	{"kind": "receiver", "name": "otlp", "datatype": "traces"}
2021-12-03T07:17:22.783Z	info	builder/receivers_builder.go:224	Receiver was built.	{"kind": "receiver", "name": "otlp", "datatype": "metrics"}
2021-12-03T07:17:22.783Z	info	service/service.go:86	Starting extensions...
2021-12-03T07:17:22.783Z	info	service/service.go:91	Starting exporters...
2021-12-03T07:17:22.783Z	info	builder/exporters_builder.go:40	Exporter is starting...	{"kind": "exporter", "name": "logging"}
2021-12-03T07:17:22.783Z	info	builder/exporters_builder.go:48	Exporter started.	{"kind": "exporter", "name": "logging"}
2021-12-03T07:17:22.783Z	info	builder/exporters_builder.go:40	Exporter is starting...	{"kind": "exporter", "name": "awsxray"}
2021-12-03T07:17:22.783Z	info	builder/exporters_builder.go:48	Exporter started.	{"kind": "exporter", "name": "awsxray"}
2021-12-03T07:17:22.783Z	info	service/service.go:96	Starting processors...
2021-12-03T07:17:22.783Z	info	builder/pipelines_builder.go:54	Pipeline is starting...	{"name": "pipeline", "name": "traces"}
2021-12-03T07:17:22.783Z	info	builder/pipelines_builder.go:65	Pipeline is started.	{"name": "pipeline", "name": "traces"}
2021-12-03T07:17:22.783Z	info	builder/pipelines_builder.go:54	Pipeline is starting...	{"name": "pipeline", "name": "metrics"}
2021-12-03T07:17:22.783Z	info	builder/pipelines_builder.go:65	Pipeline is started.	{"name": "pipeline", "name": "metrics"}
2021-12-03T07:17:22.783Z	info	service/service.go:101	Starting receivers...
2021-12-03T07:17:22.783Z	info	builder/receivers_builder.go:68	Receiver is starting...	{"kind": "receiver", "name": "otlp"}
2021-12-03T07:17:22.783Z	info	otlpreceiver/otlp.go:68	Starting GRPC server on endpoint 0.0.0.0:4317	{"kind": "receiver", "name": "otlp"}
2021-12-03T07:17:22.783Z	info	otlpreceiver/otlp.go:86	Starting HTTP server on endpoint 0.0.0.0:4318	{"kind": "receiver", "name": "otlp"}
2021-12-03T07:17:22.783Z	info	otlpreceiver/otlp.go:141	Setting up a second HTTP listener on legacy endpoint 0.0.0.0:55681	{"kind": "receiver", "name": "otlp"}
2021-12-03T07:17:22.783Z	info	otlpreceiver/otlp.go:86	Starting HTTP server on endpoint 0.0.0.0:55681	{"kind": "receiver", "name": "otlp"}
2021-12-03T07:17:22.783Z	info	builder/receivers_builder.go:73	Receiver started.	{"kind": "receiver", "name": "otlp"}
2021-12-03T07:17:22.783Z	info	service/telemetry.go:92	Setting up own telemetry...
2021-12-03T07:17:22.786Z	info	service/telemetry.go:116	Serving Prometheus metrics	{"address": ":8888", "level": "basic", "service.instance.id": "64c8a618-7b73-47db-992b-bbb85cd5e14d", "service.version": "latest"}
2021-12-03T07:17:22.786Z	info	service/collector.go:235	Starting otelcol...	{"Version": "v0.1.0", "NumCPU": 2}
2021-12-03T07:17:22.786Z	info	service/collector.go:131	Everything is ready. Begin running and processing data.
2021/12/03 07:17:22 Registered extension ID: "109ca9fd-096a-49b2-b414-5978ad30548e"
2021/12/03 07:17:22 [collector] Register response: {
	"functionName": "mytestboto3",
	"functionVersion": "$LATEST",
	"handler": "mytest.myboto3handler"
}
2021/12/03 07:17:22 [collector] Waiting for event...
EXTENSION	Name: collector	State: Ready	Events: [INVOKE,SHUTDOWN]
2021/12/03 07:17:24 [collector] Received event: {
	"eventType": "INVOKE",
	"deadlineMs": 1638515847394,
	"requestId": "0e59b4bb-2f7b-44e3-b0c8-f91d1a9c5b82",
	"invokedFunctionArn": "arn:aws:lambda:us-east-1:ACCOUNT:function:mytestboto3",
	"tracing": {
		"type": "X-Amzn-Trace-Id",
		"value": "Root=1-61a9c482-555624a9539827287d10b4e3;Parent=20bd53eb485e6521;Sampled=1"
	}
}
2021/12/03 07:17:24 [collector] Waiting for event...
[{'id': {'S': 'test'}, 'sort': {'S': 'test'}}]
END RequestId: 0e59b4bb-2f7b-44e3-b0c8-f91d1a9c5b82
REPORT RequestId: 0e59b4bb-2f7b-44e3-b0c8-f91d1a9c5b82	Duration: 1749.22 ms	Billed Duration: 1750 ms	Memory Size: 128 MB	Max Memory Used: 125 MB	Init Duration: 1823.59 ms	
XRAY TraceId: 1-61a9c482-555624a9539827287d10b4e3	SegmentId: 20bd53eb485e6521	Sampled: true	

Traces from PynamoDB Lambda:

pynamodb-trace

Log Output from PynamoDB Lambda:

o:222	Pipeline was built.	{"name": "pipeline", "name": "metrics"}
2021-12-03T07:22:53.677Z	info	builder/receivers_builder.go:224	Receiver was built.	{"kind": "receiver", "name": "otlp", "datatype": "metrics"}
2021-12-03T07:22:53.678Z	info	builder/receivers_builder.go:224	Receiver was built.	{"kind": "receiver", "name": "otlp", "datatype": "traces"}
2021-12-03T07:22:53.678Z	info	service/service.go:86	Starting extensions...
2021-12-03T07:22:53.678Z	info	service/service.go:91	Starting exporters...
2021-12-03T07:22:53.678Z	info	builder/exporters_builder.go:40	Exporter is starting...	{"kind": "exporter", "name": "awsxray"}
2021-12-03T07:22:53.679Z	info	builder/exporters_builder.go:48	Exporter started.	{"kind": "exporter", "name": "awsxray"}
2021-12-03T07:22:53.679Z	info	builder/exporters_builder.go:40	Exporter is starting...	{"kind": "exporter", "name": "logging"}
2021-12-03T07:22:53.679Z	info	builder/exporters_builder.go:48	Exporter started.	{"kind": "exporter", "name": "logging"}
2021-12-03T07:22:53.679Z	info	service/service.go:96	Starting processors...
2021-12-03T07:22:53.679Z	info	builder/pipelines_builder.go:54	Pipeline is starting...	{"name": "pipeline", "name": "metrics"}
2021-12-03T07:22:53.679Z	info	builder/pipelines_builder.go:65	Pipeline is started.	{"name": "pipeline", "name": "metrics"}
2021-12-03T07:22:53.679Z	info	builder/pipelines_builder.go:54	Pipeline is starting...	{"name": "pipeline", "name": "traces"}
2021-12-03T07:22:53.679Z	info	builder/pipelines_builder.go:65	Pipeline is started.	{"name": "pipeline", "name": "traces"}
2021-12-03T07:22:53.679Z	info	service/service.go:101	Starting receivers...
2021-12-03T07:22:53.679Z	info	builder/receivers_builder.go:68	Receiver is starting...	{"kind": "receiver", "name": "otlp"}
2021-12-03T07:22:53.679Z	info	otlpreceiver/otlp.go:68	Starting GRPC server on endpoint 0.0.0.0:4317	{"kind": "receiver", "name": "otlp"}
2021-12-03T07:22:53.679Z	info	otlpreceiver/otlp.go:86	Starting HTTP server on endpoint 0.0.0.0:4318	{"kind": "receiver", "name": "otlp"}
2021-12-03T07:22:53.679Z	info	otlpreceiver/otlp.go:141	Setting up a second HTTP listener on legacy endpoint 0.0.0.0:55681	{"kind": "receiver", "name": "otlp"}
2021-12-03T07:22:53.679Z	info	otlpreceiver/otlp.go:86	Starting HTTP server on endpoint 0.0.0.0:55681	{"kind": "receiver", "name": "otlp"}
2021-12-03T07:22:53.679Z	info	builder/receivers_builder.go:73	Receiver started.	{"kind": "receiver", "name": "otlp"}
2021-12-03T07:22:53.679Z	info	service/telemetry.go:92	Setting up own telemetry...
2021-12-03T07:22:53.680Z	info	service/telemetry.go:116	Serving Prometheus metrics	{"address": ":8888", "level": "basic", "service.instance.id": "dcc93450-a887-4a08-b4cf-13022b344e72", "service.version": "latest"}
2021-12-03T07:22:53.680Z	info	service/collector.go:235	Starting otelcol...	{"Version": "v0.1.0", "NumCPU": 2}
2021-12-03T07:22:53.680Z	info	service/collector.go:131	Everything is ready. Begin running and processing data.
2021/12/03 07:22:53 Registered extension ID: "12254e8c-4555-4b8e-84cd-46007cbdfa9b"
2021/12/03 07:22:53 [collector] Register response: {
	"functionName": "mytest",
	"functionVersion": "$LATEST",
	"handler": "mytest.myhandler"
}
2021/12/03 07:22:53 [collector] Waiting for event...
EXTENSION	Name: collector	State: Ready	Events: [INVOKE,SHUTDOWN]
2021/12/03 07:22:55 [collector] Received event: {
	"eventType": "INVOKE",
	"deadlineMs": 1638516178335,
	"requestId": "71b48d3e-c0bb-4180-8f58-cb9347dbc405",
	"invokedFunctionArn": "arn:aws:lambda:us-east-1:ACCOUNT:function:mytest",
	"tracing": {
		"type": "X-Amzn-Trace-Id",
		"value": "Root=1-61a9c5cc-2a13669d21038df31184e836;Parent=6a647f894e57bf01;Sampled=1"
	}
}
2021/12/03 07:22:55 [collector] Waiting for event...
[TABLE_NAME<test, test>]
END RequestId: 71b48d3e-c0bb-4180-8f58-cb9347dbc405
REPORT RequestId: 71b48d3e-c0bb-4180-8f58-cb9347dbc405	Duration: 1759.07 ms	Billed Duration: 1760 ms	Memory Size: 128 MB	Max Memory Used: 124 MB	Init Duration: 1850.17 ms	
XRAY TraceId: 1-61a9c5cc-2a13669d21038df31184e836	SegmentId: 6a647f894e57bf01	Sampled: true	

Steps to Reproduce

Python code:

#!/usr/bin/env python3
import json
import boto3
from pynamodb.models import Model
from pynamodb.attributes import UnicodeAttribute


class SimpleModel(Model):
    class Meta:
        table_name = "TABLE_NAME"

    pk = UnicodeAttribute(hash_key=True, attr_name="id")
    id = UnicodeAttribute(range_key=True, null=False, attr_name="sort")


def myhandler(event, context):
    item = SimpleModel("test")
    item.id = "test"
    item.save()
    results = list(SimpleModel.query("test"))
    print(results)
    return json.dumps({"total": len(results)})


def myboto3handler(event, context):
    ddb = boto3.client("dynamodb")
    ddb.put_item(TableName="TABLE_NAME", Item={"id": {"S": "test"}, "sort": {"S": "test"}})
    results = ddb.query(
        TableName="TABLE_NAME",
        KeyConditionExpression="id = :test",
        ExpressionAttributeValues={":test": {"S": "test"}},
    )["Items"]
    print(results)
    return json.dumps({"total": len(results)})
  1. Create a Lambda with the code above using mytest.myboto3handler as handler (with pynamodb and otel)
  2. Create a Lambda with the code above using mytest.myhandler as handler (with pynamodb and otel)
  3. Create a table called TABLE_NAME with id of type S as partition key and sort of type S as sort key
  4. Run the functions

NodeJS 12.x environment support for auto instrumentation

We have a GraphQL API written in Node.js and hosted in AWS Lambda. We are currently considering using AWS OTel to enable observability in our API, but as far as I can see, this Lambda layer only supports Python, or am I missing something here? Is there any way I can use this in our environment?

get tracer in aws auto instrumented lambda

Is your feature request related to a problem? Please describe.
Is there a way to use the tracing API (@opentelemetry/api) in AWS Lambda to interact, from my Lambda app, with the OpenTelemetry extension layer that is installed alongside it?

Describe the solution you'd like
Assumption: the OpenTelemetry extension layer for Lambda does automatic instrumentation; as I read the code, it does this by registering instrumentations and providing a tracer provider: https://github.com/open-telemetry/opentelemetry-lambda/blob/c58c998138f355b1faa5fa071a645ca37895ccb3/nodejs/packages/layer/src/wrapper.ts

On top of the instrumentation that the layer extension provides for my application in AWS Lambda, without modifying my code, I would like to use @opentelemetry/api to get a tracer in order to create custom spans and add spans to an active context, the same way as described in the specs (https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#context-interaction) and in the opentelemetry/api repo.
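
For illustration, this is the kind of usage I have in mind; a minimal sketch against the current @opentelemetry/api surface, assuming the function runs with AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-handler so the layer's wrapper has already registered a global tracer provider (the tracer and span names below are arbitrary, not something defined by the layer):

// handler.js - only @opentelemetry/api is used here; the SDK setup comes from the layer
const { trace, context } = require('@opentelemetry/api');

// Returns a tracer from the globally registered provider; if none is registered,
// this silently falls back to a no-op tracer.
const tracer = trace.getTracer('my-app-tracer');

module.exports.handler = async (event) => {
  // context.active() should carry the invocation span created by the layer's
  // aws-lambda instrumentation, so this custom span becomes its child.
  const span = tracer.startSpan('process-event', undefined, context.active());
  try {
    span.setAttribute('records.count', Array.isArray(event.Records) ? event.Records.length : 0);
    span.addEvent('processing started');
    // ... application logic ...
    return { statusCode: 200 };
  } finally {
    span.end();
  }
};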

Describe alternatives you've considered

Additional context

Having trouble building layer and Java app

Hi!

I'm trying to build the ADOT layer and deploy the sample Java Lambda as per your guide https://github.com/aws-observability/aws-otel-lambda/tree/main/sample-apps/java-lambda. In Step 4, you have:

cd sample-apps/java-lambda
run.sh

Here, I'm getting this exception. Any ideas why?

FAILURE: Build failed with an exception.

* Where:
Build file '/home/ec2-user/aws-otel-lambda/sample-apps/java-lambda/build.gradle' line: 51

* What went wrong:
A problem occurred evaluating root project 'spark-sample-app'.
> Could not find method testCompile() for arguments [{group=junit, name=junit, version=4.12}] on object of type org.gradle.api.internal.artifacts.dsl.dependencies.DefaultDependencyHandler.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

I'm building this on an EC2 running Amazon Linux.

Thanks
--Sib

Problem with using logzio exporter in Lambda layer

Hello, we have a user who is trying to use the Lambda layer with the logzio exporter that has been included in the last few releases. They were getting an error that the exporter doesn't exist. We noticed that the build manifest includes none of the supported third-party exporters that don't come from AWS. I can submit a PR to fix this build if you'd like: https://github.com/aws-observability/aws-otel-collector/blob/main/pkg/lambdacomponents/defaults.go#L23-L26 Please advise.

Overriding Collector Config With Collector Layer Doesn't Appear To Work

I am attempting to use the layer arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-collector-ver-0-39-0:1 with a Go function.

My Go function is identical to the sample app.

But the collector doesn't appear to be letting me override the config to use my own exporter.

I am setting OPENTELEMETRY_COLLECTOR_CONFIG_FILE to /var/task/collector.yaml.

With the contents:

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlp:
    endpoint: ${OPENTELEMETRY_ENDPOINT}
    headers:
      api-key: ${API_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      exporters: [otlp]

But no telemetry is being exported to this endpoint. When I enable logging, I don't see this exporter getting loaded.

I am, however, still seeing traces in X-Ray, which makes me think the collector is not using this config at all?

I have tested the Python layer with a Python function and the above worked fine, so I'm not sure what the problem is with this collector layer. I know there was a bug with expanding environment variables in the collector config; could that be related?

Add .NET lambda auto instrumentation smoke test in release workflow

Currently, the release workflow has a smoke test of the Collector layer with a Go-instrumented Lambda app. Ideally, we should also add one more smoke test for a .NET-instrumented Lambda app with the Collector layer. The current GitHub release workflow design allows testing the Collector layer with only one instrumented Lambda app (Go or .NET). We should modify the workflow design to allow room for more testing scenarios.

Unable to send traces to OTEL collector using aws-otel-nodejs-ver-1-0-0 layer

Hello team, I set up a lambda function with Node.js by following this documentation (https://aws-otel.github.io/docs/getting-started/lambda/lambda-js). I have logging exporter (loglevel: debug) and otlp exporter (endpoint to a different OTEL collector hosted on an EC2) enabled in the config.yaml file. However, I'm not seeing the traces going to the OTEL collector. I'm not seeing the logging exporter print the traces in the execution results. I was able to make this work with a lambda function with Python (https://aws-otel.github.io/docs/getting-started/lambda/lambda-python) but it's not working with Node.js which makes me think that there might be an issue with the Node.js layer? Can someone please help me with this issue?

aws-otel-nodejs-ver-1-2-0

I've referred to the doc, but this layer does not seem to be available in ap-southeast-2.

When I run

aws lambda get-layer-version \
    --layer-name "arn:aws:lambda:ap-southeast-2:901920570463:layer:aws-otel-nodejs-ver-0-23-0" \
    --version-number 1

I get this error:

An error occurred (AccessDeniedException) when calling the GetLayerVersion operation: User is not authorized to perform: lambda:GetLayerVersion on resource: arn:aws:lambda:ap-southeast-2:901920570463:layer:aws-otel-nodejs-ver-0-23-0:1

But there is no issue with the Python version:

aws lambda get-layer-version \
    --layer-name "arn:aws:lambda:ap-southeast-2:901920570463:layer:aws-otel-python38-ver-1-2-0" \
    --version-number 1

[Nodejs] AWS auto-instrumentation doesn't propagate parent trace.id and span.id properly

Using Node.js Lambdas.

I am trying to get my consumer lambda to call the producer lambda and propagate the trace context between them using the traceparent HTTP header.

So, I have the env variable set:

OTEL_PROPAGATORS: tracecontext

I am also seeing the code performing the header injection properly and passing it from the consumer lambda. It is received by the producer lambda, as per the logs.

Consumer logs show the span with the correct span.id to be propagated:

InstrumentationLibrary @opentelemetry/instrumentation-http 0.19.0
Span #0
Trace ID       : 60e6a7a924cc736a604af2a602d84600 <==== TRACE ID
Parent ID      : 2430b46dd43fac16
ID             : 81eb76e307cb48a4 <==== SPAN ID
Name           : HTTPS GET
Kind           : SPAN_KIND_CLIENT
Start time     : 2021-07-08 07:22:19.353159168 +0000 UTC
End time       : 2021-07-08 07:22:20.792924928 +0000 UTC
Status code    : STATUS_CODE_OK
Status message :
Attributes:
-> http.url: STRING(https://pgzr057had.execute-api.ap-southeast-2.amazonaws.com/dev/ping)
-> http.method: STRING(GET)
-> http.target: STRING(/dev/ping)
-> net.peer.name: STRING(pgzr057had.execute-api.ap-southeast-2.amazonaws.com)
-> net.peer.ip: STRING(13.226.106.129)
-> net.peer.port: INT(443)
-> http.host: STRING(pgzr057had.execute-api.ap-southeast-2.amazonaws.com:443)
-> http.response_content_length_uncompressed: INT(88)
-> http.status_code: INT(200)
-> http.status_text: STRING(OK)
-> http.flavor: STRING(1.1)
-> net.transport: STRING(IP.TCP)

Producer logs show the traceparent header received with the correct propagation data:

        "traceparent": "00-60e6a7a924cc736a604af2a602d84600-81eb76e307cb48a4-01",

However, the span that the producer lambda generates has a totally different parent from the one that is sent in the traceparent header:

2021-07-08T07:22:20.781Z	DEBUG	loggingexporter/logging_exporter.go:48	ResourceSpans #0
Resource labels:
-> service.name: STRING(aws-node-simple-http-endpoint-dev-consumer)
-> telemetry.sdk.language: STRING(nodejs)
-> telemetry.sdk.name: STRING(opentelemetry)
-> telemetry.sdk.version: STRING(0.19.0)
-> cloud.provider: STRING(aws)
-> cloud.region: STRING(ap-southeast-2)
-> faas.name: STRING(aws-node-simple-http-endpoint-dev-consumer)
-> faas.version: STRING($LATEST)
-> process.id: INT(18)
-> process.executable.name: STRING(/var/lang/bin/node)
-> process.command: STRING(/var/runtime/index.js)
-> process.command_line: STRING(/var/lang/bin/node /var/runtime/index.js)
InstrumentationLibrarySpans #0
InstrumentationLibrary @opentelemetry/instrumentation-aws-lambda 0.16.0
Span #0
Trace ID       : 60e6a7ab0e1a895f1930b0f93004a373 <=== Different TRACE ID
Parent ID      : 05eb499bf8d606d3 <=== Different Parent ID
ID             : 906c6487661e9ec0
Name           : aws-node-simple-http-endpoint-dev-consumer
Kind           : SPAN_KIND_SERVER
Start time     : 2021-07-08 07:22:20.718549504 +0000 UTC
End time       : 2021-07-08 07:22:20.722462208 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message :
Attributes:
-> faas.execution: STRING(85b62062-31fb-49ab-a5b0-8cba44a1315c)
-> faas.id: STRING(arn:aws:lambda:ap-southeast-2:401722391821:function:aws-node-simple-http-endpoint-dev-consumer)
-> cloud.account.id: STRING(401722391821)
Consumer lambda log


START RequestId: 327318f1-18d3-4288-8caa-bb3dcbce0f69 Version: $LATEST
2021/07/08 07:22:18 [collector] Launching OpenTelemetry Lambda extension, version:  v0.1.0
2021-07-08T07:22:18.343Z	info	service/application.go:277	Starting otelcol...	{     "Version": "v0.1.0",     "NumCPU": 2 }
2021-07-08T07:22:18.343Z	info	service/application.go:185	Setting up own telemetry...
2021-07-08T07:22:18.343Z	info	service/application.go:220	Loading configuration...
2021-07-08T07:22:18.345Z	info	service/application.go:236	Applying configuration...
2021-07-08T07:22:18.345Z	info	builder/exporters_builder.go:274	Exporter was built.	{     "kind": "exporter",     "exporter": "logging" }
2021-07-08T07:22:18.346Z	info	builder/exporters_builder.go:274	Exporter was built.	{     "kind": "exporter",     "exporter": "otlp" }
2021-07-08T07:22:18.346Z	info	builder/pipelines_builder.go:204	Pipeline was built.	{     "pipeline_name": "traces",     "pipeline_datatype": "traces" }
2021-07-08T07:22:18.346Z	info	builder/pipelines_builder.go:204	Pipeline was built.	{     "pipeline_name": "metrics",     "pipeline_datatype": "metrics" }
2021-07-08T07:22:18.346Z	info	builder/receivers_builder.go:230	Receiver was built.	{     "kind": "receiver",     "name": "otlp",     "datatype": "traces" }
2021-07-08T07:22:18.347Z	info	builder/receivers_builder.go:230	Receiver was built.	{     "kind": "receiver",     "name": "otlp",     "datatype": "metrics" }
2021-07-08T07:22:18.347Z	info	service/service.go:155	Starting extensions...
2021-07-08T07:22:18.347Z	info	service/service.go:200	Starting exporters...
2021-07-08T07:22:18.347Z	info	builder/exporters_builder.go:92	Exporter is starting...	{     "kind": "exporter",     "name": "otlp" }
2021-07-08T07:22:18.347Z	info	builder/exporters_builder.go:97	Exporter started.	{     "kind": "exporter",     "name": "otlp" }
2021-07-08T07:22:18.347Z	info	builder/exporters_builder.go:92	Exporter is starting...	{     "kind": "exporter",     "name": "logging" }
2021-07-08T07:22:18.347Z	info	builder/exporters_builder.go:97	Exporter started.	{     "kind": "exporter",     "name": "logging" }
2021-07-08T07:22:18.347Z	info	service/service.go:205	Starting processors...
2021-07-08T07:22:18.347Z	info	builder/pipelines_builder.go:51	Pipeline is starting...	{     "pipeline_name": "traces",     "pipeline_datatype": "traces" }
2021-07-08T07:22:18.347Z	info	builder/pipelines_builder.go:62	Pipeline is started.	{     "pipeline_name": "traces",     "pipeline_datatype": "traces" }
2021-07-08T07:22:18.347Z	info	builder/pipelines_builder.go:51	Pipeline is starting...	{     "pipeline_name": "metrics",     "pipeline_datatype": "metrics" }
2021-07-08T07:22:18.347Z	info	builder/pipelines_builder.go:62	Pipeline is started.	{     "pipeline_name": "metrics",     "pipeline_datatype": "metrics" }
2021-07-08T07:22:18.347Z	info	service/service.go:210	Starting receivers...
2021-07-08T07:22:18.347Z	info	builder/receivers_builder.go:70	Receiver is starting...	{     "kind": "receiver",     "name": "otlp" }
2021-07-08T07:22:18.347Z	info	otlpreceiver/otlp.go:83	Starting GRPC server on endpoint 0.0.0.0:4317	{     "kind": "receiver",     "name": "otlp" }
2021-07-08T07:22:18.347Z	info	otlpreceiver/otlp.go:145	Setting up a second GRPC listener on legacy endpoint 0.0.0.0:55680	{     "kind": "receiver",     "name": "otlp" }
2021-07-08T07:22:18.347Z	info	otlpreceiver/otlp.go:83	Starting GRPC server on endpoint 0.0.0.0:55680	{     "kind": "receiver",     "name": "otlp" }
2021-07-08T07:22:18.347Z	info	otlpreceiver/otlp.go:101	Starting HTTP server on endpoint 0.0.0.0:55681	{     "kind": "receiver",     "name": "otlp" }
2021-07-08T07:22:18.347Z	info	builder/receivers_builder.go:75	Receiver started.	{     "kind": "receiver",     "name": "otlp" }
2021-07-08T07:22:18.347Z	info	service/application.go:197	Everything is ready. Begin running and processing data.
2021/07/08 07:22:18 Registered extension ID: "f13fddea-3af0-454b-8d3c-739edf925beb"
2021/07/08 07:22:18 [collector] Register response: {
"functionName": "aws-node-simple-http-endpoint-dev-producer",
"functionVersion": "$LATEST",
"handler": "handler.consumer"
}
2021/07/08 07:22:18 [collector] Waiting for event...
Registering OpenTelemetry
EXTENSION	Name: collector	State: Ready	Events: [INVOKE,SHUTDOWN]
2021/07/08 07:22:19 [collector] Received event: {
"eventType": "INVOKE",
"deadlineMs": 1625728945219,
"requestId": "327318f1-18d3-4288-8caa-bb3dcbce0f69",
"invokedFunctionArn": "arn:aws:lambda:ap-southeast-2:401722391821:function:aws-node-simple-http-endpoint-dev-producer",
"tracing": {
"type": "X-Amzn-Trace-Id",
"value": "Root=1-60e6a7a9-24cc736a604af2a602d84600;Parent=3d3d3a7622e60477;Sampled=1"
}
}
2021/07/08 07:22:19 [collector] Waiting for event...
2021-07-08T07:22:20.793Z	327318f1-18d3-4288-8caa-bb3dcbce0f69	INFO	Status: 200
2021-07-08T07:22:20.793Z	327318f1-18d3-4288-8caa-bb3dcbce0f69	INFO	{     "message": "Hello, the current time is 07:22:20 GMT+0000 (Coordinated Universal Time)." }
2021-07-08T07:22:20.831Z	INFO	loggingexporter/logging_exporter.go:42	TracesExporter	{     "#spans": 3 }
2021-07-08T07:22:20.831Z	DEBUG	loggingexporter/logging_exporter.go:48	ResourceSpans #0
Resource labels:
-> service.name: STRING(aws-node-simple-http-endpoint-dev-producer)
-> telemetry.sdk.language: STRING(nodejs)
-> telemetry.sdk.name: STRING(opentelemetry)
-> telemetry.sdk.version: STRING(0.19.0)
-> cloud.provider: STRING(aws)
-> cloud.region: STRING(ap-southeast-2)
-> faas.name: STRING(aws-node-simple-http-endpoint-dev-producer)
-> faas.version: STRING($LATEST)
-> process.id: INT(17)
-> process.executable.name: STRING(/var/lang/bin/node)
-> process.command: STRING(/var/runtime/index.js)
-> process.command_line: STRING(/var/lang/bin/node /var/runtime/index.js)
InstrumentationLibrarySpans #0
InstrumentationLibrary @opentelemetry/instrumentation-net 0.16.0
Span #0
Trace ID       : 60e6a7a924cc736a604af2a602d84600
Parent ID      : 81eb76e307cb48a4
ID             : 78b32ffd863269f2
Name           : tcp.connect
Kind           : SPAN_KIND_INTERNAL
Start time     : 2021-07-08 07:22:19.377481728 +0000 UTC
End time       : 2021-07-08 07:22:19.39574656 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message :
Attributes:
-> net.transport: STRING(IP.TCP)
-> net.peer.name: STRING(pgzr057had.execute-api.ap-southeast-2.amazonaws.com)
-> net.peer.port: INT(443)
-> net.peer.ip: STRING(13.226.106.129)
-> net.host.ip: STRING(169.254.76.1)
-> net.host.port: INT(49788)
InstrumentationLibrarySpans #1
InstrumentationLibrary @opentelemetry/instrumentation-http 0.19.0
Span #0
Trace ID       : 60e6a7a924cc736a604af2a602d84600
Parent ID      : 2430b46dd43fac16
ID             : 81eb76e307cb48a4
Name           : HTTPS GET
Kind           : SPAN_KIND_CLIENT
Start time     : 2021-07-08 07:22:19.353159168 +0000 UTC
End time       : 2021-07-08 07:22:20.792924928 +0000 UTC
Status code    : STATUS_CODE_OK
Status message :
Attributes:
-> http.url: STRING(https://pgzr057had.execute-api.ap-southeast-2.amazonaws.com/dev/ping)
-> http.method: STRING(GET)
-> http.target: STRING(/dev/ping)
-> net.peer.name: STRING(pgzr057had.execute-api.ap-southeast-2.amazonaws.com)
-> net.peer.ip: STRING(13.226.106.129)
-> net.peer.port: INT(443)
-> http.host: STRING(pgzr057had.execute-api.ap-southeast-2.amazonaws.com:443)
-> http.response_content_length_uncompressed: INT(88)
-> http.status_code: INT(200)
-> http.status_text: STRING(OK)
-> http.flavor: STRING(1.1)
-> net.transport: STRING(IP.TCP)
InstrumentationLibrarySpans #2
InstrumentationLibrary @opentelemetry/instrumentation-aws-lambda 0.16.0
Span #0
Trace ID       : 60e6a7a924cc736a604af2a602d84600
Parent ID      : 7818715d2bd817ba
ID             : 2430b46dd43fac16
Name           : aws-node-simple-http-endpoint-dev-producer
Kind           : SPAN_KIND_SERVER
Start time     : 2021-07-08 07:22:19.318216704 +0000 UTC
End time       : 2021-07-08 07:22:20.794181376 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message :
Attributes:
-> faas.execution: STRING(327318f1-18d3-4288-8caa-bb3dcbce0f69)
-> faas.id: STRING(arn:aws:lambda:ap-southeast-2:401722391821:function:aws-node-simple-http-endpoint-dev-producer)
-> cloud.account.id: STRING(401722391821)
 
END RequestId: 327318f1-18d3-4288-8caa-bb3dcbce0f69
REPORT RequestId: 327318f1-18d3-4288-8caa-bb3dcbce0f69	Duration: 1615.37 ms	Billed Duration: 1616 ms	Memory Size: 1024 MB	Max Memory Used: 160 MB	Init Duration: 1104.55 ms	XRAY TraceId: 1-60e6a7a9-24cc736a604af2a602d84600	SegmentId: 3d3d3a7622e60477	Sampled: true
2021-07-08T07:28:18.094Z	error	exporterhelper/queued_retry.go:295	Exporting failed. No more retries left. Dropping data.	{     "kind": "exporter",     "name": "otlp",     "error": "max elapsed time expired failed to push trace data via OTLP exporter: rpc error: code = DeadlineExceeded desc = context deadline exceeded",     "dropped_items": 3 }
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:295
go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesExporterWithObservability).send
go.opentelemetry.io/[email protected]/exporter/exporterhelper/traces.go:118
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:152
github.com/jaegertracing/jaeger/pkg/queue.ConsumerFunc.Consume
github.com/jaegertracing/[email protected]/pkg/queue/bounded_queue.go:104
github.com/jaegertracing/jaeger/pkg/queue.(*BoundedQueue).StartConsumersWithFactory.func1
github.com/jaegertracing/[email protected]/pkg/queue/bounded_queue.go:83
2021/07/08 07:28:18 [collector] Received event: {
"eventType": "SHUTDOWN",
"deadlineMs": 1625729300093,
"requestId": "",
"invokedFunctionArn": "",
"tracing": {
"type": "",
"value": ""
}
}
2021-07-08T07:28:18.099Z	info	service/application.go:212	Received stop test request
2021-07-08T07:28:18.099Z	info	service/application.go:307	Starting shutdown...
2021-07-08T07:28:18.099Z	info	service/service.go:225	Stopping receivers...
2021-07-08T07:28:18.101Z	info	service/service.go:231	Stopping processors...
2021-07-08T07:28:18.101Z	info	builder/pipelines_builder.go:70	Pipeline is shutting down...	{     "pipeline_name": "traces",     "pipeline_datatype": "traces" }
2021-07-08T07:28:18.101Z	info	builder/pipelines_builder.go:76	Pipeline is shutdown.	{     "pipeline_name": "traces",     "pipeline_datatype": "traces" }
2021-07-08T07:28:18.101Z	info	builder/pipelines_builder.go:70	Pipeline is shutting down...	{     "pipeline_name": "metrics",     "pipeline_datatype": "metrics" }
2021-07-08T07:28:18.101Z	info	builder/pipelines_builder.go:76	Pipeline is shutdown.	{     "pipeline_name": "metrics",     "pipeline_datatype": "metrics" }
2021-07-08T07:28:18.101Z	info	service/service.go:237	Stopping exporters...
2021-07-08T07:28:18.101Z	info	service/service.go:164	Stopping extensions...
2021-07-08T07:28:18.101Z	info	service/application.go:325	Shutdown complete.
2021/07/08 07:28:18 [collector] Received SHUTDOWN event
2021/07/08 07:28:18 [collector] Exiting


Producer lambda logs


START RequestId: 85b62062-31fb-49ab-a5b0-8cba44a1315c Version: $LATEST
2021/07/08 07:22:19 [collector] Launching OpenTelemetry Lambda extension, version:  v0.1.0
2021-07-08T07:22:19.771Z	info	service/application.go:277	Starting otelcol...	{     "Version": "v0.1.0",     "NumCPU": 2 }
2021-07-08T07:22:19.771Z	info	service/application.go:185	Setting up own telemetry...
2021-07-08T07:22:19.771Z	info	service/application.go:220	Loading configuration...
2021-07-08T07:22:19.773Z	info	service/application.go:236	Applying configuration...
2021-07-08T07:22:19.774Z	info	builder/exporters_builder.go:274	Exporter was built.	{     "kind": "exporter",     "exporter": "logging" }
2021-07-08T07:22:19.775Z	info	builder/exporters_builder.go:274	Exporter was built.	{     "kind": "exporter",     "exporter": "otlp" }
2021-07-08T07:22:19.775Z	info	builder/pipelines_builder.go:204	Pipeline was built.	{     "pipeline_name": "metrics",     "pipeline_datatype": "metrics" }
2021-07-08T07:22:19.776Z	info	builder/pipelines_builder.go:204	Pipeline was built.	{     "pipeline_name": "traces",     "pipeline_datatype": "traces" }
2021-07-08T07:22:19.776Z	info	builder/receivers_builder.go:230	Receiver was built.	{     "kind": "receiver",     "name": "otlp",     "datatype": "traces" }
2021-07-08T07:22:19.776Z	info	builder/receivers_builder.go:230	Receiver was built.	{     "kind": "receiver",     "name": "otlp",     "datatype": "metrics" }
2021-07-08T07:22:19.776Z	info	service/service.go:155	Starting extensions...
2021-07-08T07:22:19.776Z	info	service/service.go:200	Starting exporters...
2021-07-08T07:22:19.776Z	info	builder/exporters_builder.go:92	Exporter is starting...	{     "kind": "exporter",     "name": "logging" }
2021-07-08T07:22:19.776Z	info	builder/exporters_builder.go:97	Exporter started.	{     "kind": "exporter",     "name": "logging" }
2021-07-08T07:22:19.776Z	info	builder/exporters_builder.go:92	Exporter is starting...	{     "kind": "exporter",     "name": "otlp" }
2021-07-08T07:22:19.776Z	info	builder/exporters_builder.go:97	Exporter started.	{     "kind": "exporter",     "name": "otlp" }
2021-07-08T07:22:19.776Z	info	service/service.go:205	Starting processors...
2021-07-08T07:22:19.776Z	info	builder/pipelines_builder.go:51	Pipeline is starting...	{     "pipeline_name": "metrics",     "pipeline_datatype": "metrics" }
2021-07-08T07:22:19.776Z	info	builder/pipelines_builder.go:62	Pipeline is started.	{     "pipeline_name": "metrics",     "pipeline_datatype": "metrics" }
2021-07-08T07:22:19.776Z	info	builder/pipelines_builder.go:51	Pipeline is starting...	{     "pipeline_name": "traces",     "pipeline_datatype": "traces" }
2021-07-08T07:22:19.776Z	info	builder/pipelines_builder.go:62	Pipeline is started.	{     "pipeline_name": "traces",     "pipeline_datatype": "traces" }
2021-07-08T07:22:19.776Z	info	service/service.go:210	Starting receivers...
2021-07-08T07:22:19.776Z	info	builder/receivers_builder.go:70	Receiver is starting...	{     "kind": "receiver",     "name": "otlp" }
2021-07-08T07:22:19.776Z	info	otlpreceiver/otlp.go:83	Starting GRPC server on endpoint 0.0.0.0:4317	{     "kind": "receiver",     "name": "otlp" }
2021-07-08T07:22:19.776Z	info	otlpreceiver/otlp.go:145	Setting up a second GRPC listener on legacy endpoint 0.0.0.0:55680	{     "kind": "receiver",     "name": "otlp" }
2021-07-08T07:22:19.776Z	info	otlpreceiver/otlp.go:83	Starting GRPC server on endpoint 0.0.0.0:55680	{     "kind": "receiver",     "name": "otlp" }
2021-07-08T07:22:19.776Z	info	otlpreceiver/otlp.go:101	Starting HTTP server on endpoint 0.0.0.0:55681	{     "kind": "receiver",     "name": "otlp" }
2021-07-08T07:22:19.778Z	info	builder/receivers_builder.go:75	Receiver started.	{     "kind": "receiver",     "name": "otlp" }
2021-07-08T07:22:19.778Z	info	service/application.go:197	Everything is ready. Begin running and processing data.
2021/07/08 07:22:19 Registered extension ID: "5e1aec70-1611-4aef-9369-7b18e2dd725f"
2021/07/08 07:22:19 [collector] Register response: {
"functionName": "aws-node-simple-http-endpoint-dev-consumer",
"functionVersion": "$LATEST",
"handler": "handler.producer"
}
2021/07/08 07:22:19 [collector] Waiting for event...
Registering OpenTelemetry
EXTENSION	Name: collector	State: Ready	Events: [INVOKE,SHUTDOWN]
2021/07/08 07:22:20 [collector] Received event: {
"eventType": "INVOKE",
"deadlineMs": 1625728946604,
"requestId": "85b62062-31fb-49ab-a5b0-8cba44a1315c",
"invokedFunctionArn": "arn:aws:lambda:ap-southeast-2:401722391821:function:aws-node-simple-http-endpoint-dev-consumer",
"tracing": {
"type": "X-Amzn-Trace-Id",
"value": "Root=1-60e6a7ab-0e1a895f1930b0f93004a373;Parent=6d5d52cd6c5c8110;Sampled=1"
}
}
2021/07/08 07:22:20 [collector] Waiting for event...
2021-07-08T07:22:20.719Z	85b62062-31fb-49ab-a5b0-8cba44a1315c	INFO	Event: {     "resource": "/ping",     "path": "/ping",     "httpMethod": "GET",     "headers": {         "CloudFront-Forwarded-Proto": "https",         "CloudFront-Is-Desktop-Viewer": "true",         "CloudFront-Is-Mobile-Viewer": "false",         "CloudFront-Is-SmartTV-Viewer": "false",         "CloudFront-Is-Tablet-Viewer": "false",         "CloudFront-Viewer-Country": "AU",         "Host": "pgzr057had.execute-api.ap-southeast-2.amazonaws.com",         "traceparent": "00-60e6a7a924cc736a604af2a602d84600-81eb76e307cb48a4-01",         "User-Agent": "Amazon CloudFront",         "Via": "1.1 a6767fff426820a624528e674b714fb3.cloudfront.net (CloudFront)",         "X-Amz-Cf-Id": "_kKJ5a_w11lV-6dzkHkD4h_tRWvkMMkyz2xLC1euTguJj4L0AmmNgQ==",         "X-Amzn-Trace-Id": "Root=1-60e6a7ab-0e1a895f1930b0f93004a373",         "X-Forwarded-For": "3.25.229.29, 64.252.184.78",         "X-Forwarded-Port": "443",         "X-Forwarded-Proto": "https"     },     "multiValueHeaders": {         "CloudFront-Forwarded-Proto": [             "https"         ],         "CloudFront-Is-Desktop-Viewer": [             "true"         ],         "CloudFront-Is-Mobile-Viewer": [             "false"         ],         "CloudFront-Is-SmartTV-Viewer": [             "false"         ],         "CloudFront-Is-Tablet-Viewer": [             "false"         ],         "CloudFront-Viewer-Country": [             "AU"         ],         "Host": [             "pgzr057had.execute-api.ap-southeast-2.amazonaws.com"         ],         "traceparent": [             "00-60e6a7a924cc736a604af2a602d84600-81eb76e307cb48a4-01"         ],         "User-Agent": [             "Amazon CloudFront"         ],         "Via": [             "1.1 a6767fff426820a624528e674b714fb3.cloudfront.net (CloudFront)"         ],         "X-Amz-Cf-Id": [             "_kKJ5a_w11lV-6dzkHkD4h_tRWvkMMkyz2xLC1euTguJj4L0AmmNgQ=="         ],         "X-Amzn-Trace-Id": [             "Root=1-60e6a7ab-0e1a895f1930b0f93004a373"         ],         "X-Forwarded-For": [             "3.25.229.29, 64.252.184.78"         ],         "X-Forwarded-Port": [             "443"         ],         "X-Forwarded-Proto": [             "https"         ]     },     "queryStringParameters": null,     "multiValueQueryStringParameters": null,     "pathParameters": null,     "stageVariables": null,     "requestContext": {         "resourceId": "6t8m7d",         "resourcePath": "/ping",         "httpMethod": "GET",         "extendedRequestId": "CI8i0FLnSwMF9TA=",         "requestTime": "08/Jul/2021:07:22:19 +0000",         "path": "/dev/ping",         "accountId": "401722391821",         "protocol": "HTTP/1.1",         "stage": "dev",         "domainPrefix": "pgzr057had",         "requestTimeEpoch": 1625728939406,         "requestId": "8d9cf442-ffd7-4bb4-8a4e-696097961e8f",         "identity": {             "cognitoIdentityPoolId": null,             "accountId": null,             "cognitoIdentityId": null,             "caller": null,             "sourceIp": "3.25.229.29",             "principalOrgId": null,             "accessKey": null,             "cognitoAuthenticationType": null,             "cognitoAuthenticationProvider": null,             "userArn": null,             "userAgent": "Amazon CloudFront",             "user": null         },         "domainName": "pgzr057had.execute-api.ap-southeast-2.amazonaws.com",         "apiId": "pgzr057had"     },     "body": null,     "isBase64Encoded": false }
2021-07-08T07:22:20.781Z	INFO	loggingexporter/logging_exporter.go:42	TracesExporter	{     "#spans": 1 }
2021-07-08T07:22:20.781Z	DEBUG	loggingexporter/logging_exporter.go:48	ResourceSpans #0
Resource labels:
-> service.name: STRING(aws-node-simple-http-endpoint-dev-consumer)
-> telemetry.sdk.language: STRING(nodejs)
-> telemetry.sdk.name: STRING(opentelemetry)
-> telemetry.sdk.version: STRING(0.19.0)
-> cloud.provider: STRING(aws)
-> cloud.region: STRING(ap-southeast-2)
-> faas.name: STRING(aws-node-simple-http-endpoint-dev-consumer)
-> faas.version: STRING($LATEST)
-> process.id: INT(18)
-> process.executable.name: STRING(/var/lang/bin/node)
-> process.command: STRING(/var/runtime/index.js)
-> process.command_line: STRING(/var/lang/bin/node /var/runtime/index.js)
InstrumentationLibrarySpans #0
InstrumentationLibrary @opentelemetry/instrumentation-aws-lambda 0.16.0
Span #0
Trace ID       : 60e6a7ab0e1a895f1930b0f93004a373
Parent ID      : 05eb499bf8d606d3
ID             : 906c6487661e9ec0
Name           : aws-node-simple-http-endpoint-dev-consumer
Kind           : SPAN_KIND_SERVER
Start time     : 2021-07-08 07:22:20.718549504 +0000 UTC
End time       : 2021-07-08 07:22:20.722462208 +0000 UTC
Status code    : STATUS_CODE_UNSET
Status message :
Attributes:
-> faas.execution: STRING(85b62062-31fb-49ab-a5b0-8cba44a1315c)
-> faas.id: STRING(arn:aws:lambda:ap-southeast-2:401722391821:function:aws-node-simple-http-endpoint-dev-consumer)
-> cloud.account.id: STRING(401722391821)
 
END RequestId: 85b62062-31fb-49ab-a5b0-8cba44a1315c
REPORT RequestId: 85b62062-31fb-49ab-a5b0-8cba44a1315c	Duration: 181.99 ms	Billed Duration: 182 ms	Memory Size: 1024 MB	Max Memory Used: 155 MB	Init Duration: 1035.04 ms	XRAY TraceId: 1-60e6a7ab-0e1a895f1930b0f93004a373	SegmentId: 6d5d52cd6c5c8110	Sampled: true

Source code
'use strict';

const request = require('request');


module.exports.producer = (event, context, callback) => {

  console.log(`Event: ${JSON.stringify(event)}`)

  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: `Hello, the current time is ${new Date().toTimeString()}.`,
    }),
  };

  callback(null, response);

};


module.exports.consumer = (event, context, callback) => {

  let options = {
    url: process.env.CONSUMER_API,
    headers: {}
  }

  request.get( options, (err, res, body) => {

    if (err) {
      return console.log(err);
    }
    console.log(`Status: ${res.statusCode}`);
    console.log(body);

    const response_final = {
      statusCode: 200,
      body: JSON.stringify({
        message: `The response from the producer is ${body}.`,
      }),
    };

    callback(null, response_final);

  });

};
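
For reference, this is roughly the manual extraction I would expect to happen under the hood; a minimal sketch with plain @opentelemetry/api calls, assuming the W3C trace context propagator is registered globally via OTEL_PROPAGATORS=tracecontext (the tracer and span names are placeholders, not code taken from the layer):

const { context, propagation, trace } = require('@opentelemetry/api');

const tracer = trace.getTracer('manual-extract-example');

module.exports.producerWithExtract = (event, ctx, callback) => {
  // Extract W3C trace context from the incoming API Gateway headers
  // (the header arrives lowercased as "traceparent", as shown in the logs above).
  const extracted = propagation.extract(context.active(), event.headers || {});

  // Parent this span on the extracted context rather than on the X-Ray lambda context.
  const span = tracer.startSpan('producer-handler', undefined, extracted);
  try {
    // ... handler logic ...
    callback(null, { statusCode: 200, body: JSON.stringify({ message: 'ok' }) });
  } finally {
    span.end();
  }
};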

OTel not showing DynamoDB, SQS, and other service nodes in the AWS X-Ray service map for Quarkus Java native builds

OTel with Quarkus Lambda Native Images
We are using Java with the Quarkus framework to build Lambda workloads, but after enabling OTel, we only get the invoke and shutdown trace. No other service nodes or traces are found in the logs or in X-Ray, even though multiple AWS services are invoked from this Lambda.
image

OTel must detect and display nodes in the X-Ray service map for native images
image
We want OTel to report the full picture of the Lambda lifecycle, including overhead and initialization, and traces of the other services invoked.
It does this well in Java environments, but when we moved to a native Quarkus build to get rid of initialization delays, it's not reporting everything.

Additional context
We are using Java 11 with Quarkus Lambda in native mode.
We have configured OTel using the Lambda layer (Lambda Java Auto Instrumentation).
We also tried running OTel in debug mode by setting OTEL_JAVAAGENT_DEBUG=true, but that did not give us enough information to resolve the issue with the other AWS services.
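
For reference, here is a minimal sketch of how we attach the layer and these environment variables with the AWS CDK (TypeScript), inside a Stack constructor. The layer ARN, handler class, and code asset path below are placeholders rather than our exact values, and the sketch assumes the managed Java 11 runtime rather than the native-image custom runtime:

import * as lambda from "@aws-cdk/aws-lambda";

// Placeholder ARN: substitute the region and the current ADOT Java agent layer version.
const adotJavaAgentLayer = lambda.LayerVersion.fromLayerVersionArn(
  this,
  "adot-java-agent-layer",
  "arn:aws:lambda:<region>:901920570463:layer:aws-otel-java-agent-<version>:1"
);

const quarkusFn = new lambda.Function(this, "quarkus-fn", {
  runtime: lambda.Runtime.JAVA_11,
  handler: "org.example.LambdaHandler::handleRequest", // hypothetical handler class
  code: lambda.Code.fromAsset("target/function.zip"),  // hypothetical build artifact
  tracing: lambda.Tracing.ACTIVE,
  layers: [adotJavaAgentLayer],
  environment: {
    // Wrapper script shipped in the ADOT Java agent layer.
    AWS_LAMBDA_EXEC_WRAPPER: "/opt/otel-handler",
    // Verbose agent logging, as described above.
    OTEL_JAVAAGENT_DEBUG: "true",
  },
});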

Java Agent Lambda Layer does not work on Java 8 runtime

Description

This caught my attention because of open-telemetry/opentelemetry-lambda#107, which said that the Java Agent Lambda Layer only works on the Java 11 runtime.

This is in contrast to the AWS public documentation for the Java Agent Lambda Layer, which says the layer is compatible with both Java 8 and Java 11 runtimes.

Our tests most likely did not catch this because we only test our Sample App on the Java 11 runtime.

I confirmed this by using the AWS SDK Sample App from the OTel Lambda repo and testing it on both Java 8 and Java 11:

The trace from the Java 11 run-time:

[screenshot: X-Ray trace from the Java 11 runtime]

The trace from the Java 8 run-time:

[screenshot: X-Ray trace from the Java 8 runtime]

The Java 8 runtime does not have the S3 trace, which means it is not instrumenting the AWS SDK.

cc @anuraaga

[Bug] Issue using aws-otel-nodejs in Lambda

Context

I am trying to configure ADOT and use it with a Lambda function deployed through the AWS CDK. The CDK offers a ready-to-use Node.js Lambda construct, @aws-cdk/aws-lambda-nodejs.

Unfortunately, there seems to be a conflict between the otel-handler wrapper and the way @aws-cdk/aws-lambda-nodejs builds the Lambda function. I am not sure whether it is related to esbuild.

Update: the issue seems to be similar to #97.

Configuration

I am using the following code to create a Lambda function.

import * as lambda from "@aws-cdk/aws-lambda";
import { NodejsFunction } from "@aws-cdk/aws-lambda-nodejs";

// Reference the published ADOT Node.js layer for us-east-1.
this.adotLayer = lambda.LayerVersion.fromLayerVersionArn(
  this,
  "adot-layer",
  "arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-nodejs-ver-0-19-0:1"
);

// Bundle function/sample.ts with esbuild and wrap the handler with the ADOT wrapper script.
const lambdaFn = new NodejsFunction(this, "sample-fn", {
  runtime: lambda.Runtime.NODEJS_14_X,
  description: props.description,
  handler: "handler",
  entry: "function/sample.ts",
  tracing: lambda.Tracing.ACTIVE,
  environment: {
    AWS_LAMBDA_EXEC_WRAPPER: "/opt/otel-handler",
  },
  layers: [this.adotLayer],
});

Error

The Lambda function is deployed properly. However, when running the function, I receive the following error:

{
    "errorType": "Runtime.ImportModuleError",
    "errorMessage": "Error: Cannot find module '/var/task/package.json'\nRequire stack:\n- /opt/nodejs/node_modules/@opentelemetry/instrumentation/build/src/platform/node/instrumentation.js\n- /opt/nodejs/node_modules/@opentelemetry/instrumentation/build/src/platform/node/index.js\n- /opt/nodejs/node_modules/@opentelemetry/instrumentation/build/src/platform/index.js\n- /opt/nodejs/node_modules/@opentelemetry/instrumentation/build/src/index.js\n- /opt/nodejs/node_modules/@opentelemetry/instrumentation-aws-lambda/build/src/aws-lambda.js\n- /opt/nodejs/node_modules/@opentelemetry/instrumentation-aws-lambda/build/src/index.js\n- /opt/wrapper.js\n- internal/preload",
    "stack": [
        "Runtime.ImportModuleError: Error: Cannot find module '/var/task/package.json'",
        "Require stack:",
        "- /opt/nodejs/node_modules/@opentelemetry/instrumentation/build/src/platform/node/instrumentation.js",
        "- /opt/nodejs/node_modules/@opentelemetry/instrumentation/build/src/platform/node/index.js",
        "- /opt/nodejs/node_modules/@opentelemetry/instrumentation/build/src/platform/index.js",
        "- /opt/nodejs/node_modules/@opentelemetry/instrumentation/build/src/index.js",
        "- /opt/nodejs/node_modules/@opentelemetry/instrumentation-aws-lambda/build/src/aws-lambda.js",
        "- /opt/nodejs/node_modules/@opentelemetry/instrumentation-aws-lambda/build/src/index.js",
        "- /opt/wrapper.js",
        "- internal/preload",
        "    at _loadUserApp (/var/runtime/UserFunction.js:100:13)",
        "    at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)",
        "    at Object.<anonymous> (/var/runtime/index.js:43:30)",
        "    at Module._compile (internal/modules/cjs/loader.js:1068:30)",
        "    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)",
        "    at Module.load (internal/modules/cjs/loader.js:933:32)",
        "    at Function.Module._load (internal/modules/cjs/loader.js:774:14)",
        "    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)",
        "    at internal/main/run_main_module.js:17:47"
    ]
}

⚠️ Update

I added a dummy package.json to the project, and now I receive the same error as in #97.

"TypeError: Cannot redefine property: handler",
        "    at Function.defineProperty (<anonymous>)",
        "    at defineProperty (/opt/nodejs/node_modules/shimmer/index.js:14:10)",
        "    at AwsLambdaInstrumentation.wrap [as _wrap] (/opt/nodejs/node_modules/shimmer/index.js:56:3)",
        "    at InstrumentationNodeModuleFile.patch (/opt/nodejs/node_modules/@opentelemetry/instrumentation-aws-lambda/build/src/aws-lambda.js:67:26)",
        "    at AwsLambdaInstrumentation._onRequire (/opt/nodejs/node_modules/@opentelemetry/instrumentation/build/src/platform/node/instrumentation.js:75:33)",
        "    at /opt/nodejs/node_modules/@opentelemetry/instrumentation/build/src/platform/node/instrumentation.js:102:29",
        "    at Module.Hook._require.Module.require (/opt/nodejs/node_modules/require-in-the-middle/index.js:154:32)",
        "    at Module.Hook._require.Module.require (/opt/nodejs/node_modules/require-in-the-middle/index.js:80:39)",
        "    at Module.Hook._require.Module.require (/opt/nodejs/node_modules/require-in-the-middle/index.js:80:39)",
        "    at Module.Hook._require.Module.require (/opt/nodejs/node_modules/require-in-the-middle/index.js:80:39)"
