
zipkin-aws's People

Contributors

abesto, adriancole, anuraaga, benitovisone, cemo, codefromthecrypt, devinsba, enriquerecarte, grahamlea, jcchavezs, jeqo, jlkweb12, jochemkuijpers, jtanza, lance-st, llinder, mmozum, reta, robyf, serceman, trajano, wavetylor, zeagord


zipkin-aws's Issues

AwsClientTracing: Null pointer exception thrown for AwsAsyncClientBuilders

Stack trace includes:

Caused by: java.lang.NullPointerException: null
	at brave.instrumentation.aws.AwsClientTracing$TracingExecutorFactory.<init>(AwsClientTracing.java:69)
	at brave.instrumentation.aws.AwsClientTracing.build(AwsClientTracing.java:47)

This happens because getClientConfiguration() is not initialized by default in the default builder (so it blows up when calling clientConfiguration.getMaxConnections() in TracingExecutorFactory). Amazon instead uses AwsSyncClientParams to provide defaults for anything missing right before calling the build method:

    @Override
    public final TypeToBuild build() {
        return configureMutableProperties(build(getSyncClientParams()));
    }

Temporary work-around: Have async builders include: .withClientConfiguration(new ClientConfigurationFactory().getConfig());
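A minimal sketch of that work-around on an async builder (SQS is used here purely as an example; any AwsAsyncClientBuilder should work the same way):

```java
import com.amazonaws.ClientConfigurationFactory;
import com.amazonaws.services.sqs.AmazonSQSAsyncClientBuilder;

// Supply the default ClientConfiguration explicitly so that
// AwsClientTracing's TracingExecutorFactory doesn't see a null config.
AmazonSQSAsyncClientBuilder builder = AmazonSQSAsyncClientBuilder.standard()
    .withClientConfiguration(new ClientConfigurationFactory().getConfig());
```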

Add span reporter for SNS

Supporting SNS for reporting allows us to subscribe any number of SQS queues, in any region, to our span stream. This would also allow a fanout to be used for any real-time analytics purposes that might arise.

This will include an SNS span reporter.

Kinesis collector does not use Region in Docker

Sorry if I've made this issue in the wrong format.
I am running Zipkin with the Kinesis collector.
My Kinesis stream is in us-east-2, but the Kinesis collector always tries to connect to us-east-1.
When the Kinesis stream is in us-east-1, everything works fine.
Here is my docker-compose.yml:

version: '2'
services:
  zipkin:
    image: openzipkin/zipkin-aws
    container_name: zipkin-aws
    ports:
      - 9411:9411
    networks:
      - docker-net
    environment:
      - STORAGE_TYPE=elasticsearch
      - ES_HOSTS=https://search-*****************.us-east-2.es.amazonaws.com
      - KINESIS_APP_NAME=zipkin-kinesis
      - KINESIS_STREAM_NAME=kinesis-zipkin
      - AWS_ACCESS_KEY_ID=keyid
      - AWS_SECRET_ACCESS_KEY=secretkeyid
      - KINESIS_AWS_STS_REGION=us-east-2
      - AWS_DEFAULT_REGION=us-east-2
      - AWS_CBOR_DISABLE=1
networks:
  docker-net:
    driver: bridge


My region is us-east-2, but the Kinesis collector always tries to connect to us-east-1:

zipkin-aws | 2018-07-20 10:34:34.799 INFO 5 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
zipkin-aws | 2018-07-20 10:34:35.258 INFO 5 --- [ main] o.s.b.w.e.u.UndertowServletWebServer : Undertow started on port(s) 9411 (http) with context path ''
zipkin-aws | 2018-07-20 10:34:35.277 INFO 5 --- [ main] z.s.ZipkinServer : Started ZipkinServer in 19.063 seconds (JVM running for 20.253)
zipkin-aws | 2018-07-20 10:34:35.468 INFO 5 --- [inesis-zipkin-0] c.a.s.k.c.l.w.Worker : Syncing Kinesis shard info
zipkin-aws | 2018-07-20 10:34:35.709 ERROR 5 --- [inesis-zipkin-0] c.a.s.k.c.l.w.ShardSyncTask : Caught exception while sync'ing Kinesis shards and leases
zipkin-aws |
zipkin-aws | com.amazonaws.services.kinesis.model.ResourceNotFoundException: Stream kinesis-zipkin under account not found. (Service: AmazonKinesis; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: ********************************)
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1640) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:2388) ~[aws-java-sdk-kinesis-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2364) ~[aws-java-sdk-kinesis-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.AmazonKinesisClient.executeListShards(AmazonKinesisClient.java:1337) ~[aws-java-sdk-kinesis-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.AmazonKinesisClient.listShards(AmazonKinesisClient.java:1312) ~[aws-java-sdk-kinesis-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.proxies.KinesisProxy.listShards(KinesisProxy.java:304) ~[amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.proxies.KinesisProxy.getShardList(KinesisProxy.java:365) ~[amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncer.getShardList(ShardSyncer.java:319) ~[amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncer.syncShardLeases(ShardSyncer.java:121) ~[amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncer.checkAndCreateLeasesForNewShards(ShardSyncer.java:90) ~[amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncTask.call(ShardSyncTask.java:71) [amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:49) [amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker.initialize(Worker.java:635) [amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker.run(Worker.java:566) [amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
zipkin-aws | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
zipkin-aws | at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]

Make sure this isn't java biased

One of the more important things about Zipkin is that it is an architecture-level abstraction rather than a framework. We should be careful to review that how we encode messages (and headers) is natural for any language.

For example, in Kafka we don't even encode the representation type; rather, we peek at the bytes:
https://github.com/openzipkin/zipkin/tree/master/zipkin-collector/kafka#encoding-spans-into-kafka-messages While this is driven by limitations in Kafka's metadata, peeking at bytes is somewhat unusual; in HTTP, for example, we look at media type headers to tell which codec to use.

An anti-pattern would be encoding java class names or something else hard to describe in pseudocode.

I'm not saying we are doing anything wrong here, just calling out something that may not be very explicit.

Food for thought. cc @eirslett @basvanbeek @mjbryant @jcarres-mdsol @rogeralsing @abesto

Properly address encoding and limits in SQS

If you look deeply into the code, you'll notice Amazon's SQS client uses Base64 to encode messages, similar to how we do in Scribe. Even JSON has encoding issues; for example, there are constraints on which Unicode characters are permitted. After all that, there's either URL or POST encoding!

http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-limits.html

Because Zipkin doesn't put constraints on UTF-8, a user can create a message that cannot be published (without encoding). One way to solve this is to permit only thrift and always Base64-encode (fine, albeit inefficient). Another is to blindly try JSON and assume users won't use restricted Unicode characters.

Either way, we have to reflect the (Base64) encoding overhead in Sender.messageSizeInBytes, and note in the docs any constraints beyond that and what people should expect (even if the answer is just to watch for dropped messages).
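Since the Base64 expansion is deterministic (every 3 input bytes become 4 output characters, padded), the overhead can be accounted for exactly. A sketch, with my own class and method names:

```java
public class SqsMessageOverhead {
  // Size of a payload after Base64 encoding: 4 output bytes
  // per (padded) group of 3 input bytes.
  static int base64SizeInBytes(int rawSizeInBytes) {
    return ((rawSizeInBytes + 2) / 3) * 4;
  }
}
```

By this arithmetic, a sender honoring SQS's 256KiB request limit can accept at most 192KiB of raw span bytes before encoding.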

Setup publishing

I'll take this on. Basically, we need publishing set up so that this can eventually go to Maven Central.

AWS Propagation Incompatible with Brave 5.6

Changes in openzipkin/brave#846 made HexCodec.writeHexByte package-private, so it is no longer accessible from AWSPropagation. That PR also added more efficient access to string span/trace id values, which should probably be used here to benefit from the cached id strings.
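The lost helper is small enough to inline; something like this sketch (my own code, not brave's implementation) writes a 64-bit id as the 16 lowercase hex characters AWSPropagation needs:

```java
public class LowerHex {
  // Writes a 64-bit id as 16 zero-padded lowercase hex characters.
  static String toLowerHex(long v) {
    char[] out = new char[16];
    for (int i = 15; i >= 0; i--, v >>>= 4) {
      out[i] = "0123456789abcdef".charAt((int) (v & 0xf));
    }
    return new String(out);
  }
}
```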

Kinesis Sender not compatible with latest zipkin2 release

I get this exception when trying to start up with all the latest deps:

Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [zipkin2.reporter.Sender]: Factory method 'kinesisSender' threw exception; nested exception is java.lang.NoClassDefFoundError: zipkin2/reporter/internal/BaseCall

Doesn't work with AWS SQS FIFO queues

Zipkin's SQSSender doesn't work with an AWS SQS FIFO queue.

It throws the following exception in AsyncReporter at this.sender.sendSpans(nextMessage).execute():

com.amazonaws.services.sqs.model.AmazonSQSException: The request must contain the parameter MessageGroupId. (Service: AmazonSQS; Status Code: 400; Error Code: MissingParameter; Request ID: 82b6a6cc-c2e8-5905-b27e-f45be9c97ea0)
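A hypothetical sketch of the fix, not the current SQSSender code: FIFO queues reject sends that lack a MessageGroupId, so the sender would need to set one. The queue URL and body below are placeholders.

```java
import com.amazonaws.services.sqs.model.SendMessageRequest;

// Placeholder payload; the real sender would supply encoded spans.
String encodedSpans = "[]";
SendMessageRequest request = new SendMessageRequest()
    .withQueueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/zipkin.fifo")
    .withMessageBody(encodedSpans)
    .withMessageGroupId("zipkin"); // a single group keeps ordering simple
```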

SDK V2 - Support intermediate errors

In the V1 SDK instrumentation we extracted errors from the intermediate HTTP requests so that we could see errors when retries occurred. This was not immediately obvious when building the V2 instrumentation, so we should add it for feature parity.

Release 1.0 Todo List

With the SDK instrumentation finished I think this is a good time to summarize what we feel should be included or addressed before we release a version 1.0 of this library.

Existing issues

  • #60 Add an XRay ErrorHandler impl, and associated storage code
  • #73 Add better hooks to XRay storage to allow RPC instrumentation to map nicely
  • #108 XRay configured sampler
  • #116 Intermediate error handling on V2 SDK Instrumentation

Possible missing features

The following are features I can come up with by reading through the service list

  • FinishedSpanHandler for tagging spans with host/container metadata
  • TraceContext.Extractor for Lambda requests from API Gateway
  • Service specific tags in SDK instrumentation for Dynamo and SQS (for XRay support)
  • #45 DynamoDB storage support, this needs a champion and I don't think we hit rule of 3 on it either
  • SNS, SQS, and Kinesis propagation handlers (SmartThings has built the first two)
  • Kinesis Streaming Dependencies job (we attic-ed the first one)
  • MQTT propagation that supports AWS IoT
  • A tool for extracting XRay data to add into Zipkin (API Gateway, Lambda)

NullPointers in aws-java-sdk-core in s3 clients

The TracingRequestHandler in brave-instrumentation/aws-java-sdk-core can throw an NPE because an application span is not present in either the beforeAttempt or afterAttempt callback.

The root cause is that the S3 client executes a stealth HEAD request for many operations, like CreateBucket and doesBucketExistV2, to do some preliminary checking and caching. Unfortunately, these HEAD requests do not invoke beforeExecution, so an application span is never created for them, but they do invoke beforeAttempt.

This means the overall aws-sdk span isn't available, which results in the NPE.

A couple of options:

  • Capture the HEAD request and correctly map it to its parent. I'm not sure this is possible with what the callback provides.
  • Ignore requests that don't have the span from beforeExecution.
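The second option can be sketched as a null guard (SPAN_KEY and the surrounding handler are illustrative, not the actual TracingRequestHandler internals): skip attempts whose request never went through beforeExecution.

```java
import com.amazonaws.handlers.HandlerBeforeAttemptContext;
import brave.Span;

@Override
public void beforeAttempt(HandlerBeforeAttemptContext context) {
  // SPAN_KEY is a hypothetical HandlerContextKey under which the
  // application span would have been stored in beforeExecution.
  Span applicationSpan = context.getRequest().getHandlerContext(SPAN_KEY);
  if (applicationSpan == null) return; // stealth HEAD request: leave untraced
  // ... otherwise start the attempt span as before ...
}
```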

Sending "unknown" as segment name to X-Ray results in incorrect data displayed in the X-Ray UI

As part of #59, a check was added that sends "unknown" as the segment name to X-Ray when the received span doesn't have a remote service name. This solves the issue that X-Ray segments must have a name, but it creates issues on the X-Ray side. For example:
[screenshot: selection_007]
In this case the "unknown" segment belongs to the security-gateway-aws service, not to the test-app, and in the service map "unknown" services appear as dependencies:
[screenshot: selection_008]

Make edge-case behavior nice for elasticsearch service on AWS

@devinsba noticed the error handling wasn't right when his account was missing permissions for Elasticsearch. We should note the needed IAM permissions in the README (well, create a README first and note them there). Let's also add an error nicer than an NPE when permission is missing.

There was also a report of a hang on a newly provisioned cluster. This might be a smell of a missing socket or other timeout. Something to look into.

cc @sethp-jive

Minimum language level for reporters

In Brave, the minimum language level for core code is 1.6 as there are a number of legacy apps and/or agents that cannot move ahead of that. This is also the case in zipkin-reporter. The minimum level for collectors is java 7, as custom servers needn't be so low.

There are libraries with a higher language level, such as OkHttp (Java 7), and I'm not sure of the minimum level for the AWS SDK (I haven't looked).

Whatever we decide here is important and should be noted in the README.

Don't use batch apis when sending to SQS

Batch APIs are useful in SQS when you have multiple messages to send at one time. The zipkin.reporter.Sender is already designed for batch operations: for example, given a timeout and a threshold, it will collect as many messages as possible to meet them.

After looking carefully, I noticed that the current AwsBufferedSqsSender is not only redundant to this, it also implies higher overhead (lower signal). For example, regardless of whether you use batches, an API request cannot be larger than 256KiB. This is a lower figure than most transports, so collectors are more than capable of accepting a single list of 256KiB of spans.

Rather than confuse configuration with a second tier of batching (which won't be effective anyway when using the AsyncReporter), we should revert to single-message sends (with up to 256KiB of spans).
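The first-tier batching lives in the reporter, not the sender; roughly, using zipkin-reporter's AsyncReporter API (the SQSSender.create call is an assumption about the sender's factory method):

```java
import java.util.concurrent.TimeUnit;
import zipkin2.Span;
import zipkin2.reporter.AsyncReporter;
import zipkin2.reporter.Sender;

// AsyncReporter buffers spans and flushes one message once it reaches the
// sender's message size limit or the timeout elapses, so the SQS sender
// itself can stay single-message.
Sender sender = /* e.g. SQSSender.create(queueUrl) */ null;
AsyncReporter<Span> reporter = AsyncReporter.builder(sender)
    .messageTimeout(1, TimeUnit.SECONDS) // flush at least this often
    .build(); // message size defaults to sender.messageMaxBytes()
```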

SQS Reporter V2 SDK

When projects are already importing the V2 AWS SDK for other things, it would be nice to have the option of using the V2 SDK for the SQS reporter as well. Besides not inflating the project's dependencies, importing V1 and V2 in the same project reportedly produces conflicts.

Amazon Kinesis

Amazon Kinesis is a service similar to Kafka. It suggests lower message costs and a larger amount of data per message (1MB, just like Kafka). It persists all data for a minimum of 24 hours, which means you pay certain ongoing costs to keep the stream alive.

I've not heard anyone request this specifically, just noting some things as we go along.

Support for AWS Xray SDK

Dear developers,

I am currently making extensive use of zipkin-aws, Brave, and Spring Cloud Sleuth together to send traces to AWS X-Ray. I found that if I include the X-Ray SDK and zipkin-aws together, their tracing contexts are separate and each independently records the trace to X-Ray.

Is there any roadmap for integration between zipkin-aws and the AWS X-Ray SDK in the near future?

Regards,
Alex Wong

Feature Request: DynamoDB support

From @cemo on May 26, 2017 10:59

I would like to see DynamoDB support for Zipkin. I usually let AWS services store data and handle the rest of the services myself. DynamoDB seems like a good, cheap alternative for Zipkin. Is it possible to support it as well?

Copied from original issue: openzipkin/zipkin#1599

XRay Remote Service Name is always "Remote"

From @cemo on October 18, 2017 21:20

I am in the process of customizing brave and putting into the production system but I came across a problem.

Despite creating my interceptor with hardcoded labels, it always displays "Remote" in the console. I haven't found time to check it, but there seems to be a bug there. I may give it a try tomorrow to find the culprit.

[screenshot]

Copied from original issue: openzipkin/brave#524

Auto-configure not signing Zipkin requests; blocked when storing in AWS Elasticsearch

I have two apps traced by Zipkin, with data stored in AWS Elasticsearch. If I grant full access, or allow unsigned requests for IAM users, Zipkin data is stored in AWS ES. But if I restrict access to a specific IAM user and try to save to ES, I am unable to store data. I was using the AWS Elasticsearch autoconfigure module, which takes care of signing requests from Zipkin to ES, but it is failing. However, if I make a manually signed request to ES, it works. Is there an issue with the auto-signer? Please help.

Take Spans From AWS Kinesis And Put in AWS XRAY

Hi,
This issue is more of a question.

My app sends data to Kinesis with the zipkin-aws Kinesis sender, and I'm running the Kinesis collector with Elasticsearch storage as an independent process. Now I want a hybrid: run another Kinesis collector with X-Ray storage.

The reason for the hybrid is that I still want to keep using the Zipkin UI for traces, and the AWS X-Ray console for other purposes.

I don't see a Kinesis collector with X-Ray storage. Does it exist? If so, how can I set it up?

SQS Collector fails to delete corrupt messages

When the SQS collector encounters a message that fails to deserialize, it should log and delete the offending message. If this doesn't happen, the bad message will cycle back through the queue and continue to fail.

Stack trace for reference:
java.lang.RuntimeException: Cannot decode spans
at zipkin.internal.Collector.doError(Collector.java:144) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.internal.Collector.errorReading(Collector.java:119) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.internal.Collector.errorReading(Collector.java:114) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.internal.Collector.acceptSpans(Collector.java:59) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.internal.V2Collector.acceptSpans(V2Collector.java:43) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.collector.Collector.acceptSpans(Collector.java:112) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.collector.sqs.SQSSpanProcessor.process(SQSSpanProcessor.java:109) [zipkin-collector-sqs-0.8.7.jar!/:na]
at zipkin.collector.sqs.SQSSpanProcessor.run(SQSSpanProcessor.java:75) [zipkin-collector-sqs-0.8.7.jar!/:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_152]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_152]
Caused by: java.lang.IllegalArgumentException: Empty endpoint at $[3].remoteEndpoint reading List from json
at zipkin2.internal.JsonCodec.exceptionReading(JsonCodec.java:229) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin2.internal.JsonCodec.readList(JsonCodec.java:142) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin2.codec.SpanBytesDecoder$1.decodeList(SpanBytesDecoder.java:38) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin.internal.V2Collector.decodeList(V2Collector.java:48) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.internal.V2Collector.decodeList(V2Collector.java:29) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.internal.Collector.acceptSpans(Collector.java:57) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
... 9 common frames omitted
Caused by: java.lang.IllegalArgumentException: Empty endpoint at $[3].remoteEndpoint
at zipkin2.internal.V2SpanReader$1.fromJson(V2SpanReader.java:134) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin2.internal.V2SpanReader$1.fromJson(V2SpanReader.java:109) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin2.internal.V2SpanReader.fromJson(V2SpanReader.java:59) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin2.internal.V2SpanReader.fromJson(V2SpanReader.java:22) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin2.internal.JsonCodec.readList(JsonCodec.java:138) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
... 13 common frames omitted
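A sketch of the desired behavior (names are illustrative, not the actual SQSSpanProcessor fields): after a decode failure, log and delete so the message doesn't recycle through the queue.

```java
try {
  // Hypothetical decode/accept call standing in for the collector's real one.
  collector.acceptSpans(message.getBody().getBytes(UTF_8), decoder, callback);
} catch (RuntimeException e) {
  // An undecodable message will never succeed on retry, so leaving it in
  // the queue just repeats the failure until the retention period expires.
  logger.warn("deleting undecodable message", e);
  sqs.deleteMessage(queueUrl, message.getReceiptHandle());
}
```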

Strange death in circleci

Kicked https://circleci.com/gh/openzipkin/zipkin-aws/567 due to:

[ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/circleci/project/collector-sqs && /usr/lib/jvm/java-9-openjdk-amd64/bin/java -jar /home/circleci/project/collector-sqs/target/surefire/surefirebooter15158461991753914067.jar /home/circleci/project/collector-sqs/target/surefire 2018-07-31T23-48-15_167-jvmRun1 surefire9318205394990007620tmp

This was the image

Status: Downloaded newer image for circleci/openjdk:9-jdk
  using image circleci/openjdk@sha256:c53ae15adb4c48727b6b7ea1763e2d95ed6414f90

Oddly, that image doesn't show up in CircleCI, at least as far as I can tell: https://hub.docker.com/r/circleci/openjdk/tags/

AWS SQS X-ray Trace propagation

Dear developers,

I am currently using Spring Cloud Sleuth and Spring AWS Messaging to send and receive messages from AWS SQS.

I find that the X-Ray trace cannot be propagated across the SQS message producer and consumer, so it appears as two independent traces.

Is there any feature-enhancement roadmap to support trace propagation across SQS, and even SNS?

Regards,
Alex Wong

Support gRPC in XRay Encoder

For those of us who use gRPC, it would be nice to have the encoder support spans from the brave-grpc instrumentation.

I wonder if there is some abstraction we can add so we don't have to keep manually implementing these things as new RPC mechanisms are instrumented in Brave; obviously HTTP is fairly well understood, and its tags are standardized.

Decide on credentials strategy for collectors

While trying to get docker-zipkin-aws updated with both collectors, I ran into a problem: each has its own configuration for its credentials provider. Unfortunately, this causes a problem with Spring at startup, because both are able to create a bean of the same type and trample on each other.

Add span transport for SQS

In order to use AWS-managed resources for as much of the process as possible, it would be nice to support pulling spans off of an SQS queue.

This will include a span reporter and a span collector for SQS.

[Feature Request] Instrument Java AWS-SDK clients

Originally requested by @pims as openzipkin/brave#473

Hi there,

I’ve been trying to instrument the Amazon S3 Client from the Java AWS-SDK.
New versions of the SDK offer hooks via the RequestHandler2 abstract class which implements the following interface:

public interface IRequestHandler2 {
    AmazonWebServiceRequest beforeExecution(AmazonWebServiceRequest request);
    AmazonWebServiceRequest beforeMarshalling(AmazonWebServiceRequest request);
    void beforeRequest(Request<?> request);
    HttpResponse beforeUnmarshalling(Request<?> request, HttpResponse httpResponse);
    void afterResponse(Request<?> request, Response<?> response);
    void afterError(Request<?> request, Response<?> response, Exception e);
}

I’ve tried something along those lines, but couldn't get it to work properly. @adriancole suggested raising the issue here.

public class ZipkinRequestHandler extends RequestHandler2 {
    final private Tracer tracer;
    final private CurrentTraceContext currentTraceContext;
    final private HttpClientHandler<Request, Response> handler;
    final private TraceContext.Injector<Request> injector;
    final private TraceContext.Extractor<Request> extractor;

    public static ZipkinRequestHandler create(final HttpTracing httpTracing,
                                              final HttpClientAdapter<Request, Response> adapter) {
        return new ZipkinRequestHandler(
                httpTracing.tracing().tracer(),
                httpTracing.tracing().currentTraceContext(),
                HttpClientHandler.create(httpTracing, adapter),
                httpTracing.tracing().propagation().injector(new Propagation.Setter<Request, String>() {
                    @Override
                    public void put(Request carrier, String key, String value) {
                        carrier.addHeader(key, value);
                    }
                }),
                httpTracing.tracing().propagation().extractor(new Propagation.Getter<Request, String>() {
                    @Override
                    public String get(Request carrier, String key) {
                        final Map<String, String> headers = carrier.getHeaders();
                        return headers.get(key);
                    }
                })

        );
    }

    private ZipkinRequestHandler(Tracer tracer, CurrentTraceContext ctc, HttpClientHandler<Request, Response> handler,
                                 TraceContext.Injector<Request> injector, TraceContext.Extractor<Request> extractor) {
        this.tracer = tracer;
        this.currentTraceContext = ctc;
        this.handler = handler;
        this.injector = injector;
        this.extractor = extractor;

    }

    @Override
    public void beforeRequest(Request<?> request) {
        TraceContext parent = currentTraceContext.get();
        try(CurrentTraceContext.Scope scope = currentTraceContext.newScope(parent)) {
            Span span = handler.handleSend(injector,request);
            span.annotate("start" + LocalDateTime.now().toString());
            System.out.println(LocalDateTime.now() + " beforeRequest " + span.toString());
        }


    }

    @Override
    public void afterResponse(Request<?> request, Response<?> response) {
        final Span span = tracer.joinSpan(extractor.extract(request).context());
        span.annotate("end-"+ LocalDateTime.now().toString());
        handler.handleReceive(response, null, span);
        System.out.println(LocalDateTime.now() + " afterResponse " + span.toString());
    }

    @Override
    public void afterError(Request<?> request, Response<?> response, Exception ex) {
        final Span span = tracer.joinSpan(extractor.extract(request).context());
        handler.handleReceive(null, ex, span);
        System.out.println("afterError " + span.toString());
    }
}

Integration tests aren't running on CI

As part of building PRs and master, we should be running all of our tests. I'm not sure whether this worked in the past, but we should get it back to working.

This probably only involves enabling and configuring the failsafe plugin
