
The Amazon Disco Toolkit

A suite of tools, including a framework for creating Java Agents, for aspect-oriented tooling in distributed systems.

Disco is a sort-of acronym of Distributed Systems Comprehension and may be styled as DiSCo, Disco or disco. We really don't mind.

Right now, although it is intended eventually to encompass other tooling in a similar space, Disco consists of a library/framework/toolkit for creating Java Agents, aimed at operational tooling for distributed systems in service-oriented architectures.

Pre-release software

🛑 Disco is pre-release software. As an author of agents or plugins, you may be subject to some churn or rework while we finalize the APIs and features.

Please note that while on versions below semantic version 1.0, Disco is pre-release or preview software. Some APIs may be subject to change or removal until a stable v1.0 release occurs.

What does it do?

The Disco Java Agent toolkit provides two key runtime primitives, as an extension on top of Java (or other JVM languages such as Kotlin, though Java is currently the only language with robust test coverage):

  1. An in-process Event Bus, which advertises moments in the lifetime of a transaction or activity in service-oriented software. For example, a service receiving a request via HTTP will begin a timeline of such events at that time, concluding when it has finally delivered a response to its caller.
  2. A Transactionally Scoped data dictionary called the TransactionContext. By Transactionally Scoped we mean that this data store will be created at the beginning of the activity lifetime as described above, and survive until it concludes. Extensions are added to the Java runtime such that this data store will be consistent during thread handoffs.

These are explored in more detail below.

In the future, more languages will be supported.

Event Bus

Consider a straw-man API in a service-oriented system. Let's say we have a service CityInfoService, with an API "getInfoForZipCode". It delegates to two downstream services, WeatherService and TrafficService, whose responses it aggregates into a response containing weather and traffic information for a given zip code. Since these two services are orthogonal and not interdependent, it calls them in parallel.

The timeline of a call to getInfoForZipCode might be as follows:

  1. At time T in Thread 0, a Servlet.service() call is made when the getInfoForZipCode API is invoked.
  2. At time T+1 in Thread 0, the service submits two tasks to a pooled ExecutorService, to call the two downstream services.
  3. At time T+2 in Thread 1, a call to WeatherService is made.
  4. At time T+2 in Thread 2, a call to TrafficService is made.
  5. At time T+3 in Thread 1, the response from WeatherService is received.
  6. At time T+4 in Thread 2, the response from TrafficService is received.
  7. At time T+5 in Thread 0, Threads 1 and 2 are join()'d.
  8. Finally, at time T+6 in Thread 0, a response is given to the service's caller.
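
To make the handoffs concrete, here is a minimal, self-contained Java sketch of this call pattern. The class and method bodies are illustrative stand-ins, not Disco code; no Disco API is involved, since the agent observes this pattern from the outside.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CityInfoService {
    private static final ExecutorService POOL = Executors.newFixedThreadPool(2);

    public static String getInfoForZipCode(String zipCode) throws Exception {
        // T+1, Thread 0: submit two tasks to a pooled ExecutorService
        Future<String> weather = POOL.submit(() -> callWeatherService(zipCode)); // T+2..T+3, Thread 1
        Future<String> traffic = POOL.submit(() -> callTrafficService(zipCode)); // T+2..T+4, Thread 2

        // T+5, Thread 0: wait for both downstream responses
        String response = weather.get() + " | " + traffic.get();
        return response; // T+6, Thread 0: respond to the caller
    }

    // Hypothetical stand-ins for the two downstream calls
    private static String callWeatherService(String zip) { return "sunny in " + zip; }
    private static String callTrafficService(String zip) { return "light traffic in " + zip; }

    public static void main(String[] args) throws Exception {
        System.out.println(getInfoForZipCode("98109"));
        POOL.shutdown();
    }
}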

The disco event bus issues events at all the key moments on this timeline. At the time of each event, the TransactionContext (see below) is available to pass data between the points at which each event is received, and throughout the timeline the Transaction ID (again, see below) is consistent and unique. If two calls to getInfoForZipCode were happening concurrently, each with the same sequence of events, those events would agree on the content of the TransactionContext within each activity, with no crosstalk or confusion between the two.

Events are published for the upstream request and response, the downstream requests and responses and each time a Thread is forked or joined.

Transaction Context

Consider when a service activity might call 3 downstream services, parallelized in 3 threads from a thread pool. It may do this, for example, by dispatching work to an ExecutorService.

One problem we have observed in tooling such as AWS X-Ray, due to deficiencies in Java, is that these worker threads have no clear concept of 'caller' or 'parent'. Using Disco, the Java runtime is extended to ensure that the concept of caller/parent is passed from thread to thread at the time of thread handoff (e.g. when calling Thread.start(), or someExecutor.submit(), or when using a Java 8 parallel stream), by the 'forking' thread giving the 'forked' thread access to its Transaction Context data store.

By default, upon creation, the Transaction Context always contains a 96-bit random number, formatted as a hexadecimal string, as a Transaction ID. This can be overridden by plugins or client code if desired, and any other arbitrary data may also be added at any point in the lifetime of the service activity. Once data is placed in the Transaction Context, it is available across the activity's family of threads thereafter.
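
As a rough sketch of what this looks like to client code, the following uses the reflect API's TransactionContext. The putMetadata/getMetadata method names here are assumptions about the API surface; verify them against disco-java-agent-api before relying on them.

import software.amazon.disco.agent.reflect.concurrent.TransactionContext;

public class HandoffExample {
    public static void main(String[] args) throws InterruptedException {
        // Place arbitrary data in the Transaction Context on the parent thread
        // (putMetadata is an assumed method name; see disco-java-agent-api)
        TransactionContext.putMetadata("customerId", "12345");

        // With the Disco agent installed, Thread.start() is an intercepted
        // handoff point, so the child thread inherits the parent's data store
        Thread worker = new Thread(() ->
                System.out.println("child sees: " + TransactionContext.getMetadata("customerId")));
        worker.start();
        worker.join();
    }
}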

What kind of uses does an Agent built on disco have?

Let's start with a simple example of a logger. In our CityInfoService above, it may be challenging to produce really good 'joined-up' logs due to the concurrent behavior of the service. When logging the calls to WeatherService and TrafficService, the threads appear 'orphaned'. If you've ever tried to make sense of logs by looking at 'nearby timestamps' instead of having unambiguous IDs available, this will be a familiar problem.

So the simplest example is to create a Listener on the disco Event Bus which logs every event, along with the ID from the Transaction Context. Then, without taking any special action in the service's business logic itself, all these lines of log can be tied together via this ID.
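
A minimal sketch of such a listener follows. The Listener and EventBus types live in the public API, but the exact method names shown here (getPriority, listen, addListener, and TransactionContext.get for the Transaction ID) are assumptions; check disco-java-agent-api for the real signatures.

import software.amazon.disco.agent.event.Event;
import software.amazon.disco.agent.event.Listener;
import software.amazon.disco.agent.reflect.concurrent.TransactionContext;
import software.amazon.disco.agent.reflect.event.EventBus;

public class LoggingListener implements Listener {
    @Override
    public int getPriority() {
        return 0; // run at neutral priority relative to other listeners
    }

    @Override
    public void listen(Event e) {
        // Tag every event with the Transaction ID so concurrent activities
        // can be told apart in the logs
        System.out.println("[txId=" + TransactionContext.get() + "] " + e);
    }

    public static void main(String[] args) {
        // Register once, early in startup; no further integration is needed
        EventBus.addListener(new LoggingListener());
        // ... start the service ...
    }
}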

Another common requirement is to pass metadata (perhaps via HTTP headers) from service to service, when creating tracing or routing tools in service-oriented architectures. Incoming request events and outgoing request events provide a facility to inspect and manipulate HTTP headers. The AWS X-Ray agent uses this feature to propagate its segments across service call hops, without user intervention.

Creating a Java Agent

There are two basic ways to create a Java Agent using Disco: as a self-contained artifact, or using a plugin-based system to allow multi-tenancy.

See the subproject disco-java-agent-example for a simple example of building a self-contained agent, along with the associated tests for it in disco-java-agent-example-test.

Alternatively, the plugin-based approach may be seen in the disco-java-agent-web-plugin project, which uses the canonical agent found in the disco-java-agent/disco-java-agent package as a substrate for plugin discovery.

How to install a Java Agent once created

As can be seen in the build.gradle.kts files of several subprojects (e.g. the tests for disco-java-agent-example), a single argument needs to be passed to the Java process at start-up. See AgentConfigParser.java in the Core libraries for details on the command line arguments for agents, and see PluginDiscovery.java for details on how a substrate agent may load plugins.
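
Concretely, starting a service with the canonical agent looks like the following; the paths are placeholders, and the pluginPath argument is the one used throughout the examples in this repository:

java -javaagent:/path/to/disco-java-agent.jar=pluginPath=/path/to/disco-plugins -jar yourService.jar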

Using Java Agents on managed runtimes such as Lambda

One complexity on some managed runtimes is that the user does not have complete authority over the arguments passed to Java. To remedy this, it is possible to 'inject' a Disco agent at runtime, using a tiny (1 or 2 lines) amount of boilerplate code. An example of this is given in the disco-java-agent-example-inject-test project. If using this method, care must be taken to perform the injection as early as possible in the target software's lifetime (the first line of main() is ideal; as soon as possible after that is the next best). Disco works by extending the Java runtime via aspect-oriented bytecode manipulation, and some of these treatments cannot work if the instrumented class has already been used.
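
As a sketch of the injection boilerplate: the Injector class name and loadAgent signature below are assumptions drawn from the inject API's stated purpose, so see disco-java-agent-example-inject-test for the canonical usage.

import software.amazon.disco.agent.inject.Injector;

public class Main {
    public static void main(String[] args) {
        // Inject first, before any to-be-instrumented class (Thread,
        // ExecutorService, ...) has been loaded and used.
        // Class and method names here are assumed; verify against the inject API.
        Injector.loadAgent("/path/to/disco-java-agent.jar",
                "pluginPath=/path/to/disco-plugins");

        // ... the rest of the application starts here ...
    }
}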

License

This library is licensed under the Apache 2.0 Software License.

Building Disco

The simplest way to start is to run the following command in the root directory:

./gradlew build

This will build all the code, and run all the tests (functional tests and integration tests).

The final tests executed are those for the 'thread handoff' TransactionContext propagation mentioned elsewhere in this README, and they deserve a mention. Some of these tests are naturally 'flaky'. This is because when we request, for example, someCollection.parallelStream() and then perform work, the test is not in control of whether the work will actually be parallel or serial. The Java runtime is in charge and is not easily manipulated. If the Java runtime elects not to parallelize, our test becomes meaningless - we cannot assert that disco's thread hand-off behavior is correct if no thread hand-off occurred at all. So these tests are designed to retry. To be clear, they don't stubbornly "retry until they succeed"; they retry until preconditions which they do not control are met.
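
The precondition in question can be probed directly. This standalone snippet (not Disco test code) shows how a run can legitimately end up serial, which is exactly the case in which the tests retry:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.IntStream;

public class ParallelismProbe {
    public static void main(String[] args) {
        Set<String> threads = ConcurrentHashMap.newKeySet();
        IntStream.range(0, 10_000).parallel()
                 .forEach(i -> threads.add(Thread.currentThread().getName()));
        // If only one thread participated, no handoff occurred, and a
        // TransactionContext propagation assertion would be meaningless
        System.out.println("threads used: " + threads.size());
    }
}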

Unfortunately this can mean that they sometimes fail and require restarting. We don't know a better way, sorry.

Including Disco as a dependency in your product

Disco is available in Maven Central. A Bill of Materials (BOM) package is vended to make depending on multiple Disco packages easier.

Using Maven coordinates

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>software.amazon.disco</groupId>
            <artifactId>disco-toolkit-bom</artifactId>
            <version>0.13.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
      <groupId>software.amazon.disco</groupId>
      <artifactId>disco-java-agent-api</artifactId>
    </dependency>
    <!-- Other Disco dependencies -->
</dependencies>

Using Gradle's Groovy DSL

implementation platform('software.amazon.disco:disco-toolkit-bom:0.13.0')
implementation 'software.amazon.disco:disco-java-agent-api'
// Other disco dependencies

Using Gradle's Kotlin DSL

implementation(platform("software.amazon.disco:disco-toolkit-bom:0.13.0"))
implementation("software.amazon.disco:disco-java-agent-api")
// Other disco dependencies

Troubleshooting

If you receive the following error

Could not determine the dependencies of task ':disco-java-agent-example:shadowJar'.
> Could not resolve all files for configuration ':disco-java-agent-example:runtimeClasspath'.
   > path may not be null or empty string. path='null'

Please ensure that the default Java binary that is run is Java 8. As a workaround, you may set JAVA_HOME to point to another version of Java.

Example: export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_202.jdk/Contents/Home

What's with all the subprojects?

Disco is quite componentized, and in addition there are quite a few projects which serve as examples and tests.

Browse through the READMEs included with the subprojects to make sense of it all, but in summary there are a few layers and families of projects in here:

  1. Public API contained in disco-java-agent-api
  2. Special API for implementors of interception plugins, in disco-java-agent-plugin-api
  3. Core library, for consumption by plugin authors and agent authors (not client code) in disco-java-agent-core
  4. Canonical Pluggable agent, capable of plugin discovery, in disco-java-agent/disco-java-agent
  5. A facility to 'Inject' a Disco Agent into managed runtimes like AWS Lambda
  6. A Plugin to support Servlets and Apache HTTP clients, in disco-java-agent-web-plugin
  7. A Plugin to support SQL connections & queries using JDBC, in disco-java-agent-sql-plugin
  8. A Plugin to support requests made with the AWS SDK for Java, in disco-java-agent-aws-plugin
  9. Example code in anything with '-example' in the project name.
  10. Tests in anything with '-test' in the project name.

Subprojects themselves also encapsulate their own tests.


disco's Issues

Slow application start with xray agent

Crossposting this issue here as discussed with @willarmiros (although admittedly I'm a bit late...):
aws/aws-xray-java-agent#70

I recently added aws-xray-java-agent 2.7.1 to an app running Spring boot 2.3.5.

Before I added the agent, startup typically took around 3-4 seconds.
Started Application in 3.927 seconds (JVM running for 4.282)

Using aws-xray-java-agent 2.7.1, the startup of my application takes anywhere between 5-10 times longer.

I basically put the disco jars in place and added this line to my build.gradle:

bootRun {
    jvmArgs = ["-javaagent:disco/disco-java-agent.jar=pluginPath=disco/disco-plugins"]
}
Started Application in 21.268 seconds (JVM running for 23.917)

Does this slowdown seem reasonable, or is something going on with my setup?

Kotlin Coroutine Support

We're looking to use Disco for HTTP header propagation in a Kotlin project. Reading the description of how the Agent works makes me concerned that it won't play nice with Kotlin's coroutines.

Can you confirm that Disco works with Kotlin coroutines?

Make all header operations case-insensitive

Header data is currently stored internally using a regular Map. Right now, if you are consuming a ProtocolEvent and would like to get header data, you do something like this:

String headerData = httpNetworkProtocolRequestEvent.getHeaderData("x-forwarded-for");

However, if the header is stored internally as X-Forwarded-For or some other casing, the above returns null despite the header being present. We should consider switching to a TreeMap or similar implementation which can use case-insensitive keys, because header keys are case-insensitive.
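
The proposed fix is standard Java: a TreeMap ordered by String.CASE_INSENSITIVE_ORDER makes lookups succeed regardless of the casing used at storage time.

import java.util.Map;
import java.util.TreeMap;

public class HeaderCasingDemo {
    public static void main(String[] args) {
        Map<String, String> headers = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        headers.put("X-Forwarded-For", "203.0.113.7");
        // Succeeds even though the stored key uses different casing
        System.out.println(headers.get("x-forwarded-for")); // 203.0.113.7, not null
    }
}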

Support Netty

It would be nice if the web package had interceptors for Netty servers, since Netty doesn't use Servlets. The interceptor should publish request/response events with similar information to their Servlet counterparts. I can't see from a quick skim of the docs that Netty has built-in hooks to intercept requests, so some ByteBuddy action will probably be needed.

DiSCo(Concurrency) unable to propagate context in ForkJoinTask

Following the guidelines provided in https://docs.aws.amazon.com/xray/latest/devguide/aws-x-ray-auto-instrumentation-agent-for-java.html, my hope was that I could collect tracing data from the multiple Java services I have implemented as part of my stack.

My first service is a tool we rely on for CQRS called Axon (https://axoniq.io/download). In our case, we use AxonServer Enterprise (paid), but I was able to replicate the exact same issue on a single node of AxonServer Standard Edition (open-source). I decided to build my own container and publish it to ECR. To package and configure the pre-built JAR file for AxonServer, I created the following Dockerfile:

FROM busybox as source

RUN addgroup -S -g 1001 axonserver \
    && adduser -S -u 1001 -G axonserver -h /axonserver -D axonserver \
    && mkdir -p /axonserver/disco/disco-plugins \
    && mkdir -p /axonserver/config /axonserver/data /axonserver/events /axonserver/log \
    && chown -R axonserver:axonserver /axonserver

FROM amazoncorretto/amazoncorretto:15-alpine-full

COPY --from=source /etc/passwd /etc/group /etc/
COPY --from=source --chown=axonserver /axonserver /axonserver

COPY --chown=axonserver axonserver.jar axonserver.properties /axonserver/
COPY --chown=axonserver disco/* /axonserver/disco/

USER axonserver
WORKDIR /axonserver

ENV JAVA_OPTIONS=""
ENV LANG=C.UTF-8

VOLUME [ "/axonserver/config", "/axonserver/data", "/axonserver/events", "/axonserver/log" ]
EXPOSE 8024/tcp 8124/tcp 8224/tcp

ENTRYPOINT java -Duser.timezone=UTC -Djava.security.egd=file:/dev/./urandom -Dlogging.level.com.amazonaws.xray=DEBUG -javaagent:/axonserver/disco/disco-java-agent.jar=pluginPath=/axonserver/disco/disco-plugins:loggerfactory=software.amazon.disco.agent.reflect.logging.StandardOutputLoggerFactory:verbose $JAVA_OPTIONS -jar axonserver.jar

Axon exposes 3 ports that do the following: 8024 provides the GUI (and is also responsible for readiness and liveness); 8124 is the gRPC port responsible for receiving commands/queries, dispatching them to the corresponding handlers, and relaying the resulting events to any interested event handler; and 8224 handles cross-node communication for clustering.

My setup is currently an EKS cluster with various ties to AWS services, such as CloudWatch, Container Insights, ECR, IAM Authenticator, ALB Ingress Controller, etc. To leverage the integration with X-Ray, I've deployed the X-Ray daemon as a DaemonSet in my cluster and exposed it as a headless service reachable at aws-xray-daemon.aws-system.svc.cluster.local:2000. Performing a netcat successfully connects to the UDP port, meaning the daemon is reachable from the containers. Please let me know if you require the Kubernetes manifests for this deployment. It's inspired by https://www.eksworkshop.com/intermediate/245_x-ray/x-ray-daemon/ with my specific setup/configuration.

My xray daemon contains the following configuration:

# Maximum buffer size in MB (minimum 3). Choose 0 to use 1% of host memory.
TotalBufferSizeMB: 0
# Maximum number of concurrent calls to AWS X-Ray to upload segment documents.
Concurrency: 8
# Send segments to AWS X-Ray service in a specific region
Region: "us-east-1"
# Change the X-Ray service endpoint to which the daemon sends segment documents.
Endpoint: ""
Socket:
  # Change the address and port on which the daemon listens for UDP packets containing segment documents.
  # Make sure we listen on all IP's by default for the k8s setup
  UDPAddress: 0.0.0.0:2000
Logging:
  LogRotation: true
  # Change the log level, from most verbose to least: dev, debug, info, warn, error, prod (default).
  LogLevel: prod
  # Output logs to the specified file path.
  LogPath: ""
# Turn on local mode to skip EC2 instance metadata check.
LocalMode: false
# Amazon Resource Name (ARN) of the AWS resource running the daemon.
ResourceARN: ""
# Assume an IAM role to upload segments to a different account.
RoleARN: ""
# Disable TLS certificate verification.
NoVerifySSL: false
# Upload segments to AWS X-Ray through a proxy.
ProxyAddress: ""
# Daemon configuration file format version.
Version: 2

When deploying it on my EKS cluster, the application is fully functional/operational, but no traces show up in the AWS Console UI. When I turn on the logs, I see the following from disco:

axonserver-0 axonserver [software.amazon.disco.agent.DiscoAgentTemplate] DiSCo(Core) finished parsing argument list: pluginPath=/axonserver/disco/disco-plugins:loggerfactory=software.amazon.disco.agent.reflect.logging.StandardOutputLoggerFactory:verbose
axonserver-0 axonserver [software.amazon.disco.agent.DiscoAgentTemplate] DiSCo(Core) passing arguments to ForkJoinTaskInterceptor to process
axonserver-0 axonserver [software.amazon.disco.agent.DiscoAgentTemplate] DiSCo(Core) passing arguments to ForkJoinPoolInterceptor to process
axonserver-0 axonserver [software.amazon.disco.agent.DiscoAgentTemplate] DiSCo(Core) passing arguments to ExecutorInterceptor to process
axonserver-0 axonserver [software.amazon.disco.agent.DiscoAgentTemplate] DiSCo(Core) passing arguments to ThreadInterceptor to process
axonserver-0 axonserver [software.amazon.disco.agent.DiscoAgentTemplate] DiSCo(Core) passing arguments to ThreadSubclassInterceptor to process
axonserver-0 axonserver [software.amazon.disco.agent.DiscoAgentTemplate] DiSCo(Core) passing arguments to ScheduledThreadPoolExecutorInterceptor to process
axonserver-0 axonserver [software.amazon.disco.agent.interception.InterceptionInstaller] DiSCo(Core) attempting to install software.amazon.disco.agent.concurrent.ForkJoinTaskInterceptor
axonserver-0 axonserver [software.amazon.disco.agent.interception.InterceptionInstaller] DiSCo(Core) attempting to install software.amazon.disco.agent.concurrent.ForkJoinPoolInterceptor
axonserver-0 axonserver [software.amazon.disco.agent.interception.InterceptionInstaller] DiSCo(Core) attempting to install software.amazon.disco.agent.concurrent.ExecutorInterceptor
axonserver-0 axonserver [software.amazon.disco.agent.interception.InterceptionInstaller] DiSCo(Core) attempting to install software.amazon.disco.agent.concurrent.ThreadInterceptor
axonserver-0 axonserver [software.amazon.disco.agent.interception.InterceptionInstaller] DiSCo(Core) attempting to install software.amazon.disco.agent.concurrent.ThreadSubclassInterceptor
axonserver-0 axonserver [software.amazon.disco.agent.interception.InterceptionInstaller] DiSCo(Core) attempting to install software.amazon.disco.agent.concurrent.ScheduledThreadPoolExecutorInterceptor
axonserver-0 axonserver [software.amazon.disco.agent.DiscoAgent] DiSCo(Agent) agent startup complete

... Axon startup logs here ...

axonserver-0 axonserver [software.amazon.disco.agent.concurrent.ForkJoinTaskInterceptor] DiSCo(Concurrency) unable to propagate context in ForkJoinTask
axonserver-0 axonserver [software.amazon.disco.agent.concurrent.ForkJoinTaskInterceptor] DiSCo(Concurrency) unable to propagate context in ForkJoinTask

... The same error keeps repeating indefinitely ...

Please let me know if there is anything else I can provide or do to help you troubleshoot this error.
