Athena

A Performance and Functional Testing Engine for APIs

About

Athena is a performance and functional testing engine that aims to reduce the time and effort required to define and run tests. Its main goal is to act as a unified but extensible tool for managing and running functional as well as performance test suites.

What can Athena do?

  • Increase confidence in each release by using an integrated testing framework (performance/functional).
  • Define tests in a modular but configurable way using YAML files.
  • Aggregate test results and provide in-depth reports via Elasticsearch and predefined Kibana dashboards.
  • Manage test versions.
  • Run tests independently of their location.
  • Define assertions programmatically (functional).
  • Easily extend the core functionality of the framework through plugins.
  • Define fixture modules that can be reused among tests.
  • Create complex performance mix patterns using functional tests. (on the roadmap)

📝 Note: A thorough list of upcoming features is available in the Roadmap.

How it Works

Behind the scenes, Athena uses Autocannon for performance testing and Chakram for functional API testing; however, it can support almost any other testing engine via extension.

Getting Started

You can start using Athena right away and run either performance or functional tests using the following command:

node athena.js run -t ./custom_tests_path --[performance|functional]
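
For example, assuming your functional suite lives under ./tests, the following invocation would run it:

node athena.js run -t ./tests --functional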

Developer Guide

TBD

Coding Standards

Athena uses ESLint for linting and follows the Google JavaScript Style Guide with a couple of minor adjustments. If you use IntelliJ as your preferred IDE, make sure to follow this guide to learn more about integrating the two for a better development experience.

Distributed Load Testing

📝 Note: This feature is currently available only for performance testing. Clustering support for functional testing is on the roadmap.

Athena supports clustering out of the box via multiple deployment strategies. Its clustering model is straightforward, requiring a Manager node and at least one Agent node.

Cluster management is fully integrated: you can use the Athena CLI to create a new cluster, join an existing cluster using a secret access token (generated at initialization) and delegate test suites to all available workers, without the need for additional management software.
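
The CLI entry points for creating a cluster and delegating tests look as follows (command shapes taken from the examples elsewhere in this document; the angle-bracket values are placeholders):

node athena.js cluster --init --addr <BIND_ADDR>
node athena.js cluster --run --performance --tests <TESTS_PATH>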

Reporting and aggregation inside the cluster are also provided out of the box. The cluster manager constantly monitors the cluster state and its pending and running jobs, and aggregates all data inside Elasticsearch. The complete state of the cluster (available agents, previous job reports, etc.) can be easily visualised in custom Kibana dashboards.

Management via a UI dashboard will provide an easy-to-use solution for defining test suites as well as managing previous test runs. (roadmap feature)

Creating a new Cluster

There are multiple ways of bootstrapping a new cluster.

Standalone

Creating a standalone cluster can be achieved using the following command:

node athena.js cluster --init --addr 0.0.0.0

Once the cluster manager is bootstrapped, you will see a standard message along with the instructions needed to join the cluster from another node. There is no need to specify a port, as Athena will automatically assign one in the 5000-5100 range.

ℹ️  INFO:  Creating a new Athena cluster on: 0.0.0.0:5000 ...
ℹ️  INFO:  Athena cluster successfully initiated: current node (qa8NK1-QfWNq) is now a manager.
        
        👋 Hello! My name is "tall-coral" and I was assigned to manage this Athena cluster!
        
        To add more workers to this cluster, run the following command:
        node athena.js cluster --join --token DwJHjvTpmE73b-sgfkIcpZCDbiN8MMp6xdZHSb-N01Zp949M-YKcQUOS7w3-fi-u --addr 0.0.0.0:5000

📝 Note: Monitoring and reporting while bootstrapping a new standalone cluster requires extra configuration for Elasticsearch indexing.

Docker Compose

The preferred way of bootstrapping a new cluster is via Docker Compose. Using the provided docker-compose.yaml configuration file, you can easily kickstart a complete Athena cluster Manager on the fly.

docker-compose up

This will start a new Athena process running in Manager mode, an Elasticsearch cluster, a Filebeat service and Kibana for visualisation.

Once all the services are bootstrapped, you can use the generated access token to join the current cluster from other nodes.
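
The join command follows the shape printed by the manager at initialization (the angle-bracket values are placeholders):

node athena.js cluster --join --token <TOKEN> --addr <MANAGER_ADDR>:<PORT>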

Accessing Kibana and Elasticsearch

Kibana

Once the Compose stack is up and running, you can access Kibana at:

http://localhost:5601

Elasticsearch

Also, Elasticsearch can be accessed at:

http://localhost:9200

Process Management

Athena uses the PM2 process manager behind the scenes for managing the cluster Manager and Agent processes. Therefore, you can get useful information about the running processes and manage them easily via the CLI.
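
Since Athena delegates process supervision to PM2, the standard PM2 commands work against the spawned processes; the athena-manager and athena-agent process names below match those used elsewhere in this document:

pm2 list
pm2 logs athena-manager
pm2 logs athena-agent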

Aggregation and Reporting

Each report is indexed in Elasticsearch and can be aggregated and previewed using Kibana.

Kibana Dashboard - Performance Results

Athena provides a custom Kibana Dashboard that aggregates performance job results. The aggregated results can provide insights into a single performance job executed by a single Agent inside the cluster, or into the results of the entire cluster.

You can access the Performance Reports dashboard inside Kibana > Dashboard > Performance Reports. The following visualizations are included inside the Performance Report dashboard:

  • Connections Goal
  • Average RPS
  • RPS in the 99th Percentile
  • 2xx Responses
  • non-2xx Responses
  • Duration (seconds)
  • Total Requests
  • RPS Over Time (area chart)
  • Requests Increase Over Time (RIOT) (area chart)
  • RPS Percentiles (bar chart)

Isolating Performance Reports

  • Use job_id : "<JOB_ID>" to filter the results down to a specific job (this shows results for the entire cluster if the job ran that way).
  • Use agent_name : "<AGENT_NAME>" to filter the results down to a specific agent.
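
For example, the two filters can be combined in the Kibana query bar (KQL syntax) to isolate a single agent's contribution to one job; the angle-bracket values are placeholders:

job_id : "<JOB_ID>" and agent_name : "<AGENT_NAME>"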

Optimizing Your System for Performance

To get the best performance from your nodes while running Athena, make sure to fine-tune your open-file limits as well as your network and kernel settings.

Depending on your operating system, you can raise your open-file limit using the following command:

ulimit -n 200000

Furthermore, the following network and kernel settings are recommended inside sysctl.conf:

net.ipv4.tcp_max_syn_backlog = 40000
net.core.somaxconn = 40000
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_mem  = 134217728 134217728 134217728
net.ipv4.tcp_rmem = 4096 277750 134217728
net.ipv4.tcp_wmem = 4096 277750 134217728
net.core.netdev_max_backlog = 300000
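
On most Linux distributions, these settings can be applied without a reboot by adding them to /etc/sysctl.conf and reloading the configuration:

sudo sysctl -p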

Performance Tests

Athena allows you to define flexible, traffic-shaping performance scenarios, namely:

  • Stress Testing - Constant traffic at specified parameters.
  • Load Testing - A steady increase of traffic until a threshold is reached.
  • Spike Testing - Short bursts of high traffic.
  • Soak Testing - Sustained traffic over a long duration to verify reliability.

Performance tests within Athena are composed of three types of modules. Each can be defined in an individual YAML configuration file, and together they can be used to compose complex performance tests.

  1. Performance Runs - The most granular unit of work in performance tests. They allow you to define a standard or linear performance test.
  2. Performance Patterns - Composed of multiple performance runs; they allow you to define complex weighted pattern mixes.
  3. Performance Tests - Composed of performance patterns; they provide support for defining complex scenarios while controlling the rampUp, coolDown and spike behavior.

Hooks

All performance test definitions support multiple hooks that can be used to dynamically manipulate a test's behavior. When a hook is used, the assigned function will receive the test's current context, which can be used for further decisions.

skip

Whether to skip the current test or not. The value can be provided dynamically via a fixture function.

onInit

Runs when the test is first initialised.

onRequest

Runs before the HTTP request.

onResponse

Runs when the HTTP response is received.

onDestroy

Runs when the test case has finished.
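
Putting these together, a performance run might wire its hooks as follows. This is a minimal sketch: the fixture names are hypothetical, and it assumes each hook accepts a fixture reference, as the skip description above suggests.

hooks:
  skip: shouldSkipInStaging   # hypothetical fixture returning a boolean
  onInit: seedTestData        # hypothetical setup fixture
  onRequest: signRequest      # hypothetical; e.g. add auth headers
  onResponse: recordLatency   # hypothetical; e.g. collect custom metrics
  onDestroy: cleanupTestData  # hypothetical cleanup fixture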

Mockup Responses

TBD

Fault Injection

TBD

Configuration

The following section describes the configuration model defined for Athena's performance test types.

📝 Note: The config and hooks objects cascade between performance test types, and the provided values are overridden according to the specificity level; a value set on a more specific type wins.

Runs

The following config properties are available for performance runs.

name: string
version: string
description: string
engine: string
type: perfRun # Required perf run identifier.
config:
    url: string
    socketPath: string
    connections: number
    duration: number # in seconds
    amount: number # overrides duration
    timeout: number
    pipelining: number # pipelined requests per connection
    bailout: number
    method: string
    title: string
    body: string
    headers: object
    maxConnectionRequests: number
    connectionRate: number
    overallRate: number
    reconnectRate: number
    requests: [object]
    idReplacement: string
    forever: boolean
    servername: string
    excludeErrorStats: boolean
hooks:
  onInit:
  onDestroy:
  onRequest:
  onResponse:

Patterns

The following config properties are available for performance patterns.

name: string
version: string
description: string
engine: autocannon      # required
type: perfPattern       # required
pattern:
  - ref: string         # the performance run reference
    version: string
    weight: string      # percentage (e.g. 20%)
    config:             # object
                        # see perf. run config for an example.
    hooks:              # object
                        # see perf. run hooks for an example.

Tests

The following config properties are available for complete performance test definitions:

name: string
description: string
engine: autocannon      # required
type: perfTest          # required
hooks:                  # object
    # ...
config:                 # object
    # ...
scenario:
  pattern:
    - ref: "prod"
      version: "1.0"
      config:
        # granular config control 
      rampUp:
        every: 10s      # or fixed
        rps: 10
        connections: 10
        threads:
        fixed: 30s      # or every
      coolDown:
        every: 10s      # or fixed
        rps: 10
        threads:
        connections: 10
        fixed: 30s      # or every
      spike:
        every: 10s      # or fixed
        rps: 10
        threads:
        connections: 10
        fixed: 30s      # or "every"; if "fixed", you must also specify "after"
        after: 30s

Functional Tests

Entities

Tests

At a granular level, functional tests must also be defined via YAML files that follow a Gherkin-like schema:

  • given - Variables and prerequisites for the API call.
  • when - The API call itself.
  • then - The validation of expected outcomes.

Functional tests also support hooks, which are simply sections where additional preparations or cleanup can be done (e.g. obtaining an authentication token before using it within a request or setting and resetting resource states).

The order in which these sections will be executed is:

  • setup
  • given
  • beforeWhen
  • when
  • beforeThen
  • then
  • teardown

Test Example:

type: test
name: Sample test
engine: chakram
scenario:
  given: >
    host = "https://httpbin.org/get";
    params = {
      headers: {
        "accept": "application/json"
      }
    };
  when: >
    response = chakram.get(host, params)
  then: >
    expect(response).to.have.status(200)
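
The same schema can carry hooks alongside the scenario. The following is a sketch, assuming hooks accept inline code just like the scenario sections do (the suite example below suggests they do); the token value is a placeholder:

type: test
name: Sample test with hooks
engine: chakram
hooks:
  setup: >
    token = "placeholder-token"
  teardown: >
    console.log("cleaning up")
scenario:
  given: >
    host = "https://httpbin.org/get";
    params = { headers: { "Authorization": token } };
  when: >
    response = chakram.get(host, params)
  then: >
    expect(response).to.have.status(200)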

Suites

You can organize tests together and apply hierarchical configuration via suites. A suite can flexibly override any or all of the hook/scenario items for the tests grouped under it, as in the following example:

type: suite
name: sampleSuite
engine: chakram
hooks:  # affects hooks for all suite tests (with lower precedence)
  setup: console.log("override setup")
  beforeWhen: console.log("override beforeWhen")
  beforeThen: console.log("override beforeThen")
  teardown: console.log("override teardown")
  tests:  # affects hooks only for simpleTest (with higher precedence)
    - ref: simpleTest
      setup: console.log("override setup")
      beforeWhen: console.log("override beforeWhen")
      beforeThen: console.log("override beforeThen")
      teardown: console.log("override teardown")
scenario:  # affects scenario for all suite tests (with lower precedence)
  given: console.log("override given")
  when: console.log("override when")
  then: console.log("override then")
  tests:  # affects scenario only for simpleTest (with higher precedence)
  - ref: simpleTest
    given: console.log("override given")
    when: console.log("override when")
    then: console.log("override then")

Plugins and Fixtures

Fixtures are helper functions that can be injected in various contexts, while plugins allow you to extend Athena's functionality. A limited set of out-of-the-box plugins will be provided by the framework, but users can build almost any functionality on top of what is already offered (for now, there is a cryptographic utility, but more are in the works).

Configuration

Fixtures

Fixtures must first be defined and configured via YAML files before they can be used inside tests. The following configuration options are available when defining fixtures:

name

(Required) The fixture name in camelCase, which will also act as the provisioned function name.

type

(Required) The module type (fixture).

config:type

(Required) The fixture type. Allowed values: lib, inline.

config:source

(Required) The fixture source path if config:type is set to lib, or the fixture implementation if config:type is set to inline.

The following is an example of a valid fixture definition:

name: getUUID
type: fixture
config:
  type: lib
  source: fixtures/getUUIDFixture.js
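
An inline fixture embeds its implementation directly in the definition. The following is a sketch, assuming the source field holds the implementation code when config:type is set to inline:

name: getTimestamp
type: fixture
config:
  type: inline
  source: "() => Date.now()"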

Plugins

Plugins allow you to extend Athena's core functionality via setup actions and filters. Using plugins, you can intercept Athena's behaviour at specific times or even override it completely.

📝 Note: A thorough list of available filters and actions is in the works.

Dependencies

All plugin and fixture dependencies are automatically detected and installed at runtime. The following example is a valid fixture, with no need for you to manage a package.json file.

// Dependencies such as "uuid" are detected and installed automatically at runtime.
const uuid = require("uuid/v1");

function uuidFixture() {
    // Return a freshly generated v1 UUID.
    return uuid();
}

module.exports = uuidFixture;
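
Once defined, a fixture is provisioned under its configured name and can be called from test code. A minimal sketch, assuming fixtures are injected into the scenario scope (using the getUUID fixture defined earlier):

given: >
  id = getUUID();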

💡 Note: If a test is marked as unstable during the pre-flight check, its dependencies will not be installed.

Roadmap

  • RESTful API.
  • Web-based dashboard UI for managing suites, tests and the cluster.
  • Ability to run functional tests as complex performance mix patterns.
  • Support for Git hooks.
  • Extended storage support for multiple adapters.
  • Sidecar for Kubernetes.

Sidecar for Kubernetes

Injected as a separate pod inside a node via Kubernetes hooks and a Kubernetes controller, the sidecar modifies iptables so that all inbound and outbound traffic goes through the Athena sidecar. For checks and traffic proxying, Athena uses an Envoy proxy that it configures for outbound traffic.

Sidecar deployment model

Sidecar for Docker Images

Via a Docker Compose configuration and bash scripting, Athena acts as a sidecar for individual Docker images. The approach is the same for a Kubernetes cluster.

Git Hooks

Athena can be configured to listen to Git hooks and run tests in any directory that contains a file called .perf.athena.

RESTful API

Athena has a simple control plane and a powerful RESTful web server that can be used to manage suites of tests. Using the API, you can start/stop different tests as well as manage the collected metrics about a specific test or suite run.

Management via UI Dashboard

Configuration management should be handled via a web-based custom dashboard UI that takes advantage of the exposed RESTful API.

Troubleshooting

TBD

Frequently Asked Questions

  • 🤔 Question: In terms of performance, how does Athena compare with other load testing tools?
    • 💬 Answer: Behind the scenes, Athena uses the Autocannon load testing engine, which is able to deliver more load than wrk and wrk2. We've benchmarked three load testing tools (Autocannon, wrk2 and Gatling) and published our results in this short article on Medium.

Contributing

Contributions are welcome! Read the Contributing Guide for more information.

Licensing

This project is licensed under the Apache V2 License. See LICENSE for more information.


Issues

Add Cluster Support for Performance Testing

Feature

  • Provide support for running Athena as a manager node.
  • Allow Athena agents to join an already set-up cluster using an access token generated by the manager node.
  • Use the Athena CLI to manage the cluster and delegate performance tests to all agents inside the cluster.
  • Provide support for the Athena manager to aggregate the results from all worker nodes and store them inside a local database (sqlite?).
  • Allow the Athena manager to report the test results inside a CI/CD environment.

The command line arguments are not propagated to master when running performance tests in cluster mode

The command line arguments given when running performance tests in cluster mode are not propagated to the ManagerNode and are discarded.

Expected Behaviour

When issuing a command to the master to run performance tests from a given path, the tests located at that path are executed.

Actual Behaviour

When issuing a command to the master to run performance tests from a given path, the tests located at the default path are used instead.

Steps to Reproduce

node athena.js cluster --init --addr 0.0.0.0
node athena.js cluster --join --addr 0.0.0.0:5000 --token abcd

node athena.js cluster --run --performance --tests gw-tests/performance/upstream-directive/baseline

pm2 logs athena-agent
pm2 logs athena-manager

Switch to more inclusive language

Could you please take some time in the next few days to make changes to some terminology in your repos and content as much as is possible:

  • Whitelist/blacklist to Allowed List and Blocked List (or Approve List/Deny List - some software uses this instead) respectively. Google and many developers are formalizing allowlist and blocklist. You might want to lobby for those terms to be used in the UI.
  • Master/Slave to master and replica (or subordinate, if that makes more sense) respectively.

If you cannot remove the term because the writing, for example, reflects the UI or the code, please make a note and send me an email to [email protected] so we can bring it to that team's attention. Thanks for your efforts in this matter.

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on Greenkeeper branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet.
We recommend using:

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please click the 'fix repo' button on account.greenkeeper.io.

suite model potentially wrong

Tests should not present a dependency on a suite.

The suite model is potentially wrong. A suite should look like:

# ...
scenario:
  given: >
    # ...
  then: >
    # ...
  when: >
    # ...
  tests:
    - ref: "name"
      version: "x.x"
      # potential overriding fields (must respect the test model)

Please change the entity parsing logic so that suites reference tests, not the other way around.

Failing tests report only the first failing assertion

In order to understand the state of a failing test, a list of all the failing assertions should be provided. See an example of faulty reporting below:

  • test file:
    expect(response).to.have.header("pragma", "no-cache"),
    expect(response).to.have.header("accept-ranges", "bytes"),
    expect(response).to.have.header("X-First-Bad-Header", "this_header_enjoys_espresso"),
    expect(response).to.have.header("X-Second-Bad-Header", "this_value_drinks_only_decaf")
  • output:
     AssertionError: expected header X-First-Bad-Header with value undefined to match this_header_enjoys_espresso

Add sidecar support

Expected Behaviour

Ability to be injected around a Kubernetes pod and, on a server, around a Docker image. Start tests and intercept outbound traffic.

Actual Behaviour

N/A

Reproduce Scenario (including but not limited to)

N/A

Steps to Reproduce

N/A

Platform and Version

0.X

Sample Code that illustrates the problem

Known issue

Logs taken while reproducing problem

N/A

Simplify paths for config and test collection

  • Move config.js to the repo root level.
  • Allow for arbitrary directory structure when collecting tests.
  • Automatically detect fixtures when a specific test is passed through the -t option.

REST API control plane

Expected Behaviour

Add a REST API for starting functional/performance tests. Since these might be long-running processes, return a correlationId to the client so that the status of the job can be queried later.

JSON array is transformed into a map when removing empty elements

Expected Behaviour

The JSON array stays an array when removing empty elements.

Actual Behaviour

The JSON array is transformed into a map when removing empty elements.

Reproduce Scenario (including but not limited to)

Steps to Reproduce

Set examples/performance/perfRun.yaml to

name: "apiKey"
version: "1.0"
description: ""
engine: autocannon
type: perfRun
config:
  url: https://api-gateway-qe-d-ue1.adobe.io/test
  connections: 2000
  duration: 30
  method: GET
  requests: # Array of objects.
    - headers:
        host: keepalive-with-upstream-test-stage-service-1.adobe.io
      method:
      path:
    - headers:
        host: baseline-performance-testing-stage-service-1.adobe.io
hooks:
  onInit:
  onDestroy:
  onRequest:

and run Athena.

The model looks insufficiently restricted for API testing

Besides offering a plugin as part of the base platform, please also consider a more keyword-based / YAML-structured alternative to what is currently offered. While clearly flexible, the code-heavy model can be a source of inconsistency (e.g. numerous equivalent ways to express the same thing, arbitrary function calls). I raise this issue because many aspects of API testing are fairly standard (there are only so many types of entities attached to a request or expected from a response).

The coding capabilities need not be removed from this alternative, but certain aspects look repetitive enough and can be addressed case by case (for instance, checking multiple headers requires the same lengthy call many times). I understand that this means adding custom logic where out-of-the-box functionalities already took care of things, yet I consider this feature valuable as long as it does not eat too much development time.

What do you think is the proper way to include this functionality?

Write unit tests

Create unit test collection for the framework in order to test several critical parts.

Register perfRun

perfRun needs to be registered in perfPattern, and its configs, hooks and functions need to be mapped.

Option for verbose report of passing assertions

Currently, passing tests simply show up in a flat list. A flag (e.g. --verbose) should be provided so that each expected item within a passing test can be inspected. Examples where this is useful:

  • test passes under multiple conditions that are discernible from the assertions
  • report information can be further used (part of the response, the result of test fixtures, etc)

Register perfPattern

Ensure registration of the pattern and its context, with overriding for referenced perf runs.
