Spring Cloud Data Flow Acceptance Tests

This project bootstraps a Data Flow server on a target platform, executes a series of tests by creating streams and tasks, and then cleans up after it is done.

How to run it

The main script is called run.sh and supports a few flags:

USAGE: run.sh -p <PLATFORM> -b <BINDER> [-s -t -c]
  The default mode will set up, run tests, and clean up; you can control which stages are
  executed by toggling the flags (-s, -t, -c)

Flags:

[*] -p  | --platform - define the target platform to run
    -b  | --binder - define the binder (e.g. rabbit, kafka); defaults to rabbit
    -tests - comma separated list of tests to run. Wildcards such as *http* are allowed. (e.g. -tests TickTockTests#tickTockTests)
    -s  | --skipSetup - skip setup phase
    -t  | --skipTests - skip test phase
    -c  | --skipCleanup - skip the clean up phase
    -d  | --doNotDownload - skip the downloading of the server
    -m  | --skipperMode - specify if skipper mode should be enabled
    -cc | --skipCloudConfig - skip Cloud Config server tests for CF
    -sv | --skipperVersion - set the skipper version to test (e.g. 1.0.4.BUILD-SNAPSHOT)
    -dv | --dataflowVersion - set the dataflow version to test (e.g. 1.5.0.BUILD-SNAPSHOT)
    -av | --appsVersion - set the stream app version to test (e.g. Celsius.SR2). Apps should be accessible via Maven repo or Docker Hub.
    -tv | --tasksVersion - set the task app version to test (e.g. Clark.RELEASE). Tasks should be accessible via Maven repo or Docker Hub.
    -se | --schedulesEnabled - installs scheduling infrastructure and configures SCDF to use the service.
    -na | --noAutoreconfiguration - tell the buildpack to disable Spring Auto Reconfiguration

[*] = Required arguments
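As an illustration, the flags above can be combined to run a single test class against an environment that already exists (the test name is taken from the flag description above):

./run.sh -p local -s -c -tests TickTockTests#tickTockTests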

The first option is to choose a PLATFORM; the available options are cloudfoundry, local, gke (Kubernetes), and pks (Kubernetes). The scripts for each platform are located in a folder of the same name in the main directory.

By default the script will execute three main phases:

  • Setup: The setup phase will traverse each folder and call the create.sh scripts. At the end of this phase you should expect to have an environment with the Spring Cloud Data Flow server available, along with the services required for it to run.

  • Test: The test phase will invoke mvn test, deploying apps into the environment and running the tests.

  • Clean: The cleanup phase will undeploy the server and remove any services it created.

Each phase can be skipped by setting the appropriate flag (-s, -t, -c).
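For instance, against an environment that was already set up, you can re-run only the test phase and keep everything running afterwards:

./run.sh -p local -s -c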

The services created in the setup phase are MySQL and Redis. Depending on the binder selected, a Rabbit or Kafka service is also created.

Examples

To run the tests locally with Rabbit (the default binder), cleaning up services afterwards:

./run.sh -p local

To run the tests locally with Kafka, cleaning up services afterwards:

./run.sh -p local -b kafka

To run the tests locally and keep the Data Flow Server, Kafka, and other services running afterwards:

./run.sh -p local -b kafka -c

To run the tests on Cloud Foundry with Rabbit (the default binder), cleaning up services afterwards:

./run.sh -p cloudfoundry

To set up a Data Flow Server and services on Cloud Foundry without running tests:

./run.sh -p cloudfoundry -c -t

General configuration

Make sure you have JAVA_HOME configured correctly in your environment.

Each platform has an env.properties file located at init/env.properties; change its values to reflect your environment. Each platform has different settings, but the global ones are:

  • RETRIES: Number of times to test a port when checking a service status (6 by default)

  • WAIT_TIME: How long to wait between port tests (5s by default)

  • SPRING_CLOUD_DATAFLOW_SERVER_DOWNLOAD_URL: Location of the dataflow jar file to be downloaded.
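As an illustration, a local init/env.properties might look like the following; the download URL is only an example of the expected form, not a guaranteed artifact location:

RETRIES=6
WAIT_TIME=5
SPRING_CLOUD_DATAFLOW_SERVER_DOWNLOAD_URL=https://repo.spring.io/snapshot/org/springframework/cloud/spring-cloud-dataflow-server-local/1.5.0.BUILD-SNAPSHOT/spring-cloud-dataflow-server-local-1.5.0.BUILD-SNAPSHOT.jar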

Required environment variables when skipping setup

If you want to point to an already running Data Flow server, set the following environment variables to locate the server and to specify which artifacts should be registered with it:

  • SERVER_URI - default is http://localhost:9393

  • STREAM_REGISTRATION_RESOURCE - default is maven + rabbit based apps

  • TASK_REGISTRATION_RESOURCE - default is maven + rabbit based tasks

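For example, to target a server you started yourself (the resource URIs are placeholders for your own app registration lists):

export SERVER_URI=http://localhost:9393
export STREAM_REGISTRATION_RESOURCE=<your stream apps resource URI>
export TASK_REGISTRATION_RESOURCE=<your task apps resource URI>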

Running Scheduling Acceptance Tests

Scheduling acceptance tests are disabled by default because not all Data Flow implementations support scheduling yet. To enable them, use the -se flag. If the platform is cloudfoundry, you will also need to set the scheduler URL, for example export SCHEDULES_URL=<your scheduler url>.

Note
Currently, Spring Cloud Data Flow for Cloud Foundry and for Kubernetes are the only implementations that support scheduling.
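Putting it together, a scheduling run on Cloud Foundry might look like this (the URL is a placeholder):

export SCHEDULES_URL=<your scheduler url>
./run.sh -p cloudfoundry -se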

Platform specific notes

Local

Pre-requisites

  • docker and docker-compose installed. Make sure you can connect to the docker daemon without using 'sudo', e.g. docker info works.

  • $DOCKER_SERVER environment variable properly set. Defaults to localhost, which works on Unix; on macOS, 192.168.99.100 should work.

The local deployment will always try to connect first to a service running on your local machine, so if you have a local Redis it will be used.

If a local service is not found, the script will try to deploy it using docker-compose, so it is important that you have docker-compose installed and configured properly.

When cleaning up, the script will only remove Docker images; if you are using a local service such as Redis or MySQL, the script will not touch it.
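A quick sanity check before a local run might look like this (the DOCKER_SERVER export is only needed when the default of localhost does not apply, e.g. on macOS as noted above):

docker info
docker-compose --version
export DOCKER_SERVER=192.168.99.100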

CloudFoundry

Pre-requisites

On Cloud Foundry, make sure you have the following environment variables exported. They are intentionally not included in any files, to prevent credentials from being leaked into GitHub repos.

  • SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL

  • SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN

  • SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME

  • SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD
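For example (all values are placeholders for your own environment):

export SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL=https://api.run.example.com
export SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN=apps.example.com
export SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME=<your username>
export SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD=<your password>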

Configuration

You can override service names and plans by either exporting or changing the following properties:

  • MYSQL_SERVICE_NAME

  • MYSQL_PLAN_NAME

  • RABBIT_SERVICE_NAME

  • RABBIT_PLAN_NAME

  • REDIS_SERVICE_NAME

  • REDIS_PLAN_NAME
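For example, to pick specific marketplace offerings (the service and plan names below are illustrative; use whatever your marketplace provides):

export MYSQL_SERVICE_NAME=p-mysql
export MYSQL_PLAN_NAME=100mb
export RABBIT_SERVICE_NAME=p-rabbitmq
export RABBIT_PLAN_NAME=standard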

The creation and deletion of services are implemented as blocking functions: during setup, for instance, a test job will wait until a service has been created before continuing. After requesting Cloud Foundry to create or delete a service, these functions poll periodically until the request has been fully met. The defaults for the number of polls and the delay between polls can be overridden using the following properties:

  • SCDFAT_RETRY_MAX (default 100, set to <0 for no max)

  • SCDFAT_RETRY_SLEEP (in seconds, default 5)
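A minimal sketch of the blocking behavior for service creation, assuming the cf CLI is on the path; the function name wait_for_service is hypothetical and this is not the script's actual implementation:

wait_for_service() {
  # Poll until `cf service <name>` reports success. A negative
  # SCDFAT_RETRY_MAX never reaches zero, so there is no upper bound.
  local retries=${SCDFAT_RETRY_MAX:-100}
  until cf service "$1" | grep -q 'succeeded'; do
    retries=$((retries - 1))
    if [ "$retries" -eq 0 ]; then
      echo "timed out waiting for service $1" >&2
      return 1
    fi
    sleep "${SCDFAT_RETRY_SLEEP:-5}"
  done
}

For example, export SCDFAT_RETRY_MAX=200 before running the tests to allow twice as many polls, or set a negative value to poll indefinitely.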

Kubernetes (GKE)

Pre-requisites

Google Cloud SDK installed with the kubectl component enabled.

Configuration

The following environment variables must be set:

  • GCLOUD_PROJECT

  • GCLOUD_COMPUTE_ZONE

  • GCLOUD_CONTAINER_CLUSTER

Note
You can also set a KUBERNETES_NAMESPACE environment variable that specifies an existing namespace to use for the testing. If this is not specified, the 'default' namespace will be used.

If you use a service account, make sure to set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to your service account key file, and use the following to authenticate:

gcloud auth activate-service-account --key-file $GOOGLE_APPLICATION_CREDENTIALS
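For example (project, zone, and cluster names are placeholders); fetching the cluster credentials with get-credentials is an assumption about your workflow, as the scripts may handle this themselves:

export GCLOUD_PROJECT=<your project>
export GCLOUD_COMPUTE_ZONE=us-central1-a
export GCLOUD_CONTAINER_CLUSTER=<your cluster>
gcloud container clusters get-credentials $GCLOUD_CONTAINER_CLUSTER --zone $GCLOUD_COMPUTE_ZONE --project $GCLOUD_PROJECT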

Kubernetes (PKS)

Pre-requisites

Configuration

The following environment variables must be set:

  • PKS_CLUSTER_NAME

  • PKS_ENDPOINT

  • PKS_USERNAME

  • PKS_PASSWORD

Note
You can also set a KUBERNETES_NAMESPACE environment variable that specifies an existing namespace to use for the testing. If this is not specified, the 'default' namespace will be used.
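For example (all values are placeholders); the pks login step is an assumption about how you authenticate against the PKS API, as the scripts may handle this themselves:

export PKS_CLUSTER_NAME=<your cluster>
export PKS_ENDPOINT=<your PKS API endpoint>
export PKS_USERNAME=<your username>
export PKS_PASSWORD=<your password>
pks login -a $PKS_ENDPOINT -u $PKS_USERNAME -p $PKS_PASSWORD -k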

Code formatting guidelines

  • The directory /etc/eclipse has two files for use with code formatting, eclipse-code-formatter.xml for the majority of the code formatting rules and eclipse.importorder to order the import statements.

  • In Eclipse you import these files by navigating to Window → Preferences and then the menu items Preferences > Java > Code Style > Formatter and Preferences > Java > Code Style > Organize Imports respectively.

  • In IntelliJ, install the plugin Eclipse Code Formatter. You can find it by searching "Browse Repositories" under the plugin options within IntelliJ (once installed, you will need to restart IntelliJ for it to take effect). Then navigate to IntelliJ IDEA > Preferences and select the Eclipse Code Formatter. Select the eclipse-code-formatter.xml file for the field Eclipse Java Formatter config file and the eclipse.importorder file for the field Import order. Enable the Eclipse code formatter by clicking Use the Eclipse code formatter, then click the OK button.

    • NOTE: If you configure the Eclipse Code Formatter from File > Other Settings > Default Settings, it will set this policy across all of your IntelliJ projects.
