

elastic-package

elastic-package is a command line tool, written in Go, used for developing Elastic packages. It can help you lint, format, test and build your packages. Learn about each of these and other features in Commands below.

Currently, elastic-package only supports packages of type Elastic Integrations.

Getting started

Download the latest release from the Releases page.

On macOS, use xattr -r -d com.apple.quarantine elastic-package after downloading to allow the binary to run.

Alternatively, you can use go install, but then the elastic-package version command and the update check will not work.

go install github.com/elastic/elastic-package@latest

Please make sure that you have correctly set up the $GOPATH and $PATH environment variables, and that elastic-package is accessible from your $PATH.
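For example, with a default Go installation, the following makes binaries installed by go install reachable ($HOME/go is Go's default GOPATH; your system may differ):

```shell
# Add Go's default binary directory to PATH so the elastic-package
# binary installed by `go install` can be found.
export PATH="$HOME/go/bin:$PATH"
```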

Change directory to the package under development.

cd my-package

Run the help command and see available commands:

elastic-package help

Development

Even though the project is "go-gettable", a Makefile is provided, which can be used to build, install, and format the source code, among other tasks. Some of the available targets are:

make build - build the tool source

make clean - delete elastic-package binary and build folder

make format - format the Go code

make check - a one-liner used by the CI to verify that the source code is ready to be pushed to the repository

make install - build the tool source and move binary to $GOBIN

make gomod - ensure go.mod and go.sum are up to date

make update - update README.md file

make licenser - add the Elastic license header in the source code

To start developing, clone and build the latest main branch of elastic-package:

git clone https://github.com/elastic/elastic-package.git
cd elastic-package
make build

When developing on Windows, please use the core.autocrlf=input or core.autocrlf=false option to avoid issues with CRLF line endings:

git clone --config core.autocrlf=input https://github.com/elastic/elastic-package.git
cd elastic-package
make build

This option can also be configured on existing clones with the following commands. Be aware that these commands will remove uncommitted changes.

git config core.autocrlf input
git rm --cached -r .
git reset --hard

Testing with integrations repository

While working on a new branch, it is useful to test your changes against all the packages defined in the integrations repository. This covers a much wider set of scenarios than the test packages defined in this repository.

This test can be triggered directly from your Pull Request by adding a comment test integrations. Example:

This comment triggers a Buildkite pipeline.

This pipeline creates a new draft Pull Request in the integrations repository, updating the required dependencies to test your changes. As the new pull request is created, a CI job is triggered to test all the packages defined in that repository. A comment with a link to this new Pull Request will be posted in your elastic-package Pull Request.

IMPORTANT: Remember to close the PR in the integrations repository once you close your elastic-package Pull Request.

Usually, this process would require the following manual steps:

  1. Create your elastic-package pull request and push all your commits
  2. Get the SHA of the latest changeset of your PR:
     $ git show -s --pretty=format:%H
     1131866bcff98c29e2c84bcc1c772fff4307aaca
  3. Go to the integrations repository and update go.mod and go.sum with that changeset:
     cd /path/to/integrations/repository
     go mod edit -replace github.com/elastic/elastic-package=github.com/<your_github_user>/elastic-package@1131866bcff98c29e2c84bcc1c772fff4307aaca
     go mod tidy
  4. Push these changes into a branch and create a Pull Request
    • Creating this PR would automatically trigger a new Jenkins pipeline.

Commands

elastic-package currently offers the commands listed below.

Some commands have a global context, meaning that they can be executed from anywhere and they will have the same result. Other commands have a package context; these must be executed from somewhere under a package's root folder and they will operate on the contents of that package.

For more details on a specific command, run elastic-package help <command>.

elastic-package help

Context: global

Use this command to get a listing of all commands available under elastic-package and a brief description of what each command does.

elastic-package completion

Context: global

Use this command to output shell completion information.

The command outputs shell completion information for bash, zsh, fish, and powershell. The output can be sourced in the shell to enable command completion.

Run elastic-package completion and follow the instructions for your shell.

elastic-package benchmark

Context: package

Use this command to run benchmarks on a package. Currently, the following types of benchmarks are available:

Pipeline Benchmarks

These benchmarks allow you to benchmark any Ingest Node Pipelines defined by your packages.

For details on how to configure pipeline benchmarks for a package, review the HOWTO guide.

Rally Benchmarks

These benchmarks allow you to benchmark an integration corpus with rally.

For details on how to configure rally benchmarks for a package, review the HOWTO guide.

Stream Benchmarks

These benchmarks allow you to benchmark real-time data ingestion. You can stream data to a remote Elasticsearch cluster by setting the following environment variables:

ELASTIC_PACKAGE_ELASTICSEARCH_HOST=https://my-deployment.es.eu-central-1.aws.foundit.no
ELASTIC_PACKAGE_ELASTICSEARCH_USERNAME=elastic
ELASTIC_PACKAGE_ELASTICSEARCH_PASSWORD=changeme
ELASTIC_PACKAGE_KIBANA_HOST=https://my-deployment.kb.eu-central-1.aws.foundit.no:9243

System Benchmarks

These benchmarks allow you to benchmark an integration end to end.

For details on how to configure system benchmarks for a package, review the HOWTO guide.

elastic-package benchmark pipeline

Context: package

Run pipeline benchmarks for the package.

elastic-package benchmark rally

Context: package

Run rally benchmarks for the package (esrally must be installed and available on the system PATH).

elastic-package benchmark stream

Context: package

Run stream benchmarks for the package.

elastic-package benchmark system

Context: package

Run system benchmarks for the package.

elastic-package build

Context: package

Use this command to build a package. Currently it supports only the "integration" package type.

Built packages are stored in the "build/" folder located at the root of the local Git repository checkout that contains your package folder. The command will also render the README file in your package folder if a corresponding template file is present in "_dev/build/docs/README.md". All "_dev" directories under your package will be omitted. For details on the syntax of this README and how it is generated, see the HOWTO guide.

Built packages are served up by the Elastic Package Registry running locally (see "elastic-package stack"). If you want a local package to be served up by the local Elastic Package Registry, make sure to build that package first using "elastic-package build".

Built packages can also be published to the global package registry service.

For details on how to enable dependency management, see the HOWTO guide.

elastic-package changelog

Context: package

Use this command to work with the changelog of the package.

You can use this command to modify the changelog following the expected format and good practices. This can be useful when introducing changelog entries for changes done by automated processes.

elastic-package changelog add

Context: package

Use this command to add an entry to the changelog file.

The entry added will include the given description, type, and link. It is added on top of the last entry in the current version.

Alternatively, you can start a new version indicating the specific version, or if it should be the next major, minor or patch version.
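For reference, a changelog entry in a package's changelog.yml follows this shape (the version, description, and link below are made-up illustrative values):

```yaml
- version: "1.2.1"
  changes:
    - description: Fix mapping of response code fields.
      type: bugfix
      link: https://github.com/owner/repo/pull/1
```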

elastic-package check

Context: package

Use this command to verify if the package is correct in terms of formatting, validation and building.

It will execute the format, lint, and build commands all at once, in that order.

elastic-package clean

Context: package

Use this command to clean resources used for building the package.

The command will remove built package files (in build/), files needed for managing the development stack (in ~/.elastic-package/stack/development) and stack service logs (in ~/.elastic-package/tmp/service_logs).

elastic-package create

Context: global

Use this command to create a new package or add more data streams.

The command can help bootstrap the first draft of a package using an embedded package template. It can also be used to extend the package with more data streams.

For details on how to create a new package, review the HOWTO guide.

elastic-package create data-stream

Context: global

Use this command to create a new data stream.

The command can extend the package with a new data stream using an embedded data stream template and a wizard.

elastic-package create package

Context: global

Use this command to create a new package.

The command can bootstrap the first draft of a package using an embedded package template and a wizard.

elastic-package dump

Context: global

Use this command as an exploratory tool to dump resources from the Elastic Stack (objects installed as part of packages and agent policies).

elastic-package dump agent-policies

Context: global

Use this command to dump agent policies created by Fleet as part of a package installation.

Use this command as an exploratory tool to dump agent policies as they are created by Fleet when installing a package. Dumped agent policies are stored in files as they are returned by APIs of the stack, without any processing.

If no flag is provided, by default this command dumps all agent policies created by Fleet.

If the --package flag is provided, this command dumps all agent policies to which the given package is assigned.

elastic-package dump installed-objects

Context: global

Use this command to dump objects installed by Fleet as part of a package.

Use this command as an exploratory tool to dump objects as they are installed by Fleet when installing a package. Dumped objects are stored in files as they are returned by APIs of the stack, without any processing.

elastic-package edit

Context: package

Use this command to edit assets relevant for the package, e.g. Kibana dashboards.

elastic-package edit dashboards

Context: package

Use this command to make dashboards editable.

Pass a comma-separated list of dashboard ids with -d or use the interactive prompt to make managed dashboards editable in Kibana.

elastic-package export

Context: package

Use this command to export assets relevant for the package, e.g. Kibana dashboards.

elastic-package export dashboards

Context: package

Use this command to export dashboards with referenced objects from the Kibana instance.

Use this command to download selected dashboards and other associated saved objects from Kibana. This command adjusts the downloaded saved objects according to package naming conventions (prefixes, unique IDs) and writes them locally into folders corresponding to saved object types (dashboard, visualization, map, etc.).

elastic-package format

Context: package

Use this command to format the package files.

The formatter supports the JSON and YAML formats, and skips "ingest_pipeline" directories, as it is hard to correctly format Handlebars template files. Formatted files are overwritten in place.

elastic-package install

Context: package

Use this command to install the package in Kibana.

The command uses the Kibana API to install the package in Kibana. The package must be exposed via the Package Registry, or built locally as a zip file so it can be installed using the --zip parameter. Zip packages can be installed directly in Kibana >= 8.7.0. More details in this HOWTO guide.

elastic-package lint

Context: package

Use this command to validate the contents of a package using the package specification (see: https://github.com/elastic/package-spec).

The command ensures that the package is aligned with the package spec and the README file is up-to-date with its template (if present).

elastic-package profiles

Context: global

Use this command to add, remove, and manage multiple config profiles.

Individual user profiles appear in ~/.elastic-package/stack, and contain all the config files needed by the "stack" subcommand. Once a new profile is created, it can be specified with the -p flag, or the ELASTIC_PACKAGE_PROFILE environment variable. User profiles can be configured with a "config.yml" file in the profile directory.

elastic-package profiles create

Context: global

Create a new profile.

elastic-package profiles delete

Context: global

Delete a profile.

elastic-package profiles list

Context: global

List available profiles.

elastic-package profiles use

Context: global

Sets the profile to use when no other is specified.

elastic-package promote

Context: global

[DEPRECATED] Use this command to move packages between the snapshot, staging, and production stages of the package registry.

This command is intended primarily for use by administrators.

It allows for selecting packages for promotion and opens new pull requests to review changes. Please be aware that the tool checks out an in-memory Git repository and switches over branches (snapshot, staging and production), so it may take longer to promote a larger number of packages.

elastic-package publish

Context: package

[DEPRECATED] Use this command to publish a new package revision.

The command checks if the package hasn't been already published (whether it's present in snapshot/staging/production branch or open as pull request). If the package revision hasn't been published, it will open a new pull request.

elastic-package report

Context: package

Use this command to generate various reports relative to the packages. Currently, the following types of reports are available:

Benchmark report for Github

These reports are generated by comparing local benchmark results against those from another benchmark run. The report shows the performance differences between both runs.

It is formatted as a Markdown Github comment to use as part of the CI results.

elastic-package report benchmark

Context: package

Generate a benchmark report comparing local results against ones from another benchmark run.

elastic-package service

Context: package

Use this command to boot up the service stack that can be observed with the package.

The command manages lifecycle of the service stack defined for the package ("_dev/deploy") for package development and testing purposes.

elastic-package service up

Context: package

Boot up the stack.

elastic-package stack

Context: global

Use this command to spin up a Docker-based Elastic Stack consisting of Elasticsearch, Kibana, and the Package Registry. By default, the latest released version of the stack is spun up, but it is possible to specify a different version, including SNAPSHOT versions, with the --version flag.

You can run your own custom images for Elasticsearch, Kibana or Elastic Agent, see this document.

Be aware that a common issue when trying to boot up the stack is that your Docker environment's memory limit is too low.

For details on how to connect the service with the Elastic stack, see the service command.

elastic-package stack down

Context: global

Take down the stack.

elastic-package stack dump

Context: global

Dump stack data for debug purposes.

elastic-package stack shellinit

Context: global

Use this command to export to the current shell the configuration of the stack managed by elastic-package.

The output of this command is intended to be evaluated by the current shell. For example in bash: 'eval $(elastic-package stack shellinit)'.
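To illustrate what evaluating the output does: shellinit emits export statements that eval loads into the current shell. The sketch below simulates that output with a fixed string (the real command queries the running stack, and the values here are examples only):

```shell
# Simulated shellinit output; the real `elastic-package stack shellinit`
# prints one export statement per configuration variable.
shellinit_output='export ELASTIC_PACKAGE_ELASTICSEARCH_USERNAME=elastic
export ELASTIC_PACKAGE_ELASTICSEARCH_HOST=https://127.0.0.1:9200'

# Evaluating the output defines the variables in the current shell.
eval "$shellinit_output"
```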

Relevant environment variables are:

  • ELASTIC_PACKAGE_ELASTICSEARCH_HOST
  • ELASTIC_PACKAGE_ELASTICSEARCH_USERNAME
  • ELASTIC_PACKAGE_ELASTICSEARCH_PASSWORD
  • ELASTIC_PACKAGE_KIBANA_HOST
  • ELASTIC_PACKAGE_CA_CERT

You can also provide these environment variables manually. In that case elastic-package commands will use these settings.

elastic-package stack status

Context: global

Show status of the stack services.

elastic-package stack up

Context: global

Use this command to boot up the stack locally.

By default, the latest released version of the stack is spun up, but it is possible to specify a different version, including SNAPSHOT versions, with the --version flag.

You can run your own custom images for Elasticsearch, Kibana or Elastic Agent, see this document.

Be aware that a common issue when trying to boot up the stack is that your Docker environment's memory limit is too low.

To expose local packages in the Package Registry, build them first and boot up the stack from inside the Git repository containing the package (e.g. elastic/integrations). They will be copied to the development stack (~/.elastic-package/stack/development) and used to build a custom Docker image of the Package Registry. Starting with Elastic Stack version 8.7.0, it is no longer mandatory for local packages to be available in the Package Registry in order to run the tests.

For details on how to connect the service with the Elastic stack, see the service command.

You can customize your stack using profile settings; see the Elastic Package profiles section. These settings can also be overridden with the --parameter flag. Settings configured this way are not persisted.

elastic-package stack update

Context: global

Update the stack to the most recent versions.

elastic-package status [package]

Context: package

Use this command to display the current deployment status of a package.

If a package name is specified, then information about that package is returned, otherwise this command checks if the current directory is a package directory and reports its status.

elastic-package test

Context: package

Use this command to run tests on a package. Currently, the following types of tests are available:

Asset Loading Tests

These tests ensure that all the Elasticsearch and Kibana assets defined by your package get loaded up as expected.

For details on how to run asset loading tests for a package, see the HOWTO guide.

Pipeline Tests

These tests allow you to exercise any Ingest Node Pipelines defined by your packages.

For details on how to configure pipeline test for a package, review the HOWTO guide.

Static Tests

These tests allow you to verify if all static resources of the package are valid, e.g. if all fields of the sample_event.json are documented.

For details on how to run static tests for a package, see the HOWTO guide.

System Tests

These tests allow you to test a package's ability to ingest data end-to-end.

For details on how to configure and run system tests, review the HOWTO guide.

elastic-package test asset

Context: package

Run asset loading tests for the package.

elastic-package test pipeline

Context: package

Run pipeline tests for the package.

elastic-package test static

Context: package

Run static files tests for the package.

elastic-package test system

Context: package

Run system tests for the package.

elastic-package uninstall

Context: package

Use this command to uninstall the package in Kibana.

The command uses the Kibana API to uninstall the package from Kibana. The package must be exposed via the Package Registry.

elastic-package version

Context: global

Use this command to print the version of elastic-package that you have installed. This is especially useful when reporting bugs.

Elastic Package profiles

The profiles subcommand lets you work with different configurations. By default, elastic-package uses the "default" profile. Other profiles can be created with the elastic-package profiles create command. Once a profile is created, it has its own directory inside the elastic-package data directory. Once you have more profiles, you can change the default with elastic-package profiles use.

You can find the profiles in your system with elastic-package profiles list.

You can delete profiles with elastic-package profiles delete.

Each profile can have a config.yml file that lets you persist configuration settings that apply only to commands using that profile. You can find a config.yml.example that you can copy to get started.

The following settings are available per profile:

  • stack.apm_enabled can be set to true to start an APM server and configure instrumentation in services managed by elastic-package. Traces for these services are available in the APM UI of the kibana instance managed by elastic-package. Supported only by the compose provider. Defaults to false.
  • stack.elastic_cloud.host can be used to override the address when connecting with the Elastic Cloud APIs. It defaults to https://cloud.elastic.co.
  • stack.geoip_dir defines a directory with GeoIP databases that can be used by Elasticsearch in stacks managed by elastic-package. It is recommended to use an absolute path, out of the .elastic-package directory.
  • stack.logstash_enabled can be set to true to start Logstash and configure it as the default output for tests using elastic-package. Supported only by the compose provider. Defaults to false.
  • stack.self_monitor_enabled enables monitoring and the system package for the default policy assigned to the managed Elastic Agent. Defaults to false.
  • stack.serverless.type selects the type of serverless project to start when using the serverless stack provider.
  • stack.serverless.region can be used to select the region to use when starting serverless projects.
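Putting a few of these together, a profile's config.yml might look like the following sketch (the values, and in particular the GeoIP path, are illustrative):

```yaml
stack.apm_enabled: false
stack.geoip_dir: "/home/user/geoip-databases"
stack.logstash_enabled: true
stack.self_monitor_enabled: false
```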

Useful environment variables

Some environment variables can be used to change elastic-package settings:

  • Related to docker-compose / docker compose commands:

    • ELASTIC_PACKAGE_COMPOSE_DISABLE_VERBOSE_OUTPUT: If set to true, it disables the progress output from docker compose/docker-compose commands.
      • For v2 versions < 2.19.0, it sets the --ansi never flag.
      • For v2 versions >= 2.19.0, it sets the --progress plain flag, and --quiet-pull for the up sub-command.
  • Related to global elastic-package settings:

    • ELASTIC_PACKAGE_CHECK_UPDATE_DISABLED: if set to true, elastic-package is not going to check for newer versions.
    • ELASTIC_PACKAGE_PROFILE: Name of the profile to use.
    • ELASTIC_PACKAGE_DATA_HOME: Custom path to be used for elastic-package data directory. By default this is ~/.elastic-package.
  • Related to the build process:

    • ELASTIC_PACKAGE_REPOSITORY_LICENSE: Path to the default repository license.
    • ELASTIC_PACKAGE_LINKS_FILE_PATH: Path to the links table file (e.g. links_table.yml) with the link definitions to be used in the build process of a package.
  • Related to signing packages:

    • ELASTIC_PACKAGE_SIGNER_PRIVATE_KEYFILE: Path to the private key file to sign packages.
    • ELASTIC_PACKAGE_SIGNER_PASSPHRASE: Passphrase to use the private key file.
  • Related to tests:

    • ELASTIC_PACKAGE_SERVERLESS_PIPELINE_TEST_DISABLE_COMPARE_RESULTS: If set to true, the results from pipeline tests are not compared to avoid errors from GeoIP.
  • To configure the Elastic stack to be used by elastic-package:

    • ELASTIC_PACKAGE_ELASTICSEARCH_HOST: Elasticsearch host (e.g. https://127.0.0.1:9200)
    • ELASTIC_PACKAGE_ELASTICSEARCH_USERNAME: User name to connect to Elasticsearch (e.g. elastic)
    • ELASTIC_PACKAGE_ELASTICSEARCH_PASSWORD: Password of that user.
    • ELASTIC_PACKAGE_KIBANA_HOST: Kibana URL (e.g. https://127.0.0.1:5601)
    • ELASTIC_PACKAGE_CA_CERT: Path to the CA certificate used to connect to the Elastic Stack services.
  • To configure an external metricstore while running benchmarks (more info at system benchmarking docs or rally benchmarking docs):

    • ELASTIC_PACKAGE_ESMETRICSTORE_HOST: Elasticsearch host (e.g. https://127.0.0.1:9200)
    • ELASTIC_PACKAGE_ESMETRICSTORE_USERNAME: Username to connect to Elasticsearch (e.g. elastic)
    • ELASTIC_PACKAGE_ESMETRICSTORE_PASSWORD: Password for that user.
    • ELASTIC_PACKAGE_ESMETRICSTORE_CA_CERT: Path to the CA certificate to connect to the Elastic stack services.
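For instance, to run commands against a non-default profile and data directory, you might export the corresponding variables in your shell (the profile name and directory below are example values):

```shell
# Select a custom profile and data directory for subsequent
# elastic-package invocations in this shell session.
export ELASTIC_PACKAGE_PROFILE="dev"
export ELASTIC_PACKAGE_DATA_HOME="$HOME/.elastic-package-dev"
```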

Release process

This project uses GoReleaser to release new versions of the application (semver). Release publishing is automatically managed by the Jenkins CI (Jenkinsfile) and is triggered by Git tags. Release artifacts are available in the Releases section.

Steps to create a new release

  1. Fetch the latest main from upstream (remember to rebase your branch):
     git fetch upstream
     git rebase upstream/main
  2. Create a Git tag with the release candidate:
     git tag v0.15.0 # let's release v0.15.0!
  3. Push the new tag to the upstream:
     git push upstream v0.15.0

The CI will run a new job for the pushed tag and publish the released artifacts. Expect an automated follow-up PR in the Integrations repository to bump the version (sample PR).

elastic-package's People

Contributors

adriansr, agithomas, andrewkroh, bhapas, chrsmark, constanca-m, dependabot[bot], efd6, elasticmachine, endorama, fearful-symmetry, jillguyonnet, jlind23, jsoriano, kaiyan-sheng, kpollich, marc-gr, mashhurs, mdelapenya, michaelkatsoulis, mrodm, mtojek, oren-zohar, pkoutsovasilis, r00tu53r, ruflin, sharbuz, stuartnelson3, v1v, ycombinator


elastic-package's Issues

Command: sync the integration with the EPR

Integrations are not consistent with packages at the moment, as they require rebuilding. The command rebuilds the developed integration and makes sure that integration is served via the local, dockerized instance of the EPR.

The following switches are available:
--watch to monitor changes in the integration, reflect them in the registry (rebuild on the fly), and reinstall if the package has already been installed (using the Kibana API).

Suggested command: elastic-package sync --watch

Command: package version handling

It would be handy to have a set of commands to deal with package versions. See for example elastic/integrations#206, where all the package versions must be bumped in a batch. These commands might look like this:

$ pwd
/Users/massi/work/integrations/packages/apache
$ elastic-package version print
0.1.0
$ elastic-package version bump minor
$ elastic-package version print
0.2.0

This way, in case of batch updates similar to the linked PR, one might just do:

$ pwd
/Users/massi/work/integrations/packages
$ find . -type f -maxdepth 2 -execdir elastic-package version bump bugfix \;

Add sample test package

I believe we need some basic fake integration to test our changes and prevent potential issues.

Command: export data

The command is responsible for dumping various resources from the running cluster. A developer can use it to save a designed dashboard file or export collected metrics for the sample in docs.

Suggested commands:

elastic-package export dashboard
elastic-package export logs
elastic-package export metrics

This issue can be converted to a meta-issue if it's more convenient.

Discuss: where should we keep "import-beats" script

The integrations repository should contain only package contents.

Consider moving it to integrations-extras or elastic-package. The original idea was to remove it a few months after the final migration.

It's related to the final design of the package registry.

[System test runner] Add more service deployers

Follow up to #64.

Currently the system test runner only supports the Docker Compose service deployer. That is, it can only test packages whose services can be spun up using Docker Compose. We should add more service deployers to enable system testing of packages such as system (probably a no-op or minimal service deployer), aws (probably some way to pass connection parameters and credentials via environment variables and/or something that understands terraform files), kubernetes.

Command: create new integration

The command bootstraps the structure for a new integration using a wizard or command line arguments.

Suggested command: elastic-package create

Subcommands:

create package
create data-stream

Discuss: Use mage instead of make

Currently the project uses a Makefile. By now, most of our other Go projects have moved to using mage for better cross-platform compatibility. I would encourage starting to move to magefiles while the Makefile is still simple.

Command: version

The command would output the version of elastic-package currently installed.

As elastic-package is under rapid development, it would be good to know which version is being run when debugging issues from users, etc.

Command: check the integration

The command is used by the CI to verify correctness of the integration. Ultimately it can also be used by a developer to make sure that they will push the correct code to the Git repository.

Suggested command: elastic-package check

Feature: run system tests

Master issue: #14

System tests are end-to-end tests that involve multiple Elastic products and the real service (under test) or collected log data (which could be mocked). They support multiple versions of the tested service (e.g. selected with environment variables).
The current design assumes that system tests consist only of configuration files and that there are no test cases in the Go language.

Input:
test-a-agent.yml: (possibly optional?) configuration file used to spawn a standalone instance of the agent
Dockerfile: Dockerfile definition that defines an instance of the tested service (might be single for entire integration)
docker-compose.yml: (optional) the manifest defines other properties of the tested service, e.g. scaling factor, volumes, exposed ports (might be single for entire integration)
supported-variants.yml: contains environment variables defining multiple versions of the product (might be single for entire integration)
test-a-input-file.log: (optional) other resources referenced in the configuration, e.g. log files to be harvested.
test-a-expected.json: contains a document expected to be present in Elasticsearch. The document format allows for defining basic matchers to verify non-exact values.

Execution:

The test runner (elastic-package) performs full end-to-end testing. It performs the following actions:

  1. Start the next variant of the tested service.
  2. Spin up the testing cluster (Elasticsearch, Kibana, Package Registry).
  3. Install the package using the Kibana API.
  4. Start the agent using the provided configuration file.
  5. Wait for logs/metrics and compare results with the document in the database.
  6. Verify that all fields in the document are documented.
  7. Tear down the testing cluster.
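Step 5 needs a way to compare stored documents against the expected document while allowing "basic matchers to verify non-exact values". The matcher syntax isn't specified in this issue; the sketch below assumes a hypothetical convention where an expected value wrapped in slashes is treated as a regular expression:

```go
package main

import (
	"fmt"
	"regexp"
)

// matchValue compares an actual field value against an expected one.
// Hypothetical convention: an expected value wrapped in slashes
// (e.g. "/\\d+/") is treated as a regular expression; anything else
// must match exactly.
func matchValue(expected, actual string) bool {
	if len(expected) >= 2 && expected[0] == '/' && expected[len(expected)-1] == '/' {
		re, err := regexp.Compile(expected[1 : len(expected)-1])
		if err != nil {
			return false
		}
		return re.MatchString(actual)
	}
	return expected == actual
}

func main() {
	fmt.Println(matchValue("apache.access", "apache.access"))    // exact match
	fmt.Println(matchValue(`/\d+\.\d+\.\d+\.\d+/`, "10.0.0.1"))  // regex match
	fmt.Println(matchValue(`/^\d+$/`, "abc"))                    // no match
}
```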

[System test runner] Use volume mounts for logs

Follow up to #64, based on suggestion in #64 (comment):

I wonder if we can simplify this - we still have a volume with log files mounted, so maybe don't depend specifically on stdout and stderr, but just share a volume between agent and the service. I'm fine with discussing this in a different issue.

Command: publish

Since elastic-package aims to be the primary tool for Integrations package authors, I propose we add a new sub-command to it: publish. This command would be in the same vein as npm publish, gem push, and their ilk. Its usage would be as follows.

$ elastic-package publish [registry stage] [--dry-run]

Where:

  • registry stage could be one of snapshot, staging, or production; the default would be snapshot.
  • --dry-run would show the steps that would be taken without actually taking them.

The command would build the package and make a PR to the appropriate branch of the package-storage repository. In the future, if/when the package registry service offers an HTTP API to publish packages, the implementation could be updated to use this API instead of making the PR.

[System test runner] test run interruption ends with a permanent error

I stopped the test execution after the "1 minute wait". Next test run ends up with the following issue:

➜  apache git:(system-test-apache) /Users/marcin.tojek/go/bin/elastic-package test system -v
2020/09/16 12:32:59 DEBUG Enable verbose logging
Run system tests for the package
2020/09/16 12:32:59  INFO setting up service...
2020/09/16 12:32:59 DEBUG setting up service using Docker Compose service deployer
2020/09/16 12:32:59 DEBUG running command: /usr/local/bin/docker-compose -f /Users/marcin.tojek/go/src/github.com/elastic/integrations/packages/apache/_dev/deploy/docker-compose.yml -p elastic-package-service up -d
elastic-package-service_apache_1 is up-to-date
2020/09/16 12:32:59 DEBUG creating temp file /Users/marcin.tojek/.elastic-package/tmp/service_logs/stdout to hold service container elastic-package-service_apache_1 STDOUT
2020/09/16 12:32:59 DEBUG creating temp file /Users/marcin.tojek/.elastic-package/tmp/service_logs/stderr to hold service container elastic-package-service_apache_1 STDERR
2020/09/16 12:32:59 DEBUG redirecting service container elastic-package-service_apache_1 STDOUT and STDERR to temp files
2020/09/16 12:32:59 DEBUG attaching service container elastic-package-service_apache_1 to stack network elastic-package-stack_default
Error: error running package system tests: could not setup service: could not attach service container to stack network: exit status 1

Unfortunately the problem is permanent.

/cc @ycombinator

Add support for CI

Once the basic stub for the elastic-package tool is buildable, let's enable CI builds.

Also:

  • report CI status for PRs

Proposal Indicate to user when `elastic-package cluster up` is ready?

At the moment, when someone runs elastic-package cluster up and the command returns, the cluster is not actually ready to use, and it can take quite a while until Kibana is available. This might be confusing to the user. To debug, docker ps has to be used to check the health status.

My proposal would be to have the up command return only when the cluster is ready to be worked with. In this test script I implemented such a check: https://github.com/elastic/package-storage/blob/production/testing/main_integration_test.go#L54 This could be added to the up command.

Command: lint the integration

The command runs a basic validation of the package content; it doesn't necessarily need to build the content. It checks YAML manifests, JSON files, mandatory files, etc. It also verifies that dashboard files are properly encoded.

The command exposes a flag that causes it to fail when any file modification is required (for "check" purposes). It must be part of the implementation.

Suggested command: elastic-package lint

CI: Setup Hound

Similar to the Beats repo, I think we should set up Hound to catch common Go linting errors and variances from suggested practices.

[BUG] format: bad indent in multiline source (ingest pipeline)

I tried to format ingest pipeline files and failed with:

- script:
    if: ctx.nginx?.access?.remote_ip_list != null && ctx.nginx.ingress_controller.remote_ip_list.length > 0
    lang: painless
    source: >-
      aaaa
        bbbb
          cccc

The formatter ends up with form:

- script:
    if: ctx.nginx?.access?.remote_ip_list != null && ctx.nginx.ingress_controller.remote_ip_list.length
      > 0
    lang: painless
    source: >-
      aaaa

        bbbb
          cccc

I understand the reason behind the first break (> 0), as the line is too long, but I don't understand the second one with source. Does it mean we should:

  1. Disable formatting of ingest pipelines as they may contain templates? Same reasoning as for .yml.hbs files.
  2. Leave as is, but disable it from "Check" scope, so developer can adjust it if needed.
  3. Report bug to the library maintainer. BTW. I tried also with https://github.com/mikefarah/yq and received the same effect.

For reference, here is the original source, but I tried to narrow down the problem as close as possible to the root cause:

    source: >-
      boolean isPrivate(def dot, def ip) {

        try {
          StringTokenizer tok = new StringTokenizer(ip, dot);
          int firstByte = Integer.parseInt(tok.nextToken());
          int secondByte = Integer.parseInt(tok.nextToken());
          if (firstByte == 10) {
            return true;
          }
          if (firstByte == 192 && secondByte == 168) {
            return true;
          }
          if (firstByte == 172 && secondByte >= 16 && secondByte <= 31) {
            return true;
          }
          if (firstByte == 127) {
            return true;
          }
          return false;
        }
        catch (Exception e) {
          return false;
        }
      } try {

        ctx.source.address = null;
        if (ctx.nginx.ingress_controller.remote_ip_list == null) {
          return;
        }
        def found = false;
        for (def item : ctx.nginx.ingress_controller.remote_ip_list) {
          if (!isPrivate(params.dot, item)) {
            ctx.source.address = item;
            found = true;
            break;
          }
        }
        if (!found) {
          ctx.source.address = ctx.nginx.ingress_controller.remote_ip_list[0];
        }
      } catch (Exception e) {

        ctx.source.address = null;
      }

Remove vendor folder

We recently removed the vendor folder after migrating to Go modules, as it mostly creates discrepancies. I think we should do the same here.

Automatically check if the tool is up to date

We are currently recommending that people install this tool by using go get. I think this is the best approach until we reach some degree of stability in the features it provides (we are not close to this by any means).

It would be nice to have an automatic check, when the tool is called, that warns you if it's not up to date and gives you the command to run if you need to update.

This would ensure that developers don't lag behind on tooling, with the confusion that that may cause (i.e. my integration works locally but fails in CI...).

To avoid degrading the user experience, I think we should only check for new versions from time to time (maybe store the last successful check timestamp into a file) and have a good silent timeout if things are too slow.

We could use the following API to retrieve the master branch ref:

turing.local :: integrations/packages/nginx ‹master› » time curl https://api.github.com/repos/elastic/elastic-package/git/ref/heads/master
{
  "ref": "refs/heads/master",
  "node_id": "MDM6UmVmMjY5NjEyNzUzOnJlZnMvaGVhZHMvbWFzdGVy",
  "url": "https://api.github.com/repos/elastic/elastic-package/git/refs/heads/master",
  "object": {
    "sha": "9e6eefab29ae9533a7d387cc62247a77a9ec07ff",
    "type": "commit",
    "url": "https://api.github.com/repos/elastic/elastic-package/git/commits/9e6eefab29ae9533a7d387cc62247a77a9ec07ff"
  }
}
curl https://api.github.com/repos/elastic/elastic-package/git/ref/heads/maste  0.02s user 0.01s system 6% cpu 0.333 total

format, lint: fail fast, but don't modify integration source

The goal of this issue is to modify the commands (format, lint) to fail fast when a specific flag is passed (e.g. --fail-on-modification). The "check" command will set and pass this flag to subcommands to fail in case of any inconsistency in the integration (e.g. unformatted source code).

Add support for advanced logging

Right now the tool uses simple fmt.Println calls. We should introduce more advanced logging, but in a form that doesn't affect readability (a JSON array is not a human-readable format :)

Feature: run pipeline tests

Master issue: #14

Pipeline tests are quick, small tests that can verify the ingest pipeline without a need for booting up the entire cluster (only the Elasticsearch instance).

Input:
test-a-events.json: defines a set of JSON events, which could be produced by the input. These input data are sent to the pipeline.
test-a-expected.json: contains a document expected to be produced by the tested pipeline

Execution:

The test runner (elastic-package) installs the ingest pipeline in Elasticsearch, sends mocked events to the pipeline and compares stored documents with the expected output.
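For illustration, this is roughly what the runner does under the hood via Elasticsearch's simulate API (the pipeline name and sample event below are hypothetical):

```
POST _ingest/pipeline/logs-apache.access-default/_simulate
{
  "docs": [
    { "_source": { "message": "127.0.0.1 - - [05/Oct/2020:09:05:43 +0000] \"GET / HTTP/1.1\" 200 612" } }
  ]
}
```

The response contains the processed documents, which can then be compared against test-a-expected.json.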

Command: run tests for logs and metrics

It executes the appropriate test runners to verify the expected output for logs and to run test sets for metrics.
It supports the following modes:

  • run for a single package from inside of the package
  • run for a specific dataset in the package

Supported commands:
elastic-package test
elastic-package test --dataset ingest_controller

This issue can be converted to a meta-issue if it's more convenient.

Enhance README

Currently the README in this repo contains information more relevant to developing the elastic-package tool. I propose moving this information to CONTRIBUTING.md instead.

And then, I propose using the README.md file to document the purpose and usage of the elastic-package tool. For instance, the tool has several sub-commands. It would be good to know what each of these sub-commands are intended for. Also, some of the sub-commands have a parent-child relationship with others (e.g. check and validate). These are the sorts of details that would be good to know about in the README, IMO.

I know some of these details are present in the original proposal for this tool: https://docs.google.com/document/d/16pe7uPAE7QBWsoi3S1q_seI4xk_SLm2CGG5BPzOoXXY/edit. Now that that proposal has settled, it might be good to bring over some of the contents from that doc into the README.md so they are more easily discoverable for package developers.

[System test runner] Install package?

Follow up to #64, based on discussion in #64 (comment):

Actually you don't use any "install" API like the one that is used in integration tests of package-registry. It looks like it's not required to be involved in the process, so I wonder what does it do actually? ... I was thinking about the reference to this API method: https://github.com/elastic/package-storage/blob/production/testing/main_integration_test.go#L108

Hmmm, good question — I wonder how I'm able to get things working even without calling this API. 🤔 Let's leave it out of this PR for now and add it if we need it in a follow up PR.

Support multiple stack environments

I'm starting to get used to running elastic-package cluster up every time I need an Elastic Stack cluster. In most cases around packages I'm interested in the 7.9 cluster, but sometimes I want to test against master. It would be nice if a flag could be passed to cluster up with the version to be used. Some examples:

elastic-package cluster up --stack=8.0.0-SNAPSHOT
elastic-package cluster up --stack=7.10.0-SNAPSHOT
elastic-package cluster up --stack=7.9.1

Param name is just a suggestion.

[System test runner] Cleanup old test assets before each run?

Follow up to #64, based on discussion in #64 (comment):

... we have the same identifiers for each test run. Then we clean up resources before (and after) the test run starts, in case any resources were left from transient failures from a previous run... we need to account for orphan resources due to transient failures from previous runs.

As long as we don't have intention to run tests in parallel, this can work pretty well. Agree for introducing such mechanism. I'm also fine with postponing it to the next iteration.

Actually, what I suggested above is not trivial to implement for policies as their IDs are generated by Kibana. We could retrieve all policies and delete ones that match a certain naming convention, but let's worry about that in a follow up PR.

Command: format the integration

The command formats the structure of an existing integration. It's for cleaning up potential whitespace issues, etc.

The command exposes a flag that causes it to fail when any file modification is required (for "check" purposes). It must be part of the implementation.

Suggested command: elastic-package format

[System test runner] Support deployment variants

Enhance elastic-package test system to support variations of deployments for the service under test. For example, we may want to test against a specific version of the service.

The variations will be defined in the _dev/deploy/variants.yml file, for example:

variants:
  v1:
    SERVICE_VERSION: 1.19.1
  latest:
    SERVICE_VERSION: 2.3.0
default: latest

In this case the elastic-package test system command might take an optional flag that allows the selection of a specific variant, e.g. elastic-package test system --deploy-variant=latest (no strong opinions about the specific flag name). When the flag is not specified, the variant defined in the default property should be selected. The flag may take a special value, e.g. __random__, that will randomly select a variant for the test (again, no strong opinions on the value name).

When the variant is selected, all the variables defined under it must be made available to the service deployer so it may deploy the package's service(s) after interpolating these variables wherever appropriate, e.g. in the service's docker-compose.yml file.
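For example, the service's docker-compose.yml could then consume the selected variant's variables via standard Compose interpolation (the service name and image are hypothetical):

```yaml
version: '2.3'
services:
  nginx:
    # SERVICE_VERSION comes from the selected variant in variants.yml.
    image: nginx:${SERVICE_VERSION}
    ports:
      - 80
```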

[System test runner] cleanup doesn't remove all documents

I played today a bit with the system test runner and found the issue with cleanup.

The cleanup code in the deferred function deletes the old data and then unassigns the policy. Unfortunately, a race condition may occur which results in more data being pushed to the data stream.

2020/09/16 18:04:49 DEBUG deleting data in data stream...
2020/09/16 18:04:49 DEBUG reassigning original policy back to agent...
2020/09/16 18:04:49 DEBUG PUT http://127.0.0.1:5601/api/ingest_manager/fleet/agents/1ccef44c-3a32-4a19-94a9-85674cd3508e/reassign
2020/09/16 18:04:49 DEBUG { "policy_id": "e0f87020-f833-11ea-b5c3-6f5f5f464533" }
2020/09/16 18:04:50 DEBUG deleting test policy...
2020/09/16 18:04:50 DEBUG POST http://127.0.0.1:5601/api/ingest_manager/agent_policies/delete
2020/09/16 18:04:50 DEBUG { "agentPolicyId": "26496c90-f836-11ea-b5c3-6f5f5f464533" }

In Elasticsearch we can see 6 documents in the data stream:

yellow open   .ds-logs-apache.access-ep-000001                    XXxuuljAT1WvzFl4liwvyQ   1   1          6          504      137kb          137kb

Command: manage the local cluster

Although the local testing cluster is built upon docker-compose, it may be feasible to expose extra actions to users to let them manage the cluster (e.g. up, down, wipe, shell-init).

Suggested commands:

elastic-package cluster up - boot up Elasticsearch, Kibana, the EPR and the enrolled Elastic Agent
elastic-package cluster shellinit - import the environment variables with reference to the testing cluster (URL, credentials)
elastic-package cluster down - take down the testing environment.
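For illustration, shellinit could print export statements to be consumed with eval (the variable names and values below are hypothetical):

```shell
# Hypothetical output of `elastic-package cluster shellinit`:
export ELASTIC_PACKAGE_ELASTICSEARCH_HOST=http://localhost:9200
export ELASTIC_PACKAGE_ELASTICSEARCH_USERNAME=elastic
export ELASTIC_PACKAGE_ELASTICSEARCH_PASSWORD=changeme
export ELASTIC_PACKAGE_KIBANA_HOST=http://localhost:5601
```

which a developer would apply with something like `eval "$(elastic-package cluster shellinit)"`.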

This issue can be converted to a meta-issue if it's more convenient.

Add documentation

Now that the elastic-package tool is gaining maturity in some capabilities/commands and is adding new capabilities/commands quite quickly, we should start adding appropriate documentation for the tool. In particular, I think we need documentation that covers these areas:

  • An introduction to the tool, covering its use cases, capabilities, and commands. #76
  • Reference documentation for each command. Ideally this will be available as part of the help text for each command, i.e. when elastic-package help <command> is run but also be available in the repo itself. #302
  • Use case or HOWTO guides, e.g. how do I test my package?
    • Pipeline tests: #132
    • System tests: #128

[System test runner] Allow access to Agent and underlying Beats logs when testing

Follow up to #64, based on suggestion in #64 (comment):

The agent log file can go to tmp. We can monitor this one for potential errors. I wonder if we can do the same for metricbeat and filebeat logs.

... it's a rather a nice-to-have feature to also observe programmatically logs of elastic-agent, filebeat and metricbeat while testing. Many configuration errors are reported deep down by filebeat or metricbeat's internals.
Logs can be collected in the same single directory and exported as volume.

Command: build the integration

The command builds the resources that need to be processed before pushing to the “package-storage”. For example, exported dashboards require some fields to be encoded. It would be great to get rid of this step, as it simplifies serving packages via EPR.

Suggested command: elastic-package build

[System test runner] Support use of deployment variants

Enhance elastic-package test system to support variations of deployments for the service under test. For example, we may want to test against a specific version of the service.

The variations will be defined in the _dev/deploy/variants.yml file, for example:

variants:
  v1:
    SERVICE_VERSION: 1.19.1
  latest:
    SERVICE_VERSION: 2.3.0
default: latest

In this case the elastic-package test system command might take an optional flag that allows the selection of a specific variant, e.g. elastic-package test system --deploy-variant=latest (no strong opinions about the specific flag name). When the flag is not specified, the variant defined in the default property should be selected. The flag may take a special value, e.g. __random__, that will randomly select a variant for the test (again, no strong opinions on the value name).

When the variant is selected, all the variables defined under it must be made available to the service deployer so it may deploy the package's service(s) after interpolating these variables wherever appropriate, e.g. in the service's docker-compose.yml file.

Related: elastic/integrations#781

stack command: start selected services only

Pipeline tests (see: #15) require only the Elasticsearch instance, so it's not necessary to boot up the entire stack.

Maybe the stack up subcommand should expose additional parameters like service names? e.g. elasticsearch, kibana, package-registry

[stack up] Report booting progress

According to Jenkins, it took a few minutes to start the composed stack:

[2020-09-09T12:29:52.528Z] + elastic-package stack up -d
[2020-09-09T12:29:52.528Z] elastic-package has been installed.
[2020-09-09T12:29:52.528Z] Boot up the Elastic stack
[2020-09-09T12:33:29.203Z] Done

It would be great to report, in the meantime, what's going on behind the scenes.
