datadog-cloudfoundry-buildpack's Introduction

Datadog Cloud Foundry Buildpack

This is a supply buildpack for Cloud Foundry. It installs the following three binaries in the container your application runs in:

  • Datadog DogStatsD for submitting custom metrics from your application
  • Datadog Trace Agent for submitting APM traces from your application
  • Datadog IoT Agent for submitting your application logs

Installation

Upload the buildpack to CF

Download the latest Datadog buildpack release or build it and upload it to your Cloud Foundry environment.

  • Upload buildpack for the first time
    cf create-buildpack datadog-cloudfoundry-buildpack datadog-cloudfoundry-buildpack.zip 99 --enable
  • Update existing buildpack
    cf update-buildpack datadog-cloudfoundry-buildpack -p datadog-cloudfoundry-buildpack.zip

Once it is available in your Cloud Foundry environment, configure your application to use the Datadog buildpack by specifying it in your application manifest.

Note: Since this is a supply buildpack, it must be specified before any final buildpack in the list. See the Cloud Foundry documentation for details about pushing an application with multiple buildpacks, and the example manifest below.
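
For example, a manifest that supplies the Datadog buildpack before a final buildpack might look like the following sketch. The application name and the java_buildpack entry are placeholders; substitute the final buildpack your application actually uses.

---
applications:
  - name: <YOUR_APP>
    buildpacks:
      - datadog-cloudfoundry-buildpack
      - java_buildpack
    env:
      DD_API_KEY: <DATADOG_API_KEY>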

Configuration

General configuration of the Datadog Agent

All options supported by the Agent in the main datadog.yaml configuration file can also be set through environment variables as described in the Agent documentation.
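
For example, the datadog.yaml option log_level maps to the DD_LOG_LEVEL environment variable and can be set in the manifest's env section (the value shown is illustrative):

env:
  DD_LOG_LEVEL: info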

Set up the Datadog API Key

Set an API key in your environment to enable the Datadog Agents in the buildpack. The following code samples show the env section of the application manifest.

env:
  DD_API_KEY: <DATADOG_API_KEY>

Assigning Tags

For an overview of tags, read Getting Started with Tags.

Custom tags can be configured with the environment variable DD_TAGS. These tags are attached to your application's logs and metrics, and to its traces as span tags.

By default, DD_TAGS is expected to be a comma-separated list of tags.

env:
  DD_TAGS: "key1:value1,key2:value2,key3:value3"

To use a different separator, set DD_TAGS_SEP to the desired separator.

env:
  DD_TAGS: "key1:value1 key2:value2 key3:value3"
  DD_TAGS_SEP: " "

Instrument your application

Instrument your application to send custom metrics and APM traces through DogStatsD and the Datadog Trace Agent. Download and import the relevant libraries to send data. To learn more, check out the DogStatsD documentation and APM documentation.

The following environment variables control instrumentation (see the example after this list):

  • DD_APM_INSTRUMENTATION_ENABLED — Use this option to automatically instrument your application, without any additional installation or configuration steps. See Single Step APM Instrumentation.
  • DD_WAIT_TRACE_AGENT — Use this option to delay the startup of the application until the Trace Agent is ready. This option is especially useful for Go apps.
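
For example, an env section enabling both options might look like the following. This is a minimal sketch that assumes both variables accept boolean values; consult the descriptions above for the exact values your setup needs.

env:
  DD_API_KEY: <DATADOG_API_KEY>
  DD_APM_INSTRUMENTATION_ENABLED: true
  DD_WAIT_TRACE_AGENT: true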

Log collection

Enable log collection

To collect logs from your application in Cloud Foundry, the Agent contained in the buildpack must be activated with log collection enabled.

env:
  DD_API_KEY: <DATADOG_API_KEY>
  DD_LOGS_ENABLED: true
  # Disable the Agent core checks to disable system metrics collection
  DD_ENABLE_CHECKS: false
  # Redirect Container Stdout/Stderr to a local port so the agent can collect the logs
  STD_LOG_COLLECTION_PORT: <PORT>
  # Configure the Agent to collect logs from the desired port and set the value for source and service
  LOGS_CONFIG: '[{"type":"tcp","port":"<PORT>","source":"<SOURCE>","service":"<SERVICE>"}]'

Configure log collection

The following environment variables are used to configure log collection.

  • STD_LOG_COLLECTION_PORT — Must be used when collecting logs from stdout/stderr. It redirects the stdout/stderr stream to the corresponding local port value.
  • LOGS_CONFIG — Use this option to configure the Agent to listen on a local TCP port and set the value for the service and source parameters. The port specified in the configuration must be the same as the one specified in the environment variable STD_LOG_COLLECTION_PORT.
  • SUPPRESS_DD_AGENT_OUTPUT — Set this option to false to see the Datadog Agent, Trace Agent, and DogStatsD logs in the cf logs output.
  • DD_SPARSE_APP_LOGS — Use this option to avoid losing log lines when the app writes sparsely to STDOUT.

Example

A Java application named app01 is running in Cloud Foundry. The following configuration redirects the container stdout/stderr to the local port 10514. It then configures the Agent to collect logs from that port while setting the proper values for service and source:

env:
  DD_API_KEY: <DATADOG_API_KEY>
  DD_LOGS_ENABLED: true
  DD_ENABLE_CHECKS: false
  STD_LOG_COLLECTION_PORT: 10514
  LOGS_CONFIG: '[{"type":"tcp","port":"10514","source":"java","service":"app01"}]'

Unified Service Tagging

This feature requires the Datadog Cluster Agent to be installed. See Datadog Cluster Agent BOSH Release.

Unified service tagging ties Datadog telemetry together using three reserved tags: env, service, and version. In Cloud Foundry, they are set through the application labels/annotations and the DD_ENV, DD_SERVICE, and DD_VERSION environment variables, as shown in the example below:

env:
  DD_ENV: <ENV_NAME>
  DD_SERVICE: <SERVICE_NAME>
  DD_VERSION: <VERSION>
metadata:
  labels:
    tags.datadoghq.com/env: <ENV_NAME>
    tags.datadoghq.com/service: <SERVICE_NAME>
    tags.datadoghq.com/version: <VERSION>

The tags.datadoghq.com prefix is part of the Agent Autodiscovery notation as described in Basic Agent Autodiscovery documentation.

You can find more information in the Unified Service Tagging documentation.

Application Metadata collection

This feature requires both the Datadog Agent and the Datadog Cluster Agent to be installed. See Datadog Agent BOSH Release and Datadog Cluster Agent BOSH Release.

You can enable the collection of your application metadata (labels and annotations) as tags on your application logs, traces, and metrics by setting the environment variable DD_ENABLE_CAPI_METADATA_COLLECTION to true, as shown below.
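
For example, in the application manifest:

env:
  DD_ENABLE_CAPI_METADATA_COLLECTION: true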

Note: Enabling this feature might trigger a restart of the Datadog Agent when the application metadata is updated, depending on the cloud_foundry_api.poll_interval setting of the Datadog Cluster Agent. On average, restarting the Agent takes around 20 seconds.

Docker

If you're running Docker on Cloud Foundry, review the docker directory to adapt this buildpack for use in a Dockerfile.

datadog-cloudfoundry-buildpack's People

Contributors

ajacquemot, arbll, christopherfrieler, ganeshkumarsv, gmmeyer, gzussa, hithwen, jakobfels, jirikuncar, lotharsee, martinpfeifer, nbparis, nmuesch, nouemankhal, sarah-witt, zippolyte

datadog-cloudfoundry-buildpack's Issues

compile script cannot find dogstatsd or trace-agent

I am trying to add the buildpack to a Cloud Foundry instance, but I'm not having any luck.

I'm using the command bx app push -b 'https://github.com/DataDog/datadog-cloudfoundry-buildpack.git'

   2018-05-04T07:55:20.97-0500 [STG/0] OUT Installing Datadog Dogstatsd and Trace Agent
   2018-05-04T07:55:20.99-0500 [STG/0] ERR cp: cannot stat ‘/tmp/buildpackdownloads/4ae4130b0cb34ce2717bb699d44ccec0/lib/dogstatsd’: No such file or directory
   2018-05-04T07:55:20.99-0500 [STG/0] ERR cp: cannot stat ‘/tmp/buildpackdownloads/4ae4130b0cb34ce2717bb699d44ccec0/lib/trace-agent’: No such file or directory
   2018-05-04T07:55:21.00-0500 [STG/0] ERR chmod: cannot access ‘/tmp/app/datadog/trace-agent’: No such file or directory
   2018-05-04T07:55:21.00-0500 [STG/0] ERR Failed to build droplet release: fork/exec /tmp/buildpackdownloads/4ae4130b0cb34ce2717bb699d44ccec0/bin/release: no such file or directory
   2018-05-04T07:55:21.02-0500 [STG/0] OUT Exit status 224
   2018-05-04T07:55:21.02-0500 [STG/0] ERR Staging failed: STG: Exited with status 224

Looking at the compile script, these lines are the culprits:

if [ -f $ROOT_DIR/lib/dogstatsd ]; then
  cp $ROOT_DIR/lib/dogstatsd $BUILD_DIR/datadog/dogstatsd
fi
cp $ROOT_DIR/lib/dogstatsd $BUILD_DIR/datadog/dogstatsd
cp $ROOT_DIR/lib/trace-agent $BUILD_DIR/datadog/trace-agent

Should both of the cp statements be wrapped in conditionals?
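
For reference, a sketch of what guarding both copies could look like, using the same variables as the snippet above (a suggestion only, not the shipped fix):

# Only copy the binaries that were actually bundled with the buildpack.
if [ -f "$ROOT_DIR/lib/dogstatsd" ]; then
  cp "$ROOT_DIR/lib/dogstatsd" "$BUILD_DIR/datadog/dogstatsd"
fi
if [ -f "$ROOT_DIR/lib/trace-agent" ]; then
  cp "$ROOT_DIR/lib/trace-agent" "$BUILD_DIR/datadog/trace-agent"
fi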

ETA for a new release?

We are using the buildpack in addition to the java buildpack.

In mid-May there was a change to the location of the Datadog Agent, which was recently picked up by the Java buildpack.

The change breaks using the latest Java buildpack together with version 4.22.0 of the Datadog Cloud Foundry buildpack, because the Datadog Agent detection fails.

We already opened an issue in the java buildpack project.

There are two solutions to the current issue:

  1. rolling back to an older version of the Java buildpack, or
  2. having a new datadog-cloudfoundry buildpack release

We were wondering if there's an ETA for a new release that includes the missing change for the Java buildpack.

Thank you very much!

test-endpoint.sh doesn't use proxy

The test-endpoint.sh script contains an nc command in it:
nc -w5 $LOGS_ENDPOINT $LOGS_PORT < /dev/null

If this runs behind a proxy, the nc command fails, so even though the various proxy environment variables are supported, the Agent never starts because this connectivity check fails.

To be honest, I'm not 100% sure of this, but it is what seems to be happening for us: the error message produced within that file is the one we're seeing both in the logs and in Datadog alerts (which do use the proxy).

APM check reports error in Infrastructure List

Describe the bug
The agents included in the buildpack start up correctly and send metrics, logs, and traces. APM seems to work fine. However, the host is reporting that the APM check has issues in the Infrastructure list (see attached screenshots).

Warnings in trace.log:

2020-02-24 14:01:59 UTC | TRACE | WARN | (pkg/util/log/log.go:491 in func1) | Unknown key in config file: metadata_collectors
2020-02-24 14:02:11 UTC | TRACE | WARN | (pkg/trace/api/api.go:358 in handleTraces) | Error getting trace count: "HTTP header \"X-Datadog-Trace-Count\" not found". Functionality may be limited.

Errors in agent.log:

2020-02-24 14:02:00 UTC | CORE | ERROR | (pkg/collector/scheduler.go:183 in GetChecksFromConfigs) | Unable to load the check: unable to load any check from config 'apm'

To Reproduce
Set environment variables:

DD_API_KEY: <api_key>
DD_TAGS: env:<some value>
RUN_AGENT: true
DD_LOGS_ENABLED: true
DD_LOGS_INJECTION: true
STD_LOG_COLLECTION_PORT: 10514
LOGS_CONFIG: >-
      [{"type":"tcp","port":"10514","source":"java","sourcecategory":"sourcecode","service":"<app name>"}]
DD_SERVICE_NAME: <app name>
DD_IGNORE_RESOURCE: <ignored-resources>

Additionally, inject the Java Trace Agent in JAVA_OPTS.

Expected behavior
The APM check should load and report its status correctly.

Screenshots
(Two screenshots of the Infrastructure list were attached to the original issue.)

Environment and Versions (please complete the following information):
We're using the buildpack together with the standard Java buildpack.

Enable Infrastructure Processes

Hi there,
So I have a problem with enabling Live Process.
I tried to use DD_PROCESS_AGENT_ENABLED: true, but it doesn't work.

I know that for datadog.yaml it's possible to use

process_config:
    enabled: 'true'

But how do I correctly turn this on for CF?

Best Regards,
Stanislav

META-buildpack is deprecated

https://github.com/cf-platform-eng/meta-buildpack

NOTE: Meta-buildpack is being deprecated
Changes to the core CloudFoundry lifecycle process are making it hard to guarantee on-going compatibility with meta-buildpack and decorators. Some of the use cases for decorators can now be solved by leveraging the new supply buildpack functionality. Some examples can be found at the bottom of this page.

Tag instance_index never gets written for the first instance (instance_index=0)

Describe the bug
We just noticed that the tag instance_index is missing if the instance is the first instance. This is problematic if you have metrics or dashboards in which you want to group by instance.

To Reproduce
Steps to reproduce the behavior:

  1. Have an app with at least 2 instances logging to datadog
  2. In the logs view, filter for instance_index:1
  3. Notice how logs appear
  4. Do the same with instance_index:0
  5. See no logs at all

Expected behavior
The tag instance_index is written regardless of its value.

Additional context
The issue is in a check that tests whether the variable is defined. For the first instance, the index is 0, which evaluates as falsy in Python.

New buildpack release based on agent version 7.40.0

The Agent recently got a new release, version 7.40.0, which has been long awaited at my company.

The new Agent version hopefully includes a fix for correctly tagging APM traces with the Cloud Foundry application instance ID.

It would be great to see a new buildpack release that includes Agent version 7.40.0.

Can I help with anything?

Thank you very much!

CPU metrics seem to be host rather than application instance

CPU usage metrics seem to show what I'm guessing are the underlying host metrics rather than the metrics from the application instance (AI). Is this intentional? Is there a way to get the AI metrics into Datadog? At the moment we're having to use Tanzu App Metrics to view them, and we don't have a correlated view in Datadog, which is hindering our monitoring efforts and, from our perspective, makes the metrics capability fairly useless.

Tags only set in first entry of LOGS_CONFIG

Describe the bug

Missing tags in Datadog logs with a config similar to this:

LOGS_CONFIG: '[{"type":"file","path":"<PATH_A>","source":"<SOURCE_A>"}, {"type":"file","path":"<PATH_B>","source":"<SOURCE_B>"}]'
DD_TAGS: foo:bar

To Reproduce
Steps to reproduce the behavior:

  1. Configure CloudFoundry buildpack with LOGS_CONFIG specifying multiple log sources and DD_TAGS
  2. Check logs from both sources on Datadog
  3. Notice that only logs from the first source have the tags from DD_TAGS (SOURCE_A in the example above), but not the logs from the second source (SOURCE_B).

Expected behavior
DD_TAGS should apply equally to all sources in LOGS_CONFIG.

Environment and Versions (please complete the following information):
A clear and precise description of your setup:

Additional context
After some debugging with the following environment variables, I noticed the issue in the file /home/vcap/app/.datadog/dist/conf.d/logs.d/logs.yaml:

        DD_UPDATE_SCRIPT_LOG_LEVEL: DEBUG
        DD_LOG_LEVEL: debug

I suspect the problem happens because tags_list is only set for the first entry in logs:

config["logs"][0]["tags"] = tags_list

https://github.com/DataDog/datadog-cloudfoundry-buildpack/blob/master/lib/scripts/create_logs_config.rb#L50
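
A possible fix, sketched under the assumption that config["logs"] is an array of hashes as in the script above, would be to apply the tags to every entry rather than only the first:

# Apply the parsed tags to every configured log source, not just the first one.
config["logs"].each do |entry|
  entry["tags"] = tags_list
end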

Trace Agent is not accepting traffic before all .profile.d scripts complete

Bug

A Go application starts and attempts to use the Datadog Trace Agent before the agent is accepting traffic, resulting in no APM capabilities.

Sequence to reproduce

Follow Tracing Go Application for a Go app using this supply buildpack.

lib/run-datadog.sh (provided by this buildpack) starts the Datadog Trace Agent in the background via a shell script.
Start the tracer using gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer.

Resulting error

[APP/PROC/WEB/0]ERR 2021/09/16 20:55:58 Datadog Tracer v1.33.0 WARN: DIAGNOSTICS Unable to reach agent intake: Post "http://localhost:8126/v0.4/traces": dial tcp 127.0.0.1:8126: connect: connection refused

Versions

Capability Statement/Expected Behavior

As a Cloud Foundry Go Application Owner
I want the Datadog Trace Agent accepting traffic before my application attempts to start
So that APM capabilities in my application can rely on the initial health state of the Datadog Trace Agent for runtime state

Workaround

The following workaround within the application code appears to address the issue:

.profile.d/sleep-for-datadog.sh

echo 'Sleeping for Datadog Agent Start'
sleep 5

Configuration of Datadog's own logs

Hi,

We would like to configure the logs of the Datadog Agent itself (not the application logs it's shipping) to produce JSON logs.

Background: We are using this buildpack within PCF to collect the application logs in JSON format from standard out. In addition, we get the logs of the Datadog Agent itself, which are not structured or tagged and so are hard to filter out when searching the logs.

In lib/dist/datadog.yaml I've found the property log_format_json. I would like to be able to set it to yes. Or does it have an impact that I don't see yet, and is it set to no for a good reason?
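
For reference, log_format_json is a standard Agent option, so the desired setting would look like this in datadog.yaml (whether the buildpack should expose it, or default it differently, is the open question here):

log_format_json: true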
