

datadog-plugin's Issues

Security events notifications are very spammy

We use several tools that end up updating the apiTokenStats.xml and in p

Steps to reproduce the behavior:

  1. Register a tool (such as CCMenu) that accesses the Jenkins API with a token and monitors jobs.
  2. Integrate the Jenkins Datadog plugin with security audit events.
  3. Get an influx of apiTokenStats.xml update events in Datadog: User anonymous changed file apiTokenStats.xml

Other notifications lack the detail needed to make them useful, such as User SYSTEM changed file config.xml, with no context for which job it applies to.

Expected behavior
Stats file updates should not trigger a notification to Datadog, and we should have more fine-grained control over what is sent to Datadog.
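The fine-grained control requested above could look like a simple filename exclusion list for audit events. This is a hypothetical sketch under assumed names — it is not the plugin's actual API:

```java
import java.util.Set;

public class AuditEventFilter {
    // Hypothetical exclusion list: files whose saves should not produce
    // a Datadog event. apiTokenStats.xml is updated on every
    // token-authenticated API call, so it is pure noise here.
    static final Set<String> EXCLUDED_FILES = Set.of("apiTokenStats.xml");

    static boolean shouldEmitConfigChangeEvent(String fileName) {
        return !EXCLUDED_FILES.contains(fileName);
    }

    public static void main(String[] args) {
        System.out.println(shouldEmitConfigChangeEvent("apiTokenStats.xml")); // false
        System.out.println(shouldEmitConfigChangeEvent("config.xml"));        // true
    }
}
```

A user-configurable list (rather than a hardcoded one) would cover the "several tools" case described above.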

Datadog-Plugin breaks on LTS v2.319.1 w/ built-in node update

Datadog-Plugin breaks on LTS v2.319.1 with the renaming of the built-in node

To Reproduce
Steps to reproduce the behavior:

  1. Update Jenkins to 2.319.1 and rename controller to built-in
  2. The Datadog plugin breaks and stops reporting metrics

Expected behavior
Metrics continue to be sent to Datadog.

Screenshots
n/a

Environment and Versions (please complete the following information):
  • Datadog plugin 3.4.0
  • Jenkins LTS v2.319.1

Additional context
n/a

Invalid configuration elements `emitConfigChangeEvents`

Describe the bug
We hit an error upgrading from 5.5.1 to 5.6.0: Invalid configuration elements for type class org.datadog.jenkins.plugins.datadog.DatadogGlobalConfiguration : emitConfigChangeEvents.

This property has been removed, but the removal doesn't seem to be documented.


To Reproduce
Steps to reproduce the behavior:

  1. In the JCasC configuration, under datadogGlobalConfiguration
  2. use emitConfigChangeEvents

Expected behavior
At minimum, information on how to replace this property and a warning about the breaking change.

Use dependabot to check for action updates


Is your feature request related to a problem? Please describe.
This project uses GitHub Actions, some of which are outdated. It would be nice if the dependencies this project uses could be kept up to date.

Describe the solution you'd like
Dependabot is now native to GitHub, so this project can use it to check for updates to its dependencies. With Dependabot, a PR with version bumps can be opened automatically, which also lets the project find breaking changes in its dependencies sooner rather than later.

Describe alternatives you've considered
Manually checking for updates to its dependencies.

Additional context
https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/keeping-your-actions-up-to-date-with-github-dependabot
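The linked documentation describes the setup; a minimal configuration enabling checks for GitHub Actions updates (the weekly interval is an example choice) would be committed as .github/dependabot.yml:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"          # scan workflows under .github/workflows
    schedule:
      interval: "weekly"
```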

Queue stops progressing after restart when jobs running

Describe the bug
Starting in what appears to be datadog 2.6.0, we've found that controllers that are not cleanly restarted while jobs are in progress (first discovered when a controller OOM'd, but we can reproduce by simply killing the process) come back with items that are "in" the queue but cannot be deleted. In addition, node creation (using the EC2 plugin) and saving configuration in the manage UI never return an HTTP response. All of these are events that typically send Datadog events, and I can reproduce this on a fresh jenkins:lts Docker image with default plugins, plus datadog.

The only way we've found to clear the behavior is to visit the job(s) stuck in the queue and delete the build. Then after a restart, it comes back as normal.

We haven't been able to reproduce it on any version of datadog prior to 2.6.0 (tested on 0.7.1 (don't ask...), 1.0.0, 2.0.0, and all 2.x minor releases), and have not yet tested the 2.7.0 release from yesterday.

I'm not entirely sure what mechanism in the datadog plugin could even cause this behavior, but I'm hoping someone here does.

To Reproduce
Steps to reproduce the behavior:

  1. docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
  2. Do initial Jenkins setup, accepting defaults.
  3. Install the datadog plugin.
  4. Start a long-running job. All of ours are Jenkinsfile pipelines, but a sh 'sleep 300' seems sufficient to reproduce for us.
  5. Forcibly terminate Jenkins without waiting for jobs to finish (e.g., docker kill)
  6. When Jenkins returns, note the job still in the queue. Attempts to delete it will fail.
  7. Visit /configure on the controller and set a system message to exercise saving configurations. Attempts to save or apply will fail.

To Remediate

  1. Once in this state, visit the job that is stuck in the queue. Stop the run from the build UI, and then delete the run.
  2. Restart Jenkins. You may need to forcibly terminate, as we've seen Jenkins get stuck "shutting down" when it is in this state. After restart, the queue will now progress, and configurations can be saved.

Expected behavior
After an unexpected shutdown, Jenkins should continue to process jobs and accept configuration changes.

Environment and Versions (please complete the following information):

  • datadog 2.6.0
  • Jenkins 2.263.1

Datadog endpoint for logs should be unbound from metrics/trace host.


Is your feature request related to a problem? Please describe.

I've started playing with CI integration in Datadog. Because Datadog accepts neither stats nor process data in their raw form, we force this plugin to send data to an agent for processing. In the current implementation, this demands I set up a custom TCP "logger" (which I understand to be closer to a proxy).

The agent's TCP connector feature is understandably brittle, given its lack of broad use; it seems to be somewhat of an orphan in configuration. Indeed, after nearly a month, support clarified that though com.datadoghq.ad.logs can largely be configured as expected when running within Docker (using env vars, labels, annotations), the TCP listener in practice behaves like a check and requires YAML onboard the container to work properly. Although I'm hoping to simply sidestep the need to build a custom container, bind mount, or the like, I'm curious why the configuration for logging is tightly bound to metric collection. There are obvious benefits to consolidating tagging, but that's all I can come up with.

The TCP submission method does not use encryption and is considered by Datadog to be a backup method, and the proxy seems even less reliable. It's particularly painful given my experience that when the pipe reaches an error state (presumably when Jenkins is abruptly terminated, another malformed request is sent, or some flavor of un-ACKed RST is sent from one end), the agent appears to do nothing to correct the matter. Exacerbating that, this plugin writes a SEVERE-level log after each send error, spamming Jenkins logs 1:1 with any offending job, so I end up with quite a bit of log bloat on any failure.

I think it's likely that #221 didn't appreciate that the appeal of #74 extended beyond the lack of a local agent. At their core, however, the gathering of CI metrics and logging are independent activities and I'd expect configuration to reflect that if possible.

Describe the solution you'd like
Configuration for CI metrics remains as it is today, toggled by a boolean, with either a unified host/port configuration or preferably (someone's bound to end up with an edge case) separate hosts/ports for each.

  • Log submission gets its own host configuration, selectable between HTTP and TCP submission

Describe alternatives you've considered

  • Not collecting logs at all
  • Monitoring Datadog and Jenkins alike to keep the TCP flow connection healthy
  • Life as a lumberjack.

Additional context
During my either quixotic or dramatically ill-informed attempt to get the TCP listener to be generated promptly, I've run across a fairly wide swath of troubled souls who had some flavor of trouble setting this up as such. Plenty of unreliable TCP connections, a few issues getting the custom log uptake to start properly, and the occasional local networking frustration. This approach would retain CI functionality while allowing a user to sidestep what I believe is a needless binding of the two configurations.

Failed to send log payload for Jenkins workers

Describe the bug

We are using the EC2 plugin to connect Jenkins workers, and this shows up in their logs.

INFO: Failed to send log payload: java.lang.NullPointerException: Name is null
	at java.base/java.lang.Enum.valueOf(Enum.java:238)
	at org.datadog.jenkins.plugins.datadog.DatadogClient$ClientType.valueOf(DatadogClient.java:39)
	at org.datadog.jenkins.plugins.datadog.clients.ClientFactory.getClient(ClientFactory.java:86)
	at org.datadog.jenkins.plugins.datadog.logs.DatadogWriter.write(DatadogWriter.java:80)
	at org.datadog.jenkins.plugins.datadog.logs.DatadogOutputStream.eol(DatadogOutputStream.java:47)
	at hudson.console.LineTransformationOutputStream.eol(LineTransformationOutputStream.java:61)
	at hudson.console.LineTransformationOutputStream.write(LineTransformationOutputStream.java:57)
	at hudson.console.LineTransformationOutputStream.write(LineTransformationOutputStream.java:75)
	at org.jenkinsci.plugins.credentialsbinding.masking.SecretPatterns$MaskingOutputStream.eol(SecretPatterns.java:104)
	at hudson.console.LineTransformationOutputStream.eol(LineTransformationOutputStream.java:61)
	at hudson.console.LineTransformationOutputStream.write(LineTransformationOutputStream.java:57)
	at hudson.console.LineTransformationOutputStream.write(LineTransformationOutputStream.java:75)
	at hudson.plugins.timestamper.pipeline.GlobalDecorator$GlobalDecoratorLineTransformationOutputStream.eol(GlobalDecorator.java:83)
	at hudson.console.LineTransformationOutputStream.eol(LineTransformationOutputStream.java:61)
	at hudson.console.LineTransformationOutputStream.write(LineTransformationOutputStream.java:57)
	at hudson.console.LineTransformationOutputStream.write(LineTransformationOutputStream.java:75)
	at java.base/java.io.PrintStream.write(PrintStream.java:559)
	at java.base/sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
	at java.base/sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:312)
	at java.base/sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104)
	at java.base/java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:181)
	at java.base/java.io.PrintStream.newLine(PrintStream.java:625)
	at java.base/java.io.PrintStream.println(PrintStream.java:883)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2801)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2762)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2757)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:2051)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:2063)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.getBranches(CliGitAPIImpl.java:3003)
	at hudson.plugins.git.GitAPI.getBranches(GitAPI.java:219)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$9.execute(CliGitAPIImpl.java:3155)
	at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:170)
	at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:161)
	at hudson.remoting.UserRequest.perform(UserRequest.java:211)
	at hudson.remoting.UserRequest.perform(UserRequest.java:54)
	at hudson.remoting.Request$2.run(Request.java:377)
	at hudson.remoting.InterceptingExecutorService.lambda$wrap$0(InterceptingExecutorService.java:78)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
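The NullPointerException: Name is null at the top of the trace is the standard failure mode of Enum.valueOf when it is handed a null name. A minimal sketch of the failure and a defensive lookup — the enum here is illustrative, standing in for DatadogClient.ClientType, not the plugin's actual code:

```java
public class ClientTypeLookup {
    // Illustrative enum standing in for DatadogClient.ClientType.
    enum ClientType { HTTP, DSD }

    // Enum.valueOf(null) throws NullPointerException("Name is null"),
    // matching the stack trace above. A defensive lookup falls back to
    // a default instead of propagating the NPE into the log writer.
    static ClientType lookupOrDefault(String name, ClientType fallback) {
        if (name == null) {
            return fallback; // avoids the "Name is null" NPE
        }
        try {
            return ClientType.valueOf(name);
        } catch (IllegalArgumentException e) {
            return fallback; // unknown name: also fall back
        }
    }
}
```

The trace suggests the client type name is unset on workers, so guarding the lookup (or propagating the configured type to agents) would address the symptom.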


To Reproduce
Steps to reproduce the behavior:

  1. Go to a Jenkins worker Linux server
  2. Find the remoting directory, usually /var/tmp/jenkins/remoting/logs
  3. See error

Expected behavior
The same behaviour as on the Jenkins master, where everything works as expected.


Environment and Versions (please complete the following information):
  • Datadog plugin version 5.5.0


Jenkins Datadog plugin - Log Collection Port Error

The Datadog Agent configuration of the Jenkins Datadog plugin includes a Log Collection Port (Optional) field. If no port is configured for log collection and you click Test connection, you get the error "The port cannot be empty."

Steps to reproduce the behavior:

  1. Go to Manage Jenkins
  2. Click on Configure System
  3. Scroll down to Datadog Plugin
  4. Click on Use the Datadog Agent to report to Datadog (recommended)
  5. Configuration:
     • Agent Host: localhost
     • DogStatsD Port: 8125
     • Log Collection Port: empty
  6. Click Test connection
  7. Error: "The port cannot be empty"

Expected behavior
The Log Collection Port should not be a required field. The test should exercise the DogStatsD port and display the result.
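The fix requested above amounts to treating a blank optional port as "not configured" rather than a validation error. A hypothetical sketch — the method name and messages are illustrative, not the plugin's actual validation code:

```java
public class PortValidator {
    // Returns null when the value is acceptable, or an error message.
    // A blank value is acceptable for an optional field: the associated
    // check (here, log collection) is simply skipped.
    static String validateOptionalPort(String value) {
        if (value == null || value.trim().isEmpty()) {
            return null; // optional field left blank: no error, no check
        }
        try {
            int port = Integer.parseInt(value.trim());
            if (port < 1 || port > 65535) {
                return "Port out of range";
            }
            return null;
        } catch (NumberFormatException e) {
            return "Port must be a number";
        }
    }
}
```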

Screenshots
Screenshot available if needed

Environment and Versions (please complete the following information):
This occurs on multiple Jenkins instances

Jenkins
Jenkins 2.303.3 with Datadog plugin 3.3.0
Jenkins 2.289.2 with Datadog plugin 3.4.1
Jenkins 2.303.1 with Datadog plugin 3.1.0

Unhelpful logspam in DatadogUtilities.severe

Describe the bug

The plugin produces completely unhelpful logspam in some situations.

When passed an exception with a null message, DatadogUtilities.severe(...) writes the following to the logs, which contains no useful information for diagnosing the issue.

2021-03-03 01:44:49.803+0000 [id=873289]	SEVERE	o.d.j.p.datadog.DatadogUtilities#severe: An unexpected error occurred
2021-03-03 01:44:49.808+0000 [id=873289]	SEVERE	o.d.j.p.datadog.DatadogUtilities#severe: An unexpected error occurred
2021-03-03 01:44:49.808+0000 [id=873056]	SEVERE	o.d.j.p.datadog.DatadogUtilities#severe: An unexpected error occurred
2021-03-03 01:44:49.882+0000 [id=873163]	SEVERE	o.d.j.p.datadog.DatadogUtilities#severe: An unexpected error occurred
2021-03-03 01:44:49.887+0000 [id=873163]	SEVERE	o.d.j.p.datadog.DatadogUtilities#severe: An unexpected error occurred
2021-03-03 01:44:49.912+0000 [id=873083]	SEVERE	o.d.j.p.datadog.DatadogUtilities#severe: An unexpected error occurred
2021-03-03 01:44:49.916+0000 [id=873083]	SEVERE	o.d.j.p.datadog.DatadogUtilities#severe: An unexpected error occurred
2021-03-03 01:44:49.979+0000 [id=873163]	SEVERE	o.d.j.p.datadog.DatadogUtilities#severe: An unexpected error occurred
2021-03-03 01:44:49.982+0000 [id=873163]	SEVERE	o.d.j.p.datadog.DatadogUtilities#severe: An unexpected error occurred
2021-03-03 01:44:50.013+0000 [id=873064]	SEVERE	o.d.j.p.datadog.DatadogUtilities#severe: An unexpected error occurred

Additionally, enabling FINEST logging throws away any useful messages in the exception cause (and chained causes).


To Reproduce
Trigger some severe error; possibly a machine with no hostname?

Expected behavior

The log entry should tell me what has gone wrong and give some clues as to how to fix it.

Screenshots

see example log above


Additional context

Additionally, the FINEST logging should pass the exception directly to the logger rather than going through an intermediary StringWriter.
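The two fixes suggested in this issue can be sketched together: fall back to the exception's class name when getMessage() is null, and hand the throwable itself to the logger so the handler prints the full stack trace with chained causes. This is an illustrative helper under assumed names, not the plugin's actual method:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SafeSevere {
    // Never produce a bare "An unexpected error occurred": use the
    // exception message when present, otherwise its class name.
    static String formatMessage(Throwable error) {
        return error.getMessage() != null
                ? error.getMessage()
                : "Unexpected " + error.getClass().getName();
    }

    // Pass the throwable to the logger instead of pre-rendering it via
    // a StringWriter; the log handler then formats the complete trace,
    // including the cause chain.
    static void severe(Logger logger, Throwable error) {
        logger.log(Level.SEVERE, formatMessage(error), error);
    }
}
```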

collectLogs: false doesn't exclude pipeline logs collection

Describe the bug
Not sure if this is a bug or I misunderstood the plugin configuration. What I want to achieve is to collect logs from all but one Jenkins pipeline.

To do so, I've added the following to all pipelines:

  options {
    ...
    datadog(collectLogs: true)
    ...
  }

The single pipeline that I would like to exclude from log collection has collectLogs set to false:

  options {
    ...
    datadog(collectLogs: false)
    ...
  }

The global "Enable Log Collection" option is set to true. When I set it to false, no logs are collected at all, regardless of the collectLogs pipeline configuration.

Despite datadog(collectLogs: false), logs are still collected from this pipeline.

Expected behavior
I expected that logs would not be collected from pipeline where collectLogs: false is configured.


Environment and Versions (please complete the following information):
A clear and precise description of your setup:

  • datadog plugin version: 2.9.0
  • Jenkins version: 2.277.1.

Failed to serialize org.datadog.jenkins.plugins.datadog.model.PipelineQueueInfoAction occasionally thrown during pipeline execution

Describe the bug
Occasionally a "Failed to serialize" error is thrown by the Datadog plugin during a Jenkins job.
Ever since we installed the DD plugin, it errors out once in a while.
Removing the plugin obviously fixes it, but we do want to use the DD<->Jenkins integration to fully utilize our monitoring capabilities.

Expected behavior
This error should not be thrown.

Environment and Versions (please complete the following information):

  • Jenkins version 2.319.1
  • Datadog Plugin version 3.5.1
  • We use a shared library for the pipelines
  • DD site is hosted in EU region.
  • The datadog.yaml and the plugin are configured correctly. The plugin prints out success messages to both the CI visibility and Log tests.

Additional context

Log:

Running on Jenkins in /opt/bitnami/apps/jenkins/jenkins_home/workspace/randomfoldername/randomjobname
[Pipeline] {
[Pipeline] unstash
[Pipeline] readFile
[Pipeline] readProperties
[Pipeline] }
[Pipeline] // node
[Pipeline] node
[Pipeline] End of Pipeline
java.util.ConcurrentModificationException
	at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
	at java.util.HashMap$EntryIterator.next(HashMap.java:1479)
	at java.util.HashMap$EntryIterator.next(HashMap.java:1477)
	at com.thoughtworks.xstream.converters.collections.MapConverter.marshal(MapConverter.java:75)
	at com.thoughtworks.xstream.core.AbstractReferenceMarshaller.convert(AbstractReferenceMarshaller.java:68)
	at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:58)
	at com.thoughtworks.xstream.core.AbstractReferenceMarshaller$1.convertAnother(AbstractReferenceMarshaller.java:83)
	at hudson.util.RobustReflectionConverter.marshallField(RobustReflectionConverter.java:278)
	at hudson.util.RobustReflectionConverter$2.writeField(RobustReflectionConverter.java:265)
Caused: java.lang.RuntimeException: Failed to serialize org.datadog.jenkins.plugins.datadog.model.PipelineQueueInfoAction#queueDataByFlowNode for class org.datadog.jenkins.plugins.datadog.model.PipelineQueueInfoAction
	at hudson.util.RobustReflectionConverter$2.writeField(RobustReflectionConverter.java:269)
	at hudson.util.RobustReflectionConverter$2.visit(RobustReflectionConverter.java:236)
	at com.thoughtworks.xstream.converters.reflection.PureJavaReflectionProvider.visitSerializableFields(PureJavaReflectionProvider.java:174)
	at hudson.util.RobustReflectionConverter.doMarshal(RobustReflectionConverter.java:221)
	at hudson.util.RobustReflectionConverter.marshal(RobustReflectionConverter.java:160)
	at com.thoughtworks.xstream.core.AbstractReferenceMarshaller.convert(AbstractReferenceMarshaller.java:68)
	at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:58)
	at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:43)
	at com.thoughtworks.xstream.core.AbstractReferenceMarshaller$1.convertAnother(AbstractReferenceMarshaller.java:87)
	at com.thoughtworks.xstream.converters.collections.AbstractCollectionConverter.writeBareItem(AbstractCollectionConverter.java:94)
	at com.thoughtworks.xstream.converters.collections.AbstractCollectionConverter.writeItem(AbstractCollectionConverter.java:66)
	at com.thoughtworks.xstream.converters.collections.AbstractCollectionConverter.writeCompleteItem(AbstractCollectionConverter.java:81)
	at com.thoughtworks.xstream.converters.collections.CollectionConverter.marshal(CollectionConverter.java:74)
	at com.thoughtworks.xstream.core.AbstractReferenceMarshaller.convert(AbstractReferenceMarshaller.java:68)
	at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:58)
	at com.thoughtworks.xstream.core.AbstractReferenceMarshaller$1.convertAnother(AbstractReferenceMarshaller.java:83)
	at hudson.util.RobustReflectionConverter.marshallField(RobustReflectionConverter.java:278)
	at hudson.util.RobustReflectionConverter$2.writeField(RobustReflectionConverter.java:265)
Caused: java.lang.RuntimeException: Failed to serialize hudson.model.Actionable#actions for class org.jenkinsci.plugins.workflow.job.WorkflowRun
	at hudson.util.RobustReflectionConverter$2.writeField(RobustReflectionConverter.java:269)
	at hudson.util.RobustReflectionConverter$2.visit(RobustReflectionConverter.java:236)
	at com.thoughtworks.xstream.converters.reflection.PureJavaReflectionProvider.visitSerializableFields(PureJavaReflectionProvider.java:174)
	at hudson.util.RobustReflectionConverter.doMarshal(RobustReflectionConverter.java:221)
	at hudson.util.RobustReflectionConverter.marshal(RobustReflectionConverter.java:160)
	at com.thoughtworks.xstream.core.AbstractReferenceMarshaller.convert(AbstractReferenceMarshaller.java:68)
	at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:58)
	at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:43)
	at com.thoughtworks.xstream.core.TreeMarshaller.start(TreeMarshaller.java:82)
	at com.thoughtworks.xstream.core.AbstractTreeMarshallingStrategy.marshal(AbstractTreeMarshallingStrategy.java:37)
	at com.thoughtworks.xstream.XStream.marshal(XStream.java:1243)
	at com.thoughtworks.xstream.XStream.marshal(XStream.java:1232)
	at com.thoughtworks.xstream.XStream.toXML(XStream.java:1205)
	at hudson.util.XStream2.toXMLUTF8(XStream2.java:325)
	at org.jenkinsci.plugins.workflow.support.PipelineIOUtils.writeByXStream(PipelineIOUtils.java:34)
	at org.jenkinsci.plugins.workflow.job.WorkflowRun.save(WorkflowRun.java:1218)
	at hudson.BulkChange.commit(BulkChange.java:97)
	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.notifyListeners(CpsFlowExecution.java:1485)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$3.run(CpsThreadGroup.java:491)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$1.run(CpsVmExecutorService.java:38)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE

It's likely related to concurrency and parallelism. Perhaps the queueDataByFlowNode map should be a ConcurrentHashMap instead of a HashMap?

Thanks!
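The ConcurrentModificationException in the trace above is easy to reproduce in isolation: a plain HashMap's iterator fails fast when the map is structurally modified mid-iteration, while ConcurrentHashMap's weakly consistent iterator tolerates it. A minimal demonstration (illustrative, not plugin code):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CmeDemo {
    // Returns true if mutating the map while iterating it throws CME.
    static boolean mutateWhileIterating(Map<String, Integer> map) {
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (String k : map.keySet()) {
                map.put("c", 3); // structural modification mid-iteration
            }
            return false; // iteration completed without an exception
        } catch (ConcurrentModificationException e) {
            return true; // fail-fast iterator detected the modification
        }
    }

    public static void main(String[] args) {
        System.out.println(mutateWhileIterating(new HashMap<>()));           // true
        System.out.println(mutateWhileIterating(new ConcurrentHashMap<>())); // false
    }
}
```

XStream serialization iterates the map's entries, so if another thread updates the action's HashMap during a save, exactly this exception results; ConcurrentHashMap (or snapshotting the map before serialization) avoids it.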

Remote agent configuration does not send metrics

Describe the bug
Configuring the plugin with a remote agent breaks metrics sending. The connectivity check doesn't seem to work either.
I'm running Jenkins and the Datadog agent on ECS as two separate services using EC2:

  • Jenkins Service: Replica service with 1 instance
  • Datadog-agent: Daemon service ( runs the container in every EC2 node)

I've tested the underlying communications and they are working fine:

  • Connectivity from the Jenkins container to the Datadog container on ports 8125/udp and 8126 works:

JENKINS_CONTAINER # nc -vz datadog-agent.jks-preprod-e614356d 8126
datadog-agent.jks-preprod-e614356d [10.9.36.221] 8126 (?) open

JENKINS_CONTAINER # nc -vzu datadog-agent.jks-preprod-e614356d 8125
datadog-agent.jks-preprod-e614356d [10.9.25.129] 8125 (?) open

If I test the connectivity from the Jenkins Datadog plugin configuration panel, it throws a connection timeout error. It seems that the connectivity check uses a TCP connection instead of UDP against port 8125.

The traces are working fine; I'm configuring CI Visibility, and the job statistics are visible in Datadog CI.
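The suspicion above is plausible because a UDP port cannot be meaningfully "connection tested" the way a TCP port can: sending a datagram succeeds locally whether or not anything listens, while a TCP connect to a UDP-only port times out or is refused. A small sketch (the DogStatsD-style payload and port are from this report; this is not the plugin's actual check):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpProbe {
    // Sending a UDP datagram "succeeds" even with no listener: UDP is
    // connectionless, so send() cannot confirm delivery. This is why a
    // TCP connect() against 8125 fails even when DogStatsD is healthy.
    static boolean trySend(String host, int port) {
        try (DatagramSocket sock = new DatagramSocket()) {
            byte[] payload = "jenkins.test:1|c".getBytes(); // DogStatsD-style counter
            sock.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName(host), port));
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(trySend("127.0.0.1", 8125));
    }
}
```

This matches the nc results above: nc -vzu (UDP) reports the port open, while the plugin's TCP-style test times out.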

To Reproduce
Steps to reproduce the behavior:

  • Try to configure the plugin to use remote agent and test the connection.
  • Check if the jenkins metrics are sent.

Expected behavior

  • Connectivity working as expected
  • Jenkins metrics are sent to DataDog

Environment and Versions (please complete the following information):

  • Jenkins 2.301
  • DD Plugin: 3.0.1
  • datadog-agent: datadog/agent:latest-jmx

Plugin doesn't send metrics (Datadog agent)

Describe the bug
The plugin doesn't send metrics to Datadog when I try with the Datadog agent. I'm not sure it's a bug; I guess I just haven't found the right way to configure the plugin.
Regards

To Reproduce
Steps to reproduce the behavior:

  1. install datadog helm chart
  2. install jenkins helm chart
  3. install datadog plugin
  4. Set this configuration (screenshot omitted)

Expected behavior
See Jenkins metrics in Datadog.

Environment and Versions (please complete the following information):

  • datadog plugin: 3.5.0
  • kubernetes: 1.21
  • datadog helm chart repo: https://helm.datadoghq.com chart: datadog version: 2.30.16
  • datadog helm chart values:

resource "helm_release" "datadog" {
  name             = "datadog"
  repository       = "https://helm.datadoghq.com"
  namespace        = "monitoring"
  chart            = "datadog"
  version          = "2.30.16"

  set {
    name  = "datadog.site"
    value = "datadoghq.eu"
  }

  set {
      name = "datadog.apiKey"
      value = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  }

  set {
      name = "datadog.logs.enabled"
      value = true
  }

  set {
      name = "datadog.logs.containerCollectAll"
      value = "true"
  }

    set {
      name = "datadog.prometheusScrape.enabled"
      value = true
  }
  set {
      name = "datadog.apm.enabled"
      value = "true"
  }
}

Additional context
My currently available Datadog services:

└─$ kubectl get svc
NAME                                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
datadog-cluster-agent                          ClusterIP   172.20.179.21    <none>        5005/TCP                     4h46m
datadog-kube-state-metrics                     ClusterIP   172.20.205.35    <none>        8080/TCP                     4h46m

All classic jobs in jenkins turn into failed on datadog plugin v6.0.0

Describe the bug
Today I upgraded all plugins in Jenkins to their latest versions, among them the most recent version of the Datadog plugin.
After Jenkins restarted, all successful builds of classic jobs show up as failed, even though I can still see in the logs that the result was SUCCESS.
Here's what I found in the logs:

ConversionException:
---- Debugging information ----
cause-exception     : java.lang.NumberFormatException
cause-message       : For input string: "https://jenkins2.dev.lockhart.io/job/Infrastructure/job/fmc/job/cdfmc_rds_deploy/job/deploy_fmc_with_rds/2750/"
class               : java.lang.Long
required-type       : java.lang.Long
converter-type      : com.thoughtworks.xstream.converters.SingleValueConverterWrapper
wrapped-converter   : com.thoughtworks.xstream.converters.basic.LongConverter
path                : /build/actions/org.datadog.jenkins.plugins.datadog.traces.BuildSpanAction/buildData/buildUrl
line number         : 137
class[1]            : org.datadog.jenkins.plugins.datadog.traces.message.TraceSpan$TraceSpanContext
required-type[1]    : org.datadog.jenkins.plugins.datadog.traces.message.TraceSpan$TraceSpanContext
converter-type[1]   : hudson.util.XStream2$AssociatedConverterImpl
class[2]            : org.datadog.jenkins.plugins.datadog.traces.BuildSpanAction
required-type[2]    : org.datadog.jenkins.plugins.datadog.traces.BuildSpanAction
-------------------------------
CannotResolveClassException: buildParameters, charsetName, nodeName, jobName, baseJobName, buildTag, jenkinsUrl, executorNumber, javaHome, branch, gitUrl, gitCommit, isCompleted, hostname, userId, tags, startTime, endTime
ConversionException: Refusing to unmarshal duration for security reasons; see https://www.jenkins.io/redirect/class-filter/
---- Debugging information ----
message             : Refusing to unmarshal duration for security reasons; see https://www.jenkins.io/redirect/class-filter/
class               : java.time.Duration
required-type       : java.time.Duration
converter-type      : hudson.util.XStream2$BlacklistedTypesConverter
path                : /build/actions/org.datadog.jenkins.plugins.datadog.traces.BuildSpanAction/buildData/duration
line number         : 197
-------------------------------
CannotResolveClassException: millisInQueue, buildSpanContext

I reverted back to the previous version and it fixed my issues.

To Reproduce
I didn't try to reproduce this issue. I can provide my job configs and build history if needed.



Screenshots
Screenshot 2024-02-03 at 9 30 33 AM

Environment and Versions (please complete the following information):
Jenkins 2.426.3
Datadog 5.6.2 -> 6.0.0


Some pipeline steps cannot run in parallel when datadog plugin is installed

Some pipeline steps, e.g. rtp, cannot run in parallel when the Datadog plugin is installed. This triggers a long stack trace (see below):

java.util.ConcurrentModificationException
		at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
		at java.util.HashMap$EntryIterator.next(HashMap.java:1479)
		at java.util.HashMap$EntryIterator.next(HashMap.java:1477)
		at com.thoughtworks.xstream.converters.collections.MapConverter.marshal(MapConverter.java:75)
		at com.thoughtworks.xstream.core.AbstractReferenceMarshaller.convert(AbstractReferenceMarshaller.java:68)
		at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:58)
		at com.thoughtworks.xstream.core.AbstractReferenceMarshaller$1.convertAnother(AbstractReferenceMarshaller.java:83)
		at hudson.util.RobustReflectionConverter.marshallField(RobustReflectionConverter.java:275)
		at hudson.util.RobustReflectionConverter$2.writeField(RobustReflectionConverter.java:262)
	Caused: java.lang.RuntimeException: Failed to serialize org.datadog.jenkins.plugins.datadog.traces.StepDataAction#stepDataByDescriptor for class org.datadog.jenkins.plugins.datadog.traces.StepDataAction
		at hudson.util.RobustReflectionConverter$2.writeField(RobustReflectionConverter.java:266)
		at hudson.util.RobustReflectionConverter$2.visit(RobustReflectionConverter.java:233)
		at com.thoughtworks.xstream.converters.reflection.PureJavaReflectionProvider.visitSerializableFields(PureJavaReflectionProvider.java:150)
		at hudson.util.RobustReflectionConverter.doMarshal(RobustReflectionConverter.java:219)
...

To Reproduce

Run the following pipeline in a Jenkins instance with the datadog plugin installed:

node("medium") {
    Map jobs = [:]
    1.upto(100) { i ->
        jobs[i] = {
            rtp(nullAction: '1', stableText: "${i}")
        }
    }
    parallel(jobs)
}
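The ConcurrentModificationException in the trace above is the classic symptom of iterating a plain HashMap while something mutates it. A minimal, standalone Java sketch (not plugin code, just an illustration of the failure mode and the usual remedy) behaves the same way:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CmeDemo {

    // Fill the map, then perform a structural change once while iterating it.
    static boolean survivesMutationDuringIteration(Map<Integer, String> map) {
        for (int i = 0; i < 10; i++) {
            map.put(i, "v" + i);
        }
        try {
            boolean mutated = false;
            for (Integer key : map.keySet()) {
                if (!mutated) {
                    map.put(999, "added-mid-iteration"); // structural change while iterating
                    mutated = true;
                }
            }
            return true;  // weakly consistent iterators (ConcurrentHashMap) get here
        } catch (ConcurrentModificationException e) {
            return false; // HashMap's fail-fast iterator detects the change
        }
    }

    public static void main(String[] args) {
        System.out.println(survivesMutationDuringIteration(new HashMap<>()));           // false
        System.out.println(survivesMutationDuringIteration(new ConcurrentHashMap<>())); // true
    }
}
```

The trace points at serialization of StepDataAction#stepDataByDescriptor, so presumably that map is mutated by parallel branches while XStream iterates it; switching the backing map to a concurrent implementation, or snapshotting it before serialization, is the usual fix for this pattern.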

Expected behavior

The rtp pipeline step should execute 100 times, once per parallel branch, each with its branch's value of i as text.

Environment and Versions:

  • Jenkins 2.289.3
  • Datadog plugin 3.1.1

Java errors after enabling log collection in Jenkins

Hello, we are seeing the following errors in Jenkins after enabling log collection in the Datadog plugin:

2022-07-08 16:30:58.755+0000 [id=729933]    INFO    o.d.j.p.d.c.DatadogAgentClient#reinitializeLogger: Re/Initialize Datadog-Plugin Logger: hostname = 10.241.65.43, logCollectionPort = 8126
2022-07-08 16:30:58.760+0000 [id=732176]    SEVERE    o.d.j.p.datadog.DatadogUtilities#severe: java.util.logging.ErrorManager: 2

We tried a telnet to localhost 8126 and the connection works.

To Reproduce
Steps to reproduce the behavior:

  1. Configure the plugin using configuration as code (casc)
  2. Configure the settings in a pipeline job to enable extra settings
  3. Datadog agent is locally running as a container in the node with log collection enabled

Expected behavior
The agent should ship the logs to the central console

Configuration:
The casc configuration is the following:

unclassified:
  datadogGlobalConfiguration:
    ciInstanceName: "jenkins-prod"
    collectBuildLogs: true
    emitConfigChangeEvents: false
    emitSecurityEvents: true
    emitSystemEvents: true
    enableCiVisibility: true
    reportWith: "DSD"
    retryLogs: true
    targetLogCollectionPort: 8126
    targetPort: 8125
    targetTraceCollectionPort: 8126

The plugin is working correctly sending the information about the jobs into datadog (we are using the already existent Datadog dashboard to check) example:
image

On the Jenkins pipeline we are setting the following:

pipeline {
    agent {
        kubernetes {}
    }
    options {
        // https://docs.datadoghq.com/integrations/jenkins/
        datadog(collectLogs: true, tags: ["tenant:test", "service:jenkins"])
    }

Also testing the connections on the UI seem to work correctly:
image

We are not seeing any kind of pipeline logs information being shipped to datadog.
Is there any kind of debugging we can do? Are some of the settings incorrect?

Also a question regarding the configuration:
If we set collectLogs: true in the pipeline, do we also need to enable log collection globally? When we disable the global option and enable it only via the pipeline, we still don't receive logs, but the Jenkins error messages mentioned above stop.

Environment and Versions:

  • Datadog Plugin version 4.0.0
  • Jenkins version: 2.346.1 LTS JDK11 version
  • Jenkins chart version: 4.11.11

Thanks for your help

Datadog errors under 'manage old data' - for pipelines not utilising the plugin

Describe the bug
Errors appearing under 'Manage old Data', which look related to the datadog plugin for Jenkins.

I've configured the datadog plugin for Jenkins, and setup a valid API token. I haven't begun utilising the plugin yet in any pipelines.

I noticed Jenkins had flagged some issues with pipelines under 'Manage old Data'. When I click in here, I see a lot of pipelines listed with the below error:

MissingFieldException: No field 'emitOnCheckout' found in class 'org.datadog.jenkins.plugins.datadog.DatadogJobProperty'

I'm not sure if this is a regression, but I did a bit of searching and couldn't find a direct link to existing issues, nor identify the root cause.

I also searched for 'emitOnCheckout' in this Git repository but couldn't find any references. That said, when I search for that term online in the context of Jenkins, the results reference the Datadog plugin.

Will also attach a screenshot.

To Reproduce
Steps to reproduce the behavior:

  1. Datadog plugin for Jenkins installed - v 1.0.2
  2. Configured valid API token for the datadog plugin
  3. Clicked into the 'Manage Old Data' screen in Jenkins
  4. Reviewed the findings and as per above

Expected behavior
There shouldn't be any datadog errors under 'Manage old Data' for pipelines which do not use the plugin.

Screenshots
datadog-plugin-errors

Environment and Versions (please complete the following information):
Jenkins LTS, v2.204.3
Datadog plugin for Jenkins, v1.0.2

StackOverflowError after enabling log collection

We've been getting stack overflows in a pipeline that uses stashedFile / the file parameter plugin; it appears to be the same issue as jenkinsci/file-parameters-plugin#182 (where they say the problem is in this plugin).

To Reproduce
Steps to reproduce the behavior:

  1. Enable datadog log collection in Manage Jenkins -> System
  2. Use something along the lines of stashedFile name: 'aFile', description: 'a file' as a parameter in a pipeline
  3. Try to run it
  4. The job fails with no console output, and the Jenkins log shows a very long trace that starts like this and repeats the same sequence forever, bouncing between the file parameter plugin and the Datadog plugin:

Feb 01 00:53:27 jenkins jenkins[507501]: java.lang.StackOverflowError
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.codehaus.groovy.reflection.ParameterTypes.coerceArgumentsToClasses(ParameterTypes.java:145)
Feb 01 00:53:27 jenkins jenkins[507501]:         at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:264)
Feb 01 00:53:27 jenkins jenkins[507501]:         at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1034)
Feb 01 00:53:27 jenkins jenkins[507501]:         at groovy.lang.Closure.call(Closure.java:420)
Feb 01 00:53:27 jenkins jenkins[507501]:         at groovy.lang.Closure$WritableClosure.writeTo(Closure.java:860)
Feb 01 00:53:27 jenkins jenkins[507501]:         at groovy.lang.Closure$WritableClosure.toString(Closure.java:986)
Feb 01 00:53:27 jenkins jenkins[507501]:         at io.jenkins.plugins.opentelemetry.backend.ObservabilityBackend.getTraceVisualisationUrl(ObservabilityBackend.java:114)
Feb 01 00:53:27 jenkins jenkins[507501]:         at io.jenkins.plugins.opentelemetry.job.MonitoringAction.lambda$getLinks$6(MonitoringAction.java:155)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
Feb 01 00:53:27 jenkins jenkins[507501]:         at io.jenkins.plugins.opentelemetry.job.MonitoringAction.getLinks(MonitoringAction.java:158)
Feb 01 00:53:27 jenkins jenkins[507501]:         at io.jenkins.plugins.opentelemetry.job.OtelEnvironmentContributorService.addEnvironmentVariables(OtelEnvironmentContributorService.java:58)
Feb 01 00:53:27 jenkins jenkins[507501]:         at io.jenkins.plugins.opentelemetry.job.OtelEnvironmentContributor.buildEnvironmentFor(OtelEnvironmentContributor.java:31)
Feb 01 00:53:27 jenkins jenkins[507501]:         at hudson.model.Run.getEnvironment(Run.java:2430)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.jenkinsci.plugins.workflow.job.WorkflowRun.getEnvironment(WorkflowRun.java:519)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.datadog.jenkins.plugins.datadog.model.BuildData.<init>(BuildData.java:150)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.datadog.jenkins.plugins.datadog.logs.DatadogTaskListenerDecorator.<init>(DatadogTaskListenerDecorator.java:49)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.datadog.jenkins.plugins.datadog.logs.DatadogTaskListenerDecorator$Factory.of(DatadogTaskListenerDecorator.java:82)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.jenkinsci.plugins.workflow.log.TaskListenerDecorator.lambda$apply$3(TaskListenerDecorator.java:164)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1845)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:310)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:735)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:735)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
Feb 01 00:53:27 jenkins jenkins[507501]:         at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.jenkinsci.plugins.workflow.log.TaskListenerDecorator.apply(TaskListenerDecorator.java:166)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.jenkinsci.plugins.workflow.job.WorkflowRun.getListener(WorkflowRun.java:236)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.jenkinsci.plugins.workflow.job.WorkflowRun$Owner.getListener(WorkflowRun.java:1024)
Feb 01 00:53:27 jenkins jenkins[507501]:         at io.jenkins.plugins.file_parameters.StashedFileParameterValue.buildEnvironment(StashedFileParameterValue.java:70)
Feb 01 00:53:27 jenkins jenkins[507501]:         at hudson.model.ParametersAction.buildEnvironment(ParametersAction.java:143)
Feb 01 00:53:27 jenkins jenkins[507501]:         at hudson.model.Run.getEnvironment(Run.java:2434)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.jenkinsci.plugins.workflow.job.WorkflowRun.getEnvironment(WorkflowRun.java:519)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.datadog.jenkins.plugins.datadog.model.BuildData.<init>(BuildData.java:150)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.datadog.jenkins.plugins.datadog.logs.DatadogTaskListenerDecorator.<init>(DatadogTaskListenerDecorator.java:49)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.datadog.jenkins.plugins.datadog.logs.DatadogTaskListenerDecorator$Factory.of(DatadogTaskListenerDecorator.java:82)
Feb 01 00:53:27 jenkins jenkins[507501]:         at org.jenkinsci.plugins.workflow.log.TaskListenerDecorator.lambda$apply$3(TaskListenerDecorator.java:164)
[...]

Expected behavior
Job runs

Environment and Versions (please complete the following information):
openjdk 17.0.9 2023-10-17
Jenkins 2.443
Datadog plugin 6.0.0
Datadog agent 6.50.3
File parameter plugin 316.va_83a_1221db_a_7

Allow using a Jenkins Secret for API key


Is your feature request related to a problem? Please describe.
Currently the only way to define a Datadog API key is by manually inserting it into the config window. This means that using Jenkins Configuration-as-Code would require adding a plaintext API key to VCS.

Describe the solution you'd like
It would be nice if the plugin supported using a Jenkins Secret Text credential for this, similar to how the Git plugin allows selecting credentials from Jenkins Secrets.
This would allow using Jenkins Configuration-as-Code without exposing the API key to anyone with read permissions on the Git repo.

Describe alternatives you've considered
None that I can think of

Additional context
Allow defining API key here
image
Similarly to how, for example, Bitbucket plugin does it:
image
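For illustration, a credentials-based setup could look like the sketch below. The field name datadogCredentialsApiKey is an assumption for illustration only, not a verified part of the plugin's CasC schema; check the plugin's CasC export for the real shape.

```yaml
# Hypothetical sketch: 'datadogCredentialsApiKey' is an assumed field name,
# not (necessarily) the plugin's real CasC schema.
credentials:
  system:
    domainCredentials:
      - credentials:
          - string:
              id: "datadog-api-key"
              secret: "${DATADOG_API_KEY}"  # resolved by CasC, e.g. from the environment
              description: "Datadog API key"
unclassified:
  datadogGlobalConfiguration:
    reportWith: "HTTP"
    datadogCredentialsApiKey: "datadog-api-key"  # assumed field; see note above
```

With this shape, only the credentials ID lands in the versioned YAML; the secret itself is injected at CasC load time.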

Datadog Log Collection Port is not set properly in jenkins logs

Describe the bug

Seeing lots of the errors below in the Jenkins log, even though log collection is optional and I have not enabled it:

2023-05-30 16:11:46.763+0000 [id=79160]	SEVERE	o.d.j.p.d.c.DatadogAgentClient#sendLogs: Datadog Log Collection Port is not set properly



Plugin configured with CI metrics prevents logs from being displayed in the pipeline stage view

Describe the bug
Logs normally displayed within the Pipeline stage view are no longer visible once the datadog plugin is installed and CI visibility is enabled as per official datadog docs.

Each of the stages displays its logs while the job is running, but as soon as the job finishes the log modal becomes empty. This continues until the Jenkins server is restarted, at which point the logs for existing jobs become visible again (but the same issue recurs for any newly triggered jobs).

To Reproduce
Steps to reproduce the behavior:

  1. In Jenkins 2.375.2 install the workflow-job, workflow-aggregator and datadog plugins with versions noted below
  2. Configure the datadog plugin to send CI metrics as per official datadog docs. (If you just install the plugin without configuring it, the issue disappears)
  3. Create a Jenkins pipeline with stages: Screenshot 2023-01-23 at 19 06 28
  4. Run the pipeline
  5. In the pipeline view, note you see the logs for each stage while the job is still running: Screenshot 2023-01-23 at 19 06 44
  6. As soon as the job finishes, the logs for each stage can no longer be seen: Screenshot 2023-01-23 at 19 30 32 (There are no errors in the browser console nor network tab)
  7. If the server is restarted, the logs are visible again in the pipeline view; however, the issue repeats itself for any newly run jobs.

Expected behavior
The logs should continue to be visible in the pipeline view, even when the Datadog plugin is installed and configured to capture CI metrics. That is, after step 6 above (when the job finishes), the logs should remain visible in the pipeline view.

Environment and Versions (please complete the following information):

  • jenkins: 2.375.2 (container jenkins/jenkins:2.375.2-lts)
  • datadog-plugin: 5.2.0
  • pipeline-stage-view: 2.29
  • workflow-job: 1268.v6eb_e2ee1a_85a
  • workflow-aggregator: 590.v6a_d052e5a_a_b_5

Additional context

  • Datadog plugin configured to track CI metrics using environment variables:
    - name: DATADOG_JENKINS_PLUGIN_REPORT_WITH
      value: DSD
    - name: DATADOG_JENKINS_PLUGIN_TARGET_HOST
      value: datadog.datadog-agent.svc.cluster.local
    - name: DATADOG_JENKINS_PLUGIN_TARGET_PORT
      value: "8125"
    - name: DATADOG_JENKINS_PLUGIN_ENABLE_CI_VISIBILITY
      value: "true"
    - name: DATADOG_JENKINS_PLUGIN_TARGET_TRACE_COLLECTION_PORT
      value: "8126"
    - name: DATADOG_JENKINS_PLUGIN_CI_VISIBILITY_CI_INSTANCE_NAME
      value: jenkins-test
    
  • Log injection is not enabled in the Datadog plugin, i.e., I haven't applied the optional steps: https://docs.datadoghq.com/continuous_integration/pipelines/jenkins/?tab=usingenvironmentvariables#enable-job-log-collection
  • I have tried downgrading the datadog plugin to versions 4.0.0, 3.5.2 and 2.13.0. The problem still persists.
  • Both jenkins and datadog agent hosted in Kubernetes. Jenkins deployed with the jenkins-operator and the datadog agent with the official helm chart.

New tag please

Describe the bug
Technically not a new bug -- Issue #29 was fixed by PR #30. We are also blocked by that issue and need a new tag to upgrade our plugin to a new release which contains the fix.

Running Jenkins 2.204.1 w/ Datadog 1.0.1

Thanks 🙏

Datadog plugin incompatible with Jenkins 2.338 and up

Describe the bug

Since version 2.338 of Jenkins, the Datadog plugin generates errors that prevent Jenkins from starting normally. This appears to be due to Jenkins's recent removal of the Java Native Runtime (JNR) library from its core. Since the plugin seems to rely on this library, either directly or indirectly, Jenkins startup fails and shows a stacktrace in the UI.

To Reproduce
Steps to reproduce the behavior:

  1. Install the Datadog plugin on Jenkins version < 2.338
  2. Upgrade to Jenkins version >= 2.338
  3. See error on Jenkins startup

Expected behavior
Jenkins should start normally with no error.

Screenshots
Here's the stacktrace I see on startup:

Stacktrace in UI

Environment and Versions (please complete the following information):

  • Jenkins version 2.338
  • Datadog plugin version 3.51

Additional context

Here is the stacktrace visible in the UI (same as in the screenshot above):

java.lang.UnsatisfiedLinkError: could not get native definition for type `POINTER`, original error message follows: java.lang.UnsatisfiedLinkError: could not locate stub library in jar file.  Tried [jni/x86_64-Linux/libjffi-1.2.so, /jni/x86_64-Linux/libjffi-1.2.so]
	at com.kenai.jffi.internal.StubLoader.getStubLibraryStream(StubLoader.java:450)
	at com.kenai.jffi.internal.StubLoader.loadFromJar(StubLoader.java:375)
	at com.kenai.jffi.internal.StubLoader.load(StubLoader.java:278)
	at com.kenai.jffi.internal.StubLoader.<clinit>(StubLoader.java:487)
	at java.base/java.lang.Class.forName0(Native Method)
	at java.base/java.lang.Class.forName(Class.java:398)
	at com.kenai.jffi.Init.load(Init.java:68)
	at com.kenai.jffi.Foreign$InstanceHolder.getInstanceHolder(Foreign.java:49)
	at com.kenai.jffi.Foreign$InstanceHolder.<clinit>(Foreign.java:45)
	at com.kenai.jffi.Foreign.getInstance(Foreign.java:103)
	at com.kenai.jffi.Type$Builtin.lookupTypeInfo(Type.java:242)
	at com.kenai.jffi.Type$Builtin.getTypeInfo(Type.java:237)
	at com.kenai.jffi.Type.resolveSize(Type.java:155)
	at com.kenai.jffi.Type.size(Type.java:138)
	at jnr.ffi.provider.jffi.NativeRuntime$TypeDelegate.size(NativeRuntime.java:178)
	at jnr.ffi.provider.AbstractRuntime.<init>(AbstractRuntime.java:48)
	at jnr.ffi.provider.jffi.NativeRuntime.<init>(NativeRuntime.java:57)
	at jnr.ffi.provider.jffi.NativeRuntime.<init>(NativeRuntime.java:41)
	at jnr.ffi.provider.jffi.NativeRuntime$SingletonHolder.<clinit>(NativeRuntime.java:53)
	at jnr.ffi.provider.jffi.NativeRuntime.getInstance(NativeRuntime.java:49)
	at jnr.ffi.provider.jffi.Provider.<init>(Provider.java:29)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
	at java.base/java.lang.Class.newInstance(Class.java:584)
	at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.getInstance(FFIProvider.java:68)
	at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.<clinit>(FFIProvider.java:57)
	at jnr.ffi.provider.FFIProvider.getSystemProvider(FFIProvider.java:35)
	at jnr.ffi.Runtime$SingletonHolder.<clinit>(Runtime.java:82)
	at jnr.ffi.Runtime.getSystemRuntime(Runtime.java:67)
	at jnr.unixsocket.SockAddrUnix.<init>(SockAddrUnix.java:46)
	at jnr.unixsocket.SockAddrUnix$DefaultSockAddrUnix.<init>(SockAddrUnix.java:208)
	at jnr.unixsocket.SockAddrUnix.create(SockAddrUnix.java:174)
	at jnr.unixsocket.UnixSocketAddress.<init>(UnixSocketAddress.java:53)
	at com.timgroup.statsd.NonBlockingStatsDClient$5.call(NonBlockingStatsDClient.java:1416)
	at com.timgroup.statsd.NonBlockingStatsDClient$5.call(NonBlockingStatsDClient.java:1413)
	at com.timgroup.statsd.NonBlockingStatsDClient.staticAddressResolution(NonBlockingStatsDClient.java:1433)
	at com.timgroup.statsd.NonBlockingStatsDClient.staticStatsDAddressResolution(NonBlockingStatsDClient.java:1450)
	at com.timgroup.statsd.NonBlockingStatsDClient.<init>(NonBlockingStatsDClient.java:364)
	at com.timgroup.statsd.NonBlockingStatsDClient.<init>(NonBlockingStatsDClient.java:210)
	at com.timgroup.statsd.NonBlockingStatsDClient.<init>(NonBlockingStatsDClient.java:182)
	at org.datadog.jenkins.plugins.datadog.clients.DatadogAgentClient.reinitializeStatsDClient(DatadogAgentClient.java:217)
	at org.datadog.jenkins.plugins.datadog.clients.DatadogAgentClient.getInstance(DatadogAgentClient.java:133)
	at org.datadog.jenkins.plugins.datadog.clients.ClientFactory.getClient(ClientFactory.java:52)
	at org.datadog.jenkins.plugins.datadog.clients.ClientFactory.getClient(ClientFactory.java:84)
	at org.datadog.jenkins.plugins.datadog.listeners.DatadogItemListener.onCRUD(DatadogItemListener.java:81)
	at org.datadog.jenkins.plugins.datadog.listeners.DatadogItemListener.onUpdated(DatadogItemListener.java:69)
	at hudson.model.listeners.ItemListener.lambda$fireOnUpdated$2(ItemListener.java:205)
	at jenkins.util.Listeners.lambda$notify$0(Listeners.java:59)
	at jenkins.util.Listeners.notify(Listeners.java:70)
	at hudson.model.listeners.ItemListener.fireOnUpdated(ItemListener.java:205)
	at com.cloudbees.hudson.plugins.folder.AbstractFolder.save(AbstractFolder.java:1315)
	at hudson.util.PersistedList.onModified(PersistedList.java:193)
	at hudson.util.PersistedList._onModified(PersistedList.java:224)
	at hudson.util.PersistedList.add(PersistedList.java:85)
	at com.cloudbees.hudson.plugins.folder.AbstractFolder.addProperty(AbstractFolder.java:616)
	at jenkins.branch.OrganizationFolder.onLoad(OrganizationFolder.java:242)
	at hudson.model.Items.load(Items.java:376)
	at jenkins.model.Jenkins$13.run(Jenkins.java:3418)
	at org.jvnet.hudson.reactor.TaskGraphBuilder$TaskImpl.run(TaskGraphBuilder.java:175)
	at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:305)
	at jenkins.model.Jenkins$5.runTask(Jenkins.java:1156)
	at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:222)
	at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:121)
	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)

	at com.kenai.jffi.Type$Builtin.lookupTypeInfo(Type.java:253)
	at com.kenai.jffi.Type$Builtin.getTypeInfo(Type.java:237)
	at com.kenai.jffi.Type.resolveSize(Type.java:155)
	at com.kenai.jffi.Type.size(Type.java:138)
	at jnr.ffi.provider.jffi.NativeRuntime$TypeDelegate.size(NativeRuntime.java:178)
	at jnr.ffi.provider.AbstractRuntime.<init>(AbstractRuntime.java:48)
	at jnr.ffi.provider.jffi.NativeRuntime.<init>(NativeRuntime.java:57)
	at jnr.ffi.provider.jffi.NativeRuntime.<init>(NativeRuntime.java:41)
	at jnr.ffi.provider.jffi.NativeRuntime$SingletonHolder.<clinit>(NativeRuntime.java:53)
	at jnr.ffi.provider.jffi.NativeRuntime.getInstance(NativeRuntime.java:49)
	at jnr.ffi.provider.jffi.Provider.<init>(Provider.java:29)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
	at java.base/java.lang.Class.newInstance(Class.java:584)
	at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.getInstance(FFIProvider.java:68)
	at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.<clinit>(FFIProvider.java:57)
	at jnr.ffi.provider.FFIProvider.getSystemProvider(FFIProvider.java:35)
	at jnr.ffi.Runtime$SingletonHolder.<clinit>(Runtime.java:82)
	at jnr.ffi.Runtime.getSystemRuntime(Runtime.java:67)
	at jnr.unixsocket.SockAddrUnix.<init>(SockAddrUnix.java:46)
	at jnr.unixsocket.SockAddrUnix$DefaultSockAddrUnix.<init>(SockAddrUnix.java:208)
	at jnr.unixsocket.SockAddrUnix.create(SockAddrUnix.java:174)
	at jnr.unixsocket.UnixSocketAddress.<init>(UnixSocketAddress.java:53)
	at com.timgroup.statsd.NonBlockingStatsDClient$5.call(NonBlockingStatsDClient.java:1416)
	at com.timgroup.statsd.NonBlockingStatsDClient$5.call(NonBlockingStatsDClient.java:1413)
	at com.timgroup.statsd.NonBlockingStatsDClient.staticAddressResolution(NonBlockingStatsDClient.java:1433)
	at com.timgroup.statsd.NonBlockingStatsDClient.staticStatsDAddressResolution(NonBlockingStatsDClient.java:1450)
	at com.timgroup.statsd.NonBlockingStatsDClient.<init>(NonBlockingStatsDClient.java:364)
	at com.timgroup.statsd.NonBlockingStatsDClient.<init>(NonBlockingStatsDClient.java:210)
	at com.timgroup.statsd.NonBlockingStatsDClient.<init>(NonBlockingStatsDClient.java:182)
	at org.datadog.jenkins.plugins.datadog.clients.DatadogAgentClient.reinitializeStatsDClient(DatadogAgentClient.java:217)
	at org.datadog.jenkins.plugins.datadog.clients.DatadogAgentClient.getInstance(DatadogAgentClient.java:133)
	at org.datadog.jenkins.plugins.datadog.clients.ClientFactory.getClient(ClientFactory.java:52)
	at org.datadog.jenkins.plugins.datadog.clients.ClientFactory.getClient(ClientFactory.java:84)
	at org.datadog.jenkins.plugins.datadog.listeners.DatadogItemListener.onCRUD(DatadogItemListener.java:81)
	at org.datadog.jenkins.plugins.datadog.listeners.DatadogItemListener.onUpdated(DatadogItemListener.java:69)
	at hudson.model.listeners.ItemListener.lambda$fireOnUpdated$2(ItemListener.java:205)
	at jenkins.util.Listeners.lambda$notify$0(Listeners.java:59)
	at jenkins.util.Listeners.notify(Listeners.java:70)
	at hudson.model.listeners.ItemListener.fireOnUpdated(ItemListener.java:205)
	at com.cloudbees.hudson.plugins.folder.AbstractFolder.save(AbstractFolder.java:1315)
	at hudson.util.PersistedList.onModified(PersistedList.java:193)
	at hudson.util.PersistedList._onModified(PersistedList.java:224)
	at hudson.util.PersistedList.add(PersistedList.java:85)
	at com.cloudbees.hudson.plugins.folder.AbstractFolder.addProperty(AbstractFolder.java:616)
	at jenkins.branch.OrganizationFolder.onLoad(OrganizationFolder.java:242)
	at hudson.model.Items.load(Items.java:376)
	at jenkins.model.Jenkins$13.run(Jenkins.java:3418)
	at org.jvnet.hudson.reactor.TaskGraphBuilder$TaskImpl.run(TaskGraphBuilder.java:175)
	at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:305)
	at jenkins.model.Jenkins$5.runTask(Jenkins.java:1156)
	at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:222)
	at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:121)
	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused: java.lang.UnsatisfiedLinkError: could not load FFI provider jnr.ffi.provider.jffi.Provider
	at jnr.ffi.provider.InvalidRuntime.newLoadError(InvalidRuntime.java:101)
	at jnr.ffi.provider.InvalidRuntime.findType(InvalidRuntime.java:42)
	at jnr.ffi.Struct$NumberField.<init>(Struct.java:872)
	at jnr.ffi.Struct$Unsigned16.<init>(Struct.java:1240)
	at jnr.unixsocket.SockAddrUnix$DefaultSockAddrUnix.<init>(SockAddrUnix.java:209)
	at jnr.unixsocket.SockAddrUnix.create(SockAddrUnix.java:174)
	at jnr.unixsocket.UnixSocketAddress.<init>(UnixSocketAddress.java:53)
	at com.timgroup.statsd.NonBlockingStatsDClient$5.call(NonBlockingStatsDClient.java:1416)
	at com.timgroup.statsd.NonBlockingStatsDClient$5.call(NonBlockingStatsDClient.java:1413)
	at com.timgroup.statsd.NonBlockingStatsDClient.staticAddressResolution(NonBlockingStatsDClient.java:1433)
	at com.timgroup.statsd.NonBlockingStatsDClient.staticStatsDAddressResolution(NonBlockingStatsDClient.java:1450)
	at com.timgroup.statsd.NonBlockingStatsDClient.<init>(NonBlockingStatsDClient.java:364)
	at com.timgroup.statsd.NonBlockingStatsDClient.<init>(NonBlockingStatsDClient.java:210)
	at com.timgroup.statsd.NonBlockingStatsDClient.<init>(NonBlockingStatsDClient.java:182)
	at org.datadog.jenkins.plugins.datadog.clients.DatadogAgentClient.reinitializeStatsDClient(DatadogAgentClient.java:217)
	at org.datadog.jenkins.plugins.datadog.clients.DatadogAgentClient.event(DatadogAgentClient.java:421)
	at org.datadog.jenkins.plugins.datadog.listeners.DatadogComputerListener.onOnline(DatadogComputerListener.java:82)
	at jenkins.model.Jenkins.<init>(Jenkins.java:1020)
	at hudson.model.Hudson.<init>(Hudson.java:86)
	at hudson.model.Hudson.<init>(Hudson.java:82)
	at hudson.WebAppMain$3.run(WebAppMain.java:247)
Caused: hudson.util.HudsonFailedToLoad
	at hudson.WebAppMain$3.run(WebAppMain.java:261)

Here are the logs from Jenkins startup: jenkins.log

As a workaround, I tried installing the standalone JNR Posix API Plugin, but it did not change anything, as I believe the Datadog plugin itself must be changed to rely on it.

One fix would be for the Datadog plugin to declare it as an explicit dependency, like other plugins have done.

Another possible longer-term fix could be to remove any JNR library dependency from the plugin altogether.

Dashboard

Is there a baseline dashboard available in JSON that we can import to get started with this plugin?

Add ability to customize build tags from groovy pipelines

Migrated from DataDog/jenkins-datadog-plugin#115

Description

I need to change a tag value based on what triggered the build (i.e. what's the build cause). After a lot of tinkering, I'm doing something like this:

def cause = 'something' // actually comes from currentBuild.rawBuild.getCause...
properties([
  [
    $class: 'DatadogJobProperty',
    tagProperties: "cause=${cause}"
  ]
])

And this sort of works, but isn't really reliable: it causes issues when multiple jobs run concurrently. The current DatadogJobProperty value is used whenever a build finishes. That means that if job A starts with cause "A" and job B starts with cause "B" before A finishes running, then both jobs will use the last DatadogJobProperty, which has "B" configured as the cause.

I also tried the following, but this fails to set the value altogether. The tag ends up as cause=_cause.

env.CAUSE = 'something' // actually comes from currentBuild.rawBuild.getCause...
properties([
  [
    $class: 'DatadogJobProperty',
    tagProperties: 'cause=${CAUSE}'
  ]
])

Jenkins log is flooded by "o.d.j.p.datadog.DatadogUtilities#severe: Failed to process build deletion"

Describe the bug
My jenkins.log file is flooded with severe-level log entries from the Datadog plugin:
o.d.j.p.datadog.DatadogUtilities#severe: Failed to process build deletion
The error comes from a catch block inside class o.d.j.p.d.l.DatadogBuildListener#onDeleted.
I enabled Jenkins loggers to get class-specific logs, but besides the DatadogUtilities entry, nothing is printed between
Start DatadogBuildListener#onDeleted
and
Failed to process build deletion

2021-05-05 15:36:54.574+0000 [id=1855210] FINE o.d.j.p.d.l.DatadogBuildListener#onDeleted: Start DatadogBuildListener#onDeleted
2021-05-05 15:36:54.575+0000 [id=1855210] SEVERE o.d.j.p.datadog.DatadogUtilities#severe: Failed to process build deletion
2021-05-05 15:36:54.575+0000 [id=1855210] FINER o.d.j.p.datadog.DatadogUtilities#severe: Failed to process build deletion: java.lang.NullPointerException

To Reproduce
I don't know how to reproduce it, but it happens thousands of times a day for me.

Expected behavior
Everything seems to work well, so maybe the log is wrong; otherwise, a log line indicating where the error occurs would help.

Screenshots
Jenkins logger I configured

Environment and Versions (please complete the following information):
A clear and precise description of your setup:

  • Jenkins 2.263.2
  • datadog plugin 2.10.0

image

jenkins.job.stage.completed - metrics for stage result

Is your feature request related to a problem? Please describe.
I am trying to get metrics around success|failure for different stages. Currently, I can get stage duration but I cannot get metrics around jenkins.job.stage.completed { result = failure | success }

Describe the solution you'd like
jenkins.job.stage.completed { stage_name, stage_depth, stage_parent, result = failure | success }

Describe alternatives you've considered
I can run the Datadog CLI in the pipeline to generate this data.
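As an interim workaround, the same counter could be emitted directly over DogStatsD from the pipeline. A minimal sketch with hypothetical metric and tag names mirroring the request (assumes a Datadog Agent listening for DogStatsD on localhost:8125):

```python
import socket

def stage_completed(stage_name, result, host="localhost", port=8125):
    # DogStatsD plain-text datagram: metric:value|type|#tag1:v1,tag2:v2
    payload = f"jenkins.job.stage.completed:1|c|#stage_name:{stage_name},result:{result}"
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload.encode("utf-8"), (host, port))
    sock.close()
    return payload  # returned so the datagram can be inspected/logged
```

The `stage_name`/`result` tags here only illustrate the shape of the request; the plugin's actual default tag set may differ.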

Additional plugin metrics

Is your feature request related to a problem? Please describe.
We'd like to set up monitors on certain plugin related metrics.

Describe the solution you'd like
We're looking to add additional metrics around plugins. The main one we need is tracking plugins that have updates available; however, active/inactive and failed plugins are also useful metrics that we would like to track. These metrics are available in the metrics plugin.

These metrics are requested so that we can set up monitors, for example, to ensure we don't have too many plugins out of date, and to see which plugins may have failed to start.

Describe alternatives you've considered
We've considered using the metrics plugin and writing a script that reads from it and forwards the data to Datadog; however, this feels quite hacky.
Alternatively, we considered the metrics-datadog plugin, but that didn't seem to send any metrics.

Support buildName as DefaultTags for jenkins.job.completed

Is your feature request related to a problem? Please describe.
We use a multi-stage pipeline for all our builds. During a pipeline run we change the displayName to the commit ID, and we would like that info in Datadog so that we can build a quick dashboard showing, in tabular format, which commit ID is deployed to which environment.

Describe the solution you'd like
Currently jenkins.job.completed returns following tags

Metric Name | Description | Default Tags
jenkins.job.completed | Rate of completed jobs. | branch, jenkins_url, job, node, result, user_id

I believe you should be able to add the displayName to default tags.

Describe alternatives you've considered
I can write my own function in groovy to publish metrics to datadog with tags.

Additional context
This is how we would set the displayName of the job.
https://support.cloudbees.com/hc/en-us/articles/220860347-How-to-set-build-name-in-Pipeline-job-

Service Check missing tags

Describe the bug
Following issue #191, we installed version 1.0.1 and now it works again, but there are no tags attached to the service checks. We can only see the host tag. The events and metrics have all the tags we added to the global tags. We also tried adding the tags to the global job tags, but with no success either.

To Reproduce
Steps to reproduce the behavior:

  1. Have a job without Datadog XML tag in the config.xml
  2. Configure global tags
  3. Check jenkins.job.status under Service Check Summary

Expected behavior
Service check has all the tags attached.

Missing git client dependency

Describe the bug
Jenkins startup shows the following error when loading datadog plugin:

WARNING h.ExtensionFinder$GuiceFinder$SezpozModule#configure: Failed to load org.datadog.jenkins.plugins.datadog.listeners.DatadogSCMListener
java.lang.ClassNotFoundException: org.jenkinsci.plugins.gitclient.GitClient
...
Caused: java.lang.NoClassDefFoundError: org/jenkinsci/plugins/gitclient/GitClient

Looking into it further, I noticed that the datadog plugin imports the git-client plugin here but does not explicitly define a dependency in the pom.xml. Digging further into the dependency chain, it seems that the datadog plugin relied on an implicit dependency to git-client plugin through the pipeline-model-definition plugin. However, in their latest update, pipeline-model-definition no longer includes git-client as a dependency.

Proposal: Explicitly add git-client as a dependency of datadog plugin to ensure that git-client is loaded correctly during startup.
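A minimal sketch of the proposed fix in the plugin's pom.xml (the version shown is illustrative; in practice the version would come from the Jenkins plugin BOM rather than being pinned here):

```xml
<!-- Hypothetical explicit dependency on the git-client plugin -->
<dependency>
  <groupId>org.jenkins-ci.plugins</groupId>
  <artifactId>git-client</artifactId>
  <version>4.0.0</version>
</dependency>
```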

severity/normal

To Reproduce
Check Jenkins spin up logs or plugin list when installing the latest versions of datadog and pipeline-model-definition.

Expected behavior
Jenkins should contain the git-client plugin (via dependency resolution) if datadog plugin is installed.

Screenshots
If applicable, add screenshots to help explain your problem.

Environment and Versions (please complete the following information):
Jenkins LTS 2.440.x
Pipeline: Declarative plugin (pipeline-model-definition) version after merge of jenkinsci/pipeline-model-definition-plugin#706

`branch` tag missing on metrics and events

Describe the bug
branch is not set as a tag on any metric, and it is not set on any events.

Screenshots
My plugin config is pretty much the defaults:
Screenshot 2023-02-27 at 9 15 18 am

nothing else is changed.

I also have an env var set in Jenkins:

          - name: DATADOG_JENKINS_PLUGIN_TARGET_HOST
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.hostIP

Environment and Versions (please complete the following information):
Jenkins 2.375.3, on EKS 1.22
plugin: 5.3.0

Additional context
On metrics, e.g. jenkins.job.completed I get all other documented tags: jenkins_url, job, node, result, user_id.
On events, e.g. "Build completed", I get all other documented tags as well: event_type, jenkins_url, job, node, result, user_id.

It looks like branch is consistently missing across the board. Can you provide any insight into how I can troubleshoot that? I'm aware I can set tags via https://github.com/jenkinsci/datadog-plugin#pipeline-customization, but didn't want to do it for a "built-in" tag.

Support different hosts for DogStatsD and Traces Collection Port

Note:
If you have a feature request, you should contact support so the request can be properly tracked.

Is your feature request related to a problem? Please describe.
I deploy the Datadog Agent to a separate cluster. Our cluster configurations (not managed by us) only allow for a single port to be forwarded per host.

Currently Agent host is used as the host for both DogStatsD and Traces (and logs).

Describe the solution you'd like
I would like to specify a different hostname for the DogStatsD Port vs the Traces Collection Port.

Describe alternatives you've considered
I've tried specifying the environment variables for traces and DogStatsD within Jenkins, but the plugin doesn't seem to use those values when they are specified that way. I unfortunately don't have access to set them as actual environment variables on our Jenkins instance.

Additional context
Add any other context or screenshots about the feature request here.
Screen Shot 2022-08-31 at 9 29 03 AM

Don't continuously log messages whenever no API token is configured

Describe the bug
Whenever no API token is configured for the Datadog plugin, the plugin will continuously write entries like the following to the Jenkins system log.

As a result, the Jenkins system log is in effect useless: by the time you are debugging Jenkins issues, the relevant entries have likely already been rotated away.

As an example, the plugin has written 17 log lines in 39 seconds:

Apr 07, 2020 8:04:56 AM INFO hudson.model.AsyncPeriodicWork lambda$doRun$0
Started Datadog Counters Publisher
Apr 07, 2020 8:04:56 AM SEVERE org.datadog.jenkins.plugins.datadog.clients.DatadogHttpClient getInstance
Datadog API Key is not set properly
Apr 07, 2020 8:04:56 AM INFO hudson.model.AsyncPeriodicWork lambda$doRun$0
Finished Datadog Counters Publisher. 5 ms
Apr 07, 2020 8:05:06 AM INFO hudson.model.AsyncPeriodicWork lambda$doRun$0
Started Datadog Counters Publisher
Apr 07, 2020 8:05:06 AM SEVERE org.datadog.jenkins.plugins.datadog.clients.DatadogHttpClient getInstance
Datadog API Key is not set properly
Apr 07, 2020 8:05:06 AM INFO hudson.model.AsyncPeriodicWork lambda$doRun$0
Finished Datadog Counters Publisher. 5 ms
Apr 07, 2020 8:05:16 AM INFO hudson.model.AsyncPeriodicWork lambda$doRun$0
Started Datadog Counters Publisher
Apr 07, 2020 8:05:16 AM SEVERE org.datadog.jenkins.plugins.datadog.clients.DatadogHttpClient getInstance
Datadog API Key is not set properly
Apr 07, 2020 8:05:16 AM INFO hudson.model.AsyncPeriodicWork lambda$doRun$0
Finished Datadog Counters Publisher. 5 ms
Apr 07, 2020 8:05:16 AM SEVERE org.datadog.jenkins.plugins.datadog.clients.DatadogHttpClient getInstance
Datadog API Key is not set properly
Apr 07, 2020 8:05:23 AM INFO hudson.model.AsyncPeriodicWork lambda$doRun$0
Started EC2 Jenkins Agents Monitor
Apr 07, 2020 8:05:23 AM INFO hudson.model.AsyncPeriodicWork lambda$doRun$0
Finished EC2 Jenkins Agents Monitor. 4 ms
Apr 07, 2020 8:05:26 AM INFO hudson.model.AsyncPeriodicWork lambda$doRun$0
Started Datadog Counters Publisher
Apr 07, 2020 8:05:26 AM SEVERE org.datadog.jenkins.plugins.datadog.clients.DatadogHttpClient getInstance
Datadog API Key is not set properly
Apr 07, 2020 8:05:26 AM INFO hudson.model.AsyncPeriodicWork lambda$doRun$0
Finished Datadog Counters Publisher. 5 ms
Apr 07, 2020 8:05:36 AM INFO hudson.model.AsyncPeriodicWork lambda$doRun$0
Started Datadog Counters Publisher
Apr 07, 2020 8:05:36 AM SEVERE org.datadog.jenkins.plugins.datadog.clients.DatadogHttpClient getInstance
Datadog API Key is not set properly

FYI, this is not the same issue reported here - while they are both related in the same area, the below was a regression which meant the logs were firing as exceptions:

To Reproduce
Steps to reproduce the behavior:

  1. Install latest datadog plugin (though this doesn't look to be a regression, it's been the behaviour for some time)
  2. Make sure an API token is not configured for the plugin (or the current one has expired)
  3. observe the Jenkins log

Expected behavior
The datadog plugin should not spam the jenkins log in such a way that the log becomes unusable for debugging other issues that may occur.

Other plugins, such as the GitHub plugin, also write to the log when an API token is not valid, e.g. when webhook creation fails, but not at such high frequency.

Additional context
We operate an environment where we provide jenkins masters to other teams, maintaining a core set of plugins. Not all teams will leverage the plugins we have installed, such as the datadog plugin, and may not configure an API token for the plugin.

Others may have expired API tokens and as a result, if we need to debug any issues, the logs are not usable.

Datadog Log Shipper failing

Hello all, running the latest Jenkins and Datadog plugin. Log shipping to a Datadog Agent running on the cluster used to work; now I just get this:

Feb 12, 2021 12:54:21 AM org.jenkinsci.plugins.workflow.log.TaskListenerDecorator$DecoratedTaskListener getLogger
WARNING: null
java.lang.NullPointerException
at org.datadog.jenkins.plugins.datadog.logs.DatadogTaskListenerDecorator.decorate(DatadogTaskListenerDecorator.java:53)
at org.jenkinsci.plugins.workflow.log.TaskListenerDecorator$DecoratedTaskListener.getLogger(TaskListenerDecorator.java:237)
at org.jenkinsci.plugins.workflow.log.TaskListenerDecorator$CloseableTaskListener.getLogger(TaskListenerDecorator.java:279)
at hudson.Launcher$RemoteLaunchCallable.call(Launcher.java:1381)
at hudson.Launcher$RemoteLaunchCallable.call(Launcher.java:1333)
at hudson.remoting.UserRequest.perform(UserRequest.java:211)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:376)
at hudson.remoting.InterceptingExecutorService.lambda$wrap$0(InterceptingExecutorService.java:78)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:119)
at java.lang.Thread.run(Thread.java:748)

datadog helm chart version: 2.8.3
jenkins helm chart version: 3.1.8

datadog plugin 2.8.1

Latest plugin version causing exception in system log, when no API token configured

Description
Whenever an API token is not configured for the datadog plugin, the following output is repeatedly output to the Jenkins system log:

Feb 03, 2020 4:27:37 PM SEVERE org.datadog.jenkins.plugins.datadog.clients.DatadogHttpClient getInstance
Datadog API Key is not set properly
Feb 03, 2020 4:27:37 PM SEVERE org.datadog.jenkins.plugins.datadog.DatadogUtilities severe
An unexpected error occurred: java.lang.RuntimeException: Datadog API Key is not set properly
	at org.datadog.jenkins.plugins.datadog.clients.DatadogHttpClient.getInstance(DatadogHttpClient.java:89)
	at org.datadog.jenkins.plugins.datadog.clients.ClientFactory.getClient(ClientFactory.java:38)
	at org.datadog.jenkins.plugins.datadog.clients.ClientFactory.getClient(ClientFactory.java:60)
	at org.datadog.jenkins.plugins.datadog.listeners.DatadogSaveableListener.onChange(DatadogSaveableListener.java:61)
	at hudson.model.listeners.SaveableListener.fireOnChange(SaveableListener.java:81)
	at jenkins.model.Jenkins.save(Jenkins.java:3318)
	at hudson.model.Saveable$save.call(Unknown Source)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:117)
	at 1-initMasterBaseConfiguration.run(1-initMasterBaseConfiguration.groovy:30)
	at groovy.lang.GroovyShell.evaluate(GroovyShell.java:585)
	at jenkins.util.groovy.GroovyHookScript.execute(GroovyHookScript.java:136)
	at jenkins.util.groovy.GroovyHookScript.execute(GroovyHookScript.java:127)
	at jenkins.util.groovy.GroovyHookScript.run(GroovyHookScript.java:110)
	at hudson.init.impl.GroovyInitScript.init(GroovyInitScript.java:41)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at hudson.init.TaskMethodFinder.invoke(TaskMethodFinder.java:104)
	at hudson.init.TaskMethodFinder$TaskImpl.run(TaskMethodFinder.java:175)
	at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:296)
	at jenkins.model.Jenkins$5.runTask(Jenkins.java:1121)
	at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:214)
	at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:117)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Regression?
This looks like a regression in the latest plugin version (either 1.0.2 or 1.0.1). Version 1.0.0 does not show this behaviour, and I have been able to confirm and reproduce this on multiple different Jenkins masters.

To Reproduce

  1. Install (or upgrade) to latest datadog plugin
  2. Don't configure an API token for datadog in the jenkins configuration
  3. Observe the output in the system log

Environment and Versions (please complete the following information):

  • Jenkins v 2.204.2 (also repo on v 2.204.1)
  • Datadog plugin 1.0.2

Delivery KPI Metric Numbers Are Not Correct

Describe the bug
I am trying to create a dashboard using the KPI metrics jenkins.job.cycletime, jenkins.job.mtbf, jenkins.job.mttr. The metric numbers reported are incorrect.

To Reproduce

  1. Generate a sample pipeline.
  2. Do something to make it fail, then wait 10 minutes.
  3. Run it again without a failure.
  4. Wait for the metrics to show up in Datadog. The mean time between failures is reported in microseconds.

Expected behavior
MTBF should be reported in minutes.

Environment and Versions:
Jenkins: 2.263.1.2
Plugin: 2.4.0

Additional context
These KPI metrics were added in this PR: DataDog/jenkins-datadog-plugin#156
And they are based off of this work https://github.com/stelligent/pipeline-dashboard/blob/master/README.md#metric-details
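For comparison, here is a minimal sketch of how MTTR could be computed in plain seconds (my reading of the metric definitions linked above, not the plugin's actual implementation). With the repro above (a failure, a 10-minute wait, then a success), this yields 600 seconds, i.e. 10 minutes:

```python
def mttr_seconds(builds):
    """Mean time to recovery: average gap from the first failure in a
    failing streak to the next successful build.
    builds: list of (timestamp_seconds, succeeded_bool), ordered by time."""
    repairs = []
    failed_at = None
    for ts, ok in builds:
        if not ok and failed_at is None:
            failed_at = ts          # start of a failing streak
        elif ok and failed_at is not None:
            repairs.append(ts - failed_at)
            failed_at = None
    return sum(repairs) / len(repairs) if repairs else 0.0
```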

Here is what the values should look like:
image

Cannot use CI Visibility with DogStatsD over Unix Domain Socket

I'm using the Datadog plugin on Jenkins with the Datadog Agent and a Unix socket configuration.
This is working fine for metrics reporting, but not for CI Visibility/traces. Looking at Jenkins logs during a pipeline run, I see the plugin is trying to use HTTP instead of the configured DSD, and so it fails (screenshot further below).

I've tried looking at the plugin source code and found no clue, so I'm wondering if there's something I'm missing, but I suspect it's a bug, i.e. that the plugin was not designed with the unix socket configuration in mind (cf. #242). Thanks for any insights you might have!

To Reproduce
Steps to reproduce the behavior:

  1. Set up the plugin using a Unix Socket configuration
  2. Enable CI Visibility (you'll have to set a port to enable traces collection; I set 0), and save the configuration
  3. Create a Jenkins logger for Datadog-related classes (see screenshot below)
  4. Launch a Jenkins build and let it run to completion
  5. Check logs for the logger created in step 3
  6. See errors (e.g. "protocol = http host = null")

Expected behavior
The plugin should use the DSD configuration (not HTTP), traces should be sent without error, and Jenkins build data should show up in Datadog's Pipeline Visibility feature.

Screenshots

Here's how the plugin configuration looks in Jenkins:

image

(Note that I have to set 0 for the Traces Collection Port to enable CI Visibility; the "Test traces connection" button fails, but maybe this is expected behavior in this case? I can still save the configuration.)

Here is the Jenkins logger configuration:

image

Here are the logs that show the error:

image

Environment and Versions:

  • Datadog Plugin version 3.5.0
  • Jenkins version: 2.336
  • Jenkins chart version: 3.11.5

Additional context

Jenkins Logs

Here are the same logs as in the previous screenshot (included as text for searches):

Start DatadogBuildListener#onFinalized
Mar 23, 2022 3:38:10 PM FINE org.datadog.jenkins.plugins.datadog.DatadogUtilities
The list of Global Job Tags are: []
Mar 23, 2022 3:38:10 PM FINE org.datadog.jenkins.plugins.datadog.DatadogUtilities
Using unix hostname found via `/bin/hostname -f`. Hostname: jenkins-0.jenkins.ep.svc.cluster.local
Mar 23, 2022 3:38:10 PM FINE org.datadog.jenkins.plugins.datadog.clients.DatadogAgentClient
Finished build trace
Mar 23, 2022 3:38:10 PM FINE org.datadog.jenkins.plugins.datadog.listeners.DatadogBuildListener
End DatadogBuildListener#onFinalized
Mar 23, 2022 3:38:10 PM FINE org.datadog.jenkins.plugins.datadog.DatadogUtilities
Jenkins proxy configuration not found
Mar 23, 2022 3:38:10 PM FINE org.datadog.jenkins.plugins.datadog.DatadogUtilities
Using HttpURLConnection, without proxy
Mar 23, 2022 3:38:10 PM SEVERE org.datadog.jenkins.plugins.datadog.DatadogUtilities severe
protocol = http host = null
Mar 23, 2022 3:38:10 PM FINER org.datadog.jenkins.plugins.datadog.transport.LoggerHttpErrorHandler
protocol = http host = null: java.lang.IllegalArgumentException: protocol = http host = null
	at java.base/sun.net.spi.DefaultProxySelector.select(DefaultProxySelector.java:192)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1181)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1015)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1367)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1342)
	at org.datadog.jenkins.plugins.datadog.transport.HttpSender.blockingSend(HttpSender.java:77)
	at org.datadog.jenkins.plugins.datadog.transport.HttpSender.run(HttpSender.java:55)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)

Mar 23, 2022 3:38:10 PM SEVERE org.datadog.jenkins.plugins.datadog.DatadogUtilities severe
java.lang.IllegalArgumentException: protocol = http host = null
Mar 23, 2022 3:38:10 PM FINE org.datadog.jenkins.plugins.datadog.clients.DatadogAgentClient
Send pipeline traces.
Mar 23, 2022 3:38:10 PM FINER org.datadog.jenkins.plugins.datadog.transport.LoggerHttpErrorHandler
java.lang.IllegalArgumentException: protocol = http host = null: java.lang.RuntimeException: java.lang.IllegalArgumentException: protocol = http host = null
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1534)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1520)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:3135)
	at java.base/java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:536)
	at org.datadog.jenkins.plugins.datadog.transport.HttpSender.blockingSend(HttpSender.java:90)
	at org.datadog.jenkins.plugins.datadog.transport.HttpSender.run(HttpSender.java:55)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.IllegalArgumentException: protocol = http host = null
	at java.base/sun.net.spi.DefaultProxySelector.select(DefaultProxySelector.java:192)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1181)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1015)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1592)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1520)
	at java.base/java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:527)
	... 7 more

Plugin configuration with environment variable not updated

Describe the bug

I have Jenkins with the Datadog plugin.
My Jenkins instance is dockerized on a Kubernetes cluster and deployed via Helm chart.
I configure the following env vars on my container:

DATADOG_JENKINS_PLUGIN_REPORT_WITH : DSD
DATADOG_JENKINS_PLUGIN_TARGET_HOST : IP of my Kubernetes node, dynamically resolved
DATADOG_JENKINS_PLUGIN_TARGET_PORT : 8125

If I deploy a fresh Jenkins without plugin configuration, the configuration is OK; I can see it via the Jenkins administration UI.
If I redeploy my Jenkins and change the value of DATADOG_JENKINS_PLUGIN_TARGET_HOST, the configuration is not updated.

Another thing: if I first configure the plugin manually in the admin UI, then set the environment variables and restart the container, the configuration in the UI is not updated with the environment values.

To Reproduce
Steps to reproduce the behavior:

  1. Deploy jenkins from scratch with datadog plugin on docker with env (DATADOG_JENKINS_PLUGIN_REPORT_WITH...)
  2. Go to the admin datadog panel and check the configuration is ok
  3. Change the value of the env (DATADOG_JENKINS_PLUGIN_TARGET_HOST) and redeploy the container
  4. Go to the admin datadog panel and see the configuration of target host has not been updated

Expected behavior

The Jenkins Datadog plugin configuration should be updated with the correct values.
Maybe this behaviour is intended, but in that case the plugin documentation is not clear about it: https://docs.datadoghq.com/integrations/jenkins/

Environment and Versions

Jenkins 2.176.3 with Datadog plugin 1.1.2, dockerized and hosted on Kubernetes (AKS cluster)

jenkins.job.stage_duration - result shows failure even when stage is not executed

Describe the bug
I am not sure if this is a bug or an enhancement, but when checking the jenkins.job.stage_duration { result } metric, if one of the previous stages failed, all subsequent stages are marked as failed. In reality none of those were executed: the Blue Ocean UI shows them as not executed and the log shows them as skipped, but the Datadog metric shows them as failed. When trying to graph which stage failed for a pipeline, we get very high variation.

severity/high

To Reproduce
Steps to reproduce the behavior:
Run the following pipeline and you will see that the Deploy stage shows as failed even though it did not run.
The duration will also be reported (in seconds) even though the stage did not run at all.

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo "Build"
            }
        }
        stage('Test') {
            steps {
                echo "Test"
                sh "exit -1"
            }
        }
        stage('Deploy') {
            steps {
                sh "sleep 300"
            }
        }
    }
}

Expected behavior
I am expecting that "Deploy" stage above would be marked as skipped or not logged at all.

Screenshots
image

Environment and Versions (please complete the following information):
Using latest jenkins - Jenkins 2.249.2
Using latest datadog plugin - 2.0.0 (I see the same error with latest master branch local build as well)

Additional context

CasC: default values are not respected

Describe the bug
If CasC is used to configure the Datadog plugin, settings that are not explicitly set in CasC will not be changed, so values from the existing config (set on the global configuration page) persist after the CasC config is applied.

To Reproduce
Steps to reproduce the behavior:

  1. Set up a CasC config without some optional settings for the Datadog plugin (without enableCiVisibility, for example):
  datadogGlobalConfiguration:
    emitSecurityEvents: true
    emitSystemEvents: true
    hostname: "host"
    reportWith: "DSD"
    targetHost: "localhost"
    targetPort: "8125"
    targetTraceCollectionPort: "8126"
  2. Apply the CasC config
  3. Go to global configuration
  4. Ensure that the Datadog plugin has default values for all unset settings (CI Visibility is disabled)
  5. Change one of the unset settings via the UI on the global configuration page (enable CI Visibility, for example)

image

  6. Apply and save changes.
  7. Apply the CasC config again
  8. Go to global configuration
  9. See that the settings are not changed (CI Visibility is still enabled)

Expected behavior
After applying CasC I'd expect overriding unset settings with default values. In other words I want to make sure that if CasC config is used, it is the single source of truth for Datadog plugin settings.
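Until defaults are applied, one possible workaround sketch is to pin the otherwise-optional settings explicitly in the CasC config so UI changes cannot drift (field names as used in the config above; whether `enableCiVisibility: false` is accepted here is an assumption based on the plugin's CasC export):

```yaml
unclassified:
  datadogGlobalConfiguration:
    reportWith: "DSD"
    targetHost: "localhost"
    targetPort: "8125"
    targetTraceCollectionPort: "8126"
    enableCiVisibility: false  # pinned explicitly instead of relying on the default
```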

Environment and Versions (please complete the following information):
CasC plugin: 1512.vb_79d418d5fc8
datadog plugin: 5.0.0
Jenkins LTS: 2.361.1

Enable build log scanning for custom metrics

Currently the plugin offers only limited, coarse-grained metrics regarding build failures. There is no way to add extra custom metrics based on the type of error.

I'd suggest adding a list of pairs of string fields to the plugin configuration, mapping log lines to custom metrics. Then, on every build, lazily scan the lines for the configured regexes and increment the relevant custom metric counters. Logs can be retrieved post-build from the Jenkins master's file system, and adding metrics in the plugin is straightforward, so the effort should be relatively low.
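The suggested scan could be sketched as follows (names are hypothetical; a real implementation would increment the plugin's metric counters instead of returning a mapping):

```python
import re
from collections import Counter

def scan_log(lines, patterns):
    """lines: iterable of log lines (e.g. an open file handle, so the scan
    stays lazy); patterns: hypothetical config mapping regex -> metric name."""
    compiled = [(re.compile(rx), metric) for rx, metric in patterns.items()]
    counts = Counter()
    for line in lines:
        for rx, metric in compiled:
            if rx.search(line):
                counts[metric] += 1  # would become a DogStatsD increment
    return counts
```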

I have already partially implemented such a solution for a project (we required more fine-grained control over the error messages due to multiple error causes), so if you want I could raise a PR with it.

Metrics and/or service checks for available upgrades or plugins with warnings / security issues

Note:
If you have a feature request, you should contact support so the request can be properly tracked.

Is your feature request related to a problem? Please describe.
I would like to be able to get an alert when Jenkins upgrades are available or plugins with warnings are published.

Describe the solution you'd like
A service check for when an upgrade is available and metric with the count of plugins with warnings.

Describe alternatives you've considered
I haven't considered alternatives.

Additional context
I'd like to alert in Datadog on the warnings that show up as notifications here in Jenkins:

image

CI Traces not being sent to agent because BuildSpanAction is null

Describe the bug
No data is being sent to the Datadog tracer agent on our server for any of the jobs being run, instead we see a log message like this in our System Log:
Unable to set trace ids as environment variables. in Run 'loopio-app » Pull Requests » sub-jobs » react-unit-tests #3517'. BuildSpanAction is null

To Reproduce
Steps to reproduce the behavior:

  1. Install the Datadog Agent on your Jenkins server as per the documentation
  2. Install the Jenkins Datadog plugin
  3. Enable CI Tracing in the Datadog section of the Configure System screen
  4. Test the connection to the trace port successfully
  5. Go to the System Log and find the Datadog logs that include "Unable to set trace ids as environment variables" and "BuildSpanAction is null"
  6. Check your trace-agent.log on your server to find "No data received"

Expected behavior
After installing the plugin, I expected it to send build data to the installed Datadog agent, but no data is being sent

Screenshots

Environment and Versions (please complete the following information):
A clear and precise description of your setup:
Jenkins 2.426.3
Datadog Plugin Version 6.0.2
Datadog Agent 7.52.1

Additional context
I don't know exactly how to reproduce it: I took the exact same steps on two different Jenkins servers, and it works on one of them but produces this error on the other. It could come down to some minor configuration difference; if you can point me in the right direction to update it, I can close this ticket. But I can't find any reference to the error we are seeing anywhere besides the GitHub source code.

StepDataAction objects consume large amounts of memory

Describe the bug
We are using the Bitbucket Branch Source plugin with Jenkins in order to create new jobs for each branch and PR for our git repositories. While trying to figure out what was consuming so much memory on our Jenkins box I noticed we have many WorkflowRun objects containing large StepDataAction objects.

I took a look at a few of the WorkflowRun objects and they correspond to builds that are at least 24 hours old. By this point I would expect the memory to be cleared. However, as seen in the attached screenshots, we have quite a few large StepDataAction objects persisting in memory. Most of the memory is taken up by copies of the environment variables that are stored in StepData.

Expected behavior
StepDataAction memory to be cleared once no longer required.

Screenshots
Screen Shot 2021-07-01 at 3 14 46 PM
Screen Shot 2021-07-01 at 4 40 49 PM

Environment and Versions (please complete the following information):
Jenkins 2.249.1
Datadog Plugin 2.13.0
Bitbucket Branch Source 2.4.2

Allow tagging file to be used from repo

I would like to be able to use something similar to a Jenkinsfile from a repository to set the tags sent to Datadog. This would allow a faster way to set global tagging for all reporting, and to quickly add/modify/delete tags across an enterprise scenario.

criticality/medium

Support for Env Vars in the Global job tags,

It would be great if we could use env vars in the "Global job tags" setting.
Something like (.*?)/(.*?)/(.*?),a:$1,b:$2,c:$3,team:$TEAM_NAME.

This will be useful to set custom tags when a job is executed.

Thanks.
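A sketch of the intended expansion (hypothetical helper, not the plugin's implementation): capture groups from the job-name regex are substituted for $1..$n in the tag template, and any remaining $NAME tokens are resolved from the build's environment.

```python
import re

def expand_tags(job_name, pattern, template, env):
    """Expand $1..$n capture groups and $NAME env vars in a tag template."""
    match = re.match(pattern, job_name)
    if not match:
        return None

    def repl(m):
        token = m.group(1)
        if token.isdigit():           # numbered capture group, e.g. $2
            return match.group(int(token))
        return env.get(token, "")     # environment variable, e.g. $TEAM_NAME

    return re.sub(r"\$(\w+)", repl, template)

tags = expand_tags(
    "org/repo/main",
    r"(.*?)/(.*?)/(.*?)$",
    "a:$1,b:$2,c:$3,team:$TEAM_NAME",
    {"TEAM_NAME": "platform"},
)
```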

"Hmmm, your API key may be invalid."

Describe the bug

Unable to send metrics when using API URL https://api.datadoghq.eu/api/
Receive the error "API key seems to be invalid"

To Reproduce

Steps to reproduce the behavior:

  1. Go to 'Manage Jenkins -> Configure System -> Datadog Plugin
  2. Enter 'https://api.datadoghq.eu/api/' for API URL field
  3. Enter your API key from https://datadoghq.eu/account/settings#api to API Key field
  4. See error Hmmm, your API key seems to be invalid.

Expected behavior

Since I haven't seen a successful test yet, I'm assuming something along the lines of "API key valid"

Screenshots
screenshot

Environment and Versions (please complete the following information):

A clear and precise description of your setup:

  • Jenkins ver. 2.190.2 (bitnami) on Debian GNU/Linux 9 (stretch)
  • Datadog Plugin 1.2.0

Additional context

Logs always show this line:

30-Oct-2020 16:54:31.745 SEVERE [Handling POST /jenkins/descriptorByName/org.datadog.jenkins.plugins.datadog.DatadogGlobalConfiguration/testConnection from xxx.xxx.xxx.xxx : ajp-nio-8009-exec-5] org.datadog.jenkins.plugins.datadog.clients.DatadogHttpClient.validateDefaultIntakeConnection Hmmm, your API key may be invalid. We received a 403 error.

Our instance is behind a firewall but the ranges from the sections api and logs of https://ip-ranges.datadoghq.eu/ are whitelisted.
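To rule the plugin out, the key can be checked from the Jenkins host directly against Datadog's /api/v1/validate endpoint with the DD-API-KEY header; the endpoint is Datadog's, but the helper below is just a sketch of how the request is assembled from the plugin's "API URL" field:

```python
def build_validate_request(api_url, api_key):
    """Build URL and headers for Datadog's API key validation endpoint.

    api_url is the plugin's "API URL" value, e.g. https://api.datadoghq.eu/api/
    A 403 from this endpoint means the key is not valid for that site.
    """
    url = api_url.rstrip("/") + "/v1/validate"
    return url, {"DD-API-KEY": api_key}

url, headers = build_validate_request("https://api.datadoghq.eu/api/", "<your-key>")
# then, from the Jenkins host:  curl -s -H "DD-API-KEY: <your-key>" "$url"
```

A 403 from curl as well would point at the key/site mismatch or the firewall rather than the plugin.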
