
datadogpy's Introduction

The Datadog Python library


The Datadog Python Library is a collection of tools suitable for inclusion in existing Python projects or for the development of standalone scripts. It provides an abstraction on top of Datadog's raw HTTP interface and the Agent's DogStatsD metrics aggregation server, to interact with Datadog and efficiently report events and metrics.

See CHANGELOG.md for changes.

Installation

To install with pip:

pip install datadog

To install from source:

python setup.py install

Datadog API

To support all Datadog HTTP APIs, a generated library is available that exposes every endpoint: datadog-api-client-python.

Below is a working example of submitting an event to your Event Stream:

from datadog import initialize, api

options = {
    "api_key": "<YOUR_API_KEY>",
    "app_key": "<YOUR_APP_KEY>",
}

initialize(**options)

title = "Something big happened!"
text = "And let me tell you all about it here!"
tags = ["version:1", "application:web"]

api.Event.create(title=title, text=text, tags=tags)

Consult the full list of supported Datadog API endpoints with working code examples in the Datadog API documentation.

Note: The full list of available Datadog API endpoints is also available in the Datadog Python Library documentation.
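For completeness, here is a minimal sketch of submitting a metric through the same HTTP API, assuming initialize() has been called as in the example above; the metric name and tags are arbitrary examples:

import time

from datadog import api

# Submit a single gauge point for the current timestamp via the HTTP API
api.Metric.send(
    metric="example.page.views",
    points=[(int(time.time()), 42)],
    tags=["application:web"],
)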

Environment Variables

As an alternative to calling the initialize function with the options parameters, set the environment variables DATADOG_API_KEY and DATADOG_APP_KEY within the context of your application.

If DATADOG_API_KEY or DATADOG_APP_KEY are not set, the library attempts to fall back to Datadog's APM environment variable prefixes: DD_API_KEY and DD_APP_KEY.

from datadog import initialize, api

# Assuming you've set `DD_API_KEY` and `DD_APP_KEY` in your env,
# initialize() will pick them up automatically
initialize()

title = "Something big happened!"
text = "And let me tell you all about it here!"
tags = ["version:1", "application:web"]

api.Event.create(title=title, text=text, tags=tags)

In development, you can disable all statsd metric collection by setting DD_DOGSTATSD_DISABLE=True (or any non-empty value).

DogStatsD

In order to use DogStatsD metrics, the Agent must be running and available.

Instantiate the DogStatsD client with UDP

Once the Datadog Python Library is installed, instantiate the StatsD client using UDP in your code:

from datadog import initialize, statsd

options = {
    "statsd_host": "127.0.0.1",
    "statsd_port": 8125,
}

initialize(**options)

See the full list of available DogStatsD client instantiation parameters.
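As a quick sanity check, here is a minimal sketch of sending a metric through the client configured above; the metric name and tag are arbitrary examples:

from datadog import statsd

# Increment a counter; the metric name and tag are illustrative only
statsd.increment("example.requests.count", tags=["environment:dev"])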

Instantiate the DogStatsd client with UDS

Once the Datadog Python Library is installed, instantiate the StatsD client using UDS in your code:

from datadog import initialize, statsd

options = {
    "statsd_socket_path": PATH_TO_SOCKET,
}

initialize(**options)

Origin detection over UDP and UDS

Origin detection is a method to detect which pod DogStatsD packets are coming from in order to add the pod's tags to the tag list. The DogStatsD client attaches an internal tag, entity_id. The value of this tag is the content of the DD_ENTITY_ID environment variable if found, which is the pod's UID. The Datadog Agent uses this tag to add container tags to the metrics. To avoid overwriting this global tag, make sure to only append to the constant_tags list.
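For illustration, a hedged sketch of setting constant_tags when creating a dedicated client instance (the host, port, and tag values are arbitrary; DogStatsd is datadogpy's client class):

from datadog.dogstatsd import DogStatsd

# constant_tags is set at construction time; the client appends its internal
# entity_id tag to this list, so append further global tags to it later
# rather than replacing the list.
statsd = DogStatsd(host="127.0.0.1", port=8125, constant_tags=["application:web"])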

To enable origin detection over UDP, add the following lines to your application manifest

env:
  - name: DD_ENTITY_ID
    valueFrom:
      fieldRef:
        fieldPath: metadata.uid

Usage

Metrics

After the client is created, you can start sending custom metrics to Datadog. See the dedicated Metric Submission: DogStatsD documentation for how to submit all supported metric types to Datadog, with working code examples.

Some options are supported when submitting metrics, such as applying a sample rate or tagging your metrics with custom tags.
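For example, a minimal sketch of a tagged metric and a sampled metric using the shared statsd client (metric names, tags, and the sample rate are arbitrary):

from datadog import statsd

# Gauge with custom tags
statsd.gauge("example.queue.depth", 12, tags=["queue:orders", "environment:dev"])

# Counter sampled at 50%: the client sends roughly half the calls and the
# value is scaled back up server-side
statsd.increment("example.cache.miss", sample_rate=0.5)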

Events

After the client is created, you can start sending events to your Datadog Event Stream. See the dedicated Event Submission: DogStatsD documentation for how to submit an event to your Datadog Event Stream.
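For example, a minimal sketch of sending an event through the statsd client; the title, text, and tags are arbitrary:

from datadog import statsd

# Send an event to the Event Stream through DogStatsD
statsd.event(
    "An example deploy finished",
    "Version 1.2.3 was deployed to the web tier.",
    alert_type="info",
    tags=["application:web", "version:1.2.3"],
)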

Service Checks

After the client is created, you can start sending Service Checks to Datadog. See the dedicated Service Check Submission: DogStatsD documentation for how to submit a Service Check to Datadog.
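For example, a minimal sketch of reporting a service check through the statsd client; the check name, tags, and message are arbitrary, and the numeric status follows the DogStatsD convention of 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN:

from datadog import statsd

# Report an OK status (0) for a hypothetical check
statsd.service_check(
    "example.application.healthy",
    0,  # 0 = OK
    tags=["application:web"],
    message="All good.",
)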

Monitoring this client

This client automatically injects telemetry about itself into the DogStatsD stream. These metrics are not counted as custom metrics and are not billed. This feature can be disabled with the statsd.disable_telemetry() method.

See Telemetry documentation to learn more about it.

Benchmarks

Note: You will need to install the psutil package before running the benchmarks.

If you would like an approximate idea of the throughput that your DogStatsD library can handle on your system, you can run the included local benchmark code:

$ # Python 2 Example
$ python2 -m unittest -vvv tests.performance.test_statsd_throughput

$ # Python 3 Example
$ python3 -m unittest -vvv tests.performance.test_statsd_throughput

You can also set BENCHMARK_* environment variables to customize the runs:

$ # Example #1
$ BENCHMARK_NUM_RUNS=10 BENCHMARK_NUM_THREADS=1 BENCHMARK_NUM_DATAPOINTS=5000 BENCHMARK_TRANSPORT="UDP" python2 -m unittest -vvv tests.performance.test_statsd_throughput

$ # Example #2
$ BENCHMARK_NUM_THREADS=10 BENCHMARK_TRANSPORT="UDS" python3 -m unittest -vvv tests.performance.test_statsd_throughput

Maximum packet size in high-throughput scenarios

To make the most efficient use of this library in high-throughput scenarios, default values for the maximum packet size are already set for both UDS (8192 bytes) and UDP (1432 bytes) to make the best use of the underlying network. However, if you know your network well and a different maximum packet size suits it better, you can set it with the max_buffer_len parameter. Example:

from datadog import initialize

options = {
    "api_key": "<YOUR_API_KEY>",
    "app_key": "<YOUR_APP_KEY>",
    "max_buffer_len": 4096,
}

initialize(**options)

Thread Safety

DogStatsD and ThreadStats are thread-safe.
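To illustrate, a minimal sketch of several threads sharing the single module-level statsd client; the thread count and metric name are arbitrary:

import threading

from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)

def worker():
    # Each thread uses the same shared client instance
    for _ in range(1000):
        statsd.increment("example.threaded.counter", tags=["source:worker"])

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()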

datadogpy's People

Contributors

ahmed-mez, bitnot, bkabrda, clokep, enbashi, ewdurbin, gzussa, jbarciauskas, jd, jirikuncar, johnistan, miketheman, mlaureb, nilabhsagar, nkzou, nmuesch, ofek, prognant, ronindesign, ross, sgnn7, skarimo, ssc3, thehesiod, therve, unclebconnor, vickenty, xvello, yannmh, zippolyte


datadogpy's Issues

socket.connect does not appear to be thread safe

Seeing a lot of socket errors logged by DogStatsd when running multiple threads. The issue appears to be related to the use of socket.connect with socket.send. The alternative, socket.sendto, should fix the issue.

The following is a simplification of the flow, but consistently reproduces the issue.

import threading
import socket

class WithConnect(object):

    def __init__(self):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.connect(('127.0.0.1', 9991))
        self.error_count = 0

    def run(self):
        try:
            self.sock.send('hi')
        except:
            self.error_count += 1

class WithoutConnect(object):

    def __init__(self):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.error_count = 0

    def run(self):
        try:
            self.sock.sendto('hi', ('127.0.0.1', 9990))
        except:
            self.error_count += 1

wc = WithConnect()
woc = WithoutConnect()

def main():
    wc.run()
    woc.run()

threads = [
    threading.Thread(target=main)
    for i in range(100)
]

map(lambda t: t.start(), threads)
map(lambda t: t.join(), threads)

print 'WithConnect.error_count = %d' % wc.error_count
print 'WithoutConnect.error_count = %d' % woc.error_count

Error handling: APIError is never raised

This library never returns the status code of the HTTP request.
Instead, when the status code is 400, 403, or 404, it is supposed to raise an APIError exception with the content of the errors field received from the dispatcher (https://github.com/DataDog/datadogpy/blob/master/datadog/api/base.py#L152).
For all other 3xx, 4xx, and 5xx status codes, an HTTPError is raised.

So catching errors and interacting with the API should be a matter of catching APIError and HTTPError exceptions.

BUT, the _swallow attribute prevents APIError from being raised (https://github.com/DataDog/datadogpy/blob/master/datadog/api/base.py#L171-L172). It's set to True in api/__init__.py and is not modifiable at the moment.

A temporary workaround is:

from datadog import api
api._swallow = False

We could evaluate the impact of a potential transition where _swallow stays True by default but can also be set when initializing the api.

Issue when parsing metrics containing "NaN" in the value field

I have a Dropwizard project that ships a bunch of metrics to DogStatsD. When the app is initialized, there are a few metrics that report "NaN" in the value field (this will happen until requests start coming in). I notice that during this time DogStatsD throws exceptions that then get logged (in our case, they get logged into SumoLogic, which we are paying for by "space used").

It would be nice if DogStatsD handled this type of value, since NaN is part of the Java language specification. It seems that "NaN" should result in a value of 0 (for the Dropwizard metrics, the value comes from "0.0d / 0.0"), especially when being used for metrics.

Splunk integration via dogapi: message/text does not appear in the event

http://docs.datadoghq.com/integrations/splunk/

There are a few problems following the documentation.

  1. The env variables you're referencing don't seem to exist. This is a Splunk issue. The resolution is to use positional args $2, $3, etc. rather than environment variables.
  2. The "body" of the event does not end up in the event on the site; it is empty.
#!/bin/bash
API_KEY=your_api_key
APP_KEY=your_application_key
dog --api-key $API_KEY --application-key $APP_KEY event post \
"Found $SPLUNK_ARG_1 events in splunk" \
"Matching $SPLUNK_ARG_2 based on $SPLUNK_ARG_5, from report $SPLUNK_ARG_4. More details at $SPLUNK_ARG_6." \
--aggregation_key $SPLUNK_ARG_3 --type splunk

I ended up using curl to get it all working. Here is a working example:

#!/bin/bash

API_KEY=
APP_KEY=


# http://docs.datadoghq.com/api/
#
# http://docs.splunk.com/Documentation/Splunk/6.2.2/Alert/Configuringscriptedalerts
#
#  Arg Value
#  1   Number of events returned
#  2   Search terms
#  3   Fully qualified query string
#  4   Name of report
#  5   Trigger reason
#  6   Browser URL to view the report.
#  7   Not used for historical reasons.
#  8   File in which the results for the search are stored. Contains raw results.


curl  -X POST -H "Content-type: application/json" \
-d '{
      "title": "Found '"$1"' events in splunk",
      "text": "Matching '"$2"' based on '"$5"', from report '"$4"'. More details at '"$6"'.",
      "aggregation_key": "'"$3"'",
      "source_type_name": "splunk"
  }' \
"https://app.datadoghq.com/api/v1/events?api_key=${API_KEY}&application_key=${APP_KEY}"

dogwrap: Trim output

When a command has output that makes the message exceed the 4000 character limit for events, the formatting gets completely lost, and so do the notifications that are appended (#88).

There should probably be a strategy in place to trim the output to something that fits in an event. I'd suggest it be configurable.

The total size of the event cannot be more than 4000 characters, so a few different strategies make sense to me; a rough sketch of the "both" strategy follows the list.

  • tail - show only the last ~3500 characters (4k minus however many bytes are used by headers/notifications/etc.), for when only the end of the command's output matters
  • head - show only the first ~3500 characters (same budget as above), for when the start of the output is all that matters
  • both (default?) - take roughly the first third of the budget (~1000 characters) from the top and the remaining two thirds (~2500) from the bottom, with a ...trimmed... marker in the middle
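A rough, hypothetical sketch of the "both" strategy, just to make the budget split concrete; the budget value and marker text are placeholders, not dogwrap's actual implementation:

def trim_output(output, budget=3500, marker="\n... trimmed ...\n"):
    # Keep roughly the first third of the budget from the head and the
    # remaining two thirds from the tail, joined by a marker.
    if len(output) <= budget:
        return output
    head_len = budget // 3
    tail_len = budget - head_len - len(marker)
    return output[:head_len] + marker + output[-tail_len:]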


Use dog without a .dogrc

Hi,

Is there a way to use dog without a dogrc? Ideally, to steal the api_key from /etc/dd-agent/datadog.conf?

I haven't had any luck with this hack:

dog --api-key $(grep api_key /etc/dd-agent/datadog.conf | head -n 1 | awk '{print $NF}') event post --tags deleteme 'test string'

Goal is to make sure crons are running successfully.

Thank you,
Teran

Python dogapi not compatible with Gunicorn WSGI server

Issue by alq666
Friday Feb 15, 2013 at 18:29 GMT
Originally opened as https://github.com/DataDog/dogapi/issues/42


Just noticed that running a Python Pyramid WSGI application using Gunicorn WSGI server in combination
with the datadog API gives me the following error.

Traceback (most recent call last):
  File "/home/ajung/.buildout/eggs/dogapi-1.1.2-py2.7.egg/dogapi/stats/statsd.py", line 29, in add_point
    self.socket_sendto(payload, self.address)
  File "/home/ajung/.buildout/eggs/eventlet-0.12.1-py2.7.egg/eventlet/greenio.py", line 300, in sendto
    trampoline(self.fd, write=True)
  File "/home/ajung/.buildout/eggs/eventlet-0.12.1-py2.7.egg/eventlet/hubs/__init__.py", line 119, in trampoline
    listener = hub.add(hub.WRITE, fileno, current.switch)
  File "/home/ajung/.buildout/eggs/eventlet-0.12.1-py2.7.egg/eventlet/hubs/epolls.py", line 48, in add
    listener = BaseHub.add(self, evtype, fileno, cb)
  File "/home/ajung/.buildout/eggs/eventlet-0.12.1-py2.7.egg/eventlet/hubs/hub.py", line 126, in add
    evtype, fileno, evtype))
RuntimeError: Second simultaneous write on fileno 12 detected. Unless you really know what you're doing, make sure that only one greenthread can write any particular socket. Consider using a pools.Pool. If you do know what you're doing and want to disable this error, call eventlet.debug.hub_multiple_reader_prevention(False)

Socket creation in DogStatsd class is not thread safe.

The code in get_socket() is:

    def get_socket(self):
        '''
        Return a connected socket
        '''
        if not self.socket:
            self.socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.socket.connect((self.host, self.port))
        return self.socket

This code is not thread safe and can result in failure in a multithreaded application, with potential loss of initial metric data points on process startup.

The function which triggers get_socket() is:

    def _send_to_server(self, packet):
        try:
            # If set, use socket directly
            (self.socket or self.get_socket()).send(packet.encode(self.encoding))
        except socket.error:
            log.info("Error submitting packet, will try refreshing the socket")
            self.socket = None
            try:
                self.get_socket().send(packet.encode(self.encoding))
            except socket.error:
                log.exception("Failed to send packet with a newly binded socket")

Specifically, the line:

            (self.socket or self.get_socket()).send(packet.encode(self.encoding))

Because in get_socket() the newly created socket is assigned directly to self.socket, a separate thread calling into _send_to_server() can see the socket before it has been connected.

The end result is a socket error:

error: [Errno 39] Destination address required

in the second thread, which can call send() on the unconnected socket.

This triggers the socket.error exception clause, which wipes out self.socket and sets it to None.

When the original thread returns from connect(), get_socket() returns self.socket, which has meanwhile been reset to None by the other thread, so the first thread ends up calling send() on None, resulting in:

[Sun Jun 07 19:28:26.101636 2015] [wsgi:error] [pid 89948:tid 4361912320]   File ".../datadog/dogstatsd/base.py", line 189, in _send_to_server
[Sun Jun 07 19:28:26.101660 2015] [wsgi:error] [pid 89948:tid 4361912320]     self.get_socket().send(packet.encode(self.encoding))
[Sun Jun 07 19:28:26.101681 2015] [wsgi:error] [pid 89948:tid 4361912320] AttributeError: 'NoneType' object has no attribute 'send'

To avoid this problem, get_socket() should be rewritten as:

    def get_socket(self):
        '''
        Return a connected socket
        '''
        if not self.socket:
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.connect((self.host, self.port))
            self.socket = sock
        return self.socket

In other words, assign the socket to a local variable and call connect() on it before assigning it to the instance attribute.

Note that although this avoids the presenting problem, the code still has a thread race condition because no thread locking is used. That is, multiple threads could create the socket connection with only one winning out. This will not cause any issue except that an extra socket connection will temporarily exist until it is closed through reference count reduction in CPython or garbage collection in PyPy.

This race condition is tolerable, although with the way the checks are done it would be trivial to make socket creation entirely thread safe without introducing lock contention beyond the initial creation, the first time the socket is required.

syntax error during initialization

I'm trying to run some code in Python 3.4 with datadog and keep seeing the following error:

from datadog import initialize
File "/opt/datadog-agent/agent/checks/datadog.py", line 168
self._error_count = 0L
                     ^

Is this a Python 3 compatibility issue?

Is it thread safe?

Hello,

Could you add information about thread safety? I'm specifically interested in the statsd variable.

Thanks

dogwrap: notifications lost

Notifications are lost if the output of the command wrapped with dogwrap is too long.

Perhaps you should prepend the Notifications: string instead of appending it.

Unmute all monitors fails with 403 forbidden

dog monitor unmute_all fails with a 403 Forbidden error:

dog monitor unmute_all
Traceback (most recent call last):
  File "/home/vagrant/dogweb/python/bin/dog", line 9, in <module>
    load_entry_point('datadog==0.4.0', 'console_scripts', 'dog')()
  File "/home/vagrant/dogweb/python/local/lib/python2.7/site-packages/datadog/dogshell/__init__.py", line 71, in main
    args.func(args)
  File "/home/vagrant/dogweb/python/local/lib/python2.7/site-packages/datadog/dogshell/monitor.py", line 177, in _unmute_all
    res = api.Monitor.unmute_all()
  File "/home/vagrant/dogweb/python/local/lib/python2.7/site-packages/datadog/api/monitors.py", line 95, in unmute_all
    return super(Monitor, cls)._trigger_class_action('POST', 'unmute_all')
  File "/home/vagrant/dogweb/python/local/lib/python2.7/site-packages/datadog/api/base.py", line 411, in _trigger_class_action
    return HTTPClient.request(method, cls._class_url + "/" + name, params)
  File "/home/vagrant/dogweb/python/local/lib/python2.7/site-packages/datadog/api/base.py", line 118, in request
    result.raise_for_status()
  File "/home/vagrant/dogweb/python/local/lib/python2.7/site-packages/requests/models.py", line 851, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden

It worked as expected (before the dogrc file was configured) by passing the keys on the command line, like this: dog --api-key {key} --application-key {key} monitor unmute_all. Once that file is configured, it no longer works.

`dog` commandline tool does not auto-detect hostname

if config.has_option('Connection', 'host_name'):
    self['host_name'] = config.get('Connection', 'host_name')

if args.localhostname:
    host = find_localhost()
else:
    host = args.host

This then requires each use of the dog command to specify the hostname, or the --localhostname argument, but there is no auto-detection in place by default.

My Principle of Least Surprise tells me that something should be detected by default, and the user should be allowed to override it with command line flags, config file values, environment variables, etc.

SSLError: [Errno 185090050] _ssl.c:344: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib

Hi,

I'm using dogstatsd on a Windows 7 machine and I always get this error:

2015-08-20 15:31:58 W. Europe Daylight Time | ERROR | dogstatsd(dogstatsd.pyc:216) | Unable to post payload.
Traceback (most recent call last):
  File "dogstatsd.pyc", line 206, in submit_http
  File "requests\api.pyc", line 94, in post
  File "requests\api.pyc", line 49, in request
  File "requests\sessions.pyc", line 457, in request
  File "requests\sessions.pyc", line 569, in send
  File "requests\adapters.pyc", line 420, in send
SSLError: [Errno 185090050] _ssl.c:344: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib

Do you have any idea how to get rid of this error??

Thanks in advance

add option to time in ms

Issue by clutchski
Monday Dec 17, 2012 at 17:20 GMT
Originally opened as https://github.com/DataDog/dogstatsd-python/issues/5


Add a use_ms option to the timed decorator to report in milliseconds instead of seconds, e.g.

from statsd import statsd

statsd.use_ms = True

# This will use ms
@statsd.timed('this.will.use.ms')
def foo():
    pass

# Or ....
from statsd import statsd

@statsd.timed('this.will.use.seconds')
def foo():
    pass

@statsd.timed('this.will.use.ms', use_ms=True)
def foo():
    pass

Context managed `statsd.timed`

Right now you can use statsd.timed as a decorator; it would be cool, when you want to measure the runtime of some arbitrary code, to be able to do:

with statsd.timed('execution_time'):
    [some code...]

Initialization exception on windows 7

Initializing datadogpy on Windows 7 results in the exception below.

Note: running Python 3.4

Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    datadog.initialize()
  File "C:\Python34\lib\site-packages\datadog\__init__.py", line 59, in initialize
    api._host_name = host_name if host_name is not None else get_hostname()
  File "C:\Python34\lib\site-packages\datadog\util\hostname.py", line 52, in get_hostname
    config = get_config()
  File "C:\Python34\lib\site-packages\datadog\util\config.py", line 122, in get_config
    config_path = get_config_path(cfg_path, os_name=get_os())
  File "C:\Python34\lib\site-packages\datadog\util\config.py", line 101, in get_config_path
    return _windows_config_path()
  File "C:\Python34\lib\site-packages\datadog\util\config.py", line 68, in _windows_config_path
    common_data = _windows_commondata_path()
  File "C:\Python34\lib\site-packages\datadog\util\config.py", line 62, in _windows_commondata_path
    path_buf = wintypes.create_unicode_buffer(wintypes.MAX_PATH)
AttributeError: 'module' object has no attribute 'create_unicode_buffer'

[feature] dogwrap should report `duration` as a metric

When using the CLI tool dogwrap in conjunction with a utility like cron, we get the event output on the Event Stream, which helps us determine success vs. failure.

Adding a duration metric for the event, which is already calculated and placed in the event title, would allow trending this particular event over time and potentially alerting on the value.
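A hedged sketch of the idea (not dogwrap's implementation): time the wrapped command and submit the duration through the shared statsd client; the metric name and tags are made up for illustration.

import subprocess
import time

from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)

start = time.time()
returncode = subprocess.call(["sleep", "2"])
duration = time.time() - start

# Submit the measured duration (in seconds) alongside the exit status
statsd.histogram(
    "example.dogwrap.duration",
    duration,
    tags=["cmd:sleep", "status:%d" % returncode],
)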

Monitor query DSL

It would be really nice to have a DSL to define monitor queries as opposed to having to build and send strings.

`initialize` method failing if `/etc/dd-agent/datadog.conf` is not accessible

Reported by a user:

The datadog python client v0.4 will exit with an error if the user running the script doesn't have permissions to access the datadog config file in initialize_datadog(). It appears it only accesses the file in order to get the hostname of the current machine. The code looks like it tries to fall back to other methods if the file doesn't exist, but doesn't appear to handle the permissions failure case.

DD API does not support `related_event_id` when creating events

I'm not sure if this is an issue with the datadog API itself or simply unsupported behavior exposed by dogshell here https://github.com/DataDog/datadogpy/blob/master/datadog/dogshell/event.py#L81-L82

The datadog API does not currently support a related_event_id attribute when creating events. It is not documented at http://docs.datadoghq.com/api/#events.

(python.2.7) $ dog --config /etc/dogrc event post --type=dave "dave.test.start" "This is a test event"
dave.test.start This is a test event  ()
2016-03-08 22:25:17 | https://app.datadoghq.com/event/event?id=442259632191669779
(python.2.7) $ dog --config /etc/dogrc event post --type=dave --related_event_id="442259632191669779" "dave.test.end" "This is a related test event"
ERROR: Invalid JSON structure

I have also tried posting the following JSON with curl, with the same result.

'{
      "title": "dave.related",
      "text": "dave related event",
      "priority": "normal",
      "alert_type": "info",
      "related_event_id": "442259632191669779"
  }'

I am able to successfully curl the following

'{
      "title": "dave.related",
      "text": "dave related event",
      "priority": "normal",
      "alert_type": "info"
  }'

Non-inherited methods

At the moment http://datadogpy.readthedocs.org/en/latest/#datadog.api.Monitor.create does not display any relevant information about what kind of parameters it needs. Even worse, all the create methods show the same docs because the method is inherited from a "generic" CreateableAPIResource class.

It's the same situation for other inherited methods.

This means that I'm forced to constantly look up the actual docs at http://docs.datadoghq.com/api/#monitors because this prevents tools such as PyCharm from displaying information about the actual parameters that each method needs.

Dogshell: Unicode error when piping/handling output containing international chars

I set up a monitor which contains an international character (ä).

When I run dog monitor show_all in the terminal, it outputs all monitors as expected.

But when I try to pipe the output or handle it at all, like dog monitor show_all > dog.txt, I get a Unicode error:

Traceback (most recent call last):
  File "/usr/local/bin/dog", line 9, in <module>
    load_entry_point('datadog==0.5.0', 'console_scripts', 'dog')()
  File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/datadog/dogshell/__init__.py", line 71, in main
    args.func(args)
  File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/datadog/dogshell/monitor.py", line 147, in _show_all
    (d["type"])]))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 26: ordinal not in range(128)

The user who reported this used a workaround: on line 142 of datadog/dogshell/monitor.py, replace (cls._escape(d["message"])) with (str(d["message"].encode('ascii','backslashreplace'))), but there may be a better way to handle this.

From: https://datadog.zendesk.com/agent/tickets/26802

import pdb left in 0.2.1 release, doesn't seem to be in repo

I tried to pip install datadog==0.2.1 and run our test suite.

This statement appears on line 179 of dogstatsd/base.py

import pdb; pdb.set_trace()

https://pypi.python.org/packages/source/d/datadog/datadog-0.2.1.tar.gz#md5=45dd40f381a4b019c91fafc7d4ae284a

That code doesn't appear to be in the current master branch https://github.com/DataDog/datadogpy/blob/master/datadog/dogstatsd/base.py#L176-L186

Can you cut a new release without this debugging statement in there?

`AttributeError: 'Namespace' object has no attribute 'func'`

Environment

  • Python 3.4.3
  • with pyenv

case without option:
/Users/papago% dog timeboard

Traceback (most recent call last):
  File "/Users/papago/.pyenv/versions/3.4.3/bin/dog", line 9, in <module>
    load_entry_point('datadog==0.9.0', 'console_scripts', 'dog')()
  File "/Users/papago/.pyenv/versions/3.4.3/lib/python3.4/site-packages/datadog/dogshell/__init__.py", line 74, in main
    args.func(args)
AttributeError: 'Namespace' object has no attribute 'func'

case with -h option:
/Users/papago% dog timeboard -h

usage: dog timeboard [-h] [--string_ids]
                     {post,update,show,show_all,pull,pull_all,push,new_file,web_view,delete}
                     ...

optional arguments:
  -h, --help            show this help message and exit
  --string_ids          Represent timeboard IDs as strings instead of ints in
                        JSON

Verbs:
  {post,update,show,show_all,pull,pull_all,push,new_file,web_view,delete}
    post                Create timeboards
    update              Update existing timeboards
    show                Show a timeboard definition
    show_all            Show a list of all timeboards
    pull                Pull a timeboard on the server into a local file
    pull_all            Pull all timeboards into files in a directory
    push                Push updates to timeboards from local files to the
                        server
    new_file            Create a new timeboard and put its contents in a file
    web_view            View the timeboard in a web browser
    delete              Delete timeboards
/Users/papago% 
