
opentelemetry-azure-monitor-python's Introduction

This repository has been moved to the Azure SDK for Python repository. To improve discoverability and share common dependencies/tests, the OpenTelemetry Azure Monitor exporters for Python have moved to a common location containing all Azure SDKs. Please submit all issues and inquiries in that repository.

OpenTelemetry Azure Monitor

Installation

pip install opentelemetry-azure-monitor

Documentation

The online documentation is available at https://opentelemetry-azure-monitor-python.readthedocs.io/.

Usage

Trace

The Azure Monitor Span Exporter allows you to export OpenTelemetry traces to Azure Monitor.

This example shows how to send a span "hello" to Azure Monitor.

  • Create an Azure Monitor resource and get the instrumentation key; more information can be found here.
  • Put your instrumentation key in a connection string and place it directly into your code.
  • Alternatively, you can specify your connection string in an environment variable APPLICATIONINSIGHTS_CONNECTION_STRING.
from azure_monitor import AzureMonitorSpanExporter
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchExportSpanProcessor

trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)

# The SpanExporter receives the spans and sends them to the target location
exporter = AzureMonitorSpanExporter(
    connection_string='InstrumentationKey=<your-ikey-here>',
)

span_processor = BatchExportSpanProcessor(exporter)
trace.get_tracer_provider().add_span_processor(span_processor)

with tracer.start_as_current_span('hello'):
    print('Hello World!')

Instrumentations

OpenTelemetry also supports several instrumentations, which allow you to instrument third-party libraries.

This example shows how to instrument the requests library.

  • Create an Azure Monitor resource and get the instrumentation key; more information can be found here.
  • Install the requests integration package using pip install opentelemetry-ext-http-requests.
  • Put your instrumentation key in a connection string and place it directly into your code.
  • Alternatively, you can specify your connection string in an environment variable APPLICATIONINSIGHTS_CONNECTION_STRING.
import requests

from azure_monitor import AzureMonitorSpanExporter
from opentelemetry import trace
from opentelemetry.ext.requests import RequestsInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchExportSpanProcessor

trace.set_tracer_provider(TracerProvider())
tracer_provider = trace.get_tracer_provider()

exporter = AzureMonitorSpanExporter(
    connection_string='InstrumentationKey=<your-ikey-here>',
)
span_processor = BatchExportSpanProcessor(exporter)
tracer_provider.add_span_processor(span_processor)

RequestsInstrumentor().instrument()

# This request will be traced
response = requests.get(url="https://azure.microsoft.com/")

Modifying Traces

  • You can pass a callback function to the exporter to process telemetry before it is exported.
  • Your callback function can return False if you do not want this envelope exported.
  • Your callback function must accept an envelope data type as its parameter.
  • You can see the schema for Azure Monitor data types in the envelopes here.
  • The AzureMonitorSpanExporter handles Data data types.
from azure_monitor import AzureMonitorSpanExporter
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchExportSpanProcessor

# Callback function to add os_type: linux to span properties
def callback_function(envelope):
    envelope.data.baseData.properties['os_type'] = 'linux'
    return True

exporter = AzureMonitorSpanExporter(
    connection_string='InstrumentationKey=<your-ikey-here>'
)
# This line will modify telemetry
exporter.add_telemetry_processor(callback_function)

trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)
span_processor = BatchExportSpanProcessor(exporter)
trace.get_tracer_provider().add_span_processor(span_processor)

with tracer.start_as_current_span('hello'):
    print('Hello World!')

Metrics

The Azure Monitor Metrics Exporter allows you to export metrics to Azure Monitor.

This example shows how to track a counter metric and send it as telemetry every export interval.

  • Create an Azure Monitor resource and get the instrumentation key; more information can be found here.
  • Put your instrumentation key in a connection string and place it directly into your code.
  • Alternatively, you can specify your connection string in an environment variable APPLICATIONINSIGHTS_CONNECTION_STRING.
from azure_monitor import AzureMonitorMetricsExporter
from opentelemetry import metrics
from opentelemetry.sdk.metrics import Counter, MeterProvider
from opentelemetry.sdk.metrics.export.controller import PushController

metrics.set_meter_provider(MeterProvider())
meter = metrics.get_meter(__name__)
exporter = AzureMonitorMetricsExporter(
    connection_string='InstrumentationKey=<your-ikey-here>'
)
controller = PushController(meter, exporter, 5)

requests_counter = meter.create_metric(
    name="requests",
    description="number of requests",
    unit="1",
    value_type=int,
    metric_type=Counter,
    label_keys=("environment",),
)

testing_labels = {"environment": "testing"}

requests_counter.add(25, testing_labels)
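# Block so the PushController's 5-second export interval can elapse before exit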
input("...")

opentelemetry-azure-monitor-python's People

Contributors

gitter-badger, hectorhdzg, lzchen, microsoft-github-operations[bot], microsoftopensource, victoraugustolls

opentelemetry-azure-monitor-python's Issues

pyodbc dependency tracing not integrating correctly

I have been following the example for pyodbc in the docstring here: https://github.com/open-telemetry/opentelemetry-python/blob/master/instrumentation/opentelemetry-instrumentation-dbapi/src/opentelemetry/instrumentation/dbapi/__init__.py

Desired Behaviour
The trace integration allows me to see the calls to my database through pyodbc in app insights and on the application map.

Actual Behaviour
I just get an InProc entry in the dependencies log showing the name of my span, with no entries showing the connection to the database, etc.

Code
I want to implement it for the scenario below:

import pyodbc
import pandas as pd
from opentelemetry.instrumentation.dbapi import trace_integration
from azure_monitor import AzureMonitorSpanExporter
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchExportSpanProcessor


def sql_to_df(cursor: pyodbc.Cursor) -> pd.DataFrame:

    data = []

    try:
        columns = [column[0] for column in cursor.description]
        for row in cursor:
            row_as_list = [x for x in row]
            data.append(row_as_list)
    except MemoryError as err:
        log_exception(err)
    except Exception as exception:
        # unexpected Exceptions.
        log_exception(exception, False)

    df = pd.DataFrame(data, columns=columns)
    return df


def exec_proc(
    server: str,
    database: str,
    proc: str,
    params: tuple = None
) -> pd.DataFrame:

    conn = pyodbc.connect(
        driver='{SQL Server Native Client 11.0}',
        server=<server>,
        database=<database>,
        trusted_connection='yes'
    )
    with conn.cursor() as cur:

        if params is None:
            sql = 'EXEC ' + proc
            try:
                cur.execute(sql)
                return sql_to_df(cur)
            except pyodbc.Error as err:
                print("can't execute proc! Error: %s" % err)

        else:
            placeholders = ('?,' * len(params))[:-1]
            sql = 'EXEC ' + proc + ' ' + placeholders
            try:
                cur.execute(sql, params)
                return sql_to_df(cur)
            except pyodbc.Error as err:
                print("can't execute proc! Error: %s" % err)


def callback_addRoleName(envelope):
    envelope.tags['ai.cloud.role'] = "Python Test - Database OpenTelemetry"
    return True


trace.set_tracer_provider(TracerProvider())

tracer = trace.get_tracer(__name__)

trace_integration(pyodbc, "Connection", "odbc", "sql")

exporter = AzureMonitorSpanExporter(
    instrumentation_key=<instrumentation_key>,
    proxies=<azure_proxy>
)

exporter.add_telemetry_processor(callback_addRoleName)

span_processor = BatchExportSpanProcessor(exporter)
trace.get_tracer_provider().add_span_processor(span_processor)

with tracer.start_as_current_span("Database Connection Test opentelemetry"):
    data = exec_proc(<server>, <database>, <proc>, <values>)
    print(data)

Also tried using just the connection as below:

with tracer.start_as_current_span("Database Connection Test opentelemetry"):
    conn = pyodbc.connect(
        driver='{SQL Server Native Client 11.0}',
        server=<server>,
        database=<database>,
        trusted_connection='yes'
    )

Id format changes: no need to support old format

We report parentId and Id in the format of |traceId.parentSpanId and |traceId.spanId.

This is not needed anymore even for backward compatibility reasons. Compatibility with old versions of Application Insights SDK is handled by Breeze.
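For reference, a minimal sketch contrasting the two formats (the hex identifiers are hypothetical examples):

trace_id = "4bf92f3577b34da6a3ce929d0e0e4736"
span_id = "00f067aa0ba902b7"

# Hierarchical format reported today: |traceId.spanId
legacy_id = "|" + trace_id + "." + span_id

# Plain W3C-style identifier; compatibility with older
# Application Insights SDKs is handled by Breeze.
new_id = span_id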

[QUESTION] BaseObject

Hi!

I was just wondering if there is a practical reason for using the BaseObject in protocol.py as the base for pretty much everything, instead of normal objects (maybe with slots for optimization)!

I know the OpenCensus exporter was built this way, so if the reason was just to reuse code, I'm glad to open a PR to make a typed and (maybe) optimized version!
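For illustration, a minimal sketch of the kind of typed, slotted class the question has in mind (the field names are hypothetical, not taken from protocol.py):

class DataPoint:
    # __slots__ avoids a per-instance __dict__, saving memory and
    # preventing accidental attribute creation.
    __slots__ = ("name", "value")

    def __init__(self, name: str, value: float) -> None:
        self.name = name
        self.value = value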

Dependency Types

Proposal:

  • Use the OpenTelemetry specs to make it possible to determine some of the dependency types that Application Insights accepts.
  • Be able to force a dependency type, to cover the ones that are outside the OpenTelemetry specs.

Flush telemetry on application exit + Implement Queue mechanism

As in OpenCensus, we can use a queue structure for queuing up telemetry to be exported by the trace exporter. This queue will be used by the atexit trigger in the exporter to flush telemetry before exiting the application.
https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/opencensus/ext/azure/trace_exporter/__init__.py#L161

For metrics, all the currently aggregated metrics can be collected and flushed at once when the application is closed.

https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/opencensus/ext/azure/metrics_exporter/__init__.py#L145
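A minimal sketch of the atexit part of this proposal (flush() is hypothetical; adding such a method is the point of this issue):

import atexit

from azure_monitor import AzureMonitorSpanExporter

exporter = AzureMonitorSpanExporter(
    connection_string='InstrumentationKey=<your-ikey-here>',
)

def _flush_on_exit():
    # Drain any queued telemetry before the interpreter exits.
    exporter.flush()  # hypothetical method proposed by this issue

atexit.register(_flush_on_exit)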

Add "Private Preview" in README

Some customers may think that the SDK is already ready for GA. Add a "Private Preview" excerpt in the README so they do not get confused.

Infinite transmission notification

Hi,
When I set my logging to INFO level, I continuously get a transmission success message, which in turn is transmitted, which... once again logs a transmission success:

2020-04-02 13:31:32,141 INFO Transmission succeeded: {"itemsReceived":1,"itemsAccepted":1,"errors":[]}.
2020-04-02 13:31:47,879 INFO Transmission succeeded: {"itemsReceived":1,"itemsAccepted":1,"errors":[]}.
2020-04-02 13:32:04,620 INFO Transmission succeeded: {"itemsReceived":1,"itemsAccepted":1,"errors":[]}.

There is a branch of code in https://github.com/microsoft/opentelemetry-azure-monitor-python/blob/master/azure_monitor/src/azure_monitor/export/__init__.py

if response.status_code == 200:
    logger.info("Transmission succeeded: %s.", text)
    return ExportResult.SUCCESS
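A common workaround (a sketch, assuming the exporter's loggers live under the azure_monitor namespace) is to raise that logger's level so the success message is never emitted and therefore never re-exported:

import logging

# Silence the exporter's own INFO-level transmission messages to break
# the feedback loop; warnings and errors still come through.
logging.getLogger("azure_monitor").setLevel(logging.WARNING)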

Latest version 0.3b0 raises an error that Envelope is not JSON serializable

When trying the sample from https://github.com/microsoft/opentelemetry-azure-monitor-python/blob/master/azure_monitor/examples/metrics/observer.py I get the error:

TypeError: Object of type 'Envelope' is not JSON serializable.

My current hack/workaround is to update the function

def _metric_to_envelope(self, metric_record: MetricRecord) -> protocol.Envelope:

in the file https://github.com/microsoft/opentelemetry-azure-monitor-python/blob/master/azure_monitor/src/azure_monitor/export/metrics/__init__.py

to return a dictionary:

return envelope.to_dict()

Dependency resultCode is always 0, success is always true

Azure Monitor exporter does not populate HTTP Dependency properties correctly

Steps to reproduce

  • Enable requests integration.
  • Make outgoing HTTP call

What is the expected behavior?
Dependency telemetry is populated properly with Data, Target and name:

  • Data matches URL
  • Target matches authority (host:port)
  • name is "METHOD /path"

What is the actual behavior?

  • Data not set
  • Target not set
  • name not correct

Additional context

It seems incoming requests are properly populated here
https://github.com/microsoft/opentelemetry-exporters-python/blob/master/azure_monitor/src/azure_monitor/trace.py#L107..L117

Support data.type in dependency telemetry

For dependency telemetry, populate the data.type field with the following checks (see the sketch after this list):

  1. Check for the presence of the "http.method" attribute to detect whether the type is HTTP.
  2. Check for the presence of "db.system" to detect whether the type is a database.
  3. Check InstrumentationInfo for everything else.
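A minimal sketch of that detection order (the helper name is hypothetical; the attribute keys follow the OpenTelemetry semantic conventions):

def detect_dependency_type(span):
    attributes = span.attributes or {}
    if "http.method" in attributes:
        return "HTTP"
    if "db.system" in attributes:
        return attributes["db.system"]  # e.g. "mssql", "mysql"
    # Fall back to the identity of the instrumenting library.
    info = span.instrumentation_info
    return info.name if info else "InProc"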

Add support for pre-aggregated standard metrics

Currently the Azure backend takes in request/dependency telemetry and automatically generates metrics for them (duration, count, etc.) in the form of standard metrics. We want to do this generation of metrics on the SDK side.

Add auto-collection as a configuration

Currently, the user instantiates an AutoCollection class to tell the meter to start collecting standard metrics (see example).

We should have a mechanism to configure whether or not to collect (rather than basing it on the user instantiating the AutoCollection class).

Possibly could be part of the exporter.

MetricData in Breeze backend only accepts 10 customDimensions

Breeze has a limit on how many dimensions are accepted for MetricData (the first 10).

We can either:

  1. Take only the first 10 dimensions (we would need to know the order).
  2. Drop the metric entirely.

Cijo actually prefers option 2, and we would advise customers to simply use 10 or fewer dimensions. This might cause complaints, but it is better than possibly having to deal with a set of dimensions whose "dropped" members differ on each export. If the dropped dimensions differ, each export would generate a different time series, and the aggregation in the backend would be incorrect.

Metrics dropped due to too many dimensions should really be a Breeze problem. In the meantime, option 2 can be prototyped with a telemetry processor, as sketched below.
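A minimal sketch of that prototype (assuming exporter is an AzureMonitorMetricsExporter instance; returning False from a processor drops the envelope):

MAX_DIMENSIONS = 10  # Breeze accepts at most 10 custom dimensions per MetricData

def drop_high_cardinality_metrics(envelope):
    properties = envelope.data.baseData.properties or {}
    if len(properties) > MAX_DIMENSIONS:
        # Drop the metric entirely rather than truncate its dimensions.
        return False
    return True

exporter.add_telemetry_processor(drop_high_cardinality_metrics)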

Add QuickPulse metrics

COMMITTED_BYTES = "\Memory\Committed Bytes"
REQUEST_FAILURE_RATE = "\ApplicationInsights\Requests Failed/Sec"
DEPENDENCY_FAILURE_RATE = "\ApplicationInsights\Dependency Calls Failed/Sec"
DEPENDENCY_DURATION = "\ApplicationInsights\Dependency Call Duration"
EXCEPTION_RATE = "\ApplicationInsights\Exceptions/Sec"

Support Histogram aggregation in metrics exporter

How do we handle metric data that is aggregated with a Histogram?

  1. Drop the metrics.
  2. Recommend only using metrics that have MMSC as the default aggregator.

Note: We should not override the default aggregator for metrics (once views are implemented) because the user may be using other metrics backends in tandem.

Using ValueRecorder

Hi, I have tried to use ValueRecorder to capture some job latency from my Python code, but it seems the value sent to Azure Monitor is the count instead of the value. Is that the expected behaviour?

This is the code I used to test:

from azure_monitor import AzureMonitorMetricsExporter
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider, ValueRecorder
from opentelemetry.sdk.metrics.export.controller import PushController

# `args` comes from my own argument parser (not shown).
metrics.set_meter_provider(MeterProvider())

meter = metrics.get_meter(__name__)

exporter = AzureMonitorMetricsExporter(
    connection_string='InstrumentationKey=' + args.instrumentation_key
)

controller = PushController(meter, exporter, 5)

job_duration = meter.create_metric(
    name="job_duration",
    description="job_duration",
    unit="1",
    value_type=int,
    metric_type=ValueRecorder,
    label_keys=("environment", "task"),
)

testing_labels = {"environment": "testing", "task": "preprocess"}
job_duration.record(25, testing_labels)

In Application Insights Log Analytics, the 'value' is recorded as 1 instead of 25

This is what I found in the AzureMonitorMetricsExporter class:

if isinstance(metric, ValueObserver):
    # mmscl
    value = metric_record.aggregator.checkpoint.last
elif isinstance(metric, ValueRecorder):
    # mmsc
    value = metric_record.aggregator.checkpoint.count

Azure Functions

Hi!

I was wondering, what would be the best way to integrate OpenTelemetry (and OpenCensus) into a Python Azure Function?

In a web app we start the exporter on startup, but what about an Azure Function? I don't think the best approach is to start a new exporter every time a function runs; it seems like that could cause memory problems (?) due to the local storage.

Was wondering if someone here has any opinions about this! Thanks!
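One common pattern (a sketch, not an official recommendation) is to configure the pipeline once at module import time, so every invocation in the same worker process reuses one exporter:

# Module scope: runs once per worker process, not once per invocation.
from azure_monitor import AzureMonitorSpanExporter
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchExportSpanProcessor

trace.set_tracer_provider(TracerProvider())
exporter = AzureMonitorSpanExporter(
    connection_string='InstrumentationKey=<your-ikey-here>',
)
trace.get_tracer_provider().add_span_processor(
    BatchExportSpanProcessor(exporter)
)
tracer = trace.get_tracer(__name__)

def main(req):
    # Each function invocation reuses the shared tracer and exporter.
    with tracer.start_as_current_span('function-invocation'):
        return "OK"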
