autometrics-go

Metrics are a powerful and cost-efficient tool for understanding the health and performance of your code in production, but it's hard to decide what metrics to track and even harder to write queries to understand the data.

Autometrics is a Go Generator bundled with a library that instruments your functions with the most useful metrics: request rate, error rate, and latency. It standardizes these metrics and then generates powerful Prometheus queries based on your function details to help you quickly identify and debug issues in production.

Benefits

  • โœจ //autometrics:inst directive adds useful metrics to any function, without you thinking about what metrics to collect
  • ๐Ÿ’ก Generates powerful Prometheus queries to help quickly identify and debug issues in production
  • ๐Ÿ”— Injects links to live Prometheus charts directly into each function's doc comments
  • ๐Ÿ“Š Grafana dashboards work without configuration to visualize the performance of functions & SLOs
  • ๐Ÿ” Correlates your code's version with metrics to help identify commits that introduced errors or latency
  • ๐Ÿ“ Standardizes metrics across services and teams to improve debugging
  • โš–๏ธ Function-level metrics provide useful granularity without exploding cardinality

Advanced Features

See autometrics.dev for more details on the ideas behind autometrics.

Example

Documentation comments of instrumented functions are augmented with links

When alerting rules are added, code annotations make Prometheus trigger alerts directly from production usage:

a Slack bot is posting an alert directly in the channel

A fully working use-case and example of library usage is available in the examples/web subdirectory. You can build and run load on the example server using:

git submodule update --init
docker compose -f docker-compose.prometheus-example.yaml up

And then explore the generated links by opening the main file in your editor.

Quickstart

There is a one-time setup phase to prime the code for autometrics. Once this phase is complete, you only need to call go generate.

1. Install the go generator.

The generator is the binary in cmd/autometrics, so the easiest way to get it is to install it through go:

go install github.com/autometrics-dev/autometrics-go/cmd/autometrics@latest
Make sure your `$PATH` is set up. In order for `autometrics` to be visible, make sure the directory `$GOBIN` (or the default `$GOPATH/bin`) is in your `$PATH`:

$ echo "$PATH" | grep -q "${GOBIN:-$GOPATH/bin}" && echo "GOBIN in PATH" || echo "GOBIN not in PATH, please add it"
GOBIN in PATH

2. Import the libraries and initialize the metrics

In the main entrypoint of your program, first add the package import

import (
	"github.com/autometrics-dev/autometrics-go/prometheus/autometrics"
)

And then, in your main function, initialize the metrics

	shutdown, err := autometrics.Init()
	if err != nil {
		log.Fatalf("could not initialize autometrics: %s", err)
	}
	defer shutdown(nil)

Init takes optional arguments to customize the metrics. The main ones are WithService, WithVersion, WithCommit, and WithBranch; they add relevant information to the metrics for better insight:

	shutdown, err := autometrics.Init(
		autometrics.WithService("myApp"),
		autometrics.WithVersion("0.4.0"),
	)
	if err != nil {
		log.Fatalf("could not initialize autometrics: %s", err)
	}
	defer shutdown(nil)

You can use any string variable here, for example one whose value is injected at build time with ldflags, or one read from an environment variable.

Note Instead of hardcoding the service name in the code, you can simply set environment variables to fill it: AUTOMETRICS_SERVICE_NAME is used if set, otherwise OTEL_SERVICE_NAME is tried (so OpenTelemetry compatibility comes out of the box).

3. Add directives for each function you want to instrument

3a. The QUICKEST way

If you have am installed, in version 0.6.0 or later, you can use am instrument single -e /vendor/ -l go . to instrument everything (excluding a possible /vendor subdirectory)

3b. The VERY quick way

Use find and sed to insert a //go:generate directive that will instrument all the functions in all source files under the current directory:

(Replace gsed with sed on Linux; on macOS, gsed is installed with brew install gsed)

find . \
  -type d -name vendor -prune -or \
  -type f -name '*.go' \
  -print0 | xargs -0 gsed -i -e '/package/{a\//go:generate autometrics --inst-all --no-doc' -e ':a;n;ba}'

You can remove --no-doc to get the full experience, but the generator will then add a lot of comments.

3c. The slower quick way

This grants you more control over what gets instrumented, but it takes longer to add.

Warning You must add both the //go:generate directive and one //autometrics:inst directive per function you want to instrument

At the top of each file you want to use Autometrics in, you need a go:generate directive:

//go:generate autometrics

Instrumenting a function then depends on its signature; expand the corresponding subsection to see the details.

Once that is done, you can call the generator.

For error-returning functions
Expand to instrument error returning functions

Given a starting function like:

func AddUser(args any) error {
        // Do stuff
        return nil
}

The manual changes you need to do are:

+//autometrics:inst
-func AddUser(args any) error {
+func AddUser(args any) (err error) {
        // Do stuff
        return nil
}

The generated metrics will count a function as having failed if the err return value is non-nil.

Warning If you want the generated metrics to contain the function success rate, you must name the error return value. This is why we recommend naming the error return value of any function you want to instrument.

For HTTP handler functions
Expand to instrument HTTP handler functions

Autometrics comes with a middleware library for net/http handler functions.

  • Import the middleware library
import "github.com/autometrics-dev/autometrics-go/prometheus/midhttp"
  • Wrap your handlers in Autometrics handler
	http.Handle(
		"/path", 
+		midhttp.Autometrics(
-		http.HandlerFunc(routeHandler),
+			http.HandlerFunc(routeHandler),
+			// Optional: override what is considered a success (default is 100-399)
+			autometrics.WithValidHttpCodes([]autometrics.ValidHttpRange{{Min: 200, Max: 299}}),
+			// Optional: Alerting rules
+			autometrics.WithSloName("API"),
+			autometrics.WithAlertSuccess(90),
+		)
	)

The generated metrics here will count a function as having failed if the return code of the handler is bad (in the 4xx and 5xx ranges). The code snippet above shows how to override the ranges of codes that should be considered as errors for the metrics/monitoring.

Note There is only middleware for net/http handlers for now, but support for other web frameworks will come as needed/requested! Don't hesitate to create issues in the repository.

Warning To properly report the function name in the metrics, the autometrics wrapper should be the innermost middleware in the stack.

4. Generate the documentation and instrumentation code

You can now call go generate:

$ go generate ./...

The generator will augment your doc comment to add quick links to metrics (using the Prometheus URL as base URL), and add a unique defer statement that will take care of instrumenting your code.

autometrics --help shows all the arguments, and the environment variables that control behaviour. The most important options change the target of generated links, or disable doc generation to keep only the instrumentation.

5. Expose metrics outside

The last step now is to actually expose the generated metrics to the Prometheus instance.

The shortest way is to reuse the prometheus/promhttp handler in your main entrypoint:

import (
	"log"
	"net/http"

	"github.com/autometrics-dev/autometrics-go/prometheus/autometrics"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	shutdown, err := autometrics.Init(
		autometrics.WithVersion("0.4.0"),
		autometrics.WithCommit("anySHA"),
		autometrics.WithService("myApp"),
	)
	if err != nil {
		log.Fatalf("could not initialize autometrics: %s", err)
	}
	defer shutdown(nil)

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil)) // the listen address is an example
}

This is the shortest way to initialize and expose the metrics that autometrics will use in the generated code.

A Prometheus server can be configured to poll the application, and the autometrics data will be available! (See the Web App example for a simple, complete setup)

Run Prometheus locally to validate and preview the data

You can use the open source Autometrics CLI to run an automatically configured Prometheus locally and see the metrics that your changes will register. See the Autometrics CLI docs for more information.

Or you can configure Prometheus manually:

scrape_configs:
  - job_name: my-app
    metrics_path: /metrics # the endpoint you configured your metrics exporter on (usually /metrics)
    static_configs:
      - targets: ['localhost:<PORT>'] # The port your service is on
    scrape_interval: 200ms
    # For a real deployment, you would want the scrape interval to be
    # longer but for testing, you want the data to show up quickly

You can also check the documentation to find out about setting up Prometheus locally, with Fly.io, or with Kubernetes.


Optional advanced features

Generate alerts automatically

Change the annotation of the function to automatically generate alerts for it:

//autometrics:inst --slo "Api" --success-target 90
func AddUser(args any) (err error) {
        // Do stuff
        return nil
}

Then you need to add the bundled recording rules to your prometheus configuration.

The valid arguments for alert generation are:

  • --slo (MANDATORY for alert generation): name of the service for which the objective is relevant
  • --success-target : target success rate of the function, between 0 and 100 (you must name the error return value of the function for detection to work)
  • --latency-ms : maximum latency allowed for the function, in milliseconds.
  • --latency-target : latency target for the threshold, between 0 and 100 (so X% of calls must last less than latency-ms milliseconds). You must specify both latency options, or none.

Warning The generator will error out if you use percentile targets that are not supported by the bundled alerting rules file. Support for custom targets is planned but not present at the moment.

Warning The --latency-ms value MUST match one of the values in the buckets given to the autometrics.Init call (the bucket values are given in seconds). By default, the generator errors out and tells you the valid default values if they don't match. If the default values in autometrics.DefBuckets do not match your use case, you can change the buckets in the Init call and add a --custom-latency argument to the //go:generate invocation.

-//go:generate autometrics
+//go:generate autometrics --custom-latency

Exemplar support

When using the Prometheus library for metrics collection, autometrics automatically adds trace and span information to the metrics as exemplars, which can be queried with Prometheus if the server is configured correctly.

A prometheus graph that shows exemplars on top of metrics

OpenTelemetry Support

Autometrics supports using OpenTelemetry with a prometheus exporter instead of using Prometheus to publish the metrics. The changes you need to make are:

  • change where the autometrics import points to
import (
-	"github.com/autometrics-dev/autometrics-go/prometheus/autometrics"
+	"github.com/autometrics-dev/autometrics-go/otel/autometrics"
)
  • maybe change the call to autometrics.Init to the new signature: instead of a registry, the Init function takes a meter name for the otel_scope label of the exported metrics. That means autometrics no longer has a WithRegistry option, but a WithMeterName option instead.
	shutdown, err := autometrics.Init(
-		autometrics.WithRegistry(nil),
+		autometrics.WithMeterName("myApp/v2/prod"),
		autometrics.WithVersion("2.1.37"),
		autometrics.WithCommit("anySHA"),
		autometrics.WithService("myApp"),
	)
  • add the --otel flag to the //go:generate directive
-//go:generate autometrics
+//go:generate autometrics --otel

Push-based workflows

Why would I use a push-based workflow?

If you have an auto-scaled service (with instances spinning up and down), maintaining the configuration/discovery of instances on the Prometheus side can be hard. A push-based workflow inverts the burden of configuration: all your instances generate a specific ID, and they just need to push metrics to a given URL. So the main advantages of a push-based workflow appear when the set of machines producing metrics is dynamic:

  • Your Prometheus configuration does not need to be dynamic anymore, it's "set and forget" again
  • No need to configure service discovery separately (which can be error-prone)

It can be summarized in one sentence: the monitoring stack (Prometheus/OpenTelemetry collector) does not need to know the deployment infrastructure of the application, nor does the application code need to know the infrastructure of the monitoring stack. Decoupling prevents configuration rot.

If you don't want to/cannot configure your Prometheus instance to scrape the instrumented code, Autometrics provides a way to push metrics instead of relying on a polling collection process.

Note It is strongly advised to use the OpenTelemetry variant of Autometrics to support push-based metric collection. Prometheus push gateways make aggregation of data across multiple sources harder.

How can I use a push-based workflow with Autometrics?

If you have a Prometheus push gateway or an OTLP collector set up with an accessible URL, you can switch directly from metric polling to metric pushing by passing the push-related options to autometrics.Init:

	shutdown, err := autometrics.Init(
		autometrics.WithMeterName("myApp/v2/prod"),
		autometrics.WithVersion("2.1.37"),
		autometrics.WithService("myApp"),
+		autometrics.WithPushCollectorURL("https://collector.example.com"),
+		autometrics.WithPushJobName("instance_2"),                 // You can leave the JobName out to let autometrics generate one
+		autometrics.WithPushPeriod(1 * time.Second),               // Period is only relevant (and available) with the OpenTelemetry implementation
+		autometrics.WithPushTimeout(500 * time.Millisecond),       // Timeout is only relevant (and available) with the OpenTelemetry implementation
	)

Note If you do not want to set up an OTLP collector or a Prometheus push gateway yourself, you can contact us and we can set up a managed instance of Prometheus for you. We will effectively give you collector URLs that work with both OpenTelemetry and Prometheus, and that can be visualized easily with our explorer as well!

Logging

Monitoring/Observability must not crash the application.

So when Autometrics encounters an error, instead of bubbling it up until the program stops, it logs the error and absorbs it. To leave you the choice of logging implementation (depending on your application's dependencies), Autometrics exposes a Logger interface you can implement and then inject in the Init call to get the logging you want.

The Logger interface is a subset of slog.Logger methods, so most loggers can be used. Autometrics also provides two simple loggers out of the box:

  • NoOpLogger, which is the default logger and does nothing,
  • PrintLogger, which uses fmt.Print to write log messages to stdout.

To use the PrintLogger instead of the NoOpLogger, for example, you just have to change the Init call:

	shutdown, err := autometrics.Init(
		autometrics.WithMeterName("myApp/v2/prod"),
		autometrics.WithVersion("2.1.37"),
		autometrics.WithService("myApp"),
+		autometrics.WithLogger(autometrics.PrintLogger{}),
	)

Git hook

As autometrics is a Go generator that modifies the source code when run, it can be useful to run go generate ./... in a git pre-commit hook so that you never forget to run it after changing the source code.

If you use a tool like pre-commit, see their documentation about how to add a hook that will run go generate ./....

Otherwise, a simple example is provided in the configs folder. You can copy this file into your project's repository under .git/hooks and make sure the file is executable.

Tips and Tricks

Make generated links point to different Prometheus instances

By default, the generated links point to localhost:9090, which is the default location of Prometheus when run locally.

The environment variable AM_PROMETHEUS_URL controls the base URL of the instance that is scraping the deployed version of your code. Having an environment variable means you can change the generated links without touching your code. The default value, if absent, is http://localhost:9090/.

You can put any value here; the only adverse impact of a wrong value is that the links in the doc comments might lead nowhere useful.

Remove the documentation

By default, autometrics adds a lot of documentation to each instrumented function. If you prefer to keep only the instrumentation, without the extra comments, you have multiple options:

  • To disable documentation on a single function, add the --no-doc argument to the //autometrics:inst directive:
-//autometrics:inst
+//autometrics:inst --no-doc
  • To disable documentation on a file, add the --no-doc argument to the //go:generate directive:
-//go:generate autometrics
+//go:generate autometrics --no-doc
  • To disable documentation globally, use the environment variable AM_NO_DOCGEN:
$ AM_NO_DOCGEN=true go generate ./...

Offboarding

If for some reason you want to stop using autometrics, the easiest way takes three steps.

First is to use the environment variable AM_RM_ALL to remove all generated documentation and code:

$ AM_RM_ALL=true go generate ./...

The second step is to use grep, your text editor, or your IDE to remove all lines starting with //go:generate autometrics from your files, so that go generate stops creating autometrics code.

The last step is to use your text editor or IDE to remove the remnants:

  • run a "go imports" cleaner to remove the autometrics imports (or grep again)
  • remove the autometrics.Init call from the main entrypoint of your code.

Contributing

The first version of the library was not written by Go experts, so any comment or code suggestion as a pull request is more than welcome! Issues and feature suggestions are equally welcome.

autometrics-go's People

Contributors: gagbo, k-yang, keturiosakys, lpmi-13, mellowagain

autometrics-go's Issues

Write a middleware library for Echo

Pending #47 merge.

Once the PR is merged, the ability to inject a new trace ID/span ID into an Echo context will only happen through a middleware library, to properly fill the autometrics.MiddlewareTraceIDKey and SpanIDKey fields with hex-encoded byte strings of the relevant IDs

Ideas for Alert generation

Getting Prometheus alerts generation

As a reminder, the rust implementation uses this kind of syntax to trigger the generation of alerts for a single function:

#[autometrics(alerts(success_rate = 99.9%, latency(99% <= 200ms)))]
pub async fn handle_http_requests(req: Request) -> Result<Response, Error> {
  // ...
}

We want to provide a similar experience with the Go version, by exploiting the //autometrics:doc directive currently used per-function. This issue proposes a design for the feature as well as a few technical solutions to create the feature.

Reusing Sloth

The Rust implementation relies on Sloth, and there's no reason to avoid it here. If we're lucky, since it's written in Go, we might be able to reuse the types from its library to serialize alerting rules (we can just build the relevant Go structures and marshal them)

New argument to autometrics directive

We probably want to specify that the generator should create both documentation and alerts, so a good matching syntax would be

//autometrics:doc,alerts --success-rate 99.9 --latency-perc 99 --latency-threshold 200ms
func handleHttpRequests() (err error) {
        // ...
        return nil
}

That would allow us to simply split the directive arguments on ,, and then use something like shlex to parse the alerts part just like CLI flags.
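The proposed parsing could be sketched like this (a stdlib-only sketch; strings.Fields stands in for a real shlex-style tokenizer, and the function name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// parseDirective splits an `//autometrics:doc,alerts --flag val` comment
// into its verbs (doc, alerts) and the remaining CLI-style arguments.
// strings.Fields stands in for a real shlex-style tokenizer here.
func parseDirective(comment string) (verbs []string, args []string) {
	rest := strings.TrimPrefix(comment, "//autometrics:")
	parts := strings.SplitN(rest, " ", 2)
	verbs = strings.Split(parts[0], ",")
	if len(parts) == 2 {
		args = strings.Fields(parts[1])
	}
	return verbs, args
}

func main() {
	verbs, args := parseDirective("//autometrics:doc,alerts --success-rate 99.9 --latency-perc 99")
	fmt.Println(verbs, args) // prints: [doc alerts] [--success-rate 99.9 --latency-perc 99]
}
```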

New argument to the go:generate directive

The //go:generate autometrics directive will need to take an extra argument pointing to the global location of the Sloth alerts file, otherwise we won't know where to write it. The syntax could be

//go:generate autometrics --alerts-file ../../autometrics.sloth.yaml

Having a CLI flag instead of a positional argument (i.e. //go:generate autometrics ../../autometrics.sloth.yaml) helps with backwards compatibility later.

Data races

We want the calls to the generator to concatenate all the Sloth rules to a unique file, so that sloth (the binary) can generate the prometheus rules we want.

autometrics (the go generator) is only called once per file, so we can safely generate one Sloth fragment per call to autometrics. Once that's done, we can use a small file-lock library like fslock to take a global, out-of-process lock on the resulting file, in order to safely concatenate the Sloth fragments into the same file

We would still need a step at the end to generate the Prometheus rules. Maybe each autometrics call could keep its lock while exec-ing sloth to regenerate the Prometheus rules; the last call to take the lock would then be the one producing the final file. As long as we can guarantee that the last process holding the lock also has the last version of the sloth rules, we're fine.

Make autometrics work easily with http.HandleFunc, and specify 'error' status codes per handler

This very, very likely implies writing a wrapper function to be used as middleware just like the TS version does. It should use the facilities from #47 to inject the traceID and spanID in the context, and wrap the response writer in order to read the status code and automatically mark the handler as successful or not.

This also calls for an addition to the autometrics:inst directive, so that each function could specify the range of status codes that should be considered an error from the point of view of autometrics. It will default to anything >400 and could optionally take (multiple) --http-code-ok-range 100,499 --http-code-error 404 arguments to override the range that's considered OK (in a "last rule wins" manner)

Add function that returns the information about the next function declaration after a "file:line" position

To be useful in a go generate context, autometrics needs to be able to know the information about the function it "decorates".

To have the lightest API possible (i.e. not needing to specify the function name as a go generate argument), we instead want to parse the file, find the function declaration that follows the comment, and then extract from it:

  • the function name
  • the names of the named return values (we're interested in the error type)

Hints:

Write a middleware library for Gin

Pending #47 merge.

Once the PR is merged, the ability to inject a new trace ID/span ID into a Gin context will only happen through a middleware library, to properly fill the autometrics.MiddlewareTraceIDKey and SpanIDKey fields with hex-encoded byte strings of the relevant IDs

Make the generator fail if it uses unsupported latencies/objectives

A "supported" latency is a latency target that exists in the buckets of the function_call_duration histogram. If the latency used in the autometrics directive is not exactly one of the bucket values of the histogram, then the latency alert will never trigger.

In the same vein, a "supported" objective target is a percentage that exists in the bundled configs/autometrics.rules.yaml file. Support to create custom objective targets might be added later (using Sloth binary and types to generate a matching rules file with custom objectives, like the Rust version does), but for the time being, if the objective latency percentile/success rate is not one of the precompile rules file, then the latency/success rate alert will never trigger.

The autometricsGeneratorContext should be augmented with the supported value sets for each arguments, so that it can use the Validate method to bail out and error if users try to use the generator with unsupported latencies/objectives

"autometrics": executable file not found

Hello, and thanks for the interesting project. I actually heard about it from Evan at Berline.rs meetup.

I'm trying to add this to my project, but it looks like I need the 'autometrics' binary; the readme doesn't specify how to actually install it, and the releases don't contain any binaries.

Can you please explain/link - how to get the binary the go generate command runs?

โฏ go generate ./...                       
server/api/v1/server_api_v1.go:3: running "autometrics": exec: "autometrics": executable file not found in $PATH

โฏ go version
go version go1.19.2 linux/amd64

Add an option on the go generator to instrument all functions in the file

This could be used to speed up the process of instrumenting a code base if you know you want to instrument everything in a given file.

  • //go:generate autometrics --inst-all (remember it already accepts the --no-doc argument) and //go:generate autometrics --doc-all
  • Also, to be reentrant, the generator MUST always walk all functions in the file, removing //autometrics:defer statements before choosing whether to process them
  • //go:generate autometrics --rm-all could also be added to clean everything up in one go generate call for offboarding

Pass arbitrary data in context and generate defer calls for popular frameworks

This is a stepping stone to deal more largely with #44 later.

The use-case for contexts

Currently the runtime context of autometrics embeds a context.Context just because it was needed when we added OpenTelemetry support. Now there's a specific need for contexts:

  • We want to be able to pass data between function calls (to transmit the traceId for example)
  • We want to be able to pass arbitrary data (it is the only way to have autometrics work with any instrumented code)

Therefore, instead of always initializing the context to a default one, we want to add the options and ability to create an autometrics.Context:

  • with a context.Context parent, and/or
  • with specific key-value pairs embedded (a TraceId and a SpanId)

End goal

Creating a context like that will allow autometrics to fetch any data using its embedded context, therefore enabling autometrics to attach any runtime data to the generated metrics

Support for popular libraries

Doing this will later allow us to use the function signatures of instrumented functions to detect which popular framework they use. If a function has func hello(ctx *gin.Context) as its signature, we could detect the gin context and generate code that extracts and adds this information directly in autometrics. At that point, having tracing info in the metrics would only be one middleware away (the one that calls ctx.Set(autometrics.TraceIdKey, "traceId"))

Being able to do this needs a big refactoring of the internal.generate package though, since it currently assumes there's only one way to generate the //autometrics:defer statement

Panic when my own function name is longer than the name of autometrics.PreInstrument

Panic Stack:

runtime error: slice bounds out of range [99:78]
/usr/local/go/src/runtime/panic.go:154 (0x44453b)
	goPanicSliceB: panic(boundsError{x: int64(x), signed: true, y: y, code: boundsSliceB})
/root/go/pkg/mod/github.com/autometrics-dev/[email protected]/pkg/autometrics/instrument.go:69 (0x1b444ce)
	callerInfo: callInfo.Parent.Function = functionName[index+1:]
/root/go/pkg/mod/github.com/autometrics-dev/[email protected]/pkg/autometrics/ctx.go:262 (0x1b42104)
	FillTracingAndCallerInfo: callInfo := callerInfo(ctx)
/root/go/pkg/mod/github.com/autometrics-dev/[email protected]/prometheus/autometrics/instrument.go:130 (0x1b48bb1)
	PreInstrument: ctx = am.FillTracingAndCallerInfo(ctx)
/root/InnerGitlab/wms-query/pkg/api/query_svc_map.go:150 (0x2590f19)
	(*Handler).QueryServiceMapSingleConnection: amCtx := autometrics.PreInstrument(autometrics.NewContext(

After debugging, I found that in this code line

callInfo.Parent.Function = functionName[index+1:]

it still uses functionName (which is github.com/autometrics-dev/autometrics-go/prometheus/autometrics.PreInstrument in my case), but the parent function name is gitlab.ubiservices.ubi.com/zhuyanxi1998/wave/services/monitoring/component/query-service/pkg/api.(*Handler).QueryServiceMapSingleConnection, which is longer than functionName, so the slice index goes out of range.

I think that in line 69, the variable functionName should be changed to parentFrameFunctionName.

Create initializer function that sets up metrics

The main metrics names are

const COUNTER_NAME_PROMETHEUS: &str = "function_calls_count";
const HISTOGRAM_BUCKET_NAME_PROMETHEUS: &str = "function_calls_duration_bucket";
const GAUGE_NAME_PROMETHEUS: &str = "function_calls_concurrent";

and should be created in an autometrics.Init function that could be called in the main package of the autometrics users.

All metrics MUST have function (for the function name) and module (for the qualified package name) in the labels

Quickstart guide

There is a fair amount of content in the readme. Someone who was integrating the Rust library commented that it would be useful to have a clear quickstart guide for anyone who doesn't need to be convinced about the project but just wants to follow a set of steps to add it. This is what we added for the Rust library.

[RFC] "Auto links in documentation" design ramblings

Question

How do we make the documentation of a function contain generated links to queries to Prometheus?

Issues

  • Go does not want you to get smart and access the documentation of functions. You MUST have the documentation just above the function declaration
  • Go generate does not work well with modifying the code in place. All the arguments you get from environment variables when calling go generate would be out of date as soon as you start editing the file in place. So that's a huge race condition waiting to happen

Possible solutions

Generate an extra package

The proposed solution would be to generate a single file autometrics package, with doc_gen.go containing a huge documentation string for the package, with a heading per decorated function.

The advantage here is that we can use file-locking on the doc_gen.go to actually fix the race condition issue. But the disadvantage is that you have to explore the huge documentation of the generated autometrics package to find the correct documentation

Two-pass design

The proposed solution would be to have autometrics-go actually be 2 go:generate binaries, to run in 2 passes on the code:

A single top level generate call per file would exist:

package main

//go:generate autometrics


// indexHandler is a cool function.
//
// It is used to handle the `/` route
//
//autometrics:doc
func indexHandler(args interface{}) error {
        return nil
}

First pass: the comment pass

That pass would detect all the //autometrics comments and:

  • replace the comment in place with extra comments containing the generated Prometheus queries and links (we want that section marked or under a heading so it's easy to replace in case someone wants to regenerate it)
  • find the function being commented in the AST and mark it (add its (function_name, module) pair to a list) for the code-generation pass
  • ensure that the error is a named return value

Second pass: the code generation pass

We can now use the list of (function_name, module) pairs collected in the first pass to generate a <package>_autometrics_gen.go file containing the deferrable wrapper functions used inside the client code (called as defer <function_name>_autometrics()).

Decision?

Add support for OpenTelemetry

Adding support for OpenTelemetry on top of Prometheus should be done in a few steps:

  • Add an argument to the go generator to choose between Prometheus and OpenTelemetry
  • Add support for using OpenTelemetry or Prometheus in the autometrics.Instrument method (as argument?)
    • Do we even need one? It seems that for tracing applications you need to do a lot of wrapping anyways around your whole service and not just a few functions.
  • Add a comment generator for the OpenTelemetry visualizers
  • Add an environment variable to read the base URL when it's OpenTelemetry (or rename the existing Prometheus one into something more generic)
  • Add an example in the repo that uses OpenTelemetry instead of Prometheus
  • Think about how alerts can be handled (so how to put labels on metrics when relevant)

Add Format checker step in CI

There are too many spurious changes in PRs from the times I forgot to run the formatter. We want CI to enforce gofmt -s to avoid issues like this.

The PR adding the step will obviously also need to run the formatter on the whole project.
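A hypothetical GitHub Actions step for this check could look like the following (workflow context and step name are placeholders; gofmt -s -l prints the files that are not simplified-gofmt clean):

```yaml
- name: Check formatting
  run: |
    unformatted="$(gofmt -s -l .)"
    if [ -n "$unformatted" ]; then
      echo "Files need gofmt -s:"
      echo "$unformatted"
      exit 1
    fi
```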

Add an extra statement in function to give autometrics context

Giving out the context created by autometrics.PreInstrument is the best (only?) way to let library users pass that context down to the function's callees if they wish to.

Having the autometrics context flow through a complete call tree will allow cleaner traces support, and better caller information for the call-graph feature.

We could also reuse part of the middleware code, to shadow the function argument context with the autometrics one if we are able to make a type-compatible variant.

Measure performance impact

It is important to help potential users assess the performance impact of the library. I suppose we could add an extra script to the example app that also measures the runtime impact of autometrics?

That means identifying the standard Go profiling and benchmarking tools and plugging them into the demo application.

Support `--no-docs-gen` argument

The purpose of this argument is to disable documentation generation by the generator. This would allow the VS Code extension to become solely responsible for generating documentation tooltips.

This is a blocker for finalizing autometrics-dev/vscode-autometrics#28

Feel free to suggest a better name for the argument ๐Ÿ˜…

Panic on already generated godoc

My code:

package handler

import (
	"gmod/pkg/version"
	"net/http"

	"github.com/autometrics-dev/autometrics-go/prometheus/autometrics"
	"github.com/gin-gonic/gin"
)

//go:generate autometrics
//autometrics:inst
func (h *Handler) VersionHandler(ctx *gin.Context) {
	defer autometrics.Instrument(autometrics.PreInstrument(autometrics.NewContext(
		nil,
		autometrics.WithConcurrentCalls(true),
		autometrics.WithCallerName(true),
	)), nil) //autometrics:defer

	ctx.JSON(http.StatusOK, gin.H{
		"version": version.Version,
	})
}

The first run of go generate ./... goes smoothly, properly generating docs and instrumenting the code. Running it a second time causes a panic:

โฏ go generate ./...
panic: runtime error: slice bounds out of range [:-1]

goroutine 1 [running]:
github.com/autometrics-dev/autometrics-go/internal/generate.cleanUpAutometricsComments({{{0x911459, 0x3}, {0x0, 0x0}, {0x0, 0x0}, 0x1, 0x1, 0x0}, {0x0, ...}, ...}, ...)
        /home/krzwiatrzyk/go/pkg/mod/github.com/autometrics-dev/[email protected]/internal/generate/documentation.go:43 +0x691
github.com/autometrics-dev/autometrics-go/internal/generate.walkFuncDeclaration(0xc000166000, 0xc000188000, {0xc0000322da?, 0x8cd560?})
        /home/krzwiatrzyk/go/pkg/mod/github.com/autometrics-dev/[email protected]/internal/generate/generate.go:145 +0x29b
github.com/autometrics-dev/autometrics-go/internal/generate.GenerateDocumentationAndInstrumentation.func1({0x9c6da0?, 0xc000188000?})
        /home/krzwiatrzyk/go/pkg/mod/github.com/autometrics-dev/[email protected]/internal/generate/generate.go:100 +0x4b
github.com/dave/dst.inspector.Visit(0xc000182240, {0x9c6da0?, 0xc000188000?})
        /home/krzwiatrzyk/go/pkg/mod/github.com/dave/[email protected]/walk.go:341 +0x31
github.com/dave/dst.Walk({0x9c82c0?, 0xc000182240?}, {0x9c6da0?, 0xc000188000?})
        /home/krzwiatrzyk/go/pkg/mod/github.com/dave/[email protected]/walk.go:52 +0x68
github.com/dave/dst.walkDeclList({0x9c82c0, 0xc000182240}, {0xc000132e60?, 0x2, 0x50?})
        /home/krzwiatrzyk/go/pkg/mod/github.com/dave/[email protected]/walk.go:38 +0x69
github.com/dave/dst.Walk({0x9c82c0?, 0xc000182240?}, {0x9c6d60?, 0xc00011ef70?})
        /home/krzwiatrzyk/go/pkg/mod/github.com/dave/[email protected]/walk.go:321 +0x173e
github.com/dave/dst.Inspect(...)
        /home/krzwiatrzyk/go/pkg/mod/github.com/dave/[email protected]/walk.go:353
github.com/autometrics-dev/autometrics-go/internal/generate.GenerateDocumentationAndInstrumentation({{{0x911459, 0x3}, {0x0, 0x0}, {0x0, 0x0}, 0x1, 0x1, 0x0}, {0x0, ...}, ...}, ...)
        /home/krzwiatrzyk/go/pkg/mod/github.com/autometrics-dev/[email protected]/internal/generate/generate.go:110 +0x2c5
github.com/autometrics-dev/autometrics-go/internal/generate.TransformFile({{{0x911459, 0x3}, {0x0, 0x0}, {0x0, 0x0}, 0x1, 0x1, 0x0}, {0x0, ...}, ...}, ...)
        /home/krzwiatrzyk/go/pkg/mod/github.com/autometrics-dev/[email protected]/internal/generate/generate.go:54 +0x2e8
main.main()
        /home/krzwiatrzyk/go/pkg/mod/github.com/autometrics-dev/[email protected]/cmd/autometrics/main.go:77 +0x1c5
pkg/handler/version.go:36: running "autometrics": exit status 2

Create function that takes `function_name` as argument, generates deferrable function that updates prometheus counters

The function will create a <function_name>_autometrics function that uses the named return value err to decide whether the call to function_name succeeded.

The goal is that the code modification to make autometrics work is

+ //go:generate go-autometrics
- func My_handler(args interface{}) error {
+ func My_handler(args interface{}) (err error) {
+        defer My_handler_autometrics()
         // Do stuff
         return nil
 }

Add option to specify namespace of metrics

If it is not already possible, I would like the metrics to be namespaced when a namespace option is specified:

autometrics.Init(
    autometrics.WithNamespace("imaginary_application"),
)

function_calls_duration_seconds_count would then become imaginary_application_function_calls_duration_seconds_count.

If this is not already possible, I would be happy to raise a PR to add it.
