influxdata / influxdb-client-go
InfluxDB 2 Go Client
License: MIT License
We should ensure that we handle gzipped responses from InfluxDB in our test suite,
either via a unit test or the end-to-end tests (or, better, both).
see: #56
package main

import (
	"context"
	"log"
	"net/http"
	"time"

	"github.com/influxdata/influxdb-client-go"
)

func main() {
	var myHTTPClient *http.Client // nil is fine; the client falls back to a default
	influx, err := influxdb.New(myHTTPClient, influxdb.WithAddress("http://localhost:8086"), influxdb.WithToken("mytoken"))
	if err != nil {
		panic(err) // panic works for the example; use proper error handling in real code
	}
	// We use influxdb.NewRowMetric for the example because it's easy, but if you need
	// extra performance it is fine to build the []influxdb.Metric slice manually.
	myMetrics := []influxdb.Metric{
		influxdb.NewRowMetric(
			map[string]interface{}{"memory": 1000, "cpu": 0.93},
			"system-metrics",
			map[string]string{"hostname": "hal9000"},
			time.Date(2018, 3, 4, 5, 6, 7, 8, time.UTC)),
		influxdb.NewRowMetric(
			map[string]interface{}{"memory": 1000, "cpu": 0.93},
			"system-metrics",
			map[string]string{"hostname": "hal9000"},
			time.Date(2018, 3, 4, 5, 6, 7, 9, time.UTC)),
	}
	// The actual write. This method can be called concurrently.
	if err := influx.Write(context.Background(), "my-awesome-bucket", "my-very-awesome-org", myMetrics...); err != nil {
		log.Fatal(err) // as above, use your own error handling here
	}
	influx.Close() // closes the client; after this the client is unusable
}
ERROR 2019/05/10 01:07:57 json: cannot unmarshal number into Go value of type influxdb.genericRespError
exit status 1
Hello,
I'd like to know if this snippet is a correct way to use the QueryCSV function:
type influxRecord struct {
	Zone   string    `flux:"name" json:"zone"`
	Stop   time.Time `flux:"_stop" json:"-"`
	Start  time.Time `flux:"_start" json:"-"`
	Time   time.Time `flux:"_time" json:"date"`
	HostIP string    `flux:"host_ip" json:"-"`
	Count  int32     `flux:"_value" json:"count"`
}
q := fmt.Sprintf(
	`from(bucket: "%s")
		|> range(start: -1h)
		|> filter(fn: (r) => r._measurement == "occupation" and r._field == "%s")
		|> last()`, me.c.InfluxDB.PullBucket, sensor)

response, err := me.cli.QueryCSV(
	context.Background(),
	q,
	me.c.InfluxDB.Org,
)
if err != nil {
	// handle the query error
}

r := influxRecord{}
for response.Next() {
	err = response.Unmarshal(&r)
	if err != nil {
		// handle the unmarshal error
	}
}
At the moment I'm getting this error: flux: unsupported type: is not supported to generate flux, try a map or a struct with public keys.
Thank you.
edit: I realized my error right after posting the issue. Anyway, since there are very few examples apart from the tests, I put the final snippet here as an example. It may help someone.
The root influxdb.Client implements the writer.BucketMetricWriter interface (https://github.com/influxdata/influxdb-client-go/blob/develop/writer/writer.go#L11).
A new decorating implementation is required which automatically retries on error conditions.
This was previously implemented in the influxdb.Client; however, it was removed in the buffered writer refactor. It should now be reinstated as a decorator for the writer.BucketMetricWriter interface.
Docs, Fluxlang, and InfluxDB's TSDB all use the concept of Measurement, and rarely the concept of Metric.
Using the concept of Metric in the Go client may therefore be confusing to users.
Lines 9 to 30 in 0d5eea1
Due to where the goto statement is placed, in the event of a retry the metrics are encoded again into the same buffer. This leads to duplicate metrics when retries occur.
Lines 38 to 48 in 0d5eea1
$ go get github.com/influxdata/influxdb-client-go
../github.com/influxdata/influxdb-client-go/internal/ast/ast.go:33:10: v.MapRange undefined (type reflect.Value has no field or method MapRange)
../github.com/influxdata/influxdb-client-go/internal/ast/ast.go:243:10: v.MapRange undefined (type reflect.Value has no field or method MapRange)
I'm inserting large amounts of data via the Go library, and I'm seeing arbitrary trailing " characters added to some of the tag fields.
I've done step-by-step debugging to ensure that the " is not there when the influxdb.Metric object is built and submitted to the influxdb client's Write method, but the quotes still show up (not on all tags, just some).
myMetrics = []influxdb.Metric{
	influxdb.NewRowMetric(
		map[string]interface{}{"confirmed": confirmed, "deaths": dead, "recovered": recovered, "lat": latitude, "lon": longitude},
		meas,
		map[string]string{
			"state_province": Case.Province,
			"country_region": Case.Country,
			"s2_cell_id":     cell,
			"last_update":    stringTime,
		},
		t),
}
I've verified that Case.Province does NOT have extra trailing quotes. I've run the string through strings.Trim() as well as strings.Replace() to ensure the " character is not there, but it still ends up in the database.
See:
Lines 138 to 155 in de2584b
There are two early exit points on non-nil errors which will lead to this waitgroup not getting decremented. This will ultimately lead to a deadlock on Wait() in Stop().
We should defer w.wg.Done() as early as possible.
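The pattern the fix boils down to, sketched with a hypothetical worker function standing in for the writer's buffered-write step:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// process stands in for the write step; it can fail before the end of the
// goroutine body is reached, which is exactly the hazardous case.
func process(fail bool) error {
	if fail {
		return errors.New("encode error")
	}
	return nil
}

func main() {
	var wg sync.WaitGroup

	for _, fail := range []bool{true, false, true} {
		wg.Add(1)
		go func(fail bool) {
			// Deferring Done immediately guarantees the waitgroup is
			// decremented on every exit path, including early error returns.
			defer wg.Done()
			if err := process(fail); err != nil {
				return // early exit: without the defer this would leak a count
			}
			// ... happy-path work ...
		}(fail)
	}

	wg.Wait() // would deadlock here if any early return skipped Done
	fmt.Println("all workers finished")
}
```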
Pass in connection string and token as the base interface.
Specifying correct credentials for an admin member, the login fails using the development branch:
http := &http.Client{Timeout: writeTimeout}
options := []influxdb.Option{
	influxdb.WithAddress(url),
	influxdb.WithUserAndPass(user, password),
}
client, err := influxdb.New(http, options...)
result:
unauthorized: unauthorized access
Go 1.11 doesn't seem to work with modules, but 1.12 does.
Flux added a base64Binary type.
The client should support this.
Information for the type:
https://github.com/influxdata/flux/blob/master/docs/SPEC.md#annotations
query.go will need to be updated so that type can be properly unmarshaled.
internal/ast/ast.go will also need to be updated to support this.
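Assuming the annotated-CSV column value for a base64Binary column is a standard base64 string (the Flux SPEC linked above should be checked for the exact alphabet and padding rules), the decode step in query.go could be as small as:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeBase64Binary converts the raw CSV column value of a base64Binary
// typed column into bytes. Standard base64 encoding is assumed here.
func decodeBase64Binary(raw string) ([]byte, error) {
	return base64.StdEncoding.DecodeString(raw)
}

func main() {
	b, err := decodeBase64Binary("aGVsbG8=")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", b) // hello
}
```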
go get -u github.com/influxdata/influxdb-client-go
# github.com/influxdata/influxdb-client-go/internal/ast
..\github.com\influxdata\influxdb-client-go\internal\ast\ast.go:33:10: v.MapRange undefined (type reflect.Value has no field or method MapRange)
..\github.com\influxdata\influxdb-client-go\internal\ast\ast.go:243:10: v.MapRange undefined (type reflect.Value has no field or method MapRange)
Handle a returned http.StatusRequestEntityTooLarge in the various automated buffered writers that auto-send.
There are two people on this planet that know what that means, heh.
The following doesn't work for me:
p := influxdb2.NewPoint("sensor-data",
	map[string]string{"id": "1"},
	map[string]interface{}{"temp": 45.5, "lightIntensity": 120},
	time.Now())
Only lightIntensity is written to the cloud; no float value gets written. The issue only came up with the newest client.
Currently the only authentication we support is the username + password combination (which is broken, see #45).
see:
Line 48 in 88077c0
We should also add support for token-based authentication, i.e. Authorization: Token xxxxxx.
Perhaps a WithToken(token string) option here: https://github.com/influxdata/influxdb-client-go/blob/develop/clientoptions.go#L116 which just sets a static token on client.authorization.
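A sketch of what that option could look like, with a minimal stand-in Client (the field name and constructor here are assumptions, not the client's real internals):

```go
package main

import "fmt"

// Client is a minimal stand-in for influxdb.Client; only the field the
// option touches is modeled here.
type Client struct {
	authorization string
}

// Option mirrors the functional-option pattern the client already uses.
type Option func(c *Client) error

// WithToken sets a static token on the client, producing the
// "Authorization: Token xxxxxx" header scheme described above.
func WithToken(token string) Option {
	return func(c *Client) error {
		c.authorization = "Token " + token
		return nil
	}
}

func New(opts ...Option) (*Client, error) {
	c := &Client{}
	for _, opt := range opts {
		if err := opt(c); err != nil {
			return nil, err
		}
	}
	return c, nil
}

func main() {
	c, err := New(WithToken("mytoken"))
	if err != nil {
		panic(err)
	}
	fmt.Println(c.authorization) // Token mytoken
}
```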
..or provide API documentation here.
https://github.com/deepmap/oapi-codegen is a newer library, but the generated code looks pretty darn good.
Hi, I use the PointWriter to write metrics to InfluxDB with an underlying buffer.
Here is the code:
client, _ := influxdb.New(config.Endpoint, config.Token)
writer := influxdb_writer.New(client, config.Bucket, config.Organization)
Then I noticed something weird. When the writer gets an error back from InfluxDB (for instance, exceeded rate limit), every subsequent write returns that last error, even after the InfluxDB service is back to normal. And there is no more network traffic to the InfluxDB service.
Then I found this code:
influxdb-client-go/writer/point.go
Lines 103 to 118 in fb46d51
Once p.err is set to an error, the underlying writer is never invoked to write metrics again. And there are many places in the code like that.
I have no idea what's going on. Am I missing something?
Add support for managing authorizations
We're writing a small Terraform provider for InfluxDB v2 to automate the creation of buckets, etc. Hence, we're trying to use this client as much as possible.
There's already a file for the setup/ route, but only for this one. Do you intend to add support for all the other routes, and should we contribute to that? Or should we write our own HTTP requests, given that the goal of the client is simply to allow handy reads/writes (and secondary operations such as setup)?
I have checked the code: ping simply does an HTTP GET on the URL ip:port/api/v2/ready, and then a 401 {"code":"unauthorized","message":"unauthorized access"} is returned.
When I try this in the browser after logging in, a 404 {"code":"not found","message":"path not found"} is returned.
I also checked that the URL ip:port/ready works as expected without auth.
Is this a client problem, or should I wait for the api/v2/ready implementation in influxdb?
Closes #45
All actions for InfluxDB 2.0 need to be authenticated, either via a token or a session (and, in the future, JWT).
The client currently only supports Setup(), which completes the onboarding procedure for new user / org / bucket creation. Calling this does retrieve a session token and subsequently uses it for auth. However, this is only suitable once per unique user + org + bucket combination.
We should leverage the /signin basic auth API call when a username is provided via WithUsernameAndPassword and no authentication has been set.
Example:
func (c *Client) SignIn() error {
	c.mu.Lock()
	defer c.mu.Unlock()
	// ...
	resp, err := c.do(req)
	// ...
	c.session = token
}

func WithUsernamePassword(username, password string) Option {
	return func(c *Client) {
		c.username = username
		c.password = password
		c.once = sync.Once{}
	}
}

func (c *Client) signIn() {
	c.once.Do(func() {
		if err := c.SignIn(); err != nil {
			c.logger.Error(err)
		}
	})
}

func (c *Client) Write(...) {
	c.signIn()
}
Update:
For reference /signin returns session via Set-Cookie: session=token
Another Update:
Need to actually use this token as a session cookie. Consider using https://golang.org/pkg/net/http/cookiejar/
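How the cookiejar approach works, sketched against a local test server standing in for InfluxDB (the routes and cookie value here are illustrative, not InfluxDB's real behavior):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/cookiejar"
	"net/http/httptest"
)

// newFakeInflux stands in for InfluxDB: /signin sets the session cookie, and
// /api succeeds only when the session cookie comes back on the request.
func newFakeInflux() *httptest.Server {
	mux := http.NewServeMux()
	mux.HandleFunc("/signin", func(w http.ResponseWriter, r *http.Request) {
		http.SetCookie(w, &http.Cookie{Name: "session", Value: "token123", Path: "/"})
	})
	mux.HandleFunc("/api", func(w http.ResponseWriter, r *http.Request) {
		if _, err := r.Cookie("session"); err != nil {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		fmt.Fprint(w, "ok")
	})
	return httptest.NewServer(mux)
}

// sessionStatus signs in and then calls /api with the same client; the cookie
// jar remembers the Set-Cookie response and replays it automatically.
func sessionStatus(base string) (int, error) {
	jar, err := cookiejar.New(nil)
	if err != nil {
		return 0, err
	}
	client := &http.Client{Jar: jar}

	resp, err := client.Get(base + "/signin")
	if err != nil {
		return 0, err
	}
	resp.Body.Close()

	resp, err = client.Get(base + "/api")
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	srv := newFakeInflux()
	defer srv.Close()

	code, err := sessionStatus(srv.URL)
	if err != nil {
		panic(err)
	}
	fmt.Println(code) // 200: the session cookie was replayed
}
```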
I'd like to be able to generate tokens and expire them using Vault.
I contributed the InfluxDB v1 integration to Vault almost a year ago:
https://www.vaultproject.io/docs/secrets/databases/influxdb/
It's time to be able to do the same with InfluxDB v2, but I'm unable to work it out with the new SDK. Can I get a working example of creating a new token?
A query with the resulting CSV:
,result,table,_start,_stop,uptime,_time
,_result,0,2020-01-27T05:04:48.70477677Z,2020-01-27T17:04:48.70477677Z,0.5454545454545454,2020-01-27T06:00:00Z
,_result,0,2020-01-27T05:04:48.70477677Z,2020-01-27T17:04:48.70477677Z,1,2020-01-27T07:00:00Z
yields incorrect row and column name information:
ColNames: [result table _start _stop uptime _time]
Row: [ 0 2020-01-27T04:49:51.607050038Z 2020-01-27T16:49:51.607050038Z 0 2020-01-27T05:00:00Z]
It seems that the preceding comma creates an extra empty element in the Row slice, offsetting each row by 1 index.
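A caller-side workaround for the offset, sketched with encoding/csv on the sample above: strip the leading annotation column from the header, and drop the matching empty leading field from each data record before pairing names with values.

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// alignRow drops the leading annotation column from a data record when it has
// one more field than there are column names, so values line up with names.
func alignRow(colNames, record []string) []string {
	if len(record) == len(colNames)+1 && record[0] == "" {
		return record[1:]
	}
	return record
}

func main() {
	data := `,result,table,_start,_stop,uptime,_time
,_result,0,2020-01-27T05:04:48.70477677Z,2020-01-27T17:04:48.70477677Z,0.5454545454545454,2020-01-27T06:00:00Z
,_result,0,2020-01-27T05:04:48.70477677Z,2020-01-27T17:04:48.70477677Z,1,2020-01-27T07:00:00Z
`
	r := csv.NewReader(strings.NewReader(data))
	records, err := r.ReadAll()
	if err != nil {
		panic(err)
	}
	// The header also starts with the empty annotation column; strip it too.
	colNames := records[0][1:]
	for _, rec := range records[1:] {
		row := alignRow(colNames, rec)
		fmt.Println(colNames[4], "=", row[4]) // the uptime column now lines up
	}
}
```

The proper fix would of course live in the client's CSV handling rather than in every caller.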
I would like to have an endpoint to validate that some Flux is, at least, syntactically correct.
This would be great for users, because your Go module shouldn't have to depend on Flux to validate a script, and you shouldn't have to care about versioning; you would just send a request to your InfluxDB server 👍
Consider this: it pulls in almost a hundred dependencies:
❯ mkdir foo && cd foo
~/foo
❯ go mod init foo
go: creating new go.mod: module foo
~/foo
❯ go get -u github.com/influxdata/influxdb-client-go@develop
go: finding github.com/influxdata/influxdb-client-go develop
go: finding github.com/influxdata/tdigest latest
go: finding github.com/influxdata/line-protocol latest
go: finding golang.org/x/crypto latest
go: finding golang.org/x/sync latest
go: finding github.com/andreyvit/diff latest
go: finding gonum.org/v1/netlib latest
go: finding golang.org/x/tools latest
go: finding golang.org/x/exp latest
go: finding github.com/pkg/term latest
go: finding golang.org/x/net latest
go: finding github.com/apache/arrow/go/arrow latest
go: finding golang.org/x/sys latest
go: finding gopkg.in/check.v1 latest
go: finding gonum.org/v1/gonum latest
go: finding github.com/remyoudompheng/bigfft latest
go: finding golang.org/x/mobile latest
go: finding github.com/kevinburke/ssh_config latest
go: finding github.com/BurntSushi/xgb latest
go: finding github.com/alecthomas/units latest
go: finding github.com/jbenet/go-context latest
go: finding golang.org/x/image latest
go: finding github.com/anmitsu/go-shlex latest
go: finding github.com/alcortesm/tgz latest
go: finding github.com/blakesmith/ar latest
go: finding github.com/alecthomas/template latest
go: finding golang.org/x/xerrors latest
go: finding golang.org/x/oauth2 latest
go: finding github.com/jpillora/backoff latest
go: finding github.com/eapache/go-xerial-snappy latest
go: finding github.com/aybabtme/rgbterm latest
go: finding github.com/armon/consul-api latest
go: finding github.com/flynn/go-shlex latest
go: finding github.com/aphistic/golf latest
go: finding github.com/mattn/go-tty latest
go: finding github.com/tj/assert latest
go: finding github.com/smartystreets/go-aws-auth latest
go: finding github.com/campoy/unique latest
go: finding gopkg.in/tomb.v1 latest
go: finding github.com/jmespath/go-jmespath latest
go: finding contrib.go.opencensus.io/exporter/aws latest
go: finding google.golang.org/genproto latest
go: finding gonum.org/v1/plot latest
go: finding github.com/golang/freetype latest
go: finding github.com/ajstarks/svgo latest
go: finding github.com/GoogleCloudPlatform/cloudsql-proxy latest
go: finding github.com/golang/glog latest
go: finding github.com/tmc/grpc-websocket-proxy latest
go: finding github.com/xdg/scram latest
go: finding golang.org/x/lint latest
go: finding github.com/coreos/go-systemd latest
go: finding golang.org/x/time latest
go: finding github.com/tj/go-kinesis latest
go: finding github.com/mgutz/ansi latest
go: finding github.com/jstemmer/go-junit-report latest
go: finding github.com/coreos/pkg latest
go: finding github.com/mattn/go-ieproxy latest
go: finding github.com/google/pprof latest
go: finding github.com/codahale/hdrhistogram latest
go: finding github.com/xiang90/probing latest
go: finding github.com/golang/groupcache latest
go: finding github.com/kr/logfmt latest
go: finding github.com/tj/go-elastic latest
go: finding github.com/dgryski/go-sip13 latest
go: finding github.com/prometheus/client_model latest
go: finding github.com/armon/go-socks5 latest
go: finding github.com/mwitkow/go-conntrack latest
go: finding github.com/ruudk/golang-pdf417 latest
go: finding github.com/modern-go/concurrent latest
go: finding github.com/rcrowley/go-metrics latest
go: finding github.com/streadway/amqp latest
go: finding istio.io/gogo-genproto latest
go: downloading github.com/influxdata/influxdb-client-go v0.0.2-0.20190805165203-23da33b60c81
go: extracting github.com/influxdata/influxdb-client-go v0.0.2-0.20190805165203-23da33b60c81
The new readme has some wrongly formatted examples.
I have a flux query which returns 2 yields in a single script as follows:
(Assuming all the values are sent)
workbench = from(bucket: "test")
|> range(start: v.timeStart, stop: v.timeEnd)
|> filter(fn: (r) => r._measurement == "mem")
|> filter(fn: (r) => r.host == v.host)
count = workbench
|> count()
|> yield(name:"count")
data = workbench
|> limit(n: v.limit, offset : v.offset)
|> sort(columns: ["_time"], desc: v.isDescending)
|> yield(name:"data")
From the resulting query I get 2 series in the HTTP response. But the package fails to read the second series, returning the error: record on line 61: wrong number of fields. So I tried setting the CSV reader's FieldsPerRecord to -1 (as per this); it fails with that too. Need help.
Line 85 in b0a0379
The line above is not configurable through the provided API interface, and it essentially seems to force all queries to complete within 20 seconds. Am I missing something, or is this configurable? I can't seem to get around it.
I constantly get the error:
Post "https://localhost/api/v2/query?org=testing_queries": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E2E test output:
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,long,string,string,string,string,string,string
#group,false,false,true,true,false,false,false,false,false,false,false,false
#default,_result,,,,,,,,,,,
,result,table,_start,_stop,_time,_value,_field,_measurement,ktest1,ktest2,"ktest2,k-test3",ktest3
,,0,2019-06-21T19:44:37.421358Z,2019-08-02T11:44:37.421358Z,2019-08-02T11:43:32.396033Z,3,ftest1,test,k-test1,k-test2,,
PASS
ok github.com/influxdata/influxdb-client-go 5.112s
InfluxDB revision 945f16ff4be321e5ff4eda783be03ac27d9e68b7
InfluxDB logs:
2019-08-02T11:44:37.442925Z info Error writing response to client {"log_id": "0H0YANUG000", "handler": "query", "handler": "flux", "error": "csv encoder error: expected integer cursor type, got *reads.stringMultiShardArrayCursor", "errorVerbose": "expected integer cursor type, got *reads.stringMultiShardArrayCursor\ncsv encoder error\ngithub.com/influxdata/flux/csv.wrapEncodingError\n\t/Users/georgemac/go/pkg/mod/github.com/influxdata/[email protected]/csv/result.go:753\ngithub.com/influxdata/flux/csv.(*ResultEncoder).Encode.func1\n\t/Users/georgemac/go/pkg/mod/github.com/influxdata/[email protected]/csv/result.go:843\ngithub.com/influxdata/flux/execute.(*result).Do\n\t/Users/georgemac/go/pkg/mod/github.com/influxdata/[email protected]/execute/result.go:70\ngithub.com/influxdata/influxdb/query/control.(*errorCollectingTableIterator).Do\n\t/Users/georgemac/github/influxdata/influxdb/query/control/controller.go:803\ngithub.com/influxdata/flux/csv.(*ResultEncoder).Encode\n\t/Users/georgemac/go/pkg/mod/github.com/influxdata/[email protected]/csv/result.go:771\ngithub.com/influxdata/flux.(*DelimitedMultiResultEncoder).Encode\n\t/Users/georgemac/go/pkg/mod/github.com/influxdata/[email protected]/result.go:287\ngithub.com/influxdata/influxdb/query.ProxyQueryServiceAsyncBridge.Query\n\t/Users/georgemac/github/influxdata/influxdb/query/bridges.go:103\ngithub.com/influxdata/influxdb/http.(*FluxHandler).handleQuery\n\t/Users/georgemac/github/influxdata/influxdb/http/query_handler.go:167\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/Cellar/go/1.12.6/libexec/src/net/http/server.go:1995\ngithub.com/NYTimes/gziphandler.GzipHandlerWithOpts.func1.1\n\t/Users/georgemac/go/pkg/mod/github.com/!n!y!times/[email protected]/gzip.go:289\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/Cellar/go/1.12.6/libexec/src/net/http/server.go:1995\ngithub.com/julienschmidt/httprouter.(*Router).Handler.func1\n\t/Users/georgemac/go/pkg/mod/github.com/julienschmidt/[email 
protected]/params_go17.go:26\ngithub.com/julienschmidt/httprouter.(*Router).ServeHTTP\n\t/Users/georgemac/go/pkg/mod/github.com/julienschmidt/[email protected]/router.go:334\ngithub.com/influxdata/influxdb/http.(*APIHandler).ServeHTTP\n\t/Users/georgemac/github/influxdata/influxdb/http/api_handler.go:262\ngithub.com/influxdata/influxdb/http.(*AuthenticationHandler).ServeHTTP\n\t/Users/georgemac/github/influxdata/influxdb/http/authentication_middleware.go:89\ngithub.com/influxdata/influxdb/http.(*PlatformHandler).ServeHTTP\n\t/Users/georgemac/github/influxdata/influxdb/http/platform_handler.go:71\ngithub.com/influxdata/influxdb/http.(*Handler).ServeHTTP\n\t/Users/georgemac/github/influxdata/influxdb/http/handler.go:151\ngithub.com/influxdata/influxdb/http.DebugFlush.func1\n\t/Users/georgemac/github/influxdata/influxdb/http/debug.go:22\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/Cellar/go/1.12.6/libexec/src/net/http/server.go:1995\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/Cellar/go/1.12.6/libexec/src/net/http/server.go:2774\nnet/http.(*conn).serve\n\t/usr/local/Cellar/go/1.12.6/libexec/src/net/http/server.go:1878\nruntime.goexit\n\t/usr/local/Cellar/go/1.12.6/libexec/src/runtime/asm_amd64.s:1337"}
The default user agent should be influxdb-client-go/<VERSION>
Move into a buffered writer
As per suggestion in #28
The semantics of Start and Stop are a little tricky. Ideally Start and Stop need to be synchronized and flip/flop between being able to be called. The current implementation raises a number of race and deadlock hazards and is a good example of how tricky it is to implement.
That said I see little value in having Start and Stop semantics. Personally I would like to start a periodic flushing buffer and eventually close it. If I need another, I would just create a new instance. Without a solid use-case for Start and Stop I would suggest that we remove it in favor of something like:
type LPWriter struct{}

func (*LPWriter) Start() {
	// ...
}

func (*LPWriter) Flush() {
	// ...
}

func (*LPWriter) Close() error { return nil }
A further suggestion would be to remove periodic flushing altogether, in favor of just a Flush method. Then we write a type which takes a WriteFlusher interface and moves the periodic flushing responsibility elsewhere.
Thoughts?
Perhaps use struct tags?
When a non-existing organization is used in writing to influx, no error is returned.
When I use the example to write data to an InfluxDB bucket, the Data Explorer does not show the data. Do you know why?
Much like kubectl, aws, gcloud-cli, and az-cli: environment variables are the first source of credentials, but they all fall back to local credentials (using the currently configured profile).
Hello!
I'm using "github.com/influxdata/influxdb/client/v2" and I'm quite disappointed to see the old client completely removed.
I like the new client, but there are some features which aren't available in the new one.
Previously, you had the option to run arbitrary commands using the client:
q := influx_client.Query{
	Command:  cmd,
	Database: database_name,
}
if response, err := influx_client_connection.Query(q); err == nil {
	res = response.Results
}
Unfortunately, that's not possible with the new client.
How can I implement it?
Thank you!