redistimeseries / redistimeseries-go
Golang client library for RedisTimeSeries
Home Page: https://redistimeseries.io
License: Apache License 2.0
Delete data points for a given time series and interval range, in the form of start and end delete timestamps.
The given timestamp interval is closed (inclusive), meaning the start and end data points will also be deleted.
TS.DEL key fromTimestamp toTimestamp
TS.DEL complexity is O(n), where n is the number of data points in the requested range.
TS.DEL temperature:2:32 1548149180000 1548149183000
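With the Go client, the command would currently be issued through the underlying redigo connection. A minimal sketch of building the argument list (the `tsDelArgs` helper is hypothetical, not part of the library's API):

```go
package main

import "fmt"

// tsDelArgs builds the argument list for a TS.DEL call. The helper name is
// hypothetical — the client did not expose a dedicated TS.DEL method at the
// time of writing, so the command would be executed through redigo, e.g.
// conn.Do(args[0].(string), args[1:]...).
func tsDelArgs(key string, from, to int64) []interface{} {
	return []interface{}{"TS.DEL", key, from, to}
}

func main() {
	fmt.Println(tsDelArgs("temperature:2:32", 1548149180000, 1548149183000))
}
```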
As of RedisTimeSeries >= v1.4, the CHUNK_SIZE (in bytes) parameter is exposed and properly documented as a TS.CREATE option.
Further reference here: https://oss.redislabs.com/redistimeseries/master/commands/#tscreate
As of RedisTimeSeries >= 1.4 you can add samples to a time series where the time of the sample is older than the newest sample in the series. Bundled with that, there is now a policy that defines the handling of duplicate samples, which needs to be supported on the client via the [DUPLICATE_POLICY policy] argument on TS.CREATE and via [ON_DUPLICATE policy] on TS.ADD. The following are the possible policies:

* BLOCK - an error will occur for any out-of-order sample
* FIRST - ignore the new value
* LAST - override with the latest value
* MIN - only override if the value is lower than the existing value
* MAX - only override if the value is higher than the existing value

Further reference: documentation link
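Client-side support could validate the policy name before sending the command. A hypothetical sketch, using only the policy names listed above (the `tsAddArgs` helper is illustrative, not the library's API):

```go
package main

import (
	"fmt"
	"strings"
)

// duplicatePolicies lists the policies documented for RedisTimeSeries >= 1.4.
var duplicatePolicies = map[string]bool{
	"BLOCK": true, "FIRST": true, "LAST": true, "MIN": true, "MAX": true,
}

// tsAddArgs builds TS.ADD arguments, appending ON_DUPLICATE when a policy is
// requested and rejecting unknown policy names.
func tsAddArgs(key string, ts int64, value float64, onDuplicate string) ([]interface{}, error) {
	args := []interface{}{"TS.ADD", key, ts, value}
	if onDuplicate != "" {
		p := strings.ToUpper(onDuplicate)
		if !duplicatePolicies[p] {
			return nil, fmt.Errorf("invalid duplicate policy %q", onDuplicate)
		}
		args = append(args, "ON_DUPLICATE", p)
	}
	return args, nil
}

func main() {
	args, err := tsAddArgs("temperature:2:32", 1548149180000, 26.0, "last")
	fmt.Println(args, err)
}
```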
According to https://pkg.go.dev/github.com/RedisTimeSeries/redistimeseries-go#Client.CreateKeyWithOptions it is possible to define the retention time.
I used the example as shown in the documentation, but it doesn't seem that the specified options are applied correctly.
127.0.0.1:6379> TTL timeserie-1
(integer) -1
I believe we might have an issue in the Go client: it is not closing idle connections when using the default, simpler way to connect.
Given that we have a connection timeout of 0 (due to not passing a default) and a maximum of 500 idle connections (a very old default, I believe from the start of the client), we might have a lot of dangling connections up to the point where the server closes them. We should investigate and fix this.
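Until the defaults change, a possible workaround is to construct the client from an explicitly configured pool. This configuration sketch assumes redigo's `redis.Pool` and the client's `NewClientFromPool` constructor; the field values are illustrative, not recommended defaults:

```go
package main

import (
	"time"

	redistimeseries "github.com/RedisTimeSeries/redistimeseries-go"
	"github.com/gomodule/redigo/redis"
)

func main() {
	// Configure the pool explicitly so idle connections are bounded and
	// reaped, instead of relying on the old defaults (MaxIdle 500, no
	// idle timeout).
	pool := &redis.Pool{
		MaxIdle:     10,              // keep at most 10 idle connections
		IdleTimeout: 4 * time.Minute, // close connections idle longer than this
		Dial: func() (redis.Conn, error) {
			return redis.Dial("tcp", "localhost:6379")
		},
	}
	client := redistimeseries.NewClientFromPool(pool, "ts-client")
	_ = client
}
```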
Hi RedisTimeSeries team,
The RedisTimeSeries module brings many interesting features that can expand the way we use Redis. But looking at the examples and the documentation, if I got it right, redistimeseries-go does not support Redis Cluster? If that's the case, is there any plan to add cluster support, or any suggestion for using RedisTimeSeries with a cluster in production with Go?
Thank you!
SELECTED_LABELS allows requesting only a subset of the key-value label pairs of a series.
An important note is that SELECTED_LABELS and WITHLABELS are mutually exclusive.
TS.MRANGE fromTimestamp toTimestamp
[FILTER_BY_TS TS1 TS2 ..]
[FILTER_BY_VALUE min max]
[COUNT count]
[WITHLABELS | SELECTED_LABELS label1 ..]
[AGGREGATION aggregationType timeBucket]
FILTER filter..
[GROUPBY <label> REDUCE <reducer>]
TS.MREVRANGE fromTimestamp toTimestamp
[FILTER_BY_TS TS1 TS2 ..]
[FILTER_BY_VALUE min max]
[COUNT count]
[WITHLABELS | SELECTED_LABELS label1 ..]
[AGGREGATION aggregationType timeBucket]
FILTER filter..
[GROUPBY <label> REDUCE <reducer>]
* WITHLABELS - Include in the reply the label-value pairs that represent metadata labels of the time series. If `WITHLABELS` or `SELECTED_LABELS` are not set, by default an empty array will be returned in the labels position of the reply.
* SELECTED_LABELS - Include in the reply a subset of the label-value pairs that represent metadata labels of the time series. This is useful when you have a large number of labels per series but are only interested in the value of some of them. If `WITHLABELS` or `SELECTED_LABELS` are not set, by default an empty array will be returned in the labels position of the reply.
Query time series with metric=cpu, but only return the team label
127.0.0.1:6379> TS.ADD ts1 1 90 labels metric cpu metric_name system team NY
(integer) 1
127.0.0.1:6379> TS.ADD ts1 2 45
(integer) 2
127.0.0.1:6379> TS.ADD ts2 2 99 labels metric cpu metric_name user team SF
(integer) 2
127.0.0.1:6379> TS.MRANGE - + SELECTED_LABELS team FILTER metric=cpu
1) 1) "ts1"
2) 1) 1) "team"
2) "NY"
3) 1) 1) (integer) 1
2) 90
2) 1) (integer) 2
2) 45
2) 1) "ts2"
2) 1) 1) "team"
2) "SF"
3) 1) 1) (integer) 2
2) 99
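The mutual exclusivity of WITHLABELS and SELECTED_LABELS could be enforced client-side before sending the command. A minimal sketch (the `labelArgs` helper is hypothetical, not part of the library's API):

```go
package main

import (
	"errors"
	"fmt"
)

// labelArgs builds the label portion of a TS.MRANGE/TS.MREVRANGE call while
// enforcing that WITHLABELS and SELECTED_LABELS are mutually exclusive.
func labelArgs(withLabels bool, selected []string) ([]string, error) {
	if withLabels && len(selected) > 0 {
		return nil, errors.New("WITHLABELS and SELECTED_LABELS are mutually exclusive")
	}
	if withLabels {
		return []string{"WITHLABELS"}, nil
	}
	if len(selected) > 0 {
		return append([]string{"SELECTED_LABELS"}, selected...), nil
	}
	return nil, nil // neither set: the reply carries an empty labels array
}

func main() {
	args, err := labelArgs(false, []string{"team"})
	fmt.Println(args, err)
}
```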
@danni-m - this requires a redis-timeseries version, or a redis-timeseries docker image.
TS.MRANGE
: GROUPBY <label> REDUCE <reducer>
TS.MRANGE 1451679382646 1451682982646 WITHLABELS
AGGREGATION MAX 60000
FILTER measurement=cpu
fieldname=usage_user
hostname=(host_9,host_3,host_5,host_1,host_7,host_2,host_8,host_4)
GROUPBY hostname REDUCE MAX
GROUPBY - Aggregate results across different time series, grouped by the provided label name.
For OSS clustered databases, RedisGears is required to be present.
When combined with AGGREGATION, the groupby/reduce is applied after the aggregation stage.
label - label name to group series by.
reducer - Reducer type used to aggregate series that share the same label value. Available reducers: sum, min, max.
Note: The resulting series will contain 3 labels with the following label array structure:

* <label>=<groupbyvalue> : containing the label name and label value.
* __reducer__=<reducer> : containing the used reducer.
* __source__=key1,key2,key3 : containing the source time series used to compute the grouped series.

Query time series with metric=cpu, group them by metric_name, reduce max
127.0.0.1:6379> TS.ADD ts1 1 90 labels metric cpu metric_name system
(integer) 1
127.0.0.1:6379> TS.ADD ts1 2 45
(integer) 2
127.0.0.1:6379> TS.ADD ts2 2 99 labels metric cpu metric_name user
(integer) 2
127.0.0.1:6379> TS.MRANGE - + WITHLABELS FILTER metric=cpu GROUPBY metric_name REDUCE max
1) 1) "metric_name=system"
2) 1) 1) "metric_name"
2) "system"
2) 1) "__reducer__"
2) "max"
3) 1) "__source__"
2) "ts1"
3) 1) 1) (integer) 1
2) 90
2) 1) (integer) 2
2) 45
2) 1) "metric_name=user"
2) 1) 1) "metric_name"
2) "user"
2) 1) "__reducer__"
2) "max"
3) 1) "__source__"
2) "ts2"
3) 1) 1) (integer) 2
2) 99
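The reduction semantics can be modelled client-side: within each group, samples from all member series are merged per timestamp and the reducer picks one value. This sketch only illustrates what the server computes for REDUCE max; it is not the client's API:

```go
package main

import "fmt"

// reduceMax merges samples from several series in a group, keeping the
// maximum value per timestamp — a client-side model of REDUCE max.
func reduceMax(series ...map[int64]float64) map[int64]float64 {
	out := map[int64]float64{}
	for _, s := range series {
		for ts, v := range s {
			if cur, ok := out[ts]; !ok || v > cur {
				out[ts] = v
			}
		}
	}
	return out
}

func main() {
	ts1 := map[int64]float64{1: 90, 2: 45}
	ts2 := map[int64]float64{2: 99}
	// If both series were in the same group, timestamp 2 keeps the larger value.
	fmt.Println(reduceMax(ts1, ts2))
}
```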
The underlying redigo redis/pool.Pool provides a Close() method, but redistimeseries-go doesn't provide any way to access it. redistimeseries-go clients currently live forever and have no way of being cleaned up.
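One way the library could address this is a thin wrapper that forwards Close() to the pool. A self-contained sketch (the `tsClient` type and `fakePool` stand-in are hypothetical; in practice the pool would be redigo's `*redis.Pool`, which already has `Close() error`):

```go
package main

import "fmt"

// closer matches redigo's *redis.Pool, which provides Close() error.
type closer interface {
	Close() error
}

// tsClient is a hypothetical wrapper showing how the library could expose
// the pool's Close method so callers can release connections.
type tsClient struct {
	pool closer
}

func (c *tsClient) Close() error {
	return c.pool.Close()
}

// fakePool stands in for *redis.Pool to keep this sketch self-contained.
type fakePool struct{ closed bool }

func (p *fakePool) Close() error { p.closed = true; return nil }

func main() {
	p := &fakePool{}
	c := &tsClient{pool: p}
	fmt.Println(c.Close(), p.closed)
}
```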
I notice that Sum is present as a DuplicatePolicy type in both the official documentation and some other clients, but it is missing from redistimeseries-go. Is there a way we could do this? Or can we add it here?
Further reference: https://oss.redislabs.com/redistimeseries/commands/#tsrangetsrevrange
since the deps are already in the repo.
Because the data may not arrive sequentially, I can't write a sample if a newer timestamp has already been written. How can I solve this?
DOD:

* AddWithOptions reply format: (err error)
* Add reply format: (storedTimestamp int64, err error)
* AddWithRetention reply format: (err error)
Hi,
I observed that the API of the redistimeseries-go client doesn't have the functionality to delete a key.
I want to avoid using 2 redis clients in my code (this client and "github.com/go-redis/redis")
Can you please add it?
My business case: I create a timeseries for a key, store samples, do aggregations and then delete key.
Thanks,
Cristian
Please add the ability to run a TS.MGET with options (i.e., optional args) to this package. We need to run TS.MGET with the arg WITHLABELS.
Example:
TS.MGET WITHLABELS filter type=foo id=1
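Following the command syntax TS.MGET [WITHLABELS] FILTER filter..., the optional argument handling could look like this hypothetical sketch (the `mgetArgs` helper is illustrative, not the library's API):

```go
package main

import "fmt"

// mgetArgs builds TS.MGET arguments with the optional WITHLABELS flag,
// followed by the mandatory FILTER clause.
func mgetArgs(withLabels bool, filters ...string) []interface{} {
	args := []interface{}{}
	if withLabels {
		args = append(args, "WITHLABELS")
	}
	args = append(args, "FILTER")
	for _, f := range filters {
		args = append(args, f)
	}
	return args
}

func main() {
	fmt.Println(mgetArgs(true, "type=foo", "id=1"))
}
```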
RedisTimeSeries/RedisTimeSeries#801
ALIGN feature

TS.RANGE key fromTimestamp toTimestamp [FILTER_BY_TS TS1 TS2 ..] [FILTER_BY_VALUE min max] [COUNT count] [ALIGN value] [AGGREGATION aggregationType timeBucket]
TS.REVRANGE key fromTimestamp toTimestamp [FILTER_BY_TS TS1 TS2 ..] [FILTER_BY_VALUE min max] [COUNT count] [ALIGN value] [AGGREGATION aggregationType timeBucket]

ALIGN docs

ALIGN - Time bucket alignment control for AGGREGATION. This will control the time bucket timestamps by changing the reference timestamp on which a bucket is defined.
Possible values:

* start or - : The reference timestamp will be the query start interval time (fromTimestamp).
* end or + : The reference timestamp will be the signed remainder of query end interval time by the AGGREGATION time bucket (toTimestamp % timeBucket).

Note: when not provided, alignment is set to 0.
(first ingestion)
127.0.0.1:6379> ts.add serie1 1 10.0
(integer) 1
127.0.0.1:6379> ts.add serie1 3 5.0
(integer) 3
127.0.0.1:6379> ts.add serie1 11 10.0
(integer) 11
127.0.0.1:6379> ts.add serie1 21 11.0
(integer) 21
Old behaviour, and the default behaviour when no ALIGN is specified (aligned to 0):
127.0.0.1:6379> ts.range serie1 1 30 AGGREGATION COUNT 10
1) 1) (integer) 0
2) 2
2) 1) (integer) 10
2) 1
3) 1) (integer) 20
2) 1
Align to the query start interval time (fromTimestamp)
127.0.0.1:6379> ts.range serie1 1 30 ALIGN start AGGREGATION COUNT 10
1) 1) (integer) 1
2) 2
2) 1) (integer) 11
2) 1
3) 1) (integer) 21
2) 1
Align to the query end interval time (toTimestamp). The reference timestamp will be the signed remainder of query end interval time by the AGGREGATION time bucket (toTimestamp % timeBucket).
127.0.0.1:6379> ts.range serie1 1 30 ALIGN end AGGREGATION COUNT 10
1) 1) (integer) 0
2) 2
2) 1) (integer) 10
2) 1
3) 1) (integer) 20
2) 1
Align to a timestamp
127.0.0.1:6379> ts.range serie1 1 30 ALIGN 1 AGGREGATION COUNT 10
1) 1) (integer) 1
2) 2
2) 1) (integer) 11
2) 1
3) 1) (integer) 21
2) 1
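The bucket timestamps in the runs above all follow one formula: a sample at ts lands in the bucket starting at ts - ((ts - align) mod timeBucket), where align is 0 by default, fromTimestamp for ALIGN start, and toTimestamp % timeBucket for ALIGN end. A small sketch reproducing the bucket starts from the examples:

```go
package main

import "fmt"

// bucketStart returns the start timestamp of the aggregation bucket a sample
// falls into, given the alignment reference. The double-mod keeps the result
// correct even when ts < align.
func bucketStart(ts, align, timeBucket int64) int64 {
	return ts - (((ts-align)%timeBucket)+timeBucket)%timeBucket
}

func main() {
	// Samples from the ingestion above, with COUNT 10 buckets.
	for _, ts := range []int64{1, 3, 11, 21} {
		fmt.Println(ts,
			bucketStart(ts, 0, 10), // default, and ALIGN end here (30 % 10 == 0)
			bucketStart(ts, 1, 10)) // ALIGN start (fromTimestamp == 1)
	}
}
```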
Since RedisTimeSeries 1.4 we've added the ability to back-fill time series, with different duplicate policies.
However, we still see several issues being raised on the core repo and client repos that point to users not being aware of it. Example: RedisTimeSeries/redistimeseries-py#86
We should address this by adding examples and further notes about duplicate policy.
You can check our docs about duplicate policy here: https://oss.redislabs.com/redistimeseries/configuration/#duplicate_policy.
Sample Readme of python client with an example of the expected outcome of this documentation/examples task:
https://github.com/RedisTimeSeries/redistimeseries-py#further-notes-on-back-filling-time-series
Go Modules is the official Go dependency management tool, and it is implemented directly in the go toolchain.
We need this in order to:
In Go 1.14, module support is considered ready for production use, and all users are encouraged to migrate to modules from other dependency management systems.
Also quoting the golang official blog, the reason why this does not break compatibility with previous go versions:
(Inside $GOPATH/src, for compatibility, the go command still runs in the old GOPATH mode, even if a go.mod is found. See the go command documentation for details.) Starting in Go 1.13, module mode will be the default for all development.
We will need to ensure we have a go.mod in all Go clients (and test at least with Go 1.12, the minimum version, so we can say we support >= 1.12). Ideally it should be 1.11, but redigo outputs an error:
$ gvm use go1.11
Now using version go1.11
filipe@filipe-ThinkPad-T490:~/go/src/github.com/RedisBloom/redisbloom-go$ GOCMD="GO111MODULE=on go" make
GO111MODULE=on go get -t -v ./...
go get: -t flag is a no-op when using modules
github.com/gomodule/redigo/redis
go build github.com/gomodule/redigo/redis: module requires Go 1.14
make: *** [Makefile:24: get] Error 1
Add the bin/* files to .gitignore.
Further reference: https://oss.redislabs.com/redistimeseries/commands/#tsmrangetsmrevrange