adroll / goamz
Fork of the goamz library developed within Canonical, with additional DynamoDB functionality.
Home Page: https://wiki.ubuntu.com/goamz
License: Other
CreateSecurityGroup needs to send a VpcId parameter in order to use the group within a VPC (http://goo.gl/Eo7Yl). This has to be set at creation time and can't be changed later.
It might be even better to take an options struct, to prevent possible future API changes from breaking the ec2 library.
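A minimal sketch of what that options struct could look like; the type and field names here are illustrative assumptions, not the ec2 package's actual API. The point is that an optional VpcId is simply omitted from the request parameters when empty, so non-VPC calls are unchanged:

```go
package main

import "fmt"

// CreateSecurityGroupOptions is a hypothetical options struct; the field
// names are illustrative, not the library's actual API.
type CreateSecurityGroupOptions struct {
	Name        string
	Description string
	VpcId       string // optional; only set when the group lives in a VPC
}

// buildParams sketches how the options could be flattened into EC2 query
// parameters, leaving VpcId out when it is empty.
func buildParams(opts CreateSecurityGroupOptions) map[string]string {
	params := map[string]string{
		"Action":           "CreateSecurityGroup",
		"GroupName":        opts.Name,
		"GroupDescription": opts.Description,
	}
	if opts.VpcId != "" {
		params["VpcId"] = opts.VpcId
	}
	return params
}

func main() {
	p := buildParams(CreateSecurityGroupOptions{Name: "web", Description: "web tier", VpcId: "vpc-123456"})
	fmt.Println(p["VpcId"])
}
```

New optional parameters can then be added to the struct later without breaking existing callers.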
@crowdmatt, we appreciate you sharing your code and taking care of the pull requests and all.
We would appreciate it even more if you gave a few words of explanation when you change how things work. For example, when you changed s3/item.go -> getItem, you could put a comment line above it saying "I think this way is more flexible" or "this is the way it should have been done".
If you feel very fancy, you can put these in the readme or a release note.
Again, we appreciate your work.
func (b *Bucket) Get(path string) (data []byte, err error)
It isn't possible to get the content-type, or any other custom headers that were sent to S3.
Correct me if I am wrong, but it seems like there's no way to run a count query where you want to take the LastEvaluatedKey and forward it on (i.e. to get the total count across all pages). E.g. here is a response from DynamoDB for a count query on an index, where you actually want to "follow" the last key:
{
  "Count": 209,
  "LastEvaluatedKey": {
    "Id": {
      "S": "359a4dce-52b7-487c-9a4a-7fa6baaa3934"
    },
    "Status": {
      "S": "Complete"
    }
  },
  "ScannedCount": 209
}
(FYI this is for a query which is essentially "count records where status=Complete")
There is no variation of CountQuery that returns the last evaluated key, and you can't use QueryTable because, even though it returns the last evaluated key, it doesn't return the count. If that method were modified to continue when Items doesn't exist (per https://github.com/crowdmob/goamz/issues/236), it might be possible to take the cap(results), but I'm not sure that would work, and it's a bit hacky.
What do you think? Happy to provide a solution if you think there's a good one.
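One possible shape for such a solution: a count variant that also returns LastEvaluatedKey, which callers drive in a loop. `CountPage` below is a hypothetical stand-in for that method (keys simplified to strings for the sketch); only the paging loop itself is the point:

```go
package main

import "fmt"

// counter stands in for a hypothetical variant of CountQuery that also
// returns LastEvaluatedKey, so callers can follow it across pages.
type counter interface {
	CountPage(exclusiveStartKey string) (count int64, lastEvaluatedKey string, err error)
}

// totalCount follows LastEvaluatedKey until it is empty, summing Count
// across all pages.
func totalCount(c counter) (int64, error) {
	var total int64
	key := ""
	for {
		n, last, err := c.CountPage(key)
		if err != nil {
			return 0, err
		}
		total += n
		if last == "" {
			return total, nil
		}
		key = last
	}
}

// fakeIndex serves two pages so the loop is exercised end to end.
type fakeIndex struct{}

func (fakeIndex) CountPage(start string) (int64, string, error) {
	if start == "" {
		return 209, "359a4dce", nil // first page, with a key to follow
	}
	return 41, "", nil // final page
}

func main() {
	n, _ := totalCount(fakeIndex{})
	fmt.Println(n) // 250
}
```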
How can I connect to an S3 bucket without knowing its region?
Is this API call implemented in this library? http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlocation.html
Better yet, in the boto (Python) AWS SDK I can connect to an S3 bucket with a 'universal' region, without needing to find out what the bucket's region is.
I find myself wanting to loop the SQS ReceiveMessage function in a goroutine, producing popped queue items through a channel. This seems like it'd be generally useful, but I'm not sure if this library is appropriate for it. Would you accept a patch to do that, or is it something I'm better off developing myself separately?
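A sketch of the pattern being described. The `messageReceiver` interface and the string message type are simplifications standing in for the library's SQS queue and Message types; a fake queue is used so the example is self-contained:

```go
package main

import "fmt"

// messageReceiver abstracts the SQS queue; ReceiveMessage here is a
// hypothetical simplification of the library's method.
type messageReceiver interface {
	ReceiveMessage(max int) ([]string, error)
}

// poll loops ReceiveMessage in a goroutine, forwarding each message on a
// channel until an error occurs or done is closed.
func poll(q messageReceiver, done <-chan struct{}) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for {
			msgs, err := q.ReceiveMessage(10)
			if err != nil {
				return // a real implementation would back off and retry
			}
			for _, m := range msgs {
				select {
				case out <- m:
				case <-done:
					return
				}
			}
		}
	}()
	return out
}

// fakeQueue serves one fixed batch, then errors, so the example terminates.
type fakeQueue struct{ served bool }

func (f *fakeQueue) ReceiveMessage(max int) ([]string, error) {
	if f.served {
		return nil, fmt.Errorf("done")
	}
	f.served = true
	return []string{"a", "b"}, nil
}

func main() {
	done := make(chan struct{})
	defer close(done)
	for m := range poll(&fakeQueue{}, done) {
		fmt.Println(m)
	}
}
```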
Hi @crowdmatt ,
I was curious to know why you don't use IAM roles. Is it that you haven't had a chance to research and test them? Do you have a use case that can't take advantage of IAM roles? Do you see a problem with them? Or have you been living happily without them and not bothered?
Just curious,
Cheers,
Ali
The QueueFromArn() function is misnamed.
The method name claims it wants an ARN. e.g. arn:aws:sqs:us-west-2:xxxxxxxxxxxx:my_queue_name
In reality, it wants the queue url: https://sqs.us-west-2.amazonaws.com/xxxxxxxxxxxx/my_queue_name
https://github.com/crowdmob/goamz/blob/master/sqs/sqs.go#L197
So either:
Since 996246a there is no longer a way to specify a namespace on custom metrics with PutMetricData
Just for the record, the DynamoDB PutItem call does not properly handle this response:
{"__type":"com.amazon.coral.validate#ValidationException","message":"One or more parameter values were invalid: An AttributeValue may not contain an empty string."}
With any luck, I'll submit a pull request for both issues (PutItem and AddItem) soon.
I'm getting invalid signature and token errors.
Or to rephrase the question: once you have built a query, how should you run it?
I see a query builder but no actual "query" functionality, only scans. Am I missing something? Is anybody working on it? Can I use your half-baked code if there is any? Should I do it myself and contribute back?
http://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html implies a single value, and that works; submitting it as an array (as currently implemented) results in an error.
Can't see any support for signed S3 URLs with custom headers or query parameters, e.g. to set Content-Disposition using "response-content-disposition"
Also can't find a (sensible) way to hack around it since none of the required methods are exported.
Per #242, the v4 signer should encode a space in query params with a "%20" rather than "+" when building the canonicalURI.
http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
In aws/sign.go#L246 and aws/sign.go#L251 the query string keys and values are encoded with the net/url QueryEscape function. However, that function escapes a space character with a "+" rather than the "%20" required by AWS.
The escaping used by the canonicalQueryString method needs to be updated.
One possible solution that continues to use the standard net/url (rather than a custom encoding function) is to use the path escaping functionality in the net/url package as demonstrated here:
http://play.golang.org/p/Po6CFEDcF1
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The standard QueryEscape call encodes a space with a "+" character.
	// This is because the specs allow either a "+" or "%20" for encoding
	// a space character in URL query strings, and this function was
	// written for that purpose.
	myFirstString := "Hello world"
	myFirstEncodedString := url.QueryEscape(myFirstString)
	fmt.Println("URL QueryEscape:", myFirstEncodedString)

	// However, the logic for encoding a space as "%20" is built into
	// the net/url package; it is just not readily exposed. We can access
	// it by encoding a URL path, where a space must be encoded as "%20"
	// according to spec.
	mySecondString := "Hello world"
	t := &url.URL{Path: mySecondString}
	mySecondEncodedString := t.String()
	fmt.Println("URL Path Encoded:", mySecondEncodedString)
}
I will work on a PR when I get time, but wanted to document the problem and a possible solution here in the meantime.
The current SQS API is fairly limiting in that you can set very few request attributes per function call. For example, there is a ReceiveMessage function and a ReceiveMessageWithVisibilityTimeout function, but no way to set, for example, WaitTimeSeconds.
I dealt with this a little bit by adding CreateQueueWithAttributes in #74, but it would be pretty tedious to create WithAttributes variants for every function.
I think a cleaner and more idiomatic approach would be to create request structs for each request and set attributes on them. This is similar to how net/http works with its client, where you set headers by calling Header.Set(). When it came time to actually make the request, you would do that via a Do method on the request object. The top-level functions could be reimplemented in terms of these, to maintain backward compatibility and a simple API.
This is a little bit different than how the rest of goamz seems to be implemented though. Do other AWS APIs not have as many parameters? Does goamz simply not allow them?
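A sketch of the request-struct idea for ReceiveMessage. The struct, its field names, and the params method are all illustrative assumptions about how the proposal could look, not the package's real API; an eventual Do method would sign and send these parameters:

```go
package main

import (
	"fmt"
	"strconv"
)

// ReceiveMessageRequest is a hypothetical request struct; the field and
// method names are illustrative, not the package's actual API.
type ReceiveMessageRequest struct {
	QueueURL            string
	MaxNumberOfMessages int
	VisibilityTimeout   int // seconds; 0 means "use the queue default"
	WaitTimeSeconds     int // enables long polling when > 0
}

// params flattens the struct into the query parameters a Do method would
// sign and send, leaving optional zero values out.
func (r *ReceiveMessageRequest) params() map[string]string {
	p := map[string]string{"Action": "ReceiveMessage"}
	if r.MaxNumberOfMessages > 0 {
		p["MaxNumberOfMessages"] = strconv.Itoa(r.MaxNumberOfMessages)
	}
	if r.VisibilityTimeout > 0 {
		p["VisibilityTimeout"] = strconv.Itoa(r.VisibilityTimeout)
	}
	if r.WaitTimeSeconds > 0 {
		p["WaitTimeSeconds"] = strconv.Itoa(r.WaitTimeSeconds)
	}
	return p
}

func main() {
	req := &ReceiveMessageRequest{MaxNumberOfMessages: 10, WaitTimeSeconds: 20}
	fmt.Println(req.params()["WaitTimeSeconds"]) // 20
}
```

New request attributes then become new struct fields rather than new function variants.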
Is there a way to do a conditional put and update item in DynamoDB? The PutItem API for the table doesn't seem to allow adding expected attributes/values.
The query class seems to allow adding it, but there is no query runner?
This switch is incomplete, omitting the us-west-2 region and several other regions: https://github.com/crowdmob/goamz/blob/master/sqs/sqs.go#L44
Anyone figured this out?
Samples would be appreciated!
I should be getting the error DuplicateAccessPointName with HTTP 400 but instead the call just returns the ELB.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/APIReference/API_CreateLoadBalancer.html
You can pass through a name without a value and CloudWatch will return all metrics associated with that name: http://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_DimensionFilter.html
I made a little hack to put an if guard around the value so that if it's blank it will not add it to the params here: https://github.com/crowdmob/goamz/blob/master/cloudwatch/cloudwatch.go#L276
This worked as expected, and I can't think of a reason you'd be submitting metrics with blank tag values. If anyone can, that's unfortunate, as we'd have to rework dimensions to use nil types :| Otherwise, I can submit a patch?
I'm using this code to test whether a file is found in an S3 bucket:
s := s3.New(auth, aws.USEast)
b = *s.Bucket(AmzBucket) // the bucket to check
ok, err = b.Exists(amzPath) // the path/prefix of the object to check for
I occasionally get this (even though the object exists):
panic: interface conversion: error is *url.Error, not *s3.Error
goroutine 1 [running]:
runtime.panic(0x359640, 0xc2102cbfc0)
/usr/local/go/src/pkg/runtime/panic.c:266 +0xb6
github.com/crowdmob/goamz/s3.(*Bucket).Exists(0xc2106ef800, 0xc2107e3f00, 0x96, 0x0, 0x0, ...)
/...myapp/github.com/crowdmob/goamz/s3/s3.go:245 +0x1e7
I'm getting the following when trying to go get github.com/crowdmob/goamz/s3:
$ go get github.com/crowdmob/goamz/s3
# github.com/crowdmob/goamz/s3
../../../crowdmob/goamz/s3/s3.go:944: unknown net.Dialer field 'KeepAlive' in struct literal
../../../crowdmob/goamz/s3/s3.go:947: unknown http.Client field 'Timeout' in struct literal
$ go version
go version go1.1.2 linux/amd64
Same on multiple hosts.
Hello,
According to the documentation, http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateTable.html, the CreateTable routine supports sending a list of global secondary indexes (the GlobalSecondaryIndexes field), which is not present in the Go implementation: https://github.com/crowdmob/goamz/blob/master/dynamodb/table.go
Are there any plans to add global secondary indexes?
Thanks
The DynamoDB API for Scan only supports results of up to ~1 MB per request. It uses LastEvaluatedKey and ExclusiveStartKey to allow one to make multiple requests to scan the entire data set. This is orthogonal to parallel scanning.
I'll need this feature in a project that I'm working on and as far as I can tell it is not currently supported in the goamz library. Adding it would likely require breaking the current API or adding a new type of scan (and parallel scan) function. I wanted to get some input from others that have been working on the library before jumping in.
Thanks!
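The page-following loop in question can be sketched like this. `ScanPage` is a hypothetical stand-in for a paged Scan call (keys simplified to strings), and a fake table stands in for DynamoDB so the example runs on its own:

```go
package main

import "fmt"

// pager stands in for a hypothetical paged Scan call: it returns one page
// of items plus the LastEvaluatedKey to feed back as ExclusiveStartKey
// (empty when the scan is complete).
type pager interface {
	ScanPage(exclusiveStartKey string) (items []string, lastEvaluatedKey string, err error)
}

// scanAll drives the loop described above: keep requesting with the
// previous LastEvaluatedKey until the service stops returning one.
func scanAll(p pager) ([]string, error) {
	var all []string
	startKey := ""
	for {
		items, lastKey, err := p.ScanPage(startKey)
		if err != nil {
			return nil, err
		}
		all = append(all, items...)
		if lastKey == "" {
			return all, nil
		}
		startKey = lastKey
	}
}

// fakeTable serves two pages so the loop is exercised end to end.
type fakeTable struct{}

func (fakeTable) ScanPage(start string) ([]string, string, error) {
	if start == "" {
		return []string{"item1", "item2"}, "key2", nil
	}
	return []string{"item3"}, "", nil
}

func main() {
	items, _ := scanAll(fakeTable{})
	fmt.Println(len(items)) // 3
}
```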
For each bucket.Get() a new http.Client is created (https://github.com/crowdmob/goamz/blob/94ebb8df2d498469ee950fbc4e2ab3c901dea007/s3/s3.go#L1023). Also, the request is explicitly told to close after finishing (https://github.com/crowdmob/goamz/blob/94ebb8df2d498469ee950fbc4e2ab3c901dea007/s3/s3.go#L1004). So each request requires a new TCP connection, which creates overhead as well as leaving many sockets in a TIME_WAIT state if you're making a lot of connections.
I've "hacked in a fix" where the http.Client is only created once for each Bucket object, and req.Close is set to false. This fixes the issue and connections are reused properly. However, I'm not totally sure how to fix this in the context of this library; specifically, the timeouts would not be fixed.
This may apply to all S3 requests, but I'm currently only interested in downloading lots of data fast. :) My use case is downloading small (10-50k) files from S3 to an EC2 instance. At rates above 200 qps I get lots of sockets in the TIME_WAIT state and I can't create new connections until they close out. This fix let me max out at 400 qps (up from 300), with the code now CPU bound, presumably in the logic of processing the data.
How / where to add in support to deal with proxies (uploading to S3)?
I am trying to use (t *Table) AddItem to update an item.
I create a new table and create a slice of attributes.
With this table and []Attribute, if I call PutItem it works fine,
but if I call AddItem (which I believe is intended to update the record instead of overwriting it), it returns (true, nil) but does not actually update the record. When I print out the response it gets from Amazon, it is:
{"__type":"com.amazon.coral.validate#ValidationException","message":"One or more parameter values were invalid: Action ADD is not supported for the type S"}
To be clear, I am not trying to add anything, just update an item.
Any suggestions?
Following
http://docs.aws.amazon.com/IAM/latest/APIReference/API_GetUser.html
The UserName argument in iam.GetUser() is optional.
In the case where the UserName is omitted, the response returns information about the user making the request. This is incredibly helpful when running code on an instance with an IAM role.
Currently you can only specify Value, but I would like to be able to set up a DNS for ELB as seen here: http://docs.aws.amazon.com/Route53/latest/APIReference/CreateAliasRRSAPI.html
I first noticed this in the sqs package, though it seems to be an issue in other packages as well. When using an SQS struct in a long-running process while relying on IAM roles to provide authentication credentials, the sqs module will eventually fail to authenticate.
It seems the original idea is that when calling sqs.sign, the aws.Auth credentials should be refreshed by the call to aws.Auth.Token(). However, because the auth struct is not passed by reference, the calling function keeps the outdated credentials. In addition, the Token() call comes after the secret key is placed in the HTTP request headers (https://github.com/crowdmob/goamz/blob/master/sqs/sign.go#L15). Combined, these problems cause long-running processes to fail after sufficient uptime.
I have made a temporary patch in github.com/crxpandion/goamz that addresses the immediate problem of outdated credentials by fixing the latter issue, but it does not fully solve the problem of the calling function's auth struct keeping outdated credentials.
Do you have ideas for solving this? The easiest solution seems to be to pass the auth struct to the sqs.sign function by reference, but I can understand how it might not be ideal for the function to have side effects.
I am getting a failure in the TestSign unit test, and I'm also getting invalid signature responses when trying to connect to DynamoDB. Does this test also fail for anyone else, or is it something wrong with my machine?
I'm using go version go1.1 windows/amd64
go test github.com/crowdmob/goamz/dynamodb
--- FAIL: TestSign (0.00 seconds)
sign_test.go:60: Authorization Does Not Match
sign_test.go:61: Expected: AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20110909/us-east-1/host/aws4_request, SignedHeaders=date;host, Signature=3576498fabe29305d8fbe7ebae518bd911549c9fb124b935f64366d72c1f7983
sign_test.go:62: Actual: AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20110909/us-east-1/host/aws4_request, SignedHeaders=date;host, Signature=128cf614b88075981751a44ad3d5fa709310d49961d50e399464365d7bfe5264
FAIL
FAIL github.com/crowdmob/goamz/dynamodb 0.039s
@nabeken, your test TestAddWriteRequestItems is failing, and it is hard for me to tell what is wrong. Can I ask you to please update your test so that it is clearer, e.g. with individual assertions?
https://github.com/crowdmob/goamz/blob/master/dynamodb/query_builder_test.go#L32
I appreciate your time and contribution very much.
Or, if you know what is wrong and what broke the test, that would be very useful too.
The return value of DescribeTable contains all zero values for a valid table.
Package net/http/httptest provides a test http.Server that is closeable. It provides the same behavior as the current implementation of test servers (for instance ec2/ec2test or s3/s3test) with a cleaner interface:
http://golang.org/pkg/net/http/httptest/#Server
I'm opening this issue to see if there's interest before starting to work on a PR that would do this.
I've noticed occasional, long drops in transfer speed for large files. Sometimes it never recovers.
I'd like the ability to cancel the HTTP request at the transport level.
It is sloppy (and impractical) for me to make another request and start a new transfer without canceling, letting timeouts close the connection eventually. The read timeout is set very high because the files are large, so stalled connections may stick around for a long time. Restarting the process is not ideal because it's a daemon!
What I currently envision is a set of methods that produce *http.Request values that can be executed directly with an *http.Client. Then the user (or potentially the package itself) can make use of the HTTP transport's CancelRequest() method if it implements one, as *http.Transport does.
func (*Bucket) GetRequest(path string) (*http.Request, error)
func (*Bucket) PutRequest(path string, length int64, contType string, perm ACL, options Options) (*http.Request, error)
Note the lack of a 'body' in the arguments of PutRequest().
edit: I will happily patch this if we can agree on an exported API.
For example, in dynamodb.go:98:
resp, err := http.DefaultClient.Do(hreq)
This leads to the following error when using it on the App Engine SDK:
ERROR 2014-09-24 15:02:16,635 http_runtime.py:281] bad runtime process port
The reason for this is the default client for HTTP is purposely broken on the App Engine SDK, as they require you to use a client from the urlfetch lib they provide. Hard-coded usage of the default Go HTTP client should be removed in favor of a solution that lets you specify the HTTP client to use.
I have objects in S3 with the header "Content-Encoding: gzip". For the application I'm building I would actually like to retrieve the gzipped data. But, the net/http Transport appears to automatically decompress gzip by default.
I haven't found a way to set the proper flag in the underlying http.Transport (or add the Content-Encoding 'manually') to disable automatic decompression. Is anybody aware of such a feature? If not, do people have good ideas about how to implement it? It seems like the S3 type controls the Transport, although a flag in the S3 struct seems a pretty wide scope through which to control automatic decompression.
I wrote an application that has to iterate through our entire bucket and update the Cache-Control metadata value. The problem is that PutCopy wiped the Content-Type and S3 decided that everything is now binary/octet-stream.
It appears that PutCopy is not allowing S3 to detect the Content-Type automatically or passing the original Content-Type with the PutCopy request.
for _, value := range Response.Contents {
	_, err := Bucket.PutCopy(value.Key, s3.PublicRead, opts, bucketName+"/"+value.Key)
	if err != nil {
		log.Panic(err.Error())
	}
}
Here are the opts values:
opts := s3.CopyOptions{}
opts.CacheControl = "public, max-age=0"
opts.MetadataDirective = "REPLACE"
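With MetadataDirective set to REPLACE, S3 replaces the object's metadata wholesale, so the original Content-Type has to be re-sent explicitly or it is lost. A self-contained sketch of carrying it over (the `copyOptions` struct only mirrors the shape of the snippet above; whether the library's CopyOptions exposes a ContentType field directly is an assumption):

```go
package main

import "fmt"

// copyOptions mirrors the shape of s3.CopyOptions for this sketch; with
// MetadataDirective "REPLACE", S3 discards the source object's metadata,
// so Content-Type must be carried over explicitly.
type copyOptions struct {
	CacheControl      string
	MetadataDirective string
	ContentType       string
}

// withPreservedType fills in the Content-Type of the source object, here
// passed in as srcType (e.g. obtained from a prior HEAD request).
func withPreservedType(srcType string) copyOptions {
	return copyOptions{
		CacheControl:      "public, max-age=0",
		MetadataDirective: "REPLACE",
		// Without this, S3 falls back to binary/octet-stream.
		ContentType: srcType,
	}
}

func main() {
	o := withPreservedType("image/png")
	fmt.Println(o.ContentType) // image/png
}
```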
Attempting to Get() an object whose key includes a "+" character results in a 404 error (if I have List permission on the bucket, that is; otherwise, I get a 403). It looks like partiallyEscapedPath should be escaping "+" as "%2B" but isn't.
I can't find any specific mention of this rule in the AWS documentation except where it talks about request signing (see also issue #243).
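The narrow rule at issue can be sketched as a tiny helper. This only covers the "+" case reported here (a complete fix inside partiallyEscapedPath would need to handle the other reserved characters too):

```go
package main

import (
	"fmt"
	"strings"
)

// escapePlus encodes "+" in an S3 object key as %2B; sent raw, S3 decodes
// "+" as a space and looks up the wrong key.
func escapePlus(path string) string {
	return strings.Replace(path, "+", "%2B", -1)
}

func main() {
	fmt.Println(escapePlus("test/test+plus.now")) // test/test%2Bplus.now
}
```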
Here is a test program:
package main

import (
	"fmt"
	"log"

	"github.com/crowdmob/goamz/aws"
	"github.com/crowdmob/goamz/s3"
)

func AWS() aws.Auth {
	auth, err := aws.EnvAuth()
	if err != nil {
		log.Fatal(err)
	}
	return auth
}

// Region returns the region for use by the aws package
func Region() aws.Region {
	rgn, ok := aws.Regions["us-east-1"]
	if !ok {
		log.Fatal("Unknown region")
	}
	return rgn
}

func main() {
	s3 := s3.New(AWS(), Region())
	b := s3.Bucket("rscheme-docker")
	data, err := b.Get("test/test+plus.now")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Data is: %q\n", data)
}
which produces the following output (with s3.go's debug flag set to true):
2014/11/06 14:27:56 Signature payload: "GET\n\n\nThu, 06 Nov 2014 20:27:56 UTC\n/rscheme-docker/test/test+plus.now"
2014/11/06 14:27:56 Signature: "<redacted>"
2014/11/06 14:27:56 Running S3 request: &s3.request{method:"GET", bucket:"rscheme-docker", path:"/rscheme-docker/test/test+plus.now", params:url.Values{}, headers:http.Header{"Host":[]string{"s3.amazonaws.com"}, "Date":[]string{"Thu, 06 Nov 2014 20:27:56 UTC"}, "Authorization":[]string{"AWS <redacted>"}}, baseurl:"https://s3.amazonaws.com", payload:?reflect.Value?, prepared:true}
2014/11/06 14:27:57 } -> HTTP/1.1 404 Not Found
Connection: close
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Thu, 06 Nov 2014 20:27:57 GMT
Server: AmazonS3
X-Amz-Id-2: <redacted>
X-Amz-Request-Id: E886DA21B6EA0191
115
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>test/test plus.now</Key><RequestId>E886DA21B6EA0191</RequestId><HostId>DAO074akQn94Raa/Q4fqCKelHlisazuIXVTCujiFCDgkjPiOvR7SWxBLL2ZQ3FLt</HostId></Error>
0
2014/11/06 14:27:57 got error (status code 404)
2014/11/06 14:27:57 data:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>test/test plus.now</Key><RequestId>E886DA21B6EA0191</RequestId><HostId>DAO074akQn94Raa/Q4fqCKelHlisazuIXVTCujiFCDgkjPiOvR7SWxBLL2ZQ3FLt</HostId></Error>
2014/11/06 14:27:57 err: s3.Error{StatusCode:404, Code:"NoSuchKey", Message:"The specified key does not exist.", BucketName:"", RequestId:"E886DA21B6EA0191", HostId:"DAO074akQn94Raa/Q4fqCKelHlisazuIXVTCujiFCDgkjPiOvR7SWxBLL2ZQ3FLt"}
2014/11/06 14:27:57 The specified key does not exist.
Hi @crowdmatt ,
Pull requests Part 1-4 are supposed to go together; I didn't know how to make them into one pull request.
Since my personal repo and the main repo (crowdmob/goamz) are drifting apart, I have no easy way to create a patch or anything like that.
Anyway, this is meant to fix the issue with expiration of temporary tokens, and all the tests should pass (they pass on my repo).
Ali
How willing are you to accept aggressive changes to the dynamodb library that change the exposed interface? Obviously changing or removing exported types and functions would impact anyone currently using this package, but the current state of this library feels pretty incomplete and like it should probably be in the /exp/ directory at the moment.
Before I make major changes to the library and submit pull requests, how willing are you to break from the current interface, assuming that the new interface provides the correct functionality in a clearer and more concise way?
Go reuses connections within the scope of an http.Client. The recent addition of timeouts broke this by creating a new client for every request.
Hi @crowdmatt ,
Any plans to automatically renew the temporary security token (Auth.Token) when it expires?
If yes, that is great.
If no, I can think of two ways to deal with this:
1- a timer that fires right at expiration and gets new credentials (of course, there is a chance that the machine time and AWS time are not exactly in sync, so this method may need some hysteresis)
2- hiding the token and implementing something like func (a *Auth) SecurityToken() string that checks whether the token is [nearly] expired and, if so, updates it
Let me know if the answer is no, and which solution you think works best.
Ali
I am using the GetAuth function to get IAM instance credentials, by calling GetAuth("", "", "", time.Time{}). Sometimes the call to http://169.254.169.254/latest/meta-data/iam/security-credentials/ takes a long time to return (>10s in some cases). This causes the http client in the GetMetaData function to time out and return an error. GetAuth then falls back to looking for the credentials file, fails to find it, and returns the error: No valid AWS authentication found: open /home/ec2-user/.aws/credentials: no such file or directory.
This error doesn't make sense in the case where I am expecting to get IAM instance credentials. One possible solution would be to make the getInstanceCredentials function public, to allow client packages to explicitly choose that method of authentication.
The code in https://github.com/crowdmob/goamz/blob/master/cloudwatch/cloudwatch.go#L323-L325 prevents a zero value (=0.0) from being submitted. This should be possible, because a custom metric can legitimately be zero and needs to be transmitted to prevent "INSUFFICIENT_DATA" in CloudWatch.
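One way to fix this is to distinguish "unset" from "0.0" with a pointer instead of a `!= 0` check. The struct below is a sketch, not the library's actual MetricDatum type:

```go
package main

import (
	"fmt"
	"strconv"
)

// metricDatum uses *float64 so "unset" and "0.0" are distinguishable; this
// is a sketch, not the library's actual MetricDatum type.
type metricDatum struct {
	Name  string
	Value *float64
}

// buildParams includes Value based on presence, not on a != 0 comparison,
// so a genuine zero still reaches CloudWatch.
func buildParams(m metricDatum) map[string]string {
	p := map[string]string{"MetricData.member.1.MetricName": m.Name}
	if m.Value != nil {
		p["MetricData.member.1.Value"] = strconv.FormatFloat(*m.Value, 'f', -1, 64)
	}
	return p
}

func main() {
	zero := 0.0
	p := buildParams(metricDatum{Name: "queue_depth", Value: &zero})
	fmt.Println(p["MetricData.member.1.Value"]) // 0
}
```

Switching an exported field from float64 to *float64 is a breaking change, so a separate Set flag on the struct would be an alternative design.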
It seems the ListResp struct is missing some fields, including NextMarker, which would allow fetching more records. I.e., if you have a bucket with, say, 10k records, you'll only get up to a maximum of 1000 records, with no way to continue getting the rest.
This is because there is no such method. You can't use QueryTable because it requires the JSON response to have an Items key (a DynamoDB count call doesn't return one). E.g., if you try some code like this:
q := dynamodb.NewQuery(table)
q.AddIndex("myIndex")
q.AddKeyConditions([]dynamodb.AttributeComparison{
	*dynamodb.NewEqualStringAttributeComparison(AttributeStatus, status),
})
q.AddSelect("COUNT")
res, _, err := t.QueryTable(q)
log.Println(err)
You'll get an error like this:
Unexpected response {"Count":207,"LastEvaluatedKey":{"Id":{"S":"a2f0983e-9e2a-4d67-ac88-8fde867cc250"},"Status":{"S":"Complete"}},"ScannedCount":207}
(i.e. no Items key exists)
I'd like to add a new method similar to https://github.com/crowdmob/goamz/blob/master/dynamodb/query.go#L38
func (t *Table) CountQueryOnIndex(attributeComparisons []AttributeComparison, indexName string) (int64, error) {
// ...
}
Also, I'll move the bulk of the CountQuery method into a helper func to reduce duplication.
Would you accept a PR with that?