goamz / goamz
Amazon AWS Library for Go
License: Other
The latest Amazon API (API Version 2014-10-01) supports paging:
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstances.html
Hello, earlier I was using "launchpad.net/goamz/s3", but for my new project I am using "github.com/goamz/goamz/s3".
There is a change in the Put method of Bucket: it now takes one more parameter, "options".
region := aws.USEast2
connection := s3.New(AWSAuth, region)
bucket := connection.Bucket("XXXXX") // change this to your bucket name
path := "mypath" // the target file and location in S3

// Save image to S3
err = bucket.Put(path, user_content, content_type, s3.ACL("public-read"), options)
Above is my code. Can you help me understand what is expected in options and how I can get that value?
I'm actually in the process of adding this now for my project's needs but thought I should file an issue primarily for awareness. If anyone does happen to have a branch where they've started working on this please do let me know, maybe we can more efficiently combine efforts. Otherwise it probably won't take me a very long time to implement it independently.
Has anyone begun working on this functionality in the IAM client? We're considering starting to implement it.
API Documentation: http://docs.aws.amazon.com/IAM/latest/APIReference/API_ListServerCertificates.html
const awsTimeFormat = "Mon, 2 Jan 2006 15:04:05 GMT"
Line 500 in c35091c
Should be Mon, 02 Jan 2006 15:04:05 GMT
I have verified using a fresh object created on real S3 today and got this header:
Last-Modified: Mon, 01 May 2017 02:04:39 GMT
Why would I use goamz in preference to the official SDK for Go: aws-sdk-go?
It would be good to add to the README a brief explanation for newcomers. Thanks.
I want to use GoAWS (https://github.com/p4tin/GoAws) for testing purposes and need to specify the SQS endpoint. So far I see no way to do it, and I would like to contribute. But first of all, what are the maintainers' opinions on that? Is there any reason or blocker for not having a customisable endpoint? What would be the best way to introduce this feature without breaking the API?
Thanks in advance.
Trying to work with Backblaze storage, I'm getting:
The V2 signature authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256
It seems goamz still does not support v4 (#118), but I can see a workaround discussion in cloudflare/complainer#13. Can anybody use goamz with Backblaze?
PS: this is my test code
package main

import (
	"log"

	"github.com/mitchellh/goamz/aws"
	"github.com/mitchellh/goamz/s3"
)

func main() {
	auth, err := aws.EnvAuth()
	if err != nil {
		log.Fatal(err)
	}
	client := s3.New(auth, aws.USEast)
	client.S3Endpoint = "https://s3.us-west-002.backblazeb2.com"
	resp, err := client.ListBuckets()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%T %+v", resp.Buckets[0], resp.Buckets[0])
}
I can add signing logic from here https://github.com/bmizerany/aws4/blob/master/sign.go
Hi,
While using "PUT" to upload objects to an AWS bucket, how can I set custom metadata on the object? And how can I then retrieve the list of objects in that bucket along with the metadata information I set previously?
Thanks,
Madhukar
I tried downloading logs from my /logs in S3 - overall it was 419 files.
Here are times:
goamz-s3: 1m55.5454449s
python boto: 35.65361350977789s
It is a simple script; here is the Go code:
package main

import (
	"bufio"
	"fmt"
	"os"
	"time"

	"gopkg.in/amz.v1/aws"
	"gopkg.in/amz.v1/s3"
)

func main() {
	time1 := time.Now()
	AWSAuth := aws.Auth{
		AccessKey: "",
		SecretKey: "",
	}
	region := aws.EUWest
	connection := s3.New(AWSAuth, region)
	bucket := connection.Bucket("my-bucket") // change this to your bucket name
	response, err := bucket.List("logs/access_log", "", "", 1000)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	for _, object := range response.Contents {
		downloadBytes, err := bucket.Get(object.Key)
		if err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		downloadFile, err := os.Create(object.Key)
		if err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		downloadBuffer := bufio.NewWriter(downloadFile)
		downloadBuffer.Write(downloadBytes)
		downloadBuffer.Flush() // flush buffered bytes before closing the file
		downloadFile.Close()
		fmt.Println(object.Key, time.Since(time1))
	}
}
Why is Go so slow compared to Python?
Global Secondary Indexes allow for alternate searches of the same data set.
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
In the PutMetricDataNamespace method of the CloudWatch plugin there is an issue with the parameter parsing: it will only add a Value parameter to the AWS POST if the Value is not 0. However, 0 is a perfectly valid value for a metric.
https://github.com/goamz/goamz/blob/master/cloudwatch/cloudwatch.go#L321
Use case:
The repository github.com/feyeleanor/chain appears to no longer exist.
Here is the dependency chain:
github.com/goamz/goamz/cloudwatch
-> github.com/feyeleanor/sets
-> github.com/feyeleanor/slices
-> github.com/feyeleanor/lists
-> github.com/feyeleanor/chain (missing)
Would it make sense to use a RetryingClient for the autoscaling API, as is being done for the ec2 and iam APIs? If so: I'm new to Golang and looking for a few small issues to work on, so I'd be happy to take a stab at it.
Currently the Exists() method considers 403 errors as non-existence of the object: https://github.com/goamz/goamz/blob/master/s3/s3.go#L295-L298. This has caused some issues for us, because AWS had some issues in S3 and they were returning 403s for objects that actually existed.
Is there any chance of getting everyone to contribute to a single project? Hashi and CrowdMob are maintaining their own versions each with different levels of completeness in different AWS services.. It's quite the cluster ATM.
There should be a context version of each Bucket method that makes an HTTP request.
For example:
func (b *Bucket) Del(path string) error
should be supplemented with:
func (b *Bucket) DelContext(ctx context.Context, path string) error
See the database/sql package for an example of this:
func (db *DB) Query(query string, args ...interface{}) (*Rows, error)
func (db *DB) QueryContext(ctx context.Context, query string, args ...interface{}) (*Rows, error)
Is it possible to set the policy or CORS config for an S3 bucket using goamz?
I tried doing a .Put with the path equal to /?policy and /?cors, but it looks like ultimately the requests go to /bucket-name?policy and /bucket-name?cors, so it instead creates files in the bucket with those names.
Maybe it's me doing something wrong, but with this code I get the results shown in the comments next to the commands. Why am I getting just 1k items?
list, err := bucket.List("", "", "", 20000000) // ok
fmt.Println(list.MaxKeys)       // 20000000
fmt.Println(list.IsTruncated)   // true
fmt.Println(len(list.Contents)) // 1000
It would be nice to use the attempt strategy for the Put methods (PutCopy, PutReader, PutReaderHeader).
The use of pointers in goamz/sqs is a bit inconsistent: e.g. there's a func (*SQS) DeleteMessage(*Message), but messages are handed to the user as []Message; Error has func (err *Error) Error(), but xmlErrors contains Error values, etc. sqs_test.go also had the downright mysterious []Message{*(&Message{…}), …} construct, where the pointer is immediately dereferenced.
I made pointer usage more consistent (see here), and while I was at it I cleaned up DeleteMessageBatch so it doesn't do completely unnecessary allocation of a map and slice. I can turn this into a PR if needed, and any comments are welcome.
@radar The SNS publish API contains a TargetArn parameter. Your current commit has removed that for some reason.
Docs: http://docs.aws.amazon.com/sns/latest/APIReference/API_Publish.html
Previous commit that fixed this bug: 6c1f307
Your current commit: https://github.com/goamz/goamz/blob/master/exp/sns/subscription.go#L17
Looking at the source for DeleteMessageBatch (https://github.com/goamz/goamz/blob/master/sqs/sqs.go#L443), it looks like the response only looks for elements named DeleteMessageBatchResultEntry.
Looking at the SQS API docs (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_DeleteMessageBatch.html), if a message of the batch fails to delete, it's returned as an element named BatchResultErrorEntry.
This means the fields in the struct DeleteMessageBatchResponse, aside from ID, are never populated, since a DeleteMessageBatchResultEntry only ever contains the ID (as stated in the SQS API).
Am I missing something here? This seems like a pretty big bug in DeleteMessageBatch.
I am trying to fetch temporary credentials using V4Signer, but it is giving me SignatureDoesNotMatch with the following message:
The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
The Canonical String for this request should have been
'GET
/
Action=AssumeRole&DurationSeconds=900&Policy=&RoleArn=&SignatureVersion=4&Timestamp=&Version=2011-06-15
date:2016-02-27+06%3A47%3A34.075675879+%2B0000+UTC
host:sts.ap-southeast-1.amazonaws.com
x-amz-date:20160227T064734Z
date;host;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
The String-to-Sign should have been
'AWS4-HMAC-SHA256
20160227T064734Z
20160227/ap-southeast-1/sts/aws4_request
0482faa42896241130f0cb72e0003d40a6a06a77c04e83cbdb6304008f95c46f'
And this is how I am generating the signature:
v4Signer := aws.NewV4Signer(aws.Auth{
	AccessKey: AWSAccessKey,
	SecretKey: AWSSecretKey,
}, "sts", aws.APSoutheast)

// generate timestamp
timeStamp := time.Now().UTC()
// get the timestamp into a string
urlTime := url.QueryEscape(timeStamp.String())
logger.D("url time : ", urlTime)

// construct an STS URL to hit
url := "https://sts.ap-southeast-1.amazonaws.com/" +
	"?Version=2011-06-15" +
	"&Policy=<POLICY>" +
	"&RoleArn=<ROLE>" +
	"&DurationSeconds=900" +
	"&SignatureVersion=4" +
	"&Timestamp=" + urlTime +
	"&Action=AssumeRole"

req, err := http.NewRequest("GET", url, nil)
if err != nil {
	logger.E("Error ", err)
}
req.Header.Add("Date", urlTime)

// Sign the request
v4Signer.Sign(req)
logger.D(req.URL.String())

resp, err := http.DefaultClient.Do(req)
if err != nil {
	logger.E("Error in do :: ", err)
}
defer resp.Body.Close()

// read the body content
contents, err := ioutil.ReadAll(resp.Body)
if err != nil {
	logger.E("Error ioutil ", err)
}
logger.D(string(contents)) // here I get the SignatureDoesNotMatch error
I have removed the long URL-encoded policy, role ARN & timestamp while pasting here; otherwise the request becomes unreadable.
I am trying to bulk-download files from an S3 bucket, but I keep hitting this error:
Get https://s3.amazonaws.com/bucker_name/E2LSFCX3P2DHJ3.2013-10-29-22.8dE9Scnj.gz: net/http: TLS handshake timeout
This does not happen for a smaller set of files, but only when multiple files are being downloaded in a bunch of goroutines.
I saw a discussion at https://github.com/crowdmob/goamz/issues/247 which highlights the same issue.
How can I resolve this error?
It seems that CreateHIT in the mturk API does not support QualificationRequirement.
I create a QualificationRequirement:
qual := &mturk.QualificationRequirement{
	QualificationTypeId: "00000000000000000071",
	Comparator:          "NotEqualTo",
	IntegerValue:        0,
	LocaleValue:         "CN",
	RequiredToPreview:   true,
}
Then I called CreateHIT with qual as the argument:
hit, err := turk.CreateHIT(
	"Crowdproxy", "Host a proxy and help defeat censorship!",
	question,
	reward,
	1800,         // assignmentDurationInSeconds
	3600,         // lifetimeInSeconds
	"javascript", // keywords
	1000,         // maxAssignments
	qual,         // qualificationRequirement
	"",           // requesterAnnotation
)
if err == nil {
	fmt.Println(hit)
}
Here is what the call returns:
&{{ False [{0 AWS.InvalidEnumeratedParameter The value "Illegal Text data found as child of: QualificationRequirement" you specified for enumeration is invalid. (1430347307855 s) }]} { } 0 0 0 0 { 0 false} <nil> 0 0 0 0}
Is there anything wrong with how I use the API? Or is it just not supported? Thanks.
GetQueueAttributes only takes a single argument and stores it as the value of an "AttributeName" parameter. According to the spec at http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueAttributes.html, it should take multiple arguments and store them as Attribute.1, Attribute.2, etc.
How about changing the method signature from
func (q *Queue) GetQueueAttributes(A string) (resp *GetQueueAttributesResponse, err error) {
to
func (q *Queue) GetQueueAttributes(attributes ...string) (resp *GetQueueAttributesResponse, err error) {
so it would not break existing code, but people could pass multiple attributes.
If a nil slice is passed to the method, the attribute to be returned should default to "All".
TestCreateQueueWithAttributes asserts the names/values for Attribute.1 and Attribute.2, but since they are added using a map and then range, the order in which they actually appear when checking the request against the expected values isn't deterministic.
This can manifest itself in failures like:
FAIL: sqs_test.go:53: S.TestCreateQueueWithAttributes
sqs_test.go:63:
// TestCreateQueue() tests the core functionality, just check the timeout in this test
c.Assert(req.Form["Attribute.1.Name"], gocheck.DeepEquals, []string{"ReceiveMessageWaitTimeSeconds"})
... obtained []string = []string{"VisibilityTimeout"}
... expected []string = []string{"ReceiveMessageWaitTimeSeconds"}
For me this test passes about 85% of the time (which I think is a Go runtime deficiency; it should fail ~50% of the time). This test should be changed to not depend on the hash ordering, so that it passes all the time.
I got this error on ubuntu with the go-stable package:
$ go get github.com/goamz/goamz/s3
../../src/github.com/goamz/goamz/s3/s3.go:731: function ends without a return statement
Possible actions: add a comment in the main README, and force it with a build tag inside one of the files.
It would be great to have an API to upload to S3 through a cloudfront distribution (just changing the endpoint doesn't work)
I consistently get the error "A header you provided implies functionality that is not implemented" back from S3 if I try to upload a zero length file to my S3 bucket. The content-length is set, so I'm not sure what's going wrong.
This is so that, when using a dependency manager, we can lock to a version and get predictable builds.
Otherwise we're stuck pinning a commit SHA or always pulling master.
For example, using glide, we can set it to only pull certain patch/minor versions based on our comfort level.
S3 has recently added versioning for files. From the API it is possible to get a specific version if you know the version ID.
Is it possible, using goamz, to get a specific version of a file, or at least get the version IDs from S3?
If not, will it be possible?
Thanks
I'm trying to run the following code and getting the following error:
Invalid UpdateExpression: An expression attribute value used in expression is not defined; attribute value: :game
Here is my query:
update := table.UpdateItem(&dd.Key{player.UserId, ""}).
	ReturnValues(dd.ALL_NEW).
	UpdateExpression("SET UserId = :game").
	ExpressionAttributes(*dd.NewStringAttribute(
		":game",
		"test",
	))
result, err := update.Execute()
Am I doing this right? I've tried this with a few different types.