leejo / aws-s3
Lightweight interface to Amazon S3 (Simple Storage Service)
I've been running into an issue while accessing an object that is zero bytes. Here is the error returned:
Attribute (size) does not pass the type constraint because: Validation failed for 'Int' with value undef at constructor AWS::S3::File::new (defined at /usr/local/share/perl/5.14.2/AWS/S3/File.pm line 195) line 106
AWS::S3::File::new('AWS::S3::File', 'bucket', 'AWS::S3::Bucket=HASH(0x3112b80)', 'key', 'cases/00398535/a0i30000001ClJcAAK/Status.html', 'size', undef, 'contenttype', 'application/octet-stream', 'etag', '"d41d8cd98f00b204e9800998ecf8427e"', 'lastmodified', 'Fri, 13 Mar 2015 01:06:04 GMT', 'is_encrypted', 0) called at /usr/local/share/perl/5.14.2/AWS/S3/Bucket.pm line 181
AWS::S3::Bucket::file('AWS::S3::Bucket=HASH(0x3112b80)', 'cases/00398535/a0i30000001ClJcAAK/Status.html') called at lib/s3.pm line 64
s3::getObj('s3=HASH(0x2ab3000)', 'cases/00398535/a0i30000001ClJcAAK/Status.html') called at obj-bak.pl line 473
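The trace shows `size => undef` reaching a required `Int` constraint. A minimal defensive sketch of the kind of fix (names illustrative, not the module's actual code): coerce a missing size to 0 before the value reaches the constructor.

```perl
use strict;
use warnings;

# What an S3 listing can hand back for a zero-byte object: no usable size.
my %s3_meta = (
    key  => 'Status.html',
    size => undef,
);

# Defaulting undef to 0 lets zero-byte objects construct cleanly instead
# of tripping the 'Int' type constraint.
$s3_meta{size} = defined $s3_meta{size} ? $s3_meta{size} : 0;
```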
https://metacpan.org/source/LEEJO/AWS-S3-0.16/lib/AWS/S3.pm#L94
This line makes location mandatory; however, it should be optional.
https://metacpan.org/source/LEEJO/AWS-S3-0.16/lib/AWS/S3/Request/CreateBucket.pm#L18
If a caller doesn't supply location, undef is passed to this attribute and fails the type check.
This problem was fixed before, so why has it come back?
#9
This parameter is not documented. While it shouldn't be required, it's passed to the request whether it was specified or not. So if I don't specify it, it ends up being passed as undef and failing the TC for the AWS::S3::Request::CreateBucket location attribute.
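A minimal workaround sketch for this kind of bug (names hypothetical): filter undefined values out of the argument hash before it reaches the request constructor, so an omitted location never hits the type constraint.

```perl
use strict;
use warnings;

# Arguments as a caller might assemble them; location was never specified.
my %args = (
    name     => 'my-bucket',
    location => undef,
);

# Drop every key whose value is undef so optional attributes stay unset
# instead of being passed an explicit undef.
my %clean = map { ( $_ => $args{$_} ) } grep { defined $args{$_} } keys %args;
```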
To properly support TLS URLs, the API will need to use the correct S3 endpoint for each bucket.
e.g.
http://bucket-name.s3.amazonaws.com/key
needs to become, e.g.
https://s3-eu-west-1.amazonaws.com/bucketname/key
The signature is not affected by this transformation, but the S3::Bucket code really ought to do the right thing here.
I may try to implement this.
https://forums.aws.amazon.com/ann.jspa?annID=6776
Amazon S3 currently supports two request URI styles in all regions: path-style
(also known as V1) that includes bucket name in the path of the URI (example:
//s3.amazonaws.com/<bucketname>/key), and virtual-hosted style (also known
as V2) which uses the bucket name as part of the domain name
(example: //<bucketname>.s3.amazonaws.com/key). In our effort to continuously improve
customer experience, the path-style naming convention is being retired in favor
of virtual-hosted style request format. Customers should update their applications
to use the virtual-hosted style request format when making S3 API requests before
September 30th, 2020 to avoid any service disruptions. Customers using the
AWS SDK can upgrade to the most recent version of the SDK to ensure their
applications are using the virtual-hosted style request format.
Virtual-hosted style requests are supported for all S3 endpoints in all AWS regions.
S3 will stop accepting requests made using the path-style request format in all regions
starting September 30th, 2020. Any requests using the path-style request format
made after this time will fail.
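The virtual-hosted format described in the announcement could be sketched roughly as follows. This is illustrative only, not the module's code, and the regional endpoint pattern is an assumption based on current AWS documentation.

```perl
use strict;
use warnings;

# Build a virtual-hosted-style HTTPS URL for a bucket/key pair.
# us-east-1 historically uses the bare s3.amazonaws.com endpoint;
# other regions embed the region name in the host.
sub virtual_hosted_url {
    my ( $bucket, $key, $region ) = @_;
    return defined $region && $region ne 'us-east-1'
        ? "https://$bucket.s3.$region.amazonaws.com/$key"
        : "https://$bucket.s3.amazonaws.com/$key";
}
```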
Hi
How can I check when add_file has finished?
$result = $bucket->add_file( ..... ) ;
Is there some $result->status?
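There doesn't appear to be a status object; a defensive pattern, assuming add_file() dies on failure and returns the new file object on success (that behaviour is an assumption, check the docs), is to wrap the call in eval and test the result. The bucket below is a mock stand-in just to make the pattern runnable; the real object comes from AWS::S3.

```perl
use strict;
use warnings;

# Mock stand-in for the real AWS::S3::Bucket, for illustration only.
package Mock::Bucket;
sub new      { bless {}, shift }
sub add_file {
    my ( $self, %a ) = @_;
    die "no key supplied\n" unless $a{key};
    return { %a };    # pretend this is the new file object
}

package main;
my $bucket = Mock::Bucket->new;
my $data   = 'hello';

# The pattern: trap a die, and treat a defined return value as success.
my $file = eval { $bucket->add_file( key => 'foo.txt', contents => \$data ) };
if ( !defined $file ) {
    warn "add_file failed: $@";
}
```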
I'm seeing the key part of the URL get URL-encoded, i.e.
path/to/file => path%2Fto%2Ffile
...which is breaking the URL.
We'll want to encode the signature, but not the path.
From the code here
Line 165 in 54a608d
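A sketch of the intended behaviour: percent-encode each path segment on its own, so the '/' separators survive while everything else still gets escaped. The escaper is hand-rolled here to stay dependency-free; real code would likely apply URI::Escape's uri_escape per segment.

```perl
use strict;
use warnings;

# Percent-encode everything outside the RFC 3986 unreserved set.
sub escape_segment {
    my ($seg) = @_;
    $seg =~ s/([^A-Za-z0-9\-_.~])/sprintf '%%%02X', ord $1/ge;
    return $seg;
}

my $key = 'path/to/file+name.txt';

# Split on '/', escape each piece, rejoin: the separators are preserved,
# but characters like '+' are still encoded.
my $escaped = join '/', map { escape_segment($_) } split m{/}, $key, -1;
```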
According to Amazon's documentation, S3 keys can be any UTF-8 string. The console uses '/' as a delimiter to organize keys in a tree view, so a key like '/xxxxxx' is not prohibited. But AWS::S3::Bucket->file() doesn't handle it properly.
https://github.com/leejo/AWS-S3/blob/master/lib/AWS/S3/Roles/Request.pm#L65
This line makes a leading '/' act as the delimiter between the host name and the path of the resulting URI, so the S3 server strips it. For example, with
$bucket->file('/foo/bar')
a key like '/foo/bar' is recognized as 'foo/bar' by the server.
One workaround is to use "s3.amazonaws.com/Bucket-Name" instead of "Bucket-Name.s3.amazonaws.com" in the URL.
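The failure mode can be demonstrated with plain string concatenation (a simplification of what the request code does):

```perl
use strict;
use warnings;

my $key = '/foo/bar';

# Concatenating the host and a key that starts with '/': the key's
# leading slash becomes the host/path delimiter, so the server sees
# the key as 'foo/bar' rather than '/foo/bar'.
my $url = 'http://bucket.s3.amazonaws.com' . $key;
```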
Hi,
I'm trying to implement the equivalent of this:
aws --no-sign-request s3 ls s3://commoncrawl/crawl-data/CC-MAIN-2021
It is a public AWS data set, so there is no authentication. I've tried to work around that, but I end up on the AWS front page. My script is:
#!/usr/bin/env perl
use AWS::S3;
{ no warnings;
use AWS::S3::Signer;
sub AWS::S3::Signer::auth_header { '' }
}
my $cc = AWS::S3->new;
my $data = $cc->bucket('commoncrawl/crawl-data');
my $ls = $data->files(page_size => 100, page_number => 1);
while(my @files = $ls->next_page)
{ print "Page number: ", $ls->page_number, "\n";
  print " ", $_->key, "\n" for @files;
}
Any idea?
Because "+" in a URL gets interpreted as a space. The fix is to URL-quote the "+" as "%2B". The characters "&" and "?" should be quoted as well.
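Both halves of the problem in a few lines: why a bare '+' goes wrong, and what quoting it looks like (the substitution here is a minimal sketch limited to the three characters mentioned):

```perl
use strict;
use warnings;

# Failure mode: form/query decoding turns a literal '+' into a space,
# so a key uploaded as 'my+file.txt' comes back as 'my file.txt'.
my $received = 'my+file.txt';
( my $decoded = $received ) =~ tr/+/ /;

# The fix: percent-encode '+', '&' and '?' before building the URL.
my $key = 'a+b&c?d';
( my $safe = $key ) =~ s/([+&?])/sprintf '%%%02X', ord $1/ge;
```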
I am trying to use the AWS::S3 module with Linode's S3-compatible object storage.
The following code gives me an error:
CreateBucket call had errors: [SignatureDoesNotMatch] at /home/gabor/perl5/lib/perl5/AWS/S3.pm line 108.
Using the same credentials in python script with boto3 works.
use strict;
use warnings;
use feature 'say';
use Dotenv;
use AWS::S3;
Dotenv->load;
my $url = "diggers.us-southeast-1.linodeobjects.com";
my $s3 = AWS::S3->new(
access_key_id => $ENV{AWS_ACCESS_KEY_ID},
secret_access_key => $ENV{AWS_SECRET_ACCESS_KEY},
endpoint => $url,
);
my $bucket = $s3->add_bucket(
name => 'qqrq',
);
If instead of add_bucket I call
say $s3->buckets;
it does not print anything, despite buckets already existing.
ps. I'd be happy to send you temporary credentials to try it if that helps.
On some of my smoker systems I see failures like this:
# Failed test 'endpoint was used'
# at t/aws/s3.t line 39.
# ':2: parser error : Space required after the Public Identifier
# <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
# ^
# :2: parser error : SystemLiteral " or ' expected
# <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
# ^
# :2: parser error : SYSTEM or PUBLIC, the URI is missing
# <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
# ^
# :11: parser error : Opening and ending tag mismatch: img line 11 and a
# tBeat Webhosting, www.netbeat.de - soeben freigeschaltete Domain" border="0"></a
# ^
# :12: parser error : Opening and ending tag mismatch: a line 11 and td
# </td>
# ^
# :14: parser error : Opening and ending tag mismatch: td line 10 and tr
# </tr>
# ^
# :15: parser error : Opening and ending tag mismatch: tr line 9 and table
# </table>
# ^
# :17: parser error : Opening and ending tag mismatch: table line 8 and body
# </body>
# ^
# :18: parser error : Opening and ending tag mismatch: body line 7 and html
# </html>
# ^
# :20: parser error : Premature end of data in tag html line 3
#
# ^
# '
# doesn't match '(?^:Can't connect to aws-s3-test-.*?bad\.hostname)'
# Looks like you failed 1 test of 11.
t/aws/s3.t ................
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/11 subtests
This is most probably caused by wildcard DNS records on this system. See tokuhirom/Furl#128 for a discussion of this problem.
A possible fix could look like this:
diff --git a/t/aws/s3.t b/t/aws/s3.t
index 1523287..8413e22 100644
--- a/t/aws/s3.t
+++ b/t/aws/s3.t
@@ -27,7 +27,7 @@ use_ok('AWS::S3');
my $s3 = AWS::S3->new(
access_key_id => $ENV{AWS_ACCESS_KEY_ID} // 'foo',
secret_access_key => $ENV{AWS_SECRET_ACCESS_KEY} // 'bar',
- endpoint => 'bad.hostname',
+ endpoint => 'bad.hostname.',
);
my $bucket_name = "aws-s3-test-" . int(rand() * 1_000_000) . '-' . time() . "-foo";
@@ -58,7 +58,7 @@ subtest 'create bucket strange temporary redirect' => sub {
# first PUT request, send a forward
is( $req->method, 'PUT', 'bucket creation with PUT request' );
- is( $req->uri->as_string, 'http://bar.bad.hostname/', '... and with correct URI' );
+ is( $req->uri->as_string, 'http://bar.bad.hostname./', '... and with correct URI' );
$i++;
return HTTP::Response->new(
@@ -81,7 +81,7 @@ subtest 'create bucket strange temporary redirect' => sub {
else {
# there is a call to ->bucket, which does ->buckets, which is empty.
is( $req->method, 'GET', '->buckets with GET' );
- is( $req->uri->as_string, 'http://bad.hostname/', '... and with correct URI' );
+ is( $req->uri->as_string, 'http://bad.hostname./', '... and with correct URI' );
# we need to return XML in the body or xpc doesn't work
return Mocked::HTTP::Response->new( 200,
The signed URLs returned by signed_url() are quite often invalid, leading to a SignatureDoesNotMatch
error on AWS. Does this always work on your end (and I'm just doing something stupid), or is it possible that the signing process is somehow flawed?
my $s3 = AWS::S3->new({
access_key_id => $access_key,
secret_access_key => $secret_key,
secure => 0,
});
my $bucket = $s3->add_bucket(
name => $bucket_name,
location => '',
);
my $file = $bucket->add_file(
key => $filename,
contents => \$data,
);
my $expiration_date = time() + 7 * 24 * 60 * 60;
my $url = $file->signed_url( $expiration_date );
$ curl -I "$url"
HTTP/1.1 403 Forbidden
Running the same code again, the URL will sometimes be correct.
Hello,
I am experiencing an issue with AWS Lambda and S3. When I use fixed IAM credentials within a Lambda function, I can successfully retrieve a file from S3.
However, when I switch to using the credentials provided in the Lambda execution environment, I receive an HTTP/1.1 403 Forbidden error.
I am wondering if there might be a way to add a header, specifically 'X-Amz-Security-Token', or if there's a specific parameter for the token that I may be missing.
I would greatly appreciate any assistance you could provide.
Best Regards