
aws-s3's People

Contributors

autarch, evancarroll, jdrago999, leejo, robin13, simbabque


aws-s3's Issues

AWS::S3::File Zero byte file

I've been running into an issue while accessing an object that is zero bytes. Here is the error returned:

Attribute (size) does not pass the type constraint because: Validation failed for 'Int' with value undef at constructor AWS::S3::File::new (defined at /usr/local/share/perl/5.14.2/AWS/S3/File.pm line 195) line 106
AWS::S3::File::new('AWS::S3::File', 'bucket', 'AWS::S3::Bucket=HASH(0x3112b80)', 'key', 'cases/00398535/a0i30000001ClJcAAK/Status.html', 'size', undef, 'contenttype', 'application/octet-stream', 'etag', '"d41d8cd98f00b204e9800998ecf8427e"', 'lastmodified', 'Fri, 13 Mar 2015 01:06:04 GMT', 'is_encrypted', 0) called at /usr/local/share/perl/5.14.2/AWS/S3/Bucket.pm line 181
AWS::S3::Bucket::file('AWS::S3::Bucket=HASH(0x3112b80)', 'cases/00398535/a0i30000001ClJcAAK/Status.html') called at lib/s3.pm line 64
s3::getObj('s3=HASH(0x2ab3000)', 'cases/00398535/a0i30000001ClJcAAK/Status.html') called at obj-bak.pl line 473
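
For reference, a minimal reproduction sketch (the bucket name and credentials are placeholders; the point is only that a zero-length object leads to size => undef):

use AWS::S3;

# Placeholder credentials and bucket, purely for illustration.
my $s3 = AWS::S3->new(
  access_key_id     => $ENV{AWS_ACCESS_KEY_ID},
  secret_access_key => $ENV{AWS_SECRET_ACCESS_KEY},
);
my $bucket = $s3->bucket('my-test-bucket');

# Upload an empty object, then fetch it back: size comes through as undef
# for the zero-byte object and the Int type constraint fails as above.
$bucket->add_file( key => 'empty.txt', contents => \'' );
my $file = $bucket->file('empty.txt');    # dies with the error shown above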

AWS::S3->add_bucket location param is undocumented and buggy

This parameter is not documented. While it shouldn't be required, it is passed with the request whether it was specified or not. So if I don't specify it, it ends up being passed as undef and fails the type constraint for the AWS::S3::Request::CreateBucket location attribute.
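
For reference, a sketch of a workaround: pass location explicitly so the attribute never receives undef (the region string here is only an example):

my $bucket = $s3->add_bucket(
  name     => 'my-bucket',    # illustrative bucket name
  location => 'us-east-1',    # explicit, so the type constraint never sees undef
);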

S3 will no longer support path-style API requests starting September 30th, 2020

https://forums.aws.amazon.com/ann.jspa?annID=6776

Amazon S3 currently supports two request URI styles in all regions: path-style (also known as V1), which includes the bucket name in the path of the URI (example: //s3.amazonaws.com/<bucketname>/key), and virtual-hosted style (also known as V2), which uses the bucket name as part of the domain name (example: //<bucketname>.s3.amazonaws.com/key). In our effort to continuously improve customer experience, the path-style naming convention is being retired in favor of the virtual-hosted style request format. Customers should update their applications to use the virtual-hosted style request format when making S3 API requests before September 30th, 2020 to avoid any service disruptions. Customers using the AWS SDK can upgrade to the most recent version of the SDK to ensure their applications are using the virtual-hosted style request format.

Virtual-hosted style requests are supported for all S3 endpoints in all AWS regions. S3 will stop accepting requests made using the path-style request format in all regions starting September 30th, 2020. Any requests using the path-style request format made after this time will fail.
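
For illustration, the difference between the two styles for the same object, spelled out as plain strings (bucket and key are made up):

my ($bucket, $key) = ('mybucket', 'path/to/key');

my $path_style    = "https://s3.amazonaws.com/$bucket/$key";    # "V1", being retired
my $virtual_style = "https://$bucket.s3.amazonaws.com/$key";    # "V2", required going forward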

over-encoding the path

I'm seeing the key part of the URL get URL-encoded, i.e.
path/to/file => path%2Fto%2Ffile

...which is breaking the URL.

We'll want to encode the signature, but not the path.
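
One possible way to encode a key without touching its path separators is to exclude '/' from the characters URI::Escape is allowed to escape; this is only a sketch of the idea, not necessarily how the module should implement it:

use URI::Escape qw(uri_escape);

my $key = 'path/to/file with spaces';
# Escape everything outside the unreserved set, but leave '/' intact so the
# key keeps its path structure in the request URI.
my $encoded = uri_escape($key, '^A-Za-z0-9\-\._~/');
# => 'path/to/file%20with%20spaces'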

AWS::S3::Bucket->file() doesn't handle keys with leading '/' correctly.

According to Amazon's documentation, S3 keys can be any UTF-8 string. The console merely uses '/' as a delimiter to present keys in a tree view, so a key like '/xxxxxx' is not prohibited. However, AWS::S3::Bucket->file() doesn't handle such keys properly.

https://github.com/leejo/AWS-S3/blob/master/lib/AWS/S3/Roles/Request.pm#L65
this line makes the leading '/' act as the delimiter between the host name and the path of the resulting URI, so it is effectively dropped on the S3 side. For example, with
$bucket->file('/foo/bar')
a key like '/foo/bar' is recognized as 'foo/bar' by the server.
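
A minimal illustration of the collapse (hypothetical join, not the module's actual code):

my $host = 'mybucket.s3.amazonaws.com';
my $key  = '/foo/bar';              # valid S3 key with a leading slash
my $url  = "https://$host$key";     # https://mybucket.s3.amazonaws.com/foo/bar
# The key's leading '/' has become the host/path separator, so the request
# actually addresses the key 'foo/bar' rather than '/foo/bar'.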

no-sign-request error on auth

Hi,

I am trying to implement the equivalent of this:

aws --no-sign-request s3 ls s3://commoncrawl/crawl-data/CC-MAIN-2021

It is a public AWS data set, so no authentication is needed. I tried to work around that, but end up on the AWS front page instead. My script is:

#!/usr/bin/env perl

use AWS::S3;

{ no warnings;
  use AWS::S3::Signer;
  # crude attempt to disable request signing for anonymous access
  sub AWS::S3::Signer::auth_header { '' }
}

my $cc   = AWS::S3->new;
my $data = $cc->bucket('commoncrawl/crawl-data');

my $ls = $data->files(page_size => 100, page_number => 1);
while (my @files = $ls->next_page) {
    print "Page number: ", $ls->page_number, "\n";
    print "  ", $_->key, "\n" for @files;
}

Any idea?

CreateBucket call had errors: [SignatureDoesNotMatch] while using S3 on Linode

I am trying to use the AWS::S3 module with Linode's S3-compatible object storage.

The following code gives me an error:

CreateBucket call had errors: [SignatureDoesNotMatch]  at /home/gabor/perl5/lib/perl5/AWS/S3.pm line 108.

Using the same credentials in a Python script with boto3 works.

use strict;
use warnings;
use feature 'say';

use Dotenv;
use AWS::S3;

Dotenv->load;

my $url = "diggers.us-southeast-1.linodeobjects.com";


my $s3 = AWS::S3->new(
  access_key_id     => $ENV{AWS_ACCESS_KEY_ID},
  secret_access_key => $ENV{AWS_SECRET_ACCESS_KEY},
  endpoint => $url,
);

my $bucket = $s3->add_bucket(
  name    => 'qqrq',
);

If instead of add_bucket I call say $s3->buckets; it does not print anything, despite the account already having buckets.

P.S. I'd be happy to send you temporary credentials to try it, if that helps.

"bad hostname" test may fail in presence of wildcard DNS records

On some of my smoker systems I see failures like this:

#   Failed test 'endpoint was used'
#   at t/aws/s3.t line 39.
#                   ':2: parser error : Space required after the Public Identifier
# <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
#                                                              ^
# :2: parser error : SystemLiteral " or ' expected
# <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
#                                                              ^
# :2: parser error : SYSTEM or PUBLIC, the URI is missing
# <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
#                                                              ^
# :11: parser error : Opening and ending tag mismatch: img line 11 and a
# tBeat Webhosting, www.netbeat.de - soeben freigeschaltete Domain" border="0"></a
#                                                                                 ^
# :12: parser error : Opening and ending tag mismatch: a line 11 and td
# 		</td>
# 		     ^
# :14: parser error : Opening and ending tag mismatch: td line 10 and tr
# 	</tr>	
# 	     ^
# :15: parser error : Opening and ending tag mismatch: tr line 9 and table
# </table>
#         ^
# :17: parser error : Opening and ending tag mismatch: table line 8 and body
# </body>
#        ^
# :18: parser error : Opening and ending tag mismatch: body line 7 and html
# </html>
#        ^
# :20: parser error : Premature end of data in tag html line 3
# 	
# 	^
# '
#     doesn't match '(?^:Can't connect to aws-s3-test-.*?bad\.hostname)'
# Looks like you failed 1 test of 11.
t/aws/s3.t ................ 
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/11 subtests 

This is most probably caused by wildcard DNS records on this system. See tokuhirom/Furl#128 for a discussion of this problem.

A possible fix could look like this:

diff --git a/t/aws/s3.t b/t/aws/s3.t
index 1523287..8413e22 100644
--- a/t/aws/s3.t
+++ b/t/aws/s3.t
@@ -27,7 +27,7 @@ use_ok('AWS::S3');
 my $s3 = AWS::S3->new(
   access_key_id     => $ENV{AWS_ACCESS_KEY_ID}     // 'foo',
   secret_access_key => $ENV{AWS_SECRET_ACCESS_KEY} // 'bar',
-  endpoint          => 'bad.hostname',
+  endpoint          => 'bad.hostname.',
 );
 
 my $bucket_name = "aws-s3-test-" . int(rand() * 1_000_000) . '-' . time() . "-foo";
@@ -58,7 +58,7 @@ subtest 'create bucket strange temporary redirect' => sub {
 
             # first PUT request, send a forward
             is( $req->method, 'PUT', 'bucket creation with PUT request' );
-            is( $req->uri->as_string, 'http://bar.bad.hostname/', '... and with correct URI' );
+            is( $req->uri->as_string, 'http://bar.bad.hostname./', '... and with correct URI' );
 
             $i++;
             return HTTP::Response->new(
@@ -81,7 +81,7 @@ subtest 'create bucket strange temporary redirect' => sub {
         else {
             # there is a call to ->bucket, which does ->buckets, which is empty.
             is( $req->method, 'GET', '->buckets with GET' );
-            is( $req->uri->as_string, 'http://bad.hostname/', '... and with correct URI' );
+            is( $req->uri->as_string, 'http://bad.hostname./', '... and with correct URI' );
 
             # we need to return XML in the body or xpc doesn't work
             return Mocked::HTTP::Response->new( 200,

AWS::S3::File->signed_url() sometimes returns invalid URL

The signed URLs returned by signed_url() are quite often invalid, leading to a SignatureDoesNotMatch error on AWS. Does this always work on your end (and maybe I'm just doing something stupid), or is it possible that the signing process is somehow flawed?

Code

my $s3 = AWS::S3->new({
  access_key_id       => $access_key,
  secret_access_key   => $secret_key,
  secure              => 0,
});

my $bucket = $s3->add_bucket(
  name      => $bucket_name,
  location  => '',
);

my $file = $bucket->add_file(
  key       => $filename,
  contents  => \$data,
);

my $expiration_date = time() + 7 * 24 * 60 * 60;
my $url = $file->signed_url( $expiration_date );

Check

$ curl -I "$url"
HTTP/1.1 403 Forbidden

Running the same code again, the URL is sometimes correct.

S3 with temporary credentials from lambda

Hello,

I am experiencing an issue with AWS Lambda and S3. When I use fixed IAM credentials within a Lambda function, I can successfully retrieve a file from S3.

However, when I switch to using the credentials provided in the Lambda execution environment, I receive an HTTP/1.1 403 Forbidden error.

I am wondering if there might be a way to add a header, specifically 'X-Amz-Security-Token', or if there's a specific parameter for the token that I may be missing.

I would greatly appreciate any assistance you could provide.

Best Regards
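
For what it's worth, the Lambda runtime exposes the temporary credentials as environment variables; below is a sketch of what support might look like, where session_token is a hypothetical constructor argument (it does not exist in the module today, which is exactly what this issue is asking for):

# Credentials provided by the Lambda execution environment itself.
my $access_key = $ENV{AWS_ACCESS_KEY_ID};
my $secret_key = $ENV{AWS_SECRET_ACCESS_KEY};
my $token      = $ENV{AWS_SESSION_TOKEN};   # would need to be sent as X-Amz-Security-Token

# Hypothetical usage -- session_token is NOT a real parameter yet:
my $s3 = AWS::S3->new(
  access_key_id     => $access_key,
  secret_access_key => $secret_key,
  session_token     => $token,              # hypothetical
);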
