
s3-bucket-listing's Introduction

Create nice directory listings for S3 buckets using only JavaScript and HTML.

The listing can be deployed on any site and can also be deployed into a bucket.

Inspiration from http://aws.amazon.com/code/Amazon-S3/1713

Usage

Copy the following lines into the HTML file where you want the listing to show up:

<div id="navigation"></div>
<div id="listing"></div>

<!-- add jQuery - if you already have it just ignore this line -->
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script>

<!-- the JS variables for the listing -->
<script type="text/javascript">
  // var S3BL_IGNORE_PATH = true;
  // var BUCKET_NAME = 'BUCKET';
  // var BUCKET_URL = 'https://BUCKET.s3-REGION.amazonaws.com';
  // var S3B_ROOT_DIR = 'SUBDIR_L1/SUBDIR_L2/';
  // var S3B_SORT = 'DEFAULT';
  // var S3B_STAT_DIRS = false;
  // var EXCLUDE_FILE = 'index.html';  // change to array to exclude multiple files
  // var AUTO_TITLE = true;
  // var S3_REGION = 's3'; // for us-east-1
</script>

<!-- the JS to do the listing -->
<script type="text/javascript" src="https://rufuspollock.github.io/s3-bucket-listing/list.js"></script>

We've provided an example index.html file you can just copy if you want.

How it works

The script downloads your XML bucket listing, parses it and simulates a webserver's text-based directory browsing mode.
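For a feel of what that involves, here is a minimal sketch of the request and parsing, assuming jQuery and a public, CORS-enabled bucket. This is illustrative only, not the actual list.js code:

// Illustrative sketch: fetch the bucket's XML listing and pull out keys
// and "directories" (CommonPrefixes). delimiter=/ makes S3 group keys
// by the next path segment, which is what simulates directories.
function listBucket(bucketUrl, prefix) {
  var url = bucketUrl + '/?delimiter=/&prefix=' + encodeURIComponent(prefix || '');
  $.get(url, function (xml) {
    var dirs = $(xml).find('CommonPrefixes Prefix').map(function () {
      return $(this).text();
    }).get();
    var files = $(xml).find('Contents Key').map(function () {
      return $(this).text();
    }).get();
    console.log('directories:', dirs, 'files:', files);
  });
}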

S3BL_IGNORE_PATH variable

Valid options = false (default) or true

Setting this to false will cause URL navigation to be in this form:

  • http://data.openspending.org/worldbank/cameroon/

You will have to put the HTML code in both your page HTML AND your 404 error document.

Setting this to true will cause URL navigation to be in this form:

  • http://data.openspending.org/index.html?prefix=worldbank/cameroon/
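To make the difference concrete, here is a simplified sketch of how a script could derive the S3 prefix in each mode (illustrative only; the real list.js logic differs in detail):

// Illustrative: where the prefix comes from in each mode.
function currentPrefix(ignorePath) {
  if (ignorePath) {
    // true: read the ?prefix= query parameter
    var match = /[?&]prefix=([^&]*)/.exec(location.search);
    return match ? decodeURIComponent(match[1]) : '';
  }
  // false: read the URL path itself
  return location.pathname.replace(/^\//, '');
}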

BUCKET_URL variable

Valid options = '' (default) or your bucket URL, e.g.

https://BUCKET.s3-REGION.amazonaws.com (both http & https are valid)

  • Do NOT put a trailing '/', e.g. https://BUCKET.s3-REGION.amazonaws.com/
  • Do NOT put S3 website URL, e.g. https://BUCKET.s3-website-REGION.amazonaws.com

This variable tells the script where your bucket XML listing is, and where the files are. If the variable is left empty, the script will use the same hostname as the index.html.
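For example, with a hypothetical bucket named my-bucket in eu-west-1:

var BUCKET_URL = 'https://my-bucket.s3-eu-west-1.amazonaws.com';   // correct
// var BUCKET_URL = 'https://my-bucket.s3-eu-west-1.amazonaws.com/';        // wrong: trailing '/'
// var BUCKET_URL = 'https://my-bucket.s3-website-eu-west-1.amazonaws.com'; // wrong: website endpoint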

BUCKET_NAME variable

Valid options = '' (default) or your bucket name, e.g.

BUCKET

This option is designed to support access to S3 buckets in non-website mode, via both path-style and virtual-host-style access URLs simultaneously, from the same index.html file.

NOTE: It is not recommended to use both BUCKET_URL and BUCKET_NAME in the same index.html file.

See the Amazon Documentation for details on the different URL access formats.

The tables below attempt to highlight how BUCKET_NAME affects configuration and use cases.

Without using BUCKET_NAME:

Configuration                                                        Result
bucket_url is undefined; access URL is virtual-host-based            Success
bucket_url is undefined; access URL is path-based                    Error (OK, expected)
bucket_url is virtual-host-based; access URL is virtual-host-based   Success
bucket_url is virtual-host-based; access URL is path-based           Error (Fail)
bucket_url is path-based; access URL is virtual-host-based           Error (Fail)
bucket_url is path-based; access URL is path-based                   Success

Using BUCKET_NAME to address the two failing configurations from above:

Configuration                                                        Result
bucket_name is set; access URL is virtual-host-based                 Success
bucket_name is set; access URL is path-based                         Success
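To see why setting BUCKET_NAME fixes both failing cases, consider this sketch (a hypothetical helper, not the actual list.js code): once the bucket name is known, the script can detect which access style the page was loaded through and build a matching listing URL.

// Illustrative: derive a listing URL that works for either access style.
function listingUrl(bucketName) {
  if (location.hostname.indexOf(bucketName) === 0) {
    // virtual-host-style: http://BUCKET.s3.amazonaws.com/index.html
    return location.protocol + '//' + location.hostname;
  }
  // path-style: http://s3.amazonaws.com/BUCKET/index.html
  return location.protocol + '//' + location.hostname + '/' + bucketName;
}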

S3B_ROOT_DIR variable

Valid options = '' (default) or 'SUBDIR_L1/' or 'SUBDIR_L1/SUBDIR_L2/' or etc.

  • Do NOT put a leading '/', e.g. '/SUBDIR_L1/'
  • Do NOT omit the trailing '/', e.g. 'SUBDIR_L1'

This will disallow navigation shallower than your set directory.

Note that this only disallows navigation to shallower directories, but NOT access. Any person with knowledge of the existence of bucket XML listings will be able to manually access those files.

Use Amazon S3 permissions to set granular file permissions.

BUCKET_WEBSITE_URL variable

This variable is optional. It allows you to modify the host used for link hrefs. You may want to use this if you have a custom domain name for your S3 bucket, or if you want to leverage things like "virtual files" (like 301 redirects).

Normally your links will point to <BUCKET_URL>/<KEY>. If specified, your links will point to <BUCKET_WEBSITE_URL>/<KEY> (but the list API calls will still use the configured BUCKET_URL).
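For example (hypothetical custom domain):

var BUCKET_URL = 'https://my-bucket.s3-eu-west-1.amazonaws.com'; // used for the XML list API calls
var BUCKET_WEBSITE_URL = 'https://downloads.example.com';        // used for the generated link hrefs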

S3B_SORT variable

This will sort your bucket listing. The options should be self-explanatory; a usage example follows the list.

Valid options:

  • OLD2NEW
  • NEW2OLD
  • A2Z
  • Z2A
  • BIG2SMALL
  • SMALL2BIG
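For example, to list the newest files first:

var S3B_SORT = 'NEW2OLD';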

S3B_STAT_DIRS variable

This will obtain last modified information for directories at the cost of an additional request made per directory. Variable is a boolean.

EXCLUDE_FILE variable

This variable is optional. It allows you to exclude a file (e.g. index.html) or a list of files from the file listings.

AUTO_TITLE variable

This variable is optional. It allows you to automatically set the title.

S3_REGION variable

This variable is optional. It allows you to specify the S3 region that the bucket is in, so that the BUCKET_URL and BUCKET_WEBSITE_URL variables can be configured automatically.

The 'us-east-1' region is unique: it requires this variable to be set to 's3'. Buckets in other regions should have it set to 's3-' plus their region name (e.g. 's3-eu-west-1').

E.g. setting S3_REGION to 's3' for a bucket named 'www.example.com' in the us-east-1 region would automatically set:

BUCKET_URL = 'http://www.example.com.s3.amazonaws.com'
BUCKET_WEBSITE_URL = 'http://www.example.com'
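In other words, the derivation is roughly (a simplified sketch of the behaviour described above):

// S3_REGION is 's3' for us-east-1, or 's3-' + the region name otherwise.
BUCKET_URL = 'http://' + BUCKET_NAME + '.' + S3_REGION + '.amazonaws.com';
BUCKET_WEBSITE_URL = 'http://' + BUCKET_NAME;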

Four Valid Configurations

  1. Embed into your website
  2. Use Amazon S3 in website mode with URL navigation
  3. Use Amazon S3 in website mode with prefix mode (ignore_path mode)
  4. Use Amazon S3 in non-website mode

1. Embed into your website

Mandatory settings:

      var S3BL_IGNORE_PATH = true;
      var BUCKET_URL = 'https://BUCKET.s3-REGION.amazonaws.com';

Copy the code into whatever file you want to act as your listing page.

2. Use Amazon S3 in website mode with URL navigation

Mandatory settings:

      var S3BL_IGNORE_PATH = false;
      var BUCKET_URL = 'https://BUCKET.s3-REGION.amazonaws.com';
  • Enable website hosting under Static website hosting in your S3 bucket settings.
  • Under Permissions grant Everyone the List and View permissions.
  • Under Permissions, go to Edit CORS Configuration and add the configuration listed in the section 'S3 website bucket permissions' below.
  • Enter index.html as your Index Document and Error Document.
  • Put index.html in your bucket.
  • Navigate to http://BUCKET.s3-website-REGION.amazonaws.com to access the script.

The -website- in the URL is important, as the non-website URL is what serves your XML Bucket List.

http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html#WebsiteRestEndpointDiff

A specific example for the EU west region:

  • Website endpoint: http://example-bucket.s3-website-eu-west-1.amazonaws.com/
  • S3 bucket endpoint (for RESTful calls): http://example-bucket.s3-eu-west-1.amazonaws.com/

Note that US east region is different in that the S3 bucket endpoint does not include a location spec but the website version does:

  • Website endpoint: http://example-bucket.s3-website-us-east-1.amazonaws.com/
  • S3 bucket endpoint (for RESTful calls): http://example-bucket.s3.amazonaws.com/

3. Use Amazon S3 in website mode with prefix mode (ignore_path mode)

Mandatory settings:

      var S3BL_IGNORE_PATH = true;
      var BUCKET_URL = 'https://BUCKET.s3-REGION.amazonaws.com';
  • Enable website hosting under Static website hosting in your S3 bucket settings.
  • Enter index.html as your Index Document (Error Document is not required).
  • Put index.html in your bucket.
  • Navigate to http://BUCKET.s3-website-REGION.amazonaws.com to access the script.

4. Use Amazon S3 in non-website mode

Mandatory settings:

      var S3BL_IGNORE_PATH = true;
      var BUCKET_NAME = 'BUCKET';

S3 website bucket permissions

You must set up the S3 website bucket to allow public read access.

  • Grant Everyone the List and View permissions.
  • Alternatively you can assign the following bucket policy if policies are your thing:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::{your-bucket-name}/*"
        }
    ]
}
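If you prefer the command line, a policy like the one above can be applied with the AWS CLI (bucket and file names are placeholders; save the JSON as policy.json first):

aws s3api put-bucket-policy --bucket your-bucket-name --policy file://policy.json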

If you want to allow read/download-only access to a specific set of IP addresses, you can block all public access and assign a bucket policy like the one below. Note that the ListBucket permission is necessary, as it allows client access to the bucket XML, which our index.html JavaScript operates from to generate the listing. See this AWS article for more information on other policy conditionals.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSpecificIPsOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::{your-bucket-name}/*",
                "arn:aws:s3:::{your-bucket-name}"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "12.34.56.78/24",
                    ]
                }
            }
        }
    ]
}
  • Assign the following CORS policy:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
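The same rules can be applied from the command line. Note that the s3api call expects the rules wrapped in a CORSRules key (bucket and file names are placeholders):

# cors.json contains: {"CORSRules": [ ...the rules shown above... ]}
aws s3api put-bucket-cors --bucket your-bucket-name --cors-configuration file://cors.json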

Enabling HTTPS

You MUST use config 1 or 4. Amazon S3 doesn't support HTTPS in website mode.

Use https for your BUCKET_URL.

For config 4, navigate to your index.html's full path using https, e.g. https://BUCKET.s3-REGION.amazonaws.com/index.html

To stop browser warnings about displaying insecure content in secure mode, serve everything over https. With config 4, you will then be utilising AmazonAWS' wildcard SSL certificate (unfortunately it is SHA1 only).

S3 bucket HTTPS-only permissions (i.e. deny HTTP access)

This is only possible for config 1 or 4.

Set the following bucket policy:

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "HTTPSOnly",
			"Effect": "Deny",
			"Principal": "*",
			"Action": "s3:*",
			"Resource": "arn:aws:s3:::{your-bucket-name}/*",
			"Condition": {
				"Bool": {
					"aws:SecureTransport": false
				}
			}
		},
		{
			"Sid": "AllowPublicRead",
			"Effect": "Allow",
			"Principal": "*",
			"Action": "s3:GetObject",
			"Resource": "arn:aws:s3:::{your-bucket-name}/*"
		},
		{
			"Sid": "AllowPublicList",
			"Effect": "Allow",
			"Principal": "*",
			"Action": "s3:ListBucket",
			"Resource": "arn:aws:s3:::{your-bucket-name}"
		}
	]
}

Copyright and License

Copyright 2012-2016 Rufus Pollock.

Licensed under the MIT license:

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


s3-bucket-listing's Issues

S3B_SORT wants unquoted values?

var S3B_SORT = 'A2Z'; does not produce the intended sort.

var S3B_SORT = A2Z; produces the intended sort, but trips Uncaught ReferenceError: A2Z is not defined.

Exclude files support

Partially implemented in #64 but would be nice to generalise to a list of files i.e. EXCLUDE_FILES and have it be either a single string or an array of strings (or regexes?).

Throws Error

http://test.healthwrights.org.s3-us-west-2.amazonaws.com/list.html

I was trying to set this up on an existing bucket with no success. So to simplify, I created a new bucket without website mode and without a subdomain, in us-west-2 (as opposed to east, which is the special case); link above.

Still, your JS keeps throwing the error alert, which does not indicate the type of error; neither does the inspection console. This bucket has your policy and CORS copy-pasted into its permissions.

Even if I add the endpoint as the bucket URL variable, same error:
http://test.healthwrights.org.s3-us-west-2.amazonaws.com/list2.html

If I use your bucket URL launched from my bucket, there is no error:
http://test.healthwrights.org.s3-us-west-2.amazonaws.com/openspending.html

blank list

Hi

I've spent some time on this and it keeps returning nothing.

The URL of the bucket is https://s3greenzone.s3.eu-west-2.amazonaws.com/ . As you can see, there are files showing if you go directly to the URL; it even works without adding the zone: https://s3greenzone.s3.amazonaws.com/

I have added the HTML file here:
https://www.fuse2.net/greezone/index.html
I have changed all parameters to try different scenarios, but nothing is working.

Any ideas what I am doing wrong?

Current config is below

<!DOCTYPE html>
<html>
<head>
  <title>S3 Bucket Listing Generator</title>
</head>
<body>
  <div id="navigation"></div>
  <div id="listing"></div>

<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script>
<script type="text/javascript">
  // var S3BL_IGNORE_PATH = true;
  // var BUCKET_NAME = '';
  // var BUCKET_URL = 'https://s3greenzone.s3.amazonaws.com';
  // var S3B_ROOT_DIR = 'SUBDIR_L1/SUBDIR_L2/';
  // var S3B_SORT = 'NEW2OLD';
  // var EXCLUDE_FILE = 'index.html';  // change to array to exclude multiple files
  // var AUTO_TITLE = true;
  // var S3_REGION = 's3-eu-west-2'; // for us-east-1
</script>
<script type="text/javascript" src="https://rawgit.com/rufuspollock/s3-bucket-listing/gh-pages/list.js"></script>
</body>
</html>

CORS and permissions have been set as per your documentation

Used to work but now shows no directory listing of contents or buckets

Hi,

I installed this last Christmas, and it has worked great. The client recently used it, but it now doesn't show a directory listing.

Here's the site: http://www.halltech.com.au/

I went and reinstalled with the most recent files. Still nothing. I checked the S3 settings; they all seem correct, nothing changed. I checked the Issues list and went back over the configuration. Nothing.

Advice would be greatly appreciated.

Thanks

Chris

Config 2 failed with 'Error: [object Object]'

I tried config 2 (Use Amazon S3 in website mode with URL navigation) but it failed with the error Error: [object Object].

Bucket: dl.khadas.com

My index.html is:

<!DOCTYPE html>
<html>
<head>
  <title>S3 Bucket Listing Generator</title>
</head>
<body>
  <div id="navigation"></div>
  <div id="listing"></div>

<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script>
<script type="text/javascript">
 var S3BL_IGNORE_PATH = false;
 var BUCKET_URL = 'https://dl.khadas.com.s3-us-east-1.amazonaws.com';
</script>
<script type="text/javascript" src="https://rawgit.com/rufuspollock/s3-bucket-listing/gh-pages/list.js"></script>
</body>
</html>

Bucket: Everyone has the List and View permissions.

Policies:

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::dl.khadas.com/*"
        }
    ]
}

CORS policy:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>

Do you have any suggestions about this?

Thanks.

encodeURIcomponent call on hrefs collapses directory structures into filenames

The encodeURIComponent call added to item.href for files can cause the names of downloaded files to include a dash-separated full directory path. This can be seen on the example site, http://data.openspending.org/worldbank/cameroon/. Select the Yaounde2.csv file. The href for this file is http://data.openspending.org/worldbank%2Fcameroon%2FYaounde2.csv. When selected, the file does not download as Yaounde2.csv. Instead, it downloads as worldbank-cameroon-yaounde2.csv.

This seems like a potential unintended consequence of escaping other special characters. Perhaps this was meant to be encodeURI() instead, since item.Key is used as a URL and not a param value in this case.
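The difference is easy to demonstrate:

encodeURIComponent('worldbank/cameroon/Yaounde2.csv');
// => "worldbank%2Fcameroon%2FYaounde2.csv"   (slashes escaped, path collapsed)
encodeURI('worldbank/cameroon/Yaounde2.csv');
// => "worldbank/cameroon/Yaounde2.csv"       (slashes preserved)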

Doesn't seem to work if bucket name contains fullstops

Hello,

I've had this code working on a bucket with a name like "sophia-test". However, when I made the bucket "test.sophiaexamplebucket.com" and tried to implement the code, the directory appears blank (of course I edited the code according to the instructions)

(note, the bucket names I've used here are purely for examples)

I have seen examples of people using custom URLs/names with fullstops before, so I'm not sure why mine wouldn't be working

Thanks

EDIT: I made an example to show what I'm talking about:

http://sophia.test.bucket.s3-website-eu-west-1.amazonaws.com/

<html>
<head>
</head>
<body>
  <div id="listing"></div>
<script>
    var BUCKET_URL = document.location.href.replace(/\/[^\/]+$/, '');
</script>

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
<script src="http://rgrp.github.io/s3-bucket-listing/list.js"></script>
</body>
</html>

(I've tried variations of bucketurl/bucketname, and kept them close to the examples. I either end up with Error: [object object], or a blank directory)

sort orders are computer-friendly, user-antagonistic

With A2Z, all upper-case names currently come before all lower-case ones; i.e., A comes before Z comes before a comes before z; i.e., sorting is apparently by code-point order.

Mixed-case alphabetical order is more typically human (i.e., A is treated as identical to a, and to à, and to Å, etc., for sorting). Any way this can be enabled?

A related but probably very different challenge -- all "files" are listed before all "folders". Any way to have these intermingled when sorting by name/date?

Page blank

Well, I tried a lot of things, following all the steps, and the page is still blank.

Is there some known issue concerning this?

Not listing directory names

FYI, the bucket I'm hosting it on is set to serve static websites.

I manually configured the bucketUrl instead of having it use the static website location:

var bucketUrl = 'http://combat.camera.s3.amazonaws.com';

I set the bucket permissions to list + view permissions.

I set the CORS permissions as specified in the instructions.

I set the bucket policy according to the instructions.

It seems to display files fine but it's showing nothing for directories. You can see my testing bucket here.

Only list certain file extensions

I have a static site on S3, and want to list only the .zip files, and not my index.html and 404.html files. Is there currently a way to do that? Or maybe there's an alternative way, like putting all .zip files in a folder, then setting the path for listing files to root/folder/

404 Returns on Internet Explorer/Edge

Hi,
Using BUCKET_URL with S3BL_IGNORE_PATH set to false, I can navigate to a CNAME'd URL and have it load the initial listing from index.html. However, all links return 404s. I have the bucket's website-hosting error document set to index.html as well, but it's not working. Chrome and Firefox work properly.

Has anyone seen or debugged this issue?

Listing is "../"

Hi Rufus,
First, thanks a lot for the snippet!

I just cannot seem to get it to display a list of the contents of my S3 bucket.
RTFM-ed, but still no luck!

My settings:

[screenshot: static website hosting settings]
[screenshot: permissions]
[screenshot: HTML that calls list.js]

For the BUCKET_URL value, I edited the S3 endpoint as suggested in the Readme.md.

When I enter the BUCKET_URL value in the address bar, I get a full XML listing of the contents.
The bucket policy and CORS settings are pasted from your site.

When I refresh my browser, this is what I get:

[screenshot: output]

Any pointers on how to diagnose the problem?

Thanks very much

Error: [object Object]

I have an S3 bucket in website mode with URL navigation.
I think I followed your readme closely but this is what comes up.

Error: [object Object]

Here is the website endpoint (you can check the actual source from your browser).
http://images.healthwrights.org.s3-website-us-east-1.amazonaws.com/

It is in east-1 and I followed the correct rest api url for that you indicated. (no trailing /)

typo somewhere, or a setup issue or a bug?


<div id="listing"></div>

<!-- add jquery - if you already have it just ignore this line -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>

<!-- the JS variables for the listing -->
<script type="text/javascript">
  var S3BL_IGNORE_PATH = false;
  // var BUCKET_NAME = 'images.healthwrights.org';
  var BUCKET_URL = 'https://images.healthwrights.org.s3.amazonaws.com';
  // var S3B_ROOT_DIR = 'SUBDIR_L1/SUBDIR_L2/';
</script>

<!-- the JS to the do the listing -->
<script src="https://rgrp.github.io/s3-bucket-listing/list.js"></script>

BUCKET_URL

In list.js, the function "createS3QueryUrl" (lines 32-37) makes a reference to BUCKET_URL; however, there is no other reference to it. Where should we set the variable, as outlined in the readme:

Configuring the Bucket to List

By default, the script will attempt to guess the bucket based on the url you
are trying to access. However, you can configure it to point at any s3 bucket
by setting the BUCKET_URL javascript variable, e.g.:

var BUCKET_URL = 'https://s3-eu-west-1.amazonaws.com/data.openspending.org/';

Pulling resource for publicly write-able S3 bucket

In list.js, you are pulling a gif from assets.okfn.org/images/icons/ajaxload-circle.gif. However, assets.okfn.org is an S3 bucket that's misconfigured to be publicly writable.

Because of this, an attacker can replace the ajaxload-circle.gif with something obscene.

Consider using an image that is hosted more securely.

1000 file limit

Need to extend tool to allow for more than 1000 files.

To get more than 1000 objects, you must make multiple requests using the Marker parameter to tell S3 where you left off for each request.
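A sketch of such a loop against the V1 list API (illustrative only, not part of list.js; assumes jQuery and a public, CORS-enabled bucket):

// Page through a listing 1000 keys at a time, using the last key seen
// as the next marker.
function listAllKeys(bucketUrl, marker, keys) {
  keys = keys || [];
  var url = bucketUrl + '/' + (marker ? '?marker=' + encodeURIComponent(marker) : '');
  $.get(url, function (xml) {
    $(xml).find('Contents Key').each(function () { keys.push($(this).text()); });
    if ($(xml).find('IsTruncated').first().text() === 'true') {
      listAllKeys(bucketUrl, keys[keys.length - 1], keys); // resume after the last key
    } else {
      console.log('fetched ' + keys.length + ' keys');
    }
  });
}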

allocation size overflow

Unfortunately, I can't get it working, most likely because the bucket's content is too big. It actually only contains 5 top-level directories and branches out from there, but I assume the JS tries to load it all at once.

So on Firefox 60.6.1esr, loading stops with "allocation size overflow":

getS3Data/<
i
fireWith
A
c/<

Trying the same with Chromium 73.0.3683.75, the script continuously (unstoppably) requests ?delimiter=/ and retrieves the same truncated first 1000 entries.

Extra '/' in path of the link

Description

For a couple of weeks now, your index.html (http://spawncamping-dds-snapshots.s3-website-us-east-1.amazonaws.com/) has not been writing the links to the bucket files correctly.

The problem is that it puts an additional '/' in front of the file name, resulting in malformed links. This is only a problem if the BUCKET_URL contains a trailing '/'.

Example

var BUCKET_URL = 'https://spawncamping-dds-snapshots.s3.amazonaws.com/';

yields

https://spawncamping-dds-snapshots.s3.amazonaws.com//spawncamping-dds-2.0.0-SNAPSHOT_2.11.jar

but should be

https://spawncamping-dds-snapshots.s3.amazonaws.com/spawncamping-dds-2.0.0-SNAPSHOT_2.11.jar
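A defensive normalisation on the script side would sidestep this (illustrative sketch):

// Strip any trailing slashes from the configured bucket URL before
// joining it with keys:
BUCKET_URL = BUCKET_URL.replace(/\/+$/, '');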

Listing of S3 bucket content does not work when using cloudFront

I am trying to use this code for listing S3 bucket content. It works perfectly when the S3 content is published directly. However, in my case S3 is not configured for direct public access and is instead controlled via a CloudFront Origin Access Identity.

How can I use this when CloudFront sits in front of S3?

Tests

We should get some simple tests for this!

Error: [object Object]

I want to list everything from https://astralcultural.s3.amazonaws.com/capas/marketplace

But all I got is this error message:
Error: [object Object]

<title>S3 Bucket Listing Generator</title>
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script>
<script type="text/javascript">
  var S3BL_IGNORE_PATH = true;
  var BUCKET_NAME = 'astralcultural.com.br';
  var BUCKET_URL = 'https://astralcultural.s3.amazonaws.com';
  var S3B_ROOT_DIR = 'capas/marketplace/';
  var S3B_SORT = 'DEFAULT';
  var EXCLUDE_FILE = 'index.html';  // change to array to exclude multiple files
  var AUTO_TITLE = true;
  var S3_REGION = 's3'; // for us-east-1
</script>
<script type="text/javascript" src="https://rawgit.com/rufuspollock/s3-bucket-listing/gh-pages/list.js"></script>

[Question] Google Storage Bucket

Are there any plans to do the same with a Google Storage bucket, or can someone help me change it to work with one?

AWS Bucket Policy Seems to be incorrect as of latest version

I tried the original policy in the script, and it kept suggesting my bucket was an invalid resource. I replaced it with this, which seems to work:

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::bucket/*"]
        }
    ]
}

Page appears blank

When loading a bucket listing that was previously known to work, the page appears blank.

[screenshot: blank page]

This is because the JS refuses to load:

[screenshot: script blocked in the developer console]

This seems to be because Safari is trying to load the content over plaintext:

[screenshot: Safari mixed-content warning]

Even though it is definitely an HTTPS link in the source tag:

[screenshot: script tag with an https src]

Note that Chrome seems to complain about a similar issue:

[screenshot: Chrome mixed-content warning]

This seems to be because GitHub is issuing a 301 redirect to a non-SSL URI:

curl -vvv https://rgrp.github.io/s3-bucket-listing/list.js
*   Trying 23.235.44.133...
* Connected to rgrp.github.io (23.235.44.133) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: www.github.com
* Server certificate: DigiCert SHA2 High Assurance Server CA
* Server certificate: DigiCert High Assurance EV Root CA
> GET /s3-bucket-listing/list.js HTTP/1.1
> Host: rgrp.github.io
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: GitHub.com
< Content-Type: text/html
< Location: http://dev.rufuspollock.org/s3-bucket-listing/list.js
< Content-Length: 178
< Accept-Ranges: bytes
< Date: Thu, 01 Oct 2015 23:21:07 GMT
< Via: 1.1 varnish
< Age: 0
< Connection: keep-alive
< X-Served-By: cache-dfw1833-DFW
< X-Cache: MISS
< X-Cache-Hits: 0
< X-Timer: S1443741667.238523,VS0,VE37
< Vary: Accept-Encoding
<
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>

I am working around by hosting list.js myself. That seems to work fine.

Known config: OSX 10.11 (15A284), Safari 9.0 (11601.1.56), Chrome 45.0.2454.101 (64-bit)

I assume this is actually GitHub's bug, but I don't think they have an actual bugtracker :-/

[feature request] Organise list by last modified

Hi,

First of all, thank you for such a great project. For my specific implementation it would be greatly useful if the contents could be organised by "last modified" (newest file on top) instead of alphabetical order. Could this be achieved?

Help to deploy on S3

I'm trying to deploy an index.html file to an S3 bucket, to list the files in the bucket.
I added the following to the index file:

<div id="listing"></div>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
<script>
  var BUCKET_URL='http://s3.amazonaws.com/downloads.osv.io/';
  var S3BL_IGNORE_PATH = true;
</script>
<script src="http://rgrp.github.io/s3-bucket-listing/list.js"></script>

and upload the file.
If I use

var BUCKET_URL='http://s3.amazonaws.com/downloads.osv.io/';

I'm getting a "No 'Access-Control-Allow-Origin' header is present on the requested resource" error from the browser, as the site URL is different from the bucket URL.

If I try

var BUCKET_URL='downloads.osv.io.s3-website-us-east-1.amazonaws.com'

Nothing is presented

What am I missing?

Thanks

do not link to root prefix with S3B_ROOT_DIR enabled

My configuration is

S3B_ROOT_DIR = 'dl/'

So the tool should not generate the first navigation hyperlink (to prefix=), which points to an empty directory with no link back to dl/:

<div id="navigation"><a href="?prefix=">https://www.example.com</a> / <a href="?prefix=dl/">dl</a> / <a href="?prefix=dl//"></a></div>

(This is the breadcrumb navigation above the file listing)

BTW, thanks for the great tool :)

Calling from raw.github.com ends in sadness.

Refused to execute script from 'https://raw.github.com/rgrp/s3-bucket-listing/master/list.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.

curl -IL https://raw.github.com/rgrp/s3-bucket-listing/master/list.js
HTTP/1.1 301 Moved Permanently
Date: Wed, 26 Nov 2014 21:41:46 GMT
Server: Apache
Location: https://raw.githubusercontent.com/rgrp/s3-bucket-listing/master/list.js
Accept-Ranges: bytes
Via: 1.1 varnish
Age: 0
X-Served-By: cache-jfk1020-JFK
X-Cache: MISS
X-Cache-Hits: 0
Vary: Accept-Encoding

HTTP/1.1 200 OK
Date: Wed, 26 Nov 2014 21:41:46 GMT
Server: Apache
Access-Control-Allow-Origin: https://render.githubusercontent.com
Content-Security-Policy: default-src 'none'
X-XSS-Protection: 1; mode=block
X-Frame-Options: deny
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000
ETag: "74a82fddbf446312e22c8266875f1b5749a2b64c"
Content-Type: text/plain; charset=utf-8
Cache-Control: max-age=300
Content-Length: 1425
Accept-Ranges: bytes
Via: 1.1 varnish
X-Served-By: cache-jfk1034-JFK
X-Cache: MISS
X-Cache-Hits: 0
Vary: Authorization,Accept-Encoding
Expires: Wed, 26 Nov 2014 21:46:46 GMT
Source-Age: 0

Docs and examples should be updated to use:

<script src="http://rgrp.github.io/s3-bucket-listing/list.js"></script>

how to open s3 bucket files in another tab

Hi,
I want to open my S3 files in another tab. I have given target="_blank", but this is not working for me.
Please help me open my files in a new tab on click.
Thanks in advance

collapse folders

Is there any way to collapse the folders the way GitHub now does? If a folder only contains another folder, make it a single link so I don't have to click through 5 nested directories.

Chrome issue: Some directory links go to about:blank yet the directories are not empty

I am using var S3BL_IGNORE_PATH = false;

Using 'true' works except for the fact that users have to click on index.html to see the content, so that is a non-starter.

I have a directory where all the subdirectory links go straight to "about:blank" on Chrome. Works fine on Safari.

Note: All values in the 'size' column are zero.

The directories are in fact not empty. I have another tree in this bucket where the directories render correctly. Besides their names and content there is no difference between them that I am aware of. For example, they all have an index.html.

Last Modified    Size    Key

                         ../
                 0       2018-03-09T01:04:51Z/
                 0       2018-03-09T01:06:43Z/
                 0       2018-03-09T05:58:22Z/
                 0       2018-03-09T18:11:41Z/
