
aws-js-s3-explorer's Issues

Remember credentials

Whenever I refresh the page (or restart the browser), I need to input the login/authentication details again.

Is it possible to remember the credentials (locally), either via cookie or localStorage (preferred)?
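In case it helps, here is a minimal sketch of the localStorage approach (the function names and storage key are hypothetical, not part of the app). Note the trade-off: anything in localStorage is readable by any script running on the page, so persisting a secret key there is a security compromise.

// Hypothetical sketch: persist and restore the entered credentials.
// WARNING: localStorage is readable by any script on the page.
function saveCredentials(accessKeyId, secretAccessKey) {
    localStorage.setItem('s3explorer-credentials', JSON.stringify({
        accessKeyId: accessKeyId,
        secretAccessKey: secretAccessKey
    }));
}

function loadCredentials() {
    var stored = localStorage.getItem('s3explorer-credentials');
    return stored ? JSON.parse(stored) : null;
}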

Feature request: Expiring URL creation for bucket objects

The ability to interact with the bucket contents once listed, and to select individual or multiple files for the creation of an expiring signed URL for providing downloads, would be really useful (if you're open to feature requests).
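For what it's worth, the AWS SDK for JavaScript that this project already loads exposes getSignedUrl, so a minimal sketch could look like this (bucket name and key are placeholders):

// Generate a pre-signed GET URL that expires after one hour.
var s3 = new AWS.S3();
var url = s3.getSignedUrl('getObject', {
    Bucket: 'myBucket',      // placeholder bucket name
    Key: 'path/to/file.zip', // placeholder object key
    Expires: 3600            // lifetime of the URL, in seconds
});
console.log(url);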

Feature Request: Ship as js bundle via package registry

I'd love to have a bundle that exposes a JS function I can call to mount the app within my own context. This would enable easy style changes and automated deployment of new versions.

Just to give you an example of what I am imagining:

<html>
  <head>
    <script src="node_modules/aws-js-s3-explorer/dist/index.js"></script>
  </head>
  <body>
    <div id="main"></div>
    <script type="text/javascript">
      mountAwsJsS3Explorer(document.getElementById('main'))
    </script>
  </body>
</html>

[v2-alpha] Folders drag-n-drop is broken

Hi guys,

I'm afraid the folder drag-and-drop feature has been broken by the last commits...

When dropping the following file structure onto the app:

Test
├── aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
│   └── jquery-ui-1.12.1.custom.zip
├── jquery-ui-1.12.1.customaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.zip
└── Test2
    └── jquery-ui-1.12.1.custom.zip

I previously saw this:

[Screenshot from 2019-08-03 13-09-03]

But now I see this:

[Screenshot from 2019-08-03 13-10-23]

I would suggest reverting the last commit until this is fixed.

Downloading files from S3 bucket results in modified filenames

Hello,
I am browsing the NOAA GOES-16 data archive on AWS: https://noaa-goes16.s3.amazonaws.com/index.html
I have noticed that if I click a file to download it, the filename of the downloaded file doesn't match the filename in the S3 bucket.

For example, if I load the above webpage and then click '2019' followed by '001', then '00' I will get a list of files. If I then click the topmost file it will begin downloading, and will be saved with this filename:
ABI-L1b-RadF_2019_001_00_OR_ABI-L1b-RadF-M3C01_G16_s20190010000364_e20190010011131_c20190010011177.nc
But that filename doesn't follow the correct convention for GOES files, it should have this name:
OR_ABI-L1b-RadF-M3C01_G16_s20190010000364_e20190010011131_c20190010011177.nc
i.e. the ABI-L1b-RadF_2019_001_00_ at the front of the filename should not be there. This only happens when downloading files via a web browser; downloading via the AWS CLI, Python, or any other tool works fine. I mentioned this to @zflamig via email and he says that the code behind the S3 explorer is prepending the directory structure to the filename.

Would it be possible to add a workaround for this? It's causing some issues for GOES data users, as the filenames don't match what the various processing / image display packages expect.
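If the cause is as described, one possible workaround (a sketch, not the project's actual code) would be to derive the download attribute of the link from the object's basename instead of the full key:

// Sketch: strip the prefix so only the object's basename is used as the
// suggested filename ('anchor' stands in for the download link element).
function downloadName(key) {
    // 'ABI-L1b-RadF/2019/001/00/OR_ABI-...nc' -> 'OR_ABI-...nc'
    return key.split('/').pop();
}
anchor.setAttribute('download', downloadName(objectKey));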

Thanks!
Simon

Hosting the JS app outside of AWS

The documentation extensively discusses how to deploy this JS app to an AWS bucket in order to manage other AWS buckets.

I'm interested in integrating this app into my existing application that is hosted outside of AWS.

This is a supported use case, correct? Are there any problems I'll run into by hosting the files of aws-js-s3-explorer on a "traditional" webserver?

Thank you!

Specify prefix in URL

Hi,

I just discovered this project, and so far it works like a charm and meets almost every need I have.

I was wondering if it would be possible to specify the prefix directly in the URL?
I saw this PR but can't get it to work (maybe specific for

Here is an example of a URL I tried (the bucket is not public):
https://bucket-test-src.s3.amazonaws.com/index.html#specific-folder

It only works when I specify it in the connection modal.
Am I missing something?

I am currently using the v2-alpha version.
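In case a maintainer picks this up, a minimal sketch of reading an initial prefix from the URL fragment might look like the following (the config.prefix field is an assumption about the app's internals; the linked PR may do this differently):

// Read an initial prefix from the URL fragment, e.g. index.html#specific-folder
var hashPrefix = decodeURIComponent(window.location.hash.replace(/^#/, ''));
if (hashPrefix) {
    // Ensure a trailing slash so it behaves like a folder prefix.
    config.prefix = hashPrefix.charAt(hashPrefix.length - 1) === '/' ? hashPrefix : hashPrefix + '/';
}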

Change Search label

This is a very simple and perhaps awkward question, I know, but I cannot figure it out and am stuck. How can I change the "Search" and "Show ... entries" labels to a different language? Where are they located?

[feature request] download folder

I would like to download a folder (as a zip or otherwise). I believe that for folders of a reasonable size this can be done client-side using something like JSZip, as sketched below.
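A rough sketch of how that could work with JSZip and the AWS SDK, assuming keys is the list of object keys under the folder (the function and its inputs are illustrative):

// Fetch every object under the folder and pack them into one zip blob.
function downloadFolderAsZip(s3, bucket, keys) {
    var zip = new JSZip();
    var fetches = keys.map(function (key) {
        return s3.getObject({ Bucket: bucket, Key: key }).promise()
            .then(function (data) {
                zip.file(key, data.Body); // keep the key as the path inside the zip
            });
    });
    return Promise.all(fetches).then(function () {
        return zip.generateAsync({ type: 'blob' });
    });
}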

timestamp for files over last modified

Hello!

This project is awesome and looks great, but I was wondering if there could be either a new column or a toggle between last modified and timestamp. The reason I ask is that we would be using this to pull call recordings, and we would need to compare timestamps to ensure we are pulling the correct calls, etc.

Thanks!

Config Via URL

It would be useful to be able to specify the configuration (except keys, of course) in the URL, for bookmarking purposes.

One example implementation:

Given the following config string: {'bucket':'BUCKETNAME','mfa':'false','region':'us-east-2'}

Could be base64-encoded to: eydidWNrZXQnOidCVUNLRVROQU1FJywnbWZhJzonZmFsc2UnLCdyZWdpb24nOid1cy1lYXN0LTInfQ==

And put on the URL:

bucketview.example.com/#eydidWNrZXQnOidCVUNLRVROQU1FJywnbWZhJzonZmFsc2UnLCdyZWdpb24nOid1cy1lYXN0LTInfQ==

Which would then preload the config on page load.
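A sketch of the decoding side (note that JSON.parse requires double-quoted strict JSON, so the config string above would need double quotes; preloadConfig is a hypothetical hook into the app):

// Decode a base64 JSON config from the URL fragment on page load.
var fragment = window.location.hash.slice(1);
if (fragment) {
    try {
        var config = JSON.parse(atob(fragment));
        // e.g. { "bucket": "BUCKETNAME", "mfa": false, "region": "us-east-2" }
        preloadConfig(config); // hypothetical: fill the connection dialog
    } catch (e) {
        console.warn('Ignoring malformed config fragment', e);
    }
}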

Hide error after timeout or success

Currently the errors seem to stay on the screen indefinitely. It would be great to have them hide after a given time, or after another successful action.
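A minimal sketch of the auto-dismiss behaviour (the element id is made up for illustration):

// Show an error, then hide it automatically after 10 seconds.
function showError(message) {
    var alert = document.getElementById('error-alert'); // hypothetical element
    alert.textContent = message;
    alert.style.display = 'block';
    setTimeout(function () { alert.style.display = 'none'; }, 10000);
}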

Default setting needed

When the tool starts, the bucket dialog appears. Suppose I don't want to show that dialog and just want to load the data from S3. Is there a default setting for this, so I can set the defaults once and have the data loaded automatically?

Question about file uploads

Hi @john-aws

I have a question about this line:
if (fileii.type || fileii.size % 4096 !== 0 || fileii.size > 1048576)

Could you explain logic behind the code please?

I have found a rather weird file on a Windows desktop with no content type and a size of exactly 49152 bytes, which does not satisfy the condition above and hence cannot be uploaded.

I'm not sure whether this is something we should fix or not...
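My reading of the condition (an interpretation, not an authoritative answer) is that it is a heuristic for filtering out directories dropped via drag-and-drop, since directory entries typically report an empty MIME type and a size that is a multiple of the filesystem block size:

if (fileii.type                 // has a MIME type => almost certainly a real file
    || fileii.size % 4096 !== 0 // size not a multiple of 4096 => not a directory entry
    || fileii.size > 1048576) { // larger than 1 MB => assume a real file
    // treated as a real file and uploaded
}

Your file fails all three tests: it has no content type, 49152 = 12 × 4096, and 49152 bytes is under 1 MB, which would explain why it is skipped.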

Screenshots:

[Screenshot_1]

[Screenshot_2]

Modernize JS, add linting

This is a placeholder to make other contributors aware of pending changes to the v2-alpha branch:

  1. We plan to modernize the JavaScript to take advantage of ES6 features.
  2. We will introduce linting via ESLint, Airbnb style.

We expect to merge these changes sometime in the next two weeks.

Is it possible to use a custom domain?

For a normal static website I can use a custom domain like www.mysite.com to access the website, but for aws-js-s3-explorer I can only use https://s3.amazonaws.com/my-BUCKET/index.html to access the bucket. Is it possible to use a custom domain like www.mysite.com to access the bucket?

Thanks.

search option doesn't do a nested search

Hi team,

We really like the simplicity of the tool, but we have terabytes of data in our bucket, and it's a great pain to search through it, especially when you have files with keys that represent directories within directories.

I am unable to search for those kinds of files. Adding that feature would be immensely helpful.

Feature Request: Display Images

The current implementation is good!

Say a folder object within the bucket contains images. Can you display the images directly on the site when you click the specified folder? Or perhaps show a smaller version of the image, which you click to download or view in full?

Feature Request: Last modified for folders

I really like this project!

One thing I'd like to see, though, is "last modified" timestamps for folders. I know this is a problem because the API doesn't give us more than a name for folders; there would have to be some fetching and filtering of the files within a folder, finding the most recent timestamp and displaying that, as sketched below.
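A hedged sketch of that approach, using the SDK the app already depends on (it ignores pagination beyond the first 1000 objects for brevity, and costs one extra listing per folder):

// Derive a folder's "last modified" as the newest LastModified
// among the objects under its prefix.
function folderLastModified(s3, bucket, prefix) {
    return s3.listObjectsV2({ Bucket: bucket, Prefix: prefix }).promise()
        .then(function (data) {
            return data.Contents.reduce(function (latest, obj) {
                return obj.LastModified > latest ? obj.LastModified : latest;
            }, new Date(0));
        });
}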

copy/paste problem for non Latin characters named object

Hi. I changed s3.makeUnauthenticatedRequest to s3.makeAuthenticatedRequest so that my clients can access their own non-website private buckets and upload/download files, add directories, rename, and copy/cut/paste files easily via the browser. But objects named with non-Latin characters cannot be copied and pasted. This is the sample error in the console:

TypeError: Cannot convert string to ByteString because the character at index 23 has value 287 which is greater than 255.

My code is quite simple:

var s3 = new AWS.S3();
var params = {
    Bucket: 'myBucket',
    CopySource: 'myBucket' + '/folder_a/' + utf8File, // utf8File holds the non-Latin object name
    Key: 'folder_b/' + utf8File,
    StorageClass: 'STANDARD_IA'
};
s3.copyObject(params, function (err, data) {
    if (err) console.log(err);
});

I can easily rename Latin-named files to Latin or non-Latin names via copyObject, but objects whose names contain non-Latin characters cannot be changed at all. What can be done?
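One likely cause (offered as a suggestion, not a confirmed fix): CopySource is transmitted as an HTTP header, and headers can only carry Latin-1 characters, which matches the ByteString error above. URL-encoding the copy source usually resolves it:

var utf8File = 'ğüşöçı.zip'; // example non-Latin object name
var params = {
    Bucket: 'myBucket',
    // Headers cannot carry characters above 255, so encode the source.
    CopySource: encodeURIComponent('myBucket/folder_a/' + utf8File),
    Key: 'folder_b/' + utf8File, // Key is encoded by the SDK itself
    StorageClass: 'STANDARD_IA'
};
s3.copyObject(params, function (err, data) {
    if (err) console.log(err);
});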

Accessing bucket but can't list folders

I've followed all the instructions but can't see any of the root folders of the bucket. However, I don't see any errors, so the bucket contents are being retrieved correctly and the object count is showing up in the top right corner of the screenshot. I am not sure what I may be doing wrong. Any ideas what may be happening?
[Screen Shot 2019-11-09 at 7 45 40 PM]

Bucket Policy:

I am logging in through this JS app as a user other than AWS-somerootUser, with full S3 permissions.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BucketPolicyForSFTP",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123123123123:role/AWS-somerootUser"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::mybucket",
                "arn:aws:s3:::mybucket/*"
            ]
        }
    ]
}

My CORS:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>ETag</ExposeHeader>
    <ExposeHeader>x-amz-meta-myheader</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>

Network failure

I'm getting a network failure despite having followed the steps.

[screenshot]

If I hit Advanced, I get more details:

[screenshot]

How can I fix it?

https://cyberduck.io works fine from my computer

Error accessing S3 bucket s3. Error: NetworkingError: Network Failure

Hello,
I followed the instructions, setting the Bucket Policy and CORS accordingly.

Then I tried static website hosting via S3, but this error appears:
Error accessing S3 bucket s3. Error: NetworkingError: Network Failure
In browser console I can see:
aws-sdk-2.0.13.min.js:4 OPTIONS https://s3-share.amazonaws.com/?delimiter=%2F net::ERR_NAME_NOT_RESOLVED

All files in my bucket are made public.

I am not sure what I'm doing wrong; I suspect it might be the CORS settings, which in my case are:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>http://MYBUCKETNAME.s3-website.eu-central-1.amazonaws.com</AllowedOrigin>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <ExposeHeader>ETag</ExposeHeader>
    <ExposeHeader>x-amz-meta-custom-header</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>

I also tried simply putting the URL of the index.html file into the browser (I changed AllowedOrigin accordingly, though I'm not sure that actually does anything in that case), but the issue persists.

Weird filename on Download

Hoping someone can help me here...

I'm seeing some weird filenames when downloading from the bucket using this tool.

I'm expecting just the file, with just the filename, but instead I'm getting a file with the complete path tacked on and a bunch of percent signs in the filename.

Example:

https://mybucket.s3.amazonaws.com/MAIN%2F2018.05.10-23_00_01%2FInstaller%2FSetup.exe

I'm expecting to just get Setup.exe. I've been running this for more than a month and I don't recall seeing this happen before. Looking at the developer console in Chrome I see:

target href=https://my-bucket.s3.amazonaws.com/MAIN%2F2018.05.07-10_52_08%2FInstaller%2FSetup.exe

Any way to resolve this?

Not optimal for buckets with many objects

The UI becomes unresponsive when loading a bucket with many objects (100K+). I believe this is because the app loads all the objects in the bucket in batches of 1000. You may consider paginating the calls to S3, fetching only a DataTables pageLength worth of objects at a time. Subsequent batches can be fetched by triggering the page event. I believe fnDrawCallback and fnInfoCallback are also called when the next button is clicked. A sketch follows the links below.

https://datatables.net/reference/event/page
http://legacy.datatables.net/usage/callbacks#fnDrawCallback
http://legacy.datatables.net/usage/callbacks#fnInfoCallback
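A hedged sketch of the paging idea (the DataTables wiring is omitted; the continuation-token handling is the relevant part):

// Fetch one DataTables page worth of objects per request instead of
// listing the whole bucket up front.
var nextToken = null;
function fetchPage(s3, bucket, pageLength) {
    return s3.listObjectsV2({
        Bucket: bucket,
        MaxKeys: pageLength,
        ContinuationToken: nextToken || undefined
    }).promise().then(function (data) {
        nextToken = data.IsTruncated ? data.NextContinuationToken : null;
        return data.Contents; // render these rows, then wait for the next page event
    });
}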

Feature Request: Filter by metadata

Hello!

This is not something major, and I'm not sure if it's possible given potential API limitations, etc.

Do you think it would be possible to have a filter for metadata tags? I was just thinking that if you tagged certain items, or certain items were auto-tagged, it would be cool to be able to filter by that metadata as well as by timestamp, name, etc.

Not sure how complicated that would be, but I just thought I would ask.

Thanks!

[v2-alpha] Feature request: upload files into a new folder

This is a feature request for v2-alpha branch which supports file uploads.

Currently, it's not possible to upload a whole folder to an S3 bucket using the tool, nor to create a new folder and then navigate into it. This makes it impossible to upload files into a folder that doesn't already exist in the S3 bucket.

Access Error

The code doesn't work as-is, but with the following addition it does:

AWS.config.apiVersions = {
    s3: '2006-03-01',
    // other service API versions
};

Thanks

Specific Folder access Only

Hi, how can I make this explore only inside a specific folder?
I mean, I don't want to show the root of the bucket; instead I want to explore the files and folders inside a specific folder.
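In case it helps, the underlying listing call supports this directly; a sketch (names are placeholders):

// Scope all listings to a fixed subtree by always passing a Prefix.
s3.listObjectsV2({
    Bucket: 'myBucket',
    Prefix: 'only/this/folder/', // root of the subtree to expose
    Delimiter: '/'               // group deeper keys into folder entries
}, function (err, data) {
    if (err) return console.log(err);
    console.log(data.CommonPrefixes, data.Contents);
});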

File upload does not work for files greater than 5 MB

This relates to version v2-alpha, which allows file uploads.
The feature works just fine for files smaller than 5 MB but fails for larger ones.
The error is as follows:

Access to XMLHttpRequest at 'https://bucket.s3.amazonaws.com/test2/test10Mb.file?uploads' from origin 'https://site.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
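A plausible explanation (a guess from the ?uploads in the blocked URL, not a confirmed diagnosis): above 5 MB the SDK typically switches to multipart upload, which is initiated with a POST, so the bucket's CORS configuration must also allow POST and expose ETag so the parts can be completed. Something along these lines, adapted to your origins:

<CORSRule>
    <AllowedOrigin>https://site.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <ExposeHeader>ETag</ExposeHeader>
</CORSRule>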

Load all CDN assets via HTTPS

RE: Protocol-relative URL
Now that SSL is encouraged for everyone and doesn’t have performance concerns, this technique is now an anti-pattern. If the asset you need is available on SSL, then always use the https:// asset.

Allowing the snippet to request over HTTP opens the door for attacks like the GitHub Man-on-the-side attack.

CORS for Read-Only S3 Bucket section in readme.md not working

BRANCH : v2-alpha

I have set up the buckets in S3 as per the readme instructions. In the CORS section of the readme it is mentioned:

If you intend to allow read-only access from BUCKET1, which hosts S3 Explorer, to BUCKET2, then you will need to supply a CORS configuration on BUCKET2 that permits HEAD and GET operations, for example:

<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>https://s3.amazonaws.com</AllowedOrigin>
        <AllowedOrigin>https://BUCKET1.s3.amazonaws.com</AllowedOrigin>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <ExposeHeader>ETag</ExposeHeader>
        <ExposeHeader>x-amz-meta-custom-header</ExposeHeader>
        <ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
        <ExposeHeader>x-amz-request-id</ExposeHeader>
        <ExposeHeader>x-amz-id-2</ExposeHeader>
        <ExposeHeader>date</ExposeHeader>
    </CORSRule>
</CORSConfiguration>

My BUCKET1 is aws-js-s3-explorer.

However, if I provide the CORS origin on BUCKET2 as https://BUCKET1.s3.amazonaws.com,
the setup does not work and I get a CORS error.

Changing it to:
https://aws-js-s3-explorer.s3.ap-south-1.amazonaws.com

works for me.

So, should the readme.md be updated to
https://BUCKET1.s3.REGION1.amazonaws.com
from
https://BUCKET1.s3.amazonaws.com

where REGION1 is the region of BUCKET1?

AWS China region URLs are not supported

When clicking an object in the explorer list, the window that opens is blank because the URL format is wrong, e.g. "bucketname.s3-cn-north-1.amazonaws.com/object". This may be because the AWS China region URL format differs from the global regions: the correct object URL format is "bucketname.s3.cn-north-1.amazonaws.com.cn/object".

Please fix it, thanks.

v2-alpha: support signature v4 regions

The v2-alpha branch does not support signature v4-only regions, such as Mumbai (ap-south-1). Requests fail with InvalidRequest: "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256."
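For reference, the AWS SDK for JavaScript can be told to sign with SigV4 explicitly, which v4-only regions require; presumably the fix is something along these lines:

// Force Signature Version 4 signing for v4-only regions.
var s3 = new AWS.S3({
    region: 'ap-south-1',
    signatureVersion: 'v4'
});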

Better handling of uploads of many (100+) large files

When uploading a folder of 100+ files (probably fewer, too) of 5-300 MB each, the upload progress seems to stall because all of the uploads are started at once. Could we perhaps set a customisable limit of, say, 10 active uploads at a time? That might make it more reliable. Or we could look into using something like uppy.js (which has S3 integration) for uploads instead; a sketch of the limit idea is below.
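A sketch of the proposed limit as a tiny promise pool (uploadFile is a hypothetical function returning a promise for one file's upload):

// Keep at most `limit` uploads in flight at any time.
function uploadWithLimit(files, uploadFile, limit) {
    var index = 0;
    function next() {
        if (index >= files.length) return Promise.resolve();
        var file = files[index++];
        return uploadFile(file).then(next); // start the next file when this one finishes
    }
    var workers = [];
    for (var i = 0; i < Math.min(limit, files.length); i++) {
        workers.push(next());
    }
    return Promise.all(workers);
}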
