
s3fs-nio's Introduction

S3FS NIO

[Badges: master build status, JDK support, Maven release version, docs, license, chat, SonarCloud ratings (bugs, coverage, reliability, security, vulnerabilities), issue-label counts]

This is an implementation of an Amazon S3 FileSystem provider using JSR-203 (a.k.a. NIO2) for Java 8.

Amazon Simple Storage Service provides a fully redundant data storage infrastructure for storing and retrieving any amount of data, at any time.

NIO2 is the file management API introduced in Java 7.

This project provides a complete API implementation for managing files and folders directly in Amazon S3.

Compatibility

We support JDK 8, 11 and 17.

Documentation

You can check out our documentation here.

s3fs-nio's People

Contributors

adnanekhan, bjoernakamanf, carlspring, dependabot[bot], edmang, electricsam, ennoruijters, github-actions[bot], guicamest, heikkipora, jarnaiz, jcustovic, mslowiak, nmonokov, pditommaso, ptirador, rlavolee, snyk-bot, songjinze, stanakaj, steve-todorov, twz123


s3fs-nio's Issues

Set up issue templates

Task Description

We should set up issue templates for the project. This will help bring more consistency to the way that issues and feature requests are described.

Tasks

The following tasks will need to be carried out:

  • Create issue templates.

Re-work the configuration properties to support multiple buckets, credentials and regions

Task Description

At the moment, it's not possible to use multiple buckets, credentials and regions. In a more sophisticated application this would be quite a limitation.

The format of the amazon.properties currently looks like this:

s3fs.bucket.name=/bucket
s3fs.access.key=AKIADEMOACCESSKEY
s3fs.secret.key=sfsoj291aSFfasfsafs1fsa1
s3fs.region=eu-west-2
s3fs.protocol=https

Once #136 has been implemented (and the underscores in the properties have been converted to dots), we should consider doing something like this:

s3fs.${configurationName}.bucket.name=/bucket
s3fs.${configurationName}.access.key=AKIADEMOACCESSKEY
s3fs.${configurationName}.secret.key=sfsoj291aSFfasfsafs1fsa1
s3fs.${configurationName}.region=eu-west-2
s3fs.${configurationName}.protocol=https

For example:

s3fs.filestore1.bucket.name=/bucket
s3fs.filestore1.access.key=AKIADEMOACCESSKEY
s3fs.filestore1.secret.key=sfsoj291aSFfasfsafs1fsa1
s3fs.filestore1.region=eu-west-2
s3fs.filestore1.protocol=https

This is just a proposal and it is open to better suggestions, if anyone has any.
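
To illustrate the proposal, here is a purely hypothetical sketch of how two named configurations might be consumed through the standard NIO2 entry point. Neither the URIs nor the property names below exist yet; everything here is part of the proposal, not an existing API:

import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;

import com.google.common.collect.ImmutableMap;

// Hypothetical usage: each named configuration would yield its own
// file system with its own credentials, bucket and region.
FileSystem filestore1 = FileSystems.newFileSystem(
        URI.create("s3://filestore1@s3.amazonaws.com/"),
        ImmutableMap.of("s3fs.filestore1.region", "eu-west-2"));

FileSystem filestore2 = FileSystems.newFileSystem(
        URI.create("s3://filestore2@s3.amazonaws.com/"),
        ImmutableMap.of("s3fs.filestore2.region", "us-east-1"));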

Tasks

The following tasks will need to be carried out:

  • Re-work the properties file for the tests.
  • Implement the necessary changes to the code.
  • Update the documentation.

Task Relationships

This task:

Help

Change the licenses

Task Description

We should define the project's licensing terms.

Tasks

  • Contact the original authors.
    • Reached out to @martint, who created [martint/s3fs], on which [Upplication/Amazon-S3-FileSystem-NIO2] is based. Martin Traverso has confirmed that he is happy to have the project under both an Apache 2.0 and an MIT license, as well as for someone else to continue the work on it (his original project is under an Apache 2.0 license).
    • Reached out to @jarnaiz, who forked [martint/s3fs] and further developed it as [Upplication/Amazon-S3-FileSystem-NIO2]. The fork appears to be under an MIT license, but the pom.xml file states it is (only) under an Apache 2.0 one. As @jarnaiz has not replied, and since it isn't perfectly clear which of the two licenses the project is under, our spin-off will be dual-licensed.
  • Add licenses:
    • Update the pom.xml.
    • Add license files for both licenses.

Related Tasks

Create a custom provider for AwsRegionProviderChain

Task Description

The S3Factory class builds new Amazon S3 client instances, which need a region. The region is read from a properties file; if no such property is defined, it is obtained using a default region provider chain.

As specified in this Pull Request discussion, although it's highly unlikely, a consumer of the API could need a different region provider chain (this would have to be custom, as the SDK only provides a default one). We should offer a customization option for this, rather than hard-coding new DefaultAwsRegionProviderChain.

Tasks

The following tasks will need to be carried out:

  • Create a custom provider class org.carlspring.cloud.storage.s3fs.S3AwsRegionProviderChain.
  • This class must extend from software.amazon.awssdk.regions.providers.AwsRegionProviderChain.
  • This class must have a constructor which calls its parent constructor to define a list of region providers, which, for the moment, will be in this order:
    software.amazon.awssdk.regions.providers.SystemSettingsRegionProvider
    software.amazon.awssdk.regions.providers.AwsProfileRegionProvider
  • The getRegion method should not be overridden, so the parent's implementation will be used.
  • Use this new provider chain in the org.carlspring.cloud.storage.s3fs.S3Factory class' getRegion method, replacing the DefaultAwsRegionProviderChain with the new one (see the sketch below).
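
A minimal sketch of the proposed class, assuming the SDK v2 types listed above:

package org.carlspring.cloud.storage.s3fs;

import software.amazon.awssdk.regions.providers.AwsProfileRegionProvider;
import software.amazon.awssdk.regions.providers.AwsRegionProviderChain;
import software.amazon.awssdk.regions.providers.SystemSettingsRegionProvider;

// Sketch of the proposed chain: getRegion() is intentionally not
// overridden, so the parent's implementation walks the providers in order.
public class S3AwsRegionProviderChain extends AwsRegionProviderChain
{
    public S3AwsRegionProviderChain()
    {
        super(new SystemSettingsRegionProvider(),
              new AwsProfileRegionProvider());
    }
}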

Task Relationships

This task:

  • Is a follow-up of: #63

Useful Links

Help

Set up pull requests templates

Task Description

We should set up pull request templates for the project. This will help bring more consistency to the way that pull requests are described.

Tasks

The following tasks will need to be carried out:

  • Create pull request templates.

Switch the NIO implementation to use AsynchronousFileChannel instead of FileChannel

Task Description

The AWS SDK for Java 2.x utilizes a new, non-blocking SDK architecture built on Netty to support true non-blocking I/O. Taking advantage of it, we can also switch the NIO implementation to use AsynchronousFileChannel. This should give a significant performance boost.

A plain FileChannel implements InterruptibleChannel, so it, as well as anything that uses it (such as the OutputStream returned by Files.newOutputStream()), has the unfortunate [1], [2] behaviour that performing any blocking operation on it (e.g. read() or write()) from a thread in the interrupted state will cause the channel itself to close with java.nio.channels.ClosedByInterruptException.

If this is a problem, using AsynchronousFileChannel instead is a possible alternative.
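
For illustration, a minimal sketch of the alternative API against a local file (the file name is an assumption). Interrupting a thread blocked on the returned Future throws InterruptedException without closing the channel, since AsynchronousFileChannel does not implement InterruptibleChannel:

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncWriteSketch
{
    public static void main(String[] args) throws Exception
    {
        try (AsynchronousFileChannel channel =
                     AsynchronousFileChannel.open(Paths.get("example.txt"), // hypothetical file
                                                  StandardOpenOption.CREATE,
                                                  StandardOpenOption.WRITE))
        {
            // Writes are submitted asynchronously; the Future completes when done.
            Future<Integer> written = channel.write(ByteBuffer.wrap("hello".getBytes()), 0L);
            System.out.println("Bytes written: " + written.get());
        }
    }
}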

Tasks

The following tasks will need to be carried out:

  • Make S3FileChannel extend AsynchronousFileChannel instead of FileChannel and override the respective methods associated with this new class.
  • Make S3FileSystemProvider override the method newAsynchronousFileChannel instead of newFileChannel from FileSystemProvider.

Task Relationships

This task:

  • Is a follow-up of: #63

Useful Links

Help

Migrate to JUnit 5.x

Task Description

The project is still using JUnit 4.x. We should make the necessary code changes to migrate to JUnit 5 (Jupiter).

Tasks

  • Upgrade to JUnit 5.x.
  • Make the necessary code changes (see the sketch below).
  • Make sure there are no legacy dependencies.
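
For reference, a small sketch (with hypothetical tests) of the typical mechanical changes: Jupiter's @Test lives in org.junit.jupiter.api, and JUnit 4's @Test(expected = ...) becomes assertThrows:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class MigrationSketchTest
{
    @Test
    void additionWorks()
    {
        assertEquals(4, 2 + 2);
    }

    @Test
    void divisionByZeroThrows()
    {
        int divisor = 0;
        // Replaces JUnit 4's @Test(expected = ArithmeticException.class).
        assertThrows(ArithmeticException.class, () -> System.out.println(1 / divisor));
    }
}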

Update all Maven plugins to their latest versions

Task Description

A lot of the plugins are significantly outdated or don't have explicit versions; we should make sure we're using up-to-date versions.

Tasks

The following tasks will need to be carried out:

  • Update all plugins.

Set up project labels

Task Description

We need to add some meaningful labels for the project.

Tasks

The following labels will need to be added:

  • needs triage
  • requirements pending
  • in progress
  • on hold
  • help wanted
  • hacktoberfest
  • good first issue
  • feature

Analyze options to use SdkHttpClient implementations

Task Description

The S3Factory class builds new Amazon S3 client instances and currently uses the Apache HTTP client.

As specified in this Pull Request discussion, this locks consumers into the ApacheHttpClient, which adds a dependency they may not want. We need to provide an option for other SdkHttpClient implementations.

The UrlConnectionHttpClient is a fairly popular choice in Java-based Lambda functions, as its faster startup time reduces the impact of cold starts.
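
A minimal sketch of the direction, using the SDK v2 builder API (the choice of UrlConnectionHttpClient here is illustrative):

import software.amazon.awssdk.http.urlconnection.UrlConnectionHttpClient;
import software.amazon.awssdk.services.s3.S3Client;

// The SDK v2 builder accepts any SdkHttpClient implementation; here the
// lightweight UrlConnectionHttpClient replaces the default Apache client.
S3Client s3 = S3Client.builder()
                      .httpClient(UrlConnectionHttpClient.create())
                      .build();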

Tasks

The following tasks will need to be carried out:

  • Analyze options to use SdkHttpClient implementations

Task Relationships

This task:

  • Is a follow-up of: #63

Useful Links

Help

Where does amazon-test.properties go?

Hi,

I'm trying to run the integration tests before submitting a PR, but I'm having a hard time getting the amazon-test.properties file to load. EnvironmentBuilder.class.getResourceAsStream("/amazon-test.properties"); returns null. (I'd argue the result should be checked for null before being passed to props.load, because it took me more time than I'd like to admit to find the cause.)

On one hand, the path is /amazon-test.properties which would imply to me it has to be put in the root of whatever filesystem the source is on. If that is intended that could be problematic for people running on Windows.

However, even if I just change it to EnvironmentBuilder.class.getResourceAsStream("amazon-test.properties"); and put that file in either the root of the repo or in the target directory, it still doesn't work.
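
For clarity, here's a minimal null-checked sketch of what I'd expect to work, assuming the file is supposed to live in src/test/resources (which ends up on the classpath root):

import java.io.InputStream;
import java.util.Properties;

// A leading slash resolves against the classpath root, not the file system.
try (InputStream in = EnvironmentBuilder.class.getResourceAsStream("/amazon-test.properties"))
{
    if (in == null)
    {
        throw new IllegalStateException("amazon-test.properties not found on the classpath!");
    }

    Properties props = new Properties();
    props.load(in);
}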

Any advice?

Thanks,
Adam

Change the project's artifact coordinates

Task Description

As we'd like to make a somewhat "fresh" spin-off of the Upplication/Amazon-S3-FileSystem-NIO2 project, we'd like to make it clearer for people that this is a new project and for this we will need new artifact coordinates.

The new artifact coordinates will be:

    <groupId>org.carlspring.cloud.aws</groupId>
    <artifactId>s3fs-nio2</artifactId>
    <packaging>jar</packaging>

Tasks

  • Update the pom.xml as required.

Integration tests must clean up after execution

Task Description

The integration tests currently leave behind the resources they have created without cleaning up after the execution.
We should improve this so that when you test with a real S3 bucket, it cleans up and reduces any costs for hosting.

Currently, these tests are using constructs like this to generate random names for the files:

Path file = fileSystemAmazon.getPath(bucket, UUID.randomUUID().toString());

While this does do the job, there is no way to actually clean up, unless we can invoke a "delete all" operation against the S3 bucket via the AWS SDK. This is inefficient.

We can do something like what we've done in our strongbox project. In essence, thanks to @sbespalov, we have implemented annotations (and a mechanism around them) for creating test repositories and test artifacts. These are created for every test method and then disposed of once the test has completed (there is, of course, an option to preserve them, if required for debugging purposes).
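
As a rough illustration of the direction (the class name, constructor wiring and bucket handling below are all hypothetical), a JUnit 5 extension could track the path created for each test and remove it afterwards:

import java.nio.file.FileSystem;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

import org.junit.jupiter.api.extension.AfterEachCallback;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

// Hypothetical sketch: creates a UUID-based test path before each test
// and deletes it afterwards, so nothing is left behind in the bucket.
public class S3TestPathExtension implements BeforeEachCallback, AfterEachCallback
{
    private static final ExtensionContext.Namespace NAMESPACE =
            ExtensionContext.Namespace.create(S3TestPathExtension.class);

    private final FileSystem fileSystem;
    private final String bucket;

    public S3TestPathExtension(FileSystem fileSystem, String bucket)
    {
        this.fileSystem = fileSystem;
        this.bucket = bucket;
    }

    @Override
    public void beforeEach(ExtensionContext context)
    {
        Path testPath = fileSystem.getPath(bucket, UUID.randomUUID().toString());
        context.getStore(NAMESPACE).put("testPath", testPath);
    }

    @Override
    public void afterEach(ExtensionContext context) throws Exception
    {
        Path testPath = context.getStore(NAMESPACE).get("testPath", Path.class);
        if (testPath != null)
        {
            Files.deleteIfExists(testPath);
        }
    }
}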

Classes of Interest From The Strongbox Project

Task List

  • Research what needs to be done and whether we can do it without Spring, or if it would require it. Perhaps we could add spring-test to the mix just for this, without having to turn this project into one that requires Spring as a runtime dependency.
  • Implement a @S3TestPath annotation.
  • Implement an S3TestPathContext. The paths can still use UUIDs, but this way they'll be tracked and removed.
  • Implement a S3TestPathExecutionListener.
  • Re-work the test cases
    • org.carlspring.cloud.storage.s3fs.path.ToURLIT
    • org.carlspring.cloud.storage.s3fs.S3ClientIT
    • org.carlspring.cloud.storage.s3fs.FilesIT
    • org.carlspring.cloud.storage.s3fs.spike.EnvironmentIT
    • org.carlspring.cloud.storage.s3fs.ExampleClassIT
    • org.carlspring.cloud.storage.s3fs.fileSystemProvider.NewFileSystemIT
    • org.carlspring.cloud.storage.s3fs.fileSystemProvider.GetFileSystemIT
    • org.carlspring.cloud.storage.s3fs.fileSystemProvider.NewByteChannelIT
    • org.carlspring.cloud.storage.s3fs.S3UtilsIT
    • org.carlspring.cloud.storage.s3fs.FileSystemsIT

Help

Remove dependency on logback-classic

Task Description

The project uses SLF4J. However, the pom includes a dependency on a specific logging implementation, Logback. Logback should be removed from the pom or, if it is needed for testing, moved to a different scope such as test or provided.

Set up Sonarcloud analysis

Task Description

We should set up SonarCloud so that we can make sure that the current code is up to standard and that future contributions via pull requests are also properly analyzed for code issues.

Tasks

The following tasks will need to be carried out:

  • Set up Sonarcloud.
  • Check if we can get things working properly with GitHub Actions.

Implement support for MinIO

Task Description

We need to investigate what would be required in order to add support for MinIO.

Tasks

The following tasks will need to be carried out:

  • Investigate what will need to be done.
  • Implement the necessary changes.
  • Create a MinIOITTestSuite based on the AmazonS3ITSuite one. (check #183)
    • Implement a JUnit extension that starts the MinIO docker image using testcontainers before all tests in the suite (and then stops it at the end). (check #60)
  • Add test cases.
  • Illustrate how to use this in the documentation.

Task Relationships

This task:

  • Relates to: #60

Useful Links

Help

Follow-up on mockito/mockito#1325 and upgrade to mockito >= 3.5.6

Task Description

Under JDK11, we are seeing the following error (which doesn't prevent the code from working, but which should be properly addressed):

[ERROR] WARNING: An illegal reflective access operation has occurred
[ERROR] WARNING: Illegal reflective access using Lookup on org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$yCgmqpdD to class java.nio.channels.spi.AbstractInterruptibleChannel
[ERROR] WARNING: Please consider reporting this to the maintainers of org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$yCgmqpdD
[ERROR] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[ERROR] WARNING: All illegal access operations will be denied in a future release

Once org.mockito:mockito:3.5.6 has been released, we should try this out again and report back our findings.

Tasks

The following tasks will need to be carried out:

  • Upgrade org.mockito:mockito to >= 3.5.6.

Task Relationships

This task:

Help

Use Multipart upload API to upload files larger than 5 GB

Task Description

The multipart upload API is designed to improve the upload experience for larger objects. You can upload an object in parts. These object parts can be uploaded independently, in any order, and in parallel. You can use a multipart upload for objects from 5 MB to 5 TB in size.

Tasks

The following tasks will need to be carried out:

  • First create a multipart upload with CreateMultipartUploadRequest and get the upload id.
  • Upload all the different parts of the object with the help of UploadPartRequest and CompletedPart.
  • Finally, call the completeMultipartUpload operation with a CompletedMultipartUpload request to tell S3 to merge all uploaded parts and finish the multipart operation (see the sketch below).
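
A minimal sketch of these three steps using the SDK v2 synchronous S3Client, with a single part for brevity (a real implementation would split the object into parts of at least 5 MB each, except the last):

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.CompleteMultipartUploadRequest;
import software.amazon.awssdk.services.s3.model.CompletedMultipartUpload;
import software.amazon.awssdk.services.s3.model.CompletedPart;
import software.amazon.awssdk.services.s3.model.CreateMultipartUploadRequest;
import software.amazon.awssdk.services.s3.model.UploadPartRequest;

public class MultipartUploadSketch
{
    public static void upload(S3Client s3, String bucket, String key, byte[] part)
    {
        // 1. Initiate the multipart upload and obtain the upload id.
        String uploadId = s3.createMultipartUpload(CreateMultipartUploadRequest.builder()
                                                                               .bucket(bucket)
                                                                               .key(key)
                                                                               .build())
                            .uploadId();

        // 2. Upload the parts (only one here); parts may be sent in parallel.
        String eTag = s3.uploadPart(UploadPartRequest.builder()
                                                     .bucket(bucket)
                                                     .key(key)
                                                     .uploadId(uploadId)
                                                     .partNumber(1)
                                                     .build(),
                                    RequestBody.fromBytes(part))
                        .eTag();

        // 3. Tell S3 to merge the uploaded parts into the final object.
        CompletedPart completedPart = CompletedPart.builder().partNumber(1).eTag(eTag).build();
        s3.completeMultipartUpload(CompleteMultipartUploadRequest.builder()
                                                                 .bucket(bucket)
                                                                 .key(key)
                                                                 .uploadId(uploadId)
                                                                 .multipartUpload(CompletedMultipartUpload.builder()
                                                                                                          .parts(completedPart)
                                                                                                          .build())
                                                                 .build());
    }
}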

Task Relationships

This task:

  • Is a follow-up of: #63

Useful Links

Help

Unable to change directories when exposed via Mina SFTP

Task Description

When using S3FS as a file system for Mina SFTP, it's not possible to change the directory (cd command):

sftp> ls -l
d---------   1 OWNER@   GROUP@          0 Jan 25 10:51 bar
d---------   1 OWNER@   GROUP@          0 Jan 25 10:42 foo
sftp> cd foo
Can't change directory: Can't check target

POSIX permissions are not being returned for these directories, which is causing an exception in the SFTP client when attempting to cd to the directory.

Tasks

The following tasks will need to be carried out:

  • When getting S3 POSIX file attributes in S3Utils, if the path is a directory, obtain the bucket ACL, convert its grant permissions to POSIX file permissions, and return them with the response (see the sketch below).
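
A rough sketch of the proposed mapping, using the SDK v2 bucket ACL API; the grant-to-permission rules below are an assumption for illustration (note that an SFTP cd requires the execute bit on the directory):

import java.nio.file.attribute.PosixFilePermission;
import java.util.EnumSet;
import java.util.Set;

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetBucketAclRequest;
import software.amazon.awssdk.services.s3.model.Grant;
import software.amazon.awssdk.services.s3.model.Permission;

public class AclToPosixSketch
{
    public static Set<PosixFilePermission> directoryPermissions(S3Client s3, String bucket)
    {
        Set<PosixFilePermission> permissions = EnumSet.noneOf(PosixFilePermission.class);

        for (Grant grant : s3.getBucketAcl(GetBucketAclRequest.builder().bucket(bucket).build()).grants())
        {
            if (grant.permission() == Permission.READ || grant.permission() == Permission.FULL_CONTROL)
            {
                permissions.add(PosixFilePermission.OWNER_READ);
                // SFTP clients need execute on a directory in order to cd into it.
                permissions.add(PosixFilePermission.OWNER_EXECUTE);
            }
            if (grant.permission() == Permission.WRITE || grant.permission() == Permission.FULL_CONTROL)
            {
                permissions.add(PosixFilePermission.OWNER_WRITE);
            }
        }

        return permissions;
    }
}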

Useful Links

Help

Investigate how to fix the tests, when we upgrade to com.github.marschall:memoryfilesystem >= 2.1.0

Task Description

We need to investigate what will need to be done in order to upgrade com.github.marschall:memoryfilesystem from 0.9.1 to 2.1.0, due to incompatibilities with JDK 11. The upgrade causes a problem with timestamps like this:

[ERROR] Failures:
[ERROR]   ReadAttributesTest.readAttributesAll:463 expected:<2020-08-23T10:19:52.0635Z> but was:<2020-08-23T21:19:52.635Z>
[ERROR]   ReadAttributesTest.readAttributesFile:92 expected:<2020-08-23T21:19:52.669127Z> but was:<2020-08-23T21:19:52.669Z>

This issue seems to be related to the higher precision of timestamps introduced in JDK 9 (JDK-8068730).

Task Relationships

This task:

Help

Set up badges

Task Description

We should gather a list of badges that we'd like to display such as:

  • Maven
  • License
  • Build status
  • Number of tests (there is no such option in GitHub Actions)
  • Chat channel
  • SonarQube quality gate badge

Tasks

The following tasks will need to be carried out:

  • Add badges for the above.

Get code coverage working in Sonarcloud using JaCoCo

Task Description

Currently, we're not collecting any code coverage, as this has been commented out.

Tasks

The following tasks will need to be carried out:

  • Create a new Maven profile for the code coverage.
  • Upgrade the jacoco-maven-plugin to the latest.
  • Apply the necessary fixes to get things working.
  • Make things work through the GitHub Actions workflow.

Task Relationships

This task:

  • Is a sub-task of: #
  • Depends on: #
  • Is a follow-up of: #
  • Relates to: #59

Help

Upgrade to aws-sdk-java-v2

Task Description

We should upgrade to AWS' Java SDK v2.

Tasks

The following tasks will need to be carried out:

  • Investigate what work needs to be done exactly and gather the requirements here.
  • Apply the necessary changes.

Useful Links

Help

Upgrade to the latest aws-java-sdk-s3

Task Description

The version of aws-java-sdk-s3 currently in use is 1.11.232. We need to upgrade to the latest version (currently 1.11.830) and make sure things still work as expected.

Tasks

The following tasks will need to be carried out:

  • Upgrade to the latest aws-java-sdk-s3.
  • Apply any required code fixes.

Investigate if and how we can extract the integration tests into suites

Task Description

At the moment, the integration tests are executed as separate tests using the maven-failsafe-plugin. We need to re-work this so that they are executed as a suite, which would let us re-run the same integration tests against different environments. For example, the default profile for integration tests should run against a real Amazon S3; if we re-work these to be executed as a suite, we could then add another suite with the same tests, but for MinIO instead.

What we can do is:

  • Tag all the integration tests that connect to S3 with the @Tag("integration-test-s3") annotation.
  • Create separate test suites that have an @IncludeTags("integration-test-s3") annotation.
  • Create an AmazonS3ITTestSuite (see the sketch below).
  • Eventually, create additional test suites such as, for example, a MinIOITTestSuite.
  • Make the maven-failsafe-plugin only execute the *ITSuite classes.
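
A minimal sketch of such a suite, assuming the junit-platform-suite-api annotations:

import org.junit.platform.suite.api.IncludeTags;
import org.junit.platform.suite.api.SelectPackages;
import org.junit.platform.suite.api.Suite;

// Runs every test in the package that carries @Tag("integration-test-s3").
@Suite
@SelectPackages("org.carlspring.cloud.storage.s3fs")
@IncludeTags("integration-test-s3")
public class AmazonS3ITTestSuite
{
}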

Tasks

The following tasks will need to be carried out:

  • Investigate whether it's possible to enable/trigger profiles for JUnit tests at a suite level.
  • Create a test suite for all the integration tests.
  • Make the maven-failsafe-plugin only execute the *ITSuite classes.

Task Relationships

This task:

Stackoverflow Questions

Help

Design a logo for the project

Task Description

We should create a logo for the project.

The logo should resemble something between the logo for AWS S3 and a file system.

We should check under what license the S3 logo is and whether it could be used for OSS purposes. (We need links that quote the official Amazon view on this.)

Help

Replace log4j with slf4j

Task Description

We should consider replacing log4j with slf4j.

Tasks

The following tasks will need to be carried out:

  • Make the necessary changes in order to migrate to slf4j (see the sketch below).
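
For reference, a small sketch of the target pattern (the class and method here are illustrative): code logs against the SLF4J API only, and the logging backend is chosen at deployment time.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CopySketch
{
    private static final Logger logger = LoggerFactory.getLogger(CopySketch.class);

    public void copy(String source, String target)
    {
        // Placeholders avoid string concatenation when the level is disabled.
        logger.debug("Copying {} to {}.", source, target);
    }
}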

Set up Github Actions

Task Description

We need to set up a matrix build that tests the code on:

  • Linux
  • Windows
  • JDK8
  • JDK11

Tasks

The following tasks will need to be carried out:

  • Set up the Github Action script.

Improve GitHub build

Task Description

  • Merge the jobs into one, instead of having one for build and one for analysis.
  • Enable the ~/.m2/repository cache to speed up the build.
  • Send the code analysis report to SonarCloud (once).
  • Send code coverage/analysis reports to SonarCloud only on branches. (check here)
  • Set up matrices for ubuntu, windows and macos.

Remove all the unnecessary `throws` in method definitions

Task Description

There are a lot of methods (both in production and test code) that either have generic throws Exception clauses, or exception clauses for exceptions that aren't being thrown at all in the respective methods.

Tasks

The following tasks will need to be carried out:

  • Go over the code and clean these up.

Re-work the way the configuration works

Task Description

The current way that the configuration is implemented is quite confusing and error-prone to work with. We need to clarify how we'd like this to work, gather the requirements and refactor this.

Current Way It Works

Generic Way

import java.util.Map;

import com.google.common.collect.ImmutableMap;

// ACCESS_KEY, SECRET_KEY, REGION and PROTOCOL are the property-name
// constants defined by the library (e.g. in S3Factory).
String accessKey = System.getenv(ACCESS_KEY);
String secretKey = System.getenv(SECRET_KEY);
String region = System.getenv(REGION);
String protocol = System.getenv(PROTOCOL);

Map<String, Object> env = ImmutableMap.<String, Object>builder()
                                      .put(ACCESS_KEY, accessKey)
                                      .put(SECRET_KEY, secretKey)
                                      .put(REGION, region)
                                      .put(PROTOCOL, protocol)
                                      .build();
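
Once the env map has been built, a file system can then be obtained through the standard NIO2 entry point, e.g. (the URI here is illustrative):

import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;

FileSystem fileSystem = FileSystems.newFileSystem(URI.create("s3://s3.amazonaws.com/"), env);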

Production

If there is an amazon.properties on the classpath, this will be used to load the properties. The configuration is loaded using org.carlspring.cloud.storage.s3fs.S3FileSystemProvider.

If not, the generic way can be used.

Tests

If there is an amazon-test.properties on the classpath, this will be used to load the properties. The configuration is loaded using org.carlspring.cloud.storage.s3fs.util.EnvironmentBuilder.

If not, the generic way can be used.

Current Problems

  • Production and testing should use the same code for loading properties and setting up the configuration.
  • It is not possible to use multiple bucket, region, or credential settings.

Tasks

The following tasks will need to be carried out:

  • Define our use cases.
  • Collect a list of problems with the current implementation.
  • Collect a list of requirements.
    • It must be possible to override the default settings using properties.
    • It must be easy to override the configuration during testing, especially, if we have parallel tests.
    • It must be possible to use in parallel tests.
  • Refactor the code
    • Convert all the properties to use dots instead of underscores, as this is the standard Java convention. ( #136 )
    • Make it possible to use settings for multiple buckets, regions, credentials. ( #137 )
    • Make the loading of the configuration work the same way in production and testing.
    • Improve the validation for the core properties (bucket name, access key, secret key, region, protocol) and fail early with meaningful messages.
  • Update the documentation. ( #114 )

Task Relationships

This task:

Help

Feature Request: S3 Access Points

Task Description

AWS S3 Access Points are unique hostnames attached to an S3 bucket, each with dedicated access policies. This allows large scale access control to be delegated to multiple APs, each dedicated to providing access to one user, rather than combining all access control in one large bucket policy. Larger scale users are increasingly using APs to simplify bucket access control.

S3 Access Point ARNs may be used by the AWS CLI and SDK in place of a bucket name; once an S3 AP is defined for a bucket mybucket, the following are equivalent:

s3://mybucket/
s3://arn:aws:s3:<region>:<account-id>:accesspoint/<ap-name>/

Task Relationships

This task:

Help

Translate all comments, methods, vars, etc that are in Spanish into English

Task Description

As the original authors were native Spanish speakers, there are a lot of comments, variables, etc. in the code that are in Spanish. We should translate these into English, so that the code is clearer for contributors who don't speak Spanish.

The code base is not that large, so going through it (both sources and resource files) should not take too long.

Tasks

The following tasks will need to be carried out:

  • Translate all comments from Spanish to English.
  • Translate all variables from Spanish to English.
  • Translate all methods from Spanish to English.

Help

Preliminary work for spin-off of Upplication/Amazon-S3-FileSystem-NIO2

Task Description

This is the master task for creating a spin-off of Upplication/Amazon-S3-FileSystem-NIO2.

Tasks

Assemble a list of the pull requests of Upplication/Amazon-S3-FileSystem-NIO2 and evaluate them

Task Description

While Upplication/Amazon-S3-FileSystem-NIO2 seems abandoned, there appear to be quite a few open pull requests. We should see if there are any that we would like to apply to our project and reach out to the respective contributors to ask whether they'd like to contribute them to our project.

This is the master task. The actual work should be split into sub-tasks and be added to the list below.

Tasks

The following tasks will need to be carried out:

Help

Set up testcontainers

Task Description

We should consider setting up our tests to use testcontainers.

Tasks

The following tasks will need to be carried out:

  • Investigate what needs to be done to get things working (and document the findings here).
  • Investigate whether we can somehow have parameterized (JUnit 5) integration tests that can work with both testcontainers and Amazon. - We'll be using JUnit Tags for now (see #183)
  • Set up testcontainers so that the container and the tests start when -Pit-minio is used.
  • Enable MinIO as part of the GH Actions pipeline.
  • Create a JUnit extension to start the MinIO testcontainer (see the sketch after this list) and use it in @MinioIntegrationTest with @ExtendWith in tests that can run against MinIO. An example of how to do this can be found here.
  • Make the integration tests use the correct URI when creating the filesystem, depending on the running.it property (i.e. s3://s3.amazonaws.com/ for S3 and s3://localhost:9000/ for MinIO). Perhaps it would be a good idea to add a method to BaseIntegrationTest which generates the appropriate URI for you.
  • Configure integration tests to use testcontainers via @MinioIntegrationTest where this is possible. Some integration tests can be run against both S3 and MinIO, since the APIs are compatible; such tests can be annotated with both @S3IntegrationTest and @MinioIntegrationTest.
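
A rough sketch of such an extension; the image tag, credentials and port below are illustrative assumptions:

import org.junit.jupiter.api.extension.BeforeAllCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.testcontainers.containers.GenericContainer;

public class MinioContainerExtension implements BeforeAllCallback
{
    // Shared across suites; testcontainers' Ryuk reaps it when the JVM exits.
    private static final GenericContainer<?> MINIO =
            new GenericContainer<>("minio/minio:latest")
                    .withEnv("MINIO_ROOT_USER", "admin")
                    .withEnv("MINIO_ROOT_PASSWORD", "password")
                    .withCommand("server", "/data")
                    .withExposedPorts(9000);

    @Override
    public void beforeAll(ExtensionContext context)
    {
        if (!MINIO.isRunning())
        {
            MINIO.start();
        }
    }
}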

Task Relationships

This task:

Help

Rename the packages to org.carlspring.cloud.storage.s3fs

Task Description

We need to do this to avoid confusing future users with regard to where to find the code and whether they should be looking at one of the earlier projects from which we're inheriting the code.

Tasks

  • Rename com.upplication.s3fs to org.carlspring.cloud.storage.s3fs.

Set up Snyk.io

Task Description

We should enable snyk.io scans for the project, as well as for pull requests.

Tasks

The following tasks will need to be carried out:

  • Enable snyk.io scanning.

Re-work the README.md

Task Description

We should re-write the README.md and cover:

  • Project History
  • Project Goals
  • Roadmap
  • Location of documentation
  • Examples

Tasks

The following tasks will need to be carried out:

  • Update the README.md, so that it reflects the new reality.

Convert the configuration properties to use dots instead of underscores

Task Description

We should convert all the properties to use dots instead of underscores, as this is the standard Java convention.

How it looks currently:

  • amazon.properties
bucket_name=/bucket
s3fs_access_key=AKIAEXAMPLEKEY
s3fs_secret_key=fadf3rafsadf3r23wafsdf9i209ifs
s3fs_region=eu-west-2
s3fs_protocol=https

How it should look:

  • amazon.properties
s3fs.bucket.name=/bucket
s3fs.access.key=AKIAEXAMPLEKEY
s3fs.secret.key=fadf3rafsadf3r23wafsdf9i209ifs
s3fs.region=eu-west-2
s3fs.protocol=https

Tasks

The following tasks will need to be carried out:

  • Convert the properties to use dots instead of underscores.
  • Make the necessary changes to the code.
  • Rename the bucket_name property to s3fs.bucket.name.
  • Update the documentation.

Task Relationships

This task:

Help

Document the configuration options

Task Description

We need to improve the documentation to properly explain the supported configuration properties, what they are for, and how to use them.

The documentation for this is under docs/content/reference/configuration-options.md.

Task Relationships

This task:

Help
