s3mock's Introduction


S3Mock

S3Mock is a lightweight server that implements parts of the Amazon S3 API.
It has been created to support local integration testing by reducing infrastructure dependencies.

The S3Mock server can be started as a standalone Docker container, via Testcontainers, with JUnit 4, JUnit 5, or TestNG support, or programmatically.

Changelog

See GitHub releases.
See also the changelog for detailed information about changes in releases and changes planned for future releases.

Supported S3 operations

Of these operations of the Amazon S3 API, all marked ✅ are supported by S3Mock:

Operation Support Comment
AbortMultipartUpload
CompleteMultipartUpload
CopyObject
CreateBucket
CreateMultipartUpload
DeleteBucket
DeleteBucketAnalyticsConfiguration
DeleteBucketCors
DeleteBucketEncryption
DeleteBucketIntelligentTieringConfiguration
DeleteBucketInventoryConfiguration
DeleteBucketLifecycle
DeleteBucketMetricsConfiguration
DeleteBucketOwnershipControls
DeleteBucketPolicy
DeleteBucketReplication
DeleteBucketTagging
DeleteBucketWebsite
DeleteObject
DeleteObjects
DeleteObjectTagging
DeletePublicAccessBlock
GetBucketAccelerateConfiguration
GetBucketAcl
GetBucketAnalyticsConfiguration
GetBucketCors
GetBucketEncryption
GetBucketIntelligentTieringConfiguration
GetBucketInventoryConfiguration
GetBucketLifecycle Deprecated in S3 API
GetBucketLifecycleConfiguration
GetBucketLocation
GetBucketLogging
GetBucketMetricsConfiguration
GetBucketNotification
GetBucketNotificationConfiguration
GetBucketOwnershipControls
GetBucketPolicy
GetBucketPolicyStatus
GetBucketReplication
GetBucketRequestPayment
GetBucketTagging
GetBucketVersioning
GetBucketWebsite
GetObject
GetObjectAcl
GetObjectAttributes for objects, not parts
GetObjectLegalHold
GetObjectLockConfiguration
GetObjectRetention
GetObjectTagging
GetObjectTorrent
GetPublicAccessBlock
HeadBucket
HeadObject
ListBucketAnalyticsConfigurations
ListBucketIntelligentTieringConfigurations
ListBucketInventoryConfigurations
ListBucketMetricsConfigurations
ListBuckets
ListMultipartUploads
ListObjects Deprecated in S3 API
ListObjectsV2
ListObjectVersions Only dummy implementation
ListParts
PutBucketAccelerateConfiguration
PutBucketAcl
PutBucketAnalyticsConfiguration
PutBucketCors
PutBucketEncryption
PutBucketIntelligentTieringConfiguration
PutBucketInventoryConfiguration
PutBucketLifecycle Deprecated in S3 API
PutBucketLifecycleConfiguration
PutBucketLogging
PutBucketMetricsConfiguration
PutBucketNotification
PutBucketNotificationConfiguration
PutBucketOwnershipControls
PutBucketPolicy
PutBucketReplication
PutBucketRequestPayment
PutBucketTagging
PutBucketVersioning
PutBucketWebsite
PutObject
PutObjectAcl
PutObjectLegalHold
PutObjectLockConfiguration
PutObjectRetention
PutObjectTagging
PutPublicAccessBlock
RestoreObject
SelectObjectContent
UploadPart
UploadPartCopy
WriteGetObjectResponse

Usage

Usage of AWS S3 SDKs

S3Mock can be used with any of the available AWS S3 SDKs.

The Integration Tests contain various examples of how to use the S3Mock with the AWS SDK for Java v1 and v2 in Kotlin. The modules below testsupport contain examples in Java.

S3Client and S3Presigner instances are created in those tests; minimal sketches are shown below.

Path-style vs Domain-style access

AWS S3 SDKs usually use domain-style access by default. Configuration is needed for path-style access.

S3Mock currently only supports path-style access (e.g., http://localhost:9090/bucket/someKey).

Domain-style access to buckets (e.g., http://bucket.localhost:9090/someKey) does not work, because the domain localhost is special and does not allow subdomain access without modifications to the operating system.
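
For illustration, here is a minimal sketch of an AWS SDK for Java v2 client configured against a locally running S3Mock. The endpoint and the dummy credentials are assumptions; forcePathStyle enables the path-style access described above:

import java.net.URI;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class S3MockClientSketch {
  public static void main(String[] args) {
    S3Client s3Client = S3Client.builder()
        .region(Region.US_EAST_1) // any region; S3Mock does not check it
        .credentialsProvider(StaticCredentialsProvider.create(
            AwsBasicCredentials.create("access", "secret"))) // dummy credentials
        .endpointOverride(URI.create("http://localhost:9090")) // S3Mock HTTP port
        .forcePathStyle(true) // S3Mock only supports path-style access
        .build();

    s3Client.listBuckets().buckets().forEach(b -> System.out.println(b.name()));
  }
}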

Presigned URLs

S3 SDKs can be used to create presigned URLs, and the S3 API supports access through those URLs.

S3Mock will accept presigned URLs, but it ignores all parameters.
For instance, S3Mock does not verify the HTTP verb that the presigned URL was created with, and it does not validate whether the link has expired.

S3 SDKs can be used to create presigned URLs pointing to S3Mock if they're configured for path-style access. See the "Usage of..." section above for links to examples on how to use the SDK with presigned URLs.
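
A hedged sketch of creating a presigned GET URL with the SDK v2 S3Presigner pointing at S3Mock; the endpoint, bucket, and key are assumptions:

import java.net.URI;
import java.time.Duration;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Configuration;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
import software.amazon.awssdk.services.s3.presigner.model.PresignedGetObjectRequest;

public class S3MockPresignSketch {
  public static void main(String[] args) {
    S3Presigner presigner = S3Presigner.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(StaticCredentialsProvider.create(
            AwsBasicCredentials.create("access", "secret"))) // dummy credentials
        .endpointOverride(URI.create("http://localhost:9090"))
        .serviceConfiguration(S3Configuration.builder()
            .pathStyleAccessEnabled(true) // required: S3Mock is path-style only
            .build())
        .build();

    PresignedGetObjectRequest presigned = presigner.presignGetObject(
        GetObjectPresignRequest.builder()
            .signatureDuration(Duration.ofMinutes(5)) // S3Mock does not check expiry
            .getObjectRequest(GetObjectRequest.builder()
                .bucket("my-bucket").key("my-file").build())
            .build());

    System.out.println(presigned.url());
  }
}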

Usage of AWS CLI

S3Mock can be used with the AWS CLI. Setting the --endpoint-url enables path-style access.

Examples:

Create bucket

aws s3api create-bucket --bucket my-bucket --endpoint-url=http://localhost:9090

Put object

aws s3api put-object --bucket my-bucket --key my-file --body ./my-file --endpoint-url=http://localhost:9090

Get object

aws s3api get-object --bucket my-bucket --key my-file --endpoint-url=http://localhost:9090 my-file-output

Usage of plain HTTP

As long as the requests work with the S3 API, they will work with S3Mock as well.

Examples:

Create bucket

curl --request PUT "http://localhost:9090/my-test-bucket/"

Put object

curl --request PUT --upload-file ./my-file http://localhost:9090/my-test-bucket/my-file

Get object

curl --request GET http://localhost:9090/my-test-bucket/my-file

S3Mock configuration options

The mock can be configured with the following environment variables:

  • validKmsKeys: list of KMS Key-Refs that are to be treated as valid.
    • KMS keys must be configured as valid ARNs in the format "arn:aws:kms:region:acct-id:key/key-id", for example "arn:aws:kms:us-east-1:1234567890:key/valid-test-key-id".
    • The list must be comma-separated keys like arn-1, arn-2.
    • When requesting with KMS encryption, the key ID is passed to the SDK / CLI; in the example above this would be "valid-test-key-id".
    • S3Mock does not implement KMS encryption. If a key ID is passed in a request, S3Mock will only validate that the given key was configured during startup and reject the request if it was not.
  • initialBuckets: list of names for buckets that will be available initially.
    • The list must be comma-separated names like bucketa, bucketb.
  • root: the base directory to place the temporary files exposed by the mock. If S3Mock is started in Docker, a volume must be mounted as the root directory, see examples below.
  • debug: set to true to enable Spring Boot's debug output.
  • trace: set to true to enable Spring Boot's trace output.
  • retainFilesOnExit: set to true to let S3Mock keep all files that were created during its lifetime. Default is false, all files are removed if S3Mock shuts down.

S3Mock Docker

The S3Mock Docker container is the recommended way to use S3Mock.
It is released to Docker Hub.
The container is lightweight, built on top of the official Alpine Linux image.

If needed, configure memory and CPU limits for the S3Mock Docker container.

The JVM will automatically use half the available memory.

Start using the command-line

Starting on the command-line:

docker run -p 9090:9090 -p 9191:9191 -t adobe/s3mock

The port 9090 is for HTTP, port 9191 is for HTTPS.

Example with configuration via environment variables:

docker run -p 9090:9090 -p 9191:9191 -e initialBuckets=test -e debug=true -t adobe/s3mock

Start using the Fabric8 Docker-Maven-Plugin

Our integration tests use the Amazon S3 client to verify the server functionality against the S3Mock. During the Maven build, the Docker image is started using the docker-maven-plugin and the corresponding ports are passed to the JUnit test through the maven-failsafe-plugin. See BucketV2IT as an example of how it's used in the code.

This way, one can easily switch between calling S3Mock and the real S3 endpoint, and it doesn't add any additional Java dependencies to the project.

Start using Testcontainers

The S3MockContainer is a Testcontainers implementation that comes pre-configured with the HTTP and HTTPS ports exposed. Environment variables can be set on startup.

The example S3MockContainerJupiterTest demonstrates the usage with JUnit 5. The example S3MockContainerManualTest demonstrates the usage with plain Java.

Testcontainers provides integrations for JUnit 4, JUnit 5 and Spock.
For more information, visit the Testcontainers website.

To use the S3MockContainer, use the following Maven artifact in test scope:

<dependency>
 <groupId>com.adobe.testing</groupId>
 <artifactId>s3mock-testcontainers</artifactId>
 <version>...</version>
 <scope>test</scope>
</dependency>
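
A minimal JUnit 5 sketch using the container; the image tag, the withInitialBuckets call, and the package name are assumptions based on the S3MockContainer API described above:

import com.adobe.testing.s3mock.testcontainers.S3MockContainer;
import org.junit.jupiter.api.Test;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import static org.junit.jupiter.api.Assertions.assertTrue;

@Testcontainers
class S3MockContainerSketchTest {

  // "latest" is used for brevity; pin a release version in real builds.
  @Container
  private static final S3MockContainer S3_MOCK =
      new S3MockContainer("latest").withInitialBuckets("my-bucket");

  @Test
  void containerIsRunning() {
    assertTrue(S3_MOCK.isRunning());
    // Point an S3 client at the mock via the container's mapped HTTP endpoint.
  }
}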

Start using Docker compose

Simple example

Create a file docker-compose.yml

services:
  s3mock:
    image: adobe/s3mock:latest
    environment:
      - initialBuckets=bucket1
    ports:
      - 9090:9090

Start with docker compose up -d

Stop with docker compose down

Expanded example

Suppose we want to see what S3Mock is persisting, and look at the logs it generates in detail.

A local directory is needed; let's call it locals3root. This directory must be mounted as a volume into the Docker container when it's started, and that mounted volume must then be configured as the root for S3Mock. Let's call the mounted volume inside the container containers3root. S3Mock will delete all files when it shuts down; retainFilesOnExit=true tells it to keep all files instead.

Also, to see debug logs, debug=true must be configured for S3Mock.

Create a file docker-compose.yml

services:
  s3mock:
    image: adobe/s3mock:latest
    environment:
      - debug=true
      - retainFilesOnExit=true
      - root=containers3root
    ports:
      - 9090:9090
      - 9191:9191
    volumes:
      - ./locals3root:/containers3root

Create a directory locals3root.

Start with docker compose up -d

Create a bucket "my-test-bucket" with curl --request PUT "http://localhost:9090/my-test-bucket/"

Stop with docker compose down

Look into the directory locals3root where metadata and contents of the bucket are stored.

$ mkdir s3mock-mounttest
$ cd s3mock-mounttest
$ mkdir locals3root
$ cat docker-compose.yml
services:
  s3mock:
    image: adobe/s3mock:latest
    environment:
      - debug=true
      - retainFilesOnExit=true
      - root=containers3root
    ports:
      - 9090:9090
      - 9191:9191
    volumes:
      - ./locals3root:/containers3root

$ docker compose up -d
[+] Running 2/2
 ✔ Network s3mock-mounttest_default     Created
 ✔ Container s3mock-mounttest-s3mock-1  Started
$ curl --request PUT "http://localhost:9090/my-test-bucket/"
$ docker compose down
[+] Running 2/0
 ✔ Container s3mock-mounttest-s3mock-1  Removed
 ✔ Network s3mock-mounttest_default     Removed
 
$ ls locals3root
my-test-bucket
$ ls locals3root/my-test-bucket
bucketMetadata.json

S3Mock Java

S3Mock Java libraries are released to the Sonatype Maven Repository and subsequently synced to the official Maven mirrors.

⚠️ WARNING
Using the Java libraries is discouraged; see the explanation below.
Using the Docker image is encouraged to insulate both S3Mock and your application at runtime.

S3Mock is built using Spring Boot. If projects use S3Mock by adding the dependency to their project and starting S3Mock during a JUnit test, the classpaths of the tested application and of S3Mock are merged, leading to unpredictable and undesired effects such as class conflicts or dependency version conflicts.
This is especially problematic if the tested application itself is a Spring (Boot) application, as both applications will load configurations based on the availability of certain classes in the classpath, leading to unpredictable runtime behaviour.

This is the opposite of what software engineers are trying to achieve when thoroughly testing code in continuous integration...

S3Mock dependencies are updated regularly; any update could break any number of projects.
See also issues labelled "dependency-problem".

See also the Java section below

Start using the JUnit4 Rule

The example S3MockRuleTest demonstrates the usage of the S3MockRule, which can be configured through a builder.

To use the JUnit4 Rule, use the following Maven artifact in test scope:

<dependency>
 <groupId>com.adobe.testing</groupId>
 <artifactId>s3mock-junit4</artifactId>
 <version>...</version>
 <scope>test</scope>
</dependency>
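
A minimal sketch of a test using the rule; the builder and createS3Client calls are the ones quoted in the issues further below, while the bucket name and region are assumptions:

import com.adobe.testing.s3mock.junit4.S3MockRule;
import com.amazonaws.services.s3.AmazonS3;
import org.junit.ClassRule;
import org.junit.Test;

public class S3MockRuleSketchTest {

  // Starts S3Mock once for all tests in this class.
  @ClassRule
  public static final S3MockRule S3_MOCK = S3MockRule.builder().silent().build();

  @Test
  public void createsAndListsBucket() {
    AmazonS3 s3 = S3_MOCK.createS3Client("eu-west-1"); // pre-configured for the mock
    s3.createBucket("my-bucket");
    s3.listBuckets().forEach(b -> System.out.println(b.getName()));
  }
}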

Start using the JUnit5 Extension

The S3MockExtension can currently be used in two ways:

  1. Declaratively using @ExtendWith(S3MockExtension.class) and by injecting a properly configured instance of AmazonS3 client and/or the started S3MockApplication to the tests. See examples: S3MockExtensionDeclarativeTest (for SDKv1) or S3MockExtensionDeclarativeTest (for SDKv2)

  2. Programmatically using @RegisterExtension and by creating and configuring the S3MockExtension using a builder. See examples: S3MockExtensionProgrammaticTest (for SDKv1) or S3MockExtensionProgrammaticTest (for SDKv2)

To use the JUnit5 Extension, use the following Maven artifact in test scope:

<dependency>
  <groupId>com.adobe.testing</groupId>
  <artifactId>s3mock-junit5</artifactId>
  <version>...</version>
  <scope>test</scope>
</dependency>
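
A minimal declarative sketch, assuming the extension lives in the com.adobe.testing.s3mock.junit5 package and injects an SDK v1 AmazonS3 client as described above:

import com.adobe.testing.s3mock.junit5.S3MockExtension;
import com.amazonaws.services.s3.AmazonS3;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

@ExtendWith(S3MockExtension.class)
class S3MockExtensionSketchTest {

  @Test
  void createsBucket(final AmazonS3 s3) { // injected, pre-configured for the mock
    s3.createBucket("my-bucket");
  }
}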

Start using the TestNG Listener

The example S3MockListenerXMLConfigurationTest demonstrates the usage of the S3MockListener, which can be configured as shown in testng.xml. The listener bootstraps the S3Mock application before TestNG execution starts and shuts down the application just before the execution terminates. Please refer to IExecutionListener from the TestNG API.

To use the TestNG Listener, use the following Maven artifact in test scope:

<dependency>
 <groupId>com.adobe.testing</groupId>
 <artifactId>s3mock-testng</artifactId>
 <version>...</version>
 <scope>test</scope>
</dependency>

Start programmatically

Include the following dependency and use one of the start methods in com.adobe.testing.s3mock.S3MockApplication:

<dependency>
  <groupId>com.adobe.testing</groupId>
  <artifactId>s3mock</artifactId>
  <version>...</version>
</dependency>
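
A hedged sketch of the programmatic lifecycle; the no-argument start() and the stop() call are assumptions based on the start methods mentioned above and the S3MockApplication.start(props) call quoted in the issues below:

import com.adobe.testing.s3mock.S3MockApplication;

public class S3MockProgrammaticSketch {
  public static void main(String[] args) {
    // Boots the Spring Boot application; see the warning above about merged classpaths.
    S3MockApplication s3Mock = S3MockApplication.start();
    try {
      // ... exercise http://localhost:9090 here ...
    } finally {
      s3Mock.stop();
    }
  }
}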

File System Structure

S3Mock stores Buckets, Objects, Parts and other data on disk.
This lets users inspect the stored data while the S3Mock is running.
If the config property retainFilesOnExit is set to true, this data will not be deleted when S3Mock is shut down.

❗ FYI
While it may be possible to start S3Mock on a root folder from a previous run and have all data available through the S3 API, the structure and contents of the files are not considered Public API, and are subject to change in later releases.
Also, there are no automated test cases for this behaviour.

Root-Folder

S3Mock stores buckets and objects in a root-folder.

This folder is expected to be empty when S3Mock starts. See also FYI above.

/<root-folder>/

Buckets

Buckets are stored as folders directly below the root, named as they were created through the S3 API:

/<root-folder>/<bucket-name>/

BucketMetadata is stored in a file in the bucket directory, serialized as JSON.
Among other data, BucketMetadata contains the "key" -> "uuid" dictionary for all objects uploaded to this bucket.

/<root-folder>/<bucket-name>/bucketMetadata.json

Objects

Objects are stored in folders below the bucket they were created in. A folder is created that uses the Object's UUID assigned in the BucketMetadata as a name.

/<root-folder>/<bucket-name>/<uuid>/

Object data is stored below that UUID folder.

Binary data is always stored in a file binaryData

/<root-folder>/<bucket-name>/<uuid>/binaryData

Object metadata is serialized as JSON and stored as objectMetadata.json

/<root-folder>/<bucket-name>/<uuid>/objectMetadata.json

Object ACL is serialized as XML and stored as objectAcl.xml

/<root-folder>/<bucket-name>/<uuid>/objectAcl.xml

Multipart Uploads

Multipart Uploads are created in a bucket using object keys and an uploadId.
The object is assigned a UUID within the bucket (stored in BucketMetadata).
The Multipart upload metadata is currently not stored on disk.

The parts folder is created below the object UUID folder named with the uploadId:

/<root-folder>/<bucket-name>/<uuid>/<uploadId>/

Each part is stored in the parts folder with the partNo as name and .part as a suffix.

/<root-folder>/<bucket-name>/<uuid>/<uploadId>/<partNo>.part

Build & Run

To build this project, you need Docker, JDK 17 or higher, and Maven:

./mvnw clean install

If you want to skip the Docker build, pass the optional parameter "skipDocker":

./mvnw clean install -DskipDocker

You can run the S3Mock from the sources by either of the following methods:

  • Run or Debug the class com.adobe.testing.s3mock.S3MockApplication in the IDE.
  • Using Docker:
    • ./mvnw clean package -pl server -am -DskipTests
    • docker run -p 9090:9090 -p 9191:9191 -t adobe/s3mock:latest
  • Using the Docker Maven plugin:
    • ./mvnw clean package docker:start -pl server -am -DskipTests -Ddocker.follow -Dit.s3mock.port_http=9090 -Dit.s3mock.port_https=9191 (stop with ctrl-c)

Once the application is started, you can execute the *IT tests from your IDE.

Java

This repo is built with Java 17; the output is currently bytecode-compatible with Java 17.

Kotlin

The Integration Tests are built in Kotlin.

Contributing

Contributions are welcome! Read the Contributing Guide for more information.

Licensing

This project is licensed under the Apache V2 License. See LICENSE for more information.

s3mock's People

Contributors

412b, adobe-bot, afranken, agudian, arnoturelinckx, arteam, chaithanyagk, chumper, dependabot[bot], flexfrank, hennejg, jaschygu, magro, mattelacchiato, mbenson, miguelscruz, rombert, sangupta, santthosh, sdavids13, serhiyverovka, shauryauppal-1mg, step-security-bot, sullis, sumitkharche, sveryovka, timoe, tombeck, vlsi, vpondala


s3mock's Issues

Update heading for repository

Current

A simple mock implementation of the AWS S3 API startable as Docker image or JUnit rule

Expected

A simple mock implementation of the AWS S3 API startable as Docker image, JUnit 4 rule, or JUnit Jupiter extension

Consul Auto Config Causes Failure

Hello Everyone. I am back again with another bug.

I am facing a similar issue as with the Security Auto Config. The project includes Consul as a key/value store. When the following dependency is included, the app tries to connect to Consul even in JUnit tests, which causes a failure. While the service does use Consul, JUnit tests should not be reliant on the service being up or down.

Parent:

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.0.1.RELEASE</version>
	</parent>

Dependency:

		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-starter-consul-config</artifactId>
			<version>2.0.0.RC1</version>
		</dependency>
		<dependency>
			<groupId>com.adobe.testing</groupId>
			<artifactId>s3mock-junit4</artifactId>
			<version>2.0.3</version>
			<scope>test</scope>
		</dependency>

Exception:

com.ecwid.consul.v1.OperationException: OperationException(statusCode=503, statusMessage='Service Unavailable: Back-end server is at capacity', statusContent='')
	at com.ecwid.consul.v1.kv.KeyValueConsulClient.getKVValues(KeyValueConsulClient.java:159)
	at com.ecwid.consul.v1.ConsulClient.getKVValues(ConsulClient.java:534)
	at org.springframework.cloud.consul.config.ConsulPropertySource.init(ConsulPropertySource.java:66)
	at org.springframework.cloud.consul.config.ConsulPropertySourceLocator.create(ConsulPropertySourceLocator.java:166)
	at org.springframework.cloud.consul.config.ConsulPropertySourceLocator.locate(ConsulPropertySourceLocator.java:132)
	at org.springframework.cloud.bootstrap.config.PropertySourceBootstrapConfiguration.initialize(PropertySourceBootstrapConfiguration.java:94)
	at org.springframework.boot.SpringApplication.applyInitializers(SpringApplication.java:633)
	at org.springframework.boot.SpringApplication.prepareContext(SpringApplication.java:373)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:325)
	at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:137)
	at com.adobe.testing.s3mock.S3MockApplication.start(S3MockApplication.java:177)
	at com.adobe.testing.s3mock.testsupport.common.S3MockStarter.start(S3MockStarter.java:130)
	at com.adobe.testing.s3mock.junit4.S3MockRule.access$000(S3MockRule.java:42)
	at com.adobe.testing.s3mock.junit4.S3MockRule$1.evaluate(S3MockRule.java:66)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:538)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:760)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:460)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:206)


Store objects using normalized file names to allow special chars

The S3 API allows for special characters in the object keys.

We don't yet support all special chars and the character / is interpreted as a directory delimiter (S3 doesn't do that).

I propose that we store all objects using a UUID file name and map the original object key to the UUID in a map.

Support @Nested for JUnit5

Hi Adobe-team,

when trying to use JUnit5 nested tests and the @ExtendWith method, the mock fails with the following error:

org.springframework.boot.web.embedded.tomcat.ConnectorStartFailedException: Connector configured to listen on port 8086 failed to start

Cause

In the log we can see that the mock tries to start a second time, when the @BeforeAll in the nested test class is called.

Possible solution

As a simple workaround, the number of accesses can be counted in the S3MockExtension, and the mock only started or stopped when the counter is 0.

Example:

import org.junit.jupiter.api.extension.ExtensionContext;

public class NestedS3MockExtension extends S3MockExtension {

    // Counts how many (possibly nested) test classes are currently using the mock.
    private int started;

    public NestedS3MockExtension() {
        super();
    }

    @Override
    public void beforeAll(ExtensionContext context) {
        // Only start the mock for the outermost test class.
        if (started == 0) {
            this.start();
        }
        started++;
    }

    @Override
    public void afterAll(ExtensionContext context) {
        started--;
        // Only stop the mock once the outermost test class is done.
        if (started == 0) {
            this.stop();
        }
    }
}

I do not know if there is a better solution for this using JUnit5 methods, but if you like I can create a PR with this addition.

Lexical sort results in corrupted files

In FileStore::completeMultipartUpload, when the parts are put together, they are sorted using lexical order. This leads to corrupted files if the number of parts is > 10:

    Arrays.sort(partNames);

This leads to (for example):
0.part
1.part
10.part
11.part
2.part
3.part
...
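
To illustrate, a small sketch contrasting lexical and numeric ordering of part names; the Comparator-based sort is one possible fix, not necessarily the patch the project applied:

import java.util.Arrays;
import java.util.Comparator;

public class PartSortSketch {
  public static void main(String[] args) {
    String[] partNames = {"0.part", "1.part", "2.part", "10.part", "11.part"};

    // Lexical sort (the bug): "10.part" and "11.part" sort before "2.part".
    Arrays.sort(partNames);
    System.out.println(Arrays.toString(partNames));

    // Sorting by the numeric part number restores the intended order.
    Arrays.sort(partNames, Comparator.comparingInt(
        name -> Integer.parseInt(name.substring(0, name.indexOf('.')))));
    System.out.println(Arrays.toString(partNames));
  }
}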

getS3Objects with prefix checks the whole object name against the prefix

Test Case:

  1. Given an s3Mock with a bucket ful'o'files and no directories

  2. The base case with a ListObjectsV2Request will list all the files in the bucket.

     fandom
    country~1
    fanciful~1
    fanciful~2
    fanciful~3
    fanciful~4
    bar~1
    bar~2
    bar~3
    bar~4
    bar~5
    biz
    bonkers
    
  3. However, if one is to use a withPrefix matching a part of a given object's name, such as f or bar~ or fanciful~:

    ListObjectsV2Request req = new ListObjectsV2Request()
        .withBucketName(bucketName)
        .withMaxKeys(50)
        .withPrefix(prefix);

    ListObjectsV2Result result = s3.listObjectsV2(req);
    
  4. You will get an empty list, whereas the real S3 client will return a non-zero number of objects.

The problem:

In the FileStore class we have:

return isEmpty(prefix) || (null != p && p.startsWith(prefix));

On UNIX for example, the path "foo/bar" starts with "foo" and "foo/bar". It does not start with "f" or "fo".

So this is trying to check if the given path p matches the whole prefix up to / whereas the actual S3 implementation is closer to p.toString().startsWith(prefix).
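
A small sketch demonstrating the mismatch: Path#startsWith matches whole path elements, while S3 prefixes are plain string prefixes:

import java.nio.file.Path;
import java.nio.file.Paths;

public class PrefixSketch {
  public static void main(String[] args) {
    Path p = Paths.get("foo/bar");
    System.out.println(p.startsWith("foo"));          // true:  whole name element
    System.out.println(p.startsWith("f"));            // false: not a complete element
    System.out.println(p.toString().startsWith("f")); // true:  S3-style string prefix
  }
}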

add support for xml responses?

First of all, thanks for putting such effort into this amazing project. We're using it internally in our company to run the integration tests and everything has been running as expected.

We're wondering if there's a plan to add support for XML responses soon. The reason is that in our particular case we're trying to write a test for when we get a "not found" from the getObject method, but it seems like s3mock returns a 404 without any XML response, which the AWS SDK doesn't understand, so it just returns a generic 406 code.

Thanks again.

/cc @lecocchi

NullPointerException at com.adobe.testing.s3mock.testsupport.common.S3MockStarter.getPort(S3MockStarter.java:95)

Hi, trying to follow the JUnit4 example through and tweak it enough to work with Cucumber & Spring Boot, on Java 10...

I want to be able to start up the S3Mock instance in the Cucumber @Before hook, but unfortunately I am getting a NullPointerException:

java.lang.NullPointerException
at com.adobe.testing.s3mock.testsupport.common.S3MockStarter.getPort(S3MockStarter.java:95)
at com.adobe.testing.s3mock.testsupport.common.S3MockStarter.createS3Client(S3MockStarter.java:89)
at com.elsevier.q2c.backupService.feature.support.Hooks.setUp(Hooks.java:34)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at cucumber.runtime.Utils$1.call(Utils.java:31)
at cucumber.runtime.Timeout.timeout(Timeout.java:16)
at cucumber.runtime.Utils.invoke(Utils.java:25)
at cucumber.runtime.java.JavaHookDefinition.execute(JavaHookDefinition.java:60)
at cucumber.runtime.HookDefinitionMatch.runStep(HookDefinitionMatch.java:17)
at cucumber.runner.UnskipableStep.executeStep(UnskipableStep.java:22)
at cucumber.api.TestStep.run(TestStep.java:83)
at cucumber.api.TestCase.run(TestCase.java:58)
at cucumber.runner.Runner.runPickle(Runner.java:80)
at cucumber.runtime.Runtime.runFeature(Runtime.java:119)
at cucumber.runtime.Runtime.run(Runtime.java:104)
at cucumber.api.cli.Main.run(Main.java:36)
at cucumber.api.cli.Main.main(Main.java:18)

In my Hook class I have:

public class Hooks {

  private final AWSCredentialsProvider awsCredentialsProvider;

  public static S3MockRule S3_MOCK_RULE = S3MockRule.builder().silent().build();

  @Autowired
  public Hooks(AWSCredentialsProvider awsCredentialsProvider) {
    this.awsCredentialsProvider = awsCredentialsProvider;
  }

  @Value("${aws.bucket}")
  private String bucket;

  @Before
  public void setUp() {
    AmazonS3 s3Mock = S3_MOCK_RULE.createS3Client("eu-west-1");
    s3Mock.createBucket(bucket);
  }
}

Is what I am trying to achieve possible, and if so where am I going wrong?

List Multipart Uploads not supported?

I'm using s3Client.listMultipartUploads to check for existing multipart uploads, but it always returns an empty list in getMultipartUploads. After a quick look in FileStoreController it seems that this is not supported, right? It would be great if this could be added, I might also try to submit a PR for this.

Support for "from offset" range (e.g., bytes=9500-)

Hi There,

The Range header with value bytes=9500- is a valid header, which represents "start from byte 9500 to the end of the file".

Currently the code fails to convert such a byte range and fails with an exception.

This should be an easy fix; would you mind if I fork, fix, and create a PR?

Regards,

Syed Farhan Ali

Split mock and JUnit rules into separate modules and add JUnit 5 support

Many users don't use the JUnit rule to start the service and to create the S3 client instance and don't want the dependency, or rather want proper JUnit 5 support.

To allow for that, split up the mock application and the JUnit related classes into separate modules, e.g.:

  • s3mock
  • s3mock-junit4 (depending on JUnit 4 and the s3mock fat-jar)
  • s3mock-junit5 (depending on JUnit 5 and the s3mock fat-jar)

This would be a breaking change for users of the JUnit rule, so I suggest doing that change with a major version bump.

Support signatureVersion v4 for s3 upload

When using upload() to upload a file (or stream) to the s3 mock, it always worked, but it did not store any content. When retrieving the stored file, it had Content-Length: 0.

To make it work with the s3mock, I have to add signatureVersion: 'v2' to the params when initializing the s3 aws client.

Not sure if that applies to other "upload stuff"-functions as well.

Compatibility with latest aws s3 standards (signature v4)

Spent a decent amount of time trying to get it to work with the official [email protected] library for nodejs.
Below I describe all the problems I encountered and how I worked around them.

  1. Started with docker-compose.yml and initialized my-bucket
  s3:
    image: adobe/s3mock
    ports:
      - 9090:9090
    command: --initialBuckets=my-bucket
  2. Created aws s3 client
const aws = require('aws-sdk');

const s3 = new aws.S3({
    endpoint: 'http://localhost:9090',
})

await s3.putObject({
    ContentType: 'image/png',
    Key: 'my-key/image.png',
    Body: imageBuffer, // The actual image buffer
    Bucket: 'my-bucket',
}).promise()
  3. After trying to upload my object to the bucket I would get the following error:
[error] message: The specified bucket does not exist., stack: NoSuchBucket: The specified bucket does not exist.

This error message was very confusing because the bucket was there: I could see it with listBuckets and I was able to upload to it with my Postman query. What I found out later is that the aws-sdk uses the subdomain bucket naming convention, which adobe/s3mock is not compatible with. I was able to fix it with the s3ForcePathStyle: true option on my s3 client configuration.

  4. The next error I encountered was
Internal Server Error, stack: 406: null

After a long investigation I found out that adobe/s3-mock is not compatible with the latest version of aws signatures, so I had to add signatureVersion: 'v3' to my s3 configuration.

  5. Final working configuration looked like
const aws = require('aws-sdk');

const s3 = new aws.S3({
    endpoint: 'http://localhost:9090',
    s3ForcePathStyle: true,
    signatureVersion: 'v3',
})

await s3.putObject({
    ContentType: 'image/png',
    Key: 'my-key/image.png',
    Body: imageBuffer, // The actual image buffer
    Bucket: 'my-bucket',
}).promise()

TLDR:
It's a great s3 mock server, but it is not compatible with the latest s3 standards:

  1. It does not support the subdomain bucket naming convention, which is the default in the aws sdk.
  2. It does not support v4 of signature generation, which is also the default in the aws sdk.

Deleting multiple objects fails with status code 415

Hey there,

I’m trying the S3Mock Docker image to automate testing of an Elixir application that uses S3.

I have noticed that there is one operation that consistently fails with S3Mock while it works fine with other implementations of the S3 API: Deleting multiple objects.

When I try deleting multiple objects, here is the response I get:

body: "",
headers: [
      {"Accept",
       "application/xml, application/x-www-form-urlencoded, application/octet-stream, text/plain, text/xml, application/*+xml, multipart/form-data, application/json, application/*+json, */*"},
      {"Content-Length", "0"},
      {"Date", "Thu, 19 Jul 2018 22:31:36 GMT"}
    ],
status_code: 415

This is the request body that was sent:

<?xml version="1.0" encoding="UTF-8"?><Delete><Object><Key>bar</Key></Object></Delete>

Nothing was logged to the Docker logs. Deleting the same file with a simple DELETE request works fine.

Is this functionality currently not supported?
I saw there is a BatchDeleteRequest class which I assume is intended for this, but I haven't dug any deeper yet.

limit spring application executors number

Upon application start, it initializes some executor threads, 1 for each vCore AFAIU.

If initialized with
S3MockApplication.start(props),

what key-value pair should be passed to configure the number of executors?

My use case is non parallel requests to S3.

Thanks

Spring Security on The Classpath results in 401

I am running the JUnit4 example given here:

https://github.com/adobe/S3Mock/blob/master/testsupport/junit4/src/test/java/com/adobe/testing/s3mock/junit4/S3MockRuleTest.java

I had a lot of dependencies in my project, so I had to go through each one, one by one, to narrow it down to this. Once the following dependency is in the POM, the exception below starts to show up. The test has not been modified at all.

The testing project parent:

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.0.1.RELEASE</version>
	</parent>

Dependency:

		<dependency>
			<groupId>org.springframework.security.oauth</groupId>
			<artifactId>spring-security-oauth2</artifactId>
			<version>2.0.8.RELEASE</version>
		</dependency>

JUnit Exception:

com.amazonaws.services.s3.model.AmazonS3Exception: (Service: Amazon S3; Status Code: 401; Error Code: 401 ; Request ID: null; S3 Extended Request ID: null), S3 Extended Request ID: null
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1632)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4365)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4312)
at com.amazonaws.services.s3.AmazonS3Client.createBucket(AmazonS3Client.java:1030)
at com.amazonaws.services.s3.AmazonS3Client.createBucket(AmazonS3Client.java:967)
at S3MockRuleTest.shouldUploadAndDownloadObject(S3MockRuleTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at com.adobe.testing.s3mock.junit4.S3MockRule$1.evaluate(S3MockRule.java:68)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:538)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:760)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:206)

Setting silent from the Docker command line?

I can set other properties when running S3Mock via Docker, but the silent parameter doesn't seem to be picked up?

$ docker run -p 9090:9090 -p 9191:9191 -t adobe/s3mock:latest --server.port=0 --initialBuckets=abc,def --silent=true
+------------------------------------------------------------------------------+
|             _______  ______    _______  _______  _______  _                  |
|            (  ____ \/ ___  \  (       )(  ___  )(  ____ \| \    /\           |
|            | (    \/\/   \  \ | () () || (   ) || (    \/|  \  / /           |
|            | (_____    ___) / | || || || |   | || |      |  (_/ /            |
|            (_____  )  (___ (  | |(_)| || |   | || |      |   _ (             |
|                  ) |      ) \ | |   | || |   | || |      |  ( \ \            |
|            /\____) |/\___/  / | )   ( || (___) || (____/\|  /  \ \           |
|            \_______)\______/  |/     \|(_______)(_______/|_/    \/           |
|                                                                              |
+------------------------------------------------------------------------------+

2018-06-01 01:14:25.760  INFO 1 --- [           main] c.a.testing.s3mock.S3MockApplication     : Starting S3MockApplication on b1e6221d7dca with PID 1 (/opt/service/s3mock-2.0.5.jar started by root in /opt/service)
2018-06-01 01:14:25.763  INFO 1 --- [           main] c.a.testing.s3mock.S3MockApplication     : No active profile set, falling back to default profiles: default
2018-06-01 01:14:25.794  INFO 1 --- [           main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@5b1d2887: startup date [Fri Jun 01 01:14:25 GMT 2018]; root of context hierarchy
2018-06-01 01:14:26.500  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 0 (https) 9090 (http)
...
2018-06-01 01:14:26.779  INFO 1 --- [           main] c.a.testing.s3mock.FileStoreController   : Creating initial buckets: [abc, def]
2018-06-01 01:14:26.779  INFO 1 --- [           main] c.a.testing.s3mock.FileStoreController   : Creating bucket: abc
2018-06-01 01:14:26.780  INFO 1 --- [           main] c.a.testing.s3mock.FileStoreController   : Creating bucket: def

The other parameters seem to be picked up, but not --silent=true. Is there another way I should be doing this?

Prepare to perform releases via Travis

In order to perform automated releases to Maven central and to Docker Hub, we need to implement a mechanism that allows us to run the release job on Travis-CI.

We could do something along the following:

  • use a special branch "release" on which Travis will:
    • invoke the mvn release... commands instead of the mvn clean install.
    • the Maven release job should not directly push the Git commits back, but will only commit the version changes in the pom.
    • after the deploy to oss.sonatype.org and the push to Docker Hub are successful, we programmatically release the staging repository on oss.sonatype.org. That can all be part of the mvn release build and just needs the proper plugin configurations.
    • when everything's fine, let the adobe-bot account push the modifications back to the release branch. And maybe directly open a PR to merge that back to master, if we're fancy.

`aws cp` does not work with recent versions

I'm trying to use S3Mock as part of my development environment and I sometimes have to use the AWS CLI (which might not be the intended use case).

aws s3 cp myFile.txt s3://bucketName/myFile.txt --endpoint-url http://localhost:9090 works like a charm with version 1.11.13 of aws-cli (the one that comes from the Ubuntu 16.04 repositories), but if I try to run the command on version 1.14.44 (the one that comes with Ubuntu 18.04) or newer, I get (from the S3Mock console):

2018-07-19 01:00:27.354 ERROR 1 --- [nio-9090-exec-2] c.adobe.testing.s3mock.domain.FileStore  : Wasn't able to store file on disk!

java.io.EOFException: Unexpected EOF read on the socket
	at org.apache.coyote.http11.Http11InputBuffer.fill(Http11InputBuffer.java:722) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.http11.Http11InputBuffer.access$300(Http11InputBuffer.java:40) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.http11.Http11InputBuffer$SocketInputBuffer.doRead(Http11InputBuffer.java:1072) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.http11.filters.IdentityInputFilter.doRead(IdentityInputFilter.java:140) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.http11.Http11InputBuffer.doRead(Http11InputBuffer.java:261) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.Request.doRead(Request.java:581) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:326) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:642) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.connector.InputBuffer.readByte(InputBuffer.java:337) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:93) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at com.adobe.testing.s3mock.util.AwsChunkDecodingInputStream.readUntil(AwsChunkDecodingInputStream.java:109) ~[classes!/:na]
	at com.adobe.testing.s3mock.util.AwsChunkDecodingInputStream.read(AwsChunkDecodingInputStream.java:72) ~[classes!/:na]
	at java.io.InputStream.read(InputStream.java:170) ~[na:1.8.0_151]
	at java.io.InputStream.read(InputStream.java:101) ~[na:1.8.0_151]
	at com.adobe.testing.s3mock.domain.FileStore.inputStreamToFile(FileStore.java:437) [classes!/:na]
	at com.adobe.testing.s3mock.domain.FileStore.putS3Object(FileStore.java:248) [classes!/:na]
	at com.adobe.testing.s3mock.FileStoreController.putObject(FileStoreController.java:284) [classes!/:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_151]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_151]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_151]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_151]
	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:209) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:877) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:783) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:991) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:925) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:974) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.FrameworkServlet.doPut(FrameworkServlet.java:888) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:664) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:851) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:742) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) [tomcat-embed-websocket-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at com.adobe.testing.s3mock.KmsValidationFilter.doFilterInternal(KmsValidationFilter.java:87) [classes!/:na]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:101) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:81) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:496) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:803) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:790) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1459) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_151]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_151]
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]

And from the CLI I get:

upload failed: ./myFile.txt to s3://bucketName/myFile.txt Connection was closed before we received a valid response from endpoint URL: "http://localhost:9090/bucketName/myFile.txt".

Any suggestions?

Support for CORS headers

Hello!

I'm trying to use this to test a browser-based uploader. I get issues related to CORS. Is there anyway to set the cross origin headers with S3Mock ?

Build error when omitting test as dependency scope

I'm trying to use S3 mock in a spring boot application that uses TLS. I want to start it programmatically and so I've included this in my pom.xml per the readme:

<dependency>
  <groupId>com.adobe.testing</groupId>
  <artifactId>s3mock</artifactId>
  <version>...</version>
</dependency>

But when I start the application, I'm getting a "ConnectorStartFailedException" with the message "Connector configured to listen on port 8443 failed to start."

Anyone know how I might address this? Is s3mock starting a mock server on the same port 8443 as my application?

S3Mock Configuration Options

We had two requests regarding configuring S3Mock: usage of the silent parameter (#76) and Tomcat's number of threads (#75). Today there are some options to configure those, but no unified way to choose them: silent can only be configured programmatically, Tomcat threads can be set via VM argument.

Let's introduce a way to harmonise and open up the configuration options. When done, users can pass their config via environment variable, via VM argument, and via the command line. This includes documentation of the existing configuration options.

s3mock 2.1.0 fails to start

While s3mock 2.0.11 works well in our tests, updating to 2.1.0 lets the s3mock startup fail with

09:15:27.511 INFO  o.s.boot.SpringApplication - Starting application on mescalin with PID 377 (started by magro in /path/to/project)
09:15:27.512 INFO  o.s.boot.SpringApplication - No active profile set, falling back to default profiles: default
09:15:28.324 WARN  o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.support.BeanDefinitionOverrideException: Invalid bean definition with name 'httpRequestHandlerAdapter' defined in class path resource [org/springframework/data/rest/webmvc/config/RepositoryRestMvcConfiguration.class]: Cannot register bean definition [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.data.rest.webmvc.config.RepositoryRestMvcConfiguration; factoryMethodName=httpRequestHandlerAdapter; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/data/rest/webmvc/config/RepositoryRestMvcConfiguration.class]] for bean 'httpRequestHandlerAdapter': There is already [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.servlet.WebMvcAutoConfiguration$EnableWebMvcConfiguration; factoryMethodName=httpRequestHandlerAdapter; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/servlet/WebMvcAutoConfiguration$EnableWebMvcConfiguration.class]] bound.
09:15:28.334 INFO  o.s.b.a.l.ConditionEvaluationReportLoggingListener - 

Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
09:15:28.336 ERROR o.s.b.d.LoggingFailureAnalysisReporter - 

***************************
APPLICATION FAILED TO START
***************************

Description:

The bean 'httpRequestHandlerAdapter', defined in class path resource [org/springframework/data/rest/webmvc/config/RepositoryRestMvcConfiguration.class], could not be registered. A bean with that name has already been defined in class path resource [org/springframework/boot/autoconfigure/web/servlet/WebMvcAutoConfiguration$EnableWebMvcConfiguration.class] and overriding is disabled.

Action:

Consider renaming one of the beans or enabling overriding by setting spring.main.allow-bean-definition-overriding=true

For us 2.0.11 is sufficient right now (i.e. it's not a problem for us), but I still wanted to let you know about this. If you are sure that things are working and there's evidence that it's just a classpath issue on our side, you can also just close this ticket as invalid.

Multiple spring boot servers in class path?

This is more of a question. I'm trying to use s3mock via the JUnit rule method. While running the sample from this repository I can see that the server is started; when I try to do the same in my project it doesn't start s3mock, but instead tries to run my service. I am not sure how to configure it so that it knows to start s3mock instead of my service.

Any ideas?

Thanks

Wrong alias for bucket name field in InitiateMultipartUploadResult

Mock's InitiateMultipartUpload endpoint returns the bucket name within a <Bucketname> element (because of the alias defined in InitiateMultipartUploadResult), whereas the S3 documentation specifies that the bucket name should be the content of a <Bucket> element.

Example request:

POST /test/123?uploads= HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 0
Host: localhost:9090
User-Agent: HTTPie/0.9.8


HTTP/1.1 200 
Content-Type: application/x-www-form-urlencoded
Date: Fri, 14 Sep 2018 09:54:30 GMT
Transfer-Encoding: chunked

<InitiateMultipartUploadResult><Bucketname>test</Bucketname><Key>123</Key><UploadId>f01174e3-5fe8-4a76-8eb8-bc73efc2a919</UploadId></InitiateMultipartUploadResult>
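
Per the S3 documentation, the same response should use a <Bucket> element instead, i.e.:

<InitiateMultipartUploadResult><Bucket>test</Bucket><Key>123</Key><UploadId>f01174e3-5fe8-4a76-8eb8-bc73efc2a919</UploadId></InitiateMultipartUploadResult>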

Avoid logging StackTraces when sending error responses

We should avoid letting the default DispatcherServlet log the ridiculously long stack traces when we return error responses.

Perhaps using ResponseEntityExceptionHandler instead of HandlerExceptionResolver could help there.
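
A minimal sketch of that idea, assuming a plain Spring @ControllerAdvice (the exception type below is a placeholder, not S3Mock's actual error class):

import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler;

@ControllerAdvice
public class S3ErrorAdvice extends ResponseEntityExceptionHandler {

    // Handling the exception here turns it into a regular response, so it
    // never reaches the servlet container's default stack-trace logging.
    @ExceptionHandler(IllegalArgumentException.class) // placeholder exception type
    public ResponseEntity<String> handleS3Error(IllegalArgumentException e) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST)
            .contentType(MediaType.APPLICATION_XML)
            .body("<Error><Message>" + e.getMessage() + "</Message></Error>");
    }
}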

[Feature] adding option to configure FileStore root directory

Hi

I am using S3Mock for my integration tests. Sometimes on my workplace VM or in the test environment, housekeeping jobs delete all contents of the /tmp folder. It would be a good feature if we could configure the FileStore root directory instead of creating it in the default constructor.

Thanks,

Sujith
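
A rough sketch of how such an option could look when starting the server programmatically (the "--root" property name is an assumption, not a documented option):

import com.adobe.testing.s3mock.S3MockApplication;

public class StartWithCustomRoot {
    public static void main(String[] args) {
        // "--root=..." is a hypothetical property directing the FileStore to a
        // directory that survives /tmp housekeeping.
        S3MockApplication mock = S3MockApplication.start("--root=/var/data/s3mock");
        // ... run tests against the mock, then shut it down:
        mock.stop();
    }
}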

Error using SSL endpoint

When using the default AWS S3 client created by S3MockRule.createS3Client(), I get an exception when trying to use the client. It doesn't matter which operation I try; the root exception is the same. Stack trace:

com.amazonaws.SdkClientException: Unable to execute HTTP request: Unrecognized SSL message, plaintext connection?

	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1116)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1066)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4368)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4315)
	at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1758)
	at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1613)
	at com.example.messaging.RouteTest.testTransform(RouteTest.java:126)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.springframework.test.context.junit4.statements.RunBeforeTestExecutionCallbacks.evaluate(RunBeforeTestExecutionCallbacks.java:73)
	at org.springframework.test.context.junit4.statements.RunAfterTestExecutionCallbacks.evaluate(RunAfterTestExecutionCallbacks.java:83)
	at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:75)
	at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:86)
	at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:84)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:251)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:97)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
	at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70)
	at com.adobe.testing.s3mock.junit4.S3MockRule$1.evaluate(S3MockRule.java:66)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:190)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
	at sun.security.ssl.InputRecord.handleUnknownRecord(InputRecord.java:710)
	at sun.security.ssl.InputRecord.read(InputRecord.java:527)
	at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
	at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
	at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
	at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
	at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:396)
	at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355)
	at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
	at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:373)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
	at com.amazonaws.http.conn.$Proxy119.connect(Unknown Source)
	at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381)
	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
	at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1238)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
	... 43 more

Switching to creating an S3 client like so:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

final BasicAWSCredentials credentials = new BasicAWSCredentials("foo", "bar");

return AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .withClientConfiguration(
                S3_MOCK_RULE.configureClientToIgnoreInvalidSslCertificates(new ClientConfiguration()))
        .withEndpointConfiguration(
                // http scheme here, unlike the https used by S3MockRule.createS3Client()
                new AwsClientBuilder.EndpointConfiguration("http://localhost:" + S3_MOCK_RULE.getPort(), "us-east-1"))
        .enablePathStyleAccess()
        .build();

Fixes the issue. The only change I made from the S3MockRule.createS3Client() implementation is using http instead of https. It doesn't matter whether I use S3MockRule.getPort() or S3MockRule.getHttpPort(); both work as long as the scheme is http.

Question - MultiPartUpload response content type

Hello,

This is more of a question, I am trying to figure out where is the gap.

We are using Scala as our programming language; for reading from and writing to S3 we use Alpakka, which provides us with streaming.
We use multipart upload, which works fine directly against Amazon S3, but against S3Mock we get the following error:

Unsupported Content-Type, supported: application/xml, application/octet-stream

We also tried another S3 mock, and it works fine as well.

Any idea what is going on?

Your response will be appreciated.

Regards,

Syed

objectMetadata().getLastModified is null

When working with S3Object.getObjectMetadata().getLastModified(), it returns null (unlike when doing this against real AWS). It would be best if it returned at least the last-modified date from the filesystem.
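
A sketch of how the value could be derived from the backing file, assuming a java.nio-based store (the path below is hypothetical):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class LastModifiedFromBackingFile {
    public static void main(String[] args) throws IOException {
        // Hypothetical location of an object's data file in the mock's FileStore.
        Path stored = Paths.get("/tmp/s3mock/some-bucket/some-key/binaryData");
        // Format the file's mtime as an RFC 1123 date, the format S3 uses for
        // the Last-Modified header.
        String lastModified = DateTimeFormatter.RFC_1123_DATE_TIME
            .withZone(ZoneId.of("GMT"))
            .format(Files.getLastModifiedTime(stored).toInstant());
        System.out.println(lastModified);
    }
}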

Continuation token invalidated by delete.

In case the bucket has more than 1000 objects and I would like to delete them, I list the objects batchwise using nextMarker (v1) / nextContinuationToken (v2). So I request a batch, delete the batch, request the next batch using the next marker/token, delete it, and so on.

The problem I face with adobe/S3Mock is that the continuation token specifies an offset into the bucket's current object list, as seen here. Obviously, if some items before the marker are deleted, the itemsToSkip value mapped to the marker/continuation token is invalidated.

A correct implementation should return the items starting right after the marker in S3 sort order.
See marker and continuation token.

I hacked a solution that works for me here. If a solution based on sorting and mapping the token to a key would be fine, I would be able to submit a PR.
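
A minimal sketch of that key-based approach (a hypothetical helper, not S3Mock's actual code):

import java.util.List;
import java.util.TreeSet;
import java.util.stream.Collectors;

public class MarkerBasedListing {
    // Returns up to maxKeys keys strictly after the marker in lexicographic
    // (S3) order. Deleting objects elsewhere in the bucket cannot invalidate
    // the marker, because it names a key rather than a position.
    static List<String> pageAfter(TreeSet<String> keys, String marker, int maxKeys) {
        return keys.tailSet(marker == null ? "" : marker, false).stream()
            .limit(maxKeys)
            .collect(Collectors.toList());
    }
}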

PutObject returns "201 Created" rather than "200 OK"

Hi,

First of all, thank you for coming up with this project, let alone making it publicly available. I had been looking for a replacement for fake-s3, which we had used internally for years, when I found this repo. We were looking for an alternative with a fully implemented multipart upload API and voila, this project has it! Good job!

Now the question: I see that the PUT Object API currently returns a 201 Created response code, but Amazon S3 (as well as other S3-like alternatives that we've used in production) returns a 200 OK. Is this something that slipped in by mistake, or is it a valid response code?
Nothing in the public doc https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html , other than the examples provided, mentions the expected response code, hence my question; all I know is that other implementations, including fake-s3, return a 200 in case of success.

If it is expected, feel free to close this issue.

Thank you.

Last-Modified header missing from blob response

This causes the following failure with JClouds 2.0.3:

org.jclouds.http.HttpException: Last-Modified header not present in response: {statusCode=200, headers={Accept-Ranges=[bytes], Access-Control-Allow-Origin=[*], ETag=["d41d8cd98f00b204e9800998ecf8427e"], Date=[Tue, 13 Feb 2018 13:40:31 GMT]}, payload=[content=true, contentMetadata=[cacheControl=null, contentDisposition=null, contentEncoding=null, contentLanguage=null, contentLength=0, contentMD5=null, contentType=application/unknown, expires=null], written=false, isSensitive=false]}
at org.jclouds.blobstore.functions.ParseSystemAndUserMetadataFromHeaders.parseLastModifiedOrThrowException(ParseSystemAndUserMetadataFromHeaders.java:92)
at org.jclouds.blobstore.functions.ParseSystemAndUserMetadataFromHeaders.apply(ParseSystemAndUserMetadataFromHeaders.java:72)
at org.jclouds.s3.functions.ParseObjectMetadataFromHeaders.apply(ParseObjectMetadataFromHeaders.java:61)
at org.jclouds.s3.functions.ParseObjectFromHeadersAndHttpContent.apply(ParseObjectFromHeadersAndHttpContent.java:48)
at org.jclouds.s3.functions.ParseObjectFromHeadersAndHttpContent.apply(ParseObjectFromHeadersAndHttpContent.java:34)
at org.jclouds.rest.internal.InvokeHttpMethod.invoke(InvokeHttpMethod.java:90)
at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:73)
at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:44)
at org.jclouds.rest.internal.DelegatesToInvocationFunction.handle(DelegatesToInvocationFunction.java:156)
at org.jclouds.rest.internal.DelegatesToInvocationFunction.invoke(DelegatesToInvocationFunction.java:123)
at com.sun.proxy.$Proxy155.getObject(Unknown Source)
at org.jclouds.s3.blobstore.S3BlobStore.getBlob(S3BlobStore.java:235)
at org.jclouds.blobstore.internal.BaseBlobStore.getBlob(BaseBlobStore.java:217)

Should sync path fragment

Hi all,

First of all, thank you for the fix of issue #8.

I'm using S3Mock, which I find very convenient for integration tests, and I have a new case that fails:

import java.io.File;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

@Rule
public TemporaryFolder folder = new TemporaryFolder();

@Test
public void shouldSyncPathFragment() {
  final File uploadFile = new File(UPLOAD_FILE_NAME);

  s3Client.createBucket(BUCKET_NAME);
  s3Client.putObject(new PutObjectRequest(BUCKET_NAME, UPLOAD_FILE_NAME, uploadFile));

  final TransferManager tm = createDefaultTransferManager();
  tm.downloadDirectory(BUCKET_NAME, "src", folder.getRoot());
  tm.shutdownNow(false);

  // File(parent, child) resolves the key relative to the temporary folder
  // instead of concatenating the two paths into one string.
  assertThat(new File(folder.getRoot(), UPLOAD_FILE_NAME).exists(), is(true));
}

That is, when objects are stored under a/s3/path, downloading a key should act like an rsync with the remote directory. That case works against a real S3 remote. With the mock I get a 404.

EDIT: using downloadDirectory instead of the download method, there is no 404, but the assertion fails.

Is this a new feature for S3Mock?

List objects v1 does not honor max keys

The S3 mock implementation does not seem to honor the max-keys/maxKeys parameter when listing objects using the V1 API.

Below is Java code that demonstrates the problem (using JUnit 5).

package somepackage;

import com.adobe.testing.s3mock.junit5.S3MockExtension;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(S3MockExtension.class)
class SomeTestClass {
    @Test
    void someTest(AmazonS3 s3) {
        String bucketName = "some-bucket";
        s3.createBucket(bucketName);
        s3.putObject(bucketName, "a", "");
        s3.putObject(bucketName, "b", "");

        ListObjectsRequest request = new ListObjectsRequest().withBucketName(bucketName).withMaxKeys(1);
        ObjectListing objectListing = s3.listObjects(request);

        // This assertion fails. listObjects returns 2 objects instead of 1.
        assertEquals(1, objectListing.getObjectSummaries().size());
    }
}

Can't putObject with path as a key

Hi all,

I can't do a putObject with a path as the key:

import java.io.File;
import org.junit.Test;
import com.amazonaws.services.s3.model.PutObjectRequest;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

@Test
public void shouldUploadObjectWithAPath() throws Exception {
  final File uploadFile = new File(UPLOAD_FILE_NAME);

  s3Client.createBucket(BUCKET_NAME);
  s3Client.putObject(new PutObjectRequest(BUCKET_NAME, UPLOAD_FILE_NAME, uploadFile));

  assertThat(s3Client.doesObjectExist(BUCKET_NAME, UPLOAD_FILE_NAME), is(true));
}

I get a 406 Not Acceptable response code.
When I test this against the real S3, it creates the object under the src/test/resources/ path.

Am I missing something?
