s3mock's Issues

Consul Auto Config Causes Failure

Hello Everyone. I am back again with another bug.

I am facing a similar issue to the one with the Security Auto Config. The project includes Consul as a key/value store. When the following dependency is included, the app tries to connect to Consul even in JUnit tests, which causes a failure. While the service does use Consul, the JUnit tests should not depend on the service being up or down.

Parent:

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.0.1.RELEASE</version>
	</parent>

Dependency:

		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-starter-consul-config</artifactId>
			<version>2.0.0.RC1</version>
		</dependency>
		<dependency>
			<groupId>com.adobe.testing</groupId>
			<artifactId>s3mock-junit4</artifactId>
			<version>2.0.3</version>
			<scope>test</scope>
		</dependency>

Exception:

com.ecwid.consul.v1.OperationException: OperationException(statusCode=503, statusMessage='Service Unavailable: Back-end server is at capacity', statusContent='')
	at com.ecwid.consul.v1.kv.KeyValueConsulClient.getKVValues(KeyValueConsulClient.java:159)
	at com.ecwid.consul.v1.ConsulClient.getKVValues(ConsulClient.java:534)
	at org.springframework.cloud.consul.config.ConsulPropertySource.init(ConsulPropertySource.java:66)
	at org.springframework.cloud.consul.config.ConsulPropertySourceLocator.create(ConsulPropertySourceLocator.java:166)
	at org.springframework.cloud.consul.config.ConsulPropertySourceLocator.locate(ConsulPropertySourceLocator.java:132)
	at org.springframework.cloud.bootstrap.config.PropertySourceBootstrapConfiguration.initialize(PropertySourceBootstrapConfiguration.java:94)
	at org.springframework.boot.SpringApplication.applyInitializers(SpringApplication.java:633)
	at org.springframework.boot.SpringApplication.prepareContext(SpringApplication.java:373)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:325)
	at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:137)
	at com.adobe.testing.s3mock.S3MockApplication.start(S3MockApplication.java:177)
	at com.adobe.testing.s3mock.testsupport.common.S3MockStarter.start(S3MockStarter.java:130)
	at com.adobe.testing.s3mock.junit4.S3MockRule.access$000(S3MockRule.java:42)
	at com.adobe.testing.s3mock.junit4.S3MockRule$1.evaluate(S3MockRule.java:66)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:538)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:760)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:460)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:206)
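For anyone hitting this, a minimal workaround sketch (not part of S3Mock): switch off Spring Cloud Consul for the test JVM before any Spring context boots. spring.cloud.consul.enabled is the standard Spring Cloud Consul toggle; whether your version honours it during the bootstrap phase is worth verifying.

import com.adobe.testing.s3mock.junit4.S3MockRule;
import org.junit.ClassRule;

public class SomeS3Test {

  // Runs before the rule boots the mock's embedded Spring context.
  static {
    System.setProperty("spring.cloud.consul.enabled", "false");
  }

  @ClassRule
  public static final S3MockRule S3_MOCK_RULE = S3MockRule.builder().build();
}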


[Feature] adding option to configure FileStore root directory

Hi

I am using S3Mock for my integration tests. Sometimes on my workplace VM or in the test environment, housekeeping jobs delete all contents of the /tmp folder. It would be a good feature if we could configure the FileStore root directory instead of having it created in the default constructor.

Thanks,

Sujith
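A sketch of what this could look like from the caller's side, assuming a hypothetical "root" property that the FileStore would honour (the property name is illustrative only; the Map-taking start overload is mentioned elsewhere on this page):

import java.util.HashMap;
import java.util.Map;

import com.adobe.testing.s3mock.S3MockApplication;

class CustomRootExample {

  static S3MockApplication startWithCustomRoot() {
    Map<String, Object> props = new HashMap<>();
    // "root" is a hypothetical property name for this feature request.
    props.put("root", "/var/data/s3mock");
    return S3MockApplication.start(props);
  }
}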

Build error when omitting test as dependency scope

I'm trying to use S3 mock in a spring boot application that uses TLS. I want to start it programmatically and so I've included this in my pom.xml per the readme:

<dependency>
  <groupId>com.adobe.testing</groupId>
  <artifactId>s3mock</artifactId>
  <version>...</version>
</dependency>

But when I start the application, I'm getting a "ConnectorStartFailedException" with the message "Connector configured to listen on port 8443 failed to start."

Anyone know how I might address this? Is s3mock starting a mock server on the same port 8443 as my application?
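Probably, yes: when started programmatically, the mock's embedded Spring context can pick up the surrounding application's server.port. A sketch that pins the mock to free random ports instead (both property names are assumptions to verify against your version):

import java.util.HashMap;
import java.util.Map;

import com.adobe.testing.s3mock.S3MockApplication;

class RandomPortExample {

  static S3MockApplication startOnFreePorts() {
    Map<String, Object> props = new HashMap<>();
    props.put("server.port", 0); // assumed HTTPS connector; 0 = pick a free port
    props.put("http.port", 0);   // assumed name of the HTTP connector property
    return S3MockApplication.start(props);
  }
}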

add support for xml responses?

First of all, thanks for putting such effort into this amazing project. We're using it internally in our company to run integration tests, and everything has been running as expected.

We're wondering if there's a plan to add support for XML error responses soon. In our particular case we're trying to write a test for the not-found case of the getObject method, but it seems S3Mock returns a 404 without any XML body, which the AWS SDK doesn't understand, so it surfaces a generic 406 code instead.
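For reference, this is roughly the error document real S3 sends for a missing key (per the documented S3 error response format), which is what the SDK expects to parse (values illustrative):

HTTP/1.1 404 Not Found
Content-Type: application/xml

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchKey</Code>
  <Message>The specified key does not exist.</Message>
  <Key>some-key</Key>
  <RequestId>...</RequestId>
</Error>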

Thanks again.

/cc @lecocchi

Split mock and JUnit rules into separate modules and add JUnit 5 support

Many users don't use the JUnit rule to start the service and create the S3 client instance and don't want the dependency, or would rather have proper JUnit 5 support.

To allow for that, split up the mock application and the JUnit related classes into separate modules, e.g.:

  • s3mock
  • s3mock-junit4 (depending on JUnit 4 and the s3mock fat JAR)
  • s3mock-junit5 (depending on JUnit 5 and the s3mock fat JAR)

This would be a breaking change for users of the JUnit rule, so I suggest making it with a major version bump.

Error using ssl endpoint

When using the default AWS S3 client created by S3MockRule.createS3Client(), I'm getting an exception when trying to use the client. It doesn't matter which operation I try; the root exception is the same. Stack trace:

com.amazonaws.SdkClientException: Unable to execute HTTP request: Unrecognized SSL message, plaintext connection?

	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1116)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1066)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4368)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4315)
	at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1758)
	at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1613)
	at com.example.messaging.RouteTest.testTransform(RouteTest.java:126)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.springframework.test.context.junit4.statements.RunBeforeTestExecutionCallbacks.evaluate(RunBeforeTestExecutionCallbacks.java:73)
	at org.springframework.test.context.junit4.statements.RunAfterTestExecutionCallbacks.evaluate(RunAfterTestExecutionCallbacks.java:83)
	at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:75)
	at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:86)
	at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:84)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:251)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:97)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
	at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70)
	at com.adobe.testing.s3mock.junit4.S3MockRule$1.evaluate(S3MockRule.java:66)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:190)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
	at sun.security.ssl.InputRecord.handleUnknownRecord(InputRecord.java:710)
	at sun.security.ssl.InputRecord.read(InputRecord.java:527)
	at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
	at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
	at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
	at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
	at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:396)
	at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355)
	at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
	at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:373)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
	at com.amazonaws.http.conn.$Proxy119.connect(Unknown Source)
	at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381)
	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
	at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1238)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
	... 43 more

Switching to creating an S3 client like so:

final BasicAWSCredentials credentials = new BasicAWSCredentials("foo", "bar");

return AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .withClientConfiguration(
                S3_MOCK_RULE.configureClientToIgnoreInvalidSslCertificates(new ClientConfiguration()))
        .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration("http://localhost:" + S3_MOCK_RULE.getPort(), "us-east-1"))
        .enablePathStyleAccess()
        .build();

Fixes the issue. The only change I made from the S3MockRule.createS3Client() implementation is using http instead of https. It doesn't matter if I use S3MockRule.getPort() or S3MockRule.getHttpPort(); both work as long as the scheme is http.

Prepare to perform releases via Travis

In order to perform automated releases to Maven central and to Docker Hub, we need to implement a mechanism that allows us to run the release job on Travis-CI.

We could do something along the following lines:

  • use a special branch "release" on which Travis will:
    • invoke the mvn release... commands instead of the mvn clean install.
    • the Maven release job should not push the Git commits back directly, but only commit the version changes in the POM.
    • after the deploy to oss.sonatype.org and the push to Docker Hub are successful, we programmatically release the staging repository on oss.sonatype.org. That can all be part of the mvn release build and just needs the proper plugin configurations.
    • when everything's fine, let the adobe-bot account push the modifications back to the release branch. And maybe directly open a PR to merge that back to master, if we're fancy.

Avoid logging StackTraces when sending error responses

We should avoid letting the default dispatcher servlet log the ridiculously long stack traces when we return error responses.

Perhaps using ResponseEntityExceptionHandler instead of HandlerExceptionResolver could help there.
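A minimal sketch of that direction, assuming we map the mock's exceptions ourselves (the exception type and error body here are placeholders, not the mock's actual classes):

import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
class S3ErrorAdvice {

  // Handle the exception here so the dispatcher servlet never sees it
  // and therefore never logs the stack trace.
  @ExceptionHandler(IllegalArgumentException.class) // placeholder type
  ResponseEntity<String> toErrorResponse(IllegalArgumentException e) {
    String body = "<Error><Code>InvalidRequest</Code><Message>"
        + e.getMessage() + "</Message></Error>";
    return ResponseEntity.status(HttpStatus.BAD_REQUEST)
        .contentType(MediaType.APPLICATION_XML)
        .body(body);
  }
}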

Support for "from offset" range (e.g., bytes=9500-)

Hi There,

The Range header value bytes=9500- is valid; it means "start at byte 9500 and read to the end of the file".

Currently the code fails to parse such a byte range and throws an exception.

This should be an easy fix; would you mind if I fork, fix, and create a PR?
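A sketch of the parsing (names illustrative): an empty end bound means "up to the last byte".

// Parses "bytes=<start>-" and "bytes=<start>-<end>" range specs.
static long[] parseRange(String header, long objectLength) {
  String spec = header.substring("bytes=".length());
  int dash = spec.indexOf('-');
  long start = Long.parseLong(spec.substring(0, dash));
  String endPart = spec.substring(dash + 1);
  long end = endPart.isEmpty() ? objectLength - 1 : Long.parseLong(endPart);
  return new long[] {start, end};
}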

Regards,

Syed Farhan Ali

List objects v1 does not honor max keys

The S3 mock implementation does not seem to honor the max-keys/maxKeys parameter when listing objects using the V1 API.

Below is Java code that demonstrates the problem (using JUnit 5).

package somepackage;

import com.adobe.testing.s3mock.junit5.S3MockExtension;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(S3MockExtension.class)
class SomeTestClass {
    @Test
    void someTest(AmazonS3 s3) {
        String bucketName = "some-bucket";
        s3.createBucket(bucketName);
        s3.putObject(bucketName, "a", "");
        s3.putObject(bucketName, "b", "");

        ListObjectsRequest request = new ListObjectsRequest().withBucketName(bucketName).withMaxKeys(1);
        ObjectListing objectListing = s3.listObjects(request);

        // This assertion fails. listObjects returns 2 objects instead of 1.
        assertEquals(1, objectListing.getObjectSummaries().size());
    }
}

objectMetadata().getLastModified is null

When working with S3Object.getObjectMetadata().getLastModified(), it returns null (whereas it is populated when going through real AWS). It would be best if it returned at least the last-modified date from the file system.

Can't putObject with path as a key

Hi all,

I can't do a putObject with a path as the key:

  @Test
  public void shouldUploadObjectWithAPath() throws Exception {
        final File uploadFile = new File(UPLOAD_FILE_NAME);

        s3Client.createBucket(BUCKET_NAME);
        s3Client.putObject(new PutObjectRequest(BUCKET_NAME, UPLOAD_FILE_NAME, uploadFile));

        assertThat(s3Client.doesObjectExist(BUCKET_NAME, UPLOAD_FILE_NAME), is(true));
  }

I get a 406 Not Acceptable code.
When I test this against the real S3, it creates a file under the src/test/resources/ path.

Am I missing something?

S3Mock Configuration Options

We had two requests regarding configuring S3Mock: usage of the silent parameter (#76) and Tomcat's number of threads (#75). Today there are some options to configure those, but there's no uniform way to choose them: silent can only be configured programmatically, while the Tomcat threads can be set via a VM argument.

Let's introduce a way to harmonise and open up the configuration options. Once done, users can pass their config via environment variable, VM argument, or command line. This includes documenting the existing configuration options.

Question - MultiPartUpload response content type

Hello,

This is more of a question, I am trying to figure out where is the gap.

We are using Scala as the programming language; for reading from and writing to S3 we are using Alpakka, which provides streaming.
We are using multipart upload, which works fine directly against Amazon S3, but against S3Mock we get the following error:

Unsupported Content-Type, supported: application/xml, application/octet-stream

We also tried another S3 mock and it works fine there as well.

Any idea what is going on?

Your response will be appreciated.

Regards,

Syed

Store objects using normalized file names to allow special chars

The S3 API allows for special characters in the object keys.

We don't yet support all special characters, and the character / is interpreted as a directory delimiter (S3 doesn't do that).

I propose that we store all objects under a UUID file name and map the original object key to that UUID in a map.
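A sketch of the idea (all names illustrative): the original key is only ever a map key, so slashes and other special characters never reach the file system.

import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

class KeyToFileMapping {

  private final Map<String, UUID> keyToFile = new ConcurrentHashMap<>();
  private final Path bucketRoot = Paths.get("/tmp/s3mock/some-bucket");

  // Returns the data file for an object key, assigning a UUID-named
  // file on first use.
  Path fileFor(String objectKey) {
    UUID id = keyToFile.computeIfAbsent(objectKey, k -> UUID.randomUUID());
    return bucketRoot.resolve(id.toString());
  }
}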

Support signatureVersion v4 for s3 upload

When using upload() to upload a file (or stream) to the S3 mock, the call always succeeded, but no content was stored: the retrieved file had Content-Length: 0.

To make it work with S3Mock, I have to add signatureVersion: 'v2' to the params when initializing the AWS S3 client.

Not sure if that applies to other upload-style functions as well.

Last-Modified header missing from blob response

This causes the following failure with jclouds 2.0.3:

org.jclouds.http.HttpException: Last-Modified header not present in response: {statusCode=200, headers={Accept-Ranges=[bytes], Access-Control-Allow-Origin=[*], ETag=["d41d8cd98f00b204e9800998ecf8427e"], Date=[Tue, 13 Feb 2018 13:40:31 GMT]}, payload=[content=true, contentMetadata=[cacheControl=null, contentDisposition=null, contentEncoding=null, contentLanguage=null, contentLength=0, contentMD5=null, contentType=application/unknown, expires=null], written=false, isSensitive=false]}
at org.jclouds.blobstore.functions.ParseSystemAndUserMetadataFromHeaders.parseLastModifiedOrThrowException(ParseSystemAndUserMetadataFromHeaders.java:92)
at org.jclouds.blobstore.functions.ParseSystemAndUserMetadataFromHeaders.apply(ParseSystemAndUserMetadataFromHeaders.java:72)
at org.jclouds.s3.functions.ParseObjectMetadataFromHeaders.apply(ParseObjectMetadataFromHeaders.java:61)
at org.jclouds.s3.functions.ParseObjectFromHeadersAndHttpContent.apply(ParseObjectFromHeadersAndHttpContent.java:48)
at org.jclouds.s3.functions.ParseObjectFromHeadersAndHttpContent.apply(ParseObjectFromHeadersAndHttpContent.java:34)
at org.jclouds.rest.internal.InvokeHttpMethod.invoke(InvokeHttpMethod.java:90)
at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:73)
at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:44)
at org.jclouds.rest.internal.DelegatesToInvocationFunction.handle(DelegatesToInvocationFunction.java:156)
at org.jclouds.rest.internal.DelegatesToInvocationFunction.invoke(DelegatesToInvocationFunction.java:123)
at com.sun.proxy.$Proxy155.getObject(Unknown Source)
at org.jclouds.s3.blobstore.S3BlobStore.getBlob(S3BlobStore.java:235)
at org.jclouds.blobstore.internal.BaseBlobStore.getBlob(BaseBlobStore.java:217)

NullPointerException at com.adobe.testing.s3mock.testsupport.common.S3MockStarter.getPort(S3MockStarter.java:95)

Hi, trying to follow the JUnit4 example through and tweak it enough to work with Cucumber & Spring Boot, on Java 10...

I want to be able to start up the S3Mock instance in the Cucumber @Before hook, but unfortunately I am getting a NullPointerException:

java.lang.NullPointerException
at com.adobe.testing.s3mock.testsupport.common.S3MockStarter.getPort(S3MockStarter.java:95)
at com.adobe.testing.s3mock.testsupport.common.S3MockStarter.createS3Client(S3MockStarter.java:89)
at com.elsevier.q2c.backupService.feature.support.Hooks.setUp(Hooks.java:34)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at cucumber.runtime.Utils$1.call(Utils.java:31)
at cucumber.runtime.Timeout.timeout(Timeout.java:16)
at cucumber.runtime.Utils.invoke(Utils.java:25)
at cucumber.runtime.java.JavaHookDefinition.execute(JavaHookDefinition.java:60)
at cucumber.runtime.HookDefinitionMatch.runStep(HookDefinitionMatch.java:17)
at cucumber.runner.UnskipableStep.executeStep(UnskipableStep.java:22)
at cucumber.api.TestStep.run(TestStep.java:83)
at cucumber.api.TestCase.run(TestCase.java:58)
at cucumber.runner.Runner.runPickle(Runner.java:80)
at cucumber.runtime.Runtime.runFeature(Runtime.java:119)
at cucumber.runtime.Runtime.run(Runtime.java:104)
at cucumber.api.cli.Main.run(Main.java:36)
at cucumber.api.cli.Main.main(Main.java:18)

In my Hook class I have:

public class Hooks {

  private final AWSCredentialsProvider awsCredentialsProvider;

  public static S3MockRule S3_MOCK_RULE = S3MockRule.builder().silent().build();

  @Autowired
  public Hooks(AWSCredentialsProvider awsCredentialsProvider) {
    this.awsCredentialsProvider = awsCredentialsProvider;
  }

  @Value("${aws.bucket}")
  private String bucket;

  @Before
  public void setUp() {
    AmazonS3 s3Mock = S3_MOCK_RULE.createS3Client("eu-west-1");
    s3Mock.createBucket(bucket);
  }
}

Is what I am trying to achieve possible, and if so where am I going wrong?
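One likely cause: Cucumber does not run JUnit @Rule/@ClassRule lifecycles, so the rule never starts the mock's Spring context and getPort() dereferences null. A sketch that sidesteps the rule by starting the application directly (assuming the varargs S3MockApplication.start overload shown in the stack traces above):

import com.adobe.testing.s3mock.S3MockApplication;
import cucumber.api.java.Before;

public class Hooks {

  private static S3MockApplication s3MockApplication;

  @Before
  public void setUp() {
    // Cucumber never evaluates JUnit rules, so start the mock ourselves.
    if (s3MockApplication == null) {
      s3MockApplication = S3MockApplication.start();
    }
  }
}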

Setting silent from the Docker command line?

I can set other properties when running S3Mock via Docker, but the silent parameter doesn't seem to be picked up?

$ docker run -p 9090:9090 -p 9191:9191 -t adobe/s3mock:latest --server.port=0 --initialBuckets=abc,def --silent=true
+------------------------------------------------------------------------------+
|             _______  ______    _______  _______  _______  _                  |
|            (  ____ \/ ___  \  (       )(  ___  )(  ____ \| \    /\           |
|            | (    \/\/   \  \ | () () || (   ) || (    \/|  \  / /           |
|            | (_____    ___) / | || || || |   | || |      |  (_/ /            |
|            (_____  )  (___ (  | |(_)| || |   | || |      |   _ (             |
|                  ) |      ) \ | |   | || |   | || |      |  ( \ \            |
|            /\____) |/\___/  / | )   ( || (___) || (____/\|  /  \ \           |
|            \_______)\______/  |/     \|(_______)(_______/|_/    \/           |
|                                                                              |
+------------------------------------------------------------------------------+

2018-06-01 01:14:25.760  INFO 1 --- [           main] c.a.testing.s3mock.S3MockApplication     : Starting S3MockApplication on b1e6221d7dca with PID 1 (/opt/service/s3mock-2.0.5.jar started by root in /opt/service)
2018-06-01 01:14:25.763  INFO 1 --- [           main] c.a.testing.s3mock.S3MockApplication     : No active profile set, falling back to default profiles: default
2018-06-01 01:14:25.794  INFO 1 --- [           main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@5b1d2887: startup date [Fri Jun 01 01:14:25 GMT 2018]; root of context hierarchy
2018-06-01 01:14:26.500  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 0 (https) 9090 (http)
...
2018-06-01 01:14:26.779  INFO 1 --- [           main] c.a.testing.s3mock.FileStoreController   : Creating initial buckets: [abc, def]
2018-06-01 01:14:26.779  INFO 1 --- [           main] c.a.testing.s3mock.FileStoreController   : Creating bucket: abc
2018-06-01 01:14:26.780  INFO 1 --- [           main] c.a.testing.s3mock.FileStoreController   : Creating bucket: def

The other parameters seem to be picked up, but not --silent=true. Is there another way I should be doing this?

s3mock 2.1.0 fails to start with BeanDefinitionOverrideException

While s3mock 2.0.11 works well in our tests, updating to 2.1.0 makes the s3mock startup fail with:

09:15:27.511 INFO  o.s.boot.SpringApplication - Starting application on mescalin with PID 377 (started by magro in /path/to/project)
09:15:27.512 INFO  o.s.boot.SpringApplication - No active profile set, falling back to default profiles: default
09:15:28.324 WARN  o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.support.BeanDefinitionOverrideException: Invalid bean definition with name 'httpRequestHandlerAdapter' defined in class path resource [org/springframework/data/rest/webmvc/config/RepositoryRestMvcConfiguration.class]: Cannot register bean definition [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.data.rest.webmvc.config.RepositoryRestMvcConfiguration; factoryMethodName=httpRequestHandlerAdapter; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/data/rest/webmvc/config/RepositoryRestMvcConfiguration.class]] for bean 'httpRequestHandlerAdapter': There is already [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.servlet.WebMvcAutoConfiguration$EnableWebMvcConfiguration; factoryMethodName=httpRequestHandlerAdapter; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/servlet/WebMvcAutoConfiguration$EnableWebMvcConfiguration.class]] bound.
09:15:28.334 INFO  o.s.b.a.l.ConditionEvaluationReportLoggingListener - 

Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
09:15:28.336 ERROR o.s.b.d.LoggingFailureAnalysisReporter - 

***************************
APPLICATION FAILED TO START
***************************

Description:

The bean 'httpRequestHandlerAdapter', defined in class path resource [org/springframework/data/rest/webmvc/config/RepositoryRestMvcConfiguration.class], could not be registered. A bean with that name has already been defined in class path resource [org/springframework/boot/autoconfigure/web/servlet/WebMvcAutoConfiguration$EnableWebMvcConfiguration.class] and overriding is disabled.

Action:

Consider renaming one of the beans or enabling overriding by setting spring.main.allow-bean-definition-overriding=true

For us 2.0.11 is sufficient right now (i.e. this is not a blocker), but I still wanted to let you know. If you are sure that things are working and there's evidence that it's just a classpath issue on our side, you can also just close this ticket as invalid.

Support @Nested for JUnit5

Hi Adobe-team,

when trying to use JUnit 5 nested tests with the @ExtendWith annotation, the mock fails with the following error:

org.springframework.boot.web.embedded.tomcat.ConnectorStartFailedException: Connector configured to listen on port 8086 failed to start

Cause

In the log we can see that the mock tries to start a second time, when the @BeforeAll in the nested test class is called.

Possible solution

As a simple workaround, the number of accesses can be counted in the S3MockExtension, and the mock only started or stopped when the counter is 0.

Example:

import com.adobe.testing.s3mock.junit5.S3MockExtension;
import org.junit.jupiter.api.extension.ExtensionContext;

public class NestedS3MockExtension extends S3MockExtension {

    private int started;

    @Override
    public void beforeAll(ExtensionContext context) {
        // Only the outermost test class actually starts the mock.
        if (started == 0) {
            this.start();
        }
        started++;
    }

    @Override
    public void afterAll(ExtensionContext context) {
        started--;
        // Stop only after the outermost test class has finished.
        if (started == 0) {
            this.stop();
        }
    }
}

I do not know if there is a better solution for this using JUnit 5 mechanisms, but if you like I can create a PR with this addition.

List Multipart Uploads not supported?

I'm using s3Client.listMultipartUploads to check for existing multipart uploads, but it always returns an empty list from getMultipartUploads. After a quick look at FileStoreController it seems that this is not supported, right? It would be great if this could be added; I might also try to submit a PR for it.

getS3Objects with prefix checks the whole object name against the prefix

Test Case:

  1. Given an S3Mock instance with a bucket full of files and no directories

  2. The base case with a ListObjectsV2Request will list all the files in the bucket.

    fandom
    country~1
    fanciful~1
    fanciful~2
    fanciful~3
    fanciful~4
    bar~1
    bar~2
    bar~3
    bar~4
    bar~5
    biz
    bonkers
    
  3. However, if one uses withPrefix with a value matching part of a given object's name, such as f or bar~ or fanciful~:

    ListObjectsV2Request req = new ListObjectsV2Request()
      .withBucketName(bucketName)
      .withMaxKeys(50)
      .withPrefix(prefix);
    
    ListObjectsV2Result result = s3.listObjectsV2(req);
    
  4. You will get an empty list, whereas the real S3 client returns a non-zero number of objects.

The problem:

In the FileStore class we have:

return isEmpty(prefix) || (null != p && p.startsWith(prefix));

On UNIX, for example, the path "foo/bar" starts with "foo" and with "foo/bar"; it does not start with "f" or "fo".

So this check matches the prefix against whole path components up to /, whereas the actual S3 behaviour is closer to p.toString().startsWith(prefix).
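A sketch of a character-based check closer to S3's semantics (the separator normalization is an assumption for Windows runs):

// Prefixes in S3 are plain string prefixes over the whole key,
// not path components.
String key = (p == null) ? null : p.toString().replace('\\', '/');
return isEmpty(prefix) || (key != null && key.startsWith(prefix));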

limit spring application executors number

Upon application start, it initializes some executor threads, one for each vCore AFAIU.

If initialized with

S3MockApplication.start(props)

what key/value should be passed to configure the number of executors?

My use case is non-parallel requests to S3.
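One avenue to try: Spring Boot 2.0 sizes the Tomcat worker pool via server.tomcat.max-threads, so passing that through the props map may do it (whether the mock forwards the property unchanged is an assumption to verify against your version):

import java.util.HashMap;
import java.util.Map;

import com.adobe.testing.s3mock.S3MockApplication;

class SmallThreadPoolExample {

  static S3MockApplication startWithFewWorkers() {
    Map<String, Object> props = new HashMap<>();
    props.put("server.tomcat.max-threads", 2); // standard Boot 2.0 property
    return S3MockApplication.start(props);
  }
}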

Thanks

Update heading for repository

Current

A simple mock implementation of the AWS S3 API startable as Docker image or JUnit rule

Expected

A simple mock implementation of the AWS S3 API startable as Docker image, JUnit 4 rule, or JUnit Jupiter extension

Wrong alias for bucket name field in InitiateMultipartUploadResult

The mock's InitiateMultipartUpload endpoint returns the bucket name within a <Bucketname> element (because of the alias defined in InitiateMultipartUploadResult), whereas the S3 documentation specifies that the bucket name should be the content of a <Bucket> element.

Example request:

POST /test/123?uploads= HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 0
Host: localhost:9090
User-Agent: HTTPie/0.9.8


HTTP/1.1 200 
Content-Type: application/x-www-form-urlencoded
Date: Fri, 14 Sep 2018 09:54:30 GMT
Transfer-Encoding: chunked

<InitiateMultipartUploadResult><Bucketname>test</Bucketname><Key>123</Key><UploadId>f01174e3-5fe8-4a76-8eb8-bc73efc2a919</UploadId></InitiateMultipartUploadResult>
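A sketch of the fix, assuming the response DTO carries a JAXB-style alias on the field (the annotation style is an assumption about the mock's internals):

// In InitiateMultipartUploadResult: align the alias with the S3 API
// reference, which names the element "Bucket".
@javax.xml.bind.annotation.XmlElement(name = "Bucket")
private String bucketName;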

Deleting multiple objects fails with status code 415

Hey there,

I’m trying the S3Mock Docker image to automate testing of an Elixir application that uses S3.

I have noticed that there is one operation that consistently fails with S3Mock while it works fine with other implementations of the S3 API: Deleting multiple objects.

When I try deleting multiple objects, here is the response I get:

body: "",
headers: [
      {"Accept",
       "application/xml, application/x-www-form-urlencoded, application/octet-stream, text/plain, text/xml, application/*+xml, multipart/form-data, application/json, application/*+json, */*"},
      {"Content-Length", "0"},
      {"Date", "Thu, 19 Jul 2018 22:31:36 GMT"}
    ],
status_code: 415

This is the request body that was sent:

<?xml version="1.0" encoding="UTF-8"?><Delete><Object><Key>bar</Key></Object></Delete>

Nothing was logged to the Docker logs. Deleting the same file with a simple DELETE request works fine.

Is this functionality currently not supported?
I saw there is a BatchDeleteRequest class, which I assume is intended for this, but I haven't dug any deeper yet.
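For comparison, the standard S3 multi-object delete is a POST to /<bucket>?delete with an XML content type and a Content-MD5 header, so a 415 points at content-type negotiation rather than the operation being entirely absent. The request shape (values illustrative):

POST /bucketName?delete HTTP/1.1
Host: localhost:9090
Content-Type: application/xml
Content-MD5: <base64 MD5 of the body>

<?xml version="1.0" encoding="UTF-8"?>
<Delete><Object><Key>bar</Key></Object></Delete>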

Spring Security on The Classpath results in 401

I am running the JUnit4 example given here:

https://github.com/adobe/S3Mock/blob/master/testsupport/junit4/src/test/java/com/adobe/testing/s3mock/junit4/S3MockRuleTest.java

I had a lot of dependencies in my project, so I had to go through them one by one to narrow it down to this. Once the following dependency is in the POM, the exception below starts to show up. The test has not been modified at all.

The testing project parent:

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.0.1.RELEASE</version>
	</parent>

Dependency:

	<dependency>
		<groupId>org.springframework.security.oauth</groupId>
		<artifactId>spring-security-oauth2</artifactId>
		<version>2.0.8.RELEASE</version>
	</dependency>

JUnit Exception:

com.amazonaws.services.s3.model.AmazonS3Exception: (Service: Amazon S3; Status Code: 401; Error Code: 401 ; Request ID: null; S3 Extended Request ID: null), S3 Extended Request ID: null
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1632)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4365)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4312)
at com.amazonaws.services.s3.AmazonS3Client.createBucket(AmazonS3Client.java:1030)
at com.amazonaws.services.s3.AmazonS3Client.createBucket(AmazonS3Client.java:967)
at S3MockRuleTest.shouldUploadAndDownloadObject(S3MockRuleTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at com.adobe.testing.s3mock.junit4.S3MockRule$1.evaluate(S3MockRule.java:68)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:538)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:760)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:206)

`aws cp` does not work with recent versions

I'm trying to use S3Mock as part of my development environment and I sometimes have to use the AWS CLI (which might not be the intended use case).

aws s3 cp myFile.txt s3://bucketName/myFile.txt --endpoint-url http://localhost:9090 works like a charm with version 1.11.13 of aws-cli (the one that comes from the Ubuntu 16.04 repositories), but if I run the command with version 1.14.44 (the one that comes with Ubuntu 18.04) or newer, I get (from the S3Mock console):

2018-07-19 01:00:27.354 ERROR 1 --- [nio-9090-exec-2] c.adobe.testing.s3mock.domain.FileStore  : Wasn't able to store file on disk!

java.io.EOFException: Unexpected EOF read on the socket
	at org.apache.coyote.http11.Http11InputBuffer.fill(Http11InputBuffer.java:722) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.http11.Http11InputBuffer.access$300(Http11InputBuffer.java:40) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.http11.Http11InputBuffer$SocketInputBuffer.doRead(Http11InputBuffer.java:1072) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.http11.filters.IdentityInputFilter.doRead(IdentityInputFilter.java:140) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.http11.Http11InputBuffer.doRead(Http11InputBuffer.java:261) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.Request.doRead(Request.java:581) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:326) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:642) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.connector.InputBuffer.readByte(InputBuffer.java:337) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:93) ~[tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at com.adobe.testing.s3mock.util.AwsChunkDecodingInputStream.readUntil(AwsChunkDecodingInputStream.java:109) ~[classes!/:na]
	at com.adobe.testing.s3mock.util.AwsChunkDecodingInputStream.read(AwsChunkDecodingInputStream.java:72) ~[classes!/:na]
	at java.io.InputStream.read(InputStream.java:170) ~[na:1.8.0_151]
	at java.io.InputStream.read(InputStream.java:101) ~[na:1.8.0_151]
	at com.adobe.testing.s3mock.domain.FileStore.inputStreamToFile(FileStore.java:437) [classes!/:na]
	at com.adobe.testing.s3mock.domain.FileStore.putS3Object(FileStore.java:248) [classes!/:na]
	at com.adobe.testing.s3mock.FileStoreController.putObject(FileStoreController.java:284) [classes!/:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_151]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_151]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_151]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_151]
	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:209) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:877) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:783) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:991) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:925) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:974) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.servlet.FrameworkServlet.doPut(FrameworkServlet.java:888) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:664) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:851) [spring-webmvc-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:742) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) [tomcat-embed-websocket-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at com.adobe.testing.s3mock.KmsValidationFilter.doFilterInternal(KmsValidationFilter.java:87) [classes!/:na]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:101) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:81) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.0.5.RELEASE.jar!/:5.0.5.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:496) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:803) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:790) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1459) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_151]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_151]
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-8.5.29.jar!/:8.5.29]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]

And from the CLI I get:

upload failed: ./myFile.txt to s3://bucketName/myFile.txt Connection was closed before we received a valid response from endpoint URL: "http://localhost:9090/bucketName/myFile.txt".

Any suggestions?

Support for CORS headers

Hello!

I'm trying to use this to test a browser-based uploader and I get issues related to CORS. Is there any way to set the cross-origin headers with S3Mock?

Continuation token invalidated by delete.

If the bucket has more than 1000 objects and I want to delete them all, I list the objects batchwise using nextMarker (v1) / nextContinuationToken (v2): request a batch, delete it, request the next batch using the marker/token, delete it, and so on.

The problem I face with adobe/S3Mock is that the continuation token encodes an offset into the bucket's current object list, as seen here. Obviously, if some items before the marker are deleted, the items-to-skip count mapped to the marker/continuation token is invalidated.

A correct implementation should return items starting immediately after the marker, in S3 sort order.
See marker and continuation token.

I hacked together a solution that works for me here. If a solution based on sorting and mapping the token to a key would be acceptable, I could submit a PR.
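A sketch of marker-based paging that survives deletes (names illustrative, not the mock's actual fields): sort the current keys, then return up to maxKeys entries strictly after the marker.

import java.util.List;
import java.util.stream.Collectors;

static List<String> page(List<String> allKeys, String marker, int maxKeys) {
  return allKeys.stream()
      .sorted() // S3 lists keys in lexicographic order
      .filter(k -> marker == null || k.compareTo(marker) > 0)
      .limit(maxKeys)
      .collect(Collectors.toList());
}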

Should sync path fragment

Hi all,

First of all thank you for the fix of the issue #8

I'm using the S3Mock that I find very convenient for integration tests and I have a new case that fails :

@Rule
public TemporaryFolder folder = new TemporaryFolder();
@Test
public void shouldSyncPathFragment() {
  final File uploadFile = new File(UPLOAD_FILE_NAME);

  s3Client.createBucket(BUCKET_NAME);
  s3Client.putObject(new PutObjectRequest(BUCKET_NAME, UPLOAD_FILE_NAME, uploadFile));

  final TransferManager tm = createDefaultTransferManager();
  tm.downloadDirectory(BUCKET_NAME, "src", folder.getRoot());
  tm.shutdownNow(false);

  assertThat(new File(folder.getRoot() + UPLOAD_FILE_NAME).exists(), is(true));
}

That is, when objects live under a path like a/s3/path, downloading a key should behave like an rsync with the remote directory. That case works against a real S3 remote; with the mock I got a 404.

EDIT: using downloadDirectory instead of the download method, there is no 404, but the assertion fails.

Is it a new feature for S3Mock ?

Multiple spring boot servers in class path?

This is more of a question. I'm trying to use S3Mock via the JUnit rule. When running the sample from this repository I can see that the mock server starts, but when I do the same in my project it doesn't start S3Mock and instead tries to run my own service. I am not sure how to configure it so that it starts S3Mock instead of my service.

Any ideas?

Thanks

Lexical sort results in corrupted files

In FileStore::completeMultipartUpload, when the parts are put together they are sorted in lexical order. This leads to corrupted files if the number of parts is greater than 10:

    Arrays.sort(partNames);

This leads to (for example):

    0.part
    1.part
    10.part
    11.part
    2.part
    3.part
    ...
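A sketch of the fix: compare part numbers numerically, assuming file names of the form <partNumber>.part.

import java.util.Arrays;
import java.util.Comparator;

// Sort "0.part", "2.part", "10.part", ... by their numeric part number.
Arrays.sort(partNames,
    Comparator.comparingInt((String name) ->
        Integer.parseInt(name.substring(0, name.indexOf('.')))));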

PutObject returns "201 Created" rather than "200 OK"

Hi,

First of all thank you for even coming up with this project, let alone for making it publicly available, I've been looking for a replacement of fake-s3 that we were using internally for years when I found this repo. We were looking for an alternative that had a fully implemented multipart upload API and voila, this project has it! Good job!

Now the question: I see that the PUT Object API currently returns 201 Created as the response code, but Amazon S3 (as well as other S3-like alternatives that we've used in production) returns 200 OK. Is this something you got wrong by mistake, or is it a valid response code?
Nothing in the public doc https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html, other than the examples provided, mentions the expected response code, hence my question; all I know is that other implementations, including fake-s3, return a 200 on success.

If it is expected, feel free to close this issue.

Thank you.

Compatibility with latest aws s3 standards (signature v4)

Spent a decent amount of time trying to get it to work with the official aws-sdk library for Node.js.
Below I describe all the problems I encountered and how I worked around them.

  1. Started with a docker-compose.yml and initialized my-bucket:
  s3:
    image: adobe/s3mock
    ports:
      - 9090:9090
    command: --initialBuckets=my-bucket
  2. Created the AWS S3 client:
const aws = require('aws-sdk');

const s3 = new aws.S3({
    endpoint: 'http://localhost:9090',
})

await s3.putObject({
    ContentType: 'image/png',
    Key: 'my-key/image.png',
    Body: imageBuffer, // The actual image buffer
    Bucket: 'my-bucket',
}).promise()
  3. After trying to upload my object to the bucket I would get the following error:
[error] message: The specified bucket does not exist., stack: NoSuchBucket: The specified bucket does not exist.

This error message was very confusing because the bucket was there: I could see it with listBuckets and I was able to upload to it with my Postman query. What I found out later is that the aws-sdk uses the subdomain bucket naming convention, which adobe/s3mock is not compatible with. I was able to fix it with the s3ForcePathStyle: true option in my S3 client configuration.

  4. The next error I encountered was:
Internal Server Error, stack: 406: null

After a long investigation I found out that adobe/s3mock is not compatible with the latest version of the AWS signatures, so I had to add signatureVersion: 'v3' to my S3 configuration.

  5. The final working configuration looked like:
const aws = require('aws-sdk');

const s3 = new aws.S3({
    endpoint: 'http://localhost:9090',
    s3ForcePathStyle: true,
    signatureVersion: 'v3',
})

await s3.putObject({
    ContentType: 'image/png',
    Key: 'my-key/image.png',
    Body: imageBuffer, // The actual image buffer
    Bucket: 'my-bucket',
}).promise()

TLDR:
It's a great S3 mock server, but it is not compatible with the latest S3 standards:

  1. It does not support the subdomain (virtual-hosted) bucket naming convention, which is the default in the AWS SDK.
  2. It does not support v4 signature generation, which is also the default in the AWS SDK.
