
Comments (14)

afranken commented on July 22, 2024

@vlbaluk I added an integration test using TransferManager that is working just fine, see linked PR.

from s3mock.

vlbaluk commented on July 22, 2024

@afranken, big thanks for looking into it. I see you explicitly set .checksumAlgorithm(ChecksumAlgorithm.SHA256).
By default, the CRT client uses CRC32 (visible in the screenshot above). I did the same trick in my upload configuration, and it fixed the problem. 🙂
But I didn't see a recommendation anywhere in the AWS docs to set the checksum algorithm explicitly.

Is it safe to change it for uploads to S3, or could it potentially disrupt any processes?
Can you explain why the default CRC32 algorithm is causing the files to break?
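For reference, the value the SDK puts in an x-amz-checksum-crc32 trailer is simply the payload's CRC32, taken as four big-endian bytes and base64-encoded. A stdlib-only sketch (the payload here is the standard CRC-32 check string, chosen because its check value 0xCBF43926 is well known; class and variable names are illustrative):

```java
import java.nio.ByteBuffer;
import java.util.Base64;
import java.util.zip.CRC32;

public class Crc32TrailerValue {
    public static void main(String[] args) {
        // "123456789" is the standard CRC-32 check input; its CRC is 0xCBF43926.
        byte[] payload = "123456789".getBytes();
        CRC32 crc = new CRC32();
        crc.update(payload);
        // The trailer value is the 4-byte big-endian CRC, base64-encoded.
        byte[] crcBytes = ByteBuffer.allocate(4).putInt((int) crc.getValue()).array();
        System.out.println(Base64.getEncoder().encodeToString(crcBytes));
    }
}
```

This only shows how the trailer value is encoded; whether that trailer ends up inside the stored object is a server-side concern.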


afranken commented on July 22, 2024

@vlbaluk when I change the line to .checksumAlgorithm(ChecksumAlgorithm.CRC32) or remove it altogether, the test still runs without problems.


afranken commented on July 22, 2024

I added checksum support in version 2.17.0:
https://github.com/adobe/S3Mock/releases/tag/2.17.0
Maybe you tested with an older version?

Older versions would ignore the additional bytes the AWS SDK adds to the payload when asking for checksum validation of (multipart) uploads.
The old behaviour would match your screenshot.


afranken commented on July 22, 2024

see also #1123


glennvdv commented on July 22, 2024

Got the same problem using 3.5.1.
Sample code:

@Test
void testPutAndGetObject() throws Exception {
    URL resource = Thread.currentThread().getContextClassLoader().getResource("jon.png");
    var uploadFile = new File(resource.toURI());
    s3AsyncClient.putObject(
            PutObjectRequest.builder().bucket("eojt").key(uploadFile.getName()).build(),
            AsyncRequestBody.fromFile(uploadFile)).get();
    var response = s3Client.getObject(
            GetObjectRequest.builder().bucket("eojt").key(uploadFile.getName()).build());

    var uploadFileIs = Files.newInputStream(uploadFile.toPath());
    var uploadDigest = hexDigest(uploadFileIs);
    var downloadedDigest = hexDigest(response);
    uploadFileIs.close();
    response.close();

    Assertions.assertThat(uploadDigest).isEqualTo(downloadedDigest);
}
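The hexDigest helper isn't shown in the snippet; a minimal sketch of what it presumably does (MD5 over a stream, hex-encoded — the helper's name comes from the test above, but this implementation is an assumption):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.security.MessageDigest;

public class HexDigestSketch {
    // Assumed helper: MD5 digest of an InputStream, returned as lowercase hex.
    static String hexDigest(InputStream is) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] buf = new byte[8192];
        int n;
        while ((n = is.read(buf)) != -1) {
            md.update(buf, 0, n);
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // MD5 of the empty stream is the well-known d41d8cd98f00b204e9800998ecf8427e.
        System.out.println(hexDigest(new ByteArrayInputStream(new byte[0])));
    }
}
```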

Test output:
Expected :"dcd37a339ac2f037a7498b9fc63048bb" Actual :"930c76274807e15e0873cb30a9d0d012"
The following content is appended to the file:
0 x-amz-checksum-crc32:ntdN8g==
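Decoding that trailer value (only java.util.Base64 needed) confirms it is a 4-byte value, i.e. a CRC32 checksum rather than file content:

```java
import java.util.Base64;

public class DecodeTrailer {
    public static void main(String[] args) {
        // The trailer value appended to the corrupted file, decoded to hex:
        byte[] crc = Base64.getDecoder().decode("ntdN8g==");
        StringBuilder sb = new StringBuilder();
        for (byte b : crc) {
            sb.append(String.format("%02x", b));
        }
        System.out.println(sb + " (" + crc.length + " bytes)");
    }
}
```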


afranken commented on July 22, 2024

@glennvdv how do you construct the async client?
I can't reproduce these errors locally.


glennvdv commented on July 22, 2024

Using auto-configuration from Spring Boot and Spring Cloud for Amazon Web Services.


vlbaluk commented on July 22, 2024

@afranken We used crtBuilder() to construct the CRT client:

final S3CrtAsyncClientBuilder s3AsyncClientBuilder = S3AsyncClient.crtBuilder()
        .maxConcurrency(100)
        .minimumPartSizeInBytes(10 * MB)
        .thresholdInBytes(100 * MB)
        .region(Region.of(...))
        .credentialsProvider(...);

You may be using a different HTTP client, which could explain the difference in our setups.


afranken commented on July 22, 2024

@glennvdv I meant the actual code you are using to construct the client. As I said, I can't reproduce the problem locally.
I'm using several different clients in the integration tests:
https://github.com/adobe/S3Mock/blob/main/integration-tests/src/test/kotlin/com/adobe/testing/s3mock/its/S3TestBase.kt#L115

@vlbaluk that looks almost exactly the same as the client I'm using in the integration tests:
https://github.com/adobe/S3Mock/blob/main/integration-tests/src/test/kotlin/com/adobe/testing/s3mock/its/S3TestBase.kt#L231


glennvdv commented on July 22, 2024

@afranken I let Spring Boot create the async client. From their source, they do something like this:

	@Bean
	@ConditionalOnMissingBean
	S3AsyncClient s3AsyncClient(AwsCredentialsProvider credentialsProvider) {
		S3CrtAsyncClientBuilder builder = S3AsyncClient.crtBuilder().credentialsProvider(credentialsProvider)
				.region(this.awsClientBuilderConfigurer.resolveRegion(this.properties));
		Optional.ofNullable(this.awsProperties.getEndpoint()).ifPresent(builder::endpointOverride);
		Optional.ofNullable(this.properties.getEndpoint()).ifPresent(builder::endpointOverride);
		Optional.ofNullable(this.properties.getCrossRegionEnabled()).ifPresent(builder::crossRegionAccessEnabled);
		Optional.ofNullable(this.properties.getPathStyleAccessEnabled()).ifPresent(builder::forcePathStyle);

		if (this.properties.getCrt() != null) {
			S3CrtClientProperties crt = this.properties.getCrt();
			PropertyMapper propertyMapper = PropertyMapper.get();
			propertyMapper.from(crt::getMaxConcurrency).whenNonNull().to(builder::maxConcurrency);
			propertyMapper.from(crt::getTargetThroughputInGbps).whenNonNull().to(builder::targetThroughputInGbps);
			propertyMapper.from(crt::getMinimumPartSizeInBytes).whenNonNull().to(builder::minimumPartSizeInBytes);
			propertyMapper.from(crt::getInitialReadBufferSizeInBytes).whenNonNull()
					.to(builder::initialReadBufferSizeInBytes);
		}

		return builder.build();
	}


afranken commented on July 22, 2024

After testing with different client configurations and upload files of various sizes, I may have found the problem:
some clients decide dynamically whether to use chunked uploads unless explicitly configured.
Signing is also decided dynamically unless explicitly configured.

We currently do not handle chunked, unsigned uploads correctly: either we cut off some of the chunks before persisting the bytes to disk, or we write the chunks together with their chunk boundaries to disk.
Either way, we persist the wrong data to disk and later return the wrong data.
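To illustrate the chunk boundaries in question: an aws-chunked body frames each chunk as a hex size followed by CRLF-delimited data, terminated by a zero-length chunk and optional trailers. A toy decoder (a sketch of the framing only, not S3Mock's actual implementation) shows what a server must strip before persisting:

```java
public class AwsChunkedDecode {
    // Sketch: decode an unsigned aws-chunked body. Each chunk is
    // "<size-in-hex>\r\n<data>\r\n"; a zero-length chunk ends the stream,
    // optionally followed by trailers such as x-amz-checksum-crc32.
    static String decode(String body) {
        StringBuilder payload = new StringBuilder();
        int pos = 0;
        while (true) {
            int eol = body.indexOf("\r\n", pos);
            int size = Integer.parseInt(body.substring(pos, eol), 16);
            if (size == 0) {
                break; // final chunk; only trailers follow
            }
            int start = eol + 2;
            payload.append(body, start, start + size);
            pos = start + size + 2; // skip chunk data and its trailing CRLF
        }
        return payload.toString();
    }

    public static void main(String[] args) {
        String body = "5\r\nhello\r\n6\r\n world\r\n"
                + "0\r\nx-amz-checksum-crc32:DUoRhQ==\r\n\r\n";
        System.out.println(decode(body));
    }
}
```

Persisting `body` verbatim instead of `decode(body)` produces exactly the kind of corruption shown in the earlier test output: size markers and a checksum trailer mixed into the stored object.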


afranken commented on July 22, 2024

Uploading chunked, signed data works without issues, BTW.


afranken commented on July 22, 2024

@glennvdv / @vlbaluk I just released 3.7.1, which now correctly handles unsigned, chunked uploads when using async HTTP clients, as long as the uploaded files are below 16KB.

See #1818

