azure / azure-sdk-for-java

This repository is for active development of the Azure SDK for Java. For consumers of the SDK we recommend visiting our public developer docs at https://docs.microsoft.com/java/azure/ or our versioned developer docs at https://azure.github.io/azure-sdk-for-java.

License: MIT License


azure-sdk-for-java's Introduction

Azure SDK for Java

Packages Build Documentation

This repository is for active development of the Azure SDK for Java. For consumers of the SDK we recommend visiting our public developer docs or our versioned developer docs.

Getting started

To get started with a specific service library, see the README.md file located in the library's project folder. You can find service libraries in the /sdk directory. For a list of all the services we support, see our list of existing libraries.

For tutorials, samples, quick starts and other documentation, visit Azure for Java Developers.

Prerequisites

All libraries are baselined on Java 8, with testing and forward support up until the latest Java long-term support release (currently Java 17).

Available packages

Each service can have both 'client' and 'management' libraries. 'Client' libraries are used to consume the service, whereas 'management' libraries are used to configure and manage the service.

Client Libraries

Our client libraries follow the Azure SDK Design Guidelines for Java, and share a number of core features such as HTTP retries, logging, transport protocols, authentication protocols, etc., so that once you learn how to use these features in one client library, you will know how to use them in other client libraries. You can learn about these shared features here. These libraries can be easily identified by folder, package, and namespace names starting with azure-, e.g. azure-keyvault.

You can find the most up-to-date list of all of the new packages on our page. This list includes the most recent releases, both stable and beta.

NOTE: If you need to ensure your code is ready for production, use one of the stable, non-beta libraries.

Management Libraries

Similar to our client libraries, the management libraries follow the Azure SDK Design Guidelines for Java. These libraries provide a high-level, object-oriented API for managing Azure resources that is optimized for ease of use, succinctness, and consistency. You can find the list of management libraries on this page.

For general documentation on how to use the new libraries for Azure Resource Management, please visit here. We have also prepared plenty of code samples, as well as a migration guide, in case you are upgrading from previous versions.

The management libraries can be identified by namespaces that start with azure-resourcemanager, e.g. azure-resourcemanager-compute.

Historical Releases

Note that the latest libraries from Microsoft are in the com.azure Maven group ID, and have package names beginning with com.azure. If you're using libraries in the com.microsoft.azure Maven group ID, or with that package structure, please consider migrating to the latest libraries. You can find a mapping table from these historical releases to their equivalents here.

Need help?

Navigating the repository

Main branch

The main branch has the most recent code with new features and bug fixes. It does not represent the latest released stable SDK.

Release branches (Release tagging)

For each package we release, a unique git tag is created containing the name and version of the package, marking the commit that produced it. This tag is used for servicing via hotfix branches, as well as for debugging the code of a particular beta or stable release. Release tags use the format <package-name>_<package-version>. For more information, please see our branching strategy.

Contributing

For details on contributing to this repository, see the contributing guide.

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, view Microsoft's CLA.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Additional Helpful Links for Contributors

Many people all over the world have helped make this project better. You'll want to check out:

Reporting security issues and security bugs

Security issues and bugs should be reported privately, via email, to the Microsoft Security Response Center (MSRC) [email protected]. You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Further information, including the MSRC PGP key, can be found in the Security TechCenter.

License

Azure SDK for Java is licensed under the MIT license.


azure-sdk-for-java's People

Contributors

alzimmermsft, amarzavery, anuchandy, azure-sdk, chenrujun, chentanyi, conniey, fabianmeiswinkel, g2vinay, gapra-msft, jcookems, jianghaolu, jimsuplizio, kushagrathapar, mikeharder, mitchdenny, moderakh, mssfang, netyyyy, rickle-msft, rikkigibson, samvaity, sima-zhu, sreeramgarlapati, srnagar, stankovski, vcolin7, weidongxu-microsoft, xinlian12, xseeseesee


azure-sdk-for-java's Issues

BlobRestProxy.getBlob does not respect GetBlobOptions.setComputeRangeMD5

There is no difference between calling getBlob with or without setComputeRangeMD5 set. For example:

    options = new GetBlobOptions();
    options.setRangeStart(50L);
    options.setRangeEnd(200L);
    options.setComputeRangeMD5(true);
    service.getBlob(container, blob, options);

Sends the following HTTP message:

GET http://XXX.blob.core.windows.net/qa-476476-a1/qa-476476-int-8 HTTP/1.1
x-ms-version: 2011-08-18
Range: bytes=50-200
Date: Sun, 20 May 2012 00:52:22 GMT
Authorization: XXX
User-Agent: Java/1.6.0_29
Host: XXX.blob.core.windows.net
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive

One fix would be to add this clause to the getBlob method:

    if (options.isComputeRangeMD5()) {
        builder = addOptionalHeader(builder, "x-ms-range-get-content-md5", "true");
    }

md5 checksum not checked on server?

Should the MD5 checksum be checked on file upload?

Situation 1:
The blob is being overwritten.
My steps:

  1. Get a blob reference
  2. Calculate the MD5 checksum of the original file
  3. Upload the MD5 checksum using upload properties
  4. Upload another file which is slightly modified from the original file

My modified file is uploaded successfully, and I can download it from the Azure cloud; the downloaded file is the modified one.

Situation 2:
The blob is new.
How should I submit an MD5 to be checked when the file is uploaded?

  1. I cannot upload properties before the file upload, because the blob is not created yet.
  2. If I submit the MD5 after the file upload, how can the server know my MD5?

Suggestion: Allow null container name to indicate root container for blob APIs

Currently, there are two ways to indicate that the root container should be used in the blob APIs:

// Explicitly indicate root:
service.getBlob("$root", blob);
// Implicitly indicate, using an empty string:
service.getBlob("", blob);

It seems natural that null could also be used to indicate the root container (even more natural than an empty string), because null means "no container". That would allow code like this to work:

service.getBlob(null, blob);

One potential downside is that it would make it harder to debug issues where the container name is not initialized.
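
A sketch of what such a normalization could look like (the helper name and ROOT_CONTAINER constant are illustrative, not part of the SDK):

```java
// Hypothetical helper: treat null (and the empty string) as the
// implicit root container "$root". Names here are illustrative only.
public class ContainerNames {
    static final String ROOT_CONTAINER = "$root";

    static String normalize(String container) {
        // null and "" both mean "no container", i.e. the root container.
        if (container == null || container.isEmpty()) {
            return ROOT_CONTAINER;
        }
        return container;
    }

    public static void main(String[] args) {
        System.out.println(normalize(null));      // $root
        System.out.println(normalize(""));        // $root
        System.out.println(normalize("photos"));  // photos
    }
}
```

Calling such a helper at the top of each blob API would make the null, empty, and explicit spellings behave identically.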

International support for ServiceBus in Java SDK

Currently, the root of the Service Bus URI is hard-coded in the Java SDK. It is impossible for the client to point to a different Windows Azure instance. As Windows Azure is going global, this becomes a hurdle; we should stop doing that ASAP.

Blob breakLease proxy API signature incorrect

Dev Estimate: 1
Test Estimate: 1

The current breakLease API is this:

public void breakLease(String container, String blob, String leaseId)

However, according to http://msdn.microsoft.com/en-us/library/windowsazure/ee691972,

x-ms-lease-action: <acquire | renew | release | break>
...
Break:
... the request is not required to specify a matching lease ID.
... the response indicates the interval in seconds until a new lease may be acquired.

This means that

  1. The leaseId parameter should be removed
  2. The function should return an object that includes a LeaseTime property.

BlobContract.create*Blob|copyBlob should return etag of created object

When a blob is created, the service returns a message like this (with some parts elided):

HTTP/1.1 201 Created
Last-Modified: Thu, 17 May 2012 17:54:50 GMT
ETag: "0x8CF026D4A29F829"

The ETag is especially important, because that allows you to ensure that the blob is in the state you think it should be in. For example, if after creating the blob, you want to set some metadata, and you want to be sure that the blob has not been altered by anyone else in the meantime, you would use code like this:

service.setBlobMetadata(container, blob, metadata,
    new SetBlobMetadataOptions()
        .setAccessCondition(AccessCondition.ifMatch(etag)));

However, createBlockBlob is a void method; it returns nothing. So you cannot capture the ETag, and you lose confidence that you are the only one touching the blob. (Reading the blob back to get the ETag won't help, because you incur the same latency as just setting the metadata blindly.)

The same issue affects the other create*Blob methods and copyBlob.

Spike: Service Bus: List*Result classes need to surface continuations

Dev Estimate: 3
Test Estimate: 0

Like most services, the Service Bus will return a partial list, with a continuation to get more, if you use TOP or if the number of results crosses some threshold. For example, with a call like this, when the server has more than 22 queues:

https://XXX.servicebus.windows.net/$Resources/Queues?%24top=2&%24skip=20

the server can return:

<feed xmlns="http://www.w3.org/2005/Atom">
  ...
  <link rel="self" href="https://azuresdkdev.servicebus.windows.net/
    $Resources/Queues?%24top=2&amp;%24skip=20"/>
  <link rel="next" href="https://azuresdkdev.servicebus.windows.net/
    $Resources/Queues?%24top=2&amp;%24skip=22"/>

However, that "next" link is not returned to the consumer of the API. This means that the List*Result classes need to parse and expose those continuations to the user, so they can use them to get the next batch of results. For example, this is what is done with the Blob APIs: ListBlobsResult has Marker and NextMarker properties.

(The PHP SDK has a similar issue, tracked as Azure/azure-sdk-for-php#479.)
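
The requested pattern can be sketched in isolation. The Page class, the fetchPage stand-in, and the listAll loop below are illustrative stand-ins for the real List*Result classes and service calls, not the actual Service Bus API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: a result type that surfaces its continuation so the caller
// can follow "next" links until the server reports there are no more.
public class Paging {
    static class Page {
        final List<String> items;
        final Integer nextSkip; // null means no more pages

        Page(List<String> items, Integer nextSkip) {
            this.items = items;
            this.nextSkip = nextSkip;
        }
    }

    static final List<String> ALL = Arrays.asList("q1", "q2", "q3", "q4", "q5");

    // Stand-in for a $top/$skip style service call.
    static Page fetchPage(int skip, int top) {
        int end = Math.min(skip + top, ALL.size());
        List<String> items = new ArrayList<>(ALL.subList(skip, end));
        Integer next = end < ALL.size() ? end : null;
        return new Page(items, next);
    }

    // Follow continuations until the "server" reports no next link.
    public static List<String> listAll(int top) {
        List<String> result = new ArrayList<>();
        Integer skip = 0;
        while (skip != null) {
            Page page = fetchPage(skip, top);
            result.addAll(page.items);
            skip = page.nextSkip;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(listAll(2)); // [q1, q2, q3, q4, q5]
    }
}
```

The key design point is that the continuation lives on the result object, exactly as ListBlobsResult exposes NextMarker.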

Page blobs > 2GB cannot be created with Blob service layer library

In BlobContract.java, I spied this defect:

/**
 * Creates a page blob of the specified maximum length in the specified container.
 * <p>
 * Note that this method only initializes the blob. To add content to a page blob, use the
 * {@link BlobContract#createBlobPages(String, String, PageRange, long, InputStream)} or
 * {@link BlobContract#createBlobPages(String, String, PageRange, long, InputStream, CreateBlobPagesOptions)}
 * methods.
 * 
 * @param container
 *            A {@link String} containing the name of the container to create the blob in.
 * @param blob
 *            A {@link String} containing the name of the blob to create. A blob name can contain any combination of
 *            characters, but reserved URL characters must be properly escaped. A blob name must be at least one
 *            character long and cannot be more than 1,024 characters long, and must be unique within the container.
 * @param length
 *            The length in bytes of the page blob to create. The length must be a multiple of 512 and may be up to
 *            1 TB.
 * @throws ServiceException
 *             if an error occurs accessing the storage service.
 */
void createPageBlob(String container, String blob, int length) throws ServiceException;

Note that the length parameter is declared as an int, which maxes out at 2GB. This issue also appears in the overload

void createPageBlob(String container, String blob, int length, CreateBlobOptions options) throws ServiceException;

and in the implementations of createPageBlob in BlobExceptionProcess.java and BlobRestProxy.java.
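
A quick demonstration of why int is too small here: Integer.MAX_VALUE is just under 2 GiB, while the documented 1 TB maximum page blob size needs a long.

```java
public class PageBlobLength {
    public static void main(String[] args) {
        // A Java int maxes out just under 2 GiB, so an int length cannot
        // express the documented 1 TB maximum for page blobs; long can.
        long maxInt = Integer.MAX_VALUE;          // 2147483647, ~2 GiB - 1
        long oneTb = 1024L * 1024 * 1024 * 1024;  // 1099511627776
        System.out.println(oneTb > maxInt);       // true
        // 1 TB is also a legal page blob size: a multiple of 512 bytes.
        System.out.println(oneTb % 512 == 0);     // true
    }
}
```

Note the `1024L` literal: without the `L` suffix the multiplication would overflow in int arithmetic before being widened.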

getBlobProperties throws when using failing IF_MODIFIED_SINCE access condition

This code fails with a null pointer exception:

service.getBlobProperties(container, blob,
    new GetBlobPropertiesOptions()
        .setAccessCondition(AccessCondition.ifModifiedSince(currentLastModifiedDate)));

The reason is that the service returns a 304 code (NotModified), which is an exceptional condition, but processing continues regardless. The fix is to add the following code to BlobRestProxy.getBlobPropertiesResultFromResponse:

    ThrowIfNotSuccess(response);

This will throw a new ServiceException, which is more actionable by calling code.

Null Etag in TableResults

InsertOrMerge and InsertOrReplace return a TableResult with a null Etag for single-row operations.

Table: Unify the creation of Batch and single op requests

The REST request for a single table operation (for example, updateEntity) should be the same as the MIME part of a batch request containing the same operation.

However, the current Table service implementation has two separate code paths for generating those requests:

updateEntity 
-> putOrMergeEntityCore 
    -> (code for building headers and constructing body)
batch(InsertEntityOperation)
-> createBatchRequestBody
    -> createBatchInsertOrUpdateEntityPart
        -> (code for building headers and constructing body)

To simplify the code, reduce maintenance costs, and ensure consistency between single ops and batch, both code paths should end up in the same function.

Investigate whether From and other properties of table entity query services are supported; maybe remove unsupported ones

The From property is not used and is not defined in the MSDN docs for query. It should be removed.

gcheng (repo collaborator):

MSDN documentation says it supports select and filter, with no specificity about whether from is supported or not. Even if it isn't supported right now, leaving it in the SDK may still be valuable for future scenarios.

http://msdn.microsoft.com/en-us/library/windowsazure/dd179421.aspx

jcookems (repo collaborator):

I disagree; no code in the SDK is using that property, so it is misleading to keep it in a class where it is implied to be an option, and it is bad practice to have dead code. If we need to, we can add it later.

gcheng (repo collaborator):

This is an interesting perspective. What about orderby and the other properties? My point is that we should be consistent: if we delete from, we should delete the others as well. Thoughts?

ogail:

I understand both of your points of view, and for that reason I suggest leaving this function with a private modifier. By doing this we are not exposing something that isn't working, and at the same time we can make it public when these options are supported.

Readme is inaccurate

The Readme file is inaccurate. The opening sentence suggests the SDK supports "messaging through Service Bus":

This SDK allows you to build Windows Azure applications in Java that allow you to take advantage of Azure scalable cloud computing resources: table and blob storage, messaging through Service Bus.

Blob unit test failures in the dev-bookmark branch

In the dev-bookmark branch, the following unit tests fail:

  • com.microsoft.windowsazure.services.blob.BlobServiceIntegrationTest
    • getBlobwithIfNoneMatchETagAccessconditionWorks
    • getBlobWithIfModifiedSinceAccessconditionWorks

This issue was introduced with 802e9b0, where PipelineHelpers.ThrowIfError changed what it considers an error from >= 300 to >= 400. This change is correct, but the getBlob code should still treat >= 300 as an error, because the server returns a 304 if it cannot give a match for the If-* headers.

A potential fix is to change the BlobRestProxy code to replace

    ThrowIfError(response);

with something like:

    // 304 is returned if the blob requested does not match the If-None-Match, etc. headers.
    if (response.getStatus() >= 300) {
        throw new UniformInterfaceException(response);
    }

running test goal fails for maven

Moved from the private repository.

For the default Java project on Jenkins, it fails to run the unit tests with the following error:
[INFO] --- maven-surefire-plugin:2.7.2:test (default-test) @ microsoft-windowsazure-api ---
[INFO] Surefire report directory: e:\workspace\workspace\azure-sdk-for-java-WindowsAzure-dev-junit\microsoft-azure-api\target\surefire-reports
The system cannot find the path specified.
[ERROR] There are test failures.

Please refer to e:\workspace\workspace\azure-sdk-for-java-WindowsAzure-dev-junit\microsoft-azure-api\target\surefire-reports for the individual test results.
[JENKINS] Recording test results
[INFO]

Documentation for SetBlobPropertiesOptions setters is incorrect

For example, in the docs for setContentType:

/**
 * Sets the optional MIME content type for the blob content. 
 * This value will be returned to clients in the
 * <code>Content-Type</code> header of the response when
 * the blob data or blob properties are requested. If no
 * content type is specified, the default content type is 
 * <strong>application/octet-stream</strong>.
 * <p>

This is not strictly correct; the service has a default value, but when you use setBlobProperties, the content type header is removed unless the ContentType property is set to the previous value. See http://msdn.microsoft.com/en-us/library/windowsazure/ee691966, which says:

x-ms-blob-content-type
Optional. Sets the blob’s content type.
If this property is not specified on the request, then the property will be cleared for the blob. Subsequent calls to Get Blob Properties (REST API) will not return this property, unless it is explicitly set on the blob again.

It appears that the recommendation for using setBlobProperties should be to first call getBlobProperties, alter the result as needed, then submit it. Unset properties will clobber server values.

Silent failure when only specifying GetBlobOptions's RangeEnd property

It is an error on the server if you specify a range end without a range start, but it is OK to have a start without an end.

However, the service API hides this fact by silently ignoring the range end if there is no range start. Silently changing what the user intended is not good.

The simplest fix would be to allow the bad condition to flow through to the server, and let it complain. This can be accomplished by making this change in PipelineHelpers.java, in

 public static Builder addOptionalRangeHeader(Builder builder, Long rangeStart, Long rangeEnd) {
-    if (rangeStart != null) {
-        String range = rangeStart.toString() + "-";
-        if (rangeEnd != null) {
-            range += rangeEnd.toString();
-        }
+    if (rangeStart != null || rangeEnd != null) {
+        String range = (rangeStart == null ? "" : rangeStart.toString()) + "-"
+                + (rangeEnd == null ? "" : rangeEnd.toString());
         builder = addOptionalHeader(builder, "Range", "bytes=" + range);
     }
     return builder;
 }

Another choice would be to validate in the API, but that would be more brittle with respect to server changes.
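
The patched logic can be exercised on its own. This standalone sketch mirrors the proposed change, with the method shape simplified from PipelineHelpers (the name rangeHeaderValue is illustrative):

```java
public class RangeHeader {
    // Standalone version of the patched logic: emit a Range header whenever
    // either end of the range is present, letting the server reject a
    // lone range-end itself instead of silently dropping it.
    static String rangeHeaderValue(Long rangeStart, Long rangeEnd) {
        if (rangeStart == null && rangeEnd == null) {
            return null; // no header at all
        }
        String range = (rangeStart == null ? "" : rangeStart.toString()) + "-"
                + (rangeEnd == null ? "" : rangeEnd.toString());
        return "bytes=" + range;
    }

    public static void main(String[] args) {
        System.out.println(rangeHeaderValue(50L, 200L)); // bytes=50-200
        System.out.println(rangeHeaderValue(50L, null)); // bytes=50-
        System.out.println(rangeHeaderValue(null, 200L)); // bytes=-200 (the server will complain)
        System.out.println(rangeHeaderValue(null, null)); // null
    }
}
```

Forwarding `bytes=-200` to the server preserves the user's intent and surfaces the server's own error instead of masking it.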

README.md is not actually Markdown

The README.md file is HTML, not Markdown. We should do this right and 'convert' it to Markdown.

An important side benefit is that we will then be able to use GFM (GitHub Flavored Markdown)'s excellent source code syntax highlighting for any code samples inline, etc., that are not highlighted today.

Need to properly handle null for blob container names

When null is passed as the container name in the Blob service APIs, it maps to a container named "null", which is not the intended behavior. It should either throw an exception or interpret null as the root container (which would be consistent with passing the empty string; see issue #82).

Cannot copy blob from implicit root container

Most of the blob APIs allow the user to pass the empty string as the container name to indicate the implicit root container (as opposed to being explicit, using "$root").

However, that does not work for the source container's name when using the copyBlob API. Looking at the headers sent out, the problem is that the source is malformed, with a double slash in the middle:

X-Ms-Copy-Source: /XXX//qa-214273-int-39

The problem is in BlobRestProxy.getCopyBlobSourceName, where the source container is checked for null, but not for empty. The fix is to change:

-    if (sourceContainer != null) {
+    if (sourceContainer != null && !sourceContainer.isEmpty()) {

Support Select in Table service layer

The Table service layer provides the SelectFields property on QueryEntitiesOptions to allow queries to contain $select. However, when using TableRestProxy.queryEntities, the request fails when those properties are set:

400 Bad Request: One of the request inputs is not valid.

This appears to be because the request sets this header:

DataServiceVersion: 1.0;NetFx

which is not set when using the Table client layer to make a similar request.

(Edited to remove OrderBy, which is not supported in Azure Table: http://msdn.microsoft.com/en-us/library/dd135725.aspx)

All blob lease proxy APIs need to support access conditions

All the lease REST APIs support access conditions (ETag- and time-based), but only the acquireLease proxy API allows the user to specify them.

The fix would be to rename AcquireLeaseOptions to be something more generic, like LeaseOptions, and use that LeaseOptions for all the lease proxy APIs.

Include CHANGELOG

Guys, thanks for the hard work. But an invisible release .1.2 is now out, with no CHANGELOG.

Suggestion: Combine common parts of SetBlobPropertiesOptions and BlobProperties

As noted in #76, you will clobber server blob properties when using setBlobProperties unless you first call getBlobProperties and pass that info to setBlobProperties. However, you must copy 5 separate values by hand. That makes it easy for people to miss some, which could lead to data loss in some situations.

I recommend that the common properties be refactored out into a separate class, which can easily be passed from the get to the set. The common properties are:

  • CacheControl
  • ContentEncoding
  • ContentLanguage
  • ContentMD5
  • ContentType
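
A sketch of the suggested refactoring, with illustrative class and field names (not the SDK's actual types): the five shared properties live in one object that can be read in full and passed back in full, so nothing is silently cleared.

```java
// Illustrative sketch: one class holding the five properties shared by
// getBlobProperties and setBlobProperties, copied wholesale.
public class BlobContentProperties {
    String cacheControl;
    String contentEncoding;
    String contentLanguage;
    String contentMD5;
    String contentType;

    // Copy every field so no property is accidentally dropped.
    BlobContentProperties copy() {
        BlobContentProperties c = new BlobContentProperties();
        c.cacheControl = cacheControl;
        c.contentEncoding = contentEncoding;
        c.contentLanguage = contentLanguage;
        c.contentMD5 = contentMD5;
        c.contentType = contentType;
        return c;
    }

    public static void main(String[] args) {
        // Simulate: fetch properties, alter only the one you care about.
        BlobContentProperties fetched = new BlobContentProperties();
        fetched.contentType = "image/png";
        fetched.cacheControl = "max-age=3600";

        BlobContentProperties toSet = fetched.copy();
        toSet.contentType = "image/jpeg";
        System.out.println(toSet.contentType + " " + toSet.cacheControl);
    }
}
```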

Enable the configuration of storage account and key in unit test

Currently, no; to run the tests for Table you need to change the account string in TableTestBase.java. If you would like to provide more information on how ongoing testing is going to be accomplished (i.e. config, reporting, etc.), then we can modify our tests to fit that. For this release, however, we will continue to test as is, so please let us know when a build is available.

thx

joe

From: Jason Cooke
Sent: Wednesday, February 22, 2012 10:12 PM
To: Joe Giardino; Jeff Irwin; Deepak Verma; Joost de Nijs; Jai Haridas; Metodi Mladenov; Jeff Wilcox
Cc: Mohit Srivastava; Louis DeJardin
Subject: RE: Shipping Java Tables

Can the unit tests be run without changing the test strings in the unit tests?

For our unit tests, they look at the env variables to modify the default strings. That makes it easy to run the unit tests on our CI server (and on testers' machines).

Thanks,
Jason

Get error when try to delete blob snapshot

The message sent by using this code:

    service.createPageBlob(container, blob, 512);
    CreateBlobSnapshotResult snapshot = service.createBlobSnapshot(container, blob);
    options = new DeleteBlobOptions();
    options.setSnapshot(snapshot.getSnapshot());
    service.deleteBlob(container, blob, options);

is

DELETE http://XXX.blob.core.windows.net/XXX/XXX?snapshot=2012-05-21T16:15:40.1301586Z HTTP/1.1
x-ms-version: 2011-08-18
x-ms-delete-snapshots: include
...

which returns

HTTP/1.1 400 Value for one of the query parameters specified in the request URI is invalid.
QueryParameterName: snapshot
QueryParameterValue: 2012-05-21T16:15:40.1301586Z
Reason: This operation is only allowed on the root blob. Snapshot should not be provided.

The reason for this is explained (somewhat) in the documentation at http://msdn.microsoft.com/en-us/library/windowsazure/dd179413:

x-ms-delete-snapshots: {include, only}
...
This header should be specified only for a request against the base blob resource. If this header is specified on a request to delete an individual snapshot, the Blob service returns status code 400 (Bad Request).

There are a few potential fixes.

  1. Change the options class to make the DeleteSnapshotsOnly property nullable, and not include the header when it is null. But that makes the user do more work, because they would be unable to delete a blob that has associated snapshots unless they explicitly set DeleteSnapshotsOnly to false. Still, that might be more appropriate for a service layer.
  2. Only add the x-ms-delete-snapshots header if no snapshot id is provided, and silently ignore the user-supplied DeleteSnapshotsOnly property.
  3. Same as (2), but throw an exception (such as an invalid-argument exception) to indicate that it is an error to specify both DeleteSnapshotsOnly and a snapshot.

I'm leaning toward (1), because that is most in line with what a service layer should do. A higher-level convenience layer can add in trickier logic for inferring the correct headers.

support top for table query

From ogail,

By only supporting Filter and removing Query, you removed support for the $top query option. I know that $select is not working on tables, but $top does work. Also, there could be extra query options added in the future that will be used for tables, so why make the code here so dependent on the current case?

From jason,
Good point. Top should be added.

Not able to set properties on CloudBlob

When uploading to block blob storage, I want to change the ContentType property for the blob. But the setProperties() method is protected. Should it be public? Or is there another way to make the change?

maven plugin for azure

We work on Azure with Java and would find a Maven plugin very useful.

I have already started work on this, and so far have a plugin that can deploy Maven artifacts to Azure blob storage; I could contribute this.

Our startup script then pulls this artifact out of blob storage and runs it...

Is Maven on the plan? I'd be keen to contribute to this...

Source access conditions not honored by BlobRestProxy.copyBlob

Regardless of what source access conditions are provided in the options of copyBlob, the calls succeed. This is because of an issue in PipelineHelpers.addOptionalSourceAccessContitionHeader, reproduced here:

            switch (accessCondition.getHeader()) {
                case IF_MATCH:
                    headerName = "x-ms-source-if-match";
                case IF_UNMODIFIED_SINCE:
                    headerName = "x-ms-source-if-unmodified-since";
                case IF_MODIFIED_SINCE:
                    headerName = "x-ms-source-if-modified-since";
                case IF_NONE_MATCH:
                    headerName = "x-ms-source-if-none-match";
                default:
                    headerName = "";
            }

The problem is that in Java, one case cascades into the next, so regardless of the value of the Header property, the headerName is always the empty string.

The fix is to add a break; statement at the end of each case block.
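
The corrected mapping can be illustrated outside the SDK. Here each case returns immediately, which is equivalent to ending it with break; the enum and method names below are illustrative, not the SDK's:

```java
public class SourceAccessConditionHeader {
    enum AccessConditionHeaderType {
        IF_MATCH, IF_UNMODIFIED_SINCE, IF_MODIFIED_SINCE, IF_NONE_MATCH, NONE
    }

    // Corrected mapping: each case exits the switch, so the header name
    // no longer falls through to the default empty string.
    static String headerNameFor(AccessConditionHeaderType header) {
        switch (header) {
            case IF_MATCH:
                return "x-ms-source-if-match";
            case IF_UNMODIFIED_SINCE:
                return "x-ms-source-if-unmodified-since";
            case IF_MODIFIED_SINCE:
                return "x-ms-source-if-modified-since";
            case IF_NONE_MATCH:
                return "x-ms-source-if-none-match";
            default:
                return "";
        }
    }

    public static void main(String[] args) {
        System.out.println(headerNameFor(AccessConditionHeaderType.IF_MATCH));
        // x-ms-source-if-match
    }
}
```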

CloudQueue.retrieveMessage ignores QueueRequestOptions and OperationContext Parameters

From version 0.2.1 code (lines 1056 - 1060 in CloudQueue.java):

    @DoesServiceRequest
    public CloudQueueMessage retrieveMessage(final int visibilityTimeoutInSeconds, final QueueRequestOptions options,
            final OperationContext opContext) throws StorageException {
        return getFirstOrNull(this.retrieveMessages(1, visibilityTimeoutInSeconds, null, null));
    }

The QueueRequestOptions and OperationContext parameters are ignored and never used :(

Base64 decoder doesn't support new lines -- breaks winazurestorage.py, waz-storage

While creating an item with this SDK and reading it back works (I tried it), I cannot create an item with waz-storage or winazurestorage.py and read it from Java.

I made a small test case in this repo to demonstrate the problem using Ruby and Java. Here's the output I get:

Running test...
Sending message...
Waiting 1 second...
Exception in thread "main" java.lang.IllegalArgumentException: The String is not
a valid Base64-encoded string.
        at com.microsoft.windowsazure.services.core.storage.utils.Base64.decode(Base64.java:84)
        at com.microsoft.windowsazure.services.queue.client.CloudQueueMessage.getMessageContentAsString(CloudQueueMessage.java:179)
        at Receiver.main(Receiver.java:16)
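
The underlying behavior can be reproduced with the JDK's own decoders (Java 8+, not the SDK-internal class the stack trace shows): a strict decoder rejects line separators, while a MIME-style decoder skips them, which is the lenient behavior needed for output from the other SDKs.

```java
import java.util.Base64;

public class Base64NewlineDemo {
    public static void main(String[] args) {
        // Some libraries wrap Base64 output with CRLF line breaks.
        String wrapped = "aGVsbG8gd29y\r\nbGQ=";

        // The strict decoder throws on any non-alphabet character.
        try {
            Base64.getDecoder().decode(wrapped);
        } catch (IllegalArgumentException e) {
            System.out.println("strict decoder rejects newlines");
        }

        // The MIME decoder ignores line separators, as RFC 2045 allows.
        String decoded = new String(Base64.getMimeDecoder().decode(wrapped));
        System.out.println(decoded); // hello world
    }
}
```

A fix along these lines (tolerating line breaks on decode) would make the Java SDK interoperate with messages produced by winazurestorage.py and waz-storage.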

Queue: updateMessage exposes bug in Queue service

Copied from issue 207 on the private repository.

According to http://msdn.microsoft.com/en-us/library/windowsazure/hh452234.aspx, the visibilitytimeout when updating "cannot be set to a value later than the expiry time", but that does not seem to be enforced. Here are HTTP traces showing this: first I set the TTL to 6 seconds, then update the visibility timeout to 8 seconds. I'm entering this issue into our database for tracking; I don't know where to enter the issue for the Queue service team.

POST http://azuresdkdev.queue.core.windows.net/qa-691182-a1/messages?timeout=4&messagettl=6
HTTP/1.1 201 Created

GET http://azuresdkdev.queue.core.windows.net/qa-691182-a1/messages
HTTP/1.1 200 OK

MessageId:       c63223fa-4825-4c08-8ac8-f069d73fe856
InsertionTime:   Wed, 18 Jan 2012 21:15:13 GMT
ExpirationTime:  Wed, 18 Jan 2012 21:15:19 GMT
DequeueCount:    1
PopReceipt:      AgAAAAEAAAApAAAAWBVjVibWzAE=
TimeNextVisible: Wed, 18 Jan 2012 21:15:43 GMT
MessageText:     foo bar

PUT http://azuresdkdev.queue.core.windows.net/qa-691182-a1/messages/c63223fa-4825-4c08-8ac8-f069d73fe856?popreceipt=AgAAAAEAAAApAAAAWBVjVibWzAE%3D&visibilitytimeout=8
HTTP/1.1 204 No Content
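The traces above can be reproduced from the queue client; a sketch, assuming addMessage/updateMessage overloads of this SDK era (verify the signatures against your version):

```java
// Hypothetical repro sketch -- verify these overloads exist in your SDK version.
CloudQueueMessage msg = new CloudQueueMessage("foo bar");
queue.addMessage(msg, 6, 0, null, null);       // TTL = 6 seconds, no visibility delay
CloudQueueMessage m = queue.retrieveMessage();
queue.updateMessage(m, 8);                     // visibility timeout past expiry -- accepted anyway
```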

Blob and Queue: ServiceProperties.Logging backing fields should be "boolean"

In ServiceProperties.Logging, the Read/Write/Delete properties are "boolean" (basic true/false), while the backing fields are "Boolean" (nullable, so true/false/null).

This means the following code throws a NullPointerException:

Logging l = new Logging();
l.isDelete();

The fix is to make the backing fields the primitive "boolean", which defaults to false instead of null.
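A minimal reconstruction of the bug (the class and field names here are illustrative, not the SDK's exact code): unboxing a null Boolean in the getter is what throws.

```java
public class LoggingNpeDemo {
    // Boxed backing field, as in the buggy ServiceProperties.Logging.
    static class BoxedLogging {
        private Boolean delete;                            // defaults to null
        public boolean isDelete() { return this.delete; }  // unboxes null -> NPE
    }

    // Primitive backing field, as in the proposed fix.
    static class PrimitiveLogging {
        private boolean delete;                            // defaults to false
        public boolean isDelete() { return this.delete; }
    }

    public static void main(String[] args) {
        System.out.println(new PrimitiveLogging().isDelete()); // prints "false"
        try {
            new BoxedLogging().isDelete();
        } catch (NullPointerException e) {
            System.out.println("NPE from auto-unboxing a null Boolean");
        }
    }
}
```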

Base64 internal use only

When computing an MD5 checksum, I need to use the Base64 encoder class in the SDK, because other Base64 encoders (Apache Commons) generate a different string.

But the Base64 class in the SDK is commented as 'INTERNAL USE ONLY'. Is there any reason I should not use this encoder? And if I can't use it, how should I produce the MD5 string to be uploaded and checked by the server?
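For reference, the Content-MD5 value the storage service checks is the standard Base64 encoding of the raw 16-byte MD5 digest, so no internal class is strictly needed — a sketch using only the JDK (java.util.Base64 requires Java 8, which this repository baselines on):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class ContentMd5 {
    // Compute the Base64-encoded MD5 digest of a payload -- the format
    // expected in the HTTP Content-MD5 header.
    static String contentMd5(byte[] data) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5").digest(data);
            return Base64.getEncoder().encodeToString(digest);
        } catch (NoSuchAlgorithmException e) {
            // MD5 is required of every JCA provider, so this cannot happen.
            throw new IllegalStateException("MD5 not available", e);
        }
    }

    public static void main(String[] args) {
        // prints "XUFAKrxLKna5cZ2REBfFkg=="
        System.out.println(contentMd5("hello".getBytes(StandardCharsets.UTF_8)));
    }
}
```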

Blob storage hangs for files > about 3500 KB

Guys,

    InputStream is0 = new BufferedInputStream(new FileInputStream(f));
    Date now = new Date();
    System.out.println("uploading...@" + now + " " + f);
    String storageConnectionString = "DefaultEndpointsProtocol=http"
            + ";AccountName=mikebell"
            + ";AccountKey=";
    CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);

    CloudBlobClient bc = storageAccount.createCloudBlobClient();
    CloudBlobContainer c = bc.getContainerReference("retain");
    CloudBlockBlob blob = c.getBlockBlobReference(f.getName());
    blob.upload(is0, f.length());
    System.out.println("File uploaded... took " + (System.currentTimeMillis() - now.getTime()) + " ms");

This hangs forever for me if the file exceeds about 3500 KB.

Attempts to diagnose:

  1. Tried many files between 3500-7000 KB. All fail past the threshold, which for me was about 3500-3800 KB.
  2. Files that fail do so consistently.
  3. A .NET explorer GUI uploads the same files successfully.
  4. Windows 7 x64 laptop, plenty of RAM (8 GB).
  5. JVM given 1 GB of RAM via -Xmx.
  6. Run from Eclipse 3.7.1 (run as Java application).
  7. Java 1.6.0_24 JDK (Oracle).
  8. Varying memory settings and buffered streams made no difference.
  9. A stack trace showed many threads all blocking while uploading in HttpURLConnection, with the main coordinator thread waiting on take() (pretty much as expected for a thread pool).
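Since the hang starts right around a size threshold, it is plausible the upload switches from a single PUT to the parallel block path there. As a diagnostic workaround — a sketch only, assuming your SDK version exposes these setters on CloudBlobClient (names may differ between releases) — forcing a single request path may isolate the problem:

```java
// Hypothetical tuning sketch -- verify these setters exist in your SDK version.
CloudBlobClient bc = storageAccount.createCloudBlobClient();
bc.setSingleBlobPutThresholdInBytes(8 * 1024 * 1024); // single PUT for blobs under 8 MB
bc.setConcurrentRequestCount(1);                      // disable parallel block upload
```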
