azure / azure-storage-net
Microsoft Azure Storage Libraries for .NET
License: Apache License 2.0
Hi,
I'm currently developing a WinRT app using MVVM. I've tried to bind my storage classes (which derive from TableEntity) to the UI. This works fine as long as nothing changes in the objects, but if I change one of the class properties it obviously doesn't update the UI.
I tried to have my classes implement INotifyPropertyChanged as well, but it seems that deriving from TableEntity prevents that from working.
I think it would be a great addition to have TableEntity implement INotifyPropertyChanged and add an OnPropertyChanged method that could be called in the property setters. Something like the following:
private string _name;

[DataMember(Name = "Name")]
public string Name
{
    get
    {
        return _name;
    }
    set
    {
        if (_name == value)
            return;
        _name = value;
        OnPropertyChanged("Name");
    }
}
This would allow developers to use TableEntity objects directly in their MVVM apps and in other scenarios that call for property change notifications.
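Filled out into a self-contained class, the suggestion might look like this. This is a sketch: it omits the TableEntity base class so it stands alone, and OnPropertyChanged is the proposed helper, not an existing SDK member.

```csharp
using System.ComponentModel;

// Hypothetical: what a notifying entity could look like if TableEntity
// implemented INotifyPropertyChanged. The TableEntity base is omitted here
// so the sketch compiles on its own.
public class NotifyingEntity : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // The proposed helper: raises PropertyChanged for the given property name.
    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }

    private string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value)
                return; // skip the notification when nothing changed
            _name = value;
            OnPropertyChanged("Name");
        }
    }
}
```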
I believe this is not such a huge or complicated change for the team to implement. Nevertheless, I'm willing to contribute the necessary code if that is worth the effort of going through an external contribution.
Looking forward to your feedback!
Cheers,
At the moment the tests are written so that they use .Wait() to test the async methods (https://github.com/Azure/azure-storage-net/blob/master/Test/ClassLibraryCommon/Blob/LeaseTests.cs). This is all fine and the tests will pass, but as soon as you test a method with the await keyword, it will deadlock if there is no lease to release. The deadlock happens because the method apparently doesn't handle the errors correctly. This makes testing all the lease-related async methods really difficult in real-life situations.
I want to update because it is useful as a change summary.
It seems like every time you guys push out a new release (YAY!) it 400s the current version of the Storage Emulator (boo!).
It seems to me that if you rolled the emulator into this solution, not only would it simplify keeping the emulator up to date with the latest and greatest (by force of failed tests, yeah, but still...), but it would also allow us users easy and quick access to the updated emulator, making it super easy to get up and running after an update.
I'm writing this now prior to my tracking down the latest version of the storage emulator. Experience has taught me that I may end up chucking my laptop across the room in frustration trying to track it down and install it correctly.
Currently the readme title is
The following sounds more accurate:
Given these 2 files and lines:
There is a difference when they get appended later:
The TableQuery one replaces accents, the TableQueryGeneric one doesn't. I assume this is a mistake, since passing in a string with a quote will break it.
There aren't any XML documentation files delivered with the NuGet package, which is uncool for a number of reasons obvious to most devs.
In CloudBlobContainer.cs:2568 (method DeleteContainerImpl), the variable is named putCmd instead of deleteCmd (the name that appears in the DeleteBlobImpl methods). Seems totally harmless, just a naming issue.
Is a version of WindowsAzure.Storage for PCL projects (not just universal apps) on the roadmap? That would make it super easy to use in an MVVM context where you have a PCL for all model code that could be shared with Universal apps, Xamarin.Android, Xamarin.iOS, and other platforms.
While serializing a RequestResult using its WriteXML method, I noticed a comment was being written to the stream:
<!--An exception has occurred. For more information please deserialize this message via RequestResult.TranslateFromExceptionMessage.-->
Now take a look at line 240 here:
writer.WriteComment(SR.ExceptionOccurred);
What's the purpose behind this?
Thanks,
Felipe
Is there any reason why EchoContent can only be set on TableOperations of type Insert? We're primarily using Upsert (to make our data import idempotent) and aren't really interested in the response of these messages, so we would like to get the performance gain of not transferring and processing the data.
I am using client v3.0.3.0 on Mac OS X with Mono. All DELETE requests (delete container, CloudBlockBlob.Delete, CloudPageBlob.Delete) are failing with HTTP 403 Forbidden:
Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (403) Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.. ---> System.Exception: The remote server returned an error: (403) Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature..
at System.Net.HttpWebRequest.CheckFinalStatus (System.Net.WebAsyncResult result) [0x0030c] in /private/tmp/source/bockbuild-mono-3.2.6/profiles/mono-mac-xamarin/build-root/mono-3.2.6/mcs/class/System/System.Net/HttpWebRequest.cs:1606
Seems like a signing issue. Repro is rather easy:
var account = new CloudStorageAccount (new Microsoft.WindowsAzure.Storage.Auth.StorageCredentials (STORAGE_ACCT, STORAGE_KEY), true);
var client = account.CreateCloudBlobClient ();
var container = client.GetContainerReference (Guid.NewGuid().ToString());
container.Delete (); // throws exception
It was also the same on v3.0.2.x, I just upgraded today to see if it's a known issue. Same version works fine on Windows. And please note, it's just DELETE operations, all others work fine.
Here's a dump of RequestEventArgs.Request.Header.ToString()
caught in OperationContext.ResponseReceived
event on both platforms,
OS X:
{User-Agent: WA-Storage/3.0.3 (.NET CLR 4.0.30319.17020; Unix 13.0.0.0)
x-ms-version: 2013-08-15
x-ms-client-request-id: b90b3f36-1f88-4150-a71c-20f1502a8e96
x-ms-date: Wed, 12 Feb 2014 06:27:20 GMT
Authorization: SharedKey f00f00f00:x+0U7Z9ggFl3MHuYfvrOR3wgieFLhQr/5j+sjaSxBnc=
Content-Length: 0
Connection: keep-alive
Host: f00f00f00.blob.core.windows.net
}
Windows:
{User-Agent: WA-Storage/3.0.3 (.NET CLR 4.0.30319.34003; Win32NT 6.2.9200.0)
x-ms-version: 2013-08-15
x-ms-client-request-id: e34b16cc-2921-4493-888c-307994d532d5
x-ms-date: Wed, 12 Feb 2014 06:29:47 GMT
Authorization: SharedKey f00f00f00:I9Gd72nFal6v8IPOYigOQsiJCYq90/VkDTLEsWGYWR8=
Host: f00f00f00.blob.core.windows.net
Connection: Keep-Alive
}
The only difference I see is Content-Length: 0, probably added on Windows while actually sending the request. I'll try to see if it is missing while signing. I'll be investigating a bit.
Hi
There is a bug in Json.NET that the Windows Azure Storage Client 3.0 is exposed to: if the storage client is run against Json.NET 5.0.4 or earlier, it will throw an error when foreach-ing over an array.
The fix is really simple. Change this line - https://github.com/WindowsAzure/azure-storage-net/blob/c9d52db3f18f971933111f5ba3f7ce4e79927a73/Lib/ClassLibraryCommon/Table/Protocol/TableOperationHttpResponseParsers.cs#L364 - to this:
JToken dataTable = dataSet["value"];
Removing the cast to JArray will stop your library from using the bad GetEnumerator method.
I'm going to fix this bug in Json.NET 6.0. At some point in the future when you upgrade to it you can choose to revert this change if you want.
The following link is broken
Storage Client Library Reference for .NET - MSDN
With the switch to using Json as the default PayloadFormat, a bug has been exposed in OData that causes partition keys or row keys with char.MaxValue in the string to fail.
The workaround I've used is to set the PayloadFormat back to AtomPub but I'd like to use the Json format.
Feel free to close this bug if it's not useful, I just wasn't seeing any action on the issue I posted in the OData Codeplex repo.
https://gist.github.com/s093294/bc37a83fc93995b8dbbe
In the gist I listed my test program that shows how things worked in 3.x but not in 4.x with ListBlobs.
Due to the number of files, it's hard to find out which file causes the issue.
This is just an improvement proposal/question.
According to this doc the "sp" (signed permissions) query parameter is required in all valid signatures.
The code in SharedAccessSignatureHelper::GetSignature
does not add the "sp" query parameter if the permissions string is empty (which could occur if the flags are incorrectly passed in, believe me :)), which generates what I understand is an invalid SAS.
string permissions = SharedAccessBlobPolicy.PermissionsToString(policy.Permissions);
if (!string.IsNullOrEmpty(permissions))
{
AddEscapedIfNotNull(builder, Constants.QueryConstants.SignedPermissions, permissions);
}
Should an exception be thrown notifying that no permissions were specified?
Byte[] bs = Encoding.UTF8.GetBytes(message);
await blob.UploadFromByteArrayAsync(bs, 0, bs.Count());
hangs (both emulator and real device) if bs.Count() is >65K; if I limit it to <65K it runs fine.
Regards
Hello
As far as I can see, none of the operations that modify objects (CRUD minus the R :) )
returns a job/task id. For example, in the Azure SDK for .NET most responses have a RequestId (for example OperationResponse) so we can track status. I can see two async variants: one that returns a Task and a second that returns an IAsyncResult. Theoretically they should solve this problem (if they poll the server for progress until completion; as far as I can see, create snapshot, for example, returns x-ms-request-id). But either way we lose persistence: we cannot save the task to a database so that the main process just stores the task id and another process periodically checks it. In addition, with snapshots for example, the Task becomes completed and the blob is available over the API, but the physical copying process is not finished. So I am not sure that the storage client really checks the job id (or maybe this is an API problem that reports completed before the copy really finishes).
Any ideas are welcome.
The Retrieve method in the TableOperation class, and any other place that asserts on partitionKey and rowKey, should assert NotNullOrEmpty instead of NotNull.
CloudBlockBlob.DownloadText() behaves differently than File.ReadAllText in respect to UTF8 pre-amble/BOM
Repro:
Create an XML File in Visual Studio and upload it to a Cloud Blob container. The file will begin with a BOM (EF BB BF). Then download it using CloudBlockBlob.DownloadText() and pass the resulting string to XDocument.Parse. The parser will fail with XMLException - "Data at the root level is invalid. Line 1, position 1.".
Failing code:
var storageAccount = CloudStorageAccount.Parse("connectionString");
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("MyContainer");
var blob = container.GetBlockBlobReference("my.xml");
var s = blob.DownloadText();
var x = XDocument.Parse(s);
A workaround suggested at http://stackoverflow.com/questions/2111586/parsing-xml-string-to-an-xml-document-fails-if-the-string-begins-with-xml by Dave Cluderay suggests passing the read string through StreamReader.
Working code:
var storageAccount = CloudStorageAccount.Parse("connectionString");
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("MyContainer");
var blob = container.GetBlockBlobReference("my.xml");
var s = blob.DownloadText();
using (var memoryStream = new MemoryStream(Encoding.UTF8.GetBytes(s)))
using (var streamReader = new StreamReader(memoryStream))
{
    var x = XDocument.Load(streamReader);
}
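If the only offending character is the single leading U+FEFF, a lighter-weight workaround (a sketch, not an official API) is to trim it off the string before parsing:

```csharp
using System;
using System.Xml.Linq;

public static class BomHelper
{
    // Strips a leading byte order mark (U+FEFF), if present, so the string
    // can be handed straight to XDocument.Parse.
    public static string StripBom(string s)
    {
        return s.TrimStart('\uFEFF');
    }
}

// Hypothetical usage, assuming "blob" is a CloudBlockBlob:
// var x = XDocument.Parse(BomHelper.StripBom(blob.DownloadText()));
```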
Moved from Azure/azure-sdk-for-net#626
The library executor automatically disposes the MultiBufferMemoryStream instance passed to any blob API call, for example CloudBlockBlob.UploadFromStreamAsync. This causes undefined behavior if the stream instance is reused after the library call ends.
Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Core.MultiBufferMemoryStream.Dispose(bool disposing)
mscorlib.dll!System.IO.Stream.Close()
Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Core.Executor.ExecutionState<Microsoft.WindowsAzure.Storage.Core.NullType>.CheckDisposeSendStream()
Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Core.Executor.ExecutionState<Microsoft.WindowsAzure.Storage.Core.NullType>.Dispose(bool disposing)
Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Core.Util.StorageCommandAsyncResult.Dispose()
Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Core.Util.StorageCommandAsyncResult.End()
Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync<Microsoft.WindowsAzure.Storage.Core.NullType>(System.IAsyncResult result)
Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob.UploadFromStreamHandler.AnonymousMethod__14(System.IAsyncResult ar)
The error is in function ListQueuesImpl at line 165:
List<CloudQueue> queuesList = listQueuesResponse.Queues.Select(item => new CloudQueue(item.Name, this)).ToList();
When the CloudQueue object is created, the Metadata information is lost.
It exists in listQueuesResponse.Queues but is not moved over to the CloudQueue object.
When you try to execute a batch in which there are two items with the exact same partition key and row key, the library throws the following exception:
Additional information: Unexpected response code for operation : 1
This exception does not really disclose what went wrong. Perhaps it would be better to show something like:
Could not insert two items with the same partition key and row key.
The code to reproduce this issue is as follows:
CloudStorageAccount storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("demo");
table.CreateIfNotExists();
TableBatchOperation batch = new TableBatchOperation
{
TableOperation.InsertOrReplace(new TableEntity { PartitionKey = "NBA", RowKey = "Lakers" }),
TableOperation.InsertOrReplace(new TableEntity { PartitionKey = "NBA", RowKey = "Lakers" }),
};
table.ExecuteBatch(batch);
We ran into a problem the other day using DownloadTextAsync.
A little background: we are creating cscfg/cspkg files from Visual Studio, uploading them to blob storage, and using them in an automated deployment process. When saved with Visual Studio, the cscfg files are stored as XML files with an encoding signature character (a BOM) as the first character of the file.
When using DownloadTextAsync on the blob, it does not remove this signature character but rather includes it in the returned string. If you try doing XDocument.Parse(await blob.DownloadTextAsync()) you will get a parse error, because the first character is not '<' but the encoding signature.
Seems that StorageException.GetObjectData needs to be marked [SecurityCritical].
Received this error:
A first chance exception of type 'System.TypeLoadException' occurred in Microsoft.WindowsAzure.Storage.dll
Additional information: Inheritance security rules violated while overriding member: 'Microsoft.WindowsAzure.Storage.StorageException.GetObjectData(System.Runtime.Serialization.SerializationInfo,
at Microsoft.WindowsAzure.Storage.Blob.CloudBlobContainer.CreateIfNotExists(BlobContainerPublicAccessType accessType, BlobRequestOptions requestOptions, OperationContext operationContext)
at Microsoft.WindowsAzure.Storage.Blob.CloudBlobContainer.CreateIfNotExists(BlobRequestOptions requestOptions, OperationContext operationContext)
Received the error on code running in an AppDomain with restricted permissions. .Net doesn't seem to perform this validation by default.
Using WindowsAzure.Storage-Preview 3.0.1.0-preview from NuGet. Created a simple Windows Phone 8.0 app. All it does is insert a table entity. This fails on device without a debugger attached. WITH a debugger attached OR in the emulator it works fine.
public partial class MainPage : PhoneApplicationPage
{
    // Constructor
    public MainPage()
    {
        InitializeComponent();
        this.Loaded += MainPage_Loaded;
    }

    async void MainPage_Loaded(object sender, RoutedEventArgs e)
    {
        try
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connection);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            CloudTable table = tableClient.GetTableReference("ratings");
            bool createresult = await table.CreateIfNotExistsAsync();
            Rating rating = new Rating(Guid.NewGuid().ToString(), 1, "[email protected]");
            TableOperation operation = TableOperation.InsertOrReplace(rating);
            var insertresult = await table.ExecuteAsync(operation);
        }
        catch (Exception ex)
        {
            Output.Text += "Failed " + ex.ToString() + "\n";
        }
    }
}

public class Rating : TableEntity
{
    public Rating(string product, int value, string user)
    {
        PartitionKey = product;
        RowKey = user;
        Value = value;
    }

    public Rating() { }

    public int Value { get; set; }
}
This produces the following exception
Microsoft.WindowsAzure.Storage.StorageException: The argument 'offset' is larger than maximum of '3075'
Parameter name: offset ---> System.ArgumentOutOfRangeException: The argument 'offset' is larger than maximum of '3075'
Parameter name: offset
at Microsoft.WindowsAzure.Storage.Core.Util.CommonUtility.AssertInBounds[T](String paramName, T val, T min, T max)
at Microsoft.WindowsAzure.Storage.Core.MultiBufferMemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.StreamWriter.Flush(Boolean flushStream, Boolean flushEncoder)
at System.IO.StreamWriter.Flush()
at Microsoft.Data.OData.Json.IndentedTextWriter.Flush()
at Microsoft.Data.OData.Json.JsonWriter.Flush()
at Microsoft.Data.OData.Json.ODataJsonOutputContextBase.Flush()
at Microsoft.Data.OData.JsonLight.ODataJsonLightWriter.FlushSynchronously()
at Microsoft.Data.OData.ODataWriterCore.Flush()
at Microsoft.Data.OData.ODataWriterCore.WriteEnd()
at Microsoft.WindowsAzure.Storage.Table.Protocol.TableOperationHttpWebRequestFactory.WriteOdataEntity(ITableEntity entity, TableOperationType operationType, OperationContext ctx, ODataWriter writer)
at Microsoft.WindowsAzure.Storage.Table.Protocol.TableOperationHttpWebRequestFactory.BuildRequestForTableOperation(Uri uri, UriQueryBuilder builder, IBufferManager bufferManager, Nullable`1 timeout, TableOperation operation, OperationContext ctx, TablePayloadFormat payloadFormat, String accountName)
at Microsoft.WindowsAzure.Storage.Table.TableOperation.<>c__DisplayClassa.<InsertImpl>b__7(Uri uri, UriQueryBuilder builder, Nullable`1 timeout, OperationContext ctx)
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ProcessStartOfRequest[T](ExecutionState`1 executionState, String startLogMessage)
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.InitRequest[T](ExecutionState`1 executionState)
--- End of inner exception stack trace ---
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync[T](IAsyncResult result)
at Microsoft.WindowsAzure.Storage.Table.CloudTable.EndExecute(IAsyncResult asyncResult)
at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.<>c__DisplayClass11.<CreateCallback>b__0(IAsyncResult ar)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at AzureStorageTest.MainPage.<MainPage_Loaded>d__0.MoveNext()
Request Information
RequestID:
RequestDate:
StatusMessage:
Right now, the async equivalents of ListBlobs/ListContainers are ListBlobsSegmentedAsync/ListContainersSegmentedAsync. These segmented methods need to be used in a loop that passes the ContinuationToken between HTTP requests to get all blobs (just like the ListBlobs method does).
Many people get caught in this pitfall and think the segmented methods are direct equivalents of the sync methods, and therefore they only get the first (e.g.) 5,000 results from the REST API, which mostly introduces a bug later on. I have seen many people using it improperly like this.
Therefore, adding helper methods ListBlobsAsync/ListContainersAsync would certainly help people find the async equivalents of those sync methods, and they wouldn't need to go back and forth between the MSDN docs and VS.
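The continuation-token loop such helpers would encapsulate can be sketched generically. Segment and fetchSegment below are hypothetical stand-ins for the SDK's BlobResultSegment and ListBlobsSegmentedAsync, used so the pattern stands alone:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical stand-in for the SDK's segment/continuation types.
public class Segment<T>
{
    public IReadOnlyList<T> Results;
    public object ContinuationToken; // null when there are no more pages
}

public static class Paging
{
    // Drains every page by feeding the continuation token back in until the
    // service signals completion with a null token -- the loop that the
    // proposed ListBlobsAsync/ListContainersAsync helpers would hide.
    public static async Task<List<T>> ListAllAsync<T>(
        Func<object, Task<Segment<T>>> fetchSegment)
    {
        var all = new List<T>();
        object token = null;
        do
        {
            Segment<T> segment = await fetchSegment(token);
            all.AddRange(segment.Results);
            token = segment.ContinuationToken;
        } while (token != null);
        return all;
    }
}
```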
Hi,
I am interested in designing an optional caching system for the table storage client so queries (that are explicitly marked "cachable") can use a local (possibly distributed) cache of Json responses instead of executing the query against the table storage endpoint.
Can the developers offer any insights as to where I should best look at implementing this feature?
Hi!
I am really missing PCL support! It would be great if you include it.
Here's a connection string which uses a SAS instead of the traditional account name / key authentication:
TableEndpoint=http://....table.core.windows.net/;SharedAccessSignature=?sv=2014-02-14&tn=MyTable&sig=MySig&se=2114-09-28T19%3A28%3A32Z&sp=au;
This used to work in a previous version of the SDK, but with the latest version of the SDK this is no longer supported. This is caused by some validation code in the CloudStorageAccount class (https://github.com/Azure/azure-storage-net/blob/master/Lib/Common/CloudStorageAccount.cs)
if (splittedNameValue.Length != 2)
{
error("Settings must be of the form \"name=value\".");
return null;
}
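A sketch of how the parser could tolerate '=' characters inside the value (a hypothetical helper, not the actual CloudStorageAccount code): split each setting on the first '=' only, so a SharedAccessSignature containing '=' survives intact.

```csharp
using System;
using System.Collections.Generic;

public static class ConnectionStringParser
{
    // Splits "name=value" on the FIRST '=' only, so values that themselves
    // contain '=' (like a SharedAccessSignature) are preserved.
    public static KeyValuePair<string, string> SplitSetting(string setting)
    {
        string[] parts = setting.Split(new[] { '=' }, 2);
        if (parts.Length != 2)
            throw new FormatException("Settings must be of the form \"name=value\".");
        return new KeyValuePair<string, string>(parts[0], parts[1]);
    }
}
```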
Pulled the NuGet package and spent a lot of time writing the code to use it. Everything went well and works great!
Then I went to submit for certification:
Supported APIs
•Error Found: The supported APIs test detected the following errors:◦This API is not supported for this application type - Api=CryptAcquireContextW. Module=advapi32.dll. File=Microsoft.WindowsAzure.Storage.dll.
◦This API is not supported for this application type - Api=CryptCreateHash. Module=advapi32.dll. File=Microsoft.WindowsAzure.Storage.dll.
◦This API is not supported for this application type - Api=CryptDestroyHash. Module=advapi32.dll. File=Microsoft.WindowsAzure.Storage.dll.
◦This API is not supported for this application type - Api=CryptGetHashParam. Module=advapi32.dll. File=Microsoft.WindowsAzure.Storage.dll.
◦This API is not supported for this application type - Api=CryptHashData. Module=advapi32.dll. File=Microsoft.WindowsAzure.Storage.dll.
◦This API is not supported for this application type - Api=CryptReleaseContext. Module=advapi32.dll. File=Microsoft.WindowsAzure.Storage.dll.
•Impact if not fixed: Using an API that is not part of the Windows SDK for Windows Phone Store apps violates the Windows Phone Store certification requirements.
•How to fix: Review the error messages to identify the API that is not part of the Windows SDK for Windows Phone Store app. Please note, C++ apps that are built in a debug configuration will fail this test even if they only use APIs from the Windows SDK for Windows Phone Store apps.
When can we expect the ability to submit apps to the store that utilize this project??
We generated a SAS for table access and could not get access when using partition and row keys in the signature in conjunction with the CloudTable(uri) constructor.
var tableUri = table.Uri.AbsoluteUri;
var sharedAccessSignature = table.GetSharedAccessSignature(policy, null, partitionKey, rowKey, partitionKey, rowKey);
var cloudTable = new CloudTable(new Uri(tableUri + sharedAccessSignature));
var customer = GetCustomerToInsert();
cloudTable.Execute(TableOperation.InsertOrReplace(customer));
Because we used start and end row and partition keys in the signature and used the Uri-parsing constructor of CloudTable, we lost part of the signature parameters, resulting in an invalid signature.
This code works:
var tableUri = table.Uri.AbsoluteUri;
var sharedAccessSignature = table.GetSharedAccessSignature(policy, null, partitionKey, rowKey, partitionKey, rowKey);
var storageCredentials = new StorageCredentials(sharedAccessSignature);
var cloudTable = new CloudTable(new Uri(tableUri), storageCredentials);
var customer = GetCustomerToInsert();
cloudTable.Execute(TableOperation.InsertOrReplace(customer));
Thanks to the source code we found out that SharedAccessSignatureHelper in the ParseQuery method does not use the start and end keys from the signature to build the StorageCredentials.
How does the call lose the data?
public CloudTable(Uri tableAddress) : this(tableAddress, null /* credentials */)
public CloudTable(Uri tableAbsoluteUri, StorageCredentials credentials) : this(new StorageUri(tableAbsoluteUri), credentials)
public CloudTable(StorageUri tableAddress, StorageCredentials credentials)
{
this.ParseQueryAndVerify(tableAddress, credentials);
}
private void ParseQueryAndVerify(StorageUri address, StorageCredentials credentials)
{
...
this.StorageUri = NavigationHelper.ParseQueueTableQueryAndVerify(address, out parsedCredentials);
...
}
internal static StorageUri ParseQueueTableQueryAndVerify(StorageUri address, out StorageCredentials parsedCredentials)
{
...
return new StorageUri(
ParseQueueTableQueryAndVerify(address.PrimaryUri, out parsedCredentials),
...
}
private static Uri ParseQueueTableQueryAndVerify(Uri address, out StorageCredentials parsedCredentials)
{
...
parsedCredentials = SharedAccessSignatureHelper.ParseQuery(queryParameters, false);
...
}
internal static StorageCredentials ParseQuery(IDictionary<string, string> queryParameters, bool mandatorySignedResource)
This method only knows:
string signature = null;
string signedStart = null;
string signedExpiry = null;
string signedResource = null;
string signedPermissions = null;
string signedIdentifier = null;
string signedVersion = null;
The Microsoft.WindowsAzure.Storage library depends on version 5.6.0 of Microsoft.Data.Services.Client, but version 5.6.1 is installed.
If you try to install the latest Azure Storage libs into a portable library (Win8.1 and Phone 8.1), no references are added.
I tested compiling the source and adding the libraries manually; I could successfully create a container and place some files in it.
Looks like it is just a packaging issue.
This issue was originally opened in the azure-sdk-for-net
repo by @sitereactor, who commented the following:
I'm not sure if this is intentional or an oversight, so I thought I'd create an issue for it, as I just ran into a problem trying to read a DateTime object from an EntityProperty.
I'm extending TableEntity with the following (simplified) class impl.:
public class DictionaryTableEntity : TableEntity, IDictionary<string, EntityProperty>
{
private IDictionary<string, EntityProperty> _properties;
public DictionaryTableEntity()
{
_properties = new Dictionary<string, EntityProperty>();
}
public override void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
{
_properties = properties;
}
public override IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
{
return _properties;
}
// Remaining implementation is left out
}
When reading a value from the DictionaryTableEntity like this: entity["myDate"], I don't have the option of a typed DateTime value like entity["myDate"].DateTime.Value, because DateTime is internal. So I have to do something like this instead:
DateTime.Parse(entity["myDate"].PropertyAsObject.ToString())
This is using version 2.1.0.3 of the Windows Azure Storage SDK.
My app crashed during loading/uploading blobs.
Here is a Stack Trace:
There are no context policies.
System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.WindowsAzure.Storage.Core.Util.AsyncStreamCopier`1.ForceAbort(AsyncStreamCopier`1 copier, Boolean timedOut) in AsyncStreamCopier.cs:line 317
at Microsoft.WindowsAzure.Storage.Core.Util.AsyncStreamCopier`1.MaximumCopyTimeCallback(Object copier, Boolean timedOut) in AsyncStreamCopier.cs:line 305
at System.Threading._ThreadPoolWaitOrTimerCallback.WaitOrTimerCallback_Context(Object state, Boolean timedOut)
at System.Threading._ThreadPoolWaitOrTimerCallback.WaitOrTimerCallback_Context_t(Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading._ThreadPoolWaitOrTimerCallback.PerformWaitOrTimerCallback(Object state, Boolean timedOut)
Moving this from Azure/azure-sdk-for-net#131 as suggested by stankovski. I still have the same problem. Also see http://stackoverflow.com/questions/13456606/azure-access-denied-on-shared-access-signature-for-storage-2-0.
Given the following code:
var blobClient = account.CreateCloudBlobClient();
var container = blobClient.GetContainerReference(containerName);
CloudBlockBlob _blockblob = container.GetBlockBlobReference(fileName);
var sharedAccessPolicy = new SharedAccessBlobPolicy
{
SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-10),
SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(30),
Permissions = SharedAccessBlobPermissions.Read
};
var sharedAccessSignature = _blockblob.GetSharedAccessSignature(sharedAccessPolicy);
var link = _blockblob.Uri.AbsoluteUri + sharedAccessSignature;
an AuthenticationFailed error will occur for the link if the containerName has a trailing slash (which is allowed in all other places).
The reason is that the GetCanonicalName method in CloudBlockBlobBase adds a slash to the container name, resulting in a double slash. This is then signed and returned in the SAS. The AbsoluteUri, however, does not add the extra slash, and thus the signature is not valid for the created link.
A change in GetCanonicalName to trim any trailing slash from the container name would solve this.
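A sketch of that normalization (a hypothetical helper; the actual fix would live inside GetCanonicalName):

```csharp
using System;

public static class ContainerNameHelper
{
    // Trims any trailing slash so that canonicalization and Uri.AbsoluteUri
    // agree on the resource path that gets signed.
    public static string Normalize(string containerName)
    {
        return containerName.TrimEnd('/');
    }
}
```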
You currently handle WebException.Status == WebExceptionStatus.Timeout to generate a TimeoutException in the EndGetResponse method. Unfortunately, in some cases you can also get WebExceptionStatus.RequestCanceled. Because of this, the original WebException escapes when a TimeoutException is expected.
Note, I do see a lock around State.ReqTimedOut, which is supposed to act as a memory barrier in case of request.Abort(), but the fact is I see the following exception in my log, originating from storage client library 3.0.3: "Request failed with unhanded exception of type 'WebException' and message: 'The request was aborted: The request was canceled.'"
So, can we add a check for WebExceptionStatus.RequestCanceled in addition to WebExceptionStatus.Timeout inside EndGetResponse?
In the latest storage client (v3.0.2.0), there's an inconsistency between the analogous CloudBlockBlob.PutBlock and CloudPageBlob.WritePages method signatures regarding the contentMD5 argument.
class CloudBlockBlob:
contentMD5 parameter is always required.
class CloudPageBlob:
public void WritePages (Stream pageData, long startOffset, **string contentMD5 = null**, AccessCondition accessCondition = null, BlobRequestOptions options = null, OperationContext operationContext = null)
public Task WritePagesAsync (Stream pageData, long startOffset, **string contentMD5**, AccessCondition accessCondition, BlobRequestOptions options, OperationContext operationContext, CancellationToken cancellationToken)
public Task WritePagesAsync (Stream pageData, long startOffset, **string contentMD5**, AccessCondition accessCondition, BlobRequestOptions options, OperationContext operationContext)
public Task WritePagesAsync (Stream pageData, long startOffset, **string contentMD5**)
public Task WritePagesAsync (Stream pageData, long startOffset, **string contentMD5**, CancellationToken cancellationToken)
The contentMD5 parameter is optional on one overload but required on all the others.
I believe these two methods are analogous to each other and therefore should have a consistent policy toward contentMD5.
In fact, this argument is always optional in the REST API, so why force users to pass a null/empty string in the client library?
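For callers who do want to supply contentMD5, computing the value client-side is straightforward. This sketch is generic and not tied to the SDK:

```csharp
using System;
using System.Security.Cryptography;

public static class ContentMd5
{
    // Computes the base64-encoded MD5 digest that the Content-MD5 header
    // (and the contentMD5 parameter) expects for a block of data.
    public static string Compute(byte[] data)
    {
        using (var md5 = MD5.Create())
        {
            return Convert.ToBase64String(md5.ComputeHash(data));
        }
    }
}
```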
Looks like this package depends on OData = 5.6.0 rather than >= 5.6.0, which is a problem when combining it with Application Insights, which requires OData >= 5.6.1.
Is it possible to fix this?
Since updating to Storage Client 3.1.0.1 from version 2.1.0.3, a StorageException ("InvalidInput") is thrown on List properties, whereas before they were simply ignored.
Adding the Microsoft.WindowsAzure.Storage.Table IgnorePropertyAttribute has no effect. The problem is solved as soon as I change the offending property to an array.
A reproduction from LINQPad is below, referencing the WindowsAzure.Storage 3.1.0.1 NuGet package. The error is produced by the code below and is remedied by changing ListProperty to an array.
void Main()
{
    var acc_dev = Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse("UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://127.0.0.1");
    Test(acc_dev, "testtable");
}

public void Test(Microsoft.WindowsAzure.Storage.CloudStorageAccount toAcc, string table)
{
    var toTC = toAcc.CreateCloudTableClient();
    var toT = toTC.GetTableReference(table);
    toT.CreateIfNotExists();

    var toContext = toTC.GetTableServiceContext();
    toContext.Format.UseAtom();

    var fromData = new List<TestClass>();
    fromData.Add(new TestClass() { Foo = "x", PartitionKey = "foo", RowKey = "bar", ListProperty = new List<string>() { "Hello", "Azure" } });
    fromData.Dump();

    foreach (var item in fromData.ToList())
    {
        toContext.AddObject(table, item);
        toContext.UpdateObject(item);
    }
    toContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
}

public class TestClass : TableServiceEntity
{
    public string Foo { get; set; }

    [IgnoreProperty]
    public List<string> ListProperty { get; set; }
}
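As noted above, the reported workaround is to switch the offending property to an array:

```csharp
public class TestClass : TableServiceEntity
{
    public string Foo { get; set; }

    // Changing List<string> to string[] avoids the "InvalidInput"
    // StorageException thrown by SaveChangesWithRetries.
    [IgnoreProperty]
    public string[] ListProperty { get; set; }
}
```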
Let's say the blob "test.txt" exists in a container and the variable blob is a CloudBlockBlob object that references that blob. I'd expect this code to fail:
string leaseId = blob.AcquireLease(TimeSpan.FromMinutes(1), null);
blob.UploadFromByteArray(new byte[0], 0, 0, new AccessCondition() { IfNoneMatchETag = "*", LeaseId = leaseId });
Surprisingly (at least to me), it succeeds. If I remove the AcquireLease call and the LeaseId condition from UploadFromByteArray, it fails as expected.
Is this the expected behavior? Even if I hold a lease on the blob, I think the upload should still fail when I require an ETag match (or mismatch).
If you create a TableQuery<T> using the factory method (for LINQ) CloudTable.CreateQuery<T>(), and then set a FilterString on the resulting object, the FilterString is ignored by ExecuteQuery on the TableQuery<T>. However, if you set the FilterString on a TableQuery<T> that is created via the default constructor of the TableQuery<T> object, it works as expected.
It seems like there are some strange behavior differences based on whether the private queryProvider field is set in the TableQuery<T> object. While I'd have personally preferred that the LINQ stuff wasn't commingled with the FilterString, at the very least it seems like setting FilterString should throw if the queryProvider is set, or TableQuery.Execute, etc. should throw if the FilterString is set and the queryProvider is also set.
As requested by @stankovski and initially asked by @prabirshrestha ...
Create interface/virtual methods so we can easily write unit tests.
e.g.:
var fakeMsg = new Mock<CloudQueueMessage>();
var cloudQueue = A.Fake<ICloudQueue>();
etc...
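A minimal sketch of what such a seam might look like; ICloudQueue and its member list are assumptions for illustration, not an existing type in the library:

```csharp
// Hypothetical abstraction over CloudQueue so tests can substitute a fake.
public interface ICloudQueue
{
    Task AddMessageAsync(CloudQueueMessage message);
    Task<CloudQueueMessage> GetMessageAsync();
    Task DeleteMessageAsync(CloudQueueMessage message);
}
```

With virtual methods on the concrete types, mocking frameworks such as Moq or FakeItEasy could then override them directly.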
The Azure Storage emulator (3.0.0.0) performs case-insensitive queries, although Azure Storage performs case-sensitive queries. This can result in bugs where results are found where they should NOT be found.
In my case, ASP.NET Identity expects a user to be uniquely identifiable by user name. Since Azure Storage performs case-sensitive searches (example below), you can end up with different users whose user names differ only by case. This is not optimal and should be avoided, for obvious reasons.
GIVEN the following table query (everything not shown is obvious):

var userNameQuery = new TableQuery().Where(
    TableQuery.CombineFilters(
        TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "derp"),
        TableOperators.And,
        TableQuery.GenerateFilterCondition("Name", QueryComparisons.Equal, userName)))
    .Take(1);

return table.ExecuteQuery(userNameQuery).FirstOrDefault();
Azure Storage will perform a case-sensitive search on "Name". However, if you perform the same exact query against the storage emulator, it will perform a case-insensitive search (e.g., search for "moe" you'll get "Moe" back when it should return nothing).
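One workaround until the emulator matches the service (my own suggestion, not guidance from the library) is to persist a normalized copy of the name and always query on that column, so both environments behave identically:

```csharp
public class UserEntity : TableEntity
{
    public string Name { get; set; }

    // Hypothetical extra column: always written as Name.ToLowerInvariant(),
    // and all lookups filter on NameLower with a lower-cased input, making
    // the comparison effectively case-insensitive everywhere.
    public string NameLower { get; set; }
}
```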
Trying to use (in version 3.0.x) a SAS for a cloud table that was generated with version 2.1 fails with the following error:
Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (415) JSON format is not supported.. ---> System.Net.WebException: The remote server returned an error: (415) JSON format is not supported..
at System.Net.HttpWebRequest.GetResponse()
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext)
--- End of inner exception stack trace ---
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext)
at Microsoft.WindowsAzure.Storage.Table.TableOperation.Execute(CloudTableClient client, CloudTable table, TableRequestOptions requestOptions, OperationContext operationContext)
at Microsoft.WindowsAzure.Storage.Table.CloudTable.Exists(Boolean primaryOnly, TableRequestOptions requestOptions, OperationContext operationContext)
at Microsoft.WindowsAzure.Storage.Table.CloudTable.Exists(TableRequestOptions requestOptions, OperationContext operationContext)
I updated my nuget package to the latest version and now I am receiving a bad request exception when calling DeleteIfExists on a cloud table. Any ideas?