sebagomez / azurestorageexplorer


☁💾 Manage your Azure Storage blobs, tables, queues and file shares from this simple and intuitive web application.

Home Page: https://azurestorage.azurewebsites.net/

License: MIT License

C# 74.20% CSS 3.51% HTML 19.99% Dockerfile 0.28% Shell 0.49% Smarty 1.53%
azure azure-storage azure-storage-explorer docker-image c-sharp angular hacktoberfest dotnet

azurestorageexplorer's Introduction


Try it live at https://azurestorage.azurewebsites.net

Or deploy it wherever you want thanks to the Docker images (built with GitHub Actions).


Azure Storage Explorer

Original blog post from 2009! https://sgomez.blogspot.com/2009/11/mi-first-useful-azure-application.html

Azure Storage Web Explorer makes it easier for developers to browse and manage Blobs, Queues and Tables from Azure Storage. You'll no longer have to install a local client to do that. It was originally developed in C# with ASP.NET WebForms 2.0, but it has since been migrated through .NET Core 2.1, 2.2, 3.1, 5.0, 6, 7, and 8, with an Angular front end.

Edit: Sick and tired of the npm module and dependency hell, I moved this project to a Blazor Server app.

Login

To log in, just enter your account name and key, a Shared Access Signature, or a full Connection String.

The Connection String also allows you to connect to a local Azurite instance or, potentially (I have not been able to test it), to Azure Government.


Environment Variables

You can also set these fields via environment variables, and Azure Storage Explorer will go straight to the home page if it can successfully authenticate.

These variables are AZURE_STORAGE_CONNECTIONSTRING, AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_KEY, and AZURE_STORAGE_ENDPOINT. The connection string takes precedence over the others: if you set it, no other variables are read. If the connection string variable is not set, the remaining variables are read and all of them must be present.
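
For example, here's a minimal sketch of pre-authenticating the Docker image described below via the connection string variable (the <account> and <key> placeholders are yours to fill in with your storage account's values):

docker run --rm -it -p 8000:8080 \
  -e AZURE_STORAGE_CONNECTIONSTRING="DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net" \
  sebagomez/azurestorageexplorer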

Exploring

Blobs: Create public or private Containers and Blobs (only BlockBlobs for now). Download or delete your blobs.

Queues: Create Queues and messages.

File Shares: Navigate across File Shares and directories.

Tables: Create tables and Entities. To create an Entity you'll have to add one property per line in the form of <PropertyName>='<PropertyValue>'

If you don't set PartitionKey or RowKey, default values will be used ("1" for PartitionKey and the current timestamp for RowKey).
For example, to create a new movie:

PartitionKey=Action
RowKey=1
Title=Die Hard

You can also set the data type for a specific property by specifying the desired Edm data type as follows:

Year=1978
[email protected]=Edm.Int32

This will create Year as a 32-bit integer in the table.

Allowed datatypes are the following:

Edm.Int64
Edm.Int32
Edm.Boolean
Edm.DateTime
Edm.Double
Edm.Guid

Anything else would be treated as a string.

To query the entities in a table use the following syntax: <PropertyName> [operator] <PropertyValue> where the valid operators are: eq (equals), gt (greater than), ge (greater or equal), lt (less than), le (less or equal) and ne (not equal).
Take a look at the supported comparison operators.
To query action movies use the following:

PartitionKey eq 'Action'

Please note there's a space character before and after the eq operator.
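
For instance, assuming the Year property from the earlier example was stored as Edm.Int32, a query for movies released after 2000 could look like this:

Year gt 2000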

If you don't write a query, the system will retrieve every Entity in the Table.

Build

To build this repo, make sure you have the .NET 8.0 SDK installed.

At the root of the project, just execute the ./build.sh script:

./build.sh

Run locally

Just execute the ./publish.sh script in the root folder of the repo. Kestrel will kick in and the terminal will show which port number was assigned. Navigate to that port (in my case http://localhost:5000) and that's it!
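
As with the build step, that's just:

./publish.sh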


Docker

There's a Docker image at hub.docker.com that you can use to run this app in the environment of your choice. Keep reading for Kubernetes.

To spin up a container with the latest version, just run the following command:

docker run --rm -it -p 8000:8080 sebagomez/azurestorageexplorer

Then open your browser, navigate to http://localhost:8000, and voilà!
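
If you'd rather pin a specific release than track latest, the image is presumably also tagged per version; the exact tag below is an assumption, so check the tags published on Docker Hub:

docker run --rm -it -p 8000:8080 sebagomez/azurestorageexplorer:2.7.1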

Docker Compose

There's now a Docker Compose manifest in this repo that allows you to spin up Azurite and Azure Storage Explorer together. In the manifest you can see that the AZURE_STORAGE_CONNECTIONSTRING environment variable is already set up to connect to Azurite, so after spinning up the containers you can navigate to http://localhost:8080 and you should already be logged in to Azurite.

 docker-compose -f ./docker-compose/azurestorageexplorer.yaml up 

Kubernetes

A deployment and a service are available in the k8s folder. If you have kubectl locally configured with a cluster just apply them and you'll have an instance of Azure Storage Explorer running in your cluster.

 kubectl apply -f ./k8s
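
To check that the resources were created before port-forwarding (the service name matches the one used below):

kubectl get svc azurestorageexplorer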

Port-forward to localhost to easily test your setup:

kubectl port-forward svc/azurestorageexplorer 8080:8080

and access http://localhost:8080

Helm

As of version 2.7.1, there's a Helm chart for this project, ready to be deployed in your favorite K8s cluster.
If you want this app to run in your cluster, make sure you have Helm installed on your system.

Add the repo

helm repo add sebagomez https://sebagomez.github.io/azurestorageexplorer

Install the chart

helm install azurestorageexplorer sebagomez/azurestorageexplorer
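
If you ever want to remove it, the release can be uninstalled the usual Helm way:

helm uninstall azurestorageexplorer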

The Helm chart provides a deployment and a service; you can port-forward to that service with the following command:

kubectl port-forward service/azurestorageexplorer 8080:8080

or you can follow the Helm instructions to get the application URL:

export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=azurestorageexplorer,app.kubernetes.io/instance=azurestorageexplorer" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"

kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT

Thanks to this repo for the info and detailed steps on how to create your own Helm repo with GitHub pages.

azurestorageexplorer's People

Contributors

azure-pipelines[bot], ckocyigit, criztovyl, dependabot[bot], philbo, pickypg, saschagottfried, sebagomez, waffle-iron


azurestorageexplorer's Issues

Question: Is it possible to get the ATS record with their EDM types intact

I have gone through the code and it is able to achieve what it says, except for a few things:

It gets all the data as strings in TableEntityWrapper.Dictionary<string, object> m_properties. This creates a problem: let's say I need to update some data based on a filter condition, and the properties in my data have different EDM types (string, boolean, DateTime, etc.).

Now when I update the data using this library, it changes all the types to string, which is a problem.

The same issue applies to creating new records: I can't create records with any EDM type except string.

Is it possible to achieve the same here in this package? Would love that capability.

Internal Server error when using an authorized action

When you create a SAS token you can specify the kind of permissions that token is allowed to do.
Right now, if you aren't allowed to create containers and you try to create one, you get an Internal Server Error message, which is not nice.

Support for connection strings via environment variables

Hello, is it possible to provide environment variables to the Docker container so that the connection to the storage account is pre-authenticated or pre-filled in the login UI? This is super convenient when used as part of a local dev environment like Docker Compose.

pgweb is a good example of how this can work. In my mind the pgweb project serves a similar purpose, so I'm linking it here as an example.

It would be even nicer to have multiple pre-authenticated connection strings to support the local Azurite storage emulator.

Azurite Emulator Support

How feasible is it to add support for this to play nicely with the local emulator (e.g. Azurite), where you do not have to sign in to Azure? When I run a version of Azure Storage Explorer installed on my laptop, I can access a local emulator no problem without logging in to Azure. Wondering if that is or would be possible with the dockerized version.

SAS login not functional?

Logging in with SAS tokens does not seem to work. I tried using the storage account name, container name, SAS URL, query string and different combinations of these.

Request: input sanitization on account name

Microsoft's Storage Accounts only support lowercase and numbers in names: https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resource-name-rules#microsoftstorage

Supplying invalid values (accidentally, since I missed the space!) results in attempting to use the value instead of validating it before submitting, and the response 'Invalid account or key', though I think it'd be more user friendly if the account name box failed regex validation. There may be a reason to support other characters for custom domains, but since both requests default to the normal endpoints, I think the logic could be split if there's a suffix.

Request URL: https://REDACTED.azurewebsites.net/api/Queues/GetQueues?account=fakename**%20**&key=fddfsfdsfdfd

{description: "System.UriFormatException: 'Invalid URI: The hostname could not be parsed.'",…}
description: "System.UriFormatException: 'Invalid URI: The hostname could not be parsed.'"
statusText: "Invalid URI: The hostname could not be parsed."

Request URL: https://REDACTED.azurewebsites.net/api/Queues/GetQueues?account=fakenaDDDD&key=fddfsfdsfdfd

{"description":"System.AggregateException: 'Retry failed after 6 tries. Retry settings can be adjusted in ClientOptions.Retry. (Name or service not known (fakenadddd.queue.core.windows.net:443)) (Name or service not known (fakenadddd.queue.core.windows.net:443)) (Name or service not known (fakenadddd.queue.core.windows.net:443)) (Name or service not known (fakenadddd.queue.core.windows.net:443)) (Name or service not known (fakenadddd.queue.core.windows.net:443)) (Name or service not known (fakenadddd.queue.core.windows.net:443))'","statusText":"Retry failed after 6 tries. Retry settings can be adjusted in ClientOptions.Retry. (Name or service not known (fakenadddd.queue.core.windows.net:443)) (Name or service not known (fakenadddd.queue.core.windows.net:443)) (Name or service not known (fakenadddd.queue.core.windows.net:443)) (Name or service not known (fakenadddd.queue.core.windows.net:443)) (Name or service not known (fakenadddd.queue.core.windows.net:443)) (Name or service not known (fakenadddd.queue.core.windows.net:443))"}

Container Volume/Overlay filled up with 100gb temp files

I've been using the explorer for a while now, but the monitoring just triggered an alarm due to filled-up disk space.

The container volume which was attached to this container contained 96 GB of *.tmp files

I had to delete those files manually in order to make the other containers work again, since they were starving for available disk space.

Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received

Does anyone else get this on the web page / demo page:

Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received

Docker image:


Demo webpage - https://azurestorage.azurewebsites.net/


I tried this tool a couple of years ago, but I can't get it to work today. Anything obvious I am doing wrong? I get errors before I even enter settings, and then 'Login' does nothing.

Mixed objects from different account

When you log in to an account and see the containers, if you log out and log in again with a different account, you'll see both sets of containers, from both the previous and the current account ¯_(ツ)_/¯

Screenshots

Thank you for this amazing application.

Could you please provide more screenshots of the Blob section, containers, and files?

Website sends Azure storage account access keys over unencrypted HTTP connection.

I am pretty sure you are aware of this. Why does a web developer offer a service in such an insecure way? The least you could have done is notify potential users about this security aspect; then they could choose whether to use your service. I am not aware of any note regarding this, neither on the website nor on the GitHub project page.

I forked your project and deployed it using the "Deploy to Azure" button. I was pretty surprised to find that Azure websites have SSL endpoints out-of-the-box.

A more secure solution just needs a couple of minutes. Please update the link on your website to use HTTPS. Thank you for creating this tool.

A more complete approach would disable the HTTP endpoint or force HTTPS too. Given that the link to your website is spread over the internet, a redirection could help.


Folders

Hi,

Any plans to support showing the folders in Blob storage instead of all Blobs in a flat structure? With lots of blobs in one container the pages take forever to load and it's impossible to browse.

Add a configurable base path

Hello,

it would be nice to be able to run the whole application under a base path.

E.g.: localhost:5000/app

I've tried:

var appPath = Environment.GetEnvironmentVariable("BASE_PATH");
if (!string.IsNullOrEmpty(appPath))
{
    app.Use(async (context, next) =>
    {
        context.Request.PathBase = new PathString(appPath);
        await next();
    });
}

And

var basePath = Environment.GetEnvironmentVariable("BASE_PATH");
if (!string.IsNullOrEmpty(basePath))
{
    app.UsePathBase(basePath);
}

Within Program.cs.
But both attempts were unfortunately unsuccessful.

UsePathBase, however, worked for the login screen but then started redirecting to non-UsePathBase paths afterwards.

Attempting to upload blob fails due to Invalid URI

After successful sign-in and container enumeration, attempting to upload a file to blob storage shows Invalid URI: The format of the URI could not be determined. in red under the upload control, and no blob is written to the container.

How to login?

Hello,
How to login to this app?

What's the Azure Account?

Add Dockerfile

Add Dockerfile and push the image to some public Docker registry

[BUG] SAS: Fileshares not working

Hello,

when connected via SAS token (or rather the whole connection string, since SAS alone does not work), the file shares are displayed incorrectly.

E.g. if the file share is named banana and it has 5 differently named folders, clicking on the banana file share results in all folders within it also being named "banana". Proceeding to click on an incorrectly named folder results in an error.


I can also see why this happens. Analysing it via the browser's web tools shows that the URL is incorrectly built:

https://*****.file.core.windows.net/banana?sv=/banana-files (with banana-files being the correct folder name)

The folder name is appended at the end, but it should come right after the file share name:

https://*****.file.core.windows.net/banana/banana-files?sv=

SAS Support

Any chance of including SAS (shared access signature) support? SAS allows a more finely tuned access model than using full keys.
