
vsphere-storage-for-docker's Introduction


VMware has ended active development of this project, and this repository will no longer be updated.

vSphere Storage for Docker

vSphere Storage for Docker enables customers to address persistent storage requirements for Docker containers in vSphere environments. This service is integrated with the Docker Volume Plugin framework. Docker users can consume vSphere storage (vSAN, VMFS, NFS, VVol) to provide persistent storage for stateful containers.

vSphere Storage for Docker is Docker Certified for use with Docker Enterprise Edition and is available in the Docker Store.

If you would like to contribute, please check out CONTRIBUTING.md and the FAQ on the project site.

Documentation

Detailed documentation can be found on our GitHub Documentation Page.

Downloads

Download releases from the GitHub releases page.

The download consists of two parts:

  1. VIB (VDVS driver): the ESX code, packaged as a VIB or an offline depot
  2. Managed plugin (VDVS plugin): available on the Docker Store

Please check the VDVS Installation User Guide to get started. To ensure compatibility, use the same version of the driver (on ESX) and the managed plugin (on the Docker host VM) for vSphere Storage for Docker.

Supported Platforms

  • ESXi: 6.0U2 and above
  • Docker (Linux): 17.06.1 and above (to use the managed plugin)
  • Docker (Windows): 17.06 and above (Windows containers mode only)
  • Guest Operating System:

Logging

The relevant logging for debugging consists of the following:

  • Docker Logs
  • Plugin logs - VM (docker-side)
  • Plugin logs - ESX (server-side)

Docker logs: see https://docs.docker.com/engine/admin/logging/overview/

/var/log/upstart/docker.log # Upstart
journalctl -fu docker.service # Journalctl/Systemd

VDVS Plugin logs

  • Log location (Linux): /var/log/vsphere-storage-for-docker.log
  • Log location (Windows): C:\Windows\System32\config\systemprofile\AppData\Local\vsphere-storage-for-docker\logs\vsphere-storage-for-docker.log
  • Config file location (Linux): /etc/vsphere-storage-for-docker.conf.
  • Config file location (Windows): C:\ProgramData\vsphere-storage-for-docker\vsphere-storage-for-docker.conf.
  • This JSON-formatted file controls logs retention, size for rotation and log location. Example:
 {"MaxLogAgeDays": 28,
 "MaxLogFiles": 10,
 "MaxLogSizeMb": 10,
 "LogPath": "/var/log/vsphere-storage-for-docker.log"}
  • Turning on debug logging:

    • Package user (DEB/RPM installation): stop the service and run it manually with the --log_level=debug flag

    • Managed plugin user: You can change the log level by passing VDVS_LOG_LEVEL key to docker plugin install.

    • Managed plugin user: Set the group ID to use for the plugin socket file via the VDVS_SOCKET_GID env. variable.

      e.g.

      docker plugin install --grant-all-permissions --alias vsphere vmware/vsphere-storage-for-docker:latest VDVS_LOG_LEVEL=debug VDVS_SOCKET_GID=<group name>
      

VDVS Driver logs

  • Log location: /var/log/vmware/vmdk_ops.log
  • Config file location: /etc/vmware/vmdkops/log_config.json. See the Python logging config format for content details.
  • Turning on debug logging: replace all 'INFO' with 'DEBUG' in the config file and restart the service

Please refer to the VDVS configuration page for detailed steps.

References

vsphere-storage-for-docker's People

Contributors

abrarshivani, akutz, andrewjstone, ashahi1, asomaiah, aspear, baludontu, brunotm, divyenpatel, dstefka, girishshilamkar, govint, kerneltime, lipingxue, marksoper, nnf97, pdhamdhere, shivanshu21, shuklanirdesh82, tusharnt, venilnoronha


vsphere-storage-for-docker's Issues

Packaging - ESX side (VIB, live install/uninstall, VUM updates in cluster, watchdog, jumpstart)

As of now, the VIB only has the files to land on ESX. The following work needs to happen:

  • python code needs to be converted to a daemon, with related init.d script and watchdog configuration added
  • jumpstart definition should be added to start the service on ESX start
  • VIB should invoke the service on start and remove on remove (if possible)
  • VIB should be signed
  • VIB should be tested for rollout with VUM

Also, rename the daemon to comply with the spec: it should be vmdkcontrold. The rename impacts

  • build (and potentially test) scripts, including the watchdog / init.d config
  • file names in repo

We should use one of the VSAN VIBs (health or perf, or iscsi) or the ESXi client VIB as an example.

/bin/sh -c apt-get update throws error

I've got libc-dev-i386 installed but this command is failing in "apt-get update".
root@hte-1s-eng-dhcp98:/home/administrator/vpl/src/github.com/vmware/docker-vmdk-plugin# /bin/sh -c apt-get update && apt-get install -y libc6-dev-i386
apt 1.0.1ubuntu2 for amd64 compiled on Sep 23 2014 12:09:48
Usage: apt-get [options] command
apt-get [options] install|remove pkg1 [pkg2 ...]
apt-get [options] source pkg1 [pkg2 ...]

apt-get is a simple command line interface for downloading and
installing packages. The most frequently used commands are update
and install.
...

It's installed, though:
root@hte-1s-eng-dhcp98:/home/administrator/vpl/src/github.com/vmware/docker-vmdk-plugin# dpkg -l|grep libc6-dev-i386
ii libc6-dev-i386 2.19-0ubuntu6.6 amd64 Embedded GNU C Library: 32-bit development libraries for AMD64

Review K8S needs and update design / protocol / code to support it

In preliminary discussions, we decided that we need a few simple fixes to the guest-host protocol to support K8S:

  • allow a "no volume formatting" option
  • allow choosing a folder on the datastore

We should review it again, get a list of changes, and implement them.
We should do this early to allow reuse of the protocol and code.

Support volumes snapshots (offline)

Support "action=snapshot"
docker volume create --driver=vmdk --name=VolName -o action=snapshot -o src=baseVol

This could be a great feature to demo :) and easy to implement, but it is basically a stretch goal.

The work includes:

  • adding the ESX-side operation on the disk only, so it will fail if the disk is attached
  • adding support for vmdk name manipulation (understanding the correct names on list, search, and delete)
    • pre-req: write up a design for the naming convention and what happens on conflicts, on parent delete, on child delete, revert/clone, etc.
  • blocking delete if there are children (p1)
  • tests, especially error handling tests
    • tests for metadata on snapshots

There does not seem to be any work needed on the Docker side here.

Add support for format of VSAN volumes.

Currently mkfs uses the -flat.vmdk (Python code). We need to check disk.backing and choose the backend appropriately (note: not needed when there is no FS formatting).

Installation and build fixes

Filling the issues Mark and I encountered with the build on a new machine.

  • Code layout should be split into a common backend and the Docker plugin. The common backend will eventually be a separate repo shared by others (kube)
  • libc6-dev-i386 is missing from default Ubuntu and needs to be packaged in the containerized build
  • GOPATH in the Makefile assumes the Go modules and the dvolplug source are in the same mounted folder. This breaks the build because the bind mount hides the packages pulled by Docker.
  • Fix the message printed when dvolplug runs (it is ignored and that's OK)

Docker PRs - still missing functionality

  • 'docker volume inspect' is missing metadata passed with -o. So if we pass size or policy via -o, there is nothing about this in inspect. Every early adopter looking at the TP asked for this. They also asked for actual size, consumed storage, storage policy, etc. Fixed ("details" in 1.12 and related volume plugin support)
  • We need some way to know which container creates and requests a volume. This way we can map volume->vm->container, which is what customers ask about all the time. Not needed - we can interrogate Docker for the containers
  • 'docker volume ls' is missing a way to do a driver-specific filter, or to show driver-specific information (e.g. size, consumed storage, datastore, policy...)
  • Docker does not provide or repeat any ID from the plugin, so a Swarm cluster may talk to plugins running on different storage clusters, and we will not know

update:

  • missing a way to configure a volume post-create (e.g. change a policy)
  • missing a way to tell per-volume "global" or "local" scope. During the scope discussion the Docker folks decided there was no use case, thus creating a problem for the vsphere driver

Andrew - this one complements #17

Validate with the latest ESX python3, probably port

ESXi seems to be moving entirely to Python 3 and not shipping Python 2.7 in the release. We need to validate whether that is the case, and potentially support both versions. vmkernel@vmware or someone on the VSAN team should be able to shed more light here.

Update of "docker create" -o parsing, and -datastore options support in the plugin

This is a tracking item for the following work:

  • Set a list of known -o options and warn if an option is unknown (we do not want to fail, as that would make it harder to add new things, like -o action=); see the sketch after this list
    • note: whoever works on this, please have a quick discussion on "fail vs warn" here
    • add a validation parser for size=. Currently it will fail on the ESX side if the syntax is wrong. It's OK to fail on the ESX side but we need to check the error handling. Or we can parse and fail on the Docker side
  • add a -datastore=<> option (this one can be pushed out to v1.next - but please create a PR if doing so)
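A minimal sketch of the warn-instead-of-fail approach for -o parsing on the Docker side; the option names, the size pattern, and the validateCreateOpts helper are illustrative assumptions, not the plugin's actual code:

package main

import (
	"fmt"
	"log"
	"regexp"
)

// knownOpts is an illustrative set of -o options; the real list lives in the plugin code.
var knownOpts = map[string]bool{"size": true, "policy": true, "fstype": true}

// sizeRe loosely matches values such as "512mb" or "10gb"; the real rules may differ.
var sizeRe = regexp.MustCompile(`^[0-9]+(mb|gb|tb)$`)

// validateCreateOpts warns on unknown options (so new ones like -o action=
// keep working) and fails only on clearly malformed values.
func validateCreateOpts(opts map[string]string) error {
	for k, v := range opts {
		if !knownOpts[k] {
			log.Printf("warning: unknown option %q=%q, passing through to ESX", k, v)
			continue
		}
		if k == "size" && !sizeRe.MatchString(v) {
			return fmt.Errorf("invalid size %q, expected something like 10gb", v)
		}
	}
	return nil
}

func main() {
	opts := map[string]string{"size": "10gb", "action": "snapshot"}
	if err := validateCreateOpts(opts); err != nil {
		log.Fatal(err)
	}
	fmt.Println("options accepted")
}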

Refcount - Add internal volume tracking to support multiple containers attach, per Docker spec

See docker spec (and plugin.go) for more info.
The PR includes a spec and review of the changes, and the implementation; a rough sketch of the refcount tracking follows the quoted comments below.

from plugin.go:
Per Docker spec
//
// multiple containers on the same docker engine may use the same vmdk volume
// thus we need to track the volumes already attached, and just do bind mount for them
// Also it means we need to serialize all ops with mutex

// We also need to track volumes attached and mounted to save on this ops if requested
// On start, we need to list vmdks attached to the VM and populate list of volumes from it
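A minimal sketch of that tracking, assuming a simple per-volume counter protected by a mutex; the names here are hypothetical, not the plugin's actual code:

package main

import (
	"fmt"
	"sync"
)

// refCountMap tracks how many containers on this engine use each volume, so
// Mount only attaches on the first use and Unmount only detaches on the last.
type refCountMap struct {
	mu   sync.Mutex
	refs map[string]int
}

func newRefCountMap() *refCountMap {
	return &refCountMap{refs: make(map[string]int)}
}

// incr returns true when this is the first reference, i.e. the volume
// actually needs to be attached and mounted.
func (r *refCountMap) incr(vol string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.refs[vol]++
	return r.refs[vol] == 1
}

// decr returns true when the last reference is gone, i.e. the volume
// can be unmounted and detached.
func (r *refCountMap) decr(vol string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.refs[vol] == 0 {
		return false
	}
	r.refs[vol]--
	return r.refs[vol] == 0
}

func main() {
	rc := newRefCountMap()
	fmt.Println(rc.incr("myVol")) // true: attach + mount
	fmt.Println(rc.incr("myVol")) // false: just bind-mount for the second container
	fmt.Println(rc.decr("myVol")) // false: still in use
	fmt.Println(rc.decr("myVol")) // true: unmount + detach
}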

Build: check.sh needs to validate GO version and folder location

We had a few cases of people ignoring the checkout location and then building with nodocker, or using an old version of Go.
'check.sh nodocker' should:

  1. Fail if the Go version is < 1.5
  2. Warn if the Go version is > 1.5 (since we did not test it)
  3. Check that the current folder is under GOPATH/src/github.com/vmware

Add CI to the repo

This is a part of longer list in #3
Need to decide, plan and implement full CI for the plugin.
The goal is to automate build/validation tests on checkins

The work includes:

  • spec and review the plan
  • implement (includes troubleshooting / logging support)
  • educate the team

esx side - Handle time outs in calls to FindChild() and friends

The (Python) connection can close due to an inactivity timeout, which is normal; we need to handle graceful reconnection.
As of now, when there is an idle period, the connection to hostd can silently time out and the next legitimate command will throw "not connected".

Packaging - docker side

Need to package the docker-vmdk-plugin as a minimal container with caps to run the plugin.
It would be installed and invoked by 'docker run github.com/vmware/docker-vmdk-plugin'. It needs to run the plugin service (talking to the privileged vSocket) with the required capabilities.

Separate C code from GO files

Separate the files and use native 'go build' instead of make to compile.
Currently it's close to impossible to edit - see vmkdops.go

Add unit tests to Python code

There are placeholders; we need to design and add basic unit tests to both the Go and Python code.

  • unit tests should run without a full install, during the build
  • on Go, we can use a "dummy" communication plugin and make sure the create/delete/ls paths work (see existing code)
  • on Python/ESX, this needs discussing

VC story

How do we get a list of all VMDKs used (or explicitly requested) by docker VMs? Size? Check for stale VMDKs (i.e. created but not used in ages)? Etc.

This is partially spec'd on the VMware wiki. This PR is to finalize the design and open the follow-up PRs.

Support VSAN policies on volumes

Support VSAN policies on creation; configuring and changing VSAN named policies; changing policies on existing volumes (existing esxcli?); metadata is saved in the vmdk folder in a JSON file (or extracted from the vsan object?).

See vmware wiki for details

list of items - A Road to MVP (minimum viable product)

This issue enumerates the work we need to do before releasing MVP for native Docker VMDK Data Volume support. Please extend the list if you see something missing.

  • Discuss and review/finalize plans
    • Finalize use cases/ clean up wiki / publish functional spec with use cases (depends on other items in this section)
    • * authentication / config: How do we identify a trusted VM? What is the config format (a list of "enable" glob expressions for .vmx config, or VM uuid (think about vMotion))? Where do we keep the info? Do we force port_id < 1024?
    • * policy and quotas. Do we introduce VMDK grouping for this purpose? Format? Where to keep the data?
    • * packaging - ESX side (VIB, live install/uninstall, VUM updates in cluster, watchdog, jumpstart)
    • * packaging - docker side (daemon config, minimal container with caps to run the plugin)
    • * release timing and documentation -- what do we need? Also, release process - legal? docs? go-to-market? anything else?
    • * CI and test. (note: if needed, we can do scaffolding around cron/nimbus, and then go to drone.io)
    • * VC story: how do we get a list of all VMDKs used (or explicitly requested) by docker VMs? Size? Check for stale VMDKs (i.e. created but not used in ages)? Etc.
  • Clean up and review/fix of TODO in the code
    • * Design and use proper logs (ESX and Linux), drop prints
    • Add timing metrics for ops to logs for troubleshooting
    • * Handle timeouts in calls to FindChild() and friends - it's normal if we need to reconnect to hostd, as the session may time out.
    • * Discuss and add support needed for K8S: a "no formatting" option (or explicit formatting type request), location.
    • * Add support for VSAN/VVol. Currently mkfs uses the -flat.vmdk. Need to check disk.backing and choose the backend appropriately (note: not needed when there is no FS formatting)
    • * Add proper PVSCSI controller/disk create() automation. Currently we rely on a pre-configured VM; this is unacceptable and already mentioned in "installation issues".
    • Make VMDK requests non-blocking: split the Go/C API into SendRequest and GetReply (blocking), have the ESX code spawn threads for this and have Go use goroutines.
    • Drop exec("mount") from Go - use syscall.mount(). Drop exec("vmkfstools") from Python, use VirtualDiskManager
    • * Add internal volume tracking per the Docker spec - e.g. no need to attach something already attached by another container. Detach also happens on refcount only.
    • * Other TODOs - signal handling, track volume UUIDs for more reliable delete (currently the full file name is the ID of the disk to delete), clean up sizes (drop fixed size) in the C/vSocket code.
  • * Design and implement the build, CI & testing set
    • [x] Separate components (see "installation issues" Issue docker-vmdk-plugin#1). Needs to have:
      • Go module (vmware/docker-vmdk-plugin/vmdkops) - uses 'go get' and 'go build' (in container)
      • ESX build (vmware/docker-vmdk-plugin/vmdkops-esxsrv) - uses the vibauthor container
      • Docker plugin (vmware/docker-vmdk-plugin/plugin) - uses 'go' in container
      • container used for build only
    • [x] Build: make it all container-driven, per the above.
    • DeployToESX for CI: assuming packaging (above) is done and ESX is deployed, govmomi or pyvmomi code to deploy the VIB, deploy the docker VM (a few flavors) with the plugin, run basic tests
    • CI: GitHub, CI and test infra - deploy ESX (nimbus? drone.io?), run deployToEsx
    • * unit tests (_test.go) - includes a vmdkops replacement module to do a local "vmdk" (falloc, format, loopback mount) that will test the rest of the logic.
    • [x] use Photon for the build and plugin container (low priority) - already supported
    • * Busy work: support Python 3 and keep Python 2 (since newer ESX supports only Python 3).
  • * Docker PR - missing functionality (independent from MVP)
    • * 'docker volume inspect' is missing metadata passed with -o. So if we pass size or policy via -o, there is nothing about this in inspect
    • * 'docker volume ls' does not allow enumerating driver-specific volumes which were not "created" in the current Docker daemon. So there is no way to see what's already there for VMDKs.
    • [x] Docker 1.10 seems to be addressing these already, see "moby/moby#16534 - Move responsibility of ls/inspect to volume driver #16534" (comment: not true. Still missing a way to pass our metadata)
    • * It also adds support for anonymous volumes (moby/moby#19190); we need to support / test it

(Note: a * before the text indicates that there is a separate PR opened for the item.)

Add tests to CI pass

After #7 is done, we need to add a test framework and test coverage:

Test framework
Be able to connect from the build machine to the Docker test machines (plural) and execute docker commands. (depends on #160)
Have a working configuration and an example of running such a test as part of the build and CI.
Note: we will keep using the Go test framework and the related "testing" module in golang.

Test coverage

  • reinstall the package (to get the new version on ESX and in the guest)
  • add / inspect / ls / remove a volume
  • add a volume; attach the volume via run; try to remove it (should fail)
  • add a volume, attach the volume via run; rm the container, remove the volume
  • run out of storage on a volume
  • re-attach the volume and validate the content

I think this is what we should call "sanity pass":

create VMDK
    small size
    huge size to cause error due to storage capacity
    huge size to cause error due to quotas
    list VMDK, make sure it's there
    inspect VMDK, make sure it's there
    check location and size of VMDK (diskTool -q -i or vmkfstools) on ESX
    delete VMDK, make sure it's gone from docker and ESX
    create VMDK, run container.
        I suggest running 'thrash' there for integrity, so it may be an overkill
    try to rm volume, make sure it fails
    run another container with the same volume
    check the data is there
    kill both containers
    rm volume and make sure it does not show up

We also need to have a way for devs to invoke it for a debugging session. Using CI is OK if it's fast enough. A rough sketch of what one of these steps could look like with the Go testing module follows.
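Since the plan is to keep using the Go "testing" module, one sanity step could look roughly like the sketch below; driving the docker CLI through os/exec and the volume name are assumptions for illustration, not the project's actual test code:

package sanity

import (
	"os/exec"
	"strings"
	"testing"
)

// runDocker shells out to the docker CLI; a real test would target the remote
// engines configured for CI, this only shows the shape of a sanity step.
func runDocker(t *testing.T, args ...string) string {
	t.Helper()
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		t.Fatalf("docker %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func TestCreateListRemoveVolume(t *testing.T) {
	const vol = "sanityTestVol" // hypothetical volume name
	runDocker(t, "volume", "create", "--driver=vmdk", "--name="+vol)
	defer runDocker(t, "volume", "rm", vol)

	if !strings.Contains(runDocker(t, "volume", "ls"), vol) {
		t.Errorf("volume %s not listed after create", vol)
	}
}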

Code cleanup: error checking

Many of these are small items.
Review all code for error handling and make sure we handle errors properly.
In Python:

  • throw and get back to listening on the socket. Only the current command is killed; the service continues
  • if we throw on every command for a while, kill the service or inject delays
  • or, we can handle the above via the watchdog; then Python should throw and die on issues

In GO:
TBD

ESX Admin command line for basic ops

We need to be able to answer the following questions:
how do we get a list of all VMDKs used (or explicitly requested) by docker VMs? Size? Check for stale VMDKs (i.e. created but not used in ages)? Etc.
We need to minimize the effort here, since FCD will eventually provide the support. However, we want a big install base, so FCD is not an option yet.

This is spec'd on the VMware wiki (see "Admin features - config file and command line" on https://wiki.eng.vmware.com/CNA/Storage).

Command line (in /usr/lib/vmware/vmdkops/bin/esx-dvoladmin). Commands are passed via the command line (not interactive). Commands:

  • ls - full list of volumes, with all metadata available
  • df - storage taken/available per datastore for docker
  • config --datastore= --key=value - change config file "key" to "value",
    e.g.: config --datastore=vsan --vsanPolicy threeReplicas="(hostFailureToTolerate 'i2)"

Code cleanup: track volumes UUIDs for more reliable delete (currently full file name is the ID of the disk to delete)

The POC code uses the full VMDK path as the internal ID, and constructs it on the fly. This is error-prone. We need to keep UUIDs for disks, at least for the plugin lifetime, and use them to match the VMDKs. A rough sketch of such a cache follows below.
We also can scan the folders and build a "known VMDK" structure.
Note that if another docker engine/plugin changes the content of the folder, we need to re-build this cache of uuid->vmdk path.
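A rough sketch of such a cache; the UUID, datastore path, and type names below are placeholders, and how the UUID is obtained from ESX is out of scope:

package main

import (
	"fmt"
	"sync"
)

// volCache maps a disk UUID to its VMDK path, so delete and attach can match
// by UUID instead of reconstructing the full path on the fly.
type volCache struct {
	mu    sync.RWMutex
	paths map[string]string // uuid -> vmdk path
}

func newVolCache() *volCache {
	return &volCache{paths: make(map[string]string)}
}

func (c *volCache) put(uuid, path string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.paths[uuid] = path
}

func (c *volCache) lookup(uuid string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	p, ok := c.paths[uuid]
	return p, ok
}

// invalidate drops the cache, forcing a rescan when another engine or plugin
// may have changed the folder contents.
func (c *volCache) invalidate() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.paths = make(map[string]string)
}

func main() {
	c := newVolCache()
	c.put("6000C29f-1111-2222-3333-444455556666", "[datastore1] dockvols/myVol.vmdk")
	if p, ok := c.lookup("6000C29f-1111-2222-3333-444455556666"); ok {
		fmt.Println("delete target:", p)
	}
	c.invalidate()
}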

Implement Get() and List() to get it working with Docker 1.10

Docker 1.10 evidently interrogates Get() and fails volume creation if Get() returns something unexpected.
We need to remove the stubs and add a proper implementation of Get() and List() to get the plugin working in 1.10.

Details:

Create() issue: I thought it's better to fail create() if the vmdk already exists. This would prevent accidental reuse of what you don't want to reuse (e.g. a name clash with a VMDK created by another Docker engine on a shared store). To actually use an existing VMDK, the user would have to pass force=true.
So, if myDisk.vmdk exists:
docker volume create --driver=vmdk --name=myDisk
FAILS with EEXISTS (Creation failed - vmdk already exists), but
docker volume create --driver=vmdk --name=myDisk -o force=true
would succeed.
I do not see a way to implement this in 1.10.

Also, we need to cover the following cases (they are still not addressed in 1.10):
(1) Get(): We create a volume with special metadata, i.e. datastore name, or policy name, or replication count, or whatever. We want to show this metadata in Inspect(). So Get() [or, better, Inspect()] should be allowed to return any map of string->string with volume metadata. A sketch of what this could return is below.
(2) List(): We need to be able to show where the volumes are located and what policies they have. We also want to show what is attached and what is not. I think 'docker volume ls' should show volumes known to Docker (i.e. attached), just like 'docker ps' shows running containers. "docker volume ls -a" should dive into the drivers and give a full list of what is known there.
"docker volume ls -a -l" should allow special metadata (like datastore and policy) to be passed back. "docker volume ls -a -l --fields 'list of fields'" should limit the shown metadata to the listed fields.

Keeping volume metadata with VMDKs

In order to provide admin info about the "volume creator" (and potentially other things) and implement any kind of multi-tenancy, we need to keep volume-specific metadata. FCD is not going to support user-defined metadata in 2016 (and further timing needs negotiation), and adding it to the VMDK file as ddb. entries will break optimistic locking. After an email discussion with Nick Ryan, it seems we can either use VMDK/DDB if we don't care about optimistic locking (which will generate a perf hit, especially on "open" storms), or use the DiskLib_Sidecar set of APIs. Sidecars allow putting any additional info about a VMDK, and sidecars travel with VMDKs (e.g. svmotion, DRS, etc.).

This work item is to investigate both options and create a PR with implementation details.
We may delay the actual work to GA or v.next, but we need to complete the investigation.

Prepare github pages

  • Co-ordinate with VMware Open Source to bless the repo; an internal ticket with OSSTP was filed when the repo was created.
  • Co-ordinate with CNA Org for look and feel
  • Investigate options for running surveys and analytics.

Multi-tenancy

Add a use case and related design / code for basic multi-tenancy as the most common usage:

  • disk quota for a group of docker VMs

  • dedicated location for a group of docker VMs

  • maybe even passing in-guest security principal information for doing the above

    So we need to consider VM groups as units of quota

Remove serialization of everything in the plugin

Currently the plugin fully blocks on each request. Given that a volume format may take a long time, and that volume attach can see a "start storm", we need to remove the serialization; a rough sketch follows after this list.

  • the Python code should spawn threads for long-running work, certainly for format and attach
  • the Go code should use goroutines for the work
  • overall goroutine sync and async behavior should be reviewed (locking currently serializes all mounts)
  • the vmci protocol should be changed to non-blocking, with a callback on completion
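A minimal sketch of moving the long-running work onto goroutines on the Go side; attachAndFormat and the volume names are hypothetical placeholders for the real attach/format path:

package main

import (
	"fmt"
	"sync"
	"time"
)

// attachAndFormat stands in for the long-running attach + mkfs work that
// currently blocks every other request while it runs.
func attachAndFormat(vol string) error {
	time.Sleep(100 * time.Millisecond) // placeholder for the real work
	return nil
}

func main() {
	// A "start storm": several attach requests arrive at once.
	vols := []string{"vol1", "vol2", "vol3"}

	var wg sync.WaitGroup
	errs := make(chan error, len(vols))

	// Each request gets its own goroutine instead of waiting on one global lock;
	// per-volume locking (not shown) would still serialize work on the same volume.
	for _, v := range vols {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			if err := attachAndFormat(name); err != nil {
				errs <- fmt.Errorf("%s: %v", name, err)
			}
		}(v)
	}
	wg.Wait()
	close(errs)

	for err := range errs {
		fmt.Println("attach error:", err)
	}
}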

Build without docker fails with missing go-plugin-helpers/volume

administrator@hte-1s-eng-dhcp98:~/vpl/src/github.com/vmware/docker-vmdk-plugin$ make DOCKER_USE=false
echo *** Skipping "(DOCKER_USE=false):" docker build -t docker-vmdk-plugin-go-bld .
CONTRIBUTING.md Dockerfile Makefile README.md check.sh main.go plugin.go src vendor vmdkops vmdkops-esxsrv Skipping (DOCKER_USE=false): docker build -t docker-vmdk-plugin-go-bld .
Building ./bin/docker-vmdk-plugin ...
GO15VENDOREXPERIMENT=1 go build -o ./bin/docker-vmdk-plugin github.com/vmware/docker-vmdk-plugin
main.go:8:2: cannot find package "github.com/docker/go-plugins-helpers/volume" in any of:
/usr/src/pkg/github.com/docker/go-plugins-helpers/volume (from $GOROOT)
/home/administrator/vpl/src/github.com/docker/go-plugins-helpers/volume (from $GOPATH)
make: *** [bin/docker-vmdk-plugin] Error 1

I'm pulling in the helper stuff manually to run the build now.

Integration of plugin and VIC

vSphere Integrated Containers (VIC) has a deployment model for Engine/VM/Containers that differs from the regular Docker-in-a-VM case. In VIC, one VM runs a Docker engine that handles creating containers (VM == container) on request. The container VMs can run on any node in a vSphere cluster. Also, the volume attach is done by the VIC engine creating the correct VM reconfigure spec and running it when the container VM is forked.

So in order to use named VMDK volumes in VIC, there needs to be an integration with this mechanism.
This is a lower-priority item, as VIC already auto-creates VMDK volumes for the -v /path:/container_path option, so basic support for external VMDK volumes is already there.

Also, this may be better done by the VIC team. Still, we need to decide when and who does the integration, and execute. Adding a tracking item.

Docker PR (potential) - need support for existing volume reconfigure

Use case: a volume exists, but we need to change its policy to tolerate more failures.
Desired support:
docker volume config <vol_name> -o option1=value1...

And the related API /Volume.Config
And support on our driver side

This is an extension of #42 which already requests PRs to /Volume.Get /Volume.Create /Volume.List

Remove command line usage where an API is available

We use 'vmkfstools' to create and delete VMDKs (Python code), and 'mount/umount' to mount a disk (Go code).
We should use si.content.virtualDiskManager (VDM) on the Python side to create/delete the disk, and syscall.Mount / syscall.Unmount on the Go side; a rough sketch of the Go side follows.
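A minimal sketch of the Go side using the syscalls instead of shelling out; the device path, mount point, and filesystem type are placeholders (Linux only), not the plugin's actual code:

package main

import (
	"fmt"
	"syscall"
)

// mountVolume replaces exec("mount") with the mount(2) syscall directly.
// The device path and fs type are placeholders for what the plugin discovers
// after the ESX side attaches the VMDK.
func mountVolume(devicePath, mountPoint, fsType string) error {
	if err := syscall.Mount(devicePath, mountPoint, fsType, 0, ""); err != nil {
		return fmt.Errorf("mount %s on %s failed: %v", devicePath, mountPoint, err)
	}
	return nil
}

// unmountVolume replaces exec("umount") with the umount(2) syscall.
func unmountVolume(mountPoint string) error {
	if err := syscall.Unmount(mountPoint, 0); err != nil {
		return fmt.Errorf("umount %s failed: %v", mountPoint, err)
	}
	return nil
}

func main() {
	// Example only; running this requires root and an attached device.
	if err := mountVolume("/dev/sdb", "/mnt/vmdk/myVol", "ext4"); err != nil {
		fmt.Println(err)
	}
	_ = unmountVolume("/mnt/vmdk/myVol")
}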
