goblob

goblob is a tool for migrating Cloud Foundry blobs from one blobstore to another. It currently supports migrating from an NFS blobstore to an S3-compatible blobstore or to Azure blob storage.

Installing

Download the latest release.

Install from source

Requirements:

  • Go (with GOPATH set)
  • glide

mkdir -p $GOPATH/src/github.com/pivotal-cf/goblob
git clone [email protected]:pivotal-cf/goblob.git $GOPATH/src/github.com/pivotal-cf/goblob
cd $GOPATH/src/github.com/pivotal-cf/goblob
glide install
GOARCH=amd64 GOOS=linux go install github.com/pivotal-cf/goblob/cmd/goblob

Usage

The tool is a Go binary that must be executed on the NFS VM you intend to migrate from. Its commands are:

Command                          Description
goblob migrate [OPTIONS]         Migrate an NFS blobstore to an S3-compatible blobstore
goblob migrate2azure [OPTIONS]   Migrate an NFS blobstore to Azure blob storage

Prefix each option name with -- in the command you execute, e.g. --blobstore-path /var/vcap/store/shared.

Migrate NFS blobstore to S3-compatible blobstore

goblob migrate [OPTIONS]

Options

  • concurrent-uploads: Number of concurrent uploads (default: 20)
  • exclude: Directory to exclude (may be given more than once)

NFS-specific Options

  • blobstore-path: The path to the root of the NFS blobstore, e.g. /var/vcap/store/shared

S3-specific Options

  • s3-endpoint: The endpoint of the S3-compatible blobstore
  • s3-accesskey: The access key to use with the S3-compatible blobstore
  • s3-secretkey: The secret key to use with the S3-compatible blobstore
  • region: The region to use with the S3-compatible blobstore
  • buildpacks-bucket-name: The bucket containing buildpacks
  • droplets-bucket-name: The bucket containing droplets
  • packages-bucket-name: The bucket containing packages
  • resources-bucket-name: The bucket containing resources
  • use-multipart-uploads: Whether to use multi-part uploads
  • disable-ssl: Whether to disable SSL when uploading blobs
  • insecure-skip-verify: Skip server SSL certificate verification
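
For example, a migration to an S3-compatible blobstore might look like the following. The endpoint, credentials, and bucket names here are illustrative; substitute your own values:

```shell
goblob migrate --blobstore-path /var/vcap/store/shared \
  --s3-endpoint https://s3.example.com \
  --s3-accesskey "$ACCESS_KEY" \
  --s3-secretkey "$SECRET_KEY" \
  --region us-east-1 \
  --buildpacks-bucket-name cf-buildpacks \
  --droplets-bucket-name cf-droplets \
  --packages-bucket-name cf-packages \
  --resources-bucket-name cf-resources
```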

Migrate NFS blobstore to Azure blob storage

goblob migrate2azure [OPTIONS]

Example

goblob migrate2azure --blobstore-path /var/vcap/store/shared \
  --azure-storage-account $storage_account_name \
  --azure-storage-account-key $storage_account_key \
  --cloud-name AzureCloud \
  --buildpacks-bucket-name cf-buildpacks \
  --droplets-bucket-name cf-droplets \
  --packages-bucket-name cf-packages \
  --resources-bucket-name cf-resources

Options

  • concurrent-uploads: Number of concurrent uploads (default: 20)
  • exclude: Directory to exclude (may be given more than once)

NFS-specific Options

  • blobstore-path: The path to the root of the NFS blobstore, e.g. /var/vcap/store/shared

Azure-specific Options

  • azure-storage-account: Azure storage account name
  • azure-storage-account-key: Azure storage account key
  • cloud-name: The cloud to target; one of AzureCloud, AzureChinaCloud, AzureGermanCloud, or AzureUSGovernment
  • buildpacks-bucket-name: The container for buildpacks
  • droplets-bucket-name: The container for droplets
  • packages-bucket-name: The container for packages
  • resources-bucket-name: The container for resources

Post-migration Tasks

  • If your S3 service uses an SSL certificate signed by your own CA: before applying changes in Ops Manager to switch to S3, make sure the root CA certificate that signed the endpoint certificate is a BOSH trusted certificate. You will also need to update the Ops Manager CA certs (place the CA certificate in /usr/local/share/ca-certificates, run update-ca-certificates, and restart tempest-web), and re-add the certificate each time you upgrade Ops Manager. In PCF 1.9+, Ops Manager lets you replace its own SSL certificate, and that change persists across upgrades.
  • Update the Ops Manager File Storage configuration to point at the S3 blobstore, using the buckets (cc-buildpacks-, cc-droplets-, cc-packages-, cc-resources-)
  • Click Apply Changes in Ops Manager
  • Once the changes are applied, re-run goblob to migrate any files created after the initial migration
  • Validate that apps can be restaged and pushed
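
The CA-trust step above can be sketched as follows. This is run on the Ops Manager VM; the certificate filename is a placeholder, and the tempest-web restart command is an assumption that may vary by Ops Manager version:

```shell
# Copy your root CA certificate into the system trust store
# (my-root-ca.crt is a placeholder for your CA certificate file)
sudo cp my-root-ca.crt /usr/local/share/ca-certificates/
# Rebuild the system CA bundle so it includes the new certificate
sudo update-ca-certificates
# Restart the Ops Manager web process so it picks up the new bundle
sudo service tempest-web restart
```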

Removing NFS post-migration

  • Turn off the BOSH resurrector (bosh vm resurrection off)
  • In the IaaS console (e.g. the AWS EC2 console, vCenter, etc.), terminate all of the Cloud Controller VM jobs (Cloud Controller, Cloud Controller Worker, and Clock Global) as well as the NFS server VM, and ensure their attached disks are removed. Note that the CF API will become unavailable at this point (running apps should remain available). This step is required to ensure the NFS mount is removed from these jobs.
  • Run bosh cck against the CF deployment to check for errors in the BOSH state. When it asks whether to delete references to the missing Cloud Controller and NFS jobs, confirm.
  • Go back to Ops Manager, set the NFS instance count to zero in your ERT configuration, and restore your desired instance counts for the Cloud Controller jobs.
  • Click Apply Changes in Ops Manager. After this deploy finishes, CF API availability will resume.
  • Turn the BOSH resurrector back on if it isn't re-enabled after your re-deploy (bosh vm resurrection on).
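
The BOSH steps above correspond roughly to the following sketch. The deployment name is an assumption; check yours with bosh deployments, and note that older BOSH v1 CLIs use bosh cck without the -d flag:

```shell
# Disable the resurrector so BOSH does not recreate the VMs you terminate
bosh vm resurrection off

# ... terminate the CC and NFS VMs (and their disks) in the IaaS console ...

# Reconcile BOSH state; confirm the prompts to delete references
# to the missing Cloud Controller and NFS jobs
bosh -d cf cck

# Re-enable the resurrector after the redeploy completes
bosh vm resurrection on
```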

Known Issues

  • Starting with PCF 1.12, Ops Manager no longer lets you select the S3 blobstore configuration for ERT/PAS in place of the internal NFS option without also deleting the NFS server VM. You should therefore shut down the Cloud Controller VMs before switching your configuration, because you will not have a chance to run a post-configuration-switch migration once the NFS server is gone.

Developing

  • Install Docker
  • docker pull minio/minio

To run all of the tests in a Docker container:

./testrunner

To continually run the tests during development:

  • docker run -p 9000:9000 -e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" -e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" minio/minio server /tmp
  • (in a separate terminal) MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY ginkgo watch -r

goblob's People

Contributors

gossion, joefitzgerald, ryanpei, xchapter7x

