bosh-deployment

This repository serves as a reference and starting point for a developer-friendly configuration of the BOSH Director. Consume the master branch. Any changes should be made against the develop branch (changes are automatically promoted to master once they pass tests).

Important notice for users of bosh-deployment and BOSH DNS versions older than 1.28

As of version 1.28, BOSH DNS is built with Go 1.15, which requires that TLS certificates be created with a SAN field in addition to the usual CN field.

The following certificates are affected by this change and will need to be regenerated:

  • /dns_healthcheck_server_tls
  • /dns_healthcheck_client_tls
  • /dns_api_server_tls
  • /dns_api_client_tls

If you're using CredHub or another external variable store, you will need to use update_mode: converge, as documented at https://bosh.io/docs/manifest-v2/#variables.
If you are not using CredHub or another external variable store, follow the usual procedure for regenerating your certificates.
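
As a minimal sketch of what converge looks like on a variable definition (the option values here are illustrative placeholders and should match what your manifest already declares for each affected certificate):

variables:
- name: dns_healthcheck_server_tls   # one of the four affected certificates
  type: certificate
  update_mode: converge              # regenerate the value whenever its options change
  options:
    ca: dns_healthcheck_tls_ca       # illustrative CA name; use the CA your manifest references
    common_name: health.bosh-dns     # illustrative
    alternative_names: [health.bosh-dns]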

Jammy stemcells

We deploy using Jammy stemcells; however, if you would prefer to use the Bionic stemcells, append the ops files [IAAS]/use-bionic.yml and misc/source-releases/bosh.yml after the ops file [IAAS]/cpi.yml.
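
For example, with aws as the IaaS (a sketch; the remaining create-env arguments are whatever you already pass for your environment):

bosh create-env bosh.yml \
  -o aws/cpi.yml \
  -o aws/use-bionic.yml \
  -o misc/source-releases/bosh.yml
  # ...plus your usual -v/--vars-store arguments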

How is bosh-deployment updated?

An automated process updates BOSH and other releases within bosh-deployment:

  1. A new release of bosh is created.
  2. A CI pipeline updates bosh-deployment on develop with a compiled bosh release.
  3. Smoke tests are performed to ensure create-env works with this potential collection of resources and the new release.
  4. A commit to master is made.

Other releases such as UAA, CredHub, and various CPIs are also updated automatically.

Using bosh-deployment

Ops files

  • bosh.yml: Base manifest that is meant to be used with different CPI configurations
  • [alicloud|aws|azure|docker|gcp|openstack|softlayer|vcloud|vsphere|virtualbox]/cpi.yml: CPI configuration
  • [alicloud|aws|azure|docker|gcp|openstack|softlayer|vcloud|vsphere|virtualbox]/cloud-config.yml: Simple cloud configs
  • [alicloud|aws|azure|docker|gcp|openstack|vcloud|virtualbox|vsphere|warden]/use-bionic.yml: Uses the Bionic stemcell instead of the Jammy stemcell
  • jumpbox-user.yml: Adds user jumpbox for SSH-ing into the Director (see Jumpbox User)
  • uaa.yml: Deploys UAA and enables UAA user management in the Director
  • credhub.yml: Deploys CredHub and enables CredHub integration in the Director
  • bosh-lite.yml: Configures Director to use Garden CPI within the Director VM (see BOSH Lite)
  • syslog.yml: Configures syslog to forward logs to some destination
  • local-dns.yml: Enables Director DNS beta functionality
  • misc/config-server.yml: Deploys config-server (see credhub.yml)
  • misc/proxy.yml: Configures an HTTP proxy for the Director and CPI
  • runtime-configs/syslog.yml: Runtime config to enable syslog forwarding

See tests/run-checks.sh for example usage of different ops files.
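
For instance, to check that a combination of ops files still renders, without deploying anything (a sketch; supply the variables your chosen CPI requires):

bosh int bosh.yml \
  --var-errs \
  -o [IAAS]/cpi.yml \
  -o uaa.yml \
  -o credhub.yml \
  --vars-store /tmp/creds.yml
  # ...plus the -v variables for your IaaS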

Security Groups

Please ensure you have security groups set up correctly, e.g.:

Type                 Protocol Port Range  Source                     Purpose
SSH                  TCP      22          <IP you run bosh CLI from> SSH (if Registry is used)
Custom TCP Rule      TCP      6868        <IP you run bosh CLI from> Agent for bootstrapping
Custom TCP Rule      TCP      25555       <IP you run bosh CLI from> Director API
Custom TCP Rule      TCP      8443        <IP you run bosh CLI from> UAA API (if UAA is used)
Custom TCP Rule      TCP      8844        <IP you run bosh CLI from> CredHub API (if CredHub is used)
SSH                  TCP      22          <((internal_cidr))>        BOSH SSH (optional)
Custom TCP Rule      TCP      4222        <((internal_cidr))>        NATS
Custom TCP Rule      TCP      25250       <((internal_cidr))>        Blobstore
Custom TCP Rule      TCP      25777       <((internal_cidr))>        Registry if enabled


bosh-deployment's Issues

Docker CPI - Cannot upload stemcell due to "Cannot connect to the Docker daemon... Is the docker daemon running?"

Hello,

Please note that this issue has nothing in common with #93.

I was able to run the bosh director on docker by using the unix socket approach. The sample procedure is here and I confirm that it works fine.

I was able to target my director and then log in successfully. However, when I tried to upload the warden stemcell I got the following result:

#
# With this I confirmed that my bosh director is in good health.
#
[mycomp@homeoffice cloudfoundry]$ bosh2 -e docker vms
Using environment '10.245.0.10' as client 'admin'

Succeeded

#
# Just in case, I checked if the docker container is running and it seems it's fine.
#
[mycomp@homeoffice cloudfoundry]$ docker ps
CONTAINER ID        IMAGE                                                    COMMAND                   CREATED             STATUS              PORTS                                                                                                                                                                           NAMES
7d88343a66c6        bosh.io/stemcells:d974cae3-658a-4220-56dd-2791573cceb7   "bash -c '\n      u..."   11 minutes ago      Up 11 minutes       0.0.0.0:32802->22/tcp, 0.0.0.0:32801->4222/tcp, 0.0.0.0:32800->6868/tcp, 0.0.0.0:32799->8080/tcp, 0.0.0.0:32798->8443/tcp, 0.0.0.0:32797->25250/tcp, 0.0.0.0:32796->25555/tcp   e07c76c7-54b1-48ce-7f16-d6623b7ae4b1

#
# Now I try to upload warden stemcell and I get the error below.
#
[mycomp@homeoffice cloudfoundry]$ bosh2 -e docker upload-stemcell bosh-warden-stemcell-3421.6.tgz
Using environment '10.245.0.10' as client 'admin'

######################################################### 100.00% 240.32 MB/s 1s
Task 4

16:40:06 | Update stemcell: Extracting stemcell archive (00:00:02)
16:40:08 | Update stemcell: Verifying stemcell manifest (00:00:00)
16:40:08 | Update stemcell: Checking if this stemcell already exists (00:00:00)
16:40:08 | Update stemcell: Uploading stemcell bosh-warden-boshlite-ubuntu-trusty-go_agent/3421.6 to the cloud (00:00:00)
            L Error: CPI error 'Bosh::Clouds::CloudError' with message 'Importing stemcell from '/var/vcap/data/tmp/director/stemcell20170626-1864-1yudoue/image': Starting image import: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?' in 'create_stemcell' CPI method

16:40:08 | Error: CPI error 'Bosh::Clouds::CloudError' with message 'Importing stemcell from '/var/vcap/data/tmp/director/stemcell20170626-1864-1yudoue/image': Starting image import: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?' in 'create_stemcell' CPI method

Started  Mon Jun 26 16:40:06 UTC 2017
Finished Mon Jun 26 16:40:08 UTC 2017
Duration 00:00:02

Task 4 error

Uploading stemcell file:
  Expected task '4' to succeed but state is 'error'

Exit code 1

The error I get is this one:

Error: CPI error 'Bosh::Clouds::CloudError' with message 'Importing stemcell from '/var/vcap/data/tmp/director/stemcell20170626-1864-1yudoue/image': Starting image import: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?' in 'create_stemcell' CPI method

I am unsure how to approach this issue because I know for sure that my Docker daemon is running just fine. Could you please assist me and point me to the possible root causes I need to investigate? Thank you!

Regards,
Ivan Davidov

Please provide instructions for configuring network

In the instructions
"Note: Above assumes that you have configured Host-only network 'vboxnet0' with 192.168.50.0/24 and NAT network 'NatNetwork' with DHCP enabled."

@cppforlife showed me that this config has to be done in the VirtualBox UI. I would not have known what to do otherwise, and without this step it took 7 minutes to contact the agent.

Please replace this text with step by step instructions.

Thank you!

generate creds (without deploying anything) error

I tried this instruction using the bosh 2.0 CLI (still in beta): to generate creds (without deploying anything) or just to check if your manifest builds...

    bosh-cli int ../../bosh-deployment/bosh.yml \
      --var-errs \
      -o ../../bosh-deployment/aws/cpi.yml \
      --vars-store ./creds.yml \
      -v access_key_id=abc \
      -v secret_access_key=def

The result is:

    Getting all variables from variable definitions sections:
      Generating variable 'default_ca':
        Missing required CA name

    Exit code 1

Invalid DNS canonical name - must begin with a letter

Hi,

I've tried to deploy Cloud Foundry to a Docker-based BOSH installation (both at current HEAD), and I'm getting the following error:

Task 219 | 12:12:42 | Preparing deployment: Preparing deployment (00:00:02)
Task 219 | 12:12:50 | Preparing package compilation: Finding packages to compile (00:00:00)
Task 219 | 12:12:50 | Compiling packages: proxy/8c6fbff8f965520043c08d72cdf210fcd720ce26d816f23ddb90be7ca50b76e0
Task 219 | 12:12:50 | Compiling packages: tar/1de08f190630baf01c0741c86773a02f7c88c2786db1f219152e46e8853f1ccc
Task 219 | 12:12:50 | Compiling packages: busybox/eab1896633e23c0eea7e07b91b27c2c090d16efd7fe07de1adbd8e5a1b16578e
Task 219 | 12:12:50 | Compiling packages: iptables/85ef0fef60ac079fb75b36d0693cce704a662b23b0727c824f15d58713476f13
Task 219 | 12:12:50 | Compiling packages: libseccomp/0594acc533fd2801a347d6b725d0db9f3591c583d06f1e53201e701b12e09824
Task 219 | 12:12:50 | Compiling packages: apparmor/3789c10fa8ef4349f58badae51723eec7855d59b9bb6c1bb7ee21a264fae1dbb
Task 219 | 12:12:52 | Compiling packages: proxy/8c6fbff8f965520043c08d72cdf210fcd720ce26d816f23ddb90be7ca50b76e0 (00:00:02)
L Error: Invalid DNS canonical name '10245504-cf-compilation-6faf465d-7793-4352-9ce8-7eaa547cb6c5', must begin with a letter
Task 219 | 12:12:52 | Compiling packages: tar/1de08f190630baf01c0741c86773a02f7c88c2786db1f219152e46e8853f1ccc (00:00:02)
L Error: Invalid DNS canonical name '10245504-cf-compilation-bc431bff-f42d-4ad2-8cfc-b98ea04e51c2', must begin with a letter
Task 219 | 12:12:52 | Compiling packages: busybox/eab1896633e23c0eea7e07b91b27c2c090d16efd7fe07de1adbd8e5a1b16578e (00:00:02)
L Error: Invalid DNS canonical name '10245504-cf-compilation-be56fb8a-f206-4b0c-948a-fe622cf45611', must begin with a letter
Task 219 | 12:12:52 | Compiling packages: libseccomp/0594acc533fd2801a347d6b725d0db9f3591c583d06f1e53201e701b12e09824 (00:00:02)
L Error: Invalid DNS canonical name '10245504-cf-compilation-95b7e587-cb31-45eb-be0c-98501a749def', must begin with a letter
Task 219 | 12:12:52 | Compiling packages: apparmor/3789c10fa8ef4349f58badae51723eec7855d59b9bb6c1bb7ee21a264fae1dbb (00:00:02)
L Error: Invalid DNS canonical name '10245504-cf-compilation-85014a6d-09f0-4047-b6f4-1e5d5c666b5f', must begin with a letter
Task 219 | 12:12:52 | Compiling packages: iptables/85ef0fef60ac079fb75b36d0693cce704a662b23b0727c824f15d58713476f13 (00:00:02)
L Error: Invalid DNS canonical name '10245504-cf-compilation-89e52e3d-010a-42a7-a976-662a4bde18e0', must begin with a letter
Task 219 | 12:12:52 | Error: Invalid DNS canonical name '10245504-cf-compilation-6faf465d-7793-4352-9ce8-7eaa547cb6c5', must begin with a letter

Task 219 Started Thu Sep 21 12:12:42 UTC 2017
Task 219 Finished Thu Sep 21 12:12:52 UTC 2017
Task 219 Duration 00:00:10
Task 219 error

Updating deployment:
Expected task '219' to succeed but state is 'error'

Exit code 1

bosh create-env error

I deployed bosh-lite on VirtualBox with the command below and got the deploy log that follows.

--- command ---
bosh create-env bosh.yml --state ./state.json -o virtualbox/cpi.yml -o virtualbox/outbound-network.yml -o bosh-lite.yml -o bosh-lite-runc.yml -o jumpbox-user.yml --vars-store ./creds.yml -v director_name="Bosh Lite Director" -v internal_ip=192.168.150.6 -v internal_gw=192.168.150.1 -v internal_cidr=192.168.150.0/24 -v outbound_network_name=NatNetwork
--- deploy log ---
Deployment manifest: '/home/ihocho/workspace/bosh-lite/bosh-deployment/bosh.yml'
Deployment state: './state.json'

Started validating
  Downloading release 'bosh'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh'... Finished (00:00:00)
  Downloading release 'bosh-virtualbox-cpi'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh-virtualbox-cpi'... Finished (00:00:01)
  Downloading release 'bosh-warden-cpi'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh-warden-cpi'... Finished (00:00:00)
  Downloading release 'os-conf'... Skipped [Found in local cache] (00:00:00)
  Validating release 'os-conf'... Finished (00:00:00)
  Downloading release 'garden-runc'... Skipped [Found in local cache] (00:00:00)
  Validating release 'garden-runc'... Finished (00:00:00)
  Validating cpi release... Finished (00:00:00)
  Validating deployment manifest... Finished (00:00:00)
  Downloading stemcell... Skipped [Found in local cache] (00:00:00)
  Validating stemcell... Finished (00:00:02)
Finished validating (00:00:06)

Started installing CPI
  Compiling package 'golang_1.7/21609f611781e8586e713cfd7ceb389cee429c5a'... Finished (00:00:12)
  Compiling package 'virtualbox_cpi/b088193439d01014a46711523fbcfa00073c6b36'... Finished (00:00:08)
  Installing packages... Finished (00:00:02)
  Rendering job templates... Finished (00:00:00)
  Installing job 'virtualbox_cpi'... Finished (00:00:00)
Finished installing CPI (00:00:23)

Starting registry... Finished (00:00:00)
Uploading stemcell 'bosh-vsphere-esxi-ubuntu-trusty-go_agent/3312.15'... Finished (00:00:11)

Started deploying
  Creating VM for instance 'bosh/0' from stemcell 'sc-71928506-6a9f-4328-533f-f8710fc55469'... Failed (00:00:00)
Failed deploying (00:00:00)

Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)

Deploying:
  Creating instance 'bosh/0':
    Creating VM:
      Creating vm with stemcell cid 'sc-71928506-6a9f-4328-533f-f8710fc55469':
        Executing external CPI command: '/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/jobs/virtualbox_cpi/bin/cpi':
          Running command: '/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/jobs/virtualbox_cpi/bin/cpi', stdout: '', stderr: '[File System] 2017/04/20 16:14:14 DEBUG - Reading file /home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/jobs/virtualbox_cpi/config/cpi.json
[File System] 2017/04/20 16:14:14 DEBUG - Read content
********************
{"Host":"","Username":"ubuntu","PrivateKey":"","BinPath":"VBoxManage","StoreDir":"~/.bosh_virtualbox_cpi","StorageController":"ide","AutoEnableNetworks":true,"Agent":{"NTP":["time1.google.com","time2.google.com","time3.google.com","time4.google.com"],"blobstore":{"provider":"local","options":{"blobstore_path":"/var/vcap/micro_bosh/data/cache"}},"mbus":"https://mbus:[email protected]:6868"}}

********************
[rpc.JSONDispatcher] 2017/04/20 16:14:14 DEBUG - Request bytes
********************
{"method":"create_vm","arguments":["9f1332c1-1b31-4099-5e70-26af7581bbbb","sc-71928506-6a9f-4328-533f-f8710fc55469",{"cpus":2,"ephemeral_disk":16384,"memory":4096},{"default":{"cloud_properties":{},"default":["dns","gateway"],"dns":["8.8.8.8"],"gateway":"192.168.150.1","ip":"192.168.150.6","netmask":"255.255.255.0","type":"manual"},"outbound":{"cloud_properties":{"name":"NatNetwork","type":"natnetwork"},"type":"dynamic"}},[],{"bosh":{"password":"*"}}],"context":{"director_uuid":"99c0a426-9b2a-4acd-6b9c-5cdefd476972"}}
********************
[rpc.JSONDispatcher] 2017/04/20 16:14:14 DEBUG - Deserialized request
********************
{create_vm [9f1332c1-1b31-4099-5e70-26af7581bbbb sc-71928506-6a9f-4328-533f-f8710fc55469 map[cpus:%!!(MISSING)s(float64=2) ephemeral_disk:%!!(MISSING)s(float64=16384) memory:%!!(MISSING)s(float64=4096)] map[default:map[type:manual cloud_properties:map[] default:[dns gateway] dns:[8.8.8.8] gateway:192.168.150.1 ip:192.168.150.6 netmask:255.255.255.0] outbound:map[cloud_properties:map[type:natnetwork name:NatNetwork] type:dynamic]] [] map[bosh:map[password:*]]] {{"director_uuid":"99c0a426-9b2a-4acd-6b9c-5cdefd476972"}}}
********************
[driver.LocalRunner] 2017/04/20 16:14:14 DEBUG - Execute 'VBoxManage list' 'natnetworks'
[Cmd Runner] 2017/04/20 16:14:14 DEBUG - Running command 'VBoxManage list natnetworks'
[Cmd Runner] 2017/04/20 16:14:14 DEBUG - Stdout: NetworkName:    NatNetwork
IP:             10.0.2.1
Network:        10.0.2.0/24
IPv6 Enabled:   No
IPv6 Prefix:    fd17:625c:f037:2::/64
DHCP Enabled:   Yes
Enabled:        Yes
Port-forwarding (ipv4)
        ssh1:tcp:[127.0.0.1]:1234:[10.0.2.15]:22
        ssh2:tcp:[127.0.0.1]:2345:[10.0.2.4]:22
loopback mappings (ipv4)
        127.0.0.1=2

[Cmd Runner] 2017/04/20 16:14:14 DEBUG - Stderr: 
[Cmd Runner] 2017/04/20 16:14:14 DEBUG - Successful: true (0)
panic: Internal inconsistency: Expected len(^([a-zA-Z0-9\s]+):\s*(.+)?$ matches) == 3: line 'Port-forwarding (ipv4)'

goroutine 1 [running]:
panic(0x670120, 0xc4200fc0f0)
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/packages/golang_1.7/src/runtime/panic.go:500 +0x1a1
github.com/cppforlife/bosh-virtualbox-cpi/vm/network.Networks.NATNetworks(0x809740, 0xc42000edc0, 0x80b6c0, 0xc42000d6e0, 0x3, 0xc4200d8136, 0x0, 0x4, 0xc420063da8)
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/tmp/bosh-release-pkg684871484/src/github.com/cppforlife/bosh-virtualbox-cpi/vm/network/networks.go:62 +0x89c
github.com/cppforlife/bosh-virtualbox-cpi/vm.natNetworksAdapter.List(0x809740, 0xc42000edc0, 0x80b6c0, 0xc42000d6e0, 0xc41fff8ffc, 0x0, 0x1, 0xc41fff8ffd, 0xc4200cdc40)
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/tmp/bosh-release-pkg684871484/src/github.com/cppforlife/bosh-virtualbox-cpi/vm/host.go:163 +0x4d
github.com/cppforlife/bosh-virtualbox-cpi/vm.(*natNetworksAdapter).List(0xc4200e0040, 0x20, 0x6bbb80, 0xc4200d657b, 0x2, 0x20)
	<autogenerated>:65 +0x6d
github.com/cppforlife/bosh-virtualbox-cpi/vm.(*hostNetwork).Enable(0xc4200cdec0, 0xc4200cdd38, 0xc4200e0040)
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/tmp/bosh-release-pkg684871484/src/github.com/cppforlife/bosh-virtualbox-cpi/vm/host.go:84 +0x56
github.com/cppforlife/bosh-virtualbox-cpi/vm.Host.EnableNetworks(0x809740, 0xc42000edc0, 0x80b6c0, 0xc42000d6e0, 0xc4200d83f0, 0x0, 0x6ccea5)
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/tmp/bosh-release-pkg684871484/src/github.com/cppforlife/bosh-virtualbox-cpi/vm/host.go:37 +0x2af
github.com/cppforlife/bosh-virtualbox-cpi/vm.Factory.Create(0xc42004fa80, 0x1a, 0xc420015910, 0x3, 0x1, 0x8077c0, 0x846810, 0x809740, 0xc42000edc0, 0x809e00, ...)
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/tmp/bosh-release-pkg684871484/src/github.com/cppforlife/bosh-virtualbox-cpi/vm/factory.go:83 +0x1011
github.com/cppforlife/bosh-virtualbox-cpi/vm.(*Factory).Create(0xc42007f8c0, 0xc42000de90, 0x24, 0x809f00, 0xc42000f090, 0x807940, 0xc42004ff20, 0xc4200d8150, 0xc4200d8240, 0x6eec1c, ...)
	<autogenerated>:1 +0x10c
github.com/cppforlife/bosh-virtualbox-cpi/cpi.VMs.CreateVM(0x807b00, 0xc42008bf40, 0x807b80, 0xc42007f8c0, 0x807bc0, 0xc42007f9e0, 0xc42000de90, 0x24, 0xc42000dec0, 0x27, ...)
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/tmp/bosh-release-pkg684871484/src/github.com/cppforlife/bosh-virtualbox-cpi/cpi/vms.go:31 +0x208
github.com/cppforlife/bosh-virtualbox-cpi/cpi.(*CPI).CreateVM(0xc420084d00, 0xc42000de90, 0x24, 0xc42000dec0, 0x27, 0x807940, 0xc42004ff20, 0xc4200d8150, 0x846810, 0x0, ...)
	<autogenerated>:23 +0x103
github.com/cppforlife/bosh-cpi-go/apiv1.ActionFactory.Create.func3(0xc42000de90, 0x24, 0xc42000dec0, 0x27, 0xc42000df20, 0x2f, 0x30, 0xc4200d8150, 0x846810, 0x0, ...)
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/tmp/bosh-release-pkg684871484/src/github.com/cppforlife/bosh-cpi-go/apiv1/action_factory.go:44 +0x140
reflect.Value.call(0x68f480, 0xc42004faa0, 0x13, 0x6cc25c, 0x4, 0xc4200c40c0, 0x6, 0x8, 0xc420015f40, 0x681000, ...)
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/packages/golang_1.7/src/reflect/value.go:434 +0x5c8
reflect.Value.Call(0x68f480, 0xc42004faa0, 0x13, 0xc4200c40c0, 0x6, 0x8, 0x6, 0x8, 0x0)
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/packages/golang_1.7/src/reflect/value.go:302 +0xa4
github.com/cppforlife/bosh-cpi-go/rpc.JSONCaller.Call(0x68f480, 0xc42004faa0, 0xc4200108a0, 0x6, 0x6, 0x68f480, 0xc42004faa0, 0x0, 0x0)
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/tmp/bosh-release-pkg684871484/src/github.com/cppforlife/bosh-cpi-go/rpc/json_caller.go:44 +0x313
github.com/cppforlife/bosh-cpi-go/rpc.(*JSONCaller).Call(0x846810, 0x68f480, 0xc42004faa0, 0xc4200108a0, 0x6, 0x6, 0xc42004faa0, 0x0, 0x0, 0xc42002a008)
	<autogenerated>:5 +0x81
github.com/cppforlife/bosh-cpi-go/rpc.JSONDispatcher.Dispatch(0x807800, 0xc420015a20, 0x807980, 0x846810, 0x6d04ef, 0x12, 0x80b6c0, 0xc42000d6e0, 0xc4200d2600, 0x20b, ...)
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/tmp/bosh-release-pkg684871484/src/github.com/cppforlife/bosh-cpi-go/rpc/json_dispatcher.go:72 +0x483
github.com/cppforlife/bosh-cpi-go/rpc.(*JSONDispatcher).Dispatch(0xc420012fc0, 0xc4200d2600, 0x20b, 0x600, 0x600, 0x0, 0x0)
	<autogenerated>:12 +0x93
github.com/cppforlife/bosh-cpi-go/rpc.CLI.ServeOnce(0x807000, 0xc42002a008, 0x807040, 0xc42002a010, 0x8079c0, 0xc420012fc0, 0x6cc058, 0x3, 0x80b6c0, 0xc42000d6e0, ...)
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/tmp/bosh-release-pkg684871484/src/github.com/cppforlife/bosh-cpi-go/rpc/cli.go:38 +0x272
main.main()
	/home/ihocho/.bosh/installations/a97ca73f-2e3f-4cb8-6be2-81fb01ca6a19/tmp/bosh-release-pkg684871484/src/github.com/cppforlife/bosh-virtualbox-cpi/main/main.go:43 +0xabe
':
            exit status 2

Exit code 1

bosh-lite director failing to start garden containers

With the update from 5/5, we're seeing our bosh lites fail to start garden containers.

Deploying
---------

Director task 3
  Started preparing deployment > Preparing deployment. Done (00:00:01)

  Started preparing package compilation > Finding packages to compile. Done (00:00:00)

  Started creating missing vms
  Started creating missing vms > mysql/3cfce04c-99e1-479a-a419-ef4f3758ba3b (0)
  Started creating missing vms > mysql/f3e428f7-7f18-471b-9d06-e975864c668f (1)
  Started creating missing vms > arbitrator/1e1c21ac-12cb-44ce-81a9-51876f6f3125 (0)
  Started creating missing vms > proxy/c01f42b6-0e5f-4ffc-a94a-66d956ded0ad (0)
  Started creating missing vms > proxy/acb8fe2c-4583-4279-aa89-d0962c57e941 (1). Failed: Timed out pinging to a1841d37-cc0e-436b-bca0-6631602cb77f after 600 seconds (00:10:11)
   Failed creating missing vms > mysql/f3e428f7-7f18-471b-9d06-e975864c668f (1): Timed out pinging to 7bf2ef98-7a69-43b3-837f-3221d131a0bf after 600 seconds (00:10:11)
   Failed creating missing vms > proxy/c01f42b6-0e5f-4ffc-a94a-66d956ded0ad (0): Timed out pinging to 27dcb91c-fbbb-4ea7-9c0a-5dff775df21c after 600 seconds (00:10:11)
   Failed creating missing vms > arbitrator/1e1c21ac-12cb-44ce-81a9-51876f6f3125 (0): Timed out pinging to 0543a979-8163-4361-9d8a-ad4219cfc51e after 600 seconds (00:10:11)
   Failed creating missing vms > mysql/3cfce04c-99e1-479a-a419-ef4f3758ba3b (0): Timed out pinging to 6caf9c4a-abe8-47c0-a42f-5683885fa3e8 after 600 seconds (00:10:11)
   Failed creating missing vms (00:10:11)

Error 450002: Timed out pinging to a1841d37-cc0e-436b-bca0-6631602cb77f after 600 seconds

No Separate Variables for Persistent and Non-Persistent Datastores on vSphere cpi.yml

Currently we can only specify one datastore for both persistent and non-persistent disks for the vSphere CPI (https://github.com/cloudfoundry/bosh-deployment/blob/master/vsphere/cpi.yml), through the vcenter_ds variable.

datastore_pattern: ((vcenter_ds))
persistent_datastore_pattern: ((vcenter_ds))

If we have 2 separate datastores for persistent and non-persistent disks, it's not possible to use bosh-deployment for this scenario unless further modifications are made to the cpi.yml file.

Can we change cpi.yml in the bosh-deployment project to use 2 variables, vcenter_ds and vcenter_pds (persistent datastore)? (A workaround sketch follows.)
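
A small ops file applied after vsphere/cpi.yml could override just the persistent pattern. This is only a sketch: vcenter_pds is the proposed, not-yet-existing variable, and the exact path depends on how the datacenters block is laid out in the rendered manifest:

- type: replace
  path: /instance_groups/name=bosh/properties/vcenter/datacenters/0/persistent_datastore_pattern
  value: ((vcenter_pds))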

Non static IP BOSH Director

Hi,

I am trying to deploy a BOSH director onto a VLAN which is shared, so the policy in OpenStack doesn't allow me to specify the "internal_ip".

Started deploying
  Creating VM for instance 'bosh/0' from stemcell '9e3ec591-be63-4588-b5d5-556eb51476c0'... Failed (00:00:02)
Failed deploying (00:00:02)

Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)

Deploying:
  Creating instance 'bosh/0':
    Creating VM:
      Creating vm with stemcell cid '9e3ec591-be63-4588-b5d5-556eb51476c0':
        CPI 'create_vm' method responded with error: CmdError{"type":"Bosh::Clouds::CloudError","message":"OpenStack API Forbidden (Policy doesn't allow (rule:create_port and rule:create_port:fixed_ips) to be performed.). Check task debug log for details.","ok_to_retry":false}

Exit code 1

Is there a way to make BOSH use DHCP rather than having to specify an IP for the BOSH director?

OpenStack: Updating CPI config fails; how should the command be formatted?

bosh -e bosh-1 update-cpi-config /bosh/workspace/bosh-deployment/openstack/cpi.yml -v auth_url=http://192.168.70.161:5000/v3 -v az=nova -v default_key_name=bosh-stemcell-3363-20-key -v default_security_groups=[bosh] -v director_name=bosh-1 -v internal_cidr=10.100.101.0/24 -v internal_gw=10.100.101.1 -v internal_ip=10.100.101.50 -v net_id=64eba259-86c2-46ca-a82f-a36f1a1aca13 -v openstack_domain=default -v openstack_password=admin -v openstack_project=admin -v openstack_username=admin -v region=RegionOne -v postgres_password=adfx0hrqd6svcn5p7wpb -v registry_password=4ij6385kdcizodh96umk --var-file private_key=/bosh/bosh-stemcell-3363-20-key.pem

Output error:
Updating CPI config:
Director responded with non-successful status code '400' response '{"code":40000,"description":"Object ([{"path"=>"/releases/-", "type"=>"replace", "value"=>{"name"=>"bosh-openstack-cpi", "sha1"=>"ed48a0e021805448e4581764d11d20696a4eaecb", "url"=>"file:///bosh/bosh-openstack-cpi-release-31.tgz", "version"=>31}}, {"path"=>"/resource_pools/name=vms/stemcell?", "type"=>"replace", "value"=>{"sha1"=>"bc8096c0d817b407aed15af487bd6d5f4ad069f7", "url"=>"file:///bosh/bosh-stemcell-3363.20-openstack-kvm-ubuntu-trusty-go_agent.tgz"}}, {"path"=>"/resource_pools/name=vms/cloud_properties?", "type"=>"replace", "value"=>{"availability_zone"=>"nova", "instance_type"=>"m1.xlarge"}}, {"path"=>"/networks/name=default/subnets/0/cloud_properties?", "type"=>"replace", "value"=>{"net_id"=>"64eba259-86c2-46ca-a82f-a36f1a1aca13"}}, {"path"=>"/instance_groups/name=bosh/jobs/-", "type"=>"replace", "value"=>{"name"=>"registry", "release"=>"bosh"}}, {"path"=>"/instance_groups/name=bosh/properties/registry?", "type"=>"replace", "value"=>{"address"=>"10.100.101.50", "db"=>{"adapter"=>"postgres", "database"=>"bosh", "host"=>"127.0.0.1", "password"=>"adfx0hrqd6svcn5p7wpb", "user"=>"postgres"}, "endpoint"=>"http://registry:[email protected]:25777", "host"=>"10.100.101.50", "password"=>"4ij6385kdcizodh96umk", "port"=>25777, "username"=>"registry"}}, {"path"=>"/instance_groups/name=bosh/jobs/-", "type"=>"replace", "value"=>{"name"=>"openstack_cpi", "release"=>"bosh-openstack-cpi"}}, {"path"=>"/instance_groups/name=bosh/properties/director/cpi_job?", "type"=>"replace", "value"=>"openstack_cpi"}, {"path"=>"/cloud_provider/template?", "type"=>"replace", "value"=>{"name"=>"openstack_cpi", "release"=>"bosh-openstack-cpi"}}, {"path"=>"/instance_groups/name=bosh/properties/openstack?", "type"=>"replace", "value"=>{"api_key"=>"admin", "auth_url"=>"http://192.168.70.161:5000/v3", "default_key_name"=>"bosh-stemcell-3363-20-key", "default_security_groups"=>["bosh"], "domain"=>"default", "human_readable_vm_names"=>true, "project"=>"admin", "region"=>"RegionOne", "state_timeout"=>7200, "username"=>"admin"}}, {"path"=>"/cloud_provider/ssh_tunnel?", "type"=>"replace", "value"=>{"host"=>"10.100.101.50", "port"=>22, "private_key"=>"-----BEGIN RSA PRIVATE 
KEY-----\nMIIEowIBAAKCAQEA6o9FU1V9hJBT8d9O+Ka/FRDwog8236UtvCj7aLeo3f5EA1cP\nNnEOs5PLHn+bJrTqDxqLH9EDzcvW8SBvdzpPtVFrzKPyFHH62hoz5+PzckjAJVP9\npmf/UuDEZ1MImwq9pA5eXMIpPXBxVU6x6X4g/rZ2ddObVv08L3gZqlGbvlSf+rt6\nUtkuHnULCv294tYKF8959swilr/bkV6+sKLMJp1tt3NZ6y7bQeVl22bag8/WCVfy\n1R/e7ZIK9WYA7h3hJy+6xrANXsosCExRfF95h+fgCxSDsiS1fVgeG5rxnumlDSo9\nmlCqlAtvATIbhH1cAJqEQ4siHX2gi2x6q7WvyQIDAQABAoIBAGhqOtwr9GIstZG4\nbLk30VwZXGVoDG9mYoOeYrxs9ZlM9Q3flYAQuBCsvADpoTGL2525nTEepGjaUuao\nH7admJoIkspYOQ1s59RrUavqx5aaWB7F0uZe6UQFlqjAR9Zs45rCYrM8I4ucIHdv\nPrfIU/vPUdCF0GLa/A3Nku9EwOXqf7C+sENIOwTurZvpxcPdZOXS1f7o9ejMkDqx\n1d9I67ePi2UyuX5C3K1ALeLEmvsq3tadktf+CpWXdWZJkIQuL8R9jQJAFeeIcz0J\nSCMGn67k2ep7+17VUpYp+YLaCL8PSYJhO8N7QIjGAa9UmOuIt9F0wEDlClXtNw2o\nTus5/QECgYEA/pwYtyYaASMFGe4SSnGJUJmLSzxcZa/2dLBT4vrnDr65DiN5Pwvw\nIuJ9BRNKbfIYjyGAS8yixvQAvNEJe75oabF13JyyztyVpHZa6sR85ngwWYI7wWCJ\ntxbumWquOhu9Z5rHJQEwJ5zzqN02Ztw1q5QNP/Chh8bAXCW3D1XApjkCgYEA69cl\nvu2QGrPan25AUzBoW+OaBUKY+RCIokG93KpRqN5Fv8wnLz0iOoEGmQVNmd4PJi38\nUaOoJ+5RWEVCSy+vZRRHD5qP47n5v48GbulQ/fY/UcAYOH4SHB3SK8szaUhzXVr8\nkYcunx1vQY6dIJIDLh1cnwZUwGnfZKD2b76v1hECgYEA1O+DlBjfgrfhGlCLJ4tQ\nxgHEB0YSGFFTkz+syJYCC8jiR7rPOjUnvmUhHc+GXfEtLPddrwcT08RZoZBJmB4k\ngNCTu8+pk2vUEtmRK+rscmtuNE3A3/d29ZLONayMzbhJbY56oq4dseOHvGBVkSz2\nDesiMalzznQgiHBaaw7SsbECgYBIH3GRpADvyZTQMN1HE4S2pTIS7bzuXhoK1OQF\nOajjZaYa84oALkfrcE3eOfrzVS9405NYPB5Op9kEj5moeJrA5KSepvvd/p/b7xde\nj8ePAuF2VLKThCpxosUFU40TY260XADlWFvvmQbPG5f9v+ltDtmmYD9G4JnKolb6\n8WvAoQKBgFs7GKX6Fl/q4uJ86H0TlejJQCpsOviOLRMSaq8M3N7N069xsymJesYB\ncWLycu8lwwmqlSz+CSY7IlOgwioU5TBtslixAamqwVmANETnYSpAfnRcAH8jz2vX\nuXIu1SZ6ppv+Z6SdKFbicBhO8RY81cqArP26fcwVnMOF8O6COQpl\n-----END RSA PRIVATE KEY-----\n", "user"=>"vcap"}}, {"path"=>"/cloud_provider/properties/openstack?", "type"=>"replace", "value"=>{"api_key"=>"admin", "auth_url"=>"http://192.168.70.161:5000/v3", "default_key_name"=>"bosh-stemcell-3363-20-key", "default_security_groups"=>["bosh"], "domain"=>"default", "human_readable_vm_names"=>true, "project"=>"admin", "region"=>"RegionOne", "state_timeout"=>7200, "username"=>"admin"}}, {"path"=>"/variables/-", "type"=>"replace", "value"=>{"name"=>"registry_password", "type"=>"password"}}]) did not match the required type 'Hash'"}'

Exit code 1

Add variable for bosh DNS

Can we make the DNS servers configurable in bosh-deployment like the rest of the network settings?
Google's public DNS generally doesn't work for on-prem deploys. (One possible ops file is sketched after the snippet below.)

networks:
- name: default
  type: manual
  subnets:
  - range: ((internal_cidr))
    gateway: ((internal_gw))
    static: [((internal_ip))]
    dns: [8.8.8.8]            # <--------
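
As a sketch, an ops file along these lines could make that dns entry a variable (internal_dns is a hypothetical variable name, not one bosh-deployment currently defines):

- type: replace
  path: /networks/name=default/subnets/0/dns
  value: [((internal_dns))]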

Unclear why `credhub.yml` uses a non-default CA?

I noticed this commit by @danjahner that changes the credhub.yml ops file to generate its own CA by default, instead of using the default_ca.

A side-effect of this is that any client wishing to communicate with CredHub needs to maintain config for both a UAA CA and the CredHub CA. While I guess that could always be true, I'm curious as to why the default was changed in this manner (and I'm hoping the rationale might be documented somewhere).

I'll note that once Concourse fixes its other related problems, I think they'll be affected by this, as they don't have a config option to allow a CA to be specified for the CredHub-UAA communication (at least that I can see; although the command line appears to be able to take multiple certs, that can't be specified via their BOSH deployment AFAICT).

bosh-lite director not persistent across reboot

I've successfully deployed bosh-lite on VirtualBox, but after a reboot the BOSH director is failing.
SSHing into the VM indicates that monit is not running, and even after manually starting it, many services are not initializing.
Is this expected? I've seen comments that BOSH deployments are not persistent across reboots on bosh-lite, but I haven't seen such a comment regarding the director itself.
Thx!

Vsphere cpi.yml missing resource_pool

In order to get VMs created in the appropriate resource pool, we had to make the following change to vsphere/cpi.yml:

      clusters:
      - ((vcenter_cluster)): {}

changed to

      clusters:
      - ((vcenter_cluster)):
          resource_pool: ((vcenter_resource_pool))

We were thinking about making this a pull request since it's a simple change, but the documentation at https://bosh.io/docs/init-vsphere.html would need to be updated as well, so maybe it's more appropriate for the BOSH team to handle it.

Versioning

Are these configurations going to be versioned so it's easy to talk about what's been deployed when using this repo over time, similar to how cf-deployment is versioning itself?

Docker CPI - Can't find property 'docker_cpi.docker.tls.certificate'

Hello,

I've been trying to set up a BOSH director on Docker and, due to the lack of installation instructions, I used the command provided here:

bosh create-env bosh.yml \
  -o docker/cpi.yml \
  -o jumpbox-user.yml \
  --state=$vars_store_prefix \
  --vars-store $(mktemp ${vars_store_prefix}.XXXXXX) \
  -v director_name=docker \
  -v internal_cidr=10.245.0.0/16 \
  -v internal_gw=10.245.0.1 \
  -v internal_ip=10.245.0.10 \
  -v docker_host=tcp://192.168.50.8:4243 \
  -v docker_tls=ca_cert \
  -v network=net3

I issued the command exactly as it is provided. The only thing I changed was the call to bosh, because on my machine I installed it as bosh2 in order to be able to work with both the v1 and v2 CLIs.

This is the output I get each time I execute the command.

+ bosh2 create-env bosh.yml -o docker/cpi.yml -o jumpbox-user.yml --state= --vars-store .wZPieL -v director_name=docker -v internal_cidr=10.245.0.0/16 -v internal_gw=10.245.0.1 -v internal_ip=10.245.0.10 -v docker_host=tcp://192.168.50.8:4243 -v docker_tls=ca_cert -v network=net3
Deployment manifest: '/FS/fslocal/cloudfoundry/bosh-deployment/bosh.yml'
Deployment state: '/FS/fslocal/cloudfoundry/bosh-deployment/bosh-state.json'

Started validating
  Downloading release 'bosh'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh'... Finished (00:00:02)
  Downloading release 'bosh-docker-cpi'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh-docker-cpi'... Finished (00:00:02)
  Downloading release 'os-conf'... Skipped [Found in local cache] (00:00:00)
  Validating release 'os-conf'... Finished (00:00:00)
  Validating cpi release... Finished (00:00:00)
  Validating deployment manifest... Finished (00:00:00)
  Downloading stemcell... Skipped [Found in local cache] (00:00:00)
  Validating stemcell... Finished (00:00:02)
Finished validating (00:00:08)

Started installing CPI
  Compiling package 'golang_1.7/21609f611781e8586e713cfd7ceb389cee429c5a'... Finished (00:00:00)
  Compiling package 'docker_cpi/ab58317d13c89566a161b40115de222b80474739'... Finished (00:00:00)
  Installing packages... Finished (00:00:02)
  Rendering job templates... Failed (00:00:00)
Failed installing CPI (00:00:02)

Installing CPI:
  Rendering and uploading Jobs:
    Rendering job templates for installation:
      Rendering templates for job 'docker_cpi/ea5bcf51e5da2233b03c97ec118caa0ec7b716eb':
        Rendering template src: cpi.json.erb, dst: config/cpi.json:
          Rendering template src: /home/apama/.bosh/installations/7e5e7c36-e77e-4599-7a17-db143a0860d6/tmp/bosh-release-job582547316/templates/cpi.json.erb, dst: /home/apama/.bosh/installations/7e5e7c36-e77e-4599-7a17-db143a0860d6/tmp/rendered-jobs826377360/config/cpi.json:
            Running ruby to render templates:
              Running command: 'ruby /home/apama/.bosh/installations/7e5e7c36-e77e-4599-7a17-db143a0860d6/tmp/erb-renderer227669177/erb-render.rb /home/apama/.bosh/installations/7e5e7c36-e77e-4599-7a17-db143a0860d6/tmp/erb-renderer227669177/erb-context.json /home/apama/.bosh/installations/7e5e7c36-e77e-4599-7a17-db143a0860d6/tmp/bosh-release-job582547316/templates/cpi.json.erb /home/apama/.bosh/installations/7e5e7c36-e77e-4599-7a17-db143a0860d6/tmp/rendered-jobs826377360/config/cpi.json', stdout: '', stderr: '/home/apama/.bosh/installations/7e5e7c36-e77e-4599-7a17-db143a0860d6/tmp/erb-renderer227669177/erb-render.rb:189:in `rescue in render': Error filling in template '/home/apama/.bosh/installations/7e5e7c36-e77e-4599-7a17-db143a0860d6/tmp/bosh-release-job582547316/templates/cpi.json.erb' for docker_cpi/0 (line 26: #<TemplateEvaluationContext::UnknownProperty: Can't find property 'docker_cpi.docker.tls.certificate'>) (RuntimeError)
        from /home/apama/.bosh/installations/7e5e7c36-e77e-4599-7a17-db143a0860d6/tmp/erb-renderer227669177/erb-render.rb:175:in `render'
        from /home/apama/.bosh/installations/7e5e7c36-e77e-4599-7a17-db143a0860d6/tmp/erb-renderer227669177/erb-render.rb:200:in `<main>'
':
                exit status 1

Exit code 1

What bothers me is the message Can't find property 'docker_cpi.docker.tls.certificate'. The only reference to this property that I could find is in this pull request merged by Dmitriy Kalinin.

Could you please assist me and elaborate on the necessary prerequisites I need to fulfill before I run the create-env command? I have the feeling that I might need to generate a certificate keypair and provide the path(???) to the keypair in this line. Currently this is my best guess; a possible shape is sketched below.
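
For what it's worth, the error suggests docker_tls is expected to be a certificate-shaped value rather than a plain string like ca_cert. A sketch of a vars file entry under that assumption (PEM bodies elided):

docker_tls:
  ca: |
    (CA certificate PEM)
  certificate: |
    (client certificate PEM)
  private_key: |
    (matching private key PEM)

Such a file would be passed with --vars-file instead of the -v docker_tls=ca_cert form above.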

Long story short - I'm trying to run Bosh director via Docker CPI and I got the problem described above. All assistance on this matter is highly appreciated. Thank you!

Regards,
Ivan Davidov

Please configure GITBOT

Pivotal uses GITBOT to synchronize GitHub issues and pull requests with Pivotal Tracker.
Please add your new repo to the GITBOT config-production.yml in the Gitbot configuration repo.
If you don't have access you can send an ask ticket to the CF admins. We prefer teams to submit their changes via a pull request.

Steps:

  • Fork this repo: cfgitbot-config
  • Add your project to config-production.yml file
  • Submit a PR

If there are any questions, please reach out to [email protected].

Openstack (Mitaka) expects tenant instead of project

With OpenStack (Mitaka), the CPI complains about a missing tenant key. It appears OpenStack has enforced using tenant rather than having it interchangeable with project. There needs to be some logic to change https://github.com/cloudfoundry/bosh-deployment/blob/master/openstack/cpi.yml#L72 to tenant: ((openstack_tenant)), along with the relevant docs. If that also works for older versions of OpenStack, this should just become the default, but older versions of OpenStack need to be tested first. A possible interim workaround is sketched below.
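
As a sketch (the paths assume the openstack properties block that openstack/cpi.yml creates; the parallel cloud_provider/properties/openstack block would likely need the same treatment), an interim ops file applied after cpi.yml could be:

- type: replace
  path: /instance_groups/name=bosh/properties/openstack/tenant?
  value: ((openstack_tenant))

- type: remove
  path: /instance_groups/name=bosh/properties/openstack/project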

bosh director trust more than one ip

I created a bosh-lite on my machine A (IP address 10.112.113.124) with bosh create-env, as below. I can access the director on my local machine via the IP 192.168.50.6. Now I want to access the BOSH director from another machine B, so I added an iptables NAT rule that forwards all requests to port 25555 of my local machine A on to 192.168.50.6:25555. Now I can curl https://10.112.113.124:25555 successfully and get a response body, but when I use bosh CLI v2 to access 10.112.113.124:25555 it returns a certificate error:

Fetching info:
Performing request GET 'https://10.121.94.103:25555/info':
Performing GET request:
Retry: Get https://10.121.94.103:25555/info: x509: certificate is valid for 192.168.50.6, not 10.121.94.103

Exit code 1

So my question is: how can I make my request work? Can the director trust both 192.168.50.6 and 10.121.94.103? (One possible approach is sketched after the command below.)

create env cmd:
bosh create-env ~/workspace/bosh-deployment/bosh.yml
--state ./state.json
-o ~/workspace/bosh-deployment/virtualbox/cpi.yml
-o ~/workspace/bosh-deployment/virtualbox/outbound-network.yml
-o ~/workspace/bosh-deployment/bosh-lite.yml
-o ~/workspace/bosh-deployment/bosh-lite-runc.yml
-o ~/workspace/bosh-deployment/jumpbox-user.yml
--vars-store ./creds.yml
-v director_name="Bosh Lite Director"
-v internal_ip=192.168.50.6
-v internal_gw=192.168.50.1
-v internal_cidr=192.168.50.0/24
-v outbound_network_name=NatNetwork
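
As a sketch (not an official ops file), the externally routed IP could be added to the director_ssl certificate's alternative names. Since --vars-store caches generated values, the existing director_ssl entry in creds.yml would also have to be deleted so the certificate is regenerated on the next create-env:

- type: replace
  path: /variables/name=director_ssl/options/alternative_names/-
  value: ((external_ip))   # hypothetical variable for the externally routed IP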

Please provide an ops file for forwarding connections to bosh-lite

It is useful to be able to forward connections to bosh-lite when deploying a bosh-lite on an IaaS (e.g. bosh-lite on GCP), so you can test against the bosh-lite directly. There seems to be a networking-release for BOSH that enables this kind of port forwarding. It would be nice if bosh-deployment included an ops file that did this port forwarding for you. I believe that the following should be sufficient:

- type: replace
  path: /releases/-
  value:
    name: networking
    sha1: 092a6e0649e5adb88ec947b0d37d3e5afebe68fe
    url: https://bosh.io/d/github.com/cloudfoundry/networking-release?v=8
    version: 8

- type: replace
  path: /instance_groups/name=bosh/jobs/-
  value:
    name: port_forwarding
    release: networking   # must match the release name declared above

- type: replace
  path: /instance_groups/name=bosh/properties/networking?/port_forwarding?
  value:
  - external_port: 80
    internal_ip: 10.244.0.34
    internal_port: 80
  - external_port: 443
    internal_ip: 10.244.0.34
    internal_port: 443
  - external_port: 2222
    internal_ip: 10.244.0.34
    internal_port: 2222
  - external_port: 4443
    internal_ip: 10.244.0.34
    internal_port: 4443

help

OpenStack: Ocata
Networking option: Provider networks

bosh create-env bosh-deployment/bosh.yml
--state=state.json
--vars-store=creds.yml
-o bosh-deployment/openstack/cpi.yml
-v director_name=bosh-1
-v internal_cidr=172.18.0.0/16
-v internal_gw=172.18.0.10
-v internal_ip=172.18.25.135
-v auth_url=http://172.18.25.116:5000/v3/
-v az=nova
-v default_key_name=microbosh
-v default_security_groups=[bosh]
-v net_id=f9eb147c-227b-4d2b-b71f-446e48392bfc
-v openstack_password=zxc123
-v openstack_username=admin
-v openstack_domain=Default
-v openstack_project=admin
-v private_key=/root/microbosh.pem
-v region=RegionOne

bosh alias-env bosh-1 -e 172.18.25.135 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
export BOSH_CLIENT=admin
export BOSH_CLIENT_SECRET=bosh int ./creds.yml --path /admin_password
bosh -e bosh-1 env

Those steps are fine; I can see a VM running in the dashboard. But when I upload a stemcell I get this error:

  • 12:22:21 | Update stemcell: Uploading stemcell bosh-openstack-kvm-ubuntu-trusty-go_agent/3445 to the cloud (00:01:01)
  •         L Error: CPI error 'Bosh::Clouds::CloudError' with message 'Unable to connect to the OpenStack Keystone API http://172.18.25.116:5000/v3/auth/tokens
    
  • getaddrinfo: Name or service not known (SocketError)' in 'create_stemcell' CPI method
  • 12:23:22 | Error: CPI error 'Bosh::Clouds::CloudError' with message 'Unable to connect to the OpenStack Keystone API http://172.18.25.116:5000/v3/auth/tokens
  • getaddrinfo: Name or service not known (SocketError)' in 'create_stemcell' CPI method

I think it's a problem with /etc/hosts, so I tried ssh [email protected] -i /root/microbosh.pem and saw 127.0.0.1 localhost e47f7adb-b857-4f7f-75b9-6dc88c409e74 in /etc/hosts.
I tried to modify /etc/hosts but it is read-only.

Question 1: did I do something wrong?
Question 2: what is the password of user vcap or root (it is not c1oudc0w), or can I specify it when I deploy?
Question 3: can Cloud Foundry be deployed on a provider network?

bosh-lite with s3 blobstore fails at Rendering job templates

I'm trying to test an S3-compatible blobstore with bosh-lite and VirtualBox.
The create-env fails at 'Rendering job templates' with the following error:

Deploying:
  Building state for instance 'bosh/0':
    Rendering job templates for instance 'bosh/0':
      Rendering templates for job 'director/a71b0296ec100edb6a69e410defe39c637a63c3f':
        Rendering template src: director.yml.erb.erb, dst: config/director.yml.erb:
          Rendering template src: /home/vchrisb/.bosh/installations/f88f636c-b1bb-4b89-6e6d-cf5d2e50b427/tmp/bosh-release-job752984867/templates/director.yml.erb.erb, dst: /home/vchrisb/.bosh/installations/f88f636c-b1bb-4b89-6e6d-cf5d2e50b427/tmp/rendered-jobs358980496/config/director.yml.erb:
            Running ruby to render templates:
              Running command: 'ruby /home/vchrisb/.bosh/installations/f88f636c-b1bb-4b89-6e6d-cf5d2e50b427/tmp/erb-renderer210957634/erb-render.rb /home/vchrisb/.bosh/installations/f88f636c-b1bb-4b89-6e6d-cf5d2e50b427/tmp/erb-renderer210957634/erb-context.json /home/vchrisb/.bosh/installations/f88f636c-b1bb-4b89-6e6d-cf5d2e50b427/tmp/bosh-release-job752984867/templates/director.yml.erb.erb /home/vchrisb/.bosh/installations/f88f636c-b1bb-4b89-6e6d-cf5d2e50b427/tmp/rendered-jobs358980496/config/director.yml.erb', stdout: '', stderr: '/home/vchrisb/.bosh/installations/f88f636c-b1bb-4b89-6e6d-cf5d2e50b427/tmp/erb-renderer210957634/erb-render.rb:189:in `rescue in render': Error filling in template '/home/vchrisb/.bosh/installations/f88f636c-b1bb-4b89-6e6d-cf5d2e50b427/tmp/bosh-release-job752984867/templates/director.yml.erb.erb' for director/0 (line 161: #<TemplateEvaluationContext::UnknownProperty: Can't find property 'blobstore.address'>) (RuntimeError)
        from /home/vchrisb/.bosh/installations/f88f636c-b1bb-4b89-6e6d-cf5d2e50b427/tmp/erb-renderer210957634/erb-render.rb:175:in `render'
        from /home/vchrisb/.bosh/installations/f88f636c-b1bb-4b89-6e6d-cf5d2e50b427/tmp/erb-renderer210957634/erb-render.rb:200:in `<main>'
':
                exit status 1

Exit code 1

I've created an ops file with following content:

- type: remove
  path: /instance_groups/name=bosh/jobs/name=blobstore

- type: remove
  path: /variables/name=blobstore_director_password

- type: remove
  path: /variables/name=blobstore_agent_password
  
- type: replace
  path: /instance_groups/name=bosh/properties/blobstore
  value:
    properties:
      blobstore:
        provider: s3
        access_key_id: ((access_key_id))
        secret_access_key: ((secret_access_key))
        bucket_name: ((bucket_name))
        host: ((s3_host))
        use_ssl: true

- type: replace
  path: /instance_groups/name=bosh/properties/warden_cpi/agent/blobstore
  value:
    provider: s3
    options:
      access_key_id: ((access_key_id))
      secret_access_key: ((secret_access_key))
      bucket_name: ((bucket_name))
      host: ((s3_host))
      use_ssl: true

- type: replace
  path: /instance_groups/name=bosh/jobs/name=virtualbox_cpi/properties/blobstore
  value:
    provider: s3
    access_key_id: ((access_key_id))
    secret_access_key: ((secret_access_key))
    bucket_name: ((bucket_name))
    host: ((s3_host))
    use_ssl: true

and my create-env command is:

bosh create-env ~/workspace/bosh-deployment/bosh.yml \
  --state ~/deployments/vbox/state.json \
  -o ~/workspace/bosh-deployment/virtualbox/cpi.yml \
  -o ~/workspace/bosh-deployment/virtualbox/outbound-network.yml \
  -o ~/workspace/bosh-deployment/bosh-lite.yml \
  -o ~/workspace/bosh-deployment/bosh-lite-runc.yml \
  -o ~/workspace/bosh-deployment/jumpbox-user.yml \
  -o ~/workspace/blobstore_s3.yml \
  --vars-store ~/deployments/vbox/creds.yml \
  -v director_name="Bosh Lite Director" \
  -v internal_ip=192.168.50.6 \
  -v internal_gw=192.168.50.1 \
  -v internal_cidr=192.168.50.0/24 \
  -v outbound_network_name=NatNetwork \
  -v access_key_id=123456 \
  -v secret_access_key=123456 \
  -v bucket_name=bosh_blobstore \
  -v s3_host=object.domain.com

the resulting manifest is:

cloud_provider:
  mbus: https://mbus:[email protected]:6868
  properties:
    agent:
      mbus: https://mbus:[email protected]:6868
    blobstore:
      path: /var/vcap/micro_bosh/data/cache
      provider: local
    ntp:
    - time1.google.com
    - time2.google.com
    - time3.google.com
    - time4.google.com
  template:
    name: virtualbox_cpi
    release: bosh-virtualbox-cpi
disk_pools:
- disk_size: 65536
  name: disks
instance_groups:
- instances: 1
  jobs:
  - name: nats
    release: bosh
  - name: postgres-9.4
    release: bosh
  - name: director
    release: bosh
  - name: health_monitor
    release: bosh
  - name: virtualbox_cpi
    properties:
      agent:
        mbus: nats://nats:[email protected]:4222
      blobstore:
        access_key_id: 123456
        bucket_name: bosh_blobstore
        host: object.domain.com
        provider: s3
        secret_access_key: 123456
        use_ssl: true
      ntp:
      - 0.pool.ntp.org
      - 1.pool.ntp.org
    release: bosh-virtualbox-cpi
  - name: warden_cpi
    release: bosh-warden-cpi
  - name: garden
    release: garden-runc
  - name: disable_agent
    release: os-conf
  - name: user_add
    properties:
      users:
      - name: jumpbox
        public_key: |
          ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDZh+/tbUpco+zr6yzOjHB4iuatCSZMfVtEk4sNgN12g5nRNu5RV/elSeKvdjpCnFzP6DAP0YGTQdWqs2PPT6rnTc1/e0WccCje1yyYcjw6xthsOA2F+UWf3XpPIODr/2nES7ZFWJicjTkQ3rLZz2BLggwx+5031N2Qmy5lzIk4syaxq5k7mJ7Uy0TlUBbY+1rlOyXPLcbSNZGNq60g3I71CtL9kGFJC0oPNcwGjbXun1WoYMVxIZrZrpOK1Sr8HHybqYy+pCKx4o4xmaLwBp/zjsnCckGfxG3mlcvgi7iAx+T8YagUEoHauLmaRweSAsWdxU2PdjKRdUejpzxl+Wzz
    release: os-conf
  name: bosh
  networks:
  - default:
    - dns
    - gateway
    name: default
    static_ips:
    - 192.168.50.6
  - name: outbound
  persistent_disk_pool: disks
  properties:
    agent:
      mbus: nats://nats:[email protected]:4222
    blobstore:
      properties:
        blobstore:
          access_key_id: 123456
          bucket_name: bosh_blobstore
          host: object.domain.com
          provider: s3
          secret_access_key: 123456
          use_ssl: true
    compiled_package_cache:
      options:
        blobstore_path: /var/vcap/store/tmp/compiled_package_cache
      provider: local
    director:
      address: 127.0.0.1
      cpi_job: warden_cpi
      db:
        adapter: postgres
        database: bosh
        host: 127.0.0.1
        listen_address: 127.0.0.1
        password: x9r6y07v55cle15nsc4w
        user: postgres
      enable_dedicated_status_worker: true
      enable_post_deploy: true
      events:
        record_events: true
      flush_arp: true
      generate_vm_passwords: true
      ignore_missing_gateway: true
      name: Bosh Lite Director
      ssl:
        cert: |
          -----BEGIN CERTIFICATE-----
          MIIDQTCCAimgAwIBAgIRALdipFKD4Kp2I/vs8q1/rtowDQYJKoZIhvcNAQELBQAw
          MzEMMAoGA1UEBhMDVVNBMRYwFAYDVQQKEw1DbG91ZCBGb3VuZHJ5MQswCQYDVQQD
          EwJjYTAeFw0xNzAzMTMxNDExMjdaFw0xODAzMTMxNDExMjdaMD0xDDAKBgNVBAYT
          A1VTQTEWMBQGA1UEChMNQ2xvdWQgRm91bmRyeTEVMBMGA1UEAxMMMTkyLjE2OC41
          MC42MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4RXft2YF5QCOjrZe
          bk7uZpe28vZXPjidyk5eLTNQ//Jh5WdBswiNfGxwS8ejiUH0UjyK6ZoD0zr2X9HW
          CJzmdWX2e9S0HxwI3RU7WKocNzBFdlpxsFTh39a6fo0ImZ7yirdUcsAR3qSHX30a
          U8NhPnqQoZaCtZFedpPAIUi+veUsXZSNrkMWqB97QLLk++S6lf7hQyJ3+miK4a4m
          vnpA4/Nzo2p38NPda5AnI2V8vSZKpGN3iCBhgjSnbPkoznklIukzHQ8/Jk30KUEq
          TdClP1cjxrG1vQ52IT8LeE9LR3iEEKjxeFouRQBxu7svLlziZtksqRzs6UEe7yhR
          5chd0wIDAQABo0YwRDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUH
          AwEwDAYDVR0TAQH/BAIwADAPBgNVHREECDAGhwTAqDIGMA0GCSqGSIb3DQEBCwUA
          A4IBAQAgDBvBCBiBTf4nVq8HeZMPkpBoI4vuUOgQ/HJ7lMgMY+di0H2l30GemOIU
          yHkUT5y1R+tlytDeoNR3OcMi0c8t2AwTRqXBSi393Leo3VRccUrIWdMMh0ivDHF0
          M3mw6UCXD7HOCpmaE5beb3SzT9jujxC7CzL2gvLzL2bRTiKc2DHO8nhJ6VU78FmQ
          LM3SNEehav52I+Ky6Ap7tTv6+32Ik2lcSE1zM1ztbwjTsyRJSpqhfDGWBu/LTOYd
          z4V0ZHrHSY1i8JCP5POGid/YYDq4mU0jvAvL7EJvbOxyJzzkxdqnleanP/CbIi5W
          ORswZYH+HHDp7PGy6PNAZpmv4eQQ
          -----END CERTIFICATE-----
        key: |
          -----BEGIN RSA PRIVATE KEY-----
          MIIEpAIBAAKCAQEA4RXft2YF5QCOjrZebk7uZpe28vZXPjidyk5eLTNQ//Jh5WdB
          swiNfGxwS8ejiUH0UjyK6ZoD0zr2X9HWCJzmdWX2e9S0HxwI3RU7WKocNzBFdlpx
          sFTh39a6fo0ImZ7yirdUcsAR3qSHX30aU8NhPnqQoZaCtZFedpPAIUi+veUsXZSN
          rkMWqB97QLLk++S6lf7hQyJ3+miK4a4mvnpA4/Nzo2p38NPda5AnI2V8vSZKpGN3
          iCBhgjSnbPkoznklIukzHQ8/Jk30KUEqTdClP1cjxrG1vQ52IT8LeE9LR3iEEKjx
          eFouRQBxu7svLlziZtksqRzs6UEe7yhR5chd0wIDAQABAoIBAHpSiOICb/Gj+9VT
          Br6r5qOaj7I6be9ClX38WPH3kW2HK+yf0PSbEUktJVoJhLZzQXPvsw6AxNNml747
          KzZDDnt+jhV94uWFNsvvXfExgWP8t8M6I87QUnBzIabkvme+GdGJEDvMZem5QFiE
          hGpBI/fwY+ltAlvqRIvsf92WyxInFJUomaSXgqDK7rMLIs347CEjQ7VN1lUsZgG+
          NC/Tlll9iqzMiqqFoRcctsFP7IChyfmJVY+6He0M5mw7QYP88QyohVEWrvqEkf/1
          6xRAqGun/m1R6NPLRAH3+dX5MtfpCNaZ3MMYP5wai138e5I4XJKrgF4tGsMnB6ku
          O8r7sQECgYEA6cUWaPUKnVIMIMdOl0IpzMMtwABaslhZ7v+/x18T3T71MkKnhPSD
          JPnqnnmEnCMn+8SoORU+uayGgUMKyzHE3XRx4WWx9qExT6mUhrrjr2aX98x//z4T
          U8/SxZqyyNhf+M+wO8utEnifA0vk6UBweYPohmcF2sN7Yvqj4SNBpn0CgYEA9n1f
          N0JppNz+T62l7dx+BN36CWSyEUjLQDzBIhbkKOXxWE32wzU9i9h59mReIgj8KWqE
          n4/628PkSo35E8jghUFCDtRf0BoEZtLhdjBlvS0KxbM753JqhG/QIIECzNV6wJ5V
          9EJC0zMBsac2dpXqsX1yspvtQpAGDxoU2bFWNo8CgYEAz0RiwT56YdBMVofAQ9Zy
          700idDkcMUKqwoBZjrDbEPBwQFbe5sBQwukfP9FoZXO6UL0lli8jBUdVnqhNmqmO
          7fb/vaQILS7wZLxrpyVvGKZzGU9lMW7dfhMmwvONjwxh01552BqXYmg2PJr+5Fyx
          HNx6vyf7BeMKtFCcGtLCs5UCgYA2KNYDDlSoJOa8GyuaWBhYeW23Iqj9o0EFnFPT
          abQ4SE3/WSIfQlODps0llmgYkmDVuNHrPXehUimXOBrCfiDXJr+dAo0K7KyK60se
          7QNtzbfQONGwyTMeZnMUsUQsPbv7Fs9MHEMSpOJ6ZoNRCx/GYAoTtK8tMPgj2Vc7
          ffuzgQKBgQDObjAgDneXpr1bjr8efzUrV/9RyQyoNqAvU32nYDBQYF2Gjdw1uTfD
          Hy5CV1c8g7prdJ1EzEAHZS3j8OOlLiaPAYCNaTV+2GoGrNh6AqxYQw9SH60MqZcg
          OZgNQEbwS3gEbI8dz/f1e10zKBHBt/M8bB3UVH2g1xx00/f4R/ekDg==
          -----END RSA PRIVATE KEY-----
      user_management:
        local:
          users:
          - name: admin
            password: dzq9apljq4tvedij08lg
          - name: hm
            password: xrc0g77vznrtts07rml5
        provider: local
      workers: 4
    garden:
      allow_host_access: true
      debug_listen_address: 127.0.0.1:17013
      default_container_grace_time: 0
      destroy_containers_on_start: true
      graph_cleanup_threshold_in_mb: 0
      listen_address: 127.0.0.1:7777
      listen_network: tcp
    hm:
      director_account:
        ca_cert: |
          -----BEGIN CERTIFICATE-----
          MIIDEzCCAfugAwIBAgIQPteaxzfbeS+hV0k8x+LedDANBgkqhkiG9w0BAQsFADAz
          MQwwCgYDVQQGEwNVU0ExFjAUBgNVBAoTDUNsb3VkIEZvdW5kcnkxCzAJBgNVBAMT
          AmNhMB4XDTE3MDMxMzE0MTEyN1oXDTE4MDMxMzE0MTEyN1owMzEMMAoGA1UEBhMD
          VVNBMRYwFAYDVQQKEw1DbG91ZCBGb3VuZHJ5MQswCQYDVQQDEwJjYTCCASIwDQYJ
          KoZIhvcNAQEBBQADggEPADCCAQoCggEBANQzt+E3lQB19/Wwi2z02MnqcMEA+MNA
          lsjKQfqyix2HHA1n7GTw8+wTkkReiYrjW0kVr7lsK94PowNVivXDbjOgTx6eVu9F
          i9Sj3nb8YUTp58dRhTpAEnALwz7PSOdYRb9Pe29L7tIsVrgm/JRz/L+iqom/vFMT
          4mJExKsNShjIWoIaVC9uuIoxYE3L+wOsP5YZ146mYCGxTvi9uy5dNW0LSFLVYf4u
          xyfwvF5MsUCzjUgWhPLRSvEOoqpOlBpIFDfvFDK9I+DrrhLs9mceGPYF8/T/b3jx
          IWV3vHxJXsKMz/1+pQz5/VWNfZX2vZtWdZXOsW0OO6GozSs2z1IcLxsCAwEAAaMj
          MCEwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL
          BQADggEBAC1UCVNq0HhqRi60uzjUKZjd8wLZ+8XCqe8nAiPa0pUddugDmPwwMLr/
          aNrfZ78Pe5bLQ3tJOlIOizVKZUFCQY4QbqTxueFAuMFeGu6x0CoFRzYJpfv33q59
          r2UKPzt1jdPGlruZ3AUSGfCwplRjU+fUnGJVfBvAGzMU/LeQHBy0u7EjNCLwJ45C
          Z9MH4DNo5vwF27peO4uQ4oaNtp9OfQlJ0rwjbQ1L8oeJNCp0OR2XgcC1sH9BoSnc
          iyt+XgVTpUf3bocvOiMmxpl8bqZksh8vrJHba8cVBmB59WGjDF90+tDhM/8M3E7U
          9zKl6kYuACErOY20p9tQR2vPQ1QcYVc=
          -----END CERTIFICATE-----
        password: xrc0g77vznrtts07rml5
        user: hm
      resurrector_enabled: true
    nats:
      address: 127.0.0.1
      password: lqgpjp78jokhoa0pnvkl
      user: nats
    ntp:
    - time1.google.com
    - time2.google.com
    - time3.google.com
    - time4.google.com
    postgres:
      adapter: postgres
      database: bosh
      host: 127.0.0.1
      listen_address: 127.0.0.1
      password: x9r6y07v55cle15nsc4w
      user: postgres
    warden_cpi:
      agent:
        blobstore:
          options:
            access_key_id: 123456
            bucket_name: bosh_blobstore
            host: object.domain.com
            secret_access_key: 123456
            use_ssl: true
          provider: s3
        mbus: nats://nats:[email protected]:4222
      host_ip: 10.254.50.4
      warden:
        connect_address: 127.0.0.1:7777
        connect_network: tcp
  resource_pool: vms
name: bosh
networks:
- name: default
  subnets:
  - dns:
    - 8.8.8.8
    gateway: 192.168.50.1
    range: 192.168.50.0/24
    static:
    - 192.168.50.6
  type: manual
- cloud_properties:
    name: NatNetwork
    type: natnetwork
  name: outbound
  type: dynamic
releases:
- name: bosh
  sha1: 16966c90fb3535a2de6e2e19bf8252524d2f2d27
  url: https://s3.amazonaws.com/bosh-compiled-release-tarballs/bosh-260.5-ubuntu-trusty-3312.15-20170124-025145-688314225-20170124025151.tgz?versionId=XdnsJBm4uh.wTJ1aKy5BZ.B.NtBOZFTD
  version: 260.5
- name: bosh-virtualbox-cpi
  sha1: fd67549c2165f845b77798f5d861020d87d9da9c
  url: https://bosh.io/d/github.com/cppforlife/bosh-virtualbox-cpi-release?v=0.0.10
  version: 0.0.10
- name: bosh-warden-cpi
  sha1: 53642f8485b3b601ac163f8ca07c7dc5dfb11531
  url: https://s3.amazonaws.com/bosh-compiled-release-tarballs/bosh-warden-cpi-34-ubuntu-trusty-3312.15-20170201-081004-654654371-20170201081008.tgz?versionId=uO9gUrQL0tF.i_9OFwi53U9CpVdsm8Rf
  version: 34
- name: os-conf
  sha1: 651f10a765a2900a7f69ea07705f3367bd8041eb
  url: https://bosh.io/d/github.com/cloudfoundry/os-conf-release?v=11
  version: 11
- name: garden-runc
  sha1: 72748d8ddad788f2092fe121e6944c90cfd3b5d2
  url: https://s3.amazonaws.com/bosh-compiled-release-tarballs/garden-runc-1.1.1-ubuntu-trusty-3312.15-20170120-025258-725649478-20170120025303.tgz?versionId=LM5X6WRr_SSQkCpTDg197bRwr8YQxWqP
  version: 1.1.1
resource_pools:
- cloud_properties:
    cpus: 2
    ephemeral_disk: 16384
    memory: 4096
  env:
    bosh:
      password: '*'
  name: vms
  network: default
  stemcell:
    sha1: 60b8230da19d7e27b5101fc34f569e85eb5b3ad0
    url: https://bosh.io/d/stemcells/bosh-vsphere-esxi-ubuntu-trusty-go_agent?v=3312.15
variables:
- name: admin_password
  type: password
- name: hm_password
  type: password
- name: mbus_bootstrap_password
  type: password
- name: nats_password
  type: password
- name: postgres_password
  type: password
- name: default_ca
  options:
    common_name: ca
    is_ca: true
  type: certificate
- name: director_ssl
  options:
    alternative_names:
    - 192.168.50.6
    ca: default_ca
    common_name: 192.168.50.6
  type: certificate
- name: jumpbox_ssh
  type: ssh

aws/cloud-config.yml needs reserved IPs

https://github.com/cloudfoundry/bosh-deployment/blob/master/aws/cloud-config.yml#L36 ships an empty list (reserved: []), but AWS does in fact reserve the first four IP addresses of every subnet (see http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html#vpc-sizing-ipv4).

As it stands, reserved: [] is guaranteed to be wrong, and it makes the README's demo fail:

$ bosh -e aws-demo -d zookeeper deploy tmp/zookeeper-release/manifests/zookeeper.yml
15:23:15 | Preparing deployment: Preparing deployment (00:00:00)
15:23:15 | Preparing package compilation: Finding packages to compile (00:00:00)
15:23:15 | Compiling packages: golang_1.7/482e72c8435a11e1d1c3c25e4ee86ced53cc8739
15:23:15 | Compiling packages: zookeeper/ca455273c83e828eb50a21d21811684eceda2603
15:23:15 | Compiling packages: java/c524e46e61b37894935ae28016973e0e8644fcde
15:25:48 | Compiling packages: zookeeper/ca455273c83e828eb50a21d21811684eceda2603 (00:02:33)
15:26:09 | Compiling packages: java/c524e46e61b37894935ae28016973e0e8644fcde (00:02:54)
15:26:11 | Compiling packages: golang_1.7/482e72c8435a11e1d1c3c25e4ee86ced53cc8739 (00:02:56)
15:26:11 | Compiling packages: smoke_tests/cc583f5a7b430df6bd9a5bb191a5473f63d170ea (00:00:21)
15:27:41 | Creating missing vms: zookeeper/52bc8fdf-cbd7-4345-b3c0-4f1a731d7e5d (0)
15:27:41 | Creating missing vms: zookeeper/969fe0a0-106b-423d-bf7f-3824454a08fa (3)
15:27:41 | Creating missing vms: zookeeper/52279d33-207e-4523-8b39-26e214f3f7e9 (4)
15:27:41 | Creating missing vms: zookeeper/b17beeaf-3b73-452d-b533-074981bab2c0 (1)
15:27:41 | Creating missing vms: zookeeper/81a0bef1-7730-4866-9868-bffb222bf198 (2)
15:27:52 | Creating missing vms: zookeeper/969fe0a0-106b-423d-bf7f-3824454a08fa (3) (00:00:11)
            L Error: Unknown CPI error 'Unknown' with message 'Address 10.11.0.3 is in subnet's reserved address range' in 'create_vm' CPI method
15:27:52 | Creating missing vms: zookeeper/52bc8fdf-cbd7-4345-b3c0-4f1a731d7e5d (0) (00:00:11)
            L Error: Unknown CPI error 'Unknown' with message 'Address 10.11.0.2 is in subnet's reserved address range' in 'create_vm' CPI method
15:29:55 | Creating missing vms: zookeeper/52279d33-207e-4523-8b39-26e214f3f7e9 (4) (00:02:14)
15:30:00 | Creating missing vms: zookeeper/b17beeaf-3b73-452d-b533-074981bab2c0 (1) (00:02:19)

@cppforlife @dpb587-pivotal Ideas? Perhaps the AWS CPI needs a default reserved list when the configured one is empty? (A sketch of a possible ops-file workaround follows below.)

(BTW, why didn't the Compiling packages step fail first? Wouldn't it have accidentally used IPs in the misconfigured AWS reserved range?)
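
For illustration, here is a minimal sketch of an ops file against aws/cloud-config.yml that reserves the addresses AWS holds back, assuming the 10.11.0.0/24 subnet from the failing deploy above; the path and the exact range are assumptions, not something the repo ships:

# Hypothetical ops file: mark the low addresses AWS holds back in each subnet
# (.1 gateway, .2 DNS, .3 reserved for future use) as reserved so the
# Director never assigns them. Adjust the range to your subnet's CIDR.
- type: replace
  path: /networks/name=default/subnets/0/reserved
  value:
  - 10.11.0.1 - 10.11.0.3

Until something like this ships by default, each environment has to supply its own range.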

bosh-dev.yml ops file can't be used with openstack/cpi.yml

It seems cpi.yml cannot set details in the resource pool, because bosh-dev.yml removes it first (a reduced illustration follows the error below):

$ bosh deploy ../bosh.yml -o ../bosh-dev.yml -o ../openstack/cpi.yml -l ../inner-bosh/creds.yml -l ../inner-bosh/secrets.yml -l ../openstack-secrets.yml -d bosh

Evaluating manifest:
  Expected to find a map key 'resource_pools' for path '/resource_pools'
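
A reduced illustration of the clash, assuming bosh-dev.yml is what removes the key (both paths here are illustrative, not the repo's actual file contents):

# If an earlier ops file removes the whole key...
- type: remove
  path: /resource_pools

# ...then any later ops file that targets the same key fails with the
# "Expected to find a map key 'resource_pools'" error shown above.
- type: replace
  path: /resource_pools/name=vms/cloud_properties/instance_type?
  value: m1.small

Simply reordering the -o flags so the CPI file is applied first would make interpolation succeed, but bosh-dev.yml would then strip the CPI's resource-pool settings along with the key, so the two files likely need to be reconciled rather than just reordered.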

Recent release version bumps broke bosh-lite deployment

I wasn't able to deploy a bosh-lite using HEAD today, because the compiled release tarballs referenced in bosh-lite.yml couldn't be fetched from S3. I ended up reverting to 096b550 to get things working again. Is it possible the paths are wrong, or do the compiled release tarballs still need to be uploaded?

Persistent disks not working in garden containers.

Steps to reproduce the problem:

  • Deploy a BOSH release with a persistent disk.
  • Create any file inside /var/vcap/store.
  • Update the deployment manifest and deploy it again.
  • The file inside /var/vcap/store has vanished.

You can also see that the content of /var/vcap/store inside the instance differs from the corresponding directory on the BOSH VM (/var/vcap/store/warden_cpi/persistent_bind_mounts_dir/:uuid/:uuid).

I am using bosh-deployment with VirtualBox version 5.1.22r115126.

Deployed using the latest bosh-deployment (commit 01bf99c) with the following command:

bosh create-env ~/workspace/bosh-deployment/bosh.yml \
  --state ./state.json \
  -o ~/workspace/bosh-deployment/virtualbox/cpi.yml \
  -o ~/workspace/bosh-deployment/virtualbox/outbound-network.yml \
  -o ~/workspace/bosh-deployment/bosh-lite.yml \
  -o ~/workspace/bosh-deployment/bosh-lite-runc.yml \
  -o ~/workspace/bosh-deployment/jumpbox-user.yml \
  --vars-store ./creds.yml \
  -v director_name="Bosh Lite Director" \
  -v internal_ip=192.168.50.6 \
  -v internal_gw=192.168.50.1 \
  -v internal_cidr=192.168.50.0/24 \
  -v outbound_network_name=NatNetwork

Fresh virtualbox deployment incorrectly configures time

We followed the instructions in the README to create a new BOSH Lite on VirtualBox.

The created VM has its time configured incorrectly: it took the host machine's local time (e.g., 9:30am MDT) as the VM's UTC time (i.e., 9:30am UTC).
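
The rendered manifest earlier in this document configures the Director with Google NTP servers under the ntp property. A hedged sketch of overriding that list via an ops file (the path mirrors the rendered manifest and is an assumption about bosh.yml's layout):

# Hypothetical ops file: point the Director at explicit NTP servers so the
# clock converges even if the VM booted with the wrong time. This corrects
# drift over time; it does not address VirtualBox seeding the VM's UTC clock
# from the host's local time in the first place.
- type: replace
  path: /instance_groups/name=bosh/properties/ntp?
  value:
  - time1.google.com
  - time2.google.com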

README edge-case: --ca-cert from `<(bosh int)` doesn't work if BOSH_LOG_LEVEL=DEBUG

If you are trying to run the command from the README:

bosh -e 10.0.0.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca) alias-env bosh-1

And get the error:

[CLI] 2017/08/22 09:55:20 ERROR - Validating Director connection config: Parsing certificate 2: Missing PEM block
Validating Director connection config:
  Parsing certificate 2: Missing PEM block

Exit code 1

Check your BOSH_LOG_LEVEL: if it is set to DEBUG, the process substitution in the shell command won't quite work, because the debug logging adds extra lines to the output of bosh int.
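
A workaround consistent with that diagnosis (an assumption on our part, not something the README documents): clear the variable for the inner command, i.e. run the bosh int step with BOSH_LOG_LEVEL unset, so that only the PEM block reaches the process substitution.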

When using bosh create-env with the gcp/cpi.yml and bosh-lite.yml ops files, the resulting bosh director does not accept warden stemcells

I expect to be able to use bosh create-env with the gcp/cpi.yml and bosh-lite.yml ops files to deploy a bosh-lite to GCP. However, the Director that gets deployed from those two files includes jobs for both the warden CPI and the Google CPI. When trying to upload a bosh-lite stemcell, you get the error: Error: CPI error 'Bosh::Clouds::CloudError' with message 'Creating stemcell: Invalid 'warden' infrastructure' in 'create_stemcell' CPI method. Removing the google_cpi job from the Director manifest fixes the issue. This should not be something we do ourselves; it should be part of bosh-deployment. Here is an example ops file that makes the needed change:

- type: remove
  path: /instance_groups/name=bosh/jobs/name=google_cpi
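
Note that a remove op on /instance_groups/name=bosh/jobs/name=google_cpi only succeeds if that job is present, so this ops file presumably has to be applied after gcp/cpi.yml (the file that adds the job).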

Move config-server.yml into misc/config-server.yml

As far as I know, there is no reason a normal user would use -o config-server.yml rather than -o credhub.yml. I propose we move this file out of the top-level folder so that people don't accidentally try to use it out of confusion.

This issue is prompted by someone accidentally trying to use config-server.yml.

Cannot find the bosh-lite VM created by bosh create-env

I created a bosh-lite on macOS with the command below, and all the deployments work well. I want to save the bosh-lite VM state, but I cannot find any VMs with VBoxManage list vms. Using ps -ef | grep VBox, I can see the processes are there:

0 6677 1 0 10:55AM ?? 0:03.18 /Applications/VirtualBox.app/Contents/MacOS/VBoxXPCOMIPCD
0 6679 1 0 10:55AM ?? 0:19.27 /Applications/VirtualBox.app/Contents/MacOS/VBoxSVC --auto-shutdown
0 6736 6679 0 10:55AM ?? 239:38.19 /Applications/VirtualBox.app/Contents/MacOS/VBoxHeadless --comment vm-f463e421-a238-481f-54e0-15bf1ef3e034 --startvm f463e421-a238-481f-54e0-15bf1ef3e034 --vrde config
0 6737 6679 0 10:55AM ?? 0:00.50 /Applications/VirtualBox.app/Contents/MacOS/VBoxNetDHCP --ip-address 10.0.2.3 --lower-ip 10.0.2.4 --mac-address 08:00:27:1F:C9:43 --need-main on --netmask 255.255.255.0 --network NatNetwork --trunk-type whatever --upper-ip 10.0.2.254
0 6738 6679 0 10:55AM ?? 1:56.60 /Applications/VirtualBox.app/Contents/MacOS/VBoxNetNAT --ip-address 10.0.2.1 --netmask 255.255.255.0 --network NatNetwork --trunk-type whatever
501 13877 3796 0 4:15PM ttys001 0:00.00 grep VBox

bosh create-env bosh.yml \
  --state ./state.json \
  -o virtualbox/cpi.yml \
  -o virtualbox/outbound-network.yml \
  -o bosh-lite.yml \
  -o bosh-lite-runc.yml \
  -o jumpbox-user.yml \
  -o external_ip.yml \
  --vars-store ./creds.yml \
  -v director_name="Bosh Lite Director" \
  -v internal_ip=192.168.50.6 \
  -v internal_gw=192.168.50.1 \
  -v internal_cidr=192.168.50.0/24 \
  -v outbound_network_name=NatNetwork \
  -v admin_password=admin
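
One hedged observation on the ps listing above: the VBoxSVC and VBoxHeadless processes are owned by UID 0, and VirtualBox keeps a per-user VM registry, so VBoxManage list vms run as a regular user would show nothing. Assuming create-env was originally run with elevated privileges, running the same command as root (e.g. via sudo) may be what reveals the VM.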

GCP authentication issue

I get the following error when uploading a stemcell to the BOSH Director:
20:41:58 | Update stemcell: Uploading stemcell bosh-google-kvm-ubuntu-trusty-go_agent/3445.7 to the cloud (00:00:30)
L Error: CPI error 'Bosh::Clouds::CloudError' with message 'Creating stemcell: Creating Google Image from URL : Failed to create Google Image: Post https://www.googleapis.com/compute/v1/projects/pivotal-lab/global/images?alt=json: oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: dial tcp 74.125.141.84:443: i/o timeout' in 'create_stemcell' CPI method

Bosh director was created using the following command:
sudo bosh create-env bosh-deployment/bosh.yml \
  --state=state.json \
  --vars-store=creds.yml \
  -o bosh-deployment/gcp/cpi.yml \
  -v director_name=bosh-concourse \
  -v internal_cidr=10.142.0.0/20 \
  -v internal_gw=10.142.0.1 \
  -v internal_ip=10.142.0.10 \
  --var-file gcp_credentials_json=~/Downloads/PivotalLab-1dafffbaeae7.json \
  -v project_id=pivotal-lab \
  -v zone=us-east1-c \
  -v tags=[internal] \
  -v network=default \
  -v subnetwork=default

I followed the procedure for creating the Director described here: http://bosh.io/docs/init-google.html

I found out that the cause of the GCP authentication error is that the Director VM is created without an external IP, so it cannot reach Google's OAuth endpoint. Manually adding an ephemeral external IP to the Director VM resolved the issue for me.

Interpolating gcp_credentials_json for GCP provider

Probably because I'm bad at bash, but I had a tough time getting gcp_credentials_json to interpolate. Assuming your credentials JSON is in an env var named GOOGLE_CREDENTIALS, this is how you do it:

bosh create-env ~/dev/bosh/workspaces/bosh-1/bosh.yml \
 -v gcp_credentials_json=\'"$GOOGLE_CREDENTIALS"\' \
...

I'm working on docs for all of this on GCP and will include this there.
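
An alternative that sidesteps the quoting entirely, seen in the GCP report above, is to pass the key file's path with --var-file gcp_credentials_json=/path/to/key.json, assuming you have the JSON on disk rather than in an environment variable.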

default to v3 or v2 openstack?

@voelzmo

Currently openstack/cpi.yml defaults to the v2 config options, and there is openstack/keystone-v3.yml to switch to v3. Should we default to v3 and provide a v2 override instead? (A sketch of what that override might look like follows.)
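
A hypothetical shape for such a keystone-v2.yml, assuming the v2/v3 difference is tenant vs. domain/project in the CPI's openstack properties; the property names and paths are assumptions, not the repo's actual files:

# Hypothetical keystone-v2.yml, applied after cpi.yml if v3 became the
# default: Keystone v2 authenticates by tenant, so supply a tenant and drop
# the v3-only domain/project settings.
- type: replace
  path: /instance_groups/name=bosh/properties/openstack/tenant?
  value: ((openstack_tenant))
- type: remove
  path: /instance_groups/name=bosh/properties/openstack/domain
- type: remove
  path: /instance_groups/name=bosh/properties/openstack/project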

BOSH lite instructions do not work

  • MacOS 10.12.5
  • VirtualBox 5.1.22
  • bosh-cli v2.0.26 installed from Homebrew
  • cloned this repo, on master (commit bedbfc9)

Following the instructions for BOSH Lite on VirtualBox, I get as far as step 4:

bosh -e 192.168.50.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca) alias-env vbox

which errors with

Expected URL '%!s(<nil>)' to be a string

Exit code 1

README's instructions assume deploying to a local-only network -- is this intentional?

Hi there,

We're targeting AWS and using bosh create-env to get a non-prod environment, so we're heavily inspired by https://github.com/cloudfoundry/bosh-deployment#sample-installation-instructions

It seems like these instructions assume you have access to 10.0.0.6, where the Director gets deployed, either via VPN or because you're already in the network somehow. Are we reading this correctly? cc/ @goutamtadi1

Perhaps the default use case you're showing assumes the user will add more networking infrastructure, like a public load balancer, to route traffic to the Director's private IP address?

In our case, we want to assign the Director VM an AWS elastic IP and then rely on security groups to allow access on a limited number of ports. DNS etc. is not an immediate concern.

We think the solution is to:

  1. pre-provision the elastic IP

  2. add a network to bosh-deployment/bosh.yml (lines 23 to 30 in b1f8d1b):

     networks:
     - name: default
       type: manual
       subnets:
       - range: ((internal_cidr))
         gateway: ((internal_gw))
         static: [((internal_ip))]
         dns: [8.8.8.8]

     i.e., append:

     - name: public
       type: vip

  3. put the given elastic IP address in at bosh-deployment/bosh.yml (lines 43 to 45 in b1f8d1b):

     networks:
     - name: default
       static_ips: [((internal_ip))]

     as a new network for the job:

     - name: public
       static_ips: [101.102.103.104]

We're happy to write our own ops file to layer those elements on (a sketch of what we have in mind is below); we just want to make sure we're not missing something. Thanks!
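
A minimal sketch of that ops file, assuming an ((external_ip)) variable holds the pre-provisioned elastic IP; the variable name and paths are illustrative, not files this repo ships:

# Hypothetical ops file: add a vip network and attach the elastic IP to the
# Director's instance group alongside the existing default network.
- type: replace
  path: /networks/-
  value:
    name: public
    type: vip

- type: replace
  path: /instance_groups/name=bosh/networks/-
  value:
    name: public
    static_ips: [((external_ip))]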

instructions for boshlite on virtualbox using v2 cli are incompatible with manifest generation scripts for cf-release

The instructions at https://github.com/cloudfoundry/bosh-deployment/blob/master/docs/bosh-lite-on-vbox.md result in a director named vbox.

The manifest generation scripts for cf-release require a director named Bosh Lite Director.

We are currently trying to change the name of our director to test whether creating the BOSH Lite VM with -v director_name="Bosh Lite Director" (note the quotes) fixes the issue.

Health Monitor reports 'Connection failed' when a bosh-lite director is deployed with UAA

We had a working deployment without UAA using these ops files:

bosh2 create-env   \
  --vars-store ~/workspace/bosh-deployment/bosh-vars.yml   \
  --state ~/workspace/bosh-deployment/bosh-state.json   \
  -o ~/workspace/bosh-deployment/gcp/cpi.yml   \
  -o ~/workspace/bosh-deployment/bosh-lite.yml   \
  -o ~/workspace/bosh-deployment/gcp/bosh-lite-vm-type.yml   \
  -o ~/workspace/bosh-deployment/jumpbox-user.yml   \
  -o ~/workspace/bosh-deployment/bosh-lite-runc.yml   \
  -o ~/workspace/bosh-deployment/external-ip-not-recommended.yml   \
  ~/workspace/bosh-deployment/bosh.yml

We then added the UAA ops file (and external-ip-not-recommended-uaa.yml):

bosh2 create-env   \
  --vars-store ~/workspace/bosh-deployment/bosh-vars.yml   \
  --state ~/workspace/bosh-deployment/bosh-state.json   \
  -o ~/workspace/bosh-deployment/gcp/cpi.yml   \
  -o ~/workspace/bosh-deployment/bosh-lite.yml   \
  -o ~/workspace/bosh-deployment/gcp/bosh-lite-vm-type.yml   \
  -o ~/workspace/bosh-deployment/jumpbox-user.yml   \
  -o ~/workspace/bosh-deployment/bosh-lite-runc.yml   \
  -o ~/workspace/bosh-deployment/uaa.yml \
  -o ~/workspace/bosh-deployment/external-ip-not-recommended.yml   \
  -o ~/workspace/bosh-deployment/external-ip-not-recommended-uaa.yml   \
  ~/workspace/bosh-deployment/bosh.yml

Here is the monit summary:

Process 'nats'                      running
Process 'postgres'                  running
Process 'blobstore_nginx'           running
Process 'director'                  running
Process 'worker_1'                  running
Process 'worker_2'                  running
Process 'worker_3'                  running
Process 'worker_4'                  running
Process 'director_scheduler'        running
Process 'director_nginx'            running
Process 'health_monitor'            Connection failed
Process 'warden_cpi'                running
Process 'garden'                    running
Process 'uaa'                       running
System 'system_localhost'           running

Both /var/vcap/sys/log/health_monitor/health_monitor.stderr.log and /var/vcap/sys/log/health_monitor/health_monitor.stdout.log are empty:

bosh/0:/home/jumpbox# ll /var/vcap/sys/log/health_monitor/
total 128
drwxr-xr-x  2 vcap vcap   4096 Jun 15 13:39 ./
drwxr-x--- 14 root vcap   4096 Jun 15 13:39 ../
-rw-r--r--  1 vcap vcap 112096 Jun 15 15:03 health_monitor.log
-rw-r--r--  1 vcap vcap      0 Jun 15 14:15 health_monitor.stderr.log
-rw-r--r--  1 vcap vcap      0 Jun 15 13:39 health_monitor.stdout.log

Here is a dump of the health_monitor.log:

I, [2017-06-15T15:03:12.373508 #13734]  INFO : HealthMonitor starting...
I, [2017-06-15T15:03:12.375070 #13734]  INFO : Logging delivery agent is running...
I, [2017-06-15T15:03:12.375455 #13734]  INFO : Event logger is running...
I, [2017-06-15T15:03:12.376172 #13734]  INFO : HTTP server is starting on port 25923...
I, [2017-06-15T15:03:12.405311 #13734]  INFO : BOSH HealthMonitor 0.0.0 is running...
I, [2017-06-15T15:03:12.407186 #13734]  INFO : Connected to NATS at 'nats://127.0.0.1:4222'

The above pattern repeats itself.

We noticed that the running PID is always out of sync with health_monitor.pidfile. Coupled with the fact that the health_monitor stderr and stdout logs are empty, we think this points to an issue with the ctl script.

Any help would be much appreciated.
