cloudfoundry / diego-release
BOSH Release for Diego
License: Apache License 2.0
I want to deploy a docker image to Diego that listens on an arbitrary port (not 8080). I am trying to deploy Redis. Since CF only supports container port 8080, I can't use the diego-cli cf
plugin to push docker images.
The documentation on docker support describes architecture and concepts, but it assumes throughout that CC is the client and is of little practical help in crafting the DesiredLRPCreateRequest.
Looking at /v1/desired_lrps after using the diego-cli plugin to push a docker image, there is a lot of configuration but no way to know what is absolutely necessary.
Please document instructions for deploying a basic docker image using only the Receptor API. How about Redis, for example :)
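Until such documentation exists, here is a rough sketch of what a minimal request for Redis might look like, distilled from the fields a plugin-generated LRP shows. Every field name and value here is an assumption to verify against your Receptor's API docs (in particular the rootfs URL scheme and the run action's path inside the library/redis image):

```json
{
  "process_guid": "redis-example",
  "domain": "example",
  "rootfs": "docker:///library/redis",
  "instances": 1,
  "memory_mb": 256,
  "disk_mb": 1024,
  "ports": [6379],
  "action": {
    "run": {
      "path": "/entrypoint.sh",
      "args": ["redis-server"]
    }
  }
}
```

This would be POSTed to the Receptor's /v1/desired_lrps endpoint (e.g. with `curl -X POST -d @redis.json`). 6379 is Redis's default listen port; whether the plugin-only fields can really be omitted is exactly the question this issue asks.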
Hi guys, I've uploaded the Diego releases .789 and .817 to my MicroBOSH and deployed both successfully. I've noticed that there are numerous references to the services.dc1.consul domain in various spec files, likely for dev purposes, but I'm not sure. In my manifest, I had to change the diego_api_url to reference the actual IP of the receptor server (cell_z1) in order for my cf pushes to get partially completed. They get through the retrieval of the buildpacks but fail at the callback to the stager. I see this reference in the stager logs:
"callback_url":"http://stager.service.dc1.consul:8888",
Should this domain be an attribute in the manifest so that I can easily use the domain of my cf-release haproxy? Also, I've tried adding the haproxy's domain as the diego_api_url, but ran into issues where the haproxy doesn't know where to send the request. Maybe I'm just doing it wrong. Thoughts?
Trying to check out a tag took me quite a few tries today, because ./scripts/update
didn't give any indication of failing. Actually, it does fail at the beginning:
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
But this goes by very quickly, followed by all the submodules being synchronized. https://gist.github.com/shalako/d3a6d5b200a68491e952
Instead, could the script stop once permission is denied?
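A sketch of the requested behavior, with a generic `step` guard standing in for the script's actual git calls (hypothetical; the real ./scripts/update steps differ):

```shell
#!/bin/sh
# Fail-fast sketch: wrap each step so the script aborts on the first
# failure instead of continuing to sync submodules after
# "Permission denied (publickey)".
step() {
  if ! "$@"; then
    echo "step failed: $*" >&2
    exit 1
  fi
  echo "ok: $*"
}

step true          # stand-in for: git fetch (a failure here would abort)
status="done"      # later steps only run if earlier ones succeed
echo "$status"
```
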
I'm trying to create a local CF using Diego, and ran into an error during the step below.
I've spent quite a bit of time researching this error, but it seems no one has run into the same issue :-(
Can you shed some light on what the issue might be? I'm using ruby version 1.9.3-p484.
Thanks
Beth Tran
Deploying Diego to a local BOSH-Lite instance
8: Do the BOSH dance:
bosh create release --force && bosh -n upload release && bosh -n deploy
This command generates the following error
ERROR:
Building cloud_controller_ng...
Final version: NOT FOUND
Dev version: NOT FOUND
Generating...
Pre-packaging...
- set -e -x
- cd /var/folders/jh/xdjrq63n7g3cy4507vn7lrb80000gp/T/d20150827-54570-1suwh47/d20150827-54570-1wpmf2n/cloud_controller_ng
- BUNDLE_WITHOUT=development:test
- bundle package --all --no-install --path ./vendor/cache
Unknown switches '--no-install'
`cloud_controller_ng' pre-packaging failed
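`Unknown switches '--no-install'` is the error an older Bundler prints for a flag it predates, so upgrading Bundler (`gem install bundler`) is the likely fix. Below is a sketch of a version guard the pre-packaging script could run first; the 1.5.0 minimum is an assumption to verify against the Bundler changelog:

```shell
# Fail early if the installed Bundler predates `bundle package --no-install`.
# "1.5.0" is an assumed minimum version; verify against the Bundler changelog.
version_ge() {
  # true if $1 >= $2, comparing dotted versions (GNU sort -V)
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

have="1.3.5"   # stand-in for: bundle --version | awk '{print $3}'
if version_ge "$have" "1.5.0"; then
  result="ok"
else
  result="upgrade: gem install bundler"
fi
echo "$result"
```
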
Hello, I deployed diego-release on CentOS 6.5. When I start the cell component, the garden-linux process cannot run. The error log at /var/vcap/sys/log/garden-linux/garden-linux.stderror.log shows "container_pool: setting up allow rules in iptables: exit status 2". I tried creating an iptables rule myself, and that succeeded. Is there a reason for this error? If you know, please tell me. Thank you.
Seems like the step to generate a file containing the director UUID is unnecessary; the generate_manifest script could do this itself.
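A sketch of how the script could produce the stub itself; the fixed UUID below is a stand-in for the output of `bosh status --uuid`:

```shell
# Emit the director stub inline instead of requiring a manual step.
make_director_stub() {
  # $1: director uuid; in real use: make_director_stub "$(bosh status --uuid)"
  printf -- '---\ndirector_uuid: %s\n' "$1"
}

stub="$(make_director_stub deadbeef-0000)"
echo "$stub"   # the script would write this to ~/deployments/bosh-lite/director.yml
```
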
"Error 430001: Could not reserve network for package compilation: capacity"
Output of bosh task 6 --debug: https://gist.github.com/shalako/8ab6016f97aa1dc71da7
Per the instructions on the release README, I have deployed cf-release first: https://gist.github.com/shalako/e244ba78b42331b8d028
@atulkc told me he worked around this by reducing the number of compilation job instances in the manifest. So one fix could be to modify the bosh-lite stub to enable a more painless deployment experience:
compilation:
  cloud_properties: {}
  network: diego1
  reuse_compilation_vms: true
  workers: 4
But this seems like potentially a bosh-lite CPI issue, as it should support deploying cf-release and diego-release together. Opened an issue for BOSH.
As a consumer of Diego, I care most about two things:
The README does mention use of CF; however, Diego's API docs are buried in a docs directory in the Receptor repo (not even in Receptor's README), and there is no link to them from the diego-release README. I did eventually find a link to the API docs in the design notes repo.
The user experience would be improved if there were a link to the API docs on the README.
The latest stemcell for CentOS is 6.6; it reports that the kernel version is too low, and when I compile ruby 2.1.4 and garden-linux, errors appear about missing libraries. So I must upgrade glibc and the system kernel to support Diego.
For the section "Install and start Concourse, following its README": I am a bit confused. Is it possible to deploy Concourse on a bare-metal box instead of Vagrant? Concourse's Getting Started page (http://concourse.ci/getting-started.html) refers yet again to the Vagrant way of doing things. Is there a way I can deploy it on bare metal, OpenStack, or VMware?
I executed the following command on my MacBook Pro and got an error like:
./generate_deployment_manifest warden ~/git/deployments/bosh-lite/director.yml ~/git/diego-release/templates/enable_diego_docker_in_cc.yml > ~/git/deployments/bosh-lite/cf.yml
2015/03/23 22:28:17 error generating manifest: unresolved nodes:
(( lamb_meta.loggregator_templates )) in dynaml jobs.[23].templates
(( lamb_meta.loggregator_templates )) in dynaml jobs.[24].templates
(( lamb_meta.loggregator_trafficcontroller_templates )) in dynaml jobs.[25].templates
(( lamb_meta.loggregator_trafficcontroller_templates )) in dynaml jobs.[26].templates
(( merge )) in ./templates/cf-jobs.yml lamb_meta
Any ideas what happened? I followed the steps in the section "Deploying Diego to a local BOSH-Lite instance".
I already have an existing bosh-lite director setup based on the warden cf release (v208).
Is cf-release on the develop branch (diego) compatible with cf-release master after executing ./update?
Or should I just setup a brand new bosh-lite running diego?
Hello, I have a question about Diego.
When I run:
./bin/wshd --run ./run --lib ./lib --root /var/vcap/data/garden-linux/graph/vfs/dir/mqvpupgerpk --title 'wshd: mqvpupgerpk' --userns enabled
error:
clone: Invalid argument
/var/vcap/data/garden-linux/depot/mqvpupgerpk/start.sh: line 25: 2992 Aborted (core dumped)
Other bosh releases provide a make-manifest script for bosh-lite.
This would turn this:
mkdir -p ~/deployments/bosh-lite
cd ~/workspace/diego-release
./scripts/print-director-stub > ~/deployments/bosh-lite/director.yml
./scripts/generate-deployment-manifest \
~/deployments/bosh-lite/director.yml \
manifest-generation/bosh-lite-stubs/property-overrides.yml \
manifest-generation/bosh-lite-stubs/instance-count-overrides.yml \
manifest-generation/bosh-lite-stubs/persistent-disk-overrides.yml \
manifest-generation/bosh-lite-stubs/iaas-settings.yml \
manifest-generation/bosh-lite-stubs/additional-jobs.yml \
~/deployments/bosh-lite \
> ~/deployments/bosh-lite/diego.yml
bosh deployment ~/deployments/bosh-lite/diego.yml
Into this:
cd ~/workspace/diego-release
./bosh-lite/make_manifest
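A hypothetical sketch of what such a wrapper could contain, shown here as a dry run that only prints the command it would execute (paths are the README defaults):

```shell
#!/bin/sh
# Hypothetical bosh-lite/make_manifest wrapper: bundles the stub arguments
# above into one command. Dry run: prints the command instead of running it.
base="${DIEGO_RELEASE_DIR:-$HOME/workspace/diego-release}"
out="$HOME/deployments/bosh-lite"
stubdir="$base/manifest-generation/bosh-lite-stubs"

cmd="$base/scripts/generate-deployment-manifest \
  $out/director.yml \
  $stubdir/property-overrides.yml \
  $stubdir/instance-count-overrides.yml \
  $stubdir/persistent-disk-overrides.yml \
  $stubdir/iaas-settings.yml \
  $stubdir/additional-jobs.yml \
  $out > $out/diego.yml"
echo "$cmd"
# the real script would eval "$cmd", then: bosh deployment "$out/diego.yml"
```
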
Examples:
Thanks for your reply.
Rather than installing spiff from source via go get, it should recommend using an official release binary.
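A sketch of what that could look like, mapping the platform to a release asset name. The asset names and URL layout are assumptions; check the spiff releases page for the real file names and current version:

```shell
# Pick the spiff release binary for the current platform instead of
# building from source. Asset names below are assumed, not verified.
spiff_asset() {
  case "$1" in
    Linux)  echo "spiff_linux_amd64.zip" ;;
    Darwin) echo "spiff_darwin_amd64.zip" ;;
    *)      echo "unsupported platform: $1" >&2; return 1 ;;
  esac
}

asset="$(spiff_asset "$(uname -s)")"
echo "$asset"
# then, roughly:
#   curl -sSL -O https://github.com/cloudfoundry-incubator/spiff/releases/download/<version>/"$asset"
#   unzip "$asset" && sudo mv spiff /usr/local/bin/
```
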
Hi,
I'm setting up diego-release on bosh-lite with a compatible v208 cf-release. I cloned the diego-release 0.1197 release. I guess enable_diego_ssh_in_cc.yml is missing in this release?
emperor@emperor:~/workspace/cf-release$ ./generate_deployment_manifest warden \
> ~/deployments/bosh-lite/director.yml \
> ~/workspace/diego-release/stubs-for-cf-release/enable_diego_docker_in_cc.yml \
> ~/workspace/diego-release/stubs-for-cf-release/enable_consul_with_cf.yml \
> ~/workspace/diego-release/stubs-for-cf-release/enable_diego_ssh_in_cc.yml \
> ~/workspace/diego-release/manifest-generation/bosh-lite-stubs/property-overrides.yml \
> > ~/deployments/bosh-lite/cf.yml
2015/05/25 12:21:59 error reading stub [/home/emperor/workspace/diego-release/stubs-for-cf-release/enable_diego_ssh_in_cc.yml]: open /home/emperor/workspace/diego-release/stubs-for-cf-release/enable_diego_ssh_in_cc.yml: no such file or directory
The generate_deployment_manifest script in step #7 of the README requires spiff. A link to the project and/or a note on how to install it would save a bit of searching.
Hi all,
I am trying to get Diego working on CentOS 7.1, and I have upgraded the kernel to 4.0.2 with aufs4.0.
I hit a new problem: pivot_root(".", "tmp/garden-host") fails with the message "Invalid argument".
Is any additional kernel support needed?
{"timestamp":"1431546463.278645992","source":"garden-linux","message":"garden-linux.pool.nevp5f9irno.creating","log_level":1,"data":{"session":"2.2"}}
{"timestamp":"1431546463.278743505","source":"garden-linux","message":"garden-linux.pool.nevp5f9irno.acquired-pool-resources","log_level":1,"data":{"session":"2.2"}}
{"timestamp":"1431546463.465993166","source":"garden-linux","message":"garden-linux.pool.nevp5f9irno.created","log_level":1,"data":{"session":"2.2"}}
{"timestamp":"1431546463.564160347","source":"garden-linux","message":"garden-linux.garden-server.bulk_info.got-bulkinfo","log_level":1,"data":{"handles":[""],"session":"4.74"}}
{"timestamp":"1431546463.680500031","source":"garden-linux","message":"garden-linux.pool.nevp5f9irno.start.command.failed","log_level":2,"data":{"argv":["/var/vcap/data/garden-linux/depot/nevp5f9irno/start.sh"],"error":"exit status 1","exit-status":1,"session":"2.2.4.1","stderr":"pivot_root: Invalid argument\nError waiting for acknowledgement from child process\n","stdout":"","took":"214.221802ms"}}
{"timestamp":"1431546463.680669546","source":"garden-linux","message":"garden-linux.pool.nevp5f9irno.start.failed-to-start","log_level":2,"data":{"error":"exit status 1","session":"2.2.4"}}
{"timestamp":"1431546463.680748224","source":"garden-linux","message":"garden-linux.pool.destroy.destroying","log_level":1,"data":{"id":"nevp5f9irno","session":"2.3"}}
{"timestamp":"1431546463.731221914","source":"garden-linux","message":"garden-linux.pool.destroy.command.failed","log_level":2,"data":{"argv":["/var/vcap/packages/garden-linux/garden-bin/destroy.sh","/var/vcap/data/garden-linux/depot/nevp5f9irno"],"error":"exit status 1","exit-status":1,"id":"nevp5f9irno","session":"2.3.1","stderr":"/var/vcap/data/garden-linux/depot/nevp5f9irno/destroy.sh: line 27: kill: (6599) - No such process\n","stdout":"","took":"41.399822ms"}}
{"timestamp":"1431546463.731603861","source":"garden-linux","message":"garden-linux.garden-server.create.failed","log_level":2,"data":{"error":"container: start: exit status 1","request":{"handle":"41717c36-71cc-4f9c-ac4d-41def7c1e0c3-ab38b89a91944b12848411a93335a6cf","rootfs":"/var/vcap/packages/rootfs_cflinuxfs2","properties":{"executor:action":"{\"timeout\":{\"action\":{\"serial\":{\"actions\":[{\"emit_progress\":{\"start_message\":\"\",\"success_message\":\"\",\"failure_message_prefix\":\"Failed to set up docker environment\",\"action\":{\"download\":{\"from\":\"http://file-server.service.consul:8080/v1/static/docker_app_lifecycle/docker_app_lifecycle.tgz\",\"to\":\"/tmp/docker_app_lifecycle\",\"cache_key\":\"builder-docker\"}}}},{\"emit_progress\":{\"start_message\":\"Staging...\",\"success_message\":\"Staging Complete\",\"failure_message_prefix\":\"Staging Failed\",\"action\":{\"run\":{\"path\":\"/tmp/docker_app_lifecycle/builder\",\"args\":[\"-outputMetadataJSONFilename\",\"/tmp/docker-result/result.json\",\"-dockerRef\",\"10.10.10.210:8080/lvguanglin/kodexplorer:latest\",\"-dockerRegistryAddresses\",\"10.10.10.210:8080\",\"-insecureDockerRegistries\",\"10.10.10.210:8080\",\"-cacheDockerImage\"],\"env\":[{\"name\":\"VCAP_APPLICATION\",\"value\":\"{\\\"limits\\\":{\\\"mem\\\":256,\\\"disk\\\":1024,\\\"fds\\\":16384},\\\"application_version\\\":\\\"10d506e1-79d6-48be-aa8c-017817689b46\\\",\\\"application_name\\\":\\\"kod\\\",\\\"version\\\":\\\"10d506e1-79d6-48be-aa8c-017817689b46\\\",\\\"name\\\":\\\"kod\\\",\\\"space_name\\\":\\\"admin\\\",\\\"space_id\\\":\\\"7b633d82-73fb-47e2-bf2d-24734bf68f9a\\\"}\"},{\"name\":\"VCAP_SERVICES\",\"value\":\"{}\"},{\"name\":\"MEMORY_LIMIT\",\"value\":\"256m\"},{\"name\":\"CF_STACK\",\"value\":\"cflinuxfs2\"},{\"name\":\"DIEGO_DOCKER_CACHE\",\"value\":\"true\"}],\"resource_limits\":{\"nofile\":16384},\"privileged\":true}}}}]}},\"timeout\":900000000000}}","executor:allocated-at":"1431546463252012459","executor:cpu-weight":"10
0","executor:disk-mb":"6144","executor:egress-rules":"[{\"protocol\":\"all\",\"destinations\":[\"0.0.0.0-9.255.255.255\"],\"log\":false},{\"protocol\":\"all\",\"destinations\":[\"11.0.0.0-169.253.255.255\"],\"log\":false},{\"protocol\":\"all\",\"destinations\":[\"169.255.0.0-172.15.255.255\"],\"log\":false},{\"protocol\":\"all\",\"destinations\":[\"172.32.0.0-192.167.255.255\"],\"log\":false},{\"protocol\":\"all\",\"destinations\":[\"192.169.0.0-255.255.255.255\"],\"log\":false},{\"protocol\":\"tcp\",\"destinations\":[\"0.0.0.0/0\"],\"ports\":[53],\"log\":false},{\"protocol\":\"udp\",\"destinations\":[\"0.0.0.0/0\"],\"ports\":[53],\"log\":false},{\"protocol\":\"all\",\"destinations\":[\"0.0.0.0-9.255.255.255\"],\"log\":false},{\"protocol\":\"all\",\"destinations\":[\"11.0.0.0-169.253.255.255\"],\"log\":false},{\"protocol\":\"all\",\"destinations\":[\"169.255.0.0-172.15.255.255\"],\"log\":false},{\"protocol\":\"all\",\"destinations\":[\"172.32.0.0-192.167.255.255\"],\"log\":false},{\"protocol\":\"all\",\"destinations\":[\"192.169.0.0-255.255.255.255\"],\"log\":false},{\"protocol\":\"tcp\",\"destinations\":[\"0.0.0.0/0\"],\"ports\":[53],\"log\":false},{\"protocol\":\"udp\",\"destinations\":[\"0.0.0.0/0\"],\"ports\":[53],\"log\":false},{\"protocol\":\"tcp\",\"destinations\":[\"10.10.10.210\"],\"ports\":[8080],\"log\":false}]","executor:env":"null","executor:log-config":"{\"guid\":\"41717c36-71cc-4f9c-ac4d-41def7c1e0c3\",\"index\":0,\"source_name\":\"STG\"}","executor:memory-mb":"1024","executor:metrics-config":"{\"guid\":\"\",\"index\":0}","executor:monitor":"null","executor:owner":"executor","executor:result":"{\"failed\":false,\"failure_reason\":\"\",\"stopped\":false}","executor:rootfs":"/var/vcap/packages/rootfs_cflinuxfs2","executor:setup":"null","executor:start-timeout":"0","executor:state":"created","tag:domain":"cf-app-staging","tag:lifecycle":"task","tag:result-file":"/tmp/docker-result/result.json"},"privileged":true},"session":"4.71"}}
{"timestamp":"1431546463.769743919","source":"garden-linux","message":"garden-linux.garden-server.destroy.failed","log_level":2,"data":{"error":"unknown handle: 41717c36-71cc-4f9c-ac4d-41def7c1e0c3-ab38b89a91944b12848411a93335a6cf","handle":"41717c36-71cc-4f9c-ac4d-41def7c1e0c3-ab38b89a91944b12848411a93335a6cf","session":"4.75"}}
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 8.3M 2.0G 1% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 ext4 38G 11G 25G 30% /
/dev/vdb2 ext4 36G 1.5G 33G 5% /var/vcap/data
tmpfs tmpfs 1.0M 28K 996K 3% /var/vcap/data/sys/run
/dev/loop0 ext4 120M 1.6M 115M 2% /tmp
cgroup tmpfs 2.0G 16K 2.0G 1% /tmp/garden-/cgroup
none aufs 36G 1.5G 33G 5% /var/vcap/data/garden-linux/overlays/nerqi07c98r/rootfs
none aufs 36G 1.5G 33G 5% /var/vcap/data/garden-linux/overlays/nerqi07c98r/rootfs
none aufs 36G 1.5G 33G 5% /var/vcap/data/garden-linux/overlays/nevp5f9irno/rootfs
none aufs 36G 1.5G 33G 5% /var/vcap/data/garden-linux/overlays/nevp5f9irno/rootfs
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,size=2008488k,nr_inodes=502122,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs rw,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/net_cls cgroup rw,nosuid,nodev,noexec,relatime,net_cls 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/vda1 / ext4 rw,relatime,data=ordered 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=29,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
/dev/vdb2 /var/vcap/data ext4 rw,relatime,data=ordered 0 0
tmpfs /var/vcap/data/sys/run tmpfs rw,relatime,size=1024k 0 0
/dev/loop0 /tmp ext4 rw,relatime,data=ordered 0 0
/dev/vdb2 /var/vcap/data/garden-linux/graph/aufs ext4 rw,relatime,data=ordered 0 0
cgroup /tmp/garden-/cgroup tmpfs rw,relatime,mode=755 0 0
cgroup /tmp/garden-/cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /tmp/garden-/cgroup/blkio cgroup rw,relatime,blkio 0 0
cgroup /tmp/garden-/cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /tmp/garden-/cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /tmp/garden-/cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /tmp/garden-/cgroup/net_cls cgroup rw,relatime,net_cls 0 0
cgroup /tmp/garden-/cgroup/perf_event cgroup rw,relatime,perf_event 0 0
cgroup /tmp/garden-/cgroup/hugetlb cgroup rw,relatime,hugetlb 0 0
none /var/vcap/data/garden-linux/overlays/nerqi07c98r/rootfs aufs rw,relatime,si=3d779c18ad618124 0 0
devpts /var/vcap/data/garden-linux/overlays/nerqi07c98r/rootfs/dev/pts devpts rw,relatime,mode=600,ptmxmode=666 0 0
none /var/vcap/data/garden-linux/overlays/nerqi07c98r/rootfs aufs rw,relatime,si=3d779c18ad618124 0 0
devpts /var/vcap/data/garden-linux/overlays/nerqi07c98r/rootfs/dev/pts devpts rw,relatime,mode=600,ptmxmode=666 0 0
none /var/vcap/data/garden-linux/overlays/nevp5f9irno/rootfs aufs rw,relatime,si=3d779c18ad61d124 0 0
devpts /var/vcap/data/garden-linux/overlays/nevp5f9irno/rootfs/dev/pts devpts rw,relatime,mode=600,ptmxmode=666 0 0
none /var/vcap/data/garden-linux/overlays/nevp5f9irno/rootfs aufs rw,relatime,si=3d779c18ad61d124 0 0
devpts /var/vcap/data/garden-linux/overlays/nevp5f9irno/rootfs/dev/pts devpts rw,relatime,mode=600,ptmxmode=666 0 0
thanks
I've been trying to install the latest release 0.1335.0 on my bosh-lite with cf release 212. Each time I run a bosh -n deploy
for diego, I get the following error:
Started binding instance vms
Started binding instance vms > database_z1/0
Started binding instance vms > brain_z1/0
Started binding instance vms > cell_z1/0
Started binding instance vms > cc_bridge_z1/0
Started binding instance vms > route_emitter_z1/0
Started binding instance vms > access_z1/0. Done (00:00:00)
Done binding instance vms > cc_bridge_z1/0 (00:00:00)
Done binding instance vms > database_z1/0 (00:00:00)
Done binding instance vms > cell_z1/0 (00:00:00)
Done binding instance vms > brain_z1/0 (00:00:00)
Done binding instance vms > route_emitter_z1/0 (00:00:00)
Done binding instance vms (00:00:00)
Started preparing configuration > Binding configuration. Failed: Error filling in template `agent_ctl.sh.erb' for `database_z1/0' (line 31: undefined method `tr' for ["bbs", {}]:Array) (00:00:00)
Error 100: Error filling in template `agent_ctl.sh.erb' for `database_z1/0' (line 31: undefined method `tr' for ["bbs", {}]:Array)
Task 23 error
I'm not sure what should go in the bbs configuration. I get similar issues with cc_bridge_z1 and access_z1.
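One hedged reading of that stack trace: `tr` is a String method, and the template received a `["bbs", {}]` key/value pair instead, which suggests the consul agent's services property is a hash where this cf-release's agent_ctl.sh.erb expects a flat list of service names (or the releases are otherwise mismatched). Purely to illustrate the two shapes (property path and names assumed, not taken from either release):

```yaml
# shape the failing template appears to expect: a list of name strings
consul:
  agent:
    services: [bbs]

# shape the manifest apparently contains: a hash of service definitions
consul:
  agent:
    services:
      bbs: {}
```

If the shapes really are the issue, aligning the diego-release version with the cf-release v212 templates (or vice versa) is likely cleaner than hand-editing the manifest.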
bosh vms
results in the following output:
bosh vms
Acting as user 'admin' on 'Bosh Lite Director'
Deployment `cf-warden'
Director task 28
Task 28 done
+------------------------------------+---------+---------------+--------------+
| Job/index | State | Resource Pool | IPs |
+------------------------------------+---------+---------------+--------------+
| api_z1/0 | running | large_z1 | 10.244.0.134 |
| consul_z1/0 | running | medium_z1 | 10.244.0.54 |
| doppler_z1/0 | running | medium_z1 | 10.244.0.142 |
| etcd_z1/0 | running | medium_z1 | 10.244.0.42 |
| ha_proxy_z1/0 | running | router_z1 | 10.244.0.34 |
| hm9000_z1/0 | running | medium_z1 | 10.244.0.138 |
| loggregator_trafficcontroller_z1/0 | running | small_z1 | 10.244.0.146 |
| nats_z1/0 | running | medium_z1 | 10.244.0.6 |
| postgres_z1/0 | running | medium_z1 | 10.244.0.30 |
| router_z1/0 | running | router_z1 | 10.244.0.22 |
| runner_z1/0 | running | runner_z1 | 10.244.0.26 |
| uaa_z1/0 | running | medium_z1 | 10.244.0.130 |
+------------------------------------+---------+---------------+--------------+
VMs total: 12
Deployment `cf-warden-diego'
Director task 29
Task 29 done
+--------------------+---------+------------------+---------------+
| Job/index | State | Resource Pool | IPs |
+--------------------+---------+------------------+---------------+
| access_z1/0 | running | access_z1 | 10.244.16.6 |
| brain_z1/0 | running | brain_z1 | 10.244.16.134 |
| cc_bridge_z1/0 | running | cc_bridge_z1 | 10.244.16.142 |
| cell_z1/0 | running | cell_z1 | 10.244.16.138 |
| database_z1/0 | running | database_z1 | 10.244.16.130 |
| route_emitter_z1/0 | running | route_emitter_z1 | 10.244.16.146 |
Hello, I performed a new install of bosh-lite on Vagrant, and using a Linux box I followed the guide to deploy cf (branch develop) and Diego.
Everything is deployed (I had some trouble with release creation because not all the compilers were configured to pick up the proxy settings, but that is fine now).
cf apps:
Getting apps in org myOrg / space dev as admin...
FAILED
Server error, status code: 500, error code: 10001, message: An unknown error occurred.
cf push:
OK
Warning: error tailing logs
Unauthorized error: You are not authorized. Error: Invalid authorization
Starting app hwphp in org myOrg / space dev as admin...
FAILED
Server error, status code: 500, error code: 10001, message: An unknown error occurred.
It seems to be something like:
https://groups.google.com/a/cloudfoundry.org/forum/#!topic/vcap-dev/VzBQDblEEwk [solved, but I couldn't find the file to edit; it's an old release]
https://groups.google.com/a/cloudfoundry.org/forum/#!msg/bosh-users/Yb3X9vMtd9Y/NZBtkXvI1igJ [unanswered]
Can someone give me a hint about this?
Thank you!
edit:
this is a log I collected:
<11>2015-03-10T12:33:21.493514+00:00 10.244.0.138 vcap.cloud_controller_ng [job=api_z1 index=0] {"timestamp":1425990801.4934351,"message":"Request failed: 500: {\"code\"=>10001, \"description\"=>\"SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed\", \"error_code\"=>\"CF-SSLError\", \"backtrace\"=>[\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/httpclient-2.5.3.3/lib/httpclient/session.rb:314:in `connect'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/httpclient-2.5.3.3/lib/httpclient/session.rb:314:in `ssl_connect'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/httpclient-2.5.3.3/lib/httpclient/session.rb:771:in `block in connect'\", \"/var/vcap/packages/ruby-2.1.4/lib/ruby/2.1.0/timeout.rb:91:in `block in timeout'\", \"/var/vcap/packages/ruby-2.1.4/lib/ruby/2.1.0/timeout.rb:101:in `call'\", \"/var/vcap/packages/ruby-2.1.4/lib/ruby/2.1.0/timeout.rb:101:in `timeout'\", \"/var/vcap/packages/ruby-2.1.4/lib/ruby/2.1.0/timeout.rb:127:in `timeout'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/httpclient-2.5.3.3/lib/httpclient/session.rb:762:in `connect'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/httpclient-2.5.3.3/lib/httpclient/session.rb:620:in `query'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/httpclient-2.5.3.3/lib/httpclient/session.rb:164:in `query'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/httpclient-2.5.3.3/lib/httpclient.rb:1161:in `do_get_block'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/httpclient-2.5.3.3/lib/httpclient.rb:962:in `block in do_request'\", 
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/httpclient-2.5.3.3/lib/httpclient.rb:1059:in `protect_keep_alive_disconnected'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/httpclient-2.5.3.3/lib/httpclient.rb:961:in `do_request'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/httpclient-2.5.3.3/lib/httpclient.rb:823:in `request'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/httpclient-2.5.3.3/lib/httpclient.rb:726:in `post'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/hm9000/client.rb:76:in `post_bulk_app_state'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/hm9000/client.rb:96:in `make_request'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/hm9000/client.rb:90:in `app_state_bulk_request'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/hm9000/client.rb:23:in `healthy_instances_bulk'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/instances_reporter.rb:38:in `healthy_instances_bulk'\", \"/var/vcap/packages/cloud_controller_ng/cloud_c...","log_level":"error","source":"cc.api","data":{"request_guid":"5e05004f-7ce5-4f02-7744-2b898cdc81aa::a412f221-85ac-4919-8d59-66e1565f2abd"},"thread_id":70040017744920,"fiber_id":70040026187480,"process_id":225,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/sinatra/vcap.rb","lineno":53,"method":"block in registered"}
I'd prefer to fix the certificate problem rather than lowering the security level and not using SSL, but any suggestion will be appreciated.
In my Dockerfile I added a line to create the VCAP user as 0:0 (root):
RUN useradd -R / -m -s /bin/bash -ou 0 -g 0 vcap
2015-03-30T12:37:23.03-0700 [APP/0] OUT uid=0(root) gid=0(root) groups=0(root),65534(nogroup)
2015-03-30T12:37:23.03-0700 [APP/0] OUT root
which seemed to work, but when starting a script from within the container (image) I am unable to start nginx. Any thoughts?
2015-03-30T12:37:22.75-0700 [CELL/0] OUT Successfully created container
2015-03-30T12:37:23.03-0700 [APP/0] OUT CF_INSTANCE_ADDR=10.244.17.6:61260
2015-03-30T12:37:23.03-0700 [APP/0] OUT TMPDIR=/app/tmp
2015-03-30T12:37:23.03-0700 [APP/0] OUT USER=vcap
2015-03-30T12:37:23.03-0700 [APP/0] OUT VCAP_APPLICATION={"application_name":"tmt","application_version":"ea9c851d-400c-4ce3-80a3-f2d6644ae566","host":"0.0.0.0","instance_id":"5b6336b0-c293-4b1d-7c6e-196bb30e7810","instance_index":0,"limits":{"disk":1024,"fds":16384,"mem":256},"name":"tmt","port":8080,"space_id":"50f34aa7-f63b-4f33-9854-6400945febec","space_name":"demo","version":"ea9c851d-400c-4ce3-80a3-f2d6644ae566"}
2015-03-30T12:37:23.03-0700 [APP/0] OUT PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
2015-03-30T12:37:23.03-0700 [APP/0] OUT MONO_IOMAP=all
2015-03-30T12:37:23.03-0700 [APP/0] OUT PWD=/
2015-03-30T12:37:23.03-0700 [APP/0] OUT LANG=en_US.UTF-8
2015-03-30T12:37:23.03-0700 [APP/0] OUT CF_INSTANCE_PORT=61260
2015-03-30T12:37:23.03-0700 [APP/0] OUT CF_INSTANCE_IP=10.244.17.6
2015-03-30T12:37:23.03-0700 [APP/0] OUT VCAP_SERVICES={}
2015-03-30T12:37:23.03-0700 [APP/0] OUT SHLVL=1
2015-03-30T12:37:23.03-0700 [APP/0] OUT HOME=/app
2015-03-30T12:37:23.03-0700 [APP/0] OUT CF_INSTANCE_PORTS=61260:8080
2015-03-30T12:37:23.03-0700 [APP/0] OUT INSTANCE_INDEX=0
2015-03-30T12:37:23.03-0700 [APP/0] OUT PORT=8080
2015-03-30T12:37:23.03-0700 [APP/0] OUT INSTANCE_GUID=5b6336b0-c293-4b1d-7c6e-196bb30e7810
2015-03-30T12:37:23.03-0700 [APP/0] OUT HOST_MACHINE=config.10.244.0.34.xip.io
2015-03-30T12:37:23.03-0700 [APP/0] OUT MEMORY_LIMIT=256m
2015-03-30T12:37:23.03-0700 [APP/0] OUT _=/usr/bin/env
2015-03-30T12:37:23.03-0700 [APP/0] OUT uid=0(root) gid=0(root) groups=0(root),65534(nogroup)
2015-03-30T12:37:23.03-0700 [APP/0] OUT root
2015-03-30T12:37:23.04-0700 [APP/0] ERR nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
2015-03-30T12:37:23.04-0700 [APP/0] ERR 2015/03/30 19:37:23 [emerg] 26#0: mkdir() "/var/lib/nginx/body" failed (13: Permission denied)
2015-03-30T12:37:23.04-0700 [APP/0] OUT Exit status 1
There must be a way to start a process as root? Can I elevate the VCAP user even more? How are the processes run?
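One possible explanation, offered as an assumption to verify: with user namespaces enabled, uid 0 inside the container maps to an unprivileged uid on the host, so directories owned by "real" root in the image (like /var/log/nginx) stay unwritable even though `id` reports root. The usual workaround is to pre-create and chown the directories nginx needs at image-build time (e.g. `RUN mkdir -p /var/log/nginx /var/lib/nginx && chown -R vcap /var/log/nginx /var/lib/nginx` in the Dockerfile) or point nginx at paths under /app. The pattern, demonstrated with a temp directory standing in for the nginx paths:

```shell
# Pre-create the directories the server needs and make them writable by
# the runtime user, instead of relying on being "root" inside a user
# namespace. A temp dir stands in for /var/log/nginx here.
logdir="$(mktemp -d)/nginx"
mkdir -p "$logdir"
# in a Dockerfile: RUN mkdir -p /var/log/nginx && chown -R vcap /var/log/nginx
touch "$logdir/error.log" && result="writable"
echo "$result"
```
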
I followed the instructions in the README, but hit this error. What do I need to do to get past this?
I do not actually see ba8575466b0a7da8b799f996fe93c7873760b01b within this .final_builds/packages/acceptance-tests/index.yml file.
$ bosh create release --force
Syncing blobs...
sqlite/sqlite-autoconf-3070500.tar.gz downloaded
postgres/postgres-9.0.3-1.amd64.tar.gz downloaded
postgres/postgres-9.0.3-1.i386.tar.gz downloaded
postgres/postgresql-9.0.3.tar.gz downloaded
debian_nfs_server/nfs-kernel-server_1%3a1... downloaded
mysql/client-5.1.62-rel13.3-435-Linux-x86... downloaded
ruby/rubygems-1.8.24.tgz downloaded
git/git-1.7.11.2.tar.gz downloaded
uaa/cloudfoundry-identity-varz-1.0.2.war downloaded
ruby/bundler-1.2.1.gem downloaded
libyaml/yaml-0.1.4.tgz downloaded
buildpack_cache/ruby-1.8.7.tgz downloaded
buildpack_cache/ruby-1.9.2.tgz downloaded
buildpack_cache/ruby-1.9.3.tgz downloaded
buildpack_cache/ruby-build-1.8.7.tgz downloaded
buildpack_cache/ruby-build-1.9.2.tgz downloaded
buildpack_cache/bundler-1.3.0.pre.5.tgz downloaded
buildpack_cache/libyaml-0.1.4.tgz downloaded
buildpack_cache/ruby_versions.yml downloaded
buildpack_cache/bundler-1.3.1.tgz downloaded
buildpack_cache/ruby-2.0.0.tgz downloaded
buildpack_cache/apache-tomcat-7.0.37.tar.... downloaded
buildpack_cache/mysql-connector-java-5.1.... downloaded
buildpack_cache/postgresql-9.0-801.jdbc4.... downloaded
buildpack_cache/rails3_serve_static_asset... downloaded
buildpack_cache/rails_log_stdout.tgz downloaded
buildpack_cache/bundler-1.3.2.tgz downloaded
buildpack_cache/auto-reconfiguration-0.6.... downloaded
buildpack_cache/play-jpa-plugin-0.6.6.jar... downloaded
rootfs/lucid64.tar.gz downloaded
buildpack_cache/openjdk-1.6.0_27.tar.gz downloaded
buildpack_cache/openjdk-1.7.0_21.tar.gz downloaded
buildpack_cache/openjdk-1.8.0_M7.tar.gz downloaded
buildpack_cache/apache-tomcat-7.0.40.tar.... downloaded
buildpack_cache/apache-tomcat-7.0.41.tar.... downloaded
buildpack_cache/auto-reconfiguration-0.6.... downloaded
buildpack_cache/openjdk-1.7.0_25.tar.gz downloaded
buildpack_cache/auto-reconfiguration-0.6.... downloaded
haproxy/haproxy-1.5-dev19.tar.gz downloaded
haproxy/pcre-8.33.tar.gz downloaded
nginx/headers-more-v0.25.tgz downloaded
nginx/nginx-1.4.5.tar.gz downloaded
nginx/nginx-upload-module-2.2.tar.gz downloaded
nginx/pcre-8.34.tar.gz downloaded
nginx/newrelic_nginx_agent.tar.gz downloaded
golang/go1.2.1.linux-amd64.tar.gz downloaded
uaa/openjdk-1.7.0_51.tar.gz downloaded
uaa/openjdk-1.7.0-u40-unofficial-linux-am... downloaded
uaa/apache-tomcat-7.0.52.tar.gz downloaded
uaa/openjdk-1.7.0-u40-unofficial-macosx-x... downloaded
cli/cf-darwin-amd64.tgz downloaded
cli/cf-linux-amd64.tgz downloaded
ruby/yaml-0.1.6.tar.gz downloaded
ruby/ruby-1.9.3-p547.tar.gz downloaded
nodejs-buildpack/nodejs-buildpack-offline... downloaded
java-buildpack/java-buildpack-offline-v2.... downloaded
java-buildpack/java-buildpack-v2.4.zip downloaded
php-buildpack/php_buildpack-offline-v1.0.... downloaded
go-buildpack/go_buildpack-offline-v1.0.1.... downloaded
ruby-buildpack/ruby_buildpack-offline-v1.... downloaded
python-buildpack/python_buildpack-offline... downloaded
Please enter development release name: dora
Building DEV release
---------------------------------
Building packages
-----------------
Building acceptance-tests...
Final version: NOT FOUND
Dev version: NOT FOUND
/usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/versions_index.rb:45:in `block in latest_version': There is a duplicate version `ba8575466b0a7da8b799f996fe93c7873760b01b' in index `/Users/phil/src/cloudfoundry/cf-release/.final_builds/packages/acceptance-tests/index.yml' (RuntimeError)
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/versions_index.rb:41:in `sort'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/versions_index.rb:41:in `latest_version'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/packaging_helper.rb:157:in `generate_tarball'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/packaging_helper.rb:56:in `block in build'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/core_ext.rb:14:in `with_indent'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/packaging_helper.rb:55:in `build'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/commands/release.rb:371:in `block in build_packages'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/commands/release.rb:369:in `each'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/commands/release.rb:369:in `build_packages'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/commands/release.rb:314:in `create_from_spec'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/commands/release.rb:48:in `create'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/command_handler.rb:57:in `run'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/runner.rb:56:in `run'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/runner.rb:16:in `run'
from /usr/local/var/rbenv/versions/1.9.3-p547/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/bin/bosh:7:in `<top (required)>'
from /usr/local/var/rbenv/versions/1.9.3-p547/bin/bosh:23:in `load'
from /usr/local/var/rbenv/versions/1.9.3-p547/bin/bosh:23:in `<main>'
These are the contents of .final_builds/packages/acceptance-tests/index.yml:
---
builds:
!binary "YWE3Y2ZiNDhjNGQwMTFiNzQ4MGZjZTQ1YmQ4YjQ0YWM0MjA3OWZiZQ==":
blobstore_id: 4fd7d956-be06-4bf3-a30a-ce6cf9a77bf8
version: 1
sha1: !binary |-
NDRiYTk4MmEzMmJiMzJjY2FkM2JkMmMwOGNlYmI2NmI2NDk1MWQyMQ==
!binary "NmE3NTAzZTRmOTVmZDI3ZGFlZmJjN2YzYjYyOGRkZmNkMzgxOWY3MA==":
blobstore_id: 1b55cfee-5d18-403a-871a-d86ac54f7e7a
version: 2
sha1: !binary |-
ZjlkNjA5NWUzZTgyNzIyMWVjYjU0ZDRjYzNmNGUzY2UxNTFjYWE2OA==
!binary "NzZiMmI3MjVmYTYyNjM5YjJjNzUzNWZiMjc5MzBlYzk1ZGY0ZDZjYQ==":
blobstore_id: e5c49dbd-cadd-42bc-9410-79fa92b47773
version: !binary |-
NzZiMmI3MjVmYTYyNjM5YjJjNzUzNWZiMjc5MzBlYzk1ZGY0ZDZjYQ==
sha1: !binary |-
NTU3OTJmNjkxMTMxN2VmN2FlNTYyN2VhYTc4ZjU2NzAzN2E2NWQ4Ng==
!binary "ZmI0YjNkNWUxODgyNTFjZTYzNDBmNDQ5MDhhMDMzMmQxNDlmY2M1OA==":
blobstore_id: 30164ca6-cbf4-43c5-a546-487658ecbef0
version: !binary |-
ZmI0YjNkNWUxODgyNTFjZTYzNDBmNDQ5MDhhMDMzMmQxNDlmY2M1OA==
sha1: !binary |-
YmIxZDI1NDg5MjMyYzFiYmQzMjc3NTcxMTgzNGM5NjQzYzZhODBkOA==
!binary "YmE4NTc1NDY2YjBhN2RhOGI3OTlmOTk2ZmU5M2M3ODczNzYwYjAxYg==":
blobstore_id: 8e1ed9c1-55bf-40f7-85d4-b2257b772bef
version: !binary |-
YmE4NTc1NDY2YjBhN2RhOGI3OTlmOTk2ZmU5M2M3ODczNzYwYjAxYg==
sha1: !binary |-
ZTE1YmI3ZTY4ZmEzYThlNDU5YjNjYTliNmU5ZjJkYTQ0M2I1YTM3ZA==
!binary "MTRhYmMxMDY5YTE5NDFmMDhjMmQ0YTQ0YjIxZWFlZWFlMWJmN2U2MQ==":
version: !binary |-
MTRhYmMxMDY5YTE5NDFmMDhjMmQ0YTQ0YjIxZWFlZWFlMWJmN2U2MQ==
sha1: !binary |-
ZGU0YzU4OTA4Y2YzODIxYWU5YzRhZDg2MmY0NjRkYzMxMjY5ZmRiMg==
blobstore_id: c89ce816-3f86-4bd0-9da1-49285b05b769
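For anyone puzzled by not finding the hash: the keys in this index are YAML `!binary` scalars, i.e. base64-encoded package fingerprints, so `ba8575466b0a7da8b799f996fe93c7873760b01b` will never appear in the file as plain text. A quick sketch to decode them (assumes GNU `base64`; the file path is the one from the error message):

```shell
index=.final_builds/packages/acceptance-tests/index.yml

# Decode the last key shown in the file above; it is the "missing" hash.
echo 'YmE4NTc1NDY2YjBhN2RhOGI3OTlmOTk2ZmU5M2M3ODczNzYwYjAxYg==' | base64 --decode
echo

# Decode every !binary key in the index and print any that occur twice.
if [ -f "$index" ]; then
  grep -o '!binary "[A-Za-z0-9+/=]*"' "$index" \
    | sed 's/^!binary "//; s/"$//' \
    | while read -r key; do echo "$key" | base64 --decode; echo; done \
    | sort | uniq -d
fi
```

If the decoded list really does show the same fingerprint twice, removing one of the two entries is the usual way out; back the file up first, since I'm not certain which entry the CLI prefers.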
Hi,
It would really help folks engage with this project if there was more information on what it is :)
Cheers,
Phil
Hello, I tried to push my docker app with docker-push, but I get the following errors:
root@lv:~/workspace/docker# cf docker-push phpapp 9.91.17.17:5000/lv/kodexplorer:latest
Creating app phpapp ...
Ok
Creating route for phpapp ...
Route phpapp.9.91.39.31.xip.io created
Ok
Mapping route to phpapp ...
Mapped phpapp.9.91.39.31.xip.io route to phpapp
Ok
Start app phpapp ...
Starting app phpapp in org diego / space diego as admin...
FAILED
Server error, status code: 400, error code: 170001, message: Staging error: failed to stage application:
Hostname not supplied: ''
FAILED
Error: Error executing cli core command
Starting app phpapp in org diego / space diego as admin...
FAILED
Server error, status code: 400, error code: 170001, message: Staging error: failed to stage application:
Hostname not supplied: ''
Here is the cf logs output:
root@lv:~/workspace/docker# cf logs phpapp --recent
Connected, dumping recent logs for app phpapp in org diego / space diego as admin...
2015-03-18T18:52:51.54+0000 [API/0] OUT Created app with guid a079585c-d286-4eef-9985-cd5f06a9d4b0
2015-03-18T18:52:51.87+0000 [DEA/0] OUT Got staging request for app with id a079585c-d286-4eef-9985-cd5f06a9d4b0
2015-03-18T18:52:52.34+0000 [API/0] OUT Updated app with guid a079585c-d286-4eef-9985-cd5f06a9d4b0 ({"route"=>"b957ab27-4185-410c-b277-16c1004abe44"})
2015-03-18T18:52:53.46+0000 [API/0] ERR exception handling first response Staging error: failed to stage application:
2015-03-18T18:52:53.46+0000 [API/0] ERR Hostname not supplied: ''
2015-03-18T18:52:53.47+0000 [API/0] ERR encountered error: Staging error: failed to stage application: staging had already been marked as failed, this could mean that staging took too long
Here is the cf events output:
root@lv:~/workspace/docker# cf events phpapp
Getting events for app phpapp in org diego / space diego as admin...
time event actor description
2015-03-18T18:52:52.00+0000 audit.app.update admin
2015-03-18T18:52:52.00+0000 audit.app.map-route admin
2015-03-18T18:52:51.00+0000 audit.app.create admin instances: 1, state: STOPPED, environment_json: PRIVATE DATA HIDDEN
Here are the trace logs:
WEBSOCKET REQUEST: [2015-03-18T18:52:52Z]
GET /tail/?app=a079585c-d286-4eef-9985-cd5f06a9d4b0 HTTP/1.1
Host: wss://loggregator.9.91.39.31.xip.io:443
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: [HIDDEN]
Origin: http://localhost
Authorization: [PRIVATE DATA HIDDEN]
WEBSOCKET RESPONSE: [2015-03-18T18:52:52Z]
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-Websocket-Accept: uvdsXVX6vTCjbaGT1SCUfHc1ecA=
REQUEST: [2015-03-18T18:52:52Z]
PUT /v2/apps/a079585c-d286-4eef-9985-cd5f06a9d4b0?async=true&inline-relations-depth=1 HTTP/1.1
Host: api.9.91.39.31.xip.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.10.0-b78bf10 / linux
{"state":"STARTED"}
RESPONSE: [2015-03-18T18:52:53Z]
HTTP/1.1 400 Bad Request
Content-Length: 149
Content-Type: application/json;charset=utf-8
Date: Wed, 18 Mar 2015 18:52:53 GMT
Server: nginx
X-Cf-Requestid: c0768d3a-c9ee-45d4-5b5b-0eb4a2f43ff8
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: 119051d8-2ecd-4756-6aaf-e22c1b0eb969::a4d081f9-a71b-4ffc-8d3d-67722882d173
{
"code": 170001,
"description": "Staging error: failed to stage application:\nHostname not supplied: ''\n",
"error_code": "CF-StagingError"
}
FAILED
Server error, status code: 400, error code: 170001, message: Staging error: failed to stage application:
Hostname not supplied: ''
Thank you.
In scripts/generate-deployment-manifest
spiff merge \
${manifest_generation}/config-from-cf.yml \
${manifest_generation}/config-from-cf-internal.yml \
${deployments}/cf.yml \
> ${tmpdir}/config-from-cf.yml
This assumes the CF deployment manifest is named cf.yml.
In my installations I name each manifest after its version:
➜ manifests git:(master) ✗ ls
178.yml 183.yml 187.yml 192.yml 194.yml 195.yml 196.yml 197.yml 199.yml 200.yml 203.yml 205.yml 207.yml 208.yml 209.yml 210.yml
Would you be open to a PR that takes a path to a manifest rather than a path to a directory where path/cf.yml must be present?
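A minimal sketch of what that PR could look like (hypothetical helper name, not the actual script code): accept either the legacy deployments directory or a direct manifest path.

```shell
# Hypothetical helper: callers may pass either a deployments directory
# (legacy behavior, expects cf.yml inside) or a manifest file like 205.yml.
resolve_cf_manifest() {
  if [ -f "$1" ]; then
    echo "$1"          # a manifest file was passed directly
  else
    echo "$1/cf.yml"   # a directory was passed; keep the old default
  fi
}
```

The spiff merge line would then use `$(resolve_cf_manifest "${deployments}")` in place of `${deployments}/cf.yml`.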
I want to get container status info, like the varz info from the DEA. I found the handler named "handleInfo" in the garden server (request_handling.go), which has the info I need.
My question is: which component uses this interface? Should I request the garden server directly?
Currently, cf docker-push provisions a docker image as an app. Would there be support for provisioning the same as a CF service?
Use case: Provision an Eureka service running in a docker container as a service instance for a CF app
Hello, I pushed an app using a docker image I made, but the app needs to run its start command as a special user, not vcap, and it needs to write files inside the container. When I run the start command I get "permission denied". I logged in to the container and found that the user and group ownership had been changed to this:
drwx------ 6 65534 65534 4096 Mar 7 09:44 myuser
-rwxr-xr-x 1 65534 65534 520 Mar 7 10:18 startServer.sh
So I have some questions:
Thank you!
Hi,
I'm trying to deploy a docker application from my private docker registry. I configured it with properties.stager:docker_registry_url: http://9.91.39.37
and properties.garden-linux:insecure_docker_registry_list: ["9.91.39.37"]
cf start app returns:
Starting app kod in org admin / space admin as admin...
FAILED
Server error, status code: 500, error code: 170011, message: Stager error: staging failed: 500
cf logs app returns:
2015-04-17T18:32:06.63+0000 [API/0] OUT Created app with guid 37ff6f3c-a4e8-478e-817b-cd0a24c409e2
2015-04-17T18:32:06.74+0000 [API/0] OUT Updated app with guid 37ff6f3c-a4e8-478e-817b-cd0a24c409e2 ({"command"=>"PRIVATE DATA HIDDEN"})
2015-04-17T18:32:07.37+0000 [API/0] OUT Updated app with guid 37ff6f3c-a4e8-478e-817b-cd0a24c409e2 ({"route"=>"1f8f3a25-376f-463c-968a-44d8a5b5b695"})
2015-04-17T18:32:25.38+0000 [API/0] ERR Failed to stage application: staging failed
/var/vcap/sys/log/stager/stager.stdout.log:
{"timestamp":"1429296690.342980623","source":"stager","message":"stager.docker.build-recipe.staging-request","log_level":1,"data":{"Request":{"app_id":"37ff6f3c-a4e8-478e-817b-cd0a24c409e2","file_descriptors":16384,"memory_mb":1024,"disk_mb":6144,"environment":[{"name":"VCAP_APPLICATION","value":"{\"limits\":{\"mem\":256,\"disk\":1024,\"fds\":16384},\"application_version\":\"609989bd-3793-464d-b61e-068e613e6d51\",\"application_name\":\"kod\",\"version\":\"609989bd-3793-464d-b61e-068e613e6d51\",\"name\":\"kod\",\"space_name\":\"admin\",\"space_id\":\"81963f13-6090-4f01-8dec-ef922cc6a71a\"}"},{"name":"VCAP_SERVICES","value":"{}"},{"name":"MEMORY_LIMIT","value":"256m"},{"name":"CF_STACK","value":"cflinuxfs2"}],"egress_rules":[{"protocol":"all","destinations":["0.0.0.0-9.255.255.255"],"log":false},{"protocol":"all","destinations":["11.0.0.0-169.253.255.255"],"log":false},{"protocol":"all","destinations":["169.255.0.0-172.15.255.255"],"log":false},{"protocol":"all","destinations":["172.32.0.0-192.167.255.255"],"log":false},{"protocol":"all","destinations":["192.169.0.0-255.255.255.255"],"log":false},{"protocol":"tcp","destinations":["0.0.0.0/0"],"ports":[53],"log":false},{"protocol":"udp","destinations":["0.0.0.0/0"],"ports":[53],"log":false}],"timeout":900,"log_guid":"37ff6f3c-a4e8-478e-817b-cd0a24c409e2","lifecycle":"docker","lifecycle_data":{"docker_image":"9.91.39.37/lvguanglin/php_kodexplorer:latest"}},"session":"2.2"}}
{"timestamp":"1429296690.345285892","source":"stager","message":"stager.staging-handler.staging-request.recipe-building-failed","log_level":2,"data":{"error":"missing docker registry","session":"3.2","staging-guid":"37ff6f3c-a4e8-478e-817b-cd0a24c409e2-817d03f871bb416198201d0c3f31f387","staging-request":{"app_id":"37ff6f3c-a4e8-478e-817b-cd0a24c409e2","file_descriptors":16384,"memory_mb":1024,"disk_mb":6144,"environment":[{"name":"VCAP_APPLICATION","value":"{\"limits\":{\"mem\":256,\"disk\":1024,\"fds\":16384},\"application_version\":\"609989bd-3793-464d-b61e-068e613e6d51\",\"application_name\":\"kod\",\"version\":\"609989bd-3793-464d-b61e-068e613e6d51\",\"name\":\"kod\",\"space_name\":\"admin\",\"space_id\":\"81963f13-6090-4f01-8dec-ef922cc6a71a\"}"},{"name":"VCAP_SERVICES","value":"{}"},{"name":"MEMORY_LIMIT","value":"256m"},{"name":"CF_STACK","value":"cflinuxfs2"}],"egress_rules":[{"protocol":"all","destinations":["0.0.0.0-9.255.255.255"],"log":false},{"protocol":"all","destinations":["11.0.0.0-169.253.255.255"],"log":false},{"protocol":"all","destinations":["169.255.0.0-172.15.255.255"],"log":false},{"protocol":"all","destinations":["172.32.0.0-192.167.255.255"],"log":false},{"protocol":"all","destinations":["192.169.0.0-255.255.255.255"],"log":false},{"protocol":"tcp","destinations":["0.0.0.0/0"],"ports":[53],"log":false},{"protocol":"udp","destinations":["0.0.0.0/0"],"ports":[53],"log":false}],"timeout":900,"log_guid":"37ff6f3c-a4e8-478e-817b-cd0a24c409e2","lifecycle":"docker","lifecycle_data":{"docker_image":"9.91.39.37/lvguanglin/php_kodexplorer:latest"}}}}
I get "error":"missing docker registry". What does this error mean? Is some config missing?
Thanks.
When I try to deploy Diego with BOSH on OpenStack I get the following error:
https://gist.github.com/jhiemer/4be4bbb61c71994ea7d4
The big problem is that I can't find any logs under /var/vcap/sys/log to start debugging from. Did the logging directory change?
I'm trying to deploy diego-release locally. As part of that I'm running bosh create release, but some of the package spec GitHub paths have been migrated and create release is failing.
bosh create release --force && quit
Syncing blobs...
Release artifact cache: /Users/sumanaluru/.bosh/cache
Building license...
Using final version '861d82c0d9745784acfcd026ccf44e942579824a'
Building acceptance-tests...
Using final version '2f54ad04d4f720d8e42ad802ca431677635902c8'
Downloading from blobstore (id=e2d9e686-0234-4c24-b0e9-ddf0278ab164)...
Building auctioneer...
Package 'auctioneer' has a glob that resolves to an empty file list: github.com/cloudfoundry-incubator/auctioneer/cmd/auctioneer/*.go
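The "glob that resolves to an empty file list" error means the package spec's `files:` pattern matched nothing under `src/`, which almost always means the submodules were never synced (`./scripts/update`, or plain `git submodule update --init --recursive`). A sketch of the check the CLI is effectively doing:

```shell
# Returns success (0) when a glob pattern matches no files: an unmatched
# glob stays literal in POSIX sh, so -e on the first resulting word fails.
glob_is_empty() {
  set -- $1
  [ ! -e "$1" ]
}

if glob_is_empty 'src/github.com/cloudfoundry-incubator/auctioneer/cmd/auctioneer/*.go'; then
  echo 'auctioneer sources missing; try: git submodule update --init --recursive'
fi
```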
I docker-pushed an app to Diego. When I run cf app it shows:
Showing health and status for app portal in org admin / space admin as admin...
OK
requested state: started
instances: 3/3
usage: 512M x 3 instances
urls: portal.test.xip.io
last uploaded: Sat Mar 28 07:52:25 UTC 2015
state since cpu memory disk details
#0 running 2015-03-28 03:53:05 AM 0.0% 0 of 0 0 of 0
#1 running 2015-03-28 04:03:05 AM 0.0% 0 of 0 0 of 0
#2 running 2015-03-28 04:03:05 AM 0.0% 0 of 0 0 of 0
Why can't it monitor the CPU, memory, and disk usage?
Under step 11 of the diego-release README instructions, I ran...
bosh -n deploy
and got this error
Failed updating job cf_bridge_z1 > cf_bridge_z1/0: `cf_bridge_z1/0' is not running after update (00:01:11)
Failed updating job cell_z1 > cell_z1/0: `cell_z1/0' is not running after update (00:01:41)
Error 400007: `cf_bridge_z1/0' is not running after update
Anything I can do to get past this or pointers to why this might be happening?
Is this correct?
Deployment `cf-warden-diego'
+--------------------+---------+------------------+--------------+
| Job/index | State | Resource Pool | IPs |
+--------------------+---------+------------------+--------------+
| access_z1/0 | failing | access_z1 | 10.244.16.46 |
| brain_z1/0 | failing | brain_z1 | 10.244.16.6 |
| cc_bridge_z1/0 | failing | cc_bridge_z1 | 10.244.16.14 |
| cell_z1/0 | failing | cell_z1 | 10.244.16.10 |
| etcd_z1/0 | running | etcd_z1 | 10.244.16.2 |
| route_emitter_z1/0 | failing | route_emitter_z1 | 10.244.16.18 |
+--------------------+---------+------------------+--------------+
Working off 65327ce and cf-release cloudfoundry-attic/cf-release@9334295
Crash at bosh create release --force:
Building garden-linux...
Package 'garden-linux' has a glob that resolves to an empty file list: github.com/cloudfoundry-incubator/garden-linux/network/devices/bridgetest/*.go
Hello, I have deployed Diego and CF, and I want to push an app using a docker image I created myself. Can you tell me how to do this? Thank you.
I'm trying to update the routes field for an LRP.
$ curl receptor.10.244.0.34.xip.io/v1/desired_lrps/scoen-1 -X PUT -d '{"routes":{"tcp-router":[{"external_port":70000,"container_port":6379}]}}'
{"name":"InvalidJSON","message":"EOF"}
I have verified that this is valid JSON.
{
"routes":{
"tcp-router":[
{
"external_port":50001,
"container_port":6379
}
]
}
}
@amitkgupta suggested that the payload needs to be escaped; I have not found this documented.
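Two things worth ruling out locally before blaming escaping (assumptions on my part, not confirmed receptor behavior): the failing request uses `external_port` 70000, which does not fit in a TCP port number (max 65535), and inline `-d '...'` payloads are easy to mangle with shell quoting. A sketch:

```shell
# 1. 70000 exceeds the maximum TCP port number, so validate first.
port=70000
if [ "$port" -gt 65535 ]; then
  echo "external_port $port is out of range (max 65535)"
fi

# 2. Keep the body in a file so shell quoting cannot truncate it, and send
#    it with an explicit Content-Type (the actual PUT is left commented out):
cat > routes.json <<'EOF'
{"routes":{"tcp-router":[{"external_port":50001,"container_port":6379}]}}
EOF
# curl receptor.10.244.0.34.xip.io/v1/desired_lrps/scoen-1 \
#   -X PUT -H 'Content-Type: application/json' -d @routes.json
```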
The deploy fails and I get
Getting deployment properties from director...
Compiling deployment manifest...
Deployment name: `cf.yml'
Director name: `microbosh'
Director task 8
Started preparing deployment
Started preparing deployment > Binding deployment. Done (00:00:00)
Started preparing deployment > Binding releases. Done (00:00:00)
Started preparing deployment > Binding existing deployment. Done (00:00:00)
Started preparing deployment > Binding resource pools. Done (00:00:00)
Started preparing deployment > Binding stemcells. Done (00:00:00)
Started preparing deployment > Binding templates. Done (00:00:00)
Started preparing deployment > Binding properties. Done (00:00:00)
Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
Started preparing deployment > Binding instance networks. Done (00:00:00)
Done preparing deployment (00:00:00)
Started preparing package compilation > Finding packages to compile. Done (00:00:00)
Started compiling packages
Started compiling packages > cli/1796b9b4dce96175bcefa60e1afbe1d4b7cd1f6b
Started compiling packages > buildpack_python/076c11da464aa50911e1744b39e95522a00e1f48
Started compiling packages > rootfs_cflinuxfs2/f528b08de7797c06725a2bdcf116e8ca0496cc12
Started compiling packages > rootfs_lucid64/933e8e6829308fbaeeff1b2aaef030f6f3fc8886
Started compiling packages > buildpack_php/60fb983e430ab8de7fb647cba59954f8d0c4b9c9
Started compiling packages > buildpack_go/c647d65201f25e34bcc304898afe43c82104d950
Failed compiling packages > rootfs_lucid64/933e8e6829308fbaeeff1b2aaef030f6f3fc8886: expected string value for option instance_type (00:00:01)
Failed compiling packages > rootfs_cflinuxfs2/f528b08de7797c06725a2bdcf116e8ca0496cc12: expected string value for option instance_type (00:00:01)
Failed compiling packages > buildpack_go/c647d65201f25e34bcc304898afe43c82104d950: expected string value for option instance_type (00:00:01)
Failed compiling packages > cli/1796b9b4dce96175bcefa60e1afbe1d4b7cd1f6b: expected string value for option instance_type (00:00:01)
Failed compiling packages > buildpack_python/076c11da464aa50911e1744b39e95522a00e1f48: expected string value for option instance_type (00:00:01)
Failed compiling packages > buildpack_php/60fb983e430ab8de7fb647cba59954f8d0c4b9c9: expected string value for option instance_type (00:00:01)
Error 100: expected string value for option instance_type
Task 8 error
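Since the failures all happen while compiling packages, this looks like the CPI rejecting the `cloud_properties` used for compilation VMs rather than anything in the packages themselves: on AWS-style CPIs, `instance_type` must be present as a YAML string. A hedged sketch of the relevant manifest fragment (values are examples, not from this deployment):

```yaml
compilation:
  workers: 4
  network: default
  cloud_properties:
    instance_type: m3.medium   # must be a string; a missing, null, or
                               # non-string value can trigger this error
```

The same `instance_type` requirement applies to each entry under `resource_pools:`.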
I am trying bosh upload release ./releases/cf-205.yml right now, which I just discovered.
Desired workflow:
bosh upload release/diego-0.1319.yml
git checkout v0.1319
./generate_deployment_manifest
bosh deploy
Hi, we all know Garden supports running docker images; can a running Garden instance be committed to a docker image?
Thanks.
How can I bind an app using the Diego backend to CF services? cf env <appname> doesn't return VCAP_SERVICES.
It would be great to have an infrastructure-vsphere.yml!
Hi,
I'm getting a buildpack compile error when I try to push the application to my local Diego install.
-----> Checking Godeps/Godeps.json file.
-----> Resource https://storage.googleapis.com/golang/go1.4.linux-amd64.tar.gz is not provided by this buildpack. Please upgrade your buildpack to receive the latest resources.
-----> Installing go1.4... Staging failed: Buildpack compilation step failed
FAILED
BuildpackCompileFailed
List of buildpacks from my local install:
MacBook-Pro:simple-go-web-app saluru$ cf buildpacks
Getting buildpacks...
buildpack position enabled locked filename
staticfile_buildpack 1 true false staticfile_buildpack-cached-v1.2.1.zip
java_buildpack 2 true false java-buildpack-v3.1.1.zip
ruby_buildpack 3 true false ruby_buildpack-cached-v1.6.2.zip
nodejs_buildpack 4 true false nodejs_buildpack-cached-v1.5.0.zip
go_buildpack 5 true false go_buildpack-cached-v1.5.0.zip
python_buildpack 6 true false python_buildpack-cached-v1.5.0.zip
php_buildpack 7 true false php_buildpack-cached-v4.0.0.zip
binary_buildpack 8 true false binary_buildpack-cached-v1.0.1.zip
I've tried to convert the BOSH Lite instructions, but ended up lost because some stubs assume BOSH Lite. Some instructions for deploying a Diego CF to BOSH on AWS would make me happy.
Hi all,
I get a crash when I try to fetch a docker image from my private docker registry; it is dereferencing a nil pointer.
Here is the stack:
2015/04/20 21:04:45 http: panic serving 127.0.0.1:59200: runtime error: invalid memory address or nil pointer dereference
goroutine 947 [running]:
net/http.func·011()
/usr/local/go/src/net/http/server.go:1130 +0xbb
github.com/docker/docker/registry.NewSession(0x0, 0x0, 0xc2081b2aa0, 0x1, 0xc208152b10, 0x0, 0x0)
/var/vcap/packages/garden-linux/src/github.com/cloudfoundry-incubator/garden-linux/Godeps/_workspace/src/github.com/docker/docker/registry/session.go:58 +0x75a
github.com/cloudfoundry-incubator/garden-linux/old/repository_fetcher.registryProvider.ProvideRegistry(0xa49eb0, 0x1b, 0xc208103720, 0x1, 0x1, 0xc2080fc349, 0xf, 0x0, 0x0, 0x0, ...)
/var/vcap/packages/garden-linux/src/github.com/cloudfoundry-incubator/garden-linux/old/repository_fetcher/repository_provider.go:49 +0x2ff
github.com/cloudfoundry-incubator/garden-linux/old/repository_fetcher.(*registryProvider).ProvideRegistry(0xc2080a6cf0, 0xc2080fc349, 0xf, 0x0, 0x0, 0x0, 0x0)
<autogenerated>:14 +0xe1
github.com/cloudfoundry-incubator/garden-linux/old/repository_fetcher.(*DockerRepositoryFetcher).Fetch(0xc2080a6d20, 0x7fa679df5a30, 0xc2081067e0, 0xc208096620, 0xc2080fc375, 0x6, 0x0, 0x0, 0x9ca0c0, 0x0, ...)
/var/vcap/packages/garden-linux/src/github.com/cloudfoundry-incubator/garden-linux/old/repository_fetcher/repository_fetcher.go:101 +0x324
github.com/cloudfoundry-incubator/garden-linux/old/repository_fetcher.Retryable.Fetch(0x7fa679df7350, 0xc2080a6d20, 0x7fa679df5a30, 0xc2081067e0, 0xc208096620, 0xc2080fc375, 0x6, 0x0, 0x0, 0x0, ...)
/var/vcap/packages/garden-linux/src/github.com/cloudfoundry-incubator/garden-linux/old/repository_fetcher/retryable.go:21 +0x15b
github.com/cloudfoundry-incubator/garden-linux/old/repository_fetcher.(*Retryable).Fetch(0xc208103730, 0x7fa679df5a30, 0xc2081067e0, 0xc208096620, 0xc2080fc375, 0x6, 0x0, 0x0, 0x7fa679de3000, 0x0, ...)
<autogenerated>:15 +0x141
github.com/cloudfoundry-incubator/garden-linux/old/rootfs_provider.(*dockerRootFSProvider).ProvideRootFS(0xc20802f9a0, 0x7fa679df5a30, 0xc2081067e0, 0xc20814de10, 0xb, 0xc208096620, 0x0, 0x0, 0xc20816f1c8, 0x0, ...)
/var/vcap/packages/garden-linux/src/github.com/cloudfoundry-incubator/garden-linux/old/rootfs_provider/docker_rootfs_provider.go:56 +0x13e
github.com/cloudfoundry-incubator/garden-linux/container_pool.(*LinuxContainerPool).acquireSystemResources(0xc2081681c0, 0xc20814de10, 0xb, 0xc2080964d0, 0x6e, 0xc2081364e0, 0x2d, 0xc2080fc340, 0x3b, 0xc208106780, ...)
/var/vcap/packages/garden-linux/src/github.com/cloudfoundry-incubator/garden-linux/container_pool/container_pool.go:569 +0x6c9
github.com/cloudfoundry-incubator/garden-linux/container_pool.(*LinuxContainerPool).Create(0xc2081681c0, 0xc2080964d0, 0x6e, 0x34630b8a000, 0xc2080fc340, 0x3b, 0x0, 0x0, 0x0, 0x0, ...)
/var/vcap/packages/garden-linux/src/github.com/cloudfoundry-incubator/garden-linux/container_pool/container_pool.go:254 +0x502
github.com/cloudfoundry-incubator/garden-linux/linux_backend.(*LinuxBackend).Create(0xc20802fa40, 0xc2080964d0, 0x6e, 0x34630b8a000, 0xc2080fc340, 0x3b, 0x0, 0x0, 0x0, 0x0, ...)
/var/vcap/packages/garden-linux/src/github.com/cloudfoundry-incubator/garden-linux/linux_backend/linux_backend.go:147 +0x1b9
github.com/cloudfoundry-incubator/garden/server.(*GardenServer).handleCreate(0xc208064700, 0x7fa679df8cb8, 0xc20803f220, 0xc20817c4e0)
/var/vcap/packages/garden-linux/src/github.com/cloudfoundry-incubator/garden-linux/Godeps/_workspace/src/github.com/cloudfoundry-incubator/garden/server/request_handling.go:61 +0x309
github.com/cloudfoundry-incubator/garden/server.*GardenServer.(github.com/cloudfoundry-incubator/garden/server.handleCreate)·fm(0x7fa679df8cb8, 0xc20803f220, 0xc20817c4e0)
/var/vcap/packages/garden-linux/src/github.com/cloudfoundry-incubator/garden-linux/Godeps/_workspace/src/github.com/cloudfoundry-incubator/garden/server/server.go:74 +0x45
net/http.HandlerFunc.ServeHTTP(0xc2081962f0, 0x7fa679df8cb8, 0xc20803f220, 0xc20817c4e0)
/usr/local/go/src/net/http/server.go:1265 +0x41
github.com/bmizerany/pat.(*PatternServeMux).ServeHTTP(0xc2080fa128, 0x7fa679df8cb8, 0xc20803f220, 0xc20817c4e0)
/var/vcap/packages/garden-linux/src/github.com/cloudfoundry-incubator/garden-linux/Godeps/_workspace/src/github.com/bmizerany/pat/mux.go:109 +0x21c
github.com/cloudfoundry-incubator/garden/server.func·002(0x7fa679df8cb8, 0xc20803f220, 0xc20817c4e0)
/var/vcap/packages/garden-linux/src/github.com/cloudfoundry-incubator/garden-linux/Godeps/_workspace/src/github.com/cloudfoundry-incubator/garden/server/server.go:113 +0x57
net/http.HandlerFunc.ServeHTTP(0xc208196550, 0x7fa679df8cb8, 0xc20803f220, 0xc20817c4e0)
/usr/local/go/src/net/http/server.go:1265 +0x41
net/http.serverHandler.ServeHTTP(0xc208064710, 0x7fa679df8cb8, 0xc20803f220, 0xc20817c4e0)
/usr/local/go/src/net/http/server.go:1703 +0x19a
net/http.(*conn).serve(0xc20812c1e0)
/usr/local/go/src/net/http/server.go:1204 +0xb57
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:1751 +0x35e
I have deployed cf-release. Which version of Diego do I choose?
Two use cases:
Stable
Given a final release of cf-release, what final release of diego-release do I deploy, and how do I generate a compatible manifest? Note: instructions should not involve looking up shas.
Desired workflow:
bosh upload release releases/diego-0.1319.yml
git checkout v0.1319
./bosh-lite/make-manifest
bosh deploy
This may require tagging. [#55]
Edge
I'm deploying release candidates. Given a sha of cf-release, what sha of diego-release should I checkout to create a release and manifest from?
Desired workflow:
git checkout 4137e31
bosh create release
bosh upload release
./bosh-lite/make-manifest
bosh deploy
$ bosh download public stemcell bosh-stemcell-3-warden-boshlite-ubuntu-trusty-go_agent
'bosh-stemcell-3-warden-boshlite-ubuntu-trusty-go_agent' not found.
Which current stemcell should be used here?
Hi there - I'm just trying to work through the install docs in the readme, but I'm hitting a problem with
jim@ip-10-11-0-123:~/workspace/diego-release$ go install github.com/coreos/etcd
src/github.com/coreos/etcd/pkg/transport/listener.go:56:4: error: unknown field ‘KeepAlive’ in ‘net.Dialer’
KeepAlive: 30 * time.Second,
^
src/github.com/coreos/etcd/pkg/transport/listener.go:58:3: error: unknown field ‘TLSHandshakeTimeout’ in ‘http.Transport’
TLSHandshakeTimeout: 10 * time.Second,
^
src/github.com/coreos/etcd/pkg/transport/timeout_transport.go:34:4: error: unknown field ‘KeepAlive’ in ‘net.Dialer’
KeepAlive: 30 * time.Second,
^
I suspect this is caused by Ubuntu 14.04 shipping gccgo-go 4.9, which as I understand it only supports Go 1.2.
Is there any workaround for this one?
Cheers!!
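Your suspicion matches the symptoms: `net.Dialer.KeepAlive` and `http.Transport.TLSHandshakeTimeout` were both added in Go 1.3, so a gccgo based on Go 1.2 cannot compile etcd's transport package. A sketch of a detection check plus the usual workaround (the go1.4 URL is from that era; adjust to whatever the README currently pins):

```shell
# Detect a gccgo-flavored toolchain from `go version` output.
needs_gc_toolchain() {
  case "$1" in
    *gccgo*|*xgcc*) return 0 ;;   # gccgo: implements at most Go 1.2 here
    *)              return 1 ;;
  esac
}

if needs_gc_toolchain "$(go version 2>/dev/null || echo gccgo)"; then
  echo 'replace gccgo with the official toolchain, e.g.:'
  echo '  wget https://storage.googleapis.com/golang/go1.4.linux-amd64.tar.gz'
  echo '  sudo tar -C /usr/local -xzf go1.4.linux-amd64.tar.gz'
  echo '  export PATH=/usr/local/go/bin:$PATH'
fi
```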