
docs-deploying-cf's Introduction

Deploying Cloud Foundry

This is a guide for operators on deploying Cloud Foundry on various IaaS using BOSH.

This repository is one of several content repositories that go into a complete documentation set.

The contents here are structured as a topic repository intended to be compiled into a larger document with Bookbinder. Follow the documentation in the docs-bookbinder repo to build the book locally before you open a pull request. For example, run bookbinder bind local to check for any broken links. If you have made subnav changes to the docs, you can also run bookbinder bind local --require-valid-subnav-links to make sure there are no duplicate subnav links.

See the docs-book-cloudfoundry repo for the complete list of open source documentation repositories, as well as information about the publishing process.

docs-deploying-cf's People

Contributors

abbyachau, amannamedsmith, amitkgupta, anexper, animatedmax, apeek4, araher, benjsmi, bentarnoff, cf-pub-tools, cshollingsworth, dsabeti, elenasharma, genevieve, gossion, jbheron, killian, kinjelom, kodykantor, ljarzynski, mjgutermuth, mlimonczenko, pspinrad, seviet, socalnick, tcdowney, teamhedgehog, vikafed, voelzmo, yupengzte


docs-deploying-cf's Issues

Missing step in deployment

As a nice-to-have, can we include the following?

After cloning the cf-release repository, you need to upload this release version to BOSH.

dea_next resources too small in Vsphere install doc

This doc: http://docs.cloudfoundry.org/deploying/common/vsphere-vcloud-cf-stub.html has a stub that the user is supposed to edit. Specifically this part:

dea_next:
  disk_mb: 2048
  memory_mb: 1024

That number is too small. The default staging numbers provided by cf-release are:
staging_disk_limit_mb: 6144
staging_memory_limit_mb: 1024

This check:
https://github.com/cloudfoundry/cloud_controller_ng/blob/1b748b49d93157cd5f267f9a57c727b3fffaa91f/lib/cloud_controller/dea/pool.rb#L97

will fail because staging_disk_limit_mb is greater than disk_mb, and any pushed app will not start.
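The failing comparison can be sketched as follows (a simplified reading of the linked pool.rb check, not the exact Cloud Controller code):

```python
# A simplified reading (assumption, not the exact code) of the linked pool.rb
# check: a DEA is only considered for staging if its advertised disk is at
# least the configured staging disk limit.
def dea_eligible_for_staging(dea_disk_mb, staging_disk_limit_mb):
    """Mirror of the capacity comparison that fails with the stub defaults."""
    return dea_disk_mb >= staging_disk_limit_mb

# Stub value (disk_mb: 2048) vs. cf-release default (staging_disk_limit_mb: 6144):
print(dea_eligible_for_staging(2048, 6144))  # False: staging can never succeed
print(dea_eligible_for_staging(8192, 6144))  # True once disk_mb is raised
```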

I Keep Exceeding my Quota. How Many VMs Do I Need to Deploy CF?

I am following the instructions to deploy CF on GCP, but the deployment fails because I am exceeding my quota (24 VMs). I doubled my CPU quota to 48 and it still fails at the new limit. It would be nice to know the number of VMs needed so that I can request the right quota up front. The error is shown below:

Error: CPI error 'Bosh::Clouds::VMCreationFailed' with message 'VM failed to create: 
googleapi: Error 403: Quota 'CPUS' exceeded. 
Limit: 48.0 in region us-east1., quotaExceeded' in 'create_vm' CPI method
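One rough way to anticipate this class of failure is to sum the vCPUs the deployment will request (including compilation VMs) before deploying. The instance counts and vCPU sizes below are illustrative placeholders, not the actual manifest values:

```python
# Back-of-the-envelope quota check. Each tuple is a hypothetical VM group:
# (instance count, vCPUs per instance). Replace with your manifest's values.
def cpus_needed(instance_plan):
    """Sum total vCPUs over (count, vcpus) pairs."""
    return sum(count * vcpus for count, vcpus in instance_plan)

plan = [(10, 2), (4, 4), (20, 1)]  # placeholder VM groups
needed = cpus_needed(plan)
print(needed)           # total vCPUs the plan would request
print(needed <= 48)     # does it fit under a 48-CPU quota?
```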

Release 138 is now 6 months old

It's time to bring this documentation up to date for release v161. In particular, the example manifest displayed in this section is inadequate for getting CF up and running.

Description why I should blacklist log drains is hard to understand

According to the description in log_drain_blacklists.html.md.erb I'm not able to fully understand the use case of the configuration.

The doppler component allows application log drains. An application developer deploying their app to Cloud Foundry can bind a drain to a log analysis service.

  • Are you referring to the cf bind-service -l ... feature? I assume so, but I'm not sure.

The performance of your Cloud Foundry installation can be affected if the application developer attempts to bind a drain to an internal Cloud Foundry component.

  • Why should a developer try to bind a logdrain to a Cloud Foundry component and which Cloud Foundry component would make sense to bind a drain to?
  • How/Why does it affect the performance?

Update docs on vSphere hardware requirements

Currently, the docs say a minimal install requires 48GB and a proof-of-concept install requires 128GB. Those numbers are off by 1-2 orders of magnitude. Also, "minimal" and "proof-of-concept" are hard to distinguish. Suggestions:

  • in each use case, differentiate between the resources needed for the platform and the resources allocated for apps running on it.
  • give some indication of some of the potential tradeoffs (e.g. BOSH needs x cores minimum, but if you deploy with y cores, you can provision the CF cluster much faster)
  • cover more use cases: absolute minimum (1-5GB, this strategy not included in OSS docs), minimal HA deployment, sizable production deployment for 100s-1000s of users running 1000s-100000s of app containers.

Instructions for deploying with bosh-lite (warden) and script syntax seem not to match

The current bosh-lite deployment instructions for a local machine (which I assume to be for a warden-based deployment) state that you need to generate a stub containing the director UUID, but cf-release/scripts/generate-bosh-lite-dev-manifest contains this code:

BOSH_STATUS=$(bosh status)
EXPECTED_DIRECTOR_NAME="Bosh Lite Director"

if [[ "$(echo "$BOSH_STATUS" | grep Name)" != *"$EXPECTED_DIRECTOR_NAME"* ]]; then
  echo "Can only target $EXPECTED_DIRECTOR_NAME. Please use 'bosh target' before running this script."
  exit 1
fi

mkdir -p "${CF_RELEASE_DIR}/bosh-lite/deployments"

DIRECTOR_UUID=$(echo "$BOSH_STATUS" | grep UUID | awk '{print $2}')

When I pass a manifest stub into the script as directed by the docs, I get an error. The old method I was using before of calling the script with no args still seems to work correctly.

jwt verification key incorrect information

The documentation for editing the jwt section of cf-stub.yml does not properly explain what to do. We could not use the generated pub file.
We used the following to cat the output file:
openssl rsa -in jwt-key.pem -pubout > key.pub

ssh-keygen -f jwt-key.pem does not create a public key that includes the BEGIN and END lines.

jwt:
  verification_key: JWT_VERIFICATION_KEY
  signing_key: JWT_SIGNING_KEY

Instead of just stating where each key is used, the BEGIN and END lines should be shown so that admins know what to look for after generating these keys. For example:

verification_key: |
  -----BEGIN PUBLIC KEY-----
  PUBLIC_KEY
  -----END PUBLIC KEY-----
signing_key: |
  -----BEGIN RSA PRIVATE KEY-----
  RSA_PRIVATE_KEY
  -----END RSA PRIVATE KEY-----

Without proper information, my team spent close to two months trying to figure out the resolution for:
API endpoint: https://api.cftest.test.local (API version: 2.58.0)
User: admin
No org or space targeted, use 'cf target -o ORG -s SPACE'
FAILED
Error finding available orgs
Server error, status code: 500, error code: 0, message:
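For reference, the key-generation flow described above can be reproduced as follows (filenames are the ones used in this issue; note that OpenSSL 3.x writes the private key with a PKCS#8 "BEGIN PRIVATE KEY" header rather than "BEGIN RSA PRIVATE KEY"):

```shell
# Generate an RSA private key for signing_key, then extract the PEM public
# key (with its BEGIN/END lines) for verification_key.
openssl genrsa -out jwt-key.pem 2048
openssl rsa -in jwt-key.pem -pubout > key.pub
# The public key file always carries the expected markers:
grep "BEGIN PUBLIC KEY" key.pub
```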

Instructions to install CF on OpenStack need to be updated for the latest release

Instructions to install CF on OpenStack need to be updated for the latest release, 183.

I updated the demo.yml and it works for the minimum requirements specified in the current tutorial link.

<%
director_uuid = 'd806b033-0e90-4990-9404-5c554f334efd'
static_ip = '128.138.202.110'
root_domain = "mycloud.com"
deployment_name = 'cf'
cf_release = '183'
protocol = 'http'
common_password = 'c1oudc0wc1oudc0w'

%>

name: <%= deployment_name %>
director_uuid: <%= director_uuid %>

releases:
- name: cf
  version: <%= cf_release %>

compilation:
  workers: 2
  network: shared
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: bosh.medium

update:
  canaries: 0
  canary_watch_time: 30000-600000
  update_watch_time: 30000-600000
  max_in_flight: 32
  serial: false

networks:
- name: shared
  type: dynamic
  cloud_properties:
    net_id: 0a3ac1cf-5019-4028-9d56-433b470fda46
    security_groups:
    - default
    - bosh
    - cf-private

resource_pools:
- name: common
  network: shared
  stemcell:
    name: bosh-openstack-kvm-ubuntu-trusty-go_agent
    version: latest
  cloud_properties:
    instance_type: bosh.small
- name: large
  network: shared
  stemcell:
    name: bosh-openstack-kvm-ubuntu-trusty-go_agent
    version: latest
  cloud_properties:
    instance_type: bosh.medium

jobs:
- name: nats
  templates:
  - name: nats
  - name: nats_stream_forwarder
  instances: 1
  resource_pool: common
  networks:
  - name: shared
    default: [dns, gateway]
- name: syslog_aggregator
  templates:
  - name: syslog_aggregator
  instances: 1
  resource_pool: common
  persistent_disk: 65536
  networks:
  - name: shared
    default: [dns, gateway]
- name: nfs_server
  templates:
  - name: debian_nfs_server
  instances: 1
  resource_pool: common
  persistent_disk: 65535
  networks:
  - name: shared
    default: [dns, gateway]
- name: postgres
  templates:
  - name: postgres
  instances: 1
  resource_pool: common
  persistent_disk: 65536
  networks:
  - name: shared
    default: [dns, gateway]
  properties:
    db: databases
- name: uaa
  templates:
  - name: uaa
  instances: 1
  resource_pool: common
  networks:
  - name: shared
    default: [dns, gateway]
- name: trafficcontroller
  templates:
  - name: loggregator_trafficcontroller
  instances: 1
  resource_pool: common
  networks:
  - name: shared
    default: [dns, gateway]
- name: cloud_controller
  templates:
  - name: cloud_controller_ng
  instances: 1
  resource_pool: common
  networks:
  - name: shared
    default: [dns, gateway]
  properties:
    db: ccdb
- name: health_manager
  templates:
  - name: hm9000
  instances: 1
  resource_pool: common
  networks:
  - name: shared
    default: [dns, gateway]
- name: dea
  templates:
  - name: dea_logging_agent
  - name: dea_next
  instances: 2
  resource_pool: large
  networks:
  - name: shared
    default: [dns, gateway]
- name: router
  templates:
  - name: gorouter
  instances: 1
  resource_pool: common
  networks:
  - name: shared
    default: [dns, gateway]
  properties:
    metron_agent:
      zone: z1

properties:
  domain: <%= root_domain %>
  system_domain: <%= root_domain %>
  system_domain_organization: 'admin'
  app_domains:
  - <%= root_domain %>

  haproxy: {}

  networks:
    apps: shared

  nats:
    user: nats
    password: <%= common_password %>
    address: 0.nats.shared.<%= deployment_name %>.microbosh
    port: 4222
    machines:
    - 0.nats.shared.<%= deployment_name %>.microbosh

  syslog_aggregator:
    address: 0.syslog-aggregator.shared.<%= deployment_name %>.microbosh
    port: 54321

  nfs_server:
    address: 0.nfs-server.shared.<%= deployment_name %>.microbosh
    network: "*.<%= deployment_name %>.microbosh"
    allow_from_entries:
    - 10.0.0.0/24

  debian_nfs_server:
    no_root_squash: true

  metron_agent:
    zone: z1
  metron_endpoint:
    zone: z1
    shared_secret: <%= common_password %>

  loggregator_endpoint:
    shared_secret: <%= common_password %>
    host: 0.trafficcontroller.shared.<%= deployment_name %>.microbosh

  loggregator:
    zone: z1
    servers:
      zone:
      - 0.loggregator.shared.<%= deployment_name %>.microbosh

  traffic_controller:
    zone: 'zone'

  logger_endpoint:
    use_ssl: <%= protocol == 'https' %>
    port: 80

  ssl:
    skip_cert_verify: true

  router:
    endpoint_timeout: 60
    status:
      port: 8080
      user: gorouter
      password: <%= common_password %>
    servers:
      z1:
      - 0.router.shared.<%= deployment_name %>.microbosh
      z2: []

  etcd:
    machines:
    - 0.etcd.shared.<%= deployment_name %>.microbosh

  dea: &dea
    disk_mb: 102400
    disk_overcommit_factor: 2
    memory_mb: 15000
    memory_overcommit_factor: 3
    directory_server_protocol: <%= protocol %>
    mtu: 1460
    deny_networks:
    - 169.254.0.0/16 # Google Metadata endpoint

  dea_next: *dea

  disk_quota_enabled: false

  dea_logging_agent:
    status:
      user: admin
      password: <%= common_password %>

  databases: &databases
    db_scheme: postgres
    address: 0.postgres.shared.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: ccadmin
      password: <%= common_password %>
    - tag: admin
      name: uaaadmin
      password: <%= common_password %>
    databases:
    - tag: cc
      name: ccdb
      citext: true
    - tag: uaa
      name: uaadb
      citext: true

  ccdb: &ccdb
    db_scheme: postgres
    address: 0.postgres.shared.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: ccadmin
      password: <%= common_password %>
    databases:
    - tag: cc
      name: ccdb
      citext: true

  ccdb_ng: *ccdb

  uaadb:
    db_scheme: postgresql
    address: 0.postgres.shared.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: uaaadmin
      password: <%= common_password %>
    databases:
    - tag: uaa
      name: uaadb
      citext: true

  cc: &cc
    security_group_definitions: []
    default_running_security_groups: []
    default_staging_security_groups: []
    srv_api_uri: <%= protocol %>://api.<%= root_domain %>
    jobs:
      local:
        number_of_workers: 2
      generic:
        number_of_workers: 2
      global:
        timeout_in_seconds: 14400
      app_bits_packer:
        timeout_in_seconds: null
      app_events_cleanup:
        timeout_in_seconds: null
      app_usage_events_cleanup:
        timeout_in_seconds: null
      blobstore_delete:
        timeout_in_seconds: null
      blobstore_upload:
        timeout_in_seconds: null
      droplet_deletion:
        timeout_in_seconds: null
      droplet_upload:
        timeout_in_seconds: null
      model_deletion:
        timeout_in_seconds: null
    bulk_api_password: <%= common_password %>
    staging_upload_user: upload
    staging_upload_password: <%= common_password %>
    quota_definitions:
      default:
        memory_limit: 10240
        total_services: 100
        non_basic_services_allowed: true
        total_routes: 1000
        trial_db_allowed: true
    resource_pool:
      resource_directory_key: cloudfoundry-resources
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    packages:
      app_package_directory_key: cloudfoundry-packages
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    droplets:
      droplet_directory_key: cloudfoundry-droplets
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    buildpacks:
      buildpack_directory_key: cloudfoundry-buildpacks
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    install_buildpacks:
    - name: java_buildpack
      package: buildpack_java
    - name: ruby_buildpack
      package: buildpack_ruby
    - name: nodejs_buildpack
      package: buildpack_nodejs
    - name: go_buildpack
      package: buildpack_go
    db_encryption_key: <%= common_password %>
    hm9000_noop: false
    diego: false
    newrelic:
      license_key: null
      environment_name: <%= deployment_name %>

  ccng: *cc

  login:
    enabled: false

  uaa:
    url: <%= protocol %>://uaa.<%= root_domain %>
    no_ssl: <%= protocol == 'http' %>
    cc:
      client_secret: <%= common_password %>
    admin:
      client_secret: <%= common_password %>
    batch:
      username: batch
      password: <%= common_password %>
    clients:
      cf:
        override: true
        authorized-grant-types: password,implicit,refresh_token
        authorities: uaa.none
        scope: cloud_controller.read,cloud_controller.write,openid,password.write,cloud_controller.admin,scim.read,scim.write
        access-token-validity: 7200
        refresh-token-validity: 1209600
      admin:
        secret: <%= common_password %>
        authorized-grant-types: client_credentials
        authorities: clients.read,clients.write,clients.secret,password.write,scim.read,uaa.admin
    scim:
      users:
      - admin|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin,uaa.admin,password.write
      - services|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin
    jwt:
      signing_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIICXAIBAAKBgQDHFr+KICms+tuT1OXJwhCUmR2dKVy7psa8xzElSyzqx7oJyfJ1
        JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMXqHxf+ZH9BL1gk9Y6kCnbM5R6
        0gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBugspULZVNRxq7veq/fzwIDAQAB
        AoGBAJ8dRTQFhIllbHx4GLbpTQsWXJ6w4hZvskJKCLM/o8R4n+0W45pQ1xEiYKdA
        Z/DRcnjltylRImBD8XuLL8iYOQSZXNMb1h3g5/UGbUXLmCgQLOUUlnYt34QOQm+0
        KvUqfMSFBbKMsYBAoQmNdTHBaz3dZa8ON9hh/f5TT8u0OWNRAkEA5opzsIXv+52J
        duc1VGyX3SwlxiE2dStW8wZqGiuLH142n6MKnkLU4ctNLiclw6BZePXFZYIK+AkE
        xQ+k16je5QJBAN0TIKMPWIbbHVr5rkdUqOyezlFFWYOwnMmw/BKa1d3zp54VP/P8
        +5aQ2d4sMoKEOfdWH7UqMe3FszfYFvSu5KMCQFMYeFaaEEP7Jn8rGzfQ5HQd44ek
        lQJqmq6CE2BXbY/i34FuvPcKU70HEEygY6Y9d8J3o6zQ0K9SYNu+pcXt4lkCQA3h
        jJQQe5uEGJTExqed7jllQ0khFJzLMx0K6tj0NeeIzAaGCQz13oo2sCdeGRHO4aDh
        HH6Qlq/6UOV5wP8+GAcCQFgRCcB+hrje8hfEEefHcFpyKH+5g1Eu1k0mLrxK2zd+
        4SlotYRHgPCEubokb2S1zfZDWIXW3HmggnGgM949TlY=
        -----END RSA PRIVATE KEY-----
      verification_key: |
        -----BEGIN PUBLIC KEY-----
        MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d
        KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX
        qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug
        spULZVNRxq7veq/fzwIDAQAB
        -----END PUBLIC KEY-----

Unclear instructions for installing spiff

The guide does not specify the actions required to install spiff.
It provides a link to the spiff README (here), which does not provide clear instructions either.

It is not clear which spiff architecture bosh-lite expects, between those available as spiff releases (darwin or linux).

It is also not clear how to install the downloaded binary, although this would better fit in the spiff install instructions.
chmod +x is probably a must here, but I don't know where to place the binary so that the next instructions find it.
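For what it's worth, "installing the downloaded binary" amounts to something like the following sketch. The install path is an assumption, and a placeholder file stands in for the real spiff release:

```shell
# Illustrative only: a placeholder file takes the place of the downloaded
# spiff binary so the mechanics are visible.
mkdir -p "$HOME/bin"
touch "$HOME/bin/spiff"            # stand-in for the extracted spiff binary
chmod +x "$HOME/bin/spiff"         # required before the shell will execute it
export PATH="$HOME/bin:$PATH"      # so subsequent instructions can find `spiff`
command -v spiff                   # prints the resolved path
```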

How to get the cert files given in the command

We are following the steps given in step 4: "Create Infrastructure, Bastion, BOSH Director, and Load Balancers". The plan command refers to "YOUR-CERT.crt" and "YOUR-KEY.key" but doesn't give steps on how to get those files. It would be great if the steps for obtaining those files were elaborated.
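If a self-signed certificate is acceptable for your environment (an assumption on my part; production setups should use a CA-signed certificate), one way to produce the two files is:

```shell
# Generate a private key and a self-signed certificate in one step.
# The wildcard CN is a placeholder; use your system domain instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout YOUR-KEY.key -out YOUR-CERT.crt \
  -subj "/CN=*.example.com"
grep "BEGIN CERTIFICATE" YOUR-CERT.crt   # confirm the cert was written
```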

Why are such large disk sizes required? (instance flavors for CF on OpenStack)

In the doc docs-deploying-cf/openstack/required-flavors.html.md.erb, it is written:

[...] flavors must have the following listed specifications, at minimum:

| Flavor Name | VCPUs | RAM (GB) | Root Disk (GB) | Ephemeral Disk (GB) |
| --- | --- | --- | --- | --- |
| m1.small | 1 | 2 | 10 | 20 |
| m1.medium | 2 | 4 | 10 | 40 |
| m1.large | 4 | 8 | 10 | 80 |
| m1.xlarge | 8 | 16 | 10 | 160 |

I found that when I reduce flavors disk sizes to these values:

| Flavor Name | VCPUs | RAM (GB) | Root Disk (GB) | Ephemeral Disk (GB) |
| --- | --- | --- | --- | --- |
| m1.small | 1 | 2 | 4 | 4 |
| m1.medium | 2 | 4 | 4 | 8 |
| m1.large | 4 | 8 | 4 | 16 |
| m1.xlarge | 8 | 16 | 4 | 32 |

then the instances' disk usage in my deployment looks like this (bosh instances --ps):

| Instance | RAM Usage | Swap Usage | System Disk Usage | Ephemeral Disk Usage | Persistent Disk Usage |
| --- | --- | --- | --- | --- | --- |
| api_z1 | 10% (838MB) | 0% | 29% (23i%) | 6% (5i%) | - |
| blobstore_z1 | 3% (134MB) | 0% | 29% (23i%) | 3% (1i%) | 0% (0i%) |
| clock_z1 | 7% (290MB) | 0% | 29% (23i%) | 12% (10i%) | - |
| consul_z1 | 5% (95MB) | 0% | 29% (23i%) | 5% (3i%) | 0% (0i%) |
| doppler_z1 | 4% (142MB) | 0% | 29% (23i%) | 4% (1i%) | - |
| etcd_z1 | 2% (168MB) | 0% | 29% (23i%) | 2% (1i%) | 0% (0i%) |
| ha_proxy_z1 | 3% (117MB) | 0% | 29% (23i%) | 3% (1i%) | - |
| loggregator[...]_z1 | 5% (100MB) | 0% | 29% (23i%) | 6% (3i%) | - |
| nats_z1 | 3% (109MB) | 0% | 29% (23i%) | 3% (1i%) | - |
| postgres_z1 | 3% (140MB) | 0% | 29% (23i%) | 4% (3i%) | 2% (1i%) |
| router_z1 | 3% (114MB) | 0% | 29% (23i%) | 3% (2i%) | - |
| stats_z1 | 6% (125MB) | 0% | 29% (23i%) | 13% (6i%) | - |
| uaa_z1 | 18% (711MB) | 0% | 29% (23i%) | 13% (2i%) | - |

Why are such large disk sizes required by default?
Doesn't it slow down IaaS operations?

ha_proxy information is not very clear

Relevant doc: https://docs.cloudfoundry.org/deploying/openstack/cf-stub.html

Replace RSA_PRIVATE_KEY and SSL_CERTIFICATE_SIGNED_BY_PRIVATE_KEY with the PEM
encoded private key and certificate associated with the system domain and apps domains you’ve
configured to terminate at the floating IP associated with the ha_proxy job.

This is not very clear: "Replace xxx with the xxx associated with the xxx ... at the floating IP associated with xxx." Is there a way to make this clearer? Perhaps just "Replace xxx with xxx"?

Build the Cloud Foundry Release Failed

When I get to step 2 (http://docs.cloudfoundry.org/deploying/common/deploy.html), which is 'Build the Cloud Foundry Release', and run 'bosh create release', I get the result below:

Building collector...
  No artifact found for collector
  Generating...
  Pre-packaging...
  > + set -e $'-x\r'
: invalid optiong: line 1: set: -
  > set: usage: set [-abefhkmnptuvxBCHP] [-o option-name] [--] [arg ...]
  > + $'\r'
  > pre_packaging: line 2: $'\r': command not found
  > + cd $'/tmp/d20160927-2162-c8vdwp/d20160927-2162-cgzkck/collector\r'
: No such file or directorycd: /tmp/d20160927-2162-c8vdwp/d20160927-2162-cgzkck/collector
  > + BUNDLE_WITHOUT=development:test
  > + bundle package --all --no-install --path $'./vendor/cache\r'
  > Could not locate Gemfile
  > + rm -rf $'./vendor/cache/ruby\r'
  > + rm -rf $'./vendor/cache/vendor\r'
  > + bundle config --delete $'NO_INSTALL\r'
  > + $'\r'
  > pre_packaging: line 8: $'\r': command not found
'collector' pre-packaging failed

My environment is Windows 7 x64, deploying on BOSH-Lite. What should I do?

Does this repo really end up on docs.cloudfoundry.org?

Looking at https://github.com/cloudfoundry/docs-deploying-cf/blob/master/openstack/index.html.md

The markdown says:

---
title: Deploying Cloud Foundry on OpenStack

---

Follow these steps to install Cloud Foundry on OpenStack using BOSH.

1. [Install the BOSH CLI](http://docs.cloudfoundry.org/bosh/deploy-microbosh.html#cli)

And yet, looking at http://docs.cloudfoundry.org/deploying/openstack/ , this link points to: http://docs.cloudfoundry.org/bosh/setup/

How does one account for the difference? I read somewhere that the docs are built every night, and this markdown has been in the repo for quite some time now.

Don't assume 192.168.50.4 IP for BOSH-Lite

The docs for deploying CF to BOSH-Lite assume the BOSH-Lite director will be at 192.168.50.4. This is the default IP when deploying BOSH-Lite with the Virtualbox provider, but (a) the user can override this, and (b) when deploying BOSH-Lite with the AWS provider, this IP will never be correct. The docs should mention that these other cases are possible, and what the user should do differently in such cases.

Copy the following policy text to your clipboard:

Can you please let me know how to do this?

"Copy the following policy text to your clipboard:" appears under Step 3: Create an IAM User.

I am facing the issue below while executing:

$ aws iam put-user-policy --user-name "bbl-user" \
    --policy-name "bbl-policy" \
    --policy-document "$(pbpaste)"

Issue:

bash: command substitution: line 270: syntax error near unexpected token `)'
bash: command substitution: line 270: `pbpaste)"'
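pbpaste is a macOS-only command, which is why the substitution fails in bash on Linux. A portable workaround is to save the policy to a file and reference it with file:// (the JSON body below is a placeholder, not the real bbl policy; the aws call is shown commented out since it needs credentials):

```shell
# Save the policy text to a file instead of relying on the clipboard.
cat > bbl-policy.json <<'EOF'
{"Version": "2012-10-17", "Statement": []}
EOF

# Then pass the file directly to the aws CLI:
# aws iam put-user-policy --user-name "bbl-user" --policy-name "bbl-policy" \
#     --policy-document "file://bbl-policy.json"
```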

multiple typos in examples - deploy-aws.html.md.erb

Dashes where they shouldn't be caused several issues:

bosh upload-stemcell

  • produces an error: Unknown command: upload-stemcell
  • should be: bosh upload stemcell

bosh create-release

  • produces an error: Unknown command: create-release
  • should be: bosh create release

bosh upload-release

  • produces an error: Unknown command: upload-release
  • should be: bosh upload release

The 'Deploy Cloud Foundry' step also failed:

bosh -d cf deploy ../cf-deployment.yml

  • produces an error: wrong number of arguments (given 1, expected 0).
    Usage: deploy [--recreate] [--no-redact] [--skip-drain [job1,job2]]

I'm still researching the correct command here - suggestions are welcome
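One possible explanation (an assumption, not confirmed by the docs): the hyphenated commands are bosh CLI v2 syntax, while the space-separated forms belong to CLI v1, so this may be a CLI version mismatch rather than typos:

```python
# Hypothesized v2 -> v1 command mapping for the three failing commands.
V2_TO_V1 = {
    "bosh upload-stemcell": "bosh upload stemcell",
    "bosh create-release": "bosh create release",
    "bosh upload-release": "bosh upload release",
}

for v2, v1 in V2_TO_V1.items():
    # Each v1 form is just the v2 form with the hyphen turned into a space.
    assert v1 == v2.replace("-", " ", 1)
```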

docs-deploying-cf/blob/master/common/deploy.html.md.erb ubuntu not specified as necessary

The instruction for choosing a stemcell says to choose one from bosh.io/stemcells; however, the bosh deploy command specifically looks for the warden Ubuntu stemcell. You may want to clarify this. Specifically, I chose CentOS and it failed, as the defaults in the cf-release example specifically called for the ubuntu-trusty stemcell.

Update/remove reference to `.ruby-version`

In the Create a Manifest doc it says:

Install the Ruby version listed in the .ruby-version file of the cf-release repository. You can manage your Ruby versions using rvm, rbenv, or chruby

https://github.com/cloudfoundry/docs-deploying-cf/blob/master/boshlite/create_a_manifest.html.md.erb#L64

But the .ruby-version file was recently removed:

https://github.com/cloudfoundry/cf-release/issues/1025

Should this line just be deleted from the docs, or changed to say something about the last known working Ruby version? /cc @Amit-PivotalLabs

CF deployment on BOSH lite failing

Deployment on bosh-lite does not work if I follow the steps mentioned here.

I used the cf-271 release as well as the cf-260 release, but neither works. The deployment fails because consul_z1 is not running after the update.

I SSH into the VM and see the following logs.

error during stop: dial tcp [::1]:8400: getsockopt: connection refused
error during start: timeout exceeded: "rpc error: failed to get conn: x509: certificate has expired or is not yet valid"
2017/08/09 04:23:24 [ERR] agent.client: Failed to decode response header: EOF
2017/08/09 04:23:24 [ERR] agent.client: Failed to decode response header: EOF
error during start: timeout exceeded: "rpc error: failed to get conn: x509: certificate has expired or is not yet valid"
2017/08/09 04:24:26 [ERR] agent.client: Failed to decode response header: EOF
2017/08/09 04:24:26 [ERR] agent.client: Failed to decode response header: EOF
error during start: timeout exceeded: "rpc error: failed to get conn: x509: certificate has expired or is not yet valid"

Looks like there is some certificate generation steps missing or the certificates are invalid. Can you please help?

Also, should we use cf-deployment for development and testing purposes?
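One way to investigate the "certificate has expired" error is to inspect the certificate's validity window with openssl. The example below runs against a throwaway self-signed cert it generates on the spot; point -in at the consul agent certificate from your generated manifest instead:

```shell
# Create a throwaway cert just to demonstrate the inspection command.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout throwaway.key -out throwaway.crt -subj "/CN=consul-agent"

# Print the validity window: notBefore= / notAfter=. If notAfter is in the
# past (or notBefore is in the future), consul will reject the cert.
openssl x509 -noout -dates -in throwaway.crt
```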

Docs on filling out cf-stub should clarify non-overlapping requirements for domain, system_domain, and apps_domains

I don't actually know what the real requirements are to pass the validations in Cloud Controller, but it would be nice if the docs guided users away from misconfiguring these domains so they can avoid validation errors that only surface late, at deploy time.

This is related to this issue: cloudfoundry/cloud_controller_ng#552

(This issue was driven out by these stories: https://www.pivotaltracker.com/story/show/114144105, https://www.pivotaltracker.com/story/show/85293086)

OpenStack rate limit validation doesn't reveal potential problems

As part of our efforts to automate OpenStack validation (https://github.com/cloudfoundry-incubator/cf-openstack-validator), we introduced a rate limit test.

We just discovered two things:

  • Fog caches the results of compute.servers, i.e. a valid check would be servers = compute.servers; 100.times { servers.reload }
  • GET is in general not limited, i.e. just enabling rate limits with default values wouldn't limit the calls to list servers

https://github.com/cloudfoundry/docs-deploying-cf/blob/master/openstack/validate_openstack.html.md.erb#L36-L41

What is the background of this specific check?

cf-stub.html is unclear

The following sections could use some clarification:

  • consul: The command doesn't work until git submodule init && git submodule update has been run on the cf-release repo.
  • ccdb (and other db sections): No list of supported database types (postgresql is listed, but there's no list of other valid options).
  • jwt: The command openssl rsa -in jwt-key.pem -pubout > key.pub doesn't work by itself (as jwt-key.pem doesn't exist).
  • etcd: No instructions.
  • generate_deployment_manifest: This is listed as an executable, when it's in fact a script in the scripts directory. The command line should be expressed as ./scripts/generate_deployment_manifest (this is corrected farther down the documentation, but IMHO the doc would be a lot clearer if the first reference to the command was correct).

Links to IaaS-specific landing pages don't show me in sidenav

On the main landing page for these docs, there are links for the instructions for each of the IaaSes. E.g. there's (aws/index.html). When I click it, my URL doesn't have the index.html, it's just /aws at the end, and so the sidenav doesn't show where I am. If I manually add /index.html to the end of the URL, I see where I am in the sidenav.

metron agent cannot collect logs of the apps

The OpenStack security groups are missing a rule: UDP port 3457 should be open, because the Metron agent sends and collects information over this port. If this port is closed, Doppler cannot get the logs of the apps.

Security group configuration for OpenStack is incorrect

With the new loggregator features that have been in Cloud Foundry since release 147 or so, loggregator sends websocket protobufs to loggregator's internal endpoint at port 3456. Thus, "cf-private" needs to be updated to open UDP port 3456.

I would do it myself as a pull request, but the images are hosted on EverNote and I can't manipulate them through GitHub.

Deployment instructions need updating?

The current page on creating a deployment manifest for Cloud Foundry lists installing BOSH-Lite and the BOSH CLI as prerequisite steps. The linked instructions on bosh.io then have you install CLI v2, but the deployment manifest instructions reference BOSH CLI v1 commands such as "bosh target", as does the generate-bosh-lite-dev-manifest script. Am I missing something, do I need to run this activity using BOSH CLI v1, or does the project need to be updated for the CLI v2 command set?

Suggestion for method to generate db encryption key may be insecure

In the instructions to customize the deployment manifest for OpenStack (which are quite good btw!) the following suggestion is given to generate a "secure password" for database encryption:

md5 -qs "$(date)"

This is really not going to generate anything suitable for security; realistically, it shrinks the range of possible values for the encryption key dramatically, especially if you know roughly when the deployment was created.

If we want to recommend a secure way to generate this password we should instruct the user to pull from /dev/random.
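As a sketch of the safer approach, a key can be drawn from the operating system's CSPRNG, for example via Python's secrets module (which is backed by os.urandom). The 128-bit size is an assumption about what is acceptable for db_encryption_key:

```python
# Derive the key from the OS entropy source instead of hashing the
# (guessable) current date.
import secrets

db_encryption_key = secrets.token_hex(16)  # 32 hex characters, 128 random bits
print(len(db_encryption_key))
```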

OpenStack Security Groups - missing rule

Hi,

the documentation lists the following security group rule:

| Ingress | IPv4 | TCP | - | cf (Security Gp) |

but no UDP counterpart. I think this is required for Consul to work.

| Ingress | IPv4 | UDP | - | cf (Security Gp) |

issue cf release
