
OpenShift on OpenStack

Maintenance Status

This project is no longer being developed or maintained by its original authors.

The official OpenShift installer now supports various cloud providers, including OpenStack, so much of the development effort has moved there.

We recommend taking a look at it.

About

A collection of documentation, Heat templates, configuration and everything else that’s necessary to deploy OpenShift on OpenStack.

This template uses Heat to create the OpenStack infrastructure components, then calls the OpenShift Ansible installer playbooks to install and configure OpenShift on the VMs.

Architecture

All of the OpenShift VMs will share a private network. This network is connected to the public network by a router.

The deployed OpenShift environment is composed of a replicated set of OpenShift master VMs fronted by a load balancer. This provides both a single point of access and some HA capabilities. The applications run on one or more OpenShift node VMs. These are connected by a private software-defined network (SDN) which can be implemented either with Open vSwitch or Flannel.

A bastion server controls the host and service configuration, which is applied by Ansible playbooks executed from the bastion host.

The bastion server, master nodes, and infra nodes are also given floating IP addresses on the public network. This provides direct access to the bastion server, from which you can access all nodes by SSH. Master and infra nodes have floating IPs assigned to ensure these nodes remain accessible when an external load balancer is used for accessing OpenShift services.

All of the OpenShift hosts (master, infra and node) have block storage for Docker images and containers provided by Cinder. OpenShift will run a local Docker registry, also backed by Cinder block storage. Finally all nodes will have access to Cinder volumes which can be created by OpenStack users and mounted into containers by Kubernetes.

(architecture diagram)

Prerequisites

  1. OpenStack version Juno or later with the Heat, Neutron, Ceilometer, and (on Mitaka or later) Aodh services running:

    • heat-api-cfn service - used for passing heat metadata to nova instances

    • Neutron LBaaS service (optional) - used for load balancing requests in HA mode. If this service is not available, you can deploy a dedicated loadbalancer node; see Select Loadbalancer Type

    • Ceilometer services (optional) - used when autoscaling is enabled

  2. ServerGroupAntiAffinityFilter enabled in Nova service (optionally ServerGroupAffinityFilter when using all-in-one OpenStack environment)

  3. CentOS 7.2 cloud image (we leverage cloud-init) loaded in Glance for OpenShift Origin deployments, or a RHEL 7.2 cloud image if doing Atomic Enterprise or OpenShift Container Platform. Make sure to use official images to avoid unexpected issues during deployment (e.g. a custom firewall may block OpenShift inter-node communication).

  4. An SSH keypair loaded into Nova

  5. A (Neutron) network with a pool of floating IP addresses available

CentOS and RHEL are the only tested distros for now.

DNS Server

The OpenShift installer requires that all nodes be reachable via their hostnames. Since OpenStack does not currently provide internal name resolution, this needs to be done with an external DNS service that all nodes use via the dns_nameserver parameter.

In a production deployment this would be your existing DNS, but if you don’t have the ability to update it to add new name records, you will have to deploy one yourself.

We have provided a separate repository that can deploy a DNS server suitable for OpenShift:

Note
If your DNS supports dynamic updates via RFC 2136, you can pass the update key to the Heat stack and all nodes will register themselves as they come up. Otherwise, you will have to update your DNS records manually.
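If you end up adding the records manually against an RFC 2136-capable server, nsupdate can do it. A minimal sketch; the server address, key file, hostname, and IP below are all placeholders, not values from this project:

```shell
# Hypothetical sketch: add a node's A record via an RFC 2136 dynamic update.
# Replace the server, zone, key file, hostname and address with your own.
nsupdate -k /etc/rndc.key <<'EOF'
server 192.0.2.53
zone example.com.
update add origin-master.example.com. 300 A 192.0.2.10
send
EOF
```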

Red Hat Software Repositories

When installing OpenShift Container Platform on RHEL the OpenShift and OpenStack repositories must be enabled, along with several common repositories. These repositories must be available under the subscription account used for installation.

Table 1. Required Repositories for RHEL installation

Repo Name                          Purpose
rhel-7-server-rpms                 Standard RHEL Server RPMs
rhel-7-server-extras-rpms          Supporting RPMs
rhel-7-server-optional-rpms        Supporting RPMs
rhel-7-server-openstack-10-rpms    OpenStack client and data collection RPMs
rhel-7-server-ose-3.5-rpms         OpenShift Container Platform RPMs
rhel-7-fast-datapath-rpms          Required for OpenShift Container Platform 3.5+ and OVS 2.6+
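On a RHEL host, the repositories in Table 1 can be enabled with subscription-manager. A sketch, assuming the subscription account from above; <pool_id> is a placeholder for your own pool:

```shell
# Register the host and enable only the repositories from Table 1
subscription-manager register
subscription-manager attach --pool=<pool_id>
subscription-manager repos --disable='*' \
    --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-extras-rpms \
    --enable=rhel-7-server-optional-rpms \
    --enable=rhel-7-server-openstack-10-rpms \
    --enable=rhel-7-server-ose-3.5-rpms \
    --enable=rhel-7-fast-datapath-rpms
```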

Creating an All-In-One Demo Environment

The following steps can be used to set up an all-in-one testing/development environment:

# OpenStack does not run with NetworkManager
systemctl stop NetworkManager
systemctl disable NetworkManager

# The Packstack Installer is not supported for production but will work
# for demonstrations
yum -y install openstack-packstack libvirt git

# Add room for images if /var/lib is too small
mv /var/lib/libvirt/images /home
ln -s /home/images /var/lib/libvirt/images

# Install OpenStack demonstrator with no real security
#   This produces the keystonerc_admin file used below
packstack --allinone --provision-all-in-one-ovs-bridge=y \
  --os-heat-install=y --os-heat-cfn-install=y \
  --os-neutron-lbaas-install=y \
  --keystone-admin-passwd=password --keystone-demo-passwd=password

# Retrieve the Heat templates for OpenShift
git clone https://github.com/redhat-openstack/openshift-on-openstack.git

# Retrieve a compatible image for the OpenShift VMs
curl -O http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2

# Set access environment parameters for the new OpenStack service
source keystonerc_admin

# Load the VM image into the store and make it available for creating VMs
glance image-create --name centos72 --is-public True \
  --disk-format qcow2 --container-format bare \
  --file CentOS-7-x86_64-GenericCloud.qcow2
# For newer versions of glance clients, substitute "--is-public True" with "--visibility public"

# Install the current user's SSH key for access to VMs
nova keypair-add --pub-key ~/.ssh/id_rsa.pub default
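Before deploying, it can be worth sanity-checking that the pieces are in place (output will vary with your environment):

```shell
# Verify the resources created above are visible to OpenStack
glance image-list    # the centos72 image should be listed
nova keypair-list    # the 'default' keypair should be listed
neutron net-list     # the external/public network should be listed
heat stack-list      # confirms the Heat service is reachable
```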

Deployment

You can pass all environment variables to Heat on the command line. However, two environment files are provided as examples.

  • env_origin.yaml is an example of the variables to deploy an OpenShift Origin 3 environment.

  • env_aop.yaml is an example of the variables to deploy an Atomic Enterprise or OpenShift Container Platform 3 environment. Note that the deployment type should be openshift-enterprise for OpenShift or atomic-enterprise for Atomic Enterprise. Also, a valid RHN subscription is required for deployment.

Here is a sample environment file using a subset of the parameters that can be set to configure the OpenShift deployment. All configurable parameters, including descriptions, can be found in the parameters section of the main template. Assuming your external network is called public, your SSH key is default, your CentOS 7.2 image is centos72, and your domain name is example.com, this is how you deploy OpenShift Origin:

cat << EOF > openshift_parameters.yaml
parameters:
   # Use OpenShift Origin (vs OpenShift Container Platform)
   deployment_type: origin

   # set SSH access to VMs
   ssh_user: centos
   ssh_key_name: default

   # Set the image type and size for the VMs
   bastion_image: centos72
   bastion_flavor: m1.medium
   master_image: centos72
   master_flavor: m1.medium
   infra_image: centos72
   infra_flavor: m1.medium
   node_image: centos72
   node_flavor: m1.medium
   loadbalancer_image: centos72
   loadbalancer_flavor: m1.medium

   # Set an existing network for inbound and outbound traffic
   external_network: public
   dns_nameserver: 8.8.4.4,8.8.8.8

   # Define the host name templates for master and nodes
   domain_name: "example.com"
   master_hostname: "origin-master"
   node_hostname: "origin-node"

   # Allocate additional space for Docker images
   master_docker_volume_size_gb: 25
   infra_docker_volume_size_gb: 25
   node_docker_volume_size_gb: 25

   # Specify the (initial) number of nodes to deploy
   node_count: 2

   # Add auxiliary services: OpenShift router and internal Docker registry
   deploy_router: False
   deploy_registry: False

   # If using RHEL image, add RHN credentials for RPM installation on VMs
   rhn_username: ""
   rhn_password: ""
   rhn_pool: '' # OPTIONAL

   # Currently Ansible 2.1 is not supported so add these parameters as a workaround
   openshift_ansible_git_url: https://github.com/openshift/openshift-ansible.git
   openshift_ansible_git_rev: master

resource_registry:
  # use neutron LBaaS
  OOShift::LoadBalancer: openshift-on-openstack/loadbalancer_neutron.yaml
  # use openshift SDN
  OOShift::ContainerPort: openshift-on-openstack/sdn_openshift_sdn.yaml
  # enable ipfailover for router setup
  OOShift::IPFailover: openshift-on-openstack/ipfailover_keepalived.yaml
  # create dedicated volume for docker storage
  OOShift::DockerVolume: openshift-on-openstack/volume_docker.yaml
  OOShift::DockerVolumeAttachment: openshift-on-openstack/volume_attachment_docker.yaml
  # use ephemeral cinder volume for openshift registry
  OOShift::RegistryVolume: openshift-on-openstack/registry_ephemeral.yaml
EOF
# retrieve the Heat template (if you haven't yet)
git clone https://github.com/redhat-openstack/openshift-on-openstack.git

After this, you can deploy using the heat command:

# create a stack named 'my-openshift'
heat stack-create my-openshift -t 180 \
  -e openshift_parameters.yaml \
  -f openshift-on-openstack/openshift.yaml

or using the generic OpenStack client:

# create a stack named 'my-openshift'
openstack stack create --timeout 180 \
  -e openshift_parameters.yaml \
  -t openshift-on-openstack/openshift.yaml my-openshift

The node_count parameter specifies how many compute nodes you want to deploy. In the example above, we will deploy one master, one infra node and two compute nodes.

The templates will report stack completion back to Heat only when the whole OpenShift setup is finished.

Debugging

Sometimes it’s necessary to find out why a stack was not deployed as expected; inspecting the failed Heat resources helps you find the root cause of the issue.
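For example, the following Heat commands can help narrow down a failure; my-openshift is the stack name used in the examples in this document:

```shell
# List resources, descending into nested stacks, and show only failures
heat resource-list -n 5 my-openshift | grep -i failed

# Show the reason Heat recorded for a failed resource
heat resource-show my-openshift <failed_resource_name>

# Review recent events for the stack
heat event-list my-openshift
```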

OpenStack Integration

OpenShift on OpenStack takes advantage of the cloud provider to offer features such as dynamic storage to the OpenShift users. Auto scaling also requires communication with the OpenStack service. You must provide a set of OpenStack credentials so that OpenShift and the heat scaling mechanism can work correctly.

These are the same values used to create the Heat stack.

Sample OSP Credentials - osp_credentials.yaml
---
parameters:
  os_auth_url: http://10.0.x.x:5000/v2.0
  os_username: <username>
  os_password: <password>
  os_region_name: regionOne
  os_tenant_name: <tenant name>
  os_domain_name: <domain name>

When invoking the stack creation, include this by adding -e osp_credentials.yaml to the command.

OpenStack with SSL/TLS

If your OpenStack service is encrypted with SSL/TLS, you will need to provide the CA certificate so that the communication channel can be validated.

The CA certificate is provided as a literal copy of the contents of the CA certificate file, and can be included in an additional environment file:

CA Certificate Parameter File ca_certificates.yaml
---
parameters:
  ca_cert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----

When invoking the stack creation, include this by adding -e ca_certificates.yaml.

You can include multiple CA certificate strings and all will be imported into the CA list on all instances.

Multiple Master Nodes

You can deploy OpenShift with multiple master hosts using the 'native' HA method (see https://docs.openshift.org/latest/install_config/install/advanced_install.html#multiple-masters for details) by increasing the number of master nodes. This can be done by setting the master_count Heat parameter:

heat stack-create my-openshift \
   -e openshift_parameters.yaml \
   -P master_count=3 \
   -f openshift-on-openstack/openshift.yaml

Three master nodes will be deployed. Console and API URLs point to the loadbalancer server which distributes requests across all three nodes. You can get the URLs from Heat by running heat output-show my-openshift console_url and heat output-show my-openshift api_url.

Multiple Infra Nodes

You can deploy OpenShift with multiple infra hosts. The OpenShift router is then deployed on each infra node (only if -P deploy_router=true is used), and router requests are load balanced by either a dedicated or a Neutron load balancer. This can be done by setting the infra_count Heat parameter:

heat stack-create my-openshift \
   -e openshift_parameters.yaml \
   -P infra_count=2 \
   -P deploy_router=true \
   -f openshift-on-openstack/openshift.yaml

Two infra nodes will be deployed. The loadbalancer server distributes requests on ports 80 and 443 across both nodes.

Select Loadbalancer Type

When deploying multiple master nodes, both access to the master nodes and access to the OpenShift router pods (which run on infra nodes) have to be load balanced. openshift-on-openstack provides multiple options for setting up load balancing:

  • Neutron LBaaS - this load balancer is used by default. The Neutron load balancer service is used for load balancing console/API requests to master nodes. At the moment, OpenShift router requests are not load balanced and an external load balancer has to be used for them. This is the default option, but it can be set explicitly by including -e openshift-on-openstack/env_loadbalancer_neutron.yaml when creating the stack. By default, this mode uses IP failover.

  • External loadbalancer - users are expected to set up their own load balancer, both for master nodes and OpenShift routers. This is the suggested type for production. To select this type, include -e openshift-on-openstack/env_loadbalancer_external.yaml when creating the stack and also set the lb_hostname parameter to point to the load balancer’s fully qualified domain name. Once stack creation is finished, you can configure your external load balancer with the list of created master nodes.

  • Dedicated loadbalancer node - a dedicated node is created during stack creation and an HAProxy load balancer is configured on it. Both console/API and OpenShift router requests are load balanced by this dedicated node. This type is useful for demo/testing purposes only, because HA is not assured for the single load balancer. To select this type, include -e openshift-on-openstack/env_loadbalancer_dedicated.yaml when creating the stack.

  • None - if only a single master node is deployed, it’s possible to skip load balancer creation; all master node requests and OpenShift router requests then point to the single master node. To select this type, include -e openshift-on-openstack/env_loadbalancer_none.yaml when creating the stack. By default, this mode uses IP failover.
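For example, selecting the external load balancer type might look like this (lb.example.com is a placeholder for your load balancer's FQDN):

```shell
# Create the stack using an external load balancer
heat stack-create my-openshift -t 180 \
  -e openshift_parameters.yaml \
  -e openshift-on-openstack/env_loadbalancer_external.yaml \
  -P lb_hostname=lb.example.com \
  -f openshift-on-openstack/openshift.yaml
```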

Select SDN Type

By default, OpenShift is deployed with openshift-sdn. When used with OpenStack Neutron with GRE or VXLAN tunnels, packets are encapsulated twice, which can have an impact on performance. These Heat templates allow using Flannel instead of openshift-sdn, with the host-gw backend, to avoid the double encapsulation. To do so, include the env_flannel.yaml environment file when you create the stack:

heat stack-create my_openshift \
   -e openshift_parameters.yaml \
   -f openshift-on-openstack/openshift.yaml \
   -e openshift-on-openstack/env_flannel.yaml

To use this feature, the Neutron port_security extension driver needs to be enabled. When using the ML2 driver, edit the file /etc/neutron/plugins/ml2/ml2_conf.ini and make sure it contains the line:

extension_drivers = port_security
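One way to apply this edit is a sketch like the following, assuming the crudini utility is installed and neutron-server runs on the same host:

```shell
# Set the extension driver and restart neutron-server to pick it up
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
    extension_drivers port_security
systemctl restart neutron-server
```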

Note that this feature is still in experimental mode.

LDAP authentication

You can use an external LDAP server to authenticate OpenShift users. Update parameters in env_ldap.yaml file and include this environment file when you create the stack.

Example of env_ldap.yaml using an Active Directory server:

LDAP parameter file env_ldap.yaml:
parameter_defaults:
   ldap_hostname: <ldap hostname>
   ldap_ip: <ip of ldap server>
   ldap_url: ldap://<ldap hostname>:389/CN=Users,DC=example,DC=openshift,DC=com?sAMAccountName
   ldap_bind_dn: CN=Administrator,CN=Users,DC=example,DC=openshift,DC=com?sAMAccountName
   ldap_bind_password: <admin password>

Then create the stack:

heat stack-create my-openshift \
  -e openshift_parameters.yaml \
  -e openshift-on-openstack/env_ldap.yaml \
  -f openshift-on-openstack/openshift.yaml

If your LDAP service uses SSL, you will also need to add a CA certificate for the LDAP communications.

Using Custom Yum Repositories

You can set additional Yum repositories on deployed nodes by passing the extra_repository_urls parameter, which contains a comma-delimited list of Yum repository URLs:

heat stack-create my-openshift \
  -e openshift_parameters.yaml \
  -P extra_repository_urls=http://server/my/own/repo1.repo,http://server/my/own/repo2.repo \
  -f openshift-on-openstack/openshift.yaml

Using Custom Docker Repositories

You can set additional Docker repositories on deployed nodes by passing the extra_docker_repository_urls parameter, which contains a comma-delimited list of Docker repository URLs. If a repository is insecure, you can use the #insecure suffix for that repository:

heat stack-create my-openshift \
  -e openshift_parameters.yaml \
  -P extra_docker_repository_urls='user.docker.example.com,custom.user.example.com#insecure' \
  -f openshift-on-openstack/openshift.yaml

Using Persistent Cinder Volume for Docker Registry

When deploying the OpenShift registry (-P deploy_registry=true) you can use either an ephemeral or a persistent Cinder volume. An ephemeral volume is used by default; the volume is automatically created when creating the stack and is deleted when deleting the stack. Alternatively, you can use an existing Cinder volume by including the env_registry_persistent.yaml environment file and setting registry_volume_id when you create the stack:

heat stack-create my-openshift \
  -e openshift_parameters.yaml \
  -f openshift-on-openstack/openshift.yaml \
  -e openshift-on-openstack/env_registry_persistent.yaml \
  -P registry_volume_id=<cinder_volume_id>

A persistent volume is not formatted when creating the stack; if you have a new, unformatted volume you can enforce formatting by passing -P prepare_registry=true.

Accessing OpenShift

From the user's point of view there are two entry points into the deployed OpenShift environment:

  • OpenShift console and API URLs: these URLs usually point to the loadbalancer host and can be obtained by:

heat output-show my-openshift console_url
heat output-show my-openshift api_url

  • Router IP: the IP address on which the OpenShift router service listens. This IP will be used for setting wildcard DNS for the .apps.<domain> subdomain. The IP can be obtained by:

heat output-show my-openshift router_ip

Setting DNS

To make sure that console and API URL resolution works properly, you have to create a DNS record for the hostname used in the console_url and api_url URLs. The floating IP address can be obtained by:

heat output-show my-openshift loadbalancer_ip

For example if console_url is https://default32-lb.example.com:8443/console/ and loadbalancer_ip is 172.24.4.166 there should be a DNS record for domain example.com:

default32-lb  IN A  172.24.4.166

If the OpenShift router was deployed (-P deploy_router=true), you may also want to make sure that wildcard DNS is set for the application subdomain. For example, if the domain used is example.com and router_ip is 172.24.4.168, there should be a DNS record for domain example.com:

*.cloudapps.example.com. 300 IN  A 172.24.4.168

Note
The above DNS records should be set on the DNS server authoritative for the domain used in the OpenShift cluster (example.com in the example above).

Dynamic DNS Updates

If your DNS servers support dynamic updates (as defined in RFC 2136), you can pass the update key in the dns_update_key parameter and each node will register its internal IP address to all the DNS servers in the dns_nameserver list.

In addition, if you use the dedicated load balancer, the API and wildcard entries will be created as well. Otherwise, you will need to set them manually.

Retrieving the OpenShift CA certificate

You can retrieve the CA certificate that was generated during the OpenShift installation by running

heat output-show --format=raw my-openshift ca_cert > ca.crt
heat output-show --format=raw my-openshift ca_key > ca.key
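The retrieved certificate can then be used to validate TLS connections to the cluster, for example with curl; the URL below is the hypothetical api_url host from the earlier examples:

```shell
# Query the master health endpoint, validating the server
# certificate against the retrieved CA
curl --cacert ca.crt https://default32-lb.example.com:8443/healthz
```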

Container and volumes quotas

OpenShift has preliminary support for local emptyDir volume quotas. You can set the volume_quota parameter to a resource quantity representing the desired quota per FSGroup.

You can set a quota on the maximum size of the containers using the container_quota parameter (in GB).

Example:

   volume_quota: 10
   container_quota: 20

Disabling Cinder volumes for Docker storage

By default, the Heat templates create a Cinder volume per OpenShift node to host containers. This can be disabled by including both volume_noop.yaml and volume_attachment_noop.yaml in your environment file:

resource_registry:
  ...
  OOShift::DockerVolume: volume_noop.yaml
  OOShift::DockerVolumeAttachment: volume_attachment_noop.yaml

IP failover

These templates allow using IP failover for the OpenShift router. In this mode, a virtual IP address is assigned for the OpenShift router. Multiple instances of the router may be active, but only one instance at a time holds the virtual IP. This ensures minimal downtime in the case of a failure of the currently active router.

By default, IP failover is used when the load balancing mode is Neutron LBaaS or None (see section Select Loadbalancer Type).

The virtual IP of the router can be retrieved with

heat output-show --format=raw my-openshift router_ip

Scaling Up or Down

You can manually scale OpenShift nodes up or down by updating the node_count Heat stack parameter to the desired new count:

heat stack-update -P node_count=5 <other parameters>

If the stack has 2 nodes, 3 new nodes are added; if the stack has 7 nodes, 2 are removed. Any running pods are evacuated from a node before it is removed.
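Spelled out with the stack and file names used in the earlier deployment examples, the update might look like:

```shell
# Scale the existing stack to 5 nodes, repeating the original
# environment and template arguments
heat stack-update my-openshift \
  -e openshift_parameters.yaml \
  -f openshift-on-openstack/openshift.yaml \
  -P node_count=5
```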

Autoscaling

Scaling of OpenShift nodes can be automated by using Ceilometer metrics. By default, cpu_util metering is used. You can enable autoscaling with the autoscaling Heat parameter and by tweaking the properties of cpu_alarm_high and cpu_alarm_low in openshift.yaml.
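A sketch of enabling it at stack creation time; the boolean value form for the autoscaling parameter is assumed here, and the other arguments follow the earlier examples:

```shell
# Create the stack with autoscaling enabled (value form assumed)
heat stack-create my-openshift \
  -e openshift_parameters.yaml \
  -P autoscaling=true \
  -f openshift-on-openstack/openshift.yaml
```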

Removing or Replacing Specific Nodes

Sometimes it’s necessary to remove or replace specific nodes in the stack, for example because of a hardware issue. Because OpenShift "compute" nodes are members of a Heat AutoScalingGroup, adding or removing nodes is by default handled by a scaling policy, and when removing a node Heat selects the oldest one by default. A specific node can nevertheless be removed with the following steps:

# delete the node
$ nova delete instance_name

# let heat detect the missing node
$ heat action-check stack_name

# update the stack with the desired new number of nodes (same as before
# for replacement, decreased by 1 for removal)
$ heat stack-update <parameters> -P node_count=<desired_count>

Known Bugs

Here is the list of bugs which are not yet fixed and which you may hit.

Customize OpenShift installation

These Heat templates make use of openshift-ansible to deploy OpenShift. You can provide additional parameters to openshift-ansible by specifying a JSON string as the extra_openshift_ansible_params parameter. For example:

$ heat stack-create <parameters> -P extra_openshift_ansible_params='{"osm_use_cockpit":true}'

This parameter must be used with caution as it may conflict with other parameters passed to openshift-ansible by the Heat templates.

Current Status

  1. The CA certificate used with OpenShift is currently not configurable.

  2. The apps cloud domain is hardcoded for now. We need to make this configurable.

Prebuilt images

A customize-disk-image script is provided to preinstall OpenShift packages.

./customize-disk-image --disk rhel7.2.qcow2 --sm-credentials user:password

The modified image must be uploaded into Glance and used as the server image for the heat stack with the server_image parameter.

Copyright 2016 Red Hat, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

openshift-on-openstack's People

Contributors

apevec, cbrown2, drebes, gbraad, ioggstream, jprovaznik, lebauce, maci0, markllama, newgoliath, pburgisser, prayagverma, rlopez133, scollier, shivkumar13, tomassedovic, wrichter, ypraveen

openshift-on-openstack's Issues

Could not install ansible

Hi
I got this problem:

[ 1561.591418] cloud-init[1431]: + yum -y --enablerepo=epel install ansible1.9
[ 1568.643366] cloud-init[1431]: Loaded plugins: fastestmirror
[ 1572.728427] cloud-init[1431]: One of the configured repositories failed (Unknown),
[ 1572.735699] cloud-init[1431]: and yum doesn't have enough cached data to continue. At this point the only
[ 1572.760463] cloud-init[1431]: safe thing yum can do is fail. There are a few ways to work "fix" this:
[ 1572.815172] cloud-init[1431]: 1. Contact the upstream for the repository and get them to fix the problem.
[ 1572.819400] cloud-init[1431]: 2. Reconfigure the baseurl/etc. for the repository, to point to a working
[ 1572.823217] cloud-init[1431]: upstream. This is most often useful if you are using a newer
[ 1572.841765] cloud-init[1431]: distribution release than is supported by the repository (and the
[ 1572.848939] cloud-init[1431]: packages for the previous distribution release still work).
[ 1572.868654] cloud-init[1431]: 3. Disable the repository, so yum won't use it by default. Yum will then
[ 1572.874993] cloud-init[1431]: just ignore the repository until you permanently enable it again or use
[ 1572.896568] cloud-init[1431]: --enablerepo for temporary usage:
[ 1572.915954] cloud-init[1431]: yum-config-manager --disable <repoid>
[ 1572.919903] cloud-init[1431]: 4. Configure the failing repository to be skipped, if it is unavailable.
[ 1572.925578] cloud-init[1431]: Note that yum will try to contact the repo. when it runs most commands,
[ 1572.947275] cloud-init[1431]: so will have to try and fail each time (and thus. yum will be be much
[ 1572.951160] cloud-init[1431]: slower). If it is a very temporary problem though, this is often a nice
[ 1572.978530] cloud-init[1431]: compromise:
[ 1572.984373] cloud-init[1431]: yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
[ 1572.991926] cloud-init[1431]: Cannot retrieve metalink for repository: epel/x86_64. Please verify its path and try again
[ 1573.122410] cloud-init[1431]: + notify_failure 'could not install ansible'

with call:

heat stack-create my-openshift -t 180 -e openshift_parameters.yaml -f openshift-on-openstack/openshift.yaml -P master_count=1 -e openshift-on-openstack/env_loadbalancer_none.yaml

and parameters:

parameters:
   # Use OpenShift Origin (vs Openshift Enterprise)
   deployment_type: origin

   # set SSH access to VMs
   ssh_user: centos
   ssh_key_name: "Simon Balkau"

   # Set the image type and size for the VMs
   server_image: centos72
   flavor: m1.medium

   # Set an existing network for inbound and outbound traffic
   external_network: Public
   dns_nameserver: 8.8.4.4,8.8.8.8

   # Define the host name templates for master and nodes
   domain_name: "openshift.os.imito.ch"
   master_hostname: "origin-master"
   node_hostname: "origin-node"

   # Allocate additional space for Docker images
   master_docker_volume_size_gb: 25
   node_docker_volume_size_gb: 25

   # Specify the (initial) number of nodes to deploy
   node_count: 2

   # Add auxiliary services: OpenStack router and internal Docker registry
   deploy_router: False
   deploy_registry: True

   # If using RHEL image, add RHN credentials for RPM installation on VMs
   rhn_username: ""
   rhn_password: ""
   rhn_pool: '' # OPTIONAL

   openshift_ansible_git_url: https://github.com/openshift/openshift-ansible.git
   openshift_ansible_git_rev: master

I added the parameters as stated in #89 (comment)

what else could I try?

Anti-affinity for OSE nodes

Hi,

I did not find any affinity rules for the OSE nodes in order to avoid for instance two PODs that must run on two different OSE nodes run on the same host.
Perhaps, two different OSE nodes should not run on the same compute node?

Best Regards,

quota limits?

I'm getting hit with quota limits. I tried setting all the quota limits for the admin user and admin class/project to 10000, but I'm still getting the following. Is it a different user??

| openshift_nodes           | fdfbfdd3-2891-4e07-ab40-f81354be779b | state changed                                                                                                                                                                                                     | UPDATE_IN_PROGRESS | 2016-05-24T18:51:31 |
| openshift_nodes           | f5fe3f84-fcb7-4659-89fe-c2b56bb6de31 | resources.openshift_nodes: OverQuotaClient: resources.ao4itnqwsnhx.resources.security_group: Quota exceeded for resources: ['security_group']                                                                     | UPDATE_FAILED      | 2016-05-24T18:55:44 |
| openshift                 | af30aee1-b4ef-476f-b4d4-f106d5ada9d1 | resources.openshift_nodes: OverQuotaClient: resources.ao4itnqwsnhx.resources.security_group: Quota exceeded for resources: ['security_group']                                                                     | UPDATE_FAILED      | 2016-05-24T18:55:45 |
+---------------------------+--------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+

heat stack-create fails; Deployment exited with non-zero status code: 4

Using command.. heat stack-create my_openshift -t 260 -e openshift_parameters.yaml -f openshift-on-openstack/openshift.yaml -e openshift-on-openstack/env_loadbalancer_none.yaml

parameters:
   # Use OpenShift Origin (vs Openshift Enterprise)
   deployment_type: origin

   # set SSH access to VMs
   ssh_user: centos
   ssh_key_name: jr-mac

   # Set the image type and size for the VMs
   server_image: centos72
   flavor: m1.large

   # Set an existing network for inbound and outbound traffic
   external_network: external_network
   dns_nameserver: 10.34.32.1

   # Define the host name templates for master and nodes
   domain_name: "origin.fabric8.io"
   master_hostname: "origin-master"
   node_hostname: "origin-node"

   # Allocate additional space for Docker images
   master_docker_volume_size_gb: 40
   node_docker_volume_size_gb: 40

   # Specify the (initial) number of nodes to deploy
   node_count: 2

   # Add auxiliary services: OpenStack router and internal Docker registry
   deploy_router: True
   deploy_registry: True

   # If using RHEL image, add RHN credentials for RPM installation on VMs
   rhn_username: ""
   rhn_password: ""
   rhn_pool: '' # OPTIONAL

   openshift_ansible_git_url: https://github.com/openshift/openshift-ansible.git
   openshift_ansible_git_rev: master

Results in..

| stack_status_reason   | Resource CREATE failed: Error: resources.openshift_node                                                                                                                                                |
|                       | s.resources.zrtf3m3figae.resources.deployment_run_ansib                                                                                                                                                |
|                       | le: Deployment to server failed: deploy_status_code:                                                                                                                                                   |
|                       | Deployment exited with non-zero status code: 4                                                                                                                                                         |

I'm not sure where to look for logs or what went wrong. Any ideas or pointers? I can see that the OpenStack resources were created, so I think it's the openshift-ansible scripts that are failing.
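In case it helps others: the non-zero status code comes from the script run by the SoftwareDeployment, so the useful logs are on the VM itself. A sketch of where to look (paths vary by os-collect-config version):

```shell
# on the failed node, via its floating IP:
sudo journalctl -u os-collect-config
sudo ls /var/log/heat-config/deployed/    # per-deployment stdout/stderr, if present

# from the client side:
heat deployment-list
heat deployment-show <deployment-id>      # includes deploy_stdout / deploy_stderr
```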

heat signaling?

I seem to be having trouble with resources not signaling complete. Using the HA setup, the lb completes, and the infra_host builds. Looking in cloud-init logs I see everything completed successfully:

However the stack does not seem to advance, instead it stays 'create_in_progress'
deployment_tune_ansible | 5363098c-fa5b-4807-938d-55fafaf47c24 | OS::Heat::SoftwareDeployment | CREATE_IN_PROGRESS | 2016-02-25T13:06:51Z | infra_host |

If I use heat resource-signal and manually signal that task as complete, it moves on to deploying the 3 masters, but they get stuck the same way after the work from cloud-init is done.

Any ideas on what is going on here?

installing RDO packages breaks OSP installs

ok: [localhost] => {"changed": false, "msg": "", "rc": 0, "results": ["https://rdoproject.org/repos/rdo-release.rpm: Nothing to do"]}

Install  4 Packages (+3 Dependent packages)
Upgrade  3 Packages (+2 Dependent packages)

Total size: 1.7 M
Downloading packages:
Running transaction check
Running transaction test

msg:

Transaction check error:
  file /usr/lib/python2.7/site-packages/keystoneauth1/__init__.py from install of python2-keystoneauth1-2.4.1-1.el7.noarch conflicts with file from package python-keystoneauth1-1.1.0-4.el7ost.noarch

The templates must allow for either package set.
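As a stopgap until the templates handle this, the conflicting package can be excluded from the RDO repo on the affected host; a sketch (the repo file name may differ on your system):

```shell
# keep yum from pulling the RDO keystoneauth build over the OSP one
echo "exclude=python2-keystoneauth1*" >> /etc/yum.repos.d/rdo-release.repo
yum clean all
```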

How to change metadata_url?

The heat-api-cfn metadata_url is not accessible from my VM, and cloud-init might be suffering from this.
How do I change the metadata_url?

I'll change it to an address on the public API network, which already has haproxy listening on those ports and forwarding as usual.
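For anyone else hitting this: the URLs that Heat hands to VMs come from heat.conf on the controller. A sketch of pointing them at a reachable address (option names per the Heat configuration reference; the IP is a placeholder):

```ini
# /etc/heat/heat.conf
[DEFAULT]
heat_metadata_server_url = http://<public-api-ip>:8000
heat_waitcondition_server_url = http://<public-api-ip>:8000/v1/waitcondition
heat_watch_server_url = http://<public-api-ip>:8003
```

followed by restarting openstack-heat-api-cfn and openstack-heat-engine.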

Failed Install

The stack will not deploy the master nodes; running
yum -y install openshift-ansible-roles openshift-ansible-playbooks
fails because the packages are not found.

bad subdomain regex in kubelet?

Underscores are apparently not permitted in node names, so how is a stack name like my_openshift supposed to work?

Mar 15 09:29:31 my_openshift_2-openshift-master-1 atomic-openshift-node: I0315 09:29:31.323146    7484 kubelet.go:972] Attempting to register node my_openshift_2-openshift-master-1.example.com
Mar 15 09:29:31 my_openshift_2-openshift-master-1 atomic-openshift-node: I0315 09:29:31.325224    7484 kubelet.go:975] Unable to register my_openshift_2-openshift-master-1.example.com with the apiserver: Node "my_openshift_2-openshift-master-1.example.com" is invalid: metadata.name: invalid value 'my_openshift_2-openshift-master-1.example.com', Details: must be a DNS subdomain (at most 253 characters, matching regex [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*): e.g. "example.com"
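The regex from the error message can be checked directly against candidate names; a quick sketch with grep (the pattern is the RFC 1123 DNS subdomain rule quoted in the kubelet log):

```shell
re='^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'

# underscores are not in the allowed character set, so a stack name like
# my_openshift produces invalid node hostnames:
echo 'my_openshift_2-openshift-master-1.example.com' | grep -Eq "$re" && echo valid || echo invalid
echo 'my-openshift-2-openshift-master-1.example.com' | grep -Eq "$re" && echo valid || echo invalid
```

So underscores were presumably never valid; kubelet may simply have started enforcing the check. Using hyphens in the stack name avoids it.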

console_url and dns for stack-lb.example.com wrong

I've got all the latest PRs, but DNS seems wrong.

[osp_admin@director openshift-on-openstack]$ heat output-show delloss dns_ip
"192.168.191.28"
[osp_admin@director openshift-on-openstack]$ heat output-show delloss console_url
"https://delloss-loadbalancer-w7lucvawtes4-lb.example.com:8443/console/"

DNS clearly did not get updated:

[root@delloss-infra ~]# dig @192.168.191.28 delloss-loadbalancer-w7lucvawtes4-lb.example.com +short
[root@delloss-infra ~]# echo $?
0

"dig"ing deeper, I find the issue. The delloss-lb.example.com name has been created wrong. :(

delloss-lb.example.com.example.com. 86400 IN A  192.168.191.27
[root@delloss-infra ~]# dig @192.168.191.28 example.com axfr
; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> @192.168.191.28 example.com axfr
; (1 server found)
;; global options: +cmd
example.com.            86400   IN      SOA     delloss-infra.example.com. openshift.example.com. 1464877023 43200 180 2419200 10800
example.com.            86400   IN      NS      delloss-infra.example.com.
delloss-lb.example.com.example.com. 86400 IN A  192.168.191.27
delloss-infra.example.com. 86400 IN     A       192.168.0.5
delloss-openshift-master-0.example.com. 86400 IN A 192.168.0.7
delloss-openshift-master-1.example.com. 86400 IN A 192.168.0.6
delloss-openshift-master-2.example.com. 86400 IN A 192.168.0.8
delloss-openshift-node-1sf6yd49.example.com. 86400 IN A 192.168.0.12
delloss-openshift-node-23rhid2k.example.com. 86400 IN A 192.168.0.10
delloss-openshift-node-o5nw1n7n.example.com. 86400 IN A 192.168.0.11
delloss-openshift-node-v0qh1i7p.example.com. 86400 IN A 192.168.0.13
example.com.            86400   IN      SOA     delloss-infra.example.com. openshift.example.com. 1464877023 43200 180 2419200 10800
;; Query time: 1 msec
;; SERVER: 192.168.191.28#53(192.168.191.28)
;; WHEN: Thu Jun 02 12:21:34 EDT 2016
;; XFR size: 12 records (messages 1, bytes 515)
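A guess at the mechanism, assuming the infra host's dnsmasq builds its records from /etc/hosts: the load balancer entry was written already fully qualified, so the configured domain gets appended a second time. If so, the record can be repaired in place:

```shell
# on the infra host: change the /etc/hosts entry from
#   192.168.191.27 delloss-lb.example.com
# to the bare name
#   192.168.191.27 delloss-lb
# and restart dnsmasq so the zone is rebuilt
systemctl restart dnsmasq
```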

openshift_masters.resources[0].resources.docker_volume went to status error due to "Unknown"

Running both RHEL 7 + OSE and CentOS + Origin results in the same error during heat stack-create; is this likely to be something in my setup?

| parent                | None                                                                                                                                                                                                   |
| stack_name            | my_openshift                                                                                                                                                                                           |
| stack_owner           | admin                                                                                                                                                                                                  |
| stack_status          | CREATE_FAILED                                                                                                                                                                                          |
| stack_status_reason   | Resource CREATE failed: ResourceInError: resources.open                                                                                                                                                |
|                       | shift_masters.resources[0].resources.docker_volume:                                                                                                                                                    |
|                       | Went to status error due to "Unknown"                                                                                                                                                                  |
| stack_user_project_id | 3497beaeecd34c05b6cc7a0e53532ca8                                                                                                                                                                       |
| template_description  | Deploy Atomic/OpenShift 3 on OpenStack.                                                                                                                                                                |
| timeout_mins          | 180                                                                                                                                                                                                    |
| updated_time          | None                                                                                                                                                                                                   |
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
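Since the resource that errored is the Cinder volume itself, Cinder usually has the real reason; a sketch of where to look:

```shell
cinder list                    # find the volume stuck in "error"
cinder show <volume-id>        # status reason, if recorded

# on the controller/storage node:
systemctl status openstack-cinder-volume
grep -i error /var/log/cinder/volume.log | tail
```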

DNS names are wrong. Prefixed in dnsmasq, not prefixed in heat output

Heat output after successful heat create run shows:

[osp_admin@director openshift-on-openstack]$ heat output-show openshift console_url
"https://lb.example.com:8443/console/"

Query for lb.example.com

[osp_admin@director openshift-on-openstack]$ dig @192.168.191.16 lb.example.com

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> @192.168.191.16 lb.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 10307
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;lb.example.com.                        IN      A

;; Query time: 1 msec
;; SERVER: 192.168.191.16#53(192.168.191.16)
;; WHEN: Wed May 25 15:01:25 EDT 2016
;; MSG SIZE  rcvd: 32

Query for openshift-lb.example.com:

osp_admin@director openshift-on-openstack]$ dig @192.168.191.16 openshift-lb.example.com

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> @192.168.191.16 openshift-lb.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35571
;; flags: qr aa rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;openshift-lb.example.com.      IN      A

;; ANSWER SECTION:
openshift-lb.example.com. 0     IN      A       192.168.191.14

;; Query time: 1 msec
;; SERVER: 192.168.191.16#53(192.168.191.16)
;; WHEN: Wed May 25 14:56:46 EDT 2016
;; MSG SIZE  rcvd: 58

/etc/hosts on infra

[root@openshift-infra log]# cat /etc/hosts
#
# Initialize the hosts file for dnsmasq on the infra host
#
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.191.16 openshift-infra.example.com openshift-infra
192.168.191.14 openshift-lb openshift-lb.example.com
192.168.191.18 openshift-openshift-master-1.example.com openshift-openshift-master-1 #openshift
192.168.191.19 openshift-openshift-master-0.example.com openshift-openshift-master-0 #openshift
192.168.191.20 openshift-openshift-master-2.example.com openshift-openshift-master-2 #openshift
192.168.191.21 openshift-openshift-node-7c6s2h1s.example.com openshift-openshift-node-7c6s2h1s #openshift
192.168.191.22 openshift-openshift-node-18l6k9j7.example.com openshift-openshift-node-18l6k9j7 #openshift
192.168.191.23 openshift-openshift-node-4vf83ga6.example.com openshift-openshift-node-4vf83ga6 #openshift
192.168.191.24 openshift-openshift-node-k1c231l3.example.com openshift-openshift-node-k1c231l3 #openshift

ERROR: Unknown resource Type : OOShift::LoadBalancer

I'm confused whether this means my Neutron doesn't support LBaaS (it seems to, since all the commands are present on the CLI) or whether there is some basic configuration error. I haven't explicitly selected a load balancer either way; I'm going with the defaults with master_count=3.
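OOShift::LoadBalancer is not a built-in Heat type; it only exists once an environment file maps it in its resource_registry, which is what the env_loadbalancer_*.yaml files in this repo do. A sketch of the shape of that mapping (the target file name here is an assumption; check the actual env file contents):

```yaml
resource_registry:
  OOShift::LoadBalancer: loadbalancer_neutron.yaml
```

So passing one of the env_loadbalancer_*.yaml files with -e on heat stack-create should resolve the "Unknown resource Type" error, whether or not Neutron LBaaS is installed.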

HA install fails - deploying on packstack --allinone

Is it a complete anti-pattern to install OpenShift HA like this on a single hardware node running Packstack --allinone with heat, etc? If most everything is contained in different VMs, wouldn't it work? Thanks!

heat stack-create my_openshift --poll -t 180
-e openshift_parameters.yaml
-e openshift-on-openstack/env_ha.yaml
-f openshift-on-openstack/openshift.yaml

20:01:5 2016-02-24 0f732d51-9bef-4512-bb6b-17701ce885c1 [lb_host]: CREATE_FAILED Conflict: resources.lb_host.resources.host: Port
ea66dec1-2c40-48ae-84d9-95c6ff681ccb is still in use. (HTTP 409) (Request-ID: req-2981aa4f-3e2e-4cee-8260-3df28be386f3)
20:01:5 2016-02-24 0661636a-3885-42fe-a15e-22fe7e185dfa [my_openshift]: CREATE_FAILED Resource CREATE failed: Conflict: resources.
lb_host.resources.host: Port ea66dec1-2c40-48ae-84d9-95c6ff681ccb is still in use. (HTTP 409) (Request-ID: req-2981aa4f-3e2e-4cee-826
0-3df28be386f3)

Port Details

Name
my_openshift-lb_host-w5ty7wsehank-port-bv5bpxr4yp34
ID
ea66dec1-2c40-48ae-84d9-95c6ff681ccb
Network ID
575dff93-bc92-433a-864a-d292b475b75d
Project ID
00116efdc59f44f6954eeae263b36821
MAC Address
fa:16:3e:ae:e7:bf
Status
Active
Admin State
UP
Fixed IP
None
Attached Device
Device Owner
compute:None
Device ID
17b7c98e-251e-44fe-9b40-19fa83d02990
Binding
Host
hw1.openshift.oss.lab
Profile
None
VIF Type
ovs
VIF Details

    port_filter True
    ovs_hybrid_plug True

VNIC Type
normal
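The Conflict means the port is still attached to the server listed under Device ID. A sketch of clearing it by hand before retrying the stack, using the IDs from the details above:

```shell
# detach the port from the instance it is still bound to
nova interface-detach 17b7c98e-251e-44fe-9b40-19fa83d02990 ea66dec1-2c40-48ae-84d9-95c6ff681ccb

# then delete the orphaned port
neutron port-delete ea66dec1-2c40-48ae-84d9-95c6ff681ccb
```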

We should download a CentOS image by default.

It would be nice to have the template fetch the image that is needed to run it.

The following can do this:

heat_template_version: 2014-10-16

description: A hot template for provisioning an Glance Image

parameters:

  image_name: 
    type: string
    default: "Centos7" 

  container_format:
    type: string
    default: "bare"
    constraints:
      - allowed_values: [ "ami", "ari", "aki", "bare", "ova", "ovf"]

  disk_format:
    type: string
    default: "qcow2"
    constraints:
      - allowed_values: [ "ami", "ari", "aki", "vhd", "vmdk", "raw", "qcow2", "vdi", "iso" ]

  location: 
    type: string
    default: "http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2" 

  public: 
    type: boolean
    default: true

  protected:
    type: boolean
    default: false

  disk_min:
    type: integer
    default: 0

  ram_min: 
    type: integer
    default: 0

resources:

  image:
    type: OS::Glance::Image
    properties:
      container_format: {get_param: container_format} 
      disk_format: {get_param: disk_format}
      is_public: {get_param: public}
      location: {get_param: location}
      min_disk: {get_param: disk_min}
      min_ram: {get_param: ram_min}
      name: {get_param: image_name}
      protected: {get_param: protected}

outputs:
  image_name:
    description: Image Name
    value: { get_param: image_name }

However the trick is in finding a way to make this optional.
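One way to make it optional without template conditions (which this heat_template_version predates): register the image template as a custom type, and let an environment file switch it off by mapping the type to OS::Heat::None. The type and file names below are illustrative:

```yaml
# environment that deploys the image
resource_registry:
  OOShift::GlanceImage: glance_image.yaml

# alternative environment that skips image creation entirely
# resource_registry:
#   OOShift::GlanceImage: OS::Heat::None
```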

heat create fails when trying to install EPEL repo on infra node

| stack_status_reason   | Resource CREATE failed: WaitConditionFailure:                                                                                                                                                          |
|                       | resources.infra_host.resources.wait_condition: could                                                                                                                                                   |
|                       | not install EPEL release 7-5

Sure enough I'm getting a 404 on the URL that's used..
wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

I tried hacking https://github.com/redhat-openstack/openshift-on-openstack/blob/master/fragments/infra-boot.sh#L178 to yum -y install epel-release which did install the 7-5 rpm but then the scripts failed further down the line..

| stack_status_reason   | Resource CREATE failed: WaitConditionFailure:                                                                                                                                                          |
|                       | resources.infra_host.resources.wait_condition: could                                                                                                                                                   |
|                       | not install openshift-ansible

So I guess they're two different versions, and the latter doesn't include atomic-openshift-utils, which provides openshift-ansible.

There's a fair amount of guesswork in what I've just done, so I've probably got a few things wrong, but I thought the info might help. Either way, is there a workaround I can try? I'm unable to recreate the cluster I just trashed.
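One workaround worth trying: the versioned EPEL RPM URL breaks every time Fedora cuts a new release, while the "latest" symlink stays stable, so the boot fragment could install from that instead:

```shell
yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
```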

docker update to 1.9 in CentOS causes conflict with Origin <= 1.2

Docker 1.9 is now the default install from CentOS Extras. Due to instabilities with OpenShift < 1.2 it has been marked with a conflict. Origin 1.1.4 is still current in COPR repo from Origin/EPEL.

To install correctly on CentOS 7 the Docker version must be restricted to 1.8.x in master-boot.sh and node-boot.sh unless the Origin version >= 1.2
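A sketch of what that restriction could look like in master-boot.sh / node-boot.sh (the exact 1.8.x package version string available in CentOS Extras is an assumption):

```shell
# install the last known-good Docker and keep yum from upgrading past it
yum -y install docker-1.8.2
echo "exclude=docker-1.9*" >> /etc/yum.conf
```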

We should create the SSH key for users or allow them to supply their own.

We can create the SSH key for a user of the template with the following template.

heat_template_version: 2014-10-16

description: A hot template for provisioning an SSH Key 

parameters:

  key_name:
    type: string
    default: "ssh_key"

  public_key:
    type: string
    default: ""

resources:
  ssh_key:
    type: OS::Nova::KeyPair
    properties:
      save_private_key: true
      name: {get_param: key_name}
      public_key: {get_param: public_key}

outputs:
  key_name:
    description: SSH Key Name
    value: { get_param: key_name}
  private_key:
    description: Private key
    value: { get_attr: [ ssh_key, private_key ] }

The main template can then create the key with something like:

  ssh_key:
    type: ssh_key.yaml             ### Denoted above
    #type: STACK::Resource::SSH_Key_Template
    properties:
      key_name: {get_param: ssh_key_name}
      public_key: {get_param: public_key}

The key can then be referenced as follows, rather than by the parameter name.

      ssh_key_name: { get_attr: [ssh_key, key_name] }

Because a new key is generated when public_key is not supplied, this simplifies the prerequisites while still allowing the end user to supply their own public key if they would like.

DNS not responding

I did a multi-master deployment with the stack name 'openshift'. I'm having trouble accessing dnsmasq on the infra server via UDP.

Testing with dig +tcp works, but dig +notcp fails on the private address while succeeding on localhost.

On the infra server, ss -lnup shows UDP :53 listening on localhost and on the private network.

iptables -L shows ACCEPT on everything.

Any advice?
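One thing worth ruling out, given that TCP works and UDP only answers on localhost: a missing UDP/53 rule in the infra host's security group. A sketch with the neutron CLI (the group name and CIDR are placeholders):

```shell
neutron security-group-rule-create <infra-security-group> \
    --direction ingress --protocol udp \
    --port-range-min 53 --port-range-max 53 \
    --remote-ip-prefix 192.168.0.0/24
```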

heat stack-create failing: resources.lb_monitor: 404 Not Found The resource could not be found

After previously creating an OpenShift cluster successfully with this awesome Heat template, I've tried again with the latest master and run into the error below. It's been about 4 weeks since the run that worked for me. After renaming env.yaml to openshift_parameters.yaml, removing dns_hostname, and renaming master_hostname_prefix and node_hostname_prefix to master_hostname and node_hostname respectively, I hoped it would just work, but I get this error during heat stack-create my_openshift3 -t 260 -e openshift_parameters.yaml -f openshift-on-openstack/openshift.yaml:

| stack_status_reason   | Resource CREATE failed: NeutronClientException:                                                                                                                                                        |
|                       | resources.lb_monitor: 404 Not Found  The resource could                                                                                                                                                |
|                       | not be found.

I'm no OpenStack expert, so apologies if this isn't the Heat template's fault, but any pointers would be appreciated. All Neutron services apart from the cleanup utility seem to be running OK:

systemctl | grep neut
neutron-dhcp-agent.service                                                                       loaded active running   OpenStack Neutron DHCP Agent
neutron-l3-agent.service                                                                         loaded active running   OpenStack Neutron Layer 3 Agent
neutron-metadata-agent.service                                                                   loaded active running   OpenStack Neutron Metadata Agent
neutron-openvswitch-agent.service                                                                loaded active running   OpenStack Neutron Open vSwitch Agent
neutron-ovs-cleanup.service                                                                      loaded active exited    OpenStack Neutron Open vSwitch Cleanup Utility
neutron-server.service                                                                           loaded active running   OpenStack Neutron Server
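lb_monitor maps to a Neutron LBaaS health monitor, and the service list above shows no LBaaS agent running, so the 404 likely means the LBaaS (v1) extension isn't loaded. A sketch of enabling it (section and option names per the Kilo/Liberty LBaaS v1 setup; verify against your release):

```ini
# /etc/neutron/neutron.conf
[DEFAULT]
service_plugins = router,lbaas
```

followed by restarting neutron-server and starting the neutron-lbaas-agent service. Alternatively, deploying with env_loadbalancer_none.yaml avoids Neutron LBaaS entirely.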

os-collect-config failing on packstack --allinone

Fails with timeout:
heat stack-create my_openshift \
  --poll \
  -e env_aop.yaml \
  -P master_count=3 \
  -f ~/openshift-on-openstack/openshift.yaml
on infra node:

cat /etc/os-collect-config.conf
[DEFAULT]
command = os-refresh-config

[cfn]
metadata_url = http://10.152.251.17:8000/v1/
stack_name = my_openshift-infra_host-tki2dadcu7iu
secret_access_key = d8a906856339490188b1bd38f1dbe8c3
access_key_id = d06def507b1947e297cf6969c10a36d1
path = host.Metadata

/var/log/messages - the following repeats forever.

Mar 11 11:04:19 localhost os-collect-config: ----------------------- PROFILING -----------------------
Mar 11 11:04:19 localhost os-collect-config: Target: migration.d
Mar 11 11:04:19 localhost os-collect-config: Script                                     Seconds
Mar 11 11:04:19 localhost os-collect-config: ---------------------------------------  ----------
Mar 11 11:04:19 localhost os-collect-config: --------------------- END PROFILING ---------------------
Mar 11 11:04:19 localhost os-collect-config: [2016-03-11 11:04:19,093] (os-refresh-config) [INFO] Completed phase migration
Mar 11 11:04:19 localhost os-collect-config: INFO:os-refresh-config:Completed phase migration
Mar 11 11:04:23 localhost os-collect-config: 2016-03-11 11:04:23.944 28675 WARNING os_collect_config.heat  [-] No auth_url configured.
Mar 11 11:04:23 localhost os-collect-config: 2016-03-11 11:04:23.944 28675 WARNING os_collect_config.request [-] No metadata_url configured.
Mar 11 11:04:23 localhost os-collect-config: 2016-03-11 11:04:23.944 28675 WARNING os-collect-config [-] Source [request] Unavailable.
Mar 11 11:04:23 localhost os-collect-config: 2016-03-11 11:04:23.944 28675 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
Mar 11 11:04:23 localhost os-collect-config: 2016-03-11 11:04:23.945 28675 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])

Possible cause of the misconfiguration? Back on the hardware host's OS, I see this oddity in the logs. It seems whatever regex turned _ into - was a bit too greedy; note "my-penshift" instead of "my-openshift".

2016-03-11 11:04:54.718 2325 DEBUG keystoneclient.session [-] RESP: [200] content-length: 2048 content-encoding: gzip x-subject-token: {SHA1}001999a23705549e8fdd2d11fffa666d1c4fd288 vary: X-Auth-Token,Accept-Encoding server: Apache/2.4.6 (Red Hat Enterprise Linux) connection: close date: Fri, 11 Mar 2016 16:04:54 GMT content-type: application/json x-openstack-request-id: req-038f5070-69e8-4cbc-b916-e08007a54cac
RESP BODY: {"token": {"methods": ["ec2credential"], "roles": [{"id": "5c9d018e88f647d48767443cf7440d2b", "name": "heat_stack_user"}], "expires_at": "2016-03-11T17:04:54.692029Z", "project": {"domain": {"id": "73034a6657a545a0840836b838a5720c", "name": "heat"}, "id": "83bb3dd5e8544022a074c811870cdb32", "name": "cea127b7b5264bb08e7a60570d965445-41f85f30-4d99-48ec-983e-859f326"}, "catalog": "<removed>", "extras": {}, "user": {"domain": {"id": "73034a6657a545a0840836b838a5720c", "name": "heat"}, "id": "cb9b6665d64e4bacb757afc9f7cd815a", "name": "my-penshift-infra_host-tki2dadcu7iu-host-rcog64xdos7i"}, "audit_ids": ["mWpoEXN-T06JHZbkXmv2EQ"], "issued_at": "2016-03-11T16:04:54.692050Z"}}

Or rather, does the hardware host have to have the same domain name as the nodes?

# grep example.com *
env_aop.yaml:  domain_name: "example.com"
# hostname
hw2.openshift.oss.lab

Help?
Thanks,
Judd

Cannot reach pod services from master

Hello all,
I deployed a simple hello-openshift pod, but I'm not able to contact it from the master.
I remember this was possible when I downloaded this project's code a few weeks ago (I still have those sources saved).
Now the scenario is:

[root@my-openshift-origin-master-0 origin]# oc describe pod hello-openshift | grep -i ip
IP: 10.1.1.2

[root@my-openshift-origin-master-0 origin]# curl http://10.1.1.2:8080
curl: (7) Failed connect to 10.1.1.2:8080; No route to host

[root@my-openshift-origin-master-0 origin]# ping 10.1.1.2
PING 10.1.1.2 (10.1.1.2) 56(84) bytes of data.
^C
--- 10.1.1.2 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

Exposing the service and adding the wildcard to an external DNS server does not let me reach the exposed service either. How can I reach that service externally if the master itself cannot?
Sorry for all the questions, but I need to understand!
Regards
Alessio Dini

failed invalid storage kind 'openstack' for component 'registry'

Ugh.

TASK: [openshift_facts | Verify Ansible version is greater than or equal to 1.9.4 and less than 2.0] ***
fatal: [ossdell-openshift-master-0.example.com] => Failed to template {% if persistent_volumes | length > 0 or persistent_volume_claims | length > 0 %} True {% else %} False {% endif %}: Failed to template {{ hostvars[groups.oo_first_master.0] | oo_persistent_volumes(groups) }}: |failed invalid storage kind 'openstack' for component 'registry'

FATAL: all hosts have already failed -- aborting

DNS "Unknown host" with exposed service

Hello all,
I know this could be a stupid issue, but I'm stuck at this point.
I deployed OpenShift on our OpenStack Kilo environment using this project.
Currently I have 1x master node, 1x infra node, and 2x nodes.

I created the hello-openshift service and pod, and exposed the service:

[root@my-openshift-origin-master-0 tmp]# oc get route
NAME HOST/PORT PATH SERVICE TERMINATION LABELS
hello-service hello-service-default.cloudapps.example.com hello-service name=hello-openshift

[root@my-openshift-origin-master-0 tmp]# oc describe svc hello-service
Name: hello-service
Namespace: default
Labels: name=hello-openshift
Selector: name=hello-openshift
Type: ClusterIP
IP: 172.30.174.183
Port: 8080/TCP
Endpoints: 10.1.0.2:8080
Session Affinity: None
No events.

[root@my-openshift-origin-master-0 tmp]# curl http://172.30.174.183:8080
Hello OpenShift! <-- works great

If I try to reach hello-service-default.cloudapps.example.com, I get:

[root@my-openshift-origin-master-0 tmp]# curl http://hello-service-default.cloudapps.example.com:8080
curl: (6) Could not resolve host: hello-service-default.cloudapps.example.com; Name or service not known

Looking at the DNS server:
[root@my-openshift-origin-master-0 tmp]# cat /etc/resolv.conf
nameserver 192.168.0.3
search example.com

192.168.0.3 is the infra host; connecting to it, I saw there is no entry for hello-service.
Am I missing something? Did I forget a step? What should I do?

Thanks for your time.
Alessio Dini
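The per-host records dnsmasq builds from /etc/hosts cannot express the route's wildcard domain (*.cloudapps.example.com), which is why the name never resolves. A hedged sketch of adding the wildcard on the infra host, pointing at the node where the OpenShift router runs (the IP is illustrative):

```shell
echo 'address=/cloudapps.example.com/192.168.0.4' >> /etc/dnsmasq.conf
systemctl restart dnsmasq
```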

heat stack-create error

Hi everyone. I use the Mitaka version of OpenStack and installed every service that is needed, but when I execute the command below:

heat stack-create my-openshift -t 180 -e openshift_parameters.yaml -f openshift-on-openstack/openshift.yaml -e openshift-on-openstack/env_loadbalancer_none.yaml

I get this error:
ERROR: Failed to validate: Failed to validate: resources[0]: The Resource Type (OOShift::ContainerPort) could not be found.

Please help me understand what my problem is.
Thanks
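As with the load balancer types, OOShift::ContainerPort is a template-local type that only exists once an environment file maps it in resource_registry, so make sure the environment file for your chosen SDN variant is also passed with -e. The target file name below is an assumption; check the repo's env_*.yaml files for the actual mapping:

```yaml
resource_registry:
  OOShift::ContainerPort: sdn_openvswitch.yaml
```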

Failing to create alarms

OSP8:

2016-06-27 21:35:01 [cpu_alarm_high]: CREATE_FAILED Unauthorized: resources.cpu_alarm_high: The request you have made requires authentication. (HTTP 401) (Request-ID: req-ee6bd860-de89-4f78-8985-89b07d6dffcd)
2016-06-27 21:35:02 [deloss]: CREATE_FAILED Resource CREATE failed: Unauthorized: resources.cpu_alarm_high: The request you have made requires authentication. (HTTP 401) (Request-ID: req-ee6bd860-de89-4f78-8985-89b07d6dffcd)

Architecture diagram needs to add the OpenShift "router" (HTTP reverse proxy)

Inbound connections to OpenShift apps must pass through an haproxy reverse proxy known as the "OpenShift router". This process accepts all inbound requests for the OpenShift apps and forwards them to the correct node using the Host: HTTP header and a map maintained by the OpenShift master.

The proxy IP address is the target of the DNS A Record for the application DNS "wildcard" domain.

The architecture diagram should include the OpenShift router and the architecture write-up should make reference to it and explain it.
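A small example of the mechanism, useful for the write-up: because routing is done on the Host header, an app can be reached through the router's address even before DNS is in place (the hostname reuses the hello-service examples elsewhere on this page; the router IP is a placeholder):

```shell
curl -H 'Host: hello-service-default.cloudapps.example.com' http://<router-ip>/
```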

cannot push to registry - network config?

Attempting a build:

I0606 21:01:57.350342       1 sti.go:267] Using provided push secret for pushing 172.30.163.90:5000/testp/cakephp-example:latest image
I0606 21:01:57.350401       1 sti.go:271] Pushing 172.30.163.90:5000/testp/cakephp-example:latest image ...
I0606 21:02:12.382211       1 sti.go:276] Registry server Address:
I0606 21:02:12.382251       1 sti.go:277] Registry server User Name: serviceaccount
I0606 21:02:12.382265       1 sti.go:278] Registry server Email: [email protected]
I0606 21:02:12.382278       1 sti.go:283] Registry server Password: <<non-empty>>
F0606 21:02:12.382294       1 builder.go:204] Error: build error: Failed to push image. Response from registry is: Put http://172.30.163.90:5000/v1/repositories/testp/cakephp-example/: dial tcp 172.30.163.90:5000: no route to host
[root@ossdell-openshift-master-0 ~]#

I'm able to curl the registry from the node it's on:

[root@ossdell-openshift-node-vm4047k3 ~]# curl -v http://172.30.19.92:5000
* About to connect() to 172.30.19.92 port 5000 (#0)
*   Trying 172.30.19.92...
* Connected to 172.30.19.92 (172.30.19.92) port 5000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 172.30.19.92:5000
> Accept: */*
>
< HTTP/1.1 200 OK
< Cache-Control: no-cache
< Date: Mon, 06 Jun 2016 19:10:20 GMT
< Content-Length: 0
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host 172.30.19.92 left intact

But I am not able to curl it from another node:

[root@ossdell-openshift-node-g0l4fj21 ~]# curl -v http://172.30.19.92:5000
* About to connect() to 172.30.19.92 port 5000 (#0)
*   Trying 172.30.19.92...
* No route to host
* Failed connect to 172.30.19.92:5000; No route to host
* Closing connection 0
curl: (7) Failed connect to 172.30.19.92:5000; No route to host
[root@ossdell-openshift-node-g0l4fj21 ~]#

I'm confused. Do I need another OpenStack router?

More info from the node that can't hit the Registry:

[root@ossdell-openshift-node-g0l4fj21 ~]# ip r
default via 192.168.0.1 dev eth0  proto static  metric 100
10.0.0.0/24 dev eth1  proto kernel  scope link  src 10.0.0.9  metric 100
10.1.0.0/16 dev tun0  proto kernel  scope link
172.30.0.0/16 dev tun0  scope link
192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.12  metric 100
[root@ossdell-openshift-node-g0l4fj21 ~]#
ip a: (just tun0)
10: tun0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1350 qdisc noqueue state UNKNOWN
    link/ether 26:8a:64:d5:9d:13 brd ff:ff:ff:ff:ff:ff
    inet 10.1.5.1/24 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::248a:64ff:fed5:9d13/64 scope link
       valid_lft forever preferred_lft forever
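
The routing table above does include the service-network route (172.30.0.0/16 via tun0), so "no route to host" between nodes usually points at the SDN overlay traffic (OVS VXLAN, UDP port 4789) being dropped by OpenStack security groups or a host firewall, rather than at a missing OpenStack router. As a first scripted check, here is a minimal sketch that just verifies the SDN routes are present on a node; `check_sdn_routes` is a hypothetical helper, not something shipped with this project:

```shell
# Minimal sketch: confirm a node's routing table has the OpenShift SDN routes
# (cluster network 10.1.0.0/16 and service network 172.30.0.0/16 on tun0).
# If these pass, suspect blocked VXLAN (UDP 4789) traffic between nodes instead.
check_sdn_routes() {
  routes="$1"
  echo "$routes" | grep -q '10\.1\.0\.0/16.*tun0' \
    || { echo "missing cluster network route"; return 1; }
  echo "$routes" | grep -q '172\.30\.0\.0/16.*tun0' \
    || { echo "missing service network route"; return 1; }
  echo "sdn routes ok"
}

# On a real node, pass "$(ip r)"; here we feed it the table from this report.
check_sdn_routes "10.1.0.0/16 dev tun0  proto kernel  scope link
172.30.0.0/16 dev tun0  scope link"
```

If the routes check out, the next thing to verify is that the OpenStack security group applied to the nodes allows UDP 4789 between cluster members.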

SSH Error: ssh: connect to host my-openshift-origin-node-v00ba4cf.example.com port 22: Connection timed out

Hello all,
I have this issue:
during the deployment of this project, the infra node maps hostnames to the wrong subnet in /etc/hosts.
The infra host has one interface on the 192.168.0.0/24 network, but /etc/hosts contains:

[root@my-openshift-infra centos]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.0.2 my-openshift-origin-master-0.example.com my-openshift-origin-master-0 #openshift
10.0.0.3 my-openshift-origin-master-1.example.com my-openshift-origin-master-1 #openshift
10.0.0.4 my-openshift-origin-node-44b91tzk.example.com my-openshift-origin-node-44b91tzk #openshift
10.0.0.5 my-openshift-origin-node-v00ba4cf.example.com my-openshift-origin-node-v00ba4cf #openshift

During the Ansible playbook execution I get a lot of errors due to SSH connection timeouts:

(...)
fatal: [my-openshift-origin-master-1.example.com] => SSH Error: ssh: connect to host my-openshift-origin-master-1.example.com port 22: Connection timed out
while connecting to 10.0.0.3:22
(...)
=> SSH Error: ssh: connect to host my-openshift-origin-node-v00ba4cf.example.com port 22: Connection timed out while connecting to 10.0.0.5:22

I can work around this by manually updating the /etc/hosts file during the Heat deploy, but this should be fixed.

Regards
Alessio Dini

could not install EPEL release 7-6

Hello all,
launching a new stack from this project I got this error:

| my-openshift | d6d97dda-af0d-49cd-a269-7aba87d3b61c | Resource CREATE failed: WaitConditionFailure: resources.infra_host.resources.wait_condition: could not install EPEL release 7-6 | CREATE_FAILED | 2016-06-22T10:24:11Z | my-openshift

Looking at fragments/infra-boot.sh, there are these lines:
EPEL_RELEASE_VERSION=7-6
EPEL_REPO_URL=http://dl.fedoraproject.org/pub/epel/7/x86_64

If you check http://dl.fedoraproject.org/pub/epel/7/x86_64/e/ there is only a 7-7 version, so the line posted above should be changed to:
EPEL_RELEASE_VERSION=7-7

Regards
Alessio Dini
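
The pinned version will break again every time upstream EPEL rolls a new release. A minimal sketch of a version-agnostic alternative, assuming the same mirror layout as the `EPEL_REPO_URL` in fragments/infra-boot.sh; `latest_epel_rpm` is a hypothetical helper that picks the newest epel-release RPM out of the mirror's HTML index:

```shell
# Hypothetical helper: extract the newest epel-release-7-* RPM name from an
# HTML directory listing, so no EPEL_RELEASE_VERSION needs to be hard-coded.
latest_epel_rpm() {
  grep -o 'epel-release-7-[0-9]*\.noarch\.rpm' | sort -V | tail -1
}

# On the infra host you would feed it the mirror index:
#   curl -s "$EPEL_REPO_URL/e/" | latest_epel_rpm
printf '%s\n' 'href="epel-release-7-6.noarch.rpm"' 'href="epel-release-7-7.noarch.rpm"' \
  | latest_epel_rpm
```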

Attempt to deploy OpenShift router causes stack-create to fail

When an OpenShift stack is created with the deploy_router switch set, the services deployment fails.

heat stack-create oshift -e openshift-on-openstack/env_origin.yaml -f openshift-on-openstack/openshift.yaml -P external_network=public -P flavor=m1.shift -P server_image=centos72 -P master_server_group_policies=affinity -P master_docker_volume_size_gb=6 -P node_docker_volume_size_gb=4 -P deploy_router=True

tail -f /var/log/ansible-services-16767.log 
TASK: [Evaluate oo_masters] *************************************************** 
fatal: [localhost] => with_items expects a list or a set

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/root/services.retry

localhost                  : ok=0    changed=0    unreachable=0    failed=1   


Mar 24 19:20:40 oshift-infra os-collect-config[9501]: + ansible-playbook --inventory /var/lib/ansible/inventory /var/lib/ansible/services.yml
Mar 24 19:20:40 oshift-infra os-collect-config[9501]: [2016-03-24 19:20:40,080] (heat-config) [ERROR] Error running /var/lib/heat-config/heat-config-script/8cbd5259-3e31-4c56-a14e-36d096caed00. [2]

https://github.com/redhat-openstack/openshift-on-openstack/blob/master/fragments/master-ansible.sh#L234-L248

No floating ip for openshift nodes

Hello all,
the project description says:

"Every VM in this configuration is also given a floating IP address on the public network. This provides direct access to the master and node hosts when needed."

When I deploy multiple nodes I see that only the master and infra VMs get floating IPs from OpenStack.
When I create the OpenShift router, it is scheduled on one of the available nodes, but those nodes are not reachable from the external network because they have no floating IPs. How am I supposed to reach exposed services by pointing at the master node's floating IP when the router is deployed on other nodes?
Can someone help me? I'm a bit confused

I attach a couple of screenshots from my Horizon:
[screenshot: OpenShift master and infra hosts]
[screenshot: OpenShift nodes]

heat create fails; ERROR: cannot find role in /var/lib/ansible/playbooks/roles/openshift_router

openshift ansible fails with..

[root@my_openshift5-infra centos]# cat /var/log/ansible.28990
ERROR: cannot find role in /var/lib/ansible/playbooks/roles/openshift_router or /var/lib/ansible/playbooks/openshift_router or /usr/share/ansible/openshift-ansible/roles/openshift_router

Not sure yet whether this is openshift-ansible or openshift-on-openstack, but I've done a bit of googling and nothing comes up, so I'm raising it here first.

Resource Details: infra_host

Hi, I'm using your instruction, and in this step

heat stack-create my_openshift \
-e openshift_parameters.yaml \
-P master_count=1 \
-f openshift-on-openstack/openshift.yaml \
-e openshift-on-openstack/env_loadbalancer_none.yaml

the output in the command terminal was "CREATE_IN_PROGRESS", but in my OpenStack environment, under the stack's Resources section, the infra_host resource shows this error:

Create Failed: ResourceInError: resources.infra_host.resources.docker_volume: Went to status error due to "Unknown"

Can you help me understand what my problem is?
Thanks for your reply.

Expecting to find username or userId in passwordCredentials

When running heat create I get the following error..

| stack_status_reason   | Resource DELETE failed: BadRequest: resources.openshift                                                                       |
|                       | _masters.resources[0].resources.docker_volume_attach:                                                                         |
|                       | Expecting to find username or userId in                                                                                       |
|                       | passwordCredentials - the server could not comply with                                                                        |
|                       | the request since it is either malformed or otherwise                                                                         |
|                       | incorrect. The client is assumed to be in error. (HTTP                                                                        |
|                       | 400)

I'm pretty sure authentication with the OpenStack components is OK. Is this likely to be related to my OpenStack setup, to openshift-on-openstack, or to openshift-ansible?

infra cloud init fails, hanging stack-create

Does all of cloud-init fail if a few of the scripts fail? I'm running the latest rhel*.qcow2.

May 13 17:38:00 localhost cloud-init: Loaded plugins: search-disabled-repos
May 13 17:38:01 localhost cloud-init: No package centos-release-openstack-liberty available.
May 13 17:38:01 localhost cloud-init: Error: Nothing to do
May 13 17:38:01 localhost cloud-init: 2016-05-13 17:38:01,254 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-010 [1]

Is the above failure fatal? Is it because I'm specifying a pool-id that might not include a repo containing the centos-release-openstack-liberty package?

May 13 17:38:01 localhost cloud-init: /var/lib/cloud/instance/scripts/part-011: line 102: os-collect-config: command not found
May 13 17:38:01 localhost cloud-init: 2016-05-13 17:38:01,285 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-011 [127]

Is the above fatal as a result of the error in part-010?

The install is OK:

May 13 17:38:45 localhost cloud-init: + notify_success 'OpenShift node has been prepared for running ansible.'
May 13 17:38:45 localhost cloud-init: + curl -i -X POST -H 'X-Auth-Token: d70a0286a8344fa8a6c49d8712da512d' -H 'Content-Type: application/json' -H 'Accept: application/json' http://192.168.190.125:8004/v1/ad3598906ade4a15986530d2d518a77c/stacks/openshift-infra_host-qggq6xaqtdvy/2027bf0d-4f19-4f32-91c8-09aed34062de/resources/wait_handle/signal --data-binary '{"status": "SUCCESS", "reason": "OpenShift node has been prepared for running ansible.", "data": "OpenShift node has been prepared for running ansible."}'
May 13 17:38:45 localhost cloud-init: % Total % Received % Xferd Average Speed Time Time Time Current
May 13 17:38:45 localhost cloud-init: Dload Upload Total Spent Left Speed
May 13 17:38:45 localhost cloud-init: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0#015100 157 100 4 100 153 13 505 --:--:-- --:--:-- --:--:-- 506#015100 157 100 4 100 153 13 504 --:--:-- --:--:-- --:--:-- 506
May 13 17:38:45 localhost cloud-init: HTTP/1.1 200 OK
May 13 17:38:45 localhost cloud-init: Content-Type: application/json; charset=UTF-8
May 13 17:38:45 localhost cloud-init: Content-Length: 4
May 13 17:38:45 localhost cloud-init: X-Openstack-Request-Id: req-916d33fa-1515-496b-b9d1-b1bfd6767d12
May 13 17:38:45 localhost cloud-init: Date: Fri, 13 May 2016 21:38:45 GMT
May 13 17:38:45 localhost cloud-init: null+ exit 0

but cloud-init still seems very sad:

May 13 17:38:45 localhost cloud-init: 2016-05-13 17:38:45,728 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
May 13 17:38:45 localhost cloud-init: 2016-05-13 17:38:45,733 - util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/site-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
May 13 17:38:45 localhost cloud-init: 2016-05-13 17:38:45,831 - templater.py[WARNING]: Cheetah not available as the default renderer for unknown template, reverting to the basic renderer.
May 13 17:38:45 localhost cloud-init: Cloud-init v. 0.7.6 finished at Fri, 13 May 2016 21:38:45 +0000. Datasource DataSourceOpenStack [net,ver=2]. Up 331.20 seconds

I'm going to run it again without a pool-id, to see if the code just magically does the right thing.

Support for 3.1

It seems that only 3.0 is currently supported; it would be nice to have support for 3.1.

Smoke testing the deployment?

What are folks using to ensure that the deployment went well? Are there any documented or automated procedures to verify master and etcd replication, registry and router setup and configuration, and the ability to deploy apps?
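
As far as I know no smoke-test suite ships with this project, but the node-health part is easy to script from standard `oc` output. A sketch, where `not_ready_nodes` is a hypothetical helper:

```shell
# Hypothetical smoke check: print any node whose STATUS column is not Ready
# in `oc get nodes --no-headers` output (masters show Ready,SchedulingDisabled).
not_ready_nodes() {
  awk '$2 !~ /^Ready/ {print $1}'
}

# On a master: oc get nodes --no-headers | not_ready_nodes
printf '%s\n' \
  'master-0.example.com  Ready,SchedulingDisabled  1h' \
  'node-1.example.com    Ready                     1h' \
  'node-2.example.com    NotReady                  1h' \
  | not_ready_nodes
```

For the rest, `etcdctl cluster-health` on a master and `oc get pods -n default` (router and registry pods Running) cover the replication and services questions, followed by deploying a sample app end to end.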

resources.external_router: No more IP addresses

I'm trying to use the openshift-on-openstack scripts, but I'm running into the issue below after running heat stack-create. I'm an OpenStack novice so I'm sure it's down to my setup or config, but I'm not sure what. I've checked, and I have 5 free floating IPs on the external network that I specify.
Any ideas or pointers on where to start looking?

heat stack-create my_openshift -t 180 -e env.yaml -e openshift-on-openstack/env_single.yaml -f openshift-on-openstack/openshift.yaml

then checking the status with..

heat stack-show my_openshift
| parent                | None                                                                                                                                                                                                   |
| stack_name            | my_openshift                                                                                                                                                                                           |
| stack_owner           | admin                                                                                                                                                                                                  |
| stack_status          | CREATE_FAILED                                                                                                                                                                                          |
| stack_status_reason   | Resource CREATE failed:                                                                                                                                                                                |
|                       | IpAddressGenerationFailureClient:                                                                                                                                                                      |
|                       | resources.external_router: No more IP addresses                                                                                                                                                        |
|                       | available on network                                                                                                                                                                                   |
|                       | 2ac90f7c-d409-4a80-8392-c333b1bf050e.                                                                                                                                                                  |
| stack_user_project_id | ae945c26a4464612a76a17c7af18dc7e                                                                                                                                                                       |
| template_description  | Deploy Atomic/OpenShift 3 on OpenStack.                                                                                                                                                                |
| timeout_mins          | 180                                                                                                                                                                                                    |
| updated_time          | None                                                                                                                                                                                                   |                                                                                                                                                                              

This is the config I'm using...

cat << EOF > env.yaml
parameters:
  ssh_key_name: jr-mac
  server_image: centos72
  flavor: m1.medium
  external_network: external_network
  dns_nameserver: 8.8.4.4,8.8.8.8
  node_count: 2
  rhn_username: "myusername"
  rhn_password: "****"
  rhn_pool: '****'
  deployment_type: origin
  domain_name: "origin.fabric8.io"
  dns_hostname: "ns"
  master_hostname_prefix: "origin-master"
  node_hostname_prefix: "origin-node"
  ssh_user: centos
  master_docker_volume_size_gb: 45
  node_docker_volume_size_gb: 45
  deploy_router: False
  deploy_registry: False
  openshift_ansible_git_url: https://github.com/openshift/openshift-ansible.git
  openshift_ansible_git_rev: master
EOF
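
"No more IP addresses available" comes from Neutron and means the external network's subnet allocation pool is exhausted, which is different from your project's floating-IP quota (the "5 free floating IPs" you see). It is worth comparing the pool's size (the allocation_pools field of `neutron subnet-show`) against everything allocated from it across all tenants, including router gateway ports. A small sketch for the arithmetic; `pool_size` is a hypothetical helper:

```shell
# Hypothetical helper: number of IPv4 addresses in an allocation pool,
# given its start and end address (as shown in allocation_pools).
pool_size() {
  set -- $(echo "$1 $2" | tr '.' ' ')
  start=$(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
  end=$((   ($5 << 24) + ($6 << 16) + ($7 << 8) + $8 ))
  echo $(( end - start + 1 ))
}

pool_size 192.168.100.10 192.168.100.19
```

Note that this stack consumes a pool address for the external router gateway as well as one floating IP per master, infra, and bastion host.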

router reliably fails to create pod

It might be out of scope for this project, but every time I install, there's a router without a pod:

[cloud-user@ossdell-openshift-master-0 ~]$ oc get nodes
NAME                                          STATUS                     AGE
ossdell-openshift-master-0.example.com        Ready,SchedulingDisabled   1h
ossdell-openshift-master-1.example.com        Ready,SchedulingDisabled   1h
ossdell-openshift-node-45j22uvs.example.com   Ready                      1h
ossdell-openshift-node-9g7fs5b4.example.com   Ready                      1h
ossdell-openshift-node-v8r3iny2.example.com   Ready                      1h
[cloud-user@ossdell-openshift-master-0 ~]$ oc status
In project default on server https://ossdell-lb.example.com:8443

svc/kubernetes - 172.30.0.1 ports 443, 53, 53

svc/router - 172.30.15.96 ports 80, 443, 1936
  dc/router deploys docker.io/openshift3/ose-haproxy-router:v3.2.0.44
    deployment #1 deployed about an hour ago - 0 pods

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
[cloud-user@ossdell-openshift-master-0 ~]$ oc get all
NAME         REVISION       REPLICAS      TRIGGERED BY
router       1              0             config
NAME         DESIRED        CURRENT       AGE
router-1     0              0             1h
NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)                   AGE
kubernetes   172.30.0.1     <none>        443/TCP,53/UDP,53/TCP     1h
router       172.30.15.96   <none>        80/TCP,443/TCP,1936/TCP   1h
[cloud-user@ossdell-openshift-master-0 ~]$
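
For a deployment stuck at 0 pods, the usual first steps are standard `oc` commands from the 3.2-era CLI, nothing specific to this project; a sketch:

```shell
oc describe dc/router -n default       # latest deployment status and causes
oc get events -n default               # scheduling or image-pull failures
oc deploy router --latest -n default   # retry the deployment once fixed
```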

is_containerized failing error

Just did a fresh pull and merge. Got this at the end of the ansible run...

PLAY [Copy resolv.conf on infra host] *****************************************

GATHERING FACTS ***************************************************************
<localhost> REMOTE_MODULE setup
<localhost> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1464737061.95-106371402179608 && echo $HOME/.ansible/tmp/ansible-tmp-1464737061.95-106371402179608']
<localhost> PUT /tmp/tmp2HbgRo TO /root/.ansible/tmp/ansible-tmp-1464737061.95-106371402179608/setup
<localhost> EXEC ['/bin/sh', '-c', u'LANG=C LC_CTYPE=C /usr/bin/python /root/.ansible/tmp/ansible-tmp-1464737061.95-106371402179608/setup; rm -rf /root/.ansible/tmp/ansible-tmp-1464737061.95-106371402179608/ >/dev/null 2>&1']
ok: [localhost]

TASK: [copy ] *****************************************************************
fatal: [localhost] => error while evaluating conditional: openshift.common.is_containerized | bool

FATAL: all hosts have already failed -- aborting

Any ideas?

How to restart openshift-ansible step: OS::Heat::SoftwareDeployment?

My install failed on the docker downgrade issue. I worked around it on the nodes, and I'll wait for the errata package release from Red Hat to allow the downgrade or swap.

However, I'd like to not have to rebuild everything.

How can I trigger heat (or ansible) to restart and finish the heat stack-create?

Let's add the answer to the debugging doc. #121
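
Based on the paths that show up in the other logs in this tracker (an assumption about your deployment; verify the exact playbook name in the os-collect-config log or under /var/lib/heat-config), the SoftwareDeployment ultimately just runs ansible-playbook on the infra host, so it can often be re-run by hand without rebuilding the stack:

```shell
# On the infra host, re-run the command os-collect-config logged, e.g.:
ansible-playbook --inventory /var/lib/ansible/inventory /var/lib/ansible/services.yml
```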
