
openstack-on-lxd's Introduction

Overview

An all-in-one single-machine OpenStack cloud can be useful for developing OpenStack projects and OpenStack Charms, or for exercising and testing them.

This repository provides resources and processes to support deployment of OpenStack in LXD containers using Juju, a powerful application modelling tool.

Such a process is useful in developer scenarios where OpenStack can be deployed to a single laptop or server, provided of course that enough resources are available on the host machine.

These bundles, configurations, and processes can be customised to fit numerous development or test scenarios.

Requirements

Given that the entire cloud will be running on a single machine, that chosen machine (henceforth known as the "host") is expected to be well resourced in terms of CPU, memory, and storage. The resources available to the host will dictate the deploy time of the cloud.

The specifications below are considered sufficient to deploy the cloud itself (workloads not included):

  • 16 GiB memory
  • 4 CPU cores (an Intel i5-class processor at minimum)

It is important to have a fast disk subsystem. This can be achieved in various ways:

  • dedicated SSD block device
  • traditional RAID array
  • ZFS pool backed by multiple block devices
  • btrfs array backed by multiple block devices

Known limitations

Currently it is not possible to run Cinder with iSCSI/LVM based storage under LXD. This limits block storage solutions to those that reside within userspace, such as Ceph.

Networking environment

This section describes the networking environment that will be used in this example cloud.

The LXD network definition is summarised in this table:

Parameter            Value                    Comment
-------------------  -----------------------  ---------------------------
LXD bridge name      lxdbr0                   Can also denote the network
LXD network address  10.0.8.0/24              --
LXD bridge address   10.0.8.1/24              --
LXD DHCP range       10.0.8.2 -> 10.0.8.200   Cloud node addresses

Other network parameters:

  • The OpenStack floating IP range is 10.0.8.201 -> 10.0.8.254.

  • The OpenStack internal network address is 192.168.20.0/24 and its IP range is 192.168.20.10 -> 192.168.20.99.

  • IPv6 will be disabled on the container DHCP network (the undercloud) as it can interfere with the OpenStack network (the overcloud).

  • Jumbo frames will be enabled for network connections into the containers. This will help avoid packet fragmentation type problems that can occur with overlay networks (overcloud and undercloud).

Host

Install the software

Install Juju and the OpenStack CLI clients on the host:

sudo snap install juju --classic
sudo snap install openstackclients --classic

Install ZFS if you will be using it to manage pools outside of LXD:

sudo apt install zfsutils-linux

Note: On Bionic, LXD is installed by default via apt packages, but the snap is recommended. Provided you are not currently using the apt-based LXD, install the snap and remove the apt packages:

sudo snap install lxd
sudo apt purge liblxc1 lxcfs lxd lxd-client

The snap also includes a tool to migrate containers over from the apt-based deployment: sudo lxd.migrate. Once done it will offer to remove the old software.

Note: Ubuntu releases that are more recent than Bionic ship with LXD installed as a snap. There is nothing to do regarding LXD installation on these releases.

Download this repository:

git clone https://github.com/openstack-charmers/openstack-on-lxd.git ~/openstack-on-lxd

Set kernel options

OpenStack on LXD requires many thousands of file handles and the default kernel thresholds should be increased accordingly. Not doing so may lead to issues such as "too many open files". Kernel options should therefore be set as per the LXD production setup guide, specifically those related to the /etc/sysctl.conf file. Note that swap usage will also be turned down to a very low level.

Tip: Make a copy of file /etc/sysctl.conf before making changes so you can easily revert back to the original configuration.
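
For example (the .orig suffix is just a convention):

# keep a pristine copy of the original kernel settings
sudo cp /etc/sysctl.conf /etc/sysctl.conf.orig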

Change the kernel's behaviour in real-time like this:

echo fs.inotify.max_queued_events=1048576 | sudo tee -a /etc/sysctl.conf
echo fs.inotify.max_user_instances=1048576 | sudo tee -a /etc/sysctl.conf
echo fs.inotify.max_user_watches=1048576 | sudo tee -a /etc/sysctl.conf
echo vm.max_map_count=262144 | sudo tee -a /etc/sysctl.conf
echo vm.swappiness=1 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

LXD

Configure LXD

LXD needs to be initialised and configured:

lxd init --auto

If the above fails, ensure your user is recognised as a member of the 'lxd' group by issuing the newgrp lxd command.
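
If your user is not yet a member of that group, the following should add it and refresh the current session (assuming a sudo-capable user):

# add the current user to the 'lxd' group and pick up the new membership
sudo adduser $USER lxd
newgrp lxd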

Note: An interactive user session will result if the --auto option is omitted.

Configure the LXD network as described earlier:

lxc network set lxdbr0 ipv4.address 10.0.8.1/24
lxc network set lxdbr0 ipv4.dhcp.ranges 10.0.8.2-10.0.8.200
lxc network set lxdbr0 bridge.mtu 9000
lxc network unset lxdbr0 ipv6.address
lxc network unset lxdbr0 ipv6.nat

The third command enables Jumbo frames for the host's lxdbr0 bridge. We will later configure Jumbo frames for containers by updating the LXD profile that Juju will use when creating them.
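
As a quick check, confirm the new MTU on the bridge:

lxc network get lxdbr0 bridge.mtu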

Optionally set up a ZFS storage backend. For example, to do this for a pool called 'lxd-zfs' that spans three unused block devices:

sudo zpool create lxd-zfs sdb sdc sdd
lxc storage create lxd-zfs zfs source=lxd-zfs
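
As a sanity check, the new pool should be visible to both ZFS and LXD:

zpool status lxd-zfs
lxc storage show lxd-zfs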

The LXD network configuration can be viewed with the command lxc network show lxdbr0.

Verify LXD

It is recommended to verify that LXD itself is in good working order before continuing. Do this by creating a test container ('focal-1'), issuing a remote command on it, and then removing the container.

lxc launch ubuntu-daily:focal focal-1
lxc exec focal-1 whoami
lxc delete -f focal-1

Juju

Create the Juju controller

Create a Juju controller based on the 'lxd' cloud type to manage the deployment:

juju bootstrap localhost lxd

This will also create the model 'default' and the corresponding LXD profile 'juju-default'. These will respectively be used to contain and configure the cloud containers.
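
As a quick check, both should now be listed:

juju models
lxc profile list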

Tip: An APT proxy, such as squid-deb-proxy, can be used to improve cloud installation performance. Define the proxy setting for the 'default' model with the command juju model-config -m default apt-http-proxy=http://<host>:<port>. See the Juju proxy documentation for guidance.

Update the LXD cloud container profile

Update the 'juju-default' profile with the help of file lxd-profile.yaml provided by the repository downloaded earlier:

cd ~/openstack-on-lxd
cat lxd-profile.yaml | lxc profile edit juju-default

This will ensure that the containers will have the permissions they need for a successful OpenStack deployment. It will also complete the enablement of Jumbo frames for the containers.

You will also need to update this profile if you are using ZFS. In this example deployment, the 'lxd-zfs' pool was previously set up:

lxc profile device set juju-default root pool=lxd-zfs

The resulting profile can be viewed with the command lxc profile show juju-default.

Note: There is nothing special about the Juju 'default' model nor the LXD 'juju-default' profile. For instance, you can create the model 'victoria' and then update the auto-created profile with juju add-model victoria and cat lxd-profile.yaml | lxc profile edit juju-victoria.

OpenStack

Select a bundle

The bundles are located in the ~/openstack-on-lxd directory. Choose one that is appropriate for the host's architecture.

For amd64, arm64, and ppc64el the bundle filenames are of this format:

bundle-<ubuntu-series>-<openstack-release>.yaml

For s390x the bundle filenames have the '-s390x' suffix appended:

bundle-<ubuntu-series>-<openstack-release>-s390x.yaml

There are also some OVN-specific bundles.
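
To see exactly what is available, list the bundle files in the repository cloned earlier:

ls ~/openstack-on-lxd/bundle-*.yaml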

As an example, if the host is amd64 and we want to deploy OpenStack Victoria running on Focal containers the following bundle will be selected:

bundle-focal-victoria.yaml

Important: Starting with OpenStack Train, Ceph Mimic will be used in the bundles until a solution has been devised to address directory-backed OSD support being dropped in Ceph Nautilus. See bug GH #72.

Deploy the cloud

Deploy the cloud now. Using our above example:

cd ~/openstack-on-lxd
juju deploy ./bundle-focal-victoria.yaml

You can watch deployment progress with the command watch -n 5 -c juju status --color. This will take a while to complete.

It is normal for the ceilometer application to be blocked at the end of the process. Overcome this with an action:

juju run-action --wait ceilometer/0 ceilometer-upgrade

At this time it is recommended to verify that you can successfully query the cloud's resources. Begin by sourcing the supplied init file:

source openrcv3_project
openstack catalog list
openstack service list
openstack network agent list
openstack volume service list

Configure OpenStack

Import an image

You'll need to import a boot image into Glance in order to create instances. The image architecture should match that of the host. Here we import a Focal amd64 image and call it 'focal-amd64':

curl http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img | \
   openstack image create --public --container-format=bare --disk-format=qcow2 \
   focal-amd64

Images for other Ubuntu releases and architectures can be obtained in a similar way, but for the ARM 64-bit (arm64) architecture you will need to configure the image to boot in UEFI mode:

curl http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-arm64.img | \
   openstack image create --public --container-format=bare --disk-format=qcow2 \
   --property hw_firmware_type=uefi focal-arm64
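
In either case, a successful import can be confirmed by listing the images:

openstack image list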

Note: If you are using a ZFS storage backend, the nova-compute charm's force-raw-images option is automatically disabled for OpenStack Pike and later. Be aware that using this setting in a production environment is discouraged as it may have an impact on performance.

Configure the network

First, create the external network 'ext_net' and external subnet 'ext_subnet' which map directly to the LXD bridge:

openstack network create ext_net --external --share --default \
   --provider-network-type flat --provider-physical-network physnet1

openstack subnet create ext_subnet --allocation-pool start=10.0.8.201,end=10.0.8.254 \
   --subnet-range 10.0.8.0/24 --no-dhcp --gateway 10.0.8.1 --network ext_net

Then create the internal network 'int_net' and internal subnet 'int_subnet' for the instances to attach to:

openstack network create int_net --internal

openstack subnet create int_subnet \
   --allocation-pool start=192.168.20.10,end=192.168.20.99 \
   --subnet-range 192.168.20.0/24 \
   --gateway 192.168.20.1 --dns-nameserver 10.0.8.1 \
   --network int_net

Finally, connect the internal and external networks by means of router 'router1':

openstack router create router1
openstack router add subnet router1 int_subnet
openstack router set router1 --external-gateway ext_net
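
As a quick check, the resulting topology can be inspected with:

openstack router show router1
openstack network list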

Create a flavor

Create at least one flavor to define a hardware profile for new instances. Here we create one called 'm1.tiny':

openstack flavor create --public --ram 512 --disk 5 --ephemeral 0 --vcpus 1 m1.tiny

Import an SSH keypair

An SSH keypair needs to be imported into the cloud in order to access your instances.

Generate one first if you do not yet have one. This command creates a passphraseless keypair (remove the -N '' option to be prompted for a passphrase):

ssh-keygen -q -N '' -f ~/.ssh/id_mykey

To import a keypair called 'mykey':

openstack keypair create --public-key ~/.ssh/id_mykey.pub mykey

Configure security groups

Allow ICMP (ping) and SSH traffic to flow to cloud instances by creating corresponding rules for each default security group:

for i in $(openstack security group list | awk '/default/{ print $2 }'); do
    openstack security group rule create $i --protocol icmp --remote-ip 0.0.0.0/0;
    openstack security group rule create $i --protocol tcp --remote-ip 0.0.0.0/0 --dst-port 22;
done

You only need to perform this step once.

Use OpenStack

Create an instance

Note: For OpenStack on LXD, if the host is PowerNV (ppc64el) you will need to disable SMT (simultaneous multithreading) manually before creating instances:

juju ssh nova-compute/0 sudo ppc64_cpu --smt=off

Create a Bionic instance called 'bionic-1' using the 'bionic-amd64' image (a Bionic image imported in the same way as the Focal images above) and the 'm1.tiny' flavor:

NET_ID=$(openstack network show int_net -f value -c id)
openstack server create --image bionic-amd64 --flavor m1.tiny --key-name mykey \
   --network=$NET_ID bionic-1
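
The build can be monitored with the commands below; the instance's status should eventually change from BUILD to ACTIVE:

openstack server list
openstack server show bionic-1 -c status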

Attach a volume

This step is optional.

To create a 10GiB volume called 'vol-10g' in Cinder and attach it to instance 'bionic-1':

openstack volume create --size=10 vol-10g
openstack server add volume bionic-1 vol-10g
openstack volume show vol-10g

The volume becomes immediately available to the instance. It will, however, need to be formatted and mounted before use, as sketched below.
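
A minimal sketch from within the instance, assuming the volume shows up as /dev/vdb (check with lsblk first):

# inside the instance: format and mount the new volume
sudo mkfs.ext4 /dev/vdb
sudo mkdir -p /mnt/vol-10g
sudo mount /dev/vdb /mnt/vol-10g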

Assign a floating IP address

Request a floating IP address and assign it to instance 'bionic-1':

FLOATING_IP=$(openstack floating ip create -f value -c floating_ip_address ext_net)
openstack server add floating ip bionic-1 $FLOATING_IP
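
Verify the assignment by inspecting the instance's addresses:

openstack server show bionic-1 -c addresses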

Log in to an instance

Log in to an instance by connecting to its floating IP address:

ssh -i ~/.ssh/id_mykey ubuntu@$FLOATING_IP

Troubleshooting

Here are a few troubleshooting tips if the SSH connection fails:

  • Ensure that the instance has booted correctly with openstack console log show <instance-name>.

  • Ensure that the metadata service is running with openstack network agent list.

Access the dashboards

There are two web UIs available out of the box. These are the OpenStack dashboard and the Juju dashboard.

OpenStack dashboard

To access the OpenStack dashboard you'll need to determine its IP address and the admin user's credentials. These two commands will provide them, respectively:

juju status openstack-dashboard | grep -A1 'Public address'
juju run --unit keystone/leader 'leader-get admin_passwd'

Our example cloud yields an address of '10.0.8.69'.

Point your browser at the below URL and use the credentials (use your own IP address):

http://10.0.8.69/horizon

domain:  admin_domain
user:  admin
password:  ??????????

If the host is remote you can use SSH local port forwarding to access it (use your own IP address):

ssh -N -L 10080:10.0.8.69:80 <remote-host>

The URL then becomes: http://localhost:10080/horizon

Juju dashboard

To access the Juju dashboard you'll need to determine its URL and credentials. Do so like this:

juju dashboard

Our example cloud shows:

Dashboard 0.1.7 for controller "lxd" is enabled at:
  https://10.0.8.18:17070/dashboard
Your login credential is:
  username: admin
  password: 86f650892c26180a6bf2a116fb7df486

If the host is remote you can use SSH local port forwarding to access it (use your own IP address):

ssh -N -L 10070:10.0.8.18:17070 <remote-host>

The URL then becomes: https://localhost:10070/dashboard

openstack-on-lxd's People

Contributors

andrewdmcleod, arif-ali, barryprice, chrismacnaughton, fnordahl, gabriel-samfira, jamesbeedy, javacruft, lourot, marosg42, mgaruccio, natalytvinova, nobuto-m, rgildein, rodrigogansobarbieri, ryan-beisner, sahid, schkovich, sfeole, thedac, wolsen


openstack-on-lxd's Issues

Deprecation warnings

./neutron-ext-net-ksv3 --network-type flat \
>     -g 10.0.8.1 -c 10.0.8.0/24 \
>     -f 10.0.8.201:10.0.8.254 ext_net
/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py:179: UserWarning: Using keystoneclient sessions has been deprecated. Please update your software to use keystoneauth1.
  warnings.warn('Using keystoneclient sessions has been deprecated. '

bionic-queens - ceph-mon, ceph-osd, and designate units fail to start

When deploying the bionic-queens bundle, ceph-mon and ceph-osd fail to start and show a juju status of 'hook failed: install'. This seems to be due to attempting to pull a xenial source on a bionic OS. I tried changing the source to cloud:bionic-queens but that started generating an error that the source did not exist. Ultimately, removing the option entirely fixed the problem.

Designate also failed to start. After further research I found that this is due to designate requiring a nameservers option in queens and above. Adding the option using the example text provided by the designate charm allows the deployment to continue. It's not 100% clear to me whether this option expects the addresses of existing nameservers or whether it names the nameservers being created, but it at least allows the rest of the deployment to function.

PR for both of these changes incoming

Two instances launched from the same image do not get an IP

I have two images, bionic-lxd and xenial-lxd. When creating two instances from one of these images, each attached to an internal interface, only one instance gets an IP:
+-------------------+---------+------------------+------+------------+-----------+
| NAME              | STATE   | IPV4             | IPV6 | TYPE       | SNAPSHOTS |
+-------------------+---------+------------------+------+------------+-----------+
| instance-00000008 | RUNNING | 10.5.5.10 (eth0) |      | PERSISTENT | 0         |
+-------------------+---------+------------------+------+------------+-----------+
| instance-0000000c | RUNNING |                  |      | PERSISTENT | 0         |
+-------------------+---------+------------------+------+------------+-----------+

When creating the two instances from different images, both get an IP and work:
+-------------------+---------+------------------+------+------------+-----------+
| NAME              | STATE   | IPV4             | IPV6 | TYPE       | SNAPSHOTS |
+-------------------+---------+------------------+------+------------+-----------+
| instance-00000008 | RUNNING | 10.5.5.10 (eth0) |      | PERSISTENT | 0         |
+-------------------+---------+------------------+------+------------+-----------+
| instance-0000000c | RUNNING | 10.5.5.8 (eth0)  |      | PERSISTENT | 0         |
+-------------------+---------+------------------+------+------------+-----------+

When attaching an external interface, no instance gets an IP in any case, whether using the same or different images.
In all the above scenarios, Horizon shows everything as fine.

Docs: Add note for setting "export OS_AUTH_URL=http://<....>" when no ssl specified

When following these docs up to the step https://docs.openstack.org/charm-guide/latest/openstack-on-lxd.html#deploy-the-cloud
I loaded the environment with source openrcv3_project, but when running a client command such as openstack catalog list I get this error:

Failed to discover available identity versions when contacting https://10.170.140.112:5000/v3. Attempting to parse version from URL.
SSL exception connecting to https://10.170.140.112:5000/v3/auth/tokens: HTTPSConnectionPool(host='10.170.140.112', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'ssl3_get_record', 'wrong version number')],)",),))

Maybe the docs should also be updated for this error case.
OpenStack is fully running on my single machine; how can I fix this SSL error?

Missing memcached unit

When deploying bundle-pike (I think it would be the same for all releases), the designate unit stays in a blocked state because it is missing coordinator-memcached.

eth1 should not get IP

The LXD profile is changed to add eth1, but then both interfaces (eth0 + eth1) get IPs. Sometimes (randomly) Juju will pick up the address of the interface that does not have the default gateway, so charms will expect connections from one of the IPs (the one listed in the machines list) while the actual connection goes out with the other IP and is denied, breaking the deployment.

One such example is mysql-innodb-cluster that often gets the error:

Failed configuring instance 10.0.8.168: Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
Traceback (most recent call last):
  File "<string>", line 1, in <module>
mysqlsh.DBError: MySQL Error (1130): Dba.configure_instance: Host '10.0.8.155' is not allowed to connect to this MySQL server

just because it got the IP from the wrong interface. Another case is that neutron-gateway will take eth1 for its provider network, but half the time the IP being used by Juju is that one, so the Juju agent is lost after the neutron-gateway charm completes deployment.

I would suggest turning off DHCP for the second interface using cloud-init on the profile:

  user.network-config: |
    version: 2
    ethernets:
      eth0:
        dhcp4: true
      eth1:
        dhcp4: false
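
One way to apply such a snippet would be something like the following, where network-config.yaml is a hypothetical file holding the network config shown above (the part under the user.network-config key):

# inject the cloud-init network config into the profile Juju uses
lxc profile set juju-default user.network-config "$(cat network-config.yaml)"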

Ceph-OSD errors

This install was working but now it has stopped. Every time I install, the ceph-osd units error.

Results of "ceph status" on the OSDs:

2018-06-19 14:27:57.194686 7fd4c4883700 -1 Errors while parsing config file!
2018-06-19 14:27:57.194733 7fd4c4883700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2018-06-19 14:27:57.194735 7fd4c4883700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2018-06-19 14:27:57.194736 7fd4c4883700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)

Fail to deploy cs:mysql-55

MySQL in bundle-ppc64el.yaml cannot be deployed.

The error log in the LXC machine:
/var/lib/juju/tools/machine-0/jujud: error while loading shared libraries: libgo.so.5: cannot open shared object file: No such file or directory
/var/lib/juju/tools/machine-0/jujud: error while loading shared libraries: libgo.so.5: cannot open shared object file: No such file or directory
/var/lib/juju/tools/machine-0/jujud: error while loading shared libraries: libgo.so.5: cannot open shared object file: No such file or directory

My environment:
ppc64le Xenial VM, 2.0-beta4-xenial-ppc64el

Any suggestions? Thanks

ceph-radosgw in error after deploy

Observed on pike and ocata. After deploy, when all units are active, ceph-radosgw is in error, reporting
hook failed: "identity-service-relation-changed" for keystone:identity-service

unit-radosgw.log shows
2017-12-27 17:14:19 DEBUG identity-service-relation-changed Site openstack_https_frontend already disabled
2017-12-27 17:14:19 DEBUG identity-service-relation-changed apache2.service is not active, cannot reload.
2017-12-27 17:14:19 DEBUG identity-service-relation-changed /usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py:182: UserWarning: Using keystoneclient sessions has been deprecated. Please update your software to use keystoneauth1.
2017-12-27 17:14:19 DEBUG identity-service-relation-changed warnings.warn('Using keystoneclient sessions has been deprecated. '
2017-12-27 17:14:40 DEBUG identity-service-relation-changed Traceback (most recent call last):
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/identity-service-relation-changed", line 388, in <module>
2017-12-27 17:14:40 DEBUG identity-service-relation-changed hooks.execute(sys.argv)
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/core/hookenv.py", line 798, in execute
2017-12-27 17:14:40 DEBUG identity-service-relation-changed self._hooks[hook_name]()
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/utils.py", line 1896, in wrapped_f
2017-12-27 17:14:40 DEBUG identity-service-relation-changed restart_functions)
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/core/host.py", line 728, in restart_on_change_helper
2017-12-27 17:14:40 DEBUG identity-service-relation-changed r = lambda_f()
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/utils.py", line 1895, in <lambda>
2017-12-27 17:14:40 DEBUG identity-service-relation-changed (lambda: f(*args, **kwargs)), restart_map, stopstart,
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/identity-service-relation-changed", line 233, in identity_changed
2017-12-27 17:14:40 DEBUG identity-service-relation-changed configure_https()
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/identity-service-relation-changed", line 377, in configure_https
2017-12-27 17:14:40 DEBUG identity-service-relation-changed setup_keystone_certs(CONFIGS)
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/utils.py", line 356, in _inner2_defer_if_unavailable
2017-12-27 17:14:40 DEBUG identity-service-relation-changed return f(*args, **kwargs)
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/utils.py", line 496, in setup_keystone_certs
2017-12-27 17:14:40 DEBUG identity-service-relation-changed get_ks_ca_cert(ksclient, auth_endpoint, certs_path)
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/utils.py", line 356, in _inner2_defer_if_unavailable
2017-12-27 17:14:40 DEBUG identity-service-relation-changed return f(*args, **kwargs)
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/utils.py", line 414, in get_ks_ca_cert
2017-12-27 17:14:40 DEBUG identity-service-relation-changed ca_cert = get_ks_cert(ksclient, auth_endpoint, 'ca')
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/utils.py", line 356, in _inner2_defer_if_unavailable
2017-12-27 17:14:40 DEBUG identity-service-relation-changed return f(*args, **kwargs)
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/utils.py", line 384, in get_ks_cert
2017-12-27 17:14:40 DEBUG identity-service-relation-changed cert = ksclient.certificates.get_ca_certificate()
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/usr/lib/python2.7/dist-packages/keystoneclient/v2_0/certificates.py", line 28, in get_ca_certificate
2017-12-27 17:14:40 DEBUG identity-service-relation-changed resp, body = self._client.get('/certificates/ca', authenticated=False)
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 288, in get
2017-12-27 17:14:40 DEBUG identity-service-relation-changed return self.request(url, 'GET', **kwargs)
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 447, in request
2017-12-27 17:14:40 DEBUG identity-service-relation-changed resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 192, in request
2017-12-27 17:14:40 DEBUG identity-service-relation-changed return self.session.request(url, method, **kwargs)
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/usr/lib/python2.7/dist-packages/positional/__init__.py", line 101, in inner
2017-12-27 17:14:40 DEBUG identity-service-relation-changed return wrapped(*args, **kwargs)
2017-12-27 17:14:40 DEBUG identity-service-relation-changed File "/usr/lib/python2.7/dist-packages/keystoneclient/session.py", line 445, in request
2017-12-27 17:14:40 DEBUG identity-service-relation-changed raise exceptions.from_response(resp, method, url)
2017-12-27 17:14:40 DEBUG identity-service-relation-changed keystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)
2017-12-27 17:14:40 ERROR juju.worker.uniter.operation runhook.go:113 hook "identity-service-relation-changed" failed: exit status 1

I did not find anything in the keystone log.

Option "ram-allocation-ratio" expected float, integer given

$ juju --version
2.3.1-xenial-amd64
$ lxd --version
2.21

On deploying the Pike bundle I got the error below.

ERROR cannot deploy bundle: cannot deploy application "nova-cloud-controller": option "ram-allocation-ratio" expected float, got 64

The same problem applies to cpu-allocation-ratio option.

The value of both options should be given in canonical floating number format.

      ram-allocation-ratio: 64.
      cpu-allocation-ratio: 64.

Need reference bundle-lxd-vlan.yaml for vlan network deploys

bundle-lxd.yaml is a great reference for OpenStack on LXD, but also super basic as far as networking is concerned. I've been trying to get VLAN tenant networks provisioned on an LXD deploy with no luck. Any chance you could add a reference bundle-lxd-vlan?

Thanks!

Re-enable percona-cluster for ppc64el when Xenial percona-cluster bugs are fix-released

This issue is a reminder that when the following bugs are resolved, we need to re-validate with percona-cluster and remove the mysql charm work-around. It should also be possible to remove the bundle-ppc64el.yaml file and update the docs to just use bundle.yaml for both ppc64el and amd64.

Temporarily substituting the mysql charm for percona-cluster, pending
resolution of the following bugs.  Also, there is no Xenial mysql
charm as of 2016 Aug 8.
    https://bugs.launchpad.net/percona-xtradb-cluster/+bug/1570678
    https://bugs.launchpad.net/charms/+source/percona-cluster/+bug/1611134

https://github.com/openstack-charmers/openstack-on-lxd/blob/master/bundle-ppc64el.yaml

Ceph backend and erasure-coding, plus BTRFS issues.

After installing and testing multiple times, I am finding that the use of replication with Ceph is problematic.

There needs to be a way to select and set whether or not you want replication or erasure-coding.

I assume that most people using "openstack-on-lxd" are doing an all-in-one install on a single VM/server. If not, then the current install process is probably great. However, if you are doing an all-in-one, 3x replication eats the heck out of your available storage.

If the options allowed for selecting erasure coding and setting the encoding rate values when Ceph is installed and configured, then all-in-one installers would have much more storage space available to them without losing recoverability, for the most part.

Also, while replication is great for its intended use case, erasure coding in multi-disk/multi-server environments would seem to fit a lot more enterprise use cases. There is a reason it was added to Ceph, and it seems odd that it has not been accounted for in the install of OpenStack.

Lastly, by all accounts I have read, BTRFS seems to be a better backend for Ceph as well as LXD, as BTRFS is supposedly superior for nested virtualized storage. That said, the ceph-osd units here throw errors after install if BTRFS is used as the storage backend for LXD.

I see "deprecated btrfs support is not enabled" and other BTRFS related errors in the OSD logs. While I can use a different backend I find it odd that every ceph book I have read says the BTRFS is preferred, but still a tad bit experimental. Is BTRFS support going to be added? What is the preferred LXD storage backend for the openstack-on-lxd environment?

No valid host due to memory

On a host with 32 GB RAM, after the openstack-on-lxd model is deployed (Juju 2.2b1), there is less than 1 GB of memory available on the host. When an instance is created with the m1.small flavor, it enters an ERROR state and the scheduler shows no valid host.

Because this is a developer-centric bundle, and an all-in-one OpenStack cloud (non-HA, limited resources) is not considered production, it should be reasonable to raise the memory overcommit levels.

nested containers are missing permissions when using the bundle openstack-49

This is what I did:

lxc profile set juju-default security.nesting true
lxc profile set juju-controller security.nesting true
lxc profile set default security.nesting true
lxc profile set docker security.nesting true

root@gentest:/etc/default# juju bootstrap
Clouds
aws
aws-china
aws-gov
azure
azure-china
cloudsigma
google
joyent
localhost
rackspace

Select a cloud [localhost]: localhost

Enter a name for the Controller [localhost-localhost]: controller

Creating Juju controller "controller" on localhost/localhost
Looking for packaged Juju agent version 2.0.2 for amd64
To configure your system to better support LXD containers, please see: https://github.com/lxc/lxd/blob/master/doc/production-setup.md
Launching controller instance(s) on localhost/localhost...

  • juju-e24c37-0 (arch=amd64)
    Fetching Juju GUI 2.4.2
    Waiting for address
    Attempting to connect to 10.0.8.246:22
    Logging to /var/log/cloud-init-output.log on the bootstrap machine
    Running apt-get update
    Running apt-get upgrade
    Installing curl, cpu-checker, bridge-utils, cloud-utils, tmux
    Fetching Juju agent version 2.0.2 for amd64
    Installing Juju machine agent
    Starting Juju machine agent (service jujud-machine-0)
    Bootstrap agent now started
    Contacting Juju controller at 10.0.8.246 to verify accessibility...
    Bootstrap complete, "controller" controller now available.
    Controller machines are in the "controller" model.
    Initial model "default" added.

juju deploy cs:bundle/openstack-base-49

which results in

Machine  State    DNS         Inst id        Series  AZ
0        started  10.0.8.168  juju-35f5f0-0  xenial
0/lxd/0  down                 pending        xenial
0/lxd/1  down                 pending        xenial
0/lxd/2  down                 pending        xenial
1        started  10.0.8.107  juju-35f5f0-1  xenial
1/lxd/0  down                 pending        xenial
1/lxd/1  down                 pending        xenial
1/lxd/2  down                 pending        xenial
2        started  10.0.8.241  juju-35f5f0-2  xenial
2/lxd/0  down                 pending        xenial
2/lxd/1  down                 pending        xenial
2/lxd/2  down                 pending        xenial
3        started  10.0.8.131  juju-35f5f0-3  xenial
3/lxd/0  down                 pending        xenial
3/lxd/1  down                 pending        xenial
3/lxd/2  down                 pending        xenial

Then I took a look at show-machine and got this message for all nested containers:

message: 'Creating container: Failed to change ownership of: /var/lib/lxd/containers/juju-35f5f0-0-lxd-0/rootfs'

Is this a bug, or do I have to set something else besides

security.nesting true

?

Missing mysql-router instances in bundles

Some of the components have a mysql-router charm and others are directly connected to mysql-innodb-cluster in the bundles. It seems that all components should use mysql-router instead.

For example, in bundle-focal-ussuri.yaml, heat is directly connected to the innodb-cluster while openstack-dashboard uses the router. There are many other cases as well.
