
spells's Introduction

NOTICE: conjure-up is EOL

Please note that conjure-up has reached end-of-life. Most of the features of this installer are now available natively with juju and snap.

For the latest recommended way to install and operate the following Canonical software, please see the linked documentation.

conjure-up

Installing big software like whoa.

what it is

Ever wanted to get started with Kubernetes, Deep Learning, or Big Data, but didn't want to go through pages and pages of "Getting Started" documentation?

Then conjure-up is for you!

This is the runtime application for processing spells to get those big software solutions up and going with as little hindrance as possible.

installation

Ubuntu and macOS

Ubuntu

$ sudo snap install conjure-up --classic

macOS

$ brew install conjure-up

how to use

Run the installer interactively

You may want to learn a little about what you're installing, right? This method gives you a tutorial-like approach without being overwhelming.

You can read through descriptions of the software, along with the ability to set a few config options, before deploying. Or just hold down the Enter key and it'll choose sensible defaults for you.

$ conjure-up

Run the installer non-interactively (headless mode)

Already been through the guided tour? Not keen on holding down the Enter key? Not a problem: you can easily get your big software up and running with all the sensible defaults in place.

$ conjure-up canonical-kubernetes localhost

Note that some spells require sudo for certain steps. When running in headless mode, conjure-up should be run as a user with passwordless sudo enabled (or sudo should be pre-authorized before invoking conjure-up), as in the sketch below.
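A minimal sketch of pre-authorizing sudo before a headless run (sudo -v just refreshes the cached sudo credentials; the spell and cloud are the ones from the example above):

$ sudo -v
$ conjure-up canonical-kubernetes localhost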

Destroying deployments

$ conjure-down

authors

license

The MIT License (MIT)

  • Copyright (c) 2015-2019 Canonical Ltd.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

spells's People

Contributors

arosales, chrismacnaughton, cynerva, falfaro, hyperbolic2346, iatrou, jhobbs, joedborg, johnsca, ktsakalozos, kwmonroe, lutostag, mikemccracken, shrajfr12, simonklb, smak1993, tvansteenburgh


spells's Issues

spell descriptions should mention system requirements

Bug from LP via conjure-up/conjure-up#599

Note that this is probably true of all spells.

When I fire up conjure-up kubernetes I see two options:

Kubernetes Core
The Canonical Distribution of Kubernetes

There is actually a difference between them in terms of system requirements, but neither description makes that clear. Could we expose those requirements so we know which option is suitable to select? We might want to surface more details (number of services, etc.?) if that helps the choice.

Upgrade to Ocata breaks openstack-novalxd

I use conjure-up to deploy OpenStack on Nova/LXD. As of today, it deploys Newton. I modified the bundle.yaml file locally to deploy Ocata instead and noticed that step 04 (Neutron) is broken because, as of Ocata, the "neutron" CLI emits the following warning to stdout:

neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

This warning breaks parsing of the actual result from running step 04 (Neutron). I've modified steps/share/neutron.sh to fix the issue by replacing usage of the "neutron" CLI with the "openstack" CLI.
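A minimal sketch of the kind of substitution involved (the before/after pair here is illustrative; the real neutron.sh touches more resources than this):

# before: deprecated client, prints the warning above to stdout
neutron net-create ext-net --router:external
# after: same result via the openstack CLI
openstack network create --external ext-net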

kubernetes-core & canonical-kubernetes need a different lxd profile.

The release candidate charms for the CDK now use snaps, and we are seeing errors about not being able to apply AppArmor profiles to the containers.

2017-04-04 14:23:57 INFO install - Setup snap "core" (1441) security profiles (cannot setup apparmor for snap "core": cannot load apparmor profile "snap.core.hook.configure": cannot load apparmor profile: exit status 243
2017-04-04 14:23:57 INFO install apparmor_parser output:
2017-04-04 14:23:57 INFO install apparmor_parser: Unable to replace "snap.core.hook.configure".  Permission denied; attempted to load a profile while confined?

It seems to be related to privileged containers not being able to apply apparmor profiles.

We found some success by comparing the current Docker LXD profile to the profile that is set for the kubernetes-core spell. More investigation is needed here, and we need to update the profiles for both spells once we figure out what works.
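As a rough starting point, a sketch of the kind of LXD profile settings that usually let snapd and AppArmor work inside the containers (the profile name and the exact set of keys are assumptions; figuring out the right set is what this issue is about):

lxc profile edit juju-kubernetes <<'EOF'
config:
  boot.autostart: "true"
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: |
    lxc.aa_profile=unconfined
  security.nesting: "true"
  security.privileged: "true"
description: "sketch of a conjure-up kubernetes profile"
devices: {}
EOF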

Related to: conjure-up/conjure-up#802

openstack: allow conjure-up to register as a new cloud

Once the deployment is complete, we should think about adding a step that prompts the user to register this deployment as one of the clouds conjure-up can deploy to, so that subsequent runs of conjure-up will see the newly deployed cloud.

k8s: support for privileged containers

We need a way to enable --allow-privileged for the kubelet. This should probably live in the Kubernetes charms' config, though we could easily make it a step that updates the configuration options for now.
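If the charm config route works out, the step could be as small as this sketch (it assumes the kubernetes-master and kubernetes-worker charms expose an allow-privileged option):

juju config kubernetes-master allow-privileged=true
juju config kubernetes-worker allow-privileged=true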

spell cloud whitelists can be confusing: should we have multiple bundles per spell?

For example, the openstack-novalxd bundle is currently restricted to localhost.

IIRC this is mostly because it has no placement directives, so it would take something like 16 machines on MAAS.

However, it's also the only spell that gives you an OpenStack that actually uses Nova-LXD, which is not good.

two ways to solve this:

  1. start another spell for openstack-on-maas-with-novalxd
  2. change openstack-novalxd to allow any cloud but have a different bundle for each cloud, so the 'maas' bundle would give you a 4-machine setup (see the layout sketch below)
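A hypothetical layout for option 2, with one bundle per cloud inside the spell (the file names and structure are illustrative, not an existing convention):

spells/openstack-novalxd/
├── metadata.yaml        # cloud whitelist relaxed to allow any cloud
└── bundles/
    ├── localhost.yaml   # the current all-in-one bundle
    └── maas.yaml        # placement directives for a 4-machine MAAS setup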

openstack-base: openstack instances not getting ip's

This may not be the proper project to post this in. After creating an OpenStack on MAAS with conjure-up, when I create new instances inside OpenStack, Horizon shows an IP associated with them; however, when I log in to the machine through the web console, there is no IP. The interface is configured for DHCP on the instance.

I ran juju config neutron-openvswitch enable-local-dhcp-and-metadata=true to enable the DHCP server, but that didn't seem to work.

It looks like the conjureup0 bridge isn't attached to an interface:

brctl show
bridge name bridge id STP enabled interfaces
conjureup0 8000.000000000000 no

And that bridge has the ext-net IP, 10.99.0.1.
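A minimal sketch of attaching a spare physical NIC to the bridge so external traffic can reach the floating IP range (the interface name is an assumption; use whichever port is cabled to the external network):

$ sudo brctl addif conjureup0 eno2   # eno2 is illustrative
$ brctl show conjureup0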

Original: conjure-up/conjure-up#494

openstack-base: Need a better name that includes novaKVM and MAAS

Currently the name in the list is "OpenStack with NovaKVM", which includes the use of MAAS. However, this may be confusing or unclear, so we need a better name that mentions both Nova with KVM and the use of MAAS.

Suggestions welcomed!

Current proposal

OpenStack Nova-KVM on MAAS
OpenStack Nova-KVM in LXD
OpenStack Nova-KVM Telemetry on MAAS
OpenStack Nova-KVM Telemetry in LXD
OpenStack Nova-LXD on MAAS
OpenStack Nova-LXD in LXD

Conjure-up does not account for Xenial-commissioned NIC names (enoX)

Normally we hack the bundle, but in the case of conjure-up we are not able to (at least not without some hacking). Perhaps it could run a quick ifconfig to validate, as sketched below.
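A minimal sketch of that pre-flight check (the data-port name eth1 is the bundle's default; on Xenial-commissioned nodes it may be an enoX-style name instead):

# verify the bundle's data-port interface exists before deploying
port=eth1
if ! ip link show "$port" >/dev/null 2>&1; then
    echo "interface $port not found; available interfaces:" >&2
    ip -o link show | awk -F': ' '{print $2}' >&2
    exit 1
fi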

2017-04-17 21:41:08 INFO config-changed Cannot find device "eth1"
2017-04-17 21:41:08 INFO config-changed Traceback (most recent call last):
2017-04-17 21:41:08 INFO config-changed File "/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/config-changed", line 359, in <module>
2017-04-17 21:41:08 INFO config-changed hooks.execute(sys.argv)
2017-04-17 21:41:08 INFO config-changed File "/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/core/hookenv.py", line 731, in execute
2017-04-17 21:41:08 INFO config-changed self._hooks[hook_name]()
2017-04-17 21:41:08 INFO config-changed File "/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/contrib/openstack/utils.py", line 1864, in wrapped_f
2017-04-17 21:41:08 INFO config-changed restart_functions)
2017-04-17 21:41:08 INFO config-changed File "/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/core/host.py", line 655, in restart_on_change_helper
2017-04-17 21:41:08 INFO config-changed r = lambda_f()
2017-04-17 21:41:08 INFO config-changed File "/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/contrib/openstack/utils.py", line 1863, in <lambda>
2017-04-17 21:41:08 INFO config-changed (lambda: f(*args, **kwargs)), restart_map, stopstart,
2017-04-17 21:41:08 INFO config-changed File "/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/contrib/hardening/harden.py", line 79, in _harden_inner2
2017-04-17 21:41:08 INFO config-changed return f(*args, **kwargs)
2017-04-17 21:41:08 INFO config-changed File "/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/config-changed", line 145, in config_changed
2017-04-17 21:41:08 INFO config-changed configure_ovs()
2017-04-17 21:41:08 INFO config-changed File "/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/neutron_utils.py", line 856, in configure_ovs
2017-04-17 21:41:08 INFO config-changed add_bridge_port(br, port, promisc=True)
2017-04-17 21:41:08 INFO config-changed File "/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/contrib/network/ovs/__init__.py", line 47, in add_bridge_port
2017-04-17 21:41:08 INFO config-changed subprocess.check_call(["ip", "link", "set", port, "up"])
2017-04-17 21:41:08 INFO config-changed File "/usr/lib/python2.7/subprocess.py", line 541, in check_call
2017-04-17 21:41:08 INFO config-changed raise CalledProcessError(retcode, cmd)
2017-04-17 21:41:08 INFO config-changed subprocess.CalledProcessError: Command '['ip', 'link', 'set', u'eth1', 'up']' returned non-zero exit status 1
2017-04-17 21:41:08 ERROR juju.worker.uniter.operation runhook.go:107 hook "config-changed" failed: exit status 1
2017-04-17 21:41:08 DEBUG juju.worker.uniter.operation executor.go:84 lock released
2017-04-17 21:41:08 INFO juju.worker.uniter resolver.go:100 awaiting error resolution for "config-changed" hook
2017-04-17 21:41:08 DEBUG juju.worker.uniter agent.go:17 [AGENT-STATUS] error: hook failed: "config-changed"

canonical-kubernetes friendly-name is wrong

copying conjure-up/conjure-up#546 here

When you launch conjure-up and it shows the list of big software, the title is "Canonical Kubernetes"; that should be "The Canonical Distribution of Kubernetes".

Using "canonical-kubernetes" as a cli argument is fine, but when describing it as a product we should use the full name.

openstack-novalxd: Pike support

The following code snippet from Robert Ayres needs to be modified to work with our openstack-novalxd spell:

#!/bin/sh -e

DEBIAN_FRONTEND=noninteractive sudo apt-get -qy -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confold install python-openstackclient < /dev/null

. ~/admin-openrc

# create ubuntu user
openstack project create --description "Created by Juju" ubuntu
openstack user create --project ubuntu --password password --email juju@localhost ubuntu

# add tiny flavor
openstack flavor create --id 1 --ram 512 --disk 8 --vcpus 1 m1.tiny

# configure external network
openstack network create --external ext-net
openstack subnet create --subnet-range 192.168.122.0/24 --no-dhcp --gateway 192.168.122.1 --network ext-net --allocation-pool start=192.168.122.200,end=192.168.122.254 ext-subnet

# create vm network
openstack network create --project ubuntu ubuntu-net
openstack subnet create --project ubuntu --subnet-range 10.0.5.0/24 --gateway 10.0.5.1 --network ubuntu-net --dns-nameserver 192.168.122.1 ubuntu-subnet
openstack router create --project ubuntu ubuntu-router
openstack router add subnet ubuntu-router ubuntu-subnet
openstack router set --external-gateway ext-net ubuntu-router

# create pool of floating ips
i=0
while [ $i -ne 10 ]; do
	openstack floating ip create ext-net
	i=$((i + 1))
done

. ~/ubuntu-openrc

# configure security groups
openstack security group rule create --protocol icmp --ingress --ethertype IPv4 default
openstack security group rule create --dst-port 22 --protocol tcp --ingress --ethertype IPv4 default

# import key pair
openstack keypair create --public-key id_rsa.pub ubuntu-keypair

@falfaro pinging you on this in case you'd like to take another stab at getting our spell ready for Ocata?

conjure-up: No controllers registered

Hi!

I used this manual to set up OpenStack.

At stage 6, after running conjure-up --bootstrap-to simple-collie.maas and pressing the "Deploy" button, I see the following trace:

Exception in ev.run():
Traceback (most recent call last):
  File "/snap/conjure-up/352/lib/python3.6/site-packages/ubuntui/ev.py", line 82, in run
    cls.loop.run()
  File "/snap/conjure-up/352/lib/python3.6/site-packages/urwid/main_loop.py", line 278, in run
    self._run()
  File "/snap/conjure-up/352/lib/python3.6/site-packages/urwid/main_loop.py", line 376, in _run
    self.event_loop.run()
  File "/snap/conjure-up/352/lib/python3.6/site-packages/urwid/main_loop.py", line 1328, in run
    raise self._exc_info[0](self._exc_info[1]).with_traceback(self._exc_info[2])
  File "/snap/conjure-up/352/usr/lib/python3.6/asyncio/events.py", line 126, in _run
    self._callback(*self._args)
  File "/snap/conjure-up/352/lib/python3.6/site-packages/conjureup/ui/views/deploystatus.py", line 31, in _refresh_nodes_on_main_thread
    status = model_status()
  File "/snap/conjure-up/352/lib/python3.6/site-packages/conjureup/juju.py", line 32, in _decorator
    login(force=True)
  File "/snap/conjure-up/352/lib/python3.6/site-packages/conjureup/juju.py", line 123, in login
    uuid = get_model(app.current_controller, app.current_model)['model-uuid']
  File "/snap/conjure-up/352/lib/python3.6/site-packages/conjureup/juju.py", line 787, in get_model
    models = get_models(controller)['models']
  File "/snap/conjure-up/352/lib/python3.6/site-packages/conjureup/juju.py", line 858, in get_models
    "Unable to list models: {}".format(sh.stderr.decode('utf8')))
LookupError: Unable to list models: error: No controllers registered.

Please either create a new controller using "juju bootstrap" or connect to
another controller that you have been given access to using "juju register".


Traceback (most recent call last):
  File "/snap/conjure-up/352/bin/conjure-up", line 11, in <module>
    load_entry_point('conjure-up==2.1.5', 'console_scripts', 'conjure-up')()
  File "/snap/conjure-up/352/lib/python3.6/site-packages/conjureup/app.py", line 369, in main
    EventLoop.run()
  File "/snap/conjure-up/352/lib/python3.6/site-packages/ubuntui/ev.py", line 82, in run
    cls.loop.run()
  File "/snap/conjure-up/352/lib/python3.6/site-packages/urwid/main_loop.py", line 278, in run
    self._run()
  File "/snap/conjure-up/352/lib/python3.6/site-packages/urwid/main_loop.py", line 376, in _run
    self.event_loop.run()
  File "/snap/conjure-up/352/lib/python3.6/site-packages/urwid/main_loop.py", line 1328, in run
    raise self._exc_info[0](self._exc_info[1]).with_traceback(self._exc_info[2])
  File "/snap/conjure-up/352/usr/lib/python3.6/asyncio/events.py", line 126, in _run
    self._callback(*self._args)
  File "/snap/conjure-up/352/lib/python3.6/site-packages/conjureup/ui/views/deploystatus.py", line 31, in _refresh_nodes_on_main_thread
    status = model_status()
  File "/snap/conjure-up/352/lib/python3.6/site-packages/conjureup/juju.py", line 32, in _decorator
    login(force=True)
  File "/snap/conjure-up/352/lib/python3.6/site-packages/conjureup/juju.py", line 123, in login
    uuid = get_model(app.current_controller, app.current_model)['model-uuid']
  File "/snap/conjure-up/352/lib/python3.6/site-packages/conjureup/juju.py", line 787, in get_model
    models = get_models(controller)['models']
  File "/snap/conjure-up/352/lib/python3.6/site-packages/conjureup/juju.py", line 858, in get_models
    "Unable to list models: {}".format(sh.stderr.decode('utf8')))
LookupError: Unable to list models: error: No controllers registered.

Please either create a new controller using "juju bootstrap" or connect to
another controller that you have been given access to using "juju register".


exception calling callback for <Future at 0x7f440a5b7240 state=finished returned NoneType>
Traceback (most recent call last):
  File "/snap/conjure-up/352/usr/lib/python3.6/concurrent/futures/_base.py", line 297, in _invoke_callbacks
    callback(self)
  File "/snap/conjure-up/352/lib/python3.6/site-packages/conjureup/controllers/deploystatus/gui.py", line 30, in __wait_for_applications
    relations_done_future.result()
AttributeError: 'NoneType' object has no attribute 'result'

$ cat ~/.cache/conjure-up/conjure-up.log

2017-06-14 10:40:41,594 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: LXD version: 2.14, Juju version: 2.1.3-xenial-amd64
2017-06-14 10:40:52,819 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Path is local filesystem, copying /snap/conjure-up/352/spells/landscape to /home/panaceya/.cache/conjure-up/landscape
2017-06-14 10:40:52,892 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Pulling bundle for landscape-dense-maas from channel: stable
2017-06-14 10:41:06,797 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Performing bootstrap: conjure-up-maas-b1a maas/1.1.130.2
2017-06-14 10:41:06,798 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: bootstrap cmd: juju bootstrap maas/1.1.130.2 conjure-up-maas-b1a --config image-stream=daily --config enable-os-upgrade=false --default-model conjure-landscape-117 --to simple-collie.maas --bootstrap-series=xenial --credential conjure-up-maas-073 --debug
2017-06-14 10:41:07,300 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Failed to find cloud in list-clouds, attempting to read bootstrap-config
2017-06-14 10:41:08,528 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Failed to find cloud in list-clouds, attempting to read bootstrap-config
2017-06-14 10:41:09,207 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Rendering bootstrap wait
2017-06-14 10:41:09,763 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Failed to find cloud in list-clouds, attempting to read bootstrap-config
2017-06-14 10:41:11,007 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Failed to find cloud in list-clouds, attempting to read bootstrap-config
2017-06-14 10:41:12,285 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Failed to find cloud in list-clouds, attempting to read bootstrap-config
2017-06-14 10:41:13,539 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Failed to find cloud in list-clouds, attempting to read bootstrap-config
2017-06-14 10:41:17,403 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Failed to find cloud in list-clouds, attempting to read bootstrap-config
2017-06-14 10:41:18,619 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Failed to find cloud in list-clouds, attempting to read bootstrap-config
2017-06-14 10:41:19,844 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Failed to find cloud in list-clouds, attempting to read bootstrap-config
2017-06-14 10:41:21,103 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Failed to find cloud in list-clouds, attempting to read bootstrap-config
2017-06-14 10:41:24,973 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: handle bootstrap
2017-06-14 10:41:24,974 [ERROR] log.py:17 - conjure-up/_unspecified_spell: 10:41:06 INFO  juju.cmd supercommand.go:63 running juju [2.1.3 gc go1.6.2]

10:41:06 DEBUG juju.cmd supercommand.go:64   args: []string{"juju", "bootstrap", "maas/1.1.130.2", "conjure-up-maas-b1a", "--config", "image-stream=daily", "--config", "enable-os-upgrade=false", "--default-model", "conjure-landscape-117", "--to", "simple-collie.maas", "--bootstrap-series=xenial", "--credential", "conjure-up-maas-073", "--debug"}
10:41:06 INFO  cmd cmd.go:141 cloud "maas" not found, trying as a provider name
10:41:06 INFO  cmd cmd.go:141 interpreting "1.1.130.2" as the cloud endpoint
10:41:06 DEBUG juju.cmd.juju.commands bootstrap.go:780 authenticating with region "" and credential "conjure-up-maas-073" ()
10:41:06 DEBUG juju.cmd.juju.commands bootstrap.go:892 provider attrs: map[]
10:41:07 INFO  cmd cmd.go:141 Adding contents of "/home/panaceya/.local/share/juju/ssh/juju_id_rsa.pub" to authorized-keys
10:41:07 DEBUG juju.cmd.juju.commands bootstrap.go:948 preparing controller with config: map[enable-os-refresh-update:true disable-network-management:false apt-https-proxy: https-proxy: type:maas default-series:xenial uuid:b0bfb494-24c2-45dd-855f-896e5deeacc1 logging-config: image-stream:daily no-proxy: proxy-ssh:false firewall-mode:instance transmit-vendor-metrics:true net-bond-reconfigure-delay:17 authorized-keys:ssh-rsa AAAAB3xxxHIDDENxxx3 juju-client-key apt-ftp-proxy: apt-mirror: test-mode:false ftp-proxy: ssl-hostname-verification:true ignore-machine-addresses:false name:controller automatically-retry-hooks:true logforward-enabled:false development:false apt-http-proxy: enable-os-upgrade:false agent-metadata-url: provisioner-harvest-mode:destroyed http-proxy: agent-stream:released image-metadata-url: resource-tags:]
10:41:07 DEBUG juju.provider.maas environprovider.go:51 opening model "controller".
10:41:20 INFO  cmd cmd.go:129 Creating Juju controller "conjure-up-maas-b1a" on maas
10:41:21 INFO  juju.cmd.juju.commands bootstrap.go:526 combined bootstrap constraints:
10:41:21 DEBUG juju.environs.bootstrap bootstrap.go:199 model "controller" supports service/machine networks: true
10:41:21 DEBUG juju.environs.bootstrap bootstrap.go:201 network management by juju enabled: true
10:41:21 INFO  cmd cmd.go:141 Loading image metadata
10:41:21 INFO  cmd cmd.go:129 Looking for packaged Juju agent version 2.1.3 for amd64
10:41:21 INFO  juju.environs.bootstrap tools.go:72 looking for bootstrap agent binaries: version=2.1.3
10:41:21 INFO  juju.environs.tools tools.go:101 finding agent binaries in stream "released"
10:41:21 INFO  juju.environs.tools tools.go:103 reading agent binaries with major.minor version 2.1
10:41:21 INFO  juju.environs.tools tools.go:111 filtering agent binaries by version: 2.1.3
10:41:21 INFO  juju.environs.tools tools.go:114 filtering agent binaries by series: xenial
10:41:21 INFO  juju.environs.tools tools.go:117 filtering agent binaries by architecture: amd64
10:41:21 DEBUG juju.environs.tools urls.go:109 trying datasource "keystone catalog"
10:41:21 DEBUG juju.environs.simplestreams simplestreams.go:680 using default candidate for content id "com.ubuntu.juju:released:tools" are {20161007 mirrors:1.0 content-download streams/v1/cpc-mirrors.sjson []}
10:41:22 INFO  juju.environs.bootstrap tools.go:74 found 1 packaged agent binaries
10:41:23 INFO  cmd cmd.go:141 Starting new instance for initial controller
Launching controller instance(s) on maas...
10:41:24 INFO  juju.provider.common destroy.go:20 destroying model "controller"
10:41:24 INFO  juju.provider.common destroy.go:31 destroying instances
10:41:24 INFO  juju.provider.common destroy.go:51 destroying storage
10:41:24 ERROR cmd supercommand.go:458 failed to bootstrap model: cannot start bootstrap instance: cannot run instances: cannot run instance: No available machine matches constraints: mem=3584.0 name=simple-collie.maas
10:41:24 DEBUG cmd supercommand.go:459 (error details: [{github.com/juju/juju/cmd/juju/commands/bootstrap.go:574: failed to bootstrap model} {github.com/juju/juju/provider/common/bootstrap.go:179: cannot start bootstrap instance} {github.com/juju/juju/provider/maas/environ.go:950: cannot run instances} {github.com/juju/juju/provider/maas/environ.go:1310: cannot run instance} {github.com/juju/juju/provider/maas/environ.go:751: } {github.com/juju/gomaasapi/controller.go:511: } {github.com/juju/gomaasapi/controller.go:708: } {github.com/juju/gomaasapi/controller.go:739: } {github.com/juju/gomaasapi/client.go:123: } {ServerError: 409 CONFLICT (No available machine matches constraints: mem=3584.0 name=simple-collie.maas)}])

2017-06-14 10:41:24,977 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Showing dialog for exception: Juju failed to bootstrap: maas
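The bootstrap failure above is MAAS reporting that no Ready machine matches mem=3584.0 name=simple-collie.maas. A minimal sketch of checking the target node before retrying (it assumes a logged-in MAAS 2.x CLI profile named "admin"):

# the node must be Ready and have at least 3584 MB of RAM to be picked
$ maas admin machines read hostname=simple-collie | grep -E '"(hostname|status_name|memory)"'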

canonical-kubernetes: failed to find some applications

[adam:~/Projects/conjure] [conjure-dev] master ± conjure-up -d canonical-kubernetes aws
[info] Summoning canonical-kubernetes to aws
[info] Creating new juju model named 'ghoul', please wait.
[info] Running pre deployment tasks.
[info] Finished pre deploy task: Finished pre deploy tasks...
[info] Deploying easyrsa... 
[info] easyrsa deployed.
[info] Deploying elasticsearch... 
[info] elasticsearch deployed.
[info] Deploying etcd... 
[info] etcd deployed.
[info] Deploying filebeat... 
[info] filebeat deployed.
[info] Deploying flannel... 
[info] flannel deployed.
[info] Deploying kibana... 
[info] kibana deployed.
[info] Deploying kubeapi-load-balancer... 
[info] kubeapi-load-balancer deployed.
[info] Deploying kubernetes-master... 
[info] kubernetes-master deployed.
[info] Deploying kubernetes-worker... 
[info] kubernetes-worker deployed.
[info] Deploying topbeat... 
[info] topbeat deployed.
[info] Setting application relations
[error] Error deploying services: (ServerError(...), 'application "kubernetes-master" not found')
[error] Error deploying services: 1
Exception in worker
Traceback (most recent call last):
  File "/home/adam/Projects/conjure/conjureup/juju.py", line 438, in do_add_all
    params=params)
  File "/home/adam/Projects/conjure/macumba/v2.py", line 119, in _request
    'Params': params})
  File "/home/adam/Projects/conjure/macumba/v2.py", line 130, in call
    return self.receive(req_id, timeout)
  File "/home/adam/Projects/conjure/macumba/api.py", line 136, in receive
    raise ServerError(res['Error'], res)
macumba.errors.ServerError: (ServerError(...), 'application "kubernetes-master" not found')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/adam/.pyenv/versions/3.5.1/lib/python3.5/concurrent/futures/thread.py", line 55, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/adam/Projects/conjure/conjureup/juju.py", line 33, in _decorator
    return f(*args, **kwargs)
  File "/home/adam/Projects/conjure/conjureup/juju.py", line 441, in do_add_all
    exc_cb(e)
  File "/home/adam/Projects/conjure/conjureup/controllers/deploy/tui.py", line 23, in __handle_exception
    sys.exit(1)
SystemExit: 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/adam/.pyenv/versions/3.5.1/lib/python3.5/concurrent/futures/thread.py", line 66, in _worker
    work_item.run()
  File "/home/adam/.pyenv/versions/3.5.1/lib/python3.5/concurrent/futures/thread.py", line 57, in run
    self.future.set_exception(e)
  File "/home/adam/.pyenv/versions/3.5.1/lib/python3.5/concurrent/futures/_base.py", line 507, in set_exception
    self._invoke_callbacks()
  File "/home/adam/.pyenv/versions/3.5.1/lib/python3.5/concurrent/futures/_base.py", line 297, in _invoke_callbacks
    callback(self)
  File "/home/adam/Projects/conjure/conjureup/async.py", line 29, in cb
    exc_callback(e)
  File "/home/adam/Projects/conjure/conjureup/controllers/deploy/tui.py", line 23, in __handle_exception
    sys.exit(1)
SystemExit: 1

openstack-novalxd: fails if python3-openstackclient isn't installed

Calling this https://github.com/conjure-up/spells/blob/master/openstack-novalxd/steps/step-03_keypair#L38 will fail if python3-openstackclient was never installed. Previously this was resolved by conjure-up installing the openstack package, since it knew the spell name up front. Now that conjure-up can be run without a spell name, we can't run sudo apt install python3-openstackclient without interrupting the user experience and exiting the GUI. So we need either a way to let the user enter their sudo password in the GUI to install any dependencies needed by the spell's post-deploy steps, or a better way to work around this spell's missing requirements.
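Until there's a GUI path for sudo, the step could at least fail with a clear message. A minimal sketch of a guard at the top of the step script (setResult comes from the spell SDK's common.sh, as used elsewhere in this repo):

# fail cleanly if the client was never installed
if ! command -v openstack >/dev/null 2>&1; then
    setResult "python3-openstackclient is required: sudo apt install python3-openstackclient"
    exit 1
fi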

openstack-base: can't reach instance's floating ip

conjure-up 2.1.0 rev103, canonical classic

  1. All nodes have 2 NIC ports connected to the same switch; there is no VLAN configuration on the switch. Keep the neutron-gateway default configuration and make sure eth1 on the physical node is connected to the switch.
  2. Select openstack-base to deploy on a MAAS cloud and run all the post-deployment steps to configure the cloud.
  3. Log in to Horizon and launch two 16.04 instances on the ubuntu-net, then assign a floating IP to each. Instances launch successfully.
  4. On the conjure-up node, add an interface, which is connected to the same switch, to conjureup0 (10.99.0.1): sudo brctl addif conjureup0 eno2.
  5. On the conjure-up node, I can't ping or SSH to the instances via their floating IP addresses.
conjure-up interfaces info:
ubuntu@conjure-up:~$ brctl show
bridge name	bridge id		STP enabled	interfaces
conjureup0		8000.0cc47a6b4e9d	no		eno2
lxdbr0		8000.000000000000	no		
ubuntu@conjure-up:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:7a:6b:4e:9c brd ff:ff:ff:ff:ff:ff
    inet 10.14.15.254/20 brd 10.14.15.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:fe6b:4e9c/64 scope link 
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master conjureup0 state UP group default qlen 1000
    link/ether 0c:c4:7a:6b:4e:9d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ec4:7aff:fe6b:4e9d/64 scope link 
       valid_lft forever preferred_lft forever
4: ens2f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 0c:c4:7a:bc:b6:92 brd ff:ff:ff:ff:ff:ff
5: ens2f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 0c:c4:7a:bc:b6:93 brd ff:ff:ff:ff:ff:ff
6: conjureup0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:c4:7a:6b:4e:9d brd ff:ff:ff:ff:ff:ff
    inet 10.99.0.1/24 scope global conjureup0
       valid_lft forever preferred_lft forever
    inet6 fe80::20df:abff:fefd:dfe4/64 scope link 
       valid_lft forever preferred_lft forever
7: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 6e:a1:01:33:7d:ee brd ff:ff:ff:ff:ff:ff
    inet 10.0.8.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::6ca1:1ff:fe33:7dee/64 scope link 
       valid_lft forever preferred_lft forever
ubuntu@conjure-up:~$

neutron-gateway/0: 
ubuntu@node21:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br-eth0 state UP group default qlen 1000
    link/ether 0c:c4:7a:3a:6e:94 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 0c:c4:7a:3a:6e:95 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ec4:7aff:fe3a:6e95/64 scope link 
       valid_lft forever preferred_lft forever
4: enp1s0f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 0c:c4:7a:3a:6e:96 brd ff:ff:ff:ff:ff:ff
5: enp1s0f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 0c:c4:7a:3a:6e:97 brd ff:ff:ff:ff:ff:ff
6: enp129s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 0c:c4:7a:b7:40:d8 brd ff:ff:ff:ff:ff:ff
7: enp129s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 0c:c4:7a:b7:40:d9 brd ff:ff:ff:ff:ff:ff
8: enp130s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 0c:c4:7a:b7:41:48 brd ff:ff:ff:ff:ff:ff
9: enp130s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 0c:c4:7a:b7:41:49 brd ff:ff:ff:ff:ff:ff
11: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 6e:14:c8:4a:10:cc brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::6c14:c8ff:fe4a:10cc/64 scope link 
       valid_lft forever preferred_lft forever
12: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1
    link/ether a2:2b:4a:e3:61:66 brd ff:ff:ff:ff:ff:ff
13: br-int: <BROADCAST,MULTICAST> mtu 1458 qdisc noop state DOWN group default qlen 1
    link/ether 9a:d6:85:68:49:43 brd ff:ff:ff:ff:ff:ff
14: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1
    link/ether 0c:c4:7a:3a:6e:95 brd ff:ff:ff:ff:ff:ff
15: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1
    link/ether 06:be:e1:a3:8e:4e brd ff:ff:ff:ff:ff:ff
16: br-eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:c4:7a:3a:6e:94 brd ff:ff:ff:ff:ff:ff
    inet 10.14.0.27/20 brd 10.14.15.255 scope global br-eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:fe3a:6e94/64 scope link 
       valid_lft forever preferred_lft forever
18: vethXG39MP@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-eth0 state UP group default qlen 1000
    link/ether fe:7c:d1:d6:a1:f6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::fc7c:d1ff:fed6:a1f6/64 scope link 
       valid_lft forever preferred_lft forever
20: vethMAPQ45@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-eth0 state UP group default qlen 1000
    link/ether fe:69:4a:25:79:34 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::fc69:4aff:fe25:7934/64 scope link 
       valid_lft forever preferred_lft forever
22: vethJY3JB0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-eth0 state UP group default qlen 1000
    link/ether fe:9c:83:c1:1a:f3 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::fc9c:83ff:fec1:1af3/64 scope link 
       valid_lft forever preferred_lft forever
23: tap84dcbb34-e6@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 86:8e:67:13:ee:e3 brd ff:ff:ff:ff:ff:ff link-netnsid 3
24: tapc0e1a086-f7@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 5a:84:d2:2a:f4:a4 brd ff:ff:ff:ff:ff:ff link-netnsid 4
25: tap602efb79-8c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 5a:b4:5f:53:fe:2f brd ff:ff:ff:ff:ff:ff link-netnsid 4
26: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0
27: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
28: gre_sys@NONE: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65490 qdisc pfifo_fast master ovs-system state UNKNOWN group default qlen 1000
    link/ether 62:3f:5c:1e:ab:a2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::603f:5cff:fe1e:aba2/64 scope link 
       valid_lft forever preferred_lft forever
ubuntu@node21:~$ sudo ovs-vsctl show
bf02bbf8-1ace-4572-845d-fd5c933ec0d9
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "eth1"
            Interface "eth1"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap602efb79-8c"
            tag: 2
            Interface "tap602efb79-8c"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "tap84dcbb34-e6"
            tag: 1
            Interface "tap84dcbb34-e6"
        Port "tapc0e1a086-f7"
            tag: 1
            Interface "tapc0e1a086-f7"
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "gre-0a0e001a"
            Interface "gre-0a0e001a"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.14.0.27", out_key=flow, remote_ip="10.14.0.26"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-0a0e0018"
            Interface "gre-0a0e0018"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.14.0.27", out_key=flow, remote_ip="10.14.0.24"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.6.0"
ubuntu@node21:~$ 

canonical-kubernetes localhost: kubelet cannot check disk space

Greetings,

Conjured Canonical Kubernetes on localhost Ubuntu 16.04.2 with default settings.

I'm here again with another issue: during the deployment I noticed that kubelet cannot check disk space, complaining that the zfs binary was not found. This is not really critical, but it means that heapster is not recording node/pod stats.

After installing zfsutils-linux manually on the workers, here are the errors I'm getting:

Feb 18 15:38:00 juju-f96834-9 kubelet[1505]: E0218 15:38:00.530263    1505 kubelet.go:1634] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": failed to find information for the filesystem labeled "docker-images"
Feb 18 15:38:00 juju-f96834-9 kubelet[1505]: E0218 15:38:00.530288    1505 kubelet.go:1642] Failed to check if disk space is available on the root partition: failed to get fs info for "root": did not find fs info for dir: /var/lib/kubelet
Feb 18 15:38:05 juju-f96834-9 kubelet[1505]: E0218 15:38:05.043160    1505 handler.go:246] HTTP InternalServerError serving /stats/summary: Internal Error: failed RootFsInfo: did not find fs info for dir: /var/lib/kubelet
Feb 18 15:38:08 juju-f96834-9 kubelet[1505]: E0218 15:38:08.915959    1505 fs.go:333] Stat fs failed. Error: exit status 1: "/sbin/zfs zfs get -Hp all lxd/containers/juju-f96834-9" => /dev/zfs and /proc/self/mounts are required.
Feb 18 15:38:08 juju-f96834-9 kubelet[1505]: Try running 'udevadm trigger' and 'mount -t proc proc /proc' as root.

I noticed that /dev/zfs does not exist on the workers, so I tried adding it:

lxc config device add juju-f96834-9 /dev/zfs unix-block path=/dev/zfs

But back in the container, with strace:

root@juju-f96834-9:~# strace zfs get -Hp all lxd/containers/juju-f96834-9
[snip]
access("/sys/module/zfs", F_OK)         = 0
access("/sys/module/zfs", F_OK)         = 0
open("/dev/zfs", O_RDWR)                = -1 ENXIO (No such device or address)
write(2, "The ZFS modules are not loaded.\n"..., 87The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.
) = 87
exit_group(1)                           = ?
+++ exited with 1 +++

I guess it may have something to do with unprivileged containers and the host's ZFS? Let me know if you need more information, thanks again!

spells/canonical-kubernetes/steps/00_deploy-done is broken

File spells/canonical-kubernetes/steps/00_deploy-done is currently broken:

#!/bin/bash
set -eu

. $CONJURE_UP_SPELLSDIR/sdk/common.sh

if ! juju wait -vwm $JUJU_CONTROLLER:$JUJU_MODEL; then
    printf "Applications did not start successfully"
    exit 1
fi

printf "result: Applications Ready"
exit 0

But this script should never use "printf"; it should use "setResult" as other scripts do, like spells/kubernetes-core/steps/00_deploy-done:

#!/bin/bash
set -eu

. "$CONJURE_UP_SPELLSDIR/sdk/common.sh"

if ! juju wait -vwm "$JUJU_CONTROLLER:$JUJU_MODEL"; then
    setResult "Applications did not start successfully"
    exit 1
fi

setResult "Applications Ready"
exit 0

realtime-syslog-analytics: failed to deploy zeppelin, no resource found

conjure-up/realtime-syslog-analytics: [DEBUG] log.py:14 - conjure-up/realtime-syslog-analytics: Adding Charm cs:trusty/apache-zeppelin-7
conjure-up/realtime-syslog-analytics: [DEBUG] log.py:14 - conjure-up/realtime-syslog-analytics: AddCharm returned {}
conjure-up/realtime-syslog-analytics: [DEBUG] log.py:14 - conjure-up/realtime-syslog-analytics: Resources: [{'Origin': 'store', 'Size': 0, 'Name': 'zeppelin', 'Fingerprint': 'OLBgp1GsljhM2TJ+sbHjaiH9txEUvgdDTAzHv2P24donTt6/529l+9Ua0vFImLlb', 'Type': 'file', 'Path': 'zeppelin.tgz', 'Description': 'The Apache Zeppelin distribution', 'Revision': 2}]
conjure-up/realtime-syslog-analytics: [DEBUG] log.py:14 - conjure-up/realtime-syslog-analytics: AddPendingResources: {'resources': [{'Origin': 'store', 'Size': 0, 'Name': 'zeppelin', 'Fingerprint': 'OLBgp1GsljhM2TJ+sbHjaiH9txEUvgdDTAzHv2P24donTt6/529l+9Ua0vFImLlb', 'Type': 'file', 'Path': 'zeppelin.tgz', 'Description': 'The Apache Zeppelin distribution', 'Revision': 2}], 'tag': 'application-apache-zeppelin', 'url': 'cs:trusty/apache-zeppelin-7'}
conjure-up/realtime-syslog-analytics: [DEBUG] log.py:14 - conjure-up/realtime-syslog-analytics: AddPendingResources returned: {'pending-ids': ['2c8f7538-4636-435c-80c4-015189cc5914']}
conjure-up/realtime-syslog-analytics: [DEBUG] log.py:14 - conjure-up/realtime-syslog-analytics: Deploying <Service zeppelin>: {'applications': [{'constraints': {}, 'num-units': 0, 'charm-url': 'cs:trusty/apache-zeppelin-7', 'series': 'trusty', 'expose': False, 'resources': {'zeppelin': '2c8f7538-4636-435c-80c4-015189cc5914'}, 'application': 'zeppelin'}]}
conjure-up/realtime-syslog-analytics: [DEBUG] log.py:14 - conjure-up/realtime-syslog-analytics: Deploy returned {'results': [{'error': {'message': 'cannot add application "zeppelin": pending resource "zeppelin/zeppelin" (2c8f7538-4636-435c-80c4-015189cc5914) not found', 'code': 'not found'}}]}

canonical-kubernetes: pre-deploy scripts should not rely on juju command line to get name of current model

Affects canonical-kubernetes spell on lxd controller.

Since most of conjure-up either uses the API or specifies the controller:model explicitly, we need to do the same in spells. Otherwise, if no controller is currently selected when conjure-up is run and an existing controller is then chosen, the pre-deploy step gets nothing from 'juju switch' and errors out, even though there is certainly a profile it could edit.
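A minimal sketch of the safer pattern for step scripts (conjure-up exports JUJU_CONTROLLER and JUJU_MODEL to steps, as the 00_deploy-done scripts above show):

# address the model explicitly rather than relying on `juju switch`
MODEL="$JUJU_CONTROLLER:$JUJU_MODEL"
juju status -m "$MODEL" --format yaml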

embed bundles in spells

I think we want to lock revisions for charms and embed those bundles directly into the spells. We'd need an automated way to do so; this bug is for getting that addressed.
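A sketch of one piece of that automation (assumes charm-tools is installed; `charm show <id> id` is used here to resolve the current store revision so a bundle can pin it):

# resolve the current revision of a charm, e.g. cs:~containers/etcd -> etcd-23
charm show cs:~containers/etcd id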

openstack-base: 18 machines deployed for a 4-machine bundle

Selecting openstack-base results in 18 machines being requested from MAAS, rather than the 4 expected. Please see attached logs.

Juju: 2.1.2-xenial-ppc64el

MAAS: 2.1.3+bzr5573-0ubuntu1 (16.04.1)

$ snap info conjure-up
name:      conjure-up
summary:   "Package runtime for conjure-up spells"
publisher: canonical
contact:   https://github.com/conjure-up/conjure-up
description: |
  This package provides conjure-up, an interface to installing spells that
  provide the user with an end to end walkthrough experience for installing and
  using big software.
commands:
  - conjure-up.conjure-down (conjure-down)
  - conjure-up
  - conjure-up.juju (juju)
tracking:  beta
installed: 2.2-beta3 (232) 94MB classic
refreshed: 2017-04-28 19:14:02 +0000 UTC
channels:
  beta: 2.2-beta3 (232) 94MB classic
  edge: 2.2-beta3+cef5df9 (238) 89MB classic

non-integer constraints not handled correctly for add-machines API

It seems that with the new 00_deploy-done code, the spell is not waiting for the deploy to complete. In recent testing, as soon as I select to deploy the application, the spell almost immediately goes to the final ganglia test, which errors with the following stack trace:

... mmcc trimmed red herring error message

The deployment usually takes about 10 minutes on AWS. The spell only waits < 1 minute before going to the final ganglia step, so I don't think we are waiting for the applications to become active.

-thanks,
Antonio

juju wait command in conjure-up 2.2.2 doesn't support retry argument

$ snap list
Name Version Rev Developer Notes
conjure-up 2.2.2 549 canonical classic
core 16-2.26.14 2462 canonical -
$ whereis juju
juju: /snap/bin/juju

$ cat 00_deploy-done.err
+ . /home/samantha_jian/spell/spells/sdk/common.sh
+ retry_arg=-r5
+ [[ '' == \t\e\s\t ]]
+ juju wait -r5 -vwm conjure-up-localhost-95a:conjure-openstack-base-b3a
usage: juju-wait [-h] [-e MODEL] [--description] [-q] [-v] [-w] [-t MAX_WAIT]
                 [--version]
juju-wait: error: unrecognized arguments: -r5
+ setResult 'Applications did not start successfully'
+ redis-cli set conjure-up.openstack-base.00_deploy-done.result 'Applications did not start successfully'
+ local conjure_redis_cli
+ local system_redis_cli
+ local cli
++ which conjure-up.redis-cli
+ conjure_redis_cli=/snap/bin/conjure-up.redis-cli
++ which redis-cli
+ system_redis_cli=/snap/conjure-up/549/bin/redis-cli
+ '[' /snap/bin/conjure-up.redis-cli = /snap/bin/conjure-up.redis-cli ']'
+ cli=/snap/bin/conjure-up.redis-cli
+ /snap/bin/conjure-up.redis-cli set conjure-up.openstack-base.00_deploy-done.result 'Applications did not start successfully'
+ exit 1
$

The deployment completes successfully after removing the -r argument from the juju wait command in the 00_deploy-done script.
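A minimal sketch of making the retry flag conditional in 00_deploy-done (probing the --help output is an assumption about how best to detect support):

# only pass -r if the installed juju-wait understands it
retry_arg=""
if juju wait --help 2>&1 | grep -q -- '-r'; then
    retry_arg="-r5"
fi
juju wait $retry_arg -vwm "$JUJU_CONTROLLER:$JUJU_MODEL"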

canonical-kubernetes: Could not read output from step step-01_get-kubectl

Greetings,

I have been trying to install the canonical-kubernetes spell on an Ubuntu 16.04.2 local machine with the defaults, and wasn't able to finish the last step (retrieving the kubectl binary).

[screenshot: 2017-02-17 17:36:46]

And here's the ~/.cache/conjure-up/conjure-up.log:

2017-02-17 17:18:10,443 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: LXD version: 2.0.9, Juju version: 2.1.0-xenial-amd64
2017-02-17 17:20:20,348 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: LXD version: 2.0.9, Juju version: 2.1.0-xenial-amd64
2017-02-17 17:20:22,948 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Path is local filesystem, copying /snap/conjure-up/83/spells/canonical-kubernetes to /home/kube/.cache/conjure-up/canonical-kubernetes
2017-02-17 17:20:27,142 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Found an IPv4 address, assuming LXD is configured.
2017-02-17 17:20:27,142 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Performing bootstrap: conjure-up-localhost-3d6 localhost
2017-02-17 17:20:27,144 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: bootstrap cmd: juju bootstrap localhost conjure-up-localhost-3d6 --config image-stream=daily --config enable-os-upgrade=false --default-model conjure-up-canonical-kubernetes-0e8 --bootstrap-series=xenial --debug
2017-02-17 17:20:33,756 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Rendering bootstrap wait
2017-02-17 17:23:17,339 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: handle bootstrap
2017-02-17 17:23:17,740 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Checking for post bootstrap task: /home/kube/.cache/conjure-up/canonical-kubernetes/steps/00_post-bootstrap
2017-02-17 17:23:17,851 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Running pre-deployment tasks.
2017-02-17 17:23:18,690 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: pre_deploy_done: {'message': 'Successful pre-deploy.', 'returnCode': 0, 'isComplete': True}
2017-02-17 17:23:18,691 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines: [{'series': 'xenial', 'constraints': {}, 'jobs': ['JobHostUnits']}]
2017-02-17 17:23:18,991 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines returned {'machines': [{'machine': '0'}]}
2017-02-17 17:23:18,992 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Adding Charm cs:~containers/xenial/easyrsa-6
2017-02-17 17:23:21,396 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddCharm returned {}
2017-02-17 17:23:21,593 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Resources for charm id 'cs:~containers/xenial/easyrsa-6': [{'Name': 'easyrsa', 'Type': 'file', 'Path': 'easyrsa.tgz', 'Description': 'The release of the EasyRSA software you would like to use to create\ncertificate authority (CA) and other Public Key Infrastructure (PKI). \nThis charm was written using v3.0.1, so earlier versions of EasyRSA may \nnot work. You can find the releases of EasyRSA at \nhttps://github.com/OpenVPN/easy-rsa/releases\n', 'Revision': 0, 'Fingerprint': 'zR3FHi3mikVQCj6pmwTuJxk3G4oBu2vdsIsQ82ktI3oj5F4eQm9oKO+z2rGlNJuj', 'Size': 40960, 'Origin': 'store'}]
2017-02-17 17:23:21,593 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddPendingResources: {'tag': 'application-easyrsa', 'url': 'cs:~containers/xenial/easyrsa-6', 'resources': [{'Name': 'easyrsa', 'Type': 'file', 'Path': 'easyrsa.tgz', 'Description': 'The release of the EasyRSA software you would like to use to create\ncertificate authority (CA) and other Public Key Infrastructure (PKI). \nThis charm was written using v3.0.1, so earlier versions of EasyRSA may \nnot work. You can find the releases of EasyRSA at \nhttps://github.com/OpenVPN/easy-rsa/releases\n', 'Revision': 0, 'Fingerprint': 'zR3FHi3mikVQCj6pmwTuJxk3G4oBu2vdsIsQ82ktI3oj5F4eQm9oKO+z2rGlNJuj', 'Size': 40960, 'Origin': 'store'}]}
2017-02-17 17:23:21,793 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddPendingResources returned: {'pending-ids': ['d03635af-93b8-4ac4-81db-cdfbcc9471ab']}
2017-02-17 17:23:21,794 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Deploying <Service easyrsa>: {'applications': [{'charm-url': 'cs:~containers/xenial/easyrsa-6', 'application': 'easyrsa', 'num-units': 1, 'constraints': {}, 'resources': {'easyrsa': 'd03635af-93b8-4ac4-81db-cdfbcc9471ab'}, 'placement': [{'scope': '#', 'directive': '0'}], 'series': 'xenial'}]}
2017-02-17 17:23:22,094 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Deploy returned {'results': [{}]}
2017-02-17 17:23:22,095 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines: [{'series': 'xenial', 'constraints': {}, 'jobs': ['JobHostUnits']}]
2017-02-17 17:23:22,496 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines returned {'machines': [{'machine': '1'}]}
2017-02-17 17:23:22,496 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines: [{'series': 'xenial', 'constraints': {}, 'jobs': ['JobHostUnits']}]
2017-02-17 17:23:22,797 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines returned {'machines': [{'machine': '2'}]}
2017-02-17 17:23:22,798 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines: [{'series': 'xenial', 'constraints': {}, 'jobs': ['JobHostUnits']}]
2017-02-17 17:23:23,098 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines returned {'machines': [{'machine': '3'}]}
2017-02-17 17:23:23,099 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Adding Charm cs:~containers/xenial/etcd-23
2017-02-17 17:23:26,206 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddCharm returned {}
2017-02-17 17:23:26,366 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Resources for charm id 'cs:~containers/xenial/etcd-23': [{'Name': 'snapshot', 'Type': 'file', 'Path': 'snapshot.tar.gz', 'Description': 'Tarball snapshot of an etcd clusters data.', 'Revision': 0, 'Fingerprint': 'frvllOHNvA83jljU/oCpD8rH9BHw5wMPDCsvVrFaiSKb3FQa1FtJ7+9Ucwt7cKv+', 'Size': 124, 'Origin': 'store'}]
2017-02-17 17:23:26,366 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddPendingResources: {'tag': 'application-etcd', 'url': 'cs:~containers/xenial/etcd-23', 'resources': [{'Name': 'snapshot', 'Type': 'file', 'Path': 'snapshot.tar.gz', 'Description': 'Tarball snapshot of an etcd clusters data.', 'Revision': 0, 'Fingerprint': 'frvllOHNvA83jljU/oCpD8rH9BHw5wMPDCsvVrFaiSKb3FQa1FtJ7+9Ucwt7cKv+', 'Size': 124, 'Origin': 'store'}]}
2017-02-17 17:23:26,567 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddPendingResources returned: {'pending-ids': ['886ec2e6-8a24-45a9-8a29-c1df22b26807']}
2017-02-17 17:23:26,567 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Deploying <Service etcd>: {'applications': [{'charm-url': 'cs:~containers/xenial/etcd-23', 'application': 'etcd', 'num-units': 3, 'constraints': {}, 'resources': {'snapshot': '886ec2e6-8a24-45a9-8a29-c1df22b26807'}, 'placement': [{'scope': '#', 'directive': '1'}, {'scope': '#', 'directive': '2'}, {'scope': '#', 'directive': '3'}], 'series': 'xenial'}]}
2017-02-17 17:23:27,469 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Deploy returned {'results': [{}]}
2017-02-17 17:23:27,470 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Adding Charm cs:~containers/xenial/flannel-10
2017-02-17 17:23:30,375 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddCharm returned {}
2017-02-17 17:23:30,693 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Resources for charm id 'cs:~containers/xenial/flannel-10': [{'Name': 'flannel', 'Type': 'file', 'Path': 'flannel.tar.gz', 'Description': 'A tarball packaged release of flannel', 'Revision': 3, 'Fingerprint': 'uafSjUKrN7R6IE3lz0KGoCAOBO4wr/QE51j+h3+XK0B1H6+KDEm/Kgpar6wJ1a7C', 'Size': 19231200, 'Origin': 'store'}]
2017-02-17 17:23:30,693 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddPendingResources: {'tag': 'application-flannel', 'url': 'cs:~containers/xenial/flannel-10', 'resources': [{'Name': 'flannel', 'Type': 'file', 'Path': 'flannel.tar.gz', 'Description': 'A tarball packaged release of flannel', 'Revision': 3, 'Fingerprint': 'uafSjUKrN7R6IE3lz0KGoCAOBO4wr/QE51j+h3+XK0B1H6+KDEm/Kgpar6wJ1a7C', 'Size': 19231200, 'Origin': 'store'}]}
2017-02-17 17:23:31,094 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddPendingResources returned: {'pending-ids': ['78631f4d-4751-43c4-82ee-fee21d4977c9']}
2017-02-17 17:23:31,095 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Deploying <Service flannel>: {'applications': [{'charm-url': 'cs:~containers/xenial/flannel-10', 'application': 'flannel', 'num-units': 0, 'constraints': {}, 'resources': {'flannel': '78631f4d-4751-43c4-82ee-fee21d4977c9'}, 'series': 'xenial'}]}
2017-02-17 17:23:31,396 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Deploy returned {'results': [{}]}
2017-02-17 17:23:31,396 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines: [{'series': 'xenial', 'constraints': {}, 'jobs': ['JobHostUnits']}]
2017-02-17 17:23:31,799 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines returned {'machines': [{'machine': '4'}]}
2017-02-17 17:23:31,799 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Adding Charm cs:~containers/xenial/kubeapi-load-balancer-6
2017-02-17 17:23:43,530 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddCharm returned {}
2017-02-17 17:23:46,070 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Resources for charm id 'cs:~containers/xenial/kubeapi-load-balancer-6': []
2017-02-17 17:23:46,071 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Deploying <Service kubeapi-load-balancer>: {'applications': [{'charm-url': 'cs:~containers/xenial/kubeapi-load-balancer-6', 'application': 'kubeapi-load-balancer', 'num-units': 1, 'constraints': {}, 'placement': [{'scope': '#', 'directive': '4'}], 'series': 'xenial'}]}
2017-02-17 17:23:46,373 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Deploy returned {'results': [{}]}
2017-02-17 17:23:46,374 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Expose: {'application': 'kubeapi-load-balancer'}
2017-02-17 17:23:46,474 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Expose returned: {}
2017-02-17 17:23:46,474 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines: [{'series': 'xenial', 'constraints': {}, 'jobs': ['JobHostUnits']}]
2017-02-17 17:23:46,876 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines returned {'machines': [{'machine': '5'}]}
2017-02-17 17:23:46,876 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Adding Charm cs:~containers/xenial/kubernetes-master-11
2017-02-17 17:23:59,936 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddCharm returned {}
2017-02-17 17:24:00,895 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Resources for charm id 'cs:~containers/xenial/kubernetes-master-11': [{'Name': 'kubernetes', 'Type': 'file', 'Path': 'kubernetes.tar.gz', 'Description': 'A tarball packaged release of the kubernetes bins.', 'Revision': 6, 'Fingerprint': '0/5aCPmSHeCeusMjBvb/2Yz5riKRJecU6lWHQXYKephmh9ulihty21QlENRmSBZg', 'Size': 77067746, 'Origin': 'store'}]
2017-02-17 17:24:00,897 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddPendingResources: {'tag': 'application-kubernetes-master', 'url': 'cs:~containers/xenial/kubernetes-master-11', 'resources': [{'Name': 'kubernetes', 'Type': 'file', 'Path': 'kubernetes.tar.gz', 'Description': 'A tarball packaged release of the kubernetes bins.', 'Revision': 6, 'Fingerprint': '0/5aCPmSHeCeusMjBvb/2Yz5riKRJecU6lWHQXYKephmh9ulihty21QlENRmSBZg', 'Size': 77067746, 'Origin': 'store'}]}
2017-02-17 17:24:01,900 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddPendingResources returned: {'pending-ids': ['b16020eb-facc-4169-89f2-d88211e99801']}
2017-02-17 17:24:01,900 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Deploying <Service kubernetes-master>: {'applications': [{'charm-url': 'cs:~containers/xenial/kubernetes-master-11', 'application': 'kubernetes-master', 'num-units': 1, 'constraints': {}, 'resources': {'kubernetes': 'b16020eb-facc-4169-89f2-d88211e99801'}, 'placement': [{'scope': '#', 'directive': '5'}], 'series': 'xenial'}]}
2017-02-17 17:24:02,101 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Deploy returned {'results': [{}]}
2017-02-17 17:24:02,101 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines: [{'series': 'xenial', 'constraints': {}, 'jobs': ['JobHostUnits']}]
2017-02-17 17:24:02,402 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines returned {'machines': [{'machine': '6'}]}
2017-02-17 17:24:02,403 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines: [{'series': 'xenial', 'constraints': {}, 'jobs': ['JobHostUnits']}]
2017-02-17 17:24:02,804 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines returned {'machines': [{'machine': '7'}]}
2017-02-17 17:24:02,805 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines: [{'series': 'xenial', 'constraints': {}, 'jobs': ['JobHostUnits']}]
2017-02-17 17:24:03,206 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddMachines returned {'machines': [{'machine': '8'}]}
2017-02-17 17:24:03,207 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Adding Charm cs:~containers/xenial/kubernetes-worker-13
2017-02-17 17:24:24,723 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddCharm returned {}
2017-02-17 17:24:25,970 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Resources for charm id 'cs:~containers/xenial/kubernetes-worker-13': [{'Name': 'kubernetes', 'Type': 'file', 'Path': 'kubernetes.tar.gz', 'Description': 'An archive of kubernetes binaries for the worker.', 'Revision': 6, 'Fingerprint': 'KptkA9eoAIIBcnI4FRVVMF26LiTSYHoL/clnOXGssgdCEL4piUMd840HESYlOe2R', 'Size': 46155602, 'Origin': 'store'}]
2017-02-17 17:24:25,970 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddPendingResources: {'tag': 'application-kubernetes-worker', 'url': 'cs:~containers/xenial/kubernetes-worker-13', 'resources': [{'Name': 'kubernetes', 'Type': 'file', 'Path': 'kubernetes.tar.gz', 'Description': 'An archive of kubernetes binaries for the worker.', 'Revision': 6, 'Fingerprint': 'KptkA9eoAIIBcnI4FRVVMF26LiTSYHoL/clnOXGssgdCEL4piUMd840HESYlOe2R', 'Size': 46155602, 'Origin': 'store'}]}
2017-02-17 17:24:27,274 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddPendingResources returned: {'pending-ids': ['b7dc9b04-a15c-4803-8dc9-de0dcf8a538e']}
2017-02-17 17:24:27,274 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Deploying <Service kubernetes-worker>: {'applications': [{'charm-url': 'cs:~containers/xenial/kubernetes-worker-13', 'application': 'kubernetes-worker', 'num-units': 3, 'constraints': {}, 'resources': {'kubernetes': 'b7dc9b04-a15c-4803-8dc9-de0dcf8a538e'}, 'placement': [{'scope': '#', 'directive': '6'}, {'scope': '#', 'directive': '7'}, {'scope': '#', 'directive': '8'}], 'series': 'xenial'}]}
2017-02-17 17:24:27,574 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Deploy returned {'results': [{}]}
2017-02-17 17:24:27,575 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Expose: {'application': 'kubernetes-worker'}
2017-02-17 17:24:27,675 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Expose returned: {}
2017-02-17 17:24:27,675 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation: {'Endpoints': ['kubernetes-master:loadbalancer', 'kubeapi-load-balancer:loadbalancer']}
2017-02-17 17:24:27,776 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation returned: {'endpoints': {'kubeapi-load-balancer': {'name': 'loadbalancer', 'role': 'provider', 'interface': 'public-address', 'optional': False, 'limit': 0, 'scope': 'global'}, 'kubernetes-master': {'name': 'loadbalancer', 'role': 'requirer', 'interface': 'public-address', 'optional': False, 'limit': 1, 'scope': 'global'}}}
2017-02-17 17:24:27,776 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation: {'Endpoints': ['flannel:cni', 'kubernetes-worker:cni']}
2017-02-17 17:24:27,876 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation returned: {'endpoints': {'flannel': {'name': 'cni', 'role': 'requirer', 'interface': 'kubernetes-cni', 'optional': False, 'limit': 1, 'scope': 'container'}, 'kubernetes-worker': {'name': 'cni', 'role': 'provider', 'interface': 'kubernetes-cni', 'optional': False, 'limit': 0, 'scope': 'container'}}}
2017-02-17 17:24:27,877 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation: {'Endpoints': ['kubernetes-master:kube-api-endpoint', 'kubeapi-load-balancer:apiserver']}
2017-02-17 17:24:27,977 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation returned: {'endpoints': {'kubeapi-load-balancer': {'name': 'apiserver', 'role': 'requirer', 'interface': 'http', 'optional': False, 'limit': 1, 'scope': 'global'}, 'kubernetes-master': {'name': 'kube-api-endpoint', 'role': 'provider', 'interface': 'http', 'optional': False, 'limit': 0, 'scope': 'global'}}}
2017-02-17 17:24:27,977 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation: {'Endpoints': ['kubernetes-worker:certificates', 'easyrsa:client']}
2017-02-17 17:24:28,077 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation returned: {'endpoints': {'easyrsa': {'name': 'client', 'role': 'provider', 'interface': 'tls-certificates', 'optional': False, 'limit': 0, 'scope': 'global'}, 'kubernetes-worker': {'name': 'certificates', 'role': 'requirer', 'interface': 'tls-certificates', 'optional': False, 'limit': 1, 'scope': 'global'}}}
2017-02-17 17:24:28,078 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation: {'Endpoints': ['kubernetes-master:cluster-dns', 'kubernetes-worker:kube-dns']}
2017-02-17 17:24:28,178 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation returned: {'endpoints': {'kubernetes-master': {'name': 'cluster-dns', 'role': 'provider', 'interface': 'kube-dns', 'optional': False, 'limit': 0, 'scope': 'global'}, 'kubernetes-worker': {'name': 'kube-dns', 'role': 'requirer', 'interface': 'kube-dns', 'optional': False, 'limit': 1, 'scope': 'global'}}}
2017-02-17 17:24:28,178 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation: {'Endpoints': ['kubernetes-master:etcd', 'etcd:db']}
2017-02-17 17:24:28,279 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation returned: {'endpoints': {'etcd': {'name': 'db', 'role': 'provider', 'interface': 'etcd', 'optional': False, 'limit': 0, 'scope': 'global'}, 'kubernetes-master': {'name': 'etcd', 'role': 'requirer', 'interface': 'etcd', 'optional': False, 'limit': 1, 'scope': 'global'}}}
2017-02-17 17:24:28,279 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation: {'Endpoints': ['kubeapi-load-balancer:certificates', 'easyrsa:client']}
2017-02-17 17:24:28,379 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation returned: {'endpoints': {'easyrsa': {'name': 'client', 'role': 'provider', 'interface': 'tls-certificates', 'optional': False, 'limit': 0, 'scope': 'global'}, 'kubeapi-load-balancer': {'name': 'certificates', 'role': 'requirer', 'interface': 'tls-certificates', 'optional': False, 'limit': 1, 'scope': 'global'}}}
2017-02-17 17:24:28,379 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation: {'Endpoints': ['flannel:cni', 'kubernetes-master:cni']}
2017-02-17 17:24:28,480 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation returned: {'endpoints': {'flannel': {'name': 'cni', 'role': 'requirer', 'interface': 'kubernetes-cni', 'optional': False, 'limit': 1, 'scope': 'container'}, 'kubernetes-master': {'name': 'cni', 'role': 'provider', 'interface': 'kubernetes-cni', 'optional': False, 'limit': 0, 'scope': 'container'}}}
2017-02-17 17:24:28,481 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation: {'Endpoints': ['etcd:certificates', 'easyrsa:client']}
2017-02-17 17:24:28,582 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation returned: {'endpoints': {'easyrsa': {'name': 'client', 'role': 'provider', 'interface': 'tls-certificates', 'optional': False, 'limit': 0, 'scope': 'global'}, 'etcd': {'name': 'certificates', 'role': 'requirer', 'interface': 'tls-certificates', 'optional': False, 'limit': 1, 'scope': 'global'}}}
2017-02-17 17:24:28,583 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation: {'Endpoints': ['kubernetes-master:certificates', 'easyrsa:client']}
2017-02-17 17:24:28,684 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation returned: {'endpoints': {'easyrsa': {'name': 'client', 'role': 'provider', 'interface': 'tls-certificates', 'optional': False, 'limit': 0, 'scope': 'global'}, 'kubernetes-master': {'name': 'certificates', 'role': 'requirer', 'interface': 'tls-certificates', 'optional': False, 'limit': 1, 'scope': 'global'}}}
2017-02-17 17:24:28,684 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation: {'Endpoints': ['flannel:etcd', 'etcd:db']}
2017-02-17 17:24:28,784 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation returned: {'endpoints': {'etcd': {'name': 'db', 'role': 'provider', 'interface': 'etcd', 'optional': False, 'limit': 0, 'scope': 'global'}, 'flannel': {'name': 'etcd', 'role': 'requirer', 'interface': 'etcd', 'optional': False, 'limit': 1, 'scope': 'global'}}}
2017-02-17 17:24:28,784 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation: {'Endpoints': ['kubernetes-worker:kube-api-endpoint', 'kubeapi-load-balancer:website']}
2017-02-17 17:24:28,885 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: AddRelation returned: {'endpoints': {'kubeapi-load-balancer': {'name': 'website', 'role': 'provider', 'interface': 'http', 'optional': False, 'limit': 0, 'scope': 'global'}, 'kubernetes-worker': {'name': 'kube-api-endpoint', 'role': 'requirer', 'interface': 'http', 'optional': False, 'limit': 1, 'scope': 'global'}}}
2017-02-17 17:36:21,650 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Queueing step: <StepWidget: Kubernetes Cluster Controller>
2017-02-17 17:36:21,652 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Queueing step: <StepWidget: Kubernetes Cluster Status Check>
2017-02-17 17:36:21,653 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: {'label': <Text flow widget ''>, 'key': 'submit', 'input': None}
2017-02-17 17:36:40,468 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Contents: (<Padding selectable flow widget <AttrMap selectable flow widget <PlainButton selectable flow widget 'Run'> attr_map={None: 'button_primary'} focus_map={None: 'button_primary focus'}> align='right' width=('relative', 20)>, ('weight', 1))
2017-02-17 17:36:40,469 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: {'label': <Text flow widget ''>, 'key': 'submit', 'input': None}
2017-02-17 17:36:40,470 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Contents: (<Text flow widget ''>, ('weight', 1))
2017-02-17 17:36:40,572 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Set_env inputs: []
2017-02-17 17:36:40,572 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Executing script: /home/kube/.cache/conjure-up/canonical-kubernetes/steps/step-01_get-kubectl
2017-02-17 17:36:40,583 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Showing dialog for exception: Could not read output from step /home/kube/.cache/conjure-up/canonical-kubernetes/steps/step-01_get-kubectl: []
2017-02-17 17:36:40,587 [DEBUG] log.py:14 - conjure-up/_unspecified_spell: Showing dialog for exception: Could not read output from step /home/kube/.cache/conjure-up/canonical-kubernetes/steps/step-01_get-kubectl: []

Weirdly enough, I have been able to retrieve the kubectl binary by setting the environment manually and running the script:

$ export CONJURE_UP_SPELLSDIR=/snap/conjure-up/83/spells/
$ export JUJU_CONTROLLER=conjure-up-localhost-3d6
$ export JUJU_MODEL=conjure-up-canonical-kubernetes-0e8
$ ./.cache/conjure-up/canonical-kubernetes/steps/step-01_get-kubectl
{"message": "The Kubernetes client utility is now available at '~/bin/kubectl.conjure-up-canonical-kubernetes-0e8'", "returnCode": 0, "isComplete": true}

Let me know if you need me to provide more information.

Thank you!

openstack-base: console access and object store do not work

This may be the wrong place to post this, so please close and let me know if there is another project I should post this to. I created a brand new OpenStack on MAAS using conjure-up (great piece of software), and I am noticing that the web console to an instance does not work. Also, going to "Object Store" indicates it can't communicate with swift. Looking at juju status, I don't see any swift instances. I am assuming this is something not yet implemented?
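
A quick way to confirm whether any object-store applications were actually deployed (a generic juju invocation, not specific to this spell; the grep pattern is just a guess at likely application names):

$ juju status | grep -Ei 'swift|radosgw' || echo "no object-store applications deployed"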

Original: conjure-up/conjure-up#491

kubernetes-core & canonical-kubernetes assume default LXD storage pool name

The LXD profile installed by the kubernetes-core and canonical-kubernetes spells assumes a pool name of "default" for the root device. This is fine if you've used the default pool name suggested by lxd init, but it causes all LXD instances to fail to come up otherwise. The errors emitted are fairly cryptic, and it took me a fair bit of debugging to figure out what happened.

Can the profile field be left out so that LXD will just use the right value? Alternatively, can conjure-up ask LXD for the storage pool name (perhaps by looking at the default profile?) and put that into the profile it sets?
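
For what it's worth, the pool name can be read back from the default profile rather than hard-coded; a minimal sketch with the lxc CLI (this assumes the root disk device in the default profile is named "root", which is what lxd init creates):

$ POOL=$(lxc profile device get default root pool)
$ echo "root disk pool: ${POOL}"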

openstack-novalxd: update IP address selection for ext-subnet

With recent changes to conjure-up, neither network bridge is using 10.99.0.0/24, so deployed openstack instances are not reachable. I had to manually remove the ext-net and create a new one to work with the conjureup0 bridge.

(I picked the bridge not being used by the containers deployed for the OpenStack services.)

This is most noticeable when doing a juju bootstrap against a conjured-up OpenStack.
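
For reference, a rough sketch of the manual fix with the standard openstack CLI (the subnet range, gateway, allocation pool, and physical network name below are assumptions chosen to match the conjureup0 bridge, not values taken from the spell):

$ openstack network delete ext-net
$ openstack network create --external --provider-network-type flat --provider-physical-network physnet1 ext-net
$ openstack subnet create --network ext-net --subnet-range 10.99.0.0/24 --gateway 10.99.0.1 --allocation-pool start=10.99.0.10,end=10.99.0.200 --no-dhcp ext-subnet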

openstack-novalxd: keystone.sh: key pair not uploaded to nova.

It appears that keystone.sh assumes that the username running conjure-up is ubuntu, i.e. the same as the user on the nova-compute unit. If they are different, the key pair isn't uploaded.

ubuntu@juju-0a0c8d-12:$ tail keypair.sh
...
export SSHPUBLICKEY=/home/ubuntu1-admin/.ssh/id_rsa.pub
if ! openstack keypair show ubuntu-keypair > /dev/null 2>&1; then
debug "adding ssh keypair from $SSHPUBLICKEY"
openstack keypair create --public-key $SSHPUBLICKEY ubuntu-keypair > /dev/null 2>&1
fi
exposeResult "SSH Keypair is now imported and accessible when creating compute nodes." 0 "true"
ubuntu@juju-0a0c8d-12:$ nova keypair-list
+------+-------------+
| Name | Fingerprint |
+------+-------------+
+------+-------------+
ubuntu@juju-0a0c8d-12:$ ls /home/ubuntu/.ssh
authorized_keys id_rsa id_rsa.pub
ubuntu@juju-0a0c8d-12:$
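
One possible fix (a minimal sketch, not the spell's actual code) is to derive the public key path from the invoking user's home directory instead of hard-coding a username; debug and exposeResult are the spell's own helpers, everything else is plain shell:

# Fall back to the current user's key if SSHPUBLICKEY isn't already set.
SSHPUBLICKEY="${SSHPUBLICKEY:-$HOME/.ssh/id_rsa.pub}"
if ! openstack keypair show ubuntu-keypair > /dev/null 2>&1; then
    debug "adding ssh keypair from $SSHPUBLICKEY"
    # Not redirecting stderr here would also surface upload failures.
    openstack keypair create --public-key "$SSHPUBLICKEY" ubuntu-keypair
fi
exposeResult "SSH Keypair is now imported and accessible when creating compute nodes." 0 "true"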
