
amulet's Introduction


Juju is an open source application orchestration engine that enables any application operation (deployment, integration, lifecycle management) on any infrastructure (Kubernetes or otherwise) at any scale (development or production) in the same easy way (typically, one line of code), through special operators called ‘charms’.


👉 Juju: Learn how to quickly deploy, integrate, and manage charms on any cloud with Juju. It's as simple as juju deploy foo, juju integrate foo bar, ..., on any cloud.

Charmhub: Sample our existing charms on Charmhub. A charm can be a cluster (OpenStack, Kubernetes), a data platform (PostgreSQL, MongoDB, etc.), an observability stack (Canonical Observability Stack), an MLOps solution (Kubeflow), and so much more.

Charm SDK: Write your own charm! Juju is written in Go, but our SDK supports easy charm development in Python.

Give it a try!

Let's use Juju to deploy, configure, and integrate some Kubernetes charms:

Set up

You will need a cloud and Juju. The quickest way is to use a Multipass VM launched with the charm-dev blueprint.

Install Multipass: Linux | macOS | Windows. On Linux:

sudo snap install multipass

Use Multipass to launch an Ubuntu VM with the charm-dev blueprint:

multipass launch --cpus 4 --memory 8G --disk 30G --name tutorial-vm charm-dev 

Open a shell into the VM:

multipass shell tutorial-vm

Verify that you have Juju and two localhost clouds:

juju clouds

Bootstrap a Juju controller into the MicroK8s cloud:

juju bootstrap microk8s tutorial-controller

Add a workspace, or 'model':

juju add-model tutorial-model

Deploy, configure, and integrate a few things

Deploy Mattermost:

juju deploy mattermost-k8s

See more: Charmhub | mattermost-k8s

Deploy PostgreSQL:

juju deploy postgresql-k8s --channel 14/stable --trust

See more: Charmhub | postgresql-k8s

Enable security in your PostgreSQL deployment:

juju deploy tls-certificates-operator
juju config tls-certificates-operator generate-self-signed-certificates="true" ca-common-name="Test CA"
juju integrate postgresql-k8s tls-certificates-operator

Integrate Mattermost with PostgreSQL:

juju integrate mattermost-k8s postgresql-k8s:db

Watch your deployment come to life:

juju status --watch 1s

(Press Ctrl-C to quit. Drop the --watch 1s flag to get the status statically. Use the --relations flag to view more information about your integrations.)

Test your deployment

When everything is in active or idle status, note the IP address and port of Mattermost and pass them to curl:

curl <IP address>:<port>/api/v4/system/ping

You should see the output below:

{"AndroidLatestVersion":"","AndroidMinVersion":"","IosLatestVersion":"","IosMinVersion":"","status":"OK"}

Congratulations!

You now have a Kubernetes deployment consisting of Mattermost backed by PostgreSQL, with TLS-encrypted traffic!

Clean up

Delete your Multipass VM:

multipass delete --purge tutorial-vm

Uninstall Multipass: Linux | macOS | Windows. On Linux:

snap remove multipass

Next steps

Learn more

Chat with us

Read our Code of conduct and:

File an issue

Make your mark

amulet's People

Contributors

absoludity, adamisrael, arosales, bitdeli-chef, bjornt, bz2, cynerva, dannf, dpb1, freyes, johnsca, marcoceppi, mbruzek, mitechie, nicopace, niedbalski, paulgear, seman, tasdomas, tkuhlman, tvansteenburgh


amulet's Issues

amulet not respecting series= declaration for multi-series charms

With the following snippet in a layers metadata.yaml

series:
  - trusty
  - xenial

And the following defined in a test in tests/10-deploy-test.py

        cls.deployment = amulet.Deployment(series='xenial')

I get a trusty-series charm every time I execute the tests. I've discovered that if you move 'xenial' to the top of the series list in the built charm, a xenial charm is deployed, so it appears the series= directive is not being respected.

ii amulet 1.14.4-0ubuntu4~ubuntu1 all Testing harness for Juju Charms

Deploying "to:" machines not working

For some reason, when amulet deploys a bundle, all the machines get stripped from the bundle. This poses a problem when you specify the machines in a manual environment.

The error:

~/streaming$ bundletester
cs:~tengu-bot/trusty/modelinfo-0
    charm-proof                                                            PASS
cs:~tengu-bot/trusty/storm-2
    charm-proof                                                            PASS
    charm-proof                                                            PASS
bundle
    charm-proof                                                            PASS
    00-setup                                                               PASS
    11-integration                                                         PASS

PASS: 6 Total: 6 (2.516341 sec)
ubuntu@h-merlijnt4:~/streaming$ nano tests/11-integration 
ubuntu@h-merlijnt4:~/streaming$ bundletester
cs:~tengu-bot/trusty/modelinfo-0
    charm-proof                                                            PASS
cs:~tengu-bot/trusty/storm-2
    charm-proof                                                            PASS
    charm-proof                                                            PASS
bundle
    charm-proof                                                            PASS
    00-setup                                                               PASS
    11-integration                                                         ERROR

------------------------------------------------------------------------------
ERROR: bundle::11-integration
[/tmp/bundletester-thirbD/streaming/tests/11-integration exit 1]
2016-08-02 02:33:05 Starting deployment of merlijnt41
2016-08-02 02:33:05 Service placement to machine not supported worker to lxc:1
2016-08-02 02:33:05 Deployment stopped. run time: 0.25
/usr/lib/python3/dist-packages/path.py:1719: DeprecationWarning: path is deprecated. Use Path instead.
  warnings.warn(msg, DeprecationWarning)
/usr/lib/python3/dist-packages/charmstore/lib.py:105: ResourceWarning: unclosed <ssl.SSLSocket fd=5, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.14.144', 35858), raddr=('162.213.33.122', 443)>
  AVAILABLE_INCLUDES).get('Meta')
/usr/lib/python3/dist-packages/charmstore/lib.py:105: ResourceWarning: unclosed <ssl.SSLSocket fd=5, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.14.144', 47801), raddr=('162.213.33.121', 443)>
  AVAILABLE_INCLUDES).get('Meta')
/usr/lib/python3/dist-packages/charmstore/lib.py:105: ResourceWarning: unclosed <ssl.SSLSocket fd=5, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.14.144', 35860), raddr=('162.213.33.122', 443)>
  AVAILABLE_INCLUDES).get('Meta')
/usr/lib/python3/dist-packages/charmstore/lib.py:105: ResourceWarning: unclosed <ssl.SSLSocket fd=5, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.14.144', 47803), raddr=('162.213.33.121', 443)>
  AVAILABLE_INCLUDES).get('Meta')
/usr/lib/python3/dist-packages/charmstore/lib.py:105: ResourceWarning: unclosed <ssl.SSLSocket fd=5, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.14.144', 35862), raddr=('162.213.33.122', 443)>
  AVAILABLE_INCLUDES).get('Meta')
E
======================================================================
ERROR: setUpClass (__main__.TestCharm)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/tmp/bundletester-thirbD/streaming/tests/11-integration", line 51, in setUpClass
    cls.deployment.setup(timeout=SECONDS_TO_WAIT)
  File "/usr/lib/python3/dist-packages/amulet/deployer.py", line 668, in setup
    subprocess.check_call(shlex.split(cmd))
  File "/usr/lib/python3.4/subprocess.py", line 561, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['juju-deployer', '-W', '-c', '/tmp/amulet-juju-deployer-vxk34ppu/deployer-schema.json', '-e', 'merlijnt41', '-t', '1300', 'merlijnt41']' returned non-zero exit status 1

----------------------------------------------------------------------
Ran 0 tests in 1.402s

FAILED (errors=1)
Deploying bundle: /tmp/bundletester-thirbD/streaming/bundle.yaml

PASS: 5 ERROR: 1 Total: 6 (3.818765 sec)

You can find the complete bundle (tests and bundle code) here: https://github.com/IBCNServices/tengu-charms/tree/a18583f897f8b68c9ebc59d8ff622bc43630acde/bundles/streaming

change 00-setup.sh to just setup.sh or something that can be run

Since 00-setup runs and the environment is then torn down, amulet should instead provide an

amulet.setup()

that looks for a setup.sh file (or something similar) and then runs the script, so that you can include it in each test run.

It might also be cool to have teardown, but only if we get to where we can run tests on the same bootstrap setup.
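
A minimal sketch of what that could look like (the setup() name and the tests/setup.sh location are taken from the proposal above, not from an existing amulet API):

import os
import subprocess


def setup(script='tests/setup.sh'):
    """Hypothetical amulet.setup(): run the shared setup script if it
    exists, and quietly do nothing otherwise."""
    if os.path.isfile(script):
        subprocess.check_call(['bash', script])

Each test file could then call amulet.setup() at the start of setUpClass, so the shared steps survive the environment teardown between test files.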

Regressed Talisman.wait_for_status() - containers not considered

It looks like this change has broken deployments that use containers:

c299eee

For example, given status output like the one below, wait_for_status() will loop forever and eventually time out, because the line:

machine = status['machines'].get(unit.get('machine'), {})

doesn't take into account the 'containers' sub-tree.
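
A minimal sketch of a machine lookup that also searches the 'containers' sub-tree of each host machine, written against the status layout shown below (an illustration, not the actual patch):

def find_machine(status, machine_id):
    """Resolve a machine id such as '0' or '0/lxc/7'.

    Container ids live under their host machine's 'containers' key
    rather than at the top level of 'machines', which is what the
    current lookup misses.
    """
    machines = status.get('machines', {})
    if machine_id in machines:
        return machines[machine_id]
    for host in machines.values():
        containers = host.get('containers', {})
        if machine_id in containers:
            return containers[machine_id]
    return {}

The failing line would then become machine = find_machine(status, unit.get('machine')).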

==== status yaml output ====

environment: maas
machines:
  "0":
    agent-state: started
    agent-version: 1.24.6
    dns-name: shawmut.scapestack
    instance-id: /MAAS/api/1.0/nodes/node-32fdcb12-546c-11e4-b3f2-2c59e54ace74/
    series: trusty
    containers:
      0/lxc/7:
        agent-state: started
        agent-version: 1.24.6
        dns-name: 10.96.5.15
        instance-id: juju-machine-0-lxc-7
        series: trusty
        hardware: arch=amd64
      0/lxc/8:
        agent-state: started
        agent-version: 1.24.6
        dns-name: 10.96.5.150
        instance-id: juju-machine-0-lxc-8
        series: trusty
        hardware: arch=amd64
      0/lxc/9:
        agent-state: started
        agent-version: 1.24.6
        dns-name: 10.96.9.249
        instance-id: juju-machine-0-lxc-9
        series: trusty
        hardware: arch=amd64
      0/lxc/10:
        agent-state: started
        agent-version: 1.24.6
        dns-name: 10.96.9.247
        instance-id: juju-machine-0-lxc-10
        series: trusty
        hardware: arch=amd64
    hardware: arch=amd64 cpu-cores=4 mem=16384M tags=ssd
    state-server-member-status: has-vote
services:
  haproxy:
    charm: cs:trusty/haproxy-10
    can-upgrade-to: cs:trusty/haproxy-13
    exposed: true
    service-status:
      current: unknown
      since: 11 Oct 2015 06:44:39-07:00
    relations:
      peer:
      - haproxy
      reverseproxy:
      - landscape-server
    units:
      haproxy/0:
        workload-status:
          current: unknown
          since: 11 Oct 2015 06:44:39-07:00
        agent-status:
          current: idle
          since: 11 Oct 2015 06:46:32-07:00
          version: 1.24.6
        agent-state: started
        agent-version: 1.24.6
        machine: 0/lxc/7
        open-ports:
        - 80/tcp
        - 443/tcp
        - 10000/tcp
        public-address: 10.96.5.15
  landscape-server:
    charm: local:trusty/landscape-server-5
    exposed: false
    service-status:
      current: unknown
      since: 11 Oct 2015 06:46:13-07:00
    relations:
      amqp:
      - rabbitmq-server
      db:
      - postgresql
      website:
      - haproxy
    units:
      landscape-server/0:
        workload-status:
          current: unknown
          since: 11 Oct 2015 06:46:13-07:00
        agent-status:
          current: idle
          since: 11 Oct 2015 06:48:33-07:00
          version: 1.24.6
        agent-state: started
        agent-version: 1.24.6
        machine: 0/lxc/8
        public-address: 10.96.5.150
  postgresql:
    charm: cs:trusty/postgresql-27
    can-upgrade-to: cs:trusty/postgresql-28
    exposed: false
    service-status:
      current: unknown
      since: 11 Oct 2015 06:45:32-07:00
    relations:
      db-admin:
      - landscape-server
      replication:
      - postgresql
    units:
      postgresql/0:
        workload-status:
          current: unknown
          since: 11 Oct 2015 06:45:32-07:00
        agent-status:
          current: idle
          since: 11 Oct 2015 06:46:30-07:00
          version: 1.24.6
        agent-state: started
        agent-version: 1.24.6
        machine: 0/lxc/9
        open-ports:
        - 5432/tcp
        public-address: 10.96.9.249
  rabbitmq-server:
    charm: cs:trusty/rabbitmq-server-26
    can-upgrade-to: cs:trusty/rabbitmq-server-37
    exposed: false
    service-status:
      current: unknown
      since: 11 Oct 2015 06:45:42-07:00
    relations:
      amqp:
      - landscape-server
      cluster:
      - rabbitmq-server
    units:
      rabbitmq-server/0:
        workload-status:
          current: unknown
          since: 11 Oct 2015 06:45:42-07:00
        agent-status:
          current: idle
          since: 11 Oct 2015 06:46:43-07:00
          version: 1.24.6
        agent-state: started
        agent-version: 1.24.6
        machine: 0/lxc/10
        open-ports:
        - 5672/tcp
        public-address: 10.96.9.247
networks:
  maas-br0:
    provider-id: maas-br0
    cidr: 10.96.0.0/17
  provider-cisco1-vlan:
    provider-id: provider-cisco1-vlan
    cidr: 169.254.0.0/16
  provider-osmgmt-flat:
    provider-id: provider-osmgmt-flat
    cidr: 1.1.1.0/24

bzr reference

The Source section of the README refers to using Bazaar to get the source. If the source has been moved to github, this should refer to using git instead.

It's difficult to discern which unit is a leader in amulet

Oftentimes when I'm working with quorum or need special behaviour out of a "leader" unit, I use leader election to do things like spin up a management service or deploy specialized configuration.

I often need to test this in amulet, and thus far the only way I've come up with to find the leader is essentially running sentry.run('is-leader') in a loop and breaking when we find the leader.

What would be handy is to have this exposed in amulet as get_leader('service'), so I can scope assertions without implementing the same for loop in all of my amulet tests.
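
In the meantime, a minimal sketch of the loop described above, wrapped as the requested helper (assuming UnitSentry.run() returns an (output, return_code) tuple, as in the test examples elsewhere on this page):

def get_leader(deployment, service):
    """Return the first unit of `service` whose is-leader hook tool
    reports True, or None if no unit claims leadership."""
    for unit in deployment.sentry[service]:
        output, code = unit.run('is-leader')
        if code == 0 and output.strip() == 'True':
            return unit
    return None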

Uncommitted changes in git charms are not copied to the deployed version

When deploying our charm we construct a 'fat charm' payload by creating a 'files' subdirectory and putting a tarball in it. These files are not included in the charm passed to the deployer.

The old behavior of copying the tree to a new spot and initializing it as a bzr branch did not have this restriction and worked nicely. I suspect doing more in place is the reason for #55 too.

In our instance we never want those files checked in, so the work-around of just checking things in before starting (if you remember) doesn't work.

wait_for_messages should use status-history

So that tests can watch for transient messages and reliably detect changes even if the status goes back to "ready". It should ensure that it ignores all messages recorded in the status history before it was called.
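
A rough sketch of the requested matching logic (the _status_history() helper is hypothetical and assumed to yield entries carrying an epoch 'timestamp' and a 'message'; the point is the filter on the call's start time):

import time


def wait_for_message(unit, message, timeout=300):
    """Poll the unit's status history for `message`, ignoring entries
    recorded before this call started, so stale matches don't count
    but transient messages that later disappear still do."""
    start = time.time()
    while time.time() - start < timeout:
        for entry in _status_history(unit):  # hypothetical helper
            if entry['timestamp'] >= start and entry['message'] == message:
                return entry
        time.sleep(5)
    raise TimeoutError('no %r message within %ss' % (message, timeout))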

Disambiguate the return value of Deployment.action_fetch()

The return value of action_fetch doesn't give you any way to distinguish between a timeout and a failure, and if an action doesn't use action-set, success is also indistinguishable from failure or timeout.

We should probably:

  1. raise on timeout
  2. return full results instead of just the 'results' dict

Both of these will need to be controlled by keyword parameters for now, so as not to break backward compatibility before the next major version bump. Consider:

def action_fetch(self, action_id, timeout=600, raise_on_timeout=False, full_output=False):
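
A sketch of how those keywords could behave (the _wait_for_action_result helper is a hypothetical stand-in for however the current code polls juju):

def action_fetch(self, action_id, timeout=600, raise_on_timeout=False,
                 full_output=False):
    result = self._wait_for_action_result(action_id, timeout)  # hypothetical
    if result is None:  # timed out waiting for the action
        if raise_on_timeout:
            raise TimeoutError('action %s timed out' % action_id)
        return {}  # default: preserve the old behaviour
    # full_output returns everything (status, message, results, ...),
    # letting callers tell a failed action apart from an empty action-set.
    return result if full_output else result.get('results', {})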

Designing Amulet for Juju 2.0

With 2.0 coming around the corner, we should also make Amulet a little more 2.0-friendly by releasing Amulet 2.0!

First off, this is a chance for us to address several flaws in Amulet's original design that have festered over the years. Also, while this breaks amulet 1.x, it should not deprecate amulet 1.x right away. There are a lot of charms written against amulet 1.x and we shouldn't break that support, but we should gently nudge authors toward amulet 2.0.

This issue is meant to be a blueprint for this work; comments and suggestions will be incorporated into this body, and the work split out into separate issues.

Deployment

Items to be considered for Deployment class

Rename to Model

Going forward, in a Juju 2.0 world, we no longer have Deployments or Environments, but a controller in which we create models. The testing harness should create a model for tests to run against, so Deployment will instead become the representation of a Model.

juju-deployer deprecation

Juju 2.0 will support bundles natively, including support for paths as a valid charm URL. Given this, deployer is no longer needed and Amulet should interact directly with Juju via its API/CLI.

Language update

Several methods in Deployment don't align with Juju terminology. These should be updated to better reflect the actual juju commands.

Deployment.add

This should become Model.deploy to align with juju

Missing features

There are A LOT of new features in 2.0 which need to be represented in Amulet: spaces, storage, budgets, and resources.

Resources

This is vital for amulet to be used in CI/CD pipelines for bundles and charms.

UnitSentry & Talisman

Going forward, an overhaul of Talisman and UnitSentry should take place. Over the years they've become prickly to update and interact with.

UnitSentry

Code examples

from amulet import Model

m = Model()
m = Model().from_bundle()

m.add_unit(service, scale)
m.deploy(service, charm, scale=int, resources={})
m.destroy('service')
m.relate('service:rel', 'service:rel')

m.services['service']  # Service(list)
m.services['service'].destroy()
m.services['service'].add_unit()
m.services['service'].attach()
m.services['service'].leader  # Unit(dict)

m.services['service'][int]  # Unit(dict)
m.services['service'][int].run_action('action', params={})
m.services['service'][int].actions => list of actions/parameters
m.services['service'][int].list_actions()
m.services['service'][int].run('cmd')
m.services['service'][int].destroy()
m.services['service'][int].storage => list of storage

m.machines  # list
m.machines[int]  # Machine(dict)
m.machines[int].run()

sentry list IndexError when unit exists

We're seeing this periodically with amulet tests of subordinates. It passes sometimes; other times it traces.

00:14:50.545 Traceback (most recent call last):
00:14:50.545   File "./tests/020-basic-wily-liberty", line 8, in <module>
00:14:50.545     deployment = LXDBasicDeployment(series='wily')
00:14:50.545   File "/var/lib/jenkins/checkout/lxd/tests/basic_deployment.py", line 47, in __init__
00:14:50.545     self._initialize_tests()
00:14:50.545   File "/var/lib/jenkins/checkout/lxd/tests/basic_deployment.py", line 128, in _initialize_tests
00:14:50.546     self.lxd1_sentry = self.d.sentry['lxd'][1]
00:14:50.546 IndexError: list index out of range



[Services]            
NAME                  STATUS  EXPOSED CHARM                                
glance                active  false   local:wily/glance-150                
keystone              active  false   local:wily/keystone-0                
lxd                           false   local:wily/lxd-1                     
mysql                 unknown false   local:wily/mysql-326                 
nova-cloud-controller active  false   local:wily/nova-cloud-controller-501 
nova-compute          active  false   local:wily/nova-compute-133          
rabbitmq-server       active  false   local:wily/rabbitmq-server-150       

[Units]                 
ID                      WORKLOAD-STATE AGENT-STATE VERSION MACHINE PORTS    PUBLIC-ADDRESS MESSAGE       
glance/0                active         idle        1.25.3  1       9292/tcp 172.17.119.159 Unit is ready 
keystone/0              active         idle        1.25.3  2                172.17.119.160 Unit is ready 
mysql/0                 unknown        idle        1.25.3  3       3306/tcp 172.17.119.161               
nova-cloud-controller/0 active         idle        1.25.3  4       8774/tcp 172.17.119.162 Unit is ready 
nova-compute/0          active         idle        1.25.3  5                172.17.119.163 Unit is ready 
  lxd/1                 active         idle        1.25.3                   172.17.119.163 Unit is ready 
nova-compute/1          active         idle        1.25.3  6                172.17.119.164 Unit is ready 
  lxd/0                 active         idle        1.25.3                   172.17.119.164 Unit is ready 
rabbitmq-server/0       active         idle        1.25.3  7       5672/tcp 172.17.119.165 Unit is ready 

[Machines] 
ID         STATE   VERSION DNS            INS-ID                               SERIES HARDWARE                                                                 
0          started 1.25.3  172.17.119.158 31eec3fc-2b9b-4857-8c63-755e0cee165b trusty arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova 
1          started 1.25.3  172.17.119.159 4c18571a-a3fc-4f7d-aa61-181247f0ebd2 wily   arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova 
2          started 1.25.3  172.17.119.160 26d334ab-8536-4f88-b455-79039aa37091 wily   arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova 
3          started 1.25.3  172.17.119.161 e5828c44-f1d0-402e-a118-90838cf4d569 wily   arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova 
4          started 1.25.3  172.17.119.162 a5fbc4f8-d4d5-4542-a25e-1b393274f5c0 wily   arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova 
5          started 1.25.3  172.17.119.163 8c6a4d2a-a8f5-4b16-b177-3fe921d89fb7 wily   arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova 
6          started 1.25.3  172.17.119.164 4410ade1-88fc-4c90-b10b-cc13fc7e6e00 wily   arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova 
7          started 1.25.3  172.17.119.165 1cb5eb52-b9bc-49a3-a470-fee00d20460b wily   arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova 

Support github.com URL locations and branches

We've recently moved the OpenStack charm development upstream to github.

Right now, our amulet tests depend on a reverse mirror of the git repositories back to bzr; I'd like to switch that so we can use the github.com repositories directly, in the formats:

https://github.com/openstack/charm-neutron-api
https://github.com/openstack/charm-neutron-api;stable (indicating the stable branch or something similar - could just be an extra option for the amulet test case)
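
A minimal sketch of parsing that proposed '<git-url>;<branch>' form (illustrative only):

def parse_charm_location(location):
    """Split 'https://github.com/...;stable' into (url, branch);
    a missing ';<branch>' suffix falls back to the default branch."""
    url, _, branch = location.partition(';')
    return url, branch or None

# parse_charm_location('https://github.com/openstack/charm-neutron-api;stable')
# -> ('https://github.com/openstack/charm-neutron-api', 'stable')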

file_stat broken by the switch from juju run to juju ssh

The move in sentry.py from juju run to juju ssh has broken the scenario where sentry_unit.file_stat(filename) is used to get file or directory attributes for files not owned by the ubuntu user.

Ultimately, filesystem_data.py gets:
OSError: [Errno 13] Permission denied: '/etc/neutron/neutron.conf'

Because the dir and file are root-owned.

The impact is that all of the OpenStack Charm amulet tests are failing, unable to stat the files as they had been able to prior to this change.

b6da1c3

Error loading metadata.yaml with non-ascii chars

From a test run of lp:~jose/charms/trusty/pubphoto/trunk

DEBUG:runner:call ['/tmp/bundletester-KdHqrm/pubphoto/tests/10-deploy'] (cwd: /tmp/bundletester-KdHqrm/pubphoto)
DEBUG:runner:/usr/lib/python3/dist-packages/path.py:1719: DeprecationWarning: path is deprecated. Use Path instead.
DEBUG:runner:  warnings.warn(msg, DeprecationWarning)
DEBUG:runner:E
DEBUG:runner:======================================================================
DEBUG:runner:ERROR: setUpClass (__main__.TestDeployment)
DEBUG:runner:----------------------------------------------------------------------
DEBUG:runner:Traceback (most recent call last):
DEBUG:runner:  File "/tmp/bundletester-KdHqrm/pubphoto/tests/10-deploy", line 13, in setUpClass
DEBUG:runner:    cls.deployment.add('pubphoto')
DEBUG:runner:  File "/usr/lib/python3/dist-packages/amulet/deployer.py", line 184, in add
DEBUG:runner:    service_name, charm, branch=branch, series=series or self.series)
DEBUG:runner:  File "/usr/lib/python3/dist-packages/amulet/charm.py", line 57, in fetch
DEBUG:runner:    series=series)
DEBUG:runner:  File "/usr/lib/python3/dist-packages/amulet/charm.py", line 40, in get_charm
DEBUG:runner:    return LocalCharm(charm_path, series)
DEBUG:runner:  File "/usr/lib/python3/dist-packages/amulet/charm.py", line 83, in __init__
DEBUG:runner:    self._raw = self._load(os.path.join(path, 'metadata.yaml'))
DEBUG:runner:  File "/usr/lib/python3/dist-packages/amulet/charm.py", line 110, in _load
DEBUG:runner:    data = yaml.safe_load(f.read())
DEBUG:runner:  File "/usr/lib/python3.4/encodings/ascii.py", line 26, in decode
DEBUG:runner:    return codecs.ascii_decode(input, self.errors)[0]
DEBUG:runner:UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 83: ordinal not in range(128)
DEBUG:runner:
DEBUG:runner:----------------------------------------------------------------------
DEBUG:runner:Ran 0 tests in 0.204s
DEBUG:runner:
DEBUG:runner:FAILED (errors=1)
DEBUG:runner:Exit Code: 1
DEBUG:bundletester.utils:Updating JUJU_ENV: "charm-testing-aws" -> ""
DEBUG:bundletester.fetchers:bzr revision-info: 1 [email protected]
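
The traceback shows metadata.yaml being read with the locale default (ascii) codec. A sketch of the likely fix in LocalCharm._load(): open the file explicitly as UTF-8, which is what the YAML spec expects (a sketch, not the actual patch):

import yaml


def _load(path):
    # YAML files are UTF-8 by spec; don't rely on the locale default.
    with open(path, encoding='utf-8') as f:
        return yaml.safe_load(f.read())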

Show output of juju-deployer cmd if it fails

In Deployer.setup(), if the juju-deployer call fails, you never get to see the output, which makes debugging what went wrong difficult, especially since the deployer file gets deleted.
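
A sketch of that change, capturing output rather than letting check_call discard it (not the actual patch):

import subprocess


def _run_deployer(cmd):
    """Run juju-deployer, printing its combined output on failure so
    there is something to debug even after the deployer file is gone."""
    try:
        subprocess.check_output(cmd, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as e:
        print(e.output.decode())
        raise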

Cannot destroy services

I would like to test that on service destruction, recreation, and re-relation, my charms do the right thing. But Deployer doesn't have any destroy_service-type method.

I might be able to use remove_unit and add_unit, but this won't quite test the same thing, as service relations won't be destroyed.
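
A minimal sketch of the missing method, shelling out to the CLI (juju 1.x spells this 'destroy-service'; 2.x renamed it to 'remove-application'):

import subprocess


def destroy_service(self, service):
    """Remove the whole service, so that its relations are destroyed
    too, which remove_unit/add_unit cannot simulate."""
    subprocess.check_call(['juju', 'destroy-service', service])
    self.services.pop(service, None)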

Move Deployment.action* methods to UnitSentry?

...so you'd call it like self.d.sentry['service'][0].action_do('foo')
instead of self.d.action_do(self.d.sentry['service'][0].info['unit_name'], 'foo')

Keep the old methods (on Deployment) but issue a DeprecationWarning and remove at next major version bump.
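
A sketch of the deprecation shim (assuming new-style indexing, i.e. sentry['service'][n], is already in place):

import warnings


def action_do(self, unit_name, action, **params):
    """Deprecated Deployment-level entry point: warn, then forward to
    the UnitSentry method until the next major version bump."""
    warnings.warn(
        "Deployment.action_do is deprecated; use "
        "d.sentry['service'][n].action_do instead",
        DeprecationWarning, stacklevel=2)
    service, _, num = unit_name.partition('/')
    return self.sentry[service][int(num)].action_do(action, **params)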

python-amulet dependency not in PPA

On Xenial, after adding ppa:juju/stable when I try to install python-amulet I get:

Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies.
python-amulet : Depends: python-libcharmstore but it is not installable
E: Unable to correct problems, you have held broken packages.

It wasn't clear where to file this, but @mitechie suggested starting here.

Add juju-deployer as dependency

Doing so will ensure that amulet actually works right out of the box, for example when installed from pip in a fresh container, without having to manually install other deps.

As of juju-solutions/bundletester#42, you can replace this:

tvansteenburgh@xenial-vm:/tmp/meteor⟫ cat tests/00-setup 
#!/bin/bash

#sudo add-apt-repository ppa:juju/stable -y
#sudo apt-get update
#sudo apt-get install amulet python-requests -y

With this:

tvansteenburgh@xenial-vm:/tmp/meteor⟫ cat tests/tests.yaml
python_packages:
  - requests
  - amulet
  - juju-deployer
  - bzr
virtualenv: true

Instead of just installing amulet, you have to install juju-deployer and bzr as well (because juju-deployer doesn't pull in bzr, which we should also fix).

I know deployer will be going away, but I'd prefer to fix this papercut in the meantime anyway.
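
A sketch of the packaging change as a setuptools setup.py fragment (the exact metadata is illustrative, not amulet's actual setup.py):

from setuptools import setup

setup(
    name='amulet',
    install_requires=[
        'juju-deployer',
        'bzr',  # juju-deployer uses bzr but doesn't declare it
    ],
)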

Add the juju 2.0 ability to use resources in Amulet

I am trying to develop tests for a charm that only uses the juju 2.0 resources feature. The charm fails to find the resource and does not install the package. The amulet code should have some way to add and remove resources to test these kinds of charms.

Relevant information:

The charm command resource-get will fetch a resource from the Juju controller or the Juju Charm Store.

Please note there are two types of commands: the juju resource commands and the charm resource commands. The juju resource commands only attach or list resources for the current controller. The charm commands attach to, and retrieve from, the Charm Store.

I suspect that an MVP for Amulet will not include attaching resources, but we will need to test resource-get from both the controller and Charm Store.

add_unit times out with error

The etcd charm fails in automated testing on an add_unit() amulet call on Azure.

A timeout is being reached, and there appears to be no way to pass a timeout to add_unit().

Here is the log file with the error:

2015-10-02 09:30:30 Starting deployment of charm-testing-azure
2015-10-02 09:30:31 Deploying services...
2015-10-02 09:30:33  Deploying service etcd using /tmp/charmwx4Gpn/trusty/etcd
2015-10-02 09:37:13 Adding relations...
2015-10-02 09:37:14 Deployment complete in 404.21 seconds
.Timeout occurred, printing juju status...environment: charm-testing-azure
machines:
  "0":
    agent-state: started
    agent-version: 1.24.6
    dns-name: juju-charm-testing-azure-qjw8y4c6mz.cloudapp.net
    instance-id: juju-charm-testing-azure-qjw8y4c6mz-jujuivwksn2z80zortgzx3mvskweoiym3e2634lad5h94msyby
    instance-state: ReadyRole
    series: trusty
    hardware: arch=amd64 cpu-cores=2 mem=7168M root-disk=130048M
    state-server-member-status: has-vote
  "5":
    agent-state: started
    agent-version: 1.24.6
    dns-name: juju-charm-testing-azure-i4k45usv5k.cloudapp.net
    instance-id: juju-charm-testing-azure-i4k45usv5k-juju5vb1pch8yv2gkocvfycpqeh1nsobdp4hyqs1b832lpzkbx
    instance-state: ReadyRole
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=3584M root-disk=130048M
  "6":
    agent-state: down
    agent-state-info: (started)
    agent-version: 1.24.6
    dns-name: juju-charm-testing-azure-em27xw1ha1.cloudapp.net
    instance-id: juju-charm-testing-azure-em27xw1ha1-jujuyk5kdzkfeauo2e0u0tgrpmu480jg0eza3qi3it2ttytw12
    instance-state: ReadyRole
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=3584M root-disk=130048M
services:
  etcd:
    charm: local:trusty/etcd-2
    exposed: false
    service-status:
      current: maintenance
      message: installing charm software
      since: 02 Oct 2015 09:48:09Z
    relations:
      cluster:
      - etcd
    units:
      etcd/0:
        workload-status:
          current: unknown
          since: 02 Oct 2015 09:39:46Z
        agent-status:
          current: idle
          since: 02 Oct 2015 09:43:01Z
          version: 1.24.6
        agent-state: started
        agent-version: 1.24.6
        machine: "5"
        open-ports:
        - 4001/tcp
        public-address: juju-charm-testing-azure-i4k45usv5k.cloudapp.net
      etcd/1:
        workload-status:
          current: maintenance
          message: installing charm software
          since: 02 Oct 2015 09:48:09Z
        agent-status:
          current: executing
          message: running install hook
          since: 02 Oct 2015 09:48:09Z
          version: 1.24.6
        agent-state: pending
        agent-version: 1.24.6
        machine: "6"
        public-address: juju-charm-testing-azure-em27xw1ha1.cloudapp.net
E
======================================================================
ERROR: test_two_node_scale (__main__.TestDeployment)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/tmp/tmpnaYXLh/etcd/tests/10-deploy", line 26, in test_two_node_scale
    self.deployment.add_unit('etcd')
  File "/usr/lib/python2.7/dist-packages/amulet/deployer.py", line 182, in add_unit
    self.sentry = Talisman(self.services)
  File "/usr/lib/python2.7/dist-packages/amulet/sentry.py", line 202, in __init__
    status = self.wait_for_status(self.juju_env, services, timeout)
  File "/usr/lib/python2.7/dist-packages/amulet/sentry.py", line 340, in wait_for_status
    for i in helpers.timeout_gen(timeout):
  File "/usr/lib/python2.7/dist-packages/amulet/helpers.py", line 108, in timeout_gen
    raise TimeoutError()
TimeoutError

----------------------------------------------------------------------
Ran 2 tests in 904.803s

Here is the code that is running that test.

#!/usr/bin/python

import amulet
import unittest


class TestDeployment(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.deployment = amulet.Deployment(series='trusty')
        cls.deployment.add('etcd')
        try:
            cls.deployment.setup(timeout=900)
            cls.deployment.sentry.wait()
        except amulet.helpers.TimeoutError:
            msg = "Environment wasn't stood up in time"
            amulet.raise_status(amulet.SKIP, msg=msg)
        except:
            raise

    def test_single_service(self):
        status = self.deployment.sentry['etcd/0'].run('service etcd status')
        self.assertTrue("running" in status[0])

    def test_two_node_scale(self):
        self.deployment.add_unit('etcd')
        self.deployment.sentry.wait()

        status1 = self.deployment.sentry['etcd/0'].run('service etcd status')
        status2 = self.deployment.sentry['etcd/1'].run('service etcd status')
        self.assertTrue("running" in status1[0])
        self.assertTrue("running" in status2[0])


if __name__ == '__main__':
    unittest.main()

Amulet sentry stack trace

Filing a bug here. I haven't had time to dig into this and separate it from an existing issue or user error, so filing it to confirm whether this is a new issue, a duplicate, or possibly user error.

Test code is at:
https://code.launchpad.net/~a.rosales/charms/precise/ubuntu/add-basic-amulet-test/+merge/203199

$ juju test 10-deploy-test.py
juju-test INFO : Starting test run on hp-scale using Juju 1.17.0
Launching instance

  • 3461791
    Waiting for address
    Attempting to connect to 10.5.31.158:22
    Attempting to connect to 15.185.125.174:22
    Connection to 15.185.125.174 closed.
    2014-01-24 20:05:35 Starting deployment of hp-scale
    2014-01-24 20:05:39 Deploying services...
    2014-01-24 20:05:41 Deploying service ubuntu-sentry using local:precise/ubuntu-sentry
    2014-01-24 20:05:44 Exposing service 'ubuntu-sentry'
    2014-01-24 20:05:44 Deploying service relation-sentry using local:precise/relation-sentry
    2014-01-24 20:05:48 Exposing service 'relation-sentry'
    2014-01-24 20:05:48 Deploying service ubuntu using local:precise/ubuntu
    2014-01-24 20:08:12 Config specifies num units for subordinate: ubuntu-sentry
    2014-01-24 20:08:12 Adding relations...
    2014-01-24 20:08:13 Adding relation ubuntu:juju-info <-> ubuntu-sentry:juju-info
    2014-01-24 20:09:20 Deployment complete in 225.05 seconds
    Traceback (most recent call last):
      File "tests/10-deploy-test.py", line 50, in <module>
        output, code = d.sentry.unit['ubuntu/0'].run(lsb_release_command)
      File "/usr/lib/python3/dist-packages/amulet/sentry.py", line 98, in run
        r = self.query('/run', data=command)
      File "/usr/lib/python3/dist-packages/amulet/sentry.py", line 47, in query
        return self._fetch(self.config['address'], endpoint, query, data)
      File "/usr/lib/python3/dist-packages/amulet/sentry.py", line 50, in _fetch
        url = "%s/%s?%s" % (address, endpoint, urllib.parse.urlencode(query))
    AttributeError: 'module' object has no attribute 'parse'
    juju-test.conductor.10-deploy-test.py RESULT : ✘
    juju-test INFO : Results: 0 passed, 1 failed, 0 errored
    ERROR exit status 1

-thanks,
Antonio

wait_for_messages should detect and surface hook errors

It currently surfaces a timeout because the status message doesn't match, but that can be misleading.
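
A sketch of the requested fail-fast check, run on each poll, written against the juju 1.x status layout shown below (an illustration, not the actual patch):

def check_for_hook_errors(status):
    """Raise as soon as any unit reports an error workload status,
    instead of letting wait_for_messages run out its timeout."""
    for service in status.get('services', {}).values():
        for unit_name, unit in service.get('units', {}).items():
            workload = unit.get('workload-status', {})
            if workload.get('current') == 'error':
                raise RuntimeError(
                    '%s: %s' % (unit_name, workload.get('message')))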

2016-08-30 20:44:18 Starting deployment of charm-testing-gce
2016-08-30 20:44:19 Deploying services...
2016-08-30 20:44:30 Adding relations...
2016-08-30 20:44:31 Deployment complete in 14.07 seconds
/usr/lib/python3/dist-packages/path.py:1719: DeprecationWarning: path is deprecated. Use Path instead.
  warnings.warn(msg, DeprecationWarning)
Timeout occurred, printing juju status...environment: charm-testing-gce
machines:
  "0":
    agent-state: started
    agent-version: 1.25.6
    dns-name: 104.199.6.154
    instance-id: juju-ec8adfe5-eac1-4931-881d-65488d7fd1c2-machine-0
    instance-state: RUNNING
    series: trusty
    hardware: arch=amd64 cpu-cores=8 cpu-power=2200 mem=7200M root-disk=10240M availability-zone=europe-west1-b
    state-server-member-status: has-vote
  "1":
    agent-state: started
    agent-version: 1.25.6
    dns-name: 104.155.86.172
    instance-id: juju-ec8adfe5-eac1-4931-881d-65488d7fd1c2-machine-1
    instance-state: RUNNING
    series: xenial
    hardware: arch=amd64 cpu-cores=4 cpu-power=1100 mem=3600M root-disk=10240M availability-zone=europe-west1-c
  "2":
    agent-state: started
    agent-version: 1.25.6
    dns-name: 104.199.2.158
    instance-id: juju-ec8adfe5-eac1-4931-881d-65488d7fd1c2-machine-2
    instance-state: RUNNING
    series: xenial
    hardware: arch=amd64 cpu-cores=4 cpu-power=1100 mem=3600M root-disk=10240M availability-zone=europe-west1-b
  "3":
    agent-state: started
    agent-version: 1.25.6
    dns-name: 104.155.105.142
    instance-id: juju-ec8adfe5-eac1-4931-881d-65488d7fd1c2-machine-3
    instance-state: RUNNING
    series: xenial
    hardware: arch=amd64 cpu-cores=4 cpu-power=1100 mem=3600M root-disk=10240M availability-zone=europe-west1-d
services:
  zookeeper:
    charm: local:xenial/zookeeper-0
    exposed: false
    service-status:
      current: error
      message: 'hook failed: "config-changed"'
      since: 30 Aug 2016 20:45:57Z
    relations:
      zkpeer:
      - zookeeper
    units:
      zookeeper/0:
        workload-status:
          current: error
          message: 'hook failed: "config-changed"'
          since: 30 Aug 2016 20:45:57Z
        agent-status:
          current: idle
          since: 30 Aug 2016 20:45:57Z
          version: 1.25.6
        agent-state: error
        agent-state-info: 'hook failed: "config-changed"'
        agent-version: 1.25.6
        machine: "1"
        open-ports:
        - 2181/tcp
        - 9998/tcp
        public-address: 104.155.86.172
      zookeeper/1:
        workload-status:
          current: error
          message: 'hook failed: "config-changed"'
          since: 30 Aug 2016 20:45:57Z
        agent-status:
          current: idle
          since: 30 Aug 2016 20:45:57Z
          version: 1.25.6
        agent-state: error
        agent-state-info: 'hook failed: "config-changed"'
        agent-version: 1.25.6
        machine: "2"
        open-ports:
        - 2181/tcp
        - 9998/tcp
        public-address: 104.199.2.158
      zookeeper/2:
        workload-status:
          current: error
          message: 'hook failed: "config-changed"'
          since: 30 Aug 2016 20:45:57Z
        agent-status:
          current: idle
          since: 30 Aug 2016 20:45:57Z
          version: 1.25.6
        agent-state: error
        agent-state-info: 'hook failed: "config-changed"'
        agent-version: 1.25.6
        machine: "3"
        open-ports:
        - 2181/tcp
        - 9998/tcp
        public-address: 104.155.105.142
Es
======================================================================
ERROR: test_bind_port (__main__.TestBindClientPort)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/tmp/bundletester-Eut7sH/zookeeper/tests/10-bind-address", line 45, in test_bind_port
    self.d.sentry.wait_for_messages({'zookeeper': 'updating network interface'}, timeout=60)
  File "/usr/lib/python3/dist-packages/amulet/sentry.py", line 598, in wait_for_messages
    for i in helpers.timeout_gen(timeout):
  File "/usr/lib/python3/dist-packages/amulet/helpers.py", line 108, in timeout_gen
    raise TimeoutError()
amulet.helpers.TimeoutError

----------------------------------------------------------------------
Ran 2 tests in 157.637s

FAILED (errors=1, skipped=1)

Add Support for Juju Storage

This is needed for testing charms that depend on the juju-storage feature. These charms come with

  • [name]-storage-attached
  • [name]-storage-detaching

hooks for every storage they use, which need testing.

A charm can be deployed with different storage providers (e.g. loop or ebs for block):
$ juju deploy <charm> --storage <name>=<pool>,<size>,<count>

This could be expressed as an extension of Deployment.add(), e.g.
.add('servicename', storage={'servicestorage': {'pool': 'ebs', 'size': 1024, 'count': 3}})

Juju-Storage also provides the possibility to add storage to already-deployed units:
$ juju storage add <unit> <name>=<pool>,<size>,<count>

I'm not sure how the adding (and removing) of storage should be expressed in terms of amulet.

Trying to add this support myself, I stopped when I realized that amulet makes use of juju-deployer, and juju-deployer does not provide support for juju-storage. (I'm not sure storage is even expressible in the bundle format; I was not able to create a bundle using juju-gui export on a deployment with juju-storage.)
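
A minimal sketch of translating the proposed storage mapping into deploy flags (illustrative; the juju-deployer handoff is the unresolved part noted above):

def storage_flags(storage):
    """Turn {'data': {'pool': 'ebs', 'size': 1024, 'count': 3}} into
    ['--storage', 'data=ebs,1024,3'] for 'juju deploy'."""
    flags = []
    for name, spec in (storage or {}).items():
        flags += ['--storage', '%s=%s,%s,%s' % (
            name, spec['pool'], spec['size'], spec['count'])]
    return flags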

'branch:' doesn't quite work

It seems like amulet is reaching out to the charm store to check on things when I'm actually using 'branch:'. My goal here is to test on utopic without having to have utopic (and vivid, etc.) bits in the charm store.

dpb@helo:trunk$ tests/10-bundles-test.py
E/usr/lib/python3.4/unittest/suite.py:173: ResourceWarning: unclosed <ssl.SSLSocket fd=5, family=AddressFamily.AF_INET, type=SocketType.SOCK_STREAM, proto=6, laddr=('10.172.68.236', 44129), raddr=('91.189.92.34', 443)>
  self._addClassOrModuleLevelException(result, e, errorName)

======================================================================
ERROR: setUpClass (__main__.BundleTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/amulet/deployer.py", line 39, in fetch
    return super(CharmCache, self).__getitem__(service)
KeyError: 'ubuntu-trusty'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/charmworldlib/charm.py", line 158, in _fetch
    revision=revision))
  File "/usr/lib/python3/dist-packages/charmworldlib/api.py", line 18, in get
    return self.fetch_json(endpoint, params, 'get')
  File "/usr/lib/python3/dist-packages/charmworldlib/api.py", line 26, in fetch_json
    raise Exception('Request failed with: %s' % r.status_code)
Exception: Request failed with: 404

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tests/10-bundles-test.py", line 22, in setUpClass
    d.load(contents)
  File "/usr/lib/python3/dist-packages/amulet/deployer.py", line 83, in load
    constraints=constraints,
  File "/usr/lib/python3/dist-packages/amulet/deployer.py", line 96, in add
    c = self.charm_cache.fetch(service, charm, self.series)
  File "/usr/lib/python3/dist-packages/amulet/deployer.py", line 44, in fetch
    series=series,
  File "/usr/lib/python3/dist-packages/amulet/charm.py", line 28, in get_charm
    return Charm(with_series(charm_path))
  File "/usr/lib/python3/dist-packages/charmworldlib/charm.py", line 131, in __init__
    self._fetch(charm_id)
  File "/usr/lib/python3/dist-packages/charmworldlib/charm.py", line 161, in _fetch
    raise CharmNotFound('API request failed: %s' % str(e))
charmworldlib.charm.CharmNotFound: API request failed: Request failed with: 404

----------------------------------------------------------------------
Ran 0 tests in 0.827s

FAILED (errors=1)
dpb@helo:trunk$ vim tests/bundles.yaml
dpb@helo:trunk$ cat tests/bundles.yaml
landscape-client-test:
  services:
    landscape-client-precise:
      charm: landscape-client
      series: precise

    ubuntu-precise:
      branch: lp:charms/ubuntu
      series: precise
      num_units: 1

    landscape-client-trusty:
      charm: landscape-client
      series: trusty

    ubuntu-trusty:
      branch: lp:charms/ubuntu
      series: trusty
      num_units: 1

  relations:
    - [ "landscape-client-trusty", "ubuntu-trusty" ]
    - [ "landscape-client-precise", "ubuntu-precise" ]
dpb@helo:trunk$

Amulet deployment hitting timeout when model is deployed

We've hit this a few times in OpenStack CI; specifically Amulet raises:

DEBUG:runner:Timeout occurred, printing juju status...environment: osci-sv16

The model is deployed and all units are in the started agent state:

https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline/openstack/charm-percona-cluster/344162/4/2016-07-21_08-46-40/test_charm_amulet_smoke_1730/juju-stat-yaml-collect.txt

so I'm not quite sure why this happens; full test run details here:

https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline/openstack/charm-percona-cluster/344162/4/2016-07-21_08-46-40/index.html

Add back-compat for old-style sentry indexing

When a test does sentry['mycharm/0'], do sentry['mycharm'][0] behind the scenes instead of raising a KeyError. That is, use the unit number as an index into the sentry array. This will make old tests work again without having to update them all. Arguably this is what we should have done in the first place.
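
A minimal sketch of that fallback (assuming the sentry container behaves like a mapping of service name to a list of UnitSentry objects):

class SentryMap(dict):
    def __getitem__(self, key):
        # Old-style 'mycharm/0' keys: split on '/' and use the unit
        # number as an index into the service's unit list.
        if isinstance(key, str) and '/' in key:
            service, _, num = key.partition('/')
            return dict.__getitem__(self, service)[int(num)]
        return dict.__getitem__(self, key)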

Update docs to warn about dangers of hardcoding unit numbers in tests

For example:

# bad
sentry.unit['mysql/0']
# good
sentry.unit['mysql'][0]

The reason for this is that bundletester, by default, resets the env between test file executions, so unit numbers for the running test may not start at zero if the test is not the first to run in the env. Note that env resets can be disabled via the tests.yaml file.

Amulet skips local charms when uncommitted bzr changes are present

When deploying a charm in bundle format from a dirty directory, amulet skips loading the local charm and instead falls back to the store charm. A known work-around is to commit the changes pre-emptively before executing the test.

Conditions that exhibited the behavior:

The following bundle yaml was loaded in an amulet test for hdp-hive

  "compute-node":
      charm: "hdp-hadoop"
      num_units: 2
      annotations:
        "gui-x": "768"
        "gui-y": "591.0585428804295"
    mysql:
      charm: "cs:trusty/mysql"
      num_units: 1
      options:
        "binlog-format": ROW
      annotations:
        "gui-x": "1102.9983551210835"
        "gui-y": "591.0585428804295"
    hdp-hive:
      charm: "hdp-hive"
      num_units: 1
      annotations:
        "gui-x": "1105.4991775605417"
        "gui-y": "284.9414571195705"
    "yarn-hdfs-master":
      charm: "hdp-hadoop"
      num_units: 1
      annotations:
        "gui-x": "769.0016448789165"
        "gui-y": "285.0585428804295"
  relations:
    - - "hdp-hive:namenode"
      - "yarn-hdfs-master:namenode"
    - - "yarn-hdfs-master:namenode"
      - "compute-node:datanode"
    - - "hdp-hive:resourcemanager"
      - "yarn-hdfs-master:resourcemanager"
    - - "yarn-hdfs-master:resourcemanager"
      - "compute-node:nodemanager"
    - - "compute-node:hadoop-nodes"
      - "hdp-hive:hadoop-nodes"
    - - "hdp-hive:db"
      - "mysql:db"
  series: trusty

Bundletester was executed from the charm directory:

$ bundletester -F -l DEBUG -v -e hpcloud

Output snippet from juju status:

  hdp-hive:
    charm: cs:trusty/hdp-hive-2

current bzr status in directory:

removed:
  files/data/
  files/data/elasticsearch-hadoop-2.0.0.zip
  hooks/elasticsearch-relation-joined@
  hooks/hadoop-nodes-relation-changed@
added:
  hooks/elk-relation-joined@
modified:
  README.md
  hooks/hdp-hive-common.py
  icon.svg
  metadata.yaml
  tests/01-deploy-hive-cluster
  tests/hive-hadoop-sql-cluster.yaml
unknown:
  tests/trusty/

add_unit needs a timeout

00:54:26.592 Traceback (most recent call last):
00:54:26.592   File "/usr/lib/python3/dist-packages/zope/testrunner/runner.py", line 400, in run_layer
00:54:26.592     setup_layer(options, layer, setup_layers)
00:54:26.592   File "/usr/lib/python3/dist-packages/zope/testrunner/runner.py", line 713, in setup_layer
00:54:26.592     layer.setUp()
00:54:26.592   File "/var/jenkins/workspace/charm-your-own-adventure/tests/layers.py", line 68, in setUp
00:54:26.592     cls.environment.set_unit_count("landscape-server", 2)
00:54:26.592   File "/var/jenkins/workspace/charm-your-own-adventure/tests/helpers.py", line 370, in set_unit_count
00:54:26.592     self._deployment.add_unit(service)
00:54:26.592   File "/usr/lib/python3/dist-packages/amulet/deployer.py", line 182, in add_unit
00:54:26.592     self.sentry = Talisman(self.services)
00:54:26.592   File "/usr/lib/python3/dist-packages/amulet/sentry.py", line 202, in __init__
00:54:26.592     status = self.wait_for_status(self.juju_env, services, timeout)
00:54:26.592   File "/usr/lib/python3/dist-packages/amulet/sentry.py", line 344, in wait_for_status
00:54:26.592     for i in helpers.timeout_gen(timeout):
00:54:26.592   File "/usr/lib/python3/dist-packages/amulet/helpers.py", line 108, in timeout_gen
00:54:26.592     raise TimeoutError()
00:54:26.592 amulet.helpers.TimeoutError

The add_unit in deployer.py should take an optional timeout which is passed along to Talisman(self.services).

As it is, there is no way to adjust for slower-running charms, tests, clouds, etc.
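
A sketch of the requested signature (Talisman is assumed to accept a timeout, as its wait_for_status call in the traceback suggests; _add_unit stands in for the existing 'juju add-unit' step):

def add_unit(self, service, units=1, timeout=300):
    """Forward an optional timeout to Talisman so callers on slow
    clouds can wait longer than the hardcoded default."""
    self._add_unit(service, units)  # hypothetical existing step
    self.sentry = Talisman(self.services, timeout=timeout)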

stack trace on unknown options such as "--version" "--help" "help" or even with no args

$ amulet --version
Traceback (most recent call last):
  File "/usr/bin/amulet", line 9, in <module>
    load_entry_point('amulet==1.2.1', 'console_scripts', 'amulet')()
  File "/usr/lib/python3/dist-packages/amulet/cli.py", line 21, in main
    p = setup_parser()
  File "/usr/lib/python3/dist-packages/amulet/cli.py", line 15, in setup_parser
    getattr(obj, baby_name)(subparsers)
TypeError: setup_parser() takes 0 positional arguments but 1 was given
arosales@x230:~/devel/bzr/charms/ubuntu$ amulet --help
Traceback (most recent call last):
  File "/usr/bin/amulet", line 9, in <module>
    load_entry_point('amulet==1.2.1', 'console_scripts', 'amulet')()
  File "/usr/lib/python3/dist-packages/amulet/cli.py", line 21, in main
    p = setup_parser()
  File "/usr/lib/python3/dist-packages/amulet/cli.py", line 15, in setup_parser
    getattr(obj, baby_name)(subparsers)
TypeError: setup_parser() takes 0 positional arguments but 1 was given
arosales@x230:~/devel/bzr/charms/ubuntu$ amulet help
Traceback (most recent call last):
  File "/usr/bin/amulet", line 9, in <module>
    load_entry_point('amulet==1.2.1', 'console_scripts', 'amulet')()
  File "/usr/lib/python3/dist-packages/amulet/cli.py", line 21, in main
    p = setup_parser()
  File "/usr/lib/python3/dist-packages/amulet/cli.py", line 15, in setup_parser
    getattr(obj, baby_name)(subparsers)
TypeError: setup_parser() takes 0 positional arguments but 1 was given
arosales@x230:~/devel/bzr/charms/ubuntu$ amulet 
Traceback (most recent call last):
  File "/usr/bin/amulet", line 9, in <module>
    load_entry_point('amulet==1.2.1', 'console_scripts', 'amulet')()
  File "/usr/lib/python3/dist-packages/amulet/cli.py", line 21, in main
    p = setup_parser()
  File "/usr/lib/python3/dist-packages/amulet/cli.py", line 15, in setup_parser
    getattr(obj, baby_name)(subparsers)
TypeError: setup_parser() takes 0 positional arguments but 1 was given
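
The traceback says each subcommand's setup_parser() is called with the subparsers object but accepts no arguments. A sketch of a conforming signature (the 'version' subcommand here is illustrative, not amulet's actual CLI):

def setup_parser(subparsers):
    # The dispatcher in amulet/cli.py passes argparse's subparsers
    # into every subcommand module; accepting it fixes the TypeError.
    parser = subparsers.add_parser('version', help='show amulet version')
    parser.set_defaults(func=lambda args: print('amulet 1.2.1'))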
