
Juju Charm/Operator for Prometheus on Kubernetes

CI Badges

Click on each badge for more details.

Branch Build Status Coverage
master Build Status (master) Coverage Status

Quick Start

git submodule update --init --recursive
sudo snap install microk8s --classic
sudo snap install juju --classic
sudo microk8s.enable dns dashboard registry storage metrics-server ingress
sudo usermod -a -G microk8s $(whoami)
sudo chown -f -R $USER ~/.kube

Log out then log back in so that the new group membership is applied to your shell session.

juju bootstrap microk8s mk8s

Optional: Grab coffee/beer/tea or do a 5k run. Once the above is done, do:

juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath
juju add-model lma
juju deploy . --resource prometheus-image=prom/prometheus:v2.18.1 --resource nginx-image=nginx:1.19.0

Wait until juju status shows that the prometheus app has a status of active.

Preview the Prometheus GUI

Add the following entry to your machine's /etc/hosts file:

<microk8s-host-ip>	prometheus.local

Run:

juju config prometheus juju-external-hostname=prometheus.local
juju expose prometheus

Now browse to http://prometheus.local.

A NOTE ABOUT THE EXTERNAL HOSTNAME: If you are using a k8s distribution other than microk8s, ensure that there is a load balancer (LB) in front of the k8s nodes and use that LB's IP address in place of <microk8s-host-ip>. Alternatively, instead of adding a static entry to /etc/hosts as above, you may use an FQDN as the value of juju-external-hostname.

The default prometheus.yml includes a configuration that scrapes metrics from Prometheus itself. Execute the following query to show TSDB stats:

rate(prometheus_tsdb_head_chunks_created_total[1m])
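The same query can also be issued programmatically through Prometheus's HTTP API. Below is a minimal sketch; the base URL assumes the prometheus.local entry set up above, so adjust it for your environment:

```python
import json
import urllib.parse
import urllib.request


def query_url(base, promql):
    """Build an instant-query URL for Prometheus's /api/v1/query endpoint."""
    return base.rstrip('/') + '/api/v1/query?' + urllib.parse.urlencode(
        {'query': promql})


def run_query(base, promql):
    """Fetch the query result; returns the 'result' list from the response."""
    with urllib.request.urlopen(query_url(base, promql)) as resp:
        return json.load(resp)['data']['result']


url = query_url('http://prometheus.local',
                'rate(prometheus_tsdb_head_chunks_created_total[1m])')
```

Calling run_query(...) against a reachable Prometheus returns the same series you would see in the GUI's query console.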

For more info on getting started with Prometheus see its official getting started guide.

Monitoring Kubernetes

To monitor the kubernetes cluster, deploy it with the following config option:

juju deploy . --resource prometheus-image=prom/prometheus:v2.18.1 \
    --resource nginx-image=nginx:1.19.0 --config monitor-k8s=true

If the charm has already been deployed, you may also configure it at runtime:

juju config prometheus monitor-k8s=true

WARNING: This second method is experimental and not yet fully supported; it requires manual intervention in the form of a SIGHUP sent to the Prometheus process in the k8s pod. Do this by running the following after executing the juju config command:

kubectl -n lma exec <k8s-pod-name> -- kill -1 <prometheus-pid>

Prometheus' PID in the pod is usually 1 but if you're not sure, run:

kubectl -n lma exec <k8s-pod-name> -- ps | grep /bin/prometheus | awk '{print $1}'

Use Prometheus as a Grafana Datasource

Refer to the Grafana Operator Quick Start guide to learn how to use Prometheus with Grafana.

Use Prometheus with AlertManager

Refer to the AlertManager Operator Quick Start guide to learn how to use Prometheus with AlertManager.

This Charm's Architecture

To learn how to navigate this charm's code and become an effective contributor, please read the Charmed LMA Operators Architecture reference doc.

Preparing Your Workstation for Local Development

  1. Install pyenv so that you can test with different versions of Python

    curl -L https://raw.githubusercontent.com/yyuu/pyenv-installer/master/bin/pyenv-installer | bash

  2. Append the following to your ~/.bashrc then log out and log back in

    export PATH="$HOME/.pyenv/bin:$PATH"
    eval "$(pyenv init -)"
    eval "$(pyenv virtualenv-init -)"

  3. Install development packages. These are needed by pyenv to compile Python

    sudo apt install build-essential libssl-dev zlib1g-dev libbz2-dev \
    libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \
    xz-utils tk-dev libffi-dev liblzma-dev python3-openssl git

  4. Install Python 3.6.x and 3.7.x

    NOTE: Replace X with the correct minor version as shown in pyenv install --list

    pyenv install 3.6.X
    pyenv install 3.7.X

  5. Test by cd-ing in and out of your working directory

    ~/charm-k8s-prometheus $ python --version
    Python 3.7.7
    ~/charm-k8s-prometheus $ cd ..
    ~ $ python --version
    Python 3.6.9
    ~ $ cd -
    ~/charm-k8s-prometheus $ python --version
    Python 3.7.7

  6. Test if tox is able to run tests against all declared environments

    tox

If You Want More Control Over Your Python Environments

  1. Install venv and virtualenvwrapper

    sudo apt install python3-venv
    pip3 install virtualenvwrapper

  2. Append the following to your ~/.bashrc, then log out and log back in

    export WORKON_HOME=$HOME/.virtualenvs
    source $(which virtualenvwrapper.sh)

  3. Make two virtualenvs that you can quickly switch between for testing

    pyenv virtualenv --python $(pyenv which python3.6) charm-k8s-prometheus-py36
    pyenv virtualenv --python $(pyenv which python3.7) charm-k8s-prometheus-py37

  4. Activate any one of them

    pyenv activate charm-k8s-prometheus-py36

  5. Install the dependencies in the virtualenv

    pip3 install -r test-requirements.txt

  6. Deactivate to leave the virtual environment

    pyenv deactivate

Running the Unit Tests on Your Workstation

To run the tests using the default interpreter as configured in tox.ini, run:

tox

If you want to use a specific interpreter that is present on your workstation, you may run, for example:

tox -e py37

To view the coverage report that gets generated after running the tests above, run:

make coverage-server

The above command outputs the port on your workstation on which the server is listening. If you are running it inside Multipass, first get the Ubuntu VM's IP via multipass list, then browse to that IP and port.

NOTE: You can leave that static server running in one session while you continue to execute tox in another. The server picks up changes to the report automatically, so you don't have to restart it each time.

Troubleshooting

Since Kubernetes charms are not supported by juju debug-hooks, the only way to intercept code execution is to start a non-tty-bound debugger session and connect to it externally.

For this purpose we chose rpdb, a remote Python debugger based on pdb.

For example, given that you have already deployed an application named prometheus in a Juju model named lma and you would like to debug your config-changed handler, execute the following:

kubectl exec -it pod/prometheus-operator-0 -n lma -- /bin/bash

This opens an interactive shell within the operator pod. Next, install an editor and rpdb:

apt update
apt install telnet vim -y
pip3 install rpdb

Open the charm entry point in the editor:

vim /var/lib/juju/agents/unit-prometheus-0/charm/src/charm.py

Find the on_config_changed_handler function definition in charm.py and modify it as follows:

def on_config_changed_handler(event, fw_adapter):
    import rpdb
    rpdb.set_trace()
    # < ... rest of the code ... >

Save the file (:wq). Do not close the current shell session!

Open another terminal session and trigger the config-changed hook as follows:

juju config prometheus external-labels='{"foo": "bar"}'

Run juju status until you see the following:

Unit           Workload  Agent      Address    Ports     Message
prometheus/0*  active    executing  10.1.28.2  9090/TCP  (config-changed)

This message means that the unit has started the config-changed hook routine and that rpdb has already intercepted it.

Now return to the operator pod session.

Enter the interactive debugger:

telnet localhost 4444

You should see the debugger interactive console.

# telnet localhost 4444
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
> /var/lib/juju/agents/unit-prometheus-0/charm/hooks/config-changed(91)on_config_changed_handler()
-> set_juju_pod_spec(fw_adapter)
(Pdb) where
  /var/lib/juju/agents/unit-prometheus-0/charm/hooks/config-changed(141)<module>()
-> main(Charm)
  /var/lib/juju/agents/application-prometheus/charm/lib/ops/main.py(212)main()
-> _emit_charm_event(charm, juju_event_name)
  /var/lib/juju/agents/application-prometheus/charm/lib/ops/main.py(128)_emit_charm_event()
-> event_to_emit.emit(*args, **kwargs)
  /var/lib/juju/agents/application-prometheus/charm/lib/ops/framework.py(205)emit()
-> framework._emit(event)
  /var/lib/juju/agents/application-prometheus/charm/lib/ops/framework.py(710)_emit()
-> self._reemit(event_path)
  /var/lib/juju/agents/application-prometheus/charm/lib/ops/framework.py(745)_reemit()
-> custom_handler(event)
  /var/lib/juju/agents/unit-prometheus-0/charm/hooks/config-changed(68)on_config_changed()
-> on_config_changed_handler(event, self.fw_adapter)
> /var/lib/juju/agents/unit-prometheus-0/charm/hooks/config-changed(91)on_config_changed_handler()
-> set_juju_pod_spec(fw_adapter)
(Pdb)

From this point forward, the usual pdb commands apply. For more information, see the official pdb documentation.

Relying on More Comprehensive Unit Tests

To ensure that this charm is tested on as many platforms as possible, we use Travis CI, which also automatically publishes the coverage report to a publicly available Coveralls.io page. To see the state of each relevant branch, click the appropriate badges at the top of this README.

References

Much of how this charm is architected is guided by the following classic references. Future contributors will do well to read them and take them to heart:

  1. Hexagonal Architecture by Alistair Cockburn
  2. Boundaries (Video) by Gary Bernhardt
  3. Domain Driven Design (Book) by Eric Evans


charm-k8s-prometheus's Issues

Handle the case where the unit is not (yet) the leader

2020-02-20 08:41:26 DEBUG start Traceback (most recent call last):
2020-02-20 08:41:26 DEBUG start   File "/var/lib/juju/agents/unit-charm-k8s-grafana-1/charm/hooks/start", line 67, in <module>
2020-02-20 08:41:26 DEBUG start     main(Charm)
2020-02-20 08:41:26 DEBUG start   File "lib/ops/main.py", line 187, in main
2020-02-20 08:41:26 DEBUG start     _emit_charm_event(charm, juju_event_name)
2020-02-20 08:41:26 DEBUG start   File "lib/ops/main.py", line 118, in _emit_charm_event
2020-02-20 08:41:26 DEBUG start     event_to_emit.emit(*args, **kwargs)
2020-02-20 08:41:26 DEBUG start   File "lib/ops/framework.py", line 192, in emit
2020-02-20 08:41:26 DEBUG start     framework._emit(event)
2020-02-20 08:41:26 DEBUG start   File "lib/ops/framework.py", line 602, in _emit
2020-02-20 08:41:26 DEBUG start     self._reemit(event_path)
2020-02-20 08:41:26 DEBUG start   File "lib/ops/framework.py", line 637, in _reemit
2020-02-20 08:41:26 DEBUG start     custom_handler(event)
2020-02-20 08:41:26 DEBUG start   File "/var/lib/juju/agents/unit-charm-k8s-grafana-1/charm/hooks/start", line 48, in on_start_delegator
2020-02-20 08:41:26 DEBUG start     self.adapter.set_pod_spec(output.spec)
2020-02-20 08:41:26 DEBUG start   File "/var/lib/juju/agents/application-charm-k8s-grafana/charm/src/adapters.py", line 28, in set_pod_spec
2020-02-20 08:41:26 DEBUG start     self._framework.model.pod.set_spec(spec_obj)
2020-02-20 08:41:26 DEBUG start   File "lib/ops/model.py", line 452, in set_spec
2020-02-20 08:41:26 DEBUG start     raise ModelError('cannot set a pod spec as this unit is not a leader')
2020-02-20 08:41:26 DEBUG start ops.model.ModelError: cannot set a pod spec as this unit is not a leader
2020-02-20 08:41:27 ERROR juju.worker.uniter.operation runhook.go:132 hook "start" failed: exit status 1
2020-02-20 08:41:37 DEBUG start Traceback (most recent call last):
2020-02-20 08:41:37 DEBUG start   File "/var/lib/juju/agents/unit-charm-k8s-grafana-1/charm/hooks/start", line 67, in <module>
2020-02-20 08:41:37 DEBUG start     main(Charm)
2020-02-20 08:41:37 DEBUG start   File "lib/ops/main.py", line 187, in main
2020-02-20 08:41:37 DEBUG start     _emit_charm_event(charm, juju_event_name)
2020-02-20 08:41:37 DEBUG start   File "lib/ops/main.py", line 118, in _emit_charm_event
2020-02-20 08:41:37 DEBUG start     event_to_emit.emit(*args, **kwargs)
2020-02-20 08:41:37 DEBUG start   File "lib/ops/framework.py", line 192, in emit
2020-02-20 08:41:37 DEBUG start     framework._emit(event)
2020-02-20 08:41:37 DEBUG start   File "lib/ops/framework.py", line 602, in _emit
2020-02-20 08:41:37 DEBUG start     self._reemit(event_path)
2020-02-20 08:41:37 DEBUG start   File "lib/ops/framework.py", line 637, in _reemit
2020-02-20 08:41:37 DEBUG start     custom_handler(event)
2020-02-20 08:41:37 DEBUG start   File "/var/lib/juju/agents/unit-charm-k8s-grafana-1/charm/hooks/start", line 48, in on_start_delegator
2020-02-20 08:41:37 DEBUG start     self.adapter.set_pod_spec(output.spec)
2020-02-20 08:41:37 DEBUG start   File "/var/lib/juju/agents/application-charm-k8s-grafana/charm/src/adapters.py", line 28, in set_pod_spec
2020-02-20 08:41:37 DEBUG start     self._framework.model.pod.set_spec(spec_obj)
2020-02-20 08:41:37 DEBUG start   File "lib/ops/model.py", line 452, in set_spec
2020-02-20 08:41:37 DEBUG start     raise ModelError('cannot set a pod spec as this unit is not a leader')
2020-02-20 08:41:37 DEBUG start ops.model.ModelError: cannot set a pod spec as this unit is not a leader
2020-02-20 08:41:37 ERROR juju.worker.uniter.operation runhook.go:132 hook "start" failed: exit status 1
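A likely fix is to guard pod.set_spec with a leadership check so non-leader units simply skip the call instead of raising ModelError. The sketch below shows only the guard logic; the handler name mirrors the traceback above, and the callables are hypothetical stand-ins so the sketch stays self-contained:

```python
def on_start_delegator_guarded(unit_is_leader, set_pod_spec, build_spec):
    """Only the leader unit may set the pod spec.

    unit_is_leader: result of a leadership query (e.g. from the framework
    adapter); set_pod_spec/build_spec: stand-ins for the charm's adapter
    method and spec builder. Returns True when the spec was applied.
    """
    if not unit_is_leader:
        # Non-leader: nothing to do; avoids "cannot set a pod spec as this
        # unit is not a leader".
        return False
    set_pod_spec(build_spec())
    return True
```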

domain.check_config_propagation() will always return False

Because domain.check_config_propagation() uses the Prometheus API to get the current config:

response = _prometheus_http_api_call(
    model_name, app_name, 'GET', '/api/v1/status/config'
)
current_config = yaml.safe_load(response['data']['yaml'])

current_config will actually be the effective configuration as loaded in memory. This effective configuration equals expected_config plus optional settings that Prometheus has filled in with default values. Thus the following line will always return False:

return current_config == expected_config.to_dict()
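One possible fix is to treat propagation as complete when the expected configuration is a subset of the effective one, ignoring the defaults Prometheus adds on top. A sketch of such a comparison (a hypothetical helper, not the charm's current code):

```python
def config_propagated(current_config, expected_config):
    """True when every expected key/value appears in the effective config,
    recursing into nested dicts so extra defaulted keys are ignored."""
    if isinstance(expected_config, dict):
        return isinstance(current_config, dict) and all(
            key in current_config
            and config_propagated(current_config[key], value)
            for key, value in expected_config.items()
        )
    return current_config == expected_config
```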

RuntimeError: unable to define an event with event_kind that overlaps with an existing type

Originally inspired by canonical/operator#307

tl;dr: Harness and non-Harness tests are interfering with each other, so we should not access the charm object directly, as we did before.

We need to refactor the existing unit tests (e.g., remove direct charm object references) to fix this:

======================================================================== FAILURES ========================================================================
_____________________________________________ OnConfigChangedHandlerTest.test__it_blocks_until_pod_is_ready ______________________________________________

self = <charm_test.OnConfigChangedHandlerTest testMethod=test__it_blocks_until_pod_is_ready>
mock_pod_spec = <function set_juju_pod_spec at 0x7fb334d35f80>, mock_juju_pod_spec = <function build_juju_pod_spec at 0x7fb334d4c710>
mock_time = <NonCallableMagicMock name='time' spec_set='module' id='140407662251216'>
mock_k8s_mod = <NonCallableMagicMock name='k8s' spec_set='module' id='140407662251984'>
mock_build_juju_unit_status_func = <function build_juju_unit_status at 0x7fb334d5e050>

    @patch('charm.build_juju_unit_status', spec_set=True, autospec=True)
    @patch('charm.k8s', spec_set=True, autospec=True)
    @patch('charm.time', spec_set=True, autospec=True)
    @patch('charm.build_juju_pod_spec', spec_set=True, autospec=True)
    @patch('charm.set_juju_pod_spec', spec_set=True, autospec=True)
    def test__it_blocks_until_pod_is_ready(
            self,
            mock_pod_spec,
            mock_juju_pod_spec,
            mock_time,
            mock_k8s_mod,
            mock_build_juju_unit_status_func):
        # Setup
        mock_fw_adapter_cls = \
            create_autospec(framework.FrameworkAdapter, spec_set=True)
        mock_fw_adapter = mock_fw_adapter_cls.return_value

        mock_juju_unit_states = [
            MaintenanceStatus(str(uuid4())),
            MaintenanceStatus(str(uuid4())),
            ActiveStatus(str(uuid4())),
        ]
        mock_build_juju_unit_status_func.side_effect = mock_juju_unit_states

        mock_event_cls = create_autospec(EventBase, spec_set=True)
        mock_event = mock_event_cls.return_value

        harness = Harness(charm.Charm)
>       harness.begin()

test/charm_test.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
lib/ops/testing.py:121: in begin
    self._charm = TestCharm(self._framework, self._framework.meta.name)
src/charm.py:39: in __init__
    super().__init__(*args)
lib/ops/charm.py:353: in __init__
    self.on.define_event(relation_name + '_relation_created', RelationCreatedEvent)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

cls = <class 'ops.testing.Harness.begin.<locals>.TestEvents'>, event_kind = 'http_api_relation_created'
event_type = <class 'ops.charm.RelationCreatedEvent'>

    @classmethod
    def define_event(cls, event_kind, event_type):
        """Define an event on this type at runtime.

        cls: a type to define an event on.

        event_kind: an attribute name that will be used to access the
                    event. Must be a valid python identifier, not be a keyword
                    or an existing attribute.

        event_type: a type of the event to define.

        """
        prefix = 'unable to define an event with event_kind that '
        if not event_kind.isidentifier():
            raise RuntimeError(prefix + 'is not a valid python identifier: ' + event_kind)
        elif keyword.iskeyword(event_kind):
            raise RuntimeError(prefix + 'is a python keyword: ' + event_kind)
        try:
            getattr(cls, event_kind)
            raise RuntimeError(
>               prefix + 'overlaps with an existing type {} attribute: {}'.format(cls, event_kind))
E               RuntimeError: unable to define an event with event_kind that overlaps with an existing type <class 'ops.testing.Harness.begin.<locals>.TestEvents'> attribute: http_api_relation_created

lib/ops/framework.py:322: RuntimeError
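One refactoring pattern that avoids this is to build a fresh Harness per test in setUp and register its cleanup, instead of sharing a charm instance across tests. The sketch below uses a FakeHarness stand-in (so it runs anywhere); in the real tests, Harness(charm.Charm) would take its place, assuming the installed ops version provides Harness.cleanup:

```python
import unittest


class FakeHarness:
    """Stand-in for ops.testing.Harness, for illustration only."""

    def __init__(self):
        self.cleaned_up = False

    def begin(self):
        pass  # real Harness instantiates the charm here

    def cleanup(self):
        self.cleaned_up = True


class HandlerTest(unittest.TestCase):
    def setUp(self):
        # One fresh harness per test; never reuse a charm instance, which
        # is what triggers the duplicate define_event RuntimeError.
        self.harness = FakeHarness()
        self.addCleanup(self.harness.cleanup)

    def test_example(self):
        self.harness.begin()
```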

Charm is stuck forever in "Pod is getting ready" even if pod creation has failed

Model  Controller  Cloud/Region        Version  SLA          Timestamp
lma5   mk8s        microk8s/localhost  2.7.7    unsupported  13:02:38Z

App         Version  Status   Scale  Charm       Store  Rev  OS          Address        Notes
prometheus           waiting      1  prometheus  local    0  kubernetes  10.152.183.41

Unit           Workload     Agent  Address     Ports           Message
prometheus/0*  maintenance  idle   10.1.50.17  80/TCP,443/TCP  Pod is getting ready
ubuntu@node10:~$ microk8s.kubectl get all -n lma5
NAME                        READY   STATUS             RESTARTS   AGE
pod/prometheus-0            1/2     CrashLoopBackOff   5          4m58s
pod/prometheus-operator-0   1/1     Running            0          5m7s

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/prometheus             ClusterIP   10.152.183.41   <none>        80/TCP,443/TCP   4m58s
service/prometheus-endpoints   ClusterIP   None            <none>        <none>           4m58s
service/prometheus-operator    ClusterIP   10.152.183.29   <none>        30666/TCP        5m7s

NAME                                   READY   AGE
statefulset.apps/prometheus            0/1     4m58s
statefulset.apps/prometheus-operator   1/1     5m7s

Prometheus is ignoring advertised-port config option

$ juju config prom advertised-port
9111
NAME                           READY   STATUS             RESTARTS   AGE
pod/grafana-7d6497b77b-kn72t   1/1     Running            0          9m44s
pod/grafana-operator-0         1/1     Running            0          11m
pod/prom-0                     1/1     Running            0          12m
pod/prom-1                     1/1     Running            0          12m
pod/prom-2                     0/1     CrashLoopBackOff   5          6m45s
pod/prom-operator-0            1/1     Running            0          13m

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/grafana            ClusterIP   10.152.183.131   <none>        3000/TCP    11m
service/grafana-operator   ClusterIP   10.152.183.121   <none>        30666/TCP   11m
service/prom               ClusterIP   10.152.183.120   <none>        9111/TCP    12m
service/prom-endpoints     ClusterIP   None             <none>        <none>      12m
service/prom-operator      ClusterIP   10.152.183.190   <none>        30666/TCP   13m

from the kubectl describe pod:

Warning  Unhealthy  54s (x4 over 114s)   kubelet, node10    Liveness probe failed: Get "http://10.1.28.216:9111/-/healthy": dial tcp 10.1.28.216:9111: connect: connection refused

Warning  Unhealthy  46s (x9 over 2m16s)  kubelet, node10    Readiness probe failed: Get "http://10.1.28.216:9111/-/ready": dial tcp 10.1.28.216:9111: connect: connection refused

Testing:

$ kubectl exec -it pod/prom-2 -n lma -- /bin/sh
/prometheus $ 
/prometheus $ telnet 127.0.0.1 9111
telnet: can't connect to remote host (127.0.0.1): Connection refused
/prometheus $ telnet 127.0.0.1 9090
Connected to 127.0.0.1
^]^C
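The telnet test above shows the container still listening on the default 9090 while the Service and the probes moved to 9111, which suggests the charm updates the Service port but never passes the new port to Prometheus itself. A sketch of a container-spec fragment that keeps both in sync (field names follow the Juju pod-spec convention; this is illustrative, not the charm's actual spec builder):

```python
def prometheus_container_spec(advertised_port):
    """Both the container port and Prometheus's listen address must track
    the advertised-port config; otherwise probes on the new port hit a
    closed socket while Prometheus keeps serving on 9090."""
    return {
        'name': 'prometheus',
        'args': [
            # --web.listen-address is the upstream Prometheus flag for
            # choosing the serving port.
            '--web.listen-address=0.0.0.0:{}'.format(advertised_port),
        ],
        'ports': [
            {'containerPort': advertised_port, 'protocol': 'TCP'},
        ],
    }
```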

KeyError: 'JUJU_MODEL_NAME' if Harness-backed test is executed

Originally inspired by canonical/operator#309

======================================================================== FAILURES ========================================================================
_____________________________________________ OnConfigChangedHandlerTest.test__it_blocks_until_pod_is_ready ______________________________________________

self = <charm_test.OnConfigChangedHandlerTest testMethod=test__it_blocks_until_pod_is_ready>, mock_pod_spec = <function set_juju_pod_spec at 0x10b794200>
mock_juju_pod_spec = <function build_juju_pod_spec at 0x10b794950>, mock_time = <NonCallableMagicMock name='time' spec_set='module' id='4487497232'>
mock_k8s_mod = <NonCallableMagicMock name='k8s' spec_set='module' id='4482352976'>
mock_build_juju_unit_status_func = <function build_juju_unit_status at 0x10b7a5290>

    @patch('charm.build_juju_unit_status', spec_set=True, autospec=True)
    @patch('charm.k8s', spec_set=True, autospec=True)
    @patch('charm.time', spec_set=True, autospec=True)
    @patch('charm.build_juju_pod_spec', spec_set=True, autospec=True)
    @patch('charm.set_juju_pod_spec', spec_set=True, autospec=True)
    def test__it_blocks_until_pod_is_ready(
            self,
            mock_pod_spec,
            mock_juju_pod_spec,
            mock_time,
            mock_k8s_mod,
            mock_build_juju_unit_status_func):
        # Setup
        mock_fw_adapter_cls = \
            create_autospec(framework.FrameworkAdapter, spec_set=True)
        mock_fw_adapter = mock_fw_adapter_cls.return_value

        mock_juju_unit_states = [
            MaintenanceStatus(str(uuid4())),
            MaintenanceStatus(str(uuid4())),
            ActiveStatus(str(uuid4())),
        ]
        mock_build_juju_unit_status_func.side_effect = mock_juju_unit_states

        mock_event_cls = create_autospec(EventBase, spec_set=True)
        mock_event = mock_event_cls.return_value

        harness = Harness(charm.Charm)
        harness.begin()
        harness.charm._stored.set_default(is_started=False)
>       harness.charm.on.config_changed.emit()

test/charm_test.py:99:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
lib/ops/framework.py:207: in emit
    framework._emit(event)
lib/ops/framework.py:714: in _emit
    self._reemit(event_path)
lib/ops/framework.py:757: in _reemit
    custom_handler(event)
src/charm.py:74: in on_config_changed
    on_config_changed_handler(event, self.fw_adapter, self._stored)
src/charm.py:96: in on_config_changed_handler
    juju_model = fw_adapter.get_model_name()
src/adapters/framework.py:88: in get_model_name
    return os.environ["JUJU_MODEL_NAME"]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = environ({'TMPDIR': '/var/folders/ch/ptmqpg6d4rjbmlgh_6smvrxm0000gn/T/', 'PATH': '/Users/vgrevtsev/Canonical/charm-k8s-...', 'PYTEST_CURRENT_TEST': 'test/charm_test.py::OnConfigChangedHandlerTest::test__it_blocks_until_pod_is_ready (call)'})
key = 'JUJU_MODEL_NAME'

    def __getitem__(self, key):
        try:
            value = self._data[self.encodekey(key)]
        except KeyError:
            # raise KeyError with the original key value
>           raise KeyError(key) from None
E           KeyError: 'JUJU_MODEL_NAME'

/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/os.py:681: KeyError

---------- coverage: platform darwin, python 3.7.7-final-0 -----------
Name                        Stmts   Miss Branch BrPart  Cover
-------------------------------------------------------------
src/adapters/framework.py      58      4      6      0    94%
src/adapters/k8s.py            43      1     10      2    94%
src/charm.py                   73     36     10      1    48%
src/domain.py                 108     10     42      9    87%
src/exceptions.py               5      1      0      0    80%
src/interface_http.py          26      7      2      0    68%
-------------------------------------------------------------
TOTAL                         313     59     70     12    79%
Coverage HTML written to dir coverage-report

================================================================ short test summary info =================================================================
FAILED test/charm_test.py::OnConfigChangedHandlerTest::test__it_blocks_until_pod_is_ready - KeyError: 'JUJU_MODEL_NAME'
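Harness runs the charm outside a real Juju hook context, so hook environment variables such as JUJU_MODEL_NAME are simply absent. One workaround, sketched below, is to patch the variable in for the duration of the test; get_model_name here is a stand-in for the adapter method in the traceback:

```python
import os
import unittest.mock


def get_model_name():
    """Stand-in for FrameworkAdapter.get_model_name in src/adapters/framework.py."""
    return os.environ['JUJU_MODEL_NAME']


# patch.dict restores the original environment when the context exits,
# so tests cannot leak state into each other.
with unittest.mock.patch.dict(os.environ, {'JUJU_MODEL_NAME': 'lma'}):
    model_name = get_model_name()
```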
