
ceph-dev-docker

This project was migrated to https://github.com/suse/ceph-dev-docker

The purpose of these docker images is to ease the local development of Ceph, by providing a container-based runtime and development environment (based on openSUSE "Tumbleweed").

It requires a local git clone of Ceph to start up a vstart environment.

Usage

Docker User Group

The docker command usually requires root privileges. On some Linux distributions, you can add your user account to the docker user group to remove this requirement.
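On most distributions this amounts to the following sketch (the group may need to be created first, and you must log out and back in for the change to take effect):

```
# create the docker group if it does not exist yet
sudo groupadd -f docker
# add your user to it
sudo usermod -aG docker "$USER"
```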

Older Ceph Releases

When developing for older Ceph releases, use the dedicated Dockerfile for that release, e.g. mimic.Dockerfile for Mimic.

To create a container with that Dockerfile, either call setup.sh with the VERSION variable set or add -f <version>.Dockerfile to the docker build command.
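For instance, to target Mimic (the corresponding Dockerfile name is derived from the VERSION value; the docker commands are shown as comments since they require a Docker daemon):

```shell
# Derive the release-specific Dockerfile name from VERSION
VERSION=mimic
DOCKERFILE="${VERSION}.Dockerfile"
echo "$DOCKERFILE"
# Either: VERSION=$VERSION ./setup.sh
# Or:     docker build --network=host -f "$DOCKERFILE" -t "ceph-dev-docker:$VERSION" .
```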

For each release there is a release-specific bin folder whose scripts take priority over the shared bin scripts.

setup.sh

This script can be used to get a working container with a single command.

It removes the previous container (with the same name), rebuilds the image and creates a new container.

The image itself is not removed, so rebuilds are incremental.

You can customize the outcome of the script with the following env variables:

  • NAME - Name of the container. If you want more than one container, you need to change this. Default: ceph-1.
  • CEPH - Path to the ceph repository. Default: ../ceph
  • CCACHE - Path to ccache. Default: ../ceph-ccache
  • VERSION - Specify an already released Ceph version that you are going to work on. Default: master. Available versions: mimic, nautilus.

Note: CEPH and CCACHE need to be absolute paths.
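One way to satisfy the absolute-path requirement is to resolve the defaults with realpath before invoking the script (the setup.sh invocation is shown as a comment; paths are the documented defaults):

```shell
# Resolve the repo paths to the absolute form that setup.sh requires
# ("../ceph" and "../ceph-ccache" are the documented defaults).
CEPH=$(realpath -m ../ceph)
CCACHE=$(realpath -m ../ceph-ccache)
echo "$CEPH"
# NAME=ceph-1 CEPH="$CEPH" CCACHE="$CCACHE" VERSION=master ./setup.sh
```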

Build the Image

From inside this project's git repo, run the following command:

# docker build --network=host -t ceph-dev-docker .

You should now have two additional images in your local Docker repository, named ceph-dev-docker and docker.io/opensuse:

# docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
ceph-dev-docker      latest              559deb8b9b4f        15 minutes ago      242 MB
docker.io/opensuse   tumbleweed          f27ade5f6fe7        11 days ago         104 MB

Clone Ceph

Somewhere else on your host system, create a local clone of the Ceph git repository. Replace <ceph-repository> with the remote git repo you want to clone from, e.g. https://github.com/ceph/ceph.git:

# cd <workdir>
# git clone <ceph-repository>
# cd ceph

Now switch or create your development branch using git checkout or git branch.

Starting the Container and Building Ceph

Now start up the container, by mounting the local git clone directory as /ceph:

# docker run -itd \
  -v <CEPH_ROOT>:/ceph \
  -v <CCACHE_ROOT>:/root/.ccache \
  -v <CEPH_DEV_DOCKER_ROOT>/shared:/shared \
  --net=host \
  --name=ceph-dev \
  --hostname=ceph-dev \
  --add-host=ceph-dev:127.0.0.1 \
  ceph-dev-docker

Let's walk through some of the flags from the above command:

  • -d: runs the container in detached mode
  • <CCACHE_ROOT>: the directory where ccache will store its data
  • --name: custom name for the container; it can be used for managing the container
  • --hostname: custom hostname for the Docker container; it helps to distinguish one container from another
  • --add-host: fixes hostname resolution inside the container

Extra flags:

  • --env CEPH_PORT=<CEPH_PORT>: This port is used by vstart.sh as the base for each service's port. Each service gets a port offset by 1000 from the previous one; since the dashboard is currently the first service, its port will be <CEPH_PORT>+1000. Make sure the <CEPH_PORT> you pick won't cause conflicts in your environment.
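A quick sanity check of the port math described above (the value 14000 is just an example):

```shell
# With CEPH_PORT=14000, the dashboard (currently the first service)
# ends up on CEPH_PORT + 1000
CEPH_PORT=14000
DASHBOARD_PORT=$((CEPH_PORT + 1000))
echo "$DASHBOARD_PORT"   # prints 15000
```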

After running this command you will have a running docker container. Now, anytime you want to access the container shell you just have to run

# docker attach ceph-dev

Inside the container, you can now call setup-ceph.sh, which will install all the required build dependencies and then build Ceph from source.

(docker)# setup-ceph.sh

If you need to limit the number of build jobs you can do the following:

NPROC=2 setup-ceph.sh

NPROC can be set to any value which is lower or equal to your number of logical CPU cores.
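For example, to leave some cores free for other work, you could derive NPROC from nproc (this heuristic is an illustration, not something setup-ceph.sh computes itself):

```shell
# Use half the logical cores, but at least one build job
CORES=$(nproc)
NPROC=$((CORES / 2))
[ "$NPROC" -ge 1 ] || NPROC=1
echo "$NPROC"
# NPROC=$NPROC setup-ceph.sh
```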

This script reads the following env variables:

# Determines which gcc version should be used. If 'true', gcc7 is used,
# which is compatible with the Mimic release.
# Default: false
MIMIC=true

# Forces the use of a specific Python version.
# Default: 3
WITH_PYTHON=2

# Forces a clean compilation of Ceph. It removes the build folder as well as
# node_modules and dist from the dashboard.
# Default: false
CLEAN=true

Docker Container Lifecycle

To start a container run,

# docker start ceph-dev

And to attach to a running container shell,

# docker attach ceph-dev

Or to create a new session to the same container,

# docker exec -it ceph-dev /bin/zsh

If you want to detach from the container and stop the container,

(docker)# exit

However, if you simply want to detach without stopping the container, so that you can reattach at a later time, press:

(docker)# CTRL+P CTRL+Q

Finally, to stop the container,

# docker stop ceph-dev

Multiple Docker Containers

If you want to run multiple Docker containers, you just need to modify the previous docker run command to use a different local ceph directory and to replace ceph-dev with a new name.

For example:

# docker run -itd \
  -v <CEPH_ROOT>:/ceph \
  -v <CCACHE_ROOT>:/root/.ccache \
  -v <CEPH_DEV_DOCKER_ROOT>/shared:/shared \
  --net=host \
  --name=new-ceph-container \
  --hostname=new-ceph-container \
  --add-host=new-ceph-container:127.0.0.1 \
  ceph-dev-docker

Now if you want to access this container just run,

# docker attach new-ceph-container

Working on Ceph Dashboard

There are some scripts that can be useful if you are working on Ceph Dashboard.

All of them are now accessible through the dash command:

(docker)# dash

When you press Tab, it will show you a list of all scripts and their descriptions.

Enable NFS-Ganesha support (Optional)

If you need to test the NFS feature in Ceph Dashboard, at least one nfs-ganesha daemon must be provisioned. The start-ceph.sh script introduced in a later section will ask vstart.sh to spawn an nfs-ganesha daemon if the required packages are installed. Run the following script to install them:

(docker)# setup-nfs.sh

Start Ceph Development Environment

To start up the compiled Ceph cluster, you can use the vstart.sh script, which spawns an entire cluster (MONs, OSDs, MGRs) in your development environment, or you can use the start-ceph.sh script available in this Docker image.

See the documentation and the output of vstart.sh --help for details.

To start an environment from scratch with debugging enabled, use the following command:

(docker)# start-ceph.sh
# OR
(docker)# dash start-ceph

Note: This script uses the vstart -d option that enables debug output. Keep a close eye on the growth of the log files created in build/out, as they can grow very quickly (several GB within a few hours).

Test Ceph Development Environment

(docker)# cd /ceph/build
(docker)# bin/ceph -s

Stop Ceph Development Environment

(docker)# stop-ceph.sh
# OR
(docker)# dash stop-ceph

Reload Dashboard Module (Backend)

Run the following script to reflect changes in python files:

(docker)# reload-dashboard.sh
# OR
(docker)# dash reload-dashboard

Start Development Server (Frontend)

The following script will start a frontend development server that can be accessed at http://localhost:4200:

(docker)# npm-start.sh
# OR
(docker)# dash npm-start

Setup system to use the cephadm Vagrant box

Ceph provides a Vagrant box that can be used with this Docker container.

To be able to interact with this Vagrant box you need to start it and extract the SSH configuration. Note: your Docker container must be started with the -v ~/.ssh:/root/.ssh:ro command line argument to be able to establish an SSH connection between the Vagrant box and your Docker container.
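The extra bind mount can be expressed like this (~/.ssh is the common default location; adjust if your keys live elsewhere — the docker run invocation is shown as a comment):

```shell
# Read-only bind mount of your SSH directory, added to the docker run
# command shown earlier
SSH_VOLUME="$HOME/.ssh:/root/.ssh:ro"
echo "-v $SSH_VOLUME"
# docker run -itd -v "$SSH_VOLUME" ... ceph-dev-docker
```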

To start the Vagrant box go to src/pybind/mgr/cephadm and run:

# vagrant up

After that, extract the SSH configuration:

# vagrant ssh-config > ssh-config

Now set everything up in your Docker container by running:

# setup-cephadm.sh

This will enable the Ceph Manager module cephadm and set up the SSH configuration. Additionally, the IP addresses of the Vagrant box nodes will be added to the /etc/hosts file.

External Services

To run preconfigured external services, you can simply use docker-compose.

If you do not have docker-compose installed on your system, follow these instructions.

Starting Services

Running the following command will start all containers, one for each service.

docker-compose up

Note that this will not start ceph-dev-docker. See the instructions above on how to perform this task.

You can also start a single service by providing its name as configured in the docker-compose.yml file.

docker-compose up grafana

Stopping these containers is as easy as starting them:

docker-compose down

You may want to check the docker-compose help for starting containers. It describes how to force recreation of containers or how to rebuild them:

docker-compose help up

After starting all containers, the following external services will be available:

Service         URL                                         User                        Pass
Grafana         http://localhost:3000                       admin                       admin
Prometheus      http://localhost:9090                       -                           -
Alertmanager    http://localhost:9093                       -                           -
Node Exporter   http://localhost:9100                       -                           -
Keycloak        http://localhost:8080                       admin                       admin
LDAP            ldap://localhost:2389                       cn=admin,dc=example,dc=org  admin
PHP LDAP Admin  https://localhost:90                        cn=admin,dc=example,dc=org  admin
Shibboleth      http://localhost:9080/Shibboleth.sso/Login  admin                       admin
HAProxy         https://localhost/                          -                           -

Enabling Prometheus

All TCP ports in use can be found in prometheus/prometheus.yml.

To start Prometheus, Grafana and Node exporter, run:

docker-compose up alertmanager grafana node-exporter prometheus

To connect Prometheus to your running Ceph instance, enable the Ceph Manager prometheus module:

ceph mgr module enable prometheus

In order to enable connections from the dashboard to the Alertmanager and Prometheus API run:

ceph dashboard set-alertmanager-api-host http://localhost:9093
ceph dashboard set-prometheus-api-host http://localhost:9090

After you have connected the APIs to the dashboard, reload the page. After a few minutes you should see a new tab inside the notification popover. The monitoring tab should show at least one active alert, named 'load0'.

There are 7 alerts configured; you can find them in 'prometheus/alert.rules'. The alerts have nothing to do with your cluster state (at the moment), only with your system load. 'load0' fires if your system load is greater than zero, which means it is always active.

Enabling Grafana

The Grafana container is pre-configured to access the Prometheus container as a data source. The Grafana dashboards are taken from the Ceph git repository when the container is created (assuming that the ceph git repo is located in the same directory as the ceph-dev-docker git repo). Grafana scans this directory for changes every 10 seconds.

To configure the embedding of the Grafana dashboards into the Ceph Manager Dashboard, run the following command inside the ceph-dev container where Ceph is up and running:

(ceph-dev)# bin/ceph dashboard set-grafana-api-url http://localhost:3000

See the Ceph Dashboard documentation for additional information about the Grafana integration.

Enabling SSL

To enable SSL in Grafana you need to modify the 'grafana/grafana.ini' file.

[server]
protocol = https

After that run the following command inside the ceph-dev container where Ceph is up and running:

(ceph-dev)# bin/ceph dashboard set-grafana-api-url https://localhost:3000

Configuring SSO

Add the following entry to your /etc/hosts:

<your_ip> cephdashboard.local

Access ceph-dev container:

# docker exec -it ceph-dev bash

(ceph-dev)# cd /ceph/build

Start Ceph Dashboard (if it's not already running):

(ceph-dev)# start-ceph

Ceph Dashboard should be running on port 443:

(ceph-dev)# bin/ceph config set mgr mgr/dashboard/x/ssl_server_port 443

(ceph-dev)# bin/ceph mgr module disable dashboard

(ceph-dev)# bin/ceph mgr module enable dashboard

(ceph-dev)# bin/ceph mgr services

Setup SSO on Ceph Dashboard:

(ceph-dev)# cat <<EOF > sp_x_509_cert

MIIDYDCCAkigAwIBAgIJAOwAnH/ZKuTnMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwHhcNMTgwOTI0MTA0ODQwWhcNMjgwOTIzMTA0ODQwWjBFMQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArdsf7uMypSF/6/7W+dKGsveHa3nbkKRPbXAP9P9a9hb3vxVkd6Qqsgf4WJrwRAl1I5Hhfz6AfHDVJFJg+TDKBlba2NXASxGMWYcgdvAvzyrWaIfGUhajZ/cE2Zz16qs3nIY88jXqaVQIFhESBk9uc3aK3RGgLTb6ytWRlP/EMQZ8pxlQUYUuqvKMCBifJTUPDyGiqnaQ826W1zi1qMcHmbRQbmprU/g1na6rAX1OJPwMgovrMvQKR9PuMmUDauLQI3iWHzy3t+02rKUAHWGF2Xfel3RCSXWp+o6nBRrUnl642zAvXuoGYyJLTqXbziD2CVT0uA8SuH/w/UFFflWEEwIDAQABo1MwUTAdBgNVHQ4EFgQUDr2DkSCj8i5I8JfmN/9SbaqrR8UwHwYDVR0jBBgwFoAUDr2DkSCj8i5I8JfmN/9SbaqrR8UwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEACqtiY50kaOt+lKNRlOsqMRe5YftjWNF53CP2jAFpE4n7tVDGnO8c2+gDJL7Fc9VkcfZzYArpzcdcXMMKD/Kp4L/zzPpIiVxZtqRw3A+xNkZ6yKLz6aZAY/2wIcVwXBGvDFIHYuzfS5YTp9oAX9M+izTt4HuP20GuyCNWIE/ME5QUaJ62Z+nJdCd43Eg4gq67+whSWaL6GdiW1y+Fcj4nAEWMKNccDeCWI9FTG/aTmliazvHSxOi6Z3mcQNs0VIgBlbuVmXruJEFPv40okY5drFZbR4ZjjSbZPckXVs62fTV+q5RtrTQd8+g5ifci+TOyPEktC49FKanZR6L0TI+E8g==
EOF

(ceph-dev)# cat <<EOF > sp_private_key

MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCt2x/u4zKlIX/r/tb50oay94dreduQpE9tcA/0/1r2Fve/FWR3pCqyB/hYmvBECXUjkeF/PoB8cNUkUmD5MMoGVtrY1cBLEYxZhyB28C/PKtZoh8ZSFqNn9wTZnPXqqzechjzyNeppVAgWERIGT25zdordEaAtNvrK1ZGU/8QxBnynGVBRhS6q8owIGJ8lNQ8PIaKqdpDzbpbXOLWoxweZtFBuamtT+DWdrqsBfU4k/AyCi+sy9ApH0+4yZQNq4tAjeJYfPLe37TaspQAdYYXZd96XdEJJdan6jqcFGtSeXrjbMC9e6gZjIktOpdvOIPYJVPS4DxK4f/D9QUV+VYQTAgMBAAECggEAAj0sMBtk75N63kMt6ZG4gl2FtPCgz0AOdc5XpkQTm8+36RIRdSj8E8bef+We6oFkrMyYJtdbOD8Lv6f/77WdJG/B6cD29QCI2i5PULjPJM/cawQ0naIFALXBrjvDPv5tfOqNpmDjX+/hGself8dOGNaR+z7a3To0CKCve0e/8xGo3uNhPBByvrGgdZK6LQKOeo387zKRwDG2Pi4+e5kfGwOYB4tfPZOMVMEuFAV+MJ9xb6N2lp/n1Qxo9ceEiOxjJGzQygJJUquIe+koQfKcZ/iah5mi8BaEdXYKIklDxEJijXmfEwjFE2yrYqV1HZ2iuOzdeVgeCeloYST/BxHNwQKBgQDiIV9TvW2/T/HBSA/yUjmO9r93oXTb0lMvfHurKBF1t65aAztpq8sIZ/4JrfEkumo1KA5Nm+Z3nPY5dEpO4A/CSUXX8iCMQSbE/Sk2PspReG1hSsYMMZYKIXUp8fE3zZHnUuXug+4pjMKzcD2hNKj37uFTn5BlQyXnn0Uap/AfuwKBgQDE0hYobMGnbRbjQhm8rSYeslPDjDC8yLOJW83TWMqWseMRXzuB+dU+Gooo2SMmWRatKuZ+oACx7E8k6aMaUrv7aCnht7QH/TBBUsb10ZZ9mvi+wRqiw7drrxcvU6X07A17bsIzT4FJ+QdisUKwkVrFhCGcySZLyAWgQHMD/i6LiQKBgFEJYJ4j3naW8a4wYvaWHOZs6sS2aah1QTZdR/xYSZmED8lWKy59UC9dBR725Noiq/kMt8N8QSVQbLS+RfrqNPuNQqhWru9UUc56YxB7hAmaPKiHIV4xTvGmd9RmTemPk9/wR1IomWrudL/VU2C3/G2Nf9Z18ks3uxe8bglVcaoNAoGAP/3+bk5N+F2jn2gSbiHtzvUz/tRJ1Fd86CANH7YyyCQ2K6PG+U99YZ/HY9iVcRZuJQdZwbnMAA1Q/jNocFqN/AO1+kl8I0zSr6p2Pd5TC6ujTIIEYv83V6+p3h1YS/WjvIoaYgxrgN2S5Se1Ayt/U9DODOfpp6H1ElFiE95Ey+ECgYA8vcf0CBCcVTUitTAFTPDFujFKlQ1a9HHbu3qFPP5A/Jo6Ki4eqmZfCBH7ZB/B1oOf0Jb/Er24nAql8VHqVrTfLhsKdM8djLWeFp7YRaWlNjQnoweHKBaBRL0HVkrwh/1fvtnlIB4K8kNc8liwCIOmpt0WMFkMKHBqeRJ/XS2gGQ==
EOF

(ceph-dev)# bin/ceph dashboard sso setup saml2 https://cephdashboard.local  https://localhost:9443/idp/shibboleth uid https://cephdashboard/idp sp_x_509_cert sp_private_key | jq

To generate a different certificate:

openssl req -new -x509 -days 3652 -nodes -out sp.crt -keyout saml.key

Access shibboleth container:

# docker-compose exec shibboleth /bin/bash

Add the following entry to shibboleth /etc/hosts:

<your_ip> cephdashboard.local

Setup shibboleth IdP:

(shibboleth)# curl https://cephdashboard.local/auth/saml2/metadata --output /opt/shibboleth-idp/metadata/cephdashboard-metadata.xml --insecure

(shibboleth)# $JETTY_HOME/bin/jetty.sh restart

Log in with user admin and password admin.

Note: SLO is not fully configured yet.

Enabling HAProxy

For testing Ceph Dashboard HA you should run multiple Ceph Managers. To do so, execute the following commands in your Ceph development environment:

# export MGR=3
# start-ceph.sh

Now you have to update the HAProxy configuration file ./haproxy/haproxy.conf and adapt the host ports to those of your running Ceph Dashboards. You will find them in the output of the vstart.sh script. After that, start HAProxy:

# docker-compose up haproxy

or if you want to enable all services:

# docker-compose up haproxy alertmanager grafana node-exporter prometheus
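For orientation, the backend section you adapt in ./haproxy/haproxy.conf might look roughly like this sketch (standard HAProxy syntax; the backend name, server names and ports are illustrative and must be replaced with the ports printed by vstart.sh):

```
backend ceph_dashboard
    balance roundrobin
    # one server line per running Ceph Manager / Dashboard instance
    server mgr_x 127.0.0.1:41000 check ssl verify none
    server mgr_y 127.0.0.1:42000 check ssl verify none
    server mgr_z 127.0.0.1:43000 check ssl verify none
```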

Now you can reach the Ceph Dashboard via

* http://localhost/
* https://localhost/

To simulate a failover you have to find out which node is active. This can be done by running:

# ceph status

To force a failover you simply have to execute the following command on one of your Ceph Manager nodes x, y or z:

# ceph mgr fail <ACTIVE_MGR>
# ceph mgr fail x

If you are logged into the Dashboard via HTTPS while a failover occurs, you will get error messages because the SSL certificate of the new active Ceph Dashboard instance has changed. Refresh the browser to fix this.

Troubleshooting

Permission error when trying to access /ceph

If you encounter a permission denied error when trying to access /ceph, for instance by running setup-ceph.sh or simply by trying to list its contents (to verify that it has been mounted correctly), chances are high that your host system uses SELinux. To circumvent the problem, you can disable SELinux by running:

sudo setenforce permissive

This puts SELinux in permissive mode, where the rules are still evaluated and logged, but not enforced. This effectively disables SELinux, making the host system more vulnerable to security flaws.

ceph-dev-docker's People

Contributors

a2batic, bk201, callithea, lenzgr, p-se, ricardoasmarques, rjfd, schoolguy, tspmelo, votdev

ceph-dev-docker's Issues

alerting rules file contains wrong expressions

Ceph MGR fails to load MGR modules after rebuild

2018-10-31 13:52:33.781 7f978235e740 10 mgr[py] loaded 10 options
2018-10-31 13:52:33.781 7f978235e740  4 mgr[py] Standby mode not provided by module 'balancer'
2018-10-31 13:52:33.781 7f978235e740  1 mgr[py] Loading python module 'crash'
2018-10-31 13:52:33.801 7f978235e740 10 mgr[py] Computed sys.path '/ceph/src/pybind:/ceph/build/lib/cython_modules/lib.3::/usr/lib/python36.zip:/usr/lib64/python3.6:/usr/lib64/python3.6:/usr/lib64/python3.6/lib-dynload:/usr/lib64/python3.6/site-packages:/usr/lib/python3.6/site-packages:/usr/local/lib64/python3.6/site-packages:/usr/local/lib/python3.6/site-packages:/ceph/src/pybind/mgr'
2018-10-31 13:52:33.813 7f978235e740 -1 mgr[py] Module not found: 'mgr_module'
2018-10-31 13:52:33.813 7f978235e740 -1 mgr[py] Traceback (most recent call last):
  File "/ceph/src/pybind/mgr/mgr_module.py", line 8, in <module>
    import rados
ImportError: Interpreter change detected - this module can only be loaded into one interpreter per process.

2018-10-31 13:52:33.813 7f978235e740 -1 mgr[py] Class not found in module 'crash'
2018-10-31 13:52:33.813 7f978235e740 -1 mgr[py] Error loading module 'crash': (22) Invalid argument
2018-10-31 13:52:33.813 7f978235e740  1 mgr[py] Loading python module 'dashboard'
2018-10-31 13:52:33.837 7f978235e740 10 mgr[py] Computed sys.path '/ceph/src/pybind:/ceph/build/lib/cython_modules/lib.3::/usr/lib/python36.zip:/usr/lib64/python3.6:/usr/lib64/python3.6:/usr/lib64/python3.6/lib-dynload:/usr/lib64/python3.6/site-packages:/usr/lib/python3.6/site-packages:/usr/local/lib64/python3.6/site-packages:/usr/local/lib/python3.6/site-packages:/ceph/src/pybind/mgr'
2018-10-31 13:52:33.849 7f978235e740 -1 mgr[py] Module not found: 'mgr_module'
2018-10-31 13:52:33.849 7f978235e740 -1 mgr[py] Traceback (most recent call last):
  File "/ceph/src/pybind/mgr/mgr_module.py", line 8, in <module>
    import rados
ImportError: Interpreter change detected - this module can only be loaded into one interpreter per process.

2018-10-31 13:52:33.849 7f978235e740 -1 mgr[py] Class not found in module 'dashboard'
2018-10-31 13:52:33.849 7f978235e740 -1 mgr[py] Error loading module 'dashboard': (22) Invalid argument
2018-10-31 13:52:33.849 7f978235e740  1 mgr[py] Loading python module 'devicehealth'
2018-10-31 13:52:33.873 7f978235e740 10 mgr[py] Computed sys.path '/ceph/src/pybind:/ceph/build/lib/cython_modules/lib.3::/usr/lib/python36.zip:/usr/lib64/python3.6:/usr/lib64/python3.6:/usr/lib64/python3.6/lib-dynload:/usr/lib64/python3.6/site-packages:/usr/lib/python3.6/site-packages:/usr/local/lib64/python3.6/site-packages:/usr/local/lib/python3.6/site-packages:/ceph/src/pybind/mgr'
2018-10-31 13:52:33.889 7f978235e740 -1 mgr[py] Module not found: 'mgr_module'
2018-10-31 13:52:33.889 7f978235e740 -1 mgr[py] Traceback (most recent call last):
  File "/ceph/src/pybind/mgr/mgr_module.py", line 8, in <module>
    import rados
ImportError: Interpreter change detected - this module can only be loaded into one interpreter per process.

2018-10-31 13:52:33.889 7f978235e740 -1 mgr[py] Class not found in module 'devicehealth'
2018-10-31 13:52:33.889 7f978235e740 -1 mgr[py] Error loading module 'devicehealth': (22) Invalid argument
2018-10-31 13:52:33.889 7f978235e740  1 mgr[py] Loading python module 'diskprediction'
2018-10-31 13:52:33.913 7f978235e740 10 mgr[py] Computed sys.path '/ceph/src/pybind:/ceph/build/lib/cython_modules/lib.3::/usr/lib/python36.zip:/usr/lib64/python3.6:/usr/lib64/python3.6:/usr/lib64/python3.6/lib-dynload:/usr/lib64/python3.6/site-packages:/usr/lib/python3.6/site-packages:/usr/local/lib64/python3.6/site-packages:/usr/local/lib/python3.6/site-packages:/ceph/src/pybind/mgr'
2018-10-31 13:52:33.929 7f978235e740 -1 mgr[py] Module not found: 'mgr_module'
2018-10-31 13:52:33.929 7f978235e740 -1 mgr[py] Traceback (most recent call last):
  File "/ceph/src/pybind/mgr/mgr_module.py", line 8, in <module>
    import rados
ImportError: Interpreter change detected - this module can only be loaded into one interpreter per process.

2018-10-31 13:52:33.929 7f978235e740 -1 mgr[py] Class not found in module 'diskprediction'
2018-10-31 13:52:33.929 7f978235e740 -1 mgr[py] Error loading module 'diskprediction': (22) Invalid argument
2018-10-31 13:52:33.929 7f978235e740  1 mgr[py] Loading python module 'hello'
2018-10-31 13:52:33.953 7f978235e740 10 mgr[py] Computed sys.path '/ceph/src/pybind:/ceph/build/lib/cython_modules/lib.3::/usr/lib/python36.zip:/usr/lib64/python3.6:/usr/lib64/python3.6:/usr/lib64/python3.6/lib-dynload:/usr/lib64/python3.6/site-packages:/usr/lib/python3.6/site-packages:/usr/local/lib64/python3.6/site-packages:/usr/local/lib/python3.6/site-packages:/ceph/src/pybind/mgr'
2018-10-31 13:52:33.961 7f978235e740 -1 mgr[py] Module not found: 'mgr_module'
2018-10-31 13:52:33.961 7f978235e740 -1 mgr[py] Traceback (most recent call last):
  File "/ceph/src/pybind/mgr/mgr_module.py", line 8, in <module>
    import rados
ImportError: Interpreter change detected - this module can only be loaded into one interpreter per process.

Current build flags in this project:
-DWITH_PYTHON3=ON -DWITH_PYTHON2=OFF -DMGR_PYTHON_VERSION=3 -DWITH_TESTS=ON -DWITH_CCACHE=ON

Another combination which does not trigger the issue:
-D WITH_PYTHON3=ON -D WITH_TESTS=ON -D WITH_CCACHE=ON -D ENABLE_GIT_VERSION=OFF WITH_MGR_DASHBOARD_FRONTEND=OFF

Likely relevant (bad):
-D WITH_PYTHON2=OFF and -D MGR_PYTHON_VERSION=3

This looks like something in master has changed and prevents us from using a Python3 only Ceph, but as I'm not sure about that, I opened the issue here first.

Workaround:

Use the build flags known to work and adapt the setup-ceph.sh file inside or outside of your container. Adapting it outside requires you to build a new image and create a new container out of that image.

"Can't find librados2 package" error during the building

I ran into this error when just running the build command.

It seems that in the source repos only librados3 is available, not librados2. So I modified it in the Dockerfile and finally built the image successfully. The reason is simply that there is no official librados2 package for openSUSE Tumbleweed.

I don't know if it matters... Is librados2 necessary?

Errors occurred while compiling Ceph

Hi, after following the steps in the documentation, I now have a running container.

But when I try to build Ceph using the setup-ceph.sh script, it fails.

The error message is below:

git version 2.20.1
WITH_PYTHON 3
-- Building with ccache: /usr/bin/ccache, CCACHE_DIR=
-- NSS_LIBRARIES: /usr/lib64/libssl3.so;/usr/lib64/libsmime3.so;/usr/lib64/libnss3.so;/usr/lib64/libnssutil3.so
-- NSS_INCLUDE_DIRS: /usr/include/nss3
'--host=x86_64-suse-linux-gnu' '--build=x86_64-suse-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/lib' '--localstatedir=/var' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--disable-dependency-tracking' '--enable-ipv6' '--with-ssl' '--with-ca-fallback' '--without-ca-path' '--without-ca-bundle' '--with-gssapi=/usr/lib/mit' '--with-libidn2' '--with-libssh' '--with-libmetalink' '--enable-hidden-symbols' '--disable-static' '--enable-threaded-resolver' 'build_alias=x86_64-suse-linux-gnu' 'host_alias=x86_64-suse-linux-gnu' 'CFLAGS=-O2 -Wall -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables -fstack-clash-protection -g -fPIE' 'LDFLAGS= -pie' 'CPPFLAGS=-D_FORTIFY_SOURCE=2'
-- libcurl is linked with openssl: explicitly setting locks
-- ssl soname: libssl.so.1.1
-- crypto soname: libcrypto.so.1.1
-- Found PythonInterp: /usr/bin/python3 (found suitable version "3.7.2", minimum required is "3")
-- BUILDING Boost Libraries at j 2
-- boost will be downloaded...
-- Found Yasm: good -- capable of assembling x86_64
-- Found PythonInterp: /usr/bin/python3 (found version "3.7.2")
-- Setting civetweb to use OPENSSL >= 1.1
CMake Error at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
Could NOT find RabbitMQ (missing: rabbitmq_INCLUDE_DIR rabbitmq_LIBRARY)
Call Stack (most recent call first):
/usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:378 (_FPHSA_FAILURE_MESSAGE)
cmake/modules/FindRabbitMQ.cmake:9 (find_package_handle_standard_args)
src/rgw/CMakeLists.txt:182 (find_package)

-- Configuring incomplete, errors occurred!

The Cmake error log file is also attached.
CMakeError.log

Since I am completely new to cmake and to compiling Ceph: is this an error in the script, or something wrong in the environment related to the Dockerfile? Thanks for any advice and help.

`install-deps.sh` fails due to missing `pip2` command

╭─root@ceph-dev /ceph ‹master*› 
╰─setup-setup-ceph.sh                                                                                                                                                                                       
[...]
Nothing to do.
Collecting python3-saml
  Downloading https://files.pythonhosted.org/packages/1c/5a/fdb873d1f89b031b24a078fd3a2de8da887407e98cb2e651f13d97a10aea/python3_saml-1.6.0-py3-none-any.whl (72kB)
    100% |████████████████████████████████| 81kB 4.0MB/s 
Collecting xmlsec>=0.6.0 (from python3-saml)
  Downloading https://files.pythonhosted.org/packages/35/42/d7cd323c91d4706f3cc32ffe7d5f851ab8ef9898ccb350f6ba593dd8b89a/xmlsec-1.3.3.tar.gz
Collecting isodate>=0.5.0 (from python3-saml)
  Downloading https://files.pythonhosted.org/packages/9b/9f/b36f7774ff5ea8e428fdcfc4bb332c39ee5b9362ddd3d40d9516a55221b2/isodate-0.6.0-py2.py3-none-any.whl (45kB)
    100% |████████████████████████████████| 51kB 5.9MB/s 
Collecting defusedxml==0.5.0 (from python3-saml)
  Downloading https://files.pythonhosted.org/packages/87/1c/17f3e3935a913dfe2a5ca85fa5ccbef366bfd82eb318b1f75dadbf0affca/defusedxml-0.5.0-py2.py3-none-any.whl
Collecting pkgconfig (from xmlsec>=0.6.0->python3-saml)
  Downloading https://files.pythonhosted.org/packages/b4/2c/bf434cb5a6590417e1d4468050ec317ea17fd6231c2a256df4646c11e588/pkgconfig-1.5.1-py2.py3-none-any.whl
Collecting lxml>=3.0 (from xmlsec>=0.6.0->python3-saml)
  Downloading https://files.pythonhosted.org/packages/66/20/49201c7bcb1c92942ac98658b09fb4a0c0dcd064d439489349bc5891207c/lxml-4.3.3-cp37-cp37m-manylinux1_x86_64.whl (5.7MB)
    100% |████████████████████████████████| 5.7MB 4.6MB/s 
Requirement already satisfied: six in /usr/lib/python3.7/site-packages (from isodate>=0.5.0->python3-saml) (1.12.0)
Installing collected packages: pkgconfig, lxml, xmlsec, isodate, defusedxml, python3-saml
  Running setup.py install for xmlsec ... done
Successfully installed defusedxml-0.5.0 isodate-0.6.0 lxml-4.3.3 pkgconfig-1.5.1 python3-saml-1.6.0 xmlsec-1.3.3
/shared/bin/setup-ceph.sh: line 26: pip2: command not found
╭─root@ceph-dev /ceph ‹master*› 
╰─#                                                                                                                                                                                                     127 ↵

Installing the python2-pip package using zypper in python2-pip solves the problem.

Fail to start Ceph in Nautilus container

A recent change enabled logging in the start-ceph.sh script:

./bin/ceph config set mgr mgr/dashboard/log_level info
./bin/ceph config set mgr mgr/dashboard/log_to_file true

These config options are not available in Nautilus, which causes the start-ceph.sh script to fail.

dashboard urls: https://172.18.0.1:41700
  w/ user/pass: admin / admin
restful urls: https://172.18.0.1:42700
  w/ user/pass: admin / be67b364-a629-4763-bc8c-a150b4ea7f2b


export PYTHONPATH=./pybind:/ceph/src/pybind:/ceph/build/lib/cython_modules/lib.3:
export LD_LIBRARY_PATH=/ceph/build/lib
CEPH_DEV=1
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2020-02-05 07:40:05.968 7fafc7e68700 -1 WARNING: all dangerous and experimental features are enabled.
2020-02-05 07:40:06.000 7fafc7e68700 -1 WARNING: all dangerous and experimental features are enabled.
Error EINVAL: unrecognized config option 'mgr/dashboard/log_level'

change the base image to a more stable one

openSUSE Tumbleweed is a rolling release, so something can easily break after an update. Yesterday I tried to set up a Ceph dev environment and encountered the issue openssl/openssl#10015 with the latest opensuse/tumbleweed image. It was caused by OpenSSL 1.1.1d. Maybe we should use a non-rolling release?

Enable Dashboard debug mode

Dashboard debug mode should be automatically enabled using the following command:

  • bin/ceph dashboard debug enable

(or bin/ceph config set mgr mgr/dashboard/debug true)

`npm install` fails due to permission problems

Please note that this issue was originally reported in oftc#ceph-dashboard by a user of ceph-dev-docker. I just happened to be able to reproduce it on my customized fork using sudo. As ceph-dev-docker runs as root by default, it is not necessary to use sudo to reproduce the problem there!

My proposed solution (as I do not have this problem with my fork) is to switch from root to a non-root user inside the container. Ceph doesn't require root anyway, and this will prevent mixed ownership in the ceph repo.

The workaround that fixed the problem for ceph-dev-docker users is to install the frontend dependencies by calling npm install --unsafe-perm in /ceph/src/pybind/mgr/dashboard/frontend.

user@ceph-4 /ceph/src/pybind/mgr/dashboard/frontend (master*) $ sudo npm install   
Unhandled rejection Error: Command failed: /usr/bin/git submodule update -q --init --recursivec/pybind/mgr/dashboard/frontend/node_modules/.staging/typescript-7d3fd7c7 (11733ms)
fatal: failed to stat '/root/.npm/_cacache/tmp/git-clone-44f5e7c8': Permission denied

    at ChildProcess.exithandler (child_process.js:281:12)
    at emitTwo (events.js:126:13)
    at ChildProcess.emit (events.js:214:7)
    at maybeClose (internal/child_process.js:915:16)
    at Socket.stream.socket.on (internal/child_process.js:336:11)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at Pipe._handle.close [as _onclose] (net.js:561:12)


> [email protected] install /ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-sass
> node scripts/install.js

Unable to save binary /ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-sass/vendor/linux-x64-57 : { Error: EACCES: permission denied, mkdir '/ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-sass/vendor'
    at Object.fs.mkdirSync (fs.js:885:18)
    at sync (/ceph/src/pybind/mgr/dashboard/frontend/node_modules/mkdirp/index.js:71:13)
    at Function.sync (/ceph/src/pybind/mgr/dashboard/frontend/node_modules/mkdirp/index.js:77:24)
    at checkAndDownloadBinary (/ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-sass/scripts/install.js:114:11)
    at Object.<anonymous> (/ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-sass/scripts/install.js:157:1)
    at Module._compile (module.js:653:30)
    at Object.Module._extensions..js (module.js:664:10)
    at Module.load (module.js:566:32)
    at tryModuleLoad (module.js:506:12)
    at Function.Module._load (module.js:498:3)
  errno: -13,
  code: 'EACCES',
  syscall: 'mkdir',
  path: '/ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-sass/vendor' }

> [email protected] postinstall /ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-sass
> node scripts/build.js

Building: /usr/bin/node8 /ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-gyp/bin/node-gyp.js rebuild --verbose --libsass_ext= --libsass_cflags= --libsass_ldflags= --libsass_library=
gyp info it worked if it ends with ok
gyp verb cli [ '/usr/bin/node8',
gyp verb cli   '/ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-gyp/bin/node-gyp.js',
gyp verb cli   'rebuild',
gyp verb cli   '--verbose',
gyp verb cli   '--libsass_ext=',
gyp verb cli   '--libsass_cflags=',
gyp verb cli   '--libsass_ldflags=',
gyp verb cli   '--libsass_library=' ]
gyp info using [email protected]
gyp info using [email protected] | linux | x64
gyp verb command rebuild []
gyp verb command clean []
gyp verb clean removing "build" directory
gyp verb command configure []
gyp verb check python checking for Python executable "python2" in the PATH
gyp verb `which` succeeded python2 /usr/bin/python2
gyp verb check python version `/usr/bin/python2 -c "import sys; print "2.7.16
gyp verb check python version .%s.%s" % sys.version_info[:3];"` returned: %j
gyp verb get node dir no --target version specified, falling back to host node version: 8.15.1
gyp verb command install [ '8.15.1' ]
gyp verb install input version string "8.15.1"
gyp verb install installing version: 8.15.1
gyp verb install --ensure was passed, so won't reinstall if already installed
gyp WARN EACCES user "root" does not have permission to access the dev dir "/root/.node-gyp/8.15.1"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-sass/.node-gyp"
gyp verb tmpdir == cwd automatically will remove dev files after to save disk space
gyp verb command install [ '--node_gyp_internal_noretry', '8.15.1' ]
gyp verb install input version string "8.15.1"
gyp verb install installing version: 8.15.1
gyp verb install --ensure was passed, so won't reinstall if already installed
gyp verb install version not already installed, continuing with install 8.15.1
gyp verb ensuring nodedir is created /ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-sass/.node-gyp/8.15.1
gyp WARN install got an error, rolling back install
gyp verb command remove [ '8.15.1' ]
gyp verb remove using node-gyp dir: /ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-sass/.node-gyp
gyp verb remove removing target version: 8.15.1
gyp verb remove removing development files for version: 8.15.1
gyp WARN install got an error, rolling back install
gyp verb command remove [ '8.15.1' ]
gyp verb remove using node-gyp dir: /ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-sass/.node-gyp
gyp verb remove removing target version: 8.15.1
gyp verb remove removing development files for version: 8.15.1
gyp ERR! configure error 
gyp ERR! stack Error: EACCES: permission denied, mkdir '/ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-sass/.node-gyp'
gyp ERR! System Linux 4.15.0-47-generic
gyp ERR! command "/usr/bin/node8" "/ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-gyp/bin/node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library="
gyp ERR! cwd /ceph/src/pybind/mgr/dashboard/frontend/node_modules/node-sass
gyp ERR! node -v v8.15.1
gyp ERR! node-gyp -v v3.8.0
gyp ERR! not ok 
Build failed with error code: 1

AttributeError: module 'rados' has no attribute 'Rados'

Hi,

I am trying to setup a ceph dev env, and found this repo via https://insujang.github.io/2020-11-03/deploying-a-ceph-development-environment-cluster/

It is very helpful, thank you.

However, at step "4. Ceph Dashboard", I got:

╭─root@ceph-dev /ceph ‹master› 
╰─# npm-start.sh 
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
Traceback (most recent call last):
  File "./bin/ceph", line 1318, in <module>
    retval = main()
  File "./bin/ceph", line 978, in main
    conffile = rados.Rados.DEFAULT_CONF_FILES
AttributeError: module 'rados' has no attribute 'Rados'
jq: error: syntax error, unexpected $end (Unix shell quoting issues?) at <top-level>, line 1:
.["/api/"].target=                 
jq: 1 compile error
jq: error: syntax error, unexpected $end (Unix shell quoting issues?) at <top-level>, line 1:
.["/ui-api/"].target=                    
jq: 1 compile error

ceph -s also report same error:

╭─root@ceph-dev /ceph/build ‹master› 
╰─# bin/ceph -s
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
Traceback (most recent call last):
  File "bin/ceph", line 1318, in <module>
    retval = main()
  File "bin/ceph", line 978, in main
    conffile = rados.Rados.DEFAULT_CONF_FILES
AttributeError: module 'rados' has no attribute 'Rados'

It seems the rados module has been updated and this breaks the code?
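A traceback like this typically means Python imported a different `rados` module than the compiled bindings under `/ceph/build/lib/cython_modules` (an assumption about the cause, not something confirmed above). The shadowing mechanism can be demonstrated with a self-contained stub module:

```python
import importlib
import os
import sys
import tempfile

# Minimal sketch: an incomplete 'rados' module early on sys.path shadows
# the real bindings, producing the same AttributeError symptom.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "rados.py"), "w") as f:
    f.write("# stub module without the Rados class\n")

sys.path.insert(0, tmp)  # mimics a stale entry at the front of PYTHONPATH
stub = importlib.import_module("rados")

print(hasattr(stub, "Rados"))  # → False: same symptom as the traceback
print(stub.__file__)           # reveals which file was actually imported
```

Checking `rados.__file__` inside the container (or re-running `cmake`/`vstart.sh` so the Cython modules are rebuilt to match the updated sources) is a reasonable first diagnostic step.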
