A collection of Salt files for deploying, managing and automating Ceph.
The goal is to manage multiple Ceph clusters with a single salt master. At this time, only a single Ceph cluster can be managed.
The diagram should explain the intended flow for the orchestration runners and related salt states.
Automatic discovery, configuration and deployment of Ceph clusters works. RGW deployment works for single-site deployments. MDS deployment and CephFS creation work.
To learn more about DeepSea, take a look at the Wiki.
There is also a dedicated mailing list deepsea-users. If you have any questions, suggestions for improvements or any other feedback, please join us there! We look forward to your contribution.
If you think you've found a bug or would like to suggest an enhancement, please submit it via the bug tracker on GitHub.
For contributing to DeepSea, refer to the contribution guidelines.
- Install salt-master on one host.
- Install salt-minion on all hosts, including the master (a minimal minion configuration sketch follows this list).
- Accept the minion keys (e.g. `salt-key -A -y`).
- Install the RPM.
- For non-RPM distros, try `make install`.
- Run `salt-run state.orch ceph.stage.0` or `salt-run state.orch ceph.stage.prep`
- Run `salt-run state.orch ceph.stage.1` or `salt-run state.orch ceph.stage.discovery`
- Create `/srv/pillar/ceph/proposals/policy.cfg`. Examples are here.
- Run `salt-run state.orch ceph.stage.2` or `salt-run state.orch ceph.stage.configure`
- Run `salt-run state.orch ceph.stage.3` or `salt-run state.orch ceph.stage.deploy`
- Run `salt-run state.orch ceph.stage.4` or `salt-run state.orch ceph.stage.services`
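Before the keys can be accepted, every minion has to know which host is its Salt master. A minimal sketch, assuming a drop-in config file and an illustrative master hostname (adjust both to your environment):

```yaml
# /etc/salt/minion.d/master.conf -- file name and hostname are illustrative assumptions
master: salt-master.example.com
```

Restart the salt-minion service after changing this so the minion registers its key with the master, which `salt-key -A -y` then accepts.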
The discovery stage (or stage 1) creates many configuration proposals under `/srv/pillar/ceph/proposals`. The files contain configuration options for Ceph clusters, potential storage layouts and role assignments for the cluster minions. The `policy.cfg` specifies which of these files and options are to be used for the deployment.
Please refer to the Policy wiki page for more detailed information.
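To give an idea of the shape of such a file, here is a minimal sketch of a `policy.cfg` that selects a cluster assignment, the common configuration files, a storage profile and a few role assignments. The profile name and minion name globs are assumptions; the actual file names depend on what stage 1 generated for your hardware, so treat this only as an outline:

```
# /srv/pillar/ceph/proposals/policy.cfg -- profile and hostname globs are illustrative
cluster-ceph/cluster/*.sls
config/stack/default/global.yml
config/stack/default/ceph/cluster.yml
profile-default/cluster/*.sls
profile-default/stack/default/ceph/minions/*.yml
role-master/cluster/admin*.sls
role-admin/cluster/admin*.sls
role-mon/cluster/mon*.sls
role-mon/stack/default/ceph/minions/mon*.yml
```

Each line is a glob relative to `/srv/pillar/ceph/proposals`; the matched proposal files are what stage 2 pushes into the pillar.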
Once a cluster is deployed, one might want to verify functionality or run benchmarks to verify that the cluster works as expected.
- In order to gain some confidence in your cluster after the initial deployment (stage 3), run `salt-run state.orch ceph.benchmarks.baseline`. This runs an osd benchmark on each OSD and aggregates the results. It reports your average OSD performance and points out OSDs that deviate from the average. Please note that for now the baseline benchmark assumes that all OSDs are uniform.
- To load test CephFS, run `salt-run state.orch ceph.benchmarks.cephfs`. This requires a running MDS (deployed in stage 4) and at least one minion with the mds-client role. The cephfs_benchmark stage will then mount the CephFS instance on the mds-client and run a set of fio tests. See the benchmark readme for further details.
- more to come
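Taken together, a short benchmarking sequence might look like the following sketch; it assumes the mds-client role was assigned through `policy.cfg` like the other roles described above:

```sh
# Baseline OSD benchmark -- assumes stage 3 (deploy) has completed
salt-run state.orch ceph.benchmarks.baseline

# CephFS load test -- assumes stage 4 (services) has completed, an MDS is running,
# and at least one minion holds the mds-client role
salt-run state.orch ceph.benchmarks.cephfs
```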