
aur-ceph's Issues

Investigate cmake failure for `rdkafka`

Need to rebuild for new boost release, but it appears there is a cmake build error:

CMake Error at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
  Could NOT find RDKafka: (Required is at least version "0.9.2") (found
  /usr/include)

This error message is misleading: the real failure comes from the call to pkg-config inside pkg_search_module.

Running the equivalent command from the shell results in the following error message:

❯ arch-nspawn $CHROOT/17.2.5_6 pkg-config --print-requires --print-variables --modversion rdkafka
Package curl was not found in the pkg-config search path.
Perhaps you should add the directory containing `curl.pc'
to the PKG_CONFIG_PATH environment variable
Package 'curl', required by 'rdkafka', not found

That leads us to the real error, revealed in the following diff of rdkafka.pc between librdkafka v1.9.2 and v2.0.0:

@@ -5,8 +5,8 @@

 Name: librdkafka
 Description: The Apache Kafka C/C++ library
-Version: 1.9.2
-Requires: zlib libssl libsasl2 libzstd liblz4
+Version: 2.0.0
+Requires: curl zlib libssl libsasl2 libzstd liblz4
 Cflags: -I${includedir}
 Libs: -L${libdir} -lrdkafka
 Libs.private: -lpthread -lrt -ldl -lm

curl is not a valid pkg-config module name, at least on Arch Linux; libcurl is.

This change was introduced in confluentinc/librdkafka#4045


Now I need to figure out who is responsible:

  1. Maintainer of community/librdkafka
  2. The upstream (https://github.com/confluentinc/librdkafka)
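In the meantime, a local workaround is possible. This is only a sketch, assuming Arch ships the file at the usual /usr/lib/pkgconfig/rdkafka.pc path: rewrite the bad Requires entry inside the build chroot and re-check with the same command as above.

arch-nspawn $CHROOT/17.2.5_6 sed -i 's/^Requires: curl /Requires: libcurl /' /usr/lib/pkgconfig/rdkafka.pc
arch-nspawn $CHROOT/17.2.5_6 pkg-config --modversion rdkafka   # should now print 2.0.0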

ceph 17.2.4 doesn't build

Hi,
my cluster is already running ceph 17.2.4, and since some Arch upgrades updated several dependency libraries, I cannot rebuild ceph anymore. I know this repo only builds ceph 16, but maybe you have an idea why it doesn't build with the newer version anymore?

Would a C-style cast do the trick here?

In file included from /usr/include/arrow/array/data.h:27,
                 from /usr/include/arrow/array/array_base.h:26,
                 from /usr/include/arrow/array.h:37,
                 from /usr/include/arrow/api.h:22,
                 from /home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select_parquet_intrf.h:11,
                 from /home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select_oper.h:16,
                 from /home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select.h:12,
                 from /home/feedc0de/ceph-arch/src/ceph-17.2.4/src/rgw/rgw_s3select_private.h:35,
                 from /home/feedc0de/ceph-arch/src/ceph-17.2.4/src/rgw/rgw_s3select.cc:4:
/home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select_parquet_intrf.h: In member function ‘virtual arrow::Status arrow::io::OSFile::OpenWritable(const std::string&, bool, bool, bool)’:
/home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select_parquet_intrf.h:199:5: error: cannot convert ‘arrow::internal::FileDescriptor’ to ‘int’ in assignment
  199 |     ARROW_ASSIGN_OR_RAISE(fd_, ::arrow::internal::FileOpenWritable(file_name_, write_only,
      |     ^~~~~~~~~~~~~~~~~~~~~
      |     |
      |     arrow::internal::FileDescriptor
/home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select_parquet_intrf.h: In member function ‘virtual arrow::Status arrow::io::OSFile::OpenReadable(const std::string&)’:
/home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select_parquet_intrf.h:232:5: error: cannot convert ‘arrow::internal::FileDescriptor’ to ‘int’ in assignment
  232 |     ARROW_ASSIGN_OR_RAISE(fd_, ::arrow::internal::FileOpenReadable(file_name_));
      |     ^~~~~~~~~~~~~~~~~~~~~
      |     |
      |     arrow::internal::FileDescriptor
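Probably not: in newer Arrow releases, FileOpenWritable/FileOpenReadable return a Result<arrow::internal::FileDescriptor>, and that RAII wrapper has no conversion to int, so a C-style cast has nothing to go through. Below is a sketch of one possible local patch, assuming the wrapper's fd()/Detach() accessors from Arrow's util/io_util.h; it has not been verified against this exact Arrow version:

// Before (fd_ is a plain int member of arrow::io::OSFile):
//   ARROW_ASSIGN_OR_RAISE(fd_, ::arrow::internal::FileOpenReadable(file_name_));
// After: bind the Result to the wrapper first, then take over the raw descriptor.
ARROW_ASSIGN_OR_RAISE(auto fd, ::arrow::internal::FileOpenReadable(file_name_));
fd_ = fd.Detach();  // release ownership from the wrapper; OSFile closes fd_ itself

The OpenWritable case at line 199 would get the same treatment, with its extra arguments left as-is.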

ceph 18.2.2-3

  • Backport ceph/ceph@9ee47b0 (fix for ceph-volume)
  • Backport ceph/ceph@0358bd5 (another fix for ceph-volume)
  • Handle GCC-14 compile errors
  • Rebuild / fix all the fun problems I'm sure python 3.12 will bring
    • run-tox-mgr
    • run-tox-mgr-dashboard-py3
    • run-tox-mgr-dashboard-lint
    • run-tox-mgr-dashboard-openapi
    • run-tox-python-common (root error: pytest-dev/pyfakefs#770, needs pyfakefs>=5.2.0)
    • run-tox-cephadm
    • run-tox-cephfs-top
    • unittest_mempool (looks like just a perf regression the upstream hasn't decided how to fix yet: ceph/ceph#56448, ceph/ceph#55249, ceph/ceph#55696)
    • test_ceph_argparse.py
    • unittest_deferred (disappeared after GCC-14 rebuild)

Build fails with some boost-python error on Manjaro

Hi,
I hope this is the right place :) I'm trying to update from 17.2.6-2 to 17.2.6-3 and I'm getting this:

-- Found OpenSSL: /usr/lib/libcrypto.so (found version "3.1.2")  
-- Found EXPAT: /usr/lib/libexpat.so (found version "2.5.0") 
-- Found OATH: /usr/lib/liboath.so  
-- ssl soname: libssl.so.3
-- crypto soname: libcrypto.so.3
-- Found Python3: /usr/bin/python3.9 (found suitable exact version "3.9.17") found components: Interpreter Development 
-- Found ZLIB: /usr/lib/libz.so (found version "1.2.13")  
CMake Error at /usr/lib/cmake/Boost-1.81.0/BoostConfig.cmake:141 (find_package):
  Found package configuration file:

    /usr/lib/cmake/boost_python-1.81.0/boost_python-config.cmake

  but it set boost_python_FOUND to FALSE so package "boost_python" is
  considered to be NOT FOUND.  Reason given by package:

  No suitable build variant has been found.

  The following variants have been tried and rejected:

  * libboost_python27.so.1.81.0 (2.7, Boost_PYTHON_VERSION=3.9)

  * libboost_python311.so.1.81.0 (3.11, Boost_PYTHON_VERSION=3.9)

  * libboost_python27.a (2.7, Boost_PYTHON_VERSION=3.9)

  * libboost_python311.a (3.11, Boost_PYTHON_VERSION=3.9)

Call Stack (most recent call first):
  /usr/lib/cmake/Boost-1.81.0/BoostConfig.cmake:262 (boost_find_component)
  cmake/modules/FindBoost.cmake:597 (find_package)
  CMakeLists.txt:636 (find_package)


-- Configuring incomplete, errors occurred!
==> ERROR: A failure occurred in build().
    Aborting...

The same happens when I try building the latest 17.2.6-3 from source:

$ git clone https://git.st8l.com/luxolus/aur.ceph
$ cd aur.ceph
$ makepkg         
==> Making package: ceph 17.2.6-3 (Mi 23 Aug 2023 09:48:13 CEST)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Retrieving sources...
  -> Downloading ceph-17.2.6.tar.gz...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  160M  100  160M    0     0  9031k      0  0:00:18  0:00:18 --:--:-- 8717k
  -> Found ceph.sysusers
[...]  
Applying patch ceph-17.2.6-cython-fixes.patch
patching file src/pybind/rbd/c_rbd.pxd
patching file src/pybind/rbd/rbd.pyx
==> Starting build()...
-- The CXX compiler identification is GNU 13.2.1
-- The C compiler identification is GNU 13.2.1
-- The ASM compiler identification is GNU
-- Found assembler: /usr/bin/cc
[...]
-- Found OpenSSL: /usr/lib/libcrypto.so (found version "3.1.2")  
-- Found EXPAT: /usr/lib/libexpat.so (found version "2.5.0") 
-- Found OATH: /usr/lib/liboath.so  
-- ssl soname: libssl.so.3
-- crypto soname: libcrypto.so.3
-- Found Python3: /usr/bin/python3.9 (found suitable exact version "3.9.17") found components: Interpreter Development 
-- Found ZLIB: /usr/lib/libz.so (found version "1.2.13")  
CMake Error at /usr/lib/cmake/Boost-1.81.0/BoostConfig.cmake:141 (find_package):
  Found package configuration file:

    /usr/lib/cmake/boost_python-1.81.0/boost_python-config.cmake

  but it set boost_python_FOUND to FALSE so package "boost_python" is
  considered to be NOT FOUND.  Reason given by package:

  No suitable build variant has been found.

  The following variants have been tried and rejected:

  * libboost_python27.so.1.81.0 (2.7, Boost_PYTHON_VERSION=3.9)

  * libboost_python311.so.1.81.0 (3.11, Boost_PYTHON_VERSION=3.9)

  * libboost_python27.a (2.7, Boost_PYTHON_VERSION=3.9)

  * libboost_python311.a (3.11, Boost_PYTHON_VERSION=3.9)

Call Stack (most recent call first):
  /usr/lib/cmake/Boost-1.81.0/BoostConfig.cmake:262 (boost_find_component)
  cmake/modules/FindBoost.cmake:597 (find_package)
  CMakeLists.txt:636 (find_package)


-- Configuring incomplete, errors occurred!
==> ERROR: A failure occurred in build().
    Aborting...

Any ideas on how to solve this?

$ pamac list -i | grep boost
boost                                        1.81.0-7                      extra     183,1 MB
boost-libs                                   1.81.0-7                      extra     8,5 MB
boost-python2                                1.81.0-1                      AUR       755,1 kB
boost-python2-libs                           1.81.0-1                      AUR       315,8 kB
$ uname -a 
Linux leberle-desktop 6.4.9-1-MANJARO #1 SMP PREEMPT_DYNAMIC Wed Aug  9 08:32:12 UTC 2023 x86_64 GNU/Linux
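For what it's worth, the log shows the mismatch directly: CMake locks onto /usr/bin/python3.9, while every Boost.Python variant on the system is built for 2.7 or 3.11. A diagnostic/workaround sketch, not a definitive fix (WITH_PYTHON3 is the knob ceph's CMake exposes for this; the version value below is assumed to match your boost):

ls /usr/bin/python3.*              # check whether a stray python3.9 is installed
# then pin the interpreter the build should use, e.g. in the PKGBUILD's cmake call:
cmake -DWITH_PYTHON3=3.11 ...      # must match the version boost_python was built against

Finding and removing whatever provides /usr/bin/python3.9 may also let CMake pick the right interpreter on its own.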

Rebuild for python 3.11

It seems Arch Linux has finally managed to navigate the 3.11 mess and is testing 3.11 rebuilds now.

This package has to be rebuilt once that hits stable.

ceph-mgr/dashboard: python-cryptography PyO3 modules may only be initialized once per interpreter process

Opening this to track progress / news on the upstream project(s). See #16 for the history of this issue.


The basic facts are:

  1. Ceph uses a relatively rare "subinterpreter" model for running mgr modules. The important bit here is that there is a 1:many relationship between the OS process and Python interpreters.
  2. CPython doesn't really prevent you from (theoretically) sharing Python objects between interpreters in the same process, and doesn't take a stance on it beyond "only do this if you know what you're doing", but one should typically assume that bad things will happen when sharing Python objects between interpreters unless you are a CPython dev who understands when it is allowed.
  3. PyO3 is a Rust project for interop with Python; notably for us, it allows one to expose Rust code as native Python, and is used in the popular python-cryptography module.
  4. Up until v0.17.0 (or really: PyO3/pyo3#2523) the maintainers allowed one to initialize the same PyO3-backed module multiple times per process invocation.
  5. Technically, doing so opens soundness holes in Rust-to-Python libraries around globals, as PyO3 doesn't (can't?) prevent module writers from storing Python objects created in one interpreter and accessing them from another (the Rust global is unique at the process level).
  6. Therefore, rather than undertaking an intractable redesign, the PyO3 maintainers simply removed the ability to initialize a module more than once per process.
  7. python-cryptography upgraded to the problematic version of PyO3 in v41, which is the current latest in Arch Linux: https://archlinux.org/packages/extra/x86_64/python-cryptography/
  8. Therefore, ceph-mgr's dashboard will blow up, because the restful module (loaded earlier, a mandatory module) uses python-jwt, which initializes python-cryptography first.
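A minimal repro sketch of point 8, using CPython's private _xxsubinterpreters module (the module name and signature are version-dependent, roughly Python 3.8–3.12, and the cryptography module path is taken from v41):

import _xxsubinterpreters as interpreters  # private CPython API, subject to change

# First initialization of the PyO3-backed extension, in the main interpreter: fine.
import cryptography.hazmat.bindings._rust

# Second initialization, in a subinterpreter of the same process: raises
# "PyO3 modules may only be initialized once per interpreter process".
sub = interpreters.create()
interpreters.run_string(sub, "import cryptography.hazmat.bindings._rust")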

At the moment, I don't see a maintainable path forward without help from the upstream packages.

  • I could attempt to extend the build to use a venv and somehow install that alongside ceph
    • But doing so is a considerable maintenance effort using technologies I'm not super familiar with (CMake, Python)
    • And would be a relatively large patch to the build system
  • I could attempt to forcefully override just python-cryptography, but this will stop working as soon as one of the rdeps requires a version greater than v41
  • Or I do nothing, note that the dashboard is broken, and see if I can figure out a way to move the conversation forward upstream
    • The most annoying option, as I use the dashboard myself, but ultimately the only one I think I can support long term.

v17.2.5-5: python test failures

These are due to the python ecosystem not understanding semver.

The upstream has been struggling through them too:

Just keeping this here to track. I have no intention of backporting these fixes to 17.2.5, as I got a clean test run originally.


run-tox-qa
Requirement already satisfied: tox in /build/ceph/src/ceph-17.2.5/build/qa-virtualenv/lib/python3.10/site-packages (4.0.16)
Requirement already satisfied: chardet>=5.1 in /build/ceph/src/ceph-17.2.5/build/qa-virtualenv/lib/python3.10/site-packages (from tox) (5.1.0)
Requirement already satisfied: virtualenv>=20.17.1 in /build/ceph/src/ceph-17.2.5/build/qa-virtualenv/lib/python3.10/site-packages (from tox) (20.17.1)
Requirement already satisfied: pyproject-api>=1.2.1 in /build/ceph/src/ceph-17.2.5/build/qa-virtualenv/lib/python3.10/site-packages (from tox) (1.2.1)
Requirement already satisfied: pluggy>=1 in /build/ceph/src/ceph-17.2.5/build/qa-virtualenv/lib/python3.10/site-packages (from tox) (1.0.0)
Requirement already satisfied: colorama>=0.4.6 in /build/ceph/src/ceph-17.2.5/build/qa-virtualenv/lib/python3.10/site-packages (from tox) (0.4.6)
Requirement already satisfied: tomli>=2.0.1 in /build/ceph/src/ceph-17.2.5/build/qa-virtualenv/lib/python3.10/site-packages (from tox) (2.0.1)
Requirement already satisfied: filelock>=3.8.2 in /build/ceph/src/ceph-17.2.5/build/qa-virtualenv/lib/python3.10/site-packages (from tox) (3.8.2)
Requirement already satisfied: platformdirs>=2.6 in /build/ceph/src/ceph-17.2.5/build/qa-virtualenv/lib/python3.10/site-packages (from tox) (2.6.0)
Requirement already satisfied: cachetools>=5.2 in /build/ceph/src/ceph-17.2.5/build/qa-virtualenv/lib/python3.10/site-packages (from tox) (5.2.0)
Requirement already satisfied: packaging>=22 in /build/ceph/src/ceph-17.2.5/build/qa-virtualenv/lib/python3.10/site-packages (from tox) (22.0)
Requirement already satisfied: distlib<1,>=0.3.6 in /build/ceph/src/ceph-17.2.5/build/qa-virtualenv/lib/python3.10/site-packages (from virtualenv>=20.17.1->tox) (0.3.6)
py3: install_deps /build/ceph/src/ceph-17.2.5/qa> python -I -m pip install httplib2 git+https://github.com/ceph/teuthology.git@main
py3: commands[0] /build/ceph/src/ceph-17.2.5/qa> pytest --assert=plain test_import.py
py3: failed with pytest is not allowed, use allowlist_externals to allow it
py3: FAIL ✖ in 4 minutes 25.29 seconds
flake8: install_deps /build/ceph/src/ceph-17.2.5/qa> python -I -m pip install flake8
flake8: commands[0] /build/ceph/src/ceph-17.2.5/qa> flake8 --select=F,E9 --exclude=venv,.tox
./tasks/cephfs/test_full.py:6:5: F401 'typing.Optional' imported but unused
flake8: exit 1 (12.10 seconds) /build/ceph/src/ceph-17.2.5/qa> flake8 --select=F,E9 --exclude=venv,.tox pid=71946
flake8: FAIL ✖ in 30.18 seconds
mypy: install_deps /build/ceph/src/ceph-17.2.5/qa> python -I -m pip install mypy types-boto types-cryptography types-jwt types-paramiko types-python-dateutil types-PyYAML types-requests -c /build/ceph/src/ceph-17.2.5/qa/../src/mypy-constrains.txt
mypy: commands[0] /build/ceph/src/ceph-17.2.5/qa> mypy .
Success: no issues found in 245 source files
mypy: OK ✔ in 6 minutes 1.78 seconds
deadsymlinks: commands[0] /build/ceph/src/ceph-17.2.5/qa> bash -c '! (find . -xtype l | grep ^)'
  py3: FAIL code 1 (265.29 seconds)
  flake8: FAIL code 1 (30.18=setup[18.08]+cmd[12.10] seconds)
  mypy: OK (361.78=setup[339.22]+cmd[22.56] seconds)
  deadsymlinks: OK (1.63=setup[0.51]+cmd[1.13] seconds)
  evaluation failed :( (659.23 seconds)
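The py3 failure above is the mechanical one: tox 4 renamed whitelist_externals to allowlist_externals and now refuses to run commands that aren't installed into the env, exactly as the error message says. A sketch of the likely qa/tox.ini fix (section name inferred from the env name in the log):

[testenv:py3]
allowlist_externals =
    pytest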
run-tox-mgr
Requirement already satisfied: tox in /build/ceph/src/ceph-17.2.5/build/mgr-virtualenv/lib/python3.10/site-packages (4.0.16)
Requirement already satisfied: pyproject-api>=1.2.1 in /build/ceph/src/ceph-17.2.5/build/mgr-virtualenv/lib/python3.10/site-packages (from tox) (1.2.1)
Requirement already satisfied: tomli>=2.0.1 in /build/ceph/src/ceph-17.2.5/build/mgr-virtualenv/lib/python3.10/site-packages (from tox) (2.0.1)
Requirement already satisfied: colorama>=0.4.6 in /build/ceph/src/ceph-17.2.5/build/mgr-virtualenv/lib/python3.10/site-packages (from tox) (0.4.6)
Requirement already satisfied: cachetools>=5.2 in /build/ceph/src/ceph-17.2.5/build/mgr-virtualenv/lib/python3.10/site-packages (from tox) (5.2.0)
Requirement already satisfied: pluggy>=1 in /build/ceph/src/ceph-17.2.5/build/mgr-virtualenv/lib/python3.10/site-packages (from tox) (1.0.0)
Requirement already satisfied: chardet>=5.1 in /build/ceph/src/ceph-17.2.5/build/mgr-virtualenv/lib/python3.10/site-packages (from tox) (5.1.0)
Requirement already satisfied: virtualenv>=20.17.1 in /build/ceph/src/ceph-17.2.5/build/mgr-virtualenv/lib/python3.10/site-packages (from tox) (20.17.1)
Requirement already satisfied: packaging>=22 in /build/ceph/src/ceph-17.2.5/build/mgr-virtualenv/lib/python3.10/site-packages (from tox) (22.0)
Requirement already satisfied: filelock>=3.8.2 in /build/ceph/src/ceph-17.2.5/build/mgr-virtualenv/lib/python3.10/site-packages (from tox) (3.8.2)
Requirement already satisfied: platformdirs>=2.6 in /build/ceph/src/ceph-17.2.5/build/mgr-virtualenv/lib/python3.10/site-packages (from tox) (2.6.0)
Requirement already satisfied: distlib<1,>=0.3.6 in /build/ceph/src/ceph-17.2.5/build/mgr-virtualenv/lib/python3.10/site-packages (from virtualenv>=20.17.1->tox) (0.3.6)
ROOT: will run in automatically provisioned tox, host /build/ceph/src/ceph-17.2.5/build/mgr-virtualenv/bin/python3.10 is missing [requires (has)]: cython
ROOT: install_deps /build/ceph/src/ceph-17.2.5/src/pybind/mgr> python -I -m pip install cython tox
ROOT: provision /build/ceph/src/ceph-17.2.5/src/pybind/mgr> .tox/.tox/bin/python -m tox -c /build/ceph/src/ceph-17.2.5/src/pybind/mgr/tox.ini -e py3,py37,mypy,flake8,jinjalint,nooptional
py3: install_deps> python -I -m pip install cython -r requirements.txt -r rook/requirements.txt
py3: commands[0]> pytest --doctest-modules
============================= test session starts ==============================
platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0
cachedir: .tox/.tox/py3/.pytest_cache
rootdir: /build/ceph/src/ceph-17.2.5/src/pybind/mgr, configfile: tox.ini
plugins: cov-2.7.1, requests-mock-1.10.0, pyfakefs-5.0.0
collected 1281 items

mgr_module.py .                                                          [  0%]
mgr_util.py ......                                                       [  0%]
cephadm/upgrade.py .                                                     [  0%]
cephadm/tests/test_agent.py .....                                        [  1%]
cephadm/tests/test_autotune.py ....                                      [  1%]
cephadm/tests/test_cephadm.py .......................................... [  4%]
...................................................................      [  9%]
cephadm/tests/test_completion.py ............                            [ 10%]
cephadm/tests/test_configchecks.py .......................               [ 12%]
cephadm/tests/test_facts.py .                                            [ 12%]
cephadm/tests/test_migration.py .......                                  [ 13%]
cephadm/tests/test_osd_removal.py ...................................... [ 16%]
............                                                             [ 17%]
cephadm/tests/test_scheduling.py ....................................... [ 20%]
........................................................................ [ 25%]
........................................................................ [ 31%]
........................................................................ [ 37%]
........................................................................ [ 42%]
........................................................................ [ 48%]
........................................................................ [ 53%]
........................................................................ [ 59%]
........................................................................ [ 65%]
........................................................................ [ 70%]
........................................................................ [ 76%]
..........                                                               [ 77%]
cephadm/tests/test_services.py ..........................                [ 79%]
cephadm/tests/test_spec.py .........................................     [ 82%]
cephadm/tests/test_ssh.py .s                                             [ 82%]
cephadm/tests/test_template.py .                                         [ 82%]
cephadm/tests/test_tuned_profiles.py ....                                [ 82%]
cephadm/tests/test_upgrade.py .........................                  [ 84%]
cli_api/tests/test_cli_api.py .....                                      [ 85%]
diskprediction_local/predictor.py .                                      [ 85%]
insights/tests/test_health.py .........                                  [ 86%]
mds_autoscaler/tests/test_autoscaler.py .                                [ 86%]
nfs/tests/test_nfs.py .................................                  [ 88%]
orchestrator/_interface.py ss.ss.                                        [ 89%]
orchestrator/module.py .                                                 [ 89%]
orchestrator/tests/test_orchestrator.py ...........                      [ 90%]
pg_autoscaler/tests/test_cal_final_pg_target.py .......                  [ 90%]
pg_autoscaler/tests/test_cal_ratio.py ..                                 [ 90%]
pg_autoscaler/tests/test_overlapping_roots.py ..                         [ 90%]
progress/test_progress.py ..                                             [ 91%]
prometheus/module.py .                                                   [ 91%]
prometheus/test_module.py ......                                         [ 91%]
rook/tests/test_placement.py ....                                        [ 91%]
snap_schedule/tests/fs/test_schedule.py ................................ [ 94%]
..............                                                           [ 95%]
snap_schedule/tests/fs/test_schedule_client.py ..                        [ 95%]
telemetry/tests/test_telemetry.py ...                                    [ 95%]
tests/test_object_format.py ............................................ [ 99%]
...                                                                      [ 99%]
tests/test_tls.py .....                                                  [100%]

=============================== warnings summary ===============================
.tox/.tox/py3/lib/python3.10/site-packages/asyncssh/crypto/cipher.py:29
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/asyncssh/crypto/cipher.py:29: CryptographyDeprecationWarning: Blowfish has been deprecated
    from cryptography.hazmat.primitives.ciphers.algorithms import Blowfish, CAST5

.tox/.tox/py3/lib/python3.10/site-packages/asyncssh/crypto/cipher.py:29
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/asyncssh/crypto/cipher.py:29: CryptographyDeprecationWarning: CAST5 has been deprecated
    from cryptography.hazmat.primitives.ciphers.algorithms import Blowfish, CAST5

.tox/.tox/py3/lib/python3.10/site-packages/asyncssh/crypto/cipher.py:30
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/asyncssh/crypto/cipher.py:30: CryptographyDeprecationWarning: SEED has been deprecated
    from cryptography.hazmat.primitives.ciphers.algorithms import SEED, TripleDES

osd_perf_query/module.py:61
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/osd_perf_query/module.py:61: DeprecationWarning: invalid escape sequence '\.'
    'regex': '^(?:rbd|journal)_data\.(?:([0-9]+)\.)?([^.]+)\.'},

prometheus/module.py:35
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/prometheus/module.py:35: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
    v = StrictVersion(cherrypy.__version__)

prometheus/module.py:38
prometheus/module.py:38
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/prometheus/module.py:38: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
    if StrictVersion("3.1.2") <= v < StrictVersion("3.2.3"):

rook/rook_cluster.py:109
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/rook/rook_cluster.py:109: DeprecationWarning: invalid escape sequence '\d'
    coeff_and_unit = re.search('(\d+)(\D+)', size_str)

stats/fs/perf_stats.py:26
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/stats/fs/perf_stats.py:26: DeprecationWarning: invalid escape sequence '\d'
    CLIENT_ID_ALL = "\d*"

stats/fs/perf_stats.py:30
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/stats/fs/perf_stats.py:30: DeprecationWarning: invalid escape sequence '\s'
    MDS_PERF_QUERY_REGEX_MATCH_CLIENTS = '^(client.{0}\s+{1}):.*'

zabbix/module.py:134
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/zabbix/module.py:134: DeprecationWarning: invalid escape sequence '\['
    uri = re.match("(?:(?:\[?)([a-z0-9-\.]+|[a-f0-9:\.]+)(?:\]?))(?:((?::))([0-9]{1,5}))?$", server)

pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_even_pools_one_meta_three_bulk
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/_pytest/fixtures.py:900: PytestRemovedIn8Warning: Support for nose tests is deprecated and will be removed in a future release.
  pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_even_pools_one_meta_three_bulk is using nose-specific method: `setup(self)`
  To remove this warning, rename it to `setup_method(self)`
  See docs: https://docs.pytest.org/en/stable/deprecations.html#support-for-tests-written-for-nose
    fixture_result = next(generator)

pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_even_pools_two_meta_two_bulk
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/_pytest/fixtures.py:900: PytestRemovedIn8Warning: Support for nose tests is deprecated and will be removed in a future release.
  pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_even_pools_two_meta_two_bulk is using nose-specific method: `setup(self)`
  To remove this warning, rename it to `setup_method(self)`
  See docs: https://docs.pytest.org/en/stable/deprecations.html#support-for-tests-written-for-nose
    fixture_result = next(generator)

pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_uneven_pools_one_meta_three_bulk
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/_pytest/fixtures.py:900: PytestRemovedIn8Warning: Support for nose tests is deprecated and will be removed in a future release.
  pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_uneven_pools_one_meta_three_bulk is using nose-specific method: `setup(self)`
  To remove this warning, rename it to `setup_method(self)`
  See docs: https://docs.pytest.org/en/stable/deprecations.html#support-for-tests-written-for-nose
    fixture_result = next(generator)

pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_uneven_pools_two_meta_two_bulk
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/_pytest/fixtures.py:900: PytestRemovedIn8Warning: Support for nose tests is deprecated and will be removed in a future release.
  pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_uneven_pools_two_meta_two_bulk is using nose-specific method: `setup(self)`
  To remove this warning, rename it to `setup_method(self)`
  See docs: https://docs.pytest.org/en/stable/deprecations.html#support-for-tests-written-for-nose
    fixture_result = next(generator)

pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_uneven_pools_with_diff_roots
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/_pytest/fixtures.py:900: PytestRemovedIn8Warning: Support for nose tests is deprecated and will be removed in a future release.
  pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_uneven_pools_with_diff_roots is using nose-specific method: `setup(self)`
  To remove this warning, rename it to `setup_method(self)`
  See docs: https://docs.pytest.org/en/stable/deprecations.html#support-for-tests-written-for-nose
    fixture_result = next(generator)

pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_even_pools_with_diff_roots
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/_pytest/fixtures.py:900: PytestRemovedIn8Warning: Support for nose tests is deprecated and will be removed in a future release.
  pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_even_pools_with_diff_roots is using nose-specific method: `setup(self)`
  To remove this warning, rename it to `setup_method(self)`
  See docs: https://docs.pytest.org/en/stable/deprecations.html#support-for-tests-written-for-nose
    fixture_result = next(generator)

pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_uneven_pools_with_overlapped_roots
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/_pytest/fixtures.py:900: PytestRemovedIn8Warning: Support for nose tests is deprecated and will be removed in a future release.
  pg_autoscaler/tests/test_cal_final_pg_target.py::TestPgAutoscaler::test_uneven_pools_with_overlapped_roots is using nose-specific method: `setup(self)`
  To remove this warning, rename it to `setup_method(self)`
  See docs: https://docs.pytest.org/en/stable/deprecations.html#support-for-tests-written-for-nose
    fixture_result = next(generator)

pg_autoscaler/tests/test_overlapping_roots.py::TestPgAutoscaler::test_subtrees_and_overlaps
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/_pytest/fixtures.py:900: PytestRemovedIn8Warning: Support for nose tests is deprecated and will be removed in a future release.
  pg_autoscaler/tests/test_overlapping_roots.py::TestPgAutoscaler::test_subtrees_and_overlaps is using nose-specific method: `setup(self)`
  To remove this warning, rename it to `setup_method(self)`
  See docs: https://docs.pytest.org/en/stable/deprecations.html#support-for-tests-written-for-nose
    fixture_result = next(generator)

pg_autoscaler/tests/test_overlapping_roots.py::TestPgAutoscaler::test_no_overlaps
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/_pytest/fixtures.py:900: PytestRemovedIn8Warning: Support for nose tests is deprecated and will be removed in a future release.
  pg_autoscaler/tests/test_overlapping_roots.py::TestPgAutoscaler::test_no_overlaps is using nose-specific method: `setup(self)`
  To remove this warning, rename it to `setup_method(self)`
  See docs: https://docs.pytest.org/en/stable/deprecations.html#support-for-tests-written-for-nose
    fixture_result = next(generator)

progress/test_progress.py::TestPgRecoveryEvent::test_pg_update
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/_pytest/fixtures.py:900: PytestRemovedIn8Warning: Support for nose tests is deprecated and will be removed in a future release.
  progress/test_progress.py::TestPgRecoveryEvent::test_pg_update is using nose-specific method: `setup(self)`
  To remove this warning, rename it to `setup_method(self)`
  See docs: https://docs.pytest.org/en/stable/deprecations.html#support-for-tests-written-for-nose
    fixture_result = next(generator)

progress/test_progress.py::TestModule::test_osd_in_out
  /build/ceph/src/ceph-17.2.5/src/pybind/mgr/.tox/.tox/py3/lib/python3.10/site-packages/_pytest/fixtures.py:900: PytestRemovedIn8Warning: Support for nose tests is deprecated and will be removed in a future release.
  progress/test_progress.py::TestModule::test_osd_in_out is using nose-specific method: `setup(self)`
  To remove this warning, rename it to `setup_method(self)`
  See docs: https://docs.pytest.org/en/stable/deprecations.html#support-for-tests-written-for-nose
    fixture_result = next(generator)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================ 1276 passed, 5 skipped, 22 warnings in 36.81s =================
py3: OK ✔ in 2 minutes 32.56 seconds
py37: skipped because could not find python interpreter with spec(s): py37
py37: SKIP ⚠ in 2.05 seconds
mypy: install_deps> python -I -m pip install cython mypy types-backports types-jwt types-pkg_resources types-python-dateutil types-PyYAML types-requests -r requirements.txt -c /build/ceph/src/ceph-17.2.5/src/pybind/mgr/../../mypy-constrains.txt
mypy: commands[0]> mypy --config-file=../../mypy.ini
usage: mypy [-h] [-v] [-V] [more options; see below]
            [-m MODULE] [-p PACKAGE] [-c PROGRAM_TEXT] [files ...]
mypy: error: Missing target module, package, files, or command.
mypy: exit 2 (0.78 seconds) /build/ceph/src/ceph-17.2.5/src/pybind/mgr> mypy --config-file=../../mypy.ini pid=89004
mypy: FAIL ✖ in 8 minutes 14.41 seconds
flake8: install_deps> python -I -m pip install flake8
flake8: commands[0]> flake8 --config=tox.ini alerts
flake8: commands[1]> balancer
flake8: exit 2 (0.00 seconds) /build/ceph/src/ceph-17.2.5/src/pybind/mgr> balancer
flake8: FAIL ✖ in 10.93 seconds
jinjalint: install_deps> python -I -m pip install jinjaninja
jinjalint: commands[0]> jinja-ninja cephadm/templates
jinjalint: OK ✔ in 8.49 seconds
nooptional: install_deps> python -I -m pip install cython -r requirements-required.txt
nooptional: commands[0]> pytest cephadm/tests/test_ssh.py
============================= test session starts ==============================
platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0
cachedir: .tox/.tox/nooptional/.pytest_cache
rootdir: /build/ceph/src/ceph-17.2.5/src/pybind/mgr, configfile: tox.ini
plugins: cov-2.7.1, requests-mock-1.10.0, pyfakefs-5.0.0
collected 2 items

cephadm/tests/test_ssh.py s.                                             [100%]

========================= 1 passed, 1 skipped in 0.02s =========================
  py3: OK (152.55=setup[114.88]+cmd[37.68] seconds)
  py37: SKIP (2.05 seconds)
  mypy: FAIL code 2 (494.41=setup[493.63]+cmd[0.78] seconds)
  flake8: FAIL code 2 (10.93=setup[10.80]+cmd[0.13,0.00] seconds)
  jinjalint: OK (8.49=setup[8.47]+cmd[0.02] seconds)
  nooptional: OK (251.88=setup[250.65]+cmd[1.23] seconds)
  evaluation failed :( (920.97 seconds)
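Two of these failures look like tox-4 parsing changes rather than real code problems. The flake8 log shows the wrapped argument list being split, so balancer runs as its own command (commands[1]) and fails with exit 2; mypy similarly ends up invoked with no target. A sketch of the shape of the fix, with the upstream tox.ini layout assumed; joining each wrapped argument list onto one line sidesteps the parsing change:

[testenv:flake8]
deps = flake8
commands = flake8 --config=tox.ini alerts balancer

The real file lists many more module directories; the point is only that each logical command must survive tox 4's line splitting intact.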

gcc 13: rocksdb: no uint64_t defined

In file included from /build/ceph/src/ceph-17.2.6/src/rocksdb/db/range_del_aggregator.h:16,
                 from /build/ceph/src/ceph-17.2.6/src/rocksdb/db/range_del_aggregator.cc:6:
/build/ceph/src/ceph-17.2.6/src/rocksdb/db/compaction/compaction_iteration_stats.h:23:3: error: ‘uint64_t’ does not name a type
   23 |   uint64_t total_filter_time = 0;
      |   ^~~~~~~~
/build/ceph/src/ceph-17.2.6/src/rocksdb/db/compaction/compaction_iteration_stats.h:9:1: note: ‘uint64_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
    8 | #include "rocksdb/rocksdb_namespace.h"
  +++ |+#include <cstdint>
    9 |
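The note in the diagnostic spells out the fix. A sketch of the corresponding patch (hunk offsets illustrative):

--- a/src/rocksdb/db/compaction/compaction_iteration_stats.h
+++ b/src/rocksdb/db/compaction/compaction_iteration_stats.h
@@ -5,6 +5,7 @@
 
+#include <cstdint>
 #include "rocksdb/rocksdb_namespace.h"

This matches what the bundled rocksdb needs under GCC 13, where transitive <cstdint> includes were cleaned up in libstdc++.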

Rebuild for v18.2.x

This is a tracking issue for me to collect my thoughts / notes around the push for v18

Most notably, I likely will not push a v18 build to AUR for at least a few patch versions, as Ceph typically finds and fixes serious issues shortly after the public release of a new version.


v18.1.x (TEST)

  • Build
  • Check
    • Skipping for now, seems even the upstream test suite isn't passing at the moment
  • Package

v18.2.x (RELEASE)

  • Build
  • Check
  • Package

Experiments

  • Switch to ninja for cmake -G (upstream has since v17.2.5)
    • Encountered an infinite loop three times when attempting to build. Not sure if I'm missing something, but I'm holding off on further testing for now
  • Enable -DWITH_RBD_RWL=ON for the writeback cache (#18)

Fixes

  • Include lvm2 in pkg.ceph-volume (1)
  • Investigate missing dependencies of pkg.ceph-mgr (1, 2)
    • This one is strange; I should probably take a look at the dep tree of my current ceph nodes, as I don't seem to experience these issues myself, meaning I'm somehow fixing the problems transitively
  • Fix /etc/sudoers.d/ fakeroot perms (1)

Tests

  • run-tox-mgr-dashboard-lint
  • run-tox-cephadm
  • check-generated.sh
  • unittest_erasure_code_shec_arguments
  • unittest_bluefs

RFC: Split packages for Ceph

This is a request for community feedback on my planned splitting of Ceph into smaller component packages.

The gist of these changes is:

  1. Breaking up the build into libs and binary packages
  2. Clearly defining the in-package dependencies
  3. Substantial reduction in runtime dependencies for most users
  4. Ability to just install the bits relevant to your system, notably the ceph CLI utility
  5. Improved .so provides()
  6. Ease of package maintenance

See the diagrams for a visual representation, or visit the PKGBUILD for the code.

Diagrams

Old


The current dep tree:

graph TD

	mgr{{ceph-mgr}} ---> ceph{{ceph}} ---> libs{{ceph-libs}};

New


My proposed package dep tree:

graph TD
    subgraph CORE
        direction BT

        subgraph COMMON
            direction LR

            ceph-common[[ceph-common]];
            ceph-compressor[[ceph-compressor]];
            ceph-crypto[[ceph-crypto]];
            ceph-erasure[[ceph-erasure]];
        end

        subgraph RADOS
            direction LR

            librados[[librados]];
			ceph-rados[[ceph-rados]];
        end

		libcephsqlite[[libcephsqlite]];
    end

    subgraph CLUSTER
        direction TB

		ceph-base{{ceph-base}};
        ceph-mon{{ceph-mon}};
        ceph-osd{{ceph-osd}};
		ceph-mds{{ceph-mds}};
		ceph-mgr{{ceph-mgr}};
		ceph-rgw{{ceph-rgw}};
    end

    subgraph APPS
        direction TB

        subgraph RBD
            direction LR

			librbd[[librbd]];
            ceph-rbd{{ceph-rbd}};
			ceph-volume{{ceph-volume}};
        end

        subgraph FS
            direction LR

			libcephfs[[libcephfs]];
			ceph-cephfs{{ceph-cephfs}};
			cephfs-shell{{cephfs-shell}};
			cephfs-top{{cephfs-top}};
            java-cephfs[/java-cephfs\];
        end

        subgraph RGW
            direction LR

            librgw[[librgw]];
        end
    end

	subgraph MISC
		direction TB

		cephadm{{cephadm}};
		ceph-tools{{ceph-tools}};
		ceph-test{{ceph-test}};
	end

	subgraph PYTHON
		direction TB

		py-ceph-common[[python-ceph-common]];
		py-rados[[python-rados]];
		py-rgw[[python-rgw]];
		py-rbd[[python-rbd]];
		py-cephfs[[python-cephfs]];
	end

	subgraph VIRTUAL
		direction TB

		ceph-libs((ceph-libs));
		ceph-cluster((ceph-cluster));
		ceph-util((ceph-util));
		ceph((ceph));
	end

    %% COMMON
    ceph-common --> ceph-compressor;
    ceph-common --> ceph-crypto;
    ceph-common --> ceph-erasure;

    %% RADOS
    librados --> ceph-common;
	ceph-rados --> librados;

	%% SQLITE
	libcephsqlite ---> librados;

    %% CLUSTER
	ceph-base ---> ceph-common & librados & py-ceph-common;
    ceph-mon --> ceph-base;
    ceph-osd --> ceph-base;
    ceph-mds --> ceph-base;
	ceph-rgw --> ceph-base & librgw;
    ceph-mgr --> ceph-base; ceph-mgr ---> py-cephfs & py-rbd & libcephsqlite;

    %% RBD
    librbd ----> librados;
	ceph-rbd --> librbd;

    %% FS
    libcephfs ----> librados;
	ceph-cephfs --> libcephfs;
	java-cephfs ---> libcephfs;
	cephfs-shell ---> py-ceph-common & py-cephfs;
	cephfs-top ---> py-ceph-common & py-cephfs;

    %% RGW
    librgw ----> librados;

	%% MISC
	ceph-tools ----> ceph-base;
	ceph-test ----> ceph-base;
	ceph-volume ---> ceph-osd & py-ceph-common;

	%% Python
	py-ceph-common ----> ceph-common;
	py-rados --> py-ceph-common ; py-rados ----> librados;
	py-rbd --> py-ceph-common & py-rados; py-rbd ----> librbd;
	py-cephfs --> py-ceph-common & py-rados; py-cephfs ----> libcephfs;
	py-rgw --> py-ceph-common & py-rados; py-rgw ----> librgw;

	%% VIRTUAL
	ceph-libs ----> librados & librbd & libcephfs & librgw & libcephsqlite;
	ceph-cluster ----> ceph-mon & ceph-mgr & ceph-osd & ceph-mds & ceph-rgw & ceph-volume;
	ceph ----> ALL;

What

This is a large, (mostly) backwards-compatible modification to the way Ceph is packaged for Arch Linux, bringing it more in line with how the Debian and RHEL packages work.

We split the existing ceph and ceph-libs package monoliths into much smaller, well-defined packages along the following themes:

Core

This is primarily the common code shared between internal components, the ceph config libs, and, most externally, ceph-rados and librados, which form the base clients that all ceph applications use to communicate with the cluster.

Cluster

The server-side components: namely the monitor, manager, OSD, (CephFS) metadata, and RADOS gateway daemons. These form the backend, or server side, of a ceph cluster.

Clients and applications

These are the various clients both against rados directly and against ceph applications like RBD, CephFS and RGW.

Python

The various Python bindings exposed by Ceph. These are often dual-purpose: utilized internally by Ceph, and also exposing the cluster and application APIs.

Misc.

Various bits that aren't strongly tied to anything else; some examples include the Java cephfs JNI bindings, utilities for benchmarking clusters, and misc. CLI tooling of limited value.

Virtual

These are "targets" and are used purely for the side effects of installing dependencies, to meet some understandable operator goal, for example "all ceph sonames" or "all cluster components".

Why

There are two primary umbrellas for this change:

UX

As a user of the Ceph packages, it frustrates me to no end that I have to pull in an entire JVM dep tree for a set of JNI bindings that likely no one uses.
In the same spirit, it is irritating that I need ~200M of unrelated binaries simply to have access to the ceph CLI utility on a machine.

Both of these issues are fixed.

From a downstream packager's perspective, it also greatly improves the situation: we no longer need ~30M of unrelated binaries for the one soname we care about in our build(), and each package now correctly notes the .so files it provides, meaning we get to play nicely with libalpm's soname resolution.
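As a sketch of what "correctly notes the .so files" means in PKGBUILD terms (the install path and the unversioned provides form are illustrative):

package_librados() {
  pkgdesc='RADOS distributed object store client library'
  provides=('librados.so')  # lets libalpm satisfy dependents that ask for the soname
  install -Dm755 build/lib/librados.so* -t "$pkgdir/usr/lib/"
}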

Maintainability

From the packager's perspective, the changes have made it much easier to correctly package Ceph, both through the refactoring around _make_ceph_packages() and by giving a clear understanding of what goes where.

When

My plan is to make these changes public (e.g. pushed to AUR) as part of the ceph v18 release, which will likely happen in July (~2 months from the date of posting this), assuming I haven't received a compelling reason not to do this.


What it means for users

Source builds

That is, users of: https://aur.archlinux.org/pkgbase/ceph

I'm hopeful that this change is nearly invisible to casual users of the package, and I don't think I'll even need to raise the epoch for this, as the virtual packages should make it nothing more than an update with ~30 more ceph packages than usual.

Binary packages

That is, users of: https://aur.archlinux.org/pkgbase/ceph-bin

I'm uncertain whether I will propagate these changes to the binary packages, as I feel this would pollute the AUR namespace with -bin suffixed packages.

I'm interested in feedback on whether a third-party repository hosting the binary packages would be useful to users. If so, I can make my internal ceph repo public.

Regardless, users of the AUR packages will likely not experience any change, and I'll transparently re-merge the split packages into the ceph{,-libs,-mgr}-bin packages that currently exist.


What you can do

Leave feedback

I'm interested in hearing what the community has to say about this change. The most useful thing you can do is leave a thoughtful comment below about these changes. Or, if that's too much work, a 👍 or 👎 on this issue would be great.

Test it out

I have created a public Arch Linux repo with v17.2.6-2 artifacts split into the new package scheme, and would appreciate any feedback on them if you feel inclined to try them out.

That said, these have not been tested beyond the usual unit tests in check() that Ceph provides, and I reserve the right to wipe them whenever I please. Do not use them for anything you care about; they are for testing only.

  1. Grab and trust the signing key
pacman-key --recv-key AA2F58D04E59C080 --keyserver keyserver.ubuntu.com
pacman-key --lsign-key AA2F58D04E59C080
  2. Install the keyring
pacman -U https://storage.st8l.com/archlinux/repo/ceph-test/os/x86_64/st8l-keyring-1.0.0-1-any.pkg.tar.zst
  3. Add the repo to your pacman.conf
# /etc/pacman.conf
[ceph-test]
Server=https://storage.st8l.com/archlinux/repo/$repo/os/$arch
  4. Query the repo
pacman -Syy && pacman -Ss ceph-

Investigate breaking apart `{librados,rbd,cephfs,rgw}` libs into separate packages

I want to move to a smaller package model for Ceph. Specifically, I'd like to have access to the ceph tool without all of the baggage involved -- namely, a whole JVM install for a .jar file that probably no one uses.

Ideally I think this should look something like the following:

graph TD
    subgraph CORE
        direction BT

        subgraph COMMON
            direction LR

            common[[common]];
            compressor[[compressor]];
            crypto[[crypto]];
            denc[[denc]];
            ec[[ec]];
        end

        subgraph RADOS
            direction LR

            librados[[librados]];
            rados_classes[[rados_classes]];
        end
    end

    subgraph CLUSTER
        direction TB

        mon{{mon}};
        osd{{osd}};
		mds{{mds}};

        subgraph MGR
            direction LR

            mgr{{mgr}};
        end
    end

    subgraph APPS
        direction TB

        subgraph RBD
            direction LR

            rbd{{rbd}};
            librbd[[librbd]];
        end

        subgraph FS
            direction LR

            cephfs-java[/cephfs-java\];
            libcephfs_jni[[libcephfs_jni]];
            libcephfs[[libcephfs]];
        end

        subgraph RGW
            direction LR

            rgw{{rgw}};
            librgw[[librgw]];
        end
    end

	subgraph PYTHON
		direction TB

		pyceph[[python-ceph]]
		pyrgw[[python-rgw]]
		pyrbd[[python-rbd]]
		pyrados[[python-rados]]
		pycephfs[[python-cephfs]]
		pycephvolume[[python-ceph-volume]]
		pycephdeploy[[python-ceph-deploy]]
	end


    %% COMMON
    common --> compressor;
    common --> crypto;
    common --> denc;
    common --> ec;

    %% RADOS
    librados ---> common;
    librados --> rados_classes;

    %% CLUSTER
    mon ---> common;
    osd ---> common;
    mds ---> libcephfs & common;

    %% MGR
    mgr ---> common;

    %% RBD
    rbd --> librbd;
	rbd ----> pycephvolume;
    librbd ----> librados;
    librbd ----> common;

    %% FS
    cephfs-java --> libcephfs_jni;
    libcephfs_jni --> libcephfs;
    libcephfs ----> librados;
    libcephfs ----> common;

    %% RGW
    rgw --> librgw;
    librgw ----> librados;
    librgw ----> common;

	%% Python
	pyceph ----> librados;
	pyrados --> pyceph ; pyrados ----> librados;
	pyrbd --> pyceph ; pyrbd ----> librbd;
	pycephfs --> pyceph ; pycephfs ----> libcephfs;
	pyrgw --> pyceph ; pyrgw ----> librgw;
	pycephdeploy --> pyceph;
	pycephvolume --> pycephdeploy;

E.g., with at least the following packages:

Package        Type     Comment
common         so       maybe should be a virtual package, or group?
librados       so       -
rados-classes  so       some of these should perhaps be stored in the like-named packages?
mon            bin      -
osd            bin      -
mgr            bin      needs a lot of work here for various python modules
librbd         so       -
rbd            bin      -
libcephfs      so       -
cephfs-java    bin+so   Probably could be two packages, just want it out of my tree
mds            bin      -
librgw         so       -
rgw            bin      -

Of course, the issue with this is that the name pollution in AUR between the {,-bin} suffixed versions will become quite serious. So, if I plan on actually attempting this, I should try to get ceph re-admitted to the Arch Linux repos, OR create a separate ceph-focused package repo.

Investigate logrotate conflicts when using `cephadm`

cephadm creates a logrotate rule that conflicts with the one we vendor from the upstream here: https://github.com/ceph/ceph/blob/main/src/logrotate.conf

Unfortunately, cephadm creates this file itself at runtime, which limits our options for fixing this on our end.

I'm not sure of the best way to fix this, as ceph can create a variety of files in the directory.

Options:

  • Just make the assumption that we'll only support clusters named ceph and use .../ceph-* as our rule
  • Change the compile time default logdir to something else which doesn't conflict
  • Investigate if logrotate has a setting to help with duplicate rules
  • Ask the cephadm maintainer to support a patch which fixes this on their end

Context

This package creates a logrotate conflict with cephadm. ceph-bin installs /etc/logrotate.d/ceph which contains this rule:

/var/log/ceph/*.log {

This rule also matches /var/log/ceph/cephadm.log, which is in turn managed by /etc/logrotate.d/cephadm:

/var/log/ceph/cephadm.log {

producing this error when logrotate runs:

dic 03 10:05:14 stryke systemd[1]: Starting Rotate log files...
dic 03 10:05:14 stryke logrotate[3412]: error: cephadm:2 duplicate log entry for /var/log/ceph/cephadm.log
dic 03 10:05:14 stryke logrotate[3412]: error: found error in file cephadm, skipping
dic 03 10:05:14 stryke systemd[1]: logrotate.service: Main process exited, code=exited, status=1/FAILURE
dic 03 10:05:14 stryke systemd[1]: logrotate.service: Failed with result 'exit-code'.
dic 03 10:05:14 stryke systemd[1]: Failed to start Rotate log files.

I fixed the problem by modifying /etc/logrotate.d/ceph as follows:

/var/log/ceph/ceph-*.log {

Is this correct? If yes, can this patch be implemented in ceph-bin?

@pbazaah Actually, /etc/logrotate.d/cephadm is not provided by the cephadm package; it's created when running cephadm. I tested this by deleting it and running sudo cephadm shell, and it appeared again. Check line 9405 of /usr/bin/cephadm.

In my opinion this situation should be fixed, either by modifying the ceph-bin or the cephadm package. Modifying the latter seems more troublesome to me, since it requires patching the cephadm script, while patching /etc/logrotate.d/ceph in ceph-bin seems much easier. But the final word obviously belongs to you and the cephadm maintainer.
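For reference, a sketch of the narrowed rule (the glob is the fix quoted above; the directives mirror upstream's logrotate.conf, and the exact option set shipped in /etc/logrotate.d/ceph may differ):

/var/log/ceph/ceph-*.log {
    rotate 7
    daily
    compress
    sharedscripts
    missingok
    notifempty
}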
