
cobaya's People

Contributors

adamormondroyd, amandamacinnis, cmbant, doicbek, earosenberg, eirikgje, gerrfarr, ggalloni, htjense, itrharrison, jesustorrado, jiangjq2000, kushallodha, lukashergt, markm42, misharash, msyriac, mtristram, nataliehogg, pablo-lemos, stefan-heimersheim, tilmantroester, timothydmorton, umilta, williamjameshandley, xgarrido

cobaya's Issues

Avoid trying to import MPI for installing

In the process of ensuring that it only runs on a single MPI process, the install script currently attempts to run from mpi4py import MPI. If mpi4py is not available, this is all well and good, as the try/except works as intended. However, when running cobaya-install on a system that has mpi4py installed but is disallowed from using it (e.g. the head nodes at NERSC), cobaya-install fails, because from mpi4py import MPI aborts the script with a fatal error rather than simply raising an exception.

It appears that moving the from cobaya.mpi import am_single_or_primary_process import further inside the script, protected by e.g. a '--no-mpi' argument check, should work. Happy to make a PR for this.
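A minimal sketch of the proposed guard (the function names mirror the issue; the exact flag plumbing is an assumption). The key point is that the flag must skip the import entirely, since on the affected systems the import aborts the interpreter rather than raising ImportError:

```python
def get_mpi_rank(no_mpi=False):
    """Return this process's MPI rank, or 0 when MPI is skipped.

    The '--no-mpi' flag matters because on some systems (e.g. NERSC
    head nodes) 'from mpi4py import MPI' does not raise ImportError
    but aborts the interpreter outright, so try/except cannot help:
    the import must be avoided entirely.
    """
    if no_mpi:
        return 0
    try:
        from mpi4py import MPI  # only attempted when MPI is allowed
    except ImportError:
        return 0
    return MPI.COMM_WORLD.Get_rank()


def am_single_or_primary_process(no_mpi=False):
    """True for the rank-0 process, or always when MPI is skipped."""
    return get_mpi_rank(no_mpi=no_mpi) == 0
```

cobaya-install would then call am_single_or_primary_process(no_mpi=args.no_mpi) instead of importing from cobaya.mpi at module level.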

planck_2018_lowl.EE with PolyChord

While running PolyChord with planck_2018_lowl.TT and with planck_2018_highl_plik.TT works, I get the following error as soon as I add planck_2018_lowl.EE to the likelihoods (full debug output further below):

 2019-09-30 16:54:06,064 [planck_2018_lowl.EE] Got parameters {'A_planck': 1.0059452468116512}
 2019-09-30 16:54:06,071 [exception handler] ---------------------------------------

Traceback (most recent call last):
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/likelihood.py", line 129, in _logp_cached
    i_state = next(i for i in range(self.n_states)
StopIteration

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/lh561/.virtualenvs/py36env/bin/cobaya-run", line 11, in <module>
    load_entry_point('cobaya', 'console_scripts', 'cobaya-run')()
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/run.py", line 154, in run_script
    run(info)
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/run.py", line 87, in run
    sampler.run()
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/sampler.py", line 182, in __exit__
    self.close(exception_type, exception_value, traceback)
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/run.py", line 87, in run
    sampler.run()
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/samplers/polychord/polychord.py", line 233, in run
    self.pc_prior, self.dumper)
  File "/home/lh561/.virtualenvs/py36env/lib/python3.6/site-packages/pypolychord-1.16-py3.6-linux-x86_64.egg/pypolychord/__init__.py", line 231, in run_polychord
    settings.seed)
  File "/home/lh561/.virtualenvs/py36env/lib/python3.6/site-packages/pypolychord-1.16-py3.6-linux-x86_64.egg/pypolychord/__init__.py", line 192, in wrap_loglikelihood
    logL, phi[:] = loglikelihood(theta)
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/samplers/polychord/polychord.py", line 218, in logpost
    self.model.logposterior(params_values))
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/model.py", line 280, in logposterior
    make_finite=make_finite, cached=cached, _no_check=True)
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/model.py", line 186, in loglikes
    _derived=_derived, cached=cached)
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/likelihood.py", line 360, in logps
    _derived=this_derived_dict, cached=cached, **this_params_dict)]
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/likelihood.py", line 146, in _logp_cached
    self.states[i_state]["logp"] = self.logp(_derived=_derived, **params_values)
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/likelihoods/_base_classes/_planck_clik_prototype.py", line 139, in logp
    loglike = self.clik(self.vector)[0]
  File "lkl.pyx", line 89, in clik.lkl.clik.__call__
clik.lkl.CError: <unprintable CError object>
-------------------------------------------------------------

Running with the mcmc sampler seems to work fine. Hence, at first I suspected it might have to do with extreme values of the A_planck parameter (i.e. the tails of the default Gaussian prior). However, even after switching to a narrow uniform prior on A_planck, the error persists.

Does anybody have an idea what might be causing this? The error message <unprintable CError object> is rather cryptic to me...
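If the failures do come from clik aborting at extreme points in the prior (which PolyChord visits while generating live points, unlike a converged MCMC chain), one possible workaround, sketched here under that assumption, is to catch the hard error at the likelihood-call site and map it to the sampler's log-zero value:

```python
def safe_logp(logp_func, params, error_types=(Exception,), logzero=-1e300):
    """Evaluate a log-likelihood, mapping hard failures to log-zero.

    Nested sampling then simply rejects the point instead of crashing;
    the default logzero matches PolyChord's setting in the run above.
    Pass the specific exception class (e.g. clik's CError) as
    error_types so genuine bugs are not silently swallowed.
    """
    try:
        return logp_func(params)
    except error_types:
        return logzero

# Hypothetical usage inside the clik wrapper's logp():
# loglike = safe_logp(lambda v: self.clik(v)[0], self.vector,
#                     error_types=(clik.lkl.CError,))
```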


Full debug output:

(py36env) lh561@login-e-13:~/Documents/Projects/CobayaPrj$cobaya-run -f input/pc32_camb_p18_TTTEEElowTE_lcdm.yaml -d
 2019-09-30 16:54:04,838 [output] Output to be read-from/written-into folder 'chains/planck_2018/lcdm/pc32_camb_p18_TTTEEElowTE_lcdm', with prefix 'pc32_camb_p18_TTTEEElowTE_lcdm'
 2019-09-30 16:54:04,839 [output] Found existing products with the requested ouput prefix: 'chains/planck_2018/lcdm/pc32_camb_p18_TTTEEElowTE_lcdm/pc32_camb_p18_TTTEEElowTE_lcdm'
 2019-09-30 16:54:04,840 [output] Deleting previous chain ('force' was requested).
 2019-09-30 16:54:04,944 [input] Parameter 'A_planck' multiply defined.
 2019-09-30 16:54:04,944 [input] Parameter 'A_planck' multiply defined.
 2019-09-30 16:54:04,979 [run] Input info updated with defaults (dumped to YAML):
theory:
  camb:
    extra_args:
      halofit_version: mead
      bbn_predictor: PArthENoPE_880.2_standard.dat
      lens_potential_accuracy: 1
      num_massive_neutrinos: 1
      nnu: 3.046
      theta_H0_range:
      - 20
      - 100
    path: null
    renames:
      omegabh2: ombh2
      omegach2: omch2
      omegal: omega_de
      omegak: omk
      yhe: YHe
      yheused: YHe
      YpBBN: Y_p
      zrei: zre
    speed: 0.3
    stop_at_error: false
    use_planck_names: false
likelihood:
  planck_2018_lowl.TT:
    clik_file: baseline/plc_3.0/low_l/commander/commander_dx12_v3_2_29.clik
    params:
    - A_planck
    path: null
    product_id: '151902'
    renames:
    - lowl
    speed: 3000
  planck_2018_lowl.EE:
    clik_file: baseline/plc_3.0/low_l/simall/simall_100x143_offlike5_EE_Aplanck_B.clik
    params:
    - A_planck
    path: null
    product_id: '151902'
    renames:
    - lowE
    speed: 4000
  planck_2018_highl_plik.TTTEEE:
    clik_file: baseline/plc_3.0/hi_l/plik/plik_rd12_HM_v22b_TTTEEE.clik
    params:
    - A_planck
    - calib_100T
    - calib_217T
    - A_pol
    - calib_100P
    - calib_143P
    - calib_217P
    - cib_index
    - A_cib_217
    - xi_sz_cib
    - A_sz
    - ksz_norm
    - gal545_A_100
    - gal545_A_143
    - gal545_A_143_217
    - gal545_A_217
    - A_sbpx_100_100_TT
    - A_sbpx_143_143_TT
    - A_sbpx_143_217_TT
    - A_sbpx_217_217_TT
    - ps_A_100_100
    - ps_A_143_143
    - ps_A_143_217
    - ps_A_217_217
    - galf_TE_index
    - galf_TE_A_100
    - galf_TE_A_100_143
    - galf_TE_A_100_217
    - galf_TE_A_143
    - galf_TE_A_143_217
    - galf_TE_A_217
    - galf_EE_index
    - galf_EE_A_100
    - galf_EE_A_100_143
    - galf_EE_A_100_217
    - galf_EE_A_143
    - galf_EE_A_143_217
    - galf_EE_A_217
    - A_cnoise_e2e_100_100_EE
    - A_cnoise_e2e_143_143_EE
    - A_cnoise_e2e_217_217_EE
    - A_sbpx_100_100_EE
    - A_sbpx_100_143_EE
    - A_sbpx_100_217_EE
    - A_sbpx_143_143_EE
    - A_sbpx_143_217_EE
    - A_sbpx_217_217_EE
    path: null
    product_id: '151902'
    renames:
    - plikHM_TTTEEE
    speed: 7
sampler:
  polychord:
    base_dir: raw_polychord_output
    blocking:
    - - 1
      - - ombh2
        - omch2
        - theta_MC_100
        - tau
        - logA
        - ns
    - - 10
      - - A_planck
    - - 20
      - - A_cib_217
        - xi_sz_cib
        - A_sz
        - ps_A_100_100
        - ps_A_143_143
        - ps_A_143_217
        - ps_A_217_217
        - ksz_norm
        - gal545_A_100
        - gal545_A_143
        - gal545_A_143_217
        - gal545_A_217
        - calib_100T
        - calib_217T
        - galf_TE_A_100_143
        - galf_TE_A_143_217
        - galf_TE_A_100_217
        - galf_TE_A_100
        - galf_TE_A_143
        - galf_TE_A_217
    boost_posterior: 0
    callback_function: null
    cluster_posteriors: true
    compression_factor: 0.36787944117144233
    confidence_for_unbounded: 0.9999995
    do_clustering: true
    equals: true
    feedback: null
    file_root: null
    logzero: -1.0e+300
    max_ndead: .inf
    nlive: 32
    nprior: null
    num_repeats: 2
    path: global
    posteriors: true
    precision_criterion: 0.001
    read_resume: true
    seed: null
    write_dead: true
    write_live: true
    write_resume: true
    write_stats: true
prior:
  SZ: 'lambda ksz_norm, A_sz: stats.norm.logpdf(ksz_norm+1.6*A_sz, loc=9.5, scale=3.0)'
params:
  logA:
    prior:
      min: 2.5
      max: 3.7
    ref:
      dist: norm
      loc: 3.05
      scale: 0.001
    proposal: 0.001
    latex: \log(10^{10} A_\mathrm{s})
    drop: true
  As:
    value: 'lambda logA: 1e-10*np.exp(logA)'
    latex: A_\mathrm{s}
    derived: true
  ns:
    prior:
      min: 0.885
      max: 1.04
    ref:
      dist: norm
      loc: 0.965
      scale: 0.004
    proposal: 0.002
    latex: n_\mathrm{s}
  theta_MC_100:
    prior:
      min: 1.03
      max: 1.05
    ref:
      dist: norm
      loc: 1.04
      scale: 0.0004
    proposal: 0.0002
    latex: 100\theta_\mathrm{MC}
    drop: true
    renames: theta
  cosmomc_theta:
    value: 'lambda theta_MC_100: 1.e-2*theta_MC_100'
    derived: false
  H0:
    latex: H_0
    min: 20
    max: 100
    derived: true
  ombh2:
    prior:
      min: 0.019
      max: 0.025
    ref:
      dist: norm
      loc: 0.0224
      scale: 0.0001
    proposal: 0.0001
    latex: \Omega_\mathrm{b} h^2
    renames:
    - omegabh2
  omch2:
    prior:
      min: 0.095
      max: 0.145
    ref:
      dist: norm
      loc: 0.12
      scale: 0.001
    proposal: 0.0005
    latex: \Omega_\mathrm{c} h^2
    renames:
    - omegach2
  omegam:
    latex: \Omega_\mathrm{m}
    derived: true
  omegamh2:
    derived: 'lambda omegam, H0: omegam*(H0/100)**2'
    latex: \Omega_\mathrm{m} h^2
  mnu:
    value: 0.06
  omega_de:
    latex: \Omega_\Lambda
    derived: true
    renames:
    - omegal
  YHe:
    latex: Y_\mathrm{P}
    derived: true
    renames:
    - yheused
    - yhe
  Y_p:
    latex: Y_P^\mathrm{BBN}
    derived: true
    renames:
    - YpBBN
  DHBBN:
    derived: 'lambda DH: 10**5*DH'
    latex: 10^5 \mathrm{D}/\mathrm{H}
  tau:
    prior:
      min: 0.01
      max: 0.4
    ref:
      dist: norm
      loc: 0.055
      scale: 0.006
    proposal: 0.003
    latex: \tau_\mathrm{reio}
  zre:
    latex: z_\mathrm{re}
    derived: true
    renames:
    - zrei
  sigma8:
    latex: \sigma_8
    derived: true
  s8h5:
    derived: 'lambda sigma8, H0: sigma8*(H0*1e-2)**(-0.5)'
    latex: \sigma_8/h^{0.5}
  s8omegamp5:
    derived: 'lambda sigma8, omegam: sigma8*omegam**0.5'
    latex: \sigma_8 \Omega_\mathrm{m}^{0.5}
  s8omegamp25:
    derived: 'lambda sigma8, omegam: sigma8*omegam**0.25'
    latex: \sigma_8 \Omega_\mathrm{m}^{0.25}
  A:
    derived: 'lambda As: 1e9*As'
    latex: 10^9 A_\mathrm{s}
  clamp:
    derived: 'lambda As, tau: 1e9*As*np.exp(-2*tau)'
    latex: 10^9 A_\mathrm{s} e^{-2\tau}
  age:
    latex: '{\rm{Age}}/\mathrm{Gyr}'
    derived: true
  rdrag:
    latex: r_\mathrm{drag}
    derived: true
  chi2__CMB:
    derived: 'lambda chi2__planck_2018_lowl_TT, chi2__planck_2018_lowl_EE, chi2__planck_2018_highl_plik_TTTEEE:
      sum([chi2__planck_2018_lowl_TT, chi2__planck_2018_lowl_EE, chi2__planck_2018_highl_plik_TTTEEE])'
    latex: \chi^2_\mathrm{CMB}
  A_planck:
    prior:
      dist: norm
      loc: 1
      scale: 0.0025
    ref:
      dist: norm
      loc: 1
      scale: 0.002
    proposal: 0.0005
    latex: y_\mathrm{cal}
    renames: calPlanck
  calib_100T:
    prior:
      dist: norm
      loc: 1.0002
      scale: 0.0007
    ref:
      dist: norm
      loc: 1.0002
      scale: 0.001
    proposal: 0.0005
    latex: c_{100}
    renames: cal0
  calib_217T:
    prior:
      dist: norm
      loc: 0.99805
      scale: 0.00065
    ref:
      dist: norm
      loc: 0.99805
      scale: 0.001
    proposal: 0.0005
    latex: c_{217}
    renames: cal2
  A_pol:
    value: 1
  calib_100P:
    value: 1.021
  calib_143P:
    value: 0.966
  calib_217P:
    value: 1.04
  cib_index:
    value: -1.3
  A_cib_217:
    prior:
      dist: uniform
      min: 0
      max: 200
    ref:
      dist: norm
      loc: 67
      scale: 10
    proposal: 1.2
    latex: A^\mathrm{CIB}_{217}
    renames: acib217
  xi_sz_cib:
    prior:
      dist: uniform
      min: 0
      max: 1
    ref:
      dist: halfnorm
      loc: 0
      scale: 0.1
    proposal: 0.1
    latex: \xi^{\mathrm{tSZ}\times\mathrm{CIB}}
    renames: xi
  A_sz:
    prior:
      dist: uniform
      min: 0
      max: 10
    ref:
      dist: norm
      loc: 7
      scale: 2
    proposal: 0.6
    latex: A^\mathrm{tSZ}_{143}
    renames: asz143
  ksz_norm:
    prior:
      dist: uniform
      min: 0
      max: 10
    ref:
      dist: halfnorm
      loc: 0
      scale: 3
    proposal: 1
    latex: A^\mathrm{kSZ}
    renames: aksz
  gal545_A_100:
    prior:
      dist: norm
      loc: 8.6
      scale: 2
    ref:
      dist: norm
      loc: 7
      scale: 2
    proposal: 1
    latex: A^\mathrm{dustTT}_{100}
    renames: kgal100
  gal545_A_143:
    prior:
      dist: norm
      loc: 10.6
      scale: 2
    ref:
      dist: norm
      loc: 9
      scale: 2
    proposal: 1
    latex: A^\mathrm{dustTT}_{143}
    renames: kgal143
  gal545_A_143_217:
    prior:
      dist: norm
      loc: 23.5
      scale: 8.5
    ref:
      dist: norm
      loc: 21
      scale: 4
    proposal: 1.5
    latex: A^\mathrm{dustTT}_{\mathrm{143}\times\mathrm{217}}
    renames: kgal143217
  gal545_A_217:
    prior:
      dist: norm
      loc: 91.9
      scale: 20
    ref:
      dist: norm
      loc: 80
      scale: 15
    proposal: 2
    latex: A^\mathrm{dustTT}_{217}
    renames: kgal217
  A_sbpx_100_100_TT:
    value: 1
  A_sbpx_143_143_TT:
    value: 1
  A_sbpx_143_217_TT:
    value: 1
  A_sbpx_217_217_TT:
    value: 1
  ps_A_100_100:
    prior:
      dist: uniform
      min: 0
      max: 400
    ref:
      dist: norm
      loc: 257
      scale: 24
    proposal: 17
    latex: A^\mathrm{PS}_{100}
    renames: aps100
  ps_A_143_143:
    prior:
      dist: uniform
      min: 0
      max: 400
    ref:
      dist: norm
      loc: 47
      scale: 10
    proposal: 3
    latex: A^\mathrm{PS}_{143}
    renames: aps143
  ps_A_143_217:
    prior:
      dist: uniform
      min: 0
      max: 400
    ref:
      dist: norm
      loc: 40
      scale: 12
    proposal: 2
    latex: A^\mathrm{PS}_{\mathrm{143}\times\mathrm{217}}
    renames: aps143217
  ps_A_217_217:
    prior:
      dist: uniform
      min: 0
      max: 400
    ref:
      dist: norm
      loc: 104
      scale: 13
    proposal: 2.5
    latex: A^\mathrm{PS}_{217}
    renames: aps217
  galf_TE_index:
    value: -2.4
  galf_TE_A_100:
    prior:
      dist: norm
      loc: 0.13
      scale: 0.042
    ref:
      dist: norm
      loc: 0.13
      scale: 0.1
    proposal: 0.1
    latex: A^\mathrm{dustTE}_{100}
    renames: galfTE100
  galf_TE_A_100_143:
    prior:
      dist: norm
      loc: 0.13
      scale: 0.036
    ref:
      dist: norm
      loc: 0.13
      scale: 0.1
    proposal: 0.1
    latex: A^\mathrm{dustTE}_{\mathrm{100}\times\mathrm{143}}
    renames: galfTE100143
  galf_TE_A_100_217:
    prior:
      dist: norm
      loc: 0.46
      scale: 0.09
    ref:
      dist: norm
      loc: 0.46
      scale: 0.1
    proposal: 0.1
    latex: A^\mathrm{dustTE}_{\mathrm{100}\times\mathrm{217}}
    renames: galfTE100217
  galf_TE_A_143:
    prior:
      dist: norm
      loc: 0.207
      scale: 0.072
    ref:
      dist: norm
      loc: 0.207
      scale: 0.1
    proposal: 0.1
    latex: A^\mathrm{dustTE}_{143}
    renames: galfTE143
  galf_TE_A_143_217:
    prior:
      dist: norm
      loc: 0.69
      scale: 0.09
    ref:
      dist: norm
      loc: 0.69
      scale: 0.1
    proposal: 0.1
    latex: A^\mathrm{dustTE}_{\mathrm{143}\times\mathrm{217}}
    renames: galfTE143217
  galf_TE_A_217:
    prior:
      dist: norm
      loc: 1.938
      scale: 0.54
    ref:
      dist: norm
      loc: 1.938
      scale: 0.2
    proposal: 0.2
    latex: A^\mathrm{dustTE}_{217}
    renames: galfTE217
  galf_EE_index:
    value: -2.4
  galf_EE_A_100:
    value: 0.055
    latex: A^\mathrm{dustEE}_{100}
    renames: galfEE100
  galf_EE_A_100_143:
    value: 0.04
    latex: A^\mathrm{dustEE}_{\mathrm{100}\times\mathrm{143}}
    renames: galfEE100143
  galf_EE_A_100_217:
    value: 0.094
    latex: A^\mathrm{dustEE}_{\mathrm{100}\times\mathrm{217}}
    renames: galfEE100217
  galf_EE_A_143:
    value: 0.086
    latex: A^\mathrm{dustEE}_{143}
    renames: galfEE143
  galf_EE_A_143_217:
    value: 0.21
    latex: A^\mathrm{dustEE}_{\mathrm{143}\times\mathrm{217}}
    renames: galfEE143217
  galf_EE_A_217:
    value: 0.7
    latex: A^\mathrm{dustEE}_{217}
    renames: galfEE217
  A_cnoise_e2e_100_100_EE:
    value: 1
  A_cnoise_e2e_143_143_EE:
    value: 1
  A_cnoise_e2e_217_217_EE:
    value: 1
  A_sbpx_100_100_EE:
    value: 1
  A_sbpx_100_143_EE:
    value: 1
  A_sbpx_100_217_EE:
    value: 1
  A_sbpx_143_143_EE:
    value: 1
  A_sbpx_143_217_EE:
    value: 1
  A_sbpx_217_217_EE:
    value: 1
output: pc32_camb_p18_TTTEEElowTE_lcdm
modules: /rds/user/lh561/hpc-work/data/CobayaData/modules
debug: true
resume: false
force: true

 2019-09-30 16:54:05,072 [prior] Loading external prior 'SZ' from: 'lambda ksz_norm, A_sz: stats.norm.logpdf(ksz_norm+1.6*A_sz, loc=9.5, scale=3.0)'
 2019-09-30 16:54:05,072 [prior] *WARNING* External prior 'SZ' loaded. Mind that it might not be normalized!
 2019-09-30 16:54:05,073 [likelihood] Parameters were assigned as follows:
 2019-09-30 16:54:05,073 [likelihood] - 'planck_2018_lowl.TT':
 2019-09-30 16:54:05,073 [likelihood]      Input:  ['A_planck']
 2019-09-30 16:54:05,073 [likelihood]      Output: []
 2019-09-30 16:54:05,073 [likelihood] - 'planck_2018_lowl.EE':
 2019-09-30 16:54:05,073 [likelihood]      Input:  ['A_planck']
 2019-09-30 16:54:05,073 [likelihood]      Output: []
 2019-09-30 16:54:05,073 [likelihood] - 'planck_2018_highl_plik.TTTEEE':
 2019-09-30 16:54:05,073 [likelihood]      Input:  ['A_planck', 'calib_100T', 'calib_217T', 'A_pol', 'calib_100P', 'calib_143P', 'calib_217P', 'cib_index', 'A_cib_217', 'xi_sz_cib', 'A_sz', 'ksz_norm', 'gal545_A_100', 'gal545_A_143', 'gal545_A_143_217', 'gal545_A_217', 'A_sbpx_100_100_TT', 'A_sbpx_143_143_TT', 'A_sbpx_143_217_TT', 'A_sbpx_217_217_TT', 'ps_A_100_100', 'ps_A_143_143', 'ps_A_143_217', 'ps_A_217_217', 'galf_TE_index', 'galf_TE_A_100', 'galf_TE_A_100_143', 'galf_TE_A_100_217', 'galf_TE_A_143', 'galf_TE_A_143_217', 'galf_TE_A_217', 'galf_EE_index', 'galf_EE_A_100', 'galf_EE_A_100_143', 'galf_EE_A_100_217', 'galf_EE_A_143', 'galf_EE_A_143_217', 'galf_EE_A_217', 'A_cnoise_e2e_100_100_EE', 'A_cnoise_e2e_143_143_EE', 'A_cnoise_e2e_217_217_EE', 'A_sbpx_100_100_EE', 'A_sbpx_100_143_EE', 'A_sbpx_100_217_EE', 'A_sbpx_143_143_EE', 'A_sbpx_143_217_EE', 'A_sbpx_217_217_EE']
 2019-09-30 16:54:05,073 [likelihood]      Output: []
 2019-09-30 16:54:05,073 [likelihood] - 'theory':
 2019-09-30 16:54:05,073 [likelihood]      Input:  ['As', 'ns', 'cosmomc_theta', 'ombh2', 'omch2', 'mnu', 'tau']
 2019-09-30 16:54:05,073 [likelihood]      Output: ['H0', 'omegam', 'omega_de', 'YHe', 'Y_p', 'zre', 'sigma8', 'age', 'rdrag', 'DH']
 2019-09-30 16:54:05,074 [camb] *local* CAMB not found at /rds/user/lh561/hpc-work/data/CobayaData/modules/code/CAMB
 2019-09-30 16:54:05,074 [camb] Importing *global* CAMB.
 2019-09-30 16:54:05,129 [planck_2018_lowl.TT] Importing clik from /rds/user/lh561/hpc-work/data/CobayaData/modules/code/planck
----
clik version 3be036bbb4f9
  gibbs_gauss b13c8fda-1837-41b5-ae2d-78d6b723fcf1
Checking likelihood '/rds/user/lh561/hpc-work/data/CobayaData/modules/data/planck_2018/baseline/plc_3.0/low_l/commander/commander_dx12_v3_2_29.clik' on test data. got -11.6257 expected -11.6257 (diff -1.07424e-09)
----
Initializing SimAll
----
clik version 3be036bbb4f9
  simall simall_EE_BB_TE
Checking likelihood '/rds/user/lh561/hpc-work/data/CobayaData/modules/data/planck_2018/baseline/plc_3.0/low_l/simall/simall_100x143_offlike5_EE_Aplanck_B.clik' on test data. got -197.99 expected -197.99 (diff -4.1778e-08)
----
----
clik version 3be036bbb4f9
  smica
Checking likelihood '/rds/user/lh561/hpc-work/data/CobayaData/modules/data/planck_2018/baseline/plc_3.0/hi_l/plik/plik_rd12_HM_v22b_TTTEEE.clik' on test data. got -1172.47 expected -1172.47 (diff -4.34054e-07)
----
 2019-09-30 16:54:05,560 [likelihood] The theory code will compute the following products, requested by the likelihoods: ['H0', 'omegam', 'omega_de', 'YHe', 'Y_p', 'zre', 'sigma8', 'age', 'rdrag', 'DH', 'Cl']
 2019-09-30 16:54:05,606 [polychord] Initializing
 2019-09-30 16:54:05,606 [polychord] Importing *global* PolyChord.
 2019-09-30 16:54:05,613 [polychord] Storing raw PolyChord output in 'chains/planck_2018/lcdm/pc32_camb_p18_TTTEEElowTE_lcdm/raw_polychord_output'.
 2019-09-30 16:54:05,614 [prior] *WARNING* There are unbounded parameters. Prior bounds are given at 0.9999995 confidence level. Beware of likelihood modes at the edge of the prior
 2019-09-30 16:54:05,621 [polychord] Calling PolyChord with arguments:
 2019-09-30 16:54:05,621 [polychord]   base_dir: chains/planck_2018/lcdm/pc32_camb_p18_TTTEEElowTE_lcdm/raw_polychord_output
 2019-09-30 16:54:05,621 [polychord]   boost_posterior: 0
 2019-09-30 16:54:05,621 [polychord]   cluster_dir: chains/planck_2018/lcdm/pc32_camb_p18_TTTEEElowTE_lcdm/raw_polychord_output/clusters
 2019-09-30 16:54:05,621 [polychord]   cluster_posteriors: True
 2019-09-30 16:54:05,621 [polychord]   compression_factor: 0.36787944117144233
 2019-09-30 16:54:05,621 [polychord]   do_clustering: True
 2019-09-30 16:54:05,621 [polychord]   equals: True
 2019-09-30 16:54:05,621 [polychord]   feedback: 2
 2019-09-30 16:54:05,622 [polychord]   file_root: pc32_camb_p18_TTTEEElowTE_lcdm
 2019-09-30 16:54:05,622 [polychord]   grade_dims: [6, 1, 20]
 2019-09-30 16:54:05,622 [polychord]   grade_frac: [12, 20, 800]
 2019-09-30 16:54:05,622 [polychord]   logzero: -1e+300
 2019-09-30 16:54:05,622 [polychord]   max_ndead: -1
 2019-09-30 16:54:05,622 [polychord]   maximise: False
 2019-09-30 16:54:05,622 [polychord]   nfail: -1
 2019-09-30 16:54:05,622 [polychord]   nlive: 32
 2019-09-30 16:54:05,622 [polychord]   nlives: {}
 2019-09-30 16:54:05,622 [polychord]   nprior: -1
 2019-09-30 16:54:05,622 [polychord]   num_repeats: 2
 2019-09-30 16:54:05,622 [polychord]   posteriors: True
 2019-09-30 16:54:05,622 [polychord]   precision_criterion: 0.001
 2019-09-30 16:54:05,622 [polychord]   read_resume: False
 2019-09-30 16:54:05,622 [polychord]   seed: -1
 2019-09-30 16:54:05,622 [polychord]   write_dead: True
 2019-09-30 16:54:05,622 [polychord]   write_live: True
 2019-09-30 16:54:05,622 [polychord]   write_paramnames: False
 2019-09-30 16:54:05,622 [polychord]   write_prior: True
 2019-09-30 16:54:05,622 [polychord]   write_resume: True
 2019-09-30 16:54:05,622 [polychord]   write_stats: True
 2019-09-30 16:54:05,622 [polychord] Sampling!
PolyChord: MPI is already initilised, not initialising, and will not finalize

PolyChord: Next Generation Nested Sampling
copyright: Will Handley, Mike Hobson & Anthony Lasenby
  version: 1.16
  release: 1st March 2019
    email: [email protected]

Run Settings
nlive    :      32
nDims    :      27
nDerived :      23
Doing Clustering
Generating equally weighted posteriors
Generating weighted posteriors
Clustering on posteriors
Writing a resume file tochains/planck_2018/lcdm/pc32_camb_p18_TTTEEElowTE_lcdm/raw_polychord_output/pc32_camb_p18_TTTEEElowTE_lcdm.resume

generating live points

 2019-09-30 16:54:05,626 [model] Posterior to be computed for parameters {'logA': 3.5037013816249623, 'ns': 0.9112045733967743, 'theta_MC_100': 1.041265037924763, 'ombh2': 0.021621996540981202, 'omch2': 0.14176980838432607, 'tau': 0.3095362460289987, 'A_planck': 1.0059452468116512, 'calib_100T': 0.9987164276517926, 'calib_217T': 0.9969867491925797, 'A_cib_217': 161.1045890924937, 'xi_sz_cib': 0.2051553164786664, 'A_sz': 5.386228462601731, 'ksz_norm': 6.765631397757046, 'gal545_A_100': 13.871489482728224, 'gal545_A_143': 10.6969527157267, 'gal545_A_143_217': 20.8161734334329, 'gal545_A_217': 178.5397415906308, 'ps_A_100_100': 130.0920806163113, 'ps_A_143_143': 89.60662764337052, 'ps_A_143_217': 164.5425707037178, 'ps_A_217_217': 84.67800468096046, 'galf_TE_A_100': 0.0490355091419101, 'galf_TE_A_100_143': 0.037890502727434563, 'galf_TE_A_100_217': 0.7121590683248342, 'galf_TE_A_143': 0.328393151407387, 'galf_TE_A_143_217': 1.0608083053493131, 'galf_TE_A_217': 4.576910928933754}
 2019-09-30 16:54:05,626 [prior] Evaluating prior at array([3.50370138e+00, 9.11204573e-01, 1.04126504e+00, 2.16219965e-02,
       1.41769808e-01, 3.09536246e-01, 1.00594525e+00, 9.98716428e-01,
       9.96986749e-01, 1.61104589e+02, 2.05155316e-01, 5.38622846e+00,
       6.76563140e+00, 1.38714895e+01, 1.06969527e+01, 2.08161734e+01,
       1.78539742e+02, 1.30092081e+02, 8.96066276e+01, 1.64542571e+02,
       8.46780047e+01, 4.90355091e-02, 3.78905027e-02, 7.12159068e-01,
       3.28393151e-01, 1.06080831e+00, 4.57691093e+00])
 2019-09-30 16:54:05,628 [prior] Got logpriors = [-52.7633457971763, -3.9407015400894174]
 2019-09-30 16:54:05,628 [likelihood] Got input parameters: OrderedDict([('As', 3.323825200880004e-09), ('ns', 0.9112045733967743), ('cosmomc_theta', 0.01041265037924763), ('ombh2', 0.021621996540981202), ('omch2', 0.14176980838432607), ('mnu', 0.06), ('tau', 0.3095362460289987), ('A_planck', 1.0059452468116512), ('calib_100T', 0.9987164276517926), ('calib_217T', 0.9969867491925797), ('A_pol', 1), ('calib_100P', 1.021), ('calib_143P', 0.966), ('calib_217P', 1.04), ('cib_index', -1.3), ('A_cib_217', 161.1045890924937), ('xi_sz_cib', 0.2051553164786664), ('A_sz', 5.386228462601731), ('ksz_norm', 6.765631397757046), ('gal545_A_100', 13.871489482728224), ('gal545_A_143', 10.6969527157267), ('gal545_A_143_217', 20.8161734334329), ('gal545_A_217', 178.5397415906308), ('A_sbpx_100_100_TT', 1), ('A_sbpx_143_143_TT', 1), ('A_sbpx_143_217_TT', 1), ('A_sbpx_217_217_TT', 1), ('ps_A_100_100', 130.0920806163113), ('ps_A_143_143', 89.60662764337052), ('ps_A_143_217', 164.5425707037178), ('ps_A_217_217', 84.67800468096046), ('galf_TE_index', -2.4), ('galf_TE_A_100', 0.0490355091419101), ('galf_TE_A_100_143', 0.037890502727434563), ('galf_TE_A_100_217', 0.7121590683248342), ('galf_TE_A_143', 0.328393151407387), ('galf_TE_A_143_217', 1.0608083053493131), ('galf_TE_A_217', 4.576910928933754), ('galf_EE_index', -2.4), ('galf_EE_A_100', 0.055), ('galf_EE_A_100_143', 0.04), ('galf_EE_A_100_217', 0.094), ('galf_EE_A_143', 0.086), ('galf_EE_A_143_217', 0.21), ('galf_EE_A_217', 0.7), ('A_cnoise_e2e_100_100_EE', 1), ('A_cnoise_e2e_143_143_EE', 1), ('A_cnoise_e2e_217_217_EE', 1), ('A_sbpx_100_100_EE', 1), ('A_sbpx_100_143_EE', 1), ('A_sbpx_100_217_EE', 1), ('A_sbpx_143_143_EE', 1), ('A_sbpx_143_217_EE', 1), ('A_sbpx_217_217_EE', 1)])
 2019-09-30 16:54:05,629 [camb] Computing (state 0)
 2019-09-30 16:54:05,629 [camb] Setting parameters: {'As': 3.323825200880004e-09, 'ns': 0.9112045733967743, 'cosmomc_theta': 0.01041265037924763, 'ombh2': 0.021621996540981202, 'omch2': 0.14176980838432607, 'mnu': 0.06, 'tau': 0.3095362460289987, 'halofit_version': 'mead', 'bbn_predictor': 'PArthENoPE_880.2_standard.dat', 'lens_potential_accuracy': 1, 'num_massive_neutrinos': 1, 'nnu': 3.046, 'theta_H0_range': [20, 100], 'H0': None, 'lmax': 2508}
 2019-09-30 16:54:05,645 [camb] Setting attributes of CAMBParams: {'WantTransfer': True, 'Want_CMB': True, 'NonLinear': 'NonLinear_lens'}
 2019-09-30 16:54:06,064 [planck_2018_lowl.TT] Got parameters {'A_planck': 1.0059452468116512}
 2019-09-30 16:54:06,064 [planck_2018_lowl.TT] Evaluated to logp=-47.1838 with derived {}
 2019-09-30 16:54:06,064 [planck_2018_lowl.EE] Got parameters {'A_planck': 1.0059452468116512}
 2019-09-30 16:54:06,071 [exception handler] ---------------------------------------

Traceback (most recent call last):
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/likelihood.py", line 129, in _logp_cached
    i_state = next(i for i in range(self.n_states)
StopIteration

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/lh561/.virtualenvs/py36env/bin/cobaya-run", line 11, in <module>
    load_entry_point('cobaya', 'console_scripts', 'cobaya-run')()
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/run.py", line 154, in run_script
    run(info)
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/run.py", line 87, in run
    sampler.run()
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/sampler.py", line 182, in __exit__
    self.close(exception_type, exception_value, traceback)
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/run.py", line 87, in run
    sampler.run()
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/samplers/polychord/polychord.py", line 233, in run
    self.pc_prior, self.dumper)
  File "/home/lh561/.virtualenvs/py36env/lib/python3.6/site-packages/pypolychord-1.16-py3.6-linux-x86_64.egg/pypolychord/__init__.py", line 231, in run_polychord
    settings.seed)
  File "/home/lh561/.virtualenvs/py36env/lib/python3.6/site-packages/pypolychord-1.16-py3.6-linux-x86_64.egg/pypolychord/__init__.py", line 192, in wrap_loglikelihood
    logL, phi[:] = loglikelihood(theta)
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/samplers/polychord/polychord.py", line 218, in logpost
    self.model.logposterior(params_values))
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/model.py", line 280, in logposterior
    make_finite=make_finite, cached=cached, _no_check=True)
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/model.py", line 186, in loglikes
    _derived=_derived, cached=cached)
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/likelihood.py", line 360, in logps
    _derived=this_derived_dict, cached=cached, **this_params_dict)]
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/likelihood.py", line 146, in _logp_cached
    self.states[i_state]["logp"] = self.logp(_derived=_derived, **params_values)
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/likelihoods/_base_classes/_planck_clik_prototype.py", line 139, in logp
    loglike = self.clik(self.vector)[0]
  File "lkl.pyx", line 89, in clik.lkl.clik.__call__
clik.lkl.CError: <unprintable CError object>
-------------------------------------------------------------

 2019-09-30 16:54:06,071 [exception handler] Some unexpected ERROR occurred. You can see the exception information above.
We recommend trying to reproduce this error with 'debug:True' in the input.
If you cannot solve it yourself and need to report it, include the debug output,
which you can send it to a file setting 'debug_file:[some_file_name]'.

MPI checking

On the Cambridge cluster MPI fails after a while with


  File "/home/aml1005/git/cobaya/cobaya/samplers/mcmc/mcmc.py", line 582, in check_all_ready
    np.array([1.]), self.all_ready)
  File "mpi4py/MPI/Comm.pyx", line 826, in mpi4py.MPI.Comm.Iallgather
NotImplementedError

We should try to fail at startup if this is going to happen, give a more helpful error, and maybe work around it, cf. william-dawson/NTPoly#62
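A fail-fast check at startup could look something like this sketch (my own illustration, not cobaya's actual code; `check_nonblocking_collectives` is a hypothetical helper):

```python
import numpy as np

def check_nonblocking_collectives(comm):
    """Return True if the MPI implementation supports the MPI-3
    non-blocking collectives (e.g. Iallgather) that the mcmc
    convergence check relies on."""
    if comm is None:
        return True  # serial run: nothing to check
    try:
        buf = np.empty(comm.Get_size(), dtype=float)
        comm.Iallgather(np.array([1.0]), buf).Wait()
        return True
    except NotImplementedError:
        return False
```

Called once at initialisation, this would turn the mid-run NotImplementedError into an immediate failure with an explainable message.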

Potential issue with matter power spectrum

Hi,
Thanks for making the code public!
I am trying to get the matter power spectrum to use in my likelihood, and as a first step (new user!) I am doing it in a similar fashion to one of the examples you have provided:

fiducial_params = {'ombh2': 0.022, 'omch2': 0.12, 'H0': 68, 'tau': 0.07,
'As': 2.2e-9, 'ns': 0.96,
'mnu': 0.06, 'nnu': 3.046, 'num_massive_neutrinos': 1}

modules_path = my_path

info_fiducial = {'params': fiducial_params,
'likelihood': {'one': None},
'theory': {'camb': None},
'modules': modules_path}

model_fiducial = get_model(info_fiducial)

model_fiducial.likelihood.theory.needs(Pk_interpolator = {'k_max' : 3, 'z' : 0.2})

model_fiducial.logposterior({})
model_fiducial.likelihood.theory.get_Pk_interpolator()

This results in the following error:

error: (mx>kx) failed for hidden mx: regrid_smth:mx=1

I would appreciate it if you could take a look at it. Thank you so much.
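For what it's worth, this looks like the standard scipy/FITPACK complaint when a 2-D spline is built with a single point in one dimension. A minimal sketch (my own reproduction, independent of cobaya) triggers the same message, which suggests the interpolator may need several z values:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

k = np.logspace(-3, 0, 50)

# A single z value gives one row in the P(k, z) table, which FITPACK
# rejects with the "(mx>kx) failed ... regrid_smth:mx=1" error:
try:
    RectBivariateSpline(np.array([0.2]), k, np.ones((1, 50)))
    single_z_ok = True
except Exception:
    single_z_ok = False

# Several z values satisfy the spline-order requirement (mx > kx = 3):
z = np.array([0.0, 0.2, 0.5, 1.0])
spline = RectBivariateSpline(z, k, np.ones((4, 50)))
```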

Installation of external modules

I would like to make an external likelihood, such as https://github.com/simonsobs/LAT_MFLike, able to download its data (from this directory https://github.com/simonsobs/LAT_MFLike_data). So far I have changed the MFLike likelihood declaration to make it inherit from _InstallableLikelihood (see my try here).

When I do cobaya-run with a slightly modified version of this file (no python_path for instance) everything runs fine. But when I do cobaya-install, it tells me

[install] *ERROR* Module 'mflike.MFLike' not recognized. [No module named 'cobaya.likelihoods.mflike']

[install] *ERROR* The installation (or installation test) of some module(s) has failed: 
 - likelihood:mflike.MFLike
Check output of the installer of each module above for precise error info.

If I copy-paste the MFLike directory into the likelihoods directory of cobaya, I can make it work by changing the name of the file (!!) from mflike.py to MFLike.py and by cleaning __init__.py to make it empty.

Of course, all this stuff is related to PR #53. Any advice on what's going on?

Error in BAO/RSD Likelihood with CLASS

Running the default SDSS DR12 BAO+RSD likelihood when using CLASS as the theory code produces the following error:

[9 : bao.sdss_dr12_consensus_final] Initialising.
[9 : classy] ERROR Requested product not known: {'fsigma8': {'z': array([0.38, 0.51, 0.61])}}

It looks like the interface to the CLASS computation of f*sigma8 is broken, which is probably an easy thing to fix.

Thanks for this great code!

Strange sigma8

Hi,

I am running the simple evaluate sampler with a reasonable cosmology and I am getting weird sigma8 values in the derived parameters. For example:

[evaluate] sigma8 = 0.177326
[evaluate] omegab = 0.048
[evaluate] s8h5 = 0.213476
[evaluate] s8omegamp5 = 0.0971257
[evaluate] s8omegamp25 = 0.131236
[evaluate] omeganuh2 = 0.000645144

associated with the cosmology

logA = 3.08649
ns = 0.97
H0 = 69
ombh2 = 0.0228528
omch2 = 0.119332
tau = 0.0543
mnu = 0.06

It also seems that the value of sigma8 changes if I change the requested likelihood (e.g. with the default DES likelihood sigma8 is 0.23, while with the DES CosmoLike likelihood it is 0.17).

Best

Dragging Problem

Hi - sorry for all the e-mails, but today I am testing a lot of things in the interface.

I am trying the dragging method, and I am facing big problems. Here I have H0 (slow) and DES_m1 and DES_m2 (fast), but the way Cobaya is evaluating them makes caching in CosmoLike (or any caching whatsoever) impossible.

I made a screenshot that shows the value of H0 - going back and forth between 67.05 and 68.68 instead of fixing at 67.05 and then doing the drag oversampling (and only after switching to 68.68).

Many parameters are only fast after caching (i.e. a parameter is fast only if the CAMB/CosmoLike products from the last time you changed the slow parameters are cached in memory), and if you are changing the slow parameters at every point, then there is no fast, right? There is something I am missing in this dragging scheme... (or do you cache the products of all previous points, and not just those from the last time the slow parameters changed?)

(Screenshot attached.)

non-matching array lengths for PolyChord runs with spatial curvature; bug?

I am running

  • theory: camb
  • sampler: polychord
  • likelihood: {planck_2015_lowTEB, planck_2015_plikHM_TT}

with standard LCDM parameters and spatial curvature omk.

Sooner or later it seems to hit a region in parameter space where it throws the error:

  File "/home/lh561/.virtualenvs/py27env/lib/python2.7/site-packages/cobaya/likelihoods/_planck_clik_prototype/_planck_clik_prototype.py", line 239, in logp
    for spectrum, lmax in zip(self.requested_cls, self.l_maxs_cls)])
ValueError: could not broadcast input array from shape (2506) into shape (2509)

Any idea how curvature could cause a problem here?


These are the last ~60 lines of output with the debug flag on and including the complete error message:

 2019-08-30 12:45:45,582 [49 : Model] Posterior to be computed for parameters {'tau': 0.014650542838218948, 'A_sz': 3.4063103384869207, 'gal545_A_100': 10.562863845478162, 'A_planck': 0.9904987239633785, 'gal545_A_217': -11.964135479617722, 'ksz_norm': 4.373252090272749, 'ps_A_100_100': 32.04746211895274, 'logA': 3.6519302135757394, 'theta': 1.043331915567421, 'ns': 0.9941553578328298, 'ombh2': 0.021780157080341558, 'omk': -0.04830949489493612, 'xi_sz_cib': 0.462659579757037, 'ps_A_143_143': 361.1897960308678, 'calib_100T': 0.9960497573199208, 'A_cib_217': 55.70756342065497, 'ps_A_143_217': 200.954218078612, 'gal545_A_143': 4.009054084544141, 'calib_217T': 0.9996308882442825, 'omch2': 0.12737997979465576, 'ps_A_217_217': 322.8857199879555, 'gal545_A_143_217': 8.344147302014179}
 2019-08-30 12:45:45,582 [49 : prior] Evaluating prior at array([ 3.65193021e+00,  9.94155358e-01, -4.83094949e-02,  1.04333192e+00,
        2.17801571e-02,  1.27379980e-01,  1.46505428e-02,  9.90498724e-01,
        5.57075634e+01,  4.62659580e-01,  3.40631034e+00,  3.20474621e+01,
        3.61189796e+02,  2.00954218e+02,  3.22885720e+02,  4.37325209e+00,
        1.05628638e+01,  4.00905408e+00,  8.34414730e+00, -1.19641355e+01,
        9.96049757e-01,  9.99630888e-01])
 2019-08-30 12:45:45,583 [49 : prior] Got logpriors = [-42.494028059866764, -2.023359396190585]
 2019-08-30 12:45:45,584 [49 : Likelihood] Got input parameters: OrderedDict([('As', 3.854900209090096e-09), ('ns', 0.9941553578328298), ('omk', -0.04830949489493612), ('cosmomc_theta', 0.01043331915567421), ('ombh2', 0.021780157080341558), ('omch2', 0.12737997979465576), ('mnu', 0.06), ('tau', 0.014650542838218948), ('A_planck', 0.9904987239633785), ('cib_index', -1.3), ('A_cib_217', 55.70756342065497), ('xi_sz_cib', 0.462659579757037), ('A_sz', 3.4063103384869207), ('ps_A_100_100', 32.04746211895274), ('ps_A_143_143', 361.1897960308678), ('ps_A_143_217', 200.954218078612), ('ps_A_217_217', 322.8857199879555), ('ksz_norm', 4.373252090272749), ('gal545_A_100', 10.562863845478162), ('gal545_A_143', 4.009054084544141), ('gal545_A_143_217', 8.344147302014179), ('gal545_A_217', -11.964135479617722), ('calib_100T', 0.9960497573199208), ('calib_217T', 0.9996308882442825)])
 2019-08-30 12:45:45,584 [49 : camb] Re-using computed results (state 2)
 2019-08-30 12:45:45,584 [49 : planck_2015_lowTEB] Got parameters {'A_planck': 0.9904987239633785}
 2019-08-30 12:45:45,584 [49 : planck_2015_lowTEB] Re-using computed results.
 2019-08-30 12:45:45,584 [49 : planck_2015_lowTEB] Evaluated to logp=-5325.72 with derived {}
 2019-08-30 12:45:45,585 [49 : planck_2015_plikHM_TT] Got parameters {'gal545_A_143': 4.009054084544141, 'cib_index': -1.3, 'A_sz': 3.4063103384869207, 'gal545_A_217': -11.964135479617722, 'xi_sz_cib': 0.462659579757037, 'ps_A_143_143': 361.1897960308678, 'calib_100T': 0.9960497573199208, 'ps_A_100_100': 32.04746211895274, 'calib_217T': 0.9996308882442825, 'ps_A_143_217': 200.954218078612, 'ps_A_217_217': 322.8857199879555, 'A_cib_217': 55.70756342065497, 'ksz_norm': 4.373252090272749, 'gal545_A_100': 10.562863845478162, 'A_planck': 0.9904987239633785, 'gal545_A_143_217': 8.344147302014179}
 2019-08-30 12:45:45,585 [39 : prior] Got logpriors = [-37.299391178090495, -3.4436561008599034]
 2019-08-30 12:45:45,590 [23 : exception handler] ---------------------------------------

Traceback (most recent call last):
  File "/home/lh561/.virtualenvs/py27env/bin/cobaya-run", line 10, in <module>
    sys.exit(run_script())
  File "/home/lh561/.virtualenvs/py27env/lib/python2.7/site-packages/cobaya/run.py", line 145, in run_script
    run(info)
  File "/home/lh561/.virtualenvs/py27env/lib/python2.7/site-packages/cobaya/run.py", line 81, in run
    sampler.run()
  File "/home/lh561/.virtualenvs/py27env/lib/python2.7/site-packages/cobaya/sampler.py", line 184, in __exit__
    self.close(exception_type, exception_value, traceback)
  File "/home/lh561/.virtualenvs/py27env/lib/python2.7/site-packages/cobaya/run.py", line 81, in run
    sampler.run()
  File "/home/lh561/.virtualenvs/py27env/lib/python2.7/site-packages/cobaya/samplers/polychord/polychord.py", line 229, in run
    self.pc_prior, self.dumper)
  File "/home/lh561/Documents/Projects/PolyChordPrj/PolyChordLite/build/lib.linux-x86_64-2.7/pypolychord/__init__.py", line 231, in run_polychord
    settings.seed)
  File "/home/lh561/Documents/Projects/PolyChordPrj/PolyChordLite/build/lib.linux-x86_64-2.7/pypolychord/__init__.py", line 192, in wrap_loglikelihood
    logL, phi[:] = loglikelihood(theta)
  File "/home/lh561/.virtualenvs/py27env/lib/python2.7/site-packages/cobaya/samplers/polychord/polychord.py", line 214, in logpost
    self.model.logposterior(params_values))
  File "/home/lh561/.virtualenvs/py27env/lib/python2.7/site-packages/cobaya/model.py", line 284, in logposterior
    make_finite=make_finite, cached=cached, _no_check=True)
  File "/home/lh561/.virtualenvs/py27env/lib/python2.7/site-packages/cobaya/model.py", line 189, in loglikes
    _derived=_derived, cached=cached)
  File "/home/lh561/.virtualenvs/py27env/lib/python2.7/site-packages/cobaya/likelihood.py", line 318, in logps
    _derived=this_derived_dict, cached=cached, **this_params_dict)]
  File "/home/lh561/.virtualenvs/py27env/lib/python2.7/site-packages/cobaya/likelihood.py", line 142, in _logp_cached
    self.states[i_state]["logp"] = self.logp(_derived=_derived, **params_values)
  File "/home/lh561/.virtualenvs/py27env/lib/python2.7/site-packages/cobaya/likelihoods/_planck_clik_prototype/_planck_clik_prototype.py", line 239, in logp
    for spectrum, lmax in zip(self.requested_cls, self.l_maxs_cls)])
ValueError: could not broadcast input array from shape (2506) into shape (2509)
-------------------------------------------------------------

 2019-08-30 12:45:45,590 [23 : exception handler] Some unexpected ERROR occurred. You can see the exception information above.
We recommend trying to reproduce this error with 'debug:True' in the input.
If you cannot solve it yourself and need to report it, include the debug output,
which you can send it to a file setting 'debug_file:[some_file_name]'.
 2019-08-30 12:45:45,586 [39 : Likelihood] Got input parameters: OrderedDict([('As', 3.856584862813382e-09), ('ns', 0.9933417499153214), ('omk', -0.05012615755099464), ('cosmomc_theta', 0.010435861588044482), ('ombh2', 0.02178509013427381), ('omch2', 0.1271081971254231), ('mnu', 0.06), ('tau', 0.022518337444613457), ('A_planck', 0.9908677190845763), ('cib_index', -1.3), ('A_cib_217', 48.11832887168862), ('xi_sz_cib', 0.039481926166643984), ('A_sz', 4.079796506660104), ('ps_A_100_100', 75.14642269557496), ('ps_A_143_143', 102.98215305433611), ('ps_A_143_217', 251.7647614394884), ('ps_A_217_217', 208.29500486848244), ('ksz_norm', 8.038872245772469), ('gal545_A_100', 15.358190284918994), ('gal545_A_143', 7.052641159625674), ('gal545_A_143_217', 29.601195575718286), ('gal545_A_217', 17.82128523307788), ('calib_100T', 1.000459109867035), ('calib_217T', 0.9899776453052694)])
 2019-08-30 12:45:45,587 [39 : camb] Re-using computed results (state 1)
 2019-08-30 12:45:45,587 [39 : planck_2015_lowTEB] Got parameters {'A_planck': 0.9908677190845763}
 2019-08-30 12:45:45,587 [39 : planck_2015_lowTEB] Re-using computed results.
 2019-08-30 12:45:45,587 [39 : planck_2015_lowTEB] Evaluated to logp=-5322.58 with derived {}
 2019-08-30 12:45:45,587 [39 : planck_2015_plikHM_TT] Got parameters {'gal545_A_143': 7.052641159625674, 'cib_index': -1.3, 'A_sz': 4.079796506660104, 'gal545_A_217': 17.82128523307788, 'xi_sz_cib': 0.039481926166643984, 'ps_A_143_143': 102.98215305433611, 'calib_100T': 1.000459109867035, 'ps_A_100_100': 75.14642269557496, 'calib_217T': 0.9899776453052694, 'ps_A_143_217': 251.7647614394884, 'ps_A_217_217': 208.29500486848244, 'A_cib_217': 48.11832887168862, 'ksz_norm': 8.038872245772469, 'gal545_A_100': 15.358190284918994, 'A_planck': 0.9908677190845763, 'gal545_A_143_217': 29.601195575718286}

Theta_STAR

Hi

I am trying to do something that used to be quite simple in CosmoMC

I am sampling on omega_b, omega_c and H_0. I have to sample on those parameters for my test.

But I want to add a Gaussian likelihood on theta_star (or theta_MC). So I tried to add:

def add_theory(self):
    self.theory.needs(**{
        "theta_star": None,
    })

but it fails. I tried looking for similar keywords but couldn't find any. What am I missing?

PS: This would help, among other things, to implement the Planck 2015 compressed likelihood. But for DES purposes we sample on H0.

Best
Vivian

cobaya-install cosmo fails on planck 2018

Running cobaya-install cosmo fails for me on building the new clik; here is where it died:

[121/141] Compiling build/src/python/clik/lkl_lensing.pyx.c
[122/141] Compiling build/src/python/clik/parametric.pyx.c
Waf: Leaving directory `/tigress/tmorton/cosmology/modules/code/planck/code/plc_3.0/plc-3.01/build'

/home/tmorton/.conda/envs/cobaya/lib/python3.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /tigress/tmorton/cosmology/modules/code/planck/code/plc_3.0/plc-3.01/src/plik/component_plugin/rel2015/rel2015.pyx
  tree = Parsing.p_module(s, pxd, full_module_name)

/home/tmorton/.conda/envs/cobaya/lib/python3.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /tigress/tmorton/cosmology/modules/code/planck/code/plc_3.0/plc-3.01/src/python/clik/lkl.pyx
  tree = Parsing.p_module(s, pxd, full_module_name)
warning: /tigress/tmorton/cosmology/modules/code/planck/code/plc_3.0/plc-3.01/src/python/clik/lkl.pyx:49:13: Non-trivial type declarators in shared declaration (e.g. mix of pointers and values). Each pointer declaration should be on its own line.
warning: /tigress/tmorton/cosmology/modules/code/planck/code/plc_3.0/plc-3.01/src/python/clik/lkl.pyx:49:19: Non-trivial type declarators in shared declaration (e.g. mix of pointers and values). Each pointer declaration should be on its own line.
warning: /tigress/tmorton/cosmology/modules/code/planck/code/plc_3.0/plc-3.01/src/python/clik/lkl.pyx:207:13: Non-trivial type declarators in shared declaration (e.g. mix of pointers and values). Each pointer declaration should be on its own line.
warning: /tigress/tmorton/cosmology/modules/code/planck/code/plc_3.0/plc-3.01/src/python/clik/lkl.pyx:207:19: Non-trivial type declarators in shared declaration (e.g. mix of pointers and values). Each pointer declaration should be on its own line.

/home/tmorton/.conda/envs/cobaya/lib/python3.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /tigress/tmorton/cosmology/modules/code/planck/code/plc_3.0/plc-3.01/src/python/clik/lkl_lensing.pyx
  tree = Parsing.p_module(s, pxd, full_module_name)
warning: /tigress/tmorton/cosmology/modules/code/planck/code/plc_3.0/plc-3.01/src/python/clik/lkl_lensing.pyx:51:13: Non-trivial type declarators in shared declaration (e.g. mix of pointers and values). Each pointer declaration should be on its own line.
warning: /tigress/tmorton/cosmology/modules/code/planck/code/plc_3.0/plc-3.01/src/python/clik/lkl_lensing.pyx:51:19: Non-trivial type declarators in shared declaration (e.g. mix of pointers and values). Each pointer declaration should be on its own line.
warning: /tigress/tmorton/cosmology/modules/code/planck/code/plc_3.0/plc-3.01/src/python/clik/lkl_lensing.pyx:151:13: Non-trivial type declarators in shared declaration (e.g. mix of pointers and values). Each pointer declaration should be on its own line.
warning: /tigress/tmorton/cosmology/modules/code/planck/code/plc_3.0/plc-3.01/src/python/clik/lkl_lensing.pyx:151:19: Non-trivial type declarators in shared declaration (e.g. mix of pointers and values). Each pointer declaration should be on its own line.

/home/tmorton/.conda/envs/cobaya/lib/python3.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /tigress/tmorton/cosmology/modules/code/planck/code/plc_3.0/plc-3.01/src/python/clik/clik.parametric.pxd
  tree = Parsing.p_module(s, pxd, full_module_name)

src/python/clik/lkl.pyx.c:4:20: fatal error: Python.h: No such file or directory
 #include "Python.h"
                    ^
compilation terminated.

src/python/clik/lkl_lensing.pyx.c:4:20: fatal error: Python.h: No such file or directory
 #include "Python.h"
                    ^
compilation terminated.

src/python/clik/parametric.pyx.c:4:20: fatal error: Python.h: No such file or directory
 #include "Python.h"
                    ^
compilation terminated.

Build failed
 -> task in 'lkl' failed with exit status 1 (run with -v to display more information)
 -> task in 'lkl_lensing' failed with exit status 1 (run with -v to display more information)
 -> task in 'parametric' failed with exit status 1 (run with -v to display more information)

I am on Cobaya 2.0.3, on Linux, within a Python 3.7.3 conda environment. I will continue to investigate, but since the cobaya-install command has always worked for me before with no headaches, I thought I'd bring this up here in case anyone else has seen this.

Comparison between int and None fails under python3 when resuming MCMC

When setting resume=True and calling via the API there is a crash, which I think comes from a change between Python 2 and Python 3. In Python 2 you can do, e.g., 1 > None and it will just return False, but in Python 3 it raises a TypeError. I think that's what's happening in the traceback below.

[SNIP some external bits]
    self.cobaya_run(info)
  File "/usr/local/lib/python3.6/dist-packages/cobaya/run.py", line 64, in run
    modules=info.get(_path_install)) as sampler:
  File "/usr/local/lib/python3.6/dist-packages/cobaya/sampler.py", line 188, in get_sampler
    info_sampler[name], posterior, output_file, resume=resume, modules=modules)
  File "/usr/local/lib/python3.6/dist-packages/cobaya/sampler.py", line 150, in __init__
    self.initialize()
  File "/usr/local/lib/python3.6/dist-packages/cobaya/samplers/mcmc/mcmc.py", line 59, in initialize
    if self.resuming and (max(self.mpi_size, 1) != max(get_mpi_size(), 1)):
TypeError: '>' not supported between instances of 'int' and 'NoneType'
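A minimal sketch of the kind of guard that would avoid this (my own illustration; `safe_max` is a hypothetical helper, not cobaya code):

```python
def safe_max(value, floor=1):
    # Python 3: max(1, None) raises TypeError, unlike Python 2,
    # so coerce a missing value to 0 before comparing.
    return max(value if value is not None else 0, floor)
```

Applied to the failing line, `max(safe_max(self.mpi_size), safe_max(get_mpi_size()))` would behave like the Python 2 code did when either size is None.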

Support for text-parameters (e.g. 'reio_parametrization')?

Hi,

I'm trying to run cobaya with different reionization histories, but this requires passing classy string-valued parameters as input, like params['many_tanh_xe'] = '-2,-1,0.2' (see the simple example below).

However, I see no way to pass this to classy in cobaya. Writing a custom likelihood function is clearly possible, but I cannot create a model with this string as a parameter, because cobaya always interprets it as an external function (the error depends on the quoting):

[tools] *ERROR* Failed to load external function: 'NameError("name 'reio_camb' is not defined")'
[tools] *ERROR* The external function provided is not an actual function. Got: ''reio_camb''

Problematic cobaya example:

from cobaya.model import get_model
cosmo = {'theory': {'classy': {}}, 'likelihood': {'planck_2018_lowl.TT': None}}
cosmo['modules'] = "/home/stefan/var/cobaya_modules/"
cosmo['params'] = {'omega_cdm': {'prior': {'min': 0.001, 'max': 0.99}},
                   'reio_parametrization': 'reio_camb'}
model = get_model(cosmo)

Normal cobaya example:

from cobaya.model import get_model
cosmo = {'theory': {'classy': {}}, 'likelihood': {'planck_2018_lowl.TT': None}}
cosmo['modules'] = "/home/stefan/var/cobaya_modules/"
cosmo['params'] = {'omega_cdm': {'prior': {'min': 0.001, 'max': 0.99}}}         
model = get_model(cosmo)
params = {'omega_cdm':0.12, 'A_planck':1}
loglike,_ = model.loglike(params)
print(loglike)

Working classy example:

from classy import Class
cosmo = Class()
params={'output': 'tCl lCl','l_max_scalars': 2000,'lensing': 'yes'}
params['reio_parametrization']='reio_many_tanh'
params['many_tanh_num']='3'
params['many_tanh_z']='3.5,6, 27.5'
params['many_tanh_xe']='-2,-1,0.2'
params['many_tanh_width']='0.5,0.5,3'
cosmo.set(params)
cosmo.compute()
cosmo.lensed_cl(2000)

Full traceback of problematic example:

[tools] *ERROR* Failed to load external function: 'NameError("name 'reio_camb' is not defined")'

-------------------------------------------------------------
NameError                   Traceback (most recent call last)
~/.local/lib/python3.7/site-packages/cobaya/tools.py in get_external_function(string_or_function, name, or_class)
    175                 sys.path.append(os.path.realpath(os.curdir))
--> 176             function = eval(string_or_function)
    177         except Exception as e:

~/.local/lib/python3.7/site-packages/cobaya/tools.py in <module>

NameError: name 'reio_camb' is not defined

During handling of the above exception, another exception occurred:

LoggedError                 Traceback (most recent call last)
<ipython-input-1-8350a15f398d> in <module>
      7 cosmo['params'] = {'omega_cdm': {'prior': {'min': 0.001, 'max': 0.99}},
      8                    'reio_parametrization': 'reio_camb'}
----> 9 model = get_model(cosmo)

~/.local/lib/python3.7/site-packages/cobaya/model.py in get_model(info)
     60     return Model(updated_info[_params], updated_info[_likelihood],
     61                  updated_info.get(_prior), updated_info.get(_theory),
---> 62                  modules=info.get(_path_install), timing=updated_info.get(_timing))
     63 
     64 

~/.local/lib/python3.7/site-packages/cobaya/model.py in __init__(self, info_params, info_likelihood, info_prior, info_theory, modules, timing, allow_renames)
     86                 self._updated_info[k] = deepcopy_where_possible(v)
     87         self.parameterization = Parameterization(
---> 88             self._updated_info[_params], allow_renames=allow_renames)
     89         self.prior = Prior(self.parameterization, self._updated_info.get(_prior, None))
     90         self.likelihood = Likelihood(

~/.local/lib/python3.7/site-packages/cobaya/parameterization.py in __init__(self, info_params, allow_renames, ignore_unused_sampled)
    132                 else:
    133                     self._input[p] = None
--> 134                     self._input_funcs[p] = get_external_function(info[_p_value])
    135                     self._input_args[p] = getargspec(self._input_funcs[p]).args
    136             if is_sampled_param(info):

~/.local/lib/python3.7/site-packages/cobaya/tools.py in get_external_function(string_or_function, name, or_class)
    178             raise LoggedError(
    179                 log, "Failed to load external function%s: '%r'",
--> 180                 " '%s'" % name if name else "", e)
    181     else:
    182         function = string_or_function

LoggedError: Failed to load external function: 'NameError("name 'reio_camb' is not defined")'

Is there a way to pass these parameters that I don't know of?

Cheers,
Stefan

Edit: Passing it with value 'reio_parametrization': {'value': 'reio_camb'} also gives the function interpretation [tools] *ERROR* Failed to load external function: 'NameError("name 'reio_camb' is not defined")'
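In case it helps: a possible workaround (untested here, and assuming the theory block's extra_args option forwards entries verbatim to classy.set(), as the cobaya docs suggest) is to keep text-valued settings out of params entirely:

```python
# Hypothetical workaround: string-valued CLASS settings go through the
# theory block's extra_args rather than through the params parser,
# so they are never interpreted as external functions.
info = {
    'theory': {'classy': {'extra_args': {
        'reio_parametrization': 'reio_many_tanh',
        'many_tanh_num': '3',
        'many_tanh_z': '3.5,6,27.5',
        'many_tanh_xe': '-2,-1,0.2',
        'many_tanh_width': '0.5,0.5,3',
    }}},
    'likelihood': {'planck_2018_lowl.TT': None},
    'params': {'omega_cdm': {'prior': {'min': 0.001, 'max': 0.99}}},
}
```

This only works for fixed settings, of course, not for parameters you want to sample.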

CAMB Fail

Hi

Is there an option for just ignoring the point when CAMB fails (for example, when Halofit fails)? That is a big issue for me now...

Question about Polychord

Hi

I don't see the PolyChord option compression_factor, but I do see update_files, for which I don't see a correspondence in PyPolychord's settings.py.

Best
Vivian

Don't see "Keep common parameter names" checkbox

I'm running cobaya-cosmo-generator and I don't see the option for "Keep common parameter names" as is described in the documentation. Has this feature been discontinued? I've installed tag v1.2.1 from the repo and am running on Mac OS.

(Screenshot attached.)

OpenMP

This is more of a dream than an issue.

Not all codes are thread-safe (CosmoLike, for example, isn't), and these codes are getting so complex that making them thread-safe is a difficult task. On the other hand, threading can be quite useful. Would it be possible to add a feature so that Cobaya uses OpenMP to calculate more than one likelihood at the same time, instead of threading each likelihood separately?

PolyChord sampler with CLASS theory code

Hi @JesusTorrado,

I wanted to check out the PolyChord-CLASS combination on Planck data. However, it fails with the error message below (at the end of the pasted output).

This might be a memory problem due to the very high number of repeats for fast parameters in:

number of repeats:           30         100       31206

The third number shouldn't be that high.

  • Where/How are those numbers being calculated? (Where can I find the defaults?)
  • Does CLASS support all three of those numbers or does it only support fast and slow parameters?
  • If CLASS only allows two different speeds, how best to tell PolyChord to only work with two speeds when used with CLASS?
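Regarding the first question, I don't know cobaya's exact formula, but as a rough illustration (my own sketch with made-up speeds, not cobaya's actual code), the per-block repeats in grade_frac typically scale with the measured relative speed of each parameter block:

```python
def repeats_per_block(relative_speeds, base_repeats):
    # Scale repeats by each block's speed relative to the slowest one;
    # an almost-free block (e.g. nuisance parameters evaluated against
    # a cached theory state) ends up with an enormous repeat count.
    slowest = min(relative_speeds)
    return [round(base_repeats * s / slowest) for s in relative_speeds]
```

With something like relative speeds [1, 3.3, 1040] and a base of 30 this gives numbers of the order shown below, so a mismeasured (or effectively infinite) speed for the fastest block would explain the 31206.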

Output with error message

(Output abbreviated: clik output and repetitions due to parallelisation replaced by [...])

Executing command:
==================
mpirun -ppn 32 -np 32 cobaya-run input/pc32_p15_TTlowTEB_lcdm.yaml

***********************************************************************
*WARNING*: Python 2 support will eventually be dropped
(it is already unsupported by many scientific Python modules).

Please use Python 3!

In some systems, the Python 3 command may be python3 instead of python.
If that is the case, use pip3 instead of pip to install Cobaya.
***********************************************************************

[...]

[0 : output] Output to be read-from/written-into folder 'chains/planck_2015/pc32_p15_TTlowTEB_lcdm', with prefix 'pc32_p15_TTlowTEB_lcdm'
[7 : prior] *WARNING* External prior 'SZ' loaded. Mind that it might not be normalized!

[...]

[0 : classy] Importing *local* classy from /rds/user/lh561/hpc-work/data/CobayaData/modules/code/classy

[...]

[0 : polychord] Initializing
[0 : polychord] Importing *local* PolyChord from /rds/user/lh561/hpc-work/data/CobayaData/modules/code/PolyChordLite
[0 : polychord] Storing raw PolyChord output in 'chains/planck_2015/pc32_p15_TTlowTEB_lcdm/raw_polychord_output'.
[0 : prior] *WARNING* There are unbounded parameters. Prior bounds are given at 0.9999995 confidence level. Beware of likelihood modes at the edge of the prior

[...]

[0 : polychord] Calling PolyChord with arguments:
[0 : polychord]   base_dir: chains/planck_2015/pc32_p15_TTlowTEB_lcdm/raw_polychord_output
[0 : polychord]   boost_posterior: 0
[0 : polychord]   cluster_dir: chains/planck_2015/pc32_p15_TTlowTEB_lcdm/raw_polychord_output/clusters
[0 : polychord]   cluster_posteriors: True
[0 : polychord]   compression_factor: 0.367879441171
[0 : polychord]   do_clustering: True
[0 : polychord]   equals: True
[0 : polychord]   feedback: 1
[0 : polychord]   file_root: pc32_p15_TTlowTEB_lcdm
[0 : polychord]   grade_dims: [6, 1, 14]
[0 : polychord]   grade_frac: [30, 100, 31206]
[0 : polychord]   logzero: -1.7976931348623157e+308
[0 : polychord]   max_ndead: -1
[0 : polychord]   maximise: False
[0 : polychord]   nfail: -1
[0 : polychord]   nlive: 32
[0 : polychord]   nlives: {}
[0 : polychord]   nprior: -1
[0 : polychord]   num_repeats: 105
[0 : polychord]   posteriors: True
[0 : polychord]   precision_criterion: 0.001
[0 : polychord]   read_resume: False
[0 : polychord]   seed: -1
[0 : polychord]   write_dead: True
[0 : polychord]   write_live: True
[0 : polychord]   write_paramnames: False
[0 : polychord]   write_prior: True
[0 : polychord]   write_resume: True
[0 : polychord]   write_stats: True
[0 : polychord] Sampling!
PolyChord: MPI is already initilised, not initialising, and will not finalize

[...]

PolyChord: Next Generation Nested Sampling
copyright: Will Handley, Mike Hobson & Anthony Lasenby
  version: 1.16
  release: 1st March 2019
    email: [email protected]

Run Settings
nlive    :      32
nDims    :      21
nDerived :      16
Doing Clustering
Generating equally weighted posteriors
Generating weighted posteriors
Clustering on posteriors
Writing a resume file tochains/planck_2015/pc32_p15_TTlowTEB_lcdm/raw_polychord_output/pc32_p15_TTlowTEB_lcdm.resume

generating live points


all live points generated

number of repeats:           30         100       31206
started sampling


===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 123815 RUNNING AT cpu-e-87
=   EXIT CODE: 9
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
   Intel(R) MPI Library troubleshooting guide:
      https://software.intel.com/node/561764
===================================================================================

Prior for derived parameters [of theory code] not working

Hi Again

I was trying to reproduce Fig 7 of

https://arxiv.org/pdf/1806.04649.pdf

So I created a fake likelihood that just returns chi^2 = 0.0, and I made the following yaml:

test6.txt

which, following the instructions (as for H0), should have limited the range of omega_m:

omegam:
  max: 1
  min: 0
  latex: '\Omega_\mathrm{m}'

but when I look at the chain files, I see that there are a bunch of accepted points with omegam outside this range.

Here is one example:

TEST_PRIORS_1.txt

I can do a similar thing in the likelihood (return -\infty if omegam is outside the range), but it seems awkward that min/max does not work (or only works for H0).
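The in-likelihood fallback mentioned here can be sketched like this (my own illustration; `logp_with_bound` is a hypothetical helper, not cobaya code):

```python
import numpy as np

def logp_with_bound(logp_raw, omegam, lo=0.0, hi=1.0):
    # Reject the point from inside the likelihood when the derived
    # parameter leaves the allowed range.
    return logp_raw if lo <= omegam <= hi else -np.inf
```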

Best

MPI is not being initialized for SLURM

Thanks for writing and sharing Cobaya. This is great software! I have found a minor bug, which I am reporting below.


  • cobaya version: 0.126
  • Python version: 2.7.13
  • Operating System: SuSE Linux Enterprise Server (SLES) Linux distribution (NERSC Cori Cluster)

Description

On the NERSC Cori cluster, the mpirun command is disabled; instead, its SLURM counterpart srun is used. Unfortunately, SLURM sets its own environment variables, so when I try to do an MPI run with srun, Cobaya isn't MPI-aware. I think the issue is with the following code block in mpi.py, which looks for specific MPI variables:

        if any([os.getenv(v) for v in
                ["OMPI_COMM_WORLD_SIZE",  # OpenMPI
                 "PMI_SIZE"]]):           # Inte MPI
            from mpi4py import MPI
            _mpi = MPI
        else:
            _mpi = None

What I Did

For now, I have implemented the following workaround.

    if _mpi == -1:
        try:
            from mpi4py import MPI
            _mpi = MPI
        except:
            _mpi = None
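For what it's worth, a variant that keeps the environment-variable check but also recognizes SLURM might look like the sketch below. This is not cobaya code, and the SLURM variable name is an assumption based on common srun setups:

```python
import os

def mpi_run_detected(environ=os.environ):
    """Guess whether we were launched under MPI from environment variables.

    Checks OpenMPI, Intel MPI, and SLURM (srun) variables; the SLURM name
    below is an assumption and may need adjusting per cluster.
    """
    mpi_env_vars = (
        "OMPI_COMM_WORLD_SIZE",  # OpenMPI
        "PMI_SIZE",              # Intel MPI
        "SLURM_NTASKS",          # SLURM / srun
    )
    return any(environ.get(v) for v in mpi_env_vars)
```

Only importing mpi4py when this returns True would preserve the original intent of not touching MPI on login nodes.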

Conversion between logAs and As

Hello!

Thanks for all support so far. I have a new issue. When I type

params:
  logA:
    prior: {min: 2, max: 4}
    ref: {dist: norm, loc: 3.1, scale: 0.001}
    proposal: 0.001
    latex: \log(10^{10} A_\mathrm{s})
    drop: true
  As:
    value: 'lambda logA: 1e-10*np.exp(logA)'
    latex: 'A_\mathrm{s}'

everything works perfectly, but when I try to fix logA, Cobaya does not make the conversion

params:
  logA:
    value: 3.11820385633
  As:
    value: 'lambda logA: 1e-10*np.exp(logA)'
    latex: 'A_\mathrm{s}'

Error message:

ERROR Some of the parameters passed to CAMB were not recognized: Unrecognized parameters: set(['logA'])

Is it possible to fix this?
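Until this is fixed, one workaround is to evaluate the lambda by hand and pass the resulting As value directly. For reference, the conversion the yaml lambda performs is just:

```python
import numpy as np

def As_from_logA(logA):
    # Same expression as the yaml lambda: 1e-10 * exp(logA)
    return 1e-10 * np.exp(logA)

print(As_from_logA(3.11820385633))  # ~2.26e-9
```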

Resuming a non-reproducible chain

It looks like it isn't possible to resume a chain that has external priors and likelihoods.

Is there a way around this? I have a run which I know is reproducible (because I can check externally) but because of the need to include dynamically generated information in a closure I need to use external functions.

If there isn't, an option to skip the check when resuming would be handy.

Small bug in Planck Likelihood

This is a bug that I solved myself some time ago - and I forgot to tell you. See the picture below - without this fix I was unable to use an external path for the Planck likelihood.

[screenshot attached: 2018-12-07, 4:52 PM]

$COBAYA_MODULES env variable?

Just another suggestion as you move toward 2.0-- have you considered using a $COBAYA_MODULES environment variable as the default modules directory? That way, if modules isn't passed in an info string, then you can just default to os.getenv('COBAYA_MODULES', None) or something like that as the modules path. This would also make it easier to share .yaml files between machines (e.g., would avoid having to hard-code paths).

If you're interested, I'm happy to give a shot at a PR along these lines.
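The fallback logic would be tiny -- something like this sketch (COBAYA_MODULES is the proposed, not yet existing, variable):

```python
import os

def resolve_modules_path(info):
    """Modules path from the input info, falling back to the proposed
    COBAYA_MODULES environment variable, else None."""
    return info.get("modules") or os.getenv("COBAYA_MODULES")
```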

Cobaya - Polychord

Hi Jesus

My PolyChord run keeps crashing with this error:

Traceback (most recent call last):
File "/scratch/midway2/viniciusvmb/krause/cosmo_cobaya6/bin/cobaya-run", line 11, in
load_entry_point('cobaya', 'console_scripts', 'cobaya-run')()
File "/scratch/midway2/viniciusvmb/krause/cosmo_cobaya6/cosmolike_core/cobaya/cobaya/cobaya/run.py", line 126, in run_script
run(info)
File "/scratch/midway2/viniciusvmb/krause/cosmo_cobaya6/cosmolike_core/cobaya/cobaya/cobaya/run.py", line 67, in run
sampler.run()
File "/scratch/midway2/viniciusvmb/krause/cosmo_cobaya6/cosmolike_core/cobaya/cobaya/cobaya/sampler.py", line 184, in exit
self.close(exception_type, exception_value, traceback)
File "/scratch/midway2/viniciusvmb/krause/cosmo_cobaya6/cosmolike_core/cobaya/cobaya/cobaya/run.py", line 67, in run
sampler.run()
File "/scratch/midway2/viniciusvmb/krause/cosmo_cobaya6/cosmolike_core/cobaya/cobaya/cobaya/samplers/polychord/polychord.py", line 226, in run
self.pc_prior, self.dumper)
File "/scratch/midway2/viniciusvmb/krause/cosmo_cobaya6/lib/python2.7/site-packages/pypolychord-1.16-py2.7-linux-x86_64.egg/pypolychord/init.py", line 227, in run_polychord
return PolyChordOutput(settings.base_dir, settings.file_root)
File "/scratch/midway2/viniciusvmb/krause/cosmo_cobaya6/lib/python2.7/site-packages/pypolychord-1.16-py2.7-linux-x86_64.egg/pypolychord/output.py", line 100, in init
self._create_pandas_table()
File "/scratch/midway2/viniciusvmb/krause/cosmo_cobaya6/lib/python2.7/site-packages/pypolychord-1.16-py2.7-linux-x86_64.egg/pypolychord/output.py", line 190, in _create_pandas_table
self._samples_table['loglike'] *= -0.5
File "/home/viniciusvmb/.local/lib/python2.7/site-packages/pandas/core/ops.py", line 897, in f
result = method(self, other)
File "/home/viniciusvmb/.local/lib/python2.7/site-packages/pandas/core/ops.py", line 1069, in wrapper
result = safe_na_op(lvalues, rvalues)
File "/home/viniciusvmb/.local/lib/python2.7/site-packages/pandas/core/ops.py", line 1037, in safe_na_op
lambda x: op(x, rvalues))
File "pandas/_libs/algos_common_helper.pxi", line 1212, in pandas._libs.algos.arrmap_object
File "/home/viniciusvmb/.local/lib/python2.7/site-packages/pandas/core/ops.py", line 1037, in
lambda x: op(x, rvalues))
TypeError: can't multiply sequence by non-int of type 'float'

any clues? Here is my yaml file.

POLY_DES_PLANCK_MCMC_0.txt
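The final TypeError is what pandas raises when the loglike column holds strings rather than floats, which suggests the PolyChord stats file was parsed as text rather than numbers. This is only my guess at the cause, but the failure mode is easy to reproduce:

```python
import pandas as pd

# A column read in as strings (object dtype) instead of floats:
df = pd.DataFrame({"loglike": ["-12.5", "-13.1"]})
try:
    df["loglike"] *= -0.5  # same operation as in pypolychord's output.py
except TypeError as exc:
    print("TypeError:", exc)

# Coercing to float first makes the operation succeed:
df["loglike"] = df["loglike"].astype(float)
df["loglike"] *= -0.5
print(df["loglike"].tolist())  # [6.25, 6.55]
```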

Cosmology modules: Memory leak if module path is changed manually after installation

I install cobaya and use cobaya-install cosmo -m /home/stefan/modules to install the modules. I can run cobaya using the code generated by cobaya-cosmo-generator (see code below) without problems.
Now I move the modules to a different folder (a faster drive, in my case) and change the 'modules' value in the info dict. When I run cobaya now, the output stops at

Sampling! (NB: nothing will be printed until 120 burn-in samples have been obtained)

and the python process uses increasingly more RAM. strace python cosmo.py shows

openat(AT_FDCWD, "/home/stefan/modules/code/classy/bbn/sBBN_2017.dat", O_RDONLY) = -1 ENOENT (No such file or directory)

many times. Cobaya somehow still uses the old location for the modules and allocates lots of memory (my system hung up at 28G during lunch).

How to reproduce:
Rename your cobaya modules folder, change it in the yaml file / info dict and run some cosmology.

Expected behaviour:
It would be nice if cobaya could show a warning/error if this happens and tell the user where to change the module path (or use the correct path automatically).

I used python3 and several different likelihoods, and the problem occurred with every one of them.
I also tested it on Scientific Linux 7.5 with python2 and gcc version < 5.

Edit: Workaround:
If one deletes the module folder and runs cobaya-install again (with the new path), everything works again.
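Until cobaya checks this itself, a small guard in the driver script catches the stale path early. A sketch, not part of cobaya:

```python
import os

def check_modules_path(info):
    """Fail early if the modules path in the input info does not exist."""
    path = info.get("modules")
    if path and not os.path.isdir(path):
        raise FileNotFoundError(
            "Modules path %r does not exist -- did you move the folder? "
            "Re-run cobaya-install with the new path." % path)
```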

Example code, mostly generated by cobaya-cosmo-generator:

import cobaya.run as cbyr
from collections import OrderedDict

info = {
'modules': '/home/stefan/modules_new/',
'output': 'chains/cosmo',
'force': True,
'likelihood': OrderedDict([
                            ('sixdf_2011_bao', None),
                          ]),
 'params': OrderedDict([('logA',
                         OrderedDict([('prior',
                                       OrderedDict([('min', 2), ('max', 4)])),
                                      ('ref',
                                       OrderedDict([('dist', 'norm'),
                                                    ('loc', 3.1),
                                                    ('scale', 0.001)])),
                                      ('proposal', 0.001),
                                      ('latex', '\\log(10^{10} A_\\mathrm{s})'),
                                      ('drop', True)])),
                        ('A_s',
                         OrderedDict([('value',
                                       'lambda logA: 1e-10*np.exp(logA)'),
                                      ('latex', 'A_\\mathrm{s}')])),
                        ('n_s',
                         OrderedDict([('prior',
                                       OrderedDict([('min', 0.8),
                                                    ('max', 1.2)])),
                                      ('ref',
                                       OrderedDict([('dist', 'norm'),
                                                    ('loc', 0.96),
                                                    ('scale', 0.004)])),
                                      ('proposal', 0.002),
                                      ('latex', 'n_\\mathrm{s}')])),
                        ('H0',
                         OrderedDict([('prior',
                                       OrderedDict([('min', 40),
                                                    ('max', 100)])),
                                      ('ref',
                                       OrderedDict([('dist', 'norm'),
                                                    ('loc', 70),
                                                    ('scale', 2)])),
                                      ('proposal', 2),
                                      ('latex', 'H_0')])),
                        ('omega_b',
                         OrderedDict([('prior',
                                       OrderedDict([('min', 0.005),
                                                    ('max', 0.1)])),
                                      ('ref',
                                       OrderedDict([('dist', 'norm'),
                                                    ('loc', 0.0221),
                                                    ('scale', 0.0001)])),
                                      ('proposal', 0.0001),
                                      ('latex', '\\Omega_\\mathrm{b} h^2')])),
                        ('omega_cdm',
                         OrderedDict([('prior',
                                       OrderedDict([('min', 0.001),
                                                    ('max', 0.99)])),
                                      ('ref',
                                       OrderedDict([('dist', 'norm'),
                                                    ('loc', 0.12),
                                                    ('scale', 0.001)])),
                                      ('proposal', 0.0005),
                                      ('latex', '\\Omega_\\mathrm{c} h^2')])),
                        ('Omega_m',
                         OrderedDict([('latex', '\\Omega_\\mathrm{m}')])),
                        ('omegamh2',
                         OrderedDict([('derived',
                                       'lambda Omega_m, H0: '
                                       'Omega_m*(H0/100)**2'),
                                      ('latex', '\\Omega_\\mathrm{m} h^2')])),
                        ('m_ncdm',
                         OrderedDict([('value', 0.06), ('renames', 'mnu')])),
                        ('Omega_Lambda',
                         OrderedDict([('latex', '\\Omega_\\Lambda')])),
                        ('YHe', OrderedDict([('latex', 'Y_\\mathrm{P}')])),
                        ('tau_reio',
                         OrderedDict([('prior',
                                       OrderedDict([('min', 0.01),
                                                    ('max', 0.8)])),
                                      ('ref',
                                       OrderedDict([('dist', 'norm'),
                                                    ('loc', 0.09),
                                                    ('scale', 0.01)])),
                                      ('proposal', 0.005),
                                      ('latex', '\\tau_\\mathrm{reio}')])),
                        ('z_reio',
                         OrderedDict([('latex', 'z_\\mathrm{re}')]))]),
 'sampler': {'mcmc': {'covmat': 'auto'}},
 'theory': OrderedDict([('classy',
                         OrderedDict([('extra_args',
                                       OrderedDict([('N_ncdm', 1),
                                                    ('N_ur', 2.0328)]))]))])}

updated_info, products = cbyr.run(info)

More complete documentation on subclassing Likelihood

As a new user, I have a suggestion for a bit more clarity in the documentation on how to define a new custom cosmological likelihood. I am able to follow and model the "Creating your own cosmological likelihood" example, but it's still not clear to me how to do the same by subclassing Likelihood. I think it would be great if that page also explained what the equivalent implementation of my_like as a class would look like, as a minimal example.
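For concreteness, a minimal class-based version of my_like might look like the sketch below. The initialize/logp(**params_values) interface follows cobaya's documented Likelihood class; the stand-in base class is only there so the snippet runs without cobaya installed.

```python
# In real use: from cobaya.likelihood import Likelihood
class Likelihood:  # stand-in base so this sketch runs standalone
    pass

class MyLike(Likelihood):
    def initialize(self):
        # One-time setup: load data files, covariances, etc.
        self.offset = 0.0

    def logp(self, **params_values):
        # Called with the current values of the sampled parameters
        x, y = params_values["x"], params_values["y"]
        return -0.5 * (x ** 2 + y ** 2) + self.offset

like = MyLike()
like.initialize()
print(like.logp(x=1.0, y=0.0))  # -0.5
```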

resuming with PolyChord with Intel `ifort` not working

Trying to resume a PolyChord run currently fails with (full output further down):

forrtl: severe (64): input conversion error, unit 10, file /home/lh561/Documents/Projects/CobayaPrj/chains/sn_pantheon/class_pc16_sn/raw_polychord_output/class_pc16_sn.resume

I first encountered this with the Planck likelihood (for both CAMB and CLASS), but it can be reproduced very quickly with a SN likelihood and a small nlive (i.e. run once and interrupt, then try running again with -r).

The full output from the attempt to resume with cobaya-run -r -d:

(py36env) lh561@login-e-12:~/Documents/Projects/CobayaPrj$cobaya-run -r -d input/class_pc16_sn.yaml
 2019-09-09 15:40:01,306 [output] Output to be read-from/written-into folder 'chains/sn_pantheon/class_pc16_sn', with prefix 'class_pc16_sn'
 2019-09-09 15:40:01,306 [output] Found existing products with the requested ouput prefix: 'chains/sn_pantheon/class_pc16_sn/class_pc16_sn'
 2019-09-09 15:40:01,307 [output] Let's try to resume/load.
 2019-09-09 15:40:01,394 [run] Input info updated with defaults (dumped to YAML):
theory:
  classy:
    extra_args:
      N_ur: 2.0328
      N_ncdm: 1
    path: null
    renames:
      As: A_s
      ns: n_s
      nrun: alpha_s
      nrunrun: beta_s
      nt: n_t
      ntrun: alpha_t
      rdrag: rs_drag
      omegak: Omega_k
      omegal: Omega_Lambda
      w: w0_fld
      wa: wa_fld
      omegabh2: omega_b
      omegab: Omega_b
      omegach2: omega_cdm
      omegac: Omega_cdm
      omegam: Omega_m
      omegan: Omega_nu
      tau: tau_reio
      zrei: z_reio
      deltazrei: reionization_width
      helium_redshift: helium_fullreio_redshift
      helium_delta_redshift: helium_fullreio_width
      yhe: YHe
      yheused: YHe
    speed: 0.2
    use_planck_names: false
likelihood:
  sn.pantheon:
    dataset_file: Pantheon/full_long.dataset
    dataset_params: null
    params: []
    path: null
    renames:
    - Pantheon
    - Pantheon18
    speed: 100
sampler:
  polychord:
    base_dir: raw_polychord_output
    blocking: null
    boost_posterior: 0
    callback_function: null
    cluster_posteriors: true
    compression_factor: 0.36787944117144233
    confidence_for_unbounded: 0.9999995
    do_clustering: true
    equals: true
    feedback: null
    file_root: null
    logzero: null
    max_ndead: .inf
    nlive: 16
    nprior: null
    num_repeats: 5d
    path: null
    posteriors: true
    precision_criterion: 0.001
    read_resume: true
    seed: null
    write_dead: true
    write_live: true
    write_resume: true
    write_stats: true
params:
  logA:
    prior:
      min: 2.5
      max: 3.7
    ref:
      dist: norm
      loc: 3.1
      scale: 0.001
    proposal: 0.001
    latex: \log(10^{10} A_\mathrm{s})
    drop: true
  A_s:
    value: 'lambda logA: 1e-10*np.exp(logA)'
    latex: A_\mathrm{s}
    derived: true
    renames:
    - As
  n_s:
    prior:
      min: 0.885
      max: 1.04
    ref:
      dist: norm
      loc: 0.96
      scale: 0.004
    proposal: 0.002
    latex: n_\mathrm{s}
    renames:
    - ns
  H0:
    prior:
      min: 40
      max: 100
    ref:
      dist: norm
      loc: 70
      scale: 2
    proposal: 2
    latex: H_0
  omega_b:
    prior:
      min: 0.019
      max: 0.025
    ref:
      dist: norm
      loc: 0.0221
      scale: 0.0001
    proposal: 0.0001
    latex: \Omega_\mathrm{b} h^2
    renames:
    - omegabh2
  omega_cdm:
    prior:
      min: 0.095
      max: 0.145
    ref:
      dist: norm
      loc: 0.12
      scale: 0.001
    proposal: 0.0005
    latex: \Omega_\mathrm{c} h^2
    renames:
    - omegach2
  Omega_m:
    latex: \Omega_\mathrm{m}
    derived: true
    renames:
    - omegam
  omegamh2:
    derived: 'lambda Omega_m, H0: Omega_m*(H0/100)**2'
    latex: \Omega_\mathrm{m} h^2
  m_ncdm:
    renames: mnu
    value: 0.06
  Omega_Lambda:
    latex: \Omega_\Lambda
    derived: true
    renames:
    - omegal
  YHe:
    latex: Y_\mathrm{P}
    derived: true
    renames:
    - yheused
    - yhe
  tau_reio:
    prior:
      min: 0.01
      max: 0.4
    ref:
      dist: norm
      loc: 0.09
      scale: 0.01
    proposal: 0.005
    latex: \tau_\mathrm{reio}
    renames:
    - tau
  z_reio:
    latex: z_\mathrm{re}
    derived: true
    renames:
    - zrei
output: class_pc16_sn
resume: true
timing: true
modules: /rds/user/lh561/hpc-work/data/CobayaData/modules
debug: true
force: false

 2019-09-09 15:40:01,581 [likelihood] Parameters were assigned as follows:
 2019-09-09 15:40:01,581 [likelihood] - 'sn.pantheon':
 2019-09-09 15:40:01,581 [likelihood]      Input:  []
 2019-09-09 15:40:01,581 [likelihood]      Output: []
 2019-09-09 15:40:01,581 [likelihood] - 'theory':
 2019-09-09 15:40:01,581 [likelihood]      Input:  ['A_s', 'n_s', 'H0', 'omega_b', 'omega_cdm', 'm_ncdm', 'tau_reio']
 2019-09-09 15:40:01,581 [likelihood]      Output: ['Omega_m', 'Omega_Lambda', 'YHe', 'z_reio']
 2019-09-09 15:40:01,582 [classy] Importing *local* classy from /rds/user/lh561/hpc-work/data/CobayaData/modules/code/classy
 2019-09-09 15:40:01,585 [sn.pantheon] Reading /rds/user/lh561/hpc-work/data/CobayaData/modules/data/sn_data/Pantheon/lcparam_full_long_zhel.txt
 2019-09-09 15:40:01,598 [sn.pantheon] Number of SN read: 1048 
 2019-09-09 15:40:01,598 [sn.pantheon] Reading covmat for: mag 
 2019-09-09 15:40:05,517 [likelihood] The theory code will compute the following products, requested by the likelihoods: ['Omega_m', 'Omega_Lambda', 'YHe', 'z_reio', 'angular_diameter_distance']
 2019-09-09 15:40:05,535 [polychord] Initializing
 2019-09-09 15:40:05,535 [polychord] Importing *local* PolyChord from /rds/user/lh561/hpc-work/data/CobayaData/modules/code/PolyChordLite
 2019-09-09 15:40:05,555 [polychord] Storing raw PolyChord output in 'chains/sn_pantheon/class_pc16_sn/raw_polychord_output'.
 2019-09-09 15:40:05,556 [likelihood] Optimal ordering of parameter blocks: [['logA', 'n_s', 'H0', 'omega_b', 'omega_cdm', 'tau_reio']] with speeds array([1])
 2019-09-09 15:40:05,559 [polychord] Calling PolyChord with arguments:
 2019-09-09 15:40:05,559 [polychord]   base_dir: chains/sn_pantheon/class_pc16_sn/raw_polychord_output
 2019-09-09 15:40:05,559 [polychord]   boost_posterior: 0
 2019-09-09 15:40:05,559 [polychord]   cluster_dir: chains/sn_pantheon/class_pc16_sn/raw_polychord_output/clusters
 2019-09-09 15:40:05,559 [polychord]   cluster_posteriors: True
 2019-09-09 15:40:05,559 [polychord]   compression_factor: 0.36787944117144233
 2019-09-09 15:40:05,559 [polychord]   do_clustering: True
 2019-09-09 15:40:05,559 [polychord]   equals: True
 2019-09-09 15:40:05,560 [polychord]   feedback: 2
 2019-09-09 15:40:05,560 [polychord]   file_root: class_pc16_sn
 2019-09-09 15:40:05,560 [polychord]   grade_dims: [6]
 2019-09-09 15:40:05,560 [polychord]   grade_frac: [6]
 2019-09-09 15:40:05,560 [polychord]   logzero: -1.7976931348623157e+308
 2019-09-09 15:40:05,560 [polychord]   max_ndead: -1
 2019-09-09 15:40:05,560 [polychord]   maximise: False
 2019-09-09 15:40:05,560 [polychord]   nfail: -1
 2019-09-09 15:40:05,560 [polychord]   nlive: 16
 2019-09-09 15:40:05,560 [polychord]   nlives: {}
 2019-09-09 15:40:05,560 [polychord]   nprior: -1
 2019-09-09 15:40:05,560 [polychord]   num_repeats: 30
 2019-09-09 15:40:05,560 [polychord]   posteriors: True
 2019-09-09 15:40:05,560 [polychord]   precision_criterion: 0.001
 2019-09-09 15:40:05,560 [polychord]   read_resume: True
 2019-09-09 15:40:05,560 [polychord]   seed: -1
 2019-09-09 15:40:05,560 [polychord]   write_dead: True
 2019-09-09 15:40:05,560 [polychord]   write_live: True
 2019-09-09 15:40:05,560 [polychord]   write_paramnames: False
 2019-09-09 15:40:05,560 [polychord]   write_prior: True
 2019-09-09 15:40:05,560 [polychord]   write_resume: True
 2019-09-09 15:40:05,560 [polychord]   write_stats: True
 2019-09-09 15:40:05,561 [polychord] Sampling!
PolyChord: MPI is already initilised, not initialising, and will not finalize

PolyChord: Next Generation Nested Sampling
copyright: Will Handley, Mike Hobson & Anthony Lasenby
  version: 1.16
  release: 1st March 2019
    email: [email protected]

Run Settings
nlive    :      16
nDims    :       6
nDerived :       8
Doing Clustering
Generating equally weighted posteriors
Generating weighted posteriors
Clustering on posteriors
Writing a resume file tochains/sn_pantheon/class_pc16_sn/raw_polychord_output/class_pc16_sn.resume

forrtl: severe (64): input conversion error, unit 10, file /home/lh561/Documents/Projects/CobayaPrj/chains/sn_pantheon/class_pc16_sn/raw_polychord_output/class_pc16_sn.resume
Image              PC                Routine            Line        Source             
libifcoremt.so.5   00002B097F56398F  for__io_return        Unknown  Unknown
libifcoremt.so.5   00002B097F5A8A59  for_read_seq_fmt_     Unknown  Unknown
libifcoremt.so.5   00002B097F5A5C89  for_read_seq_fmt      Unknown  Unknown
libchord.so        00002B097E4D4F56  read_write_module     Unknown  Unknown
libchord.so        00002B097E4D271E  read_write_module     Unknown  Unknown
libchord.so        00002B097E485722  nested_sampling_m     Unknown  Unknown
libchord.so        00002B097E4F70AF  interfaces_module     Unknown  Unknown
libchord.so        00002B097E4F4628  polychord_c_inter     Unknown  Unknown
libchord.so        00002B097E4F3B72  _Z13run_polychord     Unknown  Unknown
libchord.so        00002B097E4F3860  _Z13run_polychord     Unknown  Unknown
_pypolychord.cpyt  00002B097E23598D  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B093550BA36  _PyCFunction_Fast     Unknown  Unknown
libpython3.6m.so.  00002B09355B55DC  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B09355BC8B3  _PyEval_EvalFrame     Unknown  Unknown
libpython3.6m.so.  00002B09355B3C33  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B09355B57CB  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B09355B5468  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B09355BC8B3  _PyEval_EvalFrame     Unknown  Unknown
libpython3.6m.so.  00002B09355B3C33  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B09355B57CB  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B09355B5468  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B09355BC8B3  _PyEval_EvalFrame     Unknown  Unknown
libpython3.6m.so.  00002B09355B3C33  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B09355B57CB  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B09355B5468  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B09355BC8B3  _PyEval_EvalFrame     Unknown  Unknown
libpython3.6m.so.  00002B09355B3C33  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B09355B57CB  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B09355B5468  Unknown               Unknown  Unknown
libpython3.6m.so.  00002B09355BC8B3  _PyEval_EvalFrame     Unknown  Unknown
libpython3.6m.so.  00002B09355B24AA  PyEval_EvalCodeEx     Unknown  Unknown
libpython3.6m.so.  00002B09355B1959  PyEval_EvalCode       Unknown  Unknown
libpython3.6m.so.  00002B09355FF48E  PyRun_FileExFlags     Unknown  Unknown
libpython3.6m.so.  00002B09355FEF05  PyRun_SimpleFileE     Unknown  Unknown
libpython3.6m.so.  00002B093561C6DC  Py_Main               Unknown  Unknown
python3            0000000000401E69  main                  Unknown  Unknown
libc-2.17.so       00002B09365863D5  __libc_start_main     Unknown  Unknown
python3            0000000000401CA9  Unknown               Unknown  Unknown

Proposal: NEW MPI splitting

Hi

This is just a proposal, and I can happily help you implement it if you think it is OK. We are planning to make the merger of CosmoLike and the Cobaya framework an official DES pipeline. This means we are going to test modes in CosmoLike that are slow, like the non-Limber calculation, using this new framework.

The way Cobaya splits MPI threads right now in Metropolis-Hastings is one MPI core per walker. However, the way Metropolis-Hastings works in your code is the following (I learned this from the CosmoMC notes): for each step you create an orthonormal basis with random orientation and cycle evaluations over the basis vectors. Therefore, you could split MPI threads up to number_walkers x number_dimensions, which can be >> n_walkers. This new splitting would require putting all proposed points in a pool and then distributing them over MPI. Of course MPI threads = number_walkers x number_dimensions is a bit extreme, but the point is that the user could allocate MPI threads > number_walkers to achieve faster convergence.

With this change, convergence can happen a lot faster when CosmoLike needs to evaluate things like a non-Limber approximation. This would also help a lot in cases where the user adds many parameters, like when Prof. Dvorkin and I worked with 20 PCAs in w(z).

Function to dump observables

For testing, it's useful to be able to easily get the observables for a specific model to check all settings are correct and that theory predictions agree with expectations. (equivalent of action=4 and test_output_root in CosmoMC)

Yaml problem

Hi - I am trying to install the new cobaya and I keep getting this error when loading yaml files
File "/home/vivianmiranda/data/Krause/test/CobayaCosmolike2/des_real/..//bin/cobaya-run", line 11, in <module> load_entry_point('cobaya', 'console_scripts', 'cobaya-run')() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 484, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2707, in load_entry_point return ep.load() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2325, in load return self.resolve() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2331, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/home/vivianmiranda/data/Krause/test/CobayaCosmolike2/cosmolike_core/cobaya/cobaya/run.py", line 21, in <module> from cobaya.output import get_Output as Output File "/home/vivianmiranda/data/Krause/test/CobayaCosmolike2/cosmolike_core/cobaya/cobaya/output.py", line 22, in <module> from cobaya.yaml import yaml_dump, yaml_load_file, OutputError File "/home/vivianmiranda/data/Krause/test/CobayaCosmolike2/cosmolike_core/cobaya/cobaya/yaml.py", line 22, in <module> import yaml File "/home/vivianmiranda/data/Krause/test/CobayaCosmolike2/cosmolike_core/cobaya/cobaya/yaml.py", line 42, in <module> def yaml_load(text_stream, Loader=yaml.Loader, object_pairs_hook=odict, file_name=None): AttributeError: 'module' object has no attribute 'Loader'

Here is the yaml file

EXAMPLE_EVALUATE3.txt

Input generator on 4K screens

I have the same issue in GetDistGUI: on a 4K Windows screen with PySide2, things are too small.

[screenshot attached: cobaya_4K]

I don't know what the solution is; no permutation of the following seems to give consistent results for GetDistGUI:

 import os
 from PySide2.QtCore import Qt, QCoreApplication
 from PySide2.QtWidgets import QApplication

 QApplication.setAttribute(Qt.AA_EnableHighDpiScaling)   # DPI support
 QCoreApplication.setAttribute(Qt.AA_UseHighDpiPixmaps)  # HiDPI pixmaps
 os.environ["QT_AUTO_SCREEN_SCALE_FACTOR"] = "1"
 os.environ["QT_SCALE_FACTOR"] = "2"

Bug in the transformation of variables - Polychord

Hi,

There is a bug in the prior translation between Cobaya and Polychord. If I run Cobaya-Polychord without doing any transformation of variables, like here

As:
  value: 2e-9
  latex: 'A_\mathrm{s}'
ns:
  prior: {min: 0.9, max: 1.0}
  ref: {dist: norm, loc: 0.96, scale: 0.004}
  proposal: 0.002
  latex: n_\mathrm{s}
H0:
  value: 68
ombh2:
  prior: {min: 0.005, max: 0.08}
  ref: {dist: norm, loc: 0.0221, scale: 0.0001}
  proposal: 0.0001
  latex: \Omega_\mathrm{b} h^2
omch2:
  prior: {min: 0.03, max: 0.16}
  ref: {dist: norm, loc: 0.12, scale: 0.001}
  proposal: 0.0005
  latex: \Omega_\mathrm{c} h^2
tau:
  prior: {min: 0.01, max: 0.08}
  ref: {dist: norm, loc: 0.05, scale: 0.01}
  proposal: 0.005
  latex: \tau_\mathrm{reio}

then everything seems perfect. But if I request transformation of variables in the yaml file, like

logA:
  prior: {min: 2, max: 4}
  ref: {dist: norm, loc: 3.1, scale: 0.001}
  proposal: 0.001
  latex: \log(10^{10} A_\mathrm{s})
  drop: true
As: 
  value: 'lambda logA: 1e-10*np.exp(logA)' 
  latex: 'A_\mathrm{s}'
theta:
  prior: {min: 0.5, max: 3}
  ref: {dist: norm, loc: 1.0411, scale: 0.0004}
  proposal: 0.0002
  latex: 100\theta_\mathrm{MC}
  drop: true
cosmomc_theta: {value: 'lambda theta: 1.e-2*theta', derived: false}

I get infinities and NaNs all over the place in the raw PolyChord files (*live.txt, *prior.txt and *.resume)

Error in DES Likelihood with CLASS

When running the DES likelihood out of the box with CLASS, I get the following error message:

[exception handler] ---------------------------------------

Traceback (most recent call last):
File "/mnt/home/chill/.local/lib/python3.7/site-packages/cobaya/theories/classy/classy.py", line 352, in compute
i_state = next(i for i in range(self.n_states)
StopIteration

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/mnt/home/chill/.local/bin/cobaya-run", line 10, in
sys.exit(run_script())
File "/mnt/home/chill/.local/lib/python3.7/site-packages/cobaya/run.py", line 154, in run_script
run(info)
File "/mnt/home/chill/.local/lib/python3.7/site-packages/cobaya/run.py", line 87, in run
sampler.run()
File "/mnt/home/chill/.local/lib/python3.7/site-packages/cobaya/samplers/evaluate/evaluate.py", line 65, in run
self.logposterior = self.model.logposterior(reference_point)
File "/mnt/home/chill/.local/lib/python3.7/site-packages/cobaya/model.py", line 280, in logposterior
make_finite=make_finite, cached=cached, _no_check=True)
File "/mnt/home/chill/.local/lib/python3.7/site-packages/cobaya/model.py", line 186, in loglikes
_derived=_derived, cached=cached)
File "/mnt/home/chill/.local/lib/python3.7/site-packages/cobaya/likelihood.py", line 356, in logps
**theory_params_dict)
File "/mnt/home/chill/.local/lib/python3.7/site-packages/cobaya/theories/classy/classy.py", line 401, in compute
*self.collectors[product].args, **self.collectors[product].kwargs)
File "classy.pyx", line 920, in classy.Class.get_pk_and_k_and_z
File "classy.pyx", line 1155, in classy.Class.z_of_tau
classy.CosmoSevereError:

Error in Class: background_at_tau(L:122) :condition (tau > pba->tau_table[pba->bt_size-1]) is true; out of range: tau=1.417296e+04 > tau_max=1.417296e+04


After digging into this for quite a while, my best guess for the cause of the problem is the construction of self.zs_interp and self.zs in _des_prototype.py. The former array causes the code to try to evaluate P(k,z) down to z=0, but it seems that this requires evaluation below the lowest z at which the code has pre-evaluated P(k). That limit seems to be around z=0.005, hence my guess that it's connected to self.zs, which explicitly has a hard-coded lower limit of 0.005.

However, if I try to remedy this by setting the lower limit of self.zs to 0, the code segfaults. For various values between 0.005 and 0 it either throws the same error as above or segfaults. Perhaps the issue is occurring elsewhere -- any help would be greatly appreciated.
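For reference, the hard-coded clamp being discussed amounts to something like the following (a hypothetical helper, not the actual _des_prototype.py code; zs is assumed to be array-like):

```python
import numpy as np

def clamp_redshifts(zs, z_min=0.005):
    # Keep interpolation nodes away from z=0, where (per the error above)
    # CLASS's background interpolation table apparently runs out of range.
    return np.clip(np.asarray(zs, dtype=float), z_min, None)

print(clamp_redshifts([0.0, 0.003, 0.1]))
```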

Inconsistent CMB temperature values between cobaya and CAMB/CLASS

The CMB temperature value in cobaya's conventions file is not exactly the same as the value used by CAMB or CLASS. Cl's computed with CAMB via cobaya and with CAMB alone will then be shifted by a small amount, because of the way cobaya rescales spectra for the requested units. Either the value in the conventions file should match the 2.7255 value in CAMB/CLASS, or each theory wrapper should get the value from CAMB or CLASS itself, to make them self-consistent.
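The size of the effect is easy to estimate: Cl's in muK^2 scale as T_CMB^2, so a mismatch between two temperature values gives a fractional shift of (T1/T2)^2 - 1. A sketch -- 2.7255 K is the CAMB/CLASS value quoted above, while the exact conventions-file value is not quoted, so the second number is only illustrative:

```python
def cl_rescale_factor(T_code, T_convention):
    """Factor applied to a Cl in muK^2 when converting between two
    assumed CMB temperatures: Cl scales as T^2."""
    return (T_code / T_convention) ** 2

# Illustrative: CAMB/CLASS value vs. a hypothetical 2.725 K convention
print(cl_rescale_factor(2.7255, 2.725) - 1)  # fractional shift, ~4e-4
```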

Speed blocking in PolyChord reduces ints by common denominator

Hi Jesus,

when specifying the blocking option in the PolyChord sampler and providing integers for the speeds, e.g.

sampler:
  polychord:
    blocking: 
      - [2, [omega_b, omega_cdm, H0, tau_reio, logA, n_s]]
      - [12, [A_planck]]
      - [50, [A_cib_217, xi_sz_cib, A_sz, ps_A_100_100, ps_A_143_143, ps_A_143_217, ps_A_217_217, ksz_norm, gal545_A_100, gal545_A_143, gal545_A_143_217, gal545_A_217, calib_100T, calib_217T]]

then these integers are divided by their greatest common divisor before being turned into numbers of repeats, i.e. in this case the integers [2, 12, 50] are first reduced to [1, 6, 25] and then multiplied by the number of parameters in each block to give the numbers of repeats for PolyChord, [6, 6, 350]. The parameter num_repeats seems to be ignored entirely when blocking is specified.

However, for accurate evidences the number of repeats for the slow parameters should be greater than the number of slow parameters. Thus, the speeds should not be reduced by a common factor for PolyChord.
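
The arithmetic described above can be sketched in a few lines. The speeds and the per-block parameter counts (6, 1 and 14) are taken from the example; the reduce/gcd step is an assumption about how the oversampling factors are computed:

```python
from functools import reduce
from math import gcd

speeds = [2, 12, 50]    # as given in the blocking option above
n_params = [6, 1, 14]   # parameters per block in the example

g = reduce(gcd, speeds)                    # common factor: 2
factors = [s // g for s in speeds]         # reduced to [1, 6, 25]
repeats = [f * n for f, n in zip(factors, n_params)]
print(repeats)                             # [6, 6, 350]
```

With 6 slow parameters, 6 repeats for the slow block is exactly at the lower limit, which is why the reduction matters for evidence accuracy.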

An easy workaround is to choose integers with no common factor (e.g. 51 instead of 50 here), but I thought I should report the issue here nonetheless.

Best,
Lukas

Individual fast and slow parameters (from the python interpreter)

Hi there,

Thanks for this great code!

I was looking and couldn't immediately see a way to set individual parameters to be fast and slow when using the python interface. e.g. if I have some generic likelihood function:

from collections import OrderedDict as odict

def my_logp(x, y):
    # function with some clever caching so that changing y is fast
    return logp

# Copied from your example
info = {"likelihood": {"my_like": my_logp}}
info["params"] = odict([
    ["x", {"prior": {"min": -2, "max": 2}, "ref": 1, "proposal": 0.2}],
    ["y", {"prior": {"min": -2, "max": 2}, "ref": 0, "proposal": 0.2}]])

Is there some way to do this for the individual parameters as opposed to the whole likelihood?
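
For what it's worth, one possible workaround under the current interface might be to split the likelihood into a slow part and a fast part, each declared with its own speed. This is only a sketch: the `speed` option for external likelihoods and the function split below are assumptions, and the two parts must of course sum to the original log-likelihood:

```python
def slow_part(x):
    # expensive piece depending only on the slow parameter x
    return -x ** 2

def fast_part(x, y):
    # cheap piece; re-evaluating after a change in y is fast
    return -(x - y) ** 2

# Hypothetical: each likelihood gets its own speed, so the sampler
# can block x (slow) separately from y (fast).
info = {"likelihood": {
    "slow_like": {"external": slow_part, "speed": 1},
    "fast_like": {"external": fast_part, "speed": 50},
}}
```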

Cheers,
Joe

cosmo-generator stopped working from version 1.2.2 to version 2.0

The cobaya-cosmo-generator stopped working in version 2.0.

Error message with python2:

Traceback (most recent call last):
  File "/home/lh561/.virtualenvs/py27env/bin/cobaya-cosmo-generator", line 11, in <module>
    load_entry_point('cobaya', 'console_scripts', 'cobaya-cosmo-generator')()
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/cosmo_input/gui.py", line 291, in gui_script
    window = MainWindow()
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/cosmo_input/gui.py", line 59, in __init__
    modules = get_available_modules(kind)
  File "/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/tools.py", line 148, in get_available_modules
    if __init__filename and os.path.getsize(__init__with_path):
  File "/home/lh561/.virtualenvs/py27env/lib/python2.7/genericpath.py", line 57, in getsize
    return os.stat(filename).st_size
OSError: [Errno 20] Not a directory: '/home/lh561/Documents/Projects/CobayaPrj/cobaya/cobaya/samplers/__init__.py/__init__.py'
QClipboard: Unable to receive an event from the clipboard manager in a reasonable time

To check whether this might be due to the python version I tried it with python3, but only got:

Segmentation fault

The repetition of /__init__.py seems to cause the problem. Glancing at cobaya/tools.py, there were too many changes for me to easily spot what caused this, so I guess it will be easier for those familiar with the changes made.
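
A minimal reconstruction of what the traceback suggests is happening (the path is illustrative): if the stored module path already ends in __init__.py, joining __init__.py onto it again yields a path that treats a file as a directory, and os.path.getsize then fails with ENOTDIR:

```python
import os.path

# Illustrative path; the real one comes from module discovery in tools.py
path = "/home/user/cobaya/cobaya/samplers/__init__.py"
bad = os.path.join(path, "__init__.py")
print(bad)
# .../samplers/__init__.py/__init__.py
# os.path.getsize(bad) raises OSError (ENOTDIR) on such a path,
# matching the error in the Python 2 traceback above
```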

column mismatch between cobaya output and raw_polychord_output

I want to work with the raw PolyChord output, but the lack of a .paramnames file is a problem for me. I tried using the param names that are detected by GetDist, but I got a column mismatch, i.e. GetDist had more param names than columns in the raw PolyChord output.

When running PolyChord with Cobaya, the PolyChord output is stored in the folder raw_polychord_output, and in particular a sample file raw_polychord_output/<rootname>.txt is produced. Other codes (CosmoChord, MontePython) also produce a .paramnames file, which makes it possible to identify the parameter names corresponding to the columns of the sample file. Cobaya doesn't create a paramnames file; instead it produces a sample file <rootname>.1.txt with a header specifying the columns. However, this sample file has two more columns than the files in raw_polychord_output.

  • Which two columns are additionally stored in <rootname>.1.txt?
  • Are those two columns appended to the end?
  • How can I find out the param names corresponding to the columns in the raw PolyChord output?

w0-wa

Hi

How can I enable the w0-wa parameterization? (I need to run some chains for a DES meeting in 2 weeks.)
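
For reference, a hedged sketch of what this could look like with the CAMB wrapper: a PPF dark-energy model enabled via extra_args, with w and wa added as sampled parameters. The option names should be checked against the CAMB/cobaya documentation, and the priors below are purely illustrative:

```yaml
theory:
  camb:
    extra_args:
      dark_energy_model: ppf
params:
  w:
    prior: {min: -3, max: 1}
    ref: -1
    proposal: 0.02
  wa:
    prior: {min: -3, max: 2}
    ref: 0
    proposal: 0.05
```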

Best
Vivian

pandas .ix is deprecated

I get a bunch of deprecation errors when resuming a run.

/home/zequnl/.conda/envs/ps/lib/python3.7/site-packages/cobaya/samplers/mcmc/mcmc.py:320: FutureWarning: 
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing

See the documentation here:
http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#ix-indexer-is-deprecated
  initial_point = (self.collection[self.collection.sampled_params]
/home/zequnl/.conda/envs/ps/lib/python3.7/site-packages/cobaya/samplers/mcmc/mcmc.py:322: FutureWarning: 
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing

See the documentation here:
http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#ix-indexer-is-deprecated
  logpost = -(self.collection[_minuslogpost]
/home/zequnl/.conda/envs/ps/lib/python3.7/site-packages/cobaya/samplers/mcmc/mcmc.py:324: FutureWarning: 
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing

See the documentation here:
http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#ix-indexer-is-deprecated
  logpriors = -(self.collection[self.collection.minuslogprior_names]
/home/zequnl/.conda/envs/ps/lib/python3.7/site-packages/cobaya/samplers/mcmc/mcmc.py:326: FutureWarning: 
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing

See the documentation here:
http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#ix-indexer-is-deprecated
  loglikes = -0.5 * (self.collection[self.collection.chi2_names]
/home/zequnl/.conda/envs/ps/lib/python3.7/site-packages/cobaya/samplers/mcmc/mcmc.py:328: FutureWarning: 
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing

See the documentation here:
http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#ix-indexer-is-deprecated
  derived = (self.collection[self.collection.derived_params]

As the warnings suggest, should these be replaced with .iloc and .loc?
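
A minimal sketch of the replacement (the column name mimics the one in the warnings; the DataFrame is a toy stand-in for the samples collection):

```python
import pandas as pd

df = pd.DataFrame({"x": [0.1, 0.2, 0.3],
                   "minuslogpost": [5.0, 4.0, 3.5]})

# Deprecated: df[["x"]].ix[-1]
# Positional indexing (e.g. picking the last row) now uses .iloc:
initial_point = df[["x"]].iloc[-1]
logpost = -df["minuslogpost"].iloc[-1]
print(float(initial_point["x"]), logpost)  # 0.3 -3.5
```

.loc would be used instead where the original code indexes by label, e.g. df.loc[:, "x"].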
