pierrexyz / pybird
Python code for biased tracers in redshift space
Home Page: https://pybird.readthedocs.io/en/latest/
License: MIT License
A small bug I found (I haven't investigated further): with the nnlo counterterm and with_bias=True,
N = Correlator()
with_bias = True
N.set({'output': 'bPk', 'multipole': 3, 'kmax': 0.3, 'xdata': kd, 'km': 0.7, 'kr': 0.35, 'nd': 3e-4,
       'eft_basis': 'eftoflss', 'with_stoch': True, 'with_bias': with_bias, 'with_nnlo_counterterm': True})
eft_params = {'b1': 1.9535, 'c2': 0.5863, 'c4': 0.0, 'b3': -0.3948, 'cct': 0.1839, 'cr1': -0.8414, 'cr2': -0.8084, 'ce0': 1.5045, 'ce1': 0.0, 'ce2': -1.6803, 'b2': 0.4146, 'b4': 0.4146, 'cr4': 10., 'cr6': 20.}
if with_bias:
    N.compute({'kk': kk, 'pk_lin': pk_lin, 'Psmooth': pk_lin, 'f': f1, 'bias': eft_params})
else:
    N.compute({'kk': kk, 'pk_lin': pk_lin, 'Psmooth': pk_lin, 'f': f1})
fails with:
325 self.Ps[1] += np.einsum('lb,bx->lx', self.b13, self.P13) + np.einsum('l,x,x->lx', self.bct, self.co.k**2, self.P11)
326 if self.with_stoch: self.Ps[1] += np.einsum('b,lbx->lx', self.bst, self.Pstl)
--> 327 if self.with_nnlo_counterterm: self.Ps[2] = np.einsum('lb,x->lx', self.cnnlo, self.Pnnlo)
328 if setfull: self.setfullPs()
File ~/anaconda3/envs/cosmodesi/lib/python3.9/site-packages/numpy/core/einsumfunc.py:1371, in einsum(out, optimize, *operands, **kwargs)
1369 if specified_out:
1370 kwargs['out'] = out
-> 1371 return c_einsum(*operands, **kwargs)
1373 # Check the kwargs to avoid a more cryptic error later, without having to
1374 # repeat default values here
1375 valid_einsum_kwargs = ['dtype', 'order', 'casting']
ValueError: einstein sum subscripts string contains too many subscripts for operand 0
There is no such issue with with_bias=False.
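The einsum error itself is easy to reproduce outside pybird. The shapes below are assumptions for illustration, not taken from pybird; the point is that the 'lb' subscript pair in line 327 requires a 2-D coefficient array, so the failure suggests cnnlo ends up 1-D when with_bias=True:

```python
import numpy as np

# Hypothetical shapes (assumptions): one nnlo coefficient per multipole,
# and a P(k) sampled on 50 k-bins.
cnnlo_1d = np.ones(3)
Pnnlo = np.ones(50)

try:
    # Same subscripts as line 327: 'lb' expects operand 0 to be 2-D.
    np.einsum('lb,x->lx', cnnlo_1d, Pnnlo)
except ValueError as e:
    print(e)  # "...too many subscripts for operand 0"

# With a 2-D coefficient array the same einsum is well-formed
# (the 'b' index is summed over since it is absent from the output):
cnnlo_2d = np.ones((3, 2))
res = np.einsum('lb,x->lx', cnnlo_2d, Pnnlo)
print(res.shape)  # (3, 50)
```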
Hi,
I am Yan from the University of Queensland. This is not an issue with the code; I just couldn't find your email address, so I am posting the message here. Sorry about that. I am currently working on implementing PyBird in the DESIlike pipeline. I talked to Arnaud, who is in charge of DESIlike, and he says DESIlike uses the version of PyBird from this repository. I have developed several new features, such as ShapeFit with PyBird, and I have tested my version of PyBird on some of the DESI mocks. Would it be possible to merge my repo into one of your branches? That way DESIlike would still be able to import PyBird and use my version of the code. Thank you very much.
Best,
Yan Lai
Hi Pierre, Guido and all,
We've been looking at fitting multiple redshifts/skycuts using pybird and think we've identified a potentially important bug in the montepython class file, although it's not clear which branches this may be in, or if we've missed/modified something in our fork.
In correlator.compute, the redshifts are reordered such that the model is computed only for the middle bin and then rescaled by the growth factor appropriately for all other redshift bins. However, it looks like the linear power spectrum from CLASS is computed for the first redshift in the unordered list of redshifts specified in self.config (zfid in likelihood_class.__set_cosmo). If the middle redshift differs from the first redshift, this causes a mismatch between the linear power spectrum and the growth factors.
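A toy numerical check of the mismatch described above. All redshifts and growth factors here are made-up placeholder values, not DESI or BOSS numbers; the point is only that rescaling a spectrum computed at the first redshift as if it were at the middle redshift is off by a constant factor:

```python
import numpy as np

# Toy linear growth factors D(z) at three redshift bins (made-up values).
D = {0.38: 0.83, 0.51: 0.78, 0.61: 0.75}
zs = [0.61, 0.38, 0.51]          # unordered list, as in self.config
z_first = zs[0]                  # redshift actually fed to CLASS
z_mid = sorted(zs)[len(zs)//2]   # redshift the rescaling assumes

P0 = 1.0e4                       # toy P_lin amplitude at z = 0
P_from_class = D[z_first]**2 * P0   # what CLASS returns

for z in zs:
    # The pipeline rescales assuming the input spectrum is at z_mid ...
    P_rescaled = (D[z] / D[z_mid])**2 * P_from_class
    # ... but the true spectrum at z uses the z_first normalization.
    P_true = D[z]**2 * P0
    print(z, P_rescaled / P_true)   # constant offset (D_first/D_mid)^2
```

The ratio equals (D(z_first)/D(z_mid))^2 for every bin, i.e. the error only vanishes when the first and middle redshifts coincide.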
There is a small bug when the AP effect is applied to the PS with with_bias=False: the AP effect is not applied to the stochastic terms.
Dear Pierre Zhang,
I am currently doing my PhD at the University of Montpellier with Vivian Poulin, and I would like to use your pybird code to improve the predictions of the matter power spectrum for an alternative to the LCDM model: the DCDM model, in which dark matter decays into a radiative particle.
More precisely, I would like to plot the data from CMASS NGC, CMASS SGC and LOWZ SGC and compare them with the predictions of the DCDM and LCDM models (see the attached picture, where the blue dots are the data). For the (nu)LCDM model, I used the EFT parameters from Table 4 of your article (https://arxiv.org/abs/2003.07956v2). However, it turns out that the LCDM and DCDM predictions are far from the data, and I do not understand why. I suspect I am misusing the bias parameters. To compute these predictions, I used the values of b_1 and c_2 returned by our MCMC analysis and derived b_2 and b_4 as follows (according to part A.1 of your article): b_2 = b_4 = c_2/sqrt(2). I then defined bs in the code as [b_1, b_2, 0, b_4, 0, 0, 0], since bs = [b_1, b_2, b_3, b_4, c_{ct}/k_{nl}^2, c_{r,1}/k_{m}^2, c_{r,2}/k_{m}^2]. Do you have an idea where the error could come from? Any help would be much appreciated.
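For reference, a minimal sketch of the bias-vector construction described above. The numerical values are placeholders, not fitted values:

```python
import numpy as np

# Placeholder best-fit values standing in for MCMC output.
b1, c2 = 2.0, 1.0

# Change of variables from part A.1 of 2003.07956, with c4 = 0:
# b2 = (c2 + c4)/sqrt(2), b4 = (c2 - c4)/sqrt(2)  =>  b2 = b4 = c2/sqrt(2)
b2 = b4 = c2 / np.sqrt(2.)

# bs = [b1, b2, b3, b4, cct/knl^2, cr1/km^2, cr2/km^2],
# with b3 and the counterterms set to zero as in the question.
bs = [b1, b2, 0., b4, 0., 0., 0.]
print(bs)
```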
Hi Pierre,
I am Yan, one of Cullan Howlett's PhD students at the University of Queensland. I am working on applying the pybird pipeline to the DESI mocks. I am still trying to understand how the code works, but I think there may be a bug in how the code applies the AP effect to the model power spectrum. In the class Common of pybird.py, you integrate over mu to get l11, lct, l22, and l13. These are then multiplied by the linear/loop power spectrum in Bird to get the model power spectrum independent of mu. Lastly, in Projection, you multiply the monopole, quadrupole, and hexadecapole by the Legendre polynomials and sum them to reconstruct the model power spectrum as a function of k and mu. However, in Common, I think the highest power of mu is 8, which means you would need to sum up to the 8th multipole to reconstruct the power spectrum exactly. I understand the contributions from the 6th and 8th multipoles should be small, but since the DESI mocks give very tight constraints, I just want to make sure that ignoring them wouldn't introduce any systematic bias.
Here is the list of my attempts to solve this problem:
I tried to modify the code so that it calculates the 6th and 8th multipoles. I did this by setting Nl = 5 and modifying the mu list (lines 82-88 in pybird.py) to include the integration over higher powers of mu. However, the problem I encountered is that the Q matrix in the IR-resummation can only handle up to Nl = 3.
I ran the code without the IR-resummation with Nl = 5 and Nl = 3 and got the following plots.
Here are all the terms in Ploop with powers of mu greater than 4. The y-axis is the fractional difference between the model power spectra I obtained with Nl = 5 and Nl = 3, with AP = alpha_perp = alpha_par. For some elements of the quadrupole and hexadecapole the fractional difference is actually quite big.
I tried to run the Mathematica notebook to generate the Q matrix for higher multipoles, but I want to know whether the settings I have are reasonable. You can access the notebook in this zip file: cbird_modify_notebook.zip. I mainly changed the "IR-correction integrands" section and the "Resummation 'hacked' up to k^(2*16) hexadecapole" section. Or you can tell me what settings I should use. I also want to know whether there is a faster way to write the formulae for the Q matrix in Python other than copying and pasting.
This is probably a bug: I think in line 87 of pybird.py, the last term should be 48/143 instead of 48/148.
Thanks for your help.
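The multipole-counting argument above can be checked directly, independently of pybird: decomposing mu^8 in the Legendre basis gives non-zero coefficients up to l = 8, so a mu^8 term in the kernels genuinely feeds the l = 6 and l = 8 multipoles:

```python
import numpy as np
from numpy.polynomial import legendre

# Coefficients of mu^8 in the ordinary power basis: 0 + 0*mu + ... + 1*mu^8
power_coeffs = [0.] * 8 + [1.]

# Convert to the Legendre basis: mu^8 = sum_l c_l P_l(mu)
c = legendre.poly2leg(power_coeffs)

for ell, cl in enumerate(c):
    print(ell, cl)
# Odd-l coefficients vanish by parity; c_0 = 1/9, c_8 = 128/6435,
# and c_6 is also non-zero, so truncating at l = 4 discards information.
```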
Hi Pierre and Guido,
I find that in your paper 1909.05271, c2 and c4 are defined as b2=(c2+c4)/√2, b4=(c2-c4)/√2.
But you also mentioned that "We perform the change of variables between (b2, b4) and (c2, c4) as we find for all simulations and in the observational data that the former are almost completely anti-correlated (at ~ 99%)" and then set c4=0.
I wonder if this is a typo? If c4=0, then b2 equals b4, so they are positively correlated, not anti-correlated.
The following code shows that the definition of c2 and c4 is indeed b2=(c2+c4)/√2, b4=(c2-c4)/√2.
pybird/montepython_tree/montepython/likelihood_class.py
Lines 2836 to 2837 in 13e1c09
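For concreteness, a minimal sketch of the change of variables and its inverse, assuming only the definitions quoted above (function names are mine, not pybird's):

```python
import numpy as np

def c_to_b(c2, c4):
    """(c2, c4) -> (b2, b4), as quoted from 1909.05271."""
    return (c2 + c4) / np.sqrt(2.), (c2 - c4) / np.sqrt(2.)

def b_to_c(b2, b4):
    """Inverse map: c2 tracks b2 + b4, c4 tracks b2 - b4."""
    return (b2 + b4) / np.sqrt(2.), (b2 - b4) / np.sqrt(2.)

# With c4 fixed to 0, b2 and b4 are forced to be equal,
# i.e. perfectly (positively) correlated:
b2, b4 = c_to_b(c2=1.2, c4=0.)
print(b2 == b4)  # True
```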
There might be a small bug in the lognormal prior. In likelihood.py it is written:
prior = - 0.5 * np.einsum( 'n,nm,m->', np.log(bs) - prior_mean, prior_sigma**-2 * prior_inv_corr_mat, np.log(bs) - prior_mean )
However, it should be
prior = - 0.5 * np.einsum( 'n,nm,m->', np.log(bs) - prior_mean, prior_sigma**-2 * prior_inv_corr_mat, np.log(bs) - prior_mean ) - np.sum(np.log(bs), axis=0)
After fixing this, the chains from my external code, Effort.jl, show exquisite agreement with the ones obtained with pybird.
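The missing term is the Jacobian of the log transform: for a lognormal variable, log p(b) = log N(log b; mu, sigma) - log b, up to constants. A quick check against scipy with toy numbers:

```python
import numpy as np
from scipy.stats import lognorm, norm

mu, sigma = 0.5, 0.3   # toy lognormal parameters
b = 1.7                # toy parameter value

# scipy's lognormal: shape s = sigma, scale = exp(mu)
lhs = lognorm.logpdf(b, s=sigma, scale=np.exp(mu))

# Gaussian in log(b), plus the -log(b) Jacobian term
rhs = norm.logpdf(np.log(b), loc=mu, scale=sigma) - np.log(b)

print(np.isclose(lhs, rhs))  # True: the -log(b) term is required
```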
Hi,
I have recently been using pybird; thank you very much for developing this package.
I find that pybird itself doesn't compute the stochastic terms (it takes 7 counterterms instead of 10), and the stochastic terms are added to the multipoles in the likelihood:
if self.birdlkl == 'fastfull':
    self.bird.fullPs[0] += bval[7] / self.nd + bval[8] / self.nd / self.km**2 * self.k**2
    self.bird.fullPs[1] += bval[9] / self.nd / self.km**2 * self.k**2
I'm a bit confused about it: why wasn't f included in this piece of code? Could you explain why? Thank you very much!
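A self-contained sketch of what the snippet above does; nd, km, and the multipole indexing mirror the snippet, while the function name and all numerical values are placeholders of mine:

```python
import numpy as np

def add_stochastic_terms(Ps, k, ce0, ce1, ce2, nd, km):
    """Add shot-noise-like stochastic terms to the multipoles, mirroring
    the likelihood snippet: the monopole gets a constant plus a k^2 term,
    the quadrupole a k^2 term only, the hexadecapole nothing."""
    Ps = Ps.copy()
    Ps[0] += ce0 / nd + ce1 / nd / km**2 * k**2   # monopole
    Ps[1] += ce2 / nd / km**2 * k**2              # quadrupole
    return Ps

# Toy usage with placeholder numbers:
k = np.linspace(0.01, 0.3, 5)
Ps = np.zeros((3, k.size))                        # (l, k) multipoles
Ps = add_stochastic_terms(Ps, k, ce0=1.0, ce1=0.5, ce2=-1.0, nd=3e-4, km=0.7)
print(Ps[2])  # hexadecapole untouched: no stochastic contribution
```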