lsstdesc / ccl
DESC Core Cosmology Library: cosmology routines with validated numerical accuracy
License: BSD 3-Clause "New" or "Revised" License
We'd like to provide a parameterization that decouples growth from expansion history. The simplest way we could think of was to have, for some user-supplied spline (or function??) delta_growth:
growth(a) -> growth(a) * p(a)
Maybe with a placeholder p(a) = 1
As you added the growth code, would you be able to add this to the growth_ode_system and growth_factor_and_growth_rate functions somehow?
We assumed that using a multiplicative, instead of additive, modification would be advantageous for computing modifications to f(a), do you agree? If this doesn't make a difference, an additive delta_G(a) as we discussed before might be better. Can you make another executive decision?
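A minimal sketch of what the multiplicative hook could look like, assuming a hypothetical user-supplied GSL spline for p(a); none of these names are existing CCL API:

```c
#include <gsl/gsl_spline.h>

/* Hypothetical modified growth: D_mod(a) = D(a) * p(a), where p(a) = 1
 * recovers the standard result. p_spline and p_acc are user-supplied. */
double modified_growth(double a, double growth_std,
                       gsl_spline *p_spline, gsl_interp_accel *p_acc)
{
  /* Placeholder p(a) = 1 when no spline is provided */
  double p = (p_spline != NULL) ? gsl_spline_eval(p_spline, a, p_acc) : 1.0;
  return growth_std * p;
}
```

One point in favour of the multiplicative form: since D_mod = D * p, the growth rate simply becomes f_mod(a) = f(a) + dln p/dln a, so the modification to f(a) stays additive and easy to compute.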
This is probably too late in the game, but since CCL is written in C, I feel obligated to mention this.
GObject Introspection allows one to automatically generate bindings for a huge range of languages (Python, JavaScript, Lua, Ruby, and apparently Julia too...). It is one of the core technologies behind the GNOME desktop environment.
See https://wiki.gnome.org/Projects/GObjectIntrospection
There is a list of 'Users': https://wiki.gnome.org/Projects/GObjectIntrospection/Users
It does require the source code to be annotated in a specific way (gtk-doc + ownership annotations) and the project to be organized in a specific way (autotools and dependency on GLib).
GLib does provide basic data structures like hash tables and binary trees, and a type system of the kind we usually invent and reinvent again and again in C. Nevertheless, pulling in a GLib dependency would be a very bold and radical move; I've never heard of any science project using GLib as a foundation. On the other hand, people do survive building things with ROOT, so...
Currently CCL only evaluates the linear power spectrum at z=0, and rescales to other redshifts using the growth factor.
This scaling is not exact for all cosmologies (e.g., massive neutrinos), and we should switch to a 2D (k,z) look-up table, as done for the non-linear power spectrum.
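As a sketch of what the 2D table could look like using GSL's bicubic 2D splines (requires GSL >= 2.0; grid and function names here are illustrative, not CCL API):

```c
#include <math.h>
#include <gsl/gsl_spline2d.h>

/* Build a 2D spline of log P on a (log k, a) grid. lk[nk] and a[na]
 * are the grid axes; lpk holds the tabulated log P values in GSL's
 * za[j*nk + i] layout. */
gsl_spline2d *build_pk2d(const double *lk, size_t nk,
                         const double *a, size_t na, const double *lpk)
{
  gsl_spline2d *s = gsl_spline2d_alloc(gsl_interp2d_bicubic, nk, na);
  gsl_spline2d_init(s, lk, a, lpk, nk, na);
  return s;
}

/* Evaluate P(k, a), interpolating in log k */
double eval_pk2d(const gsl_spline2d *s, double k, double a_eval,
                 gsl_interp_accel *ka, gsl_interp_accel *aa)
{
  return exp(gsl_spline2d_eval(s, log(k), a_eval, ka, aa));
}
```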
add option to link to user specified CLASS in Makefile
add option to pass user specified additional configuration parameters to CLASS
The FFTlog implementation for the correlation function implies that the comparison to the benchmark can only be done in a certain range of angular scales. If LMIN > 2 or LMAX < 3e4, ccl_test_corr would crash because GSL would attempt to extrapolate the CCL correlation to the range of the benchmark. To avoid this, I implemented a temporary fix by requesting that the correlation only be compared to the benchmarks in the range of the FFTlog output. This needs to be revisited later on.
We need a module to check the outputs of CCL against our benchmark code comparison. Is this old_test?
We've noticed differences at >1e-4 between the PP constants (currently implemented in CCL), the CLASS constants, and the GSL constants. (G is one example.) We have agreed that the best option is to use the PP constants, and we need to contact the CLASS developers to ask them to incorporate these as well if possible.
The growth factor seems to be normalized to \propto a at early times, but we probably want to have it normalized to D(z=0)=1.
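If we switch conventions, the change is a one-line renormalization; a sketch assuming a hypothetical growth_unnormalized that returns the early-time-normalized D(a):

```c
double growth_unnormalized(double a); /* assumed: D(a) ~ a at early times */

/* Renormalize so that D(a=1) = 1 */
double growth_normalized(double a)
{
  return growth_unnormalized(a) / growth_unnormalized(1.0);
}
```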
Benchmark files are needed for some of the mass function models. Agreement is only at the 10^{-2} level for the currently available benchmark - we might need to revisit this.
Currently the master branch is tracking the .gitignore file. Is there a good reason to track it? Shouldn't each user define their own .gitignore?
Some routines, e.g. ccl_sigmaR and ccl_sigma8, do not take status as an argument yet. This should be updated.
The error handling in ccl_cls.c could also be made more consistent.
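For concreteness, the updated signatures might look like this (a sketch; the exact convention is up for discussion):

```c
/* Proposed status-carrying signatures: a non-zero *status would report
 * failure instead of the function silently returning a bad value. */
double ccl_sigma8(ccl_cosmology *cosmo, int *status);
double ccl_sigmaR(ccl_cosmology *cosmo, double R, int *status);
```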
Computing power spectra with CLASS becomes very slow for large values of P_k_max_1/Mpc; we currently set this parameter to K_MAX (=1000.).
Given the intrinsic uncertainty of halofit at large k (and the lack of relevance of the linear power spectrum at very large k), would it make sense to only evaluate power spectra up to k ~ 50 or 100 1/Mpc, and extrapolate from there (using the analytic slope for the linear power spectrum, and a fit to the slope in the non-linear regime for P_nl)?
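A sketch of the proposed extrapolation, using a power law whose slope is measured just below the cutoff (function and variable names are illustrative):

```c
#include <math.h>

/* Extrapolate P(k) beyond k_cut with a power law P ~ k^n, where n is
 * the logarithmic slope dlnP/dlnk estimated at the cutoff. pk() is the
 * tabulated power spectrum, assumed valid for k <= k_cut. */
double pk_extrapolated(double k, double k_cut, double (*pk)(double))
{
  if (k <= k_cut)
    return pk(k);
  double eps = 1e-2; /* finite-difference step in ln k */
  double n = log(pk(k_cut) / pk(k_cut * exp(-eps))) / eps;
  return pk(k_cut) * pow(k / k_cut, n);
}
```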
We'd like to optionally get the matter power spectrum from CLASS.
Will need to:
- [ ] add the CLASS code to the repo
- [ ] link to it etc.
- [ ] add a power spectrum config option (see the sketch below)
- [ ] call CLASS from the relevant function (with initialisation and finalization)
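For the config option, a minimal sketch of how the choice could be wired in (the enum and function names are assumptions for illustration, not current CCL API):

```c
/* Hypothetical switch selecting where the linear P(k) comes from */
typedef enum {
  ccl_pk_bbks,  /* analytic BBKS transfer function */
  ccl_pk_eh,    /* Eisenstein & Hu fitting formula */
  ccl_pk_class  /* full Boltzmann computation via CLASS */
} ccl_pk_method;

double compute_linear_power(double k, double a, ccl_pk_method method)
{
  switch (method) {
  case ccl_pk_class:
    /* initialise CLASS, read off the P(k,z) table, finalize CLASS */
    break;
  case ccl_pk_bbks:
  case ccl_pk_eh:
    /* analytic transfer functions, no external dependency */
    break;
  }
  return 0.0; /* placeholder */
}
```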
Photons are considered in some functions but not in others; we need to include them consistently. A branch is being created to solve this.
Some routines now have status, some don't, and in any case it is processed somewhat cumbersomely.
I propose the following:
@damonge @elikrause @joezuntz
Guys,
This is not really an issue, since it is not clear what could close it, but in the absence of other tools, check this out:
https://github.com/damonge/CosmoMAD
and the example in the doc:
https://github.com/damonge/CosmoMAD/blob/master/doc/CosmoMad.pdf
It does seem pretty similar in spirit to CCL, so I guess we can just copy-paste much of the code from there.
Send version 0.0 to theorists to see whether they can interface with it, and figure out how exotic models break it.
We are missing the proper sigma_8 normalization to compare to the benchmark - this is on the to-do list.
Once the API has stabilised and we have a public release of CCL, the package should be registered with PyPI. This will allow people to install it using the pip command.
Instructions for doing this are here:
https://packaging.python.org/distributing/
There is a bug in the code to create parameters in the Python layer:

```python
import pyccl
params = pyccl.core.Parameters(Omega_c=0.25, Omega_b=0.05, h=0.7, A_s=2.1e-9, n_s=0.96)
```

This yields the error

```
NameError: global name 'norm_pk' is not defined
```

in the relevant file. Apparently the Python layer was not updated to match the C?
GSL has trouble interpolating the nonlinear matter power spectrum at the boundaries of the redshift range. If I try to evaluate the nonlinear power spectrum at a=1.0, CCL crashes. Evaluating at a=0.98, for example, is not a problem.
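Until the underlying spline range is fixed, one defensive workaround (a sketch, not current CCL behaviour) is to clamp the requested scale factor to the tabulated range before evaluating:

```c
/* Clamp a to the tabulated range [a_min, a_max] so that GSL never
 * sees an evaluation point outside its interpolation domain. */
double clamp_scale_factor(double a, double a_min, double a_max)
{
  if (a < a_min) return a_min;
  if (a > a_max) return a_max;
  return a;
}
```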
We have been comparing the benchmark correlation function with the output from CCL. Clustering results are in good agreement (although probably not at the 10^-4 level; this needs to be checked). The weak lensing results are qualitatively similar to the benchmark at large angular separations. However, the weak lensing results are sensitive to the choice of L_MAX_INT, the maximum ell of the integration range. FFTlog is also producing a lot of oscillation at small angular separations, which causes problems when splining the weak lensing correlation.
It isn't clear that this function should need to be called before the growth, for example. We tried calling the growth alone in one of the test scripts and realized we needed to call compute_distances first. Perhaps rename this to something more general, or make it part of create_cosmology somehow?
Outputs of these new routines in the LSST_specs branch (placeholder) need to be tested.
In the python code this can be circumvented, but in the C-code it isn't so easy. We should choose whether we want functions to be functions of redshift or scale factor.
For instance in src/ccl_power.c the function ccl_linear_matter_power() is a function of scale factor. On the other hand in src/ccl_massfunc.c the function ccl_massfunc() is a function of redshift.
To avoid confusion for users we should be consistent. Later on we can accommodate specifying one or the other.
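Whichever convention wins, conversion helpers make supporting the other one later trivial; a sketch:

```c
/* Redshift <-> scale factor: a = 1 / (1 + z) */
double a_of_z(double z) { return 1.0 / (1.0 + z); }
double z_of_a(double a) { return 1.0 / a - 1.0; }
```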
Currently all angular power spectra are being computed by LOS-integrating the linear matter power spectrum with the different window functions. The rationale behind this is that what goes into the integral should be P(k,z1,z2), which for linear theory is literally P(k,z1,z2) = sqrt(P(k,z1)*P(k,z2)). This is not the case for the non-linear power spectrum (e.g. see https://arxiv.org/abs/1612.00770); however, P(k,z) is the only quantity we currently compute for the non-linear power spectrum.
We could put a patch on this issue very easily by just approximating P(k,z1,z2)=sqrt(P(k,z1)*P(k,z2)) also for the non-linear power spectrum, and revisit this question later on if we think it's relevant. Note also that this issue does not apply in the Limber approximation that we currently have implemented.
Unless people disagree I could make these changes pretty quickly.
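The patch would indeed be small; a sketch of the geometric-mean approximation (names illustrative):

```c
#include <math.h>

/* Unequal-time power spectrum via the geometric mean:
 *   P(k, z1, z2) ~ sqrt(P(k, z1) * P(k, z2))
 * Exact in linear theory; an approximation for the non-linear P(k). */
double pk_unequal_time(double k, double z1, double z2,
                       double (*pk)(double, double))
{
  return sqrt(pk(k, z1) * pk(k, z2));
}
```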
One key application for the CCL will be as theory backend for MCMC analyses. For this, a user may want to specify the angular binning (in \ell, or \theta), a set of CCL_ClTracer structures, and which (cross-) correlations of these tracers to compute.
It would be helpful if the CCL contained the necessary data structures and book-keeping code, as this will likely be common at least to LSS and WL.
Here are some ideas for the data structure, but it would be good to hear other ideas, and think about how to make the book-keeping easy.
Minimum data structure:
optional1:
optional2:
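For concreteness, a purely hypothetical sketch of what the minimum structure might hold; every field name below is an assumption, not a proposal from the thread:

```c
#define CCL_MAX_TRACERS 64  /* fixed cap keeps allocation simple */

/* Hypothetical book-keeping structure for a set of angular spectra */
typedef struct {
  int n_tracers;                           /* number of tracers in use      */
  CCL_ClTracer *tracers[CCL_MAX_TRACERS];  /* the tracer definitions        */
  int n_ell;                               /* number of ell (or theta) bins */
  double *ell;                             /* bin centres or edges          */
  /* mask of which (i, j) cross-correlations to compute */
  int compute_pair[CCL_MAX_TRACERS][CCL_MAX_TRACERS];
} ccl_cl_workspace;
```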
Does the basic idea for the use case make sense, or would you describe this application differently?
For the minimum structure, is it OK to impose a maximum on the number of tracers supported, to make the allocation easy (rather than using linked lists, or C acrobatics to mimic dynamic array sizes)?
What other features should we plan for, and what other information should be stored?
@dkirkby suggested using Python unit tests to do integration with Travis CI. The plan is to use astropy to check the accuracy of the parts that are already implemented there. This will be implemented in the travis_integration branch.
As models and cosmological parameters are added beyond vanilla LCDM, it will become necessary to signal which functions support certain new parameters, and which values of those parameters are valid. This could be done through documentation alone, but it would be more robust if the CCL code were able to perform validity checks itself.
Being able to check which parameters are supported/required by a function would also be useful for likelihood/sampling codes, which can use this information to define fast/slow parameters, or to cache the results of expensive calculations. Monte Carlo analyses that propose invalid values should also be given enough information to fail gracefully.
The following types of test are likely to be useful:
The results of some of these tests may be conditional on the values of other parameters (e.g. degraded precision on parameters derived from the power spectrum when w != -1, Omega_K != 0).
While these tests could be hard-coded in each function, e.g. using assertions, this will significantly increase the maintenance burden and complexity of the code as new parameters are added. Ideally, a simple, low-maintenance way of enforcing validity conditions on parameters and functions would be incorporated into the API.
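One low-maintenance pattern (a sketch under assumed names, not an API proposal) is a declarative table of per-parameter validity ranges checked by a single helper:

```c
#include <stdio.h>

/* Hypothetical declarative validity table: each entry names a
 * parameter and the range a given function supports for it. */
typedef struct {
  const char *name;
  double min, max;
} ccl_param_range;

static const ccl_param_range halofit_ranges[] = {
  {"w0",      -2.0, -0.3},   /* illustrative bounds only */
  {"Omega_k", -0.1,  0.1},
};

/* Return 0 if value is valid for parameter i of the table, 1 otherwise */
int ccl_check_param(const ccl_param_range *ranges, int i, double value)
{
  if (value < ranges[i].min || value > ranges[i].max) {
    fprintf(stderr, "parameter %s = %g outside supported range [%g, %g]\n",
            ranges[i].name, value, ranges[i].min, ranges[i].max);
    return 1;
  }
  return 0;
}
```

A table like this can also be exposed to likelihood/sampling codes, which is what makes the fast/slow-parameter bookkeeping mentioned above possible without hard-coded assertions in every function.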
Hi all,
Once this week's sprint is over, how about we switch to Read access for the Members team, and get in the habit of submitting pull requests from forks? I really like how pull requests on GitHub enable easy code review, which I think could be important for this project. What do you think?
Phil
Several functions need to be generalized to K != 0. @elisachisari and I have identified, for the moment, ccl_luminosity_distance and all the line-of-sight projections needed to compute the C_ells. It would also be handy to add a function to ccl_background.c that computes ccl_comoving_angular_distance besides ccl_comoving_radial_distance.
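For reference, the comoving angular distance follows from the radial one via the standard curvature function; a sketch (K in units of 1/Mpc^2, names illustrative):

```c
#include <math.h>

/* Comoving angular distance from comoving radial distance chi:
 *   K > 0 (closed):  sin(sqrt(K) chi) / sqrt(K)
 *   K = 0 (flat):    chi
 *   K < 0 (open):    sinh(sqrt(-K) chi) / sqrt(-K)
 */
double comoving_angular_distance(double chi, double K)
{
  if (K > 0) return sin(sqrt(K) * chi) / sqrt(K);
  if (K < 0) return sinh(sqrt(-K) * chi) / sqrt(-K);
  return chi;
}
```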
Separate call to CLASS from halofit calculation for speed up
Please ignore this issue until after the CCL release, so that it does not bog you all down :).
New developers can more easily join the project with a few straightforward additions to the GitHub page. Specifically, on the README and/or the wiki we should provide some direction with a "points of entry" section - i.e., if one wanted to learn the structure of CCL, this would point to which files to look at first. This could alternatively be done via READMEs in specific directories, such as within src/.
Additionally, the issues page can become a great resource for people joining up, since they can use open issues as starting points for contributing. Tagging issues with descriptive labels achieves this without any extra work. For a great example of how this is done, check out the issues page for matplotlib.
The following branches might need some updates in terms of handling the status and gracefully exiting in case of errors:
See e.g. ccl_background.h:

```c
double ccl_luminosity_distance(ccl_cosmology * cosmo, double a);
```

OK, I can guess what this is, but you need to document the units, possibly in the header file.
In ccl_parameters_fill_initial, omega_g is set to 1.7e-5. Assuming this is omega_photons, this would be incorrect (for T_CMB = 2.7255 K it would be 2.47e-5).
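For reference, the photon density today is fixed by T_CMB alone; a one-line check using the standard scaling (coefficient as quoted above):

```c
#include <math.h>

/* Photon density parameter: Omega_gamma h^2 ~ 2.47e-5 (T_CMB / 2.7255 K)^4 */
double omega_gamma_h2(double t_cmb)
{
  return 2.47e-5 * pow(t_cmb / 2.7255, 4.0);
}
```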
Currently the code seems to run some tests upon make test, but in some tests not even the status flags are checked, let alone the output (as far as I can tell; correct me if I'm wrong).
It would be good to have a function equivalent to ccl_parameters_create_flat_wcdm which accepts non-zero wa.
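A sketch of the hypothetical signature, mirroring the existing flat-wCDM constructor and adding the CPL parameter wa, with w(a) = w0 + wa*(1 - a) (name and argument order are assumptions):

```c
/* Hypothetical flat w0-wa CDM constructor; extends
 * ccl_parameters_create_flat_wcdm with a non-zero wa. */
ccl_parameters ccl_parameters_create_flat_wacdm(
    double Omega_c, double Omega_b, double h,
    double A_s, double n_s, double w0, double wa);
```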
This will speed up all the splines.
I have run into some trouble when attempting to use a scalar value for k in a jupyter notebook.
I am using the following cosmology:

```python
import pyccl as ccl

p = ccl.Parameters(Omega_c=0.27, Omega_b=0.045, h=0.67, A_s=2e-9, n_s=0.96)
cosmo = ccl.Cosmology(p)
```
When attempting to calculate the power spectrum at a specific k value, using a length one list or array works, but using a scalar for the same value does not always work.
As an example with the linear power spectrum, using k=[1e-4] or k=np.array([1e-4]) will return a value of ~2027, but using k=1e-4 causes the notebook to crash, and the following message pops up in the terminal:

```
gsl: interp.c:150: ERROR: interpolation error
Default GSL error handler invoked.
[I 22:09:33.186 NotebookApp] KernelRestarter: restarting kernel (1/5)
```
If I use k=[0.234], a value of ~2241 is returned; however, if I try k=0.234, ~7.9 is returned.
I'm not entirely certain what is going on, but the power spectrum functions do not seem to be correctly interpreting the value of k when it is passed as a scalar.
Using a list or numpy array to pass a single value is a simple workaround, but I thought I should point out that this is happening.
We need to think about the license at some point. What would be a good choice? Is there a commonly adopted license by DESC?