
ccl's Issues

Add modified growth rate function

We'd like to provide a parameterization that decouples growth from expansion history. The simplest way we could think of was to have, for some user-supplied spline (or function??) delta_growth:
growth(a) -> growth(a) * p(a)

Maybe with a placeholder p(a) = 1

Since you added the growth code, would you be able to add this to the growth_ode_system and growth_factor_and_growth_rate functions somehow?

We assumed that using a multiplicative, rather than additive, modification would be advantageous for computing modifications to f(a); do you agree? If this doesn't make a difference, an additive delta_G(a), as we discussed before, might be better. Can you make another executive decision?
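
A minimal sketch of the multiplicative hook, assuming the user supplies p(a) (here as a function pointer; a spline would work the same way) with the placeholder default p(a) = 1; all names are hypothetical:

#include <stdio.h>

typedef double (*growth_mod_fn)(double a);

/* placeholder p(a) = 1 */
static double p_default(double a) { (void)a; return 1.0; }

/* stand-in for the unmodified growth factor from the growth ODE */
static double growth_unmodified(double a) { return a; }

static double growth_modified(double a, growth_mod_fn p)
{
    if (p == NULL) p = p_default;
    return growth_unmodified(a) * p(a);   /* growth(a) -> growth(a) * p(a) */
}

int main(void)
{
    printf("D(0.5) = %g\n", growth_modified(0.5, NULL));
    return 0;
}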

GObject Introspection

This is probably too late in the game, but since CCL is a C library, I feel obligated to mention this.

GObject Introspection allows one to automatically generate bindings for a large number of languages (Python, JavaScript, Lua, Ruby, and apparently Julia too). It is one of the core technologies behind the GNOME desktop environment.

See https://wiki.gnome.org/Projects/GObjectIntrospection

There is a list of 'Users': https://wiki.gnome.org/Projects/GObjectIntrospection/Users

It does require the source code to be annotated in a specific way (gtk-doc + ownership annotations) and the project to be organized in a specific way (autotools and a dependency on GLib).
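
For illustration, this is roughly what the gtk-doc + ownership annotations look like on a declaration (the function and its parameters here are hypothetical):

/**
 * ccl_example_distance:
 * @cosmo: (transfer none): the cosmology object
 * @a: scale factor
 *
 * Computes an example distance. The parenthesized annotations are
 * what GObject Introspection consumes when generating bindings.
 *
 * Returns: the distance in Mpc
 */
double ccl_example_distance(ccl_cosmology *cosmo, double a);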

GLib does provide basic data structures like hash tables and binary trees, and a type system of the kind we usually invent and reinvent again and again in C. Nevertheless, pulling in a GLib dependency would be a very bold and radical move; I've never heard of any science project using GLib as a foundation. On the other hand, people do survive building things with ROOT, so...

Use 2D (k,z) interpolation for linear power spectra

Currently CCL only evaluates the linear power spectrum at z=0 and rescales to other redshifts using the growth factor.
This scaling is not exact for all cosmologies (e.g., massive neutrinos), so we should switch to a 2D (k,z) look-up table, as is already done for the non-linear power spectrum.
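
A minimal sketch of such a table using GSL's bicubic 2D splines, interpolating ln P on a (ln k, a) grid (the log-space choice is an assumption; a real implementation would build the spline once at cosmology creation and reuse it):

#include <gsl/gsl_spline2d.h>

/* eval_pk_2d: evaluate ln P at (ln k, a) from a tabulated grid.
 * lnpk holds nk*na values, laid out as GSL expects (most easily
 * filled via gsl_spline2d_set). */
double eval_pk_2d(const double *lnk, size_t nk,
                  const double *a, size_t na,
                  const double *lnpk,
                  double lnk_eval, double a_eval)
{
    gsl_spline2d *s = gsl_spline2d_alloc(gsl_interp2d_bicubic, nk, na);
    gsl_interp_accel *acc_k = gsl_interp_accel_alloc();
    gsl_interp_accel *acc_a = gsl_interp_accel_alloc();
    gsl_spline2d_init(s, lnk, a, lnpk, nk, na);

    double lnp = gsl_spline2d_eval(s, lnk_eval, a_eval, acc_k, acc_a);

    gsl_interp_accel_free(acc_k);
    gsl_interp_accel_free(acc_a);
    gsl_spline2d_free(s);
    return lnp; /* exponentiate at the call site to get P(k, a) */
}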

Correlation function & LMIN/LMAX

The FFTlog implementation of the correlation function implies that the comparison to the benchmark can only be done over a certain range of angular scales. If LMIN>2 or LMAX<3e4, ccl_test_corr would crash because GSL would attempt to extrapolate the CCL correlation to the range of the benchmark. To avoid this, I implemented a temporary fix that compares the correlation to the benchmarks only within the range of the FFTlog output. This needs to be revisited later on.

Theorist-level observables: Clusters

  • cluster number counts, without mass-observable relation complications
  • cluster number counts, binning by observable mass proxy
  • cluster lensing power spectra

Use PP constants

We've noticed differences at the >1e-4 level between PP constants (currently implemented in CCL), CLASS constants, and GSL constants (G is one example). We have agreed that the best option is to use PP constants, and we need to contact the CLASS developers to ask them to incorporate these as well, if possible.

D(a) normalization

The growth factor currently seems to be normalized such that D(a) \propto a at early times, but we probably want it normalized to D(z=0)=1.
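
A one-line sketch of the alternative normalization, given some unnormalized solution of the growth ODE:

/* Normalize the growth factor so that D(a=1) = 1. */
double growth_factor_normalized(double a, double (*D_unnorm)(double))
{
    return D_unnorm(a) / D_unnorm(1.0);
}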

Mass function benchmarks

Benchmark files are needed for some of the mass function models. Agreement is only at the 10^{-2} level for the currently available benchmark - we might need to revisit this.

Should we be tracking .gitignore?

Currently the master branch is tracking the .gitignore file. Is there a good reason to track it? Shouldn't each user define their own .gitignore?

Consistent error handling

Some routines, e.g. ccl_sigmaR and ccl_sigma8, do not yet take status as an argument. This should be updated.
The error handling in ccl_cls.c could also be made more consistent.

Extrapolate power spectra for k > 50 1/Mpc?

Computing power spectra with CLASS becomes very slow for large values of P_k_max_1/Mpc; we currently set this parameter to K_MAX (=1000.).
Given the intrinsic uncertainty of halofit at large k (and the lack of relevance of the linear power spectrum at very large k), would it make sense to only evaluate power spectra up to k ~ 50 or 100 1/Mpc, and extrapolate from there (using the analytic slope for the linear power spectrum, and a fit to the slope in the non-linear regime for P_nl)?
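
A minimal sketch of the power-law extrapolation for the linear part (the pivot value and the finite-difference step are assumptions; the non-linear case would use a fitted slope instead):

#include <math.h>

/* Continue P(k) beyond k_max as a power law with the local
 * logarithmic slope n = dlnP/dlnk estimated at k_max. */
double extrapolate_pk(double k, double k_max, double (*pk)(double))
{
    if (k <= k_max) return pk(k);
    const double eps = 1e-2;  /* step in ln k for the slope estimate */
    double n = (log(pk(k_max)) - log(pk(k_max * exp(-eps)))) / eps;
    return pk(k_max) * pow(k / k_max, n);
}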

Link to CLASS

We'd like to optionally get the matter power spectrum from CLASS.

Will need to:

- [ ] add the CLASS code to the repo
- [ ] link to it, etc.
- [ ] add a power spectrum config option
- [ ] call CLASS from the relevant function (with initialisation and finalization)

Omega_g missing from ccl_background

Photons are considered in some functions but not in others; we need to include them consistently. A branch is being created to solve this.

Change how status is processed.

Some routines now take a status argument, some don't, and in any case it is processed somewhat cumbersomely.

I propose the following (a rough sketch follows the list):

  • ccl_cosmology carries around a status flag itself, like int status
  • any routine that needs to raise a status changes that flag. This means you don't need an extra parameter, and we always carry around ccl_cosmology anyway
  • on input, you could set this flag to -1 for the code to barf rather than silently pass the flag back to you
  • there could also be a ccl_check_flag routine that prints a human-readable error
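
A minimal sketch of this proposal, with hypothetical names:

#include <stdio.h>

typedef struct {
    int status;   /* 0 = OK; set to -1 on input to barf on any error */
    /* ... existing fields ... */
} ccl_cosmology;

/* Routines call this instead of taking a status parameter. */
static void ccl_raise_status(ccl_cosmology *cosmo, int code)
{
    if (cosmo->status < 0) {             /* caller asked for hard failure */
        fprintf(stderr, "CCL error %d\n", code);
        /* abort(), longjmp(), ... */
    }
    cosmo->status = code;
}

/* Print a human-readable error if a flag was raised. */
static void ccl_check_flag(const ccl_cosmology *cosmo)
{
    if (cosmo->status > 0)
        fprintf(stderr, "CCL status %d: <human-readable message>\n",
                cosmo->status);
}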

Break CCL

Send version 0.0 to theorists to see whether they can interface with it, and figure out how exotic models break it.

ccl_test_bbks

We are missing the proper sigma_8 normalization needed to compare to the benchmark; this is on the to-do list.
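
A sketch of the missing rescaling, once sigma_8 has been integrated from the unnormalized BBKS spectrum (P(k) scales linearly with the amplitude, sigma_8 with its square root):

/* Multiply the unnormalized BBKS P(k) by this factor so that it
 * reproduces the target sigma_8. */
double bbks_norm_factor(double sigma8_target, double sigma8_unnorm)
{
    double r = sigma8_target / sigma8_unnorm;
    return r * r;
}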

Creating parameters for python layer

There is a bug in the code to create parameters in the python layer:

import pyccl
params = pyccl.core.Parameters(Omega_c=0.25, Omega_b=0.05, h=0.7, A_s=2.1e-9, n_s=0.96)

yields the error NameError: global name 'norm_pk' is not defined in the relevant file. Apparently the Python layer was not updated to match the C code?

GSL interpolation at boundaries

GSL has trouble interpolating the nonlinear matter power spectrum at the boundaries of the redshift range. If I try to evaluate the nonlinear power spectrum at a=1.0, CCL crashes. Evaluating at a=0.98, for example, is not a problem.

Correlation function: integration problems at small scales

We have been comparing the benchmark correlation function with the output from CCL. Clustering results are in good agreement (although probably not at the 10^-4 level; this needs to be checked). The weak lensing results are qualitatively similar to the benchmark at large angular separations. However, the weak lensing results are sensitive to the choice of L_MAX_INT, the maximum ell of the integration range. FFTlog is also producing a lot of oscillation at small angular separations, which causes problems when splining the weak lensing correlation.

ccl_compute_distances

It isn't clear that this function should need to be called before the growth functions, for example. We tried calling the growth alone in one of the test scripts and realized we needed to call ccl_compute_distances first. Perhaps rename this to something more general, or make it part of create_cosmology somehow?

Inconsistent function signatures: redshift vs scale factor

In the Python code this can be circumvented, but in the C code it isn't so easy. We should choose whether we want functions to take redshift or scale factor.

For instance in src/ccl_power.c the function ccl_linear_matter_power() is a function of scale factor. On the other hand in src/ccl_massfunc.c the function ccl_massfunc() is a function of redshift.

To avoid confusion for users we should be consistent. Later on we can accommodate specifying one or the other.
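
The two conventions are related by a = 1/(1+z), so thin wrappers like these (hypothetical names) could expose both until a single convention is settled on:

static inline double ccl_a_of_z(double z) { return 1.0 / (1.0 + z); }
static inline double ccl_z_of_a(double a) { return 1.0 / a - 1.0; }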

Non-linear power spectrum for angular Cells and correlations

Currently all angular power spectra are computed by LOS-integrating the linear matter power spectrum against the different window functions. The rationale is that what goes into the integral should be P(k,z1,z2), which for linear theory is literally P(k,z1,z2)=sqrt(P(k,z1)*P(k,z2)). This is not the case for the non-linear power spectrum (e.g. see https://arxiv.org/abs/1612.00770); however, P(k,z) is the only quantity we currently compute for the non-linear power spectrum.

We could patch this issue very easily by approximating P(k,z1,z2)=sqrt(P(k,z1)*P(k,z2)) for the non-linear power spectrum as well, and revisit this question later on if we think it's relevant. Note also that this issue does not arise in the Limber approximation that we currently have implemented.
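
A minimal sketch of the proposed patch (pk_nl stands in for CCL's equal-time non-linear power spectrum evaluator):

#include <math.h>

/* Geometric-mean approximation for the unequal-time power spectrum. */
double pk_nl_unequal_time(double k, double a1, double a2,
                          double (*pk_nl)(double, double))
{
    return sqrt(pk_nl(k, a1) * pk_nl(k, a2));
}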

Unless people disagree I could make these changes pretty quickly.

likelihood interface and data structures

One key application of the CCL will be as a theory backend for MCMC analyses. For this, a user may want to specify the angular binning (in \ell or \theta), a set of CCL_ClTracer structures, and which (cross-)correlations of these tracers to compute.
It would be helpful if the CCL contained the necessary data structures and book-keeping code, as this will likely be common to at least LSS and WL.

Here are some ideas for the data structure (a rough C sketch follows the lists below), but it would be good to hear other ideas and to think about how to make the book-keeping easy.

Minimum data structure:

  • pointers for vectors of \ell and \theta bins
  • flag whether to compute power spectra or correlation functions
  • lists of the CCL_ClTracer structures (preferably separated by type)
    (all immutable)

optional1:

  • specify which cross-correlations between CCL_ClTracers to evaluate
  • tables and splines for auxiliary, cosmology-independent functions
  • pointers to systematics models (grouped by tracer type)
    (all immutable)

optional2:

  • splined versions of the angular power spectra to speed up the correlation function computation
  • parameters for systematics models
    (mutable, to be updated as relevant parameters change)
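
A rough sketch of the minimum structure plus the first optional item, with a fixed maximum number of tracers to keep allocation simple (all names are hypothetical):

#define CCL_MAX_TRACERS 16

typedef struct CCL_ClTracer CCL_ClTracer;  /* opaque; defined elsewhere in CCL */

typedef enum { CCL_COMPUTE_CL, CCL_COMPUTE_XI } ccl_obs_kind;

typedef struct {
    /* minimum structure (immutable after construction) */
    const double *ell_bins;   int n_ell;     /* \ell bins */
    const double *theta_bins; int n_theta;   /* \theta bins */
    ccl_obs_kind kind;   /* power spectra or correlation functions */
    const CCL_ClTracer *tracers[CCL_MAX_TRACERS];
    int n_tracers;
    /* optional: which (i, j) tracer cross-correlations to evaluate */
    int pair_i[CCL_MAX_TRACERS * CCL_MAX_TRACERS];
    int pair_j[CCL_MAX_TRACERS * CCL_MAX_TRACERS];
    int n_pairs;
} ccl_likelihood_workspace;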

Does the basic idea for the use case make sense, or would you describe this application differently?
For the minimum structure, is it OK to impose a maximum on the number of tracers supported, to make the allocation easy (rather than using linked lists or C acrobatics to mimic dynamic array sizes)?
What other features should we plan for, and what other information should be stored?

Unit tests and integration with Travis CI

@dkirkby suggested using Python unit tests for integration with Travis CI. The plan is to use astropy to check the accuracy of the parts that are already implemented there. This will be implemented in the travis_integration branch.

Signalling parameter-dependence of functions and validity of parameter values

As models and cosmological parameters are added beyond vanilla LCDM, it will become necessary to signal which functions support certain new parameters, and which values of those parameters are valid. This could be done through documentation alone, but it would be more robust if the CCL code was able to perform validity checks itself.

Being able to check which parameters are supported/required by a function would also be useful for likelihood/sampling codes, which can use this information to define fast/slow parameters, or to cache the results of expensive calculations. Monte Carlo analyses that propose invalid values should also be given enough information to fail gracefully.

The following types of test are likely to be useful:

  1. Whether a function depends on a given parameter (e.g. H(z) does not depend on sigma_8)
  2. Whether a parameter or certain parameter values are supported by the function (e.g. does the function assume Omega_K = 0?)
  3. Whether the parameter value is in a physically valid range (e.g. negative h)
  4. Whether the parameter value is in a range that has been validated for precision (e.g. the mass function is unlikely to be accurate for n_s = 2)

The results of some of these tests may be conditional on the values of other parameters (e.g. degraded precision on parameters derived from the power spectrum when w != -1, Omega_K != 0).

While these tests could be hard-coded in each function, e.g. using assertions, this will significantly increase the maintenance burden and complexity of the code as new parameters are added. Ideally, a simple, low-maintenance way of enforcing validity conditions on parameters and functions would be incorporated into the API.
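
One possible low-maintenance pattern, sketched here with hypothetical names and illustrative rules: each function declares a bitmask of the parameters it depends on, and a single checker validates values in one place instead of per-function assertions.

enum {
    CCL_DEP_H       = 1 << 0,
    CCL_DEP_SIGMA8  = 1 << 1,
    CCL_DEP_OMEGA_K = 1 << 2,
};

typedef struct { double h, sigma8, Omega_k; } ccl_params_sketch;

/* e.g. H(z) depends on h and Omega_k, but not sigma_8 (test 1) */
static const unsigned CCL_HUBBLE_DEPS = CCL_DEP_H | CCL_DEP_OMEGA_K;

static int ccl_params_valid_for(const ccl_params_sketch *p, unsigned deps)
{
    if ((deps & CCL_DEP_H) && p->h <= 0.0)
        return 0;  /* physically invalid (test 3) */
    if ((deps & CCL_DEP_OMEGA_K) && p->Omega_k != 0.0)
        return 0;  /* unsupported by this function (test 2) */
    return 1;
}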

Curvature

Several functions need to be generalized to K!=0. @elisachisari and I have, for the moment, identified ccl_luminosity_distance and all the line-of-sight projections needed to compute the C_ells. It would also be handy to add a function to ccl_background.c that computes ccl_comoving_angular_distance alongside ccl_comoving_radial_distance.
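
For reference, the comoving angular distance follows from the comoving radial distance chi via the standard curvature relation; a minimal sketch (K in 1/Mpc^2, chi in Mpc), not CCL's actual implementation:

#include <math.h>

double comoving_angular_distance(double chi, double K)
{
    if (K > 0.0) return sin(sqrt(K) * chi) / sqrt(K);     /* closed */
    if (K < 0.0) return sinh(sqrt(-K) * chi) / sqrt(-K);  /* open */
    return chi;                                           /* flat */
}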

Theorist-level observables: C(l)s

  • weak lensing tomography power spectra
  • clustering tomography power spectra, Limber + linear bias
  • galaxy-galaxy lensing power spectra, Limber
  • clustering tomography power spectra, exact + user supplied bias

Improve accessibility for new developers

Please ignore this issue until after the CCL release, so that it does not bog you all down :).

New developers could more easily join the project with a few straightforward additions to the GitHub page. Specifically, on either or both of the README and the wiki, we should provide some direction with a "points of entry" section: if one wanted to learn the structure of CCL, this would point to which files to look at first. This could alternatively be done inside READMEs in specific directories, such as within src/.

Additionally, the issues page can become a great resource for people joining up, since they can use open issues as starting points for contributing. Tagging issues with descriptive labels achieves this without actually having to do any extra work. For a great example of how this is done, check out the issues page for matplotlib.

Improve status handling in new branches

The following branches might need some updates in terms of handling the status and gracefully exiting in case of errors:

  • angular_cls
  • LSST_specs
  • correlation

document units *everywhere*

See e.g. ccl_background.h:

double ccl_luminosity_distance(ccl_cosmology * cosmo, double a);
OK, I can guess what this is, but the units need to be documented, ideally in the header file.
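
For example, something like this in the header (the Mpc unit here is an assumption; the point is that the unit is stated next to the declaration):

/**
 * Computes the luminosity distance.
 * @param cosmo cosmology object
 * @param a scale factor (dimensionless)
 * @return luminosity distance in Mpc
 */
double ccl_luminosity_distance(ccl_cosmology * cosmo, double a);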

omega_g

In ccl_parameters_fill_initial, omega_g is set to 1.7E-5. Assuming this is the photon density omega_g = Omega_g h^2, this would be incorrect: for T_CMB=2.7255 K it should be 2.47E-5.
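
A quick self-contained check of the 2.47E-5 figure from standard constants, computing Omega_g h^2 = rho_gamma / (rho_crit/h^2) with rho_gamma = 4 sigma_SB T^4 / c^3:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double sigma_SB = 5.670374419e-8; /* W m^-2 K^-4 */
    const double c   = 2.99792458e8;        /* m/s */
    const double G   = 6.67430e-11;         /* m^3 kg^-1 s^-2 */
    const double Mpc = 3.085677581e22;      /* m */
    const double T   = 2.7255;              /* K */
    const double pi  = 4.0 * atan(1.0);

    double rho_gamma   = 4.0 * sigma_SB * pow(T, 4) / (c * c * c);   /* kg/m^3 */
    double H0_over_h   = 1.0e5 / Mpc;                                /* s^-1 */
    double rho_crit_h2 = 3.0 * H0_over_h * H0_over_h / (8.0 * pi * G);

    printf("Omega_g h^2 = %.3e\n", rho_gamma / rho_crit_h2);  /* ~2.47e-5 */
    return 0;
}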

Make actual unit tests

Currently the code runs some tests upon make test, but in some of them not even the status flags are checked, let alone the output (as far as I can tell; correct me if wrong).
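
A minimal sketch of what a stronger test could look like: check the status flag and compare the output to a benchmark value. The signature of ccl_comoving_radial_distance and the reference value are assumptions.

#include <assert.h>
#include <math.h>

#define CHI_REF 3303.5  /* hypothetical benchmark value, Mpc */

void test_comoving_distance(ccl_cosmology *cosmo)
{
    int status = 0;
    double chi = ccl_comoving_radial_distance(cosmo, 0.5, &status);
    assert(status == 0);                       /* the call succeeded */
    assert(fabs(chi / CHI_REF - 1.0) < 1e-4);  /* and is accurate */
}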

Python Power Spectrum for scalar k

I have run into some trouble when attempting to use a scalar value for k in a jupyter notebook.
I am using the following cosmology:

p = ccl.Parameters(Omega_c=0.27, Omega_b=0.045, h=0.67, A_s=2e-9, n_s=0.96)
cosmo = ccl.Cosmology(p)

When attempting to calculate the power spectrum at a specific k value, using a length one list or array works, but using a scalar for the same value does not always work.
As an example with the linear power spectrum, using k=[1e-4] or k=np.array([1e-4]) returns a value of ~2027, but using k=1e-4 causes the notebook to crash, and the following message appears in the terminal.

gsl: interp.c:150: ERROR: interpolation error
Default GSL error handler invoked.
[I 22:09:33.186 NotebookApp] KernelRestarter: restarting kernel (1/5)

If I use k=[0.234], a value of ~2241 is returned; however, if I try k=0.234, ~7.9 is returned.
I'm not entirely certain what is going on, but the power spectrum functions do not seem to be correctly interpreting the value of k when it is passed as a scalar.

Using a list or numpy array to pass a single value is a simple workaround, but I thought I should point out that this is happening.

License?

We need to think about the license at some point. What would be a good choice? Is there a commonly adopted license by DESC?
