
bilby's People

Contributors

adivijaykumar, asb5468, bengpatterson, bruce-edelman, ceciliogq, colmtalbot, cplb, dbkeitel, duncanmmacleod, gregoryashton, hoyc1, isaaclegred, isobelmarguarethe, johnveitch, josh-willis, kwwette, litingxiao, marcarene, mattcarney106, mattpitkin, mj-will, moritzthomashuebner, mpuerrer, nsar16, oliviawilk, plasky, rorysmith, smorisaki, transientlunatic, vivienr


bilby's Issues

Default conversion functions don't generate additional parameters as expected

Based on the comment here, I expected that creating a default BBHPriorDict, deleting the luminosity_distance key, and adding a redshift prior would cause luminosity distance to be calculated from redshift during sampling. That doesn't seem to be happening. Instead, my samples dictionary doesn't have a key for luminosity distance.

I'm using the latest version of Bilby (1.4.1).

Code to reproduce:

from bilby.gw.prior import BBHPriorDict, UniformSourceFrame

prior = BBHPriorDict()
# Replace the default luminosity_distance prior with a redshift prior
del prior["luminosity_distance"]
prior["redshift"] = UniformSourceFrame(minimum=0, maximum=0.5, name="redshift")
samples = prior.sample(10)
samples["luminosity_distance"]  # KeyError: luminosity_distance was not generated

Something wrong with bns_eos_example.py

I want to use bns_eos_example.py, but I can't run it successfully. When I run the code, it throws some warnings (screenshot omitted). That is not a big problem; the big problem is that the program seems to pause there, and when I debug the code, I find that it seems to enter an infinite loop.

I use Ubuntu 20.04 and Python 3.8.10. I installed bilby and all the other packages with pip, at their default versions.

Some problems with Interferometer.set_strain_data_from_frame_file().

I want to use Interferometer.set_strain_data_from_frame_file() to load frame files that I made myself (GWTC-1 data, cut to 4 s), but I get an error (screenshot omitted).
Reading the source code of set_strain_data_from_frame_file(), I find that 'channel' is set to 'None' by default.
Is my frame file in the wrong format? Can you give some examples?
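
For what it's worth, the likely cause is that the reader needs the name of the channel stored in the frame file. A sketch of a call with an explicit channel (the parameter names follow the bilby API as I understand it; the path and channel string are placeholders that must match your file):

    import bilby

    ifo = bilby.gw.detector.get_empty_interferometer("H1")
    ifo.set_strain_data_from_frame_file(
        frame_file="my_event_4s.gwf",       # hypothetical path to your frame file
        sampling_frequency=4096,
        duration=4,
        channel="H1:GWOSC-4KHZ_R1_STRAIN",  # must match the channel name in the file
    )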

save_data psd file saves asd

The lines here:

np.savetxt(filename_psd,
           np.array(
               [self.strain_data.frequency_array,
                self.amplitude_spectral_density_array]).T,
           header='f h(f)')

save data to filename_psd, but even though the file is called "psd", it actually contains the ASD.

The fix would be to change

self.amplitude_spectral_density_array]).T,

to use self.power_spectral_density_array instead.
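
The corrected call would then read (the same lines with the PSD attribute swapped in):

    np.savetxt(filename_psd,
               np.array(
                   [self.strain_data.frequency_array,
                    self.power_spectral_density_array]).T,
               header='f h(f)')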

[QUESTION] Bilby parameter estimation: only the first node utilizes CPUs; the remaining nodes are inactive, or the whole process gets stuck

I'm using bilby for parameter estimation of GW150914, but I'm encountering some issues. In my bilby script, I've set ncpus=80. Additionally, my sbatch script is configured as follows:

#SBATCH --nodes=5
#SBATCH --ntasks-per-node=1
#SBATCH --time=40:00:00
#SBATCH --cpus-per-task=16

However, I've noticed that only the first node shows 880% CPU utilization. Does this imply that only about 8 CPU cores are actually in use? Furthermore, when I check the remaining 4 nodes, I don't see any CPU usage from my processes, indicating that my script is only running on the first node. Is there an issue with my setup?
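
For what it's worth, bilby's ncpus/npool parallelisation is (to my understanding) built on Python multiprocessing, which cannot span nodes, so the pool size and a single-node SLURM allocation need to agree. A sketch of a consistent request (the value 16 is only an example and should match the ncpus setting in the bilby script):

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=40:00:00
#SBATCH --cpus-per-task=16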

Limitations of plot_interferometer_waveform_posterior() when different injection and recovery models are used

I am new to Bilby; however, when using the code for my project I noticed that the plot_interferometer_waveform_posterior() function uses only the recovery model's waveform generator when plotting results. In particular, the following lines are responsible; the waveform generator initialised here is subsequently used in the plotting routine:

waveform_generator = self.waveform_generator_class(
            duration=self.duration, sampling_frequency=self.sampling_frequency,
            start_time=self.start_time,
            frequency_domain_source_model=self.frequency_domain_source_model,
            time_domain_source_model=self.time_domain_source_model,
            parameter_conversion=self.parameter_conversion,
            waveform_arguments=self.waveform_arguments)

This is not an issue if the same injection and recovery waveforms are used, but it limits the applicability of this function in more complicated scenarios where the models differ. I would like to suggest the following options that could be implemented to account for this:

  • Allow passing the injection's waveform generator to plot_interferometer_waveform_posterior(). If not passed, the argument defaults to the recovery waveform generator with the appropriate injection parameters.
  • Allow saving the injected model in the result, for example in the meta_data dictionary. It could then be accessed straightforwardly from plot_interferometer_waveform_posterior(). Storing the injection waveform in the result could also be useful for quickly regenerating the injected model, especially when the waveform is not one of the GW approximants but, say, a numerical-relativity waveform.

I would be happy to make the change through a PR, depending on which option the developers think is most suitable.
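
For concreteness, option 1 might look roughly like the following (a sketch, not the actual bilby implementation; the argument name injection_waveform_generator is hypothetical):

    def plot_interferometer_waveform_posterior(
            self, interferometer, injection_waveform_generator=None, **kwargs):
        # Fall back to the current behaviour: rebuild the recovery generator.
        if injection_waveform_generator is None:
            injection_waveform_generator = self.waveform_generator_class(
                duration=self.duration, sampling_frequency=self.sampling_frequency,
                start_time=self.start_time,
                frequency_domain_source_model=self.frequency_domain_source_model,
                time_domain_source_model=self.time_domain_source_model,
                parameter_conversion=self.parameter_conversion,
                waveform_arguments=self.waveform_arguments)
        # ... use injection_waveform_generator when plotting the injected signal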

Subclasses of JointPrior are re-initialized incorrectly from dictionaries and JSON

I am not a member of the LVK and, thus, cannot create issues or merge requests in the main repository.

I have encountered issues with subclassing JointPrior and JointPriorDist that occur when initializing from dict and json representations, as done when loading a bilby results object. In both instances, checks for MultivariateGaussian(Dist) are hardcoded without need, as far as I understand it. Changing these checks to (Base)JointPrior(Dist) solves the problems.

For json, this block

class BilbyJsonEncoder(json.JSONEncoder):
    def default(self, obj):
        from ..prior import MultivariateGaussianDist, Prior, PriorDict
        from ...gw.prior import HealPixMapPriorDist
        from ...bilby_mcmc.proposals import ProposalCycle

        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.floating):
            return float(obj)
        if isinstance(obj, PriorDict):
            return {"__prior_dict__": True, "content": obj._get_json_dict()}
        if isinstance(obj, (MultivariateGaussianDist, HealPixMapPriorDist, Prior)):
            return {
                "__prior__": True,
                "__module__": obj.__module__,
                "__name__": obj.__class__.__name__,
                "kwargs": dict(obj.get_instantiation_dict()),
            }

could be updated to

class BilbyJsonEncoder(json.JSONEncoder):
    def default(self, obj):
        from ..prior import BaseJointPriorDist, Prior, PriorDict
        from ...gw.prior import HealPixMapPriorDist
        from ...bilby_mcmc.proposals import ProposalCycle

        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.floating):
            return float(obj)
        if isinstance(obj, PriorDict):
            return {"__prior_dict__": True, "content": obj._get_json_dict()}
        if isinstance(obj, (BaseJointPriorDist, HealPixMapPriorDist, Prior)):
            return {
                "__prior__": True,
                "__module__": obj.__module__,
                "__name__": obj.__class__.__name__,
                "kwargs": dict(obj.get_instantiation_dict()),
            }

For dicts,

def from_dictionary(self, dictionary):
    mvgkwargs = {}
    for key in list(dictionary.keys()):
        val = dictionary[key]
        if isinstance(val, Prior):
            continue
        elif isinstance(val, (int, float)):
            dictionary[key] = DeltaFunction(peak=val)
        elif isinstance(val, str):
            cls = val.split("(")[0]
            args = "(".join(val.split("(")[1:])[:-1]
            try:
                dictionary[key] = DeltaFunction(peak=float(cls))
                logger.debug("{} converted to DeltaFunction prior".format(key))
                continue
            except ValueError:
                pass
            if "." in cls:
                module = ".".join(cls.split(".")[:-1])
                cls = cls.split(".")[-1]
            else:
                module = __name__.replace(
                    "." + os.path.basename(__file__).replace(".py", ""), ""
                )
            try:
                cls = getattr(import_module(module), cls, cls)
            except ModuleNotFoundError:
                logger.error(
                    "Cannot import prior class {} for entry: {}={}".format(
                        cls, key, val
                    )
                )
                raise
            if key.lower() in ["conversion_function", "condition_func"]:
                setattr(self, key, cls)
            elif isinstance(cls, str):
                if "(" in val:
                    raise TypeError("Unable to parse prior class {}".format(cls))
                else:
                    continue
            elif cls.__name__ in [
                "MultivariateGaussianDist",
                "MultivariateNormalDist",
            ]:
                dictionary.pop(key)
                if key not in mvgkwargs:
                    mvgkwargs[key] = cls.from_repr(args)
            elif cls.__name__ in ["MultivariateGaussian", "MultivariateNormal"]:
                mgkwargs = {
                    item[0].strip(): cls._parse_argument_string(item[1])
                    for item in cls._split_repr(
                        ", ".join(
                            [arg for arg in args.split(",") if "dist=" not in arg]
                        )
                    ).items()
                }
                keymatch = re.match(r"dist=(?P<distkey>\S+),", args)
                if keymatch is None:
                    raise ValueError(
                        "'dist' argument for MultivariateGaussian is not specified"
                    )
                if keymatch["distkey"] not in mvgkwargs:
                    raise ValueError(
                        f"MultivariateGaussianDist {keymatch['distkey']} must be defined before {cls.__name__}"
                    )
                mgkwargs["dist"] = mvgkwargs[keymatch["distkey"]]
                dictionary[key] = cls(**mgkwargs)
            else:
                try:
                    dictionary[key] = cls.from_repr(args)
                except TypeError as e:
                    raise TypeError(
                        "Unable to parse prior, bad entry: {} "
                        "= {}. Error message {}".format(key, val, e)
                    )
        elif isinstance(val, dict):
            try:
                _class = getattr(
                    import_module(val.get("__module__", "none")),
                    val.get("__name__", "none"),
                )
                dictionary[key] = _class(**val.get("kwargs", dict()))
            except ImportError:
                logger.debug(
                    "Cannot import prior module {}.{}".format(
                        val.get("__module__", "none"), val.get("__name__", "none")
                    )
                )
                logger.warning(
                    "Cannot convert {} into a prior object. "
                    "Leaving as dictionary.".format(key)
                )
                continue
        else:
            raise TypeError(
                "Unable to parse prior, bad entry: {} "
                "= {} of type {}".format(key, val, type(val))
            )
    self.update(dictionary)

could be updated to

def from_dictionary(self, dictionary):
        jpdkwargs = {}
        for key in list(dictionary.keys()):
            val = dictionary[key]
            if isinstance(val, Prior):
                continue
            elif isinstance(val, (int, float)):
                dictionary[key] = DeltaFunction(peak=val)
            elif isinstance(val, str):
                cls = val.split("(")[0]
                args = "(".join(val.split("(")[1:])[:-1]
                try:
                    dictionary[key] = DeltaFunction(peak=float(cls))
                    logger.debug("{} converted to DeltaFunction prior".format(key))
                    continue
                except ValueError:
                    pass
                if "." in cls:
                    module = ".".join(cls.split(".")[:-1])
                    cls = cls.split(".")[-1]
                else:
                    module = __name__.replace(
                        "." + os.path.basename(__file__).replace(".py", ""), ""
                    )
                try:
                    cls = getattr(import_module(module), cls, cls)
                except ModuleNotFoundError:
                    logger.error(
                        "Cannot import prior class {} for entry: {}={}".format(
                            cls, key, val
                        )
                    )
                    raise
                if key.lower() in ["conversion_function", "condition_func"]:
                    setattr(self, key, cls)
                elif isinstance(cls, str):
                    if "(" in val:
                        raise TypeError("Unable to parse prior class {}".format(cls))
                    else:
                        continue
                elif issubclass(cls, BaseJointPriorDist):
                    dictionary.pop(key)
                    if key not in jpdkwargs:
                        jpdkwargs[key] = cls.from_repr(args)
                elif issubclass(cls, JointPrior):
                    jpkwargs = {
                        item[0].strip(): cls._parse_argument_string(item[1])
                        for item in cls._split_repr(
                            ", ".join(
                                [arg for arg in args.split(",") if "dist=" not in arg]
                            )
                        ).items()
                    }
                    keymatch = re.match(r"dist=(?P<distkey>\S+),", args)
                    if keymatch is None:
                        raise ValueError(
                            "'dist' argument for JointPrior is not specified"
                        )

                    if keymatch["distkey"] not in jpdkwargs:
                        raise ValueError(
                            f"BaseJointPriorDist {keymatch['distkey']} must be defined before {cls.__name__}"
                        )

                    jpkwargs["dist"] = jpdkwargs[keymatch["distkey"]]
                    dictionary[key] = cls(**jpkwargs)
                else:
                    try:
                        dictionary[key] = cls.from_repr(args)
                    except TypeError as e:
                        raise TypeError(
                            "Unable to parse prior, bad entry: {} "
                            "= {}. Error message {}".format(key, val, e)
                        )
            elif isinstance(val, dict):
                try:
                    _class = getattr(
                        import_module(val.get("__module__", "none")),
                        val.get("__name__", "none"),
                    )
                    dictionary[key] = _class(**val.get("kwargs", dict()))
                except ImportError:
                    logger.debug(
                        "Cannot import prior module {}.{}".format(
                            val.get("__module__", "none"), val.get("__name__", "none")
                        )
                    )
                    logger.warning(
                        "Cannot convert {} into a prior object. "
                        "Leaving as dictionary.".format(key)
                    )
                    continue
            else:
                raise TypeError(
                    "Unable to parse prior, bad entry: {} "
                    "= {} of type {}".format(key, val, type(val))
                )
        self.update(dictionary)

Here, I have also renamed some variables to better reflect what is happening.

Some warnings during parameter estimation

When I use bilby for parameter estimation, the program finds the injected values, but it keeps printing:

UserWarning: Weights do not sum to 1 and have been renormalized.
Bilby WARNING : Failed to create dynesty run plot at checkpoint

How can I solve this problem?

check_signal_duration

I am using bilby for supernova waveforms, and my code that worked with version 1.1.3 no longer works with 2.1.1. The problem is the following.

In the function inject_signal in gw/detector/interferometer.py, self.check_signal_duration(parameters, raise_error) is called. This function assumes that the injections are binaries and fails if mass_1 and mass_2 are not among the parameters. This prevents using bilby for non-CBC waveforms unless one adds fake masses as parameters or removes the call manually.

With this tweak I can run the code, but I imagine something more elegant could be implemented in bilby.
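
For reference, the manual workaround amounts to something like the following sketch (BurstInterferometer is a hypothetical name; overriding the method simply disables the check):

    from bilby.gw.detector import Interferometer

    class BurstInterferometer(Interferometer):  # hypothetical subclass
        def check_signal_duration(self, parameters, raise_error=True):
            # Supernova/burst injections carry no mass_1/mass_2, so skip
            # the CBC-specific duration check entirely.
            pass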

Redundancy in HealPixMapPriorDist

When using a Healpix Map as a prior, there is a significant amount of resources and walltime dedicated to redundant normalization of the probability skymap. In the _sample method in HealPixMapPriorDist in bilby/gw/prior.py, the first three lines are dedicated to creating an array of indices, normalizing the skymap, and sampling from the skymap using np.random.choice:

pixel_choices = np.arange(self.npix)
pixel_probs = self._check_norm(self.prob)
sample_pix = np.random.choice(pixel_choices, size=size, p=pixel_probs, replace=True)

Using cProfile to measure the function calls, here is the breakdown of the cpu time of a default bilby_mcmc run with 1000 samples on a single core, with all priors being delta functions except RA and Dec, which are the default priors:

137633368 function calls (134823836 primitive calls) in 246.185 seconds

 Ordered by: internal time
 ncalls  tottime  percall  cumtime  percall filename:lineno(function)
 469984   81.276    0.000  102.027    0.000 /home/*/mambaforge/envs/bbh_inference/lib/python3.11/site-packages/bilby/gw/detector/interferometer.py:288(get_detector_response)
 939970   21.445    0.000   30.620    0.000 /home/*/mambaforge/envs/bbh_inference/lib/python3.11/site-packages/bilby/gw/utils.py:116(noise_weighted_inner_product)
2491277    8.056    0.000    8.056    0.000 {method 'reduce' of 'numpy.ufunc' objects}
 469984    7.458    0.000   35.602    0.000 /home/*/mambaforge/envs/bbh_inference/lib/python3.11/site-packages/bilby/gw/detector/interferometer.py:582(inner_product)
   2948    5.621    0.002  240.154    0.081 /home/*/mambaforge/envs/bbh_inference/lib/python3.11/site-packages/bilby/bilby_mcmc/sampler.py:1219(step)
 469984    4.994    0.000   27.122    0.000 /home/*/mambaforge/envs/bbh_inference/lib/python3.11/site-packages/bilby/gw/detector/interferometer.py:565(optimal_snr_squared)
 ...

Here is the output of an identical run where the RA and Dec priors were taken from a skymap using HealPixMapPriorDist. The skymap NSIDE is 1024, meaning the skymap has 1.2e7 pixels. The walltime taken by the first three lines of this method dwarfs everything else, and 30% of it is in the first two lines:

85554618 function calls (84271828 primitive calls) in 3561.656 seconds

 Ordered by: internal time
 ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  23033 2087.882    0.091 2089.279    0.091 {method 'choice' of 'numpy.random.mtrand.RandomState' objects}
  22013  479.156    0.022 1078.164    0.049 /home/*/mambaforge/envs/bbh_inference/lib/python3.11/site-packages/bilby/gw/prior.py:1470(_check_norm)
  22013  462.599    0.021  598.698    0.027 /home/*/mambaforge/envs/bbh_inference/lib/python3.11/site-packages/numpy/linalg/linalg.py:2342(norm)
  26789  279.157    0.010  279.157    0.010 {built-in method numpy.arange}
4257840  143.885    0.000  143.885    0.000 {method 'reduce' of 'numpy.ufunc' objects}
 155026   20.178    0.000   25.409    0.000 /home/*/mambaforge/envs/bbh_inference/lib/python3.11/site-packages/bilby/gw/detector/interferometer.py:288(get_detector_response)
 ...

From my cursory glance at the source code, the first two lines of _sample seem redundant: the skymap should be read-only, and normalization only needs to happen once, not before every sample. If the class is rewritten so that self._check_norm is applied to self.prob when the skymap is read from file during initialization, and the index array pixel_choices is stored as an instance attribute, there is a noticeable improvement in efficiency:

87854023 function calls (86528259 primitive calls) in 2317.059 seconds

 Ordered by: internal time
 ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  24833 2221.242    0.089 2222.324    0.089 {method 'choice' of 'numpy.random.mtrand.RandomState' objects}
 160690   20.592    0.000   25.780    0.000 /home/*/mambaforge/envs/bbh_inference/lib/python3.11/site-packages/bilby/gw/detector/interferometer.py:288(get_detector_response)
4292990    6.491    0.000    6.491    0.000 {method 'reduce' of 'numpy.ufunc' objects}
 321382    5.203    0.000    7.485    0.000 /home/*/mambaforge/envs/bbh_inference/lib/python3.11/site-packages/bilby/gw/utils.py:116(noise_weighted_inner_product)
 ...

Is there another reason why the skymap must be normalized before every sample?
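
For concreteness, the change described above might look roughly like this (a sketch against the attributes quoted above, not a tested patch):

    # In HealPixMapPriorDist.__init__, after the skymap is read from file:
    self.prob = self._check_norm(self.prob)    # normalize once, at load time
    self.pixel_choices = np.arange(self.npix)  # cache the pixel-index array

    # _sample then only needs the draw itself:
    def _sample(self, size=1):
        sample_pix = np.random.choice(
            self.pixel_choices, size=size, p=self.prob, replace=True
        )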

custom_likelihood error

Hi,
I am trying to write a custom likelihood for one of my projects. For a simple example, I am getting an error like:
'GaussianLikelihood' object has no attribute '_marginalized_parameters'

My definition is as follows:

class GaussianLikelihood(bilby.Likelihood):
    def __init__(self, x1,x2, y, function,sigma=None):
        """
        A general Gaussian likelihood - the parameters are inferred from the
        arguments of function

        Parameters
        ----------
        x1,x2, y: array_like
            The data to analyse
        sigma: float
            The standard deviation of the noise
        function:
            The python function to fit to the data. Note, this must take the
            dependent variable as its first argument. The other arguments
            will require a prior and will be sampled over (unless a fixed
            value is given).
        """
        self.x1 = x1
        self.x2 = x2
        self.y = y
        self.sigma = sigma
        self.N = len(x1)
        self.function = function

        # These lines of code infer the parameters from the provided function
        parameters = inspect.getargspec(function).args
        parameters.pop(0)
        self.parameters = dict.fromkeys(parameters)

    def log_likelihood(self):
        res = self.y - self.function(self.x1,self.x2, **self.parameters)
        return -0.5 * (np.sum((res / self.sigma)**2)
                       + self.N*np.log(2*np.pi*self.sigma**2))
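
The error occurs because the subclass never calls bilby.Likelihood.__init__, which is where internals such as _marginalized_parameters are set up. A minimal fix might look like the following (a sketch, assuming a recent bilby where Likelihood.__init__ accepts a parameters dict; note also that the first two function arguments must be stripped, since x1 and x2 are both passed explicitly in log_likelihood):

    import inspect
    import numpy as np
    import bilby

    class GaussianLikelihood(bilby.Likelihood):
        def __init__(self, x1, x2, y, function, sigma=None):
            # Infer the parameter names from the fitted function's signature;
            # getfullargspec replaces the removed inspect.getargspec, and the
            # first two arguments (the independent variables) are dropped.
            parameters = inspect.getfullargspec(function).args[2:]
            # Initialize the base class so that attributes such as
            # _marginalized_parameters are set up.
            super().__init__(parameters=dict.fromkeys(parameters))
            self.x1 = x1
            self.x2 = x2
            self.y = y
            self.sigma = sigma
            self.N = len(x1)
            self.function = function

        def log_likelihood(self):
            res = self.y - self.function(self.x1, self.x2, **self.parameters)
            return -0.5 * (np.sum((res / self.sigma) ** 2)
                           + self.N * np.log(2 * np.pi * self.sigma ** 2))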

ConditionalPriorDict: fix subset sampling for external 'DeltaFunction' dependencies

[I have no write access on the gitlab repo, so using this as a 'proxy']

Conditional priors cannot be sampled in a subset if one of the conditional dependencies lies outside the sampled set. This makes sense if that outside dependency has any 'uncertainty' (i.e. its prior is not fixed/DeltaFunction), but it should work fine if the dependency outside the subset is fixed, i.e. a DeltaFunction.

Code example showing this

#!/usr/bin/env python3
import numpy as np
from bilby.core.prior import (ConditionalPriorDict, ConditionalUniform,
                              IllegalConditionsException, Uniform)


def _tp_conditional_uniform(ref_params, period):
    min_ref, max_ref = ref_params["minimum"], ref_params["maximum"]
    max_ref = np.minimum(max_ref, min_ref + period)
    return {"minimum": min_ref, "maximum": max_ref}


p0 = 68400.0
prior = ConditionalPriorDict(
    {
        "tp": ConditionalUniform(
            condition_func=_tp_conditional_uniform, minimum=0, maximum=2 * p0
        )
    }
)

# ---------- 0. Sanity check: sample full prior
prior["period"] = p0
samples2d = prior.sample(1000)
assert samples2d["tp"].max() < p0

# ---------- 1. Subset sampling with external delta-prior
print("Test 1: Subset-sampling conditionals for fixed 'externals':")
prior["period"] = p0
try:
    samples1d = prior.sample_subset(["tp"], 1000)
    assert samples1d["tp"].max() < p0
    print(f"OK.")
except IllegalConditionsException as e:
    print(f"FAIL ==> Should work, but caught exception: {type(e).__name__}: {e}")

# ---------- 2. Subset sampling with external uniform prior
prior["period"] = Uniform(minimum=p0, maximum=2 * p0)
print("Test 2: Subset-sampling conditionals for 'external' uncertainties:")
try:
    samples1d_fail = prior.sample_subset(["tp"], 1000)
    print("This should not work!")
except IllegalConditionsException as e:
    print(f"OK. ==> Correctly failed, caught exception: {type(e).__name__}: {e}")

The attached commit fixes this. The fix works by including all DeltaFunction priors in the subset ConditionalPriorDict, while still only sampling the requested keys.
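
The idea behind the fix can be sketched as a standalone helper (an illustration of the described approach, not the attached patch itself):

    from bilby.core.prior import ConditionalPriorDict, DeltaFunction

    def sample_subset_with_fixed_externals(prior, keys, size=None):
        # Collect all fixed (DeltaFunction) priors so that conditional priors
        # in `keys` can resolve dependencies outside the requested subset.
        fixed = {k for k, p in prior.items() if isinstance(p, DeltaFunction)}
        helper = ConditionalPriorDict({k: prior[k] for k in set(keys) | fixed})
        samples = helper.sample(size)
        # Return only the requested keys, as sample_subset would.
        return {k: samples[k] for k in keys}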

0001-ConditionalPriorDict-fix-subset-sampling-for-conditi.txt

@GregoryAshton @ColmTalbot

Return numpy array from priors

Since I cannot raise issues on gitlab, I am raising one here (using the bilby version from gitlab). Currently, all sampled values returned from bilby.run_sampler for the priors are scalars. However, I am working on a prior which is a set of 12 values that is then used to calculate a scalar value inside the likelihood.

I have implemented a MultiSampleUniform class, derived from bilby.core.prior.Uniform, which takes a size argument and returns a sampled array of that size. Up to this point everything works fine. But when I use this prior file in bilby.run_sampler for PE, only a scalar value (the value at index 0) is passed instead of the array, and all other values are discarded. Is there any solution for this?

Below is an MWE. The following is the content of a prior file called custom.prior:

param1 = Uniform(name='param1',minimum=40., maximum=189)
param2 = Uniform(name='param2', minimum=15., maximum=75.)
param3 = Uniform(name='param3', minimum=-3., maximum=-0.5)
param4 = mymodule.MultiSampleUniform(name='param4', minimum=0., maximum=2, size=20)
param5 = mymodule.MultiSampleUniform(name='param5', minimum=0., maximum=2, size=20)
param6 = mymodule.MultiSampleUniform(name='param6', minimum=0., maximum=2, size=20)

the output is

{'param1': 178.3648888150874,
 'param2': 26.174208848862058,
 'param3': -2.372850327248928,
 'param4': array([0.65831787, 1.90676416, 1.34471272, 0.48157445, 1.43151775,
        1.55069804, 0.58756039, 0.96520768, 1.75337026, 0.37881952]),
 'param5': array([1.64777325, 0.35046741, 1.45417774, 0.19867297, 0.57448372,
        0.20531799, 0.96917794, 1.59248033, 1.49513293, 0.99586554]),
 'param6': array([0.81157534, 1.47165483, 1.80407503, 0.40499179, 1.55700099,
        1.50306299, 0.7416611 , 1.53320774, 1.69269496, 1.69418185])}

which is expected.

But when this prior file is passed into bilby.run_sampler(...), the parameter values are reduced to just

{'param1': 178.3648888150874,
 'param2': 26.174208848862058,
 'param3': -2.372850327248928,
 'param4': 0.65831787,
 'param5': 1.64777325,
 'param6': 0.81157534}
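
For reference, a reconstruction of what the MultiSampleUniform class might look like (mymodule is not shown in the issue, so this is an assumed sketch):

    import numpy as np
    from bilby.core.prior import Uniform

    class MultiSampleUniform(Uniform):
        """Uniform prior whose sample() returns an array of `size` draws."""

        def __init__(self, size=1, **kwargs):
            super().__init__(**kwargs)
            self.size = size

        def sample(self, size=None):
            # Ignore the requested size and always draw `self.size` values.
            return np.random.uniform(self.minimum, self.maximum, self.size)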

errors regarding redundant parameters

While running simple examples from the website on injecting and recovering data, I get errors like "mass_1, mass_2 are redundant parameters" or "your sampling set contains redundant parameters". In the previous version of bilby these scripts ran fine, but after I reinstalled bilby these errors show up.

Small suggestion to improve the I/O efficiency of write_chains_to_file in emcee.py

Hey there!

I am using your super useful Emcee wrapper! Thanks a lot for developing such an amazing tool.

I noticed that the current emcee core spends a lot of time writing the chains to file, in the function

def write_chains_to_file(self, sample):
    chain_file = self.checkpoint_info.chain_file
    temp_chain_file = chain_file + '.temp'
    if os.path.isfile(chain_file):
        copyfile(chain_file, temp_chain_file)
    if self.prerelease:
        points = np.hstack([sample.coords, sample.blobs])
    else:
        points = np.hstack([sample[0], np.array(sample[3])])
    with open(temp_chain_file, "a") as ff:
        for ii, point in enumerate(points):
            ff.write(self.checkpoint_info.chain_template.format(ii, *point))
    shutil.move(temp_chain_file, chain_file)

However, after some slight modifications, I found I could cut the chain-writing time for a 5000-step sampling run by almost 60%. The modified version is below. I am not entirely sure whether the change breaks anything; so far I have tested it and found no issues. It essentially avoids one file copy and concatenates the output in a slightly different manner.

def write_chains_to_file(self, sample):
    chain_file = self.checkpoint_info.chain_file
    temp_chain_file = chain_file + '.temp'

    if self.prerelease:
        points = np.hstack([sample.coords, sample.blobs])
    else:
        points = np.hstack([sample[0], np.array(sample[3])])
   
    data_to_write = "\n".join(self.checkpoint_info.chain_template.format(ii, *point) for ii, point in enumerate(points))

    with open(temp_chain_file, "w") as ff:
        ff.write(data_to_write)

    with open(temp_chain_file, 'rb') as ftemp, open(chain_file, 'ab') as fchain:
        shutil.copyfileobj(ftemp, fchain)

    os.remove(temp_chain_file)

Not sure if this helps, but in my case (5000 steps) it saved about one minute.

How to implement the new BH-NS waveform approximants

Hello,

thank you for giving me the opportunity to open an issue.

I would like to implement in BILBY, following this tutorial, a waveform describing a black hole - neutron star merger, i.e. PhenomBHNS or alternatively SEOBNRv4_ROM_NRTidalv2_NSBH.

Obviously, when I change the string in lines 42-43 of the tutorial, i.e.

waveform_arguments = dict(waveform_approximant='IMRPhenomPv2_NRTidal',
                          reference_frequency=50., minimum_frequency=40.0)

to

waveform_arguments = dict(waveform_approximant='PhenomBHNS',
                          reference_frequency=50., minimum_frequency=40.0)

the code gives me errors because it does not recognize the PhenomBHNS string.

I think my issue can be split into two questions:

  1. Are PhenomBHNS or SEOBNRv4_ROM_NRTidalv2_NSBH included in LALSuite?

  2. If yes, how can I update the waveform dictionary?
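
Regarding question 1, one way to check whether an approximant name is known to LALSuite is to query lalsimulation directly (a sketch; GetApproximantFromString raises a RuntimeError for unrecognized names, as far as I know):

    import lalsimulation as lalsim

    for name in ["IMRPhenomPv2_NRTidal", "SEOBNRv4_ROM_NRTidalv2_NSBH", "PhenomBHNS"]:
        try:
            enum_value = lalsim.GetApproximantFromString(name)
            print(f"{name}: recognized (approximant enum {enum_value})")
        except RuntimeError:
            print(f"{name}: not recognized by lalsimulation")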

Best Regards
