This page was automatically generated from a Jupyter notebook.



Population Inference on GWTC-1

The first gravitational-wave transient catalog “GWTC-1” includes all compact binary coalescences observed during Advanced LIGO/Virgo’s first and second observing runs.

GWPopulation builds upon bilby (arXiv:1811.02042) to provide simple, modular, user-friendly population inference.

There are many implemented models and an example of defining custom models is included below. In this example we use:

  • A mass distribution in primary mass and mass ratio from Talbot & Thrane (2018) (arXiv:1801.02699). This is equivalent to the PowerLaw + Peak model used in LVK analyses, but omits the low-mass smoothing for computational efficiency.

  • Half-Gaussian + isotropic spin tilt distribution from Talbot & Thrane (2017) (arXiv:1704.08370).

  • Beta spin magnitude distribution from Wysocki+ (2018) (arXiv:1805.06442).

  • Each of these is also available with independent but identically distributed spins.

  • Redshift evolution model as in Fishbach+ (2018) (arXiv:1805.10270).

For more information see the git repository and the documentation.

Install some packages

  • gwpopulation has the population model code.

  • nestle is a sampler that runs quickly for simple problems, but may not give publication-quality results.

If you’re using colab.research.google.com you will want to choose a GPU-accelerated runtime.

"Runtime" -> "Change runtime type" -> "Hardware accelerator = GPU"
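
To confirm the accelerator is visible, you can list the devices JAX can see. This is a quick check that is not part of the original notebook; it assumes jax is importable, as it is on Colab GPU runtimes.

import jax

# A GPU runtime should report a CUDA device here, not just a CPU device.
print(jax.devices())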

[1]:
!pip install --pre "gwpopulation>0.10" nestle
Requirement already satisfied: gwpopulation>0.10 in /usr/local/lib/python3.10/dist-packages (1.0.0rc1)
Requirement already satisfied: nestle in /usr/local/lib/python3.10/dist-packages (0.2.0)
Requirement already satisfied: numpy>=1.16 in /usr/local/lib/python3.10/dist-packages (from gwpopulation>0.10) (1.25.2)
Requirement already satisfied: scipy in /usr/local/lib/python3.10/dist-packages (from gwpopulation>0.10) (1.11.4)
Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from gwpopulation>0.10) (1.5.3)
Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from gwpopulation>0.10) (4.66.2)
Requirement already satisfied: astropy in /usr/local/lib/python3.10/dist-packages (from gwpopulation>0.10) (5.3.4)
Requirement already satisfied: bilby>=2.2.0 in /usr/local/lib/python3.10/dist-packages (from gwpopulation>0.10) (2.2.3)
Requirement already satisfied: cached-interpolate in /usr/local/lib/python3.10/dist-packages (from gwpopulation>0.10) (0.2.0)
Requirement already satisfied: bilby.cython>=0.3.0 in /usr/local/lib/python3.10/dist-packages (from bilby>=2.2.0->gwpopulation>0.10) (0.4.2)
Requirement already satisfied: dynesty>=2.0.1 in /usr/local/lib/python3.10/dist-packages (from bilby>=2.2.0->gwpopulation>0.10) (2.1.3)
Requirement already satisfied: emcee in /usr/local/lib/python3.10/dist-packages (from bilby>=2.2.0->gwpopulation>0.10) (3.1.4)
Requirement already satisfied: corner in /usr/local/lib/python3.10/dist-packages (from bilby>=2.2.0->gwpopulation>0.10) (2.2.2)
Requirement already satisfied: matplotlib<3.8.0 in /usr/local/lib/python3.10/dist-packages (from bilby>=2.2.0->gwpopulation>0.10) (3.7.1)
Requirement already satisfied: dill in /usr/local/lib/python3.10/dist-packages (from bilby>=2.2.0->gwpopulation>0.10) (0.3.8)
Requirement already satisfied: h5py in /usr/local/lib/python3.10/dist-packages (from bilby>=2.2.0->gwpopulation>0.10) (3.9.0)
Requirement already satisfied: attrs in /usr/local/lib/python3.10/dist-packages (from bilby>=2.2.0->gwpopulation>0.10) (23.2.0)
Requirement already satisfied: pyerfa>=2.0 in /usr/local/lib/python3.10/dist-packages (from astropy->gwpopulation>0.10) (2.0.1.1)
Requirement already satisfied: PyYAML>=3.13 in /usr/local/lib/python3.10/dist-packages (from astropy->gwpopulation>0.10) (6.0.1)
Requirement already satisfied: packaging>=19.0 in /usr/local/lib/python3.10/dist-packages (from astropy->gwpopulation>0.10) (24.0)
Requirement already satisfied: python-dateutil>=2.8.1 in /usr/local/lib/python3.10/dist-packages (from pandas->gwpopulation>0.10) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->gwpopulation>0.10) (2023.4)
Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib<3.8.0->bilby>=2.2.0->gwpopulation>0.10) (1.2.0)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib<3.8.0->bilby>=2.2.0->gwpopulation>0.10) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib<3.8.0->bilby>=2.2.0->gwpopulation>0.10) (4.49.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib<3.8.0->bilby>=2.2.0->gwpopulation>0.10) (1.4.5)
Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib<3.8.0->bilby>=2.2.0->gwpopulation>0.10) (9.4.0)
Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib<3.8.0->bilby>=2.2.0->gwpopulation>0.10) (3.1.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.8.1->pandas->gwpopulation>0.10) (1.16.0)

Get the data

Pull the posterior samples for each of the events from the LIGO DCC.

[2]:
!wget https://dcc.ligo.org/public/0157/P1800370/002/GWTC-1_sample_release.tar.gz
!tar -xvzf GWTC-1_sample_release.tar.gz
--2024-03-13 16:49:42--  https://dcc.ligo.org/public/0157/P1800370/002/GWTC-1_sample_release.tar.gz
Resolving dcc.ligo.org (dcc.ligo.org)... 131.215.125.144
Connecting to dcc.ligo.org (dcc.ligo.org)|131.215.125.144|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 89895447 (86M) [application/x-gzip]
Saving to: ‘GWTC-1_sample_release.tar.gz.1’

GWTC-1_sample_relea 100%[===================>]  85.73M  18.5MB/s    in 5.7s

2024-03-13 16:49:48 (15.0 MB/s) - ‘GWTC-1_sample_release.tar.gz.1’ saved [89895447/89895447]

./GWTC-1_sample_release/
./GWTC-1_sample_release/GW170729_GWTC-1.hdf5
./GWTC-1_sample_release/GW151226_GWTC-1.hdf5
./GWTC-1_sample_release/GW170809_GWTC-1.hdf5
./GWTC-1_sample_release/GW170104_GWTC-1.hdf5
./GWTC-1_sample_release/GW170817_GWTC-1.hdf5
./GWTC-1_sample_release/GW170608_GWTC-1.hdf5
./GWTC-1_sample_release/GW170814_GWTC-1.hdf5
./GWTC-1_sample_release/GW151012_GWTC-1.hdf5
./GWTC-1_sample_release/GW170809_priorChoices_GWTC-1.hdf5
./GWTC-1_sample_release/GW170818_GWTC-1.hdf5
./GWTC-1_sample_release/GW150914_GWTC-1.hdf5
./GWTC-1_sample_release/GW170823_GWTC-1.hdf5

Imports

Import the packages required for the script. We also set the backend for array operations to jax, which allows us to take advantage of just-in-time (JIT) compilation in addition to GPU parallelisation when available.

[3]:
%pylab inline

import h5py

import pandas as pd
from scipy.interpolate import interp1d
from astropy import cosmology, units

import bilby as bb
from bilby.core.prior import LogUniform, PriorDict, Uniform
from bilby.hyper.model import Model
import gwpopulation as gwpop
from gwpopulation.experimental.jax import JittedLikelihood, NonCachingModel

gwpop.set_backend("jax")

xp = gwpop.utils.xp
Populating the interactive namespace from numpy and matplotlib

Load posteriors

We’re using the posteriors from the GWTC-1 data release.

We need to change the names of the parameters to make them work with the code.

[4]:
parameter_translator = dict(
    mass_1_det="m1_detector_frame_Msun",
    mass_2_det="m2_detector_frame_Msun",
    luminosity_distance="luminosity_distance_Mpc",
    a_1="spin1",
    a_2="spin2",
    cos_tilt_1="costilt1",
    cos_tilt_2="costilt2",
)

posteriors = list()
priors = list()

file_str = "./GWTC-1_sample_release/GW{}_GWTC-1.hdf5"

events = [
    "150914",
    "151012",
    "151226",
    "170104",
    "170608",
    "170729",
    "170809",
    "170814",
    "170818",
    "170823",
]
for event in events:
    _posterior = pd.DataFrame()
    _prior = pd.DataFrame()
    with h5py.File(file_str.format(event)) as ff:
        for my_key, gwtc_key in parameter_translator.items():
            _posterior[my_key] = ff["IMRPhenomPv2_posterior"][gwtc_key]
            _prior[my_key] = ff["prior"][gwtc_key]
    posteriors.append(_posterior)
    priors.append(_prior)
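
Next we build an interpolant to convert luminosity distance to redshift for the Planck15 cosmology, along with the corresponding sampling prior expressed as a function of redshift, starting from the \(p(d_L) \propto d_L^2\) distance prior used in the individual-event analyses.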
[5]:
luminosity_distances = np.linspace(1, 10000, 1000)
redshifts = np.array(
    [
        cosmology.z_at_value(cosmology.Planck15.luminosity_distance, dl * units.Mpc)
        for dl in luminosity_distances
    ]
)
dl_to_z = interp1d(luminosity_distances, redshifts)

luminosity_prior = luminosity_distances**2

dz_ddl = np.gradient(redshifts, luminosity_distances)

redshift_prior = interp1d(redshifts, luminosity_prior / dz_ddl / (1 + redshifts))

Add some weights to posterior

Make sure the posterior DataFrames contain the appropriate quantities.

We could also include a prior column: this is the prior used in the initial sampling stage, and is used to weight the samples in the likelihood (we do exactly this later for the redshift model).

[6]:
for posterior in posteriors:
    posterior["redshift"] = dl_to_z(posterior["luminosity_distance"])
    posterior["mass_1"] = posterior["mass_1_det"] / (1 + posterior["redshift"])
    posterior["mass_2"] = posterior["mass_2_det"] / (1 + posterior["redshift"])
    posterior["mass_ratio"] = posterior["mass_2"] / posterior["mass_1"]

Specify the model

Choose which population models we want to use.

For the mass distribution we use

gwpopulation.models.mass.two_component_primary_mass_ratio.

This is a power-law + Gaussian primary mass distribution with a power-law mass ratio distribution.

For spins we use

gwpopulation.models.spin.iid_spin

Here the spins of the two black holes are independently and identically distributed, with a beta distribution for the magnitudes and an isotropic + half-Gaussian mixture for the cosine tilts.

Note: We use the gwpopulation.experimental.jax.NonCachingModel as internal caching breaks the JIT functionality.

[7]:
model = NonCachingModel(
    [
        gwpop.models.mass.two_component_primary_mass_ratio,
        gwpop.models.spin.iid_spin,
    ]
)
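
As a quick sanity check we can evaluate the model directly on a dictionary of source parameters. This is a sketch that is not part of the original analysis; the hyperparameter values below are arbitrary (though consistent with the priors defined later).

# Sketch: evaluate the population model on a toy dataset.
test_data = dict(
    mass_1=xp.linspace(6, 60, 500),
    mass_ratio=xp.linspace(0.1, 1, 500),
    a_1=xp.linspace(0, 1, 500),
    a_2=xp.linspace(0, 1, 500),
    cos_tilt_1=xp.linspace(-1, 1, 500),
    cos_tilt_2=xp.linspace(-1, 1, 500),
)
model.parameters.update(dict(
    alpha=2.5, beta=1, mmin=5, mmax=45, lam=0.1, mpp=35, sigpp=4,
    gaussian_mass_maximum=100,
    amax=1, alpha_chi=2, beta_chi=2, xi_spin=0.5, sigma_spin=1,
))
prob = model.prob(test_data)  # per-sample population probability density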

Selection effects

Gravitational-wave surveys suffer from Malmquist bias.

In order to measure the true astrophysical distribution, we must include a term in our population analyses to account for this.

Given the way the likelihood is structured, this can be any object that evaluates to give the observed spacetime volume as a function of the population parameters.

We define classes so that various bits of metadata can be stored.

The data for calculating this is not easily available. We use a very rough toy model that captures the general scaling with primary mass, \(VT(m) \sim m^{1.6}\); the exponent was chosen to give decent agreement with the more complex model.

I do not recommend using this toy function for science.

[8]:
def toy_vt_calculator(kwargs):
    masses = xp.linspace(3, 100, 1000)
    vts = masses**1.6
    params = {
        key: kwargs[key]
        for key in [
            "alpha",
            "mmin",
            "mmax",
            "lam",
            "mpp",
            "sigpp",
            "gaussian_mass_maximum",
        ]
    }
    p_m = gwpop.models.mass.two_component_single(masses, **params)
    return xp.trapz(p_m * vts, masses)
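
A quick call, just to show the expected input, is sketched below; the hyperparameter values are arbitrary.

# Sketch: the selection term for one draw of the hyperparameters.
vt = toy_vt_calculator(dict(
    alpha=2.5, mmin=5, mmax=45, lam=0.1, mpp=35, sigpp=4,
    gaussian_mass_maximum=100,
))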

Define the likelihood

The HyperparameterLikelihood marginalises over the local merger rate, with a uniform-in-log prior.

To also estimate the rate use the RateLikelihood (see further on in the notebook).

We provide:

  • posteriors: a list of pandas DataFrames.

  • hyper_prior: our population model, as defined above.

  • selection_function: anything which evaluates the selection function.

We can also provide:

  • conversion_function: this converts between the parameters we sample in and those needed by the model, e.g., for sampling in the mean and variance of the beta distribution (see the sketch below).

  • max_samples: the maximum number of samples to use from each posterior; this defaults to the length of the shortest posterior.
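
For illustration, a minimal conversion function might look like the following. This is a hedged sketch: the names mu_chi and sigma_chi are hypothetical, it assumes amax = 1, and it mirrors the (parameters, added_keys) return convention of gwpopulation's built-in convert_to_beta_parameters.

def mu_sigma_to_alpha_beta(parameters):
    # Hypothetical sketch: map the sampled mean (mu_chi) and standard
    # deviation (sigma_chi) of the spin magnitude distribution to the
    # beta-distribution shape parameters the model expects (amax = 1).
    parameters = parameters.copy()
    mu = parameters["mu_chi"]
    var = parameters["sigma_chi"] ** 2
    nu = mu * (1 - mu) / var - 1
    parameters["alpha_chi"] = mu * nu
    parameters["beta_chi"] = (1 - mu) * nu
    return parameters, ["alpha_chi", "beta_chi"]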

We may get a warning telling us that cupy is not available and so numpy is being used for the likelihood evaluation. This will go away if you have a GPU and cupy installed.

[9]:
fast_likelihood = gwpop.hyperpe.HyperparameterLikelihood(
    posteriors=posteriors,
    hyper_prior=model,
    selection_function=toy_vt_calculator,
    cupy=gwpop.backend == "cupy",
)
16:49 bilby INFO    : No prior values provided, defaulting to 1.

Define the prior

This is the standard method to define the prior distribution within bilby.

The labels are used in plotting.

Numbers are converted to delta function priors and are not sampled.

There are many other distributions available, see the code/documentation for a full list.

[10]:
fast_priors = PriorDict()

# mass
fast_priors["alpha"] = Uniform(minimum=-2, maximum=4, latex_label="$\\alpha$")
fast_priors["beta"] = Uniform(minimum=-4, maximum=12, latex_label="$\\beta$")
fast_priors["mmin"] = Uniform(minimum=5, maximum=10, latex_label="$m_{\\min}$")
fast_priors["mmax"] = Uniform(minimum=20, maximum=60, latex_label="$m_{\\max}$")
fast_priors["lam"] = Uniform(minimum=0, maximum=1, latex_label="$\\lambda_{m}$")
fast_priors["mpp"] = Uniform(minimum=10, maximum=50, latex_label="$\\mu_{m}$")
fast_priors["sigpp"] = Uniform(minimum=0, maximum=10, latex_label="$\\sigma_{m}$")
fast_priors["gaussian_mass_maximum"] = 100
# spin
fast_priors["amax"] = 1
fast_priors["alpha_chi"] = Uniform(
    minimum=1, maximum=6, latex_label="$\\alpha_{\\chi}$"
)
fast_priors["beta_chi"] = Uniform(minimum=1, maximum=6, latex_label="$\\beta_{\\chi}$")
fast_priors["xi_spin"] = Uniform(minimum=0, maximum=1, latex_label="$\\xi$")
fast_priors["sigma_spin"] = Uniform(minimum=0, maximum=4, latex_label="$\\sigma$")

Just-in-time compile

We JIT compile the likelihood object before starting the sampler. This is done using the gwpopulation.experimental.jax.JittedLikelihood class.

We then time the original likelihood object and the JIT-ed version. Note that we do two evaluations for each object as the first evaluation must compile the likelihood and so takes longer. (In addition to the JIT compilation, JAX compiles GPU functionality at the first evaluation, but this is less extreme than the full JIT compilation.)

[11]:
parameters = fast_priors.sample()
fast_likelihood.parameters.update(parameters)
fast_likelihood.log_likelihood_ratio()
%time print(fast_likelihood.log_likelihood_ratio())
jit_likelihood = JittedLikelihood(fast_likelihood)
jit_likelihood.parameters.update(parameters)
%time print(jit_likelihood.log_likelihood_ratio())
%time print(jit_likelihood.log_likelihood_ratio())
-101.34629616403652
CPU times: user 50.1 ms, sys: 10.3 ms, total: 60.4 ms
Wall time: 98 ms
-101.34629616403652
CPU times: user 2.3 s, sys: 136 ms, total: 2.44 s
Wall time: 4.71 s
-101.34629616403652
CPU times: user 8.41 ms, sys: 0 ns, total: 8.41 ms
Wall time: 8.47 ms

Run the sampler

We'll use the nestle sampler with a small number of live points to reduce the runtime. This sampler is fast, but can produce unreliable results. Other samplers are available; dynesty is the standard choice for full analyses and will be used later.

This is slower without using the GPU version, but the JIT makes it bearable.

bilby times a single likelihood evaluation before beginning the run; however, this isn't well defined with JAX.

Note: sometimes this finds a high-likelihood mode, likely due to breakdowns in the approximation used to estimate the likelihood. If you see dlogz > -80, you should interrupt the execution and restart.

[12]:
fast_result = bb.run_sampler(
    likelihood=jit_likelihood,
    priors=fast_priors,
    sampler="nestle",
    nlive=100,
    label="fast",
    save="hdf5",
)
16:50 bilby INFO    : Running for label 'fast', output will be saved to 'outdir'
/usr/local/lib/python3.10/dist-packages/_distutils_hack/__init__.py:33: UserWarning: Setuptools is replacing distutils.
  warnings.warn("Setuptools is replacing distutils.")
16:50 bilby INFO    : Analysis priors:
16:50 bilby INFO    : alpha=Uniform(minimum=-2, maximum=4, name=None, latex_label='$\\alpha$', unit=None, boundary=None)
16:50 bilby INFO    : beta=Uniform(minimum=-4, maximum=12, name=None, latex_label='$\\beta$', unit=None, boundary=None)
16:50 bilby INFO    : mmin=Uniform(minimum=5, maximum=10, name=None, latex_label='$m_{\\min}$', unit=None, boundary=None)
16:50 bilby INFO    : mmax=Uniform(minimum=20, maximum=60, name=None, latex_label='$m_{\\max}$', unit=None, boundary=None)
16:50 bilby INFO    : lam=Uniform(minimum=0, maximum=1, name=None, latex_label='$\\lambda_{m}$', unit=None, boundary=None)
16:50 bilby INFO    : mpp=Uniform(minimum=10, maximum=50, name=None, latex_label='$\\mu_{m}$', unit=None, boundary=None)
16:50 bilby INFO    : sigpp=Uniform(minimum=0, maximum=10, name=None, latex_label='$\\sigma_{m}$', unit=None, boundary=None)
16:50 bilby INFO    : alpha_chi=Uniform(minimum=1, maximum=6, name=None, latex_label='$\\alpha_{\\chi}$', unit=None, boundary=None)
16:50 bilby INFO    : beta_chi=Uniform(minimum=1, maximum=6, name=None, latex_label='$\\beta_{\\chi}$', unit=None, boundary=None)
16:50 bilby INFO    : xi_spin=Uniform(minimum=0, maximum=1, name=None, latex_label='$\\xi$', unit=None, boundary=None)
16:50 bilby INFO    : sigma_spin=Uniform(minimum=0, maximum=4, name=None, latex_label='$\\sigma$', unit=None, boundary=None)
16:50 bilby INFO    : gaussian_mass_maximum=100
16:50 bilby INFO    : amax=1
16:50 bilby INFO    : Analysis likelihood class: <class 'gwpopulation.experimental.jax.JittedLikelihood'>
16:50 bilby INFO    : Analysis likelihood noise evidence: nan
16:50 bilby INFO    : Single likelihood evaluation took 5.536e-04 s
16:50 bilby INFO    : Using sampler Nestle with kwargs {'method': 'multi', 'npoints': 100, 'update_interval': None, 'npdim': None, 'maxiter': None, 'maxcall': None, 'dlogz': None, 'decline_factor': None, 'rstate': None, 'callback': <function print_progress at 0x7d52bb8f67a0>, 'steps': 20, 'enlarge': 1.2}
it=   770 logz=-98.091915
16:50 bilby INFO    : Sampling time: 0:00:18.473231

16:50 bilby INFO    : Summary of results:
nsamples: 871
ln_noise_evidence:    nan
ln_evidence:    nan +/-  0.187
ln_bayes_factor: -97.912 +/-  0.187

[13]:
_ = fast_result.plot_corner(save=False)
_images/GWTC1_24_0.png

We can compute a rate posterior as a post-processing step.

Note that because we used the unphysical selection function above, this is not going to be a sensible number.

We also use the non-JIT-compiled likelihood for this stage.

[14]:
rates = list()
for ii in range(len(fast_result.posterior)):
    fast_likelihood.parameters.update(dict(fast_result.posterior.iloc[ii]))
    rates.append(float(fast_likelihood.generate_rate_posterior_sample()))
fast_result.posterior["rate"] = rates

plt.hist(np.log10(rates), bins=30, histtype="step", density=True)
plt.xlabel("$\\log_{10} {\\cal R}$")
plt.ylabel("$p(\\log_{10} {\\cal R})$")
plt.show()
plt.close()
_images/GWTC1_26_0.png

Define a new model

Let’s define a new population model for BNS.

Just as an example we’ll use a Gaussian distribution bounded between \([1 M_{\odot}, 2 M_{\odot}]\).

\[p(m_1, m_2) = N \exp \left(- \frac{\left((m_1 - \mu)^2 + (m_2 - \mu)^2\right)}{2 \sigma^2}\right) \quad : \quad 1 \leq m_2 \leq m_1 \leq 2\]

The function below takes three arguments:

  • dataset: common to all of the population models in gwpopulation, this is a dictionary containing the data to be evaluated; here it is assumed to contain entries for mass_1 and mass_2, the source-frame masses.

  • mu_bns: the peak of the BNS mass distribution.

  • sigma_bns: the width of the BNS mass distribution.

[15]:
def truncated_gaussian_primary_secondary_identical(dataset, mu_bns, sigma_bns):
    prob = gwpop.utils.truncnorm(
        dataset["mass_1"], mu=mu_bns, sigma=sigma_bns, low=1, high=2
    )
    prob *= gwpop.utils.truncnorm(
        dataset["mass_2"], mu=mu_bns, sigma=sigma_bns, low=1, high=2
    )
    # Enforce m_1 >= m_2 and multiply by two to keep the distribution
    # normalized over the half-plane.
    prob *= dataset["mass_1"] >= dataset["mass_2"]
    prob *= 2
    return prob
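
A quick evaluation on a toy grid (a sketch with arbitrary values, not part of the original notebook):

# Sketch: evaluate the toy BNS model.
test_bns = dict(
    mass_1=xp.linspace(1, 2, 100),
    mass_2=xp.linspace(1, 2, 100),
)
p_bns = truncated_gaussian_primary_secondary_identical(
    test_bns, mu_bns=1.35, sigma_bns=0.1
)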

Load GW170817 posterior

This is just the same as above.

[16]:
posterior = pd.DataFrame()
prior = pd.DataFrame()
with h5py.File("./GWTC-1_sample_release/GW170817_GWTC-1.hdf5") as ff:
    for my_key, gwtc_key in parameter_translator.items():
        # Skip any parameters that are missing from this data release.
        try:
            posterior[my_key] = ff["IMRPhenomPv2NRT_lowSpin_posterior"][gwtc_key]
            prior[my_key] = ff["IMRPhenomPv2NRT_lowSpin_prior"][gwtc_key]
        except ValueError:
            pass

posterior["redshift"] = dl_to_z(posterior["luminosity_distance"])
posterior["mass_1"] = posterior["mass_1_det"] / (1 + posterior["redshift"])
posterior["mass_2"] = posterior["mass_2_det"] / (1 + posterior["redshift"])

Define the new likelihood

We use the same likelihood as before.

Note:

  • This time we cast our posterior to a list while creating the likelihood.

  • We pass the function rather than a Model object as before; bilby will turn this into a Model for internal use. This means that we cannot use JIT compilation, but that is fine as we are only analysing a single event.

  • We've removed the selection and conversion functions as they aren't needed here (yes, a selection function is technically needed).

[17]:
bns_likelihood = gwpop.hyperpe.HyperparameterLikelihood(
    posteriors=[posterior],
    hyper_prior=truncated_gaussian_primary_secondary_identical,
    cupy=gwpop.backend == "cupy",
)
16:51 bilby INFO    : No prior values provided, defaulting to 1.

Define the new prior

Just as before.

[18]:
bns_priors = PriorDict()
bns_priors["mu_bns"] = Uniform(minimum=1, maximum=2, latex_label="$\\mu_{bns}$")
bns_priors["sigma_bns"] = LogUniform(
    minimum=1e-2, maximum=1, latex_label="$\\sigma_{bns}$"
)
[19]:
bns_likelihood.parameters.update(bns_priors.sample())
bns_likelihood.log_likelihood_ratio()

bns_result = bb.run_sampler(
    likelihood=bns_likelihood,
    priors=bns_priors,
    sampler="nestle",
    nlive=100,
    # walks=5,
    # nact=3,
    label="bns",
    save="hdf5",
)
16:51 bilby INFO    : Running for label 'bns', output will be saved to 'outdir'
16:51 bilby INFO    : Analysis priors:
16:51 bilby INFO    : mu_bns=Uniform(minimum=1, maximum=2, name=None, latex_label='$\\mu_{bns}$', unit=None, boundary=None)
16:51 bilby INFO    : sigma_bns=LogUniform(minimum=0.01, maximum=1, name=None, latex_label='$\\sigma_{bns}$', unit=None, boundary=None)
16:51 bilby INFO    : Analysis likelihood class: <class 'gwpopulation.hyperpe.HyperparameterLikelihood'>
16:51 bilby INFO    : Analysis likelihood noise evidence: nan
16:51 bilby INFO    : Single likelihood evaluation took 9.999e-03 s
16:51 bilby INFO    : Using sampler Nestle with kwargs {'method': 'multi', 'npoints': 100, 'update_interval': None, 'npdim': None, 'maxiter': None, 'maxcall': None, 'dlogz': None, 'decline_factor': None, 'rstate': None, 'callback': <function print_progress at 0x7d52bb8f67a0>, 'steps': 20, 'enlarge': 1.2}
it=   469 logz=0.594423
16:51 bilby INFO    : Sampling time: 0:00:13.761623
16:51 bilby INFO    : Summary of results:
nsamples: 570
ln_noise_evidence:    nan
ln_evidence:    nan +/-  0.115
ln_bayes_factor:  0.825 +/-  0.115

[20]:
_ = bns_result.plot_corner(save=False)
_images/GWTC1_36_0.png

Do it all

Let’s put together a run with models for the mass, spin and redshift distributions.

This will not give sensible answers because VT is not estimated.

The data for VT estimation isn’t available in this notebook.

Note that the redshift model is a class and so is instantiated before being passed to the model; this is to enable caching of expensive data internally.

Once again, we are going to JIT compile the likelihood, and so we use the NonCachingModel. The caching this disables is not the same as the redshift model's internal caching, so the two can be safely combined.

[21]:
full_model = NonCachingModel(
    [
        gwpop.models.mass.two_component_primary_mass_ratio,
        gwpop.models.spin.iid_spin_magnitude_beta,
        gwpop.models.spin.independent_spin_orientation_gaussian_isotropic,
        gwpop.models.redshift.PowerLawRedshift(),
    ]
)

Update sampling prior

We need to update the sampling prior to account for the new redshift evolution model.

Fortunately, we defined an interpolant for this earlier.

[22]:
for posterior in posteriors:
    posterior["prior"] = redshift_prior(posterior["redshift"])

Likelihood

We could use a different likelihood class, RateLikelihood, which simultaneously estimates the local merger rate as well as the population distribution; it is created just as before.

This is not strictly necessary, however, as we can always reconstruct the rate posterior in post-processing using the HyperparameterLikelihood, which is what we do here.

[23]:
full_likelihood = gwpop.hyperpe.HyperparameterLikelihood(
    posteriors=posteriors,
    hyper_prior=full_model,
    cupy=gwpop.backend == "cupy",
)
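
For completeness, the rate-sampling construction would look like the sketch below. This is hedged: it assumes RateLikelihood (in gwpopulation.hyperpe) takes the same arguments as HyperparameterLikelihood and that the rate enters through a hyperparameter named rate; check the documentation before relying on it.

# Sketch: sample the merger rate directly instead of marginalising it out.
rate_likelihood = gwpop.hyperpe.RateLikelihood(
    posteriors=posteriors,
    hyper_prior=full_model,
)
# A prior on the rate would then be added, e.g. (hypothetical range):
# full_priors["rate"] = LogUniform(minimum=1e-2, maximum=1e3, latex_label="$R$")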

Prior

This is just a longer version of the previous one.

[24]:
full_priors = PriorDict()

# mass
full_priors["alpha"] = Uniform(minimum=-4, maximum=12, latex_label="$\\alpha$")
full_priors["beta"] = Uniform(minimum=-4, maximum=12, latex_label="$\\beta$")
full_priors["mmin"] = Uniform(minimum=5, maximum=10, latex_label="$m_{\\min}$")
full_priors["mmax"] = Uniform(minimum=20, maximum=60, latex_label="$m_{\\max}$")
full_priors["lam"] = Uniform(minimum=0, maximum=1, latex_label="$\\lambda_{m}$")
full_priors["mpp"] = Uniform(minimum=20, maximum=50, latex_label="$\\mu_{m}$")
full_priors["sigpp"] = Uniform(minimum=0, maximum=10, latex_label="$\\sigma_{m}$")
full_priors["gaussian_mass_maximum"] = 100
# spin magnitude
full_priors["amax"] = 1
full_priors["alpha_chi"] = Uniform(
    minimum=1, maximum=4, latex_label="$\\alpha_{\\chi}$"
)
full_priors["beta_chi"] = Uniform(minimum=1, maximum=4, latex_label="$\\beta_{\\chi}$")
# spin orientation
full_priors["xi_spin"] = Uniform(minimum=0, maximum=1, latex_label="$\\xi$")
full_priors["sigma_1"] = Uniform(minimum=0, maximum=4, latex_label="$\\sigma{1}$")
full_priors["sigma_2"] = Uniform(minimum=0, maximum=4, latex_label="$\\sigma{2}$")
# redshift evolution
full_priors["lamb"] = Uniform(minimum=-25, maximum=25, latex_label="$\\lambda_{z}$")
[25]:
full_likelihood.parameters.update(full_priors.sample())
full_likelihood.log_likelihood_ratio()
jit_likelihood = JittedLikelihood(full_likelihood)
jit_likelihood.parameters.update(full_priors.sample())
_ = jit_likelihood.log_likelihood_ratio()

We are now using dynesty with the acceptance-walk method. The number of accepted steps (naccept=10 below) is likely sufficient for this problem; more details can be found in the Bilby documentation.

[26]:
full_result = bb.run_sampler(
    likelihood=jit_likelihood,
    priors=full_priors,
    sampler="dynesty",
    nlive=100,
    sample="acceptance-walk",
    naccept=10,
    label="full",
    save="hdf5",
)

_ = full_result.plot_corner(save=False)
16:51 bilby INFO    : Running for label 'full', output will be saved to 'outdir'
16:51 bilby INFO    : Analysis priors:
16:51 bilby INFO    : alpha=Uniform(minimum=-4, maximum=12, name=None, latex_label='$\\alpha$', unit=None, boundary=None)
16:51 bilby INFO    : beta=Uniform(minimum=-4, maximum=12, name=None, latex_label='$\\beta$', unit=None, boundary=None)
16:51 bilby INFO    : mmin=Uniform(minimum=5, maximum=10, name=None, latex_label='$m_{\\min}$', unit=None, boundary=None)
16:51 bilby INFO    : mmax=Uniform(minimum=20, maximum=60, name=None, latex_label='$m_{\\max}$', unit=None, boundary=None)
16:51 bilby INFO    : lam=Uniform(minimum=0, maximum=1, name=None, latex_label='$\\lambda_{m}$', unit=None, boundary=None)
16:51 bilby INFO    : mpp=Uniform(minimum=20, maximum=50, name=None, latex_label='$\\mu_{m}$', unit=None, boundary=None)
16:51 bilby INFO    : sigpp=Uniform(minimum=0, maximum=10, name=None, latex_label='$\\sigma_{m}$', unit=None, boundary=None)
16:51 bilby INFO    : alpha_chi=Uniform(minimum=1, maximum=4, name=None, latex_label='$\\alpha_{\\chi}$', unit=None, boundary=None)
16:51 bilby INFO    : beta_chi=Uniform(minimum=1, maximum=4, name=None, latex_label='$\\beta_{\\chi}$', unit=None, boundary=None)
16:51 bilby INFO    : xi_spin=Uniform(minimum=0, maximum=1, name=None, latex_label='$\\xi$', unit=None, boundary=None)
16:51 bilby INFO    : sigma_1=Uniform(minimum=0, maximum=4, name=None, latex_label='$\\sigma_{1}$', unit=None, boundary=None)
16:51 bilby INFO    : sigma_2=Uniform(minimum=0, maximum=4, name=None, latex_label='$\\sigma_{2}$', unit=None, boundary=None)
16:51 bilby INFO    : lamb=Uniform(minimum=-25, maximum=25, name=None, latex_label='$\\lambda_{z}$', unit=None, boundary=None)
16:51 bilby INFO    : gaussian_mass_maximum=100
16:51 bilby INFO    : amax=1
16:51 bilby INFO    : Analysis likelihood class: <class 'gwpopulation.experimental.jax.JittedLikelihood'>
16:51 bilby INFO    : Analysis likelihood noise evidence: nan
16:51 bilby INFO    : Single likelihood evaluation took 3.858e-04 s
16:51 bilby INFO    : Using sampler Dynesty with kwargs {'nlive': 100, 'bound': 'live', 'sample': 'acceptance-walk', 'periodic': None, 'reflective': None, 'update_interval': 600, 'first_update': None, 'npdim': None, 'rstate': None, 'queue_size': 1, 'pool': None, 'use_pool': None, 'live_points': None, 'logl_args': None, 'logl_kwargs': None, 'ptform_args': None, 'ptform_kwargs': None, 'gradient': None, 'grad_args': None, 'grad_kwargs': None, 'compute_jac': False, 'enlarge': None, 'bootstrap': None, 'walks': 100, 'facc': 0.2, 'slices': None, 'fmove': 0.9, 'max_move': 100, 'update_func': None, 'ncdim': None, 'blob': False, 'save_history': False, 'history_filename': None, 'maxiter': None, 'maxcall': None, 'dlogz': 0.1, 'logl_max': inf, 'n_effective': None, 'add_live': True, 'print_progress': True, 'print_func': <bound method Dynesty._print_func of <bilby.core.sampler.dynesty.Dynesty object at 0x7d4f44d9fd60>>, 'save_bounds': False, 'checkpoint_file': None, 'checkpoint_every': 60, 'resume': False, 'seed': None}
16:51 bilby INFO    : Checkpoint every check_point_delta_t = 600s
16:51 bilby INFO    : Using dynesty version 2.1.3
16:51 bilby INFO    : Using the bilby-implemented acceptance-walk sampling with an average of 10 accepted steps per MCMC and maximum length 5000
16:51 bilby INFO    : Resume file outdir/full_resume.pickle does not exist.
16:51 bilby INFO    : Generating initial points from the prior
16:53 bilby INFO    : Written checkpoint file outdir/full_resume.pickle
16:53 bilby INFO    : Rejection sampling nested samples to obtain 517 posterior samples
16:53 bilby INFO    : Sampling time: 0:01:46.081567

16:53 bilby INFO    : Summary of results:
nsamples: 517
ln_noise_evidence:    nan
ln_evidence:    nan +/-  0.275
ln_bayes_factor: -254.941 +/-  0.275

_images/GWTC1_47_5.png

Finally, we’ll print some version information so we have a record.

[27]:
import platform

print(f"Python=={platform.python_version()}")
print(
    "\n".join(
        f"{m.__name__}=={m.__version__}"
        for m in globals().values()
        if getattr(m, "__version__", None)
    )
)
Python==3.10.12
numpy==1.25.2
matplotlib==3.7.1
matplotlib.pylab==1.25.2
numpy==1.25.2
matplotlib==3.7.1
logging==0.5.1.2
re==2.2.1
h5py==3.9.0
pandas==1.5.3
bilby==2.2.3
gwpopulation==1.0.0rc1
platform==1.0.8