Open In Colab

Spectral Siren Population Inference on GWTC-3#

The third gravitational-wave transient catalog (GWTC-3) includes all compact binary coalescences observed during Advanced LIGO/Virgo’s first three observing runs.

“Spectral siren” cosmology (see, e.g., Ezquiaga & Holz) uses features in the black hole mass spectrum to constrain the distance-redshift relation.
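The key observables for each event are redshifted: detectors measure detector-frame masses and the luminosity distance,

\[m_{\rm det} = m_{\rm source} \, (1 + z), \qquad d_L = d_L(z; H_0, \Omega_{m,0}, w_0),\]

so a feature at a known source-frame mass (e.g., a peak in the spectrum) statistically fixes the redshift of the events that populate it, and combining that with the measured distances constrains the cosmological parameters.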

GWPopulation builds upon Bilby (arXiv:1811.02042) to provide simple, modular, user-friendly population inference. wcosmo provides optimized cosmology functionality for GWPopulation.

There are many implemented models and an example of defining custom models is included below. In this example we use:

  • A mass distribution in primary mass and mass ratio from Talbot & Thrane (2018) (arXiv:1801.02699), sketched briefly after this list. This is equivalent to the PowerLaw + Peak model used in LVK analyses, but without the low-mass smoothing, for computational efficiency.

  • Half-Gaussian + isotropic spin tilt distribution from Talbot & Thrane (2017) (arXiv:1704.08370).

  • Beta spin magnitude distribution from Wysocki+ (2018) (arXiv:1805.06442).

  • Each of these is also available with independent but identically distributed spins.

  • Redshift evolution model as in Fishbach+ (2018) (arXiv:1805.10270).

  • A variable flat wCDM cosmology, using a modified version of the approximation from arXiv:1111.6396 as implemented in wcosmo.
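For a flavour of the API, the following sketch evaluates the mass model from the first bullet at a few points. The hyperparameter values are arbitrary, and while the keyword names match the priors defined later in this notebook, treat the exact call signature as something to verify against the GWPopulation documentation.

import numpy as np

from gwpopulation.models.mass import two_component_primary_mass_ratio

# Arbitrary points in (primary mass, mass ratio) at which to evaluate the model.
dataset = dict(
    mass_1=np.linspace(5, 80, 200),
    mass_ratio=np.linspace(0.1, 1, 200),
)

# p(m1, q | Lambda) for one arbitrary choice of hyperparameters.
prob = two_component_primary_mass_ratio(
    dataset,
    alpha=3.5,
    beta=1,
    mmin=5,
    mmax=90,
    lam=0.03,
    mpp=35,
    sigpp=4,
)
print(prob.shape)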

For more information on GWPopulation, see the git repository and the documentation.

Setup#

If you’re using colab.research.google.com you will want to choose a GPU-accelerated runtime (I’m going to use a T4 GPU).

“runtime”->”change runtime type”->”Hardware accelerator = GPU”

Install some needed packages#

Almost all of the dependencies for this example are installed along with GWPopulation. These include wcosmo for cosmology, and Bilby and dynesty for sampling.

The one exception is unxt (https://unxt.readthedocs.io/en/latest/), which provides astropy-like units compatible with jax.

[1]:
!pip install gwpopulation unxt --quiet --progress-bar off

Download data#

We need to download the data for the events and the simulated “injections” used to characterize the detection sensitivity.

Event posteriors#

We’re using the posteriors from the GWTC-3 data release in a pre-processed format.

The file was produced by gwpopulation-pipe to reduce the many GB of posterior sample files to a single ~30 MB file.

The choice of events in this file was not made very carefully, so the results should only be considered qualitatively correct.

The data file can be found here. The original data can be found at zenodo:5546663 and zenodo:6513631 along with citation information.

Sensitivity injections#

Again, I have pre-processed the full injection set using gwpopulation-pipe to reduce the file size. The original data is available at zenodo:7890398 along with citation information.

[2]:
!gdown https://drive.google.com/uc?id=16gStLIjt65gWBkw-gNOVUqNbZ89q8CLF
!gdown https://drive.google.com/uc?id=10pevUCM3V2-D-bROFEMAcTJsX_9RzeM6
Downloading...
From: https://drive.google.com/uc?id=16gStLIjt65gWBkw-gNOVUqNbZ89q8CLF
To: /content/gwtc-3-injections.pkl
100% 2.69M/2.69M [00:00<00:00, 19.0MB/s]
Downloading...
From: https://drive.google.com/uc?id=10pevUCM3V2-D-bROFEMAcTJsX_9RzeM6
To: /content/gwtc-3-samples.pkl
100% 36.4M/36.4M [00:00<00:00, 85.6MB/s]

Imports#

Import the packages required for the script. We also set the backend for array operations to jax which allows us to take advantage of just-in-time (jit) compilation in addition to GPU-parallelisation when available.

[3]:
import bilby as bb
import gwpopulation as gwpop
import jax
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from bilby.core.prior import PriorDict, Uniform
from gwpopulation.experimental.cosmo_models import CosmoModel
from gwpopulation.experimental.jax import JittedLikelihood
from wcosmo.astropy import Planck15
from wcosmo.utils import disable_units

disable_units()
gwpop.set_backend("jax")

xp = gwpop.utils.xp

%matplotlib inline

Load posteriors#

We remove two events from the file that shouldn’t be there, as they have NS-like secondaries and we are only interested in BBHs for this demonstration.

We also need to modify some of the stored parameters: since we are going to include a cosmological fit, we will fit detector-frame masses and luminosity distance.

When using the JAX backend, this could probably be accelerated by sprinkling some jax.jits; a sketch follows the cell below.

[4]:
posteriors = pd.read_pickle("gwtc-3-samples.pkl")
del posteriors[15]
del posteriors[38]

for post in posteriors:
    zs = post.pop("redshift").values
    post["mass_1_detector"] = post.pop("mass_1") * (1 + zs)
    post["luminosity_distance"] = np.asarray(Planck15.luminosity_distance(zs))
    post["prior"] /= np.asarray(Planck15.dDLdz(zs) * (1 + zs))

Load injections#

Load the injections used to characterize the sensitivity of the gravitational-wave survey.

Again, we need to make some modifications for fitting in detector-frame quantities.

[5]:
import dill

with open("gwtc-3-injections.pkl", "rb") as ff:
    injections = dill.load(ff)

zs = np.asarray(injections.pop("redshift"))
injections["mass_1_detector"] = injections.pop("mass_1") * (1 + zs)
injections["luminosity_distance"] = np.asarray(Planck15.luminosity_distance(zs))
injections["prior"] /= np.asarray(Planck15.dDLdz(zs) * (1 + zs))

Define some models and the likelihood#

We need to define Bilby Model objects for the numerator and denominator independently as these cache some computations internally.

Note that we are using a CosmoModel; this is an experimental feature, so the specific API may change in the future, but the basic functionality should be stable. We create a model that can infer the three parameters of flat wCDM.

The HyperparameterLikelihood marginalises over the local merger rate, with a uniform-in-log prior. The posterior for the merger rate can be recovered in post-processing.
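Schematically, if the expected number of detections is \(R \, VT(\Lambda)\) for merger rate \(R\), then with the uniform-in-log prior \(\pi(R) \propto 1/R\) the conditional rate posterior for a given hyperparameter sample \(\Lambda\) is

\[p(R \mid \Lambda, d) \propto R^{N_{\rm obs} - 1} \exp\left(-R \, VT(\Lambda)\right),\]

i.e., a Gamma distribution that can be drawn from for each hyperparameter posterior sample in post-processing.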

We provide:

  • posteriors: a list of pandas DataFrames.

  • hyper_prior: our population model, as defined above.

  • selection_function: anything which evaluates the selection function.

We can also provide:

  • conversion_function: this converts between the parameters we sample in and those needed by the model, e.g., for sampling in the mean and variance of the beta distribution (a sketch follows this list).

  • max_samples: the maximum number of samples to use from each posterior; this defaults to the length of the shortest posterior.
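As an example of a conversion function, the sketch below maps the mean and variance of the spin-magnitude Beta distribution to the shape parameters used by the model via the standard Beta moment relations (assuming amax = 1). The parameter names mu_chi and sigma_chi_squared and the return convention are illustrative; GWPopulation ships its own conversion utilities, so check the documentation before relying on this.

# Illustrative only: convert (mean, variance) of the spin-magnitude Beta
# distribution to the (alpha_chi, beta_chi) shape parameters, assuming amax = 1.
def spin_magnitude_conversion(parameters):
    converted = parameters.copy()
    mu = converted["mu_chi"]
    var = converted["sigma_chi_squared"]
    nu = mu * (1 - mu) / var - 1
    converted["alpha_chi"] = mu * nu
    converted["beta_chi"] = (1 - mu) * nu
    return converted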

[6]:
model = CosmoModel(
    model_functions=[
        gwpop.models.mass.two_component_primary_mass_ratio,
        gwpop.models.spin.iid_spin,
        gwpop.models.redshift.PowerLawRedshift(cosmo_model="FlatwCDM"),
    ],
    cosmo_model="FlatwCDM",
)

vt = gwpop.vt.ResamplingVT(model=model, data=injections, n_events=len(posteriors))

likelihood = gwpop.hyperpe.HyperparameterLikelihood(
    posteriors=posteriors,
    hyper_prior=model,
    selection_function=vt,
)

Define our prior#

The mass model has eight parameters, seven of which we vary (the maximum mass of the Gaussian component is fixed); they are described in arXiv:1801.02699. This model is sometimes referred to as “PowerLaw+Peak”.

The spin magnitude model is a Beta distribution with the usual parameterization, and the spin orientation model is a mixture of a uniform component and a truncated Gaussian that peaks at aligned spin. This combination is sometimes referred to as “Default”.
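Written out, the orientation part of this model is

\[p(\cos\theta_{1}, \cos\theta_{2} \mid \xi, \sigma) = \frac{1 - \xi}{4} + \xi \prod_{i=1}^{2} \mathcal{N}_{[-1,1]}\left(\cos\theta_{i}; 1, \sigma\right),\]

where \(\mathcal{N}_{[-1,1]}\) is a Gaussian truncated to \([-1, 1]\) that peaks at aligned spin (\(\cos\theta_i = 1\)) and \(\xi\) is the fraction of systems in the preferentially aligned component.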

For redshift we use a model that looks like

\[p(z) \propto \frac{d V_{c}}{dz} (1 + z)^{λ - 1}\]

Finally, we set priors on the three parameters of the flat wCDM model: \(H_0\) (the Hubble constant), \(\Omega_{m,0}\) (the present-day matter fraction of the Universe), and \(w_0\) (the constant dark energy equation-of-state parameter).
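To get a feel for what these parameters control, one can build flat wCDM cosmologies directly and compare luminosity distances; this sketch assumes wcosmo exposes an astropy-like FlatwCDM class, as the cosmo_model="FlatwCDM" string used above suggests.

from wcosmo.astropy import FlatwCDM

# Two cosmologies differing only in H0 predict different luminosity distances
# at the same redshift; features in the mass spectrum break this degeneracy
# by pinning down the source redshift.
print(FlatwCDM(H0=60, Om0=0.3, w0=-1).luminosity_distance(0.5))
print(FlatwCDM(H0=80, Om0=0.3, w0=-1).luminosity_distance(0.5))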

[7]:
priors = PriorDict()

# mass
priors["alpha"] = Uniform(minimum=-2, maximum=4, latex_label="$\\alpha$")
priors["beta"] = Uniform(minimum=-4, maximum=12, latex_label="$\\beta$")
priors["mmin"] = Uniform(minimum=2, maximum=2.5, latex_label="$m_{\\min}$")
priors["mmax"] = Uniform(minimum=80, maximum=100, latex_label="$m_{\\max}$")
priors["lam"] = Uniform(minimum=0, maximum=1, latex_label="$\\lambda_{m}$")
priors["mpp"] = Uniform(minimum=10, maximum=50, latex_label="$\\mu_{m}$")
priors["sigpp"] = Uniform(minimum=1, maximum=10, latex_label="$\\sigma_{m}$")
priors["gaussian_mass_maximum"] = 100
# spin
priors["amax"] = 1
priors["alpha_chi"] = Uniform(
    minimum=1, maximum=6, latex_label="$\\alpha_{\\chi}$"
)
priors["beta_chi"] = Uniform(minimum=1, maximum=6, latex_label="$\\beta_{\\chi}$")
priors["xi_spin"] = Uniform(minimum=0, maximum=1, latex_label="$\\xi$")
priors["sigma_spin"] = Uniform(minimum=0.3, maximum=4, latex_label="$\\sigma$")

priors["H0"] = Uniform(minimum=20, maximum=200, latex_label="$H_0$")
priors["Om0"] = Uniform(minimum=0, maximum=1, latex_label="$\\Omega_{m,0}$")
priors["w0"] = Uniform(minimum=-1.5, maximum=-0.5, latex_label="$w_{0}$")
priors["lamb"] = Uniform(minimum=-1, maximum=10, latex_label="$\\lambda_{z}$")

Just-in-time compile#

We JIT compile the likelihood object before starting the sampler. This is done using the gwpopulation.experimental.jax.JittedLikelihood class.

We then time the original likelihood object and the JIT-ed version. Note that we do two evaluations for each object as the first evaluation must compile the likelihood and so takes longer. (In addition to the JIT compilation, JAX compiles GPU functionality at the first evaluation, but this is less extreme than the full JIT compilation.)

[8]:
parameters = priors.sample()
likelihood.parameters.update(parameters)
likelihood.log_likelihood_ratio()
%time print(likelihood.log_likelihood_ratio())
jit_likelihood = JittedLikelihood(likelihood)
jit_likelihood.parameters.update(parameters)
%time print(jit_likelihood.log_likelihood_ratio())
%time print(jit_likelihood.log_likelihood_ratio())
-234.23005649570018
CPU times: user 2.25 s, sys: 242 ms, total: 2.49 s
Wall time: 3.11 s
-234.23005649570018
CPU times: user 6.57 s, sys: 536 ms, total: 7.11 s
Wall time: 7.12 s
-234.23005649570018
CPU times: user 10.2 ms, sys: 0 ns, total: 10.2 ms
Wall time: 23.1 ms

Run the sampler#

We’ll use the sampler dynesty with a small number of live points to reduce the runtime (total runtime should be approximately 5 minutes on a T4 GPU via Google colab). The settings here may not give publication-quality results; a convergence test should be performed before making strong quantitative statements.

bilby times a single likelihood evaluation before beginning the run; however, this isn’t well defined with JAX.

Note: sometimes this finds a high likelihood mode, likely due to breakdowns in the approximation used to estimate the likelihood. If you see dlogz > -80, you should interrupt the execution and restart.

[9]:
result = bb.run_sampler(
    likelihood=jit_likelihood,
    priors=priors,
    sampler="dynesty",
    nlive=100,
    label="cosmo",
    sample="acceptance-walk",
    naccept=5,
    save="hdf5",
    resume=False,
)
15:26 bilby INFO    : Running for label 'cosmo', output will be saved to 'outdir'
/usr/local/lib/python3.10/dist-packages/bilby/core/utils/log.py:73: UserWarning: The '__version__' attribute is deprecated and will be removed in MarkupSafe 3.1. Use feature detection, or `importlib.metadata.version("markupsafe")`, instead.
  vdict[key] = str(getattr(sys.modules[key], "__version__", "N/A"))
/usr/local/lib/python3.10/dist-packages/_distutils_hack/__init__.py:32: UserWarning: Setuptools is replacing distutils. Support for replacing an already imported distutils is deprecated. In the future, this condition will fail. Register concerns at https://github.com/pypa/setuptools/issues/new?template=distutils-deprecation.yml
  warnings.warn(
15:26 bilby INFO    : Analysis priors:
15:26 bilby INFO    : alpha=Uniform(minimum=-2, maximum=4, name=None, latex_label='$\\alpha$', unit=None, boundary=None)
15:26 bilby INFO    : beta=Uniform(minimum=-4, maximum=12, name=None, latex_label='$\\beta$', unit=None, boundary=None)
15:26 bilby INFO    : mmin=Uniform(minimum=2, maximum=2.5, name=None, latex_label='$m_{\\min}$', unit=None, boundary=None)
15:26 bilby INFO    : mmax=Uniform(minimum=80, maximum=100, name=None, latex_label='$m_{\\max}$', unit=None, boundary=None)
15:26 bilby INFO    : lam=Uniform(minimum=0, maximum=1, name=None, latex_label='$\\lambda_{m}$', unit=None, boundary=None)
15:26 bilby INFO    : mpp=Uniform(minimum=10, maximum=50, name=None, latex_label='$\\mu_{m}$', unit=None, boundary=None)
15:26 bilby INFO    : sigpp=Uniform(minimum=1, maximum=10, name=None, latex_label='$\\sigma_{m}$', unit=None, boundary=None)
15:26 bilby INFO    : alpha_chi=Uniform(minimum=1, maximum=6, name=None, latex_label='$\\alpha_{\\chi}$', unit=None, boundary=None)
15:26 bilby INFO    : beta_chi=Uniform(minimum=1, maximum=6, name=None, latex_label='$\\beta_{\\chi}$', unit=None, boundary=None)
15:26 bilby INFO    : xi_spin=Uniform(minimum=0, maximum=1, name=None, latex_label='$\\xi$', unit=None, boundary=None)
15:26 bilby INFO    : sigma_spin=Uniform(minimum=0.3, maximum=4, name=None, latex_label='$\\sigma$', unit=None, boundary=None)
15:26 bilby INFO    : H0=Uniform(minimum=20, maximum=200, name=None, latex_label='$H_0$', unit=None, boundary=None)
15:26 bilby INFO    : Om0=Uniform(minimum=0, maximum=1, name=None, latex_label='$\\Omega_{m,0}$', unit=None, boundary=None)
15:26 bilby INFO    : w0=Uniform(minimum=-1.5, maximum=-0.5, name=None, latex_label='$w_{0}$', unit=None, boundary=None)
15:26 bilby INFO    : lamb=Uniform(minimum=-1, maximum=10, name=None, latex_label='$\\lambda_{z}$', unit=None, boundary=None)
15:26 bilby INFO    : gaussian_mass_maximum=100
15:26 bilby INFO    : amax=1
15:26 bilby INFO    : Analysis likelihood class: <class 'gwpopulation.experimental.jax.JittedLikelihood'>
15:26 bilby INFO    : Analysis likelihood noise evidence: nan
15:26 bilby INFO    : Single likelihood evaluation took nan s
15:26 bilby INFO    : Using sampler Dynesty with kwargs {'nlive': 100, 'bound': 'live', 'sample': 'acceptance-walk', 'periodic': None, 'reflective': None, 'update_interval': 600, 'first_update': None, 'npdim': None, 'rstate': None, 'queue_size': 1, 'pool': None, 'use_pool': None, 'live_points': None, 'logl_args': None, 'logl_kwargs': None, 'ptform_args': None, 'ptform_kwargs': None, 'gradient': None, 'grad_args': None, 'grad_kwargs': None, 'compute_jac': False, 'enlarge': None, 'bootstrap': None, 'walks': 100, 'facc': 0.2, 'slices': None, 'fmove': 0.9, 'max_move': 100, 'update_func': None, 'ncdim': None, 'blob': False, 'save_history': False, 'history_filename': None, 'maxiter': None, 'maxcall': None, 'dlogz': 0.1, 'logl_max': inf, 'n_effective': None, 'add_live': True, 'print_progress': True, 'print_func': <bound method Dynesty._print_func of <bilby.core.sampler.dynesty.Dynesty object at 0x7f62d8bc7730>>, 'save_bounds': False, 'checkpoint_file': None, 'checkpoint_every': 60, 'resume': False, 'seed': None}
15:26 bilby INFO    : Checkpoint every check_point_delta_t = 600s
15:26 bilby INFO    : Using dynesty version 2.1.4
15:26 bilby INFO    : Using the bilby-implemented acceptance-walk sampling with an average of 5 accepted steps per MCMC and maximum length 5000
15:26 bilby INFO    : Generating initial points from the prior
15:32 bilby INFO    : Written checkpoint file outdir/cosmo_resume.pickle
15:32 bilby INFO    : Rejection sampling nested samples to obtain 687 posterior samples
15:32 bilby INFO    : Sampling time: 0:05:57.872468

15:32 bilby INFO    : Summary of results:
nsamples: 687
ln_noise_evidence:    nan
ln_evidence:    nan +/-  0.397
ln_bayes_factor: -199.185 +/-  0.397

Plot some posteriors#

We can look at the posteriors for some of the parameters: here, the cosmology parameters, the location of the mass peak, and the redshift evolution parameter.

We see that the value of the Hubble constant is strongly correlated with the location of the peak in the mass distribution as has been noted elsewhere.

We also include the values of the cosmology parameters reported in the Planck15 cosmology for reference.

[10]:
_ = result.plot_corner(
    save=False,
    parameters=["H0", "Om0", "w0", "mpp", "lamb"],
    truths=[67.74, 0.3075, -1, np.nan, np.nan],
)
[Figure: corner plot of the posteriors for H0, Om0, w0, mpp, and lamb, with Planck15 reference values marked]

Post-processing checks#

As mentioned above, hierarchical analyses performed in this way are susceptible to systematic bias due to Monte Carlo error. To ensure we are not suffering from this issue, we compute the variance in each of our Monte Carlo integrals along with the total variance for each posterior sample. We then look at whether there are correlations between the log-likelihood, the variance, and the hyperparameters. If we see significant correlation between the variance and other quantities, it is a sign that our results may not be reliable.
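Schematically, each per-event term in the likelihood is a Monte Carlo average over \(K\) posterior samples with weights \(w_{ik}\) (population model divided by the sampling prior), and the variance of its logarithm can be estimated with the usual delta-method result

\[\hat{\ell}_i = \ln \frac{1}{K} \sum_{k=1}^{K} w_{ik}, \qquad \mathrm{var}\left(\hat{\ell}_i\right) \approx \frac{\sum_k w_{ik}^2}{\left(\sum_k w_{ik}\right)^2} - \frac{1}{K},\]

with an analogous contribution from the selection-function integral; quantities of this kind are what generate_extra_statistics records in the var_* and variance columns below, though the exact bookkeeping may differ in detail.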

[11]:
func = jax.jit(likelihood.generate_extra_statistics)

full_posterior = pd.DataFrame([
    func(parameters) for parameters
    in result.posterior.to_dict(orient="records")
]).astype(float)
full_posterior.describe()
[11]:
H0 Om0 alpha alpha_chi amax beta beta_chi gaussian_mass_maximum lam lamb ... var_67 var_68 var_69 var_7 var_70 var_8 var_9 variance w0 xi_spin
count 687.000000 687.000000 687.000000 687.000000 687.0 687.000000 687.000000 687.0 687.000000 687.000000 ... 687.000000 687.000000 687.000000 687.000000 687.000000 687.000000 687.000000 687.000000 687.000000 687.000000
mean 86.192321 0.463870 2.785490 1.699249 1.0 2.029660 4.672059 100.0 0.046022 2.281156 ... 0.000902 0.001298 0.000506 0.000339 0.000781 0.000857 0.001010 1.578096 -1.037032 0.656418
std 33.315514 0.264367 0.227932 0.425861 0.0 1.084555 0.828649 0.0 0.067595 0.923766 ... 0.000432 0.000799 0.000242 0.000153 0.000300 0.000415 0.000481 0.820307 0.278118 0.244211
min 28.461434 0.012781 2.014633 1.001600 1.0 -0.575890 2.176011 100.0 0.007235 -0.860138 ... 0.000166 0.000260 0.000140 0.000107 0.000179 0.000216 0.000239 0.495687 -1.499852 0.000738
25% 62.020596 0.241354 2.629028 1.362416 1.0 1.278567 4.072469 100.0 0.022109 1.709052 ... 0.000615 0.000760 0.000346 0.000236 0.000595 0.000579 0.000679 1.025316 -1.275062 0.501259
50% 80.248760 0.457905 2.793779 1.664151 1.0 1.894215 4.742172 100.0 0.031899 2.228540 ... 0.000811 0.001100 0.000444 0.000304 0.000738 0.000762 0.000894 1.336759 -1.044505 0.693256
75% 105.724361 0.667858 2.940473 1.940837 1.0 2.606386 5.371636 100.0 0.045664 2.897086 ... 0.001078 0.001568 0.000607 0.000405 0.000906 0.001025 0.001201 1.858051 -0.810291 0.861111
max 193.002066 0.995524 3.430792 3.125901 1.0 6.657481 5.998233 100.0 0.704479 6.065530 ... 0.003160 0.005728 0.001807 0.001424 0.002516 0.003410 0.003835 6.507338 -0.500004 0.999019

8 rows × 164 columns

[12]:
full_posterior[
    result.search_parameter_keys + ["log_likelihood", "variance"]
].corr()
[12]:
alpha beta mmin mmax lam mpp sigpp alpha_chi beta_chi xi_spin sigma_spin H0 Om0 w0 lamb log_likelihood variance
alpha 1.000000 0.141622 0.090023 0.080669 -0.357669 -0.056450 -0.038718 0.110190 0.136683 0.035176 0.131746 0.092676 -0.059821 -0.011571 0.441030 0.191771 0.166605
beta 0.141622 1.000000 -0.033839 0.067055 -0.009699 -0.052296 0.017018 -0.069778 -0.055648 0.054644 -0.096338 -0.059046 0.068654 0.147231 -0.149448 -0.114000 0.183815
mmin 0.090023 -0.033839 1.000000 0.054066 -0.068222 0.109581 -0.007212 -0.033472 -0.006678 -0.117315 0.027034 -0.106144 0.133593 -0.133617 0.037905 0.005377 -0.043536
mmax 0.080669 0.067055 0.054066 1.000000 0.110677 -0.094240 -0.006212 0.090794 0.154900 -0.005070 0.056983 0.131370 -0.042789 0.027122 -0.006661 -0.139059 0.087109
lam -0.357669 -0.009699 -0.068222 0.110677 1.000000 -0.691310 0.413978 0.069220 0.004225 -0.064605 -0.055335 0.300825 -0.041868 0.001544 -0.011827 -0.409244 -0.079433
mpp -0.056450 -0.052296 0.109581 -0.094240 -0.691310 1.000000 -0.460154 -0.094476 -0.080426 0.037703 -0.006029 -0.613080 0.137499 -0.105793 -0.165381 0.396366 0.056222
sigpp -0.038718 0.017018 -0.007212 -0.006212 0.413978 -0.460154 1.000000 0.022190 -0.106280 -0.145172 0.073256 -0.205229 -0.024965 -0.026839 0.111204 -0.593371 -0.485489
alpha_chi 0.110190 -0.069778 -0.033472 0.090794 0.069220 -0.094476 0.022190 1.000000 0.592209 -0.071230 0.318560 0.068430 -0.133062 0.008132 0.181618 -0.165641 -0.149992
beta_chi 0.136683 -0.055648 -0.006678 0.154900 0.004225 -0.080426 -0.106280 0.592209 1.000000 -0.040551 0.002835 0.162779 -0.094758 -0.029511 0.191065 0.164174 0.350465
xi_spin 0.035176 0.054644 -0.117315 -0.005070 -0.064605 0.037703 -0.145172 -0.071230 -0.040551 1.000000 0.093104 0.059087 -0.058887 0.078590 0.026096 0.254043 0.187677
sigma_spin 0.131746 -0.096338 0.027034 0.056983 -0.055335 -0.006029 0.073256 0.318560 0.002835 0.093104 1.000000 -0.071397 -0.016116 -0.030013 0.102910 -0.221670 -0.333567
H0 0.092676 -0.059046 -0.106144 0.131370 0.300825 -0.613080 -0.205229 0.068430 0.162779 0.059087 -0.071397 1.000000 -0.396252 0.028811 0.001228 -0.009208 0.219151
Om0 -0.059821 0.068654 0.133593 -0.042789 -0.041868 0.137499 -0.024965 -0.133062 -0.094758 -0.058887 -0.016116 -0.396252 1.000000 0.010338 -0.022531 0.033099 0.030473
w0 -0.011571 0.147231 -0.133617 0.027122 0.001544 -0.105793 -0.026839 0.008132 -0.029511 0.078590 -0.030013 0.028811 0.010338 1.000000 -0.088142 -0.037276 -0.034192
lamb 0.441030 -0.149448 0.037905 -0.006661 -0.011827 -0.165381 0.111204 0.181618 0.191065 0.026096 0.102910 0.001228 -0.022531 -0.088142 1.000000 0.067673 0.007794
log_likelihood 0.191771 -0.114000 0.005377 -0.139059 -0.409244 0.396366 -0.593371 -0.165641 0.164174 0.254043 -0.221670 -0.009208 0.033099 -0.037276 0.067673 1.000000 0.344516
variance 0.166605 0.183815 -0.043536 0.087109 -0.079433 0.056222 -0.485489 -0.149992 0.350465 0.187677 -0.333567 0.219151 0.030473 -0.034192 0.007794 0.344516 1.000000

The most strongly correlated variables are the ones that control the position and width of the peak in the mass distribution. Below we show a scatter matrix for these variables. The variance for this analysis has a tail up to ~4, so the result may be non-trivially biased. The simplest way to resolve this is to use more samples in all of the Monte Carlo integrals.

[13]:
pd.plotting.scatter_matrix(
    full_posterior[["mpp", "sigpp", "log_likelihood", "variance"]],
    alpha=0.1,
)
plt.show()
plt.close()
[Figure: scatter matrix of mpp, sigpp, log_likelihood, and variance]
[ ]: