Inference with finite time series
Time series analysis is ubiquitous across the sciences, including gravitational-wave astronomy, where strain time series are analyzed to infer the nature of gravitational-wave sources, e.g., black holes and neutron stars. It is common in gravitational-wave transient studies to apply a tapered window function to reduce spectral artifacts from the sharp edges of data segments. We show that the conventional analysis of tapered data fails to account for covariance between frequency bins, which arises for any finite time series, no matter the choice of window function. We discuss the origin of this covariance and show that, as the number of gravitational-wave detections grows and we gain access to more high signal-to-noise ratio events, it will become a non-negligible source of systematic error. We derive a framework that models the correlation induced by the window function and demonstrate it using both data from the first LIGO–Virgo transient catalog and simulated Gaussian noise.
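As a minimal numerical sketch of the effect described above (not the paper's framework), the snippet below estimates the frequency-domain covariance of Tukey-windowed white Gaussian noise; the segment length, taper fraction, and number of realizations are illustrative choices, not values from the paper. The non-zero off-diagonal elements show the correlation between nearby frequency bins induced by the window.

```python
import numpy as np
from scipy.signal.windows import tukey

# Illustrative parameters (assumptions, not taken from the paper):
# segment length, Tukey taper fraction, and number of noise realizations.
n_samples = 512
alpha = 0.2
n_realizations = 20000
rng = np.random.default_rng(42)

window = tukey(n_samples, alpha)

# Simulate white Gaussian noise segments, taper them, and Fourier transform.
noise = rng.standard_normal((n_realizations, n_samples))
tilde = np.fft.rfft(noise * window, axis=-1)

# Sample covariance between frequency bins, <d_k d_{k'}^*>, averaged over realizations.
cov = tilde.T.conj() @ tilde / n_realizations

# Compare the on-diagonal power with the leakage into neighbouring bins.
k = 50
print("on-diagonal   |C_{k,k}|   :", np.abs(cov[k, k]))
print("neighbouring  |C_{k,k+1}| :", np.abs(cov[k, k + 1]))
print("neighbouring  |C_{k,k+2}| :", np.abs(cov[k, k + 2]))

# For a rectangular window (alpha -> 0) the off-diagonal terms vanish for white
# noise; any tapered window correlates nearby bins, as discussed in the abstract.
```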