pint.gridutils.grid_chisq_derived

pint.gridutils.grid_chisq_derived(ftr, parnames, parfuncs, gridvalues, extraparnames=[], executor=None, ncpu=None, chunksize=1, printprogress=True, **fitargs)[source]

Compute chisq over a grid of derived parameters

Parameters:
  • ftr (pint.fitter.Fitter) – The base fitter to use.

  • parnames (list) – Names of the parameters (available in ftr) to grid over

  • parfuncs (list) – List of functions to convert gridvalues to quantities accessed through parnames

  • gridvalues (list) – List of underlying grid values to grid over (each should be 1D array of astropy.units.Quantity)

  • extraparnames (list, optional) – Names of other parameters to return

  • executor (concurrent.futures.Executor or None, optional) – Executor object used to run multiple processes in parallel. If None, a default concurrent.futures.ProcessPoolExecutor will be used, unless overridden by ncpu=1.

  • ncpu (int, optional) – If an existing Executor is not supplied, one will be created with this number of workers. If 1, the single-processor version will be run. If None, multiprocessing.cpu_count() will be used.

  • chunksize (int) – Size of the chunks for concurrent.futures.ProcessPoolExecutor parallel execution. Ignored for concurrent.futures.ThreadPoolExecutor

  • printprogress (bool, optional) – Print indications of progress (requires tqdm for ncpu>1)

  • fitargs – Additional arguments passed to fit_toas()
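To illustrate how parfuncs maps the underlying grid values onto the gridded parameters, here is a minimal pure-number sketch (in real use the grid values are astropy Quantities, and the names f0 and tau_s are illustrative, not part of the PINT API):

```python
# Minimal sketch of the parfuncs mapping (real usage passes astropy
# Quantities): grid over F0 and a characteristic age tau, deriving
# F1 = -F0 / (2 * tau).  There is one function per entry in parnames;
# each function receives one value from every underlying grid.
parfuncs = (lambda f0, tau: f0, lambda f0, tau: -f0 / (2 * tau))

f0 = 61.485  # spin frequency in Hz (illustrative value)
tau_s = 820e6 * 3.15576e7  # 820 Myr converted to seconds

f0_out = parfuncs[0](f0, tau_s)  # F0 is passed through unchanged
f1_out = parfuncs[1](f0, tau_s)  # F1 comes out as a small negative spin-down rate
```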

Returns:

  • np.ndarray – Array of chisq values

  • parvalues (list of np.ndarray) – Parameter values computed from gridvalues and parfuncs

  • extraout (dict of np.ndarray) – Parameter values computed at each grid point for extraparnames

Example

>>> import astropy.units as u
>>> import numpy as np
>>> import pint.config
>>> import pint.gridutils
>>> from pint.fitter import WLSFitter
>>> from pint.models.model_builder import get_model, get_model_and_toas
>>> # Load in a basic dataset
>>> parfile = pint.config.examplefile("NGC6440E.par")
>>> timfile = pint.config.examplefile("NGC6440E.tim")
>>> m, t = get_model_and_toas(parfile, timfile)
>>> f = WLSFitter(t, m)
>>> # find the best-fit
>>> f.fit_toas()
>>> bestfit = f.resids.chi2
>>> # do a grid for F0 and tau
>>> F0 = np.linspace(
...     f.model.F0.quantity - 3 * f.model.F0.uncertainty,
...     f.model.F0.quantity + 3 * f.model.F0.uncertainty,
...     15,
... )
>>> tau = np.linspace(8.1, 8.3, 13) * 100 * u.Myr
>>> chi2grid_tau, params = pint.gridutils.grid_chisq_derived(
...     f,
...     ("F0", "F1"),
...     (lambda x, y: x, lambda x, y: -x / 2 / y),
...     (F0, tau),
... )
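Once the grid has been computed, locating the best-fit point is plain NumPy. A minimal sketch using a synthetic stand-in for the returned chisq array (the real array's shape follows the ordering of gridvalues):

```python
import numpy as np

# Synthetic stand-in for the chisq grid returned by grid_chisq_derived.
chi2grid = np.array([[12.0, 9.5, 11.0],
                     [10.2, 8.1, 9.9],
                     [13.4, 10.7, 12.5]])

# Index of the grid point with the minimum chisq.
i, j = np.unravel_index(np.argmin(chi2grid), chi2grid.shape)
best = chi2grid[i, j]

# Delta chisq relative to the minimum gives approximate confidence
# regions (e.g. delta chisq < 2.3 for a 68% region in two parameters).
within_68 = (chi2grid - best) < 2.3
```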

Notes

By default, this will create a ProcessPoolExecutor with max_workers equal to the desired number of CPUs. However, if you are running this as a script you may need something like:

import multiprocessing
if __name__ == "__main__":
    multiprocessing.freeze_support()
    ...
    grid_chisq_derived(...)

If an instantiated Executor is passed instead, it will be used as-is.

The behavior for different combinations of executor and ncpu is:

+-----------------+--------+------------------------------------+
| executor        | ncpu   | result                             |
+=================+========+====================================+
| existing object | any    | uses existing executor             |
+-----------------+--------+------------------------------------+
| None            | 1      | uses single-processor version      |
+-----------------+--------+------------------------------------+
| None            | None   | creates default executor with      |
|                 |        | cpu_count workers                  |
+-----------------+--------+------------------------------------+
| None            | >1     | creates default executor with      |
|                 |        | desired number of workers          |
+-----------------+--------+------------------------------------+
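As the table shows, an instantiated executor is used as-is, so you control its lifetime and configuration. A sketch of the pattern using a ThreadPoolExecutor and a stand-in chisq function (fake_chisq and the grid points are illustrative, not part of the PINT API):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the per-grid-point fit; grid_chisq_derived dispatches the
# real fits itself through whatever executor you pass in.
def fake_chisq(point):
    f0, tau = point
    return (f0 - 61.4855) ** 2 + (tau - 820.0) ** 2

points = [(61.485, 810.0), (61.486, 820.0), (61.4855, 830.0)]

with ThreadPoolExecutor(max_workers=2) as executor:
    # In real use, pass the live executor to the grid function, e.g.
    # pint.gridutils.grid_chisq_derived(f, ..., executor=executor)
    chi2 = list(executor.map(fake_chisq, points))
```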

Other Executors can be found for different computing environments:

  • [1] for MPI

  • [2] for SLURM or Condor