PINT Is Not TEMPO3
PINT is a project to develop a new pulsar timing solution based on python and modern libraries. It is still in active development, but it can already produce residuals from most “normal” timing models that agree with Tempo and Tempo2 to within ~10 nanoseconds. It can be used within python scripts or notebooks, and there are several command line tools that come with it.
The primary reasons we are developing PINT are:
To have a robust system to check high-precision timing results that is completely independent of TEMPO and Tempo2
To make a system that is easy to extend and modify due to a good design and the use of a modern programming language, techniques, and libraries.
How the documentation is organized
We try to keep the PINT documentation divided into four categories:
- Tutorials
Easy-to-follow walkthroughs to show new users what PINT can do. No explanations or alternatives or fallback methods, just getting people to a point where they know what questions to ask. These should work for everyone without surprises or problems. Jupyter notebooks are a natural format for these. Includes Basic installation.
- Explanation
Descriptions of how PINT works, why certain design choices were made, what the underlying concepts are and so forth. This is for users who know more or less what to do but want to understand what is going on.
- Reference
Specific details of how particular things work. This is for users who are trying to use PINT and know what function or object or method they need but need the details of how it works.
- How-tos
Detailed guides on how to accomplish specific things, for people who already know what questions to ask. Explanations and reference are not needed here but even experienced users can benefit from being pointed at the right way to do something, with fallbacks and troubleshooting advice.
This is based on the Django documentation structure, and it is intended both to help users find the correct information and to help documentation writers keep clear in our heads who we are writing for and what we are trying to communicate to them.
Tutorials
These are step-by-step guides you can follow to show you what PINT can do. Quick common tasks are explained on the PINT Wiki. If you want some explanations of why things work this way, see the Explanation section. If you want details on a particular function that comes up, see the Reference section. More complicated tasks are discussed in the How-tos.
Data Files
The data files (par and tim) associated with the tutorials and other examples can be located via pint.config.examplefile() (available via the pint.config module):
import pint.config
fullfilename = pint.config.examplefile(filename)
For example, the file NGC6440E.par from the Time a Pulsar notebook can be found via:
import pint.config
fullfilename = pint.config.examplefile("NGC6440E.par")
Examples
These tutorial examples are in the form of Jupyter notebooks, downloadable from a link at the top of each page. (A plain-python script version is also available in the same place, in case that is more convenient.) You should be able to download these files and run them from anywhere convenient (provided PINT is installed). Finally, there are additional notebooks you can download from the PINT Wiki or the GitHub examples directory: these are not included in the default build because they take too long to run, but you can download and run them yourself.
Basic installation
**IMPORTANT Note:** PINT has a naming conflict with the `pint <https://pypi.org/project/Pint/>`_ units package available from PyPI (i.e. using pip) and conda. Do **NOT** pip install pint or conda install pint! See below!
PINT is now available via PyPI as the package pint-pulsar, so it is now simple to install via pip.
For most users, who don’t want to develop the PINT code, installation should just be a matter of:
$ pip install pint-pulsar
By default this will install in your system site-packages. Depending on your system and preferences, you may want to append --user to install it for just yourself (e.g. if you don’t have permission to write in the system site-packages), or you may want to create a virtualenv to work on PINT (using a virtualenv is highly recommended by the PINT developers).
If you want access to the source code, example notebooks, and tests, you can install from source, by cloning the source repository from GitHub, then install it, ensuring that all dependencies needed to run PINT are available:
$ git clone https://github.com/nanograv/PINT.git
$ cd PINT
$ pip install .
If this fails, or if you want more explicit installation and troubleshooting instructions, see How to Install PINT.
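Once installed, a quick sanity check can confirm that you got the pulsar-timing package rather than the units package of the same import name. This is a minimal sketch; it relies only on the fact that the units package does not provide a toa submodule:
import pint
import pint.toa  # only pint-pulsar provides this submodule; an ImportError here suggests the units package "pint" was installed instead
print(pint.__version__)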
This Jupyter notebook can be downloaded from time_a_pulsar.ipynb, or viewed as a python script at time_a_pulsar.py.
Time a pulsar
This notebook walks through a simple pulsar timing session, as one might do with TEMPO/TEMPO2: load a .par file, load a .tim file, do a fit, and plot the residuals before and after. It also displays various additional information you might find useful, and it excludes TOAs with large uncertainties from the fit but then plots them. Similar code is available as a standalone script at fit_NGC6440E.py
[1]:
import astropy.units as u
import matplotlib.pyplot as plt
import pint.fitter
from pint.models import get_model_and_toas
from pint.residuals import Residuals
import pint.logging
pint.logging.setup(level="INFO")
[1]:
1
We want to load a parameter file and some TOAs. For the purposes of this notebook, we’ll load in ones that are included with PINT; the pint.config.examplefile() calls return the path to where those files are in the PINT distribution. If you wanted to use your own files, you would probably know their filenames and could just set parfile="myfile.par" and timfile="myfile.tim".
[2]:
import pint.config
parfile = pint.config.examplefile("NGC6440E.par")
timfile = pint.config.examplefile("NGC6440E.tim")
Let’s load the par and tim files. We could load them separately with the get_model and get_TOAs functions, but the par file may contain information about how to interpret the TOAs, so it is convenient to load the two together so that the TOAs take into account details in the par file.
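If you do want to load them separately, the standalone-script example later in these docs does it like this (a minimal sketch; passing model=m lets the TOAs pick up settings such as the solar system ephemeris from the par file):
from pint.models import get_model
import pint.toa
m = get_model(parfile)
t_all = pint.toa.get_TOAs(timfile, model=m)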
[3]:
m, t_all = get_model_and_toas(parfile, timfile)
m
[3]:
TimingModel(
AbsPhase(
MJDParameter( TZRMJD 53801.3860512007484954 (d) frozen=True),
strParameter( TZRSITE 1 frozen=True),
floatParameter( TZRFRQ 1949.609 (MHz) frozen=True)),
AstrometryEquatorial(
MJDParameter( POSEPOCH 53750.0000000000000000 (d) frozen=True),
floatParameter( PX 0.0 (mas) frozen=True),
AngleParameter( RAJ 17:48:52.75000000 (hourangle) +/- 0h00m00.05s frozen=False),
AngleParameter( DECJ -20:21:29.00000000 (deg) +/- 0d00m00.4s frozen=False),
floatParameter( PMRA 0.0 (mas / yr) frozen=True),
floatParameter( PMDEC 0.0 (mas / yr) frozen=True)),
DispersionDM(
floatParameter( DM 223.9 (pc / cm3) +/- 0.3 pc / cm3 frozen=False),
floatParameter( DM1 UNSET),
MJDParameter( DMEPOCH UNSET)),
SolarSystemShapiro(
boolParameter( PLANET_SHAPIRO N frozen=True)),
SolarWindDispersion(
floatParameter( NE_SW 0.0 (1 / cm3) frozen=True),
floatParameter( SWP 2.0 () frozen=True),
floatParameter( SWM 0.0 () frozen=True)),
Spindown(
floatParameter( F0 61.485476554 (Hz) +/- 5e-10 Hz frozen=False),
MJDParameter( PEPOCH 53750.0000000000000000 (d) frozen=True),
floatParameter( F1 -1.181e-15 (Hz / s) +/- 1e-18 Hz / s frozen=False)),
TroposphereDelay(
boolParameter( CORRECT_TROPOSPHERE N frozen=True))
)
There are many messages here. As a rule, messages marked INFO can safely be ignored; they are simply informational, but take a look at them if something unexpected happens. Messages marked WARNING or ERROR are more serious. (These messages are emitted by PINT’s logging machinery and can be suppressed or written to a log file if they are annoying.)
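For example, to quiet things down you can raise the logging threshold so that only warnings and errors are shown from that point on (a short sketch using the same pint.logging helper set up at the top of this notebook):
import pint.logging
pint.logging.setup(level="WARNING")  # show only WARNING and ERROR messages from here on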
Let’s just print out a quick summary.
[4]:
t_all.print_summary()
Number of TOAs: 62
Number of commands: 0
Number of observatories: 1 ['gbt']
MJD span: 53478.286 to 54187.587
Date span: 2005-04-18 06:51:39.290648106 to 2007-03-28 14:05:44.808308037
gbt TOAs (62):
Min freq: 1549.609 MHz
Max freq: 2212.109 MHz
Min error: 13.2 us
Max error: 118 us
Median error: 22.1 us
[5]:
rs = Residuals(t_all, m).phase_resids
xt = t_all.get_mjds()
plt.figure()
plt.plot(xt, rs, "x")
plt.title(f"{m.PSR.value} Pre-Fit Timing Residuals")
plt.xlabel("MJD")
plt.ylabel("Residual (phase)")
plt.grid()

We could proceed immediately to fitting the par file, but some of those uncertainties seem a little large. Let’s discard the data points with uncertainties \(>30\,\mu\text{s}\) - uncertainty estimation is not always reliable when the signal-to-noise is low.
[6]:
error_ok = t_all.table["error"] <= 30 * u.us
t = t_all[error_ok]
t.print_summary()
Number of TOAs: 44
Number of commands: 0
Number of observatories: 1 ['gbt']
MJD span: 53478.286 to 54187.587
Date span: 2005-04-18 06:51:39.290648106 to 2007-03-28 14:05:44.808308037
gbt TOAs (44):
Min freq: 1724.609 MHz
Max freq: 1949.609 MHz
Min error: 13.2 us
Max error: 29.9 us
Median error: 21.5 us
[7]:
rs = Residuals(t, m).phase_resids
xt = t.get_mjds()
plt.figure()
plt.plot(xt, rs, "x")
plt.title(f"{m.PSR.value} Pre-Fit Timing Residuals")
plt.xlabel("MJD")
plt.ylabel("Residual (phase)")
plt.grid()

Now let’s fit the timing model to the TOAs, using the auto function to pick the right fitter for our data.
[8]:
f = pint.fitter.Fitter.auto(t, m)
f.fit_toas()
[9]:
# Print some basic params
print("Best fit has reduced chi^2 of", f.resids.chi2_reduced)
print("RMS in phase is", f.resids.phase_resids.std())
print("RMS in time is", f.resids.time_resids.std().to(u.us))
Best fit has reduced chi^2 of 1.0367399130374004782
RMS in phase is 0.0011179201216563867
RMS in time is 18.18185666856455 us
[10]:
# Show the parameter correlation matrix
corm = f.get_parameter_correlation_matrix(pretty_print=True)
Parameter correlation matrix:
RAJ DECJ F0 F1 DM
RAJ 1.000
DECJ -0.047 1.000
F0 -0.105 0.250 1.000
F1 0.277 -0.323 -0.773 1.000
DM 0.139 0.054 -0.099 0.030 1.000
[11]:
f.print_summary()
Fitted model using downhill_wls method with 5 free parameters to 44 TOAs
Prefit residuals Wrms = 1113.6432896435356 us, Postfit residuals Wrms = 18.175665853858916 us
Chisq = 39.396 for 38 d.o.f. for reduced Chisq of 1.037
PAR Prefit Postfit Units
=================== ==================== ============================ =====
PSR 1748-2021E 1748-2021E None
EPHEM DE421 DE421 None
CLOCK TT(BIPM2019) TT(BIPM2019) None
UNITS TDB TDB None
START 53478.3 d
FINISH 54187.6 d
TIMEEPH FB90 FB90 None
T2CMETHOD IAU2000B IAU2000B None
DILATEFREQ N None
DMDATA N None
NTOA 0 None
CHI2 39.3961
CHI2R 1.03674
TRES 18.1757 us
POSEPOCH 53750 d
PX 0 mas
RAJ 17h48m52.75s 17h48m52.80032123s +/- 0.00014 hourangle_second
DECJ -20d21m29s -20d21m29.39582205s +/- 0.034 arcsec
PMRA 0 mas / yr
PMDEC 0 mas / yr
F0 61.4855 61.485476554374(18) Hz
PEPOCH 53750 d
F1 -1.181e-15 -1.1817(15)×10⁻¹⁵ Hz / s
CORRECT_TROPOSPHERE N None
PLANET_SHAPIRO N None
NE_SW 0 1 / cm3
SWP 2
SWM 0
DM 223.9 224.07(8) pc / cm3
TZRMJD 53801.4 d
TZRSITE 1 1 None
TZRFRQ 1949.61 MHz
Derived Parameters:
Period = 0.016264003404376±0.000000000000005 s
Pdot = (3.126±0.004)×10⁻¹⁹
Characteristic age = 8.244e+08 yr (braking index = 3)
Surface magnetic field = 2.28e+09 G
Magnetic field at light cylinder = 4806 G
Spindown Edot = 2.868e+33 erg / s (I=1e+45 cm2 g)
[12]:
plt.figure()
plt.errorbar(
xt.value,
f.resids.time_resids.to(u.us).value,
t.get_errors().to(u.us).value,
fmt="x",
)
plt.title(f"{m.PSR.value} Post-Fit Timing Residuals")
plt.xlabel("MJD")
plt.ylabel("Residual (us)")
plt.grid()

[13]:
t_bad = t_all[~error_ok]
r_bad = Residuals(t_bad, f.model)
plt.figure()
plt.errorbar(
xt.value,
f.resids.time_resids.to(u.us).value,
t.get_errors().to(u.us).value,
fmt="x",
label="used in fit",
)
plt.errorbar(
t_bad.get_mjds().value,
r_bad.time_resids.to(u.us).value,
t_bad.get_errors().to(u.us).value,
fmt="x",
label="bad data",
)
plt.title(f"{m.PSR.value} Post-Fit Timing Residuals")
plt.xlabel("MJD")
plt.ylabel("Residual (us)")
plt.grid()
plt.legend(loc="upper left")
[13]:
<matplotlib.legend.Legend at 0x7f2024dc61d0>

[14]:
plt.show()
[15]:
f.model.write_parfile("/tmp/output.par", "wt")
print(f.model.as_parfile())
# Created: 2024-04-26T18:30:06.991225
# PINT_version: 1.0
# User: docs
# Host: build-24199868-project-85767-nanograv-pint
# OS: Linux-5.19.0-1028-aws-x86_64-with-glibc2.35
# Python: 3.11.6 (main, Feb 1 2024, 16:47:41) [GCC 11.4.0]
# Format: pint
PSR 1748-2021E
EPHEM DE421
CLK TT(BIPM2019)
UNITS TDB
START 53478.2858714195382639
FINISH 54187.5873241702319097
TIMEEPH FB90
T2CMETHOD IAU2000B
DILATEFREQ N
DMDATA N
NTOA 44
CHI2 39.39611669542122
CHI2R 1.0367399130374004
TRES 18.175665853858915125
RAJ 17:48:52.80032123 1 0.00013868970124516312
DECJ -20:21:29.39582205 1 0.03403292479973538814
PMRA 0.0
PMDEC 0.0
PX 0.0
POSEPOCH 53750.0000000000000000
F0 61.48547655437361534 1 1.841356378258774786e-11
F1 -1.1816723614788466795e-15 1 1.4578585393661638702e-18
PEPOCH 53750.0000000000000000
CORRECT_TROPOSPHERE N
PLANET_SHAPIRO N
SOLARN0 0.0
SWM 0.0
DM 224.06649954612109599 1 0.082722266447867537353
TZRMJD 53801.3860512007484954
TZRSITE 1
TZRFRQ 1949.609
This Jupyter notebook can be downloaded from PINT_walkthrough.ipynb, or viewed as a python script at PINT_walkthrough.py.
PINT Example Session
The PINT homepage is at: https://github.com/nanograv/PINT.
The documentation is available here: https://nanograv-pint.readthedocs.io/en/latest/index.html
PINT can be run via a Python script, in an interactive session with ipython or jupyter, or using one of the command-line tools provided.
Times of Arrival (TOAs)
The raw data for PINT are TOAs, which can be read in from files in a variety of formats, or constructed programmatically. PINT currently can read TEMPO, Tempo2, and ITOA text files, as well as a range of spacecraft FITS format event files (e.g. Fermi “FT1” and NICER .evt files).
Note: The first time TOAs get read in, a lot of processing happens, which can take some time. However, a “pickle” file can be saved, so the next time the same file is loaded (if nothing has changed), the TOAs will be read from the pickle file, which is much faster.
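To enable this caching when loading a .tim file, you can do something like the following (a sketch; the usepickle keyword of pint.toa.get_TOAs is assumed to control this behaviour):
import pint.config
import pint.toa as toa
# The first call does the full processing and saves a pickle; later calls reuse it
# if the underlying file has not changed.
t = toa.get_TOAs(pint.config.examplefile("NGC6440E.tim"), usepickle=True)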
[1]:
import tempfile
import astropy.units as u
from pprint import pprint
from glob import glob
import pint.logging
# setup the logging
# let's have it give less detail
pint.logging.setup(level="WARNING")
[1]:
1
[2]:
%matplotlib inline
import matplotlib.pyplot as plt
# Turn on quantity support for plotting. This is very helpful!
from astropy.visualization import quantity_support
quantity_support()
[2]:
<astropy.visualization.units.quantity_support.<locals>.MplQuantityConverter at 0x7fcc505e1110>
[3]:
# Here is how to create a single TOA in Python
# The first argument is an MJD(UTC) as a 2-double tuple to allow extended precision
# and the second argument is the TOA uncertainty
# Wherever possible, it is good to use astropy units on the values,
# but there are sensible defaults if you leave them out (us for uncertainty, MHz for freq)
import pint.toa as toa
a = toa.TOA(
(54567, 0.876876876876876),
4.5 * u.us,
freq=1400.0 * u.MHz,
obs="GBT",
backend="GUPPI",
name="guppi_56789.fits",
)
print(a)
54567.8768768768768759: 4.500 us error at 'gbt' at 1400.0000 MHz {'backend': 'GUPPI', 'name': 'guppi_56789.fits'}
[4]:
# An example of reading a TOA file
import pint.toa as toa
import pint.config
# maybe we want extra logging info here to see what happens when we load TOAs
pint.logging.setup(level="DEBUG")
t = toa.get_TOAs(pint.config.examplefile("NGC6440E.tim"), ephem="DE440")
# but then turn back to "WARNING" later
pint.logging.setup(level="WARNING")
DEBUG (pint.toa ): No pulse number flags found in the TOAs
DEBUG (pint.toa ): Applying clock corrections (include_gps = True, include_bipm = True)
INFO (pint.observatory ): Applying GPS to UTC clock correction (~few nanoseconds)
INFO (pint.observatory ): Loading global GPS clock file
DEBUG (pint.observatory.clock_file ): Global clock file gps2utc.clk saving kwargs={'bogus_last_correction': False, 'valid_beyond_ends': False}
DEBUG (pint.observatory.clock_file ): Loading TEMPO2-format observatory clock correction file gps2utc.clk (/home/docs/.astropy/cache/download/url/d3c81b5766f4bfb84e65504c8a453085/contents) with bogus_last_correction=False
INFO (pint.observatory ): Using global clock file for gps2utc.clk with bogus_last_correction=False
INFO (pint.observatory ): Applying TT(TAI) to TT(BIPM2021) clock correction (~27 us)
INFO (pint.observatory ): Loading BIPM clock version bipm2021
DEBUG (pint.observatory.clock_file ): Global clock file tai2tt_bipm2021.clk saving kwargs={'bogus_last_correction': False, 'valid_beyond_ends': False}
DEBUG (pint.observatory.clock_file ): Loading TEMPO2-format observatory clock correction file tai2tt_bipm2021.clk (/home/docs/.astropy/cache/download/url/e00edeef4edde217d65207a9abeb6a8c/contents) with bogus_last_correction=False
INFO (pint.observatory ): Using global clock file for tai2tt_bipm2021.clk with bogus_last_correction=False
DEBUG (pint.observatory.clock_file ): Global clock file time_gbt.dat saving kwargs={'bogus_last_correction': False, 'valid_beyond_ends': False}
DEBUG (pint.observatory.clock_file ): Loading TEMPO-format observatory clock correction file time_gbt.dat (/home/docs/.astropy/cache/download/url/599e3ebbfc317e090244ee1ef4c79374/contents) with bogus_last_correction=False
INFO (pint.observatory ): Using global clock file for time_gbt.dat with bogus_last_correction=False
INFO (pint.observatory.topo_obs ): Applying observatory clock corrections for observatory='gbt'.
DEBUG (pint.toa ): Computing TDB columns.
DEBUG (pint.toa ): Using EPHEM = DE440 for TDB calculation.
DEBUG (pint.toa ): Computing PosVels of observatories and Earth, using DE440
INFO (pint.solar_system_ephemerides ): Set solar system ephemeris to de440 through astropy
DEBUG (pint.toa ): SSB obs pos [-1.31656418e+11 -6.52210907e+10 -2.82886126e+10] m
DEBUG (pint.toa ): Adding columns ssb_obs_pos ssb_obs_vel obs_sun_pos
[4]:
3
[5]:
# You can print a summary of the loaded TOAs
t.print_summary()
Number of TOAs: 62
Number of commands: 0
Number of observatories: 1 ['gbt']
MJD span: 53478.286 to 54187.587
Date span: 2005-04-18 06:51:39.290648106 to 2007-03-28 14:05:44.808308037
gbt TOAs (62):
Min freq: 1549.609 MHz
Max freq: 2212.109 MHz
Min error: 13.2 us
Max error: 118 us
Median error: 22.1 us
[6]:
# The get_mjds() method returns an array of the MJDs for the TOAs
# Here is the MJD of the first TOA. Notice that it has units of days
pprint(t.get_mjds())
<Quantity [53478.28587142, 53483.27670519, 53489.46838979, 53679.87564592,
53679.87564537, 53679.87564493, 53679.87564458, 53679.87564513,
53681.700751 , 53681.9545449 , 53683.73678777, 53685.73745904,
53687.68639838, 53687.95032739, 53690.8505221 , 53695.69557327,
53695.85890789, 53700.71983242, 53700.86649642, 53709.63751695,
53709.80961233, 53740.56747467, 53740.77459869, 53801.3860512 ,
53801.59143301, 53833.2978103 , 53833.50245772, 53843.33207938,
53865.18476778, 53865.37595138, 53895.11283426, 53895.3234694 ,
53920.05274172, 53920.23971474, 53954.97216082, 53955.17456176,
53980.90304181, 53981.11981343, 54010.82143311, 54011.03176787,
54050.70474316, 54050.94624708, 54093.65660523, 54095.65330737,
54098.6648706 , 54099.70978479, 54148.68651943, 54150.42513338,
54151.52682219, 54152.71744732, 54153.54858413, 54160.52286339,
54187.33158349, 54187.58732417, 54099.70978574, 54099.70978543,
54099.70978515, 54099.7097849 , 54099.70978469, 54099.70978449,
54099.70978432, 54099.70978416] d>
TOAs are stored in an Astropy Table inside an instance of the TOAs class.
[7]:
# List the table columns, which include pre-computed TDB times and
# solar system positions and velocities
t.table.colnames
[7]:
['index',
'mjd',
'mjd_float',
'error',
'freq',
'obs',
'flags',
'delta_pulse_number',
'tdb',
'tdbld',
'ssb_obs_pos',
'ssb_obs_vel',
'obs_sun_pos']
Lots of cool things that tables can do…
[8]:
# This pops open a browser window showing the contents of the table
# t.table.show_in_browser()
You can do fancy sorting, selecting, and re-arranging very easily.
[9]:
select = t.get_errors() < 20 * u.us
print(select)
[False False False False False False False True False False False False
False False True False True False False False True False True False
True True True True False True False True True True False False
False False False False False False True True False True True False
False False True False False False False False False False False False
False False]
[10]:
pprint(t.table["tdb"][select])
<Column name='tdb' dtype='object' length=18>
53679.87638798794
53690.85126495607
53695.85965074819
53709.81035518692
53740.775353131845
53801.59218746964
53833.2985647664
53833.50321218054
53843.33283383857
53865.37670583518
53895.32422385059
53920.05349616944
53920.240469182434
54093.65735966989
54095.65406181057
54099.71053923451
54148.6872738913
54153.54933858948
TOAs objects have a select() method to select based on a boolean mask. This selection can be undone later with unselect.
[11]:
t.print_summary()
t.select(select)
t.print_summary()
t.unselect()
t.print_summary()
Number of TOAs: 62
Number of commands: 0
Number of observatories: 1 ['gbt']
MJD span: 53478.286 to 54187.587
Date span: 2005-04-18 06:51:39.290648106 to 2007-03-28 14:05:44.808308037
gbt TOAs (62):
Min freq: 1549.609 MHz
Max freq: 2212.109 MHz
Min error: 13.2 us
Max error: 118 us
Median error: 22.1 us
Number of TOAs: 18
Number of commands: 0
Number of observatories: 1 ['gbt']
MJD span: 53679.876 to 54153.549
Date span: 2005-11-05 21:00:55.739565465 to 2007-02-22 13:09:57.668850184
gbt TOAs (18):
Min freq: 1949.609 MHz
Max freq: 1949.609 MHz
Min error: 13.2 us
Max error: 19.6 us
Median error: 16.4 us
Number of TOAs: 62
Number of commands: 0
Number of observatories: 1 ['gbt']
MJD span: 53478.286 to 54187.587
Date span: 2005-04-18 06:51:39.290648106 to 2007-03-28 14:05:44.808308037
gbt TOAs (62):
Min freq: 1549.609 MHz
Max freq: 2212.109 MHz
Min error: 13.2 us
Max error: 118 us
Median error: 22.1 us
PINT routines / classes / functions use Astropy Units internally and externally as much as possible:
[12]:
pprint(t.get_errors())
<Quantity [ 21.71, 21.95, 29.95, 25.46, 23.43, 31.67, 30.26, 13.52,
21.64, 27.41, 24.58, 23.52, 21.71, 21.47, 17.72, 28.88,
14.63, 38.03, 31.47, 33.26, 13.88, 26.89, 18.29, 21.48,
17.88, 18.59, 19.03, 15.07, 21.58, 14.72, 25.14, 14.65,
19.29, 13.25, 20.71, 23.57, 23.45, 22.16, 23.53, 21.01,
21.66, 75.3 , 19.65, 16.28, 21.93, 14. , 19.35, 32.92,
33.83, 118.43, 16.45, 30.18, 21.8 , 20.75, 32.75, 31.29,
37.13, 37.4 , 35.24, 50.83, 38.43, 48.59] us>
The times in each row contain (or are derived from) Astropy Time objects:
[13]:
toa0 = t.table["mjd"][0]
[14]:
toa0.tai
[14]:
<Time object: scale='tai' format='pulsar_mjd' value=53478.28624178991>
But the most useful timescale, TDB, is also stored in its own column as a long double numpy array, to maintain precision and avoid having to redo the conversion. Note that this is the TOA time converted to the TDB timescale; the Solar System delays have not been applied, so this is NOT what people call “barycentered times”.
[15]:
pprint(t.table["tdbld"][:3])
<Column name='tdbld' dtype='float128' length=3>
53478.286614308378393
53483.277448077169023
53489.469132675783516
Timing Models
Now let’s define and load a timing model
[16]:
import pint.models as models
m = models.get_model(pint.config.examplefile("NGC6440E.par"))
[17]:
# Printing a model gives the parfile representation
print(m)
# Created: 2024-04-26T18:23:01.836077
# PINT_version: 1.0
# User: docs
# Host: build-24199868-project-85767-nanograv-pint
# OS: Linux-5.19.0-1028-aws-x86_64-with-glibc2.35
# Python: 3.11.6 (main, Feb 1 2024, 16:47:41) [GCC 11.4.0]
# Format: pint
PSR 1748-2021E
EPHEM DE421
CLK TT(BIPM2019)
UNITS TDB
TIMEEPH FB90
T2CMETHOD IAU2000B
DILATEFREQ N
DMDATA N
NTOA 0
RAJ 17:48:52.75000000 1 0.04999999999999999584
DECJ -20:21:29.00000000 1 0.40000000000000002220
PMRA 0.0
PMDEC 0.0
PX 0.0
POSEPOCH 53750.0000000000000000
F0 61.485476554 1 5e-10
F1 -1.181e-15 1 1e-18
PEPOCH 53750.0000000000000000
CORRECT_TROPOSPHERE N
PLANET_SHAPIRO N
SOLARN0 0.0
SWM 0.0
DM 223.9 1 0.3
TZRMJD 53801.3860512007484954
TZRSITE 1
TZRFRQ 1949.609
Timing models are composed of “delay” terms and “phase” terms, which are computed by the Components of the model. The delay terms are evaluated in order, going from terms local to the Solar System, which are needed for computing ‘barycenter-corrected’ TOAs, through terms for the binary system.
[18]:
# delay_funcs lists all the delay functions in the model, and the order is important!
m.delay_funcs
[18]:
[<bound method Astrometry.solar_system_geometric_delay of AstrometryEquatorial(
MJDParameter( POSEPOCH 53750.0000000000000000 (d) frozen=True),
floatParameter( PX 0.0 (mas) frozen=True),
AngleParameter( RAJ 17:48:52.75000000 (hourangle) +/- 0h00m00.05s frozen=False),
AngleParameter( DECJ -20:21:29.00000000 (deg) +/- 0d00m00.4s frozen=False),
floatParameter( PMRA 0.0 (mas / yr) frozen=True),
floatParameter( PMDEC 0.0 (mas / yr) frozen=True))>,
<bound method TroposphereDelay.troposphere_delay of TroposphereDelay(
boolParameter( CORRECT_TROPOSPHERE N frozen=True))>,
<bound method SolarSystemShapiro.solar_system_shapiro_delay of SolarSystemShapiro(
boolParameter( PLANET_SHAPIRO N frozen=True))>,
<bound method SolarWindDispersion.solar_wind_delay of SolarWindDispersion(
floatParameter( NE_SW 0.0 (1 / cm3) frozen=True),
floatParameter( SWP 2.0 () frozen=True),
floatParameter( SWM 0.0 () frozen=True))>,
<bound method DispersionDM.constant_dispersion_delay of DispersionDM(
floatParameter( DM 223.9 (pc / cm3) +/- 0.3 pc / cm3 frozen=False),
floatParameter( DM1 UNSET),
MJDParameter( DMEPOCH UNSET))>]
The phase functions include the spindown model and an absolute phase definition (if the TZR parameters are specified).
[19]:
# And phase_funcs holds a list of all the phase functions
m.phase_funcs
[19]:
[<bound method Spindown.spindown_phase of Spindown(
floatParameter( F0 61.485476554 (Hz) +/- 5e-10 Hz frozen=False),
MJDParameter( PEPOCH 53750.0000000000000000 (d) frozen=True),
floatParameter( F1 -1.181e-15 (Hz / s) +/- 1e-18 Hz / s frozen=False))>]
You can easily show/compute individual terms…
[20]:
ds = m.solar_system_shapiro_delay(t)
pprint(ds)
<Quantity [-4.11774615e-06, -4.58215733e-06, -5.09435415e-06,
1.26025166e-05, 1.26025164e-05, 1.26025162e-05,
1.26025160e-05, 1.26025163e-05, 1.34033282e-05,
1.35163227e-05, 1.43416919e-05, 1.53159181e-05,
1.63198995e-05, 1.64587639e-05, 1.80783671e-05,
2.11530227e-05, 2.12647452e-05, 2.49851393e-05,
2.51080759e-05, 3.45107578e-05, 3.47450146e-05,
3.00319035e-05, 2.98083009e-05, 2.11804876e-06,
2.07541048e-06, -3.00762925e-06, -3.03173087e-06,
-4.09655364e-06, -5.80849733e-06, -5.81983363e-06,
-6.90339229e-06, -6.90646307e-06, -6.82672804e-06,
-6.82292820e-06, -5.19141699e-06, -5.17650521e-06,
-2.63564143e-06, -2.60880558e-06, 2.28385789e-06,
2.32788087e-06, 1.51692739e-05, 1.52882687e-05,
5.13321680e-05, 4.61456318e-05, 3.99876478e-05,
3.82020217e-05, 6.59654820e-06, 6.09155453e-06,
5.78124972e-06, 5.45386906e-06, 5.22873335e-06,
3.47897241e-06, -1.52400083e-06, -1.56079047e-06,
3.82020202e-05, 3.82020207e-05, 3.82020211e-05,
3.82020215e-05, 3.82020219e-05, 3.82020222e-05,
3.82020225e-05, 3.82020227e-05] s>
The get_mjds() method can return the TOA times as either astropy Time objects (for high precision) or as double-precision Quantities (for easy plotting).
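A quick comparison of the two return types (a sketch; high_precision is the same keyword used in the plotting cell below):
mjds_plot = t.get_mjds()  # double-precision Quantity in days, convenient for plotting
mjds_precise = t.get_mjds(high_precision=True)  # higher-precision representation, as described above
print(type(mjds_plot), type(mjds_precise[0]))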
[21]:
plt.plot(t.get_mjds(high_precision=False), ds.to(u.us), "+")
plt.xlabel("MJD")
plt.ylabel("Solar System Shapiro Delay ($\mu$s)")
[21]:
Text(0, 0.5, 'Solar System Shapiro Delay ($\\mu$s)')

Here are all of the terms added together:
[22]:
pprint(m.delay(t))
<Quantity [-256.27796452, -292.17743481, -333.75112984, 357.1520691 ,
357.10408427, 357.06636421, 357.03617705, 357.08413648,
367.69372882, 369.13822873, 379.08005434, 389.80438944,
399.79826793, 401.11625247, 415.02771298, 435.8981066 ,
436.5464231 , 454.24090015, 454.72529413, 478.04191824,
478.38369877, 468.46391764, 467.91545254, 94.36835601,
92.628911 , -175.81800261, -177.4654317 , -253.52926427,
-394.2164241 , -395.23707448, -497.24016588, -497.540425 ,
-488.57121649, -488.17791764, -337.19445584, -335.91562337,
-146.57400016, -144.80615177, 105.78789561, 107.54212818,
389.24573948, 390.51810899, 490.40905135, 488.47319094,
484.41349149, 482.68725532, 240.08971331, 226.85700881,
218.50175843, 209.24129419, 202.56724193, 146.32198475,
-81.97904962, -84.14626638, 482.76923444, 482.74204028,
482.71810824, 482.69693669, 482.67811686, 482.6613129 ,
482.64624688, 482.63268713] s>
[23]:
pprint(m.phase(t))
Phase(int=<Quantity [-1.71639818e+09, -1.68988294e+09, -1.65698802e+09,
-6.45521434e+08, -6.45521434e+08, -6.45521434e+08,
-6.45521434e+08, -6.45521434e+08, -6.35826494e+08,
-6.34478342e+08, -6.25011064e+08, -6.14383467e+08,
-6.04030643e+08, -6.02628642e+08, -5.87222662e+08,
-5.61485361e+08, -5.60617711e+08, -5.34795890e+08,
-5.34016790e+08, -4.87423535e+08, -4.86509326e+08,
-3.23112273e+08, -3.22011925e+08, 0.00000000e+00,
1.09116600e+06, 1.69542892e+08, 1.70630151e+08,
2.22853171e+08, 3.38950845e+08, 3.39966541e+08,
4.97945399e+08, 4.99064384e+08, 6.30434263e+08,
6.31427504e+08, 8.15928963e+08, 8.17004108e+08,
9.53671033e+08, 9.54822490e+08, 1.11259234e+09,
1.11370960e+09, 1.32444882e+09, 1.32573169e+09,
1.55261772e+09, 1.56322501e+09, 1.57922372e+09,
1.58477477e+09, 1.84497101e+09, 1.85420794e+09,
1.86006100e+09, 1.86638658e+09, 1.87080228e+09,
1.90785552e+09, 2.05028673e+09, 2.05164544e+09,
1.58477477e+09, 1.58477477e+09, 1.58477477e+09,
1.58477477e+09, 1.58477477e+09, 1.58477477e+09,
1.58477477e+09, 1.58477477e+09]>, frac=<Quantity [-0.00309252, -0.00801063, -0.0171438 , -0.16868086, -0.16915065,
-0.17547523, -0.17560751, -0.17249322, -0.16943038, -0.16625714,
-0.16672537, -0.16423417, -0.16116986, -0.16013094, -0.15483818,
-0.14701759, -0.1468638 , -0.13713971, -0.13910141, -0.12596384,
-0.12321319, -0.06663884, -0.06750514, -0.00067575, 0.00150531,
-0.00503603, -0.00931584, -0.01609301, -0.04283737, -0.04080684,
-0.0929814 , -0.09457692, -0.13884716, -0.13800694, -0.18976956,
-0.1910674 , -0.2118581 , -0.21299342, -0.21584403, -0.21263116,
-0.17415625, -0.17657797, -0.09980529, -0.09705824, -0.09214972,
-0.09090943, -0.02173754, -0.01961242, -0.01233623, 0.0003572 ,
-0.01552563, -0.01669291, -0.01407142, -0.01213314, -0.08416629,
-0.08750831, -0.09085623, -0.0897284 , -0.09223133, -0.08668348,
-0.09712123, -0.09125008]>)
Residuals
[24]:
import pint.residuals
[25]:
rs = pint.residuals.Residuals(t, m)
[26]:
# Note that the Residuals object contains a toas member that has the TOAs used to compute
# the residuals, so you can use that to get the MJDs and uncertainties for each TOA
# Also note that plotting astropy Quantities must be enabled using
# astropy quantity_support() first (see beginning of this notebook)
plt.errorbar(
rs.toas.get_mjds(),
rs.time_resids.to(u.us),
yerr=rs.toas.get_errors().to(u.us),
fmt=".",
)
plt.title(f"{m.PSR.value} Pre-Fit Timing Residuals")
plt.xlabel("MJD")
plt.ylabel("Residual (us)")
plt.grid()

Fitting and Post-Fit residuals
The fitter is completely separate from the model and the TOA code, so with some easy coding you can create a new subclass of Fitter to use any type of fitter. This example uses PINT’s Weighted Least Squares fitter. For this fitter, fit_toas() returns the chi^2 after the fit.
[27]:
import pint.fitter
f = pint.fitter.WLSFitter(t, m)
f.fit_toas()  # fit_toas() returns the post-fit chi squared for this fitter
[27]:
59.57436756189371577
[28]:
# You can now print a nice human-readable summary of the fit
f.print_summary()
Fitted model using weighted_least_square method with 5 free parameters to 62 TOAs
Prefit residuals Wrms = 1090.793040947669 us, Postfit residuals Wrms = 21.182047246369727 us
Chisq = 59.574 for 56 d.o.f. for reduced Chisq of 1.064
PAR Prefit Postfit Units
=================== ==================== ============================ =====
PSR 1748-2021E 1748-2021E None
EPHEM DE421 DE440 None
CLOCK TT(BIPM2019) TT(BIPM2021) None
UNITS TDB TDB None
START 53478.3 d
FINISH 54187.6 d
TIMEEPH FB90 FB90 None
T2CMETHOD IAU2000B IAU2000B None
DILATEFREQ N None
DMDATA N None
NTOA 0 None
CHI2 59.5744
CHI2R 1.06383
TRES 21.182 us
POSEPOCH 53750 d
PX 0 mas
RAJ 17h48m52.75s 17h48m52.80035401s +/- 0.00014 hourangle_second
DECJ -20d21m29s -20d21m29.38334163s +/- 0.033 arcsec
PMRA 0 mas / yr
PMDEC 0 mas / yr
F0 61.4855 61.485476554371(18) Hz
PEPOCH 53750 d
F1 -1.181e-15 -1.1813(14)×10⁻¹⁵ Hz / s
CORRECT_TROPOSPHERE N None
PLANET_SHAPIRO N None
NE_SW 0 1 / cm3
SWP 2
SWM 0
DM 223.9 224.114(35) pc / cm3
TZRMJD 53801.4 d
TZRSITE 1 1 None
TZRFRQ 1949.61 MHz
Derived Parameters:
Period = 0.016264003404376±0.000000000000005 s
Pdot = (3.125±0.004)×10⁻¹⁹
Characteristic age = 8.246e+08 yr (braking index = 3)
Surface magnetic field = 2.28e+09 G
Magnetic field at light cylinder = 4806 G
Spindown Edot = 2.868e+33 erg / s (I=1e+45 cm2 g)
[29]:
# Let's plot the post-fit residuals
plt.errorbar(
t.get_mjds(), f.resids.time_resids.to(u.us), t.get_errors().to(u.us), fmt="x"
)
plt.title(f"{m.PSR.value} Post-Fit Timing Residuals")
plt.xlabel("MJD")
plt.ylabel("Residual (us)")
plt.grid()

Now let’s save (and print) the post-fit par file. We’ll request a more TEMPO2-compatible file, though we could have requested a more TEMPO-style file or the native PINT format. These variants differ only slightly, just enough for each to be readable by the corresponding piece of software; PINT itself can read all three.
[30]:
f.model.write_parfile("/tmp/output.par", format="tempo2")
print(f.model.as_parfile(format="tempo2"))
# Created: 2024-04-26T18:23:03.131961
# PINT_version: 1.0
# User: docs
# Host: build-24199868-project-85767-nanograv-pint
# OS: Linux-5.19.0-1028-aws-x86_64-with-glibc2.35
# Python: 3.11.6 (main, Feb 1 2024, 16:47:41) [GCC 11.4.0]
# Format: tempo2
MODE 1
PSR 1748-2021E
EPHEM DE440
CLK TT(BIPM2021)
UNITS TDB
START 53478.2858714195382639
FINISH 54187.5873241702319097
TIMEEPH FB90
#T2CMETHOD IAU2000B
DILATEFREQ N
DMDATA 0
NTOA 62
CHI2 59.574367561893716
CHI2R 1.0638279921766736
TRES 21.182047246369727363
RAJ 17:48:52.80035401 1 0.00013524660895673345
DECJ -20:21:29.38334163 1 0.03285153305570393673
PMRA 0.0
PMDEC 0.0
PX 0.0
POSEPOCH 53750.0000000000000000
F0 61.485476554371304245 1 1.8086084528481615922e-11
F1 -1.18133207889496925105e-15 1 1.4418540384438644559e-18
PEPOCH 53750.0000000000000000
CORRECT_TROPOSPHERE N
PLANET_SHAPIRO N
SOLARN0 0.0
DM 224.11379740488740128 1 0.03493898049412474255
TZRMJD 53801.3860512007484954
TZRSITE 1
TZRFRQ 1949.609
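To get the other two variants, just change the format argument (a sketch; “pint” is the default native format seen earlier in this notebook, while the “tempo” format string is an assumption based on the three variants named above):
print(f.model.as_parfile(format="pint"))   # native PINT format (the default)
print(f.model.as_parfile(format="tempo"))  # TEMPO-style variant (format string assumed)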
Other interesting things
You can make barycentered TOAs in a single line if you have a model and a TOAs object! These are TDB times with the Solar System delays applied (precisely which delay components are applied is configurable; the default applies all delays before the ones associated with the binary system).
[31]:
pprint(m.get_barycentric_toas(t))
<Quantity [53478.28958049, 53483.28082976, 53489.47299554, 53679.87225507,
53679.87225507, 53679.87225507, 53679.87225507, 53679.87225507,
53681.69723814, 53681.95101532, 53683.73314312, 53685.73369027,
53687.68251394, 53687.9464277 , 53690.84646139, 53695.69127101,
53695.85459813, 53700.71531786, 53700.86197625, 53709.63272692,
53709.80481834, 53740.56280708, 53740.76993744, 53801.38571344,
53801.59111538, 53833.3005997 , 53833.50526618, 53843.33576821,
53865.19008493, 53865.38128034, 53895.11934381, 53895.32998242,
53920.05915093, 53920.24611939, 53954.97681797, 53955.1792041 ,
53980.90549269, 53981.12224386, 54010.82096314, 54011.0312776 ,
54050.70099243, 54050.94248163, 54093.65168364, 54095.64840819,
54098.6600184 , 54099.70495258, 54148.68449508, 54150.42326217,
54151.5250477 , 54152.71578 , 54153.54699406, 54160.52192431,
54187.33328679, 54187.58905255, 54099.70495258, 54099.70495258,
54099.70495258, 54099.70495258, 54099.70495258, 54099.70495258,
54099.70495258, 54099.70495258] d>
Let’s export the clock corrections as they currently stand so we can save these exact versions for reproducibility purposes.
[32]:
import pint.observatory.topo_obs
d = tempfile.mkdtemp()
pint.observatory.topo_obs.export_all_clock_files(d)
for f in sorted(glob(f"{d}/*")):
print(f)
/tmp/tmp95uglrac/gps2utc.clk
/tmp/tmp95uglrac/tai2tt_bipm2021.clk
This Jupyter notebook can be downloaded from fit_NGC6440E.ipynb, or viewed as a python script at fit_NGC6440E.py.
Demonstrate the use of PINT in a script
This notebook is primarily designed to operate as a plain .py
script. You should be able to run the .py
script that occurs in the docs/examples/
directory in order to carry out a simple fit of a timing model to some data. You should also be able to run the notebook version as it is here (it may be necessary to make notebooks
to produce a .ipynb
version using jupytext
).
[1]:
import os
import astropy.units as u
# This will change which output method matplotlib uses and may behave better on some machines
# import matplotlib
# matplotlib.use('TKAgg')
import matplotlib.pyplot as plt
import pint.fitter
import pint.residuals
import pint.toa
from pint.models import get_model_and_toas
import pint.logging
# setup logging
pint.logging.setup(level="INFO")
[1]:
1
[2]:
import pint.config
parfile = pint.config.examplefile("NGC6440E.par")
timfile = pint.config.examplefile("NGC6440E.tim")
assert os.path.exists(parfile)
assert os.path.exists(timfile)
[3]:
# Read the timing model and the TOAs
m, t = get_model_and_toas(parfile, timfile)
If we wanted to do things separately we could do:
# Define the timing model (note the extra import)
from pint.models import get_model
m = get_model(parfile)
# Read in the TOAs, using the solar system ephemeris and other settings from the model
t = pint.toa.get_TOAs(timfile, model=m)
If we wanted to select some subset of the TOAs, there are tools to do that. The easiest approach is to make a new TOAs object containing just the subset you care about (we will make these but not use them):
Use every other TOA
[4]:
t_every_other = t[::2]
Use only TOAs with errors < 30 us
[5]:
t_small_errors = t[t.get_errors() < 30 * u.us]
Use only TOAs from the GBT (although this is all of them for this example)
[6]:
t_gbt = t[t.get_obss() == "gbt"]
[7]:
# Print a summary of the TOAs that we have
print(t.get_summary())
Number of TOAs: 62
Number of commands: 0
Number of observatories: 1 ['gbt']
MJD span: 53478.286 to 54187.587
Date span: 2005-04-18 06:51:39.290648106 to 2007-03-28 14:05:44.808308037
gbt TOAs (62):
Min freq: 1549.609 MHz
Max freq: 2212.109 MHz
Min error: 13.2 us
Max error: 118 us
Median error: 22.1 us
[8]:
# These are pre-fit residuals
rs = pint.residuals.Residuals(t, m).phase_resids
xt = t.get_mjds()
plt.plot(xt, rs, "x")
plt.title(f"{m.PSR.value} Pre-Fit Timing Residuals")
plt.xlabel("MJD")
plt.ylabel("Residual (phase)")
plt.grid()
plt.show()

[9]:
# Now do the fit
print("Fitting.")
f = pint.fitter.DownhillWLSFitter(t, m)
f.fit_toas()
# f = pint.fitter.DownhillGLSFitter(t, m)
# f.fit_toas(full_cov=True)
Fitting.
[10]:
# Print some basic params
print("Best fit has reduced chi^2 of", f.resids.reduced_chi2)
print("RMS in phase is", f.resids.phase_resids.std())
print("RMS in time is", f.resids.time_resids.std().to(u.us))
Best fit has reduced chi^2 of 1.0638341436607690591
RMS in phase is 0.0020495747259610133
RMS in time is 33.334290336833924 us
[11]:
# Show the parameter correlation matrix
corm = f.get_parameter_correlation_matrix(pretty_print=True)
Parameter correlation matrix:
RAJ DECJ F0 F1 DM
RAJ 1.000
DECJ -0.072 1.000
F0 -0.087 0.247 1.000
F1 0.294 -0.344 -0.798 1.000
DM -0.005 0.065 0.007 0.058 1.000
[12]:
print(f.get_summary())
Fitted model using downhill_wls method with 5 free parameters to 62 TOAs
Prefit residuals Wrms = 1090.5801805746107 us, Postfit residuals Wrms = 21.182108487867104 us
Chisq = 59.575 for 56 d.o.f. for reduced Chisq of 1.064
PAR Prefit Postfit Units
=================== ==================== ============================ =====
PSR 1748-2021E 1748-2021E None
EPHEM DE421 DE421 None
CLOCK TT(BIPM2019) TT(BIPM2019) None
UNITS TDB TDB None
START 53478.3 d
FINISH 54187.6 d
TIMEEPH FB90 FB90 None
T2CMETHOD IAU2000B IAU2000B None
DILATEFREQ N None
DMDATA N None
NTOA 0 None
CHI2 59.5747
CHI2R 1.06383
TRES 21.1821 us
POSEPOCH 53750 d
PX 0 mas
RAJ 17h48m52.75s 17h48m52.8003469s +/- 0.00014 hourangle_second
DECJ -20d21m29s -20d21m29.3833405s +/- 0.033 arcsec
PMRA 0 mas / yr
PMDEC 0 mas / yr
F0 61.4855 61.485476554372(18) Hz
PEPOCH 53750 d
F1 -1.181e-15 -1.1813(14)×10⁻¹⁵ Hz / s
CORRECT_TROPOSPHERE N None
PLANET_SHAPIRO N None
NE_SW 0 1 / cm3
SWP 2
SWM 0
DM 223.9 224.114(35) pc / cm3
TZRMJD 53801.4 d
TZRSITE 1 1 None
TZRFRQ 1949.61 MHz
Derived Parameters:
Period = 0.016264003404376±0.000000000000005 s
Pdot = (3.125±0.004)×10⁻¹⁹
Characteristic age = 8.246e+08 yr (braking index = 3)
Surface magnetic field = 2.28e+09 G
Magnetic field at light cylinder = 4806 G
Spindown Edot = 2.868e+33 erg / s (I=1e+45 cm2 g)
[13]:
plt.errorbar(
xt.value,
f.resids.time_resids.to_value(u.us),
t.get_errors().to_value(u.us),
fmt="x",
)
plt.title(f"{m.PSR.value} Post-Fit Timing Residuals")
plt.xlabel("MJD")
plt.ylabel("Residual (us)")
plt.grid()
plt.show()

[14]:
f.model.write_parfile("/tmp/output.par", "wt")
print(f.model.as_parfile())
# Created: 2024-04-26T18:24:36.026939
# PINT_version: 1.0
# User: docs
# Host: build-24199868-project-85767-nanograv-pint
# OS: Linux-5.19.0-1028-aws-x86_64-with-glibc2.35
# Python: 3.11.6 (main, Feb 1 2024, 16:47:41) [GCC 11.4.0]
# Format: pint
PSR 1748-2021E
EPHEM DE421
CLK TT(BIPM2019)
UNITS TDB
START 53478.2858714195382639
FINISH 54187.5873241702319097
TIMEEPH FB90
T2CMETHOD IAU2000B
DILATEFREQ N
DMDATA N
NTOA 62
CHI2 59.57471204500307
CHI2R 1.063834143660769
TRES 21.182108487867104272
RAJ 17:48:52.80034690 1 0.00013524663578313309
DECJ -20:21:29.38334050 1 0.03285268549434944285
PMRA 0.0
PMDEC 0.0
PX 0.0
POSEPOCH 53750.0000000000000000
F0 61.48547655437249947 1 1.8086084389702723126e-11
F1 -1.1813316933589336929e-15 1 1.441854039880340105e-18
PEPOCH 53750.0000000000000000
CORRECT_TROPOSPHERE N
PLANET_SHAPIRO N
SOLARN0 0.0
SWM 0.0
DM 224.11379639347297277 1 0.03493898051062995641
TZRMJD 53801.3860512007484954
TZRSITE 1
TZRFRQ 1949.609
This Jupyter notebook can be downloaded from covariance.ipynb, or viewed as a python script at covariance.py.
Accessing correlation matrices and model derivatives
The results of a fit consist of best-fit parameter values and uncertainties, and residuals; these are conventional data products from pulsar timing. Additional information can be useful though: we can describe the correlations between model parameters in a matrix, and we can compute the derivatives of the residuals with respect to the model parameters. Both of these additional pieces of information can be obtained from a Fitter object in PINT; this notebook will demonstrate how to do this efficiently.
[1]:
import contextlib
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import scipy.linalg
import scipy.stats
from astropy.visualization import quantity_support
import pint.config
import pint.fitter
import pint.logging
import pint.models
import pint.toa
# pint.config and pint.logging are imported explicitly because they are used below
pint.logging.setup(level="INFO")
quantity_support()
[1]:
<astropy.visualization.units.quantity_support.<locals>.MplQuantityConverter at 0x7f2af00e5150>
[2]:
parfile = Path(pint.config.examplefile("NGC6440E.par"))
timfile = Path(pint.config.examplefile("NGC6440E.tim"))
assert parfile.exists()
assert timfile.exists()
[3]:
m, t = pint.models.get_model_and_toas(parfile, timfile)
Extracting the parameter covariance matrix
Unfortunately, parameter correlation matrices are not stored when .par files are recorded; only the individual parameter uncertainties are. In PINT, the machinery for computing these matrices resides in Fitter objects. We will therefore construct one and carry out a fit, but we will take zero steps, so that the fit doesn’t change the solution (and it runs fairly quickly!).
Normally you should actually do something if the model isn’t converged! Specifically, the covariance matrix probably isn’t very meaningful if you are not at a best-fit set of parameters. Unfortunately, with maxiter=0 you will always get an exception claiming that the fitter hasn’t converged.
[4]:
fitter = pint.fitter.Fitter.auto(t, m)
with contextlib.suppress(pint.fitter.MaxiterReached):
fitter.fit_toas(maxiter=0)
You can get a human-readable version of the parameter correlation matrix:
[5]:
fitter.parameter_correlation_matrix
[5]:
Parameter correlation matrix:
RAJ DECJ F0 F1 DM
RAJ 1.000
DECJ -0.072 1.000
F0 -0.087 0.247 1.000
F1 0.294 -0.344 -0.798 1.000
DM -0.005 0.065 0.007 0.058 1.000
If you want a machine-readable version:
[6]:
fitter.parameter_correlation_matrix.labels
[6]:
[[('Offset', (0, 1, Unit(dimensionless))),
('RAJ', (1, 2, Unit("1 / (Hz hourangle)"))),
('DECJ', (2, 3, Unit("1 / (Hz deg)"))),
('F0', (3, 4, Unit("1 / Hz2"))),
('F1', (4, 5, Unit("s / Hz2"))),
('DM', (5, 6, Unit("cm3 / (Hz pc)")))],
[('Offset', (0, 1, Unit(dimensionless))),
('RAJ', (1, 2, Unit("1 / (Hz hourangle)"))),
('DECJ', (2, 3, Unit("1 / (Hz deg)"))),
('F0', (3, 4, Unit("1 / Hz2"))),
('F1', (4, 5, Unit("s / Hz2"))),
('DM', (5, 6, Unit("cm3 / (Hz pc)")))]]
[7]:
fitter.parameter_correlation_matrix.matrix
[7]:
array([[ 1. , 0.02479064, -0.05713427, -0.00909425, -0.01942478,
-0.99531251],
[ 0.02479064, 1. , -0.07222405, -0.08730079, 0.29426893,
-0.00496847],
[-0.05713427, -0.07222405, 1. , 0.24713231, -0.34386087,
0.0648164 ],
[-0.00909425, -0.08730079, 0.24713231, 1. , -0.79767086,
0.00674302],
[-0.01942478, 0.29426893, -0.34386087, -0.79767086, 1. ,
0.05755779],
[-0.99531251, -0.00496847, 0.0648164 , 0.00674302, 0.05755779,
1. ]])
Be warned: the labels here are not necessarily in the same order as fitter.model.free_params. Also, if the model includes red noise parameters, there may be more rows and columns than labels in the parameter covariance matrix; these unlabelled rows and columns will always be at the end. Let’s check that there aren’t surprises waiting for this pulsar.
Note also that correlation matrices are unitless, so the units recorded in the .labels attribute are wrong.
Even if there are no noise component entries, the correlation matrix includes a row and column for the non-parameter Offset. This arises because internally PINT fits allow for a constant offset in phase, but the custom in pulsar timing is to report mean-subtracted residuals and ignore the absolute phase.
[8]:
print(f"Model free parameters: {len(fitter.model.free_params)}")
print(
f"Correlation matrix labels: {len(fitter.parameter_correlation_matrix.labels[0])}"
)
print(f"Correlation matrix shape: {fitter.parameter_correlation_matrix.shape}")
Model free parameters: 5
Correlation matrix labels: 6
Correlation matrix shape: (6, 6)
Let’s extract the correlation matrix in a more convenient form. This requires some fancy indexing to rearrange the rows and columns as needed.
[9]:
pint_correlations = fitter.parameter_correlation_matrix
params = fitter.model.free_params
corr_labels = [label for label, _ in pint_correlations.labels[0]]
ix = [corr_labels.index(p) for p in params]
raw_correlation = pint_correlations.matrix
assert np.allclose(raw_correlation, raw_correlation.T)
raw_correlation = (raw_correlation + raw_correlation.T) / 2
# extract rows in the right order then columns in the right order
correlation = (raw_correlation[ix, :])[:, ix]
assert correlation.shape == (len(params), len(params))
for i, p1 in enumerate(params):
assert p1 in corr_labels
for j, p2 in enumerate(params[: i + 1]):
assert (
correlation[i, j]
== raw_correlation[corr_labels.index(p1), corr_labels.index(p2)]
)
assert correlation[i, j] == correlation[j, i]
Let’s summarize the strongest correlations.
[10]:
correlation_list = [
(p1, p2, correlation[i, j])
for i, p1 in enumerate(params)
for j, p2 in enumerate(params[:i])
]
correlation_list.sort(key=lambda t: -abs(t[-1]))
for p1, p2, c in correlation_list:
if abs(c) < 0.5:
break
print(f"{p1:10s} {p2:10s} {c:+.15f}")
F1 F0 -0.797670856119385
Error ellipses
In the frequentist least-squares fitting we do in PINT, the model is assumed to be linear over the range of plausible values, and as a result the estimate of the plausible parameter distribution is a multivariate normal distribution (with correlations as computed above). The confidence regions we obtain are therefore ellipsoids. An n-dimensional ellipsoid is rather cumbersome to visualize, but it can be useful to plot two-dimensional projections. These are analogous to Bayesian posterior distributions and credible regions.
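For reference, the n-sigma contour radius used in the plotting code below comes from matching the two-sided tail probability of a one-dimensional Gaussian to a chi-squared distribution with 2 degrees of freedom:
\[ r_n = \sqrt{Q_{\chi^2_2}\!\left(2\,\Phi(-n)\right)}, \]
where \(\Phi\) is the standard normal CDF and \(Q_{\chi^2_2}\) is the inverse survival function of the \(\chi^2_2\) distribution; this is exactly the thresh value computed with scipy.stats in plot_ellipses().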
Let’s plot the credible region for the pair of parameters F0 and F1.
[11]:
p1 = "F0"
p2 = "F1"
i = params.index(p1)
j = params.index(p2)
cor = np.array([[1, correlation[i, j]], [correlation[i, j], 1]])
sigmas = np.array([fitter.get_fitparams_uncertainty()[p] for p in [p1, p2]])
vals, vecs = scipy.linalg.eigh(cor)
def plot_ellipses():
for n_sigma in [1, 2, 3]:
thresh = np.sqrt(scipy.stats.chi2(2).isf(2 * scipy.stats.norm.cdf(-n_sigma)))
angles = np.linspace(0, 2 * np.pi, 200)
points = thresh * (
np.sqrt(vals[0]) * np.cos(angles)[:, None] * vecs[None, :, 0]
+ np.sqrt(vals[1]) * np.sin(angles)[:, None] * vecs[None, :, 1]
)
plt.plot(
points[:, 0] * sigmas[0], points[:, 1] * sigmas[1], label=f"{n_sigma} sigma"
)
plt.axvspan(-sigmas[0], sigmas[0], alpha=0.5, label="one-sigma single-variable")
plt.axhspan(-sigmas[1], sigmas[1], alpha=0.5)
plt.xlabel(r"$\Delta$" + f"{p1}")
plt.ylabel(r"$\Delta$" + f"{p2}")
plot_ellipses()
plt.legend()
[11]:
<matplotlib.legend.Legend at 0x7f2ae5b54c50>

You can generate something like a posterior sample fairly easily:
[12]:
all_sigmas = np.array([fitter.get_fitparams_uncertainty()[p] for p in params])
sample = (
scipy.stats.multivariate_normal(cov=correlation).rvs(size=1000)
* all_sigmas[None, :]
)
plot_ellipses()
plt.plot(sample[:, i], sample[:, j], ".", label="sample points")
plt.legend()
[12]:
<matplotlib.legend.Legend at 0x7f2ae5b224d0>

Model derivatives
PINT’s fitters rely on having analytical derivatives of the timing model with respect to each parameter. These can be obtained by querying the appropriate methods of the TimingModel object, but they are more conveniently packaged as the “design matrix” for the fit. Here too the order of the parameters may well not match what is in fitter.model.free_params.
[13]:
design, names, units = fitter.get_designmatrix()
print(names)
print(units)
print(fitter.model.free_params)
print(design.shape)
['Offset', 'RAJ', 'DECJ', 'F0', 'F1', 'DM']
[Unit(dimensionless), Unit("1 / (Hz hourangle)"), Unit("1 / (Hz deg)"), Unit("1 / Hz2"), Unit("s / Hz2"), Unit("cm3 / (Hz pc)")]
['RAJ', 'DECJ', 'F0', 'F1', 'DM']
(62, 6)
Let’s look at the derivatives (normalized so their scales are comparable) as a function of time. This may give us some hints for why the covariances are what they are.
[14]:
mjds = t.get_mjds()
ix = np.argsort(mjds)
assert np.all(np.diff(mjds[ix]) > 0)
for deriv, param in zip(design.T, names):
deriv_normalized = deriv / np.sqrt((deriv**2).mean())
plt.plot(mjds[ix], deriv_normalized[ix], label=param)
plt.legend()
plt.xlabel("MJD")
plt.ylabel("Derivative (normalized)")
[14]:
Text(0, 0.5, 'Derivative (normalized)')

We didn’t actually include Offset in our covariance descriptions, but its covariance with DM is quite strong, and this plot shows why: we just don’t have much frequency coverage, so the primary effect of changing DM is to change the absolute delay. Happily this isn’t a covariance that need trouble us unless we want absolute phases.
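You can verify the strong Offset-DM correlation directly from the matrix extracted earlier (a short check reusing the corr_labels and raw_correlation variables defined above):
# Look up the Offset-DM entry of the full (unpruned) correlation matrix
i_offset = corr_labels.index("Offset")
i_dm = corr_labels.index("DM")
print(f"Offset-DM correlation: {raw_correlation[i_offset, i_dm]:+.3f}")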
This Jupyter notebook can be downloaded from check_clock_corrections.ipynb, or viewed as a python script at check_clock_corrections.py.
Check the state of PINT’s clock corrections
In order to do precision pulsar timing, it is necessary to know how the observatory clocks differ from a global time standard so that TOAs can be corrected. This requires PINT to have access to a record of measured differences. This record needs to be updated when new data is available. This notebook demonstrates how you can check the status of the clock corrections in your version of PINT. The version in the documentation also records the state of the PINT distribution at the moment the documentation was generated (which is when the code was last changed).
[1]:
import tempfile
from glob import glob
import pint.observatory
import pint.observatory.topo_obs
import pint.logging
# hide annoying INFO messages?
pint.logging.setup("WARNING")
[1]:
1
[2]:
pint.observatory.list_last_correction_mjds()
gbt 2024-03-30 00:00:00.000
time_gbt.dat 2024-04-20 12:00:00.000
gbt_pre_2021 2024-03-30 00:00:00.000
time_gbt.dat 2024-04-20 12:00:00.000
arecibo 2020-08-18 00:00:00.000
time_ao.dat 2020-08-18 00:00:00.000
arecibo_pre_2021 2020-08-18 00:00:00.000
time_ao.dat 2020-08-18 00:00:00.000
vla 2021-03-07 12:00:00.000
time_vla.dat 2021-03-07 12:00:00.000
meerkat 2024-03-30 00:00:00.000
mk2utc_observatory.clk 2024-04-04 23:44:59.971
parkes 2024-03-30 00:00:00.000
pks2gps.clk 2024-03-31 02:12:29.664
jodrell 2023-12-04 00:30:00.251
jb2gps.clk 2023-12-04 00:30:00.251
jbroach 2018-03-20 15:22:44.000
jbroach2jb.clk 2018-03-20 15:22:44.000
jb2gps.clk 2023-12-04 00:30:00.251
jbdfb 2017-05-11 00:04:53.000
jbdfb2jb.clk 2017-05-11 00:04:53.000
jb2gps.clk 2023-12-04 00:30:00.251
jbafb 2024-03-30 00:00:00.000
jodrell_pre_2021 2023-12-04 00:30:00.251
jb2gps.clk 2023-12-04 00:30:00.251
nancay 2024-03-30 00:00:00.000
ncyobs 2023-10-27 00:00:00.000
ncyobs2obspm.clk 2023-10-27 00:00:00.000
obspm2gps.clk 2023-10-27 00:00:00.000
effelsberg 2015-06-22 12:00:00.000
eff2gps.clk 2015-06-22 12:00:00.000
effelsberg_pre_2021 2015-06-22 12:00:00.000
eff2gps.clk 2015-06-22 12:00:00.000
gmrt 2024-03-30 00:00:00.000
ort 2024-03-30 00:00:00.000
wsrt 2015-06-29 02:24:00.000
wsrt2gps.clk 2015-06-29 02:24:00.000
fast 2024-03-30 00:00:00.000
time_fast.dat 2024-04-16 22:59:31.200
mwa 2024-03-30 00:00:00.000
lwa1 2024-03-30 00:00:00.000
ps1 2024-03-30 00:00:00.000
hobart 2024-03-30 00:00:00.000
most 2018-09-06 00:00:00.173
mo2gps.clk 2018-09-06 00:00:00.173
chime 2024-03-30 00:00:00.000
magic 2024-03-30 00:00:00.000
lst 2024-03-30 00:00:00.000
virgo 2024-03-30 00:00:00.000
lho 2024-03-30 00:00:00.000
llo 2024-03-30 00:00:00.000
geo600 2024-03-30 00:00:00.000
kagra 2024-03-30 00:00:00.000
algonquin 2024-03-30 00:00:00.000
drao 2024-03-30 00:00:00.000
acre 2024-03-30 00:00:00.000
ata 2024-03-30 00:00:00.000
ccera 2024-03-30 00:00:00.000
axis 2024-03-30 00:00:00.000
narrabri 2024-03-30 00:00:00.000
nanshan 2024-03-30 00:00:00.000
uao 2024-03-30 00:00:00.000
dss_43 2024-03-30 00:00:00.000
op 2024-03-30 00:00:00.000
effelsberg_asterix 2021-03-21 12:00:00.000
effix2gps.clk 2021-03-21 12:00:00.000
leap 2014-03-04 00:00:00.000
leap2effix.clk 2014-03-04 00:00:00.000
effix2gps.clk 2021-03-20 12:00:00.000
jodrellm4 2024-03-30 00:00:00.000
gb300 2024-03-30 00:00:00.000
gb140 1999-07-31 12:00:00.000
time_gb140.dat 1999-07-31 12:00:00.000
gb853 1997-08-28 09:50:24.000
time_gb853.dat 1997-08-28 09:50:24.000
la_palma 2024-03-30 00:00:00.000
hartebeesthoek 2024-03-30 00:00:00.000
warkworth_30m 2024-03-30 00:00:00.000
warkworth_12m 2024-03-30 00:00:00.000
lofar 2024-03-30 00:00:00.000
de601lba 2024-03-30 00:00:00.000
de601lbh 2024-03-30 00:00:00.000
de601hba 2024-03-30 00:00:00.000
de601 2024-03-30 00:00:00.000
de602lba 2024-03-30 00:00:00.000
de602lbh 2024-03-30 00:00:00.000
de602hba 2024-03-30 00:00:00.000
de602 2024-03-30 00:00:00.000
de603lba 2024-03-30 00:00:00.000
de603lbh 2024-03-30 00:00:00.000
de603hba 2024-03-30 00:00:00.000
de603 2024-03-30 00:00:00.000
de604lba 2024-03-30 00:00:00.000
de604lbh 2024-03-30 00:00:00.000
de604hba 2024-03-30 00:00:00.000
de604 2024-03-30 00:00:00.000
de605lba 2024-03-30 00:00:00.000
de605lbh 2024-03-30 00:00:00.000
de605hba 2024-03-30 00:00:00.000
de605 2024-03-30 00:00:00.000
fr606lba 2024-03-30 00:00:00.000
fr606lbh 2024-03-30 00:00:00.000
fr606hba 2024-03-30 00:00:00.000
fr606 2024-03-30 00:00:00.000
se607lba 2024-03-30 00:00:00.000
se607lbh 2024-03-30 00:00:00.000
se607hba 2024-03-30 00:00:00.000
se607 2024-03-30 00:00:00.000
uk608lba 2024-03-30 00:00:00.000
uk608lbh 2024-03-30 00:00:00.000
uk608hba 2024-03-30 00:00:00.000
uk608 2024-03-30 00:00:00.000
de609lba 2024-03-30 00:00:00.000
de609lbh 2024-03-30 00:00:00.000
de609hba 2024-03-30 00:00:00.000
de609 2024-03-30 00:00:00.000
fi609lba 2024-03-30 00:00:00.000
fi609lbh 2024-03-30 00:00:00.000
fi609hba 2024-03-30 00:00:00.000
fi609 2024-03-30 00:00:00.000
utr-2 2024-03-30 00:00:00.000
goldstone 2024-03-30 00:00:00.000
shao 2024-03-30 00:00:00.000
pico_veleta 2024-03-30 00:00:00.000
iar1 2024-03-30 00:00:00.000
iar2 2024-03-30 00:00:00.000
kat-7 2024-03-30 00:00:00.000
mkiii 2024-03-30 00:00:00.000
tabley 2024-03-30 00:00:00.000
darnhall 2024-03-30 00:00:00.000
knockin 2024-03-30 00:00:00.000
defford 2024-03-30 00:00:00.000
cambridge 2024-03-30 00:00:00.000
princeton 2024-03-30 00:00:00.000
hamburg 2024-03-30 00:00:00.000
jb_42ft 2024-03-30 00:00:00.000
jb_mkii 2024-03-30 00:00:00.000
jb_mkii_rch 2024-03-30 00:00:00.000
jb_mkii_dfb 2024-03-30 00:00:00.000
lwa_sv 2024-03-30 00:00:00.000
grao 2024-03-30 00:00:00.000
srt 2024-03-30 00:00:00.000
quabbin 2024-03-30 00:00:00.000
vla_site 2024-03-30 00:00:00.000
gb_20m_xyz 2024-03-30 00:00:00.000
northern_cross 2024-03-30 00:00:00.000
hess 2024-03-30 00:00:00.000
hawc 2024-03-30 00:00:00.000
barycenter MISSING
geocenter 2024-03-30 00:00:00.000
stl_geo 2024-03-30 00:00:00.000
Let’s export the clock corrections as they currently stand so we can save these exact versions for reproducibility purposes.
[3]:
d = tempfile.mkdtemp()
pint.observatory.topo_obs.export_all_clock_files(d)
for f in sorted(glob(f"{d}/*")):
print(f)
/tmp/tmp5es5zqpz/eff2gps.clk
/tmp/tmp5es5zqpz/effix2gps.clk
/tmp/tmp5es5zqpz/gps2utc.clk
/tmp/tmp5es5zqpz/jb2gps.clk
/tmp/tmp5es5zqpz/jbdfb2jb.clk
/tmp/tmp5es5zqpz/jbroach2jb.clk
/tmp/tmp5es5zqpz/leap2effix.clk
/tmp/tmp5es5zqpz/mk2utc_observatory.clk
/tmp/tmp5es5zqpz/mo2gps.clk
/tmp/tmp5es5zqpz/ncyobs2obspm.clk
/tmp/tmp5es5zqpz/obspm2gps.clk
/tmp/tmp5es5zqpz/pks2gps.clk
/tmp/tmp5es5zqpz/tai2tt_bipm2021.clk
/tmp/tmp5es5zqpz/time_ao.dat
/tmp/tmp5es5zqpz/time_fast.dat
/tmp/tmp5es5zqpz/time_gb140.dat
/tmp/tmp5es5zqpz/time_gb853.dat
/tmp/tmp5es5zqpz/time_gbt.dat
/tmp/tmp5es5zqpz/time_vla.dat
/tmp/tmp5es5zqpz/wsrt2gps.clk
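The exported files are plain-text clock files, so this directory can be archived alongside an analysis to make it reproducible. As a quick check of what was written, you can inspect one of the files directly; this is only a minimal sketch using the standard library (time_gbt.dat is one of the files listed above):
# Peek at the first few lines of one exported clock file
with open(f"{d}/time_gbt.dat") as fh:
    for _ in range(3):
        print(fh.readline().rstrip())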
This Jupyter notebook can be downloaded from understanding_timing_models.ipynb, or viewed as a python script at understanding_timing_models.py.
Understanding Timing Models
Build a timing model starting from a par file
[1]:
from pint.models import get_model
from pint.models.timing_model import Component
import pint.config
import pint.logging
# setup logging
pint.logging.setup(level="INFO")
[1]:
1
One can build a timing model via the get_model() function. This will read the par file and instantiate all the delay and phase components, using the default ordering.
[2]:
par = "B1855+09_NANOGrav_dfg+12_TAI.par"
m = get_model(pint.config.examplefile(par))
Each of the parameters in the model can be accessed as an attribute of the TimingModel object. Behind the scenes, PINT figures out which component the parameter is stored in.
Each parameter has attributes such as its quantity (which includes units) and a description (see the Understanding Parameters notebook for more detail).
[3]:
print(m.F0.quantity)
print(m.F0.description)
186.49408156698235 Hz
Spin-frequency
We can now explore the structure of the model
[4]:
# This gives a list of the component types present in this model (here, only delay and phase components)
m.component_types
[4]:
['DelayComponent', 'PhaseComponent']
[5]:
dir(m)
[5]:
['BINARY',
'CHI2',
'CHI2R',
'CLOCK',
'DILATEFREQ',
'DMDATA',
'DMRES',
'DelayComponent_list',
'EPHEM',
'FINISH',
'INFO',
'NTOA',
'NoiseComponent_list',
'PSR',
'PhaseComponent_list',
'RM',
'START',
'T2CMETHOD',
'TIMEEPH',
'TRACK',
'TRES',
'UNITS',
'__class__',
'__contains__',
'__delattr__',
'__dict__',
'__dir__',
'__doc__',
'__eq__',
'__format__',
'__ge__',
'__getattr__',
'__getattribute__',
'__getitem__',
'__getstate__',
'__gt__',
'__hash__',
'__init__',
'__init_subclass__',
'__iter__',
'__le__',
'__len__',
'__lt__',
'__module__',
'__ne__',
'__new__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__setattr__',
'__setitem__',
'__sizeof__',
'__str__',
'__subclasshook__',
'__weakref__',
'_locate_param_host',
'add_component',
'add_param_from_top',
'add_tzr_toa',
'as_ECL',
'as_ICRS',
'as_parfile',
'basis_funcs',
'companion_radial_velocity',
'compare',
'component_types',
'components',
'conjunction',
'covariance_matrix_funcs',
'd_delay_d_param',
'd_delay_d_param_num',
'd_dm_d_param',
'd_phase_d_delay_funcs',
'd_phase_d_param',
'd_phase_d_param_num',
'd_phase_d_toa',
'd_phase_d_tpulsar',
'd_toasigma_d_param',
'delay',
'delay_deriv_funcs',
'delay_funcs',
'delete_jump_and_flags',
'designmatrix',
'dm_covariance_matrix',
'dm_covariance_matrix_funcs',
'dm_derivs',
'dm_funcs',
'find_empty_masks',
'fittable_params',
'free_params',
'get_barycentric_toas',
'get_component_type',
'get_components_by_category',
'get_deriv_funcs',
'get_derived_params',
'get_params_dict',
'get_params_mapping',
'get_params_of_component_type',
'get_params_of_type_top',
'get_prefix_list',
'get_prefix_mapping',
'has_correlated_errors',
'has_time_correlated_errors',
'is_binary',
'items',
'jump_flags_to_params',
'keys',
'map_component',
'match_param_aliases',
'name',
'noise_model_basis_weight',
'noise_model_designmatrix',
'noise_model_dimensions',
'orbital_phase',
'param_help',
'params',
'params_ordered',
'phase',
'phase_deriv_funcs',
'phase_funcs',
'pulsar_radial_velocity',
'remove_component',
'remove_param',
'scaled_dm_uncertainty',
'scaled_dm_uncertainty_funcs',
'scaled_toa_uncertainty',
'scaled_toa_uncertainty_funcs',
'search_cmp_attr',
'set_param_uncertainties',
'set_param_values',
'setup',
'toa_covariance_matrix',
'toasigma_derivs',
'top_level_params',
'total_dispersion_slope',
'total_dm',
'use_aliases',
'validate',
'validate_component_types',
'validate_toas',
'write_parfile']
The TimingModel class stores lists of the delay components and phase components that make up the model
[6]:
# When this list gets printed, it shows the parameters that are associated with each component as well.
m.DelayComponent_list
[6]:
[AstrometryEquatorial(
MJDParameter( POSEPOCH 49453.0000000000000000 (d) frozen=True),
floatParameter( PX 1.2288569063263406 (mas) +/- 0.21243361289239687 mas frozen=False),
AngleParameter( RAJ 18:57:36.39328840 (hourangle) +/- 0h00m00.00002603s frozen=False),
AngleParameter( DECJ 9:43:17.29196000 (deg) +/- 0d00m00.00078789s frozen=False),
floatParameter( PMRA -2.5054345161030382 (mas / yr) +/- 0.031049582610533172 mas / yr frozen=False),
floatParameter( PMDEC -5.497455863199382 (mas / yr) +/- 0.06348008663748286 mas / yr frozen=False)),
TroposphereDelay(
boolParameter( CORRECT_TROPOSPHERE N frozen=True)),
SolarSystemShapiro(
boolParameter( PLANET_SHAPIRO N frozen=True)),
SolarWindDispersion(
floatParameter( NE_SW 0.0 (1 / cm3) frozen=True),
floatParameter( SWP 2.0 () frozen=True),
floatParameter( SWM 0.0 () frozen=True)),
DispersionDM(
floatParameter( DM 13.29709 (pc / cm3) frozen=True),
floatParameter( DM1 UNSET,
MJDParameter( DMEPOCH 49453.0000000000000000 (d) frozen=True)),
DispersionDMX(
floatParameter( DMX 0.0 (pc / cm3) frozen=True),
floatParameter( DMX_0001 0.0 (pc / cm3) frozen=False),
MJDParameter( DMXR1_0001 53358.7273000000000002 (d) frozen=True),
MJDParameter( DMXR2_0001 53358.7733000000000000 (d) frozen=True),
floatParameter( DMX_0002 0.00011110286020705287 (pc / cm3) +/- 4.673570459450457e-05 pc / cm3 frozen=False),
floatParameter( DMX_0003 -4.4655555822498926e-05 (pc / cm3) +/- 4.202379265494554e-05 pc / cm3 frozen=False),
floatParameter( DMX_0004 -3.172366242454921e-05 (pc / cm3) +/- 4.222071152827488e-05 pc / cm3 frozen=False),
floatParameter( DMX_0005 -2.6615937544541525e-05 (pc / cm3) +/- 3.676165288001985e-05 pc / cm3 frozen=False),
floatParameter( DMX_0006 7.145445617437231e-05 (pc / cm3) +/- 4.374167394142721e-05 pc / cm3 frozen=False),
floatParameter( DMX_0007 7.743670274839663e-06 (pc / cm3) +/- 5.13199543963211e-05 pc / cm3 frozen=False),
floatParameter( DMX_0008 6.628314576141847e-05 (pc / cm3) +/- 3.755691800563188e-05 pc / cm3 frozen=False),
floatParameter( DMX_0009 9.960800222441839e-05 (pc / cm3) +/- 3.619457239625261e-05 pc / cm3 frozen=False),
floatParameter( DMX_0010 0.00021384397943417332 (pc / cm3) +/- 3.970891234128656e-05 pc / cm3 frozen=False),
floatParameter( DMX_0011 0.0001654914916498753 (pc / cm3) +/- 4.615618107309406e-05 pc / cm3 frozen=False),
floatParameter( DMX_0012 0.00025014821251322404 (pc / cm3) +/- 3.617384078038137e-05 pc / cm3 frozen=False),
floatParameter( DMX_0013 0.00032544283445758236 (pc / cm3) +/- 3.525448646108801e-05 pc / cm3 frozen=False),
floatParameter( DMX_0014 0.0007020949387551039 (pc / cm3) +/- 3.567637835272617e-05 pc / cm3 frozen=False),
floatParameter( DMX_0015 0.0008302906182772181 (pc / cm3) +/- 3.306863035583663e-05 pc / cm3 frozen=False),
floatParameter( DMX_0016 0.0009456692079715063 (pc / cm3) +/- 4.382957805166126e-05 pc / cm3 frozen=False),
floatParameter( DMX_0017 0.001018831160816332 (pc / cm3) +/- 3.607409255560491e-05 pc / cm3 frozen=False),
floatParameter( DMX_0018 0.0010891165860712315 (pc / cm3) +/- 4.813789826343072e-05 pc / cm3 frozen=False),
floatParameter( DMX_0019 0.0010386415586231196 (pc / cm3) +/- 4.612675176906664e-05 pc / cm3 frozen=False),
floatParameter( DMX_0020 0.0013195672245089195 (pc / cm3) +/- 4.616167182274142e-05 pc / cm3 frozen=False),
floatParameter( DMX_0021 0.0012154222610258824 (pc / cm3) +/- 4.294821352263614e-05 pc / cm3 frozen=False),
floatParameter( DMX_0022 0.0013377609996199928 (pc / cm3) +/- 4.221028001939009e-05 pc / cm3 frozen=False),
floatParameter( DMX_0023 0.0016163009324016205 (pc / cm3) +/- 5.156225843276498e-05 pc / cm3 frozen=False),
floatParameter( DMX_0024 0.0016669838878444674 (pc / cm3) +/- 6.395311039024736e-05 pc / cm3 frozen=False),
floatParameter( DMX_0025 0.0004568363556910632 (pc / cm3) +/- 4.912260448808334e-05 pc / cm3 frozen=False),
floatParameter( DMX_0026 0.0005277178068362735 (pc / cm3) +/- 4.15270457934297e-05 pc / cm3 frozen=False),
floatParameter( DMX_0027 0.0007289486445487147 (pc / cm3) +/- 4.234732022848593e-05 pc / cm3 frozen=False),
floatParameter( DMX_0028 0.001142078552606484 (pc / cm3) +/- 5.990586419386911e-05 pc / cm3 frozen=False),
floatParameter( DMX_0029 0.001105019478847692 (pc / cm3) +/- 4.148068669700297e-05 pc / cm3 frozen=False),
floatParameter( DMX_0030 0.0016917871040745575 (pc / cm3) +/- 6.46466473284041e-05 pc / cm3 frozen=False),
MJDParameter( DMXR1_0002 53420.5539999999999999 (d) frozen=True),
MJDParameter( DMXR1_0003 53448.4787000000000000 (d) frozen=True),
MJDParameter( DMXR1_0004 53477.4013000000000000 (d) frozen=True),
MJDParameter( DMXR1_0005 53532.2326000000000000 (d) frozen=True),
MJDParameter( DMXR1_0006 53603.0361000000000000 (d) frozen=True),
MJDParameter( DMXR1_0007 53628.9651000000000001 (d) frozen=True),
MJDParameter( DMXR1_0008 53686.7974000000000000 (d) frozen=True),
MJDParameter( DMXR1_0009 53715.7357000000000001 (d) frozen=True),
MJDParameter( DMXR1_0010 53750.6225000000000001 (d) frozen=True),
MJDParameter( DMXR1_0011 53798.5061000000000000 (d) frozen=True),
MJDParameter( DMXR1_0012 53851.3716000000000000 (d) frozen=True),
MJDParameter( DMXR1_0013 53891.2523000000000000 (d) frozen=True),
MJDParameter( DMXR1_0014 54043.8419000000000000 (d) frozen=True),
MJDParameter( DMXR1_0015 54092.7177000000000000 (d) frozen=True),
MJDParameter( DMXR1_0016 54135.5826000000000000 (d) frozen=True),
MJDParameter( DMXR1_0017 54519.5292000000000000 (d) frozen=True),
MJDParameter( DMXR1_0018 54569.4146000000000000 (d) frozen=True),
MJDParameter( DMXR1_0019 54678.1000000000000000 (d) frozen=True),
MJDParameter( DMXR1_0020 54819.7198000000000001 (d) frozen=True),
MJDParameter( DMXR1_0021 54862.5883999999999999 (d) frozen=True),
MJDParameter( DMXR1_0022 54925.4324000000000001 (d) frozen=True),
MJDParameter( DMXR1_0023 54981.2806000000000000 (d) frozen=True),
MJDParameter( DMXR1_0024 54998.2092000000000000 (d) frozen=True),
MJDParameter( DMXR1_0025 53926.1631000000000000 (d) frozen=True),
MJDParameter( DMXR1_0026 53968.0623000000000000 (d) frozen=True),
MJDParameter( DMXR1_0027 54008.9535000000000000 (d) frozen=True),
MJDParameter( DMXR1_0028 54177.4780000000000000 (d) frozen=True),
MJDParameter( DMXR1_0029 54472.6626000000000001 (d) frozen=True),
MJDParameter( DMXR1_0030 55108.9038000000000000 (d) frozen=True),
MJDParameter( DMXR2_0002 53420.5810999999999999 (d) frozen=True),
MJDParameter( DMXR2_0003 53448.5144999999999998 (d) frozen=True),
MJDParameter( DMXR2_0004 53477.4295000000000000 (d) frozen=True),
MJDParameter( DMXR2_0005 53532.2715000000000000 (d) frozen=True),
MJDParameter( DMXR2_0006 53603.0846000000000000 (d) frozen=True),
MJDParameter( DMXR2_0007 53628.9835000000000000 (d) frozen=True),
MJDParameter( DMXR2_0008 53686.8370000000000000 (d) frozen=True),
MJDParameter( DMXR2_0009 53715.7616999999999999 (d) frozen=True),
MJDParameter( DMXR2_0010 53750.6622000000000000 (d) frozen=True),
MJDParameter( DMXR2_0011 53798.5341000000000000 (d) frozen=True),
MJDParameter( DMXR2_0012 53851.3992000000000001 (d) frozen=True),
MJDParameter( DMXR2_0013 53891.2801000000000000 (d) frozen=True),
MJDParameter( DMXR2_0014 54043.8674000000000001 (d) frozen=True),
MJDParameter( DMXR2_0015 54092.7426000000000000 (d) frozen=True),
MJDParameter( DMXR2_0016 54135.6066000000000001 (d) frozen=True),
MJDParameter( DMXR2_0017 54519.5561999999999999 (d) frozen=True),
MJDParameter( DMXR2_0018 54569.4669000000000000 (d) frozen=True),
MJDParameter( DMXR2_0019 54678.1283000000000000 (d) frozen=True),
MJDParameter( DMXR2_0020 54819.7462000000000001 (d) frozen=True),
MJDParameter( DMXR2_0021 54862.6144999999999999 (d) frozen=True),
MJDParameter( DMXR2_0022 54925.4613000000000000 (d) frozen=True),
MJDParameter( DMXR2_0023 54981.3252000000000000 (d) frozen=True),
MJDParameter( DMXR2_0024 54998.2268000000000000 (d) frozen=True),
MJDParameter( DMXR2_0025 53926.1924000000000000 (d) frozen=True),
MJDParameter( DMXR2_0026 53968.0901000000000000 (d) frozen=True),
MJDParameter( DMXR2_0027 54008.9866000000000000 (d) frozen=True),
MJDParameter( DMXR2_0028 54177.5085999999999999 (d) frozen=True),
MJDParameter( DMXR2_0029 54472.6889000000000000 (d) frozen=True),
MJDParameter( DMXR2_0030 55108.9221000000000000 (d) frozen=True)),
BinaryDD(
floatParameter( PB 12.327171194774200418 (d) +/- 7.9493185824e-10 d frozen=False),
floatParameter( PBDOT 0.0 () frozen=True),
floatParameter( A1 9.2307804312998 (ls) +/- 3.6890718667634e-07 ls frozen=False),
floatParameter( A1DOT 0.0 (ls / s) frozen=True),
floatParameter( ECC 2.174526566823692e-05 () +/- 4.027191312623e-08 frozen=False),
floatParameter( EDOT 0.0 (1 / s) frozen=True),
MJDParameter( T0 49452.9406950773356469 (d) +/- 0.0016903183053283725 d frozen=False),
floatParameter( OM 276.55142180589701234 (deg) +/- 0.04936551005019606 deg frozen=False),
floatParameter( OMDOT 0.0 (deg / yr) frozen=True),
floatParameter( M2 0.2611131248072343 (solMass) +/- 0.02616161008932908 solMass frozen=False),
floatParameter( SINI 0.9974171733520092 () +/- 0.0018202351513085199 frozen=False),
floatParameter( FB0 UNSET,
floatParameter( A0 0.0 (s) frozen=True),
floatParameter( B0 0.0 (s) frozen=True),
floatParameter( GAMMA 0.0 (s) frozen=True),
floatParameter( DR 0.0 () frozen=True),
floatParameter( DTH 0.0 () frozen=True))]
[7]:
# Now let's look at the phase components. These include the absolute phase, the spindown model, and phase jumps
m.PhaseComponent_list
[7]:
[AbsPhase(
MJDParameter( TZRMJD 54177.5083593432625578 (d) frozen=True),
strParameter( TZRSITE ao frozen=True),
floatParameter( TZRFRQ 424.0 (MHz) frozen=True)),
Spindown(
floatParameter( F0 186.49408156698235146 (Hz) +/- 6.98911818e-12 Hz frozen=False),
MJDParameter( PEPOCH 49453.0000000000000000 (d) frozen=True),
floatParameter( F1 -6.2049547277487420583e-16 (Hz / s) +/- 1.73809343735734e-20 Hz / s frozen=False)),
PhaseJump(
maskParameter(JUMP1 -chanid asp_424 7.6456527699426e-07 +/- 0.0 s (s)),
maskParameter(JUMP2 -chanid asp_428 1.6049580247793e-06 +/- 0.0 s (s)),
maskParameter(JUMP3 -chanid asp_432 2.677273589972e-06 +/- 0.0 s (s)),
maskParameter(JUMP4 -chanid asp_436 3.0814628244857e-06 +/- 0.0 s (s)),
maskParameter(JUMP5 -chanid asp_440 4.136918843609e-06 +/- 0.0 s (s)),
maskParameter(JUMP6 -chanid asp_1382 -5.9069311360427e-05 +/- 0.0 s (s)),
maskParameter(JUMP7 -chanid asp_1386 -5.9582397512386e-05 +/- 0.0 s (s)),
maskParameter(JUMP8 -chanid asp_1390 -5.9756103518774e-05 +/- 0.0 s (s)),
maskParameter(JUMP9 -chanid asp_1394 -5.9347801609886e-05 +/- 0.0 s (s)),
maskParameter(JUMP10 -chanid asp_1398 -5.9009509604032e-05 +/- 0.0 s (s)),
maskParameter(JUMP11 -chanid asp_1402 -5.886527352183e-05 +/- 0.0 s (s)),
maskParameter(JUMP12 -chanid asp_1406 -5.9015609469307e-05 +/- 0.0 s (s)),
maskParameter(JUMP13 -chanid asp_1410 -5.8771703168481e-05 +/- 0.0 s (s)),
maskParameter(JUMP14 -chanid asp_1414 -5.8756443357489e-05 +/- 0.0 s (s)),
maskParameter(JUMP15 -chanid asp_1418 -5.8960922044628e-05 +/- 0.0 s (s)),
maskParameter(JUMP16 -chanid asp_1422 -5.8883992500888e-05 +/- 0.0 s (s)),
maskParameter(JUMP17 -chanid asp_1426 -5.8918166650158e-05 +/- 0.0 s (s)),
maskParameter(JUMP18 -chanid asp_1430 -5.8867635126778e-05 +/- 0.0 s (s)),
maskParameter(JUMP19 -chanid asp_1434 -5.9028297376503e-05 +/- 0.0 s (s)),
maskParameter(JUMP20 -chanid asp_1438 -5.9000827072833e-05 +/- 0.0 s (s)),
maskParameter(JUMP21 -chanid asp_1442 -5.8537946239722e-05 +/- 0.0 s (s)))]
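Besides the per-type lists, individual components can also be looked up by name through the components attribute, which maps component names to component instances. A quick sketch:
# m.components is a dict of component name -> component instance
for name in m.components:
    print(name)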
We can add a component to an existing model
[8]:
from pint.models.astrometry import AstrometryEcliptic
[9]:
a = AstrometryEcliptic() # init the AstrometryEcliptic instance
[10]:
# Add the component to the model
# It will be put in the default order
# We set validate=False since we have not set the parameter values yet, which would cause validate to fail
m.add_component(a, validate=False)
[11]:
m.DelayComponent_list # The new instance has been added to the delay component list
[11]:
[AstrometryEquatorial(
MJDParameter( POSEPOCH 49453.0000000000000000 (d) frozen=True),
floatParameter( PX 1.2288569063263406 (mas) +/- 0.21243361289239687 mas frozen=False),
AngleParameter( RAJ 18:57:36.39328840 (hourangle) +/- 0h00m00.00002603s frozen=False),
AngleParameter( DECJ 9:43:17.29196000 (deg) +/- 0d00m00.00078789s frozen=False),
floatParameter( PMRA -2.5054345161030382 (mas / yr) +/- 0.031049582610533172 mas / yr frozen=False),
floatParameter( PMDEC -5.497455863199382 (mas / yr) +/- 0.06348008663748286 mas / yr frozen=False)),
AstrometryEcliptic(
MJDParameter( POSEPOCH UNSET,
floatParameter( PX 0.0 (mas) frozen=True),
AngleParameter( ELONG UNSET,
AngleParameter( ELAT UNSET,
floatParameter( PMELONG 0.0 (mas / yr) frozen=True),
floatParameter( PMELAT 0.0 (mas / yr) frozen=True),
strParameter( ECL IERS2010 frozen=True)),
TroposphereDelay(
boolParameter( CORRECT_TROPOSPHERE N frozen=True)),
SolarSystemShapiro(
boolParameter( PLANET_SHAPIRO N frozen=True)),
SolarWindDispersion(
floatParameter( NE_SW 0.0 (1 / cm3) frozen=True),
floatParameter( SWP 2.0 () frozen=True),
floatParameter( SWM 0.0 () frozen=True)),
DispersionDM(
floatParameter( DM 13.29709 (pc / cm3) frozen=True),
floatParameter( DM1 UNSET,
MJDParameter( DMEPOCH 49453.0000000000000000 (d) frozen=True)),
DispersionDMX(
floatParameter( DMX 0.0 (pc / cm3) frozen=True),
floatParameter( DMX_0001 0.0 (pc / cm3) frozen=False),
MJDParameter( DMXR1_0001 53358.7273000000000002 (d) frozen=True),
MJDParameter( DMXR2_0001 53358.7733000000000000 (d) frozen=True),
floatParameter( DMX_0002 0.00011110286020705287 (pc / cm3) +/- 4.673570459450457e-05 pc / cm3 frozen=False),
floatParameter( DMX_0003 -4.4655555822498926e-05 (pc / cm3) +/- 4.202379265494554e-05 pc / cm3 frozen=False),
floatParameter( DMX_0004 -3.172366242454921e-05 (pc / cm3) +/- 4.222071152827488e-05 pc / cm3 frozen=False),
floatParameter( DMX_0005 -2.6615937544541525e-05 (pc / cm3) +/- 3.676165288001985e-05 pc / cm3 frozen=False),
floatParameter( DMX_0006 7.145445617437231e-05 (pc / cm3) +/- 4.374167394142721e-05 pc / cm3 frozen=False),
floatParameter( DMX_0007 7.743670274839663e-06 (pc / cm3) +/- 5.13199543963211e-05 pc / cm3 frozen=False),
floatParameter( DMX_0008 6.628314576141847e-05 (pc / cm3) +/- 3.755691800563188e-05 pc / cm3 frozen=False),
floatParameter( DMX_0009 9.960800222441839e-05 (pc / cm3) +/- 3.619457239625261e-05 pc / cm3 frozen=False),
floatParameter( DMX_0010 0.00021384397943417332 (pc / cm3) +/- 3.970891234128656e-05 pc / cm3 frozen=False),
floatParameter( DMX_0011 0.0001654914916498753 (pc / cm3) +/- 4.615618107309406e-05 pc / cm3 frozen=False),
floatParameter( DMX_0012 0.00025014821251322404 (pc / cm3) +/- 3.617384078038137e-05 pc / cm3 frozen=False),
floatParameter( DMX_0013 0.00032544283445758236 (pc / cm3) +/- 3.525448646108801e-05 pc / cm3 frozen=False),
floatParameter( DMX_0014 0.0007020949387551039 (pc / cm3) +/- 3.567637835272617e-05 pc / cm3 frozen=False),
floatParameter( DMX_0015 0.0008302906182772181 (pc / cm3) +/- 3.306863035583663e-05 pc / cm3 frozen=False),
floatParameter( DMX_0016 0.0009456692079715063 (pc / cm3) +/- 4.382957805166126e-05 pc / cm3 frozen=False),
floatParameter( DMX_0017 0.001018831160816332 (pc / cm3) +/- 3.607409255560491e-05 pc / cm3 frozen=False),
floatParameter( DMX_0018 0.0010891165860712315 (pc / cm3) +/- 4.813789826343072e-05 pc / cm3 frozen=False),
floatParameter( DMX_0019 0.0010386415586231196 (pc / cm3) +/- 4.612675176906664e-05 pc / cm3 frozen=False),
floatParameter( DMX_0020 0.0013195672245089195 (pc / cm3) +/- 4.616167182274142e-05 pc / cm3 frozen=False),
floatParameter( DMX_0021 0.0012154222610258824 (pc / cm3) +/- 4.294821352263614e-05 pc / cm3 frozen=False),
floatParameter( DMX_0022 0.0013377609996199928 (pc / cm3) +/- 4.221028001939009e-05 pc / cm3 frozen=False),
floatParameter( DMX_0023 0.0016163009324016205 (pc / cm3) +/- 5.156225843276498e-05 pc / cm3 frozen=False),
floatParameter( DMX_0024 0.0016669838878444674 (pc / cm3) +/- 6.395311039024736e-05 pc / cm3 frozen=False),
floatParameter( DMX_0025 0.0004568363556910632 (pc / cm3) +/- 4.912260448808334e-05 pc / cm3 frozen=False),
floatParameter( DMX_0026 0.0005277178068362735 (pc / cm3) +/- 4.15270457934297e-05 pc / cm3 frozen=False),
floatParameter( DMX_0027 0.0007289486445487147 (pc / cm3) +/- 4.234732022848593e-05 pc / cm3 frozen=False),
floatParameter( DMX_0028 0.001142078552606484 (pc / cm3) +/- 5.990586419386911e-05 pc / cm3 frozen=False),
floatParameter( DMX_0029 0.001105019478847692 (pc / cm3) +/- 4.148068669700297e-05 pc / cm3 frozen=False),
floatParameter( DMX_0030 0.0016917871040745575 (pc / cm3) +/- 6.46466473284041e-05 pc / cm3 frozen=False),
MJDParameter( DMXR1_0002 53420.5539999999999999 (d) frozen=True),
MJDParameter( DMXR1_0003 53448.4787000000000000 (d) frozen=True),
MJDParameter( DMXR1_0004 53477.4013000000000000 (d) frozen=True),
MJDParameter( DMXR1_0005 53532.2326000000000000 (d) frozen=True),
MJDParameter( DMXR1_0006 53603.0361000000000000 (d) frozen=True),
MJDParameter( DMXR1_0007 53628.9651000000000001 (d) frozen=True),
MJDParameter( DMXR1_0008 53686.7974000000000000 (d) frozen=True),
MJDParameter( DMXR1_0009 53715.7357000000000001 (d) frozen=True),
MJDParameter( DMXR1_0010 53750.6225000000000001 (d) frozen=True),
MJDParameter( DMXR1_0011 53798.5061000000000000 (d) frozen=True),
MJDParameter( DMXR1_0012 53851.3716000000000000 (d) frozen=True),
MJDParameter( DMXR1_0013 53891.2523000000000000 (d) frozen=True),
MJDParameter( DMXR1_0014 54043.8419000000000000 (d) frozen=True),
MJDParameter( DMXR1_0015 54092.7177000000000000 (d) frozen=True),
MJDParameter( DMXR1_0016 54135.5826000000000000 (d) frozen=True),
MJDParameter( DMXR1_0017 54519.5292000000000000 (d) frozen=True),
MJDParameter( DMXR1_0018 54569.4146000000000000 (d) frozen=True),
MJDParameter( DMXR1_0019 54678.1000000000000000 (d) frozen=True),
MJDParameter( DMXR1_0020 54819.7198000000000001 (d) frozen=True),
MJDParameter( DMXR1_0021 54862.5883999999999999 (d) frozen=True),
MJDParameter( DMXR1_0022 54925.4324000000000001 (d) frozen=True),
MJDParameter( DMXR1_0023 54981.2806000000000000 (d) frozen=True),
MJDParameter( DMXR1_0024 54998.2092000000000000 (d) frozen=True),
MJDParameter( DMXR1_0025 53926.1631000000000000 (d) frozen=True),
MJDParameter( DMXR1_0026 53968.0623000000000000 (d) frozen=True),
MJDParameter( DMXR1_0027 54008.9535000000000000 (d) frozen=True),
MJDParameter( DMXR1_0028 54177.4780000000000000 (d) frozen=True),
MJDParameter( DMXR1_0029 54472.6626000000000001 (d) frozen=True),
MJDParameter( DMXR1_0030 55108.9038000000000000 (d) frozen=True),
MJDParameter( DMXR2_0002 53420.5810999999999999 (d) frozen=True),
MJDParameter( DMXR2_0003 53448.5144999999999998 (d) frozen=True),
MJDParameter( DMXR2_0004 53477.4295000000000000 (d) frozen=True),
MJDParameter( DMXR2_0005 53532.2715000000000000 (d) frozen=True),
MJDParameter( DMXR2_0006 53603.0846000000000000 (d) frozen=True),
MJDParameter( DMXR2_0007 53628.9835000000000000 (d) frozen=True),
MJDParameter( DMXR2_0008 53686.8370000000000000 (d) frozen=True),
MJDParameter( DMXR2_0009 53715.7616999999999999 (d) frozen=True),
MJDParameter( DMXR2_0010 53750.6622000000000000 (d) frozen=True),
MJDParameter( DMXR2_0011 53798.5341000000000000 (d) frozen=True),
MJDParameter( DMXR2_0012 53851.3992000000000001 (d) frozen=True),
MJDParameter( DMXR2_0013 53891.2801000000000000 (d) frozen=True),
MJDParameter( DMXR2_0014 54043.8674000000000001 (d) frozen=True),
MJDParameter( DMXR2_0015 54092.7426000000000000 (d) frozen=True),
MJDParameter( DMXR2_0016 54135.6066000000000001 (d) frozen=True),
MJDParameter( DMXR2_0017 54519.5561999999999999 (d) frozen=True),
MJDParameter( DMXR2_0018 54569.4669000000000000 (d) frozen=True),
MJDParameter( DMXR2_0019 54678.1283000000000000 (d) frozen=True),
MJDParameter( DMXR2_0020 54819.7462000000000001 (d) frozen=True),
MJDParameter( DMXR2_0021 54862.6144999999999999 (d) frozen=True),
MJDParameter( DMXR2_0022 54925.4613000000000000 (d) frozen=True),
MJDParameter( DMXR2_0023 54981.3252000000000000 (d) frozen=True),
MJDParameter( DMXR2_0024 54998.2268000000000000 (d) frozen=True),
MJDParameter( DMXR2_0025 53926.1924000000000000 (d) frozen=True),
MJDParameter( DMXR2_0026 53968.0901000000000000 (d) frozen=True),
MJDParameter( DMXR2_0027 54008.9866000000000000 (d) frozen=True),
MJDParameter( DMXR2_0028 54177.5085999999999999 (d) frozen=True),
MJDParameter( DMXR2_0029 54472.6889000000000000 (d) frozen=True),
MJDParameter( DMXR2_0030 55108.9221000000000000 (d) frozen=True)),
BinaryDD(
floatParameter( PB 12.327171194774200418 (d) +/- 7.9493185824e-10 d frozen=False),
floatParameter( PBDOT 0.0 () frozen=True),
floatParameter( A1 9.2307804312998 (ls) +/- 3.6890718667634e-07 ls frozen=False),
floatParameter( A1DOT 0.0 (ls / s) frozen=True),
floatParameter( ECC 2.174526566823692e-05 () +/- 4.027191312623e-08 frozen=False),
floatParameter( EDOT 0.0 (1 / s) frozen=True),
MJDParameter( T0 49452.9406950773356469 (d) +/- 0.0016903183053283725 d frozen=False),
floatParameter( OM 276.55142180589701234 (deg) +/- 0.04936551005019606 deg frozen=False),
floatParameter( OMDOT 0.0 (deg / yr) frozen=True),
floatParameter( M2 0.2611131248072343 (solMass) +/- 0.02616161008932908 solMass frozen=False),
floatParameter( SINI 0.9974171733520092 () +/- 0.0018202351513085199 frozen=False),
floatParameter( FB0 UNSET,
floatParameter( A0 0.0 (s) frozen=True),
floatParameter( B0 0.0 (s) frozen=True),
floatParameter( GAMMA 0.0 (s) frozen=True),
floatParameter( DR 0.0 () frozen=True),
floatParameter( DTH 0.0 () frozen=True))]
There are two ways to remove a component from a model. The simplest is to use the remove_component method to remove it by name.
[12]:
# We will not do this here, since we'll demonstrate a different method below.
# m.remove_component("AstrometryEcliptic")
Alternatively, you can have more control using the map_component() method, which takes either a string with the component name or a Component instance, and returns a tuple containing the Component instance, its order in the relevant component list, the list of components of this type in the model, and the component type (as a string).
[13]:
component, order, from_list, comp_type = m.map_component("AstrometryEcliptic")
print("Component : ", component)
print("Type : ", comp_type)
print("Order : ", order)
print("List : ")
_ = [print(c) for c in from_list]
Component : AstrometryEcliptic(
MJDParameter( POSEPOCH UNSET,
floatParameter( PX 0.0 (mas) frozen=True),
AngleParameter( ELONG UNSET,
AngleParameter( ELAT UNSET,
floatParameter( PMELONG 0.0 (mas / yr) frozen=True),
floatParameter( PMELAT 0.0 (mas / yr) frozen=True),
strParameter( ECL IERS2010 frozen=True))
Type : DelayComponent
Order : 1
List :
AstrometryEquatorial(
MJDParameter( POSEPOCH 49453.0000000000000000 (d) frozen=True),
floatParameter( PX 1.2288569063263406 (mas) +/- 0.21243361289239687 mas frozen=False),
AngleParameter( RAJ 18:57:36.39328840 (hourangle) +/- 0h00m00.00002603s frozen=False),
AngleParameter( DECJ 9:43:17.29196000 (deg) +/- 0d00m00.00078789s frozen=False),
floatParameter( PMRA -2.5054345161030382 (mas / yr) +/- 0.031049582610533172 mas / yr frozen=False),
floatParameter( PMDEC -5.497455863199382 (mas / yr) +/- 0.06348008663748286 mas / yr frozen=False))
AstrometryEcliptic(
MJDParameter( POSEPOCH UNSET,
floatParameter( PX 0.0 (mas) frozen=True),
AngleParameter( ELONG UNSET,
AngleParameter( ELAT UNSET,
floatParameter( PMELONG 0.0 (mas / yr) frozen=True),
floatParameter( PMELAT 0.0 (mas / yr) frozen=True),
strParameter( ECL IERS2010 frozen=True))
TroposphereDelay(
boolParameter( CORRECT_TROPOSPHERE N frozen=True))
SolarSystemShapiro(
boolParameter( PLANET_SHAPIRO N frozen=True))
SolarWindDispersion(
floatParameter( NE_SW 0.0 (1 / cm3) frozen=True),
floatParameter( SWP 2.0 () frozen=True),
floatParameter( SWM 0.0 () frozen=True))
DispersionDM(
floatParameter( DM 13.29709 (pc / cm3) frozen=True),
floatParameter( DM1 UNSET,
MJDParameter( DMEPOCH 49453.0000000000000000 (d) frozen=True))
DispersionDMX(
floatParameter( DMX 0.0 (pc / cm3) frozen=True),
floatParameter( DMX_0001 0.0 (pc / cm3) frozen=False),
MJDParameter( DMXR1_0001 53358.7273000000000002 (d) frozen=True),
MJDParameter( DMXR2_0001 53358.7733000000000000 (d) frozen=True),
floatParameter( DMX_0002 0.00011110286020705287 (pc / cm3) +/- 4.673570459450457e-05 pc / cm3 frozen=False),
floatParameter( DMX_0003 -4.4655555822498926e-05 (pc / cm3) +/- 4.202379265494554e-05 pc / cm3 frozen=False),
floatParameter( DMX_0004 -3.172366242454921e-05 (pc / cm3) +/- 4.222071152827488e-05 pc / cm3 frozen=False),
floatParameter( DMX_0005 -2.6615937544541525e-05 (pc / cm3) +/- 3.676165288001985e-05 pc / cm3 frozen=False),
floatParameter( DMX_0006 7.145445617437231e-05 (pc / cm3) +/- 4.374167394142721e-05 pc / cm3 frozen=False),
floatParameter( DMX_0007 7.743670274839663e-06 (pc / cm3) +/- 5.13199543963211e-05 pc / cm3 frozen=False),
floatParameter( DMX_0008 6.628314576141847e-05 (pc / cm3) +/- 3.755691800563188e-05 pc / cm3 frozen=False),
floatParameter( DMX_0009 9.960800222441839e-05 (pc / cm3) +/- 3.619457239625261e-05 pc / cm3 frozen=False),
floatParameter( DMX_0010 0.00021384397943417332 (pc / cm3) +/- 3.970891234128656e-05 pc / cm3 frozen=False),
floatParameter( DMX_0011 0.0001654914916498753 (pc / cm3) +/- 4.615618107309406e-05 pc / cm3 frozen=False),
floatParameter( DMX_0012 0.00025014821251322404 (pc / cm3) +/- 3.617384078038137e-05 pc / cm3 frozen=False),
floatParameter( DMX_0013 0.00032544283445758236 (pc / cm3) +/- 3.525448646108801e-05 pc / cm3 frozen=False),
floatParameter( DMX_0014 0.0007020949387551039 (pc / cm3) +/- 3.567637835272617e-05 pc / cm3 frozen=False),
floatParameter( DMX_0015 0.0008302906182772181 (pc / cm3) +/- 3.306863035583663e-05 pc / cm3 frozen=False),
floatParameter( DMX_0016 0.0009456692079715063 (pc / cm3) +/- 4.382957805166126e-05 pc / cm3 frozen=False),
floatParameter( DMX_0017 0.001018831160816332 (pc / cm3) +/- 3.607409255560491e-05 pc / cm3 frozen=False),
floatParameter( DMX_0018 0.0010891165860712315 (pc / cm3) +/- 4.813789826343072e-05 pc / cm3 frozen=False),
floatParameter( DMX_0019 0.0010386415586231196 (pc / cm3) +/- 4.612675176906664e-05 pc / cm3 frozen=False),
floatParameter( DMX_0020 0.0013195672245089195 (pc / cm3) +/- 4.616167182274142e-05 pc / cm3 frozen=False),
floatParameter( DMX_0021 0.0012154222610258824 (pc / cm3) +/- 4.294821352263614e-05 pc / cm3 frozen=False),
floatParameter( DMX_0022 0.0013377609996199928 (pc / cm3) +/- 4.221028001939009e-05 pc / cm3 frozen=False),
floatParameter( DMX_0023 0.0016163009324016205 (pc / cm3) +/- 5.156225843276498e-05 pc / cm3 frozen=False),
floatParameter( DMX_0024 0.0016669838878444674 (pc / cm3) +/- 6.395311039024736e-05 pc / cm3 frozen=False),
floatParameter( DMX_0025 0.0004568363556910632 (pc / cm3) +/- 4.912260448808334e-05 pc / cm3 frozen=False),
floatParameter( DMX_0026 0.0005277178068362735 (pc / cm3) +/- 4.15270457934297e-05 pc / cm3 frozen=False),
floatParameter( DMX_0027 0.0007289486445487147 (pc / cm3) +/- 4.234732022848593e-05 pc / cm3 frozen=False),
floatParameter( DMX_0028 0.001142078552606484 (pc / cm3) +/- 5.990586419386911e-05 pc / cm3 frozen=False),
floatParameter( DMX_0029 0.001105019478847692 (pc / cm3) +/- 4.148068669700297e-05 pc / cm3 frozen=False),
floatParameter( DMX_0030 0.0016917871040745575 (pc / cm3) +/- 6.46466473284041e-05 pc / cm3 frozen=False),
MJDParameter( DMXR1_0002 53420.5539999999999999 (d) frozen=True),
MJDParameter( DMXR1_0003 53448.4787000000000000 (d) frozen=True),
MJDParameter( DMXR1_0004 53477.4013000000000000 (d) frozen=True),
MJDParameter( DMXR1_0005 53532.2326000000000000 (d) frozen=True),
MJDParameter( DMXR1_0006 53603.0361000000000000 (d) frozen=True),
MJDParameter( DMXR1_0007 53628.9651000000000001 (d) frozen=True),
MJDParameter( DMXR1_0008 53686.7974000000000000 (d) frozen=True),
MJDParameter( DMXR1_0009 53715.7357000000000001 (d) frozen=True),
MJDParameter( DMXR1_0010 53750.6225000000000001 (d) frozen=True),
MJDParameter( DMXR1_0011 53798.5061000000000000 (d) frozen=True),
MJDParameter( DMXR1_0012 53851.3716000000000000 (d) frozen=True),
MJDParameter( DMXR1_0013 53891.2523000000000000 (d) frozen=True),
MJDParameter( DMXR1_0014 54043.8419000000000000 (d) frozen=True),
MJDParameter( DMXR1_0015 54092.7177000000000000 (d) frozen=True),
MJDParameter( DMXR1_0016 54135.5826000000000000 (d) frozen=True),
MJDParameter( DMXR1_0017 54519.5292000000000000 (d) frozen=True),
MJDParameter( DMXR1_0018 54569.4146000000000000 (d) frozen=True),
MJDParameter( DMXR1_0019 54678.1000000000000000 (d) frozen=True),
MJDParameter( DMXR1_0020 54819.7198000000000001 (d) frozen=True),
MJDParameter( DMXR1_0021 54862.5883999999999999 (d) frozen=True),
MJDParameter( DMXR1_0022 54925.4324000000000001 (d) frozen=True),
MJDParameter( DMXR1_0023 54981.2806000000000000 (d) frozen=True),
MJDParameter( DMXR1_0024 54998.2092000000000000 (d) frozen=True),
MJDParameter( DMXR1_0025 53926.1631000000000000 (d) frozen=True),
MJDParameter( DMXR1_0026 53968.0623000000000000 (d) frozen=True),
MJDParameter( DMXR1_0027 54008.9535000000000000 (d) frozen=True),
MJDParameter( DMXR1_0028 54177.4780000000000000 (d) frozen=True),
MJDParameter( DMXR1_0029 54472.6626000000000001 (d) frozen=True),
MJDParameter( DMXR1_0030 55108.9038000000000000 (d) frozen=True),
MJDParameter( DMXR2_0002 53420.5810999999999999 (d) frozen=True),
MJDParameter( DMXR2_0003 53448.5144999999999998 (d) frozen=True),
MJDParameter( DMXR2_0004 53477.4295000000000000 (d) frozen=True),
MJDParameter( DMXR2_0005 53532.2715000000000000 (d) frozen=True),
MJDParameter( DMXR2_0006 53603.0846000000000000 (d) frozen=True),
MJDParameter( DMXR2_0007 53628.9835000000000000 (d) frozen=True),
MJDParameter( DMXR2_0008 53686.8370000000000000 (d) frozen=True),
MJDParameter( DMXR2_0009 53715.7616999999999999 (d) frozen=True),
MJDParameter( DMXR2_0010 53750.6622000000000000 (d) frozen=True),
MJDParameter( DMXR2_0011 53798.5341000000000000 (d) frozen=True),
MJDParameter( DMXR2_0012 53851.3992000000000001 (d) frozen=True),
MJDParameter( DMXR2_0013 53891.2801000000000000 (d) frozen=True),
MJDParameter( DMXR2_0014 54043.8674000000000001 (d) frozen=True),
MJDParameter( DMXR2_0015 54092.7426000000000000 (d) frozen=True),
MJDParameter( DMXR2_0016 54135.6066000000000001 (d) frozen=True),
MJDParameter( DMXR2_0017 54519.5561999999999999 (d) frozen=True),
MJDParameter( DMXR2_0018 54569.4669000000000000 (d) frozen=True),
MJDParameter( DMXR2_0019 54678.1283000000000000 (d) frozen=True),
MJDParameter( DMXR2_0020 54819.7462000000000001 (d) frozen=True),
MJDParameter( DMXR2_0021 54862.6144999999999999 (d) frozen=True),
MJDParameter( DMXR2_0022 54925.4613000000000000 (d) frozen=True),
MJDParameter( DMXR2_0023 54981.3252000000000000 (d) frozen=True),
MJDParameter( DMXR2_0024 54998.2268000000000000 (d) frozen=True),
MJDParameter( DMXR2_0025 53926.1924000000000000 (d) frozen=True),
MJDParameter( DMXR2_0026 53968.0901000000000000 (d) frozen=True),
MJDParameter( DMXR2_0027 54008.9866000000000000 (d) frozen=True),
MJDParameter( DMXR2_0028 54177.5085999999999999 (d) frozen=True),
MJDParameter( DMXR2_0029 54472.6889000000000000 (d) frozen=True),
MJDParameter( DMXR2_0030 55108.9221000000000000 (d) frozen=True))
BinaryDD(
floatParameter( PB 12.327171194774200418 (d) +/- 7.9493185824e-10 d frozen=False),
floatParameter( PBDOT 0.0 () frozen=True),
floatParameter( A1 9.2307804312998 (ls) +/- 3.6890718667634e-07 ls frozen=False),
floatParameter( A1DOT 0.0 (ls / s) frozen=True),
floatParameter( ECC 2.174526566823692e-05 () +/- 4.027191312623e-08 frozen=False),
floatParameter( EDOT 0.0 (1 / s) frozen=True),
MJDParameter( T0 49452.9406950773356469 (d) +/- 0.0016903183053283725 d frozen=False),
floatParameter( OM 276.55142180589701234 (deg) +/- 0.04936551005019606 deg frozen=False),
floatParameter( OMDOT 0.0 (deg / yr) frozen=True),
floatParameter( M2 0.2611131248072343 (solMass) +/- 0.02616161008932908 solMass frozen=False),
floatParameter( SINI 0.9974171733520092 () +/- 0.0018202351513085199 frozen=False),
floatParameter( FB0 UNSET,
floatParameter( A0 0.0 (s) frozen=True),
floatParameter( B0 0.0 (s) frozen=True),
floatParameter( GAMMA 0.0 (s) frozen=True),
floatParameter( DR 0.0 () frozen=True),
floatParameter( DTH 0.0 () frozen=True))
[14]:
# Now we can remove the component by directly manipulating the list
from_list.remove(component)
[15]:
m.DelayComponent_list # AstrometryEcliptic has been removed from the delay list.
[15]:
[AstrometryEquatorial(
MJDParameter( POSEPOCH 49453.0000000000000000 (d) frozen=True),
floatParameter( PX 1.2288569063263406 (mas) +/- 0.21243361289239687 mas frozen=False),
AngleParameter( RAJ 18:57:36.39328840 (hourangle) +/- 0h00m00.00002603s frozen=False),
AngleParameter( DECJ 9:43:17.29196000 (deg) +/- 0d00m00.00078789s frozen=False),
floatParameter( PMRA -2.5054345161030382 (mas / yr) +/- 0.031049582610533172 mas / yr frozen=False),
floatParameter( PMDEC -5.497455863199382 (mas / yr) +/- 0.06348008663748286 mas / yr frozen=False)),
TroposphereDelay(
boolParameter( CORRECT_TROPOSPHERE N frozen=True)),
SolarSystemShapiro(
boolParameter( PLANET_SHAPIRO N frozen=True)),
SolarWindDispersion(
floatParameter( NE_SW 0.0 (1 / cm3) frozen=True),
floatParameter( SWP 2.0 () frozen=True),
floatParameter( SWM 0.0 () frozen=True)),
DispersionDM(
floatParameter( DM 13.29709 (pc / cm3) frozen=True),
floatParameter( DM1 UNSET,
MJDParameter( DMEPOCH 49453.0000000000000000 (d) frozen=True)),
DispersionDMX(
floatParameter( DMX 0.0 (pc / cm3) frozen=True),
floatParameter( DMX_0001 0.0 (pc / cm3) frozen=False),
MJDParameter( DMXR1_0001 53358.7273000000000002 (d) frozen=True),
MJDParameter( DMXR2_0001 53358.7733000000000000 (d) frozen=True),
floatParameter( DMX_0002 0.00011110286020705287 (pc / cm3) +/- 4.673570459450457e-05 pc / cm3 frozen=False),
floatParameter( DMX_0003 -4.4655555822498926e-05 (pc / cm3) +/- 4.202379265494554e-05 pc / cm3 frozen=False),
floatParameter( DMX_0004 -3.172366242454921e-05 (pc / cm3) +/- 4.222071152827488e-05 pc / cm3 frozen=False),
floatParameter( DMX_0005 -2.6615937544541525e-05 (pc / cm3) +/- 3.676165288001985e-05 pc / cm3 frozen=False),
floatParameter( DMX_0006 7.145445617437231e-05 (pc / cm3) +/- 4.374167394142721e-05 pc / cm3 frozen=False),
floatParameter( DMX_0007 7.743670274839663e-06 (pc / cm3) +/- 5.13199543963211e-05 pc / cm3 frozen=False),
floatParameter( DMX_0008 6.628314576141847e-05 (pc / cm3) +/- 3.755691800563188e-05 pc / cm3 frozen=False),
floatParameter( DMX_0009 9.960800222441839e-05 (pc / cm3) +/- 3.619457239625261e-05 pc / cm3 frozen=False),
floatParameter( DMX_0010 0.00021384397943417332 (pc / cm3) +/- 3.970891234128656e-05 pc / cm3 frozen=False),
floatParameter( DMX_0011 0.0001654914916498753 (pc / cm3) +/- 4.615618107309406e-05 pc / cm3 frozen=False),
floatParameter( DMX_0012 0.00025014821251322404 (pc / cm3) +/- 3.617384078038137e-05 pc / cm3 frozen=False),
floatParameter( DMX_0013 0.00032544283445758236 (pc / cm3) +/- 3.525448646108801e-05 pc / cm3 frozen=False),
floatParameter( DMX_0014 0.0007020949387551039 (pc / cm3) +/- 3.567637835272617e-05 pc / cm3 frozen=False),
floatParameter( DMX_0015 0.0008302906182772181 (pc / cm3) +/- 3.306863035583663e-05 pc / cm3 frozen=False),
floatParameter( DMX_0016 0.0009456692079715063 (pc / cm3) +/- 4.382957805166126e-05 pc / cm3 frozen=False),
floatParameter( DMX_0017 0.001018831160816332 (pc / cm3) +/- 3.607409255560491e-05 pc / cm3 frozen=False),
floatParameter( DMX_0018 0.0010891165860712315 (pc / cm3) +/- 4.813789826343072e-05 pc / cm3 frozen=False),
floatParameter( DMX_0019 0.0010386415586231196 (pc / cm3) +/- 4.612675176906664e-05 pc / cm3 frozen=False),
floatParameter( DMX_0020 0.0013195672245089195 (pc / cm3) +/- 4.616167182274142e-05 pc / cm3 frozen=False),
floatParameter( DMX_0021 0.0012154222610258824 (pc / cm3) +/- 4.294821352263614e-05 pc / cm3 frozen=False),
floatParameter( DMX_0022 0.0013377609996199928 (pc / cm3) +/- 4.221028001939009e-05 pc / cm3 frozen=False),
floatParameter( DMX_0023 0.0016163009324016205 (pc / cm3) +/- 5.156225843276498e-05 pc / cm3 frozen=False),
floatParameter( DMX_0024 0.0016669838878444674 (pc / cm3) +/- 6.395311039024736e-05 pc / cm3 frozen=False),
floatParameter( DMX_0025 0.0004568363556910632 (pc / cm3) +/- 4.912260448808334e-05 pc / cm3 frozen=False),
floatParameter( DMX_0026 0.0005277178068362735 (pc / cm3) +/- 4.15270457934297e-05 pc / cm3 frozen=False),
floatParameter( DMX_0027 0.0007289486445487147 (pc / cm3) +/- 4.234732022848593e-05 pc / cm3 frozen=False),
floatParameter( DMX_0028 0.001142078552606484 (pc / cm3) +/- 5.990586419386911e-05 pc / cm3 frozen=False),
floatParameter( DMX_0029 0.001105019478847692 (pc / cm3) +/- 4.148068669700297e-05 pc / cm3 frozen=False),
floatParameter( DMX_0030 0.0016917871040745575 (pc / cm3) +/- 6.46466473284041e-05 pc / cm3 frozen=False),
MJDParameter( DMXR1_0002 53420.5539999999999999 (d) frozen=True),
MJDParameter( DMXR1_0003 53448.4787000000000000 (d) frozen=True),
MJDParameter( DMXR1_0004 53477.4013000000000000 (d) frozen=True),
MJDParameter( DMXR1_0005 53532.2326000000000000 (d) frozen=True),
MJDParameter( DMXR1_0006 53603.0361000000000000 (d) frozen=True),
MJDParameter( DMXR1_0007 53628.9651000000000001 (d) frozen=True),
MJDParameter( DMXR1_0008 53686.7974000000000000 (d) frozen=True),
MJDParameter( DMXR1_0009 53715.7357000000000001 (d) frozen=True),
MJDParameter( DMXR1_0010 53750.6225000000000001 (d) frozen=True),
MJDParameter( DMXR1_0011 53798.5061000000000000 (d) frozen=True),
MJDParameter( DMXR1_0012 53851.3716000000000000 (d) frozen=True),
MJDParameter( DMXR1_0013 53891.2523000000000000 (d) frozen=True),
MJDParameter( DMXR1_0014 54043.8419000000000000 (d) frozen=True),
MJDParameter( DMXR1_0015 54092.7177000000000000 (d) frozen=True),
MJDParameter( DMXR1_0016 54135.5826000000000000 (d) frozen=True),
MJDParameter( DMXR1_0017 54519.5292000000000000 (d) frozen=True),
MJDParameter( DMXR1_0018 54569.4146000000000000 (d) frozen=True),
MJDParameter( DMXR1_0019 54678.1000000000000000 (d) frozen=True),
MJDParameter( DMXR1_0020 54819.7198000000000001 (d) frozen=True),
MJDParameter( DMXR1_0021 54862.5883999999999999 (d) frozen=True),
MJDParameter( DMXR1_0022 54925.4324000000000001 (d) frozen=True),
MJDParameter( DMXR1_0023 54981.2806000000000000 (d) frozen=True),
MJDParameter( DMXR1_0024 54998.2092000000000000 (d) frozen=True),
MJDParameter( DMXR1_0025 53926.1631000000000000 (d) frozen=True),
MJDParameter( DMXR1_0026 53968.0623000000000000 (d) frozen=True),
MJDParameter( DMXR1_0027 54008.9535000000000000 (d) frozen=True),
MJDParameter( DMXR1_0028 54177.4780000000000000 (d) frozen=True),
MJDParameter( DMXR1_0029 54472.6626000000000001 (d) frozen=True),
MJDParameter( DMXR1_0030 55108.9038000000000000 (d) frozen=True),
MJDParameter( DMXR2_0002 53420.5810999999999999 (d) frozen=True),
MJDParameter( DMXR2_0003 53448.5144999999999998 (d) frozen=True),
MJDParameter( DMXR2_0004 53477.4295000000000000 (d) frozen=True),
MJDParameter( DMXR2_0005 53532.2715000000000000 (d) frozen=True),
MJDParameter( DMXR2_0006 53603.0846000000000000 (d) frozen=True),
MJDParameter( DMXR2_0007 53628.9835000000000000 (d) frozen=True),
MJDParameter( DMXR2_0008 53686.8370000000000000 (d) frozen=True),
MJDParameter( DMXR2_0009 53715.7616999999999999 (d) frozen=True),
MJDParameter( DMXR2_0010 53750.6622000000000000 (d) frozen=True),
MJDParameter( DMXR2_0011 53798.5341000000000000 (d) frozen=True),
MJDParameter( DMXR2_0012 53851.3992000000000001 (d) frozen=True),
MJDParameter( DMXR2_0013 53891.2801000000000000 (d) frozen=True),
MJDParameter( DMXR2_0014 54043.8674000000000001 (d) frozen=True),
MJDParameter( DMXR2_0015 54092.7426000000000000 (d) frozen=True),
MJDParameter( DMXR2_0016 54135.6066000000000001 (d) frozen=True),
MJDParameter( DMXR2_0017 54519.5561999999999999 (d) frozen=True),
MJDParameter( DMXR2_0018 54569.4669000000000000 (d) frozen=True),
MJDParameter( DMXR2_0019 54678.1283000000000000 (d) frozen=True),
MJDParameter( DMXR2_0020 54819.7462000000000001 (d) frozen=True),
MJDParameter( DMXR2_0021 54862.6144999999999999 (d) frozen=True),
MJDParameter( DMXR2_0022 54925.4613000000000000 (d) frozen=True),
MJDParameter( DMXR2_0023 54981.3252000000000000 (d) frozen=True),
MJDParameter( DMXR2_0024 54998.2268000000000000 (d) frozen=True),
MJDParameter( DMXR2_0025 53926.1924000000000000 (d) frozen=True),
MJDParameter( DMXR2_0026 53968.0901000000000000 (d) frozen=True),
MJDParameter( DMXR2_0027 54008.9866000000000000 (d) frozen=True),
MJDParameter( DMXR2_0028 54177.5085999999999999 (d) frozen=True),
MJDParameter( DMXR2_0029 54472.6889000000000000 (d) frozen=True),
MJDParameter( DMXR2_0030 55108.9221000000000000 (d) frozen=True)),
BinaryDD(
floatParameter( PB 12.327171194774200418 (d) +/- 7.9493185824e-10 d frozen=False),
floatParameter( PBDOT 0.0 () frozen=True),
floatParameter( A1 9.2307804312998 (ls) +/- 3.6890718667634e-07 ls frozen=False),
floatParameter( A1DOT 0.0 (ls / s) frozen=True),
floatParameter( ECC 2.174526566823692e-05 () +/- 4.027191312623e-08 frozen=False),
floatParameter( EDOT 0.0 (1 / s) frozen=True),
MJDParameter( T0 49452.9406950773356469 (d) +/- 0.0016903183053283725 d frozen=False),
floatParameter( OM 276.55142180589701234 (deg) +/- 0.04936551005019606 deg frozen=False),
floatParameter( OMDOT 0.0 (deg / yr) frozen=True),
floatParameter( M2 0.2611131248072343 (solMass) +/- 0.02616161008932908 solMass frozen=False),
floatParameter( SINI 0.9974171733520092 () +/- 0.0018202351513085199 frozen=False),
floatParameter( FB0 UNSET,
floatParameter( A0 0.0 (s) frozen=True),
floatParameter( B0 0.0 (s) frozen=True),
floatParameter( GAMMA 0.0 (s) frozen=True),
floatParameter( DR 0.0 () frozen=True),
floatParameter( DTH 0.0 () frozen=True))]
To switch the order of a component, just change the order of the component list.
NB: this should almost never be done! In most cases the default order of the delay components is correct. Experts only!
[16]:
# Let's look at the order of the components in the delay list first
_ = [print(dc.__class__) for dc in m.DelayComponent_list]
<class 'pint.models.astrometry.AstrometryEquatorial'>
<class 'pint.models.troposphere_delay.TroposphereDelay'>
<class 'pint.models.solar_system_shapiro.SolarSystemShapiro'>
<class 'pint.models.solar_wind_dispersion.SolarWindDispersion'>
<class 'pint.models.dispersion_model.DispersionDM'>
<class 'pint.models.dispersion_model.DispersionDMX'>
<class 'pint.models.binary_dd.BinaryDD'>
[17]:
# Now let's swap the order of DispersionDMX and SolarWindDispersion (the component currently at index 3)
component, order, from_list, comp_type = m.map_component("DispersionDMX")
new_order = 3
from_list[order], from_list[new_order] = from_list[new_order], from_list[order]
[18]:
# Print the classes to see the order switch
_ = [print(dc.__class__) for dc in m.DelayComponent_list]
<class 'pint.models.astrometry.AstrometryEquatorial'>
<class 'pint.models.troposphere_delay.TroposphereDelay'>
<class 'pint.models.solar_system_shapiro.SolarSystemShapiro'>
<class 'pint.models.dispersion_model.DispersionDMX'>
<class 'pint.models.dispersion_model.DispersionDM'>
<class 'pint.models.solar_wind_dispersion.SolarWindDispersion'>
<class 'pint.models.binary_dd.BinaryDD'>
Delays are always computed in the order of the DelayComponent_list
[19]:
# First get the toas
from pint.toa import get_TOAs
t = get_TOAs(pint.config.examplefile("B1855+09_NANOGrav_dfg+12.tim"), model=m)
[20]:
# compute the total delay
total_delay = m.delay(t)
total_delay
[20]:
One can get the delay up to some component. For example, you may want the delay computation to stop after the Solar System Shapiro delay.
By default the delay of the specified component is included. This can be changed with the keyword argument include_last=False.
[21]:
to_jump_delay = m.delay(t, cutoff_component="SolarSystemShapiro")
to_jump_delay
[21]:
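Because m.delay() returns an astropy Quantity array, subtracting the truncated delay from the total delay isolates the contribution of the components after the cutoff (here the dispersion, solar wind, and binary components). A minimal sketch (importing astropy.units, which is not otherwise used in this notebook):
import astropy.units as u

# Delay contributed by the components after SolarSystemShapiro
remaining = total_delay - to_jump_delay
print(remaining.to(u.ms)[:5])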
Here is a list of all the Component types that PINT knows about
[22]:
Component.component_types
[22]:
{'AbsPhase': pint.models.absolute_phase.AbsPhase,
'AstrometryEquatorial': pint.models.astrometry.AstrometryEquatorial,
'AstrometryEcliptic': pint.models.astrometry.AstrometryEcliptic,
'BinaryBT': pint.models.binary_bt.BinaryBT,
'BinaryBTPiecewise': pint.models.binary_bt.BinaryBTPiecewise,
'BinaryDD': pint.models.binary_dd.BinaryDD,
'BinaryDDS': pint.models.binary_dd.BinaryDDS,
'BinaryDDGR': pint.models.binary_dd.BinaryDDGR,
'BinaryDDH': pint.models.binary_dd.BinaryDDH,
'BinaryDDK': pint.models.binary_ddk.BinaryDDK,
'BinaryELL1': pint.models.binary_ell1.BinaryELL1,
'BinaryELL1H': pint.models.binary_ell1.BinaryELL1H,
'BinaryELL1k': pint.models.binary_ell1.BinaryELL1k,
'DispersionDM': pint.models.dispersion_model.DispersionDM,
'DispersionDMX': pint.models.dispersion_model.DispersionDMX,
'DispersionJump': pint.models.dispersion_model.DispersionJump,
'FDJumpDM': pint.models.dispersion_model.FDJumpDM,
'DMWaveX': pint.models.dmwavex.DMWaveX,
'FD': pint.models.frequency_dependent.FD,
'Glitch': pint.models.glitch.Glitch,
'PhaseOffset': pint.models.phase_offset.PhaseOffset,
'PiecewiseSpindown': pint.models.piecewise.PiecewiseSpindown,
'IFunc': pint.models.ifunc.IFunc,
'PhaseJump': pint.models.jump.PhaseJump,
'ScaleToaError': pint.models.noise_model.ScaleToaError,
'ScaleDmError': pint.models.noise_model.ScaleDmError,
'EcorrNoise': pint.models.noise_model.EcorrNoise,
'PLDMNoise': pint.models.noise_model.PLDMNoise,
'PLRedNoise': pint.models.noise_model.PLRedNoise,
'SolarSystemShapiro': pint.models.solar_system_shapiro.SolarSystemShapiro,
'SolarWindDispersion': pint.models.solar_wind_dispersion.SolarWindDispersion,
'SolarWindDispersionX': pint.models.solar_wind_dispersion.SolarWindDispersionX,
'Spindown': pint.models.spindown.Spindown,
'FDJump': pint.models.fdjump.FDJump,
'TroposphereDelay': pint.models.troposphere_delay.TroposphereDelay,
'Wave': pint.models.wave.Wave,
'WaveX': pint.models.wavex.WaveX}
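Any of these classes can be pulled out of the registry and instantiated directly, for example to inspect the parameters it defines before adding it to a model. A brief sketch ("Glitch" is just one example key from the dictionary above):
# Instantiate a component from the registry and list its parameter names
glitch = Component.component_types["Glitch"]()
print(glitch.params)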
When PINT builds a model from a par file, it has to infer what components to include in the model. This is done by the component_special_params of each Component. A component will be instantiated when one of its special parameters is present in the par file.
[23]:
from collections import defaultdict
special = defaultdict(list)
for comp, tp in Component.component_types.items():
for p in tp().component_special_params:
special[p].append(comp)
special
[23]:
defaultdict(list,
{'RAJ': ['AstrometryEquatorial'],
'DECJ': ['AstrometryEquatorial'],
'PMRA': ['AstrometryEquatorial'],
'PMDEC': ['AstrometryEquatorial'],
'RA': ['AstrometryEquatorial'],
'DEC': ['AstrometryEquatorial'],
'ELONG': ['AstrometryEcliptic'],
'ELAT': ['AstrometryEcliptic'],
'PMELONG': ['AstrometryEcliptic'],
'PMELAT': ['AstrometryEcliptic'],
'LAMBDA': ['AstrometryEcliptic'],
'BETA': ['AstrometryEcliptic'],
'PMLAMBDA': ['AstrometryEcliptic'],
'PMBETA': ['AstrometryEcliptic'],
'DMX_0001': ['DispersionDMX'],
'DMXR1_0001': ['DispersionDMX'],
'DMXR2_0001': ['DispersionDMX'],
'DMWXFREQ_0001': ['DMWaveX'],
'DMWXSIN_0001': ['DMWaveX'],
'DMWXCOS_0001': ['DMWaveX'],
'NE_SW': ['SolarWindDispersion'],
'SWM': ['SolarWindDispersion'],
'SWP': ['SolarWindDispersion'],
'NE1AU': ['SolarWindDispersion'],
'SOLARN0': ['SolarWindDispersion'],
'SWXDM_0001': ['SolarWindDispersionX'],
'SWXP_0001': ['SolarWindDispersionX'],
'SWXR1_0001': ['SolarWindDispersionX'],
'SWXR2_0001': ['SolarWindDispersionX'],
'WXFREQ_0001': ['WaveX'],
'WXSIN_0001': ['WaveX'],
'WXCOS_0001': ['WaveX']})
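As a rough illustration of how this mapping works, you can scan a par file yourself and see which components its parameters would trigger. This is only a sketch built on the special dictionary above, not how PINT actually constructs models:
# Check which components would be activated by parameters in a par file
parfile = pint.config.examplefile("B1855+09_NANOGrav_dfg+12_TAI.par")
triggered = set()
with open(parfile) as fh:
    for line in fh:
        fields = line.split()
        if fields and fields[0] in special:
            triggered.update(special[fields[0]])
print(sorted(triggered))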
This Jupyter notebook can be downloaded from understanding_parameters.ipynb, or viewed as a python script at understanding_parameters.py.
Understanding Parameters
[1]:
import pint.models
import pint.models.parameter as pp
import astropy.units as u
from astropy.time import Time
import pint.config
import pint.logging
pint.logging.setup(level="INFO")
[1]:
1
[2]:
# Load a model to play with
model = pint.models.get_model(
pint.config.examplefile("B1855+09_NANOGrav_dfg+12_TAI.par")
)
[3]:
# This model has a large number of parameters of various types
model.params
[3]:
['PSR',
'TRACK',
'EPHEM',
'CLOCK',
'UNITS',
'START',
'FINISH',
'RM',
'INFO',
'TIMEEPH',
'T2CMETHOD',
'BINARY',
'DILATEFREQ',
'DMDATA',
'NTOA',
'CHI2',
'CHI2R',
'TRES',
'DMRES',
'POSEPOCH',
'PX',
'RAJ',
'DECJ',
'PMRA',
'PMDEC',
'F0',
'PEPOCH',
'F1',
'CORRECT_TROPOSPHERE',
'PLANET_SHAPIRO',
'NE_SW',
'SWP',
'SWM',
'DM',
'DM1',
'DMEPOCH',
'DMX',
'DMX_0001',
'DMXR1_0001',
'DMXR2_0001',
'DMX_0002',
'DMX_0003',
'DMX_0004',
'DMX_0005',
'DMX_0006',
'DMX_0007',
'DMX_0008',
'DMX_0009',
'DMX_0010',
'DMX_0011',
'DMX_0012',
'DMX_0013',
'DMX_0014',
'DMX_0015',
'DMX_0016',
'DMX_0017',
'DMX_0018',
'DMX_0019',
'DMX_0020',
'DMX_0021',
'DMX_0022',
'DMX_0023',
'DMX_0024',
'DMX_0025',
'DMX_0026',
'DMX_0027',
'DMX_0028',
'DMX_0029',
'DMX_0030',
'DMXR1_0002',
'DMXR1_0003',
'DMXR1_0004',
'DMXR1_0005',
'DMXR1_0006',
'DMXR1_0007',
'DMXR1_0008',
'DMXR1_0009',
'DMXR1_0010',
'DMXR1_0011',
'DMXR1_0012',
'DMXR1_0013',
'DMXR1_0014',
'DMXR1_0015',
'DMXR1_0016',
'DMXR1_0017',
'DMXR1_0018',
'DMXR1_0019',
'DMXR1_0020',
'DMXR1_0021',
'DMXR1_0022',
'DMXR1_0023',
'DMXR1_0024',
'DMXR1_0025',
'DMXR1_0026',
'DMXR1_0027',
'DMXR1_0028',
'DMXR1_0029',
'DMXR1_0030',
'DMXR2_0002',
'DMXR2_0003',
'DMXR2_0004',
'DMXR2_0005',
'DMXR2_0006',
'DMXR2_0007',
'DMXR2_0008',
'DMXR2_0009',
'DMXR2_0010',
'DMXR2_0011',
'DMXR2_0012',
'DMXR2_0013',
'DMXR2_0014',
'DMXR2_0015',
'DMXR2_0016',
'DMXR2_0017',
'DMXR2_0018',
'DMXR2_0019',
'DMXR2_0020',
'DMXR2_0021',
'DMXR2_0022',
'DMXR2_0023',
'DMXR2_0024',
'DMXR2_0025',
'DMXR2_0026',
'DMXR2_0027',
'DMXR2_0028',
'DMXR2_0029',
'DMXR2_0030',
'PB',
'PBDOT',
'A1',
'A1DOT',
'ECC',
'EDOT',
'T0',
'OM',
'OMDOT',
'M2',
'SINI',
'FB0',
'A0',
'B0',
'GAMMA',
'DR',
'DTH',
'TZRMJD',
'TZRSITE',
'TZRFRQ',
'JUMP1',
'JUMP2',
'JUMP3',
'JUMP4',
'JUMP5',
'JUMP6',
'JUMP7',
'JUMP8',
'JUMP9',
'JUMP10',
'JUMP11',
'JUMP12',
'JUMP13',
'JUMP14',
'JUMP15',
'JUMP16',
'JUMP17',
'JUMP18',
'JUMP19',
'JUMP20',
'JUMP21']
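Closely related to params is free_params, which lists only the parameters that are currently free (frozen=False) and will therefore be adjusted in a fit. A quick sketch:
# Parameters that are currently being fit; everything else is held fixed
print(model.free_params)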
Attributes of Parameters
Each parameter has attributes that specify the name and type of the parameter, its units, and the uncertainty. The par.quantity and par.uncertainty are both astropy quantities with units. If you need the bare values, access par.value and par.uncertainty_value, which will be numerical values in the units of par.units.
Let’s look at those for each of the types of parameters in this model.
[4]:
printed = []
for p in model.params:
par = getattr(model, p)
if type(par) in printed:
continue
print("Name ", par.name)
print("Type ", type(par))
print("Quantity ", par.quantity, type(par.quantity))
print("Value ", par.value)
print("units ", par.units)
print("Uncertainty ", par.uncertainty)
print("Uncertainty_value", par.uncertainty_value)
print("Summary ", par)
print("Parfile Style ", par.as_parfile_line())
print()
printed.append(type(par))
Name PSR
Type <class 'pint.models.parameter.strParameter'>
Quantity 1855+09 <class 'str'>
Value 1855+09
units None
Uncertainty None
Uncertainty_value None
Summary strParameter( PSR 1855+09 frozen=True)
Parfile Style PSRJ 1855+09
Name START
Type <class 'pint.models.parameter.MJDParameter'>
Quantity 53358.7264648894852140 <class 'astropy.time.core.Time'>
Value 53358.726464889485214
units d
Uncertainty None
Uncertainty_value None
Summary MJDParameter( START 53358.7264648894852140 (d) frozen=True)
Parfile Style START 53358.7264648894852140
Name RM
Type <class 'pint.models.parameter.floatParameter'>
Quantity None <class 'NoneType'>
Value None
units rad / m2
Uncertainty None
Uncertainty_value None
Summary floatParameter( RM UNSET
Parfile Style
Name DILATEFREQ
Type <class 'pint.models.parameter.boolParameter'>
Quantity False <class 'bool'>
Value False
units None
Uncertainty None
Uncertainty_value None
Summary boolParameter( DILATEFREQ N frozen=True)
Parfile Style DILATEFREQ N
Name NTOA
Type <class 'pint.models.parameter.intParameter'>
Quantity 702 <class 'int'>
Value 702
units None
Uncertainty None
Uncertainty_value None
Summary intParameter( NTOA 702 frozen=True)
Parfile Style NTOA 702
Name RAJ
Type <class 'pint.models.parameter.AngleParameter'>
Quantity 18h57m36.3932884s <class 'astropy.coordinates.angles.core.Angle'>
Value 18.960109246777776
units hourangle
Uncertainty 0h00m00.00002603s
Uncertainty_value 7.2298063352084138888e-09
Summary AngleParameter( RAJ 18:57:36.39328840 (hourangle) +/- 0h00m00.00002603s frozen=False)
Parfile Style RAJ 18:57:36.39328840 1 0.00002602730280675029
Name F0
Type <class 'pint.models.parameter.prefixParameter'>
Quantity 186.49408156698235 Hz <class 'astropy.units.quantity.Quantity'>
Value 186.49408156698235146
units Hz
Uncertainty 6.98911818e-12 Hz
Uncertainty_value 6.98911818e-12
Summary floatParameter( F0 186.49408156698235146 (Hz) +/- 6.98911818e-12 Hz frozen=False)
Parfile Style F0 186.49408156698235146 1 6.98911818e-12
Name JUMP1
Type <class 'pint.models.parameter.maskParameter'>
Quantity 7.6456527699426e-07 s <class 'astropy.units.quantity.Quantity'>
Value 7.6456527699426e-07
units s
Uncertainty 0.0 s
Uncertainty_value 0.0
Summary maskParameter(JUMP1 -chanid asp_424 7.6456527699426e-07 +/- 0.0 s (s))
Parfile Style JUMP -chanid asp_424 7.6456527699426e-07 1 0.0
Note that DMX_nnnn is an example of a prefixParameter. These are parameters that are indexed by a numerical value, and a component can have an arbitrary number of them. In some cases, like Fn, they are the coefficients of a Taylor expansion, so all indices up to the maximum must be present. For others, like DMX_nnnn, some indices can be missing without a problem.
prefixParameters can be used to hold indexed parameters of various types (float, bool, str, MJD, angle). Each one will instantiate a parameter of that type as par.param_comp. When you print the parameter it looks like the param_comp type.
[5]:
# Note that each instance of a prefix parameter is of type `prefixParameter`
print("Type = ", type(model.DMX_0016))
print("param_comp type = ", type(model.DMX_0016.param_comp))
print("Printing gives : ", model.DMX_0016)
Type = <class 'pint.models.parameter.prefixParameter'>
param_comp type = <class 'pint.models.parameter.floatParameter'>
Printing gives : floatParameter( DMX_0016 0.0009456692079715063 (pc / cm3) +/- 4.382957805166126e-05 pc / cm3 frozen=False)
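If you want to enumerate every parameter sharing a prefix, the TimingModel interface provides get_prefix_mapping (described later in this documentation). A minimal sketch, assuming it takes the prefix string (here "DMX_", which is what par.prefix reports for these parameters) and returns a dictionary of {index: parameter name}:
dmx_names = model.get_prefix_mapping("DMX_")
for index, name in sorted(dmx_names.items()):
    par = getattr(model, name)
    print(index, name, par.quantity)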
Constructing a parameter
You can make a Parameter instance by calling its constructor.
[6]:
# You can specify the value as a number
t = pp.floatParameter(name="TEST", value=100, units="Hz", uncertainty=0.03)
print(t)
floatParameter( TEST 100.0 (Hz) +/- 0.03 Hz frozen=True)
[7]:
# Or as a string that will be parsed
t2 = pp.floatParameter(name="TEST", value="200", units="Hz", uncertainty=".04")
print(t2)
floatParameter( TEST 200.0 (Hz) +/- 0.04 Hz frozen=True)
[8]:
# Or as an astropy Quantity with units (this is the preferred method!)
t3 = pp.floatParameter(
name="TEST", value=0.3 * u.kHz, units="Hz", uncertainty=4e-5 * u.kHz
)
print(t3)
print(t3.quantity)
print(t3.value)
print(t3.uncertainty)
print(t3.uncertainty_value)
floatParameter( TEST 300.0 (Hz) +/- 0.04 Hz frozen=True)
0.3 kHz
300.0
0.04 Hz
0.04
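Other parameter types follow the same constructor pattern. A minimal sketch with an MJD-valued parameter (the name TESTEPOCH is made up for illustration):
t4 = pp.MJDParameter(name="TESTEPOCH", description="An example epoch", time_scale="tdb")
t4.quantity = 55000  # a bare number is interpreted as an MJD in the parameter's time scale
print(t4)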
Setting Parameters
The value of a parameter can be set in multiple ways. As usual, the preferred method is to set it using an astropy Quantity, so that units will be checked and respected.
[9]:
par = model.F0
# Here we set it using a Quantity in kHz. Because astropy Quantities are used, it does the right thing!
par.quantity = 0.3 * u.kHz
print("Quantity ", par.quantity, type(par.quantity))
print("Value ", par.value)
print(par)
Quantity 0.3 kHz <class 'astropy.units.quantity.Quantity'>
Value 299.9999999999999889
floatParameter( F0 299.9999999999999889 (Hz) +/- 6.98911818e-12 Hz frozen=False)
[10]:
# Here we set it with a bare number, which is interpreted as being in the units `par.units`
print(par)
par.quantity = 200
print("Quantity ", par.quantity, type(par.quantity))
print("Value ", par.value)
print(par)
floatParameter( F0 299.9999999999999889 (Hz) +/- 6.98911818e-12 Hz frozen=False)
Quantity 200.0 Hz <class 'astropy.units.quantity.Quantity'>
Value 200.0
floatParameter( F0 200.0 (Hz) +/- 6.98911818e-12 Hz frozen=False)
[11]:
# If you try to set the parameter to a quantity that isn't compatible with the units, it raises an exception
try:
print(par)
par.value = 100 * u.second # SET F0 to seconds as time.
print("Quantity ", par.quantity, type(par.quantity))
print("Value ", par.value)
print(par)
except u.UnitConversionError as e:
print("Exception raised:", e)
else:
raise ValueError("That was supposed to raise an exception!")
floatParameter( F0 200.0 (Hz) +/- 6.98911818e-12 Hz frozen=False)
Exception raised: 's' (time) and 'Hz' (frequency) are not convertible
MJD parameters
These parameters hold a date as an astropy Time object. Numbers will be interpreted as MJDs in the default time scale of the parameter (which is UTC for the TZRMJD parameter).
[12]:
par = model.TZRMJD
print(par)
par.quantity = 54000
print("Quantity ", par.quantity, type(par.quantity))
print("Value ", par.value)
print(par)
par.quantity
MJDParameter( TZRMJD 54177.5083593432625578 (d) frozen=True)
Quantity 54000.0 <class 'astropy.time.core.Time'>
Value 54000.0
MJDParameter( TZRMJD 54000.0000000000000000 (d) frozen=True)
[12]:
<Time object: scale='utc' format='pulsar_mjd' value=54000.0>
[13]:
# And of course, you can set them with a `Time` object
par.quantity = Time.now()
print("Quantity ", par.quantity, type(par.quantity))
print("Value ", par.value)
print(par)
par.quantity
Quantity 2024-04-26 18:30:47.258105 <class 'astropy.time.core.Time'>
Value 60426.771380302141203
MJDParameter( TZRMJD 60426.7713803021412037 (d) frozen=True)
[13]:
<Time object: scale='utc' format='datetime' value=2024-04-26 18:30:47.258105>
[14]:
# I wonder if this should get converted to UTC?
par.quantity = Time(58000.0, format="mjd", scale="tdb")
print("Quantity ", par.quantity, type(par.quantity))
print("Value ", par.value)
print(par)
par.quantity
Quantity 58000.0 <class 'astropy.time.core.Time'>
Value 58000.0
MJDParameter( TZRMJD 58000.0000000000000000 (d) frozen=True)
[14]:
<Time object: scale='tdb' format='mjd' value=58000.0>
AngleParameters
These store quantities as angles using astropy coordinates
[15]:
# The unit for RAJ is hourangle
par = model.RAJ
print(par)
par.quantity = 12
print("Quantity ", par.quantity, type(par.quantity))
print("Value ", par.value)
print(par)
AngleParameter( RAJ 18:57:36.39328840 (hourangle) +/- 0h00m00.00002603s frozen=False)
Quantity 12h00m00s <class 'astropy.coordinates.angles.core.Angle'>
Value 12.0
AngleParameter( RAJ 12:00:00.00000000 (hourangle) +/- 0h00m00.00002603s frozen=False)
[16]:
# Best practice is to set using a quantity with units
print(par)
par.quantity = 30.5 * u.hourangle
print("Quantity ", par.quantity, type(par.quantity))
print("Value ", par.value)
print(par)
par.quantity
AngleParameter( RAJ 12:00:00.00000000 (hourangle) +/- 0h00m00.00002603s frozen=False)
Quantity 30h30m00s <class 'astropy.coordinates.angles.core.Angle'>
Value 30.5
AngleParameter( RAJ 30:30:00.00000000 (hourangle) +/- 0h00m00.00002603s frozen=False)
[17]:
# But a string will work
par.quantity = "20:30:00"
print("Quantity ", par.quantity, type(par.quantity))
print("Value ", par.value)
print(par)
par.quantity
Quantity 20h30m00s <class 'astropy.coordinates.angles.core.Angle'>
Value 20.5
AngleParameter( RAJ 20:30:00.00000000 (hourangle) +/- 0h00m00.00002603s frozen=False)
[18]:
# And the units can be anything that is convertible to hourangle
print(par)
par.quantity = 30 * u.deg
print("Quantity ", par.quantity, type(par.quantity))
print("Quantity in deg", par.quantity.to(u.deg))
print("Value ", par.value)
print(par)
par.quantity
AngleParameter( RAJ 20:30:00.00000000 (hourangle) +/- 0h00m00.00002603s frozen=False)
Quantity 2h00m00s <class 'astropy.coordinates.angles.core.Angle'>
Quantity in deg 30d00m00s
Value 2.0000000000000004
AngleParameter( RAJ 2:00:00.00000000 (hourangle) +/- 0h00m00.00002603s frozen=False)
[19]:
# Here, setting RAJ to an incompatible unit will raise an exception
try:
# Example for RAJ
print(par)
par.quantity = 30 * u.hour # Here hour is in the unit of time, not hourangle
print("Quantity ", par.quantity, type(par.quantity))
print(par)
par.quantity
except u.UnitConversionError as e:
print("Exception raised:", e)
else:
raise ValueError("That was supposed to raise an exception!")
AngleParameter( RAJ 2:00:00.00000000 (hourangle) +/- 0h00m00.00002603s frozen=False)
Exception raised: 'h' (time) and 'hourangle' (angle) are not convertible
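Uncertainties can be assigned in the same way, with an astropy Quantity in any compatible unit. A minimal sketch, continuing with the RAJ parameter used above:
par.uncertainty = 0.001 * u.arcsec
print(par.uncertainty)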
This Jupyter notebook can be downloaded from build_model_from_scratch.ipynb, or viewed as a python script at build_model_from_scratch.py.
Building a timing model from scratch
This example includes:
- Constructing a timing model object from scratch
- Adding and deleting components
- Assigning parameter values
- Adding prefix-able parameters
[1]:
import astropy.units as u # Astropy units is a very useful module.
import pint.logging
try:
from IPython.display import display
except ImportError:
# Older IPython
from IPython.core.display_functions import display
# setup logging
pint.logging.setup(level="INFO")
from pint.models import (
parameter as p,
) # We would like to add parameters to the model, so we need parameter module.
from pint.models.timing_model import (
TimingModel,
Component,
) # Interface for timing model
import pint
from astropy.time import Time # PINT uses astropy Time objects to represent times
Typically, timing models are built by reading a par file with the get_model() function, but it is possible to construct them entirely programmatically from scratch. Also, once you have a TimingModel object (no matter how you built it), you can modify it by adding or removing parameters or entire components. This example shows how this is done.
We are going to build the model for "NGC6440E.par" from scratch.
First let us see all the possible components we can use
All built-in component classes can be viewed via the Component class, which uses a metaclass to collect the built-in component classes. For how to make a component class, see the example "make_component_class" (in preparation).
[2]:
# list all the existing components
# all_components is a dictionary, with the component name as the key and component class as the value.
all_components = Component.component_types
# Print the component class names.
_ = [print(x) for x in all_components] # The "_ =" just suppresses excess output
AbsPhase
AstrometryEquatorial
AstrometryEcliptic
BinaryBT
BinaryBTPiecewise
BinaryDD
BinaryDDS
BinaryDDGR
BinaryDDH
BinaryDDK
BinaryELL1
BinaryELL1H
BinaryELL1k
DispersionDM
DispersionDMX
DispersionJump
FDJumpDM
DMWaveX
FD
Glitch
PhaseOffset
PiecewiseSpindown
IFunc
PhaseJump
ScaleToaError
ScaleDmError
EcorrNoise
PLDMNoise
PLRedNoise
SolarSystemShapiro
SolarWindDispersion
SolarWindDispersionX
Spindown
FDJump
TroposphereDelay
Wave
WaveX
Choose your components
Let's start with a relatively simple model, containing:
- AbsPhase: the absolute phase of the pulsar; typical parameters are TZRMJD, TZRFRQ, …
- AstrometryEquatorial: the ICRS equatorial coordinates; parameters are RAJ, DECJ, PMRA, PMDEC, …
- Spindown: the pulsar spin-down model; parameters are F0, F1, …
We will add a dispersion model as a demo.
[3]:
selected_components = ["AbsPhase", "AstrometryEquatorial", "Spindown"]
component_instances = []
# Initiate the component instances
for cp_name in selected_components:
component_class = all_components[cp_name] # Get the component class
component_instance = component_class() # Instantiate a component object
component_instances.append(component_instance)
Construct the timing model (i.e., a TimingModel instance)
The TimingModel class provides the storage and interface for the components. It also manages the components internally.
[4]:
# Construct timing model instance, given a name and a list of components to include (that we just created above)
tm = TimingModel("NGC6400E", component_instances)
View the components in the timing model instance
To view all the components in the TimingModel instance, we can use the property .components, which returns a dictionary (name as the key, component instance as the value).
Internally, the components are stored in lists (ordered lists, you will see why this is important below) according to their types. All the delay-type components (subclasses of the DelayComponent class) are stored in the DelayComponent_list, and the phase-type components (subclasses of the PhaseComponent class) in the PhaseComponent_list.
[5]:
# print the components in the timing model
for cp_name, cp_instance in tm.components.items():
print(cp_name, cp_instance)
AbsPhase AbsPhase(
MJDParameter( TZRMJD UNSET,
strParameter( TZRSITE UNSET,
floatParameter( TZRFRQ UNSET)
Spindown Spindown(
floatParameter( F0 0.0 (Hz) frozen=True),
MJDParameter( PEPOCH UNSET)
AstrometryEquatorial AstrometryEquatorial(
MJDParameter( POSEPOCH UNSET,
floatParameter( PX 0.0 (mas) frozen=True),
AngleParameter( RAJ UNSET,
AngleParameter( DECJ UNSET,
floatParameter( PMRA 0.0 (mas / yr) frozen=True),
floatParameter( PMDEC 0.0 (mas / yr) frozen=True))
Useful methods of TimingModel
- TimingModel.components(): List all the components in the timing model.
- TimingModel.add_component(): Add a component to the timing model.
- TimingModel.remove_component(): Remove a component from the timing model.
- TimingModel.params(): List all the parameters in the timing model, from all components.
- TimingModel.setup(): Set up the components (e.g., register the derivatives).
- TimingModel.validate(): Validate the components and check that the parameters are set up properly.
- TimingModel.delay(): Compute the total delay.
- TimingModel.phase(): Compute the total phase.
- TimingModel.delay_funcs(): List all the delay functions from all the delay components.
- TimingModel.phase_funcs(): List all the phase functions from all the phase components.
- TimingModel.get_component_type(): Get all the components from one category.
- TimingModel.map_component(): Get a component's location. It returns the component's instance, its order in the list, the host list, and its type.
- TimingModel.get_params_mapping(): Report which component each parameter comes from.
- TimingModel.get_prefix_mapping(): Get the index mapping for one prefix parameter.
- TimingModel.param_help(): Print the help line for all available parameters.

A short sketch exercising a couple of these on the model we just built is shown below.
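A minimal sketch, assuming the return conventions described in the list above:
comp, order, host_list, comp_type = tm.map_component("Spindown")
print(comp_type, order)
print(tm.get_params_mapping())  # maps each parameter to the component it comes from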
Component order
Since the times that are given to a delay component include all the delays from the previously evaluated delay components, the order of delay components is important. For example, the solar system delays need to be applied to get to barycentric time, which is needed to evaluate the binary delays; then the binary delays must be applied to get to pulsar proper time.
PINT provides a default ordering for the components. In most cases this should be correct, but it can be modified by expert users for a particular purpose.
Here is the default order:
[6]:
from pint.models.timing_model import DEFAULT_ORDER
_ = [print(order) for order in DEFAULT_ORDER]
astrometry
jump_delay
troposphere
solar_system_shapiro
solar_wind
dispersion_constant
dispersion_dmx
dispersion_jump
pulsar_system
frequency_dependent
absolute_phase
spindown
phase_jump
wave
wavex
Add parameter values
Initially, the parameters have no values or only default values, so we must set them before validating the model.
Please note that PINT's convention for the fit flag is defined by the Parameter.frozen attribute: Parameter.frozen = True means "do not fit this parameter". This is the opposite of the TEMPO/TEMPO2 .par file flag, where "1" means the parameter is fitted.
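As a minimal read-only sketch of this convention (not part of the original notebook), you can inspect the flag on the model we are building:
print(tm.F0.frozen)  # True at this point: F0 is held fixed and would not be fit
print([name for name in tm.params if not getattr(tm, name).frozen])  # likely empty: parameters default to frozen=True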
[7]:
# We build a dictionary with a key for each parameter we want to set.
# The dictionary entries can be either
# {'parameter name': (parameter value, TEMPO_Fit_flag, uncertainty)} akin to a TEMPO par file form
# or
# {'parameter name': (parameter value, )} for parameters that can't be fit
# NOTE: The values here are assumed to be in the default units for each parameter
# Notice that we assign values with units, and pint defines a special hourangle_second unit that can be used for
# right ascensions. Also, angles can be specified as strings that will be parsed by astropy.
params = {
"PSR": ("1748-2021E",),
"RAJ": ("17:48:52.75", 1, 0.05 * pint.hourangle_second),
"DECJ": ("-20:21:29.0", 1, 0.4 * u.arcsec),
"F0": (61.485476554 * u.Hz, 1, 5e-10 * u.Hz),
"PEPOCH": (Time(53750.000000, format="mjd", scale="tdb"),),
"POSEPOCH": (Time(53750.000000, format="mjd", scale="tdb"),),
"TZRMJD": (Time(53801.38605120074849, format="mjd", scale="tdb"),),
"TZRFRQ": (1949.609 * u.MHz,),
"TZRSITE": (1,),
}
# Assign the parameters
for name, info in params.items():
par = getattr(tm, name) # Get parameter object from name
par.quantity = info[0] # set parameter value
if len(info) > 1:
if info[1] == 1:
par.frozen = False # Frozen means not fit.
par.uncertainty = info[2]
Setting up and validating the model
Setting up the model builds the necessary model attributes, and validating the model checks whether any important parameter values are missing and whether the parameters are assigned correctly. If anything is not assigned correctly, it will raise an exception.
[8]:
tm.setup()
tm.validate()
# You should see all the assigned parameters.
# Printing a TimingModel object shows the parfile representation
print(tm)
# Created: 2024-04-26T18:23:59.212450
# PINT_version: 1.0
# User: docs
# Host: build-24199868-project-85767-nanograv-pint
# OS: Linux-5.19.0-1028-aws-x86_64-with-glibc2.35
# Python: 3.11.6 (main, Feb 1 2024, 16:47:41) [GCC 11.4.0]
# Format: pint
PSR 1748-2021E
DILATEFREQ N
DMDATA N
NTOA 0
RAJ 17:48:52.75000000 1 0.04999999999999999584
DECJ -20:21:29.00000000 1 0.40000000000000002220
PMRA 0.0
PMDEC 0.0
PX 0.0
POSEPOCH 53750.0000000000000000
F0 61.48547655400000167 1 5.0000000000000003114e-10
PEPOCH 53750.0000000000000000
TZRMJD 53801.3860512007449870
TZRSITE 1
TZRFRQ 1949.609
The validate function is also integrated into the add_component() function. When adding a component it will validate the timing model by default; however, this can be switched off by setting the flag validate=False. We will use this flag in the next section.
Add a component to the timing model
We will add the dispersion component to the timing model. The steps are:
1. Instantiate the DispersionDM class.
2. Add the dispersion instance into the timing model with validate=False. Since the dispersion model's parameters have not been set yet, validation would fail; we will validate after the parameters are filled in.
3. Add parameters and set their values.
4. Validate the timing model.
[9]:
dispersion_class = all_components["DispersionDM"]
dispersion = dispersion_class() # Make the dispersion instance.
# Using validate=False here allows a component to be added first and validated later.
tm.add_component(dispersion, validate=False)
Let us examine the components in the timing model.
[10]:
# print the components out, DispersionDM should be there.
print("All components in timing model:")
display(tm.components)
print("\n")
print("Delay components in the DelayComponent_list (order matters!):")
# print the delay component order, dispersion should be after the astrometry
display(tm.DelayComponent_list)
All components in timing model:
{'AbsPhase': AbsPhase(
MJDParameter( TZRMJD 53801.3860512007449870 (d) frozen=True),
strParameter( TZRSITE 1 frozen=True),
floatParameter( TZRFRQ 1949.609 (MHz) frozen=True)),
'Spindown': Spindown(
floatParameter( F0 61.48547655400000167 (Hz) +/- 5e-10 Hz frozen=False),
MJDParameter( PEPOCH 53750.0000000000000000 (d) frozen=True)),
'AstrometryEquatorial': AstrometryEquatorial(
MJDParameter( POSEPOCH 53750.0000000000000000 (d) frozen=True),
floatParameter( PX 0.0 (mas) frozen=True),
AngleParameter( RAJ 17:48:52.75000000 (hourangle) +/- 0h00m00.05s frozen=False),
AngleParameter( DECJ -20:21:29.00000000 (deg) +/- 0d00m00.4s frozen=False),
floatParameter( PMRA 0.0 (mas / yr) frozen=True),
floatParameter( PMDEC 0.0 (mas / yr) frozen=True)),
'DispersionDM': DispersionDM(
floatParameter( DM 0.0 (pc / cm3) frozen=True),
floatParameter( DM1 UNSET,
MJDParameter( DMEPOCH UNSET)}
Delay components in the DelayComponent_list (order matters!):
[AstrometryEquatorial(
MJDParameter( POSEPOCH 53750.0000000000000000 (d) frozen=True),
floatParameter( PX 0.0 (mas) frozen=True),
AngleParameter( RAJ 17:48:52.75000000 (hourangle) +/- 0h00m00.05s frozen=False),
AngleParameter( DECJ -20:21:29.00000000 (deg) +/- 0d00m00.4s frozen=False),
floatParameter( PMRA 0.0 (mas / yr) frozen=True),
floatParameter( PMDEC 0.0 (mas / yr) frozen=True)),
DispersionDM(
floatParameter( DM 0.0 (pc / cm3) frozen=True),
floatParameter( DM1 UNSET,
MJDParameter( DMEPOCH UNSET)]
The DM value can be set just as we set the other parameters above.
[11]:
tm.DM.quantity = 223.9 * u.pc / u.cm**3
tm.DM.frozen = False # Frozen means not fit.
tm.DM.uncertainty = 0.3 * u.pc / u.cm**3
Run validate again to make sure everything is set up correctly.
[12]:
tm.validate() # If this fails, that means the DM model was not setup correctly.
Now the dispersion model component has been added and you are set for your analysis.
Delete a component
Deleting a component will remove it from the component list.
[13]:
# Remove by name
tm.remove_component("DispersionDM")
[14]:
display(tm.components)
{'AbsPhase': AbsPhase(
MJDParameter( TZRMJD 53801.3860512007449870 (d) frozen=True),
strParameter( TZRSITE 1 frozen=True),
floatParameter( TZRFRQ 1949.609 (MHz) frozen=True)),
'Spindown': Spindown(
floatParameter( F0 61.48547655400000167 (Hz) +/- 5e-10 Hz frozen=False),
MJDParameter( PEPOCH 53750.0000000000000000 (d) frozen=True)),
'AstrometryEquatorial': AstrometryEquatorial(
MJDParameter( POSEPOCH 53750.0000000000000000 (d) frozen=True),
floatParameter( PX 0.0 (mas) frozen=True),
AngleParameter( RAJ 17:48:52.75000000 (hourangle) +/- 0h00m00.05s frozen=False),
AngleParameter( DECJ -20:21:29.00000000 (deg) +/- 0d00m00.4s frozen=False),
floatParameter( PMRA 0.0 (mas / yr) frozen=True),
floatParameter( PMDEC 0.0 (mas / yr) frozen=True))}
The dispersion model should now be gone from the timing model.
Add prefix-style parameters
Prefix-style parameters are used in certain models (e.g., DMX_nnnn or Fn).
Let us use the DispersionDMX
model to demonstrate how it works.
[15]:
tm.add_component(all_components["DispersionDMX"]())
[16]:
_ = [print(cp) for cp in tm.components]
# "DispersionDMX" should be there.
AbsPhase
Spindown
AstrometryEquatorial
DispersionDMX
Display the existing DMX parameters
What do we have in the DMX model?
Note that the Component class also has the attribute params, which lists only the parameters in that component.
[17]:
print(tm.components["DispersionDMX"].params)
['DMX', 'DMX_0001', 'DMXR1_0001', 'DMXR2_0001']
Add DMX parameters
Since we already have DMX_0001, we will add DMX_0003, to show that for the DMX model the DMX index (the '0001' part) does not have to be consecutive.
Prefix-type parameters have to use the prefixParameter class from the pint.models.parameter module.
[18]:
# Add prefix parameters
dmx_0003 = p.prefixParameter(
parameter_type="float", name="DMX_0003", value=None, units=u.pc / u.cm**3
)
tm.components["DispersionDMX"].add_param(dmx_0003, setup=True)
# tm.add_param_from_top(dmx_0003, "DispersionDMX", setup=True)
# # The component is given by its name string; use setup=True to make sure the new parameter gets registered.
Check that the parameter and component are set up correctly.
[19]:
display(tm.params)
display(tm.delay_deriv_funcs.keys()) # the derivative function should be added.
['PSR',
'TRACK',
'EPHEM',
'CLOCK',
'UNITS',
'START',
'FINISH',
'RM',
'INFO',
'TIMEEPH',
'T2CMETHOD',
'BINARY',
'DILATEFREQ',
'DMDATA',
'NTOA',
'CHI2',
'CHI2R',
'TRES',
'DMRES',
'POSEPOCH',
'PX',
'RAJ',
'DECJ',
'PMRA',
'PMDEC',
'F0',
'PEPOCH',
'TZRMJD',
'TZRSITE',
'TZRFRQ',
'DMX',
'DMX_0001',
'DMXR1_0001',
'DMXR2_0001',
'DMX_0003']
dict_keys(['PX', 'RAJ', 'DECJ', 'PMRA', 'PMDEC', 'DMX_0001', 'DMX_0003'])
However, adding DMX_0003 alone is not enough, since each DMX parameter also needs a DMX range, DMXR1_0003 and DMXR2_0003 in this case. Without them, the validation will fail. So let us add them as well.
[20]:
dmxr1_0003 = p.prefixParameter(
parameter_type="mjd", name="DMXR1_0003", value=None, units=u.day
) # DMXR1 is a type of MJD parameter internally.
dmxr2_0003 = p.prefixParameter(
parameter_type="mjd", name="DMXR2_0003", value=None, units=u.day
) # DMXR2 is also an MJD parameter internally.
tm.components["DispersionDMX"].add_param(dmxr1_0003, setup=True)
tm.components["DispersionDMX"].add_param(dmxr2_0003, setup=True)
[21]:
tm.params
[21]:
['PSR',
'TRACK',
'EPHEM',
'CLOCK',
'UNITS',
'START',
'FINISH',
'RM',
'INFO',
'TIMEEPH',
'T2CMETHOD',
'BINARY',
'DILATEFREQ',
'DMDATA',
'NTOA',
'CHI2',
'CHI2R',
'TRES',
'DMRES',
'POSEPOCH',
'PX',
'RAJ',
'DECJ',
'PMRA',
'PMDEC',
'F0',
'PEPOCH',
'TZRMJD',
'TZRSITE',
'TZRFRQ',
'DMX',
'DMX_0001',
'DMXR1_0001',
'DMXR2_0001',
'DMX_0003',
'DMXR1_0003',
'DMXR2_0003']
Then validate it.
[22]:
tm.validate()
Remove a parameter
Removing a parameter just uses the remove_param() function.
[23]:
tm.remove_param("DMX_0003")
tm.remove_param("DMXR1_0003")
tm.remove_param("DMXR2_0003")
display(tm.params)
['PSR',
'TRACK',
'EPHEM',
'CLOCK',
'UNITS',
'START',
'FINISH',
'RM',
'INFO',
'TIMEEPH',
'T2CMETHOD',
'BINARY',
'DILATEFREQ',
'DMDATA',
'NTOA',
'CHI2',
'CHI2R',
'TRES',
'DMRES',
'POSEPOCH',
'PX',
'RAJ',
'DECJ',
'PMRA',
'PMDEC',
'F0',
'PEPOCH',
'TZRMJD',
'TZRSITE',
'TZRFRQ',
'DMX',
'DMX_0001',
'DMXR1_0001',
'DMXR2_0001']
Add higher order derivatives of spin frequency to timing model
Adding higher-order derivatives of the spin frequency (e.g., F2, F3, F4) is a common use case. Fn is a prefixParameter, but unlike the DMX_ parameters, all indices up to the maximum order must exist, since they represent the coefficients of a Taylor expansion.
Let us list the current spindown model parameters:
[24]:
display(tm.components["Spindown"].params)
['F0', 'PEPOCH']
Let us add F1 and F2 to the model. Both F1 and F2 need very high precision, so we use the longdouble=True flag to store their values as long doubles.
Note that if we added F3 directly without F2, the validation would fail.
[25]:
f1 = p.prefixParameter(
parameter_type="float", name="F1", value=0.0, units=u.Hz / (u.s), longdouble=True
)
f2 = p.prefixParameter(
parameter_type="float",
name="F2",
value=0.0,
units=u.Hz / (u.s) ** 2,
longdouble=True,
)
[26]:
tm.components["Spindown"].add_param(f1, setup=True)
tm.components["Spindown"].add_param(f2, setup=True)
[27]:
tm.validate()
[28]:
display(tm.params)
['PSR',
'TRACK',
'EPHEM',
'CLOCK',
'UNITS',
'START',
'FINISH',
'RM',
'INFO',
'TIMEEPH',
'T2CMETHOD',
'BINARY',
'DILATEFREQ',
'DMDATA',
'NTOA',
'CHI2',
'CHI2R',
'TRES',
'DMRES',
'POSEPOCH',
'PX',
'RAJ',
'DECJ',
'PMRA',
'PMDEC',
'F0',
'PEPOCH',
'F1',
'F2',
'TZRMJD',
'TZRSITE',
'TZRFRQ',
'DMX',
'DMX_0001',
'DMXR1_0001',
'DMXR2_0001']
Now F2
can be used in the timing model.
[29]:
tm.F1.quantity = -1.181e-15 * u.Hz / u.s
tm.F1.uncertainty = 1e-18 * u.Hz / u.s
tm.F2.quantity = 2e-10 * u.Hz / u.s**2
display(tm.F2)
floatParameter( F2 2e-10 (Hz / s2) frozen=True)
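Prefix parameters can also be added in bulk. A hedged sketch, reusing the add_param pattern from above, that would add two more (still empty) DMX bins together with their range parameters:
for idx in [4, 5]:
    for prefix, ptype, units in [
        ("DMX_", "float", u.pc / u.cm**3),
        ("DMXR1_", "mjd", u.day),
        ("DMXR2_", "mjd", u.day),
    ]:
        new_par = p.prefixParameter(
            parameter_type=ptype, name=f"{prefix}{idx:04d}", value=None, units=units
        )
        tm.components["DispersionDMX"].add_param(new_par, setup=True)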
This Jupyter notebook can be downloaded from How_to_build_a_timing_model_component.ipynb, or viewed as a python script at How_to_build_a_timing_model_component.py.
How to compose a timing model component
PINT's design makes it easy to add a new, custom timing model component to meet specific needs. This notebook demonstrates how to write your own timing model component with the minimal requirements so that PINT can recognize and use it in fits. Here, we implement a new spindown class, PeriodSpindown, that is parameterized by P0 and P1, instead of the built-in Spindown model component, which uses F0 and F1.
Building the timing model component from scratch
This example notebook includes the following contents:
- Defining a timing model component class
  - Necessary parts
  - Conventions
- Using it with the TimingModel class
  - Adding the new component to the TimingModel class
  - Using the functions in the TimingModel class to interact with the new component

We will build a simple model component: a pulsar spindown model with the spin period as its parameter, instead of the spin frequency.
Import the necessary modules
[1]:
# PINT uses astropy units in its internal calculations; using them is highly recommended for a new component
import astropy.units as u
# Import the component classes.
from pint.models.spindown import SpindownBase
import pint.models.parameter as p
import pint.config
import pint.logging
# setup the logging
pint.logging.setup(level="INFO")
[1]:
1
Define the timing model class
A timing model component should be a subclass of pint.models.timing_model.Component. PINT also pre-defines three component subclasses for the most common types of components, each with different attributes and functions (see: https://nanograv-pint.readthedocs.io/en/latest/api/pint.models.timing_model.html):
- DelayComponent for delay-type models.
- PhaseComponent for phase-type models.
- NoiseComponent for noise-type models.
Since we are making a spin-down model here, we will use the PhaseComponent.
Required parts
- Model parameters, generally defined as the pint.models.parameter.Parameter class or its subclasses (see https://nanograv-pint.readthedocs.io/en/latest/api/pint.models.parameter.html).
- Model functions, defined as methods in the component, including:
  - .setup(), for setting up the component (e.g., registering the derivatives).
  - .validate(), for checking if the parameters have the correct inputs.
  - Modeled quantity functions.
  - The derivatives of modeled quantities.
  - Other support functions.
Conventions
To make a component work as part of a timing model, it has to follow these rules to interface with the TimingModel class. Using the analogy of a circuit board, the TimingModel object is the motherboard, the Component objects are the electronic components (e.g., resistors and transistors), and the following rules are the pins of a component.
- Set the class attribute .register to True so that the component is in the search space of the model builder.
- Add the method that produces the final result to the designated list, so the TimingModel's collecting function (e.g., total delay or total phase) can collect the result. Here are the designated lists for the most common component types:
  - DelayComponent: .delay_funcs_component
  - PhaseComponent: .phase_funcs_component
  - NoiseComponent: .basis_funcs, .covariance_matrix_funcs, .scaled_toa_sigma_funcs, .scaled_dm_sigma_funcs, .dm_covariance_matrix_funcs_component
- Register the analytical derivative functions, if any, using .register_deriv_funcs(derivative function, parameter name).
- If you want to access an attribute in the parent TimingModel class or from other components, please use the ._parent attribute, which is a link to the TimingModel class and the other components.
[2]:
class PeriodSpindown(SpindownBase):
"""This is an example model component of pulsar spindown but parametrized as period."""
register = True # Flags for the model builder to find this component.
# define the init function.
# Most components do not have a parameter for input.
def __init__(self):
# Get the attributes initialized in the parent class
super().__init__()
# Add parameters using the add_param() method inherited from Component
# Add spin period as parameter
self.add_param(
p.floatParameter(
name="P0",
value=None,
units=u.s,
description="Spin period",
longdouble=True,
)
)
# Add the spin period derivative P1. Since it is not required, we set the
# default value to 0.0
self.add_param(
p.floatParameter(
name="P1",
value=0.0,
units=u.s / u.s,
description="Spin period derivative",
longdouble=True,
)
)
# Add reference epoch time.
self.add_param(
p.MJDParameter(
name="PEPOCH_P0",
description="Reference epoch for spin-down",
time_scale="tdb",
)
)
# Add spindown phase model function to phase functions
self.phase_funcs_component += [self.spindown_phase_period]
# Add the d_phase_d_delay derivative to the list
self.phase_derivs_wrt_delay += [self.d_spindown_phase_period_d_delay]
def setup(self):
"""Setup the model. Register the derivative functions"""
super().setup() # This will run the setup in the Component class.
# The following lines register the derivative functions with the timing model.
self.register_deriv_funcs(self.d_phase_d_P0, "P0")
self.register_deriv_funcs(self.d_phase_d_P1, "P1")
def validate(self):
"""Check the parameter value."""
super().validate() # This will run the .validate() in the component class
# Check required parameters, since P1 is not required, we are not checking it here
for param in ["P0"]:
if getattr(self, param) is None:
raise ValueError(f"Spindown period model needs {param}")
# One can always set up properties to update attributes automatically.
@property
def F0(self):
# We return F0 as a PINT parameter here since other parts of the PINT code expect
# F0 in the form of a PINT parameter.
return p.floatParameter(
name="F0",
value=1.0 / self.P0.quantity,
units="Hz",
description="Spin-frequency",
long_double=True,
)
# Define the derivatives. In PINT, the common naming format for derivatives is
# d_xxx_d_xxxx
@property
def d_F0_d_P0(self):
return -1.0 / self.P0.quantity**2
@property
def F1(self):
return p.floatParameter(
name="F1",
value=self.d_F0_d_P0 * self.P1.quantity,
units=u.Hz / u.s,
description="Spin down frequency",
long_double=True,
)
@property
def d_F1_d_P0(self):
return self.P1.quantity * 2.0 / self.P0.quantity**3
@property
def d_F1_d_P1(self):
return self.d_F0_d_P0
def get_dt(self, toas, delay):
"""dt from the toas to the reference time."""
# toas.table['tdbld'] stores the tdb time in longdouble.
return (toas.table["tdbld"] - self.PEPOCH_P0.value) * u.day - delay
# Defining the phase function, which is added to the self.phase_funcs_component
def spindown_phase_period(self, toas, delay):
"""Spindown phase using P0 and P1"""
dt = self.get_dt(toas, delay)
return self.F0.quantity * dt + 0.5 * self.F1.quantity * dt**2
def d_spindown_phase_period_d_delay(self, toas, delay):
"""This is part of the derivative chain for the parameters in the delay term."""
dt = self.get_dt(toas, delay)
return -(self.F0.quantity + dt * self.F1.quantity)
def d_phase_d_P0(self, toas, param, delay):
dt = self.get_dt(toas, delay)
return self.d_F0_d_P0 * dt + 0.5 * self.d_F1_d_P0 * dt**2
def d_phase_d_P1(self, toas, param, delay):
dt = self.get_dt(toas, delay)
return 0.5 * self.d_F1_d_P1 * dt**2
Apply the new component to the TimingModel
Let us use this new model component with our example pulsar "NGC6440E", whose par file normally has F0 and F1. Instead, we will use the model component above. The following .par file string is converted from NGC6440E.par, with P0 and P1 instead of F0 and F1.
[3]:
par_string = """
PSR 1748-2021E
RAJ 17:48:52.75 1 0.05
DECJ -20:21:29.0 1 0.4
P0 0.016264003404474613 1 0
P1 3.123955D-19 1 0
PEPOCH_P0 53750.000000
POSEPOCH 53750.000000
DM 223.9 1 0.3
SOLARN0 0.00
EPHEM DE421
UNITS TDB
TIMEEPH FB90
CORRECT_TROPOSPHERE N
PLANET_SHAPIRO N
DILATEFREQ N
TZRMJD 53801.38605120074849
TZRFRQ 1949.609
TZRSITE 1
"""
[4]:
import io
from pint.models import get_model
Load the timing model with new parameterization.
[5]:
model = get_model(
io.StringIO(par_string)
) # PINT can take a StringIO object as input for the par file
Check if the component is loaded into the timing model and make sure there is no built-in spindown model.
[6]:
print(model.components["PeriodSpindown"])
print(
"Is the built-in spin-down model in the timing model: ",
"Spindown" in model.components.keys(),
)
print("Is 'P0' in the timing model: ", "P0" in model.params)
print("Is 'P1' in the timing model: ", "P1" in model.params)
print("Is 'F0' in the timing model: ", "F0" in model.params)
print("Is 'F1' in the timing model: ", "F1" in model.params)
PeriodSpindown(
floatParameter( P0 0.016264003404474613 (s) +/- 0.0 s frozen=False),
floatParameter( P1 3.123955e-19 () +/- 0.0 frozen=False),
MJDParameter( PEPOCH_P0 53750.0000000000000000 (d) frozen=True))
Is the built-in spin-down model in the timing model: False
Is 'P0' in the timing model: True
Is 'P1' in the timing model: True
Is 'F0' in the timing model: False
Is 'F1' in the timing model: False
Load TOAs and prepare for fitting
[7]:
from pint.fitter import WLSFitter
from pint.toa import get_TOAs
[8]:
toas = get_TOAs(pint.config.examplefile("NGC6440E.tim"), ephem="DE421")
f = WLSFitter(toas, model)
Plot the residuals
[9]:
import matplotlib.pyplot as plt
Plot the prefit residuals.
[10]:
plt.errorbar(
toas.get_mjds().value,
f.resids_init.time_resids.to_value(u.us),
yerr=toas.get_errors().to_value(u.us),
fmt=".",
)
plt.title(f"{model.PSR.value} Pre-Fit Timing Residuals")
plt.xlabel("MJD")
plt.ylabel("Residual (us)")
plt.grid()

Fit the TOAs using P0
and P1
[11]:
f.fit_toas()
[11]:
59.574712398814320655
Plot the post-fit residuals
[12]:
plt.errorbar(
toas.get_mjds().value,
f.resids.time_resids.to_value(u.us),
yerr=toas.get_errors().to_value(u.us),
fmt=".",
)
plt.title(f"{model.PSR.value} Pre-Fit Timing Residuals")
plt.xlabel("MJD")
plt.ylabel("Residual (us)")
plt.grid()

Print out the summary
[13]:
f.print_summary()
Fitted model using weighted_least_square method with 5 free parameters to 62 TOAs
Prefit residuals Wrms = 1090.5803879598564 us, Postfit residuals Wrms = 21.18210855076685 us
Chisq = 59.575 for 56 d.o.f. for reduced Chisq of 1.064
PAR Prefit Postfit Units
=================== ==================== ============================ =====
PSR 1748-2021E 1748-2021E None
EPHEM DE421 DE421 None
CLOCK TT(BIPM2021) None
UNITS TDB TDB None
START 53478.3 d
FINISH 54187.6 d
TIMEEPH FB90 FB90 None
DILATEFREQ N None
DMDATA N None
NTOA 0 None
CHI2 59.5747
CHI2R 1.06383
TRES 21.1821 us
POSEPOCH 53750 d
PX 0 mas
RAJ 17h48m52.75s 17h48m52.80034691s +/- 0.00014 hourangle_second
DECJ -20d21m29s -20d21m29.38331083s +/- 0.033 arcsec
PMRA 0 mas / yr
PMDEC 0 mas / yr
TZRMJD 53801.4 d
TZRSITE 1 1 None
TZRFRQ 1949.61 MHz
P0 0.016264 0.016264003404376(5) s
P1 3.12396e-19 3.125(4)×10⁻¹⁹
PEPOCH_P0 53750 d
CORRECT_TROPOSPHERE N None
PLANET_SHAPIRO N None
NE_SW 0 1 / cm3
SWP 2
SWM 0
DM 223.9 224.114(35) pc / cm3
Derived Parameters:
Period = 0.01626400340437608±0 s
Pdot = (3.1248326935082824±0)×10⁻¹⁹
Characteristic age = 8.246e+08 yr (braking index = 3)
Surface magnetic field = 2.28e+09 G
Magnetic field at light cylinder = 4806 G
Spindown Edot = 2.868e+33 erg / s (I=1e+45 cm2 g)
Write out a par file for the result
[14]:
f.model.write_parfile("/tmp/output.par")
print(f.model.as_parfile())
# Created: 2024-04-26T18:22:30.870959
# PINT_version: 1.0
# User: docs
# Host: build-24199868-project-85767-nanograv-pint
# OS: Linux-5.19.0-1028-aws-x86_64-with-glibc2.35
# Python: 3.11.6 (main, Feb 1 2024, 16:47:41) [GCC 11.4.0]
# Format: pint
PSR 1748-2021E
EPHEM DE421
CLOCK TT(BIPM2021)
UNITS TDB
START 53478.2858714195382639
FINISH 54187.5873241702319097
TIMEEPH FB90
DILATEFREQ N
DMDATA N
NTOA 62
CHI2 59.57471239881432
CHI2R 1.063834149978827
TRES 21.182108550766848844
RAJ 17:48:52.80034691 1 0.00013524660915123997
DECJ -20:21:29.38331083 1 0.03285153312807001513
PMRA 0.0
PMDEC 0.0
PX 0.0
POSEPOCH 53750.0000000000000000
TZRMJD 53801.3860512007484954
TZRSITE 1
TZRFRQ 1949.609
P0 0.01626400340437608 1 4.784091376106006e-15
P1 3.1248326935082817e-19 1 3.8139606778798536e-22
PEPOCH_P0 53750.0000000000000000
CORRECT_TROPOSPHERE N
PLANET_SHAPIRO N
SOLARN0 0.0
SWM 0.0
DM 224.11379619426720944 1 0.034938980494125096432
This Jupyter notebook can be downloaded from understanding_fitters.ipynb, or viewed as a python script at understanding_fitters.py.
Understanding Fitters
[1]:
from IPython.display import display_markdown
import pint.toa
import pint.models
import pint.fitter
import pint.config
import pint.logging
pint.logging.setup(level="INFO")
[1]:
1
[2]:
%matplotlib inline
import matplotlib.pyplot as plt
# Turn on quantity support for plotting. This is very helpful!
from astropy.visualization import quantity_support
quantity_support()
[2]:
<astropy.visualization.units.quantity_support.<locals>.MplQuantityConverter at 0x7fbd297ff8d0>
[3]:
# Load some TOAs and a model to fit
m, t = pint.models.get_model_and_toas(
pint.config.examplefile("NGC6440E.par"), pint.config.examplefile("NGC6440E.tim")
)
[4]:
# You can check if a model includes a noise model with correlated errors (e.g. ECORR or TNRED) by checking the has_correlated_errors property
m.has_correlated_errors
[4]:
False
There are several fitters in PINT, each of which is a subclass of Fitter:
- DownhillWLSFitter - PINT's workhorse fitter, which does a basic weighted least-squares minimization of the residuals.
- DownhillGLSFitter - A generalized least squares fitter, like "tempo -G", that can handle noise processes like ECORR and red noise that are specified by their correlation function properties.
- WidebandDownhillFitter - A fitter that uses DM estimates associated with each TOA. Also supports generalized least squares.
- PowellFitter - A very simple example fitter that uses the Powell method implemented in scipy. One notable feature is that it does not require evaluating derivatives w.r.t. the model parameters.
- MCMCFitter - A fitter that does an MCMC fit using the emcee package. This can be very slow, but it accommodates priors on the parameter values and can produce corner plots and other analyses of the posterior distributions of the parameters.
- WLSFitter, GLSFitter, WidebandFitter - Simpler fitters that make no attempt to ensure convergence.
You can normally use the function pint.fitter.Fitter.auto(toas, model)
to construct an appropriate fitter for your model and data.
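For example, with the model and TOAs loaded above, a minimal sketch (the exact fitter class chosen depends on the model and data, so treat the choice as automatic):
auto_fitter = pint.fitter.Fitter.auto(t, m)  # no correlated noise here, so a WLS-style fitter is expected
auto_fitter.fit_toas()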
Weighted Least Squares Fitter
[5]:
# Instantiate a fitter
wlsfit = pint.fitter.DownhillWLSFitter(toas=t, model=m)
A fit is performed by calling fit_toas().
For most fitters, the number of iterations can be limited by setting the maxiter keyword argument.
Downhill fitters will raise the pint.fitter.MaxiterReached
exception if they stop before detecting convergence; you can capture this exception and continue if you don’t mind not having the best-fit answer.
[6]:
try:
wlsfit.fit_toas(maxiter=1)
except pint.fitter.MaxiterReached:
print("Fitter has not fully converged.")
Fitter has not fully converged.
[7]:
# A summary of the fit and resulting model parameters can easily be printed
# Only free parameters will have values and uncertainties in the Postfit column
wlsfit.print_summary()
Fitted model using downhill_wls method with 5 free parameters to 62 TOAs
Prefit residuals Wrms = 1090.5801805746107 us, Postfit residuals Wrms = 21.182108789370787 us
Chisq = 59.575 for 56 d.o.f. for reduced Chisq of 1.064
PAR Prefit Postfit Units
=================== ==================== ============================ =====
PSR 1748-2021E 1748-2021E None
EPHEM DE421 DE421 None
CLOCK TT(BIPM2019) TT(BIPM2019) None
UNITS TDB TDB None
START 53478.3 d
FINISH 54187.6 d
TIMEEPH FB90 FB90 None
T2CMETHOD IAU2000B IAU2000B None
DILATEFREQ N None
DMDATA N None
NTOA 0 None
CHI2 59.5747
CHI2R 1.06383
TRES 21.1821 us
POSEPOCH 53750 d
PX 0 mas
RAJ 17h48m52.75s 17h48m52.80034691s +/- 0.00014 hourangle_second
DECJ -20d21m29s -20d21m29.38331083s +/- 0.033 arcsec
PMRA 0 mas / yr
PMDEC 0 mas / yr
F0 61.4855 61.485476554372(18) Hz
PEPOCH 53750 d
F1 -1.181e-15 -1.1813(14)×10⁻¹⁵ Hz / s
CORRECT_TROPOSPHERE N None
PLANET_SHAPIRO N None
NE_SW 0 1 / cm3
SWP 2
SWM 0
DM 223.9 224.114(35) pc / cm3
TZRMJD 53801.4 d
TZRSITE 1 1 None
TZRFRQ 1949.61 MHz
Derived Parameters:
Period = 0.016264003404376±0.000000000000005 s
Pdot = (3.125±0.004)×10⁻¹⁹
Characteristic age = 8.246e+08 yr (braking index = 3)
Surface magnetic field = 2.28e+09 G
Magnetic field at light cylinder = 4806 G
Spindown Edot = 2.868e+33 erg / s (I=1e+45 cm2 g)
[8]:
# The WLS fitter doesn't handle correlated errors
wlsfit.resids.model.has_correlated_errors
[8]:
False
[9]:
# You can request a pretty-printed covariance matrix
cov = wlsfit.get_parameter_covariance_matrix(pretty_print=True)
Parameter covariance matrix:
RAJ DECJ F0 F1 DM
RAJ 1.411e-15
DECJ -2.477e-14 8.328e-11
F0 -5.932e-20 4.079e-17 3.271e-22
F1 1.594e-26 -4.525e-24 -2.080e-29 2.079e-36
DM -6.523e-12 2.067e-08 4.261e-15 2.900e-21 1.221e-03
[10]:
# plot() will make a plot of the post-fit residuals
wlsfit.plot()

Comparing models
There is also a convenience function for pretty-printing a comparison of two models, with the differences measured in sigma.
[11]:
display_markdown(wlsfit.model.compare(wlsfit.model_init, format="markdown"), raw=True)
PARAMETER | NGC6440E.par | NGC6440E.par | Diff_Sigma1 | Diff_Sigma2
---|---|---|---|---
PSR | 1748-2021E | 1748-2021E | |
EPHEM | DE421 | DE421 | |
CLOCK | TT(BIPM2019) | TT(BIPM2019) | |
UNITS | TDB | TDB | |
START | 53478.285871419538264 | Missing | |
FINISH | 54187.58732417023191 | Missing | |
TIMEEPH | FB90 | FB90 | |
T2CMETHOD | IAU2000B | IAU2000B | |
DILATEFREQ | False | False | |
DMDATA | False | False | |
NTOA | 62 | 0 | |
CHI2 | 59.574713740962196 | Missing | |
CHI2R | 1.0638341739457535 | Missing | |
TRES | 21.18210878937078595 | Missing | |
POSEPOCH | 53750.0 | 53750.0 | |
PX | 0.0 | 0.0 | |
RAJ | 17h48m52.80034691s +/- 0.00014 | 17h48m52.75s +/- 0.05 | -372.26 | -1.01
DECJ | -20d21m29.38331083s +/- 0.033 | -20d21m29s +/- 0.4 | 11.67 | 0.96
PMRA | 0.0 | 0.0 | |
PMDEC | 0.0 | 0.0 | |
F0 | 61.485476554372(18) | 61.4854765540(5) | -20.60 | -0.74
PEPOCH | 53750.0 | 53750.0 | |
F1 | -1.1813(14)×10⁻¹⁵ | -1.1810(10)×10⁻¹⁵ | 0.23 | 0.33
CORRECT_TROPOSPHERE | False | False | |
PLANET_SHAPIRO | False | False | |
NE_SW | 0.0 | 0.0 | |
SWP | 2.0 | 2.0 | |
SWM | 0.0 | 0.0 | |
DM | 224.114(35) | 223.90(30) | -6.12 | -0.71
TZRMJD | 53801.386051200748497 | 53801.386051200748497 | |
TZRSITE | 1 | 1 | |
TZRFRQ | 1949.609 | 1949.609 | |
SEPARATION | 0.805131 arcsec | | |
You can see just how much F1 (and the other fitted parameters) changed. Let's compare the chi-squared values:
[12]:
print(f"Pre-fit chi-squared value: {wlsfit.resids_init.chi2}")
print(f"Post-fit chi-squared value: {wlsfit.resids.chi2}")
Pre-fit chi-squared value: 157920.59715077005
Post-fit chi-squared value: 59.574713740962196
Generalized Least Squares fitter
The GLS fitter is capable of handling correlated noise models.
It has some more complex options via the maxiter, threshold, and full_cov keyword arguments to fit_toas().
If maxiter is less than one, no fitting is done, just the chi-squared computation. In this case, you must provide the residuals argument.
If maxiter is one or more, so that fitting is actually done, the chi-squared value returned is only approximately the chi-squared of the improved model. In fact it is the chi-squared of the solution to the linearized fitting problem, and the full non-linear model should be evaluated and new residuals produced if an accurate chi-squared is desired.
A first attempt is made to solve the fitting problem by Cholesky decomposition, but if this fails, singular value decomposition is used instead. In that case singular values below threshold are removed.
full_cov determines which calculation is used. If True, the full covariance matrix is constructed and the calculation is relatively straightforward, but the full covariance matrix may be enormous. If False, an algorithm is used that takes advantage of the structure of the covariance matrix, based on information provided by the noise model. The two algorithms should give the same result up to numerical accuracy where they both can be applied.
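A hedged sketch of how these keyword arguments might be passed, written as comments because the GLS fitter object (glsfit) is only constructed below:
# chi2 = glsfit.fit_toas(maxiter=1, full_cov=False)  # use the structured-covariance algorithm
# chi2 = glsfit.fit_toas(maxiter=1, full_cov=True)   # build the full (possibly enormous) covariance matrix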
To test this fitter properly, we need a model that includes correlated noise components, so we will load one from the NANOGrav 9-year data release.
[13]:
m1855 = pint.models.get_model(pint.config.examplefile("B1855+09_NANOGrav_9yv1.gls.par"))
[14]:
# You can check if a model includes a noise model with correlated errors (e.g. ECORR or TNRED) by checking the has_correlated_errors property
m1855.has_correlated_errors
[14]:
True
[15]:
print(m1855)
# Created: 2024-04-26T18:30:13.960087
# PINT_version: 1.0
# User: docs
# Host: build-24199868-project-85767-nanograv-pint
# OS: Linux-5.19.0-1028-aws-x86_64-with-glibc2.35
# Python: 3.11.6 (main, Feb 1 2024, 16:47:41) [GCC 11.4.0]
# Format: pint
PSR B1855+09
EPHEM DE421
CLK TT(BIPM2019)
UNITS TDB
START 53358.7260000000000000
FINISH 56598.8730000000000000
INFO -f
TIMEEPH FB90
DILATEFREQ N
DMDATA N
NTOA 4005
TRES 5.52
LAMBDA 286.863489330115613 1 0.00000001658590000000
BETA 32.321487755503703 1 0.00000002735260000000
PMLAMBDA -3.2701 1 0.0141
PMBETA -5.0982 1 0.0291
PX 0.2929 1 0.2186
ECL IERS2003
POSEPOCH 54978.0000000000000000
F0 186.4940812707752116 1 3.28468e-11
F1 -6.205147513395e-16 1 1.379566413719e-19
PEPOCH 54978.0000000000000000
CORRECT_TROPOSPHERE N
PLANET_SHAPIRO N
SOLARN0 0.0
SWM 0.0
DM 13.299393
DMX 14.0
DMX_0001 0.015161863 1 0.00351684846
DMXR1_0001 53358.7274600000000001
DMXR2_0001 53358.7784100000000000
DMX_0002 0.0152370685 1 0.00351683449
DMXR1_0002 53420.5489300000000000
DMXR2_0002 53420.5862000000000001
DMX_0003 0.0151895956 1 0.00351649738
DMXR1_0003 53448.4736900000000000
DMXR2_0003 53457.4929400000000000
DMX_0004 0.0151322502 1 0.00351653508
DMXR1_0004 53477.3962800000000000
DMXR2_0004 53477.4345300000000000
DMX_0005 0.0151076504 1 0.00351662711
DMXR1_0005 53532.2328100000000000
DMXR2_0005 53532.2765600000000000
DMX_0006 0.015263814 1 0.00351647013
DMXR1_0006 53603.0363200000000000
DMXR2_0006 53603.0897100000000000
DMX_0007 0.0151897641 1 0.003516599
DMXR1_0007 53628.9652500000000001
DMXR2_0007 53628.9833500000000001
DMX_0008 0.0152890326 1 0.00351651389
DMXR1_0008 53686.7976000000000000
DMXR2_0008 53686.8420300000000001
DMX_0009 0.0152484643 1 0.0035165631
DMXR1_0009 53715.7306400000000001
DMXR2_0009 53715.7667800000000000
DMX_0010 0.0153422398 1 0.00351652136
DMXR1_0010 53750.6227300000000000
DMXR2_0010 53750.6672300000000000
DMX_0011 0.015354092 1 0.00351660466
DMXR1_0011 53798.5010800000000000
DMXR2_0011 53798.5391500000000000
DMX_0012 0.0154295455 1 0.0035165139
DMXR1_0012 53851.3718100000000000
DMXR2_0012 53851.4042800000000000
DMX_0013 0.0154693407 1 0.00351653286
DMXR1_0013 53891.2472700000000000
DMXR2_0013 53891.2851400000000000
DMX_0014 0.0156001615 1 0.00351689266
DMXR1_0014 53926.1632600000000000
DMXR2_0014 53926.1922800000000000
DMX_0015 0.0157477908 1 0.00351642748
DMXR1_0015 53968.0572500000000000
DMXR2_0015 53968.0951700000000000
DMX_0016 0.0159397058 1 0.0035163291
DMXR1_0016 54008.9537200000000000
DMXR2_0016 54008.9864400000000001
DMX_0017 0.0159157339 1 0.00351644531
DMXR1_0017 54043.8368200000000000
DMXR2_0017 54043.8724500000000001
DMX_0018 0.016023921 1 0.00351654202
DMXR1_0018 54092.7126500000000000
DMXR2_0018 54092.7424700000000001
DMX_0019 0.0161119039 1 0.00351660022
DMXR1_0019 54135.5828200000000000
DMXR2_0019 54135.6117100000000000
DMX_0020 0.0163381983 1 0.00351653637
DMXR1_0020 54177.4729700000000000
DMXR2_0020 54177.5136300000000000
DMX_0021 0.0162696012 1 0.00351647068
DMXR1_0021 54472.6627600000000000
DMXR2_0021 54472.6939300000000000
DMX_0022 0.0161627196 1 0.00351654829
DMXR1_0022 54519.5241900000000000
DMXR2_0022 54519.5612300000000000
DMX_0023 0.0162621197 1 0.00351655413
DMXR1_0023 54569.4095400000000000
DMXR2_0023 54569.4667500000000000
DMX_0024 0.0164484026 1 0.00351660433
DMXR1_0024 54819.7148100000000001
DMXR2_0024 54819.7512900000000000
DMX_0025 0.0164104711 1 0.00351663094
DMXR1_0025 54862.5833800000000000
DMXR2_0025 54862.6195900000000001
DMX_0026 0.016569036 1 0.00351659233
DMXR1_0026 54925.4273400000000001
DMXR2_0026 54925.4664099999999999
DMX_0027 0.0167679894 1 0.00351680449
DMXR1_0027 54981.2808400000000000
DMXR2_0027 54981.3303200000000001
DMX_0028 0.0167977392 1 0.00351700298
DMXR1_0028 54998.2094000000000000
DMXR2_0028 54998.2266600000000000
DMX_0029 0.0169021454 1 0.00351669195
DMXR1_0029 55108.9040200000000000
DMXR2_0029 55108.9219600000000000
DMX_0030 0.0170268231 1 0.00351673492
DMXR1_0030 55135.8396000000000000
DMXR2_0030 55135.8607900000000001
DMX_0031 0.0170193727 1 0.0035168257
DMXR1_0031 55170.7389400000000000
DMXR2_0031 55170.7489500000000000
DMX_0032 0.017073879 1 0.0035166332
DMXR1_0032 55205.6495700000000000
DMXR2_0032 55205.6660600000000000
DMX_0033 0.0171315137 1 0.00351779101
DMXR1_0033 55298.4053900000000000
DMXR2_0033 55298.4194000000000001
DMX_0034 0.0172665141 1 0.0035171169
DMXR1_0034 55358.2410600000000000
DMXR2_0034 55358.2606200000000000
DMX_0035 0.0173188687 1 0.00351729708
DMXR1_0035 55391.1726800000000000
DMXR2_0035 55391.1865400000000000
DMX_0036 0.0172177568 1 0.00351711273
DMXR1_0036 55424.0694700000000000
DMXR2_0036 55424.0844000000000000
DMX_0037 0.0172926679 1 0.00351670777
DMXR1_0037 55500.8336900000000000
DMXR2_0037 55500.8483800000000000
DMX_0038 0.0172819014 1 0.00351683726
DMXR1_0038 55528.7626100000000000
DMXR2_0038 55528.7760200000000000
DMX_0039 0.0172309563 1 0.00351654536
DMXR1_0039 55638.4592000000000000
DMXR2_0039 55638.4747300000000000
DMX_0040 0.0172343241 1 0.00351659552
DMXR1_0040 55677.3593500000000000
DMXR2_0040 55677.3764100000000000
DMX_0041 0.0172846259 1 0.00351685651
DMXR1_0041 55701.2862700000000000
DMXR2_0041 55701.3045500000000000
DMX_0042 0.0174295333 1 0.00351754173
DMXR1_0042 55731.2369800000000000
DMXR2_0042 55731.2557500000000000
DMX_0043 0.0174135398 1 0.00351701302
DMXR1_0043 55758.1616300000000000
DMXR2_0043 55758.1828600000000000
DMX_0044 0.0174411084 1 0.00351698583
DMXR1_0044 55789.0676000000000000
DMXR2_0044 55789.0858500000000000
DMX_0045 0.0176716738 1 0.00351656249
DMXR1_0045 55843.9164899999999999
DMXR2_0045 55843.9356300000000001
DMX_0046 0.0178457703 1 0.0035165082
DMXR1_0046 55912.7231500000000000
DMXR2_0046 55912.7409100000000001
DMX_0047 0.0179235129 1 0.00351641019
DMXR1_0047 55989.5168300000000000
DMXR2_0047 55989.5376200000000000
DMX_0048 0.0179551569 1 0.00351643393
DMXR1_0048 56023.4263700000000000
DMXR2_0048 56023.4461900000000000
DMX_0049 0.0180214558 1 0.00351830253
DMXR1_0049 56057.3536400000000000
DMXR2_0049 56057.3537799999999999
DMX_0050 0.0182147041 1 0.00352479233
DMXR1_0050 56106.2205600000000000
DMXR2_0050 56106.2206100000000000
DMX_0051 0.0181602407 1 0.00352162978
DMXR1_0051 56122.1298500000000000
DMXR2_0051 56122.1771700000000000
DMX_0052 0.0180430586 1 0.0035266043
DMXR1_0052 56140.1376700000000000
DMXR2_0052 56140.1377500000000000
DMX_0053 0.0181527986 1 0.00352069768
DMXR1_0053 56166.0571200000000000
DMXR2_0053 56166.0573300000000000
DMX_0054 0.0184943818 1 0.00351633835
DMXR1_0054 56201.9569899999999999
DMXR2_0054 56212.9240400000000000
DMX_0055 0.0182191151 1 0.00351594287
DMXR1_0055 56235.8428200000000000
DMXR2_0055 56235.8681000000000000
DMX_0056 0.0182791281 1 0.00351620206
DMXR1_0056 56254.8200499999999999
DMXR2_0056 56254.8445300000000000
DMX_0057 0.0183448743 1 0.00351757322
DMXR1_0057 56277.7525799999999999
DMXR2_0057 56277.7526300000000000
DMX_0058 0.0182456031 1 0.00351582953
DMXR1_0058 56294.6690800000000000
DMXR2_0058 56306.6485600000000000
DMX_0059 0.0180520318 1 0.00351721613
DMXR1_0059 56319.6395299999999999
DMXR2_0059 56319.6395800000000000
DMX_0060 0.0181223112 1 0.00351582826
DMXR1_0060 56341.5587100000000000
DMXR2_0060 56341.5768099999999999
DMX_0061 0.018162022 1 0.00351580352
DMXR1_0061 56360.5302200000000000
DMXR2_0061 56374.5115599999999999
DMX_0062 0.0181719503 1 0.00351571012
DMXR1_0062 56380.4505300000000000
DMXR2_0062 56380.4684000000000000
DMX_0063 0.0181438391 1 0.00351612171
DMXR1_0063 56411.3969800000000000
DMXR2_0063 56417.3732900000000000
DMX_0064 0.0181367452 1 0.00351601523
DMXR1_0064 56432.3219500000000000
DMXR2_0064 56438.3251700000000000
DMX_0065 0.0181205204 1 0.00351651911
DMXR1_0065 56458.2358500000000000
DMXR2_0065 56458.2537600000000000
DMX_0066 0.0181608231 1 0.00351659645
DMXR1_0066 56479.1780800000000000
DMXR2_0066 56479.1960700000000000
DMX_0067 0.0181467585 1 0.00351643552
DMXR1_0067 56498.1243500000000000
DMXR2_0067 56498.1420100000000000
DMX_0068 0.0182940538 1 0.00351645118
DMXR1_0068 56519.0817400000000000
DMXR2_0068 56519.0981400000000000
DMX_0069 0.0182906491 1 0.00351609445
DMXR1_0069 56538.0092400000000000
DMXR2_0069 56538.0256100000000000
DMX_0070 0.0179611744 1 0.00352468823
DMXR1_0070 56557.9709500000000000
DMXR2_0070 56557.9710100000000000
DMX_0071 0.0183625085 1 0.00351656154
DMXR1_0071 56577.9065599999999999
DMXR2_0071 56577.9228499999999999
DMX_0072 0.0183864669 1 0.003515863
DMXR1_0072 56598.8557200000000000
DMXR2_0072 56598.8720400000000000
BINARY DD
PB 12.32717119132762 1 1.9722e-10
PBDOT 0.0
A1 9.23078048 1 2.03e-07
A1DOT 0.0
E 2.1634e-05 1 2.36e-08
EDOT 0.0
T0 54975.5128660817000000 1 0.0019286695
OM 276.536118059963 1 0.056323656112
OMDOT 0.0
M2 0.233837 1 0.011278
SINI 0.999461 1 0.000178
A0 0.0
B0 0.0
GAMMA 0.0
DR 0.0
DTH 0.0
FD1 0.000161666384 1 3.38650356e-05
FD2 -0.00018821003 1 4.13173074e-05
FD3 0.000107526915 1 2.50177766e-05
RNAMP 0.017173
RNIDX -4.91353
TNREDAMP -14.227505410948254
TNREDGAM 4.91353
TNREDC 45.0
T2EFAC -f L-wide_PUPPI 1.507
T2EQUAD -f L-wide_PUPPI 0.25518
T2EFAC -f 430_ASP 1.147
T2EFAC -f L-wide_ASP 1.15
T2EFAC -f 430_PUPPI 1.117
T2EQUAD -f 430_ASP 0.0141
T2EQUAD -f L-wide_ASP 0.42504
T2EQUAD -f 430_PUPPI 0.0264
ECORR -f 430_PUPPI 0.00601
ECORR -f L-wide_PUPPI 0.31843
ECORR -f L-wide_ASP 0.79618
ECORR -f 430_ASP 0.01117
TZRMJD 54981.2808461648844676
TZRSITE 3
TZRFRQ 424.0
JUMP -fe L-wide -9.449e-06 1 9.439e-06
[16]:
ts1855 = pint.toa.get_TOAs(
pint.config.examplefile("B1855+09_NANOGrav_9yv1.tim"), model=m1855
)
ts1855.print_summary()
Number of TOAs: 4005
Number of commands: 1
Number of observatories: 1 ['arecibo']
MJD span: 53358.727 to 56598.872
Date span: 2004-12-19 17:27:32.961266179 to 2013-11-02 20:55:40.399171358
arecibo TOAs (4005):
Min freq: 422.187 MHz
Max freq: 1760.728 MHz
Min error: 0.05 us
Max error: 17.8 us
Median error: 1.19 us
There is currently a problem with DownhillGLSFitter: it doesn’t record appropriate noise parameters.
[17]:
glsfit = pint.fitter.GLSFitter(toas=ts1855, model=m1855)
[18]:
m1855.DMX_0001.prefix
[18]:
'DMX_'
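Prefix parameters such as DMX_0001 all share the prefix shown above. If you want to enumerate them programmatically, a minimal sketch using plain string matching on the model's parameter names (a hypothetical extra cell, not part of the original notebook):
dmx_names = [p for p in m1855.params if p.startswith("DMX_")]
print(len(dmx_names), dmx_names[:3])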
[19]:
glsfit.fit_toas(maxiter=1)
[19]:
3900.0981532531170273
[20]:
glsfit.print_summary()
Fitted model using generalized_least_square method with 90 free parameters to 4005 TOAs
Prefit residuals Wrms = 5.5386698946489625 us, Postfit residuals Wrms = 1.346783684962562 us
Chisq = 3900.098 for 3914 d.o.f. for reduced Chisq of 0.996
PAR Prefit Postfit Units
=================== ==================== ============================ =====
PSR B1855+09 B1855+09 None
EPHEM DE421 DE421 None
CLOCK TT(BIPM2019) TT(BIPM2019) None
UNITS TDB TDB None
START 53358.7 53358.7 d
FINISH 56598.9 56598.9 d
INFO -f -f None
TIMEEPH FB90 FB90 None
BINARY DD DD None
DILATEFREQ N None
DMDATA N None
NTOA 4005 None
CHI2 3900.1
CHI2R 0.996448
TRES 5.52 1.34678 us
POSEPOCH 54978 d
PX 0.2929 0.30(22) mas
ELONG 286d51m48.56158842s 286d51m48.56156705s +/- 5.9e-05 arcsec
ELAT 32d19m17.35591981s 32d19m17.35582847s +/- 9.8e-05 arcsec
PMELONG -3.2701 -3.270(14) mas / yr
PMELAT -5.0982 -5.099(29) mas / yr
ECL IERS2003 IERS2003 None
F0 186.494 186.49408127078522(27) Hz
PEPOCH 54978 d
F1 -6.20515e-16 -6.20497(25)×10⁻¹⁶ Hz / s
CORRECT_TROPOSPHERE N None
PLANET_SHAPIRO N None
NE_SW 0 1 / cm3
SWP 2
SWM 0
DM 13.2994 pc / cm3
DMX 14 pc / cm3
DMX_0001 0.0151619 0.0152(35) pc / cm3
DMXR1_0001 53358.7 d
DMXR2_0001 53358.8 d
DMX_0002 0.0152371 0.0152(35) pc / cm3
DMXR1_0002 53420.5 d
DMXR2_0002 53420.6 d
DMX_0003 0.0151896 0.0152(35) pc / cm3
DMXR1_0003 53448.5 d
DMXR2_0003 53457.5 d
DMX_0004 0.0151323 0.0151(35) pc / cm3
DMXR1_0004 53477.4 d
DMXR2_0004 53477.4 d
DMX_0005 0.0151077 0.0151(35) pc / cm3
DMXR1_0005 53532.2 d
DMXR2_0005 53532.3 d
DMX_0006 0.0152638 0.0153(35) pc / cm3
DMXR1_0006 53603 d
DMXR2_0006 53603.1 d
DMX_0007 0.0151898 0.0152(35) pc / cm3
DMXR1_0007 53629 d
DMXR2_0007 53629 d
DMX_0008 0.015289 0.0153(35) pc / cm3
DMXR1_0008 53686.8 d
DMXR2_0008 53686.8 d
DMX_0009 0.0152485 0.0153(35) pc / cm3
DMXR1_0009 53715.7 d
DMXR2_0009 53715.8 d
DMX_0010 0.0153422 0.0153(35) pc / cm3
DMXR1_0010 53750.6 d
DMXR2_0010 53750.7 d
DMX_0011 0.0153541 0.0154(35) pc / cm3
DMXR1_0011 53798.5 d
DMXR2_0011 53798.5 d
DMX_0012 0.0154295 0.0154(35) pc / cm3
DMXR1_0012 53851.4 d
DMXR2_0012 53851.4 d
DMX_0013 0.0154693 0.0155(35) pc / cm3
DMXR1_0013 53891.2 d
DMXR2_0013 53891.3 d
DMX_0014 0.0156002 0.0156(35) pc / cm3
DMXR1_0014 53926.2 d
DMXR2_0014 53926.2 d
DMX_0015 0.0157478 0.0158(35) pc / cm3
DMXR1_0015 53968.1 d
DMXR2_0015 53968.1 d
DMX_0016 0.0159397 0.0159(35) pc / cm3
DMXR1_0016 54009 d
DMXR2_0016 54009 d
DMX_0017 0.0159157 0.0159(35) pc / cm3
DMXR1_0017 54043.8 d
DMXR2_0017 54043.9 d
DMX_0018 0.0160239 0.0160(35) pc / cm3
DMXR1_0018 54092.7 d
DMXR2_0018 54092.7 d
DMX_0019 0.0161119 0.0161(35) pc / cm3
DMXR1_0019 54135.6 d
DMXR2_0019 54135.6 d
DMX_0020 0.0163382 0.0163(35) pc / cm3
DMXR1_0020 54177.5 d
DMXR2_0020 54177.5 d
DMX_0021 0.0162696 0.0163(35) pc / cm3
DMXR1_0021 54472.7 d
DMXR2_0021 54472.7 d
DMX_0022 0.0161627 0.0162(35) pc / cm3
DMXR1_0022 54519.5 d
DMXR2_0022 54519.6 d
DMX_0023 0.0162621 0.0163(35) pc / cm3
DMXR1_0023 54569.4 d
DMXR2_0023 54569.5 d
DMX_0024 0.0164484 0.0165(35) pc / cm3
DMXR1_0024 54819.7 d
DMXR2_0024 54819.8 d
DMX_0025 0.0164105 0.0164(35) pc / cm3
DMXR1_0025 54862.6 d
DMXR2_0025 54862.6 d
DMX_0026 0.016569 0.0166(35) pc / cm3
DMXR1_0026 54925.4 d
DMXR2_0026 54925.5 d
DMX_0027 0.016768 0.0168(35) pc / cm3
DMXR1_0027 54981.3 d
DMXR2_0027 54981.3 d
DMX_0028 0.0167977 0.0168(35) pc / cm3
DMXR1_0028 54998.2 d
DMXR2_0028 54998.2 d
DMX_0029 0.0169021 0.0169(35) pc / cm3
DMXR1_0029 55108.9 d
DMXR2_0029 55108.9 d
DMX_0030 0.0170268 0.0170(35) pc / cm3
DMXR1_0030 55135.8 d
DMXR2_0030 55135.9 d
DMX_0031 0.0170194 0.0170(35) pc / cm3
DMXR1_0031 55170.7 d
DMXR2_0031 55170.7 d
DMX_0032 0.0170739 0.0171(35) pc / cm3
DMXR1_0032 55205.6 d
DMXR2_0032 55205.7 d
DMX_0033 0.0171315 0.0171(35) pc / cm3
DMXR1_0033 55298.4 d
DMXR2_0033 55298.4 d
DMX_0034 0.0172665 0.0173(35) pc / cm3
DMXR1_0034 55358.2 d
DMXR2_0034 55358.3 d
DMX_0035 0.0173189 0.0173(35) pc / cm3
DMXR1_0035 55391.2 d
DMXR2_0035 55391.2 d
DMX_0036 0.0172178 0.0172(35) pc / cm3
DMXR1_0036 55424.1 d
DMXR2_0036 55424.1 d
DMX_0037 0.0172927 0.0173(35) pc / cm3
DMXR1_0037 55500.8 d
DMXR2_0037 55500.8 d
DMX_0038 0.0172819 0.0173(35) pc / cm3
DMXR1_0038 55528.8 d
DMXR2_0038 55528.8 d
DMX_0039 0.017231 0.0172(35) pc / cm3
DMXR1_0039 55638.5 d
DMXR2_0039 55638.5 d
DMX_0040 0.0172343 0.0172(35) pc / cm3
DMXR1_0040 55677.4 d
DMXR2_0040 55677.4 d
DMX_0041 0.0172846 0.0173(35) pc / cm3
DMXR1_0041 55701.3 d
DMXR2_0041 55701.3 d
DMX_0042 0.0174295 0.0174(35) pc / cm3
DMXR1_0042 55731.2 d
DMXR2_0042 55731.3 d
DMX_0043 0.0174135 0.0174(35) pc / cm3
DMXR1_0043 55758.2 d
DMXR2_0043 55758.2 d
DMX_0044 0.0174411 0.0174(35) pc / cm3
DMXR1_0044 55789.1 d
DMXR2_0044 55789.1 d
DMX_0045 0.0176717 0.0177(35) pc / cm3
DMXR1_0045 55843.9 d
DMXR2_0045 55843.9 d
DMX_0046 0.0178458 0.0178(35) pc / cm3
DMXR1_0046 55912.7 d
DMXR2_0046 55912.7 d
DMX_0047 0.0179235 0.0179(35) pc / cm3
DMXR1_0047 55989.5 d
DMXR2_0047 55989.5 d
DMX_0048 0.0179552 0.0180(35) pc / cm3
DMXR1_0048 56023.4 d
DMXR2_0048 56023.4 d
DMX_0049 0.0180215 0.0180(35) pc / cm3
DMXR1_0049 56057.4 d
DMXR2_0049 56057.4 d
DMX_0050 0.0182147 0.0182(35) pc / cm3
DMXR1_0050 56106.2 d
DMXR2_0050 56106.2 d
DMX_0051 0.0181602 0.0182(35) pc / cm3
DMXR1_0051 56122.1 d
DMXR2_0051 56122.2 d
DMX_0052 0.0180431 0.0180(35) pc / cm3
DMXR1_0052 56140.1 d
DMXR2_0052 56140.1 d
DMX_0053 0.0181528 0.0182(35) pc / cm3
DMXR1_0053 56166.1 d
DMXR2_0053 56166.1 d
DMX_0054 0.0184944 0.0185(35) pc / cm3
DMXR1_0054 56202 d
DMXR2_0054 56212.9 d
DMX_0055 0.0182191 0.0182(35) pc / cm3
DMXR1_0055 56235.8 d
DMXR2_0055 56235.9 d
DMX_0056 0.0182791 0.0183(35) pc / cm3
DMXR1_0056 56254.8 d
DMXR2_0056 56254.8 d
DMX_0057 0.0183449 0.0183(35) pc / cm3
DMXR1_0057 56277.8 d
DMXR2_0057 56277.8 d
DMX_0058 0.0182456 0.0182(35) pc / cm3
DMXR1_0058 56294.7 d
DMXR2_0058 56306.6 d
DMX_0059 0.018052 0.0181(35) pc / cm3
DMXR1_0059 56319.6 d
DMXR2_0059 56319.6 d
DMX_0060 0.0181223 0.0181(35) pc / cm3
DMXR1_0060 56341.6 d
DMXR2_0060 56341.6 d
DMX_0061 0.018162 0.0182(35) pc / cm3
DMXR1_0061 56360.5 d
DMXR2_0061 56374.5 d
DMX_0062 0.018172 0.0182(35) pc / cm3
DMXR1_0062 56380.5 d
DMXR2_0062 56380.5 d
DMX_0063 0.0181438 0.0181(35) pc / cm3
DMXR1_0063 56411.4 d
DMXR2_0063 56417.4 d
DMX_0064 0.0181367 0.0181(35) pc / cm3
DMXR1_0064 56432.3 d
DMXR2_0064 56438.3 d
DMX_0065 0.0181205 0.0181(35) pc / cm3
DMXR1_0065 56458.2 d
DMXR2_0065 56458.3 d
DMX_0066 0.0181608 0.0182(35) pc / cm3
DMXR1_0066 56479.2 d
DMXR2_0066 56479.2 d
DMX_0067 0.0181468 0.0181(35) pc / cm3
DMXR1_0067 56498.1 d
DMXR2_0067 56498.1 d
DMX_0068 0.0182941 0.0183(35) pc / cm3
DMXR1_0068 56519.1 d
DMXR2_0068 56519.1 d
DMX_0069 0.0182906 0.0183(35) pc / cm3
DMXR1_0069 56538 d
DMXR2_0069 56538 d
DMX_0070 0.0179612 0.0180(35) pc / cm3
DMXR1_0070 56558 d
DMXR2_0070 56558 d
DMX_0071 0.0183625 0.0184(35) pc / cm3
DMXR1_0071 56577.9 d
DMXR2_0071 56577.9 d
DMX_0072 0.0183865 0.0184(35) pc / cm3
DMXR1_0072 56598.9 d
DMXR2_0072 56598.9 d
PB 12.3272 12.32717119133(20) d
PBDOT 0
A1 9.23078 9.23078048(20) ls
A1DOT 0 ls / s
ECC 2.1634e-05 2.1634(24)×10⁻⁵
EDOT 0 1 / s
T0 54975.5 54975.5128(19) d
OM 276.536 276.53(6) deg
OMDOT 0 deg / yr
M2 0.233837 0.234(11) solMass
SINI 0.999461 0.99946(18)
A0 0 s
B0 0 s
GAMMA 0 s
DR 0
DTH 0
FD1 0.000161666 0.000162(34) s
FD2 -0.00018821 -0.00019(4) s
FD3 0.000107527 0.000108(25) s
RNAMP 0.017173
RNIDX -4.91353
TNREDAMP -14.2275
TNREDGAM 4.91353
TNREDC 45
EFAC1 1.507
EQUAD1 0.25518 us
EFAC2 1.147
EFAC3 1.15
EFAC4 1.117
EQUAD2 0.0141 us
EQUAD3 0.42504 us
EQUAD4 0.0264 us
ECORR1 0.00601 us
ECORR2 0.31843 us
ECORR3 0.79618 us
ECORR4 0.01117 us
TZRMJD 54981.3 d
TZRSITE 3 3 None
TZRFRQ 424 MHz
JUMP1 -9.449e-06 -9(9)×10⁻⁶ s
Derived Parameters:
Period = 0.005362100465526423±0.000000000000000008 s
Pdot = (1.78406±0.00007)×10⁻²⁰
Characteristic age = 4.762e+09 yr (braking index = 3)
Surface magnetic field = 3.13e+08 G
Magnetic field at light cylinder = 1.84e+04 G
Spindown Edot = 4.568e+33 erg / s (I=1e+45 cm2 g)
Parallax distance = (3.35±2.45)×10³ pc
Binary model BinaryDD
Mass function = 0.0055573934(4) Msun
Min / Median Companion mass (assuming Mpsr = 1.4 Msun) = 0.2470 / 0.2902 Msun
From SINI in model:
cos(i) = 0.033(5)
i = 88.12(31) deg
Pulsar mass (Shapiro Delay) = 1.2804782086531232 solMass
The GLS fitter produces two types of residuals: the usual residuals relative to the deterministic timing model, and the realizations of the noise model.
[21]:
glsfit.resids.time_resids
[21]:
[22]:
glsfit.resids.noise_resids
[22]:
{'pl_red_noise': <Quantity [-1.27090740e-06, -1.27090740e-06, -1.27090740e-06, ...,
-1.27096267e-06, -1.27096267e-06, -1.27096267e-06] s>,
'ecorr_noise': <Quantity [-7.68053637e-11, -7.68053637e-11, -7.68053637e-11, ...,
3.75218951e-07, 3.75218951e-07, 3.75218951e-07] s>}
[23]:
# Here we can plot both the residuals relative to the deterministic model and the realization of the noise model
# The difference between the two gives the "whitened" residuals (a short sketch of this follows after the plot)
fig, ax = plt.subplots(figsize=(16, 9))
mjds = glsfit.toas.get_mjds()
ax.plot(mjds, glsfit.resids.time_resids, ".")
ax.plot(mjds, glsfit.resids.noise_resids["pl_red_noise"], ".")
[23]:
[<matplotlib.lines.Line2D at 0x7fbd1f8ad310>]
[plot: time residuals and red-noise realization vs. MJD]
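As the comments above note, subtracting the noise-model realization from the time residuals gives the "whitened" residuals. A minimal sketch of that step (a hypothetical extra cell; the dictionary keys are the ones printed by glsfit.resids.noise_resids above):
# Sum the noise-model realizations and subtract them from the time residuals
total_noise = (
    glsfit.resids.noise_resids["pl_red_noise"]
    + glsfit.resids.noise_resids["ecorr_noise"]
)
whitened = glsfit.resids.time_resids - total_noise
ax.plot(mjds, whitened, ".")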
Choosing fitters
You can use the automatic fitter selection to help you choose between WLSFitter, GLSFitter, and their wideband variants. The default Downhill fitters generally have better performance than the plain variants.
[24]:
autofit = pint.fitter.Fitter.auto(toas=ts1855, model=m1855)
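If you want to confirm which concrete fitter class Fitter.auto() selected, a trivial check (plain Python, no PINT-specific API):
print(type(autofit).__name__)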
[25]:
autofit.fit_toas()
[25]:
True
[26]:
display_markdown(autofit.model.compare(glsfit.model, format="markdown"), raw=True)
PARAMETER | B1855+09_NANOGrav_9yv1.gls.par | B1855+09_NANOGrav_9yv1.gls.par | Diff_Sigma1 | Diff_Sigma2
---|---|---|---|---
PSR | B1855+09 | B1855+09 | |
EPHEM | DE421 | DE421 | |
CLOCK | TT(BIPM2019) | TT(BIPM2019) | |
UNITS | TDB | TDB | |
START | 53358.727464829469664 | 53358.727464829469664 | |
FINISH | 56598.871995360779607 | 56598.871995360779607 | |
INFO | -f | -f | |
TIMEEPH | FB90 | FB90 | |
BINARY | DD | DD | |
DILATEFREQ | False | False | |
DMDATA | False | False | |
NTOA | 4005 | 4005 | |
CHI2 | 3900.098975108194 | 3900.098153253117 | |
CHI2R | 0.9964483840337746 | 0.9964481740554719 | |
TRES | 1.3465976125409702851 | 1.3467836849625620117 | |
POSEPOCH | 54978.0 | 54978.0 | |
PX | 0.30(22) | 0.30(22) | 0.00 | 0.00
ELONG | 286d51m48.56156715s +/- 5.9e-05 | 286d51m48.56156705s +/- 5.9e-05 | -0.00 | -0.00
ELAT | 32d19m17.35582891s +/- 9.7e-05 | 32d19m17.35582847s +/- 9.8e-05 | -0.00 | -0.00
PMELONG | -3.270(14) | -3.270(14) | 0.00 | 0.00
PMELAT | -5.099(29) | -5.099(29) | -0.00 | -0.00
ECL | IERS2003 | IERS2003 | |
F0 | 186.49408127078522(27) | 186.49408127078522(27) | 0.00 | 0.00
PEPOCH | 54978.0 | 54978.0 | |
F1 | -6.20497(25)×10⁻¹⁶ | -6.20497(25)×10⁻¹⁶ | -0.00 | -0.00
CORRECT_TROPOSPHERE | False | False | |
PLANET_SHAPIRO | False | False | |
NE_SW | 0.0 | 0.0 | |
SWP | 2.0 | 2.0 | |
SWM | 0.0 | 0.0 | |
DM | 13.299393 | 13.299393 | |
PB | 12.32717119133(20) | 12.32717119133(20) | -0.00 | -0.00
PBDOT | 0.0 | 0.0 | |
A1 | 9.23078048(20) | 9.23078048(20) | -0.00 | -0.00
A1DOT | 0.0 | 0.0 | |
ECC | 2.1634(24)×10⁻⁵ | 2.1634(24)×10⁻⁵ | 0.00 | 0.00
EDOT | 0.0 | 0.0 | |
T0 | 54975.5129(8) | 54975.5128(19) | -0.08 | -0.03
OM | 276.536(23) | 276.53(6) | -0.08 | -0.03
OMDOT | 0.0 | 0.0 | |
M2 | 0.234(11) | 0.234(11) | 0.00 | 0.00
SINI | 0.99946(18) | 0.99946(18) | -0.00 | -0.00
A0 | 0.0 | 0.0 | |
B0 | 0.0 | 0.0 | |
GAMMA | 0.0 | 0.0 | |
DR | 0.0 | 0.0 | |
DTH | 0.0 | 0.0 | |
FD1 | 0.000162(34) | 0.000162(34) | 0.00 | 0.00
FD2 | -0.00019(4) | -0.00019(4) | -0.00 | -0.00
FD3 | 0.000108(25) | 0.000108(25) | 0.00 | 0.00
RNAMP | 0.017173 | 0.017173 | |
RNIDX | -4.91353 | -4.91353 | |
TNREDAMP | -14.227505410948254 | -14.227505410948254 | |
TNREDGAM | 4.91353 | 4.91353 | |
TNREDC | 45.0 | 45.0 | |
EFAC1 | 1.507 | 1.507 | |
EQUAD1 | 0.25518 | 0.25518 | |
EFAC2 | 1.147 | 1.147 | |
EFAC3 | 1.15 | 1.15 | |
EFAC4 | 1.117 | 1.117 | |
EQUAD2 | 0.0141 | 0.0141 | |
EQUAD3 | 0.42504 | 0.42504 | |
EQUAD4 | 0.0264 | 0.0264 | |
ECORR1 | 0.00601 | 0.00601 | |
ECORR2 | 0.31843 | 0.31843 | |
ECORR3 | 0.79618 | 0.79618 | |
ECORR4 | 0.01117 | 0.01117 | |
TZRMJD | 54981.28084616488447 | 54981.28084616488447 | |
TZRSITE | 3 | 3 | |
TZRFRQ | 424.0 | 424.0 | |
JUMP1 | -9(9)×10⁻⁶ | -9(9)×10⁻⁶ | 0.00 | 0.00
SEPARATION | 0.000000 arcsec | | |
The results are (thankfully) identical.
The MCMC fitter is considerably more complicated, so it has its own dedicated walkthroughs in MCMC_walkthrough.ipynb (for photon data) and examples/fit_NGC6440E_MCMC.py (for fitting TOAs).
This Jupyter notebook can be downloaded from noise-fitting-example.ipynb, or viewed as a python script at noise-fitting-example.py.
PINT Noise Fitting Examples
[1]:
from pint.models import get_model
from pint.simulation import make_fake_toas_uniform
from pint.logging import setup as setup_log
from pint.fitter import Fitter
import numpy as np
from io import StringIO
from astropy import units as u
from matplotlib import pyplot as plt
[2]:
setup_log(level="WARNING")
[2]:
1
Fitting for EFAC and EQUAD
[3]:
# Let us begin by simulating a dataset with an EFAC and an EQUAD.
# Note that the EFAC and the EQUAD are set as fit parameters ("1").
par = """
PSR TEST1
RAJ 05:00:00 1
DECJ 15:00:00 1
PEPOCH 55000
F0 100 1
F1 -1e-15 1
EFAC tel gbt 1.3 1
EQUAD tel gbt 1.1 1
TZRMJD 55000
TZRFRQ 1400
TZRSITE gbt
EPHEM DE440
CLOCK TT(BIPM2019)
UNITS TDB
"""
m = get_model(StringIO(par))
ntoas = 200
# EFAC and EQUAD cannot be measured separately if all TOA uncertainties
# are the same, so we must set a different TOA uncertainty for each TOA
# (see the short numerical check after this cell).
# This is how it is in real datasets anyway.
toaerrs = np.random.uniform(0.5, 2, ntoas) * u.us
t = make_fake_toas_uniform(
startMJD=54000,
endMJD=56000,
ntoas=ntoas,
model=m,
obs="gbt",
error=toaerrs,
add_noise=True,
include_bipm=True,
include_gps=True,
)
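To see the degeneracy mentioned in the comment above, note that (assuming the usual convention in which the scaled uncertainty is EFAC * sqrt(sigma**2 + EQUAD**2)) the same scaled error can be produced by many different (EFAC, EQUAD) pairs when sigma is identical for every TOA. A quick numerical illustration (hypothetical, not part of the original notebook):
sigma = 1.0 * u.us  # identical raw uncertainty for every TOA
target = 1.9 * u.us  # one possible scaled uncertainty
for efac in [1.0, 1.3, 1.7]:
    # solve for the EQUAD that reproduces exactly the same scaled error
    equad = np.sqrt((target / efac) ** 2 - sigma**2)
    print(f"EFAC={efac}, EQUAD={equad:.3f}, scaled error={efac * np.sqrt(sigma**2 + equad**2):.3f}")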
[4]:
# Now create the fitter. The `Fitter.auto()` function creates a
# Downhill fitter. Noise parameter fitting is only available in
# Downhill fitters.
ftr = Fitter.auto(t, m)
[5]:
# Now do the fitting.
ftr.fit_toas()
[6]:
# Print the post-fit model. We can see that the EFAC and EQUAD have been fit
# and their uncertainties are listed.
print(ftr.model)
# Created: 2024-04-26T18:24:42.587009
# PINT_version: 1.0
# User: docs
# Host: build-24199868-project-85767-nanograv-pint
# OS: Linux-5.19.0-1028-aws-x86_64-with-glibc2.35
# Python: 3.11.6 (main, Feb 1 2024, 16:47:41) [GCC 11.4.0]
# Format: pint
PSR TEST1
EPHEM DE440
CLOCK TT(BIPM2019)
UNITS TDB
START 53999.9999999842789237
FINISH 56000.0000000035891667
DILATEFREQ N
DMDATA N
NTOA 200
CHI2 199.99991319557515
CHI2R 1.0362689802879541
TRES 2.162977683873399002
RAJ 4:59:59.99999617 1 0.00000784783558282666
DECJ 14:59:59.99980778 1 0.00068118292350704298
PMRA 0.0
PMDEC 0.0
PX 0.0
F0 100.00000000000029638 1 3.0494101200937177432e-13
F1 -1.000002813569296409e-15 1 1.3856212625044998975e-20
PEPOCH 55000.0000000000000000
TZRMJD 55000.0000000000000000
TZRSITE gbt
TZRFRQ 1400.0
EFAC tel gbt 1.0920747019587804 1 0.2207055602638182
EQUAD tel gbt 1.5494113269068523 1 0.49576759779102836
PLANET_SHAPIRO N
[7]:
# Let us plot the injected and measured noise parameters together to
# compare them.
plt.scatter(m.EFAC1.value, m.EQUAD1.value, label="Injected", marker="o", color="blue")
plt.errorbar(
ftr.model.EFAC1.value,
ftr.model.EQUAD1.value,
xerr=ftr.model.EFAC1.uncertainty_value,
yerr=ftr.model.EQUAD1.uncertainty_value,
marker="+",
label="Measured",
color="red",
)
plt.xlabel("EFAC_tel_gbt")
plt.ylabel("EQUAD_tel_gbt (us)")
plt.legend()
plt.show()
[plot: injected vs. measured EFAC and EQUAD]
Fitting for ECORRs
[8]:
# Note the explicit offset (PHOFF) in the par file below.
# Implicit offset subtraction is typically not accurate enough when
# ECORR (or any other type of correlated noise) is present.
# i.e., PHOFF should be a free parameter when ECORRs are being fit.
par = """
PSR TEST2
RAJ 05:00:00 1
DECJ 15:00:00 1
PEPOCH 55000
F0 100 1
F1 -1e-15 1
PHOFF 0 1
EFAC tel gbt 1.3 1
ECORR tel gbt 1.1 1
TZRMJD 55000
TZRFRQ 1400
TZRSITE gbt
EPHEM DE440
CLOCK TT(BIPM2019)
UNITS TDB
"""
m = get_model(StringIO(par))
# ECORRs only apply when there are multiple TOAs per epoch
# (see the quick check after this cell).
# This can be simulated by providing multiple frequencies and
# setting the `multi_freqs_in_epoch` option. The `add_correlated_noise`
# option should also be set because correlated noise components
# are not simulated by default.
ntoas = 500
toaerrs = np.random.uniform(0.5, 2, ntoas) * u.us
freqs = np.linspace(1300, 1500, 4) * u.MHz
t = make_fake_toas_uniform(
startMJD=54000,
endMJD=56000,
ntoas=ntoas,
model=m,
obs="gbt",
error=toaerrs,
freq=freqs,
add_noise=True,
add_correlated_noise=True,
include_bipm=True,
include_gps=True,
multi_freqs_in_epoch=True,
)
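ECORR applies a fully correlated noise term to each group of TOAs taken at (nearly) the same epoch. A rough way to count such groups in the simulated data, as mentioned in the comment above (a hypothetical check that simply bins TOAs by integer MJD):
epochs = np.unique(np.floor(t.get_mjds().to_value("d")))
print("TOAs:", len(t), " approximate epochs:", len(epochs))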
[9]:
ftr = Fitter.auto(t, m)
[10]:
ftr.fit_toas()
[10]:
True
[11]:
print(ftr.model)
# Created: 2024-04-26T18:25:02.453775
# PINT_version: 1.0
# User: docs
# Host: build-24199868-project-85767-nanograv-pint
# OS: Linux-5.19.0-1028-aws-x86_64-with-glibc2.35
# Python: 3.11.6 (main, Feb 1 2024, 16:47:41) [GCC 11.4.0]
# Format: pint
PSR TEST2
EPHEM DE440
CLOCK TT(BIPM2019)
UNITS TDB
START 53999.9999999862203357
FINISH 55984.0000000565306134
DILATEFREQ N
DMDATA N
NTOA 500
CHI2 499.9776191992979
CHI2R 1.0162146731693047
TRES 1.6256229398090499954
RAJ 4:59:59.99999584 1 0.00000551967639521126
DECJ 15:00:00.00002089 1 0.00047963026285659162
PMRA 0.0
PMDEC 0.0
PX 0.0
F0 99.999999999999979544 1 2.1479740438684180401e-13
F1 -9.999938184126784302e-16 1 9.795183297139953144e-21
PEPOCH 55000.0000000000000000
TZRMJD 55000.0000000000000000
TZRSITE gbt
TZRFRQ 1400.0
PHOFF 5.152548440986949e-06 1 1.6335524438398323e-05
EFAC tel gbt 1.3075947169098563 1 0.047391221522040575
ECORR tel gbt 1.0048562631460494 1 0.09055201936095668
PLANET_SHAPIRO N
[12]:
# Let us plot the injected and measured noise parameters together to
# compare them.
plt.scatter(m.EFAC1.value, m.ECORR1.value, label="Injected", marker="o", color="blue")
plt.errorbar(
ftr.model.EFAC1.value,
ftr.model.ECORR1.value,
xerr=ftr.model.EFAC1.uncertainty_value,
yerr=ftr.model.ECORR1.uncertainty_value,
marker="+",
label="Measured",
color="red",
)
plt.xlabel("EFAC_tel_gbt")
plt.ylabel("ECORR_tel_gbt (us)")
plt.legend()
plt.show()
[plot: injected vs. measured EFAC and ECORR]
This Jupyter notebook can be downloaded from rednoise-fit-example.ipynb, or viewed as a python script at rednoise-fit-example.py.
Red noise and DM noise fitting examples
This notebook provides an example of how to fit for red noise and DM noise in PINT using simulated datasets.
We will use the PLRedNoise and PLDMNoise models to generate noise realizations (these models provide Fourier-domain Gaussian process descriptions of achromatic red noise and DM noise, respectively).
We will fit the generated datasets using the WaveX and DMWaveX models, which provide deterministic Fourier representations of achromatic red noise and DM noise, respectively.
Finally, we will convert the WaveX/DMWaveX amplitudes into spectral parameters and compare them with the injected values.
[1]:
from pint import DMconst
from pint.models import get_model
from pint.simulation import make_fake_toas_uniform
from pint.logging import setup as setup_log
from pint.fitter import WLSFitter
from pint.utils import (
dmwavex_setup,
find_optimal_nharms,
wavex_setup,
plrednoise_from_wavex,
pldmnoise_from_dmwavex,
)
from io import StringIO
import numpy as np
import astropy.units as u
from matplotlib import pyplot as plt
from copy import deepcopy
setup_log(level="WARNING")
[1]:
1
Red noise fitting
Simulation
The first step is to generate a simulated dataset for demonstration. Note that we are adding PHOFF as a free parameter. This is required for the fit to work properly.
[2]:
par_sim = """
PSR SIM3
RAJ 05:00:00 1
DECJ 15:00:00 1
PEPOCH 55000
F0 100 1
F1 -1e-15 1
PHOFF 0 1
DM 15 1
TNREDAMP -13
TNREDGAM 3.5
TNREDC 30
TZRMJD 55000
TZRFRQ 1400
TZRSITE gbt
UNITS TDB
EPHEM DE440
CLOCK TT(BIPM2019)
"""
m = get_model(StringIO(par_sim))
[3]:
# Now generate the simulated TOAs.
ntoas = 2000
toaerrs = np.random.uniform(0.5, 2.0, ntoas) * u.us
freqs = np.linspace(500, 1500, 8) * u.MHz
t = make_fake_toas_uniform(
startMJD=53001,
endMJD=57001,
ntoas=ntoas,
model=m,
freq=freqs,
obs="gbt",
error=toaerrs,
add_noise=True,
add_correlated_noise=True,
name="fake",
include_bipm=True,
include_gps=True,
multi_freqs_in_epoch=True,
)
Optimal number of harmonics
The optimal number of harmonics can be estimated by minimizing the Akaike Information Criterion (AIC). This is implemented in the pint.utils.find_optimal_nharms function.
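As a reminder, the AIC used here is the standard Akaike criterion, AIC = 2k - 2 ln(L_max), where k is the number of free parameters and L_max is the maximized likelihood; the plots below show the AIC relative to its minimum (plus one, so it can be drawn on a logarithmic scale).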
[4]:
m1 = deepcopy(m)
m1.remove_component("PLRedNoise")
nharm_opt, d_aics = find_optimal_nharms(m1, t, "WaveX", 30)
print("Optimum no of harmonics = ", nharm_opt)
Optimum no of harmonics = 21
[5]:
print(np.argmin(d_aics))
21
[6]:
# The Y axis is plotted in log scale only for better visibility.
plt.scatter(list(range(len(d_aics))), d_aics + 1)
plt.axvline(nharm_opt, color="red", label="Optimum number of harmonics")
plt.axvline(
int(m.TNREDC.value), color="black", ls="--", label="Injected number of harmonics"
)
plt.xlabel("Number of harmonics")
plt.ylabel("AIC - AIC$_\\min{} + 1$")
plt.legend()
plt.yscale("log")
# plt.savefig("sim3-aic.pdf")
[plot: AIC vs. number of WaveX harmonics]
[7]:
# Now create a new model with the optimum number of harmonics
m2 = deepcopy(m1)
Tspan = t.get_mjds().max() - t.get_mjds().min()
wavex_setup(m2, T_span=Tspan, n_freqs=nharm_opt, freeze_params=False)
ftr = WLSFitter(t, m2)
ftr.fit_toas(maxiter=10)
m2 = ftr.model
print(m2)
# Created: 2024-04-26T18:26:16.948245
# PINT_version: 1.0
# User: docs
# Host: build-24199868-project-85767-nanograv-pint
# OS: Linux-5.19.0-1028-aws-x86_64-with-glibc2.35
# Python: 3.11.6 (main, Feb 1 2024, 16:47:41) [GCC 11.4.0]
# Format: pint
PSR SIM3
EPHEM DE440
CLOCK TT(BIPM2019)
UNITS TDB
START 53000.9999999565946180
FINISH 56985.0000000463340162
DILATEFREQ N
DMDATA N
NTOA 2000
CHI2 1985.786576195365
CHI2R 1.0178301261893208
TRES 0.9987650260151451009
RAJ 4:59:59.99997566 1 0.00012549888847645377
DECJ 15:00:00.01946093 1 0.01253574707989302608
PMRA 0.0
PMDEC 0.0
PX 0.0
F0 100.00000000000048128 1 5.9764548193525643214e-13
F1 -1.0002964490051748055e-15 1 2.1140416321996849659e-19
PEPOCH 55000.0000000000000000
PLANET_SHAPIRO N
DM 15.000003363000932196 1 4.7273937301305952877e-06
WXEPOCH 55000.0000000000000000
WXFREQ_0001 0.00025100401605860305
WXSIN_0001 -5.26072739744458e-06 1 6.697086358550076e-07
WXCOS_0001 2.4780869502129233e-05 1 1.2762439075717358e-05
WXFREQ_0002 0.0005020080321172061
WXSIN_0002 -1.3890066978301404e-06 1 3.382758024389726e-07
WXCOS_0002 -7.0230417664505525e-06 1 3.237821179344407e-06
WXFREQ_0003 0.0007530120481758091
WXSIN_0003 -3.2078986837634066e-07 1 2.36084206996553e-07
WXCOS_0003 1.2113406092769844e-06 1 1.4789683460957496e-06
WXFREQ_0004 0.0010040160642344122
WXSIN_0004 -5.99580013371814e-08 1 1.8729613993884702e-07
WXCOS_0004 -1.3303896610219611e-06 1 8.667399213794265e-07
WXFREQ_0005 0.0012550200802930152
WXSIN_0005 6.380663830475858e-07 1 1.6442141654937857e-07
WXCOS_0005 6.898480865393056e-07 1 5.895140454980921e-07
WXFREQ_0006 0.0015060240963516182
WXSIN_0006 -5.5321141824092024e-08 1 1.5240201900215862e-07
WXCOS_0006 -4.865108467626868e-07 1 4.448839352687818e-07
WXFREQ_0007 0.0017570281124102212
WXSIN_0007 -1.471356713537941e-07 1 1.516581584594996e-07
WXCOS_0007 8.679297868588556e-07 1 3.689139381650877e-07
WXFREQ_0008 0.0020080321284688244
WXSIN_0008 -1.8158167257610405e-07 1 1.6629295966529573e-07
WXCOS_0008 -6.857947996078223e-07 1 3.404859084200492e-07
WXFREQ_0009 0.002259036144527427
WXSIN_0009 1.5099768956492976e-07 1 2.0798656676868966e-07
WXCOS_0009 7.665718882964265e-07 1 3.6125143597503085e-07
WXFREQ_0010 0.0025100401605860304
WXSIN_0010 -2.7809222918228695e-07 1 3.6063340864317207e-07
WXCOS_0010 -1.016445893466129e-06 1 5.438786122830541e-07
WXFREQ_0011 0.002761044176644633
WXSIN_0011 -2.5141999885865457e-06 1 2.9682242288044903e-06
WXCOS_0011 -6.637784295910365e-06 1 3.858569396516366e-06
WXFREQ_0012 0.0030120481927032364
WXSIN_0012 1.3206476035990138e-07 1 2.1677282930991087e-07
WXCOS_0012 4.555139177184894e-07 1 2.4371525358777845e-07
WXFREQ_0013 0.0032630522087618396
WXSIN_0013 -4.017260976236724e-08 1 1.0014555992081561e-07
WXCOS_0013 -1.7910920893995486e-08 1 1.0119515936898519e-07
WXFREQ_0014 0.0035140562248204424
WXSIN_0014 -3.6960846782051954e-08 1 6.427849145480143e-08
WXCOS_0014 4.530717797795951e-08 1 6.044210014201975e-08
WXFREQ_0015 0.0037650602408790456
WXSIN_0015 -3.32431856735564e-08 1 4.8130650762713475e-08
WXCOS_0015 3.0079207407672806e-08 1 4.581431739369353e-08
WXFREQ_0016 0.004016064256937649
WXSIN_0016 -3.7265512986990295e-09 1 4.0318991349589925e-08
WXCOS_0016 3.173113319176736e-08 1 3.968114678922932e-08
WXFREQ_0017 0.004267068272996251
WXSIN_0017 -8.018333914461123e-08 1 3.6858471917371e-08
WXCOS_0017 -4.949842529356061e-08 1 3.751158222790377e-08
WXFREQ_0018 0.004518072289054854
WXSIN_0018 -5.323037746157597e-08 1 3.493804598714024e-08
WXCOS_0018 -1.2040091902879907e-08 1 3.602245061675795e-08
WXFREQ_0019 0.004769076305113458
WXSIN_0019 -9.163643122689907e-08 1 3.3890724042626463e-08
WXCOS_0019 -2.6435176795823583e-08 1 3.5066527693037597e-08
WXFREQ_0020 0.005020080321172061
WXSIN_0020 7.111206331488393e-08 1 3.2908876477146776e-08
WXCOS_0020 -1.773633234303377e-08 1 3.504885423531406e-08
WXFREQ_0021 0.005271084337230664
WXSIN_0021 -5.700714256165111e-08 1 3.28522938248241e-08
WXCOS_0021 5.803209171045271e-08 1 3.5296926091900935e-08
TZRMJD 55000.0000000000000000
TZRSITE gbt
TZRFRQ 1400.0
PHOFF 0.0003791964944351442 1 0.001054845569748997
Estimating the spectral parameters from the WaveX fit.
[8]:
# Get the Fourier amplitudes and powers and their uncertainties.
idxs = np.array(m2.components["WaveX"].get_indices())
a = np.array([m2[f"WXSIN_{idx:04d}"].quantity.to_value("s") for idx in idxs])
da = np.array([m2[f"WXSIN_{idx:04d}"].uncertainty.to_value("s") for idx in idxs])
b = np.array([m2[f"WXCOS_{idx:04d}"].quantity.to_value("s") for idx in idxs])
db = np.array([m2[f"WXCOS_{idx:04d}"].uncertainty.to_value("s") for idx in idxs])
print(len(idxs))
P = (a**2 + b**2) / 2
dP = ((a * da) ** 2 + (b * db) ** 2) ** 0.5
f0 = (1 / Tspan).to_value(u.Hz)
fyr = (1 / u.year).to_value(u.Hz)
21
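For reference, the quantities computed above are the per-harmonic power estimates P_j = (a_j^2 + b_j^2) / 2, with propagated uncertainties dP_j = sqrt((a_j * da_j)^2 + (b_j * db_j)^2), where a_j and b_j are the WXSIN/WXCOS amplitudes evaluated at the Fourier frequencies f_j = j / T_span.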
[9]:
# We can create a `PLRedNoise` model from the `WaveX` model.
# This will estimate the spectral parameters from the `WaveX`
# amplitudes.
m3 = plrednoise_from_wavex(m2)
print(m3)
# Created: 2024-04-26T18:26:16.992898
# PINT_version: 1.0
# User: docs
# Host: build-24199868-project-85767-nanograv-pint
# OS: Linux-5.19.0-1028-aws-x86_64-with-glibc2.35
# Python: 3.11.6 (main, Feb 1 2024, 16:47:41) [GCC 11.4.0]
# Format: pint
PSR SIM3
EPHEM DE440
CLOCK TT(BIPM2019)
UNITS TDB
START 53000.9999999565946180
FINISH 56985.0000000463340162
DILATEFREQ N
DMDATA N
NTOA 2000
CHI2 1985.786576195365
CHI2R 1.0178301261893208
TRES 0.9987650260151451009
RAJ 4:59:59.99997566 1 0.00012549888847645377
DECJ 15:00:00.01946093 1 0.01253574707989302608
PMRA 0.0
PMDEC 0.0
PX 0.0
F0 100.00000000000048128 1 5.9764548193525643214e-13
F1 -1.0002964490051748055e-15 1 2.1140416321996849659e-19
PEPOCH 55000.0000000000000000
PLANET_SHAPIRO N
DM 15.000003363000932196 1 4.7273937301305952877e-06
TNREDAMP -13.016030933227317 0 0.12485158319660997
TNREDGAM 3.479263293051011 0 0.5549638664070712
TNREDC 21.0
TZRMJD 55000.0000000000000000
TZRSITE gbt
TZRFRQ 1400.0
PHOFF 0.0003791964944351442 1 0.001054845569748997
[10]:
# Now let us plot the estimated spectrum with the injected
# spectrum.
plt.subplot(211)
plt.errorbar(
idxs * f0,
b * 1e6,
db * 1e6,
ls="",
marker="o",
label="$\\hat{a}_j$ (WXCOS)",
color="red",
)
plt.errorbar(
idxs * f0,
a * 1e6,
da * 1e6,
ls="",
marker="o",
label="$\\hat{b}_j$ (WXSIN)",
color="blue",
)
plt.axvline(fyr, color="black", ls="dotted")
plt.axhline(0, color="grey", ls="--")
plt.ylabel("Fourier coeffs ($\mu$s)")
plt.xscale("log")
plt.legend(fontsize=8)
plt.subplot(212)
plt.errorbar(
idxs * f0, P, dP, ls="", marker="o", label="Spectral power (PINT)", color="k"
)
P_inj = m.components["PLRedNoise"].get_noise_weights(t)[::2][:nharm_opt]
plt.plot(idxs * f0, P_inj, label="Injected Spectrum", color="r")
P_est = m3.components["PLRedNoise"].get_noise_weights(t)[::2][:nharm_opt]
print(len(idxs), len(P_est))
plt.plot(idxs * f0, P_est, label="Estimated Spectrum", color="b")
plt.xscale("log")
plt.yscale("log")
plt.ylabel("Spectral power (s$^2$)")
plt.xlabel("Frequency (Hz)")
plt.axvline(fyr, color="black", ls="dotted", label="1 yr$^{-1}$")
plt.legend()
21 21
[10]:
<matplotlib.legend.Legend at 0x7f51a6cc3f90>
[plot: Fourier coefficients and red-noise power spectrum]
Note the outlier in the 1 yr⁻¹ bin. This is caused by covariance with RA and DEC, which introduce a delay at the same frequency.
DM noise fitting
Let us now do a similar kind of analysis for DM noise.
[11]:
par_sim = """
PSR SIM4
RAJ 05:00:00 1
DECJ 15:00:00 1
PEPOCH 55000
F0 100 1
F1 -1e-15 1
PHOFF 0 1
DM 15 1
TNDMAMP -13
TNDMGAM 3.5
TNDMC 30
TZRMJD 55000
TZRFRQ 1400
TZRSITE gbt
UNITS TDB
EPHEM DE440
CLOCK TT(BIPM2019)
"""
m = get_model(StringIO(par_sim))
[12]:
# Generate the simulated TOAs.
ntoas = 2000
toaerrs = np.random.uniform(0.5, 2.0, ntoas) * u.us
freqs = np.linspace(500, 1500, 8) * u.MHz
t = make_fake_toas_uniform(
startMJD=53001,
endMJD=57001,
ntoas=ntoas,
model=m,
freq=freqs,
obs="gbt",
error=toaerrs,
add_noise=True,
add_correlated_noise=True,
name="fake",
include_bipm=True,
include_gps=True,
multi_freqs_in_epoch=True,
)
[13]:
# Find the optimum number of harmonics by minimizing AIC.
m1 = deepcopy(m)
m1.remove_component("PLDMNoise")
m2 = deepcopy(m1)
nharm_opt, d_aics = find_optimal_nharms(m2, t, "DMWaveX", 30)
print("Optimum no of harmonics = ", nharm_opt)
Optimum no of harmonics = 30
[14]:
# The Y axis is plotted in log scale only for better visibility.
plt.scatter(list(range(len(d_aics))), d_aics + 1)
plt.axvline(nharm_opt, color="red", label="Optimum number of harmonics")
plt.axvline(
int(m.TNDMC.value), color="black", ls="--", label="Injected number of harmonics"
)
plt.xlabel("Number of harmonics")
plt.ylabel("AIC - AIC$_\\min{} + 1$")
plt.legend()
plt.yscale("log")
# plt.savefig("sim3-aic.pdf")
[plot: AIC vs. number of DMWaveX harmonics]
[15]:
# Now create a new model with the optimum number of
# harmonics
m2 = deepcopy(m1)
Tspan = t.get_mjds().max() - t.get_mjds().min()
dmwavex_setup(m2, T_span=Tspan, n_freqs=nharm_opt, freeze_params=False)
ftr = WLSFitter(t, m2)
ftr.fit_toas(maxiter=10)
m2 = ftr.model
print(m2)
# Created: 2024-04-26T18:27:41.839748
# PINT_version: 1.0
# User: docs
# Host: build-24199868-project-85767-nanograv-pint
# OS: Linux-5.19.0-1028-aws-x86_64-with-glibc2.35
# Python: 3.11.6 (main, Feb 1 2024, 16:47:41) [GCC 11.4.0]
# Format: pint
PSR SIM4
EPHEM DE440
CLOCK TT(BIPM2019)
UNITS TDB
START 53000.9999999566617940
FINISH 56985.0000000461651968
DILATEFREQ N
DMDATA N
NTOA 2000
CHI2 1924.9398459914685
CHI2R 0.9958302358983282
TRES 0.97370560740053353646
RAJ 4:59:59.99999754 1 0.00000186069354909244
DECJ 15:00:00.00008286 1 0.00016076295870374744
PMRA 0.0
PMDEC 0.0
PX 0.0
F0 99.999999999999943 1 3.6877913630614625074e-14
F1 -9.99999757231625781e-16 1 8.3155380309933932423e-22
PEPOCH 55000.0000000000000000
PLANET_SHAPIRO N
DM 14.999999105805300385 1 5.1182271780250388098e-06
DMWXEPOCH 55000.0000000000000000
DMWXFREQ_0001 0.0002510040160586182
DMWXSIN_0001 0.002267887154581237 1 6.405742083452016e-06
DMWXCOS_0001 0.0022028446029550104 1 6.801658104196913e-06
DMWXFREQ_0002 0.0005020080321172364
DMWXSIN_0002 -0.002256449215492454 1 4.879320378414731e-06
DMWXCOS_0002 -0.00014072784684890515 1 4.614913834161835e-06
DMWXFREQ_0003 0.0007530120481758545
DMWXSIN_0003 0.00010798611872871258 1 4.552034376795765e-06
DMWXCOS_0003 -7.560287812989517e-05 1 4.462145026502726e-06
DMWXFREQ_0004 0.0010040160642344727
DMWXSIN_0004 0.00011591836869768394 1 4.563079476613378e-06
DMWXCOS_0004 0.00045904841368577865 1 4.358378332370443e-06
DMWXFREQ_0005 0.0012550200802930909
DMWXSIN_0005 -7.914061038193238e-05 1 4.364947036806972e-06
DMWXCOS_0005 3.678118108494062e-05 1 4.496581531867522e-06
DMWXFREQ_0006 0.001506024096351709
DMWXSIN_0006 -0.00016177856742951257 1 4.416317814732561e-06
DMWXCOS_0006 0.00012202157331123866 1 4.4461985432509346e-06
DMWXFREQ_0007 0.001757028112410327
DMWXSIN_0007 4.070724742496181e-06 1 4.463533937578784e-06
DMWXCOS_0007 1.7544746184420862e-05 1 4.3937554364021635e-06
DMWXFREQ_0008 0.0020080321284689454
DMWXSIN_0008 -9.202485634981567e-05 1 4.437741005587197e-06
DMWXCOS_0008 6.666667515998868e-05 1 4.416420555875552e-06
DMWXFREQ_0009 0.0022590361445275634
DMWXSIN_0009 -4.016759497779952e-05 1 4.4209352201880995e-06
DMWXCOS_0009 5.379679454916742e-05 1 4.453007615565689e-06
DMWXFREQ_0010 0.0025100401605861818
DMWXSIN_0010 -1.5622742038987805e-05 1 4.473041981338938e-06
DMWXCOS_0010 7.151230128813104e-05 1 4.448135763685967e-06
DMWXFREQ_0011 0.0027610441766447997
DMWXSIN_0011 5.250038074067476e-05 1 7.137162987804897e-06
DMWXCOS_0011 -3.0002039007988996e-05 1 7.18325685454857e-06
DMWXFREQ_0012 0.003012048192703418
DMWXSIN_0012 5.256613014308591e-05 1 4.48191972973073e-06
DMWXCOS_0012 1.3115385629417554e-05 1 4.41152076391041e-06
DMWXFREQ_0013 0.003263052208762036
DMWXSIN_0013 1.1662502203493704e-05 1 4.318503781142853e-06
DMWXCOS_0013 2.4418102309389204e-05 1 4.5225747457544276e-06
DMWXFREQ_0014 0.003514056224820654
DMWXSIN_0014 7.629401527394661e-06 1 4.416641902267066e-06
DMWXCOS_0014 4.886545060278572e-05 1 4.420483272958954e-06
DMWXFREQ_0015 0.0037650602408792725
DMWXSIN_0015 -2.723657665290609e-05 1 4.27345609861973e-06
DMWXCOS_0015 -1.4103469287476787e-05 1 4.559770853078012e-06
DMWXFREQ_0016 0.004016064256937891
DMWXSIN_0016 -6.2697014551712e-06 1 4.50517707299454e-06
DMWXCOS_0016 7.32410856134019e-06 1 4.332907459016422e-06
DMWXFREQ_0017 0.004267068272996509
DMWXSIN_0017 -4.7500848700161814e-05 1 4.376464444606066e-06
DMWXCOS_0017 1.5907747753421928e-05 1 4.476824939638822e-06
DMWXFREQ_0018 0.004518072289055127
DMWXSIN_0018 2.152892514450982e-05 1 4.498850019248732e-06
DMWXCOS_0018 1.3585152410400874e-05 1 4.362937582894786e-06
DMWXFREQ_0019 0.004769076305113745
DMWXSIN_0019 1.1900781088289971e-06 1 4.5407385727059776e-06
DMWXCOS_0019 9.772600287781807e-06 1 4.321319932305639e-06
DMWXFREQ_0020 0.0050200803211723636
DMWXSIN_0020 -2.3539540615309578e-05 1 4.375942356581529e-06
DMWXCOS_0020 -1.2120009743867361e-06 1 4.4738005307565184e-06
DMWXFREQ_0021 0.0052710843372309815
DMWXSIN_0021 -4.538115353946841e-06 1 4.430657988757621e-06
DMWXCOS_0021 -1.5358016234057167e-05 1 4.4142173344308175e-06
DMWXFREQ_0022 0.0055220883532895995
DMWXSIN_0022 -2.482770311281147e-05 1 4.4240395877964516e-06
DMWXCOS_0022 2.357463865577042e-05 1 4.425296235714781e-06
DMWXFREQ_0023 0.0057730923693482174
DMWXSIN_0023 7.054685712605008e-06 1 4.456922794287003e-06
DMWXCOS_0023 -3.906990127637215e-05 1 4.38254949398702e-06
DMWXFREQ_0024 0.006024096385406836
DMWXSIN_0024 -1.055809736673286e-06 1 4.508596123325227e-06
DMWXCOS_0024 2.084248522590532e-05 1 4.326660188412384e-06
DMWXFREQ_0025 0.006275100401465454
DMWXSIN_0025 3.928520075657159e-06 1 4.454319195284847e-06
DMWXCOS_0025 2.822829777577435e-06 1 4.382386062014087e-06
DMWXFREQ_0026 0.006526104417524072
DMWXSIN_0026 1.4306444754077398e-05 1 4.439900618744785e-06
DMWXCOS_0026 9.488765576228842e-07 1 4.403322477995081e-06
DMWXFREQ_0027 0.00677710843358269
DMWXSIN_0027 4.8695681427499365e-06 1 4.415966128261986e-06
DMWXCOS_0027 3.5662732029316236e-06 1 4.4341393274931e-06
DMWXFREQ_0028 0.007028112449641308
DMWXSIN_0028 3.491739116220949e-06 1 4.446895171977529e-06
DMWXCOS_0028 -4.921946699163366e-07 1 4.3988067049203884e-06
DMWXFREQ_0029 0.007279116465699927
DMWXSIN_0029 9.236504934521167e-06 1 4.406737366331278e-06
DMWXCOS_0029 5.662327131744644e-06 1 4.420412353352633e-06
DMWXFREQ_0030 0.007530120481758545
DMWXSIN_0030 1.71112860121269e-05 1 4.407696603178223e-06
DMWXCOS_0030 -1.2103678148815715e-05 1 4.417334445032463e-06
TZRMJD 55000.0000000000000000
TZRSITE gbt
TZRFRQ 1400.0
PHOFF 0.0006167886778925615 1 5.406853216451107e-06
Estimating the spectral parameters from the DMWaveX fit.
[16]:
# Get the Fourier amplitudes and powers and their uncertainties.
# Note that the `DMWaveX` amplitudes have the units of DM.
# We multiply them by a constant factor to convert them to dimensions
# of time so that they are consistent with `PLDMNoise`.
scale = DMconst / (1400 * u.MHz) ** 2
idxs = np.array(m2.components["DMWaveX"].get_indices())
a = np.array(
[(scale * m2[f"DMWXSIN_{idx:04d}"].quantity).to_value("s") for idx in idxs]
)
da = np.array(
[(scale * m2[f"DMWXSIN_{idx:04d}"].uncertainty).to_value("s") for idx in idxs]
)
b = np.array(
[(scale * m2[f"DMWXCOS_{idx:04d}"].quantity).to_value("s") for idx in idxs]
)
db = np.array(
[(scale * m2[f"DMWXCOS_{idx:04d}"].uncertainty).to_value("s") for idx in idxs]
)
print(len(idxs))
P = (a**2 + b**2) / 2
dP = ((a * da) ** 2 + (b * db) ** 2) ** 0.5
f0 = (1 / Tspan).to_value(u.Hz)
fyr = (1 / u.year).to_value(u.Hz)
30
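The scale factor used above is just the cold-plasma dispersion delay evaluated at the reference frequency: delta_t = DMconst * DM / nu^2, where DMconst is approximately 4.149e3 s MHz^2 pc^-1 cm^3, so at nu = 1400 MHz one unit of DM (1 pc cm^-3) corresponds to roughly 2.1 ms of delay.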
[17]:
# We can create a `PLDMNoise` model from the `DMWaveX` model.
# This will estimate the spectral parameters from the `DMWaveX`
# amplitudes.
m3 = pldmnoise_from_dmwavex(m2)
print(m3)
# Created: 2024-04-26T18:27:41.897485
# PINT_version: 1.0
# User: docs
# Host: build-24199868-project-85767-nanograv-pint
# OS: Linux-5.19.0-1028-aws-x86_64-with-glibc2.35
# Python: 3.11.6 (main, Feb 1 2024, 16:47:41) [GCC 11.4.0]
# Format: pint
PSR SIM4
EPHEM DE440
CLOCK TT(BIPM2019)
UNITS TDB
START 53000.9999999566617940
FINISH 56985.0000000461651968
DILATEFREQ N
DMDATA N
NTOA 2000
CHI2 1924.9398459914685
CHI2R 0.9958302358983282
TRES 0.97370560740053353646
RAJ 4:59:59.99999754 1 0.00000186069354909244
DECJ 15:00:00.00008286 1 0.00016076295870374744
PMRA 0.0
PMDEC 0.0
PX 0.0
F0 99.999999999999943 1 3.6877913630614625074e-14
F1 -9.99999757231625781e-16 1 8.3155380309933932423e-22
PEPOCH 55000.0000000000000000
PLANET_SHAPIRO N
DM 14.999999105805300385 1 5.1182271780250388098e-06
TNDMAMP -12.941525654354875 0 0.04226097991811031
TNDMGAM 3.439774060382107 0 0.21069702401222473
TNDMC 30.0
TZRMJD 55000.0000000000000000
TZRSITE gbt
TZRFRQ 1400.0
PHOFF 0.0006167886778925615 1 5.406853216451107e-06
[18]:
# Now let us plot the estimated spectrum with the injected
# spectrum.
plt.subplot(211)
plt.errorbar(
idxs * f0,
b * 1e6,
db * 1e6,
ls="",
marker="o",
label="$\\hat{a}_j$ (DMWXCOS)",
color="red",
)
plt.errorbar(
idxs * f0,
a * 1e6,
da * 1e6,
ls="",
marker="o",
label="$\\hat{b}_j$ (DMWXSIN)",
color="blue",
)
plt.axvline(fyr, color="black", ls="dotted")
plt.axhline(0, color="grey", ls="--")
plt.ylabel("Fourier coeffs ($\mu$s)")
plt.xscale("log")
plt.legend(fontsize=8)
plt.subplot(212)
plt.errorbar(
idxs * f0, P, dP, ls="", marker="o", label="Spectral power (PINT)", color="k"
)
P_inj = m.components["PLDMNoise"].get_noise_weights(t)[::2][:nharm_opt]
plt.plot(idxs * f0, P_inj, label="Injected Spectrum", color="r")
P_est = m3.components["PLDMNoise"].get_noise_weights(t)[::2][:nharm_opt]
print(len(idxs), len(P_est))
plt.plot(idxs * f0, P_est, label="Estimated Spectrum", color="b")
plt.xscale("log")
plt.yscale("log")
plt.ylabel("Spectral power (s$^2$)")
plt.xlabel("Frequency (Hz)")
plt.axvline(fyr, color="black", ls="dotted", label="1 yr$^{-1}$")
plt.legend()
30 30
[18]:
<matplotlib.legend.Legend at 0x7f51b5687ed0>
[plot: Fourier coefficients and DM-noise power spectrum]
This Jupyter notebook can be downloaded from WorkingWithFlags.ipynb, or viewed as a python script at WorkingWithFlags.py.
Working With TOA Flags
PINT provides methods for working conveniently with TOA flags. You can add, delete, and modify flags, and use them to select TOAs.
[1]:
from pint.toa import get_TOAs
import pint.config
import pint.logging
[2]:
pint.logging.setup("WARNING")
[2]:
1
Get a test dataset. This file has no flags to start with.
[3]:
t = get_TOAs(pint.config.examplefile("NGC6440E.tim"), ephem="DE440")
print(t)
62 TOAs starting at MJD 53478.28587141954
The TOAs are stored internally as an astropy.Table:
[4]:
print(t.table)
index mjd ... obs_sun_pos
... km
----- ----------------- ... ------------------------------------------
0 53478.28587141954 ... 132300218.97645566 .. 28301415.22776123
1 53483.27670518884 ... 125950526.51351064 .. 32709720.8270093
2 53489.46838978825 ... 116811489.04125574 .. 37847344.03461099
3 53679.87564592083 ... -107617035.1860528 .. -40589908.33710148
4 53679.87564536537 ... -107617036.17634691 .. -40589907.92654537
5 53679.8756449276 ... -107617036.95679888 .. -40589907.60298561
6 53679.87564457819 ... -107617037.57973605 .. -40589907.34472832
7 53679.87564513386 ... -107617036.58907865 .. -40589907.755435064
8 53681.70075099914 ... -104248039.58874722 .. -41917934.12755567
9 53681.95454490266 ... -103778953.65473533 .. -42099237.91000268
... ... ... ...
52 54187.33158349338 ... 148137668.7933031 .. 7406553.704567532
53 54187.58732417023 ... 148057390.61638692 .. 7667155.671811677
54 54099.70978574142 ... 22419253.343037248 .. -57831703.002449505
55 54099.70978542604 ... 22419252.5377699 .. -57831703.0529715
56 54099.70978514842 ... 22419251.828904394 .. -57831703.09744529
57 54099.70978490359 ... 22419251.203770243 .. -57831703.13666582
58 54099.7097846853 ... 22419250.64639034 .. -57831703.1716355
59 54099.70978449185 ... 22419250.152454622 .. -57831703.20262473
60 54099.70978431551 ... 22419249.702196058 .. -57831703.230873674
61 54099.70978415968 ... 22419249.304290682 .. -57831703.25583802
Length = 62 rows
You can look at the flags directly (note that the leading - from a file has been stripped for internal storage). The initial file did not have flags, so these (format, ddm, and clkcorr) were added by PINT when reading:
[5]:
print(t["flags"])
flags
--------------------------------------------------------------------------
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7592772625547294e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.761880402762111e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.766274548262721e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196932481803e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196927815868e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.671919692413864e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196921203566e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196925871212e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.674097374856601e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.674122500587136e-05'}
...
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.733335634508195e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7337239555170205e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.733771843519471e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845569813715e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845570539083e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845571177607e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845571740715e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845572242792e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845572687715e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845573093294e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.721584557345172e-05'}
Length = 62 rows
This is just looking at the 'flags' column in the TOA table:
[6]:
print(t.table["flags"])
flags
--------------------------------------------------------------------------
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7592772625547294e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.761880402762111e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.766274548262721e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196932481803e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196927815868e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.671919692413864e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196921203566e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196925871212e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.674097374856601e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.674122500587136e-05'}
...
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.733335634508195e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7337239555170205e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.733771843519471e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845569813715e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845570539083e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845571177607e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845571740715e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845572242792e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845572687715e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845573093294e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.721584557345172e-05'}
Length = 62 rows
To extract the values of a flag, you can just treat it like an array slice:
[7]:
print(t["ddm"])
['0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0'
'0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0'
'0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0'
'0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0'
'0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0'
'0.0' '0.0']
However, flags are stored as strings, so you might use a function that also allows type conversions:
[8]:
ddm, _ = t.get_flag_value("ddm", as_type=float)
print(ddm)
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
It’s also easy to add flags. In this case we will just add to the first 10 TOAs:
[9]:
t[:10, "fish"] = "carp"
print(t["flags"])
flags
------------------------------------------------------------------------------------------
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7592772625547294e-05', 'fish': 'carp'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.761880402762111e-05', 'fish': 'carp'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.766274548262721e-05', 'fish': 'carp'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196932481803e-05', 'fish': 'carp'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196927815868e-05', 'fish': 'carp'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.671919692413864e-05', 'fish': 'carp'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196921203566e-05', 'fish': 'carp'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196925871212e-05', 'fish': 'carp'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.674097374856601e-05', 'fish': 'carp'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.674122500587136e-05', 'fish': 'carp'}
...
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.733335634508195e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7337239555170205e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.733771843519471e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845569813715e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845570539083e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845571177607e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845571740715e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845572242792e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845572687715e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845573093294e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.721584557345172e-05'}
Length = 62 rows
If we now try to get those values:
[10]:
print(t["fish"])
['carp' 'carp' 'carp' 'carp' 'carp' 'carp' 'carp' 'carp' 'carp' 'carp' ''
'' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' ''
'' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' ''
'' '' '']
it will return the flag value or an empty string when it is not set. On the other hand:
[11]:
fish, idx = t.get_flag_value("fish")
print(fish)
print(idx)
['carp', 'carp', 'carp', 'carp', 'carp', 'carp', 'carp', 'carp', 'carp', 'carp', None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
get_flag_value also returns an array of the indices where the flag is set. Now let’s set some other fish:
[12]:
t[10:15, "fish"] = "bass"
and we can select only the TOAs that meet some criteria:
[13]:
basstoas = t[t["fish"] == "bass"]
print(len(basstoas))
5
Which can be combined with other selection criteria:
[14]:
print(t[(t["fish"] == "carp") & (t["mjd_float"] > 53679)])
7 TOAs starting at MJD 53679.87564457819
You can set the value of a flag to an empty string, which will delete it:
[15]:
t["fish"] = ""
print(t["flags"])
flags
--------------------------------------------------------------------------
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7592772625547294e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.761880402762111e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.766274548262721e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196932481803e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196927815868e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.671919692413864e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196921203566e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.6719196925871212e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.674097374856601e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.674122500587136e-05'}
...
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.733335634508195e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7337239555170205e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.733771843519471e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845569813715e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845570539083e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845571177607e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845571740715e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845572242792e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845572687715e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.7215845573093294e-05'}
{'format': 'Princeton', 'ddm': '0.0', 'clkcorr': '2.721584557345172e-05'}
Length = 62 rows
This Jupyter notebook can be downloaded from Wideband_TOA_walkthrough.ipynb, or viewed as a python script at Wideband_TOA_walkthrough.py.
Wideband TOA fitting
Traditional pulsar timing involved measuring only the arrival time of each pulse. But as receivers have covered wider and wider contiguous bandwidths, it became necessary to generate many TOAs for each time interval, covering different subbands. This frequency coverage allowed better handling of changing dispersion measures, but resulted in a large number of TOAs and had certain limitations. A newer approach measures the pulse arrival time and the dispersion measure simultaneously from a frequency-resolved data cube. This produces TOAs, each of which has an associated dispersion measure and uncertainty. Working with these data requires somewhat different handling in PINT; this notebook demonstrates how.
[1]:
import astropy.units as u
import matplotlib.pyplot as plt
from astropy.visualization import quantity_support
from pint.fitter import Fitter
from pint.models import get_model_and_toas
import pint.config
import pint.logging
# setup logging
pint.logging.setup(level="INFO")
quantity_support()
[1]:
<astropy.visualization.units.quantity_support.<locals>.MplQuantityConverter at 0x7efde20e0e50>
Set up your inputs
[2]:
model, toas = get_model_and_toas(
pint.config.examplefile("J1614-2230_NANOGrav_12yv3.wb.gls.par"),
pint.config.examplefile("J1614-2230_NANOGrav_12yv3.wb.tim"),
)
The DM and its uncertainty are recorded as the flags pp_dm and pp_dme on the TOAs that have them; they are not currently available as columns in the Astropy table. On the other hand, it is not necessary that every observation have a measured DM (a short example of extracting these flags follows below).
(The name pp_dm refers to the fact that these values are obtained from “phase portraits”, which are like profiles but with one more dimension.)
[3]:
print(open(toas.filename).readlines()[-1])
guppi_57922_J1614-2230_0006.12y.x.ff 812.60505020 57922.064007172196642 0.419 gbt -pp_dm 34.4877666 -pp_dme 0.0004867 -be GUPPI -ver 20200204 -nchx 55 -tobs 891.204 -f Rcvr_800_GUPPI -gof 1.041 -snr 169.071 -fratio 1.258 -pta NANOGrav -subint 1 -nch 64 -flux 2.43148 -bw 187.435 -chbw 3.125 -fe Rcvr_800 -fluxe 0.01498 -nbin 2048 -proc 12y -tmplt J1614-2230.Rcvr_800.GUPPI.12y.x.avg_port.spl -flux_ref_freq 836.73551
[4]:
toas.table[-1]
[4]:
index | mjd | mjd_float | error | freq | obs | flags | delta_pulse_number | tdb | tdbld | ssb_obs_pos | ssb_obs_vel | obs_sun_pos |
---|---|---|---|---|---|---|---|---|---|---|---|---|
d | us | MHz | km | km / s | km | |||||||
int64 | object | float64 | float64 | float64 | str3 | object | float64 | object | float128 | float64[3] | float64[3] | float64[3] |
274 | 57922.06400717255 | 57922.06400717254 | 0.419 | 812.6050502 | gbt | {'format': 'Tempo2', 'name': 'guppi_57922_J1614-2230_0006.12y.x.ff', 'pp_dm': '34.4877666', 'pp_dme': '0.0004867', 'be': 'GUPPI', 'ver': '20200204', 'nchx': '55', 'tobs': '891.204', 'f': 'Rcvr_800_GUPPI', 'gof': '1.041', 'snr': '169.071', 'fratio': '1.258', 'pta': 'NANOGrav', 'subint': '1', 'nch': '64', 'flux': '2.43148', 'bw': '187.435', 'chbw': '3.125', 'fe': 'Rcvr_800', 'fluxe': '0.01498', 'nbin': '2048', 'proc': '12y', 'tmplt': 'J1614-2230.Rcvr_800.GUPPI.12y.x.avg_port.spl', 'flux_ref_freq': '836.73551', 'clkcorr': '3.023951845889943e-05'} | 0.0 | 57922.06480791843 | 57922.064807918432745 | -8106233.828294313 .. -60075124.09656769 | 29.431081448790245 .. -0.7083729035832717 | 8524988.622096533 .. 60354832.968297414 |
[5]:
toas.table["flags"][0]
[5]:
FlagDict({'format': 'Tempo2', 'name': '54724.000006.1.000.000.9y.x.ff', 'pp_dm': '34.4828090', 'pp_dme': '0.0118764', 'be': 'GASP', 'ver': '20200204', 'nchx': '16', 'tobs': '60.078', 'f': 'Rcvr_800_GASP', 'gof': '1.035', 'snr': '35.572', 'fratio': '1.073', 'pta': 'NANOGrav', 'subint': '0', 'nch': '16', 'bw': '60.000', 'chbw': '4.000', 'fe': 'Rcvr_800', 'nbin': '2048', 'proc': '12y', 'tmplt': 'J1614-2230.Rcvr_800.GUPPI.12y.x.avg_port.spl', 'to': '-8.970e-07', 'clkcorr': '2.6166525221999545e-05'})
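Since pp_dm and pp_dme are ordinary flags, you can extract them as floats with get_flag_value, just as in the flags walkthrough above (a minimal sketch; TOAs without a measured DM would simply return None):
dm_vals, dm_idx = toas.get_flag_value("pp_dm", as_type=float)
dm_errs, _ = toas.get_flag_value("pp_dme", as_type=float)
print(dm_vals[-1], dm_errs[-1])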
Do the fit
As before, but now we need a fitter adapted to wideband TOAs. The function Fitter.auto() will examine the model and choose an appropriate one.
[6]:
fitter = Fitter.auto(toas, model)
[7]:
fitter.fit_toas()
[7]:
True
What is new, compared to narrowband fitting?
Residual objects combine time and DM data
[8]:
type(fitter.resids)
[8]:
pint.residuals.WidebandTOAResiduals
If we look into the resids attribute, it has two independent Residual objects.
[9]:
fitter.resids.toa, fitter.resids.dm
[9]:
(<pint.residuals.Residuals at 0x7efde121b6d0>,
<pint.residuals.WidebandDMResiduals at 0x7efde128c150>)
Each of them can be used independently
Time residual
[10]:
time_resids = fitter.resids.toa.time_resids
plt.errorbar(
toas.get_mjds().value,
time_resids.to_value(u.us),
yerr=toas.get_errors().to_value(u.us),
fmt="x",
)
plt.ylabel("us")
plt.xlabel("MJD")
[10]:
Text(0.5, 0, 'MJD')
[plot: wideband time residuals vs. MJD]
[11]:
# Time RMS
print(fitter.resids.toa.rms_weighted())
print(fitter.resids.toa.chi2)
0.17493106055088772 us
156.75077380469221332
DM residual
[12]:
dm_resids = fitter.resids.dm.resids
dm_error = fitter.resids.dm.get_data_error()
plt.errorbar(toas.get_mjds().value, dm_resids.value, yerr=dm_error.value, fmt="x")
plt.ylabel("pc/cm^3")
plt.xlabel("MJD")
[12]:
Text(0.5, 0, 'MJD')

[13]:
# DM RMS
print(fitter.resids.dm.rms_weighted())
print(fitter.resids.dm.chi2)
0.00035817067609101146 pc / cm3
276.42008393172966
However, in the combined residuals, one can access rms and chi2 as well
[14]:
print(fitter.resids.rms_weighted())
print(fitter.resids.chi2)
{'toa': <Quantity 1.74931061e-07 s>, 'dm': <Quantity 0.00035817 pc / cm3>}
433.1708577364218541
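As a quick consistency check (a minimal sketch using the quantities printed above), the combined chi2 is just the sum of the TOA and DM chi2 values:
# should reproduce fitter.resids.chi2 printed above
print(fitter.resids.toa.chi2 + fitter.resids.dm.chi2)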
The initial (prefit) residuals are also stored in a combined residual object.
[15]:
time_resids = fitter.resids_init.toa.time_resids
plt.errorbar(
toas.get_mjds().value,
time_resids.to_value(u.us),
yerr=toas.get_errors().to_value(u.us),
fmt="x",
)
plt.ylabel("us")
plt.xlabel("MJD")
[15]:
Text(0.5, 0, 'MJD')

[16]:
dm_resids = fitter.resids_init.dm.resids
dm_error = fitter.resids_init.dm.get_data_error()
plt.errorbar(toas.get_mjds().value, dm_resids.value, yerr=dm_error.value, fmt="x")
plt.ylabel("pc/cm^3")
plt.xlabel("MJD")
[16]:
Text(0.5, 0, 'MJD')

Matrices
We’re now fitting a mixed set of data, so the matrices used in fitting now have different units in different parts, and some care is needed to keep track of which part goes where.
Design matrix is combined
[17]:
d_matrix, labels, units = fitter.get_designmatrix()
[18]:
print("Number of TOAs:", toas.ntoas)
print("Number of DM measurments:", len(fitter.resids.dm.dm_data))
print("Number of fit params:", len(fitter.model.free_params))
print("Shape of design matrix:", d_matrix.shape)
Number of TOAs: 275
Number of DM measurements: 275
Number of fit params: 130
Shape of design matrix: (275, 131)
Covariance matrix is combined
[19]:
# c_matrix = fitter.get_noise_covariancematrix()
[20]:
# print("Shape of covariance matrix:", c_matrix.shape)
NOTE: these matrices are currently PINTMatrix objects rather than plain arrays; here are the differences.
If you want to access the underlying matrix data:
[21]:
# print(d_matrix.matrix)
A PINTMatrix carries labels that mark all of the elements in the matrix. Each label has a name, the index range it covers within the matrix, and a unit.
[22]:
# print("labels for dimension 0:", d_matrix.labels[0])
[ ]:
This Jupyter notebook can be downloaded from Simulate_and_make_MassMass.ipynb, or viewed as a python script at Simulate_and_make_MassMass.py.
Generate fake data on a relativistic DNS, make a mass-mass diagram
As an example we use the double neutron star PSR B1534+12. We generate fake data, fit post-Keplerian parameters, and plot a mass-mass diagram to illustrate the overlapping constraints.
This reproduces a version of Figure 9 from Fonseca et al. (2014, ApJ, 787, 82).
[1]:
from astropy import units as u, constants as c
import astropy.time
import numpy as np
from matplotlib import pyplot as plt
import matplotlib.cm as cm
import io
import pint.fitter
from pint.models import get_model
import pint.derived_quantities
import pint.simulation
import pint.logging
# setup the logging
pint.logging.setup(level="INFO")
[1]:
1
Some helper functions for plotting
[2]:
def plot_contour(mp, mc, quantity, target, uncertainty, color, nsigma=3, **kwargs):
"""Plot two lines at +/-nsigma * the uncertainty to illustrate a constraint.
Parameters
----------
mp : astropy.units.Quantity
array of pulsar masses (x-axis)
mc : astropy.units.Quantity
array of companion masses (y-axis)
quantity : astropy.units.Quantity
2D array of the prediction as a function of mp and mc. Shape is (len(mc), len(mp))
target : astropy.units.Quantity
best-fit value of the prediction (say from a PINT fit).
uncertainty : astropy.units.Quantity
uncertainty on that best-fit
color : str
color string for the lines
nsigma : float, optional
factor times the uncertainty for the lines (default = 3)
Returns
-------
`~.contour.QuadContourSet`
See :func:`matplotlib.pyplot.contour`
"""
return plt.contour(
mp.value,
mc.value,
quantity.value,
[(target - nsigma * uncertainty).value, (target + nsigma * uncertainty).value],
colors=color,
**kwargs,
)
def plot_fill(
mp, mc, quantity, target, uncertainty, cmap, alpha=0.2, nsigma_max=3, **kwargs
):
"""Fill a region with a color map to illustrate a constraint.
Outside of nsigma_max * uncertainty, constraint is not shown.
Parameters
----------
mp : astropy.units.Quantity
array of pulsar masses (x-axis)
mc : astropy.units.Quantity
array of companion masses (y-axis)
quantity : astropy.units.Quantity
2D array of the prediction as a function of mp,mc. Shape is (len(mc),len(mp))
target : astropy.units.Quantity
best-fit value of the prediction (say from a PINT fit).
uncertainty : astropy.units.Quantity
uncertainty on that best-fit
cmap : str or `~matplotlib.colors.Colormap`
matplotib colormap
alpha : float, optional
alpha for the color fill (default = 0.2)
nsigma_max : float, optional
factor times the uncertainty beyond which no constraint is shown (default = 3)
Returns
-------
`~matplotlib.image.AxesImage`
See :func:`matplotlib.pyplot.imshow`
"""
z = np.fabs((quantity - target) / uncertainty)
z[z >= nsigma_max] = np.nan
plt.imshow(
z,
origin="lower",
extent=(mp.value.min(), mp.value.max(), mc.value.min(), mc.value.max()),
cmap=cmap,
alpha=alpha,
**kwargs,
)
def get_plot_xy(mp, mc, quantity, target, uncertainty, mp_to_plot, nsigma=3):
"""A helper function to find the point in the quantity array that is nsigma * uncertainty away
from the target value at mp=mp_to_plot
returns mp,mc to plot a text label
Parameters
----------
mp : astropy.units.Quantity
array of pulsar masses (x-axis)
mc : astropy.units.Quantity
array of companion masses (y-axis)
quantity : astropy.units.Quantity
2D array of the prediction as a function of mp,mc. Shape is (len(mc),len(mp))
target : astropy.units.Quantity
best-fit value of the prediction (say from a PINT fit).
uncertainty : astropy.units.Quantity
uncertainty on that best-fit
mp_to_plot : astropy.units.Quantity
x-axis value at which to interpolate
nsigma : float, optional
factor times the uncertainty for the lines (default = 3)
Returns
-------
mp_to_plot : astropy.units.Quantity
mc_to_plot : astropy.units.Quantity
"""
z = (quantity - target) / uncertainty
j = np.abs(mp - mp_to_plot).argmin()
i = np.argmin(np.abs(z[:, j] - nsigma))
return mp[j], mc[i]
[3]:
# par file for B1534+12 from ATNF catalog
# basically from Fonseca, Stairs, & Thorsett (2014)
# https://ui.adsabs.harvard.edu/abs/2014ApJ...787...82F/abstract
# except
# * I removed the DM1/DM2 parameters (they were causing errors without a DMEPOCH)
# * I removed RM (PINT couldn't understand it)
# * I removed EPHVER 2 (PINT doesn't do anything with it)
# * I added EPHEM DE440
test_par = """
PSRJ J1537+1155
RAJ 15:37:09.961730 3.000e-06
DECJ +11:55:55.43387 6.000e-05
DM 11.61944 2.000e-05
PEPOCH 52077
F0 26.38213277689397 1.100e-13
F1 -1.686097E-15 2.000e-21
PMRA 1.482 7.000e-03
PMDEC -25.285 1.200e-02
F2 1.70E-29 1.100e-30
BINARY DD
PB 0.420737298879 2.000e-12
ECC 0.27367752 7.000e-08
A1 3.7294636 6.000e-07
T0 52076.827113263 1.100e-08
OM 283.306012 1.200e-05
OMDOT 1.7557950 1.900e-06
PBDOT -0.1366E-12 3.000e-16
#RM 10.6 2.000e-01
PX 0.86 1.800e-01
#DM1 -0.000653 9.000e-06
F3 -1.6E-36 2.000e-37
#DM2 0.00031 1.000e-05
GAMMA 2.0708E-03 5.000e-07
SINI 0.9772 1.600e-03
M2 1.35 5.000e-02
UNITS TDB
EPHEM DE440
"""
[4]:
# PINT wants to read from a file. So make a file-like object
# out of the string
f = io.StringIO(test_par)
[5]:
# load the model into PINT
m = get_model(f)
[6]:
# roughly the parameters from Fonseca, Stairs, Thorsett (2014)
tstart = astropy.time.Time(1990.25, format="jyear")
tstop = astropy.time.Time(2014, format="jyear")
# this is the error on each TOA
error = 5 * u.us
# this is a guess
Ntoa = 1000
# make the new TOAs. Note that even though `error` is passed, the TOAs
# start out perfect
tnew = pint.simulation.make_fake_toas_uniform(
tstart.mjd * u.d, tstop.mjd * u.d, Ntoa, model=m, obs="ARECIBO", error=error
)
# So we still have to add in some noise
tnew.adjust_TOAs(astropy.time.TimeDelta(np.random.normal(size=len(tnew)) * error))
[7]:
# construct a PINT fitter object with the model and simulated TOAs
fit = pint.fitter.WLSFitter(tnew, m)
[8]:
# fit for all of the PK parameters
# by default because the par file doesn't have parameters listed as free
# all of the other parameters will be frozen
# so this will be an underestimate of the true uncertainties (because of covariances)
fit.model.GAMMA.frozen = False
fit.model.PBDOT.frozen = False
fit.model.OMDOT.frozen = False
fit.model.M2.frozen = False
fit.model.SINI.frozen = False
fit.fit_toas()
[8]:
991.51159506540474026
[9]:
# look at the output. Hopefully, since these are simulated TOAs
# the fit will be good. And indeed we see a reduced chi^2 very close to 1
try:
fit.print_summary()
except ValueError as e:
print(f"Unexpected exception: {e}")
Fitted model using weighted_least_square method with 5 free parameters to 1000 TOAs
Prefit residuals Wrms = 4.983997561986146 us, Postfit residuals Wrms = 4.9787337623772485 us
Chisq = 991.512 for 994 d.o.f. for reduced Chisq of 0.997
PAR Prefit Postfit Units
============== ==================== ============================ =====
PSR J1537+1155 J1537+1155 None
EPHEM DE440 DE440 None
CLOCK TT(TAI) None
UNITS TDB TDB None
START 47983.3 d
FINISH 56658 d
BINARY DD DD None
DILATEFREQ N None
DMDATA N None
NTOA 0 None
CHI2 991.512
CHI2R 0.997497
TRES 4.97873 us
POSEPOCH 52077 d
PX 0.86 mas
RAJ 15h37m09.96173s hourangle
DECJ 11d55m55.43387s deg
PMRA 1.482 mas / yr
PMDEC -25.285 mas / yr
F0 26.3821 Hz
PEPOCH 52077 d
F1 -1.6861e-15 Hz / s
F2 1.7e-29 Hz / s2
F3 -1.6e-36 Hz / s3
PLANET_SHAPIRO N None
DM 11.6194 pc / cm3
PB 0.420737 d
PBDOT -1.366e-13 -1.364(6)×10⁻¹³
A1 3.72946 ls
A1DOT 0 ls / s
ECC 0.273678
EDOT 0 1 / s
T0 52076.8 d
OM 283.306 deg
OMDOT 1.75579 1.7557949(5) deg / yr
M2 1.35 1.371(31) solMass
SINI 0.9772 0.9744(24)
A0 0 s
B0 0 s
GAMMA 0.0020708 0.00207094(33) s
DR 0
DTH 0
TZRMJD 52081.9 d
TZRSITE ssb ssb None
TZRFRQ inf MHz
Derived Parameters:
Period = 0.03790444117830463±0.00000000000000016 s
Pdot = (2.4224942±0.0000029)×10⁻¹⁸
Characteristic age = 2.479e+08 yr (braking index = 3)
Surface magnetic field = 9.7e+09 G
Magnetic field at light cylinder = 1614 G
Spindown Edot = 1.756e+33 erg / s (I=1e+45 cm2 g)
Binary model BinaryDD
Orbital Pdot (PBDOT) = (-1.364±0.006)×10⁻¹³ (s/s)
Total mass, assuming GR, from OMDOT is 2.6784593(12) Msun
From SINI in model:
cos(i) = 0.225(10)
i = 77.0(6) deg
Pulsar mass (Shapiro Delay) = 1.3823889745404174 solMass
The value of \(\dot P_B\) is biased because of kinematic effects:
- Galactic acceleration
- The Shklovskii effect (from the source’s proper motion)

We can correct for those, following Nice & Taylor (1995, ApJ, 441, 429):
\[
\left(\frac{\dot P_B}{P_B}\right)^{\rm kin} = \frac{\vec{a} \cdot \vec{n}}{c} + \frac{\mu^2 d}{c}
\]
(Eqn. 2 from that paper), where \(\vec{a} \cdot \vec{n}\) is the component of the Galactic acceleration along the line of sight, \(d\) is the distance, and \(\mu\) is the proper motion. The first term is the Galactic acceleration term, and the second is the Shklovskii term.
For the former we need to know the Galactic potential. As a simplifying assumption we will assume a flat rotation curve, which gives us:
\[
\frac{\vec{a} \cdot \vec{n}}{c} = -\frac{\cos b}{c} \left(\frac{\Theta_0^2}{R_0}\right) \left(\cos l + \frac{\beta}{\sin^2 l + \beta^2}\right), \qquad \beta = \frac{d}{R_0}\cos b - \cos l,
\]
where \((l,b)\) are the Galactic coordinates, \(\Theta_0\) is the rotational velocity, and \(R_0\) is the distance to the Galactic center (Eqn. 5 in the paper above).
For both of these we need to know the distance.
[10]:
# get the distance from the parallax. Note that this is crude (the inversion is not good at low S/N)
d = m.PX.quantity.to(u.kpc, equivalencies=u.parallax())
d_err = d * (m.PX.uncertainty / m.PX.quantity)
print(f"distance: {d:.2f} +/- {d_err:.2f}")
distance: 1.16 kpc +/- 0.24 kpc
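Equivalently (a minimal sketch reusing the model m and the astropy units u imported above), astropy's Distance can do the same parallax inversion:
from astropy.coordinates import Distance
# invert the parallax to a distance; the same caveat about low-S/N parallaxes applies
print(Distance(parallax=m.PX.quantity).to(u.kpc))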
[11]:
# The PBDOT measurements need correction for kinematic effects
# both Shklovskii acceleration and Galactic acceleration
# do those here
# for Galactic acceleration, need to know the size and speed of the Milky Way
# GRAVITY collaboration 2019
# https://ui.adsabs.harvard.edu/abs/2019A&A...625L..10G
R0 = 8.178 * u.kpc
Theta0 = 220 * u.km / u.s
# We will assume a flat rotation curve: not the best but probably OK
b = m.coords_as_GAL().b
l = m.coords_as_GAL().l
beta = (d / R0) * np.cos(b) - np.cos(l)
# Nice & Taylor (1995), Eqn. 5
# https://ui.adsabs.harvard.edu/abs/1995ApJ...441..429N/abstract
a_dot_n = (
-np.cos(b) * (Theta0**2 / R0) * (np.cos(l) + beta / (np.sin(l) ** 2 + beta**2))
)
# Galactic acceleration contribution to PBDOT
PBDOT_gal = (fit.model.PB.quantity * a_dot_n / c.c).decompose()
# Shklovskii contribution
PBDOT_shk = (fit.model.PB.quantity * pint.utils.pmtot(m) ** 2 * d / c.c).to(
u.s / u.s, equivalencies=u.dimensionless_angles()
)
# the uncertainty from the Galactic acceleration isn't included
# but it's much smaller than the Shklovskii term so we'll ignore it
PBDOT_err = (fit.model.PB.quantity * pint.utils.pmtot(m) ** 2 * d_err / c.c).to(
u.s / u.s, equivalencies=u.dimensionless_angles()
)
print(f"PBDOT_gal = {PBDOT_gal:.2e}, PBDOT_shk = {PBDOT_shk:.2e} +/- {PBDOT_err:.2e}")
PBDOT_gal = 1.20e-15, PBDOT_shk = 6.59e-14 +/- 1.38e-14
[12]:
# make a dense grid of Mp,Mc values to compute all of the PK parameters
mp = np.linspace(1, 2, 500) * u.Msun
mc = np.linspace(1, 2, 400) * u.Msun
Mp, Mc = np.meshgrid(mp, mc)
omdot_pred = pint.derived_quantities.omdot(
Mp, Mc, fit.model.PB.quantity, fit.model.ECC.quantity
)
pbdot_pred = pint.derived_quantities.pbdot(
Mp, Mc, fit.model.PB.quantity, fit.model.ECC.quantity
)
gamma_pred = pint.derived_quantities.gamma(
Mp, Mc, fit.model.PB.quantity, fit.model.ECC.quantity
)
sini_pred = (
pint.derived_quantities.mass_funct(fit.model.PB.quantity, fit.model.A1.quantity)
* (Mp + Mc) ** 2
/ Mc**3
) ** (1.0 / 3)
plt.figure(figsize=(16, 16))
fontsize = 24
nsigma = 3
# OMDOT
# for each quantity we plot contours at +/-3 sigma compared to the best fit
# we also (optionally) plot a colored fill if there is enough space
# and then try to label it
# (a little fudging is required for that)
plot_contour(
mp, mc, omdot_pred, fit.model.OMDOT.quantity, fit.model.OMDOT.uncertainty, "m"
)
# this one doesn't have enough space to really display
# plot_fill(mp, mc, omdot_pred, fit.model.OMDOT.quantity,fit.model.OMDOT.uncertainty, cmap=cm.Reds_r)
x, y = get_plot_xy(
mp,
mc,
omdot_pred,
fit.model.OMDOT.quantity,
fit.model.OMDOT.uncertainty,
1.05 * u.Msun,
3,
)
plt.text(x.value, y.value, "$\dot \omega$", fontsize=fontsize, color="m")
# PBDOT
# make sure we correct it for the kinematic terms
PBDOT_corr = fit.model.PBDOT.quantity - PBDOT_gal - PBDOT_shk
# also add the error from the distance uncertainty in quadrature
PBDOT_uncertainty = np.sqrt(fit.model.PBDOT.uncertainty**2 + PBDOT_err**2)
plot_contour(mp, mc, pbdot_pred, PBDOT_corr, PBDOT_uncertainty, "g", linestyles="--")
plot_fill(mp, mc, pbdot_pred, PBDOT_corr, PBDOT_uncertainty, cmap=cm.Greens_r)
x, y = get_plot_xy(mp, mc, pbdot_pred, PBDOT_corr, PBDOT_uncertainty, 1.15 * u.Msun, -3)
plt.text(
x.value,
y.value,
"$\dot P_B$ ($d=%.2f \pm %.2f$ kpc)" % (d.value, d_err.value),
fontsize=fontsize,
color="g",
)
# GAMMA
plot_contour(
mp, mc, gamma_pred, fit.model.GAMMA.quantity, fit.model.GAMMA.uncertainty, "k"
)
# plot_fill(mp, mc, gamma_pred, fit.model.GAMMA.quantity,fit.model.GAMMA.uncertainty, cmap=cm.Greens_r)
x, y = get_plot_xy(
mp,
mc,
gamma_pred,
fit.model.GAMMA.quantity,
fit.model.GAMMA.uncertainty,
1.05 * u.Msun,
3,
)
plt.text(x.value, y.value + 0.01, "$\gamma$", fontsize=fontsize, color="k")
# M2
plot_contour(mp, mc, Mc, fit.model.M2.quantity, fit.model.M2.uncertainty, "r")
plot_fill(mp, mc, Mc, fit.model.M2.quantity, fit.model.M2.uncertainty, cmap=cm.Reds_r)
x, y = get_plot_xy(
mp, mc, Mc, fit.model.M2.quantity, fit.model.M2.uncertainty, 1.35 * u.Msun, 3
)
plt.text(x.value, y.value + 0.01, "$M_2$", fontsize=fontsize, color="r")
# SINI
plot_contour(
mp,
mc,
sini_pred,
fit.model.SINI.quantity,
fit.model.SINI.uncertainty,
"c",
linestyles=":",
)
plot_fill(
mp,
mc,
sini_pred,
fit.model.SINI.quantity,
fit.model.SINI.uncertainty,
cmap=cm.Blues_r,
)
x, y = get_plot_xy(
mp,
mc,
sini_pred,
fit.model.SINI.quantity,
fit.model.SINI.uncertainty,
1.8 * u.Msun,
-3,
)
plt.text(x.value, y.value + 0.02, "$\sin i$", fontsize=fontsize, color="c")
# Mass function
plt.contour(
mp.value,
mc.value,
pint.derived_quantities.mass_funct2(Mp, Mc, 90 * u.deg).value,
[
pint.derived_quantities.mass_funct(
fit.model.PB.quantity, fit.model.A1.quantity
).value
],
colors="k",
)
z = (
pint.derived_quantities.mass_funct2(Mp, Mc, 90 * u.deg).value
- pint.derived_quantities.mass_funct(
fit.model.PB.quantity, fit.model.A1.quantity
).value
)
z[z > 0] = np.nan
z[z <= 0] = 1
plt.imshow(
z,
origin="lower",
extent=(mp.value.min(), mp.value.max(), mc.value.min(), mc.value.max()),
cmap=cm.Blues,
vmin=0,
vmax=1,
alpha=0.2,
)
# plt.contour(mp.value,mc.value,gamma_pred.value,[(f.model.GAMMA.quantity - 3*f.model.GAMMA.uncertainty).value,(f.model.GAMMA.quantity + 3*f.model.GAMMA.uncertainty).value])
# plt.contour(mp.value,mc.value,pbdot_pred.value,[(f.model.PBDOT.quantity - 3*f.model.PBDOT.uncertainty).value,(f.model.PBDOT.quantity + 3*f.model.PBDOT.uncertainty).value])
plt.text(1.2, 1.1, "Mass Function", fontsize=fontsize, color="b")
plt.xlabel("Pulsar Mass $(M_\\odot)$", fontsize=fontsize)
plt.ylabel("Companion Mass $(M_\\odot)$", fontsize=fontsize)
plt.xticks(fontsize=fontsize)
plt.yticks(fontsize=fontsize)
# plt.savefig('PSRB1534_massmass.png')
[12]:
(array([1. , 1.2, 1.4, 1.6, 1.8, 2. ]),
[Text(0, 1.0, '1.0'),
Text(0, 1.2000000000000002, '1.2'),
Text(0, 1.4000000000000001, '1.4'),
Text(0, 1.6, '1.6'),
Text(0, 1.8, '1.8'),
Text(0, 2.0, '2.0')])

[ ]:
This Jupyter notebook can be downloaded from check_phase_connection.ipynb, or viewed as a python script at check_phase_connection.py.
Check for phase connection
This notebook is meant to answer a common type of question: when I have a solution for a pulsar that goes from \({\rm MJD}_1\) to \({\rm MJD}_2\), can I confidently phase connect it to other data starting at \({\rm MJD}_3\)? What is the phase uncertainty when bridging the gap \(\Delta t={\rm MJD}_3-{\rm MJD}_2\)?
This notebook will start with standard pulsar timing. It will then use the simulation.calculate_random_models
function to propagate the phase uncertainties forward and examine what happens.
[1]:
import matplotlib.pyplot as plt
import numpy as np
import astropy.units as u
import pint.fitter, pint.toa, pint.simulation
from pint.models import get_model_and_toas
from pint import simulation
import pint.config
import pint.logging
# setup logging
pint.logging.setup(level="INFO")
[1]:
1
[2]:
# use the same data as `time_a_pulsar` notebook
parfile = pint.config.examplefile("NGC6440E.par")
timfile = pint.config.examplefile("NGC6440E.tim")
[3]:
# we will do this very simply - ignoring some of the TOA filtering
m, t = get_model_and_toas(parfile, timfile)
f = pint.fitter.WLSFitter(t, m)
f.fit_toas()
[3]:
59.574713740962197604
[4]:
print("Current free parameters: ", f.model.free_params)
Current free parameters: ['RAJ', 'DECJ', 'F0', 'F1', 'DM']
[5]:
print(f"Current last TOA: MJD {f.model.FINISH.quantity}")
Current last TOA: MJD 54187.58732417023
[6]:
# pretend we have new observations starting at MJD 59000
# we don't need to track things continuously over that gap, but it helps us visualize what's happening
# so make fake TOAs to cover the gap
MJDmax = 59000
# the number of TOAs is arbitrary since it's mostly for visualization
tnew = pint.simulation.make_fake_toas_uniform(f.model.FINISH.value, MJDmax, 50, f.model)
[7]:
# make fake models crossing from the last existing TOA to the start of new observations
dphase, mrand = simulation.calculate_random_models(f, tnew, Nmodels=100)
# this is the difference in time across the gap
dt = tnew.get_mjds() - f.model.PEPOCH.value * u.d
Compare against an analytic prediction that focuses on the uncertainties from \(F_0\) and \(F_1 = \dot F\):
\[
\sigma_\phi \approx \sqrt{\left(\sigma_{F_0}\,\Delta t\right)^2 + \left(\tfrac{1}{2}\,\sigma_{F_1}\,\Delta t^2\right)^2},
\]
where \(\Delta t\) is the gap over which we are extrapolating the solution.
[8]:
analytic = np.sqrt(
(f.model.F0.uncertainty * dt) ** 2 + (0.5 * f.model.F1.uncertainty * dt**2) ** 2
).decompose()
[9]:
plt.plot(tnew.get_mjds(), dphase.std(axis=0), label="All Free")
tnew.get_mjds() - f.model.PEPOCH.value * u.d
plt.plot(tnew.get_mjds(), analytic, label="Analytic")
plt.xlabel("MJD")
plt.ylabel("Phase Uncertainty (cycles)")
plt.legend()
[9]:
<matplotlib.legend.Legend at 0x7f09b4cfa090>

You can see that the maximum uncertainty is about 0.14 cycles. This means that even at \(3\sigma\) confidence we will have \(<1\) cycle of uncertainty when extrapolating from the end of the existing solution to MJD 59000. The analytic curve has the same shape as the numerical one but is slightly lower, which makes sense: the analytic estimate ignores the covariance between \(F_0\) and \(F_1\), as well as the uncertainties on the other free parameters (\(\alpha\), \(\delta\), \({\rm DM}\)).
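To turn that statement into numbers (a minimal sketch using the dphase array computed above):
max_sigma = dphase.std(axis=0).max()
print(f"Max phase uncertainty: {max_sigma:.3f} cycles; 3-sigma: {3 * max_sigma:.3f} cycles")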
[ ]:
This Jupyter notebook can be downloaded from PINT_observatories.ipynb, or viewed as a python script at PINT_observatories.py.
PINT Observatories
Basic loading and use of observatories in PINT, including loading custom observatories.
PINT needs to know where telescopes are, what clock corrections are necessary, and a good deal of other information in order to correctly process TOAs. In many cases this is handled seamlessly when you load in a set of TOAs. But if you want to see how and where the observatories are defined, or add your own, this is the place to learn.
[1]:
# import the library
import pint.observatory
What observatories are present in PINT? How can we identify them? They have both default names and aliases.
[2]:
for name, aliases in pint.observatory.Observatory.names_and_aliases().items():
print(f"Observatory '{name}' is also known as {aliases}")
Observatory 'gbt' is also known as ['1', 'gb']
Observatory 'gbt_pre_2021' is also known as []
Observatory 'arecibo' is also known as ['ao', 'aoutc', '3']
Observatory 'arecibo_pre_2021' is also known as []
Observatory 'vla' is also known as ['6', 'jvla', 'vl']
Observatory 'meerkat' is also known as ['m', 'mk']
Observatory 'parkes' is also known as ['pk', '7', 'pks']
Observatory 'jodrell' is also known as ['8', 'jb']
Observatory 'jbroach' is also known as ['jboroach']
Observatory 'jbdfb' is also known as ['jbodfb']
Observatory 'jbafb' is also known as ['jboafb']
Observatory 'jodrell_pre_2021' is also known as []
Observatory 'nancay' is also known as ['nc', 'ncy', 'f']
Observatory 'ncyobs' is also known as ['nuppi', 'w']
Observatory 'effelsberg' is also known as ['eff', 'g', 'ef']
Observatory 'effelsberg_pre_2021' is also known as []
Observatory 'gmrt' is also known as ['gm', 'r']
Observatory 'ort' is also known as ['or']
Observatory 'wsrt' is also known as ['i', 'we', 'ws']
Observatory 'fast' is also known as ['fa', 'k']
Observatory 'mwa' is also known as ['u', 'mw']
Observatory 'lwa1' is also known as ['lw', 'x']
Observatory 'ps1' is also known as ['p', 'ps']
Observatory 'hobart' is also known as ['4', 'ho']
Observatory 'most' is also known as ['mo', 'e']
Observatory 'chime' is also known as ['y', 'ch']
Observatory 'magic' is also known as []
Observatory 'lst' is also known as []
Observatory 'virgo' is also known as ['v1']
Observatory 'lho' is also known as ['h1', 'hanford']
Observatory 'llo' is also known as ['l1', 'livingston']
Observatory 'geo600' is also known as ['geohf']
Observatory 'kagra' is also known as ['k1', 'lcgt']
Observatory 'algonquin' is also known as ['ar', 'aro']
Observatory 'drao' is also known as ['dr']
Observatory 'acre' is also known as ['acreroad', 'ar', 'a']
Observatory 'ata' is also known as ['hcro']
Observatory 'ccera' is also known as []
Observatory 'axis' is also known as ['axi']
Observatory 'narrabri' is also known as ['atca']
Observatory 'nanshan' is also known as ['ns']
Observatory 'uao' is also known as ['ns']
Observatory 'dss_43' is also known as ['tid43']
Observatory 'op' is also known as ['obspm']
Observatory 'effelsberg_asterix' is also known as ['effix']
Observatory 'leap' is also known as ['leap']
Observatory 'jodrellm4' is also known as ['jbm4']
Observatory 'gb300' is also known as ['9', 'g3']
Observatory 'gb140' is also known as ['g1', 'a']
Observatory 'gb853' is also known as ['g8', 'b']
Observatory 'la_palma' is also known as ['lap', 'lapalma']
Observatory 'hartebeesthoek' is also known as ['hart']
Observatory 'warkworth_30m' is also known as ['wark30m']
Observatory 'warkworth_12m' is also known as ['wark12m']
Observatory 'lofar' is also known as ['t', 'lf']
Observatory 'de601lba' is also known as ['eflfrlba']
Observatory 'de601lbh' is also known as ['eflfrlbh']
Observatory 'de601hba' is also known as ['eflfrhba']
Observatory 'de601' is also known as ['eflfr']
Observatory 'de602lba' is also known as ['uwlfrlba']
Observatory 'de602lbh' is also known as ['uwlfrlbh']
Observatory 'de602hba' is also known as ['uwlfrhba']
Observatory 'de602' is also known as ['uwlfr']
Observatory 'de603lba' is also known as ['tblfrlba']
Observatory 'de603lbh' is also known as ['tblfrlbh']
Observatory 'de603hba' is also known as ['tblfrhba']
Observatory 'de603' is also known as ['tblfr']
Observatory 'de604lba' is also known as ['polfrlba']
Observatory 'de604lbh' is also known as ['polfrlbh']
Observatory 'de604hba' is also known as ['polfrhba']
Observatory 'de604' is also known as ['polfr']
Observatory 'de605lba' is also known as ['julfrlba']
Observatory 'de605lbh' is also known as ['julfrlbh']
Observatory 'de605hba' is also known as ['julfrhba']
Observatory 'de605' is also known as ['julfr']
Observatory 'fr606lba' is also known as ['frlfrlba']
Observatory 'fr606lbh' is also known as ['frlfrlbh']
Observatory 'fr606hba' is also known as ['frlfrhba']
Observatory 'fr606' is also known as ['frlfr']
Observatory 'se607lba' is also known as ['onlfrlba']
Observatory 'se607lbh' is also known as ['onlfrlbh']
Observatory 'se607hba' is also known as ['onlfrhba']
Observatory 'se607' is also known as ['onlfr']
Observatory 'uk608lba' is also known as ['uklfrlba']
Observatory 'uk608lbh' is also known as ['uklfrlbh']
Observatory 'uk608hba' is also known as ['uklfrhba']
Observatory 'uk608' is also known as ['uklfr']
Observatory 'de609lba' is also known as ['ndlfrlba']
Observatory 'de609lbh' is also known as ['ndlfrlbh']
Observatory 'de609hba' is also known as ['ndlfrhba']
Observatory 'de609' is also known as ['ndlfr']
Observatory 'fi609lba' is also known as ['filfrlba']
Observatory 'fi609lbh' is also known as ['filfrlbh']
Observatory 'fi609hba' is also known as ['filfrhba']
Observatory 'fi609' is also known as ['filfr']
Observatory 'utr-2' is also known as ['utr2']
Observatory 'goldstone' is also known as ['gs']
Observatory 'shao' is also known as ['sh', 's']
Observatory 'pico_veleta' is also known as ['pv', 'v']
Observatory 'iar1' is also known as []
Observatory 'iar2' is also known as []
Observatory 'kat-7' is also known as ['k7']
Observatory 'mkiii' is also known as ['jbmk3']
Observatory 'tabley' is also known as ['pickmere']
Observatory 'darnhall' is also known as []
Observatory 'knockin' is also known as []
Observatory 'defford' is also known as []
Observatory 'cambridge' is also known as ['cam']
Observatory 'princeton' is also known as ['pr', '5']
Observatory 'hamburg' is also known as []
Observatory 'jb_42ft' is also known as ['jb42']
Observatory 'jb_mkii' is also known as ['h', 'j2', 'jbmk2']
Observatory 'jb_mkii_rch' is also known as ['jbmk2roach']
Observatory 'jb_mkii_dfb' is also known as ['jbmk2dfb']
Observatory 'lwa_sv' is also known as ['lwasv', 'ls']
Observatory 'grao' is also known as ['grao']
Observatory 'srt' is also known as ['sr', 'z']
Observatory 'quabbin' is also known as ['2', 'qu']
Observatory 'vla_site' is also known as ['v2', 'c']
Observatory 'gb_20m_xyz' is also known as ['g2']
Observatory 'northern_cross' is also known as ['d', 'bo']
Observatory 'hess' is also known as []
Observatory 'hawc' is also known as []
Observatory 'barycenter' is also known as ['bat', 'bary', '@', 'ssb']
Observatory 'geocenter' is also known as ['geo', 'coe', '0', 'o']
Observatory 'stl_geo' is also known as ['stl_geo', 'spacecraft']
Let’s get the GBT. When we print
it, we find out some basic info about the observatory: its name and aliases, location, and other information (e.g., where the data came from):
[3]:
gbt = pint.observatory.get_observatory("gbt")
print(gbt)
TopoObs('gbt' ('1','gb') at [882589.289 m, -4924872.368 m 3943729.418 m]:
The Robert C. Byrd Green Bank Telescope
This data was obtained by Joe Swiggum from Ryan Lynch in 2021 September.
)
The observatory also includes info on things like the clock file:
[4]:
print(f"GBT clock file is named '{gbt.clock_files}'")
GBT clock file is named '['time_gbt.dat']'
Some special locations are also present, like the solar system barycenter. You can access them explicitly through the pint.observatory.special_locations
module, but if you just try to get one, PINT will automatically import what is needed:
[5]:
ssb = pint.observatory.get_observatory("ssb")
If you want to know where the observatories are defined, you can find that too:
[6]:
print(
f"Observatory definitions are in '{pint.observatory.topo_obs.observatories_json}'"
)
Observatory definitions are in '/home/docs/checkouts/readthedocs.org/user_builds/nanograv-pint/envs/stable/lib/python3.11/site-packages/pint/data/runtime/observatories.json'
That is the default location, although you can override those definitions by setting $PINT_OBS_OVERRIDE
. You can also define a new observatory and load it in. We use JSON
to do this:
[7]:
# We want to create a file-like object containing the new definition, so define it as a string and then wrap it in StringIO
import io
notthegbt = r"""
{
"notgbt": {
"tempo_code": "1",
"itoa_code": "GB",
"clock_file": "time_gbt.dat",
"itrf_xyz": [
882589.289,
4924872.368,
3943729.418
],
"origin": "The Robert C. Byrd Green Bank Telescope, except with one coordinate changed.\nThis data was obtained by Joe Swiggum from Ryan Lynch in 2021 September.\n"
}
}
"""
pint.observatory.topo_obs.load_observatories(io.StringIO(notthegbt))
If we had defined the GBT again, it would have complained unless we used overwrite=True
. But since this has a new name it’s OK. Now let’s try to use it:
[8]:
notgbt = pint.observatory.get_observatory("notgbt")
print(notgbt)
TopoObs('notgbt' ('1','gb') at [882589.289 m, 4924872.368 m 3943729.418 m]:
The Robert C. Byrd Green Bank Telescope, except with one coordinate changed.
This data was obtained by Joe Swiggum from Ryan Lynch in 2021 September.
)
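The new site behaves like any other observatory; for example (a quick sketch, reusing the attribute shown for the GBT above), it carries the clock file we gave it:
print(f"notgbt clock file is named '{notgbt.clock_files}'")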
This Jupyter notebook can be downloaded from solar_wind.ipynb, or viewed as a python script at solar_wind.py.
Solar Wind Models
The standard solar wind model in PINT is implemented as the NE_SW
parameter (Edwards et al. (2006)), which is the solar wind electron density at 1 AU (in cm\(^{-3}\)). This assumes that the electron density falls as \(r^{-2}\) away from the Sun. With SWM=0
this is all that is allowed.
However, You et al. (2007) and You et al. (2012) extend the model to other radial power-law indices \(r^{-p}\) (also see Hazboun et al. (2022)). This is now implemented with SWM=1
in PINT (and the power-law index is SWP
).
Finally, it is clear that the solar wind model can vary from year to year (or even over shorter timescales). Therefore we now have a new SWX
model (like DMX
) that implements a separate solar wind model over different time intervals.
With the new model, though, there is covariance between the power-law index SWP
and NE_SW
, since most of the fit is determined by the maximum excess DM in the data. Therefore for the SWX
model we have reparameterized it to use SWXDM
: the maximum excess DM at conjunction. This greatly reduces the covariance. And to ensure continuity, this is explicitly the excess DM, so the DM from the SWX
model at opposition is 0.
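Concretely, the maximum excess DM corresponding to a given NE_SW can be computed with the get_max_dm() helper that is used later in this notebook. A minimal sketch with a stripped-down par file (the pulsar parameters here are arbitrary):
from io import StringIO
from pint.models import get_model
# a conventional solar-wind model with NE_SW = 30 cm^-3 ...
m_sw = get_model(
    StringIO("PSR J1234+5678\nELAT 3\nELONG 0\nF0 1\nDM 10\nPEPOCH 54000\nNE_SW 30\nSWM 0\n")
)
# ... and the excess DM it predicts at conjunction, which is what SWXDM parameterizes
print(m_sw.get_max_dm())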
[1]:
from io import StringIO
import numpy as np
from astropy.time import Time
import astropy.coordinates
from pint.models import get_model
from pint.fitter import Fitter
from pint.simulation import make_fake_toas_uniform
import pint.utils
import pint.gridutils
import pint.logging
import matplotlib.pyplot as plt
pint.logging.setup(level="WARNING")
[1]:
1
Demonstrate the change in covariance going from NE_SW to DMMAX
[2]:
par = """
PSR J1234+5678
F0 1
DM 10
ELAT 3
ELONG 0
PEPOCH 54000
EPHEM DE440
"""
[3]:
# basic model using standard SW
model0 = get_model(StringIO("\n".join([par, "NE_SW 30\nSWM 0"])))
[4]:
toas = pint.simulation.make_fake_toas_uniform(
54000,
54000 + 365,
153,
model=model0,
obs="gbt",
add_noise=True,
)
[5]:
# standard model with variable index
model1 = get_model(StringIO("\n".join([par, "NE_SW 30\nSWM 1\nSWP 2"])))
# SWX model with 1 segment
model2 = get_model(
StringIO(
"\n".join(
[par, "SWXDM_0001 1\nSWXP_0001 2\nSWXR1_0001 53999\nSWXR2_0001 55000"]
)
)
)
model2.SWXDM_0001.quantity = model0.get_max_dm()
[6]:
# parameter grids
p = np.linspace(1.5, 2.5, 13)
ne_sw = model0.NE_SW.quantity * np.linspace(0.5, 1.5, 15)
dmmax = np.linspace(0.5, 1.5, len(ne_sw)) * model0.get_max_dm()
[7]:
f1 = Fitter.auto(toas, model1)
chi2_SWM1 = pint.gridutils.grid_chisq(f1, ("NE_SW", "SWP"), (ne_sw, p))[0]
[8]:
f2 = Fitter.auto(toas, model2)
chi2_SWX = pint.gridutils.grid_chisq(f2, ("SWXDM_0001", "SWXP_0001"), (dmmax, p))[0]
[9]:
fig, ax = plt.subplots(figsize=(16, 9))
ax.contour(
dmmax / model0.get_max_dm(),
p,
chi2_SWX - chi2_SWX.min(),
np.linspace(2, 100, 10),
colors="b",
)
ax.contour(
ne_sw / model0.NE_SW.quantity,
p,
chi2_SWM1 - chi2_SWM1.min(),
np.linspace(2, 100, 10),
colors="r",
linestyles="--",
)
ax.set_ylabel("p")
ax.set_xlabel("NE_SW or DMMAX / best-fit")
[9]:
Text(0.5, 0, 'NE_SW or DMMAX / best-fit')

SW model limits & scalings
With the new SWX
model, since it describes only the excess DM (which is 0 at opposition), you may need to scale some quantities in order to make the new model agree with the old one:
[10]:
# default model
model = get_model(StringIO("\n".join([par, "NE_SW 1"])))
# SWX model with a single segment to match the default model
model2 = get_model(
StringIO(
"\n".join(
[par, "SWXDM_0001 1\nSWXP_0001 2\nSWXR1_0001 53999\nSWXR2_0001 55000"]
)
)
)
# because of the way SWX is scaled, scale the input
scale = model2.get_swscalings()[0]
model2.SWXDM_0001.quantity = model.get_max_dm() * scale
toas = make_fake_toas_uniform(54000, 54000 + 365.25, 53, model=model, obs="gbt")
t0, elongation = pint.utils.get_conjunction(
model.get_psr_coords(),
Time(54000, format="mjd"),
precision="high",
)
x = toas.get_mjds().value - t0.mjd
fig, ax = plt.subplots(figsize=(16, 9))
ax.plot(x, model.solar_wind_dm(toas), ".:", label="Old Model")
ax.plot(x, model2.swx_dm(toas), label="Scaled New Model")
ax.plot(
x,
model2.swx_dm(toas) + model.get_min_dm(),
"x--",
label="Scaled New Model + Offset",
)
model2.SWXDM_0001.quantity = model.get_max_dm()
ax.plot(
x,
model2.swx_dm(toas) + model.get_min_dm(),
label="Unscaled New Model + Offset",
)
ax.plot(
x,
model2.swx_dm(toas),
"+-",
label="Unscaled New Model",
)
ax.set_xlabel("Days Since Conjunction")
ax.set_ylabel("Solar Wind DM (pc/cm**3)")
ax.legend()
[10]:
<matplotlib.legend.Legend at 0x7f8ed6d35390>

Utility functions
A few functions to help move between models or separate model SWX
segments
Find the next conjunction (time of SW max)
The low
precision version just interpolates the Sun’s ecliptic longitude to match that of the pulsar. The high
precision version uses better coordinate conversions to do this. It also returns the elongation at conjunction.
[11]:
t0, elongation = pint.utils.get_conjunction(
model.get_psr_coords(),
Time(54000, format="mjd"),
precision="high",
)
print(f"Next conjunction at {t0}, with elongation {elongation}")
Next conjunction at 54180.109614738816, with elongation 2.9999638108834428 deg
As expected, the elongation is just about 3 degrees (the ecliptic latitude of the pulsar).
Divide the input times (TOAs) into years centered on each conjunction
This returns integer indices for each year
[12]:
toas = make_fake_toas_uniform(54000, 54000 + 365.25 * 3, 153, model=model, obs="gbt")
elongation = astropy.coordinates.get_sun(
Time(toas.get_mjds(), format="mjd")
).separation(model.get_psr_coords())
t0 = pint.utils.get_conjunction(
model.get_psr_coords(), model.PEPOCH.quantity, precision="high"
)[0]
indices = pint.utils.divide_times(Time(toas.get_mjds(), format="mjd"), t0)
fig, ax = plt.subplots(figsize=(16, 9))
for i in np.unique(indices):
ax.plot(toas.get_mjds()[indices == i], elongation[indices == i].value, "o")
ax.set_xlabel("MJD")
ax.set_ylabel("Elongation (deg)")
[12]:
Text(0, 0.5, 'Elongation (deg)')

Get max/min DM from standard model, or NE_SW from SWX model
[13]:
model0 = get_model(StringIO("\n".join([par, "NE_SW 30\nSWM 0"])))
# standard model with variable index
model1 = get_model(StringIO("\n".join([par, "NE_SW 30\nSWM 1\nSWP 2.5"])))
# SWX model with 1 segment
model2 = get_model(
StringIO(
"\n".join(
[par, "SWXDM_0001 1\nSWXP_0001 2\nSWXR1_0001 53999\nSWXR2_0001 55000"]
)
)
)
# one value of the scale is returned for each SWX segment
scale = model2.get_swscalings()[0]
print(f"SW scaling: {scale}")
model2.SWXDM_0001.quantity = model0.get_max_dm() * scale
SW scaling: 0.9830510553813481
[14]:
# Max is at conjunction, Min at opposition
print(
f"SWM=0: NE_SW = {model0.NE_SW.quantity:.2f} Max DM = {model0.get_max_dm():.4f}, Min DM = {model0.get_min_dm():.4f}"
)
# Max and Min depend on NE_SW and SWP (covariance)
print(
f"SWM=1 and SWP={model1.SWP.value}: NE_SW = {model1.NE_SW.quantity:.2f} Max DM = {model1.get_max_dm():.4f}, Min DM = {model1.get_min_dm():.4f}"
)
# For SWX, the max/min values reported do not assume that it goes to 0 at opposition (for compatibility)
print(
f"SWX and SWP={model2.SWXP_0001.value}: NE_SW = {model2.get_ne_sws()[0]:.2f} Max DM = {model2.get_max_dms()[0]:.4f}, Min DM = {model2.get_min_dms()[0]:.4f}"
)
print(
f"SWX and SWP={model2.SWXP_0001.value}: Scaled NE_SW = {model2.get_ne_sws()[0]/scale:.2f} Scaled Max DM = {model2.get_max_dms()[0]/scale:.4f}, Scaled Min DM = {model2.get_min_dms()[0]/scale:.4f}"
)
SWM=0: NE_SW = 30.00 1 / cm3 Max DM = 0.0086 pc / cm3, Min DM = 0.0001 pc / cm3
SWM=1 and SWP=2.5: NE_SW = 30.00 1 / cm3 Max DM = 0.0290 pc / cm3, Min DM = 0.0001 pc / cm3
SWX and SWP=2.0: NE_SW = 29.49 1 / cm3 Max DM = 0.0084 pc / cm3, Min DM = 0.0001 pc / cm3
SWX and SWP=2.0: Scaled NE_SW = 30.00 1 / cm3 Scaled Max DM = 0.0086 pc / cm3, Scaled Min DM = 0.0001 pc / cm3
The scaled values above agree between the SWM=0
and SWX
models.
[ ]:
This Jupyter notebook can be downloaded from MCMC_walkthrough.ipynb, or viewed as a python script at MCMC_walkthrough.py.
MCMC Walkthrough
This notebook contains examples of how to use the MCMC Fitter class and how to modify it for more specific uses.
All of these examples will use the EmceeSampler
class, which is currently the only Sampler
implementation supported by PINT. Future work may include implementations of other sampling methods.
[1]:
import random
import numpy as np
import pint.models
import pint.toa as toa
import pint.fermi_toas as fermi
from pint.residuals import Residuals
from pint.sampler import EmceeSampler
from pint.mcmc_fitter import MCMCFitter, MCMCFitterBinnedTemplate
from pint.scripts.event_optimize import read_gaussfitfile, marginalize_over_phase
import pint.config
import pint.logging
import matplotlib.pyplot as plt
import pickle
[2]:
pint.logging.setup("WARNING")
[2]:
2
[3]:
np.random.seed(0)
state = np.random.mtrand.RandomState()
Basic Example
This example will show a vanilla, unmodified MCMCFitter
operating with a simple template and on a small dataset. The sampler is a wrapper around the emcee
package, so it requires a number of walkers for the ensemble sampler. This number of walkers must be specified by the user.
The first few lines are the basic methods used to load in models, TOAs, and templates. More detailed information on this can be found in pint.scripts.event_optimize.py
.
[4]:
parfile = pint.config.examplefile("PSRJ0030+0451_psrcat.par")
eventfile = pint.config.examplefile(
"J0030+0451_P8_15.0deg_239557517_458611204_ft1weights_GEO_wt.gt.0.4.fits"
)
gaussianfile = pint.config.examplefile("templateJ0030.3gauss")
weightcol = "PSRJ0030+0451"
[5]:
minWeight = 0.9
nwalkers = 10
# make this larger for real use
nsteps = 50
nbins = 256
phs = 0.0
[6]:
model = pint.models.get_model(parfile)
ts = fermi.get_Fermi_TOAs(
eventfile, weightcolumn=weightcol, minweight=minWeight, ephem="DE421"
)
# Introduce a small error so that residuals can be calculated
ts.table["error"] = 1.0
ts.filename = eventfile
ts.compute_TDBs()
ts.compute_posvels(ephem="DE421", planets=False)
[7]:
weights, _ = ts.get_flag_value("weight", as_type=float)
template = read_gaussfitfile(gaussianfile, nbins)
template /= template.mean()
Sampler and Fitter creation
The sampler must be initialized first, and then passed as an argument into the MCMCFitter
constructor. The fitter will send its log-posterior probability function to the sampler for the MCMC run. The log-prior and log-likelihood functions of the MCMCFitter
can be written by the user. The default behavior is to use the functions implemented in the pint.mcmc_fitter
module.
The EmceeSampler
requires only an argument for the number of walkers in the ensemble.
Here, we use MCMCFitterBinnedTemplate
because the Gaussian template is not a callable function. If the template is analytic and callable, then MCMCFitterAnalyticTemplate
should be used, and template parameters can be used in the optimization.
[8]:
sampler = EmceeSampler(nwalkers)
[9]:
fitter = MCMCFitterBinnedTemplate(
ts, model, sampler, template=template, weights=weights, phs=phs
)
fitter.sampler.random_state = state
The next step determines the predicted starting phase of the pulse, which can be used to set an accurate initial phase in the model. That would give a more accurate fit, although we don't actually use it here. This step uses the marginalize_over_phase method implemented in pint.scripts.event_optimize.py
.
[10]:
phases = fitter.get_event_phases()
maxbin, like_start = marginalize_over_phase(
phases, template, weights=fitter.weights, minimize=True, showplot=True
)
phase = 1.0 - maxbin[0] / float(len(template))
print(f"Starting pulse likelihood: {like_start}")
print(f"Starting pulse phase: {phase}")
print("Pre-MCMC Values:")
for name, val in zip(fitter.fitkeys, fitter.fitvals):
print("%8s:\t%12.5g" % (name, val))

Starting pulse likelihood: 539.5465827007408
Starting pulse phase: 0.5147440486522087
Pre-MCMC Values:
F0: 205.53
F1: -4.2976e-16
The MCMCFitter class is a subclass of pint.fitter.Fitter
. It is run in exactly the same way - with the fit_toas()
method.
[11]:
fitter.fit_toas(maxiter=nsteps, pos=None)
fitter.set_parameters(fitter.maxpost_fitvals)
You must install the tqdm library to use progress indicators with emcee
To make this run relatively fast for demonstration purposes, nsteps
was purposefully kept very small. However, this means that the results of this fit will not be very good. For an example of MCMC fitting that produces better results, look at pint/examples/fitNGC440E_MCMC.py
[12]:
fitter.phaseogram()
samples = np.transpose(sampler.sampler.get_chain(discard=10), (1, 0, 2)).reshape(
(-1, fitter.n_fit_params)
)
ranges = map(
lambda v: (v[1], v[2] - v[1], v[1] - v[0]),
zip(*np.percentile(samples, [16, 50, 84], axis=0)),
)
print("Post-MCMC values (50th percentile +/- (16th/84th percentile):")
for name, vals in zip(fitter.fitkeys, ranges):
print("%8s:" % name + "%25.15g (+ %12.5g / - %12.5g)" % vals)
print("Final ln-posterior: %12.5g" % fitter.lnposterior(fitter.maxpost_fitvals))

Post-MCMC values (50th percentile +/- (16th/84th percentile):
F0: 205.530699280042 (+ 9.0885e-09 / - 1.1338e-08)
F1: -4.29740012337748e-16 (+ 5.1682e-20 / - 5.3725e-20)
Final ln-posterior: -1082.6
Customizable Example
This second example will demonstrate how the MCMCFitter
can be customized for more involved use. Users can define their own prior and likelihood probability functions to allow for more unique configurations.
[13]:
timfile2 = pint.config.examplefile("NGC6440E.tim")
parfile2 = pint.config.examplefile("NGC6440E.par.good")
model2 = pint.models.get_model(parfile2)
toas2 = toa.get_TOAs(timfile2, planets=False, ephem="DE421")
nwalkers2 = 12
nsteps2 = 10
The new probability functions must be defined by the user and must have the following characteristics. They must take two arguments: an MCMCFitter
object, and a vector of fitting parameters (called theta here). They must return a float
(not an astropy.Quantity
).
The new functions can be passed to the constructor of the MCMCFitter
object using the keywords lnprior
and lnlike
, as shown below.
[14]:
def lnprior_basic(ftr, theta):
lnsum = 0.0
for val, key in zip(theta[:-1], ftr.fitkeys[:-1]):
lnsum += getattr(ftr.model, key).prior_pdf(val, logpdf=True)
return lnsum
def lnlikelihood_chi2(ftr, theta):
ftr.set_parameters(theta)
# Uncomment to view progress
# print('Count is: %d' % ftr.numcalls)
return -Residuals(toas=ftr.toas, model=ftr.model).chi2
sampler2 = EmceeSampler(nwalkers=nwalkers2)
fitter2 = MCMCFitter(
toas2, model2, sampler2, lnprior=lnprior_basic, lnlike=lnlikelihood_chi2
)
fitter2.sampler.random_state = state
[15]:
like_start = fitter2.lnlikelihood(fitter2, fitter2.get_parameters())
print(f"Starting pulse likelihood: {like_start}")
Starting pulse likelihood: -59.57510595134
[16]:
fitter2.fit_toas(maxiter=nsteps2, pos=None)
You must install the tqdm library to use progress indicators with emcee
[16]:
18.359665882555686316
[17]:
samples2 = np.transpose(sampler2.sampler.get_chain(), (1, 0, 2)).reshape(
(-1, fitter2.n_fit_params)
)
ranges2 = map(
lambda v: (v[1], v[2] - v[1], v[1] - v[0]),
zip(*np.percentile(samples2, [16, 50, 84], axis=0)),
)
print("Post-MCMC values (50th percentile +/- (16th/84th percentile):")
for name, vals in zip(fitter2.fitkeys, ranges2):
print("%8s:" % name + "%25.15g (+ %12.5g / - %12.5g)" % vals)
print("Final ln-posterior: %12.5g" % fitter2.lnposterior(fitter2.maxpost_fitvals))
Post-MCMC values (50th percentile +/- (16th/84th percentile):
RAJ: 17.8146667648213 (+ 6.3569e-09 / - 6.9211e-09)
DECJ: -20.3581627407672 (+ 2.0255e-06 / - 3.5102e-06)
F0: 61.4854765543732 (+ 9.7415e-12 / - 4.853e-12)
F1: -1.18140566547566e-15 (+ 5.2028e-19 / - 7.1514e-19)
DM: 224.112791617016 (+ 0.0065565 / - 0.0078194)
Final ln-posterior: 18.36
[ ]:
This Jupyter notebook can be downloaded from bayesian-example-NGC6440E.ipynb, or viewed as a python script at bayesian-example-NGC6440E.py.
PINT Bayesian Interface Examples
[1]:
from pint.models import get_model, get_model_and_toas
from pint.bayesian import BayesianTiming
from pint.config import examplefile
from pint.models.priors import Prior
from pint.logging import setup as setup_log
from scipy.stats import uniform
[2]:
import numpy as np
import emcee
import nestle
import corner
import io
import matplotlib.pyplot as plt
[3]:
# Turn off log messages. They can slow down the processing.
setup_log(level="WARNING")
[3]:
1
[4]:
# Read the par and tim files
parfile = examplefile("NGC6440E.par.good")
timfile = examplefile("NGC6440E.tim")
model, toas = get_model_and_toas(parfile, timfile)
[5]:
# This is optional, but the likelihood function behaves better if
# we have the pulse numbers. Make sure that your timing solution is
# phase connected before doing this.
toas.compute_pulse_numbers(model)
[6]:
# Now set the priors.
# I am cheating here by setting the priors around the maximum likelihood estimates.
# This is a bad idea for real datasets and can bias the estimates. I am doing this
# here just to make everything finish faster. In the real world, these priors should
# be informed by, e.g. previous (independent) timing solutions, pulsar search results,
# VLBI localization etc. Note that unbounded uniform priors don't work here.
for par in model.free_params:
param = getattr(model, par)
param_min = float(param.value - 10 * param.uncertainty_value)
param_span = float(20 * param.uncertainty_value)
param.prior = Prior(uniform(param_min, param_span))
[7]:
# Now let us create a BayesianTiming object. This is a wrapper around the
# PINT API that provides lnlikelihood, lnprior and prior_transform
# functions which can be passed to a sampler of your choice.
bt = BayesianTiming(model, toas, use_pulse_numbers=True)
[8]:
print("Number of parameters = ", bt.nparams)
print("Likelihood method = ", bt.likelihood_method)
Number of parameters = 5
Likelihood method = wls
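As a quick sanity check before sampling (a minimal sketch; this mirrors the start-point construction in the next cell), you can evaluate the log-posterior directly at the current parameter values:
params_now = np.array([param.value for param in bt.params], dtype=float)
print(bt.lnposterior(params_now))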
MCMC sampling using emcee
[9]:
nwalkers = 20
sampler = emcee.EnsembleSampler(nwalkers, bt.nparams, bt.lnposterior)
[10]:
# Choose the MCMC start points in the vicinity of the maximum likelihood estimate
# available in the `model` object. This helps the MCMC chains converge faster.
# We can also draw these points from the prior, but the MCMC chains will converge
# slower in that case.
maxlike_params = np.array([param.value for param in bt.params], dtype=float)
maxlike_errors = [param.uncertainty_value for param in bt.params]
start_points = (
np.repeat([maxlike_params], nwalkers).reshape(bt.nparams, nwalkers).T
+ np.random.randn(nwalkers, bt.nparams) * maxlike_errors
)
[11]:
# ** IMPORTANT!!! **
# This is used to exclude some of the following time-consuming steps from the readthedocs build.
# Set this to False while actually using this example.
rtd = True
[12]:
# Use longer chain_length for real runs. It is kept small here so that
# the sampling finishes quickly (and because I know the burn in is short
# because of the cheating priors above).
if not rtd:
print("Running emcee...")
chain_length = 1000
sampler.run_mcmc(
start_points,
chain_length,
progress=True,
)
[13]:
if not rtd:
# Merge all the chains together after discarding the first 100 samples as 'burn-in'.
# The burn-in should be decided after looking at the chains in the real world.
samples_emcee = sampler.get_chain(flat=True, discard=100)
# Plot the MCMC chains to make sure that the burn-in has been removed properly.
# Otherwise, go back and discard more points.
for idx, param_chain in enumerate(samples_emcee.T):
plt.subplot(bt.nparams, 1, idx + 1)
plt.plot(param_chain, label=bt.param_labels[idx])
plt.legend()
plt.show()
[14]:
# Plot the posterior distribution.
if not rtd:
fig = corner.corner(samples_emcee, labels=bt.param_labels)
plt.show()
Nested sampling with nestle
Nested sampling computes the Bayesian evidence along with posterior samples. This allows us to compare two models. Let us compare the model above with and without an EFAC.
[15]:
# Let us run the model without EFAC first. We can reuse the `bt` object from before.
# Nestle is really simple :)
# method='multi' runs the MultiNest algorithm.
# `npoints` is the number of live points.
# `dlogz` is the target accuracy in the computed Bayesian evidence.
# Increasing `npoints` or decreasing `dlogz` gives more accurate results,
# but at the cost of time.
if not rtd:
print("Running nestle...")
result_nestle_1 = nestle.sample(
bt.lnlikelihood,
bt.prior_transform,
bt.nparams,
method="multi",
npoints=150,
dlogz=0.5,
callback=nestle.print_progress,
)
[16]:
# Plot the posterior
# The nested samples come with weights, which must be taken into account
# while plotting.
if not rtd:
fig = corner.corner(
result_nestle_1.samples,
weights=result_nestle_1.weights,
labels=bt.param_labels,
range=[0.999] * bt.nparams,
)
plt.show()
Let us create a new model with an EFAC applied to all TOAs (all TOAs in this dataset are from the GBT).
[17]:
# casting the model to str gives the par file representation.
# Add an EFAC to the par file and make it unfrozen.
parfile = f"{str(model)}EFAC TEL gbt 1 1"
model2 = get_model(io.StringIO(parfile))
[18]:
# Now set the priors.
# Again, don't do this with real data. Use uninformative priors or priors
# motivated by previous experiments. This is done here with the sole purpose
# of making the run finish fast. Let us try this with the prior_info option now.
prior_info = {}
for par in model2.free_params:
param = getattr(model2, par)
param_min = float(param.value - 10 * param.uncertainty_value)
param_max = float(param.value + 10 * param.uncertainty_value)
prior_info[par] = {"distr": "uniform", "pmin": param_min, "pmax": param_max}
prior_info["EFAC1"] = {"distr": "normal", "mu": 1, "sigma": 0.1}
[19]:
bt2 = BayesianTiming(model2, toas, use_pulse_numbers=True, prior_info=prior_info)
print(bt2.likelihood_method)
wls
[20]:
if not rtd:
result_nestle_2 = nestle.sample(
bt2.lnlikelihood,
bt2.prior_transform,
bt2.nparams,
method="multi",
npoints=150,
dlogz=0.5,
callback=nestle.print_progress,
)
[21]:
# Plot the posterior.
# The EFAC looks consistent with 1.
if not rtd:
fig2 = corner.corner(
result_nestle_2.samples,
weights=result_nestle_2.weights,
labels=bt2.param_labels,
range=[0.999] * bt2.nparams,
)
plt.show()
Now let us look at the evidences and compute the Bayes factor.
[22]:
if not rtd:
print(
f"Evidence without EFAC : {result_nestle_1.logz} +/- {result_nestle_1.logzerr}"
)
print(f"Evidence with EFAC : {result_nestle_2.logz} +/- {result_nestle_2.logzerr}")
bf = np.exp(result_nestle_1.logz - result_nestle_2.logz)
print(f"Bayes factor : {bf} (in favor of no EFAC)")
The Bayes factor tells us that the EFAC is unnecessary for this dataset.
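For reference, the same comparison can be expressed in log10 (a minimal sketch; it only runs if you set rtd = False and executed the sampling cells above):
if not rtd:
    log10_bf = (result_nestle_1.logz - result_nestle_2.logz) / np.log(10)
    print(f"log10(Bayes factor) = {log10_bf:.2f} (positive favors the no-EFAC model)")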
This Jupyter notebook can be downloaded from bayesian-wideband-example.ipynb, or viewed as a python script at bayesian-wideband-example.py.
PINT Bayesian Interface Example (Wideband)
[1]:
import corner
import emcee
import matplotlib.pyplot as plt
import numpy as np
from pint.bayesian import BayesianTiming
from pint.config import examplefile
from pint.fitter import WidebandDownhillFitter
from pint.logging import setup as setup_log
from pint.models import get_model_and_toas
[2]:
# Turn off log messages. They can slow down the processing.
setup_log(level="WARNING")
[2]:
1
[3]:
# This is a simulated dataset.
m, t = get_model_and_toas(examplefile("test-wb-0.par"), examplefile("test-wb-0.tim"))
[4]:
# Fit the model to the data to get the parameter uncertainties.
ftr = WidebandDownhillFitter(t, m)
ftr.fit_toas()
m = ftr.model
[5]:
# Now set the priors.
# I am cheating here by setting the priors around the maximum likelihood estimates.
# This is a bad idea for real datasets and can bias the estimates. I am doing this
# here just to make everything finish faster. In the real world, these priors should
# be informed by, e.g. previous (independent) timing solutions, pulsar search results,
# VLBI localization etc. Note that unbounded uniform priors don't work here.
prior_info = {}
for par in m.free_params:
param = getattr(m, par)
param_min = float(param.value - 10 * param.uncertainty_value)
param_max = float(param.value + 10 * param.uncertainty_value)
prior_info[par] = {"distr": "uniform", "pmin": param_min, "pmax": param_max}
[6]:
# Set the EFAC and DMEFAC priors and unfreeze them.
# Don't do this before the fitting step. The fitter doesn't know
# how to deal with noise parameters.
prior_info["EFAC1"] = {"distr": "normal", "mu": 1, "sigma": 0.1}
prior_info["DMEFAC1"] = {"distr": "normal", "mu": 1, "sigma": 0.1}
m.EFAC1.frozen = False
m.EFAC1.uncertainty_value = 0.01
m.DMEFAC1.frozen = False
m.DMEFAC1.uncertainty_value = 0.01
[7]:
# The likelihood function behaves better if `use_pulse_numbers==True`.
bt = BayesianTiming(m, t, use_pulse_numbers=True, prior_info=prior_info)
[8]:
print("Number of parameters = ", bt.nparams)
print("Likelihood method = ", bt.likelihood_method)
Number of parameters = 8
Likelihood method = wls
[9]:
nwalkers = 25
sampler = emcee.EnsembleSampler(nwalkers, bt.nparams, bt.lnposterior)
[10]:
# Start the sampler close to the maximum likelihood estimate.
maxlike_params = np.array([param.value for param in bt.params], dtype=float)
maxlike_errors = [param.uncertainty_value for param in bt.params]
start_points = (
np.repeat([maxlike_params], nwalkers).reshape(bt.nparams, nwalkers).T
+ np.random.randn(nwalkers, bt.nparams) * maxlike_errors
)
[11]:
# ** IMPORTANT!!! **
# This is used to exclude the following time-consuming steps from the readthedocs build.
# Set this to False while actually using this example.
rtd = True
[12]:
if not rtd:
print("Running emcee...")
chain_length = 1000
sampler.run_mcmc(
start_points,
chain_length,
progress=True,
)
samples_emcee = sampler.get_chain(flat=True, discard=100)
[13]:
# Plot the chains to make sure they have converged and the burn-in has been removed properly.
if not rtd:
for idx, param_chain in enumerate(samples_emcee.T):
plt.subplot(bt.nparams, 1, idx + 1)
plt.plot(param_chain)
plt.ylabel(bt.param_labels[idx])
plt.autoscale()
plt.show()
[14]:
if not rtd:
fig = corner.corner(
samples_emcee, labels=bt.param_labels, quantiles=[0.5], truths=maxlike_params
)
plt.show()
This Jupyter notebook can be downloaded from simulation_example.ipynb, or viewed as a python script at simulation_example.py.
Demonstrate TOA simulation using PINT
[1]:
from pint.models import get_model
from pint.simulation import (
make_fake_toas_uniform,
make_fake_toas_fromtim,
)
from pint.residuals import Residuals, WidebandTOAResiduals
from pint.logging import setup as setup_log
from pint import dmu
from pint.config import examplefile
import numpy as np
import matplotlib.pyplot as plt
import astropy.units as u
import io
# Turn logging level to warnings and above
setup_log(level="WARNING")
[1]:
1
Basic example
[2]:
# First, let us create a simple model from which we will simulate TOAs.
m = get_model(
io.StringIO(
"""
RAJ 05:00:00
DECJ 20:00:00
PEPOCH 55000
F0 100
F1 -1e-14
DM 15
PHOFF 0
EFAC tel gbt 1.5
TZRMJD 55000
TZRFRQ 1400
TZRSITE gbt
EPHEM DE440
CLOCK TT(BIPM2019)
UNITS TDB
"""
)
)
[3]:
# The simplest type of simulation we can do is narrowband TOAs with uniformly
# spaced epochs (one TOA per epoch) with a single frequency and equal TOA uncertainties.
tsim = make_fake_toas_uniform(
model=m,
startMJD=54000,
endMJD=56000,
ntoas=100,
freq=1400 * u.MHz,
obs="gbt",
error=1 * u.us,
include_bipm=True,
include_gps=True,
)
[4]:
# Let us try plotting the residuals
res = Residuals(tsim, m)
plt.errorbar(
tsim.get_mjds(),
res.time_resids.to_value("us"),
res.get_data_error().to_value("us"),
marker="+",
ls="",
)
plt.xlabel("MJD")
plt.ylabel("Residuals (us)")
plt.show()

Here we see that the TOAs don’t have the expected white noise. The noise should be 1.5 us, including the EFAC. The noise can be included by using the add_noise
option.
[5]:
tsim = make_fake_toas_uniform(
model=m,
startMJD=54000,
endMJD=56000,
ntoas=100,
freq=1400 * u.MHz,
obs="gbt",
error=1 * u.us,
include_bipm=True,
include_gps=True,
add_noise=True,
)
[6]:
res = Residuals(tsim, m)
plt.errorbar(
tsim.get_mjds(),
res.time_resids.to_value("us"),
res.get_data_error().to_value("us"),
marker="+",
ls="",
)
plt.xlabel("MJD")
plt.ylabel("Residuals (us)")
plt.show()

The same thing can be achieved in the command line using the following command:
$ zima --startMJD 54000 --ntoa 100 --duration 2000 --obs gbt --freq 1400 --error 1 --addnoise test.par test.tim
Multiple frequency example
Multiple frequency TOAs can be simulated by passing an array of frequencies into the freq
parameter.
[7]:
freqs = np.linspace(1000, 2000, 4) * u.MHz
tsim = make_fake_toas_uniform(
model=m,
startMJD=54000,
endMJD=56000,
ntoas=100,
freq=freqs,
obs="gbt",
error=1 * u.us,
include_bipm=True,
include_gps=True,
add_noise=True,
)
[8]:
res = Residuals(tsim, m)
plt.subplot(211)
plt.errorbar(
tsim.get_mjds(),
res.time_resids.to_value("us"),
res.get_data_error().to_value("us"),
marker="+",
ls="",
)
plt.xlabel("MJD")
plt.ylabel("Residuals (us)")
plt.subplot(212)
plt.errorbar(
tsim.get_freqs(),
res.time_resids.to_value("us"),
res.get_data_error().to_value("us"),
marker="+",
ls="",
)
plt.xlabel("Freq (MHz)")
plt.ylabel("Residuals (us)")
plt.show()

We see that the frequencies are distributed amongst epochs such that there is only one TOA per epoch. To distribute the TOAs such that each epoch contains all frequencies, use the multi_freqs_in_epoch
option. Note that this option doesn’t change the total number of TOAs.
[9]:
freqs = np.linspace(1000, 2000, 4) * u.MHz
tsim = make_fake_toas_uniform(
model=m,
startMJD=54000,
endMJD=56000,
ntoas=100,
freq=freqs,
obs="gbt",
error=1 * u.us,
include_bipm=True,
include_gps=True,
add_noise=True,
multi_freqs_in_epoch=True,
)
[10]:
res = Residuals(tsim, m)
plt.subplot(211)
plt.errorbar(
tsim.get_mjds(),
res.time_resids.to_value("us"),
res.get_data_error().to_value("us"),
marker="+",
ls="",
)
plt.xlabel("MJD")
plt.ylabel("Residuals (us)")
plt.subplot(212)
plt.errorbar(
tsim.get_freqs(),
res.time_resids.to_value("us"),
res.get_data_error().to_value("us"),
marker="+",
ls="",
)
plt.xlabel("Freq(MHz)")
plt.ylabel("Residuals (us)")
plt.show()

The same thing can be achieved in the command line using the following command:
$ zima --startMJD 54000 --ntoa 100 --duration 2000 --obs gbt --freq 1000 1333.33 1666.67 2000 --error 1 --addnoise --multifreq test.par test.tim
Wideband TOA simulation example
Wideband TOAs can be simulated using the wideband
option. The white noise RMS for the wideband DMs is controlled using the wideband_dm_error
parameter.
[13]:
m2 = get_model(
io.StringIO(
"""
RAJ 05:00:00
DECJ 20:00:00
PEPOCH 55000
F0 100
F1 -1e-14
DMEPOCH 55000
DM 15
DM1 1
DM2 0.5
PHOFF 0
EFAC tel gbt 1.5
TZRMJD 55000
TZRFRQ 1400
TZRSITE gbt
EPHEM DE440
CLOCK TT(BIPM2019)
UNITS TDB
"""
)
)
tsim = make_fake_toas_uniform(
model=m2,
startMJD=54000,
endMJD=56000,
ntoas=100,
freq=1400 * u.MHz,
obs="gbt",
error=1 * u.us,
include_bipm=True,
include_gps=True,
wideband=True,
wideband_dm_error=1e-5 * dmu,
add_noise=True,
)
[14]:
res = WidebandTOAResiduals(tsim, m2)
plt.subplot(211)
plt.errorbar(
tsim.get_mjds(),
res.toa.time_resids.to_value("us"),
res.toa.get_data_error().to_value("us"),
marker="+",
ls="",
)
plt.xlabel("MJD")
plt.ylabel("Residuals (us)")
plt.subplot(212)
plt.errorbar(
tsim.get_mjds(),
tsim.get_dms().to_value(dmu),
res.dm.get_data_error().to_value(dmu),
marker="+",
ls="",
)
plt.xlabel("MJD")
plt.ylabel("Wideband DM (dmu)")
plt.show()

The same thing can be achieved in the command line using the following command:
$ zima --startMJD 54000 --ntoa 100 --duration 2000 --obs gbt --freq 1400 --error 1 --addnoise --wideband --dmerror 1e-5 test.par test.tim
Simulating TOAs based on a tim file
TOAs can be simulated to match the configuration of an existing tim file (e.g. epochs, TOA uncertainties, frequencies, flags, etc.) using the make_fake_toas_fromtim
function. This also works with wideband tim files.
[15]:
tsim = make_fake_toas_fromtim(
timfile=examplefile("B1855+09_NANOGrav_9yv1.tim"),
model=m,
add_noise=True,
)
[16]:
res = Residuals(tsim, m)
plt.errorbar(
tsim.get_mjds(),
res.time_resids.to_value("us"),
res.get_data_error().to_value("us"),
marker="+",
ls="",
)
plt.xlabel("MJD")
plt.ylabel("Residuals (us)")
plt.show()

The same thing can be achieved in the command line using the following command:
$ zima --inputtim B1855+09_NANOGrav_9yv1.tim --addnoise test.par test.tim
This Jupyter notebook can be downloaded from paper_validation_example.ipynb, or viewed as a python script at paper_validation_example.py.
Validation Example for PINT paper
A comparison between PINT and TEMPO/TEMPO2 results. This example is presented in the PINT paper to validate that PINT is able to process the PSR J1600-3053 NANOGrav 11-year data set, which uses the DD binary model, and obtain results comparable with TEMPO/TEMPO2. For more discussion see: https://arxiv.org/abs/2012.00074
Method of this comparison:
Requirements:
- Data set: PSR J1600-3053 NANOGrav 11-year data. One copy of this data set is included in the PINT source code as docs/examples/J1600-3053_NANOGrav_11yv1.gls.par and docs/examples/J1600-3053_NANOGrav_11yv1.tim, which is the default data path in this notebook. Note that this requires the user to download the PINT source code from GitHub.
- The official NANOGrav 11-year data can be downloaded at https://data.nanograv.org/, in which case the data path should be changed to the data location.
- PINT version: 0.8.0 or higher.
- TEMPO and its python utils tempo_utils. TEMPO version for the current comparison: 13.101 (2020-11-04 c5fbddf).
- TEMPO2 and its python utils tempo2_utils. TEMPO2 version for the current comparison: 2019.01.1.
- tempo_utils and tempo2_utils are packaged together and can be downloaded from https://github.com/demorest/tempo_utils.
- The TEMPO2 general2 plugin.
[1]:
import pint
import sys
from pint import toa
from pint import models
from pint.fitter import GLSFitter
import os
import matplotlib.pyplot as plt
import astropy.units as u
import tempo2_utils as t2u
import tempo_utils
import tempo2_utils
import numpy as np
from astropy.table import Table
from astropy.io import ascii
import subprocess
import tempfile
from pint import ls
import astropy.constants as ct
from pint.solar_system_ephemerides import objPosVel_wrt_SSB
from astropy.time import Time
Print the PINT and TEMPO/TEMPO2 version
[2]:
print("PINT version: ", pint.__version__)
tempo_v = subprocess.check_output(["tempo", "-v"])
print("TEMPO version: ", tempo_v.decode("utf-8"))
#Not sure why tempo2_v = subprocess.check_output(["tempo2", "-v"]) does not work.
process = subprocess.Popen(['tempo2', '-v'], stdout=subprocess.PIPE)
tempo2_v = process.communicate()[0]
print("TEMPO2 version: ", tempo2_v.decode("utf-8"))
PINT version: 0.8+68.g6c072c27
TEMPO version: Tempo v 13.101 (2020-11-04 c5fbddf)
TEMPO2 version: 2019.01.1
Redefine the tempo2_utils function for a larger number of observations
[3]:
_nobs = 30000
def newpar2(parfile, timfile):
    """
    Run tempo2, return new parfile (as list of lines). Input parfile
    can be either lines or a filename.
    """
    orig_dir = os.getcwd()
    try:
        temp_dir = tempfile.mkdtemp(prefix="tempo2")
        try:
            lines = open(parfile, 'r').readlines()
        except:
            lines = parfile
        open("%s/pulsar.par" % temp_dir, 'w').writelines(lines)
        timpath = os.path.abspath(timfile)
        os.chdir(temp_dir)
        cmd = "tempo2 -nobs %d -newpar -f pulsar.par %s -norescale" % (_nobs, timpath)
        os.system(cmd + " > /dev/null")
        outparlines = open('new.par').readlines()
    finally:
        os.chdir(orig_dir)
        os.system("rm -rf %s" % temp_dir)
    for l in outparlines:
        if l.startswith('TRES'):
            rms = float(l.split()[1])
        elif l.startswith('CHI2R'):
            (foo, chi2r, ndof) = l.split()
    return float(chi2r) * float(ndof), int(ndof), rms, outparlines
Set up data file paths for PSR J1600-3053.
Note: this path only works when PINT is installed from the source code, which has the docs and examples directories.
[4]:
psr = "J1600-3053"
par_file = os.path.join('../examples', psr + "_NANOGrav_11yv1.gls.par")
tim_file = os.path.join('../examples', psr + "_NANOGrav_11yv1.tim")
PINT run
Load TOAs to PINT
[5]:
t = toa.get_TOAs(tim_file, ephem="DE436", bipm_version="BIPM2015")
INFO: Applying clock corrections (include_gps = True, include_bipm = True) [pint.toa]
INFO: Observatory gbt, loading clock file
/home/luo/.local/lib/python3.6/site-packages/pint/datafiles/time.dat [pint.observatory.topo_obs]
INFO: Applying observatory clock corrections. [pint.observatory.topo_obs]
INFO: Applying GPS to UTC clock correction (~few nanoseconds) [pint.observatory.topo_obs]
INFO: Observatory gbt, loading GPS clock file
/home/luo/.local/lib/python3.6/site-packages/pint/datafiles/gps2utc.clk [pint.observatory.topo_obs]
INFO: Applying TT(TAI) to TT(BIPM2015) clock correction (~27 us) [pint.observatory.topo_obs]
INFO: Observatory gbt, loading BIPM clock file
/home/luo/.local/lib/python3.6/site-packages/pint/datafiles/tai2tt_bipm2015.clk [pint.observatory.topo_obs]
INFO: Computing TDB columns. [pint.toa]
INFO: Using EPHEM = DE436 for TDB calculation. [pint.toa]
INFO: Computing PosVels of observatories and Earth, using DE436 [pint.toa]
INFO: Set solar system ephemeris to link:
https://data.nanograv.org/static/data/ephem/de436.bsp [pint.solar_system_ephemerides]
[6]:
print("There are {} TOAs in the dataset.".format(t.ntoas))
There are 12433 TOAs in the dataset.
Load timing model from .par file
Since PINT only supports the IAU 2000 precession-nutation model, while the NANOGrav 11-year data set uses the older model, you will see a UserWarning: PINT only supports 'T2CMETHOD IAU2000B'.
[7]:
m = models.get_model(par_file)
INFO: Parameter A1DOT's value will be scaled by 1e-12 [pint.models.parameter]
INFO: Parameter A1DOT's value will be scaled by 1e-12 [pint.models.parameter]
/home/luo/.local/lib/python3.6/site-packages/pint/models/timing_model.py:304: UserWarning: PINT only supports 'T2CMETHOD IAU2000B'
warn("PINT only supports 'T2CMETHOD IAU2000B'")
Make the General Least Square fitter
[8]:
f = GLSFitter(model=m, toas=t)
Fit the TOAs for 9 iterations.
The resulting chi2 value should be close to the TEMPO and TEMPO2 values, but not identical.
[9]:
chi2 = f.fit_toas(9)
print("Postfit Chi2: ", chi2)
print("Degree of freedom: ", f.resids.dof)
/home/luo/.local/lib/python3.6/site-packages/pint/models/timing_model.py:304: UserWarning: PINT only supports 'T2CMETHOD IAU2000B'
warn("PINT only supports 'T2CMETHOD IAU2000B'")
(the same warning is repeated for each fit iteration)
Postfit Chi2: 12368.094375740437515
Degree of freedom: 12307
The weighted RMS values for the pre-fit and post-fit residuals
[10]:
print("Pre-fit residual weighted RMS:", f.resids_init.rms_weighted())
print("Post-fit residual weighted RMS:", f.resids.rms_weighted())
Pre-fit residual weighted RMS: 0.944170684867224 us
Post-fit residual weighted RMS: 0.9441138383219785 us
Plot the pre-fit and post-fit residuals
[11]:
pint_prefit = f.resids_init.time_resids.to_value(u.us)
pint_postfit = f.resids.time_resids.to_value(u.us)
plt.figure(figsize=(8,5), dpi=150)
plt.subplot(2, 1, 1)
plt.errorbar(t.get_mjds().to_value(u.day), f.resids_init.time_resids.to_value(u.us),
yerr=t.get_errors().to_value(u.us), fmt='x')
plt.xlabel('MJD (day)')
plt.ylabel('Time Residuals (us)')
plt.title('PINT pre-fit residuals for PSR J1600-3053 NANOGrav 11-year data')
plt.grid(True)
plt.subplot(2, 1, 2)
plt.errorbar(t.get_mjds().to_value(u.day), f.resids.time_resids.to_value(u.us),
yerr=t.get_errors().to_value(u.us), fmt='x')
plt.xlabel('MJD (day)')
plt.ylabel('Time Residuals (us)')
plt.title('PINT post-fit residuals for PSR J1600-3053 NANOGrav 11-year data')
plt.grid(True)
plt.tight_layout()
plt.savefig("J1600_PINT")

TEMPO run
Use tempo_utils to analyze the same data set.
[12]:
tempo_toa = tempo_utils.read_toa_file(tim_file)
tempo_chi2, ndof, rms_t, tempo_par = tempo_utils.run_tempo(tempo_toa ,par_file, get_output_par=True,
gls=True)
[13]:
print("TEMPO postfit chi2: ", tempo_chi2)
print("TEMPO postfit weighted rms: ", rms_t)
TEMPO postfit chi2: 12368.46
TEMPO postfit weighted rms: 0.944
Write the TEMPO postfit model to a new .par file, for comparison later
[14]:
# Write out the post fit tempo parfile.
tempo_parfile = open(psr + '_tempo.par', 'w')
for line in tempo_par:
tempo_parfile.write(line)
tempo_parfile.close()
Get the TEMPO residuals
[15]:
tempo_prefit = tempo_toa.get_prefit()
tempo_postfit = tempo_toa.get_resids()
mjds = tempo_toa.get_mjd()
freqs = tempo_toa.get_freq()
errs = tempo_toa.get_resid_err()
Plot the PINT - TEMPO residual difference.
[16]:
tp_diff_pre = (pint_prefit - tempo_prefit) * u.us
tp_diff_post = (pint_postfit - tempo_postfit) * u.us
[17]:
plt.figure(figsize=(8,5), dpi=150)
plt.subplot(2, 1, 1)
plt.plot(mjds, (tp_diff_pre - tp_diff_pre.mean()).to_value(u.ns), '+')
plt.xlabel('MJD (day)')
plt.ylabel('Time Residuals (ns)')
plt.title('PSR J1600-3053 prefit residual differences between PINT and TEMPO')
plt.grid(True)
plt.subplot(2, 1, 2)
plt.plot(mjds, (tp_diff_post - tp_diff_post.mean()).to_value(u.ns), '+')
plt.xlabel('MJD (day)')
plt.ylabel('Time Residuals (ns)')
plt.title('PSR J1600-3053 postfit residual differences between PINT and TEMPO')
plt.grid(True)
plt.tight_layout()
plt.savefig("J1600_PINT_tempo.eps")

The PINT-TEMPO pre-fit residual discrepancy is due to the different precession-nutation models used in the two packages:
- TEMPO: IAU 1976 precession and IAU 1980 nutation.
- PINT: IAU 2000B precession-nutation.
Compare the parameters between TEMPO and PINT
Reported quantities:
- TEMPO value
- TEMPO uncertainty
- Parameter units
- TEMPO parameter value - PINT parameter value
- TEMPO/PINT parameter absolute difference divided by the TEMPO uncertainty
- PINT uncertainty divided by the TEMPO uncertainty, if TEMPO provides the uncertainty value
[18]:
# Create the parameter compare table
tv = []
tu = []
tv_pv = []
tv_pv_tc = []
tc_pc = []
units = []
names = []
no_t_unc = []
tempo_new_model = models.get_model(psr + '_tempo.par')
for param in tempo_new_model.params:
t_par = getattr(tempo_new_model, param)
pint_par = getattr(f.model, param)
tempoq = t_par.quantity
pintq = pint_par.quantity
try:
diffq = tempoq - pintq
if t_par.uncertainty_value != 0.0:
diff_tcq = np.abs(diffq) / t_par.uncertainty
uvsu = pint_par.uncertainty / t_par.uncertainty
no_t_unc.append(False)
else:
diff_tcq = np.abs(diffq) / pint_par.uncertainty
uvsu = t_par.uncertainty
no_t_unc.append(True)
except TypeError:
continue
uvsu = pint_par.uncertainty / t_par.uncertainty
tv.append(tempoq.value)
tu.append(t_par.uncertainty.value)
tv_pv.append(diffq.value)
tv_pv_tc.append(diff_tcq.value)
tc_pc.append(uvsu)
units.append(t_par.units)
names.append(param)
compare_table = Table((names, tv, tu, units, tv_pv, tv_pv_tc, tc_pc, no_t_unc), names = ('name', 'Tempo Value', 'Tempo uncertainty', 'units',
'Tempo_V-PINT_V',
'Tempo_PINT_diff/unct',
'PINT_unct/Tempo_unct',
'no_t_unc'))
compare_table.sort('Tempo_PINT_diff/unct')
compare_table = compare_table[::-1]
compare_table.write('parameter_compare.t.html', format='html', overwrite=True)
INFO: Parameter A1DOT's value will be scaled by 1e-12 [pint.models.parameter]
INFO: Parameter A1DOT's value will be scaled by 1e-12 [pint.models.parameter]
/home/luo/.local/lib/python3.6/site-packages/pint/models/timing_model.py:304: UserWarning: PINT only supports 'T2CMETHOD IAU2000B'
warn("PINT only supports 'T2CMETHOD IAU2000B'")
Print the parameter difference in a table.
The table is sorted by relative difference in descending order.
[19]:
compare_table
[19]:
name | Tempo Value | Tempo uncertainty | units | Tempo_V-PINT_V | Tempo_PINT_diff/unct | PINT_unct/Tempo_unct | no_t_unc |
---|---|---|---|---|---|---|---|
str8 | str48 | float128 | object | float128 | float128 | float128 | bool |
ELONG | 244.347677844079 | 5.9573e-09 | deg | -5.923646156924533557e-10 | 0.09943508228433239147 | 0.99997665908491340815 | False |
ELAT | -10.0718390253651 | 3.36103e-08 | deg | -3.1908024911500576515e-09 | 0.09493525767845147623 | 1.0000723294528560605 | False |
PMELONG | 0.4626 | 0.010399999999999999523 | mas / yr | 0.00071186955533908413685 | 0.06844899570568116487 | 1.0031591658256824307 | False |
F0 | 277.9377112429746148 | 5.186e-13 | Hz | -1.471045507628332416e-14 | 0.028365705893334601157 | 1.0000737060332136081 | False |
PX | 0.504 | 0.07349999999999999589 | mas | -0.0020714040496805363745 | 0.028182368022864442286 | 0.99982583641940803165 | False |
ECC | 0.0001737294 | 8.9000000000000002855e-09 | -2.385671795204248602e-10 | 0.026805301069710657513 | 1.0022775158955208319 | False | |
DMX_0010 | 0.00066927561 | 0.00020051850499999999489 | pc / cm3 | -5.0888210248638612865e-06 | 0.025378311218028786617 | 0.9999978605214362437 | False |
DMX_0001 | 0.0016432056 | 0.00022434462499999998828 | pc / cm3 | -5.3330253040537178855e-06 | 0.023771576003007506561 | 1.0000068595527518145 | False |
DMX_0002 | 0.00136024872 | 0.00020941304000000001188 | pc / cm3 | -4.909940340062750319e-06 | 0.023446201535791421494 | 1.0000106543543163529 | False |
OM | 181.84956816578 | 0.01296546975 | deg | -0.00026376139872756609872 | 0.020343373885667821539 | 0.9909564544661490358 | False |
... | ... | ... | ... | ... | ... | ... | ... |
DMX_0045 | 3.64190777e-05 | 0.00020164094999999999935 | pc / cm3 | -1.07417857443502110307e-07 | 0.00053271846538861333895 | 1.00000068370106443 | False |
DMX_0071 | -0.000176912603 | 0.00019118353399999999634 | pc / cm3 | -9.7209239643334917694e-08 | 0.0005084603135505116932 | 1.0000046197730818598 | False |
DMX_0075 | 2.00017094e-06 | 0.00019663653799999999744 | pc / cm3 | -9.568337454155456868e-08 | 0.00048660017876003580943 | 0.9999419901870312266 | False |
DMX_0094 | 0.000929849121 | 0.00019402737299999999105 | pc / cm3 | 4.607091936333768123e-08 | 0.0002374454627251881909 | 0.99999408921119359306 | False |
DMX_0073 | -0.000156953835 | 0.00019724444300000000259 | pc / cm3 | 4.614141422537424743e-08 | 0.00023393010988590560973 | 1.0000039448402757714 | False |
DMX_0017 | 0.000178762757 | 0.00021197504699999999088 | pc / cm3 | -3.7177033512374732874e-08 | 0.00017538400881861692499 | 0.99998893629003005046 | False |
DMX_0067 | -0.000377967984 | 0.00019749766400000001308 | pc / cm3 | -2.135996082108402791e-08 | 0.00010815297957693325822 | 0.9999769537566483013 | False |
DMX_0043 | -0.000494848648 | 0.0001997188189999999947 | pc / cm3 | 1.8844699835535508314e-08 | 9.43561549677274415e-05 | 0.99998484576198187757 | False |
DMX_0083 | 8.70047706e-06 | 0.00020486178099999999887 | pc / cm3 | 1.8072403983782853805e-08 | 8.8217547927023319e-05 | 1.0000060515174629128 | False |
DMX_0069 | -0.000251368356 | 0.00019942850700000000919 | pc / cm3 | 1.6777396572118397772e-08 | 8.412737388701604967e-05 | 1.0000028589159370984 | False |
If you want the LaTeX output, use the cell below.
[20]:
#ascii.write(compare_table, sys.stdout, Writer = ascii.Latex,
# latexdict = {'tabletype': 'table*'})
Check out the maximum DMX difference
[21]:
max_dmx = 0
max_dmx_index = 0
for ii, row in enumerate(compare_table):
if row['name'].startswith('DMX_'):
if row['Tempo_PINT_diff/unct'] > max_dmx:
max_dmx = row['Tempo_PINT_diff/unct']
max_dmx_index = ii
dmx_max = compare_table[max_dmx_index]['name']
compare_table[max_dmx_index]
[21]:
name | Tempo Value | Tempo uncertainty | units | Tempo_V-PINT_V | Tempo_PINT_diff/unct | PINT_unct/Tempo_unct | no_t_unc |
---|---|---|---|---|---|---|---|
str8 | str48 | float128 | object | float128 | float128 | float128 | bool |
DMX_0010 | 0.00066927561 | 0.00020051850499999999489 | pc / cm3 | -5.0888210248638612865e-06 | 0.025378311218028786617 | 0.9999978605214362437 | False |
Output the table in the paper
[22]:
paper_params = ['F0', 'F1', 'FD1', 'FD2', 'JUMP1', 'PX',
'ELONG', 'ELAT', 'PMELONG', 'PMELAT', 'PB',
'A1', 'A1DOT', 'ECC', 'T0', 'OM', 'OMDOT', 'M2',
'SINI', dmx_max]
# Get the table index of the parameters above
paper_param_index = []
for pp in paper_params:
# We assume the parameter names are unique in the table
idx = np.where(compare_table['name'] == pp)[0][0]
paper_param_index.append(idx)
paper_param_index = np.array(paper_param_index)
compare_table[paper_param_index]
[22]:
name | Tempo Value | Tempo uncertainty | units | Tempo_V-PINT_V | Tempo_PINT_diff/unct | PINT_unct/Tempo_unct | no_t_unc |
---|---|---|---|---|---|---|---|
str8 | str48 | float128 | object | float128 | float128 | float128 | bool |
F0 | 277.9377112429746148 | 5.186e-13 | Hz | -1.471045507628332416e-14 | 0.028365705893334601157 | 1.0000737060332136081 | False |
F1 | -7.338737472765e-16 | 4.619148184227e-21 | Hz / s | 6.3620434143467849095e-23 | 0.013773196183814252269 | 1.0001125817340140925 | False |
FD1 | 3.98314325e-05 | 1.6566479199999999207e-06 | s | -2.5460168459788480762e-09 | 0.0015368484849688811896 | 0.9999972210804803918 | False |
FD2 | -1.47296057e-05 | 1.1922595999999999884e-06 | s | 1.3701818969060749554e-09 | 0.0011492311715553180356 | 0.99999858894148907495 | False |
JUMP1 | -8.789e-06 | 1.2999999999999999941e-07 | s | -4.6499257614788852208e-10 | 0.0035768659703683731467 | 1.0037094619649047367 | False |
PX | 0.504 | 0.07349999999999999589 | mas | -0.0020714040496805363745 | 0.028182368022864442286 | 0.99982583641940803165 | False |
ELONG | 244.347677844079 | 5.9573e-09 | deg | -5.923646156924533557e-10 | 0.09943508228433239147 | 0.99997665908491340815 | False |
ELAT | -10.0718390253651 | 3.36103e-08 | deg | -3.1908024911500576515e-09 | 0.09493525767845147623 | 1.0000723294528560605 | False |
PMELONG | 0.4626 | 0.010399999999999999523 | mas / yr | 0.00071186955533908413685 | 0.06844899570568116487 | 1.0031591658256824307 | False |
PMELAT | -7.1555 | 0.058200000000000001732 | mas / yr | -0.00050484895274838237356 | 0.008674380631415503848 | 0.9992173822396303029 | False |
PB | 14.34846572550302 | 2.1222661e-06 | d | -3.4570101856666590745e-08 | 0.016289240004666045763 | 1.0000725690593949082 | False |
A1 | 8.801653122 | 8.1100000000000004906e-07 | ls | 1.4913034362962207524e-08 | 0.018388451742246864767 | 0.9844174919313783967 | False |
A1DOT | -4e-15 | 6.260000000000000155e-16 | ls / s | 8.912568233346704436e-18 | 0.014237329446240742231 | 0.99986821703571171494 | False |
ECC | 0.0001737294 | 8.9000000000000002855e-09 | -2.385671795204248602e-10 | 0.026805301069710657513 | 1.0022775158955208319 | False | |
T0 | 55878.2618980451000000 | 0.0005167676 | d | -1.0512794592798524462e-05 | 0.020343370197354718952 | 0.9909423736001105048 | False |
OM | 181.84956816578 | 0.01296546975 | deg | -0.00026376139872756609872 | 0.020343373885667821539 | 0.9909564544661490358 | False |
OMDOT | 0.0052209 | 0.0013554 | deg / yr | -2.2108721249702442383e-05 | 0.016311584218461297317 | 1.0000993017977168461 | False |
M2 | 0.271894 | 0.089418999999999998485 | solMass | -0.0016411381726809115555 | 0.018353349653663222213 | 0.97866231479642251667 | False |
SINI | 0.906285 | 0.03399300000000000238 | 0.00054355868381605887407 | 0.015990312235344302655 | 0.9838880245552520387 | False | |
DMX_0010 | 0.00066927561 | 0.00020051850499999999489 | pc / cm3 | -5.0888210248638612865e-06 | 0.025378311218028786617 | 0.9999978605214362437 | False |
TEMPO2 run
Before the TEMPO2 run, the .par file has to be modified for a more accurate TEMPO2 vs PINT comparison. We save the modified .par file under the name "[PSR name]_tempo2.par", in this case "J1600-3053_tempo2.par".
Modified parameters in the .par file:
- ECL IERS2010 --> ECL IERS 2003 (In this version of TEMPO2, the IERS 2003 obliquity angle is hardcoded. To match TEMPO2's default value, we change ECL to IERS 2003 in the .par file.)
- T2CMETHOD TEMPO --> # T2CMETHOD TEMPO (TEMPO2 supports both the IAU 2000 precession-nutation model and the old TEMPO-style model. To make TEMPO2 use its default precession-nutation model, IAU 2000, this line in the .par file has to be commented out.)
Note: this modified .par file is provided in the docs/examples directory. If PINT is not installed from source code, one has to modify the .par file from the NANOGrav 11-year data.
[23]:
tempo2_par = os.path.join('../examples',"J1600-3053_tempo2.par")
PINT refit using the modified tempo2-style parfile
[24]:
m_t2 = models.get_model(tempo2_par)
INFO: Parameter A1DOT's value will be scaled by 1e-12 [pint.models.parameter]
INFO: Parameter A1DOT's value will be scaled by 1e-12 [pint.models.parameter]
[25]:
f_t2 = GLSFitter(toas=t, model=m_t2)
f_t2.fit_toas()
[25]:
12368.092265853045861
Tempo2 fit
[26]:
tempo2_chi2, ndof, rms_t2, tempo2_new_par = newpar2(tempo2_par, tim_file)
print("TEMPO2 chi2: ", tempo2_chi2)
print("TEMPO2 rms: ", rms_t2)
TEMPO2 chi2: 12265.156200000001
TEMPO2 rms: 0.944
Get the TEMPO2 residuals, TOA values, observing frequencies, and data errors
[27]:
tempo2_result = t2u.general2(tempo2_par, tim_file, ['sat', 'pre', 'post', 'freq', 'err'])
# TEMPO2's residual unit is second
tp2_diff_pre = f_t2.resids_init.time_resids - tempo2_result['pre'] * u.s
tp2_diff_post = f_t2.resids.time_resids - tempo2_result['post'] * u.s
Plot the TEMPO2 - PINT residual difference
[28]:
plt.figure(figsize=(8,5), dpi=150)
plt.subplot(2, 1, 1)
plt.plot(mjds, (tp2_diff_pre - tp2_diff_pre.mean()).to_value(u.ns), '+')
plt.xlabel('MJD (day)')
plt.ylabel('Time Residuals (ns)')
plt.title('PSR J1600-3053 prefit residual differences between PINT and TEMPO2')
plt.grid(True)
plt.subplot(2, 1, 2)
plt.plot(mjds, (tp2_diff_post - tp2_diff_post.mean()).to_value(u.ns), '+')
plt.xlabel('MJD (day)')
plt.ylabel('Time Residuals (ns)')
plt.title('PSR J1600-3053 postfit residual differences between PINT and TEMPO2')
plt.grid(True)
plt.tight_layout()
plt.savefig("J1600_PINT_tempo2")

In this comparison, PINT and TEMPO2’s results, both pre-fit and post-fit, agree with each other within the level of 5 ns.
Write out the TEMPO2 postfit parameters to a new file
Note: since the ECL parameter is hardcoded in TEMPO2, we have to add it back manually.
[29]:
# Write out the post fit tempo parfile.
tempo2_parfile = open(psr + '_new_tempo2.2.par', 'w')
for line in tempo2_new_par:
tempo2_parfile.write(line)
tempo2_parfile.write("ECL IERS2003")
tempo2_parfile.close()
Compare the parameters between TEMPO2 and PINT
Reported quantities:
- TEMPO2 value
- TEMPO2 uncertainty
- Parameter units
- TEMPO2 parameter value - PINT parameter value
- TEMPO2/PINT parameter absolute difference divided by the TEMPO2 uncertainty
- PINT uncertainty divided by the TEMPO2 uncertainty, if TEMPO2 provides the uncertainty value
[30]:
# Create the parameter compare table
tv = []
t2_unc = []
tv_pv = []
tv_pv_tc = []
tc_pc = []
units = []
names = []
no_t2_unc = []
tempo2_new_model = models.get_model(psr + '_new_tempo2.2.par')
for param in tempo2_new_model.params:
t2_par = getattr(tempo2_new_model, param)
pint2_par = getattr(f_t2.model, param)
tempo2q = t2_par.quantity
pint2q = pint2_par.quantity
try:
diff2q = tempo2q - pint2q
if t2_par.uncertainty_value != 0.0:
diff_tcq = np.abs(diff2q) / t2_par.uncertainty
uvsu = pint2_par.uncertainty / t2_par.uncertainty
no_t2_unc.append(False)
else:
diff_tcq = np.abs(diff2q) / pint2_par.uncertainty
uvsu = t2_par.uncertainty
no_t2_unc.append(True)
except TypeError:
continue
uvsu = pint2_par.uncertainty / t2_par.uncertainty
tv.append(tempo2q.value)
t2_unc.append(t2_par.uncertainty.value)
tv_pv.append(diff2q.value)
tv_pv_tc.append(diff_tcq.value)
tc_pc.append(uvsu)
units.append(t2_par.units)
names.append(param)
compare_table2 = Table((names, tv, t2_unc,units, tv_pv, tv_pv_tc, tc_pc, no_t2_unc), names = ('name', 'Tempo2 Value', 'T2 unc','units',
'Tempo2_V-PINT_V',
'Tempo2_PINT_diff/unct',
'PINT_unct/Tempo2_unct',
'no_t_unc'))
compare_table2.sort('Tempo2_PINT_diff/unct')
compare_table2 = compare_table2[::-1]
compare_table2.write('parameter_compare.t2.html', format='html', overwrite=True)
WARNING: EPHVER 5 does nothing in PINT [pint.models.timing_model]
WARNING: Unrecognized parfile line 'NE_SW 0' [pint.models.timing_model]
WARNING: Unrecognized parfile line 'NE_SW2 0.000' [pint.models.timing_model]
/home/luo/.local/lib/python3.6/site-packages/astropy/units/quantity.py:477: RuntimeWarning: divide by zero encountered in true_divide
result = super().__array_ufunc__(function, method, *arrays, **kwargs)
Print the parameter difference in a table.
The table is sorted by relative difference in descending order.
[31]:
compare_table2
[31]:
name | Tempo2 Value | T2 unc | units | Tempo2_V-PINT_V | Tempo2_PINT_diff/unct | PINT_unct/Tempo2_unct | no_t_unc |
---|---|---|---|---|---|---|---|
str8 | str48 | float128 | object | float128 | float128 | float128 | bool |
ECC | 0.00017372966157521168 | 8.922286680669999241e-09 | 4.158524756102052744e-11 | 0.0046608284455950449096 | 1.0000400945849889922 | False | |
DMX_0098 | 0.0013394613122489417 | 0.00019579968831114546654 | pc / cm3 | -5.203720844662671624e-07 | 0.002657675755026450183 | 0.999999270898346615 | False |
DMX_0070 | -0.00023747963906517973 | 0.00019767137320477682749 | pc / cm3 | -4.67328571424551701e-07 | 0.0023641691958118014015 | 1.0000006149827960211 | False |
DMX_0097 | 0.0013928330661987446 | 0.00019620100461426303326 | pc / cm3 | -4.3986846819175744183e-07 | 0.002241927706010230642 | 0.9999998585691036723 | False |
DMX_0055 | -0.0005307704904403621 | 0.00019675128861832102923 | pc / cm3 | -3.9826563099699431592e-07 | 0.0020242085009648507626 | 1.0000000155754873443 | False |
DMX_0063 | -0.00048410571072825574 | 0.00019894769104906708185 | pc / cm3 | -3.8863084052005031355e-07 | 0.001953432273934765668 | 1.0000001764728303488 | False |
DMX_0079 | 0.00018976795294000216 | 0.00019490725481464179483 | pc / cm3 | -3.6899448284041791446e-07 | 0.001893179826432496481 | 1.000000186999052243 | False |
DMX_0010 | 0.00067403356955979 | 0.00020051850482404336064 | pc / cm3 | -3.7725252445390284467e-07 | 0.0018813850860545018075 | 0.99999987947730495375 | False |
F1 | -7.3387383041227678664e-16 | 4.619148404392432094e-21 | Hz / s | -8.1916189706268323405e-24 | 0.0017734045874857090789 | 0.9999982063295332385 | False |
DMX_0086 | 0.00029525346690830644 | 0.0001961188165133768578 | pc / cm3 | -3.4520664969226941277e-07 | 0.001760191376989691343 | 1.0000003907045813545 | False |
... | ... | ... | ... | ... | ... | ... | ... |
DMX_0092 | 0.0013207295138539894 | 0.00019585216459019454951 | pc / cm3 | -3.4341162511537445812e-08 | 0.00017534226687457687001 | 1.0000000634867269866 | False |
DMX_0058 | -0.0005377581468744793 | 0.00019927530538964258904 | pc / cm3 | -3.1767895307833332597e-08 | 0.00015941712017812561131 | 1.0000003635542755731 | False |
DMX_0017 | 0.0001789867729265184 | 0.00021197504981685315979 | pc / cm3 | 2.854267636918103937e-08 | 0.000134651112920327015 | 0.9999995235765041235 | False |
DMX_0089 | 0.0007495614446295846 | 0.00021586616414944812654 | pc / cm3 | 2.4131246270523223907e-08 | 0.000111787997742048708454 | 1.0000000332176062212 | False |
DMX_0027 | -0.00018288082535181414 | 0.00019391445756469536201 | pc / cm3 | -1.7893991355739631913e-08 | 9.227775783437751106e-05 | 1.000000166662465162 | False |
DMX_0040 | -0.0005242449385393532 | 0.00020212647115737782458 | pc / cm3 | -1.6804078603323106822e-08 | 8.313645663085500665e-05 | 0.99999999749641554914 | False |
DMX_0032 | -6.265675469663684e-05 | 0.00019561483985536690729 | pc / cm3 | 1.5367171419820358814e-08 | 7.855831097059144273e-05 | 1.0000001097328992117 | False |
DMX_0001 | 0.0016484372168232325 | 0.00022434462780433157077 | pc / cm3 | -1.4668063582322365956e-08 | 6.538183564224023551e-05 | 1.0000004801524928766 | False |
DMX_0083 | 8.544780315309648e-06 | 0.00020486177918444288125 | pc / cm3 | -9.68929340697572577e-09 | 4.729673561143965028e-05 | 1.0000001554708373153 | False |
DMX_0044 | -0.0003390023662491028 | 0.00021062295971768858391 | pc / cm3 | -4.0461326309814554802e-09 | 1.9210311337399994e-05 | 1.000000188731715367 | False |
If you want the LaTeX version, use the cell below.
[32]:
#ascii.write(compare_table2, sys.stdout, Writer = ascii.Latex,
# latexdict = {'tabletype': 'table*'})
Check out the maximum DMX difference
[33]:
max_dmx = 0
max_dmx_index = 0
for ii, row in enumerate(compare_table2):
if row['name'].startswith('DMX_'):
if row['Tempo2_PINT_diff/unct'] > max_dmx:
max_dmx = row['Tempo2_PINT_diff/unct']
max_dmx_index = ii
dmx_max2 = compare_table2[max_dmx_index]['name']
compare_table2[max_dmx_index]
[33]:
name | Tempo2 Value | T2 unc | units | Tempo2_V-PINT_V | Tempo2_PINT_diff/unct | PINT_unct/Tempo2_unct | no_t_unc |
---|---|---|---|---|---|---|---|
str8 | str48 | float128 | object | float128 | float128 | float128 | bool |
DMX_0098 | 0.0013394613122489417 | 0.00019579968831114546654 | pc / cm3 | -5.203720844662671624e-07 | 0.002657675755026450183 | 0.999999270898346615 | False |
Output the table in the paper
[34]:
paper_params = ['F0', 'F1', 'FD1', 'FD2', 'JUMP1', 'PX',
'ELONG', 'ELAT', 'PMELONG', 'PMELAT', 'PB',
'A1', 'A1DOT', 'ECC', 'T0', 'OM', 'OMDOT', 'M2',
'SINI', dmx_max]
# Get the table index of the parameters above
paper_param_index = []
for pp in paper_params:
# We assume the parameter names are unique in the table
idx = np.where(compare_table2['name'] == pp)[0][0]
paper_param_index.append(idx)
paper_param_index = np.array(paper_param_index)
compare_table2[paper_param_index]
[34]:
name | Tempo2 Value | T2 unc | units | Tempo2_V-PINT_V | Tempo2_PINT_diff/unct | PINT_unct/Tempo2_unct | no_t_unc |
---|---|---|---|---|---|---|---|
str8 | str48 | float128 | object | float128 | float128 | float128 | bool |
F0 | 277.93771124297462788 | 5.1859268946902080184e-13 | Hz | -6.6613381477509392425e-16 | 0.0012845029023782387781 | 1.0000081311054239701 | False |
F1 | -7.3387383041227678664e-16 | 4.619148404392432094e-21 | Hz / s | -8.1916189706268323405e-24 | 0.0017734045874857090789 | 0.9999982063295332385 | False |
FD1 | 3.983282287426775e-05 | 1.6566478062738200598e-06 | s | -1.6361212107284211874e-09 | 0.0009876095598185302606 | 1.00000000325577032 | False |
FD2 | -1.4729805752137882e-05 | 1.1922596055992699934e-06 | s | 1.4357622807706935195e-09 | 0.0012042362871541143106 | 1.0000000135934370427 | False |
JUMP1 | -8.7887456483184e-06 | 0.0 | s | -4.9036265484266971897e-11 | 0.00037580263061884339052 | inf | True |
PX | 0.5061242012322064 | 0.07348886965486496614 | mas | 1.8776131796016670705e-05 | 0.00025549626609032611863 | 1.0000000214259234799 | False |
ELONG | 244.34767784255382 | 5.95727548431e-09 | deg | 9.1233687271596863866e-12 | 0.0015314666496770878097 | 1.0000013787890671494 | False |
ELAT | -10.071839047043065 | 3.361025894297e-08 | deg | -1.44861900253090425394e-11 | 0.0004310050109964715927 | 0.99999279185393830885 | False |
PMELONG | 0.4619096015625491 | 0.010433361011620021289 | mas / yr | 7.4203184182164427796e-06 | 0.0007112107411937685676 | 1.0000025748109051538 | False |
PMELAT | -7.155145674275822 | 0.058156247552489513664 | mas / yr | -7.1705189014892312116e-05 | 0.0012329748226993851416 | 0.99999266738091230344 | False |
PB | 14.348465754661366786 | 2.12226632065849e-06 | d | -1.9243153815198810186e-09 | 0.0009067266265257465907 | 0.99999757655418149095 | False |
A1 | 8.80165312286463 | 8.114047416773300209e-07 | ls | -8.196980871844061767e-10 | 0.0010102209724458007922 | 0.99998413921662954174 | False |
A1DOT | -4.008979189463729e-15 | 6.2586911221949290846e-16 | ls / s | -1.0335518582171720488e-18 | 0.00165138658872608559 | 1.0000003602893390298 | False |
ECC | 0.00017372966157521168 | 8.922286680669999241e-09 | 4.158524756102052744e-11 | 0.0046608284455950449096 | 1.0000400945849889922 | False | |
T0 | 55878.2618994738495070 | 0.00051676746764245482 | d | 4.8831633153723075225e-07 | 0.00094494402630447885357 | 1.0000114310932686899 | False |
OM | 181.84960401549451478 | 0.01296564244572522874 | deg | 1.225778713571934464e-05 | 0.00094540530382748107187 | 1.0000120115751260204 | False |
OMDOT | 0.0052395528517645540778 | 0.00135543635075636363 | deg / yr | -1.228657154944533207e-06 | 0.00090646613856778681487 | 0.9999975920019996213 | False |
M2 | 0.2717633814383356 | 0.08941866471282471085 | solMass | 0.00010426380694084080858 | 0.0011660183841448814885 | 1.0000265499521452384 | False |
SINI | 0.9064200568225846 | 0.03399283139781983376 | -4.2776490711826653524e-05 | 0.0012583974018289681776 | 1.0000279809291119371 | False | |
DMX_0010 | 0.00067403356955979 | 0.00020051850482404336064 | pc / cm3 | -3.7725252445390284467e-07 | 0.0018813850860545018075 | 0.99999987947730495375 | False |
The residual difference between PINT and TEMPO2 is at the level of ~1 ns.
We believe the discrepancy comes mainly from the solar system geometric delay.
We will use the TEMPO2 postfit parameters, which were written out to
J1600-3053_new_tempo2.2.par.
[35]:
tempo2_result2 = t2u.general2('J1600-3053_new_tempo2.2.par', tim_file, ['sat', 'pre', 'post', 'freq', 'err'])
m_t22 = models.get_model('J1600-3053_new_tempo2.2.par')
f_t22 = GLSFitter(toas=t, model=m_t22)
f_t22.fit_toas()
tp2_diff_pre2 = f_t22.resids_init.time_resids - tempo2_result2['pre'] * u.s
tp2_diff_post2 = f_t22.resids.time_resids - tempo2_result2['post'] * u.s
WARNING: EPHVER 5 does nothing in PINT [pint.models.timing_model]
WARNING: Unrecognized parfile line 'NE_SW 0' [pint.models.timing_model]
WARNING: Unrecognized parfile line 'NE_SW2 0.000' [pint.models.timing_model]
[36]:
PINT_solar = m_t22.solar_system_geometric_delay(t)
tempo2_solar = t2u.general2('J1600-3053_new_tempo2.2.par', tim_file, ['roemer'])
[37]:
diff_solar = PINT_solar + tempo2_solar['roemer'] * u.s
plt.figure(figsize=(8,2.5), dpi=150)
plt.plot(mjds, (tp2_diff_post2 - tp2_diff_post2.mean()).to_value(u.ns), '+')
plt.plot(mjds, (diff_solar - diff_solar.mean()).to_value(u.ns, equivalencies=[(ls, u.s)]), 'x')
plt.xlabel('MJD (day)')
plt.ylabel('Discrepancies (ns)')
#plt.title('PSR J1600-3053 postfit residual differences between PINT and TEMPO2')
plt.grid(True)
plt.legend(['Postfit Residual Differences', 'Solar System Geometric Delay Difference'],
loc='upper center', bbox_to_anchor=(0.5, -0.3), shadow=True, ncol=2)
plt.tight_layout()
plt.savefig("solar_geo")

Explanation
This section is to explain pulsar timing, how PINT works, and why it is built the way it is.
PINT is used for pulsar timing and related activities. Some of it may make a lot more sense if you know something about the science it is used for. You can find an excellent introduction in the Handbook of Pulsar Astronomy, by Lorimer and Kramer. This document is aimed at using PINT specifically, and may also be more understandable if you have used other pulsar timing software, TEMPO or TEMPO2, though we hope that you will find PINT sufficient for all your needs!
Time
With modern instrumentation, we are able to measure time - both time intervals and an absolute time scale - to stupendous accuracy. Pulsar timing is a powerful tool in large part because it takes advantage of that accuracy. Getting time measurements and calculations right to this level of accuracy does require a certain amount of care, in general and while using (and writing) PINT.
Precision
The first challenge that arises is numerical precision. Computers necessarily represent real numbers to finite precision. Python, in particular, uses 64-bit floating-point numbers: 1 bit encodes the sign, 11 bits encode the exponent, and 52 bits encode the mantissa (an implicit leading bit gives 53 bits of effective precision). This means that numbers are represented with a little less than 16 decimal digits of precision:
>>> import numpy as np
>>> np.finfo(float).eps
2.220446049250313e-16
>>> 1 + np.finfo(float).eps
1.0000000000000002
>>> 1 + np.finfo(float).eps/2
1.0
>>> 1 + np.finfo(float).eps/2 == 1
True
Unfortunately, we have observations spanning decades and we would often like to work with time measurements at the nanosecond level. It turns out that python’s floating-point numbers simply don’t have the precision we need for this:
>>> import astropy.units as u
>>> (10*u.year*np.finfo(float).eps).to(u.ns)
<Quantity 70.07194824 ns>
That is, if I want to represent a ten-year span, the smallest increment
python’s floating-point can cope with is about 70 nanoseconds - not enough
for accurate pulsar timing work! There are a number of ways to approach
this problem, all somewhat awkward in python. One approach of interest
is that numpy
provides floating-point types, for example,
numpy.longdouble
, with more precision:
>>> np.finfo(np.longdouble).eps
1.084202172485504434e-19
>>> (10*u.year*np.finfo(np.longdouble).eps).to(u.ns)
<Quantity 0.03421482 ns>
These numbers are represented with 80 bits, and most desktop and server
machines have hardware for computing with these numbers, so they are not
much slower than ordinary (“double-precision”, 64-bit) floating-point
numbers. Let me warn you about one point of possible confusion: modern
computers have very complicated cache setups that prefer data to be
aligned just so in memory, so numpy
generally pads these numbers out
with zeroes and stores them in larger memory spaces. Thus you will often
see np.float96
and np.float128
types; these contain only
numbers with 80-bit precision. Actual 128-bit precision is not currently
available in numpy
, in part because on almost all current machines all
calculations must be carried out in software, which takes 20-50 times as
long.
An alternative approach to dealing with more precision than your machine’s
floating-point numbers natively support is to represent numbers as a pair
of double-precision values, with the second providing additional digits
of precision to the first. These are generically called double-double
numbers, and can be faster than “proper” 128-bit floating-point numbers.
Sadly these are not implemented in numpy
either. But because it is
primarily time that requires such precision, astropy
provides a type
astropy.time.Time
(and astropy.time.TimeDelta
) that uses a similar
representation internally: two floating-point numbers, one of which is
the integer number of days (in the Julian-Day system) and one of which
is the fractional day. This allows very satisfactory precision:
>>> (1*u.day*np.finfo(float).eps).to(u.ns)
<Quantity 0.01918465 ns>
>>> t = astropy.time.Time("2019-08-19", format="iso")
>>> t
<Time object: scale='utc' format='iso' value=2019-08-19 00:00:00.000>
>>> (t + 0.1*u.ns) - t
<TimeDelta object: scale='tai' format='jd' value=1.1102230246251565e-15>
>>> ((t + 0.1*u.ns) - t).to(u.ns)
<Quantity 0.09592327 ns>
Thus it is important when dealing with a time to ensure that it is stored
in either an astropy
time object or a np.longdouble
. Because python’s
default is to use less precision, it is easy to lose digits:
>>> 1+np.finfo(np.longdouble).eps
1.0000000000000000001
>>> print("Number: {}".format(1+np.finfo(np.longdouble).eps))
Number: 1.0
>>> print("Number: {}".format(1+np.finfo(float).eps))
Number: 1.0000000000000002
>>> print("Number: {}".format(str(1+np.finfo(np.longdouble).eps)))
Number: 1.0000000000000000001
Time Scales
A second concern when dealing with times at this level of precision is that Einstein’s theory of relativity becomes relevant: Clocks at the surface of the Earth advance more slowly than clocks in space nearby because of the slowing of time by the Earth’s gravity. As the Earth moves around the Sun, its changing velocity affects clock rates, and so does its movement deeper and shallower in the Sun’s gravity well. Of course none of these things affect a pulsar’s rotation, so we need some way to compensate for that.
On a more human scale, observations are recorded in convenient time units, often UTC; but UTC has leap seconds, so some days have one more second (or one fewer) than others!
The upshot of all this is that if you care about accuracy, you need to be
quite careful about how you measure your time. Fortunately, there is a
well-defined system of time scales, and astropy.time.Time
automatically
keeps track of which one your time is in and does the appropriate
conversions - as long as you tell it what kind of time you’re putting
in, and what kind of time you’re asking for:
>>> t = astropy.time.Time("2019-08-19", format="iso", scale="utc")
>>> t
<Time object: scale='utc' format='iso' value=2019-08-19 00:00:00.000>
>>> t.tdb
<Time object: scale='tdb' format='iso' value=2019-08-19 00:01:09.183>
The conventional time scale for working with pulsars, and the one PINT uses, is Barycentric Dynamical Time (TDB). You should be aware that there is another time scale, not yet fully supported in PINT, called Barycentric Coordinate Time (TCB). Because of different handling of relativistic corrections, the TCB timescale does not advance at the same rate as TDB (there is also a many-second offset). TEMPO2 uses TCB by default, so you may encounter pulsar timing models or even measurements that use TCB. PINT provides a command line tool tcb2tdb to approximately convert TCB timing models to TDB. PINT can also optionally convert TCB timing models to TDB (approximately) upon read.
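For example, a TCB-native par file could be converted from the command line (a minimal sketch; the file names here are hypothetical, and the exact invocation is described in the tcb2tdb command-line help):
$ tcb2tdb J1234+5678_tcb.par J1234+5678_tdb.par
Alternatively, assuming a PINT version where get_model() accepts an allow_tcb keyword for the optional conversion on read:
from pint.models import get_model
# Approximate TCB -> TDB conversion while reading the par file (hypothetical file name).
m = get_model("J1234+5678_tcb.par", allow_tcb=True)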
Note that the need for leap seconds is because the Earth’s rotation is
somewhat erratic - no, we’re not about to be thrown off, but its
unpredictability can get as large as a second after a few years. So
the International Earth Rotation Service announces leap seconds about
six months in advance. This means that astropy
and pint need to
keep their lists of leap seconds up-to-date by checking the IERS
website from time to time.
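If you want to make sure these tables are fresh before going offline, one option is a short python session like the following (a sketch assuming a reasonably recent astropy; the exact update functions vary between astropy versions):
import astropy.time
from astropy.utils.iers import IERS_Auto
# Download (or refresh) the IERS Earth-rotation tables, which include predictions.
IERS_Auto.open()
# Refresh the leap-second table used for UTC conversions (available in astropy >= 4.0).
astropy.time.update_leap_seconds()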
It is also conventional to record pulsar data with reference to an observatory clock, usually a maser, that may drift with respect to International Atomic Time (TAI). Usually GPS is used to track the deviations of this observatory clock and record them in a file. PINT also needs up-to-date versions of these observatory clock correction files to produce accurate results.
Even more detail about how PINT handles time scales is available on the github wiki.
Specifically, there is a complexity in using MJDs to specify times in the UTC time scale, which is the customary way observatories work. PINT attempts to handle this correctly by default, but if you see timing anomalies on days with leap seconds, this may be the problem. Alternatively, you may not be using up-to-date leap-second data files, or the process that generated the MJDs may not (this is a particular concern when working with X-ray or gamma-ray data).
Dispersion Measure (DM)
Radio waves emitted by the pulsar experience dispersion as they travel through the ionized
interstellar medium (ISM). The time delay due to the interstellar dispersion is given by
\(\frac{K\times DM}{\nu^2}\), where \(\nu\) is the frequency of the radio signal.
The dominant source of this dispersion is the presence of free electrons in the ISM, and
to a first approximation, the DM can be interpreted as the electron column density along the
line of sight to the pulsar. \(K\) is known as the DM constant, and should be equal to
\(\frac{e^2}{8 \pi ^2 c \epsilon _0 m_e} \approx 1.3445365918(9)\times 10^{-7}\; \text{m}^2/\text{s}\)
based on the latest measurements of the physical constants. However, pulsar astronomers have
traditionally used a fixed value \(1.3447217\times 10^{-7}\; \text{m}^2/\text{s}\) for \(K\) over
the years. For example, the Handbook of Pulsar Astronomy by Lorimer & Kramer (Chapter 5) provides
the value \(2.41\times 10^{-4}\; \text{MHz}^{-2} \text{pc}\, \text{cm}^{-3} s^{-1}\) for the
reciprocal of \(K\). PINT follows this convention to be compatible with older pulsar
ephemerides and with other pulsar timing packages. The value of \(K\) used by PINT can be
accessed as pint.DMconst
.
It should also be noted that there are other effects contributing to the dispersion delay than
the free electrons, such as ions in the ISM, interstellar magnetic fields, and the ISM temperature.
Hence, it has been argued (see Kulkarni 2020 https://arxiv.org/abs/2007.02886) that the dispersion
slope \(K\times DM\) should be treated as the primary observable rather than the DM, which
is usually interpreted as the electron column density. The dispersion slope corresponding to a DM value
can be computed using pint.derived_quantities.dispersion_slope()
. A DM value measured based
on the conventional value of \(K\) can be converted to a value based on the latest physical
constant values using pint.utils.convert_dispersion_measure()
.
The total DM and dispersion slope predicted by a given timing model (pint.models.timing_model.TimingModel
)
for a given set of TOAs (pint.toa.TOAs
) can be computed using pint.models.timing_model.TimingModel.total_dm()
and pint.models.timing_model.TimingModel.total_dispersion_slope()
methods respectively.
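As a hedged illustration of the quantities described above (the single-argument form of dispersion_slope() is an assumption; check its docstring for the exact signature):
import astropy.units as u
import pint
import pint.derived_quantities
# The fixed conventional DM constant used by PINT.
print(pint.DMconst)
# Dispersion delay K * DM / nu^2 for DM = 15 pc/cm^3 at 1400 MHz.
dm = 15 * pint.dmu
delay = (pint.DMconst * dm / (1400 * u.MHz) ** 2).to(u.ms)
print(delay)
# Dispersion slope corresponding to the same DM value.
print(pint.derived_quantities.dispersion_slope(dm))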
Offsets in pulsar timing
Offsets arise in pulsar timing models for a variety of reasons. The different types of offsets are listed below:
Overall phase offset (PHOFF)
The pulse phase corresponding to the TOAs are usually computed in reference to an arbitrary
fiducial TOA known as the TZR TOA (see pint.models.absolute_phase.AbsPhase
). Since the
choice of the TZR TOA is arbitrary, there can be an overall phase offset between the TZR TOA and
the measured TOAs. There are three ways to account for this offset: (1) subtract the weighted mean
from the timing residuals, (2) make the TZR TOA (given by the TZRMJD parameter) fittable, or
(3) introduce a fittable phase offset parameter between measured TOAs and the TZR TOA.
Traditionally, pulsar timing packages have opted to implicitly subtract the residual mean, and this
is the default behavior of PINT. Option (2) is hard to implement because the TZR TOA may be
specified at any observatory, and computing the TZR phase requires the application of the clock
corrections. The explicit phase offset (option 3) can be invoked by adding the PHOFF parameter,
(implemented in pint.models.phase_offset.PhaseOffset
). If the explicit offset PHOFF
is given, the implicit residual mean subtraction behavior will be disabled.
In the pulsar ephemeris (par) file, an example PHOFF parameter looks like this:
PHOFF 0.1 1 0.001
System-dependent delays (`JUMP`s)
It is very common to have TOAs for the same pulsar obtained using different observatories,
telescope receivers, backend systems, and data processing pipelines, especially in long-running
campaigns. Delays can arise between the TOAs measured using such different systems due to, among
other reasons, instrumental delays, differences in algorithms used for RFI mitigation, folding, TOA
measurement etc., and the choice of different template profiles used for TOA measurement. Such
offsets are usually modeled using phase jumps (the JUMP parameter, see pint.models.jump.PhaseJump
)
between TOAs generated from different systems.
Here are some examples of JUMP parameters in a par file:
JUMP -f 430_PUPPI 0.01 1 1e-5
JUMP tel ao 0.01 1 1e-5
JUMP mjd 55000 55100 0.01 1 1e-5
JUMP freq 1000 1400 0.01 1 1e-5
System-dependent DM offsets (`DMJUMP`s and `FDJUMPDM`s)
Similar to system-dependent delays, offsets can arise between wideband DM values measured using
different systems due to the choice of template portraits with different fiducial DMs. This is
usually modeled using DM jumps (the DMJUMP parameter, see pint.models.dispersion_model.DispersionJump
).
This type of offset only applies to the wideband DM values and not to the wideband TOAs.
Here are some examples of DMJUMP parameters in a par file:
DMJUMP -f 430_PUPPI 1e-4 1 1e-5
DMJUMP tel ao 1e-4 1 1e-5
DMJUMP mjd 55000 55100 1e-4 1 1e-5
DMJUMP freq 1000 1400 1e-4 1 1e-5
Similar offsets also arise in the case of narrowband TOAs. Unlike the wideband case, these offsets
manifest as system-dependent corrections to the DM delay. They are modeled using the FDJUMPDM parameters
(see pint.models.dispersion_model.FDJumpDM).
Here are some examples of FDJUMPDM parameters in a par file:
FDJUMPDM -f 430_PUPPI 1e-4 1 1e-5
FDJUMPDM -f L-wide_PUPPI 1e-4 1 1e-5
System- and frequency-dependent offsets (`FDJUMP`s)
In narrowband datasets, the template profiles often do not adequately model the frequency-dependent
evolution of pulse profiles, resulting in a frequency-dependent artefact in the timing residuals.
This systematic effect is usually modeled phenomenologically as a log-polynomial function of frequency
whose coefficients are the so-called FD parameters (see pint.models.frequency_dependent.FD
).
Sometimes, this effect needs to be modeled separately for different systems since different template
profiles will be used for each system. This is achieved through system-dependent FD parameters, or FDJUMPs
(see pint.models.fdjump.FDJump).
Here are some examples of FDJUMP parameters in a par file:
FD1JUMP -f L-wide_PUPPI 1e-4 1 1e-5
FD2JUMP -f L-wide_PUPPI 1e-4 1 1e-5
FD1JUMP -f 430_PUPPI 1e-4 1 1e-5
FD2JUMP -f 430_PUPPI 1e-4 1 1e-5
Observatories
PINT comes with a number of defined observatories. Those on the surface of the Earth
are TopoObs
instances. It can also pull in observatories
from astropy
, and you can define your own. Observatories are generally referenced when
reading TOA files, but can also be accessed directly:
import pint.observatory
gbt = pint.observatory.get_observatory("gbt")
Observatory definitions
Observatory definitions are included in pint.config.runtimefile("observatories.json")
.
To see the existing names, pint.observatory.Observatory.names_and_aliases()
will
return a dictionary giving all of the names (primary keys) and potential aliases (values).
You can also find the full list at Observatory List.
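For example (a minimal sketch using the function named above):
import pint.observatory
# Dictionary mapping each primary observatory name to its list of aliases.
names = pint.observatory.Observatory.names_and_aliases()
print(names["gbt"])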
The observatory data are stored in JSON format. A simple example is:
"gbt": {
"tempo_code": "1",
"itoa_code": "GB",
"clock_file": "time_gbt.dat",
"itrf_xyz": [
882589.289,
-4924872.368,
3943729.418
],
"origin": "The Robert C. Byrd Green Bank Telescope.\nThis data was obtained by Joe Swiggum from Ryan Lynch in 2021 September.\n"
}
The observatory is defined by its name (gbt
) and its position. This can be given as
geocentric coordinates in the International Terrestrial Reference System (ITRF) through
the itrf_xyz
triple (units as m
), or geodetic coordinates (WGS84 assumed) through
lat
, lon
, alt
(units are deg
and m
). Conversion is done through
astropy's EarthLocation.
Other attributes are optional. Here we have also specified the tempo_code
and
itoa_code
, and a human-readable origin
string.
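For instance, a hypothetical observatory defined with geodetic coordinates instead of an ITRF triple might look like this (the name and numbers are purely illustrative):
"myobs": {
    "lat": 38.433,
    "lon": -79.84,
    "alt": 807.0,
    "origin": "A hypothetical telescope defined by geodetic (WGS84) coordinates."
}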
A more complex/complete example is:
"jbroach": {
"clock_file": [
{
"name": "jbroach2jb.clk",
"valid_beyond_ends": true
},
"jb2gps.clk"
],
"clock_fmt": "tempo2",
"aliases": [
"jboroach"
],
"bogus_last_correction": true,
"itrf_xyz": [
3822625.769,
-154105.255,
5086486.256
],
"origin": [
"The Lovell telescope at Jodrell Bank.",
"These are the coordinates used for VLBI as of March 2020 (MJD 58919). They are based on",
"a fiducial position at MJD 50449 plus a (continental) drift velocity of",
"[-0.0117, 0.0170, 0.0093] m/yr. This data was obtained from Ben Perera in September 2021.",
"This data is for the Roach instrument - a different clock file is required for this instrument to accommodate recorded instrumental delays."
]
}
Here we have included additional explicit aliases, specified the clock format via clock_fmt, and specified that the last entry in the clock file is bogus (bogus_last_correction). There are two clock files included in clock_file:
- jbroach2jb.clk (where we also specify that it is valid_beyond_ends)
- jb2gps.clk
These are combined to reference this particular telescope/instrument combination.
For the full set of options, see TopoObs.
Adding New Observatories
In addition to modifying pint.config.runtimefile("observatories.json"), there are other ways to add new observatories.
Make sure you define any new observatory before you load any TOAs.
1. You can define them pythonically:
import pint.observatory.topo_obs
import astropy.coordinates
newobs = pint.observatory.topo_obs.TopoObs("newobs", location=astropy.coordinates.EarthLocation.of_site("keck"), origin="another way to get Keck")
This can be done by specifying the ITRF coordinates, (lat, lon, alt), or an EarthLocation instance.
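For instance, a hedged sketch assuming TopoObs accepts the same lat, lon, and alt keywords (degrees and metres) that appear in observatories.json; the name and coordinates here are placeholders:
import pint.observatory.topo_obs

# Define a new observatory from geodetic coordinates (lat/lon in degrees, alt in metres).
newobs = pint.observatory.topo_obs.TopoObs(
    "myobs",
    lat=38.4331,
    lon=-79.8398,
    alt=807.0,
    origin="A telescope defined by geodetic coordinates",
)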
2. You can include them just for the duration of your python session:
import io
from pint.observatory.topo_obs import load_observatories
# GBT but no clock file
fakeGBT = r"""{
"gbt": {
"tempo_code": "1",
"itoa_code": "GB",
"clock_file": "",
"itrf_xyz": [
882589.289,
-4924872.368,
3943729.418
],
"origin": "The Robert C. Byrd Green Bank Telescope.\nThis data was obtained by Joe Swiggum from Ryan Lynch in 2021 September.\nHowever this has no clock correction"
}
}"""
load_observatories(io.StringIO(fakeGBT), overwrite=True)
Note that since we are overwriting an existing observatory (rather than defining a completely new one) we specify overwrite=True.
3. You can define them in a different file on disk. If you took the JSON above and put it into a file /home/user/anothergbt.json, you could then do:
export PINT_OBS_OVERRIDE=/home/user/anothergbt.json
(or the equivalent in your shell of choice) before you start any PINT scripts. By default this will overwrite any existing definitions.
4. You can rely on astropy. For instance:
import pint.observatory
keck = pint.observatory.Observatory.get("keck")
will find Keck. astropy.coordinates.EarthLocation.get_site_names() will return a list of potential observatories.
External Data
In order to provide sub-microsecond accuracy, PINT needs a certain number of data files, for example Solar System ephemerides, that would be cumbersome to include in the package itself. Further, some of this external data needs to be kept up-to-date - precise measurements of the Earth’s rotation, for example, or observatory clock corrections.
Most of this external data is obtained through astropy's data downloading mechanism (see astropy.utils.data). This will result in the data being
downloaded the first time it
is required on your machine but thereafter stored in a “cache” in your home
directory. If you plan to operate offline, you may want to run some commands
before disconnecting to ensure that this data has been downloaded. Data
that must be up-to-date is generally in the form of a time series, and
“up-to-date” generally means that it must cover the times that occur in
your data. This can be an issue for simulation and forecasting; there should
always be a mechanism to allow out-of-date data if you can accept lower
accuracy.
Clock corrections
Not all the data that PINT uses is easily accessible for programs to download. Observatory clock corrections, for example, may need to be obtained from the observatory through various means (often talking to a support scientist). PINT uses a global repository, https://ipta.github.io/pulsar-clock-corrections/, to retrieve up-to-date clock corrections for all telescopes it knows about. PINT should notify you when your clock files are out of date for the data you are using; be aware that you may obtain reduced accuracy if you have old clock correction files.
Normally, if you try to do some operation that requires unavailable clock corrections, PINT will emit a warning but continue. If you want to be stricter, you can specify limit="error" to various functions like pint.toa.get_TOAs().
If you need to check how up to date your clock corrections are, you can use something like get_observatory("gbt").last_clock_correction_mjd(): the function pint.observatory.Observatory.last_clock_correction_mjd() reports the last MJD for which clock corrections are valid. For most telescopes, this combines the per-telescope clock correction with PINT's global GPS and BIPM clock corrections (neither of which can be reliably extrapolated very far into the future). PINT provides two convenience functions, pint.observatory.list_last_correction_mjds() and pint.observatory.check_for_new_clock_files_in_tempo12_repos(), that will help you check the state of your clock corrections.
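For example, a minimal sketch using the functions named above:
import pint.observatory
from pint.observatory import get_observatory

# Last MJD for which the GBT clock corrections (including GPS/BIPM) are valid.
print(get_observatory("gbt").last_clock_correction_mjd())

# Summarize the validity of the clock corrections for every known observatory.
pint.observatory.list_last_correction_mjds()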
If you need clock files that are not in the global repository (perhaps more recent versions, clock files for telescopes not included in the global repository, or specific versions for reproducibility), you have several options:
- Set the environment variable PINT_CLOCK_OVERRIDE to point to a directory that contains clock files. Any clock file found there will supersede the version found in the global repository. You can also use pint.observatory.export_clock_files() to export the clock files you are currently using to a directory for use in this way later (see the sketch after this list).
- Modify src/pint/data/runtime/observatories.json so that the observatory you are interested in points to the correct clock file. (You may have to redo pip install for PINT to make this take effect.) If you set clock_dir="TEMPO" or clock_dir="TEMPO2" then PINT will look in the clock directory referenced by your environment variables $TEMPO or $TEMPO2 (and nowhere else; it will no longer find clock corrections for this observatory that are included with PINT). You can also specify a specific directory as clock_dir="/home/burnell/clock-files/". Editing this file also allows you to choose between TEMPO- and TEMPO2-format clock corrections with the clock_fmt argument.
- Create a new observatory in your own code. This involves creating a new pint.observatory.topo_obs.TopoObs object like those in src/pint/data/runtime/observatories.json. As long as this object is created before you read in any TOAs that need it, and as long as its name does not overlap with any existing observatory, you should be able to create your custom observatory and point the clock correction files to the right place as above.
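For example, to pin down the clock files used in an analysis so it can be reproduced later, you might export the ones currently in use and point PINT at that directory in a later session. This is a hedged sketch: the directory name is a placeholder, and it assumes export_clock_files() takes the destination directory as its argument:
import pint.observatory

# Copy the clock files currently in use into a directory we control.
pint.observatory.export_clock_files("/home/user/my-clock-files")
Then, before starting a later PINT session:
export PINT_CLOCK_OVERRIDE=/home/user/my-clock-files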
Structure of Pulsar Timing Data Formats
Pulsar timing data has traditionally been divided into two parts: a list of pulse arrival times, with sufficient metadata to work with (a .tim file), and a description of the timing model, with parameter values, metadata, and some fitting instructions (a .par file). These have been ad-hoc formats, created to be easy to work with (originally) using 1980s FORTRAN code (specifically TEMPO). The advent of a second tool that works with these files (TEMPO2) did not, unfortunately, come with a standardization effort, and so files varied further in structure and were not necessarily interpreted in the same way by both tools. As PINT is a third tool, we would prefer to avoid introducing our own, incompatible (obviously or subtly) file formats. We therefore formalize them here.
We are aware that not every set of timing data or parameters “in the wild” will follow these rules. We hope to be able to lay out a clear and specific description of these files and how they are interpreted, then elaborate on how non-conforming files are handled, as well as how TEMPO and TEMPO2 interpret these same files. Where possible we have tried to ensure that our description agrees with both TEMPO and TEMPO2, but as they disagree for some existing files, it may be necessary to offer PINT some guidance on how to interpret some files.
Parameter files (.par)
Parameter files are text files, consisting of a collection of lines whose order is irrelevant. Lines generally begin with an all-uppercase parameter name, then a space-separated list of values whose interpretation depends on the parameter.
We separate parsing such a file into two steps: determining the structure of the timing model, that is, which components make up the timing model and how many parameters they have, then extracting the values and settings from the par file into the model. It is the intention that in PINT these two steps can be carried out separately, for example manually constructing a timing model from a collection of components then feeding it parameter values from a parameter file. It is also the intent that, unlike TEMPO and TEMPO2, PINT should be able to clearly indicate when anomalies have occurred, for example if some parameter was present in the parameter file but not used by any model.
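As a concrete illustration, the usual entry point performs both steps at once; a minimal sketch (the file name is a placeholder):
from pint.models import get_model

# get_model() selects the components implied by the parameter file, then fills in their values.
model = get_model("pulsar.par")
print(list(model.components.keys()))  # which components were selected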
Selecting timing model components
We describe a simple procedure for selecting the relevant timing model components.
- If the BINARY line is present in the parameter file, its value determines which binary model to use; if not, no binary model is used.
- Each model component has one or more “special parameters” or families of parameters identified by a common prefix. If a par file contains a special parameter, or a known alias of one, then the timing model uses the corresponding component.
Components are organized into categories. No more than one component from each category may be present; some categories may be required but in others no component is necessary:
- Solar system dispersion
- Astrometry
- Interstellar dispersion
- Binary
- Spin-down
- Timing noise
Each component may indicate that it supersedes one or more others, that is, that its parameters are a superset of the previous model. In this case, if both are suggested by the parameter file, the component that is superseded is discarded. If applying this rule does not reduce the number of components in the category down to one, then the model is ambiguous.
We note that many parameters have “aliases”, alternative names used in certain par files. For these purposes, aliases are treated as equivalent to the special parameters they are aliases for. Also note that not all parameters need to be special for any component; the intent is for each component to identify a parameter that is unique to it (or models that supersede it) and will always be present.
We intend that PINT have facilities for managing parameter files that are ambiguous by this definition, whether by applying heuristics or by allowing users to clarify their intent.
This scheme as it stands has a problem: some parameter files found “in the wild” specify equatorial coordinates for the pulsar but ecliptic values for the proper motion. These files should certainly use ecliptic coordinates for fitting.
Timing files (.tim)
There are several commonly-used timing file formats. These are collections of lines, but in some cases they can contain structure in the form of blocks that are meant to be omitted from reading or have their time adjusted. We recommend use of the most flexible format, that defined by TEMPO2 and now also supported (to the extent that the engine permits) by TEMPO.
Fitting
A very common operation with PINT is fitting a timing model to timing data.
Fundamentally this operation tries to adjust the model parameters to minimize
the residuals produced when the model is applied to a set of TOAs. The result
of this process is a set of best-fit model parameters, uncertainties on (and
correlations between) these, and residuals from this best-fit model. This is
carried out by constructing a pint.fitter.Fitter object from a pint.toa.TOAs object and a pint.models.timing_model.TimingModel object and then running the pint.fitter.Fitter.fit_toas() method; there are several example notebooks that demonstrate this. Nevertheless there are some subtleties to how fitting works in PINT that we explain here.
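A minimal fitting sketch (the file names are placeholders, and WLSFitter, not mentioned above, is used here as a representative fitter class for narrowband data with uncorrelated errors):
import pint.toa
import pint.models
import pint.fitter

# Load TOAs and a timing model, fit, and write out the post-fit model.
toas = pint.toa.get_TOAs("pulsar.tim")
model = pint.models.get_model("pulsar.par")
fitter = pint.fitter.WLSFitter(toas, model)
fitter.fit_toas()
print(fitter.model.as_parfile())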
Fitting algorithms
PINT is designed to be able to offer several alternative algorithms to arrive at the best-fit model. This is both because fitting can be a time-consuming process if a suboptimal algorithm is chosen, and because different kinds of model and data require different calculations: narrowband (TOA-only) versus wideband (TOA and DM measurements), and uncorrelated versus correlated errors.
The TEMPO/TEMPO2 and default PINT fitting algorithms (pint.fitter.WidebandTOAFitter, for example), leaving aside the rank-reduced case, proceed as follows:
1. Evaluate the model and its derivatives at the starting point \(x\), producing a set of residuals \(\delta y\) and a Jacobian \(M\).
2. Compute \(\delta x\) to minimize \(\left| M\delta x - \delta y \right|_C\), where \(\left| \cdot \right|_C\) is the squared amplitude of a vector with respect to the data uncertainties/covariance \(C\).
3. Update the starting point by \(\delta x\).
TEMPO and TEMPO2 can check whether the predicted improvement of chi-squared, assuming the linear model is correct, is enough to warrant continuing; if so, they jump back to step 1 unless the maximum number of iterations is reached. PINT does not contain this check.
This algorithm is the Gauss-Newton_algorithm for solving nonlinear least-squares problems, and even in one-complex-dimensional cases can exhibit convergence behavior that is literally chaotic. For TEMPO/TEMPO2 and PINT, the problem is that the model is never actually evaluated at the updated starting point before committing to it; it can be invalid (ECC > 1) or the step can be large enough that the derivative does not match the function and thus the chi-squared value after the step can be worse than the initial chi-squared. These issues particularly arise with poorly constrained parameters like M2 or SINI. Users experienced with pulsar timing are frequently all too familiar with this phenomenon and have a collection of tricks for evading it.
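To make step 2 concrete, here is a minimal numpy sketch of the linear-algebra step for the simple case of independent (diagonal) data uncertainties; PINT's fitters additionally handle units, correlated noise, and parameter scaling:
import numpy as np

def gauss_newton_step(M, dy, sigma):
    # Solve min over dx of |M dx - dy|_C with C = diag(sigma**2),
    # by whitening the design matrix and residuals and using ordinary least squares.
    A = M / sigma[:, None]
    b = dy / sigma
    dx, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx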
PINT contains a slightly more sophisticated algorithm, implemented in pint.fitter.DownhillFitter, that takes more careful steps:
1. Evaluate the model and its derivatives at the starting point \(x\), producing a set of residuals \(\delta y\) and a Jacobian \(M\).
2. Compute \(\delta x\) to minimize \(\left| M\delta x - \delta y \right|_C\), where \(\left| \cdot \right|_C\) is the squared amplitude of a vector with respect to the data uncertainties/covariance \(C\).
3. Set \(\lambda\) to 1.
4. Evaluate the model at the starting point plus \(\lambda \delta x\). If this is invalid or worse than the starting point, divide \(\lambda\) by two and repeat this step. If \(\lambda\) is too small, accept the best point seen to date and exit without convergence.
5. If the model improved but only slightly with \(\lambda=1\), exit with convergence. If the maximum number of iterations was reached, exit without convergence. Otherwise update the starting point and return to step 1.
This ensures that PINT tries taking smaller steps if problems arise, and claims convergence only if a normal step worked. It does not solve the problems that arise if some parameters are nearly degenerate, enough to cause problems with the numerical linear algebra.
As a rule, this kind of problem is addressed with the Levenberg-Marquardt algorithm, which
operates on the same principle of taking reduced steps when the derivative appears not to
match the function, but does so in a way that also reduces issues with degenerate parameters;
unfortunately it is not clear how to adapt this algorithm to the rank-reduced case. Nevertheless, PINT contains an implementation in pint.fitter.WidebandLMFitter, but it does not perform as well as one might hope in practice and must be considered experimental.
Coding Style
We would like PINT to be easy to use and easy to contribute to. To this end we’d like to ask that if you’re going to contribute code or documentation that you try to follow the below style advice. We know that not all of the existing code does this, and it’s something we’d like to change.
For a specific listing of the rules we try to write PINT code by, please see PINT coding style.
More general rules and explanations:
- Think about how someone might want to use your code in various ways. Is it called something helpful so that they will be able to find it? Will they be able to do something different with it than you wrote it for? How will it respond if they give it incorrect values?
- Code should follow PEP8. Most importantly, if at all possible, class names should be in CamelCase, while function names should be in snake_case. There is also advice there on line length and whitespace. You can check your code with the tool flake8, but I'm afraid much of PINT's existing code emits a blizzard of warnings.
- Files should be formatted according to the much more specific rules enforced by the tool black. This is as simple as pip install black and then running black on a python file. If an existing file does not follow this style please don't convert it unless you are modifying almost all the file anyway; it will mix in formatting changes with the actual substantive changes you are making when it comes time for us to review your pull request.
- Functions, modules, and classes should have docstrings. These should start with a short one-line description of what the function (or module or class) does. Then, if you want to say more than fits in a line, a blank line and a longer description. If you can, if it's something that will be used widely, please follow the numpy docstring guidelines - these result in very helpful usage descriptions in both the interpreter and online docs. Check the HTML documentation for the thing you are modifying to see if it looks okay.
- Tests are great! When there is a good test suite, you can make changes without fear you're going to break something. Unit tests are a special kind of test, that isolate the functionality of a small piece of code and test it rigorously.
- When you write a new function, write a few tests for it. You will never have a clearer idea of how it's supposed to work than right after you wrote it. And anyway you probably used some code to see if it works, right? Make that into a test, it's not hard. Feed it some bogus data, make sure it raises an exception. Make sure it does the right thing on empty lists, multidimensional arrays, and NaNs as input - even if that's to raise an exception. We use pytest. You can easily run just your new tests.
- Give tests names that describe what property of what thing they are testing. We don't call test functions ourselves so there is no advantage to them having short names. It is perfectly reasonable to have a function called test_download_parallel_fills_cache or test_cache_size_changes_correctly_when_files_are_added_and_removed.
- If your function depends on complicated other functions or data, consider using something like unittest.Mock to replace that complexity with mock functions that return specific values. This is designed to let you test your function specifically in isolation from potential bugs in other parts of the code.
- When you find a bug, you presumably have some code that triggers it. You'll want to narrow that down as much as possible for debugging purposes, so please turn that bug test case into a test - before you fix the bug! That way you know the bug stays fixed.
- If you're trying to track down a tricky bug and you have a test case that triggers it, running pytest tests/test_my_buggy_code.py --pdb will drop you into the python debugger pdb at the moment failure occurs so you can inspect local variables and generally poke around.
- When you're working with a physical quantity or an array of these, something that has units, please use Quantity to keep track of what these units are. If you need a plain floating-point number out of one, use .to(u.m).value, where u.m should be replaced by the units you want the number to be in. This will raise an exception (good!) if the units can't be converted (u.kg for example) and convert if it's in a compatible unit (u.cm, say). Adding units to a number when you know what they are is as simple as multiplying (see the sketch after this list).
- When you want to let the user know some information from deep inside PINT, remember that they might be running a GUI application where they can't see what comes out of logger. Conveniently, this has levels debug, info, warning, and error; the end user can decide which levels of severity they want to see.
- When something goes wrong and your code can't continue and still produce a sensible result, please raise an exception. Usually you will want to raise a ValueError with a description of what went wrong, but if you want users to be able to do something with the specific thing that went wrong (for example, they might want to use an exception to know that they have emptied a container), you can quickly create a new exception class (no more than class PulsarProblem(ValueError): pass) that the user can specifically catch and distinguish from other exceptions. Similarly, if you're catching an exception some code might raise, use except PulsarProblem: to catch just the kind you can deal with.
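A minimal sketch of the Quantity workflow described above:
import astropy.units as u

d = 1.4 * u.kpc              # attach units by multiplying
print(d.to(u.m).value)       # plain float, converted to metres
# d.to(u.kg) would raise a UnitConversionError, which is the desired behaviour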
There are a number of tools out there that can help with the mechanical
aspects of cleaning up your code and catching some obvious bugs. Most of
these are installed through PINT's requirements_dev.txt.
- flake8 reads through code and warns about style issues, things like confusing indentation, unused variable names, un-initialized variables (usually a typo), and names that don't follow python conventions. Unfortunately a lot of existing PINT code has some or all of these problems.
- flake8-diff checks only the code that you have touched - for the most part this pushes you to clean up functions and modules you work on as you go.
- isort sorts your module's import section into conventional order.
- black is a draconian code formatter that completely rearranges the whitespace in your code to standardize the appearance of your formatting.
- blackcellmagic allows you to have black format the cells in a Jupyter notebook.
- pre-commit allows git to automatically run some checks before you check in your code. It may require an additional installation step.
- make coverage can show you if your tests aren't even exercising certain parts of your code.
- editorconfig allows PINT to specify how your editor should format PINT files in a way that many editors can understand (though some, including vim and emacs, require a plugin to notice).
Your editor, whether it is emacs, vim, JupyterLab, Spyder, or some more graphical tool, can probably be made to understand that you are editing python and do things like highlight syntax, offer tab completion on identifiers, automatically indent text, automatically strip trailing white space, and possibly integrate some of the above tools.
The Zen of Python, by Tim Peters:
>>> import this
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
Reference
This section is to provide specific detail for when you know what you’re looking for.
Useful starting places:
- pint.toa - Reading, manipulating, and writing TOAs objects
- pint.models.model_builder.get_model() - Loading TimingModel objects
- pint.models.timing_model.TimingModel - Working with TimingModel objects
- pint.fitter - Fitter objects
Timing Models
PINT, like TEMPO and TEMPO2, supports many different ways of calculating pulse arrival times. The key tool for doing this is a TimingModel object, through which a whole range of Parameter objects are accessible. The actual computation is done by pieces of code that live in Component objects; during the parsing of a parameter file, these are selected based on the parameters present. Binary models are selected explicitly using the BINARY parameter, while each non-binary component is selected if some parameter unique to it is included (for example, if ELAT is present, AstrometryEcliptic is selected). Ambiguous or contradictory parameter files are possible, and for these PINT raises an exception.
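For example, a minimal sketch of inspecting which components a parameter file selected and which parameters the resulting model exposes (the file name is a placeholder):
import pint.models

m = pint.models.get_model("pulsar.par")
print(list(m.components.keys()))  # e.g. includes AstrometryEcliptic if ELAT was present
print(m.params[:10])              # the first few parameter names the model exposes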
Components supported by PINT:
- AbsPhase - Absolute phase model.
- AstrometryEcliptic - Astrometry in ecliptic coordinates.
- AstrometryEquatorial - Astrometry in equatorial coordinates.
- BinaryBT - Blandford and Teukolsky binary model. (BINARY BT)
- BinaryBTPiecewise - Model implementing the BT model with piecewise orbital parameters A1X and T0X. This model lets the user specify time ranges and fit for a different piecewise orbital parameter in each time range. (BINARY BT_piecewise)
- BinaryDD - Damour and Deruelle binary model. (BINARY DD)
- BinaryDDGR - Damour and Deruelle model assuming GR to be correct. (BINARY DDGR)
- BinaryDDH - DD modified to use H3/STIGMA parameter for Shapiro delay. (BINARY DDH)
- BinaryDDK - Damour and Deruelle model with kinematics. (BINARY DDK)
- BinaryDDS - Damour and Deruelle model with alternate Shapiro delay parameterization. (BINARY DDS)
- BinaryELL1 - ELL1 binary model. (BINARY ELL1)
- BinaryELL1H - ELL1 modified to use H3 parameter for Shapiro delay. (BINARY ELL1H)
- BinaryELL1k - ELL1k binary model. (BINARY ELL1k)
- DMWaveX
- DispersionDM - Simple DM dispersion model.
- DispersionDMX - This class provides a DMX model - multiple DM values.
- DispersionJump - This class provides the constant offsets to the DM values.
- EcorrNoise - Noise correlated between nearby TOAs.
- FD - A timing model for frequency evolution of pulsar profiles.
- FDJump - A timing model for system-dependent frequency evolution of pulsar profiles.
- FDJumpDM - This class provides system-dependent DM offsets for narrow-band TOAs.
- Glitch - Pulsar spin-down glitches.
- IFunc - This class implements tabulated delays.
- PLDMNoise - Model of DM variations as radio frequency-dependent noise with a power-law spectrum.
- PLRedNoise - Timing noise with a power-law spectrum.
- PhaseJump - Arbitrary jumps in pulse phase.
- PhaseOffset - Explicit pulse phase offset between physical TOAs and the TZR TOA.
- PiecewiseSpindown - Pulsar spin-down piecewise solution.
- ScaleDmError - Correction for estimated wideband DM measurement uncertainty.
- ScaleToaError - Correct reported template fitting uncertainties.
- SolarSystemShapiro - Shapiro delay due to light bending near Solar System objects.
- SolarWindDispersion - Dispersion due to the solar wind (basic model).
- SolarWindDispersionX - This class provides a SWX model - multiple Solar Wind segments.
- Spindown - A simple timing model for an isolated pulsar.
- TroposphereDelay - Model for accounting for the troposphere delay for topocentric TOAs.
- Wave - Delays expressed as a sum of sinusoids.
- WaveX
Supported Parameters
The following table lists all the parameters that PINT can understand (along with their aliases). The model components that use them (linked below) should give more information about how they are interpreted.
Some parameters PINT understands have aliases - for example, the parameter PINT
calls “ECC” may also be written as “E” in parameter files. PINT will understand
these parameter files, but will always refer to this parameter internally as
“ECC”. By default, though, when PINT reads a parameter file, PINT will remember
the alias that was used, and PINT will write the model out using the same
alias. This can be controlled by the use_alias attribute of Parameter objects.
PINT supports families of parameters, either specified by prefix (F0, F1, F2, … or DMX_0017, DMX_0123, …) or selecting subsets of parameters based on flags (JUMP -tel AO). These are indicated in the table with square brackets. Note that like the frequency derivatives, these families may have units that vary in a systematic way.
Parameters can also have different types. Most are long double floating point, with or without units; these can be specified in the usual 1.234e5 format, although they also support 1.234d5 as well as capitalized versions for compatibility. One or two parameters - notably A1DOT - can accept a value scaled by 1e12, automatically rescaling upon read; although this is confusing, it is necessary because TEMPO does this and so there are parameter files “in the wild” that use this feature. Other data types allow input of different formats, for example RAJ 10:23:47.67; boolean parameters allow 1/0, Y/N, T/F, YES/NO, TRUE/FALSE, or lower-case versions of these.
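For example, a hedged sketch of the alias behaviour (the file name is a placeholder and is assumed to describe a binary pulsar so that ECC is present): whether the par file wrote the eccentricity as ECC or its alias E, the model exposes it as ECC, and use_alias records the name that will be written back out:
from pint.models import get_model

m = get_model("binary_pulsar.par")
print(m.ECC.quantity)    # the eccentricity, however it was spelled in the file
print(m.ECC.use_alias)   # the alias (if any) used when writing the model back out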
Name / Aliases |
Description |
Kind |
Components |
---|---|---|---|
A0 |
DD model aberration parameter A0 |
s |
|
A1 |
Projected semi-major axis of pulsar orbit, ap*sin(i) |
ls |
|
A1DOT / XDOT |
Derivative of projected semi-major axis, d[ap*sin(i)]/dt |
ls / s |
|
A1X_{number} |
Parameter A1 variation |
ls |
|
B0 |
DD model aberration parameter B0 |
s |
|
BINARY |
Pulsar System/Binary model |
string |
|
CHI2 |
Chi-squared value obtained during fitting |
number |
|
CHI2R |
Reduced chi-squared value obtained during fitting |
number |
|
CLOCK / CLK |
Timescale to use |
string |
|
CORRECT_TROPOSPHERE |
Enable Troposphere Delay Model |
boolean |
|
DECJ / DEC |
Declination (J2000) |
deg |
|
DILATEFREQ |
Whether or not TEMPO2 should apply gravitational redshift and time dilation to observing frequency (Y/N; PINT only supports N) |
boolean |
|
DM |
Dispersion measure |
pc / cm3 |
|
DMDATA |
Was the fit done using per-TOA DM information? |
boolean |
|
DMEFAC {flag} {value} |
A multiplication factor on the measured DM uncertainties, |
number |
|
DMEPOCH |
Epoch of DM measurement |
d |
|
DMEQUAD {flag} {value} |
An error term added in quadrature to the scaled (by EFAC) TOA uncertainty. |
pc / cm3 |
|
DMJUMP {flag} {value} |
DM value offset. |
pc / cm3 |
|
DMRES |
DM residual after fitting (wideband only) |
pc / cm3 |
|
DMWXCOS_{number} |
Cosine amplitudes for Fourier representation of DM noise |
dmu |
|
DMWXEPOCH |
Reference epoch for Fourier representation of DM noise |
d |
|
DMWXFREQ_{number} |
Component frequency for Fourier representation of DM noise |
1 / d |
|
DMWXSIN_{number} |
Sine amplitudes for Fourier representation of DM noise |
dmu |
|
DMX |
Dispersion measure |
pc / cm3 |
|
DMXR1_{number} |
Beginning of DMX interval |
d |
|
DMXR2_{number} |
End of DMX interval |
d |
|
DMX_{number} |
Dispersion measure variation |
pc / cm3 |
|
DM{number} |
nth time derivative of the dispersion measure |
pc / (yr cm3) |
|
DR |
Relativistic deformation of the orbit |
number |
|
DTH / DTHETA |
Relativistic deformation of the orbit |
number |
|
ECC / E |
Eccentricity |
number |
|
ECL |
Obliquity of the ecliptic (reference) |
string |
|
ECORR {flag} {value} / TNECORR {flag} {value} |
An error term that is correlated among all TOAs in an observing epoch. |
us |
|
EDOT |
Eccentricity derivative with respect to time |
1 / s |
|
EFAC {flag} {value} / T2EFAC {flag} {value}, TNEF {flag} {value} |
A multiplication factor on the measured TOA uncertainties, |
number |
|
ELAT / BETA |
Ecliptic latitude |
deg |
|
ELONG / LAMBDA |
Ecliptic longitude |
deg |
|
EPHEM |
Ephemeris to use |
string |
|
EPS1 |
First Laplace-Lagrange parameter, ECC*sin(OM) |
number |
|
EPS1DOT |
First derivative of first Laplace-Lagrange parameter |
1e-12 / s |
|
EPS2 |
Second Laplace-Lagrange parameter, ECC*cos(OM) |
number |
|
EPS2DOT |
Second derivative of first Laplace-Lagrange parameter |
1e-12 / s |
|
EQUAD {flag} {value} / T2EQUAD {flag} {value} |
An error term added in quadrature to the scaled (by EFAC) TOA uncertainty. |
us |
|
FB{number} |
0th time derivative of frequency of orbit |
1 / s |
|
FD10JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 10 |
s |
|
FD11JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 11 |
s |
|
FD12JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 12 |
s |
|
FD13JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 13 |
s |
|
FD14JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 14 |
s |
|
FD15JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 15 |
s |
|
FD16JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 16 |
s |
|
FD17JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 17 |
s |
|
FD18JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 18 |
s |
|
FD19JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 19 |
s |
|
FD1JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 1 |
s |
|
FD20JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 20 |
s |
|
FD2JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 2 |
s |
|
FD3JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 3 |
s |
|
FD4JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 4 |
s |
|
FD5JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 5 |
s |
|
FD6JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 6 |
s |
|
FD7JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 7 |
s |
|
FD8JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 8 |
s |
|
FD9JUMP {flag} {value} |
System-dependent FD parameter of polynomial index 9 |
s |
|
FDJUMPDM {flag} {value} |
System-dependent DM offset. |
pc / cm3 |
|
FDJUMPLOG |
Whether to use log-frequency (Y) or linear-frequency (N) for computing FDJUMPs. |
boolean |
|
FD{number} |
Polynomial coefficient of log-frequency-dependent delay |
s |
|
FINISH |
End MJD for fitting |
d |
|
F{number} |
Spin-frequency |
Hz |
|
GAMMA |
Time dilation & gravitational redshift |
s |
|
GLEP_{number} |
Epoch of glitch 1 |
d |
|
GLF0D_{number} |
Decaying frequency change for glitch 1 |
Hz |
|
GLF0_{number} |
Permanent frequency change for glitch 1 |
Hz |
|
GLF1_{number} |
Permanent frequency-derivative change for glitch 1 |
Hz / s |
|
GLF2_{number} |
Permanent second frequency-derivative change for glitch 1 |
Hz / s2 |
|
GLPH_{number} |
Phase change for glitch 1 |
number |
|
GLTD_{number} |
Decay time constant for glitch 1 |
d |
|
H3 |
Shapiro delay parameter H3 as in Freire and Wex 2010 Eq(20) |
s |
|
H4 |
Shapiro delay parameter H4 as in Freire and Wex 2010 Eq(21) |
s |
|
IFUNC{number} |
Interpolation control point pair (MJD, delay) |
s |
|
INFO |
Tells TEMPO to write some extra information about frontend/backend combinations; -f is recommended |
string |
|
JUMP {flag} {value} |
Phase jump for selection. |
s |
|
K96 |
Flag for Kopeikin binary model proper motion correction |
boolean |
|
KIN |
Inclination angle |
deg |
|
KINIAU |
Inclination angle in the IAU convention |
deg |
|
KOM |
The longitude of the ascending node |
deg |
|
KOMIAU |
The longitude of the ascending node in the IAU convention |
deg |
|
LNEDOT |
Log-derivative of the eccentricity EDOT/ECC |
1 / yr |
|
M2 |
Companion mass |
solMass |
|
MP |
Pulsar mass |
solMass |
|
MTOT |
Total system mass in units of Solar mass |
solMass |
|
NE_SW / NE1AU, SOLARN0 |
Solar Wind density at 1 AU |
1 / cm3 |
|
NHARMS |
Number of harmonics for ELL1H shapiro delay. |
integer |
|
NTOA |
Number of TOAs used in the fitting |
integer |
|
OM |
Longitude of periastron |
deg |
|
OMDOT |
Rate of advance of periastron |
deg / yr |
|
PB |
Orbital period |
d |
|
PBDOT |
Orbital period derivative with respect to time |
number |
|
PEPOCH |
Reference epoch for spin-down |
d |
|
PHOFF |
Overall phase offset between physical TOAs and the TZR TOA. |
number |
|
PLANET_SHAPIRO |
Include planetary Shapiro delays |
boolean |
|
PMDEC |
Proper motion in DEC |
mas / yr |
|
PMELAT / PMBETA |
Proper motion in ecliptic latitude |
mas / yr |
|
PMELONG / PMLAMBDA |
Proper motion in ecliptic longitude |
mas / yr |
|
PMRA |
Proper motion in RA |
mas / yr |
|
POSEPOCH |
Reference epoch for position |
d |
|
PSR / PSRJ, PSRB |
Source name |
string |
|
PWEP_{number} |
Epoch of solution piece 1 |
d |
|
PWF0_{number} |
Frequency of solution piece 1 |
Hz |
|
PWF1_{number} |
Frequency-derivative of solution piece 1 |
Hz / s |
|
PWF2_{number} |
Second frequency-derivative of solution piece 1 |
Hz / s2 |
|
PWPH_{number} |
Starting phase of solution piece 1 |
number |
|
PWSTART_{number} |
Start epoch of solution piece 1 |
d |
|
PWSTOP_{number} |
Stop epoch of solution piece 1 |
d |
|
PX |
Parallax |
mas |
|
RAJ / RA |
Right ascension (J2000) |
hourangle |
|
RM |
Rotation measure |
rad / m2 |
|
RNAMP |
Amplitude of powerlaw red noise. |
number |
|
RNIDX |
Spectral index of powerlaw red noise. |
number |
|
SHAPMAX |
Function of inclination angle |
number |
|
SIFUNC |
Type of interpolation |
number |
|
SINI |
Sine of inclination angle |
number |
|
START |
Start MJD for fitting |
d |
|
STIGMA / VARSIGMA, STIG |
Shapiro delay parameter STIGMA as in Freire and Wex 2010 Eq(12) |
number |
|
SWM |
Solar Wind Model (0 is from Edwards+ 2006, 1 is from You+2007,2012/Hazboun+ 2022) |
number |
|
SWP |
Solar Wind Model radial power-law index (only for SWM=1) |
number |
|
SWXDM_{number} |
Max Solar Wind DM |
pc / cm3 |
|
SWXP_{number} |
Solar wind power-law index |
number |
|
SWXR1_{number} |
Beginning of SWX interval |
d |
|
SWXR2_{number} |
End of SWX interval |
d |
|
T0 |
Epoch of periastron passage |
d |
|
T0X_{number} |
Parameter T0 variation |
d |
|
T2CMETHOD |
Method for transforming from terrestrial to celestial frame (IAU2000B/TEMPO; PINT only supports ????) |
string |
|
TASC |
Epoch of ascending node |
d |
|
TIMEEPH |
Time ephemeris to use for TDB conversion; for PINT, always FB90 |
string |
|
TNDMAMP |
Amplitude of powerlaw DM noise in tempo2 format |
number |
|
TNDMC |
Number of DM noise frequencies. |
number |
|
TNDMGAM |
Spectral index of powerlaw DM noise in tempo2 format |
number |
|
TNEQ {flag} {value} |
An error term added in quadrature to the scaled (by EFAC) TOA uncertainty in units of log10(second). |
dex(s) |
|
TNREDAMP |
Amplitude of powerlaw red noise in tempo2 format |
number |
|
TNREDC |
Number of red noise frequencies. |
number |
|
TNREDGAM |
Spectral index of powerlaw red noise in tempo2 format |
number |
|
TRACK |
Tracking Information |
string |
|
TRES |
TOA residual after fitting |
us |
|
TZRFRQ |
The frequency of the zero phase TOA. |
MHz |
|
TZRMJD |
Epoch of the zero phase TOA. |
d |
|
TZRSITE |
Observatory of the zero phase TOA. |
string |
|
UNITS |
Units (TDB assumed) |
string |
|
WAVEEPOCH |
Reference epoch for wave solution |
d |
|
WAVE_OM |
Base frequency of wave solution |
1 / d |
|
WAVE{number} |
Wave components |
s |
|
WXCOS_{number} |
Cosine amplitudes for Fourier representation of red noise |
s |
|
WXEPOCH |
Reference epoch for Fourier representation of red noise |
d |
|
WXFREQ_{number} |
Component frequency for Fourier representation of red noise |
1 / d |
|
WXSIN_{number} |
Sine amplitudes for Fourier representation of red noise |
s |
|
XOMDOT |
Excess longitude of periastron advance compared to GR |
deg / yr |
|
XPBDOT |
Excess orbital period derivative with respect to time compared to GR |
number |
|
XR1_{number} |
Beginning of paramX interval |
d |
|
XR2_{number} |
End of paramX interval |
d |
For comparison, there is a table of parameters that TEMPO supports.
Observatory List
The current list of defined observatories is:
Name / Aliases |
Origin |
Location |
Clock File(s) |
---|---|---|---|
acre (acreroad, ar, a) |
University of Glasgow observatory.
The origin of this data is unknown. |
||
algonquin (aro, ar) |
The Algonquin Radio Observatory.
The origin of this data is unknown. |
||
arecibo (aoutc, 3, ao) |
The Arecibo telescope.
These are the coordinates used for VLBI as of March 2020 (MJD 58919). They are based on a fiducial position at MJD 52275 plus a (continental) drift velocity of [0.0099, 0.0045, 0.0101] m/yr. This data was obtained from Ben Perera in September 2021. |
||
arecibo_pre_2021 | The Arecibo telescope (pre-2021).
The origin of this data is unknown but as of 2021 June 8 it agrees exactly with the values used by TEMPO and TEMPO2. It is preserved to facilitate comparisons with the more modern position measurement. |
||
ata (hcro) |
The Allen Telescope Array (ATA).
At Hat Creek Radio Observatory. Origin of this data is unknown but as of 2021 June 8 this value agrees exactly with the value used by TEMPO2. |
||
axis (axi) |
Fake telescope for IPTA data challenge.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
barycenter (bary, bat, @, ssb) |
Built-in special location. |
||
cambridge (cam) |
Mullard Radio Astronomy Observatory (MRAO).
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
ccera | Canadian Centre for Experimental Radio Astronomy.
The origin of this data is unknown. Note that this location is in Smiths Falls, ON, where the telescope was previously deployed. It has since moved to Rideau Ferry, ON but the coordinates have not been updated. |
||
chime (y, ch) |
The Canadian Hydrogen Intensity Mapping Experiment (CHIME).
Origin of these coordinates are from surveyor reports of the CHIME site (circa 2019 & 2020) and technical documents on the dimensions of the telescope structure (circa 2015). Results were compiled in January 2021. The coordinates are relative to the GRS80 ellipsoid. |
||
darnhall | Darnhall.
Part of MERLIN. Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de601 (eflfr) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Effelsberg Observatory.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de601hba (eflfrhba) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Effelsberg Observatory.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de601lba (eflfrlba) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Effelsberg Observatory.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de601lbh (eflfrlbh) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Effelsberg Observatory?.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de602 (uwlfr) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Unterweilenbach, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de602hba (uwlfrhba) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Unterweilenbach, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de602lba (uwlfrlba) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Unterweilenbach, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de602lbh (uwlfrlbh) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Unterweilenbach, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de603 (tblfr) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Tautenburg, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de603hba (tblfrhba) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Tautenburg, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de603lba (tblfrlba) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Tautenburg, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de603lbh (tblfrlbh) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Tautenburg, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de604 (polfr) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Bornim, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de604hba (polfrhba) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Bornim, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de604lba (polfrlba) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Bornim, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de604lbh (polfrlbh) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Bornim, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de605 (julfr) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Jülich, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de605hba (julfrhba) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Jülich, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de605lba (julfrlba) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Jülich, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de605lbh (julfrlbh) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Jülich, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de609 (ndlfr) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Norderstedt, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de609hba (ndlfrhba) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Norderstedt, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de609lba (ndlfrlba) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Norderstedt, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
de609lbh (ndlfrlbh) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Norderstedt, Germany.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
defford | Defford.
Part of MERLIN. Imported from TEMPO2 observatories.dat 2021 June 7. |
||
drao (dr) |
The Dominion Radio Astronomical Observatory (DRAO).
The origin of this data is unknown. |
||
dss_43 (tid43) |
Tidbinbilla Deep Space Network.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
effelsberg (ef, g, eff) |
The Effelsberg radio telescope.
These are the coordinates used for VLBI as of March 2020 (MJD 58919). They are based on a fiducial position at MJD 56658 plus a (continental) drift velocity of [-0.0144, 0.0167, 0.0106] m/yr. This data was obtained from Ben Perera in September 2021. |
||
effelsberg_asterix (effix) |
The Effelsberg Radio Telescope with the Asterix backend.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
effelsberg_pre_2021 | The Effelsberg radio telescope (pre-2021).
The origin of this data is unknown but as of 2021 June 8 it agrees exactly with the values used by TEMPO and TEMPO2. |
||
fast (k, fa) |
The FAST radio telescope in China.
Origin of this data is unknown but as of 2021 June 8 it agrees exactly with the TEMPO value and disagrees by about 17 km with the TEMPO2 value. |
||
fi609 (filfr) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Kilpisjärvi, Finland.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
fi609hba (filfrhba) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Kilpisjärvi, Finland.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
fi609lba (filfrlba) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Kilpisjärvi, Finland.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
fi609lbh (filfrlbh) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Kilpisjärvi, Finland.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
fr606 (frlfr) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Nançay, France.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
fr606hba (frlfrhba) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Nançay, France.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
fr606lba (frlfrlba) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Nançay, France.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
fr606lbh (frlfrlbh) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Nançay, France.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
gb140 (g1, a) |
The Green Bank 140-foot telescope.
Note that PINT used to accept ‘G1’ as an alias for the GEO600 gravitational-wave observatory but that conflicted with what TEMPO accepted for this telescope so that has been removed. Imported from TEMPO2 observatories.dat 2021 June 7. |
||
gb300 (g3, 9) |
The Green Bank 300-foot telescope.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
gb853 (b, g8) |
The Green Bank 85-3 telescope.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
gb_20m_xyz (g2) |
Green Bank Observatory 20m.
Imported from TEMPO obsys.dat 2021 June 8. |
||
gbt (gb, 1) |
The Robert C. Byrd Green Bank Telescope.
This data was obtained by Joe Swiggum from Ryan Lynch in 2021 September. |
||
gbt_pre_2021 | The Robert C. Byrd Green Bank Telescope (pre-2021).
The origin of this data is unknown but as of 2021 June 8 it agrees exactly with the values used by TEMPO and TEMPO2. |
||
geo600 (geohf) |
The GEO600 gravitational-wave observatory.
Note that PINT used to list ‘G1’ as an alias for this telescope, but TEMPO accepts ‘G1’ as an alias for the Green Bank 140-foot telescope, so it was removed here. Origin of this data is unknown but as of 2021 June 8 this value agrees exactly with the value used by TEMPO2. |
||
geocenter (o, 0, coe, geo) |
Built-in special location. |
||
gmrt (r, gm) |
The Giant Metre-wave Radio Telescope (GMRT).
The origin of this data is unknown but as of 2021 June 8 it agrees exactly with the values used by TEMPO and TEMPO2. GMRT does not need clock files as the data is recorded against UTC(gps). |
||
goldstone (gs) |
Deep Space Network at Goldstone.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
grao (grao) |
Ghana Radio Astronomy Observatory.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
hamburg | Imported from TEMPO2 observatories.dat 2021 June 7. As of 2023-11-01 this shows up in Poland. |
||
hartebeesthoek (hart) |
Hartebeesthoek Radio Astronomy Observatory.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
hawc | High Altitude Water Cherenkov Experiment (HAWC).
Coordinates provided (as geodetic coordinates: 18:59:41.63 N, 97:18:27.39 W, 4096 m) by the spokesperson Ke Fang on behalf of the HAWC Collaboration, 2023 April 21. |
||
hess | The High Energy Stereoscopic System (HESS).
An Imaging Atmospheric Cherenkov Telescope. These coordinates were provided (from geodetic coordinates: lat = 23°16’18’’S, lon = 16°30’00’’E at 1800 m asl) by Maxime Regeard and Arache Djannati-Atai on behalf of the H.E.S.S. Collaboration, 2023 January 18. |
||
hobart (ho, 4) |
Mt Pleasant Radio Observatory in Hobart, Tasmania.
Origin of this data is unknown but as of 2021 June 8 this value agrees exactly with the value used by TEMPO2 and TEMPO. |
||
iar1 | Argentine Institute for Radio Astronomy Telescope 1.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
iar2 | Argentine Institute for Radio Astronomy Telescope 2.
Argentine Institute for Radio Astronomy. Imported from TEMPO2 observatories.dat 2021 June 7. |
||
jb_42ft (jb42) |
42ft telescope at Jodrell Bank Observatory.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
jb_mkii (h, j2, jbmk2) |
The Mark II telescope at Jodrell Bank Observatory.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
jb_mkii_dfb (jbmk2dfb) |
The Mark II telescope at Jodrell Bank Observatory with the DFB backend.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
jb_mkii_rch (jbmk2roach) |
The Mark II telescope at Jodrell Bank Observatory with the Roach backend.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
jbafb (jboafb) |
The Lovell telescope at Jodrell Bank with the AFB backend.
These are the coordinates used for VLBI as of March 2020 (MJD 58919). They are based on a fiducial position at MJD 50449 plus a (continental) drift velocity of [-0.0117, 0.0170, 0.0093] m/yr. This data was obtained from Ben Perera in September 2021. This data is for the AFB instrument - This doesn’t require any clock corrections. |
||
jbdfb (jbodfb) |
The Lovell telescope at Jodrell Bank with the DFB backend.
These are the coordinates used for VLBI as of March 2020 (MJD 58919). They are based on a fiducial position at MJD 50449 plus a (continental) drift velocity of [-0.0117, 0.0170, 0.0093] m/yr. This data was obtained from Ben Perera in September 2021. This data is for the DFB instrument - a different clock file is required for this instrument to accommodate recorded instrumental delays. |
||
jbroach (jboroach) |
The Lovell telescope at Jodrell Bank with the Roach backend.
These are the coordinates used for VLBI as of March 2020 (MJD 58919). They are based on a fiducial position at MJD 50449 plus a (continental) drift velocity of [-0.0117, 0.0170, 0.0093] m/yr. This data was obtained from Ben Perera in September 2021. This data is for the Roach instrument - a different clock file is required for this instrument to accommodate recorded instrumental delays. |
||
jodrell (jb, 8) |
The Lovell telescope at Jodrell Bank.
These are the coordinates used for VLBI as of March 2020 (MJD 58919). They are based on a fiducial position at MJD 50449 plus a (continental) drift velocity of [-0.0117, 0.0170, 0.0093] m/yr. This data was obtained from Ben Perera in September 2021. Note that any actual instrument - DFB, Roach, AFB - requires a different observatory code to obtain the correct clock files. |
||
jodrell_pre_2021 | The Lovell telescope at Jodrell Bank (pre-2021).
The origin of this data is unknown but as of 2021 June 8 it agrees exactly with the values used by TEMPO and TEMPO2. |
||
jodrellm4 (jbm4) |
The Mark IV telescope at Jodrell Bank Observatory.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
kagra (k1, lcgt) |
The KAGRA gravitational-wave observatory.
Origin of this data is unknown but as of 2021 June 8 this value agrees exactly with the value used by TEMPO2. |
||
kat-7 (k7) |
KAT-7.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
knockin | Knockin.
Part of MERLIN. Imported from TEMPO2 observatories.dat 2021 June 7. |
||
la_palma (lapalma, lap) |
La Palma observatory in the Canary Islands.
Note that as of 2021 June 8 TEMPO2’s position for this observatory lists it as somewhere in central Pakistan, exactly 90 degrees to the east of this position. |
||
leap (leap) |
The Large European Array for Pulsars.
This is the same as the position of the Effelsberg radio telescope. Imported from TEMPO2 observatories.dat 2021 June 7. |
||
lho (hanford, h1) |
The LIGO Hanford gravitational-wave observatory.
Origin of this data is unknown but as of 2021 June 8 this value agrees exactly with the value used by TEMPO2. |
||
llo (l1, livingston) |
The LIGO Livingston gravitational-wave observatory.
Origin of this data is unknown but as of 2021 June 8 this value agrees exactly with the value used by TEMPO2. |
||
lofar (t, lf) |
The Dutch Low-Frequency Array (LOFAR).
Note that other TEMPO codes have been used for this telescope. Imported from TEMPO2 observatories.dat 2021 June 7. |
||
lst | Large Size Telescope Cherenkov facility.
Origin of this data is unknown. |
||
lwa1 (lw, x) |
The Long Wavelength Array (LWA) in New Mexico.
Origin of this data is unknown but as of 2021 June 8 this value agrees exactly with the value used by TEMPO2 but disagrees with the value used by TEMPO by about 125 m. |
||
lwa_sv (ls, lwasv) |
Long Wavelength Array (LWA) at Sevilleta.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
magic | MAGIC Florian Goebel Telescopes.
A ground-based gamma-ray telescope. Origin of this data is unknown. |
||
meerkat (m, mk) |
MeerKAT.
For MeerKAT when used in timing mode. The origin of this data is unknown but as of 2021 June 8 it agrees exactly with the values used by TEMPO and TEMPO2. |
||
mkiii (jbmk3) |
The Mark III telescope at Jodrell Bank Observatory.
Imported from TEMPO2 observatories.dat 2021 June 7. However, it is likely incorrect (shows up as north of Greenland). |
||
most (mo, e) |
The Molonglo Observatory Synthesis Telescope (MOST).
Origin of this data is unknown but as of 2021 June 8 this value agrees exactly with the value used by TEMPO2. |
||
mwa (mw, u) |
The Murchison Widefield Array (MWA).
Origin of this data is unknown but as of 2021 June 8 this value agrees exactly with the value used by TEMPO2 and TEMPO. |
||
nancay (nc, f, ncy) |
The Nançay radio telescope.
The origin of this data is unknown but as of 2021 June 8 it agrees exactly with the values used by TEMPO and TEMPO2. This telescope appears to require zero clock corrections to GPS. |
||
nanshan (ns) |
Nanshan Radio Telescope.
At the Xinjiang Astronomical Observatory. Imported from TEMPO2 observatories.dat 2021 June 7. |
||
narrabri (atca) |
Australia Telescope Compact Array (ATCA).
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
ncyobs (w, nuppi) |
The Nançay radio telescope with the NUPPI back-end.
The origin of this data is unknown but as of 2021 June 8 it agrees exactly with the values used by TEMPO and TEMPO2. |
||
northern_cross (bo, d) |
Northern Cross Radio Telescope.
Imported from TEMPO obsys.dat 2021 June 8. |
||
op (obspm) |
The Nançay radio telescope.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
ort (or) |
The Ooty Radio Telescope (ORT).
Coordinates provided by Jaikhomba Singha on 25 Jan 2023. ORT does not need clock files as the data is recorded against UTC(gps). |
||
parkes (pk, pks, 7) |
The Parkes (Murriyang) radio telescope.
The origin of this data is unknown but as of 2021 June 8 it agrees exactly with the values used by TEMPO and TEMPO2. |
||
pico_veleta (v, pv) |
Observatorio Pico Veleta.
Imported from TEMPO2 observatories.dat 2021 June 7. Fixed sign of y coordinate on 2023 Nov 6, along with M. Keith in tempo2, to make the location sensible. |
||
princeton (5, pr) |
Princeton University.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
ps1 (p, ps) |
Pan-STARRS.
Origin of this data is unknown. |
||
quabbin (qu, 2) |
Five College Radio Astronomy Observatory.
Imported from TEMPO obsys.dat 2021 June 8. |
||
se607 (onlfr) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Onsala, Sweden.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
se607hba (onlfrhba) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Onsala, Sweden.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
se607lba (onlfrlba) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Onsala, Sweden.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
se607lbh (onlfrlbh) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Onsala, Sweden.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
shao (sh, s) |
Shanghai Astronomical Observatory.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
srt (sr, z) |
Sardinia Radio Telescope.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
stl_geo (spacecraft, stl_geo) |
Built-in special location. |
||
tabley (pickmere) |
Pickmere.
Part of MERLIN. Imported from TEMPO2 observatories.dat 2021 June 7. |
||
uao (ns) |
Urumqi Astronomical Observatory.
At the Xinjiang Astronomical Observatory. Imported from TEMPO2 observatories.dat 2021 June 7. |
||
uk608 (uklfr) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Chilbolton, UK.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
uk608hba (uklfrhba) |
The Low-Frequency Array (LOFAR) high-band antenna (HBA) at Chilbolton, UK.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
uk608lba (uklfrlba) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Chilbolton, UK.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
uk608lbh (uklfrlbh) |
The Low-Frequency Array (LOFAR) low-band antenna (LBA) at Chilbolton, UK.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
utr-2 (utr2) |
Giant Ukrainian Radio Telescope.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
virgo (v1) |
The VIRGO gravitational-wave observatory.
Origin of this data is unknown but as of 2021 June 8 this value agrees exactly with the value used by TEMPO2. |
||
vla (jvla, vl, 6) |
The Jansky Very Large Array (VLA).
The origin of this data is unknown but as of 2021 June 8 it agrees exactly with the values used by TEMPO and TEMPO2. |
||
vla_site (v2, c) |
The Jansky Very Large Array (VLA) site.
Imported from TEMPO obsys.dat 2021 June 8. |
||
warkworth_12m (wark12m) |
Warkworth 12m Radio Telescope.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
warkworth_30m (wark30m) |
Warkworth 30m Radio Telescope.
Imported from TEMPO2 observatories.dat 2021 June 7. |
||
wsrt (i, we, ws) |
The Westerbork Synthesis Radio Telescope (WSRT).
Note that different letters have been used in the past to indicate this telescope. The origin of this data is unknown but as of 2021 June 8 it agrees exactly with the values used by TEMPO and TEMPO2. |
Command-line tools
PINT comes with several command-line tools that perform useful tasks without the need to write a Python script. The scripts are installed automatically by setup.py into the bin directory of your Python distribution, so they should be found in your PATH.
Examples of the scripts are below. It is assumed that you are running in the “examples” subdirectory of the PINT distro.
All of the tools accept -h
or --help
to provide a description of
the options.
pintk
pintk
is a GUI for PINT (based on the Tk GUI toolbox). It has many of the same functions as the plk plugin for Tempo2.
pintk NGC6440E.par NGC6440E.tim
convert_parfile
convert_parfile
allows a user to convert a par file between various formats and among various binary models. For instance:
convert_parfile -f tempo input.par -o output.par
will convert input.par
to output.par
in the tempo
format. Or:
convert_parfile -b DD ell1.par -o dd.par
will convert the model in ell1.par
to the DD binary model.
pintbary
pintbary
does quick barycentering calculations, converting an
MJD(UTC) on the command line to TDB with barycentric delays applied. The
position used for barycentering can be read from a par file or from the
command line
pintbary 56000.0 --parfile J0613-sim.par
pintbary 56001.0 --ra 12h13m14.2s --dec 14d11m10.0s --ephem DE421
pintempo
pintempo
is a command line tool for PINT that is similar to
tempo
or tempo2
. It takes two required arguments, a parfile and
a tim file.
pintempo --plot NGC6440E.par NGC6440E.tim
zima
zima
is a command line tool that uses PINT to create simulated TOAs
zima NGC6440E.par fake.tim
photonphase
This tool reads FITS event files from NICER, RXTE, or other missions that produce FITS event files, and computes a phase for each photon according to a timing model. The phases can be plotted or written out as a FITS file column. Currently, NICER and RXTE events can be raw files, which will be processed by reading an orbit file to compute spacecraft positions. XMM-Newton or Chandra data can be processed if they are barycentered events produced by their mission-specific barycentering tools. Specific support for those missions would be easy to add.
cd ../tests/datafile
photonphase --plot B1509_RXTE_short.fits J1513-5908_PKS_alldata_white.par --orbfile FPorbit_Day6223
fermiphase
This tool uses PINT to read Fermi LAT event (FT1) files and compute phases for each photon. It can plot a phaseogram of the computed phases or write a PULSE_PHASE column back to the FITS file.
Works with raw Fermi FT1 files, geocentered events (as produced by the
Fermi Science Tool gtbary tcorrect=geo
), or barycentered events.
fermiphase --plot J0030+0451_P8_15.0deg_239557517_458611204_ft1weights_GEO_wt.gt.0.4.fits PSRJ0030+0451_psrcat.par CALC
event_optimize
This code uses PINT and emcee to do MCMC likelihood fitting of a
timing model to a set of Fermi LAT photons. Currently requires the Fermi
FT1 file to contain geocentered events (usually from
gtbary tcorrect=geo
).
The code reads in Fermi photon events, along with a par file and a pulse profile template, and optimizes the timing model using an MCMC sampling process. The parameters to fit and their priors are determined by reading the par file. It can use photon weights, if available, or compute them based on a simple heuristic computation, if desired. There are many options to control the behavior.
An example run is shown below, using sample files that are included in the examples subdirectory of the PINT distro.
event_optimize J0030+0451_P8_15.0deg_239557517_458611204_ft1weights_GEO_wt.gt.0.4.fits PSRJ0030+0451_psrcat.par templateJ0030.3gauss --weightcol=PSRJ0030+0451 --minWeight=0.9 --nwalkers=100 --nsteps=500
tcb2tdb
A command line tool that converts par files from TCB timescale to TDB timescale.
tcb2tdb J0030+0451_tcb.par J0030+0451_tdb.par
pint
PINT Is Not TEMPO3!
This package has many submodules, but useful starting places may be
pint.toa.TOAs
, pint.models.timing_model.TimingModel
, and
pint.residuals.Residuals
.
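As a quick orientation, here is a minimal sketch of how these three pieces fit together, using the NGC6440E example files shipped with PINT (the same data used in the tutorials); time_resids is the residuals as an astropy Quantity:

import pint.config
from pint.models import get_model_and_toas
from pint.residuals import Residuals

# locate the example par and tim files shipped with PINT
parfile = pint.config.examplefile("NGC6440E.par")
timfile = pint.config.examplefile("NGC6440E.tim")

# a TimingModel and a TOAs object
model, toas = get_model_and_toas(parfile, timfile)

# residuals of the TOAs with respect to the model
r = Residuals(toas, model)
print(r.time_resids.to("us").std())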
Below you will find a tree of submodules. The online documentation should also provide a usable table of contents.
These docstrings contain reference documentation; for tutorials, explanations, or how-to documentation, please see other sections of the online documentation.
Functions
- Print the OS version, Python version, PINT version, versions of the dependencies, etc.
- Bayesian interface providing the pulsar timing likelihood, prior and posterior functions.
- Potential issues: orbital frequency derivatives; does EPS1DOT/EPS2DOT imply OMDOT and vice versa?
- Functions related to PINT configuration.
- Functions to compute various derived quantities from pulsar spin parameters, masses, etc.
- Observatory position and velocity calculation.
- Generic functions to load TOAs from events files, along with specific implementations for different missions.
- Various routines for calculating pulsation test statistics on event data and helper functions.
- External/third-party modules that are distributed with PINT.
- Work with Fermi TOAs.
- FITS handling functions.
- Objects for managing the procedure of fitting models to TOAs.
- Tools for building chi-squared grids.
- Custom logging filter for PINT using loguru.
- Markov Chain Monte Carlo fitting.
- Pulsar timing models and tools for working with them.
- Machinery to support PINT's list of observatories.
- Orbital models.
- The pint_matrix module defines the pint matrix base class and the design matrix.
- Polynomial coefficients for phase prediction.
- PulsarMJD special time format.
- Generate random models distributed like the results of a fit.
- Objects for comparing models to data.
- Functions related to simulating TOAs and models.
- Solar system ephemeris downloading and setting support.
- Tools for working with pulse time-of-arrival (TOA) data.
- Tool for selecting a subset of TOAs.
- Miscellaneous potentially-helpful functions.
PINT coding style
- Code should follow PEP8. This constrains whitespace, formatting, and naming.
- Code should follow the black style.
- Use the standard formats for imports (a short illustrative sketch follows this list):
  - The only abbreviated imports are import numpy as np, import astropy.units as u, and import astropy.constants as c; always use these modules in this form.
  - If you want to import a deeply nested module, use from pint.models import parameter.
  - If you are using a function frequently or the module name is long, use from pint.utils import interesting_lines.
  - Sort imports with isort.
  - Remove all unused imports.
  - Do not abbreviate any other imports.
- Modules should list all public functions, classes, and constants in __all__. You can use the order of __all__ to specify the order that things appear in the documentation.
- Every public function or class should have a docstring. It should be in the correct format (numpy guidelines).
- Do not abbreviate public names (for example, use “Residuals” not “resids”). If absolutely necessary, make certain that there is One True Abbreviation and that it is used everywhere.
- Raise an exception if the code cannot continue and produce correct results.
- Use the logger to signal conditions the user should know about but that do not prevent the code from producing correct results. Be conservative; normal operation should generate no warnings or else users will ignore them (think LaTeX warnings).
- Use keyword argument names when calling a function with more than two or three arguments.
- Use Quantity for things with physical units.
- If you need a Time object, use from pint.pulsar_mjd import Time; this ensures that you have the pulsar_mjd time format available.
- PINT supports Python 3.8 and later; there is no need to maintain compatibility with older Python versions, so use appropriate modern constructs.
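To make these conventions concrete, here is a short illustrative sketch (not code from PINT itself) of a small public function written in this style:

import astropy.units as u

__all__ = ["spin_period"]


def spin_period(frequency):
    """Convert a spin frequency to a spin period.

    Parameters
    ----------
    frequency : astropy.units.Quantity
        Spin frequency, in Hz or compatible units.

    Returns
    -------
    astropy.units.Quantity
        Spin period in seconds.
    """
    if frequency <= 0 * u.Hz:
        raise ValueError("frequency must be positive")
    return (1 / frequency).to(u.s)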
How-tos
This section is to provide detailed solutions to particular problems. Some of these are based on user questions, others are more about how to develop PINT. Please feel free to change this by writing more; there are also some entries on the PINT wiki which you can contribute to more directly.
How to Install PINT
There are two kinds of PINT installation you might be interested in. The first is a simple PINT installation for someone who just wants to use PINT. The second is an installation for someone who wants to be able to run the tests and develop PINT code. The latter naturally requires additional python packages and is somewhat more complicated (but not by much).
Prerequisites
PINT requires Python 3.8+ [1]
Your Python must have the package installation tool pip installed. Also make sure your setuptools
are up to date (e.g. pip install -U setuptools
).
We highly recommend using a Conda/Anaconda environment or the package isolation tool virtualenv.
IMPORTANT Notes!
Naming conflict
PINT has a naming conflict with the pint units package available from PyPI (i.e. using pip) and conda.
Do NOT pip install pint
or conda install pint
! See Basic Install via pip or Install with Anaconda.
Apple M1/M2 processors
PINT requires longdouble
(80- or 128-bit floating point) arithmetic within numpy
, which is currently not supported natively on M1/M2 Macs.
However, you can use an x86 version of conda
even on an M1/M2 Mac: see instructions for using Apple Intel packages on Apple silicon.
It’s possible to have parallel versions of conda for x86 and ARM.
Basic Install via pip
PINT is available on PyPI as the package pint-pulsar, so it is simple to install via pip. This will get you the latest released version of PINT.
For most users, who don’t want to develop the PINT code, installation should just be a matter of:
$ pip install pint-pulsar
By default this will install in your system site-packages. Depending on your system and preferences, you may want to append --user
to install it for just yourself (e.g. if you don’t have permission to write in the system site-packages), or you may want to create a
virtualenv to work on PINT (using a virtualenv is highly recommended by the PINT developers). In that case, you just activate your
virtualenv before running the pip
command above.
Install with Anaconda
If you use Anaconda environments to manage your python packages, PINT is also available for Anaconda python under the conda-forge channel:
$ conda install -c conda-forge pint-pulsar
Install from Source
If you want access to the latest development version of PINT, or want to be able to make any edits to the code, you can install from source by cloning the git repository.
If your python setup is “nice”, you should be able to install as easily as:
$ git clone https://github.com/nanograv/PINT.git
$ cd PINT
$ mkvirtualenv -p `which python3` pint
(pint) $ pip install -e .
(pint) $ python
>>> import pint
Note that you can use your own method to activate your virtualenv if you don’t have virtualenvwrapper installed.
This should install PINT along with any python packages it needs to run. (If
you want to run the test suite or work on PINT code, see below.)
Note that the -e
installs PINT in “editable” or “develop” mode. This means that the source code is what is actually being run,
rather than making a copy in a site-packages directory. Thus, if you edit any .py file, or do a git pull
to update the code
this will take effect immediately rather than having to run pip install
again. This is a choice, but is the way
most developers work.
Unfortunately there are a number of reasons the install can go wrong. Most have to do with not having a “nice” python environment. See the next section for some tips.
Potential Install Issues
Old setuptools (egg-info
error message)
PINT’s setup.cfg
is written in a declarative style that does not work with
older versions of setuptools
. The lack of a sufficiently recent version of
setuptools
is often signalled by the otherwise impenetrable error message
error: 'egg_base' must be a directory name (got src)
. You can upgrade with
pip
:
$ pip install -U pip setuptools
If this does not help, check your versions of installed things:
$ pip list
You should be able to upgrade to setuptools
version at least 0.41
. If
running pip
does not change the version that appears on this list, or if
your version changes but the problem persists, you may have a problem with your
python setup; read on.
Bad PYTHONPATH
The virtualenv mechanism uses environment variables to create an isolated
python environment into which you can install and upgrade packages without
affecting or being affected by anything in any other environment. Unfortunately
it is possible to defeat this by setting the PYTHONPATH
environment
variable. Double unfortunately, setting the PYTHONPATH
environment used to
be the Right Way to use python things that weren’t part of your operating
system. So many of us have PYTHONPATH
set in our shells. You can check this:
$ printenv PYTHONPATH
If you see any output, chances are that’s causing problems with your
virtualenvs. You probably need to go look in your .bashrc
and/or
.bash_profile
to see where that variable is being set and remove it. Yes,
it is very annoying that you have to do this.
Previous use of pip install --user
Similarly, it used to be recommended to install packages locally as your user
by running pip install --user thing
. Unfortunately this causes something of
the same problem as having a PYTHONPATH
set, where packages installed
outside your virtualenv can obscure the ones you have inside, producing bizarre
error messages. Record your current packages with pip freeze
, then try,
outside a virtualenv, doing pip list
with various options, and pip uninstall
; you shouldn’t be able to uninstall anything system-wide (do not
use sudo
!) and you shouldn’t be able to uninstall anything in an inactive
virtualenv. So once you’ve blown away all those packages, you should be able to
work in clean virtualenvs. If you saved the output of pip freeze
above, you
should be able to use it to create a virtualenv with all the same packages you
used to have in your user directory.
Bad conda
setup
Conda is a tool that attempts to create isolated environments, like a
combination of virtualenv and pip
. It should make installing scientific
software with lots of dependencies easy and reliable, and you should just be
able to set up an appropriate conda
environment and use the basic install
instructions above. But it may not work.
Specifically, for some reason the python 3 version of conda
does not
provide the gdbm
module, which astropy
needs to work on Linux. Good
luck.
Installing PINT for Developers
You will need to be able to carry out a basic install of PINT as above.
You very likely want to install in a virtualenv, using the develop mode (pip install -e .).
Then you will need to install the additional development dependencies:
$ pip install -Ur requirements_dev.txt
PINT development (building the documentation) requires pandoc, which isn’t a python package and therefore needs to be installed in some way appropriate for your operating system. On Linux you may be able to just run:
$ apt install pandoc
On a Mac using MacPorts this would be:
$ sudo port install pandoc
Otherwise, there are several ways to install pandoc
For further development instructions see How to Set Up Your Environment For PINT Development
Footnotes
How to do a number of things users have asked about
Quick solutions to common tasks (taken from #pint and elsewhere)
How to upgrade PINT
With pip
:
pip install -U pint-pulsar
With conda
:
conda update pint-pulsar
How to check out some user’s particular branch for testing:
If you wish to checkout branch testbranch
from user pintuser
:
git checkout -b pintuser-testbranch master
git pull https://github.com/pintuser/PINT.git testbranch
The first command makes a new local branch with name pintuser-testbranch
from the master
branch.
The second pulls the remote branch from the desired user’s fork into that local branch.
You may still need to install/reinstall that branch, depending on how you have things set up
(so pip install .
or pip install -e .
, where the latter keeps the files in place for faster development).
How to go to a specific version of PINT
With pip
:
pip install -U pint-pulsar==0.8.4
or similar.
With conda
:
conda install pint-pulsar=0.8.4
or similar.
Find data files for testing/tutorials
The data files (par and tim) associated with the tutorials and other examples
can be located via pint.config.examplefile()
(available via the
pint.config
module):
import pint.config
fullfilename = pint.config.examplefile(filename)
For example, the file NGC6440E.par
from the Time a Pulsar notebook can be found via:
import pint
fullfilename = pint.config.examplefile("NGC6440E.par")
Load a par file
To load a par file:
from pint.models import get_model
m = get_model(parfile)
Load a tim file
To load a tim file:
from pint.toa import get_TOAs
t = get_TOAs(timfile)
Note that a par file may contain information, like which solar system ephemeris to use, that affects how a tim file should be loaded:
t = get_TOAs(timfile, model=model)
Load a tim and par file together
To load both:
from pint.models import get_model_and_toas
m, t = get_model_and_toas(parfile, timfile)
Create TOAs from an array of times
A pint.toa.TOA
object represents a single TOA as an object that contains
both a time and a location, along with optional information like frequency, measurement error, etc.
So each TOA
object should only contain a single time, since otherwise the location information would be ambiguous.
If you wish to create TOAs from an astropy.time.Time
object containing multiple times,
you can do:
import numpy as np
from astropy import units as u, constants as c
from pint import pulsar_mjd
from astropy.time import Time
from pint import toa
t = Time(np.array([55000, 56000]), scale="utc", format="pulsar_mjd")
obs = "gbt"
toas = toa.get_TOAs_array(t, obs)
Note that we import pint.pulsar_mjd
to allow the
pulsar_mjd
format, designed to deal properly with leap seconds.
We use pint.toa.get_TOAs_array()
to make sure clock corrections are
applied when constructing the TOAs.
Other information like errors
, frequencies
, and flags
can be added.
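For example, a sketch along these lines (the errors and freqs keyword names are an assumption about the current pint.toa.get_TOAs_array() interface; check its docstring if they do not match your version):

import numpy as np
from astropy import units as u
from astropy.time import Time
from pint import pulsar_mjd  # registers the pulsar_mjd time format
from pint import toa

t = Time(np.array([55000, 56000]), scale="utc", format="pulsar_mjd")
# per-TOA uncertainties and observing frequencies (assumed keyword names)
toas = toa.get_TOAs_array(t, "gbt", errors=1 * u.us, freqs=1400 * u.MHz)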
You can also merge multiple data-sets with pint.toa.merge_TOAs()
Get the red noise basis functions and the corresponding coefficients out of a PINT fitter object
…?
Select TOAs
You can index by column name into the TOAs object, so you can do toas["observatory"]
or whatever the column is called; and that’s an array, so you can do toas["observatory"]=="arecibo"
to get a Boolean array; and you can index with boolean arrays, so you can do toas[toas["observatory"]=="arecibo"]
to get a new TOAs object referencing a subset.
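For instance, a minimal sketch (assuming a hypothetical tim file and that the observatory column really is called "observatory" in your version; adjust the column name if it is not):

from astropy import units as u
from pint.toa import get_TOAs

toas = get_TOAs("toas.tim")  # hypothetical tim file

# boolean mask of TOAs taken at Arecibo
is_arecibo = toas["observatory"] == "arecibo"

# a new TOAs object referencing just those TOAs
arecibo_toas = toas[is_arecibo]

# masks can be combined with the usual numpy boolean operators
late_arecibo = toas[is_arecibo & (toas.get_mjds() > 58000 * u.d)]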
Modify TOAs
The TOAs have a table with mjd
, mjd_float
, tdb
, and tdbld
columns. To modify them all safely and consistently the best way is to use:
t.adjust_TOAs(dt)
where dt
is an astropy.time.TimeDelta
object. This function does not
change the pulse numbers column, if present, but does recompute mjd_float
,
the TDB times, and the observatory positions and velocities.
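A minimal sketch, assuming a hypothetical tim file and shifting every TOA by one second:

import numpy as np
from astropy import units as u
from astropy.time import TimeDelta
from pint.toa import get_TOAs

t = get_TOAs("toas.tim")  # hypothetical tim file

# one TimeDelta per TOA; here a constant +1 s shift
dt = TimeDelta(np.ones(len(t)) * u.s)
t.adjust_TOAs(dt)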
Avoid the “KeyError: ‘obs_jupiter_pos’” error when trying to get residuals
You need to have the TOAs object compute the positions of the planets and add them to the table:
ts.compute_posvels(ephem,planets=True)
This should be done automatically if you load your TOAs with the
pint.toa.get_TOAs()
or
pint.models.model_builder.get_model_and_toas()
Convert from ELAT/ELONG <-> RA/DEC if I have a timing model
If model
is in ecliptic coordinates:
model.as_ICRS(epoch=epoch)
which will give it to you as a model with
pint.models.astrometry.AstrometryEquatorial
components at the
requested epoch. Similarly:
model.as_ECL(epoch=epoch)
does the same for pint.models.astrometry.AstrometryEcliptic
(with an
optional specification of the obliquity).
Convert between binary models
If m
is your initial model, say an ELL1 binary:
from pint import binaryconvert
m2 = binaryconvert.convert_binary(m, "DD")
will convert it to a DD binary.
Some binary types need additional parameters. For ELL1H, you can set the number of harmonics and whether to use H4 or STIGMA:
m2 = binaryconvert.convert_binary(m, "ELL1H", NHARMS=3, useSTIGMA=True)
For DDK, you can set OM (known as KOM
):
m2 = binaryconvert.convert_binary(mDD, "DDK", KOM=12 * u.deg)
Parameter values and uncertainties will be converted. It will also make a best-guess as to which parameters should be frozen, but it can still be useful to refit with the new model and check which parameters are fit.
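For example, a quick refit after conversion might look like this sketch (m and toas are assumed to be an already-loaded model and TOAs object; Fitter.auto is described under “Choose a fitter” below):

import pint.fitter
from pint import binaryconvert

# convert the ELL1 model m to DD, then refit and inspect the result
m2 = binaryconvert.convert_binary(m, "DD")
f = pint.fitter.Fitter.auto(toas, m2)
f.fit_toas()
print(f.model.free_params)
f.print_summary()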
Note
The T2 model from tempo2 is not implemented, as this is a complex model that actually encapsulates several models. The best practice is to change the model to the actual underlying model (ELL1, DD, BT, etc).
These conversions can also be done on the command line using convert_parfile
:
convert_parfile --binary=DD ell1.par -o dd.par
Add a jump programmatically
PINT
can handle jumps in the model outside a par
file. An example is:
import numpy as np
from astropy import units as u, constants as c
from pint.models import get_model, get_model_and_toas, parameter
from pint import fitter
from pint.models import PhaseJump
import pint.config
m, t = get_model_and_toas(pint.config.examplefile("NGC6440E.par"),
pint.config.examplefile("NGC6440E.tim"))
# fit the nominal model
f = fitter.WLSFitter(toas=t, model=m)
f.fit_toas()
# group TOAs: find clusters with gaps of <2h
clusters = t.get_clusters(add_column=True)
# put in the pulse numbers based on the previous fit
t.compute_pulse_numbers(f.model)
# just for a test, add an offset to a set of TOAs
t['delta_pulse_number'][clusters==3]+=3
# now fit without a jump
fnojump = fitter.WLSFitter(toas=t, model=m, track_mode="use_pulse_numbers")
fnojump.fit_toas()
# add the Jump Component to the model
m.add_component(PhaseJump(), validate=False)
# now add the actual jump
# it can be keyed on any parameter that maskParameter will accept
# here we will use a range of MJDs
par = parameter.maskParameter(
"JUMP",
key="mjd",
value=0.0,
key_value=[t[clusters==3].get_mjds().min().value,
t[clusters==3].get_mjds().max().value],
units=u.s,
frozen=False,
)
m.components['PhaseJump'].add_param(par, setup=True)
# you can also do it indirectly through the flags as:
# m.components["PhaseJump"].add_jump_and_flags(t.table["flags"][clusters == 3])
# and fit with a jump
fjump = fitter.WLSFitter(toas=t, model=m, track_mode="use_pulse_numbers")
fjump.fit_toas()
print(f"Original chi^2 = {f.resids.calc_chi2():.2f} for {f.resids.dof} DOF")
print(f"After adding 3 rotations to some TOAs, chi^2 = {fnojump.resids.calc_chi2():.2f} for {fnojump.resids.dof} DOF")
print(f"Then after adding a jump to those TOAs, chi^2 = {fjump.resids.calc_chi2():.2f} for {fjump.resids.dof} DOF")
print(f"Best-fit value of the jump is {fjump.model.JUMP1.quantity} +/- {fjump.model.JUMP1.uncertainty} ({(fjump.model.JUMP1.quantity*fjump.model.F0.quantity).decompose():.3f} +/- {(fjump.model.JUMP1.uncertainty*fjump.model.F0.quantity).decompose():.3f} rotations)")
which returns:
Original chi^2 = 59.57 for 56 DOF
After adding 3 rotations to some TOAs, chi^2 = 19136746.30 for 56 DOF
Then after adding a jump to those TOAs, chi^2 = 56.60 for 55 DOF
Best-fit value of the jump is -0.048772786677935796 s +/- 1.114921182802775e-05 s (-2.999 +/- 0.001 rotations)
showing that the offset we applied has been absorbed by the jump (plus a little extra, so chi^2 has actually improved).
See pint.models.parameter.maskParameter
documentation on the ways to select the TOAs.
Choose a fitter
Use pint.fitter.Fitter.auto()
:
f = pint.fitter.Fitter.auto(toas, model)
Include logging in a script
PINT now uses loguru for its logging. To get this working within a script, try:
import sys

import pint.logging
from loguru import logger as log

pint.logging.setup(sink=sys.stderr, level="WARNING", usecolors=True)
That sets up the logging and ensures it will play nicely with the rest of PINT.
You can customize the level, the destination (e.g., file, stderr
, …) and
format. The pint.logging.LogFilter
suppresses some INFO/DEBUG messages that can clog up your screen: you can make
a custom filter as well to add/remove messages.
If you want to include a standard way to control the level using command line arguments, you can do:
parser.add_argument(
"--log-level",
type=str,
choices=("TRACE", "DEBUG", "INFO", "WARNING", "ERROR"),
default=pint.logging.script_level,
help="Logging level",
dest="loglevel",
)
...
pint.logging.setup(level=args.loglevel, ...)
assuming you are using argparse
. Note that loguru
doesn’t let you
change existing loggers: you should just remove and add (which the
pint.logging.setup()
function does).
Make PINT stop reporting a particular warning
If PINT keeps emitting a warning you know is irrelevant from somewhere inside your code, you can disable that specific warning coming from that place. For example if you are reading a par file with T2CMETHOD
set but you know that’s fine, you can shut off the message about T2CMETHOD
while you’re loading the file:
import os
import warnings

from pint.models import get_model

# datadir is the directory containing your par file
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", message=r".*T2CMETHOD.*")
    model = get_model(os.path.join(datadir, "J1614-2230_NANOGrav_12yv3.wb.gls.par"))
How to: Common timing workflow
(This was originally written by Alex McEwen on the PINT wiki.)
When I’m working on new pulsar solutions, my workflow usually follows something like this. It uses some of the basic tools defined in the next section (How to: Simple python PINT tools). Please send suggestions/comments/questions to aemcewen@uwm.edu.
Load the pulsar model and TOAs via get_model() and get_TOAs():
model = get_model('parfile.par')
toas = get_TOAs('toas.tim', model=model)
Make a copy of the TOAs that I will edit (for easy resets):
newtoas = copy.deepcopy(toas)
Use plot_fit(newtoas, model) to identify bad TOAs or missing wraps and mask those data. In this example, I am zapping all TOAs before MJD 59000 and also on 59054.321, and I am adding a wrap on 59166:
newtoas = mask_toas(newtoas, before=59000, on=[59054.321])
newtoas.compute_pulse_numbers(model)
add_wraps(newtoas, 59166, '-')
plot_fit(newtoas, model)
Once I have the TOAs I want, I fit the data, look at the residuals, and see how the model changed:
f = WLSFitter(newtoas, model, track_mode='use_pulse_numbers')
f.model.free_params = ['F0']
f.fit_toas()
plot_fit(newtoas, f.model)
f.print_summary()
f.model.compare(model, verbosity='check')
When the model appears to be a good fit, I update the model and add new observations:
model = f.model
newtoas = mask_toas(toas, before=58000, on=[59054.321])
plot_fit(newtoas, model)
Iterate these last two steps until the fit breaks or I run out of data.
How to: Simple python PINT tools
(This was originally posted by Alex McEwen to the PINT wiki.)
Below are several tools I use for new pulsar timing, including cleaning TOAs, adding wraps, and fitting parameters. Please send suggestions/comments/questions to aemcewen@uwm.edu.
Load various packages as well as some little convenience functions:
import copy

import numpy as np
import matplotlib.pyplot as plt
from astropy import units as u, constants as c
from astropy.time import Time

import pint.residuals as res
from pint.models import BinaryELL1, BinaryDD, PhaseJump, parameter, get_model
from pint.toa import get_TOAs
from pint.fitter import WLSFitter
from pint.simulation import make_fake_toas_uniform as mft


def dot(l1, l2):
    # elementwise logical AND of two boolean sequences
    return np.array([v1 and v2 for v1, v2 in zip(l1, l2)])


def inv(l):
    # elementwise logical NOT of a boolean sequence
    return np.array([not i for i in l])
Zapping TOAs on given MJDs, before/after some MJD, or within a window of days:
def mask_toas(toas, before=None, after=None, on=None, window=None):
    # start with all TOAs selected and progressively deselect
    cnd = np.array([True for t in toas.get_mjds()])
    if before is not None:
        cnd = dot(cnd, toas.get_mjds().value > before)
    if after is not None:
        cnd = dot(cnd, toas.get_mjds().value < after)
    if on is not None:
        on = np.array(on)
        for i, m in enumerate(on):
            # an integer MJD zaps the whole day; a fractional MJD zaps only the nearest TOA
            whole_day = float(m).is_integer()
            m = m * u.day
            if whole_day:
                cnd = dot(cnd, inv(np.abs(toas.get_mjds() - m).astype(int) == np.abs(toas.get_mjds() - m).min().astype(int)))
            else:
                cnd = dot(cnd, inv(np.abs(toas.get_mjds() - m) == np.abs(toas.get_mjds() - m).min()))
    if window is not None:
        if len(window) != 2:
            raise ValueError("window must be a 2 element list/array")
        window = window * u.day
        lower = window[0]
        upper = window[1]
        # keep only TOAs outside the window
        cnd = dot(cnd, toas.get_mjds() < lower) + dot(cnd, toas.get_mjds() > upper)
    print(f'{sum(cnd)}/{len(cnd)} TOAs selected')
    return toas[cnd]
Add in integer phase wraps on a given MJD:
def add_wraps(toas,mjd,sign,nwrap=1):
cnd = toas.table['mjd'] > Time(mjd,scale='utc',format='mjd')
if sign == '-':
toas.table['pulse_number'][cnd] -= nwrap
elif sign == '+':
toas.table['pulse_number'][cnd] += nwrap
else:
raise TypeError('sign must be "+" or "-"')
Plot residuals in phase:
def plot_fit(toas,model,track_mode="use_pulse_numbers",title=None,xlim=None,ylim=None):
rs=res.Residuals(toas,model,track_mode=track_mode)
fig, ax = plt.subplots(figsize=(12,10))
if xlim is not None:
ax.set_xlim(xlim)
if ylim is not None:
ax.set_ylim(ylim)
ax.errorbar(rs.toas.get_mjds().value,rs.calc_phase_resids(), \
yerr=(rs.toas.get_errors()*model.F0.quantity).decompose().value,fmt='x')
ax.tick_params(labelsize=15)
if title is None:
ax.set_title('%s Residuals, %s toas' %(model.PSR.value,len(toas.get_mjds())),fontsize=18)
else:
ax.set_title(title,fontsize=18)
ax.set_xlabel('MJD',fontsize=15)
ax.set_ylabel(f'Residuals [phase, P0 = {((1/model.F0.quantity).to(u.ms)).value:2.0f} ms]',fontsize=15)
ax.grid()
return fig, ax
Model phase uncertainty over a range of MJDs:
def calculate_phase_uncertainties(model, MJDmin, MJDmax, Nmodels=100, params = 'all', error=1*u.us):
mjds = np.arange(MJDmin,MJDmax)
Nmjd = len(mjds)
phases_i = np.zeros((Nmodels,Nmjd))
phases_f = np.zeros((Nmodels, Nmjd))
tnew = mft(MJDmin,MJDmax,Nmjd,model=model, error=error)
pnew = {}
if params == 'all':
params = model.free_params
for p in params:
pnew[p] = getattr(model,p).quantity + np.random.normal(size=Nmodels) * getattr(model,p).uncertainty
for imodel in range(Nmodels):
m2 = copy.deepcopy(model)
for p in params:
getattr(m2,p).quantity=pnew[p][imodel]
phase = m2.phase(tnew, abs_phase=True)
phases_i[imodel] = phase.int
phases_f[imodel] = phase.frac
phases = phases_i+ phases_f
phases0 = model.phase(tnew, abs_phase = True)
dphase = phases - (phases0.int + phases0.frac)
return tnew, dphase
Plot the phase uncertainty from calculate_phase_uncertainties():
def plot_phase_unc(model,start,end,params='all'):
if params == 'all':
print("calculating phase uncertainty due to all parameters")
plab = 'All params'
t, dp = calculate_phase_uncertainties(model, start, end)
else:
if type(params) is list:
print("calculating phase uncertainty due to params "+str(params))
plab = str(params)
t, dp = calculate_phase_uncertainties(model, start, end, params = params)
else:
raise TypeError('"params" should be either list or "all"')
plt.gcf().set_size_inches(12,10)
plt.plot(t.get_mjds(),dp.std(axis=0),'.',label=plab)
dt = t.get_mjds() - model.PEPOCH.value*u.d
plt.plot(t.get_mjds(), np.sqrt((model.F0.uncertainty * dt)**2 + (0.5*model.F1.uncertainty*dt**2)**2).decompose(),label='Analytic')
plt.xlabel('MJD')
plt.ylabel('Phase Uncertainty (cycles)')
plt.legend()
Less common tools
Plot frequency against residuals:
rs=res.Residuals(newtoas,f.model)
fig,ax = plt.subplots(figsize=(12,10))
ax.tick_params(labelsize=15)
ax.set_ylabel('Frequency [MHz]',fontsize=18)
ax.set_xlabel('Phase residuals',fontsize=18)
y = newtoas.get_freqs().to('MHz').value
x = rs.calc_phase_resids()
ax.errorbar(x,y,xerr=newtoas.get_errors().to('s').value*f.model.F0.value,elinewidth=2,lw=0,marker='+')
Plot residuals in orbital phase:
x = f.model.orbital_phase(newtoas.get_mjds()).value
rs=res.Residuals(newtoas,f.model)
y = rs.calc_phase_resids()
fig, ax = plt.subplots(figsize=(12,10))
ax.tick_params(labelsize=15)
ax.set_xlabel('Orbital Phase',fontsize=18)
ax.set_ylabel('Phase Residuals',fontsize=18)
ax.grid()
for mjd in np.unique(newtoas.get_mjds().astype(int)):
cnd = dot(newtoas.get_mjds().astype(int) == mjd,newtoas.get_errors().astype(int) <= 125*u.us)
ax.errorbar(x[cnd],y[cnd],yerr=(newtoas.get_errors().to('s')*f.model.F0.quantity).value[cnd],elinewidth=2,lw=0,marker='+',label=mjd.value)
ax.legend(fontsize=15)
Removing/adding binary components:
# remove an existing binary component if it is already present
if 'BinaryELL1' in model.components:
    model.remove_component('BinaryELL1')

# add an ELL1 binary component...
cmp = BinaryELL1()
cmp.PB.value = 10
cmp.EPS1.value = 1e-5
cmp.EPS2.value = 1e-5
cmp.TASC.value = 59200
cmp.A1.value = 10
model.add_component(cmp, setup=True)

# ...or, alternatively, add a DD binary component instead
cmp = BinaryDD()
cmp.PB.value = 10
cmp.ECC.value = 0.8
cmp.T0.value = 59251.
cmp.OM.value = 269.
cmp.A1.value = 136.9
model.add_component(cmp, setup=True)
Add spin-down component of a given order:
n = 2
model.components['Spindown'].add_param(
parameter.prefixParameter(
name='F'+str(n),
value=0.0,
units=u.Hz/u.s**n,
frozen=False,
parameter_type="float",
longdouble=True
),
setup=True
)
model.components['Spindown'].validate()
How to Contribute to PINT
Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.
You can contribute in many ways:
Types of Contributions
Report Bugs
Report bugs at https://github.com/nanograv/pint/issues.
If you are reporting a bug, please include:
- The output of pint.print_info(). This command provides the version information of the OS, Python, PINT, and the various dependencies, along with other information about your system.
- Any details about your local setup that might be helpful in troubleshooting, such as the command used to install PINT and whether you are using a virtualenv, conda environment, etc.
- Detailed steps to reproduce the bug, as simply as possible. A self-contained code snippet that triggers the issue will be most helpful.
Submit Feedback
The best way to send feedback is to file an issue at https://github.com/nanograv/pint/issues.
If you are proposing a feature:
Explain in detail how it would work.
Keep the scope as narrow as possible, to make it easier to implement.
Remember that this is a volunteer-driven project, and that contributions are welcome :)
Fix Bugs
Look through the GitHub issues for bugs. Anything tagged with good first issue, help wanted, or bug is open to whoever wants to implement it. If you want to fix a bug or add any other code, please use GitHub and suggest your changes in the form of a Pull Request (see below); this makes it easy for everyone to examine your changes, discuss them with you, and update them as needed.
Implement Features
Look through the GitHub issues for features. Anything tagged with “enhancement” and “help wanted” is open to whoever wants to implement it. If your idea is for a new feature or an important change, you may want to open an issue where the idea can be discussed before you write too much code.
Write Documentation
PINT could always use more documentation, whether as part of the official pint docs, in docstrings, or even on the web in blog posts, articles, and such.
Writing documentation is a great way to get started: everyone wants there to be documentation, but no one wants to stop writing code long enough to write it, so we are all very grateful when you do. And as a result of figuring out enough to write good documentation, you come to understand the code very well.
Get Started!
Ready to contribute? Here’s how to set up PINT for local development.
Fork the PINT repo on GitHub.
Clone your fork locally:
$ git clone git@github.com:your_name_here/pint.git
Install your local copy into a conda environment. Assuming you have conda installed, this is how you set up your fork for local development:
$ conda create -n pint-devel python=3.10
$ conda activate pint-devel
$ cd PINT/
$ conda install -c conda-forge --file requirements_dev.txt
$ conda install -c conda-forge --file requirements.txt
$ pip install -e .
$ pre-commit install
The last command installs pre-commit hooks which will squawk at you while trying to commit changes that don’t adhere to our Coding Style.
Alternatively, this can also be done using virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development:
$ mkvirtualenv pint-devel
$ cd PINT/
$ pip install -r requirements_dev.txt
$ pip install -e .
$ pre-commit install
Create a branch for local development:
$ git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally.
When you’re done making changes, check that your changes pass the tests. Also check that any new docs are formatted correctly:
$ make test
$ tox
$ make docs
Commit your changes and push your branch to GitHub:
$ git add .
$ git commit -m "Detailed description of your changes."
$ git push origin name-of-your-bugfix-or-feature
Submit a pull request through the GitHub website.
Check that our automatic testing in “GitHub Actions” passes for your code. If problems crop up, fix them, commit the changes, and push a new version, which will automatically update the pull request:
$ git add pint/file-i-just-fixed.py
$ git commit -m "Fixed bug where..."
$ git push
The maintainers will review and comment on the PR. They may ask why you made certain design decisions or ask you to make some stylistic or functional changes. If accepted, it will be merged into the master branch.
Pull Request Guidelines
Before you submit a pull request, check that it meets these guidelines:
- Try to write clear Pythonic code, follow our Coding Style, and think about how others might use your new code.
- The pull request should include tests that cover both the expected behavior and sensible error reporting when given bad input.
- If the pull request adds or changes functionality, the docs should be updated. Put your new functionality into a function with a docstring. Check the HTML documentation produced by make docs to make sure your new documentation appears and looks reasonable. If the new functionality needs a more detailed explanation than can be put in a docstring, add it to docs/explanation.rst. Make sure that the docstring contains a brief description as well.
- The pull request should work for Python 3.8 and later. Make sure that all the CI tests for the pull request pass.
- Update CHANGELOG-unreleased.md with an appropriate entry. Please note that CHANGELOG.md should not be updated for pull requests.
How to Set Up Your Environment For PINT Development
See also How to Contribute to PINT
Working on PINT code requires a few more tools than simply running PINT, and there are a few settings that can make it much easier. Some of what follows will depend on your editor, but most of it is possible in any modern programmer’s editor, including Sublime Text, Atom, Visual Studio Code, PyCharm, vim/neovim, and emacs. (Okay those last two are only arguably modern but they are extensible enough that they can be made to do most of the things described here.) Some of these tools may also be available in more basic editing environments like the Jupyter notebook, the JupyterLab text editor, and Spyder.
What you should expect your editor to do
It may take some configuration, but once set up any modern editor should be able to do the following:
- Highlight python syntax.
- Flag syntax or style errors (line length, unused/undefined variables, dubious exception handling) visually as you edit.
- Offer completions for any identifier (keyword, function, variable, et cetera) you start typing.
- Reformat text into the black code style with a keypress.
- Sort your imports into the standard arrangement with a keypress.
- Jump to the definition of a function, class, or method with a keypress.
- Obey .editorconfig settings.
A little Googling should reveal how to get all this working in your favorite editor, but if you have some helpful links for a particular editor feel free to add them to the documentation right here.
Command-line tools and automation
PINT is developed with git
and on GitHub. Some operations are presented
graphically in the web interface, but in many cases you will want to do
something direct on your local machine. Having the right tools available and
configured should make this easy.
In your development virtualenv, install the development requirements:
pip install -Ur requirements_dev.txt
Set up a few tools to make the git repository behave better. pre-commit
runs various things, like checking the text formatting and making sure you didn’t
accidentally include some huge binary file, and so on, when you go to commit
something to git:
pre-commit install
Configure git so git blame
ignores large-scale reformatting commits in
favour of changes to the actual contents:
git config blame.ignoreRevsFile .git-blame-ignore-revs
How To Build and Test From the Command Line
To run the whole test suite (on all the cores of your machine):
pytest -n auto
To run tests on just one file:
pytest tests/test_my_new_thing.py
To run just one test:
pytest tests/test_my_new_thing.py::test_specific
To test everything but start with tests that failed last time, stopping when something goes wrong (this is great when you’re trying to fix that one bug; if you haven’t you’ll get new error messages, if you have, it’ll continue on to run all the tests):
pytest --ff -x
To drop into the python debugger at the point where a test fails so you can investigate, for example go up and down the call history and inspect local variables:
pytest --pdb -x
The python debugger also allows you to step through your code, put in breakpoints, and many other things. It can save a ton of time compared to putting print statements in and rerunning your code, especially if the code takes a while and you don’t know exactly what you want to inspect.
To run the whole test suite in fresh installs on several python versions, and also rebuild the notebooks and documentation as well as compute combined code coverage for all the versions:
tox
To run tests on multiple python versions and build the documentation in parallel:
tox --parallel=auto
If this finds a problem in just one python environment that doesn’t appear in your development environment, you can run just the problem environment:
tox -e py27
You can also run other things in the environments tox
uses, including
interactive python sessions (though these will include only PINT’s installation
requirements, so no IPython):
tox -e py27 -- pytest --ff --pdb -x
tox -e py27 -- pytest tests/test_my_new_thing.py
tox -e py27 -- python
To automatically run black on all of PINT’s code:
black src/ tests/
Under examples/
there are a few Jupyter notebooks. These actually get
incorporated into the online documentation (you may have seen them). To avoid
headaches, we don’t store these as notebooks on github but as special markdown
files. If you are using jupyter
or jupyter-lab
, it should be smart
enough to synchronize these between the storage format and normal notebooks,
but if there is any confusion, try make notebooks
, which synchronizes the
two formats and runs all the notebooks to fill in the outputs. If something
goes wrong, try jupytext --sync
, which synchronizes the code between the
notebooks and the storage format but doesn’t run the notebooks.
Coping with git
To import any changes that have been made to the PINT distribution:
git fetch --all
git checkout master
git merge upstream/master
git push
To switch between branches:
git checkout a-branch
git checkout another-branch
git checkout master
These are very fast but they do change all the source code files to reflect what they look like in the branch you’re switching to. If you have them open in editor windows your editor may give you surprised messages as the files change under it.
To start a new branch for a thing:
git checkout master
git checkout -b a-thing
To send your changes to the current branch to your fork of the PINT repository:
git push
If this is the first time you’ve done this with a new branch git
will
refuse because it doesn’t exist in your fork on GitHub. It will print out a
command to create the branch on your GitHub. Just paste that. It will look
like:
git push --set-upstream origin a-thing
If you now go to GitHub and poke around a bit, say on the Issues or Pull Requests page, GitHub will have a button that says essentially “you just pushed a new branch, do you want to make it into a pull request?” If your branch was meant to go into PINT, this is what you want to do, so click that button. GitHub will allow you to enter a more detailed description and then create a Pull Request that can be seen on the main PINT pages. People can then comment on the pull request (“PR”) in general or specific lines of code you have changed in particular.
If you are working on a pull request and the main PINT development has changed in a way that conflicts with it (GitHub will tell you on the pull request page), you want to rebase your pull request. There are more details you can look up, but in short, update master as above, then:
git checkout a-thing
git rebase master
This will attempt to take your branch, a-thing
, look at how it differs from
where you created it from, and then apply those same changes to the new
master
. This will sometimes run into trouble, which you have to resolve
before you can continue normal work. Once you have finished the rebase, you
will need to push it to your GitHub. This is a little more complicated than
usual because you are changing not just the current state of the code but the
history that led to the current state of the code in your branch. This may
mess up comments that people have attached to particular lines of your pull
request, so pick a quiet moment to do this. You will need to tell git
that
yes, you really mean to change the public history:
git push -f
If you are digging through the source code and see something strange in a file,
and if you think “who thought that was a good idea?”, you can ask git
who
last modified each line in a file, and when:
git blame src/pint/utils.py
To track and checkout another user’s branch (pull request):
git remote add other-user-username https://github.com/other-user-username/pint.git
git fetch other-user-username
git checkout --track -b branch-name other-user-username/branch-name
If you make a mistake and get git
into a strange or awkward state. Don’t
panic, and try Googling the specific error message. git
is quite thorough
about keeping history around, so you can probably undo whatever has happened,
especially if you have been pushing your changes to GitHub. If it helps, there
is Dang it, git! (there is a ruder version which may feel more appropriate
in the moment), or the git choose-your-own-adventure (which is extremely
useful as well as amusing).
Tagging and Releasing versions
This portion is only for developers with permission to modify the master NANOGrav repository!
Tagging
The current version string is available as pint.__version__
PINT uses MAJOR.MINOR.PATCH versioning inspired by, but not strictly following, Semantic Versioning.
PINT uses versioneer.py to make sure that pint.__version__
is available in the code for version checking.
This constructs the version string from git using tags and commit hashes.
To create a new tagged version of PINT (assuming you are going from 0.5.0 to 0.5.1):
You can see what tags already exist like this:
git tag --list
First make sure you are on the PINT master branch in the nanograv/PINT
repository and your working copy is clean (git status
), then:
git push origin
Now wait 15 minutes and check that the continuous integration (GitHub Actions) says that the build is OK, before tagging! If needed, push any bug fixes.
Next, check the unreleased CHANGELOG (CHANGELOG-unreleased.md) and make sure all the significant changes from PRs since the last release have been documented. Move these entries to the released CHANGELOG (CHANGELOG.md), and change title of the newly moved entries from “Unreleased” to the version number you are about to tag and commit. But don’t yet push.
When tagging, always use “annotated tags” by specifying -a
, so do these commands to tag and push:
git tag -a 0.5.1 -m "PINT version 0.5.1"
git push origin --tags
Releasing
To release, you need to have your PyPI API token in ~/.pypirc
.
You must be on a clean, tagged, version of the nanograv/master branch. Then you can just:
make release
This will build the distribution source and wheel packages and use twine
to upload to PyPI.
Doing this will also trigger conda-forge to create a new PR for this release. Once this passes tests, it will need to be merged.
As a last step, go to the Releases tab on github and Draft a new release.
How to Edit PINT’s Documentation
PINT’s documentation is in the Sphinx format, which is based on reStructuredText. This is a plain-text-based documentation format, which means that the documentation is largely editable as plain text with various arcane bits of punctuation strewn throughout to keep you on your toes. This documentation can be converted automatically into HTML, LaTeX, and PDF, but in practice by far the most useful form is that produced by and served up on the readthedocs servers. You can find that rendered version here.
PINT’s documentation is created from three different inputs:
- reStructuredText files (.rst) under docs/ (but not docs/api/; see below),
- docstrings in the code; these follow the numpy docstring guidelines and thus are not pure reStructuredText, and
- Jupyter notebooks under docs/examples; see How to Work With Example Notebooks.
It should be fairly clear where to look for any given piece of documentation’s
code, but if there is any question, start from docs/index.rst
. Everything
is in a tree depending from here. If you add a function or a class to
a file, make sure that the new function/class is included in the
__all__
list, so that it will be included in the documentation.
To build a local copy of the documentation, run:
$ make docs
or:
$ tox -e docs
$ firefox docs/_build/index.html
These both use Sphinx to construct the documentation, check it for formatting
and internal consistency, and open it in a browser window. If you have a
browser window already open, say on a page you are working on, I recommend
using tox
and skipping the new window and just hitting reload. You can
also run a faster build that regenerates only what has changed (though this can
become confused, and may not detect errors as vigorously):
$ make -C docs/ html
If something goes wrong, the error messages should be fairly clear, but it may not be obvious what the right way is to do what you are trying to do. In an ideal world, the documentation tools themselves would have good documentation, and it would be easy to look up the right way to do things. A few pointers:
In a .rst file:
- Web links: short_ and `long text`_, then at the bottom of the section add the lines .. _short: http://... and .. _`long text`: http://...
- Section references: :ref:`label`; before the section header add a line .. _`label`:
- To refer to a class, module, or function in text, use :class:`~astropy.module.ClassName`, :mod:`~astropy.module`, or :func:`~astropy.module.function`.
- To get typewriter font for text, use double back-ticks. Single back-ticks are for Sphinx special objects like links and class names and things.
- To get emphasis, use asterisks.
- To get lists, start with a blank line, start each list item with an indented hyphen, indent successive lines further, and end with a blank line.
- To get definition lists (lists where each item starts with a highlighted word or phrase), start with a blank line, then the word or phrase, then the definition indented, then the next word or phrase at indentation level zero, then its definition indented… end with a blank line.
- To get a code block, end the preceding line with a double colon ::, leave a blank line, indent every line of the code, then leave a blank line at the end.
In a docstring:
- In most text, you can write things the way you do in normal documentation.
- When specifying an argument type, just type the fully qualified name (for example astropy.units.Quantity) and napoleon will fill in the link.
- Use the standard sections Parameters, Returns, and Note as appropriate.
Of course you can also just look at existing documentation to see how it is done, but please check that the documentation you are using as a reference renders correctly!
You may also want to run, from time to time:
$ make linkcheck
This will try to make sure that all the external links in the documentation still work, though of course it can’t verify that the page that comes up is still the intended one and not something else.
How to Work With Example Notebooks
Converting Notebooks to Plain Python
PINT’s documentation includes a certain number of Jupyter notebooks. When the
online documentation is built these are executed and the results are included
in the documentation. This is a nice way to set up python tutorials, but there
are a few wrinkles in the way these are integrated into version control. In
particular, storing a Jupyter notebook in git
causes headaches. So we store
a sort of “distilled” python version.
If you create a new notebook, tell jupytext that you want to keep a plain python copy:
$ jupytext --set-formats ipynb,py:percent docs/examples/my_notebook.ipynb
This will generate a .py version that also contains the information from non-Python cells as comments. The format is understandable to Spyder as well, which can recognize and execute code cells.
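To give an idea of what this format looks like, a tiny notebook might be distilled to something like the following (the contents are purely illustrative):
# %% [markdown]
# # A title from a Markdown cell
# Text from Markdown cells is kept as comments like these.

# %%
# A code cell
x = 2 + 2

# %%
# Another code cell
print(x)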
Where to Put the Data
Put any data files in src/pint/data/examples, and include a note about the data (where you got it from) in src/pint/data/examples/README.md. This will ensure that the data get put in the proper place on installing pint. To refer to the files, use pint.config.examplefile():
import pint.config
fullfilename = pint.config.examplefile(filename)
Compiling and Synchronizing the Notebooks
If you check something out of git, or switch branches, or want to make sure you have current versions of all the notebooks, run:
$ make notebooks
or:
$ tox -e notebooks
This will both synchronize the working Jupyter notebooks with the python versions and execute them, so if there is an error in a notebook it may stop part-way through. If this happens, try the simpler:
$ jupytext --sync docs/examples/*.py
This will synchronize the notebook contents without trying to execute them.
Using the Notebooks
Whichever of those you ran, you can now use Jupyter Lab to work with the notebooks as normal. You may see a strange message about rebuilding and jupytext; just hit okay. The jupytext code should ensure that as you edit the notebook, the plain python version is kept in sync (it contains only the inputs, not the outputs).
Checking it Back Into GitHub
When you are ready to check things in to git, just run:
$ make notebooks
or:
$ tox -e notebooks
This will synchronize and execute all the notebooks; if an error occurs, you have a problem in your notebooks and you probably shouldn’t check them in to git. If everything is fine, and especially if you have added a new notebook, ensure that it gets checked in to git with:
$ git add docs/examples/my_notebook.py
That is, check the python versions in to git, not the .ipynb versions.
Adding it to the Documentation
Now add the new notebook to the documentation somewhere — after all, that’s why you wrote it, right? Do this by putting it in a “toctree”, that is, add its name (without the .py extension) to a list of sub-documents like this one from our Tutorials section:
.. toctree::

   basic-installation
   examples/PINT_walkthrough
   examples/Example of parameter usage
   examples/TimingModel_composition
   examples/Timing_model_update_example
Finally, since you changed the documentation, rebuild it:
$ make docs
or:
$ tox -e docs
$ firefox docs/_build/index.html
This will rebuild the documentation (and, in the case of make, open it in a browser window); if you have any bad formatting or links that point to nowhere, it will stop with an error.
How to Control PINT Logging Output
If you have run PINT, you have probably noticed that it can emit a generous amount of information in the form of log messages. These come in two forms: warnings emitted through the python warnings module, and messages sent through the python logging mechanism (some of which are also warnings). The amount of information can result in the messages of interest being lost among routine messages, or you may wish for more detail (for example when debugging PINT code). There are tools for managing this information flow: although PINT is a library and thus simply emits messages, a user with a notebook, script, or GUI application can use these tools to manage this output.
Controlling log messages
Python’s logging module is somewhat complicated and confusing. In PINT’s case we use the loguru package to reconfigure it and make it easier to use, with some additional code in pint.logging to adapt it to our purposes (things like changing the format, adding colors, capturing warnings, and preventing duplicate messages from overwhelming users). It is worth explaining a design principle: libraries simply emit messages, while applications, notebooks, and scripts configure what to do with those messages. You can (re)configure the logging output with:
import pint.logging
pint.logging.setup(level="DEBUG")
You can optionally pass other options to the setup() function, such as a destination, level, formats, custom filters, colors, etc. See the documentation for pint.logging.setup().
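Since PINT emits its messages through loguru, a script or notebook can also use loguru’s standard logger directly once setup() has been called; a minimal sketch:
import pint.logging
from loguru import logger

# Show messages at DEBUG level and above
pint.logging.setup(level="DEBUG")

logger.debug("now visible")
logger.info("an informational message")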
level can be any of the existing loguru levels: TRACE, DEBUG, INFO, WARNING, ERROR, or you can define new ones.
The format can be something new, or you can use pint.logging.format. A full format that might be useful as a reference is:
format = "<green>{time:YYYY-MM-DD HH:mm:ss.SSS}</green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>"
while the default for pint.logging is:
format = "<level>{level: <8}</level> ({name: <30}): <level>{message}</level>"
If you want to use a command-line argument in a script to set the level, you can do it like this:
parser.add_argument(
    "--log-level", type=str, choices=("TRACE", "DEBUG", "INFO", "WARNING", "ERROR"),
    default=pint.logging.script_level, help="Logging level", dest="loglevel",
)
args = parser.parse_args(argv)
pint.logging.setup(level=args.loglevel)
Note that loguru does not allow you to change the properties of an existing logger; instead it’s better to remove it and make another (e.g., if you want to change the level). setup() does this by default, but if instead you want to add another logger (say, one writing to a file) you can run setup() with removeprior=False.
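For example, the following sketch keeps the existing console logger and adds a second one writing to a file; the sink argument name is an assumption here, so check the pint.logging.setup() documentation for the exact parameter:
import pint.logging

# "sink" (assumed parameter name) is the destination; removeprior=False keeps the
# already-configured logger instead of replacing it.
pint.logging.setup(level="DEBUG", sink="pint-debug.log", removeprior=False)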
Defaults can be changed with environment variables like $LOGURU_LEVEL, $LOGURU_FORMAT, and $LOGURU_DEBUG_COLOR. See the loguru documentation for the full set of options.
Warnings versus logging
The logging HOWTO describes the difference between warnings.warn and logging.warning thus:

- warnings.warn() in library code if the issue is avoidable and the client application should be modified to eliminate the warning
- logging.warning() if there is nothing the client application can do about the situation, but the event should still be noted
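As an illustration of the distinction, a hypothetical library function (not actual PINT code) might use both, with log messages going through loguru as PINT’s do:
import warnings
from loguru import logger

def read_data(path, units=None):
    """Hypothetical function illustrating the two kinds of warning."""
    if units is None:
        # The caller could avoid this by passing units explicitly -> warnings.warn()
        warnings.warn("no units given; assuming seconds")
        units = "s"
    if path.endswith(".tmp"):
        # Nothing the caller can do about this, but it is worth noting -> a log warning
        logger.warning(f"reading from a temporary file: {path}")
    return path, units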
Although PINT does not follow these rules perfectly, it does emit both kinds of warning, and users may quite reasonably want to handle them in various ways. By default setup() will capture warnings and emit them through the logging module, but this can be turned off by setting capturewarnings=False.
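For instance, to keep log messages but let Python’s own warnings machinery handle warnings itself, something like this should work:
import pint.logging

# Warnings are no longer routed through the log; they surface as ordinary warnings.
pint.logging.setup(level="INFO", capturewarnings=False)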
Users can control the handling of warnings with “warning filters”. In the simplest arrangements, warnings.simplefilter("ignore") or similar arranges for all warnings to be ignored or treated as exceptions; the more sophisticated warnings.filterwarnings() controls warnings based on their module of origin and/or the class supplied to the warnings.warn() call. Of particular note is the confusingly named warnings.catch_warnings(), which is a context manager that supports temporary changes in how warnings are handled:
import warnings

with warnings.catch_warnings():
    # Ignore all warnings only while fitting
    warnings.simplefilter("ignore")
    fitter.fit_toas()
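The more targeted warnings.filterwarnings() mentioned above might be used like this (the warning class and module name are purely illustrative):
import warnings

# Suppress DeprecationWarnings issued from modules whose names start with
# "pint.models", while leaving all other warnings alone.
warnings.filterwarnings("ignore", category=DeprecationWarning, module="pint.models")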
For further details on the management of warnings, see the documentation of the warnings module.
History
TEMPO was originally written in the 1980s, aiming for microsecond-level accuracy. TEMPO2 was written more recently, with attention to nanosecond-level effects. Both are still in current use. But TEMPO is in FORTRAN, TEMPO2 is in C++, and neither is easy to extend for use in tasks different from plain pulsar timing. Most of TEMPO2 is also a direct conversion of TEMPO, so many bugs from TEMPO were carried over to TEMPO2. PINT was created to be, as far as possible, an independent re-implementation based on existing libraries, notably astropy, and to be a flexible toolkit for working with pulsar timing models and data.
Credits
Development of PINT is partially supported by the NANOGrav pulsar timing array project.
If you use PINT for your work, please cite the PINT paper and the ASCL entry.
Contributors
Active developers are indicated by (*). Authors of the PINT paper are indicated by (#).
Gabriella Agazie (*)
Akash Anumarlapudi (*)
Anne Archibald (#*)
Matteo Bachetti (#)
Bastian Beischer
Deven Bhakta (*)
Chloe Champagne (#)
Jonathan Colen (#)
Thankful Cromartie
Christoph Deil
Paul Demorest (#)
Julia Deneva
Justin Ellis
William Fiore (*)
Fabian Jankowski
Rick Jenet (#)
Ross Jennings (#*)
Luo Jing (#)
David Kaplan (*)
Matthew Kerr (#*)
Michael Lam (#)
Bjorn Larsen (*)
Sasha Levina
Nikhil Mahajan
Alex McEwen
Patrick O’Neill (*)
Tim Pennucci
Camryn Phillips (#)
Matt Pitkin
Scott Ransom (#*)
Paul Ray (#*)
Brent Shapiro-Albert
Chris Sheehy
Renee Spiewak
Kevin Stovall (#)
Abhimanyu Susobhanan (*)
Joe Swiggum (*)
Jackson Taylor
Michele Vallisneri
Rutger van Haasteren (#)
Marten van Kerkwijk
Josef Zimmerman (#)
Packages using PINT
This page provides a list of some of the software packages that use PINT.