Potential "Level 2 diagnostics"
5 Dec 2005
Crab, Geminga, and Vela pulsars
The primary "Level 2 diagnostic" probably should be spectra and light curves
of the bright pulsars, Crab, Geminga, and Vela. These diagnostics would be
quite comprehensive of the quality of the Level 2 data.
- Measuring the spectra lets us verify that the effective areas have not
changed, because the pulsars are steady (averaged over phase) sources.
Testing for pulsations checks that the LAT is where we think it is and that
the times are being assigned correctly.
The sticking point is probably that this kind of quick-look analysis will be
more interesting on time scales longer than the interval between downlinks.
Still, periodicity tests are worth making for every downlink, and they should be
quite fast (no exposure calculations, for example).
- Although we will very rarely (maybe never) have a downlink of science
data that doesn't include at least one of these pulsars, for a scanning
observation the number of gamma rays from each of the pulsars will range
from 0 (not in FOV) to about 100 (for the brightest pulsar, Vela, scanned
through the central part of the FOV).
- With the typical number of gamma rays being a few tens, we should still
be able to usefully apply periodicity tests to the data for every downlink.
This will be a very high level confirmation that the event times are right,
that GLAST is where we think it is in the sky, and that it is pointing where
we think it is.
- Of course, to do this we'll be extrapolating the radio (or, for radio-quiet
Geminga, X-ray) ephemerides of the pulsars, so this will also serve as a
sensitive glitch test. That is, if we see a sudden loss of periodicity, a
hardware or telemetry problem will not necessarily be to blame.
Possible procedure:
- Once the L1 pipeline has finished processing a downlink, we need to
check how many gamma rays we detected in small (say 5 or 10 degree) regions
around the positions of the bright pulsars (a selection sketch is given
after this list).
- If more than some nominal (TBD) number are present, run the equivalent
of gtbary, gtpphase, and gtpsearch on them to verify periodicity (see the
H-test sketch after this list). We may be able to construct very sensitive
tests because the light curves are known (although energy dependent). In
principle the output could be light curves, but in general the light curves
won't look like much. The useful output would be pulsation significances -
so at most one number for each of the three pulsars.
- Some database needs to be updated with the results - like downlink
number, time range, numbers of gamma rays for each pulsar, and results of
periodicity tests (a bookkeeping sketch follows this list).
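A minimal sketch of the region selection, in Python (used for all the
sketches below), assuming the event coordinates (RA, Dec, in degrees) have
been read from the L1 event file; the pulsar positions are approximate J2000
values and the 5-degree radius is just the nominal choice above:

    import numpy as np

    # Approximate J2000 positions (degrees) of the three bright pulsars.
    PULSARS = {
        "Crab":    (83.633, 22.015),
        "Geminga": (98.476, 17.770),
        "Vela":    (128.836, -45.176),
    }

    def angular_separation(ra1, dec1, ra2, dec2):
        """Great-circle separation in degrees (haversine form, stable at small angles)."""
        ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
        s = (np.sin((dec2 - dec1) / 2.0) ** 2
             + np.cos(dec1) * np.cos(dec2) * np.sin((ra2 - ra1) / 2.0) ** 2)
        return np.degrees(2.0 * np.arcsin(np.sqrt(s)))

    def counts_near_pulsars(ra, dec, radius=5.0):
        """Count events within `radius` degrees of each pulsar."""
        return {name: int(np.sum(angular_separation(ra, dec, pra, pdec) < radius))
                for name, (pra, pdec) in PULSARS.items()}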
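For the periodicity check itself, gtpsearch is the real tool; the statistic
underneath could be something like the H-test of de Jager et al. A sketch,
assuming the phases have already been assigned by a gtpphase-like step:

    import numpy as np

    def h_test(phases, m_max=20):
        """H-test for pulsations on an array of phases in [0, 1).

        Returns (H, prob), with prob the commonly used approximation
        exp(-0.4 * H) for the chance probability.
        """
        phases = np.asarray(phases)
        n = len(phases)
        m = np.arange(1, m_max + 1)
        angles = 2.0 * np.pi * np.outer(m, phases)      # shape (m_max, n)
        # Z^2_m statistic, accumulated over the first m harmonics.
        z2 = 2.0 / n * np.cumsum(np.sum(np.cos(angles), axis=1) ** 2
                                 + np.sum(np.sin(angles), axis=1) ** 2)
        h = np.max(z2 - 4.0 * m + 4.0)
        return h, np.exp(-0.4 * h)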
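And the bookkeeping step; the table layout here is purely illustrative:

    import sqlite3

    def record_results(db_path, downlink_id, tstart, tstop, counts, probs):
        """Append one row per pulsar per downlink (schema is a placeholder)."""
        con = sqlite3.connect(db_path)
        con.execute("""CREATE TABLE IF NOT EXISTS pulsar_checks (
                           downlink_id INTEGER, tstart REAL, tstop REAL,
                           pulsar TEXT, n_gammas INTEGER, h_prob REAL)""")
        for name, n in counts.items():
            con.execute("INSERT INTO pulsar_checks VALUES (?,?,?,?,?,?)",
                        (downlink_id, tstart, tstop, name, n, probs.get(name)))
        con.commit()
        con.close()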
To be done: Evaluate the periodicity tests for various numbers of gamma rays
and realistic diffuse backgrounds (a Monte Carlo sketch is given below).
Determine the sensitivity to pointing, location, and time errors.
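One way to start on the first item: simulate phase sets with an assumed
pulsed fraction on a uniform (diffuse) background and tabulate how often the
H-test (sketch above) exceeds a detection threshold. The peak shape, pulsed
fractions, and threshold are all placeholders:

    import numpy as np

    def detection_power(n_events, pulsed_frac, n_trials=1000, p_thresh=1e-3,
                        rng=None):
        """Fraction of trials in which h_test (defined above) detects pulsations.

        The pulsed component is a narrow Gaussian peak in phase (a crude
        stand-in for a real light curve); the rest is uniform, representing
        diffuse background plus unpulsed emission.
        """
        rng = rng or np.random.default_rng()
        detections = 0
        for _ in range(n_trials):
            n_pulsed = rng.binomial(n_events, pulsed_frac)
            pulsed = rng.normal(0.5, 0.02, n_pulsed) % 1.0
            unpulsed = rng.random(n_events - n_pulsed)
            _, prob = h_test(np.concatenate([pulsed, unpulsed]))
            detections += prob < p_thresh
        return detections / n_trials

    # e.g., scan the few-tens-of-photons regime discussed above:
    # for n in (10, 20, 50, 100):
    #     print(n, detection_power(n, pulsed_frac=0.5))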
Jim points out that we should accumulate light curves for the pulsars using
the events selected above. The accumulation would be on at least one-week
scales and would provide an even more sensitive test of whether we have got the
absolute times correct.
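A minimal sketch of the accumulation, again assuming phases have already been
assigned; the bin count is arbitrary:

    import numpy as np

    class PhaseAccumulator:
        """Accumulate a binned pulse profile across downlinks (e.g., one week)."""

        def __init__(self, n_bins=50):
            self.edges = np.linspace(0.0, 1.0, n_bins + 1)
            self.counts = np.zeros(n_bins, dtype=int)

        def add(self, phases):
            """Fold one downlink's phases into the running profile."""
            self.counts += np.histogram(phases, bins=self.edges)[0]

        def profile(self):
            """Return (bin centers, accumulated counts)."""
            return 0.5 * (self.edges[:-1] + self.edges[1:]), self.counts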
On long time scales, like weeks or months, we'll accumulate enough gamma rays
and the orbit will precess enough that we'll be able to make deeper
investigations, like measuring the spectra, positions, etc., of the pulsars as
functions of inclination angle, plane of conversion, etc. This kind of
study is not a 'system test' diagnostic.
Diffuse emission
We'll have plenty of diffuse gamma rays from the Milky Way in every downlink
of science data.
- A rough guess would be 1.5 Hz * 3 hours * 3600 sec/hr * 0.75 live-time
fraction ~ 12k, i.e., of order 10k. Except in the case of the rare (possibly
never) enormous GRB, the total will not be affected very much by flaring
sources.
If we refine the diffuse model enough, or maybe just observe the sky for
enough time to get an idea of how it usually looks (point sources + diffuse), we
should be able to come up with a good idea of how many gamma rays to expect (or
even their distribution in energy or in event classes - like front vs. back) for
any given downlink. This would not be a precise test, of course, but not
all that coarse either.
- We'd be able to see gross changes right away (say, in effective area),
but should also be able to see, e.g., shifts in spectra that would suggest
that the energy calibration is off.
Possible procedure:
- Once the L1 pipeline has finished processing a downlink, we need to
tabulate some quantities about the events classified as gamma rays.
The classification cuts are not too important, but probably should
correspond to something that we use for our basic IRFs. The quantities
would be, for example, the number of gamma rays, the distribution in
conversion layer, the distribution in energy, and maybe the distribution in
inclination angle (a tabulation sketch is given after this list).
- The hard part is then to come up with reference values or reference
distributions for comparison. One way would be to run a gtobssim
simulation for the same time range and a reasonable model of the sky.
The diffuse emission is the most important component to model accurately.
With a precession period or so of data, we can probably make a reference
'sky model' intensity map that could be used with gtobssim. The system
test manager would compare the values and distributions and flag important
differences (a comparison sketch is given after this list).
- Some database needs to be updated with the results - like downlink
number, time range, the numbers and distributions, observed and simulated.
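A sketch of the tabulation step; the inputs (energy in MeV, integer
conversion layer, inclination angle theta in degrees) and the binnings are
assumptions about the event file:

    import numpy as np

    def tabulate_downlink(energy, conv_layer, theta):
        """Summarize one downlink's gamma rays for comparison with a reference."""
        e_bins = np.logspace(2, 5, 13)          # 100 MeV - 100 GeV, 4 bins/decade
        return {
            "n_gammas": len(energy),
            "energy_hist": np.histogram(energy, bins=e_bins)[0],
            "layer_hist": np.bincount(conv_layer),    # front/back or layer number
            "theta_hist": np.histogram(theta, bins=np.arange(0.0, 75.0, 5.0))[0],
        }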
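And one way the system test manager could flag important differences: a
chi-square comparison of each observed histogram against its
gtobssim-simulated reference, with the simulation rescaled to the observed
total so that it tests shape only. The pooling threshold and any significance
cut are placeholders:

    import numpy as np
    from scipy.stats import chi2

    def compare_histograms(observed, simulated, min_expected=5.0):
        """Chi-square comparison of an observed to a reference histogram.

        Bins with fewer than `min_expected` reference counts are dropped to
        keep the chi-square approximation valid. Returns (chi2, dof, p-value).
        """
        obs = np.asarray(observed, dtype=float)
        ref = np.asarray(simulated, dtype=float)
        ref = ref * obs.sum() / ref.sum()       # shape comparison only
        keep = ref >= min_expected
        stat = np.sum((obs[keep] - ref[keep]) ** 2 / ref[keep])
        dof = int(keep.sum()) - 1
        return stat, dof, chi2.sf(stat, dof)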