Sources, Simulation and Reconstruction

This section describes the physics modeling of incident particles and their traversal through GLAST, and the reconstruction and interpretation of the instrument's response to those particles.

As GLAST is a pair-conversion telescope, photons are detected individually: the trail of ionization left in the instrument by each conversion and its subsequent showering undergoes pattern recognition and fitting stages. These stages identify the particle and provide the needed estimates of direction, energy and quality.

Simulation is required for several purposes: instrument design, algorithm development, and estimation of performance, e.g. efficiency vs. purity for photons. A model of the incident flux is needed for the performance estimates to map out the full space of energies, angles and background contaminations that the real instrument is expected to encounter.

The entire process of source modeling, event simulation and reconstruction can be represented in the following diagram:

Sources: Incident Flux

An obvious requirement for simulation is to provide flexible sources of incident particles, corresponding to the "event generators" used in accelerator-based detector simulation. The sources must meet several needs: illumination of the entire detector or only a portion of it; incident angles, or ranges of angles, specified with respect to the detector, to the local zenith, or to the sky. The rate of incident particles must be a property of the source, which allows composite sources to be constructed whose relative component fractions are determined by each component's total flux. The orbital position must also be a parameter, both for geometric transformations from local coordinates to galactic ones and for cosmic-ray sources that depend on geomagnetic latitude.

Our design facilitates implementation of the above requirements by abstracting the properties of a source; the abstraction can then be satisfied by concrete instances of the variety of sources we want available. For example, to study the response of the detector to specific particles as a function of direction and energy, we want to specify an angle, or range of angles, with respect to the instrument; to understand the rates and potential contamination from cosmic-ray background, we need sources that represent the particle composition, energy dependence, zenith-angle dependence, and geomagnetic-latitude dependence of the observed cosmic rays. Finally, the abstract definition of a source, along with global parameters representing the instrument orientation, orbital position, orbital orientation, and absolute time, must accommodate the representation of gamma rays emitted by astrophysical objects, such as AGNs and gamma-ray bursts.

While we must provide a library of sources sufficient for the intended uses of the simulation, there must also be a mechanism for users to easily add new sources, either by modifying the parameters used to create instances of existing sources or by writing entirely new code, without the need to modify existing code. This is implemented with the same mechanism used for the built-in code: the abstract definition, or interface, represents sources, and a "library" of available sources is maintained to which a user can add a new module.
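As an illustration, a minimal sketch of such an interface-plus-library mechanism follows. All names here (Particle, ISource, SourceFactory) are invented for this sketch and are not the actual GLAST class names.

    // Minimal sketch of an abstract incident-particle source; all names
    // are invented for illustration.
    #include <functional>
    #include <map>
    #include <memory>
    #include <string>

    struct Particle {
        std::string name;    // e.g. "gamma", "p", "e-"
        double      energy;  // MeV
        double      dir[3];  // unit vector in instrument coordinates
        double      pos[3];  // entry point on the illumination surface
    };

    class ISource {
    public:
        virtual ~ISource() = default;
        // Mean rate (Hz); composite sources weight components by this.
        virtual double rate(double time) const = 0;
        // Draw the next particle; may depend on orbital position and
        // instrument orientation via global parameters.
        virtual Particle generate(double time) = 0;
    };

    // The "library": a registry of source makers keyed by name, to which
    // a user module can add itself without modifying existing code.
    class SourceFactory {
    public:
        using Maker = std::function<std::unique_ptr<ISource>()>;
        static SourceFactory& instance() { static SourceFactory f; return f; }
        void add(const std::string& name, Maker m) { makers_[name] = std::move(m); }
        std::unique_ptr<ISource> create(const std::string& name) { return makers_.at(name)(); }
    private:
        std::map<std::string, Maker> makers_;
    };

A new source is then a class implementing ISource plus one registration call; built-in and user-supplied sources are treated identically.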

The diagram shows some of the elements of this picture. The Flux Service box represents the interface to the other elements. It has access to a library of sources, from which the selected source is chosen, and to which external source descriptions can be added. It manages a description of the orbital parameters and GLAST orientation (the oval labeled Orbit), on which the selected source may depend.

Event Simulation

Given an incident particle, the task of the simulation is in principle simple to state: transport the particle through the instrument, using Monte Carlo sampling from a variety of distributions (ionization losses, interaction probabilities, interaction daughter products, etc.), and record the effects of the interactions on the sensitive detectors. These effects are generally the ionization losses of charged particles in the detector elements. Once these losses are tallied per detector element, a digitization phase converts them into expected digital outputs (hit strips, ADC values in the CAL and ACD). These digitizations are then formatted to look identical to the real data, so that the reconstruction process can be blind as to whether the data it works on are real or simulated.

There are four major categories involved with the simulation:

Geometry

Of all its clients, the simulation puts the heaviest demand on the geometry. The geometry must be able to handle a complex physical setup, forming a hierarchy of many shapes and materials. The goal of the design was to provide sufficient flexibility to describe the breadth of devices likely to be used (all engineering models plus the flight instrument), and to give equal access to all clients. The implementation uses XML, which provides a specification that is both human-readable and rich enough to describe a hierarchical geometry.
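To give the flavor of such a description, a hypothetical fragment is sketched below; the element and attribute names are invented for this example and do not reproduce the actual geometry schema.

    <!-- Hypothetical fragment; element and attribute names are invented
         for illustration and do not reproduce the actual schema. -->
    <composition name="trackerTray">
        <box name="converter"    material="Tungsten" dx="350." dy="350." dz="0.1"/>
        <box name="siliconPlane" material="Silicon"  dx="350." dy="350." dz="0.4"/>
        <posXYZ volume="converter"    z="2.0"/>
        <posXYZ volume="siliconPlane" z="0.0"/>
    </composition>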

A number of advantages of the design follow:

Particle Transport

This area has seen tremendous effort in the past twenty years, with the trail blazed by the EGS and GEANT projects.

...... more here .....

Bookkeeping of energy depositions in the instrument

We wish, as an option, to track all energy lost anywhere in the instrument. This means that energy lost in insensitive materials must be accounted for. In addition, each energy deposit must be traceable to its primary parent (e.g., the e+ or e- of a photon conversion, or the original particle otherwise).

Additional requirements are:

CAL

TKR

ACD

To respond to these requirements, we propose to notionally separate the instrument into 'volume integrating' and 'single-step' components. In any case, all volumes are marked as 'sensitive' to the simulation package (e.g., Gismo, G4). Volumes will carry an additional property indicating whether they are 'sensitive' for digitizations. 

Single-step

All steps are recorded in all volumes; dE, position vector, volume name and MC parent are recorded per hit. The hits are recorded up to and including the termination point or exit from the world volume.

Volume integrating

All steps until the particle interacts are stored, just as for the single-step volumes. Once the particle interacts, it is tagged as showering and all its daughters' contributions are assigned to it.
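A minimal sketch of the bookkeeping record implied by these two schemes follows; the type and field names are hypothetical. The essential point is that every deposit carries enough information to be traced to a primary parent.

    // Hypothetical bookkeeping record for energy deposits; the type and
    // field names are illustrative only.
    #include <string>

    struct McDeposit {
        double      dE;        // energy lost in this step (MeV)
        double      pos[3];    // step position (mm)
        std::string volume;    // physical volume name (sensitive or not)
        int         parentId;  // primary parent: the e+ or e- of a
                               // conversion, or the original particle
    };

    // In a volume-integrating component, once a particle is tagged as
    // showering, every daughter's contribution is charged to it.
    // showeringAncestorId < 0 means no showering ancestor exists.
    int chargeDepositTo(int trackId, int showeringAncestorId) {
        return (showeringAncestorId >= 0) ? showeringAncestorId : trackId;
    }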

Digitization

The hit information is sufficient to simulate the instrument digitizations, in which the simulated energy deposit is gathered up per sensitive element and transformed into a readout. For the TKR, this is a list of hit strips and time-over-threshold (TOT) per layer; for the CAL and ACD, these are pulse heights from the photodiodes and photomultipliers, respectively.

Random noise, which can add to or subtract from the counts above threshold, is added in all systems. Charge is shared among hit strips in a geometric fashion. Studies are under way toward more realistic handling of the sharing and of the modeling of the TOT.

Depending on the readout mode, either the best or all four PIN-diode readouts are simulated, including the light-output taper from end to end of the logs.
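As a toy illustration of the tracker digitization step, consider the sketch below; the threshold and noise values are placeholders, not the tuned instrument parameters, and noise-only strips and charge sharing are omitted for brevity.

    // Toy tracker-strip digitization: gather simulated energy per strip,
    // add Gaussian noise, apply a threshold. All constants are
    // placeholders.
    #include <map>
    #include <random>
    #include <vector>

    std::vector<int> digitizeLayer(const std::map<int, double>& energyPerStrip,
                                   double thresholdMeV = 0.03,
                                   double noiseSigmaMeV = 0.004) {
        static std::mt19937 rng(12345);
        std::normal_distribution<double> noise(0.0, noiseSigmaMeV);
        std::vector<int> hitStrips;
        for (const auto& [strip, dE] : energyPerStrip) {
            // Noise can push a deposit above or below threshold,
            // adding to or subtracting from the counts.
            if (dE + noise(rng) > thresholdMeV) hitStrips.push_back(strip);
        }
        return hitStrips;
    }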

Event Reconstruction

The event reconstruction takes the raw readouts from the detector elements and converts them to physics units (e.g., energies in MeV, distances in mm); performs pattern recognition and fitting to find tracks, and then photons, in the tracker; finds energy clusters in the calorimeter and characterizes their energies and directions; and uses the ACD to veto events in which a tile fired in the vicinity of a track extrapolation.

Tracker

Gamma-ray sources are located on the sky via reconstruction of the directions of the gamma rays arriving at Earth. Each direction is measured via the conversion of the gamma ray in thin tungsten planes, the subsequent reconstruction of the electron-positron pair with a precise tracker, and the measurement of the energy with a calorimeter.

The Tracker pattern recognition is initially done in separate x and y projections.  The projections are associated with each other whenever possible by matching tracks that pass from one Tracker module to another.  This significantly improves the power of the reconstruction for complex events (e.g., multi-photon events from primordial black hole evaporation). 

The converter, needed to produce the conversions, however introduces an unavoidable error into the particle trajectories due to multiple Coulomb scattering (MS). It is crucial to understand how multiple scattering affects the reconstruction of the particle trajectories.

The presence of non-negligible multiple scattering complicates the fitting procedure and the pattern recognition problem. The covariance matrix becomes non-diagonal in order to take into account the error correlations between different planes; it thus becomes larger and requires more computing time to invert. The Kalman Filter (KF) technique alleviates both problems elegantly.

The power of the Kalman Filter to handle the track-fitting problem when multiple-scattering errors are involved comes from its iterative nature. The KF considers one measurement at a time, introducing each independently into the fit. This facilitates the decision to add a given measurement to the track or remove it, thereby aiding track finding. It also permits the introduction of random errors (as is the case for multiple scattering) in a natural way: one has only to consider the multiple-scattering error produced between two measurement planes, which greatly simplifies the treatment of the MS error.

The KF also allows us to compute the precision, or resolution, of the track parameters at the vertex position, since it provides the parameters and covariance matrices of the track at each measurement location and, most importantly, at the first plane. Extrapolating this covariance matrix to the interaction vertex, one obtains the resolution of the track parameters. With this technique one can quantify the multiple-scattering effect and its relation to the other detector parameters. For example, one can address the following question: "when does a given multiple-scattering error make the inclusion of a new measurement into the fit useless?" In other words, "how many planes are relevant in the fit for a given multiple-scattering error?"

The track model is a straight line and the measurements are a set of periodic hit positions. The distance between planes, the resolution, and the amount of MS per plane are constant. This makes the application of the KF simple and straightforward.
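Under these assumptions, a single predict-update step of such a filter can be written compactly. The sketch below works in one projection with a two-parameter state (position, slope), the multiple scattering entering as process noise added between planes; all symbols are generic, not the actual fit code.

    // One Kalman predict-update step for a straight-line track in a
    // single projection. State x = (position u, slope t).
    #include <array>

    struct KalmanState {
        std::array<double, 2> x;                 // (u, t)
        std::array<std::array<double, 2>, 2> C;  // covariance
    };

    KalmanState step(const KalmanState& s, double d, double theta0,
                     double m, double sigma) {
        // Predict across the plane spacing d:  u' = u + t*d,  t' = t.
        double u = s.x[0] + s.x[1] * d;
        double t = s.x[1];
        // C' = F C F^T + Q, with F = [[1,d],[0,1]] and the multiple-
        // scattering process noise for an rms kink angle theta0 applied
        // at the start of the gap: Q = theta0^2 * [[d*d, d],[d, 1]].
        double q   = theta0 * theta0;
        double c00 = s.C[0][0] + 2*d*s.C[0][1] + d*d*s.C[1][1] + q*d*d;
        double c01 = s.C[0][1] + d*s.C[1][1] + q*d;
        double c11 = s.C[1][1] + q;
        // Update with the measurement m of u (variance sigma^2).
        double S  = c00 + sigma * sigma;    // innovation variance
        double k0 = c00 / S, k1 = c01 / S;  // Kalman gain
        double r  = m - u;                  // residual
        KalmanState out;
        out.x = { u + k0 * r, t + k1 * r };
        out.C[0][0] = (1 - k0) * c00;
        out.C[0][1] = out.C[1][0] = (1 - k0) * c01;
        out.C[1][1] = c11 - k1 * c01;
        return out;
    }

Iterating this step plane by plane, and extrapolating the covariance at the first plane to the vertex, yields the resolution estimates discussed above.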

Calorimeter

The calorimeter consists of 16 modules, each with 8 layers of 12 CsI(Tl) crystals in a hodoscopic arrangement, that is, alternately oriented along the X and Y directions, to provide an image of the shower. It is designed to measure energies from 30 MeV up to 300 GeV, and even up to 1 TeV.

However, the calorimeter is only 8.5 radiation lengths (X0) thick and therefore cannot provide good shower containment for high-energy events, even though such events are very valuable for several astrophysics topics. Indeed, the mean fraction of the shower contained can be as low as 30% at 300 GeV at normal incidence. In this regime, the observed energy becomes very different from the incident energy, the shower-development fluctuations become larger, and the resolution degrades quickly.

Two solutions have been pursued so far to correct for the shower leakage. The first is to fit a mean shower profile to the observed longitudinal profile. There are two free parameters: E0 and the starting point of the shower, which takes early fluctuations into account.
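The mean longitudinal profile is commonly parameterized as dE/dt = E0 b (bt)^(a-1) e^(-bt) / Gamma(a), with t the depth in radiation lengths. A sketch of this model evaluated per layer, suitable as the core of such a fit, follows; the shape parameters a and b, the layer thickness, and the neglect of the incidence angle are all simplifying placeholders.

    // Mean longitudinal shower profile evaluated as the expected deposit
    // per calorimeter layer. E0 and the shower start t0 are the two free
    // parameters of the fit; a and b would come from shower-physics
    // parameterizations (placeholders here).
    #include <cmath>
    #include <vector>

    std::vector<double> expectedLayerDeposits(double E0, double t0,
                                              double a, double b,
                                              int nLayers, double layerX0) {
        std::vector<double> expected(nLayers, 0.0);
        const int nSub = 20;  // numerical integration steps per layer
        for (int layer = 0; layer < nLayers; ++layer) {
            double sum = 0.0;
            for (int i = 0; i < nSub; ++i) {
                double t = (layer + (i + 0.5) / nSub) * layerX0 - t0;
                if (t <= 0.0) continue;  // shower not yet started
                sum += b * std::pow(b * t, a - 1.0) * std::exp(-b * t);
            }
            expected[layer] = E0 * sum * (layerX0 / nSub) / std::tgamma(a);
        }
        return expected;
    }

The fit then adjusts E0 and t0 to best match these expected deposits to the measured layer sums (eight per module).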

The profile-fitting method proves to be an efficient way to correct for shower leakage, especially at low incidence angles, when the shower maximum is not contained. The resolution is 18% for on-axis 1 TeV photons, a 50% improvement over the raw sum of the energies recorded in the crystals.

The second method uses the correlation between the escaping energy and the energy deposited in the last layer of the calorimeter. The last layer carries the most important information concerning the leaking energy: the total number of particles escaping through the back should be nearly proportional to the energy deposited in the last layer. The measured signal in that layer can therefore be modified to account for the leaking energy.
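Schematically, the correction amounts to adding back a leakage term proportional to the last-layer signal. A toy version follows; the proportionality constant is invented, and in practice it would be derived from simulation as a function of energy and angle.

    // Toy last-layer leakage correction: the energy escaping the back is
    // taken as nearly proportional to the deposit in the last layer.
    // The constant k is a placeholder for illustration only.
    double correctedEnergy(double sumAllLayers, double lastLayerDeposit,
                           double k = 3.0) {
        return sumAllLayers + k * lastLayerDeposit;
    }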

The methods presented significantly improve the resolution. Up to 1 TeV, the on-axis resolution is better than 20%, and for large incidence angles (more than 60 degrees) it is around, or even below, 4%. It should be noted that the last-layer correction is more robust, since it does not rely on a fit, but its validity is limited to relatively well-contained showers, making it difficult to use above roughly 70 GeV for low-incidence events. There is still room for improvement, especially by correcting for losses between the different calorimeter modules and through the sides.

Background Rejection

Background rejection performs the function of particle identification: specifically, was the incoming particle a photon or not? With a background-to-signal ratio of order 10^5:1, shower fluctuations in background interactions can mimic photon showers in non-negligible numbers. To that end, cuts are applied to the events to suppress the background. Studies done to date use the following properties of the instrument and particle interactions to perform the suppression:

These selection criteria were sufficient to meet the science requirements. They will continue to be honed as reconstruction algorithms and understanding of the instrument and background improve.


T.Burnett, R.Dubois Last Modified: 07/26/2001 02:37