ACD Software scalar documentation

Monday, July 19, 2004 2:29 PM

JJ Russell




The software accumulates three distinct sets of numbers, using both the GEM and ACD contributions as input to form the output record:

  1. Summary statistics (16 32-bit numbers; only 9 used, 7 spare)
  2. CNO statistics     (24 32-bit numbers)
  3. Tile statistics    (4 x 32 x 32 32-bit numbers)


The first 9 32-bit numbers are arranged as a 3 x 3 array giving the status of the GEM and ACD contributions. The next 7 are spares for future expansion. Either contribution (GEM/ACD) may be

  1. Absent
  2. Present and OKAY
  3. Present, but with internal errors

Currently the GEM contribution can generate no internal errors. About the only check I could think of was whether the length of the GEM contribution matched the expectation (the length of the GEM contribution is constant). The ACD contribution, however, can generate internal consistency errors, so things may show up in this category. As soon as I have a good idea of what to check for, I may change the GEM contribution to check for consistency as well.


The 3 x 3 numbers are laid out using the following status values:

   Value   Meaning
       0   Present&Ok
       1   Present&InError
       2   Missing



So, for example, entry 0 means both the ACD and GEM contributions are Present & Ok. The GEM is the faster-moving index. See EMP/ASC.h for details; it's the ASC_SUMMARY enumeration. Naturally, there really is no category called Missing&InError.
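As a sketch of the indexing just described (the names here are illustrative; the authoritative definitions are the ASC_SUMMARY enumeration in EMP/ASC.h):

```python
# Status codes for each contribution, per the Value/Meaning table above.
PRESENT_OK, PRESENT_IN_ERROR, MISSING = 0, 1, 2

def summary_index(acd_status, gem_status):
    """Index into the 3 x 3 summary block; GEM is the faster-moving index."""
    return acd_status * 3 + gem_status

# Entry 0: both ACD and GEM are Present & Ok.
assert summary_index(PRESENT_OK, PRESENT_OK) == 0
# Entry 5: ACD Present&InError, GEM Missing.
assert summary_index(PRESENT_IN_ERROR, MISSING) == 5
```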


Here is a formatted view of these numbers from a test set of data. You can tell it is a test set since all events have both GEM and ACD contributions in the Present & Ok state; i.e. everything is perfect.


                     Counting Statistics
                |           GEM            |
 | Total Events |     PRESENT     |        |
 |              +-----------------+ MISSING|
 |          186 |   OK   |  ERROR |        |
 |   | P | OK   |     186|       0|       0|
 | A | R +------+--------+--------+--------+
 | C | E | ERR  |       0|       0|       0|
 | D | S +------+--------+--------+--------+
 |   | MISSING  |       0|       0|       0|


What to do about events with missing contributions

There was a design decision I had to make: what to do with events having a missing contribution. The decision was to update the remaining statistics as best I could. This causes no confusion for the CNO statistics, since either the ACD contribution is present (in which case they are accumulated) or the ACD contribution is absent (in which case, surprise, surprise, they aren't accumulated). However, the tile statistics block does mix the two contributions in a way which is obvious once you read what is being accumulated, so a missing ACD or GEM contribution could distort those statistics. For the application currently targeted, testing of the ACD, this seemed like a reasonable compromise. (If I carry these counters into the FSW proper, I'll probably separate the tile statistics into 3 classes: ACD only, GEM only, and ACD & GEM both present.)



CNO Statistics

The CNO statistics block is a block of 4 numbers for each of the 6 A/B board pairs. The four numbers for each board pair represent the number of events where

  0. The CNO signal was absent on both boards
  1. The CNO signal was present on the A board only
  2. The CNO signal was present on the B board only
  3. The CNO signal was present on both boards

For reasons of efficiency, category 0 (CNO signal was absent on both boards) is not accumulated, but can be easily gleaned from the data by subtracting the sum of the other 3 categories from the total number of events accumulated.
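The subtraction is trivial; as a sketch (function and argument names are mine, not from the software):

```python
def cno_absent_count(total_events, a_only, b_only, both):
    """Events with no CNO signal on either board of the pair.
    Category 0 is not stored; recover it by subtracting the sum
    of the 3 stored categories from the total event count."""
    return total_events - (a_only + b_only + both)

# Using the 3L/3R board pair from the example below (114, 0, 0) and
# the 186 total events from the counting-statistics table:
assert cno_absent_count(186, 114, 0, 0) == 72
```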


Here is a formatted example (note that I didn't even bother with the CNO-signal-absent-on-both-boards category).


CNO Statistics
 1LA:     36  2LA:      0  2RA:     36  3LA:    114  4LA:      0  4RA:      0
 1RB:      0  2LB:      0  2RB:      0  3RB:      0  4LB:      0  4RB:      0
Both:      0 Both:      0 Both:      0 Both:      0 Both:      0 Both:      0



TILE Statistics

Ah, the main course. On a per-tile basis, an array of 32 counters is accumulated. The index of this array (running from 0 to 31) is actually a bit mask of the status of the 5 different bits associated with each tile.

So, a counter at array index 31 (0x1f) counts events with all 5 bits set. Note also that, again for efficiency reasons, index 0 (no bits set) is not accumulated, but it can be had by subtracting the sum of all the other entries from the total number of events.
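In sketch form (the meaning of each bit position is not reproduced here, and the function names are mine):

```python
def tile_index(bits):
    """Pack the 5 per-tile status bits into the 0..31 array index.
    bits is a sequence of 5 booleans, one per status bit."""
    index = 0
    for position, bit_set in enumerate(bits):
        if bit_set:
            index |= 1 << position
    return index

def no_bits_count(total_events, counters):
    """Index 0 (no bits set) is not accumulated; recover it by
    subtracting the sum of indices 1..31 from the total."""
    return total_events - sum(counters[1:])

assert tile_index([True] * 5) == 31    # 0x1f: all 5 bits set
```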


Why did I choose this method?

Depending on the settings of the discriminators, one would certainly expect various strong correlations. For example, all things being equal (collection efficiency, light transport, photo-tube gains, amplifier gains, discriminator settings, etc.), one would expect a high correlation between the A and B readouts of the same tile. One can check for consistency by beating the 5 signals against each other.
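Because the 32 counters form a complete histogram over the 5 bits, any pairwise coincidence can be recovered by summing the appropriate indices. A sketch (which bit positions correspond to the A and B readouts is an assumption left to the reader; see the bit assignments in the software):

```python
def marginal_count(counters, bit):
    """Events in which the given bit was set: sum the 16 indices
    that have that bit on."""
    return sum(c for i, c in enumerate(counters) if i & (1 << bit))

def coincidence_count(counters, bit_a, bit_b):
    """Events in which both bits were set together."""
    mask = (1 << bit_a) | (1 << bit_b)
    return sum(c for i, c in enumerate(counters) if i & mask == mask)

# Toy data: 10 events with bits 0 and 1 set, 5 with only bit 0 set.
counters = [0] * 32
counters[0b00011] = 10
counters[0b00001] = 5
assert marginal_count(counters, 0) == 15
assert coincidence_count(counters, 0, 1) == 10
```

Comparing the coincidence count against the two marginals is one way to quantify the A/B correlation described above.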


Tile Order

The tile counters are arranged in 4 arrays of 32. These 4 arrays of 32 tiles, and the order within the 32 tiles, match the order of the tiles in the GEM data (bit 0 = array index 0). This ordering of the tiles was another design decision, but a somewhat easier one than the previous decision on what to do when an ACD or GEM contribution was missing. The order could have been by electronics, essentially following the board order, or it could have been (as was chosen) the more geometrical order presented by the GEM data. I chose the latter for two reasons

  1. The data can be easily remapped by a display program from one space to another, because... (see point 2)
  2. The GEM space was larger; this choice avoids ambiguities about what to do with GEM bits that have no active channel associated with them. By picking the larger space, even the unassigned channels in the GEM get counted. I thought this might be useful if, in the early days, we had a mis-assignment in the GEM which put a real tile into an unassigned channel. At least the data might provide a clue to where it is when it didn't show up in the right spot. (If it doesn't show up anywhere, well, that's another piece of information.)
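The remapping in point 1 is just a permutation of the counter arrays; a sketch (the permutation table here is hypothetical, the real one would come from the detector's channel-assignment tables):

```python
def remap(gem_counters, permutation):
    """Reorder per-tile counter arrays from GEM space to another
    space (e.g. electronics/board order). permutation[i] gives the
    GEM-space index that feeds output slot i."""
    return [gem_counters[g] for g in permutation]

# Hypothetical 3-tile example: output slots drawn from GEM slots 2, 0, 1.
assert remap([10, 20, 30], [2, 0, 1]) == [30, 10, 20]
```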

Basically, I didn't want to destroy any information. Naturally I can't preserve all the information present in the raw data, but I thought this scheme allowed for preserving some of the more interesting correlations.



The software I provided does only two things of interest to you: it accumulates the statistics and it clears the statistics. The starting/stopping and the decision of when to clear are purely LATTE control functionality. I would suggest using the CLEAR function sparingly. Whenever one wishes to issue a clear, instead read the statistics at that point and make that reading a base-line set. Display of subsequent readings would then subtract this base-line set to get the delta.


This scheme has two advantages.

  1. It allows for multiple base-lines (one could base-line from 1 second ago, 30 seconds ago, or 1 minute ago). Note that having the 'display' program add up statistics over multiple 1-second samples is not the same (there is the time lost during the clear). This method (particularly if the timestamps LATTE provides are done right) allows the most accurate and consistent bookkeeping possible.
  2. It allows for multiple consumers. That is, more than one display program can sponge on this data. Consumer A never has to worry about Consumer B mucking with his data by clearing the counters at some random time.
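The base-line scheme on the consumer side amounts to one subtraction per counter; a sketch (the reading values are made up for illustration):

```python
def delta(current, baseline):
    """Per-counter difference between a current reading and a saved
    base-line reading; the counters themselves are never cleared."""
    return [c - b for c, b in zip(current, baseline)]

baseline = [100, 40, 0]    # reading saved instead of issuing a CLEAR
later    = [186, 76, 3]    # a subsequent reading
assert delta(later, baseline) == [86, 36, 3]
```

Each consumer keeps its own base-line, so any number of display programs can share the one set of free-running counters.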