Core Minutes 7/19/2005
Present: Pol D'Avezac, Joanne Bogart, Toby Burnett,
David Chamont, Jim Chiang, Richard Dubois, Dan Flath,
Warren Focke, Riccardo Giannitrapani, Berrie Giebels,
Tom Glanzman, Navid Golpayegani,
Tony Johnson, Heather Kelly, Michael Kuss,
Matt Langston, Julie McEnery, Chuck Patterson,
Igor Pavlin, James Peachey, Leon Rochester, Robert Schaeffer,
Tom Stephens, Tracy Usher
There was very little to talk about other than DC2 readiness:
- Weekend runs: (Richard)
2-million-event all-gamma and 10-million-event background runs including
the latest and greatest (as of last weekend) CAL code were
largely successful, earlier Pipeline problems having been ironed out.
Heather fixed a Data Server problem with concatenating files. There
were some lesser glitches; see CAL discussion below.
- Peeler and Pruner: (Tom G.) Memory
problems in Pruner have been fixed, reducing usage from over a gigabyte to
about 70 megabytes.
A new summer student (Andrey Goder) hit the ground running: he has
already written a program to verify that the Peeler really retrieves
the events it was requested to retrieve.
Development of a second new indexer for TChains is in progress: the design is done; a first draft is
ready for testing. This is something that is supposed to be part of ROOT
proper, but isn't. [Related information from Tom: the first new indexer required
an open file descriptor for every single data file. When the number of runs
being processed in a task gets large enough (larger than about 950), the job
crashes on Linux boxes: they have a default limit of 1024 open descriptors.
Tom has asked SCS to up this limit, but the second new indexer will not only
solve this problem; it will also make for smaller and faster jobs.]
Peeler works on all Tree data (MC, digi, recon) but not on tuples.
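The descriptor-limit problem Tom describes can be checked from the shell; a minimal sketch (the commands are standard shell builtins, not a project script):

```shell
# Show the per-process soft limit on open file descriptors.
# Linux commonly defaults to 1024, which a job holding ~950+ run
# files open (plus stdio, logs, and shared libraries) can exhaust.
soft=$(ulimit -Sn)
echo "soft fd limit: $soft"

# The soft limit can be raised per session, but only up to the hard limit:
echo "hard fd limit: $(ulimit -Hn)"
```

Raising the hard limit itself takes administrator action (hence the request to SCS); the second indexer sidesteps the issue by not keeping every file open at once.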
- Data server: (Tony) See the Data Server page for access to all its
facilities. He has tested the web interface to the Pruner; everything
seems OK, but it would be good if others would try it out. He was
holding off on the Peeler in case it wasn't ready, but Tom says it is.
It works for any ROOT Tree data (MC, digi, recon) but not for ntuple
data. See this Data Catalog screen shot. It still needs a web interface.
The Data Catalog can be used to keep track of Pipeline data
or data produced remotely. Cataloging of Pipeline data should happen
automatically, but doesn't yet. Tony saw pruned datasets in Richard's
hand-generated page. Is there such a thing as a standard pruning, which
should automatically be cataloged? And if so, should users be allowed to
further prune a pruned dataset? (Richard) Yes to both.
Tony will be on vacation next week. Igor will be back-up.
- I & T plans: (Warren)
Expect data-taking throughout the week, including retaking the 6-tower
data on Thursday and Friday. Another tracker is due by the end of the month,
at which time we'll go to 8 towers. Then install ACD [do we take data
with 8 towers before installing ACD?] and take data in that configuration.
(Richard) With more towers, we're having to make runs shorter in order
to avoid running up against per-job batch processing limits. This
impacts data-taking. We hope to get around this by spreading the recon
step across multiple jobs.
Warren made it official with JIRA request SVAC-68.
- Background model: (Toby)
See his presentation
(pdf or
ppt) on experience so far with background runs according to
the DC2 scheme.
The new collection could be used for training. Julie is interested in
comparing new- and old-style background samples.
- CalRecon: (Tracy)
The big changes in last weekend's runs concerned the
moments analysis calculation and CAL mip-finding. Also David
put in some exception handling, a very good thing since the code
does on occasion throw exceptions. Unfortunately,
a missing call in Merit kept the output from the new mip-finding
from making it all the way to the ntuple;
it was in the ROOT tree. This is now fixed. There was also a problem
with the moments analysis which was tracked down to a bug in
CalXtalRecAlg (under certain circumstances, the energy was assigned to the
wrong end of the crystal). Fix is now in and tagged [and
Toby started up a new GR HEAD build, even as we spoke].
To get a handle on what's left, see this Confluence page. For
reference, see also the original refactoring proposal.
- More for next round: (Richard) So far
we have a 10-million-event background run, and a flawed one at that. The
original target was 100 million by mid-month. We'll try for this in the
next week or so.
(Julie) Change to CAL threshold has been agreed to;
need to modify parameter(s?) in XML file.
(Toby) There is still a problem with CAL LO trigger. Fixing it is
Zach's highest priority.
(Joanne) In order to use event time as generated by FluxSource,
the job option parameter
CalibDataSvc.CalibTimeSource should be allowed to take on its default value,
"data". See CalibSvc
mainpage, job options section, for a description of this parameter.
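Concretely, this means leaving the parameter at (or explicitly setting it to) its default; a sketch of the relevant job options line, in old-style Gaudi job options syntax (the surrounding file contents are not specified here):

```text
// Let CalibDataSvc take the calibration time from the event data
// stream rather than a fixed clock. "data" is the default value,
// so not overriding this property elsewhere has the same effect.
CalibDataSvc.CalibTimeSource = "data";
```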
- Navid news:
(Navid) He's added author lines to packages
lacking them so that RM will always know who the culprit is.
He's been working on the new archiver, which will support astore as well as
mstore. astore and mstore are disk-to-tape-and-back systems supported by
SCS. astore is newer and better (faster; larger file sizes).
J. Bogart, Last Modified:
01-Jun-2010 15:48:08 -0700