Present: Joanne Bogart, Toby Burnett, Xin Chen, Johann Cohen-Tanugi, Richard Dubois, Dan Flath, Navid Golpayegani, Heather Kelly, Traudl Hansl-Kozanecka, Sean Robinson, Leon Rochester, Alex Schlessinger, Tracy Usher, Karl Young
G4Propagator: (Leon) Tracy has committed a new version with a modified interface. It didn't make it through the nightly build (a Linux compile error), but that ought to be solvable in short order. The new G4Propagator has additional functionality, such as access to volume ids and to various step information. To start, Leon plans to use the new features to read radiation lengths (rather than hard-coding them) and to fetch information needed for alignment.
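As a rough illustration of the kind of client code this enables (a sketch only; these names are our invention, not the actual G4Propagator interface):

    // Sketch only: method names are hypothetical, not the real G4Propagator.
    class IPropagator {
    public:
        virtual void   step(double arcLen) = 0;        // advance the track
        virtual int    getNumberSteps() const = 0;     // steps taken so far
        virtual double getRadLength(int i) const = 0;  // radiation lengths at step i
        virtual int    getVolumeId(int i) const = 0;   // id of volume crossed at step i
        virtual ~IPropagator() {}
    };

    // Example client: sum the material seen along a track,
    // instead of hard-coding radiation lengths.
    double materialAlongTrack(IPropagator& prop, double arcLen) {
        prop.step(arcLen);
        double totalX0 = 0.;
        for (int i = 0; i < prop.getNumberSteps(); ++i)
            totalX0 += prop.getRadLength(i);
        return totalX0;
    }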
Nightly build: (Alex) There are two new features:
a link to documentation, with Doxygen output for each package within a tagged release of our major containers, GlastRelease and ScienceTools.
an indication of failures on each major container's home page (see, for example, the one for GlastRelease). One red dot means a unit-test failure; two mean a build failure. It should be straightforward to link the dots to the relevant output.
Tag collector and more: (Navid) The tag collector is well on its way. A no-op version is working on his private (slow!) machine against a copy of the repository. Features will include the ability to add or delete packages from GlastRelease and to select which tag of a contained package should appear in the GlastRelease requirements. You can take a peek at the prototype on his home server (at least sometimes; don't be dismayed if the connection is refused once or twice) or look at some screen shots, currently taken from an earlier design but soon to be updated. A single username/password will be used for authentication.
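For context, tag selection amounts to managing the use lines of the GlastRelease requirements file; a sketch of the kind of CMT fragment involved (the version numbers here are made up):

    package GlastRelease

    # Each use line pins a contained package to a tag; the tag collector
    # would add, delete, or retag these lines. Versions are illustrative only.
    use Event       v8r1
    use TkrRecon    v4r3
    use G4Generator v2r0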
Meanwhile, Navid has a couple more projects nearing completion: automated generation of release.notes diffs, and a restricted CVSweb-like view of the packages used by a single package such as GlastRelease. (To-do: organize packages in the list somehow, probably in case-sensitive order as CVSweb does.)
System tests: (Richard) They're now available for GlastRelease v2r1, though until certain technical matters get straightened out they might not be visible outside SLAC. Julie McEnery has agreed to take overall responsibility for oversight of the results.
CU: (Xin) He has been working on a model that includes not only the CU geometry but also a magnetic field, breaking new ground in our use of G4. Advice from knowledgeable SLAC users would be helpful.
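For reference, the standard Geant4 recipe for attaching a uniform field is short; a minimal sketch (the field strength here is made up) looks like:

    #include "G4UniformMagField.hh"
    #include "G4FieldManager.hh"
    #include "G4TransportationManager.hh"
    #include "G4ThreeVector.hh"
    #include "CLHEP/Units/SystemOfUnits.h"

    // Attach a uniform field to the whole geometry; Geant4 then bends
    // charged-particle steps accordingly.
    void setupField() {
        G4UniformMagField* field =
            new G4UniformMagField(G4ThreeVector(0., 0., 1.0 * CLHEP::tesla));
        G4FieldManager* fieldMgr =
            G4TransportationManager::GetTransportationManager()->GetFieldManager();
        fieldMgr->SetDetectorField(field);
        fieldMgr->CreateChordFinder(field);
    }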
OPUS: (Dan) [link awaiting regeneration of pdf file] So far OPUS, developed originally for Hubble, looks like a viable option for our pipeline. If it works out, it would probably save us considerable design and coding effort. It runs on the batch-farm platforms we might conceivably care about (Linux, Solaris). Because we, too, are a NASA mission, it's free, and we should be able to obtain the source if we need it.
The manager process for an ongoing pipeline is built from compiled C++ code. There are also two Java managers, one for configuration (which tasks are running, on which machines...) and one for monitoring the output.
So far OPUS has been successfully installed on a (slow!) RedHat 8 machine on campus. Next steps:
install on a significantly faster SLAC machine
get it to run Gleam (MC + Reconstruction to start, but note that at some point we also have to check out running Gleam with event data file input)
integrate with SLAC batch farm
integrate with processing database
OPUS developers believe the last two steps probably just require use of the OPUS C++ API; that is, the hope is that we won't have to modify the OPUS source itself.
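OPUS coordinates its stages blackboard-style: processes communicate through status entries that each stage polls and updates. Purely to illustrate that pattern (this is not the OPUS API; the names and file conventions below are invented), a toy poll might look like:

    #include <filesystem>
    #include <string>

    namespace fs = std::filesystem;

    // Toy blackboard poll: a stage claims work by looking for status
    // files left by the previous stage, then marks the dataset as its own.
    bool claimNextDataset(const fs::path& blackboard, std::string& dataset) {
        for (const auto& entry : fs::directory_iterator(blackboard)) {
            if (entry.path().extension() == ".ready") {
                dataset = entry.path().stem().string();
                fs::rename(entry.path(),
                           entry.path().parent_path() / (dataset + ".running"));
                return true;   // this stage now owns the dataset
            }
        }
        return false;          // nothing to do; poll again later
    }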
Data challenge planning: (Richard) See this [daunting! ed.] summary of most, if not all, of what's needed for the first data challenge. The goal is to generate and process a day's worth of data, including at least a couple of flaring phenomena. Kick-off is planned for mid-September, coincident with the collaboration meeting.
Are we missing anything substantial?
Do we need to be concerned about resources? Even generating just an orbit's worth of pre-FSW-trigger data, including backgrounds, takes significant CPU time and disk space (about 1/2 terabyte). If we keep only the post-trigger data, space won't be a concern.
EM: (Joanne) The work required to handle FITS files is more or less understood. The estimated time to get it working (once the clock has started, which it hasn't yet) is a few days to a week. [Should probably also have mentioned that last week's problems on Linux were indeed pilot error. ebf files can be read on both platforms, though more verification that what we're reading is sensible is in order.]
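If the FITS handling ends up on top of cfitsio (an assumption on our part; the minutes don't say which library is in play), the basic pattern of threading one status flag through every call is:

    #include <cstdio>
    #include "fitsio.h"

    // Minimal cfitsio sketch: open a file and read one header keyword.
    // Every call takes the same status flag; nonzero means an error,
    // and subsequent calls become no-ops until it is cleared.
    int readTelescope(const char* filename) {
        fitsfile* fptr = 0;
        int status = 0;
        char telescope[FLEN_VALUE];

        fits_open_file(&fptr, filename, READONLY, &status);
        fits_read_key(fptr, TSTRING, "TELESCOP", telescope, 0, &status);
        if (!status) std::printf("TELESCOP = %s\n", telescope);
        fits_close_file(fptr, &status);
        fits_report_error(stderr, status);   // prints nothing if status == 0
        return status;
    }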
(Richard) According to Eduardo we can expect data in a week or two.
Dependency graphics: (Toby) He generated a sample of the sort of output we can expect from GraphViz. We still need to learn how to make the best use of this facility. Joanne wondered why not all the dependencies among calibration packages were shown, until Leon pointed out that packages can appear more than once in the graph. Traudl suggested generating a collapsible view; Toby had already tried that, but what he got was fully collapsed!
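For anyone who wants to experiment, GraphViz input is just a plain dot file; a minimal made-up dependency graph looks like:

    digraph deps {
        rankdir=LR;                 // draw left to right
        // Event is a shared dependency: dot draws the node once,
        // but an expanded tree view would repeat it under each parent.
        GlastRelease -> TkrRecon;
        GlastRelease -> CalRecon;
        TkrRecon -> Event;
        CalRecon -> Event;
    }

Render with, e.g., dot -Tpng deps.dot -o deps.png.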