Present: Ursula Berthon, Joanne Bogart, Toby Burnett, Johann Cohen-Tanugi, Seth Digel, Richard Dubois, Dan Flath, Berrie Giebels, Navid Golpayegani, Traudl Hansl-Kozanecka, Heather Kelly, Matt Langston, Julie McEnery, Sean Robinson, Leon Rochester, Alex Schlessinger, Tracy Usher
Root conversion service: (Ursula) This is well on its way. She discussed reasons for having such a Service, features of what she's got so far, and alternate choices we could still make. For example, the current design writes out (and reads in) three separate files for mc, digi and recon data, as is done by GlastRelease now. Instead we could have a single logical file (all that the client would be aware of) facilitating access to the three physical files.
Ursula is starting to implement the full complement of converter classes. The best strategy is to put most of the work in base classes. A first version capable of writing and reading at least one object belonging to each of the three files should be available in about a week.
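The "single logical file" alternative mentioned above can be sketched as a thin facade: the client calls one read/write interface, and the facade routes each piece of an event to the appropriate physical file. This is only an illustration of the idea under discussion; the class and method names, and the JSON-lines storage format, are invented here and are not the actual converter-service design.

```python
import json

class LogicalEventFile:
    """Hypothetical sketch: present the mc, digi and recon physical
    files as a single logical file. Names and format are illustrative."""

    STREAMS = ("mc", "digi", "recon")

    def __init__(self, basename):
        # One physical file per stream, hidden behind one object.
        self._files = {s: f"{basename}_{s}.jsonl" for s in self.STREAMS}

    def write_event(self, event_id, payload):
        # The client makes one call; the facade fans out to three files.
        for stream in self.STREAMS:
            record = {"event": event_id, "data": payload.get(stream)}
            with open(self._files[stream], "a") as f:
                f.write(json.dumps(record) + "\n")

    def read_event(self, event_id):
        # Reassemble the event from all three physical files.
        out = {}
        for stream in self.STREAMS:
            with open(self._files[stream]) as f:
                for line in f:
                    rec = json.loads(line)
                    if rec["event"] == event_id:
                        out[stream] = rec["data"]
        return out
```

The point of the facade is that switching between "three files" and "one file" later would change only this class, not its clients.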
DC1:
(Alex) The OPUS pipeline is close to ready for a serious trial. See this diagram of the database tables which track jobs. A more systematic naming scheme (design in progress) will simplify handling. They (Alex and Dan) are also working on tools to help with maintenance and development; e.g., a simple way to delete everything associated with a failed job from database tables.
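A minimal sketch of the "delete everything associated with a failed job" maintenance tool, assuming the job-tracking tables reference a central jobs table. All table and column names here are invented for illustration; with foreign keys declared `ON DELETE CASCADE`, one delete on the parent row cleans up every dependent table.

```python
import sqlite3

# Toy stand-in for the pipeline's job-tracking tables (names invented).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE jobs(job_id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE job_steps(id INTEGER PRIMARY KEY,
        job_id INTEGER REFERENCES jobs(job_id) ON DELETE CASCADE,
        name TEXT);
    CREATE TABLE job_files(id INTEGER PRIMARY KEY,
        job_id INTEGER REFERENCES jobs(job_id) ON DELETE CASCADE,
        path TEXT);
""")
# SQLite enforces foreign keys only when this pragma is on.
conn.execute("PRAGMA foreign_keys = ON")

def purge_failed_job(conn, job_id):
    # Deleting the parent row cascades to job_steps and job_files.
    conn.execute("DELETE FROM jobs WHERE job_id = ?", (job_id,))
    conn.commit()
```

The design choice worth noting is that cleanup logic lives in the schema, so new dependent tables added later are purged automatically.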
(Navid, Richard) Navid's web interface to the Root analog of D1 has most of its intended functionality. It can be used for pruning (e.g., apply cuts to require at least one track and a trigger) or to extract a set of events identified by run and event number. Still remaining: handle requests of the standard SSC form (cuts on time, energy, area, ...) and requests for D2-like data (the Exposure ntuple). The web interface sends requests to a server, which generates a file fulfilling the request. The client gets an email with the location of the file. Performance, not seriously considered to date, needs work.
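The two kinds of request the interface already handles, pruning by cuts and extraction by (run, event number), amount to simple filters over the event list. A sketch under assumed field names (`n_tracks`, `trigger`, `run`, `event` are invented for illustration, not the actual ntuple variables):

```python
def prune(events, min_tracks=1, require_trigger=True):
    """Keep events passing the cuts, e.g. at least one track and a trigger."""
    return [e for e in events
            if e["n_tracks"] >= min_tracks
            and (e["trigger"] or not require_trigger)]

def extract(events, wanted):
    """Pull out the events identified by (run, event number) pairs."""
    wanted = set(wanted)
    return [e for e in events if (e["run"], e["event"]) in wanted]
```

The still-missing SSC-style requests (cuts on time, energy, area, ...) would be further predicates of the same shape.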
New FluxSvc coming soon, will need to be incorporated.
Randoms and reproducibility (cont'd): It should suffice to first do a run that generates the 4-vectors (which in particular records the event time, needed to phase properly w.r.t. rocking). Then, to recover a particular event, start from this file plus (run number, event id). Tracy will try out this procedure.
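The two-pass procedure above can be sketched as follows: the first pass writes the generated 4-vectors keyed by (run, event id), and the recovery pass looks up just the requested event so the full simulation can be replayed for it alone. The function names and the JSON-lines file format are invented here for illustration.

```python
import json

def save_four_vectors(path, records):
    """First pass: store each generated 4-vector (which carries the
    event time) keyed by (run, event id). Format is illustrative."""
    with open(path, "w") as f:
        for (run, evt), fourvec in records.items():
            f.write(json.dumps({"run": run, "event": evt,
                                "p4": fourvec}) + "\n")

def recover_event(path, run, event_id):
    """Recovery pass: fetch the stored 4-vector for one event, the
    starting point for re-simulating just that event."""
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            if rec["run"] == run and rec["event"] == event_id:
                return rec["p4"]
    raise KeyError((run, event_id))
```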
System Tests: (Matt, Julie)
Submitting System Test runs at SLAC, Matt has had an intermittent problem with NFS servers: sometimes the job crashes with the message "stale NFS file handle". There may be some correlation with earlier Root complaints in the job. He will try running the jobs manually, which could help in getting the results out, in seeing what's going wrong, or both. Meanwhile, Julie has been able to run System Tests to completion; Matt will adjust the web interface to point to her results. Bill has seen significant differences (improvement, in fact) in his analysis between p1 and p4; it would be nice to know why.
A project is underway in the capable hands of Mike Sexter to use JAS
visualization for System Tests. He hopes to have it working for us in a week or two.
Processing: (Heather/SLAC) The 5 million-event all-gamma run is done; a 1-million event background run is in progress. (Berrie/Lyon) At Lyon, having to write to a Storage System rather than directly to disk is an extra complication, but he is coping. One good sign is that JJ has been able to read the output. A 10 million event run is in progress, close to done. There is about to be an outage of the batch farm of unannounced length, but in the past such things haven't been long. He is hoping to be done with his part of the 100-million background events next week. He has seen an impressive improvement in performance with the optimized build.
J. Bogart Last Modified: 01-Jun-2010 15:45:30 -0700