Onboard Filter: (Tracy)
configData and the latest glitch with LATEST: (Eric C., Joanne) Eric had put some code into configData that interprets certain fsw CDMs, in particular those containing filter configuration. This functionality is needed by MOOT (online portion only, not offline). However, the code depends on the fsw external, which is not currently part of GlastRelease, so configData and everything depending on it failed to build. That code has now been moved out of configData into a small package of its own, fswDecipher, which is not in Gleam, and LATEST is happy again.
This does not address the looming big issue, however: within Gleam how do we handle schema evolution of the CDMs?
Pass 6 Reprocessing: (Leon) After last week's meeting, I settled on using Tracy's modified RootTupleSvc to read in the merit file. Over the weekend I sorted out my confusion about the event timestamps and verified that the pointing information is correctly retrieved from the pointing-history file Gleam_survey_orbit8.txt. [See last week's minutes for background. ed.]
(Tracy) The current solution is expedient and the best we can do for now, but it's a band-aid. Rather than expanding it, we need to get everything into the TDS that belongs there, including some information from obf (Leon) and the truncation information.
Later the day of the meeting, Leon says:
I have run a test job on my laptop that produces a correct-looking merit output file. The code to run this job is tagged in CVS, both on the BigRun branch and at the head of AnalysisNtuple. (The ntupleWriterSvc mods are also tagged, but that code isn't branched.)
I've sent my jobOptions files to Tom, who will be incorporating them into his pipeline job shortly.
A little glitch has appeared: the RootTupleSvc was not written to accept a list of merit files, which is how Tom wants to run the reprocessing. Heather is fixing that now.
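For concreteness, here is a hedged sketch of what the fix might look like in a jobOptions file. The property names below are invented placeholders for illustration, not the actual RootTupleSvc interface:

```text
// Hypothetical jobOptions sketch; "InFile" and "InFileList" are placeholder
// property names, not the real RootTupleSvc properties.
// Today: a single input merit file.
RootTupleSvc.InFile = "merit_chunk_001.root";
// What the reprocessing job needs: a list of input files.
RootTupleSvc.InFileList = { "merit_chunk_001.root", "merit_chunk_002.root" };
```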
Reprocessing — the big picture: (Tom) Post launch it is expected that, in addition to processing fresh data from the satellite, there will be some demand for reprocessing. Reprocessing in this context refers to some subset (up to 100%) of the current half-pipe and L1Proc pipeline functionality. A reprocessing step can be motivated by many things: new calibration constants, improved reconstruction code, bug fixes, or reprocessing a large set of data for analysis consistency (i.e., everything processed with the latest code and constants). As there are many steps in the normal L1Proc, a full reprocessing may not always be necessary. For example, a new classification tree analysis could be applied to the merit files and then propagated downstream. Or a new variable might be extracted from the DIGI or RECON trees and plopped into the svac ntuple, which would not require running the time-consuming reconstruction code.
A study is underway to understand the scope of reprocessing scenarios and look at the current L1 processing with an eye toward adapting it for reprocessing or stealing bits for a completely new task. We will need a way to "tag" sets of calibration constants, much as we already do for our code (i.e., a GlastRelease). Also needed is appropriate bookkeeping to record all aspects of all reprocessings and make it possible for data analyzers to make meaningful queries, for example, to request a list of files processed with the same version of code and calibration constants within a given time (or run) range: in essence, a "runCatalog". One issue is that the trending system is currently somewhat limited in its ability to distinguish reprocessed data. It is also an open question how much (if any) of the monitoring/trending performed in prompt processing is necessary for later reprocessing. There are other issues, but this gives a flavor of the project.
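As a concrete (and entirely hypothetical) sketch of the kind of query a runCatalog would support: the table layout, tag names, and file names below are all invented for illustration, not an actual GLAST schema.

```python
# Hypothetical sketch of the "runCatalog" idea; the schema and field names
# are invented for illustration, not an actual GLAST design.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE runCatalog (
        run        INTEGER,
        fileName   TEXT,
        codeTag    TEXT,   -- e.g. a GlastRelease version
        calibTag   TEXT,   -- tag for a set of calibration constants
        procDate   TEXT    -- when this (re)processing ran
    )
""")
rows = [
    (101, "r0101_merit.root", "GRv14r0", "calib-2008-03", "2008-04-01"),
    (102, "r0102_merit.root", "GRv14r0", "calib-2008-03", "2008-04-01"),
    (102, "r0102_merit.root", "GRv14r0", "calib-2008-04", "2008-04-15"),
    (103, "r0103_merit.root", "GRv13r9", "calib-2008-03", "2008-03-20"),
]
conn.executemany("INSERT INTO runCatalog VALUES (?, ?, ?, ?, ?)", rows)

# The kind of query an analyzer would make: all files in a run range that
# were processed with the same code tag and calibration tag.
def consistent_files(conn, code_tag, calib_tag, run_lo, run_hi):
    cur = conn.execute(
        "SELECT fileName FROM runCatalog "
        "WHERE codeTag = ? AND calibTag = ? AND run BETWEEN ? AND ? "
        "ORDER BY run",
        (code_tag, calib_tag, run_lo, run_hi))
    return [name for (name,) in cur]

print(consistent_files(conn, "GRv14r0", "calib-2008-03", 100, 110))
# -> ['r0101_merit.root', 'r0102_merit.root']
```

The point of the sketch is the shape of the query, not the storage technology: any bookkeeping backend would do, as long as every output file is recorded with its code and calibration tags.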
As a new development (which will not interfere with the current production system), this project will fall outside the purview of the various CCBs.
Science Tools report: Jim went through the Science Tools Update for April 1st.
cfitsio: (Navid) Emmanuel is working on the move to a newer version, in particular getting it to build on Windows. It's good training for him as well as a job that needs to be done.
ST news: (Jim) Toby backed out the changes he made to SwigPolicy since they caused build failures in other packages. Instead he made a new package, SwigModule, which does what he needs without impacting anyone else.
(Heather) What can you say about a new ST tag? (Jim) The new likelihood is ready. Other things should go in, some of them perhaps subject to CCB review. He may make a lesser release first of those things which do not need to be reviewed.
Data handling: (Dan) spoke from this data handling report.
Documentation: (Chuck) On this typical Science Tools help page note the "Help" and "Example" links in the box. All but two such pages are done.
The March Newsletter is almost ready.
GR news: (Heather) We've moved to ROOT 5.18b, required to escape certain xrootd problems. Following our standard convention, that bumps the major release version, so we are now at GR v14r0. We are hoping to use a release in the v14 series for L1 processing of first data after launch. Richard has started querying the interested parties to start up validation within the next three weeks. It includes some big ticket items: ACD ribbon attenuation and Zach's upgrades to CAL.
Skimmer: (David C.) The basic Skimmer/CEL use-case, i.e. using a ROOT CEL as input for a skimming job, is implemented and under test. After release, the next step will be to generate a ROOT CEL as output of a skimming job.
Event display/Corba: (Leon) Joe Perl and I have been investigating why large events fail to display, and what to do about it. We've isolated the problem to OmniOrb, the Corba implementation used by Gleam. We're in communication with Riccardo G., who says he came to the same conclusion a long time ago.
We're currently trying to install ACE/TAO, the Corba implementation used by Babar. It turns out that this was Riccardo's original choice, but he switched to OmniOrb because it was much lighter... maybe too light.
So far, we think we've built the code on my laptop, and Joe is trying to figure out how to proceed from there, since he's done similar things before.
SCons, RM news: (Navid) SCons Central just came out with a new version with some interesting features. He would like to grab it and try it out.
The build problems having to do with missing environment variables are still with us. One concerns an environment variable used by flux_xml which has the same value as one of the standard, already defined variables. (Toby) The code in question could be updated to use the standard variable instead.
(Navid) On Linux the new RM takes about 45 minutes to build ScienceTools; the old RM took about twice as long. He is currently working on new RM builds on Windows.
New RM pages are no longer behind the SLAC firewall.
(Toby) What happens to CMT? (Heather) Richard has said we can't expect to get rid of it until late summer or fall.
(Navid) Doxygen output is not always getting written because of inode shortages. He can delete some old releases. (Heather) We do need to see all the output from new release builds; that's more important than hanging on to old releases. (Leon) We've been running into resource problems with SysTests as well.
Thanks to Leon and Tom for substantial contributions to these minutes.