US LHC Computing Review Nov 2001

This was a DoE review of the US ATLAS and CMS computing projects. The reviews were to cover their core software and Tier 1 & 2 center work. Each experiment was given about a day to make its case. That was far too compressed to allow much depth, and there was no time to talk details, so these notes just point out tidbits I picked up that might be of interest to GLAST.

CMS

In fact, CMS did not get into much detail on their technical work and not much of what they said appeared to be of interest to us, partly because of the technology decisions they have made (eg following what used to be called LHC++).

They did have one product called Ignominy that could be of interest. Its goal is to analyze software for complexity and show its structure. It appears not to be CMS-specific.

ATLAS

ATHENA has been accepted by ATLAS. They have recently completed the migration to CMT. Their supported OSes are Linux and Solaris, with the latter a poor cousin maintained just to show that the code builds on a second OS.

They have built a layer on top of the Gaudi TDS, called StoreGate. They said it was necessary to address shortcomings of the TDS; no details were given. There is also a new data type called DataLink. It sounds like what we need to relate one kind of data to another - eg MC contributions to a CAL log readout, or hits on a track. I think it is bi-directional. Don't know if it depends on StoreGate.
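Since no details were given, here is only a rough sketch of the kind of bi-directional relation DataLink seems to provide; all class and object names below are invented for illustration and are not the actual ATLAS API.

```python
# Hypothetical sketch of a bi-directional link between two kinds of data,
# e.g. MC particle contributions <-> a CAL log readout. Not the real
# DataLink interface - just the concept.

class Link:
    """Bi-directional association: query from either side."""
    def __init__(self):
        self._fwd = {}   # source -> list of targets
        self._bwd = {}   # target -> list of sources

    def relate(self, src, tgt):
        # record the relation in both directions
        self._fwd.setdefault(src, []).append(tgt)
        self._bwd.setdefault(tgt, []).append(src)

    def targets_of(self, src):
        return self._fwd.get(src, [])

    def sources_of(self, tgt):
        return self._bwd.get(tgt, [])

# Usage: which MC particles contributed to this CAL log readout?
link = Link()
link.relate("mcParticle_7", "calLog_3")
link.relate("mcParticle_9", "calLog_3")
print(link.sources_of("calLog_3"))   # ['mcParticle_7', 'mcParticle_9']
```

The point is that the same relation answers both "which readouts did this MC particle contribute to" and the reverse, which is what we would want for MC truth matching or track-hit association.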

They have extended IDL (now ADL) to auto-generate code for the PDS, ie the converters and streamers for both Objectivity and Root.

All of the LHC experiments are looking at migrating to a hybrid datastore consisting of Root files organized by an RDBMS. STAR is already doing this.
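The idea of the hybrid scheme is that the bulk event data stays in ordinary Root files while a relational database holds only the catalog that organizes them. A minimal sketch, using an in-memory SQLite database and invented table/column names (the real experiments' schemas were not shown):

```python
# Hypothetical hybrid-datastore sketch: the RDBMS maps (run, event)
# ranges to the Root file that holds them. Schema is invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE event_files (
    run         INTEGER,
    first_event INTEGER,
    last_event  INTEGER,
    path        TEXT)""")
db.execute("INSERT INTO event_files VALUES (101,    0, 4999, 'run101_a.root')")
db.execute("INSERT INTO event_files VALUES (101, 5000, 9999, 'run101_b.root')")

# To read event 6200 of run 101, ask the catalog which Root file to open.
(path,) = db.execute(
    "SELECT path FROM event_files "
    "WHERE run = ? AND ? BETWEEN first_event AND last_event",
    (101, 6200)).fetchone()
print(path)   # run101_b.root
```

The attraction is that queries, bookkeeping, and replica tracking get the full power of SQL, while the I/O-heavy event data avoids the overhead of storing everything in the database itself.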

They have created a service hooking G4 up to ATHENA.

They say their Root converters are now done - no more blobs. They also said they had an ntuple problem - they were only using memory-resident ntuples so far? Does that mean we are too? Aren't we using their ntuple service?

Support

Neither experiment has a bug tracking system. Both make do with a mailing list and don't seem so unhappy about the situation.

They are doing nightly builds. They use AFS for code distribution. Presumably we could too if we wanted for linux, assuming we sort out the site-dependent stuff that Alex is looking at.

They differentiate between Developer and Production releases. Should we too?

Coding compliance: they were using something called Code Wizard, but it only addressed maybe half of their rules, and updating it would be too hard. They will investigate using the same tool that ALICE does, written for them by an Italian software house and said to be available to CERN experiments.

There is a package called pacman, written by Saul Youssef of BU, for code distribution and installation. It works off tarballs and rpms. Not sure if this would help us. We should investigate InstallShield for Windows.

They've written a user guide. Perhaps we should look at it.


R.Dubois Last Modified: 06/01/2010 15:48