Using the Gen01 Analyzer Engine OFFLINE

(Conveniently hidden at http://hallcweb.jlab.org/gen01/using_offline.html)

Note:
The following instructions require a computer account on the Jlab CUE system
and a working analyzer setup on one of the Jlab farm Linux computers:
ifarml1, ifarml2   (more?)

Briefly, instructions for setting up a Gen01 offline replay on the work disk at Jlab:
  • log in to one of the Linux farm machines
  • change directory to /work/hallc/e93026
  • if you already have a subdirectory under your name, change into that; otherwise, create one using your logon ID and then change into it
  • make a new replay directory (e.g. mkdir replay) and change into that
  • run this script to obtain an up-to-date replay setup:
    /group/e93026/analyzer/setmeup
    N.B.: to check an existing setup against the current group version, run
    /group/e93026/analyzer/checkmysetup
  • follow the prompts, being sure to adjust the paths to reflect the actual location of your replay directory; also change the location of the RAW data source from the one used in the counting house to the one appropriate for the farm (see the comments in REPLAY.PARM when the editor window pops up)
  • execute the script dosetupreplay in your directory to create convenient links to the Linux versions of the relevant files (a condensed command sketch of the whole sequence follows below)
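
In condensed form, the setup sequence amounts to something like the following minimal sketch (here, mylogon stands in for your logon ID; adapt names as needed):

    ssh ifarml1
    cd /work/hallc/e93026
    mkdir mylogon; cd mylogon
    mkdir replay; cd replay
    /group/e93026/analyzer/setmeup
    exec dosetupreplay
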
More general setup instructions (for off-site setup, too) can be found at http://hallcweb.jlab.org/gen01/setup.html.

Description

The analyzer engine decodes the raw detector data that were acquired during a run, event by event, and reconstructs (or attempts to reconstruct) the kinematic parameters of the corresponding physics event. In addition, hardware scalers are decoded, accumulated, and output in a readable format, and histograms are filled with various detector quantities.

The input consists of the raw data file, which is either on tape, on a cache disk, or still on the buffer disk in the counting house (ONLINE replay). The output is written to the screen (mostly status and error messages) and to files: ASCII report files, a PAW histogram file, and one or more Ntuple files.

Additionally, numerous setup files define various system parameters and processing options. Many of these quantities specify the environment under which the data were taken and change over the course of the experiment, even from run to run. A database system automatically selects individual parameter values, or sets of values, based on the sequential ID number of the run being analyzed. Even entire files of parameter values can be selected based on the run number. The main database definition is contained in the file DBASE/gen01.database.
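
The real definitions live in DBASE/gen01.database; purely to illustrate the idea, a run-indexed block might look schematically like the sketch below. The parameter names here are hypothetical placeholders and the syntax is only approximate -- consult the actual database file for the true format:

    40345-40500                           ! block applies to runs 40345 through 40500
    gtarg_pos    = 12.5                   ! hypothetical single-value parameter
    g_param_file = 'PARAM/gen.param.v2'   ! hypothetical selection of a whole file
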

The histograms created in the output are defined in the files contained in the HIST/ directory; the main file is HIST/gen.hist. Similarly, the reports generated are based upon the definitions in the Templates/ directory. The PARAM/ and DAT/ directories contain the bulk of the parameter specifications, often with multiple versions that are selected by the database rules. The MAP/ directory contains the electronics maps, which define which detector element each individual electronic signal belongs to. The files in Tests/ define various cuts, which are applied in the filling of the histograms and also in the report creation. The source code which makes up the replay engine (together with the Csoft libraries) is in the SRC/ directory.

The resulting output files are placed in the directories output/, paw/, ntup/, hv/ and cosmics/. The Ntuples and histograms can be further analyzed using the PAW KUMAC scripts in the kumac/ directory.
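
Pulling the above together, a typical replay directory contains roughly the following layout (roles as described above; the exact contents vary between setups):

    DBASE/       main database definition (gen01.database)
    HIST/        histogram definitions (main file: gen.hist)
    Templates/   report definitions
    PARAM/ DAT/  bulk of the parameter specifications
    MAP/         electronics maps
    Tests/       cut definitions
    SRC/         replay engine source code (with the Csoft libraries)
    output/ paw/ ntup/ hv/ cosmics/   output files
    kumac/       PAW KUMAC analysis scripts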

The file REPLAY.PARM and the platform-specific shell scripts replayup_<platform> define these locations and other fundamental runtime variables.

Note:
Since the location of all these files is easily changed, the actual assignments may differ from what is discussed here!

Analyzing a Run in OFFLINE Mode

In the following, it will be assumed that the convenience links mentioned above have been established via the command exec dosetupreplay. This creates the links for the analyzer engine executable replay.exe, the filter syncfilter, and the graphical progress monitor runstats, each pointing to the platform-specific version. In any case, the platform-specific versions of the commands can always be used directly -- with the correct platform identifier!
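On a Linux farm machine, the resulting links might look something like this (abridged ls -l output; the .Linux suffix is an assumed naming scheme for the platform-specific binaries):

    replay.exe -> replay.exe.Linux
    syncfilter -> syncfilter.Linux
    runstats   -> runstats.Linux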
Prior to running the analysis engine, its runtime environment must be properly defined. If the setup meets the requirements detailed above, this can be easily accomplished with the command
source replayup
This command must be executed once in each xterm and at each login. It will also define the location of PAW et al.

Now we still need to get the raw data to analyze. These are stored on tape in the computing center's tape silo; to access them, we need to stage the data file in. Using the command jcache, we request that the data file be made available to us. This requires supplying the stub file name to identify the file to retrieve. The stub files are located in /mss/hallc/e93026/raw, and the staged data file will be available (via a link) in /cache/mss/hallc/e93026/raw. Both files follow the naming convention (for Gen01) e93026_<run#>.log.<segment>, e.g. e93026_40345.log.0.

The script getfromsio.sh takes the run number as an argument and issues the appropriate jcache command, provided the raw data file is indeed in the silo (there is a lag of a few hours between a run taking place and its file becoming available in the tape silo). Issuing the jcache command places the request in a queue, and you will be notified by email once the file is available (the email is sent to the account you are logged in as when issuing the request, unless otherwise specified on the command line -- see the documentation).
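For example, to stage the raw data file for run 40345, either hand jcache the stub file directly or use the helper script with just the run number (file locations as above; consult the jcache documentation for current options):

    jcache /mss/hallc/e93026/raw/e93026_40345.log.0
    getfromsio.sh 40345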

The replay can then be started using the command
engine.exe grun=45678
to, for example, analyze run 45678. Unless an error occurs, the replay will continue until the end of the run. Two optional switches can be used to specify the first event in the run which is to be actually analyzed, or to end the replay after a certain count of events has been analyzed (a count, NOT an event number): gstart=<event#> and gstop=<count>, respectively. In general, any parameter that can be defined in a setup file can also be defined on the command line, thereby overriding any file definition.
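For instance, to start the analysis of run 45678 at event 1000 and stop after 50000 events have been processed (illustrative numbers):
engine.exe grun=45678 gstart=1000 gstop=50000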