Hall C CODA Layout
- Detailed 'User' instructions are on the Hall C DAQ page, which covers the ROC layout, standard recovery procedures, etc. Read and understand that page first.
- That page also includes instructions on updating F250 pedestals, switching DAQ modes, and other recovery procedures.
CODA process and file locations
See also: Hall C Compute Cluster
There are two primary hosts dedicated to running the DAQs:
- NPS-standalone or 'coinc' config: coda@cdaql6
- HMS-standalone: coda@cdaql5
- The SHMS DAQ is disabled at the moment.
When running in 'coincidence' mode, all ROCs (NPS+HMS) are picked up by the 'coinc' configuration running under coda@cdaql6.
There is nothing 'special' about those machines, however. If needed, fail over to another host by replacing 'cdaql6' with a new/different host in:
- coda:bin/coda_user_setup
- coda:bin/run-vncserver -- Starts a new CODA VNC server session (should only be run on cdaql5 or cdaql6). It is started @reboot via crontab.
- CODA msqld server
- See coda:bin/run-msqld -- It is presently started through crontab @reboot under coda@cdaql6.
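For reference, the @reboot crontab entries mentioned above might look roughly like the sketch below. This is illustrative only: the exact paths under the coda account are assumptions, not copied from the live crontab (check with crontab -l as coda).

  # illustrative crontab entries under coda@cdaql6 (paths assumed; verify with 'crontab -l')
  @reboot $HOME/bin/run-vncserver
  @reboot $HOME/bin/run-msqld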
CODA support software
- The start-/end-of-run scripts, EPICS logger scripts, RunStart GUI, Prescale GUI are located in coda:coda/scripts/
- There are multiple 'README' files in that directory and its children that describe the intended execution flow and 'best practices'.
- Log files in coda:debug_logs/ may be useful in understanding problems.
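When debugging, a quick way to find the most recent script output is to list the newest files in that log directory. A trivial sketch (the log file naming scheme is not documented here):

  # show the most recently written debug logs (run as coda on cdaql5/cdaql6)
  ls -lt ~coda/debug_logs/ | head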
ROC code
- For experts only: to reach the linux/intel ROCs, run ssh -X hcvme01 -l hccoda from the coda@cdaql1 account.
- The files are stored on cdaqfs1 and NFS mounted at /net/cdaqfs1/cdaqfs-coda-home/pxeboot/
- The PXE boot options are delivered by the JLab central DHCP server to all hosts (non-PXE systems ignore them). At present they are:
filename "linux-diskless/pxelinux.0"; # Bootloader program next-server hcpxeboot.jlab.org; # TFTP server (hcpxboot is a CNAME for cdaqfs1 at present)
Experiment Changeover Tasks
- Ensure all CODA files from the prior run are pushed to the old tape destination and removed from the data/raw/ directory
- Move any 'orphan' files from ~coda/data/raw/
- Run jmirror-sync-raw.copiedtotape push as coda@cdaql1 (see the sketch after this list)
- Note that files will only be removed from the local system by jmirror if both the original copy and duplicate copy are on tape. Either repeat the jmirror-sync-raw.copiedtotape push command after the tape system has made the duplicates, or wait and the cron job will clear things up for you within a few days.
- Update '~coda/coda/scripts/DATAFILE-Locations.sh' to point at the new 'raw/' tape destination (see the illustrative sketch after this list)
- NOTE: Only do this after all prior files have been moved, or files from different experiments can end up mixed together on tape.
- This file is used by CODA and the data mover scripts to move raw CODA files at end of run (and to watch for and correct file transfer interruptions, etc.)
- Update the 'T1, T2, ... T6' cables into the Trigger Master module(s) to match the experiment's requirements.
- Note: The EDTM system is designed to trigger all detector pretriggers (3/4, PbGl, Cerenkovs, etc) with timing similar to what the physics will generate (including SHMS+HMS coincidences), so it can get you quite close pre-beam. However, the timing will need to be checked/tweaked when beam arrives (of course).
- Confirm the trigger table mapping is consistent with the experiment's requirements.
- This table sets the 'trigger bits' that the Trigger Master adds to its data header to flag whether a particular trigger involved the SHMS, HMS, etc.
- Update the target ladder list in '~coda/coda/scripts/runstart_gui.tcl' to match the actual target ladder (this allows the prescale GUI to auto-select the in-use target).
- Trigger timing log entries / snapshots are in the logbook and are also recorded here: Trigger History
- Ensure the go_analysis script is updated and pointing at the right replay scripts.
- Update the cdaq:hallc-online/ symlink to point at the new experiment directory (see the sketch after this list)
- Ensure symlinks in the new experiment directory are pointing at the right Hall C cluster /net/cdaq/cdaqlXdata/cdaq/<experiment> directories and are not writing files into the /home/cdaq/ directory.
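An illustrative sketch of the tape-push step named above. The command and the raw/ path come from this page; the final listing is just a convenience check:

  # run as coda@cdaql1
  jmirror-sync-raw.copiedtotape push   # push remaining raw CODA files to the old tape destination
  ls ~coda/data/raw/                   # should empty out once both tape copies exist (jmirror/cron clean up)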
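The contents of DATAFILE-Locations.sh are not reproduced on this page, so the following is only a hypothetical sketch of the kind of edit involved; the variable names and tape path are placeholders, not the real ones:

  # ~coda/coda/scripts/DATAFILE-Locations.sh -- hypothetical variable names, for illustration only
  RAW_TAPE_DEST=/mss/hallc/<new-experiment>/raw   # placeholder: the new 'raw/' tape destination
  LOCAL_RAW_DIR=/home/coda/data/raw               # local staging area referenced elsewhere on this page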
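A sketch of the symlink checks for the last two items above. The experiment name and the 'X' in cdaqlXdata are placeholders to be filled in for the new experiment; only the ln/ls usage itself is standard:

  # run as cdaq: repoint hallc-online at the new experiment area and verify
  ln -sfn /net/cdaq/cdaqlXdata/cdaq/<experiment> ~cdaq/hallc-online
  ls -l ~cdaq/hallc-online      # should point into the cluster data area, not /home/cdaq/
  # check symlinks inside the new experiment directory as well
  ls -l /net/cdaq/cdaqlXdata/cdaq/<experiment>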
Data Movers
The 'data mover' algorithm takes care of copying CODA data files from the Hall C system(s) to tape. Please don't interfere with this.
The mover relies on jmirror, which also manages clean-up of the local files.
There are crontab entries under coda@cdaql5 that monitor the file transfers for 'stuck' files or other issues and will email the responsible parties.
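To see what monitoring is actually in place, one can simply list the crontab on that host; the individual entries are not documented on this page:

  # list the transfer-monitoring cron jobs mentioned above
  ssh coda@cdaql5 crontab -l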
Git repos
All of the CODA configurations, coda:bin/, and other directories are maintained with git.
The remote repos are stored on the 'hallcgit.jlab.org' server.
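As a hedged example, the state of one of these git-managed directories can be checked as below. The directory shown is an assumption (coda:bin/ taken to be bin/ under the coda home); only the server name hallcgit.jlab.org comes from this page:

  # e.g. in the coda bin/ directory
  cd ~coda/bin
  git remote -v    # remotes should point at hallcgit.jlab.org
  git status       # check for uncommitted local changes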