CSV software



Repositories


Other notables:

Working on the farm

File System

Large File Output: /volatile

The /volatile and /cache file systems are designed for large file output.

CSV has a tape volume /mss/hallc/E12-09-002/ allocated to it.

Your workflow should be one of the following (a command sketch follows below):

  1. Write to /volatile for tests, then push those files to tape if the results are useful.
  2. Or, if you have 'known good' replay scripts, write directly to /cache (which is automagically backed to tape) and cut out the /volatile middleman.

See the write-through cache docs here for details.
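A minimal sketch of the two workflows is shown below. The file name (csv_replay_1234.root), the subdirectory (your_dir), and the use of the jput utility to copy from /volatile to tape are assumptions; check the write-through cache docs for the currently recommended tools.

# workflow 1: test on /volatile, then push good output to the CSV tape volume
cp csv_replay_1234.root /volatile/hallc/E12-09-002/your_dir/
jput /volatile/hallc/E12-09-002/your_dir/csv_replay_1234.root /mss/hallc/E12-09-002/your_dir/

# workflow 2: 'known good' replays can write straight into the write-through cache
cp csv_replay_1234.root /cache/hallc/E12-09-002/your_dir/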

Analysis Code: /group

The /group disks are backed up like home directories. Space here is limited (60 GB), so only code, scripts, and small data files should be kept here.

The CSV software is installed under /group/c-csv/local.

modules files

CSV software can be found in /group/c-csv/local and is setup to use environment-modules (modulefiles).

If module avail doesn't work, add source /etc/profile.d/modules.sh to your .bashrc. The group disk is /group/c-csv.
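For example, the following snippet (a minimal sketch) can be appended to your ~/.bashrc so the module command is defined in new shells:

# enable environment-modules if the module command is missing
if [ -f /etc/profile.d/modules.sh ]; then
    source /etc/profile.d/modules.sh
fi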

Running module avail should produce something like the following.

--------------------------------------- /group/c-csv/local/etc/modulefiles ----------------------------------------
cmake/3.10.3                epics/latest                hcana/1.0                   root/6.14.04
cmake/3.12.2                experimental/imgui_dm/0.0.1 hcana/latest                root/6.15.0x
cmake/latest                gcc/8.1.0                   llvm/6.0.1                  root/dev
csv/1.0                     gcc/8.2.0                   llvm/latest                 root/latest
csv/latest                  gcc/latest                  ncurses/6.1                 tmux/2.7
curl/7.61.1                 git/2.18.0                  ncurses/latest              tmux/latest
curl/latest                 git/latest                  python/2.7.15
eigen3/3.3.5                hallc_tools/0.1             python/3.7.1
epics/base_7.0.1            hallc_tools/latest          root/6.14.0

Run the following to set up your environment:

source /group/c-csv/local/setup.sh
module load csv/latest
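A quick check that the environment took effect, assuming the csv/latest module pulls in hcana and ROOT as the module listing above suggests:

module list        # should show csv/latest and the modules it loads
which hcana root   # both should resolve to paths under /group/c-csv/local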

Batch Jobs

Auger XML Example

Note the numerous places below where you need to fill in your own information.

<Request>
  <Email email="you@jlab.org" request="false" job="false"/>
  <Project name="c-comm2017"/>
  <Track  name="reconstruction"/>
  <Name   name="sidis_2018"/>
  <OS     name="centos7"/>
  <Memory space="3024" unit="MB"/>
  <List name="runs">
6000
6001
etc...
  </List>
  <ForEach list="runs">
    <Job>
      <Input src="mss:/mss/hallc/spring17/raw/coin_all_0${runs}.dat" dest="coin_all_0${runs}.dat"/>
      <Output src="ROOTfiles/coin_replay_production_${runs}_-1.root" 
        dest="/volatile/hallc/your_volatile/ROOTfiles/coin_replay_production_${runs}_-1.root"/>
      <Output src="REPORT_OUTPUT/COIN/PRODUCTION/replay_coin_production_${runs}_-1.report" 
        dest="/volatile/hallc/your_work/REPORT_OUTPUT/COIN/PRODUCTION/replay_coin_production_${runs}_-1.report"/>
      <Output src="REPORT_OUTPUT/COIN/PRODUCTION/summary_production_${runs}_-1.report" 
        dest="/volatile/hallc/your_volatile/REPORT_OUTPUT/COIN/PRODUCTION/summary_production_${runs}_-1.report"/>
      <Command><![CDATA[
/bin/bash <<EOF
echo " YOU SHOULD CHANGE THIS "
#source /home/whit/.bashrc
#make_hallc_replay_symlinks -c
#make_hallc_replay -c
#mkdir ROOTfiles
#ls -lrth
#./bin/hc_coin -n -1 -r ${runs}
#ls -lrth ROOTfiles
#ls -lrth 
EOF
        ]]></Command>
    </Job>
  </ForEach> 
</Request>
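The CDATA command section in the template above is mostly commented out. Below is a hedged sketch of what an actual command body might look like, built from those commented lines; the setup.sh source and the exact sequence of hallc_tools commands are assumptions that depend on your own replay scripts (${runs} is substituted by Auger from the run list):

/bin/bash <<EOF
source /group/c-csv/local/setup.sh      # assumed: provides module, hcana, hallc_tools
module load csv/latest
make_hallc_replay_symlinks -c           # set up a coincidence replay directory
make_hallc_replay -c
mkdir -p ROOTfiles
./bin/hc_coin -n -1 -r ${runs}          # replay all events of run ${runs}
ls -lrth ROOTfiles
EOF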

Submit your job with the command

jsub -xml your_file.xml
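If you split your runs across several XML files, a simple loop submits them all (the file name pattern is hypothetical):

for f in sidis_2018_part*.xml; do
    jsub -xml "$f"
done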

Online

Working on cdaq machines

For details on how to load different environments run:

bash_csv help

replay

The online replay directory was the same as the one used for the PT-SIDIS running.

To get into a good bash environment, run:

bash_csv counter

Run Information Monitor (blue screen)

To get things started, on cdaql1 run:

bash_csv run_info

which will launch (or attach to) the run_info tmux session. Use ctrl-a n to cycle through the windows. The IOC window should be launched at start.
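If you need to attach to the session by hand from another cdaq terminal, the usual tmux commands apply (the session name run_info and the ctrl-a prefix are taken from above):

tmux attach -t run_info     # attach to the existing session
# ctrl-a n : next window,   ctrl-a d : detach and leave the session running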

The bulk of the work is done by the EPICS software IOC found here.

New process variables

hcHMSAngleEncoderOffset       -- HMS offset used to calculate corrected angle: CorrectedAngle = Encoder + EncoderOffset
hcSHMSAngleEncoderOffset      -- SHMS offset.
hcRunSettingHMSAngle          -- not used
hcRunSettingSHMSAngle         -- not used
hcRunPlanChargeGoal           -- 
hcRunPlanTimeRemainingEst
hcCOINIntRunNumber            -- Mirrors hcCOINRunNumber (not supplied by this IOC)
hcSHMSIntRunNumber            -- Mirrors hcCOINRunNumber (not supplied by this IOC)
hcHMSIntRunNumber             -- Mirrors hcCOINRunNumber (not supplied by this IOC)
hcDAQ_ps1
hcDAQ_ps2
hcDAQ_ps3
hcDAQ_ps4
hcDAQ_ps5
hcDAQ_ps6
hcRunSettingNumber         -- A count that is incremented at the start of a new run when hcRunSettingReconfigured is non-zero.
hcRunSettingReconfigured   -- becomes nonzero when the spectrometer angle, momentum, or target is changed. Resets "RunSetting" counters
hcKinematicSettingNumber   -- user supplied
hcKinematicSettingGroup    -- user supplied
hcKinematicSettingID       -- user supplied
hcRunPlanCountGoal        
hcRunPlanNTrigEventsGoal
hcCOINRunAccumulatedCharge   -- Latest/current run charge while beam > 1uA 
hcCOINRunTime                -- Latest/current run time while beam > 1uA
hcCOINRunAverageBeamCurrent  -- Average beam current while beam > 1uA
hcSHMSRunAccumulatedCharge
hcSHMSRunTime
hcSHMSRunAverageBeamCurrent
hcHMSRunAccumulatedCharge
hcHMSRunTime
hcHMSRunAverageBeamCurrent
hcHMSCorrectedAngle
hcSHMSCorrectedAngle
hcHMSMomentum             -- mirrored set momentum value
hcSHMSMomentum            -- mirrored set momentum value
hcHMSAngleChanged         -- 
hcBDSSELECT_mirror        -- mirrors target selected
hcCreateNewRunSetting 
hcRunSettingAccumulatedCharge
hcRunSettingTime
hcRunSettingAverageBeamCurrent
hcStartNewRunSetting
hcCOINRunChargeReset
hcCOINResetRunTime
hcSHMSRunChargeReset
hcSHMSResetRunTime
hcHMSRunChargeReset
hcHMSResetRunTime
hcHMSSettingChange
hcSHMSSettingChange
hcSHMSAngleChanged
hcTargetChange
hcRunSettingIncrement
hcResetRunSetting
hcRunSettingChargeReset
hcResetRunSettingTime
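These PVs can be read, and the user-supplied ones set, with the standard EPICS channel-access command-line tools, assuming caget/caput are available on the cdaq machines:

caget hcHMSCorrectedAngle hcSHMSCorrectedAngle hcCOINRunAccumulatedCharge
caput hcKinematicSettingNumber 1    # user-supplied PV; the value 1 is only an example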

Tips and Tricks

userweb

Want a userweb.jlab.org/~USER account?

Note: /userweb/USER/public_html is not mounted on the farm nodes. You must be on jlabl4 or a similar node.

How to effectively use userweb:

  1. On the farm, you write a script that outputs myplot.pdf
  2. ssh jlabl4 'mkdir -p /userweb/USER/public_html/hallc/my_analysis/'
  3. scp myplot.pdf jlabl4:/userweb/USER/public_html/hallc/my_analysis/.

Or better yet, do this from inside your ROOT script:

...
TCanvas* c = ...
...
c->SaveAs("myplot.png");
c->SaveAs("myplot.pdf");
std::system("ssh jlabl4 'mkdir -p /userweb/USER/public_html/hallc/my_analysis/");
std::system("scp myplot.pdf jlabl4:/userweb/USER/public_html/hallc/my_analysis/.");

This assumes that passwordless ssh (see below) has been set up.

Now you can get your plots at https://userweb.jlab.org/~USER/hallc/my_analysis/. It helps to save both a PNG and a PDF: the PNG is easily viewed in the browser, while the PDF generally looks better.

ssh

To look at the online web server (running on cdaql1, for example), forward the port over ssh:

# step 1. On local machine:
ssh -L 8888:localhost:8888 hallc

# step 2. From the login server, run:
ssh -L 8888:cdaql1.jlab.org:8888 cdaq@cdaql1
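With both tunnels up, the online server is reachable at http://localhost:8888 in a browser on your local machine. For the passwordless ssh mentioned in the userweb section, a standard OpenSSH key setup between the machines is usually enough (a sketch; the host alias and username are placeholders):

ssh-keygen -t rsa -b 4096      # generate a key pair if you do not already have one
ssh-copy-id USER@jlabl4        # install the public key on the remote host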