NPS Disk Space
Disk space allocation directories for the Deuteron experiment. Directory paths marked with [*] are not confirmed as of Feb 09, 2023.
For additional info on filesystems, see: https://scicomp.jlab.org/docs/filesystemsv2
Hall C cdaq-cluster filesystems
1. main directory (for online analyzer, DBs, replay scripts, etc.):
/home/cdaq/nps-2023/
2. output directory (ROOTfiles, REPORT_OUTPUT, PDF files, etc.):
/net/cdaq/cdaql1data/cdaq/hallc-online-nps2023/
- NOTES
- Please take special care to avoid modifying any files outside of your ~/nps-2023 directory.
- These are the file storage directories used during the online analysis of NPS.
- There is a README in the first directory with a few notes.
- It is recommended to make a '~/nps-2023/go_analysis_nps' shell script that sets up your environment from scratch, i.e. without relying on changes/additions to anything in ~/bin, to init files like .tcshrc or .bashrc, or to text editor config files (a minimal sketch is shown below).
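A minimal sketch of such a go_analysis_nps script, assuming a tcsh login shell on the cdaq cluster; the setup file name and environment variable below are assumptions, not the confirmed NPS setup:

 #!/bin/tcsh
 # go_analysis_nps: set up the NPS online-analysis environment from scratch,
 # without relying on ~/bin, ~/.tcshrc, or other personal init files.
 cd /home/cdaq/nps-2023
 # source the analyzer environment (ROOT, hcana, etc.); file name is an assumption
 source setup.csh
 # point analysis output at the online data area
 setenv NPS_OUTPUT /net/cdaq/cdaql1data/cdaq/hallc-online-nps2023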
Farm/CUE disk allocations
/volatile/hallc/c-deuteron/ ( 4 TB high quota; 2 TB guarantee )
- NOTES
- Files are NOT backed-up
- Use for all large file output from analysis or simulation jobs. When the guarantee threshold is exceeded, files may be auto-cleaned up (removed). See policy: https://scicomp.jlab.org/docs/volatile_disk_pool
- Files you want to keep should be pushed to tape using jput (see the example below).
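For example, to push a large output file from /volatile to the analysis tape volume with jput (the file name below is illustrative only):

 # copy a ROOT file from /volatile to tape; the file name is a placeholder
 jput /volatile/hallc/c-deuteron/ROOTfiles/nps_replay_1234.root /mss/hallc/c-deuteron/analysis/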
/work/hallc/c-deuteron/ directory ( 1 TB quota )
- NOTES
- Files are NOT backed up.
- Good place for analysis software, database files, etc.
- These files should be in GitHub or have similar backups (see the example below).
- Nothing that matters should be stored only on /work.
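For instance, analysis software kept on /work can be backed up by making it a git repository and pushing it to GitHub; the directory and repository URL below are placeholders:

 cd /work/hallc/c-deuteron/my_analysis                               # placeholder directory
 git init
 git add .
 git commit -m "Back up analysis scripts"
 git remote add origin git@github.com:JeffersonLab/my_analysis.git   # placeholder URL
 git push -u origin main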
/group/c-deuteron/ directory still needs to be created ( 100 GB quota )
- NOTES
- Backed up regularly.
- Best place for analysis/replay scripts, software that is being actively developed, etc. (But still use GitHub!)
Tape allocations
/mss/hallc/c-deuteron/analysis
- tape volume for analysis output (e.g. simulation output, replay output that you want to keep long-term, etc.)
/mss/hallc/c-deuteron/raw
- tape volume for raw data output (i.e. production CODA files from the Hall)
- Raw data are NOT accessible directly from tape. To make them accessible, they must be copied over to cache.
Cache allocations
/cache/hallc/c-deuteron/analysis/
/cache/hallc/c-deuteron/raw/
- NOTES
- During online analysis, raw data from tape will be automatically copied over to cache.
- Cache has a quota; if it is exceeded, files may be automatically flushed to tape and removed from cache. See the cache data policy: https://scicomp.jlab.org/docs/filesystemsv2#cacheDataPolicy
- To recover raw data back to cache, use the 'jcache get' command to copy the files from tape to cache (see the example below). See https://scicomp.jlab.org/docs/node/586
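For example, to stage a raw run file back from tape into the cache (the run file name is illustrative only):

 # copy a raw CODA file from tape back to /cache; the file name is a placeholder
 jcache get /mss/hallc/c-deuteron/raw/nps_coda_1234.dat.0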