Latest revision as of 10:15, 27 October 2024
Hall C EPICS
Hall C EPICS systems contain a mix of hardware and software IOCs. The RPi-based IOCs are managed by the Spectrometer Support Group (Ellen, et al.). The softIOC processes run on hcepics and are managed by Hall C (Bill H., Steve W., etc.). See #SoftIOCs managed by Hall C. There is a single legacy vxWorks-based IOC, #vmec15, which provides the beam current from several Hall C BCMs.
SoftIOCs managed by Hall C
- Soft IOCs are presently running under cvxwrks@hcepics
- The file layout follows an EPICS (JLab?) recommended pattern. High-level software components (GUIs, etc) are stored under ~cvxwrks/EpicsHL/ and low-level components (softIOCs, etc) are stored under ~cvxwrks/EpicsLL/
- The softIOCs are managed using the procServMgr tool:
cvxwrks@cdaql1 2012% procServMgr status
iocAlias4527 running on port 20000 of cdaql1 with pid 3325
iocHallATgt running on port 20001 of cdaql1 with pid 3348
iocMisc running on port 20002 of cdaql1 with pid 3371
iocAlarms running on port 20003 of cdaql1 with pid 3394
iocMagAlarms running on port 20004 of cdaql1 with pid 3417
iocCryoAlarms running on port 20005 of cdaql1 with pid 3440
iocsnmp running on port 20006 of cdaql1 with pid 3463
- iocAlias4527 : 'mirrors' the CAEN 4527 HV PVs so we can add alarm limits, etc. These are the PVs that the Alarm Handler uses.
- iocHallATgt : Cryo target related PVs
- iocMisc : Supports some beam, IHWP, BTA, spectrometer Shutter notifications and alarms
- iocAlarms : A mix of magnet, gas system, BCM temp, etc. PVs mirrored to add alarm limits
- iocMagAlarms : Cryo magnet related PVs
- iocCryoAlarms : More Cryo related PVs
- iocsnmp : SNMP <-> EPICS layer to allow EPICS control of APC outlets and Weiner VXS crates
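Each softIOC in the `procServMgr status` output above runs under a procServ instance listening on the listed port (attaching to a console is typically done with `telnet <host> <port>`). The status lines follow a fixed pattern, so they can be parsed mechanically; the helper below is an illustrative sketch, not part of procServMgr itself.

```python
import re

# Hypothetical helper: parse `procServMgr status` lines of the form
# "<ioc> running on port <port> of <host> with pid <pid>" into a dict.
STATUS_RE = re.compile(
    r"(?P<ioc>\S+) running on port (?P<port>\d+) of \S+ with pid (?P<pid>\d+)"
)

def parse_status(text):
    """Return {ioc_name: (port, pid)} for every line that matches."""
    result = {}
    for m in STATUS_RE.finditer(text):
        result[m.group("ioc")] = (int(m.group("port")), int(m.group("pid")))
    return result

# Two lines copied from the status output above
sample = """\
iocAlias4527 running on port 20000 of cdaql1 with pid 3325
iocsnmp running on port 20006 of cdaql1 with pid 3463
"""
print(parse_status(sample))
```

A mapping like this is handy for scripting health checks against the procServ ports without hard-coding them.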
crontab entry under cvxwrks@cdaql1
# /home/cvxwrks/CRONTAB/crontab.cdaql1
#
# Add this to the crontab on the cvxwrks account on all machines
# that will run soft iocs
PATH=/usr/local/bin:/bin:/usr/bin
EPICS_CA_ADDR_LIST="129.57.171.255 129.57.165.84 129.57.165.213 129.57.255.13"
EPICS_CA_AUTO_ADDR_LIST=YES
## Rotate softioc logs
0 4 * * * /usr/sbin/logrotate -s ~/.logrotate.state ~/.logrotate-hallc_softiocs
*/5 * * * * procServMgr check >> /home/cvxwrks/EpicsLL/logs/`hostname -s`.log
# EPICS Archiver and web viewer (NOT EPICS related)
10 2 * * * /bin/rm -f /home/cvxwrks/public_html/cgi/tmp/*
## [BDS -- 24 March 2022
## Skip run keepalive.pl during hour after midnight or it gets confused and
## kills/restarts the archiver for no good reason. Hopefully we can shut
## this system down soon.
1,11,21,31,41,51 1-23 * * * /home/cvxwrks/Archives/keepalive.pl
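The 4 a.m. cron entry rotates the softIOC logs via `~/.logrotate-hallc_softiocs`. The actual contents of that file are not reproduced on this page; a minimal logrotate configuration for it might look like the sketch below (paths and options are assumptions, not the real file).

```
# Hypothetical sketch of ~/.logrotate-hallc_softiocs -- not the actual file
/home/cvxwrks/EpicsLL/logs/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```

`copytruncate` (or something equivalent) matters here because procServ keeps its log file open; renaming the file out from under it without truncating would leave the IOC logging to the rotated file.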
Drift Chamber Threshold Controls
- These are operated by an MCC/OPS managed softIOC controlling the DC power supplies used to set the thresholds.
- See also Hall C Drift Chamber Threshold Controls
High Voltage Controls
- Newer CAEN SY4527 HV crates support EPICS internally
- Older CAEN crates controlled using CAENnet
- vmec16.jlab.org : SHMS HV IOC (CH03B13)
- vmec17.jlab.org : HMS HV IOC (CH03B13)
Gas System
The 'hcgas0X' hosts are managed by the Spectrometer Support Group (Ellen, Jack, et al).
- hcgas01.jlab.org -- located in the gas system interlock box in rack 3B?? (under the main Hall C network switch in the electronics room)
- hcgas02.jlab.org -- located in the Hall C gas shed. Provides temperature readbacks for gas system components
- iochcgs.acc.jlab.org -- MCC/OPS managed softIOC that handles communication with the MFCs in the Hall C gas shed.
- Also relies on nbhc1.jlab.org: 4-port portserver in the Hall C gas shed for serial communication with the MFCs
vmec15
vmec15 is a vxWorks single-board computer located in rack CH03B12. (Check this; it could be CH03B11.) The primary function of this IOC is to provide the EPICS signals ibcm1 and ibcm2. The OS kernel is located at ~cvxwrks/KERNELS/5.5/vx2306_v4 and the boot script is at ~cvxwrks/SCRIPTS/vmec15.boot. This boot script calls the script ~cvxwrks/EpicsLL/apps/iocBoot/iocvmec15/st.cmd. The source code of the loaded software is located under ~cvxwrks/EpicsLL/apps in the directories bcmApp, caenScalerApp, and vmec15App.
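The st.cmd startup script is not reproduced on this page, but a vxWorks EPICS IOC startup script of this vintage usually follows a standard shape: load the compiled application, load the database definitions and records, then start the IOC. Everything below is an illustrative sketch (file names, record files, and macros are assumptions, not the actual iocvmec15/st.cmd):

```
# Illustrative sketch only -- NOT the actual iocBoot/iocvmec15/st.cmd
ld < bin/vxWorks-ppc/vmec15.munch        # load the compiled IOC application
dbLoadDatabase("dbd/vmec15.dbd")         # load record/device/driver definitions
vmec15_registerRecordDeviceDriver(pdbbase)
dbLoadRecords("db/bcm.db")               # BCM records (file name assumed)
iocInit()                                # start the IOC
```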
The full boot information for vmec15 is:
boot device          : dc0
processor number     : 0
host name            : hcepics
file name            : ~/KERNELS/5.5/vx2306_v4
inet on ethernet (e) : 129.57.168.115:fffffc00
inet on backplane (b):
host inet (h)        : 129.57.147.143
gateway inet (g)     : 129.57.168.1
user (u)             : cvxwrks
ftp password (pw) (blank = use rsh):
flags (f)            : 0x0
target name (tn)     : vmec15
startup script (s)   : ~/SCRIPTS/vmec15.boot
other (o)            :
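The boot parameter block has a simple "name : value" layout, so it can be turned into a dictionary for quick sanity checks (e.g. confirming the boot host fields after a change like the cdaql1-to-hcepics move described below). This parser is a hypothetical convenience sketch, not a JLab tool.

```python
def parse_boot_params(text):
    """Parse vxWorks 'name (x) : value' boot lines into {name: value}.

    Splits on the first colon only, so values containing colons
    (e.g. 'inet on ethernet' with its netmask suffix) survive intact.
    """
    params = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        name, _, value = line.partition(":")
        params[name.strip()] = value.strip()
    return params

# A few lines copied from the boot information above
boot = """\
host name            : hcepics
host inet (h)        : 129.57.147.143
target name (tn)     : vmec15
"""
params = parse_boot_params(boot)
print(params["host name"], params["host inet (h)"])
```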
vmec15 boots over the network from a Linux host running a remote shell (rsh) server. The server was recently changed from cdaql1 to hcepics. To make that move, these steps were followed.
- Install the remote shell server on hcepics with "dnf install rsh-server" (as root)
- Enable the remote shell server with "systemctl enable rsh.socket --now" (as root)
- Ensure that the file ~cvxwrks/.rhosts contains the line "vmec15.jlab.org cvxwrks" and has mode 644.
- Modify the boot information on vmec15 so that "host name" is "hcepics" and "host inet" is "129.57.147.143", the IP address of hcepics. (If "telnet vmec15" does not work, use "telnet hctsv4 2011" to connect to the console port of vmec15.)
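The .rhosts requirement above (correct host/user line, mode 644) is easy to get subtly wrong, so a small check script can help. This is a hypothetical helper, demonstrated here against a temporary stand-in file rather than the real ~cvxwrks/.rhosts:

```python
import os
import stat
import tempfile

def rhosts_ok(path, required_line="vmec15.jlab.org cvxwrks"):
    """True if `path` contains the required host/user line and is mode 644."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode != 0o644:
        return False
    with open(path) as f:
        return any(line.strip() == required_line for line in f)

# Demonstrate against a temporary stand-in for ~cvxwrks/.rhosts
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".rhosts") as f:
    f.write("vmec15.jlab.org cvxwrks\n")
    path = f.name
os.chmod(path, 0o644)
result = rhosts_ok(path)
print(result)
os.remove(path)
```

Run as cvxwrks against the real file, a False result means either the mode or the host/user line needs fixing before vmec15 can rsh-boot from hcepics.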
Miscellaneous
- cdaqpi1.jlab.org -- EDTM pulser control located in rack 3B06. Hall C EDTM Pulser