Extract Angle Cam Image from CODA files
SHMS and HMS Angle Cam Images in CODA files
The angle camera images are stored at the start of every CODA run. They can be extracted using the /home/coda/coda/scripts/Angle_snapshots/get_coda_photos tool.
This script uses evio2xml to extract the images, so you will need to run it somewhere that has access to the CODA tools. There are several ways to do this, but the simplest is to just log in as cdaq@cdaql1.
The transition to CODA3 in 2023 seems to have disabled access to the CODA tools under the cdaq account. You can work around this and get access to the necessary evio2xml binary by switching to tcsh and sourcing the coda310 setup script:

% tcsh                                        ## IF this is not your default shell
% setenv CODA_HOME /home/coda/coda
% setenv JAVA_HOME $CODA_HOME/jdk1.8.0_152
% source $CODA_HOME/3.10/examples/setupcoda310

This is still a hack. See BONUS CREDITS below for better approaches.
Login to the cdaq cluster, cd into an appropriate directory, and make a symlink of the script here:
% ln -s /home/coda/coda/scripts/Angle_snapshots/get_coda_photos .
Or copy the script if you want:
% cp /home/coda/coda/scripts/Angle_snapshots/get_coda_photos .
The basic syntax is "./get_coda_photos <coda_file.dat>". It will extract the angle-cam images into your current working directory. For example:
SHMS_angle_07977.jpg, and HMS_angle_07977.jpg
If you are working with a single-arm run, then you can add '-s' and speed things up a little:
% ./get_coda_photos -s hms_all_04352.dat
If you are in a directory with symlinks to raw/, cache/, etc., then you can just give it the filename and it will automatically search the usual list of paths, like the analyzer does.
Angle snapshot image decoder class
- The script above is a real kludge. It would be far better if a student could integrate the image decoder into an hcana class so that the angle images are stored in the ROOT tree as JPGs as part of the standard analysis run. The encoded image format is described in the get_coda_photos script. Talk to email@example.com if you have questions.
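As a starting point for such a decoder class, one simple approach is to scan the raw event payload for embedded JPEG byte streams. This is only a sketch under the assumption that the images are stored as contiguous JPEG sequences; verify against the actual format notes in the get_coda_photos script before relying on it.

```python
def extract_jpegs(data: bytes):
    """Yield JPEG blobs found in a raw buffer by scanning for
    SOI (FF D8 FF) / EOI (FF D9) marker pairs.

    NOTE: this assumes the angle-cam images are stored as contiguous
    JPEG streams inside the event payload (an assumption -- check
    get_coda_photos).  A naive EOI scan can also terminate early if
    FF D9 happens to appear inside compressed image data.
    """
    start = 0
    while True:
        soi = data.find(b"\xff\xd8\xff", start)
        if soi == -1:
            return
        eoi = data.find(b"\xff\xd9", soi + 3)
        if eoi == -1:
            return
        yield data[soi:eoi + 2]
        start = eoi + 2
```

Each yielded blob can then be written straight to a .jpg file or attached to the ROOT output.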
Automatic angle extraction from Angle-cam snapshot
Update: We had some summer students in 2022 that took a shot at this and made some real progress. The github project with documentation is here: https://github.com/JeffersonLab/Spectreye It would be a fantastic platform to refine.
- It would be great for someone to get some ML character and feature recognition working that can *reliably* extract the spectrometer angle from the camera image. This includes recognizing the angle digits and correctly interpreting the vernier.
- Basic criteria for success should include:
- Let's say it should work correctly on 95% of a representative set of (different) images pulled from an experiment. (Can't just be copies of the same angle snapshot though.)
- There should be a 'standalone' executable that can evaluate individual frames/images from files.
- There should be a version that is implemented as a Class in the PODD/hcana framework.
- Even having the code identify and flag potentially smudged areas/numbers during an angle move when we are effectively scanning over a range of angles would be very useful.
- For example, if the code is tracking the angle change 'frame by frame' during a move, you could predict where angle graduations 'should' be, and what angle digits should show up when. Errors/mismatches during such a move would be interesting to log.
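The frame-by-frame idea above can be sketched very simply: predict each reading from a constant-rate model and flag frames that disagree. This is illustrative only (the function name and tolerance are made up); a real version would take the rate from the encoder or fit it from recent good frames.

```python
def flag_misreads(readings, rate_deg_per_frame, tol=0.05):
    """Given per-frame angle readings taken during a steady move,
    return the indices where the reading deviates from a simple
    constant-rate prediction by more than `tol` degrees.

    Sketch only: `rate_deg_per_frame` and `tol` are hypothetical
    parameters, not measured values.
    """
    flags = []
    predicted = readings[0]
    for i, r in enumerate(readings):
        if abs(r - predicted) > tol:
            flags.append(i)      # suspicious frame: log for inspection
        else:
            predicted = r        # re-anchor the model on good readings
        predicted += rate_deg_per_frame
    return flags
```

Flagged indices are exactly the "errors/mismatches during a move" that would be interesting to log.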
- Suggestions, hints:
- Note that you can take advantage of the fact that the image frame is fixed, the graduations have fixed spacing, and the camera center (from which the vernier is interpreted) never moves. Don't overthink the problem -- some hardcoded parameters are entirely fine (but do document them and store them in a config file!)
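For instance, the fixed-frame geometry could live in a small JSON config so the hardcoded numbers stay in one documented place. The parameter names and values below are purely illustrative; the real values must be measured from actual frames.

```python
import json

# Hypothetical geometry parameters -- names and values are
# illustrative only; measure the real ones from actual frames.
DEFAULT_CONFIG = """{
    "camera_center_px": [320, 240],
    "graduation_spacing_px": 41.7,
    "degrees_per_graduation": 0.5,
    "digit_roi_px": [100, 60, 420, 110]
}"""

def load_geometry(path=None):
    """Load the fixed-frame geometry from a JSON config file,
    falling back to the built-in defaults if no path is given."""
    if path is None:
        return json.loads(DEFAULT_CONFIG)
    with open(path) as f:
        return json.load(f)
```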
- You can use the 'Encoder Angle' information (which is rarely off by more than a fraction of a degree) to help constrain the code. This may help with corner cases (bad/dirty floor marks, etc).
- You could even do something clever by monitoring the live feed and continuously updating an EPICS PV with the extracted angle. This would let you apply heuristics based on recent good images to keep track of the current angle, providing additional resilience against the dirty floor marks and poor-quality images that drive failures in the current code.
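One such heuristic is to reject any extracted angle that jumps too far from the median of recent accepted readings before publishing it. In the sketch below the publish step is injected so the logic can be tested offline; in a real deployment it would be wired to an EPICS put (e.g. pyepics caput), and the window and jump threshold are made-up tuning parameters.

```python
from collections import deque
from statistics import median

class LiveAngleTracker:
    """Track the live extracted angle, rejecting outlier frames
    (dirty floor marks, bad images) against the median of recent
    accepted readings.

    `publish` is a callable taking the accepted angle; in practice
    it would wrap an EPICS put such as pyepics caput.  The `window`
    and `max_jump` defaults are illustrative guesses.
    """
    def __init__(self, publish, window=10, max_jump=0.2):
        self.publish = publish
        self.recent = deque(maxlen=window)
        self.max_jump = max_jump

    def update(self, angle):
        """Accept or reject one extracted angle; returns True if published."""
        if self.recent and abs(angle - median(self.recent)) > self.max_jump:
            return False          # outlier: leave the PV at its last value
        self.recent.append(angle)
        self.publish(angle)
        return True
```

During an angle move the `max_jump` threshold would need to be loosened or combined with the rate model discussed above, since consecutive readings then differ by design.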