SHMS and HMS Angle Cam Images in CODA files

The angle camera images are stored at the start of every CODA run. They can be extracted using the /home/coda/coda/scripts/Angle_snapshots/get_coda_photos tool.

This script uses evio2xml to extract the images, so you will need to run it somewhere that has access to the CODA tools. There are lots of ways to do this, but the simplest is to just log in to cdaq@cdaql1.

Log in to the cdaq cluster, cd into an appropriate directory, and make a symlink to the script:

 % ln -s /home/coda/coda/scripts/Angle_snapshots/get_coda_photos .

Or copy the script if you want:

 % cp /home/coda/coda/scripts/Angle_snapshots/get_coda_photos .

The basic syntax is "./get_coda_photos <coda_file.dat>". It will extract the angle-cam images into your current working directory. For example:

 ./get_coda_photos  /cache/hallc/spring17/raw/coin_all_07977.dat

Generates:

 SHMS_angle_07977.jpg
 HMS_angle_07977.jpg

If you are working with a single-arm run, then you can add '-s' and speed things up a little:

 ./get_coda_photos -s hms_all_04352.dat

If you are in a directory with symlinks to raw/, cache/, etc., then you can just give it the filename and it will automatically search the usual list of paths, like the analyzer does:

 ./get_coda_photos coin_all_07977.dat
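
If you want to inspect the raw data yourself, the authoritative description of the encoded image format is in the get_coda_photos script itself. Purely as an illustration, the Python sketch below assumes the snapshots are embedded as ordinary JPEG byte streams near the start of the file and scans for the JPEG start-of-image/end-of-image markers; treat it as a starting point, not a replacement for the script:

 #!/usr/bin/env python3
 # Illustrative sketch ONLY -- see get_coda_photos for the real format.
 # Assumes the angle-cam snapshots appear in the CODA file as plain JPEG
 # byte streams, and pulls out anything between the JPEG start-of-image
 # (FF D8 FF) and end-of-image (FF D9) markers.
 import sys

 SOI = b"\xff\xd8\xff"   # JPEG start-of-image marker
 EOI = b"\xff\xd9"       # JPEG end-of-image marker

 def extract_jpegs(path, max_bytes=64 * 1024 * 1024):
     """Yield candidate JPEG blobs found near the start of a CODA file."""
     with open(path, "rb") as f:
         data = f.read(max_bytes)  # snapshots are stored at the start of the run
     start = data.find(SOI)
     while start != -1:
         end = data.find(EOI, start)
         if end == -1:
             break
         yield data[start:end + len(EOI)]
         start = data.find(SOI, end)

 if __name__ == "__main__":
     for i, blob in enumerate(extract_jpegs(sys.argv[1])):
         out = "snapshot_%d.jpg" % i
         with open(out, "wb") as f:
             f.write(blob)
         print("wrote %s (%d bytes)" % (out, len(blob)))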

BONUS CREDITS

Feature bounties offered by Brad <brads@jlab.org>:

Angle snapshot image decoder class

  • Lunch at a restaurant of your choice
    • I will buy lunch for any student who integrates the image decoder into an hcana class so that the angle images are stored in the ROOT tree as JPGs as part of the standard analysis run. The encoded image format is described in the get_coda_photos script. Talk to me if you have questions though.

Automatic angle extraction from Angle-cam snapshot

  • $100 Bounty -- The Angle-Cam image recognition project bounty below is re-opened!
    Update: We had some summer students in 2022 who took a shot at this and made some real progress.
    The github project with documentation is here: https://github.com/JeffersonLab/Spectreye
    It would be a fantastic platform to refine.
  • I will give $100 to anyone who manages to get some decent ML character and feature recognition that can *reliably* extract the spectrometer angle from the camera image. This includes recognizing the angle digits and correctly interpreting the vernier.
  • Given the work already done above, those claiming the bounty would need to demonstrate three capabilities:
    • Let's say it should work correctly on 95% of a representative set of (different) images pulled from an experiment. (Can't just be copies of the same angle snapshot though.)
    • There should be a 'standalone' executable that can evaluate individual frames/images from files.
    • There should be a version that is implemented as a Class in the PODD/hcana framework.
  • Even having the code identify and flag potentially smudged areas/numbers during an angle move (when we are effectively scanning over a range of angles) would be very useful.
    • For example, if the code is tracking the angle change 'frame by frame' during a move, you could predict where angle graduations 'should' be, and what angle digits should show up when. Errors/mismatches during such a move would be interesting to log.
  • Suggestions, hints:
    • Note that you can take advantage of the fact that the image frame is fixed, the graduations have fixed spacing, and the camera center (from which the vernier is interpreted) never moves. Don't overthink the problem -- some hardcoded parameters are entirely fine (but do describe them and store them in a config file!) A minimal sketch of this fixed-geometry approach appears after this list.
    • You can use the 'Encoder Angle' information (which is rarely off by more than a fraction of a degree) to help constrain the code. This may help with corner cases (bad/dirty floor marks, etc.).
    • You could even do something clever by monitoring the live feed and continuously updating an EPICS PV with the extracted angle. This would allow you to apply heuristics based on recent good images to keep track of where you are now, providing additional resilience against the dirty floor marks, poor-quality images, etc. that drive failures in the current code. A second sketch along these lines follows below.
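
Here is a minimal sketch of the fixed-geometry hint above. The parameter names and values (center_x, px_per_degree) are hypothetical placeholders; the real numbers would be calibrated once and stored in a config file. The helper takes the pixel position of one recognized graduation mark plus the angle printed next to it (e.g. from OCR of the digits) and interpolates the angle at the fixed camera center:

 # Minimal sketch of the 'fixed geometry' hint above.  The parameter names
 # and values (center_x, px_per_degree) are hypothetical placeholders; the
 # real numbers would be calibrated once and kept in a config file.
 CONFIG = {"center_x": 320.0, "px_per_degree": 180.0}

 def angle_at_center(mark_x, mark_label_deg, cfg=CONFIG):
     """Interpolate the spectrometer angle read at the fixed camera center.

     mark_x         -- pixel x-position of one recognized graduation mark
     mark_label_deg -- the angle that mark is labeled with, e.g. from OCR
     """
     # The camera never moves, so the angle under center_x is the label of
     # the recognized mark plus the pixel offset converted to degrees.
     # (The sign convention depends on which way the scale runs.)
     offset_px = cfg["center_x"] - mark_x
     return mark_label_deg + offset_px / cfg["px_per_degree"]

 # Example: a mark labeled 25.0 deg detected 45 px left of center
 print(angle_at_center(mark_x=275.0, mark_label_deg=25.0))  # -> 25.25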
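
And a sketch of the live-feed/EPICS idea, assuming pyepics is available and that a soft IOC hosts a PV for the extracted angle. The PV name and the read_frame()/extract_angle() helpers are hypothetical placeholders for whatever your recognition code provides:

 # Sketch of the live-feed idea.  Assumes pyepics; the PV name and the
 # read_frame()/extract_angle() callables are hypothetical placeholders.
 import time
 import epics

 ANGLE_PV = "HC:SHMS:AngleCam:extracted"  # hypothetical PV name

 def run_monitor(read_frame, extract_angle, max_jump_deg=0.5):
     last_good = None
     while True:
         frame = read_frame()             # grab the next camera frame
         angle = extract_angle(frame)     # your recognition code; None on failure
         # Heuristic: reject extractions that jump implausibly far from the
         # last good value (e.g. a dirty floor mark fooled the OCR).
         if angle is not None and (
             last_good is None or abs(angle - last_good) < max_jump_deg
         ):
             last_good = angle
             epics.caput(ANGLE_PV, angle)  # publish for other EPICS clients
         time.sleep(1.0)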