Extract Angle Cam Image from CODA files

SHMS and HMS Angle Cam Images in CODA files

The angle camera images are stored at the start of every CODA run. They can be extracted using the /home/coda/coda/scripts/Angle_snapshots/get_coda_photos tool.

This script uses evio2xml to extract the images, so you will need to run it somewhere that has access to the CODA tools. There are several ways to do this, but the simplest is to just log in to cdaq@cdaql1.

The transition to CODA3 in 2023 seems to have disabled access to the CODA tools under the cdaq account.
You can work around this and get access to the necessary evio2xml binary by switching to tcsh and running the coda310 setup script:
  % tcsh  ## IF this is not your default shell
  % setenv CODA_HOME /home/coda/coda
  % setenv JAVA_HOME $CODA_HOME/jdk1.8.0_152
  % source $CODA_HOME/3.10/examples/setupcoda310
This is still a hack.  See BONUS CREDITS below for better approaches.

Login to the cdaq cluster, cd into an appropriate directory, and make a symlink of the script here:

 % ln -s /home/coda/coda/scripts/Angle_snapshots/get_coda_photos .

Or copy the script if you want:

 % cp /home/coda/coda/scripts/Angle_snapshots/get_coda_photos .

The basic syntax is "./get_coda_photos <coda_file.dat>". It will extract the angle-cam images into your current working directory. For example:

 ./get_coda_photos  /cache/hallc/spring17/raw/coin_all_07977.dat

Generates:

 SHMS_angle_07977.jpg, and
  HMS_angle_07977.jpg

If you are working with a single-arm run, then you can add '-s' and speed things up a little:

 ./get_coda_photos -s hms_all_04352.dat

If you are in a directory with symlinks to raw/, cache/, etc, then you can just give it the filename and it will search the usual list of paths automatically like the analyzer does:

 ./get_coda_photos coin_all_07977.dat

Example Raw 'Blob' in CODA file

Here is an example of what the angle image 'blob' should look like in a CODA file (contained in a buffer tagged '135'):

 % evio2xml /cache/hallc/c-nps/raw/nps_coin_1646.dat.0 | less
   - search for "135" and you should see this:
   ----------------------
   <!-- ===================== Buffer 6 contains 3103 words (12412 bytes) ===================== -->
   <event format="evio" count="6" content="bank" data_type="0x10" tag="135" padding="0" num="0" length="3102" ndata="3101">
     <bank content="string" data_type="0x3" tag="0" padding="0" num="0" length="3100" ndata="1">
        <![CDATA[begin PART SHMS_angle.jpg.uu.aa
   begin 644 SHMS_angle.jpg
   M_]C_X``02D9)1@`!`0```0`!``#_X0"417AI9@``34T`*@````@``P$R``(`
   M```4````7(=I``0````!````,H@J``@````!__P``````````Y````<````$
   M,#(R,)`#``(````4````<)(4``,````$````A``````R,#(S.C$P.C`V(#`X
   M.C`W.C`P`#(P,C,Z,3`Z,#8@,#@Z,#<Z,#````````````#_VP!#``4#!`0$
   M`P4$!`0%!04&!PP(!P<'!P\+"PD,$0\2$A$/$1$3%AP7$Q0:%1$1&"$8&AT=
   M'Q\?$Q<B)"(>)!P>'Q[_VP!#`04%!0<&!PX("`X>%!$4'AX>'AX>'AX>'AX>
   M'AX>'AX>'AX>'AX>'AX>'AX>'AX>'AX>'AX>'AX>'AX>'AX>'A[_P``1"`'@
   ----------------------
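
The payload is just a uuencoded JPG (note the "begin 644 SHMS_angle.jpg" line), so it can be decoded by hand once you have the text dump. Below is a minimal, illustrative Python sketch of the idea; it is not the official get_coda_photos implementation, and it assumes evio2xml is on your PATH, that the blob looks like the example above, and that the uuencoded parts appear in order in the dump. On a large raw file you would also want to stop reading after the first few events, since the images are written at the start of the run.

 #!/usr/bin/env python3
 # Illustrative sketch only -- not the official get_coda_photos script.
 # Dump the CODA file with evio2xml and decode any uuencoded JPGs found.
 import binascii, subprocess, sys
 
 coda_file = sys.argv[1]                        # e.g. coin_all_07977.dat
 dump = subprocess.run(["evio2xml", coda_file],
                       capture_output=True, text=True).stdout
 
 out = None
 for raw in dump.splitlines():
     line = raw.strip()
     if line.endswith("]]>"):                   # drop a trailing CDATA terminator
         line = line[:-3]
     if line.startswith("begin 644 "):          # start of a uuencoded file
         out = open(line.split()[-1], "wb")     # e.g. SHMS_angle.jpg
     elif out is not None:
         if line == "end" or line.startswith("end "):
             out.close()                        # finished this image
             out = None
         elif line:
             out.write(binascii.a2b_uu(line))   # decode one uuencoded line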

BONUS CREDITS

Angle snapshot image decoder class

  • The script above is a real kludge. It would be far better if a student could integrate the image decoder into an hcana class so that the angle images are stored in the ROOT tree as JPGs as part of the standard analysis run. The encoded image format is described in the get_coda_photos script. Talk to brads@jlab.org if you have questions.

Automatic angle extraction from Angle-cam snapshot

Update: Some summer students in 2022 took a shot at this and made some real progress. The GitHub project, with documentation, is at https://github.com/JeffersonLab/Spectreye and would be a fantastic platform to refine.
  • It would be great for someone to get some ML-based character and feature recognition working that can *reliably* extract the spectrometer angle from the camera image. This includes recognizing the angle digits and correctly interpreting the vernier.
  • Basic criteria for success should include:
    • Let's say it should work correctly on 95% of a representative set of (different) images pulled from an experiment. (Can't just be copies of the same angle snapshot though.)
    • There should be a 'standalone' executable that can evaluate individual frames/images from files.
    • There should be a version that is implemented as a Class in the PODD/hcana framework.
  • Even having the code identify and flag potentially smudged areas/numbers during an angle move when we are effectively scanning over a range of angles would be very useful.
    • For example, if the code is tracking the angle change 'frame by frame' during a move, you could predict where angle graduations 'should' be, and what angle digits should show up when. Errors/mismatches during such a move would be interesting to log.
  • Suggestions, hints:
    • Note that you can take advantage of the fact that the image frame is fixed, the graduations have fixed spacing, and the camera center (from which the vernier is interpreted) never moves. Don't overthink the problem -- some hardcoded parameters are entirely fine (but do describe them and store them in a config file!)
    • You can use the 'Encoder Angle' information (which is rarely off by more than a fraction of a degree) to help constrain the code. This may help with corner cases (bad/dirty floor marks, etc).
    • You could even do something clever by monitoring the live feed and continuously updating an EPICS PV with the extracted angle. This would allow you to apply heuristics based on recent good images to keep track of where you are now, providing additional resilience against the dirty floor marks, poor-quality images, etc. that drive failures in the current code. (A toy sketch of such a sanity filter is given below.)
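
As a toy illustration of the last two hints, here is a minimal Python sketch of such a sanity filter. The class name, thresholds, and interface are all made up for illustration; the point is only to show how the encoder angle plus a short history of recently accepted readings can be used to flag a suspect frame before, for example, an EPICS PV is updated.

 from collections import deque
 
 class AngleSanityFilter:
     """Illustrative only: accept or reject per-frame angle readings using
     the encoder angle and a short history of recently accepted readings."""
     def __init__(self, encoder_tol=0.5, frame_tol=0.1, depth=10):
         self.encoder_tol = encoder_tol   # deg: encoder is rarely off by more
         self.frame_tol   = frame_tol     # deg: allowed jump between nearby frames
         self.history     = deque(maxlen=depth)
 
     def accept(self, image_angle, encoder_angle):
         # Reject readings far from the encoder angle (dirty floor mark, smudge, ...)
         if abs(image_angle - encoder_angle) > self.encoder_tol:
             return False
         # Reject readings that jump relative to the last accepted frame.
         # (During an angle move you would compare against a predicted
         #  angle instead of the last accepted one.)
         if self.history and abs(image_angle - self.history[-1]) > self.frame_tol:
             return False
         self.history.append(image_angle)
         return True

Only readings that pass both checks would be logged or pushed to the PV; rejected frames could be saved and flagged for later inspection.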